/*-
 * SPDX-License-Identifier: BSD-3-Clause
 *
 * Copyright (c) 1982, 1986, 1990, 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 * (c) UNIX System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of UNIX System Laboratories, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_hwpmc_hooks.h"
#include "opt_sched.h"

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/cpuset.h>
#include <sys/kernel.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/kthread.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/resourcevar.h>
#include <sys/sched.h>
#include <sys/sdt.h>
#include <sys/smp.h>
#include <sys/sysctl.h>
#include <sys/sx.h>
#include <sys/turnstile.h>
#include <sys/umtx.h>
#include <machine/pcb.h>
#include <machine/smp.h>

#ifdef HWPMC_HOOKS
#include <sys/pmckern.h>
#endif

#ifdef KDTRACE_HOOKS
#include <sys/dtrace_bsd.h>
int __read_mostly dtrace_vtime_active;
dtrace_vtime_switch_func_t dtrace_vtime_switch_func;
#endif

/*
 * INVERSE_ESTCPU_WEIGHT is only suitable for statclock() frequencies in
 * the range 100-256 Hz (approximately).
 */
#define	ESTCPULIM(e) \
    min((e), INVERSE_ESTCPU_WEIGHT * (NICE_WEIGHT * (PRIO_MAX - PRIO_MIN) - \
    RQ_PPQ) + INVERSE_ESTCPU_WEIGHT - 1)
#ifdef SMP
#define	INVERSE_ESTCPU_WEIGHT	(8 * smp_cpus)
#else
#define	INVERSE_ESTCPU_WEIGHT	8	/* 1 / (priorities per estcpu level). */
#endif
#define	NICE_WEIGHT		1	/* Priorities per nice level. */
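
/*
 * Illustrative note (editor's addition, not from the original source):
 * assuming the standard PRIO_MIN/PRIO_MAX of -20/20 from <sys/resource.h>
 * and RQ_PPQ of 4 from <sys/runq.h>, the uniprocessor values above clamp
 * ts_estcpu to min(e, 8 * (1 * 40 - 4) + 8 - 1) = min(e, 295), i.e. about
 * 36 "levels" of 8 estcpu ticks each before the estimator saturates.
 */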

#define	TS_NAME_LEN (MAXCOMLEN + sizeof(" td ") + sizeof(__XSTRING(UINT_MAX)))

/*
 * The schedulable entity that runs a context.
 * This is an extension to the thread structure and is tailored to
 * the requirements of this scheduler.
 * All fields are protected by the scheduler lock.
 */
struct td_sched {
	fixpt_t		ts_pctcpu;	/* %cpu during p_swtime. */
	u_int		ts_estcpu;	/* Estimated cpu utilization. */
	int		ts_cpticks;	/* Ticks of cpu time. */
	int		ts_slptime;	/* Seconds !RUNNING. */
	int		ts_slice;	/* Remaining part of time slice. */
	int		ts_flags;
	struct runq	*ts_runq;	/* runq the thread is currently on */
#ifdef KTR
	char		ts_name[TS_NAME_LEN];
#endif
};

/* flags kept in td_flags */
#define	TDF_DIDRUN	TDF_SCHED0	/* thread actually ran. */
#define	TDF_BOUND	TDF_SCHED1	/* Bound to one CPU. */
#define	TDF_SLICEEND	TDF_SCHED2	/* Thread time slice is over. */

/* flags kept in ts_flags */
#define	TSF_AFFINITY	0x0001		/* Has a non-"full" CPU set. */

#define	SKE_RUNQ_PCPU(ts)						\
    ((ts)->ts_runq != 0 && (ts)->ts_runq != &runq)

#define	THREAD_CAN_SCHED(td, cpu)	\
    CPU_ISSET((cpu), &(td)->td_cpuset->cs_mask)

_Static_assert(sizeof(struct thread) + sizeof(struct td_sched) <=
    sizeof(struct thread0_storage),
    "increase struct thread0_storage.t0st_sched size");

static struct mtx sched_lock;

static int	realstathz = 127; /* stathz is sometimes 0 and run off of hz. */
static int	sched_tdcnt;	/* Total runnable threads in the system. */
static int	sched_slice = 12; /* Thread run time before rescheduling. */

static void	setup_runqs(void);
static void	schedcpu(void);
static void	schedcpu_thread(void);
static void	sched_priority(struct thread *td, u_char prio);
static void	sched_setup(void *dummy);
static void	maybe_resched(struct thread *td);
static void	updatepri(struct thread *td);
static void	resetpriority(struct thread *td);
static void	resetpriority_thread(struct thread *td);
#ifdef SMP
static int	sched_pickcpu(struct thread *td);
static int	forward_wakeup(int cpunum);
static void	kick_other_cpu(int pri, int cpuid);
#endif

static struct kproc_desc sched_kp = {
	"schedcpu",
	schedcpu_thread,
	NULL
};
SYSINIT(schedcpu, SI_SUB_LAST, SI_ORDER_FIRST, kproc_start,
    &sched_kp);
SYSINIT(sched_setup, SI_SUB_RUN_QUEUE, SI_ORDER_FIRST, sched_setup, NULL);

static void sched_initticks(void *dummy);
SYSINIT(sched_initticks, SI_SUB_CLOCKS, SI_ORDER_THIRD, sched_initticks,
    NULL);

/*
 * Global run queue.
 */
static struct runq runq;

#ifdef SMP
/*
 * Per-CPU run queues
 */
static struct runq runq_pcpu[MAXCPU];
long runq_length[MAXCPU];

static cpuset_t idle_cpus_mask;
#endif

struct pcpuidlestat {
	u_int idlecalls;
	u_int oldidlecalls;
};
DPCPU_DEFINE_STATIC(struct pcpuidlestat, idlestat);

static void
setup_runqs(void)
{
#ifdef SMP
	int i;

	for (i = 0; i < MAXCPU; ++i)
		runq_init(&runq_pcpu[i]);
#endif

	runq_init(&runq);
}

static int
sysctl_kern_quantum(SYSCTL_HANDLER_ARGS)
{
	int error, new_val, period;

	period = 1000000 / realstathz;
	new_val = period * sched_slice;
	error = sysctl_handle_int(oidp, &new_val, 0, req);
	if (error != 0 || req->newptr == NULL)
		return (error);
	if (new_val <= 0)
		return (EINVAL);
	sched_slice = imax(1, (new_val + period / 2) / period);
	hogticks = imax(1, (2 * hz * sched_slice + realstathz / 2) /
	    realstathz);
	return (0);
}
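
/*
 * Illustrative note (editor's addition, not from the original source):
 * with the defaults in this file, realstathz = 127 and sched_slice = 12,
 * the handler above reports a quantum of (1000000 / 127) * 12 = 94488
 * microseconds (~94 ms), and a value written to the sysctl is rounded to
 * the nearest whole number of stathz ticks before being stored in
 * sched_slice.
 */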

SYSCTL_NODE(_kern, OID_AUTO, sched, CTLFLAG_RD | CTLFLAG_MPSAFE, 0,
    "Scheduler");

SYSCTL_STRING(_kern_sched, OID_AUTO, name, CTLFLAG_RD, "4BSD", 0,
    "Scheduler name");
SYSCTL_PROC(_kern_sched, OID_AUTO, quantum,
    CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_MPSAFE, NULL, 0,
    sysctl_kern_quantum, "I",
    "Quantum for timeshare threads in microseconds");
SYSCTL_INT(_kern_sched, OID_AUTO, slice, CTLFLAG_RW, &sched_slice, 0,
    "Quantum for timeshare threads in stathz ticks");

#ifdef SMP
/* Enable forwarding of wakeups to all other cpus */
static SYSCTL_NODE(_kern_sched, OID_AUTO, ipiwakeup,
    CTLFLAG_RD | CTLFLAG_MPSAFE, NULL,
    "Kernel SMP");

static int runq_fuzz = 1;
SYSCTL_INT(_kern_sched, OID_AUTO, runq_fuzz, CTLFLAG_RW, &runq_fuzz, 0, "");

static int forward_wakeup_enabled = 1;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, enabled, CTLFLAG_RW,
    &forward_wakeup_enabled, 0,
    "Forwarding of wakeup to idle CPUs");

static int forward_wakeups_requested = 0;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, requested, CTLFLAG_RD,
    &forward_wakeups_requested, 0,
    "Requests for Forwarding of wakeup to idle CPUs");

static int forward_wakeups_delivered = 0;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, delivered, CTLFLAG_RD,
    &forward_wakeups_delivered, 0,
    "Completed Forwarding of wakeup to idle CPUs");

static int forward_wakeup_use_mask = 1;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, usemask, CTLFLAG_RW,
    &forward_wakeup_use_mask, 0,
    "Use the mask of idle cpus");

static int forward_wakeup_use_loop = 0;
SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, useloop, CTLFLAG_RW,
    &forward_wakeup_use_loop, 0,
    "Use a loop to find idle cpus");

#endif
#if 0
static int sched_followon = 0;
SYSCTL_INT(_kern_sched, OID_AUTO, followon, CTLFLAG_RW,
    &sched_followon, 0,
    "allow threads to share a quantum");
#endif

SDT_PROVIDER_DEFINE(sched);

SDT_PROBE_DEFINE3(sched, , , change__pri, "struct thread *",
    "struct proc *", "uint8_t");
SDT_PROBE_DEFINE3(sched, , , dequeue, "struct thread *",
    "struct proc *", "void *");
SDT_PROBE_DEFINE4(sched, , , enqueue, "struct thread *",
    "struct proc *", "void *", "int");
SDT_PROBE_DEFINE4(sched, , , lend__pri, "struct thread *",
    "struct proc *", "uint8_t", "struct thread *");
SDT_PROBE_DEFINE2(sched, , , load__change, "int", "int");
SDT_PROBE_DEFINE2(sched, , , off__cpu, "struct thread *",
    "struct proc *");
SDT_PROBE_DEFINE(sched, , , on__cpu);
SDT_PROBE_DEFINE(sched, , , remain__cpu);
SDT_PROBE_DEFINE2(sched, , , surrender, "struct thread *",
    "struct proc *");

/*
 * Track the global count of runnable threads (sched_tdcnt) and report
 * changes to KTR and DTrace consumers.
 */
static __inline void
sched_load_add(void)
{

	sched_tdcnt++;
	KTR_COUNTER0(KTR_SCHED, "load", "global load", sched_tdcnt);
	SDT_PROBE2(sched, , , load__change, NOCPU, sched_tdcnt);
}

static __inline void
sched_load_rem(void)
{

	sched_tdcnt--;
	KTR_COUNTER0(KTR_SCHED, "load", "global load", sched_tdcnt);
	SDT_PROBE2(sched, , , load__change, NOCPU, sched_tdcnt);
}

/*
 * Arrange to reschedule if necessary, taking the priorities and
 * schedulers into account.
 */
static void
maybe_resched(struct thread *td)
{

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	if (td->td_priority < curthread->td_priority)
		curthread->td_flags |= TDF_NEEDRESCHED;
}

/*
 * This function is called when a thread is about to be put on run queue
 * because it has been made runnable or its priority has been adjusted.  It
 * determines if the new thread should preempt the current thread.  If so,
 * it sets td_owepreempt to request a preemption.
 */
int
maybe_preempt(struct thread *td)
{
#ifdef PREEMPTION
	struct thread *ctd;
	int cpri, pri;

	/*
	 * The new thread should not preempt the current thread if any of the
	 * following conditions are true:
	 *
	 *  - The kernel is in the throes of crashing (panicstr).
	 *  - The current thread has a higher (numerically lower) or
	 *    equivalent priority.  Note that this prevents curthread from
	 *    trying to preempt to itself.
	 *  - The current thread has an inhibitor set or is in the process of
	 *    exiting.  In this case, the current thread is about to switch
	 *    out anyways, so there's no point in preempting.  If we did,
	 *    the current thread would not be properly resumed as well, so
	 *    just avoid that whole landmine.
	 *  - If the new thread's priority is not a realtime priority and
	 *    the current thread's priority is not an idle priority and
	 *    FULL_PREEMPTION is disabled.
	 *
	 * If all of these conditions are false, but the current thread is in
	 * a nested critical section, then we have to defer the preemption
	 * until we exit the critical section.  Otherwise, switch immediately
	 * to the new thread.
	 */
	ctd = curthread;
	THREAD_LOCK_ASSERT(td, MA_OWNED);
	KASSERT((td->td_inhibitors == 0),
	    ("maybe_preempt: trying to run inhibited thread"));
	pri = td->td_priority;
	cpri = ctd->td_priority;
	if (KERNEL_PANICKED() || pri >= cpri /* || dumping */ ||
	    TD_IS_INHIBITED(ctd))
		return (0);
#ifndef FULL_PREEMPTION
	if (pri > PRI_MAX_ITHD && cpri < PRI_MIN_IDLE)
		return (0);
#endif

	CTR0(KTR_PROC, "maybe_preempt: scheduling preemption");
	ctd->td_owepreempt = 1;
	return (1);
#else
	return (0);
#endif
}

/*
 * Constants for digital decay and forget:
 *	90% of (ts_estcpu) usage in 5 * loadav time
 *	95% of (ts_pctcpu) usage in 60 seconds (load insensitive)
 *          Note that, as ps(1) mentions, this can let percentages
 *          total over 100% (I've seen 137.9% for 3 processes).
 *
 * Note that schedclock() updates ts_estcpu and p_cpticks asynchronously.
 *
 * We wish to decay away 90% of ts_estcpu in (5 * loadavg) seconds.
 * That is, the system wants to compute a value of decay such
 * that the following for loop:
 * 	for (i = 0; i < (5 * loadavg); i++)
 * 		ts_estcpu *= decay;
 * will compute
 * 	ts_estcpu *= 0.1;
 * for all values of loadavg:
 *
 * Mathematically this loop can be expressed by saying:
 * 	decay ** (5 * loadavg) ~= .1
 *
 * The system computes decay as:
 * 	decay = (2 * loadavg) / (2 * loadavg + 1)
 *
 * We wish to prove that the system's computation of decay
 * will always fulfill the equation:
 * 	decay ** (5 * loadavg) ~= .1
 *
 * If we compute b as:
 * 	b = 2 * loadavg
 * then
 * 	decay = b / (b + 1)
 *
 * We now need to prove two things:
 *	1) Given factor ** (5 * loadavg) ~= .1, prove factor == b/(b+1)
 *	2) Given b/(b+1) ** power ~= .1, prove power == (5 * loadavg)
 *
 * Facts:
 *         For x close to zero, exp(x) =~ 1 + x, since
 *              exp(x) = 0! + x**1/1! + x**2/2! + ... .
 *              therefore exp(-1/b) =~ 1 - (1/b) = (b-1)/b.
 *         For x close to zero, ln(1+x) =~ x, since
 *              ln(1+x) = x - x**2/2 + x**3/3 - ...   -1 < x < 1
 *              therefore ln(b/(b+1)) = ln(1 - 1/(b+1)) =~ -1/(b+1).
 *         ln(.1) =~ -2.30
 *
 * Proof of (1):
 *    Solve (factor)**(power) =~ .1 given power (5*loadav):
 *	solving for factor,
 *      ln(factor) =~ (-2.30/5*loadav), or
 *      factor =~ exp(-1/((5/2.30)*loadav)) =~ exp(-1/(2*loadav)) =
 *          exp(-1/b) =~ (b-1)/b =~ b/(b+1).  QED
 *
 * Proof of (2):
 *    Solve (factor)**(power) =~ .1 given factor == (b/(b+1)):
 *	solving for power,
 *      power*ln(b/(b+1)) =~ -2.30, or
 *      power =~ 2.3 * (b + 1) = 4.6*loadav + 2.3 =~ 5*loadav.  QED
 *
 * Actual power values for the implemented algorithm are as follows:
 *      loadav: 1       2       3       4
 *      power:  5.68    10.32   14.94   19.55
 */

/* calculations for digital decay to forget 90% of usage in 5*loadav sec */
#define	loadfactor(loadav)	(2 * (loadav))
#define	decay_cpu(loadfac, cpu)	(((loadfac) * (cpu)) / ((loadfac) + FSCALE))
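
/*
 * Illustrative note (editor's addition, not from the original source):
 * for a load average of 1 the decay factor is 2 / (2 + 1) = 2/3 per
 * schedcpu() pass, so about 5.7 passes (the "power" value tabulated
 * above) shrink ts_estcpu by roughly 90%, since (2/3)^5.68 ~= 0.1.
 * Because loadfac is itself a fixed-point quantity, the "+ FSCALE" term
 * in decay_cpu() plays the role of the "+ 1" in the formula above, so the
 * ratio still works out to 2*loadavg / (2*loadavg + 1).
 */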

/* decay 95% of `ts_pctcpu' in 60 seconds; see CCPU_SHIFT before changing */
static fixpt_t	ccpu = 0.95122942450071400909 * FSCALE;	/* exp(-1/20) */
SYSCTL_UINT(_kern, OID_AUTO, ccpu, CTLFLAG_RD, &ccpu, 0,
    "Decay factor used for updating %CPU");

/*
 * If `ccpu' is not equal to `exp(-1/20)' and you still want to use the
 * faster/more-accurate formula, you'll have to estimate CCPU_SHIFT below
 * and possibly adjust FSHIFT in "param.h" so that (FSHIFT >= CCPU_SHIFT).
 *
 * To estimate CCPU_SHIFT for exp(-1/20), the following formula was used:
 *	1 - exp(-1/20) ~= 0.0487 ~= 0.0488 == 1 (fixed pt, *11* bits).
 *
 * If you don't want to bother with the faster/more-accurate formula, you
 * can set CCPU_SHIFT to (FSHIFT + 1) which will use a slower/less-accurate
 * (more general) method of calculating the %age of CPU used by a process.
 */
#define	CCPU_SHIFT	11

/*
 * Recompute process priorities, every hz ticks.
 * MP-safe, called without the Giant mutex.
 */
/* ARGSUSED */
static void
schedcpu(void)
{
	fixpt_t loadfac = loadfactor(averunnable.ldavg[0]);
	struct thread *td;
	struct proc *p;
	struct td_sched *ts;
	int awake;

	sx_slock(&allproc_lock);
	FOREACH_PROC_IN_SYSTEM(p) {
		PROC_LOCK(p);
		if (p->p_state == PRS_NEW) {
			PROC_UNLOCK(p);
			continue;
		}
		FOREACH_THREAD_IN_PROC(p, td) {
			awake = 0;
			ts = td_get_sched(td);
			thread_lock(td);
			/*
			 * Increment sleep time (if sleeping).  We
			 * ignore overflow, as above.
			 */
			/*
			 * The td_sched slptimes are not touched in wakeup
			 * because the thread may not HAVE everything in
			 * memory? XXX I think this is out of date.
			 */
			if (TD_ON_RUNQ(td)) {
				awake = 1;
				td->td_flags &= ~TDF_DIDRUN;
			} else if (TD_IS_RUNNING(td)) {
				awake = 1;
				/* Do not clear TDF_DIDRUN */
			} else if (td->td_flags & TDF_DIDRUN) {
				awake = 1;
				td->td_flags &= ~TDF_DIDRUN;
			}

			/*
			 * ts_pctcpu is only for ps and ttyinfo().
			 */
			ts->ts_pctcpu = (ts->ts_pctcpu * ccpu) >> FSHIFT;
			/*
			 * If the td_sched has been idle the entire second,
			 * stop recalculating its priority until
			 * it wakes up.
			 */
			if (ts->ts_cpticks != 0) {
#if	(FSHIFT >= CCPU_SHIFT)
				ts->ts_pctcpu += (realstathz == 100)
				    ? ((fixpt_t) ts->ts_cpticks) <<
				    (FSHIFT - CCPU_SHIFT) :
				    100 * (((fixpt_t) ts->ts_cpticks)
				    << (FSHIFT - CCPU_SHIFT)) / realstathz;
#else
				ts->ts_pctcpu += ((FSCALE - ccpu) *
				    (ts->ts_cpticks *
				    FSCALE / realstathz)) >> FSHIFT;
#endif
				ts->ts_cpticks = 0;
			}
			/*
			 * If there are ANY running threads in this process,
			 * then don't count it as sleeping.
			 * XXX: this is broken.
			 */
			if (awake) {
				if (ts->ts_slptime > 1) {
					/*
					 * In an ideal world, this should not
					 * happen, because whoever woke us
					 * up from the long sleep should have
					 * unwound the slptime and reset our
					 * priority before we run at the stale
					 * priority.  Should KASSERT at some
					 * point when all the cases are fixed.
					 */
					updatepri(td);
				}
				ts->ts_slptime = 0;
			} else
				ts->ts_slptime++;
			if (ts->ts_slptime > 1) {
				thread_unlock(td);
				continue;
			}
			ts->ts_estcpu = decay_cpu(loadfac, ts->ts_estcpu);
			resetpriority(td);
			resetpriority_thread(td);
			thread_unlock(td);
		}
		PROC_UNLOCK(p);
	}
	sx_sunlock(&allproc_lock);
}

/*
 * Main loop for a kthread that executes schedcpu once a second.
 */
static void
schedcpu_thread(void)
{

	for (;;) {
		schedcpu();
		pause("-", hz);
	}
}

/*
 * Recalculate the priority of a process after it has slept for a while.
 * For all load averages >= 1 and max ts_estcpu of 255, sleeping for at
 * least six times the loadfactor will decay ts_estcpu to zero.
 */
static void
updatepri(struct thread *td)
{
	struct td_sched *ts;
	fixpt_t loadfac;
	unsigned int newcpu;

	ts = td_get_sched(td);
	loadfac = loadfactor(averunnable.ldavg[0]);
	if (ts->ts_slptime > 5 * loadfac)
		ts->ts_estcpu = 0;
	else {
		newcpu = ts->ts_estcpu;
		ts->ts_slptime--;	/* was incremented in schedcpu() */
		while (newcpu && --ts->ts_slptime)
			newcpu = decay_cpu(loadfac, newcpu);
		ts->ts_estcpu = newcpu;
	}
}

/*
 * Compute the priority of a process when running in user mode.
 * Arrange to reschedule if the resulting priority is better
 * than that of the current process.
 */
static void
resetpriority(struct thread *td)
{
	u_int newpriority;

	if (td->td_pri_class != PRI_TIMESHARE)
		return;
	newpriority = PUSER +
	    td_get_sched(td)->ts_estcpu / INVERSE_ESTCPU_WEIGHT +
	    NICE_WEIGHT * (td->td_proc->p_nice - PRIO_MIN);
	newpriority = min(max(newpriority, PRI_MIN_TIMESHARE),
	    PRI_MAX_TIMESHARE);
	sched_user_prio(td, newpriority);
}
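
/*
 * Illustrative note (editor's addition, not from the original source):
 * with the uniprocessor INVERSE_ESTCPU_WEIGHT of 8 and NICE_WEIGHT of 1,
 * a nice-0 thread whose ts_estcpu has climbed to 80 is assigned
 * PUSER + 80 / 8 + 1 * (0 - (-20)), i.e. 30 priority levels worse
 * (numerically higher) than PUSER, before the result is clamped to the
 * timeshare range.
 */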

/*
 * Update the thread's priority when the associated process's user
 * priority changes.
 */
static void
resetpriority_thread(struct thread *td)
{

	/* Only change threads with a time sharing user priority. */
	if (td->td_priority < PRI_MIN_TIMESHARE ||
	    td->td_priority > PRI_MAX_TIMESHARE)
		return;

	/* XXX the whole needresched thing is broken, but not silly. */
	maybe_resched(td);

	sched_prio(td, td->td_user_pri);
}

/* ARGSUSED */
static void
sched_setup(void *dummy)
{

	setup_runqs();

	/* Account for thread0. */
	sched_load_add();
}

/*
 * This routine determines time constants after stathz and hz are setup.
 */
static void
sched_initticks(void *dummy)
{

	realstathz = stathz ? stathz : hz;
	sched_slice = realstathz / 10;	/* ~100ms */
	hogticks = imax(1, (2 * hz * sched_slice + realstathz / 2) /
	    realstathz);
}

/* External interfaces start here */

/*
 * Very early in the boot some setup of scheduler-specific
 * parts of proc0 and of some scheduler resources needs to be done.
 * Called from:
 *  proc0_init()
 */
void
schedinit(void)
{

	/*
	 * Set up the scheduler specific parts of thread0.
	 */
	thread0.td_lock = &sched_lock;
	td_get_sched(&thread0)->ts_slice = sched_slice;
	mtx_init(&sched_lock, "sched lock", NULL, MTX_SPIN);
}
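
/*
 * Illustrative note (editor's addition, not from the original source):
 * at this point thread0's slice is seeded with the compile-time default
 * of sched_slice (12 stathz ticks); sched_initticks() later recomputes
 * sched_slice as realstathz / 10 once the real statclock frequency is
 * known.
 */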

int
sched_runnable(void)
{
#ifdef SMP
	return runq_check(&runq) + runq_check(&runq_pcpu[PCPU_GET(cpuid)]);
#else
	return runq_check(&runq);
#endif
}

int
sched_rr_interval(void)
{

	/* Convert sched_slice from stathz to hz. */
	return (imax(1, (sched_slice * hz + realstathz / 2) / realstathz));
}
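
/*
 * Illustrative note (editor's addition, not from the original source):
 * assuming hz = 1000 and stathz = 127, the default sched_slice of 12
 * converts to (12 * 1000 + 63) / 127 = 94 hz ticks, i.e. a round-robin
 * interval of roughly 94 ms.
 */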

/*
 * We adjust the priority of the current process.  The priority of a
 * process gets worse as it accumulates CPU time.  The cpu usage
 * estimator (ts_estcpu) is increased here.  resetpriority() will
 * compute a different priority each time ts_estcpu increases by
 * INVERSE_ESTCPU_WEIGHT (until PRI_MAX_TIMESHARE is reached).  The
 * cpu usage estimator ramps up quite quickly when the process is
 * running (linearly), and decays away exponentially, at a rate which
 * is proportionally slower when the system is busy.  The basic
 * principle is that the system will 90% forget that the process used
 * a lot of CPU time in 5 * loadav seconds.  This causes the system to
 * favor processes which haven't run much recently, and to round-robin
 * among other processes.
 */
static void
sched_clock_tick(struct thread *td)
{
	struct pcpuidlestat *stat;
	struct td_sched *ts;

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	ts = td_get_sched(td);

	ts->ts_cpticks++;
	ts->ts_estcpu = ESTCPULIM(ts->ts_estcpu + 1);
	if ((ts->ts_estcpu % INVERSE_ESTCPU_WEIGHT) == 0) {
		resetpriority(td);
		resetpriority_thread(td);
	}

	/*
	 * Force a context switch if the current thread has used up a full
	 * time slice (default is 100ms).
	 */
	if (!TD_IS_IDLETHREAD(td) && --ts->ts_slice <= 0) {
		ts->ts_slice = sched_slice;
		td->td_flags |= TDF_NEEDRESCHED | TDF_SLICEEND;
	}

	stat = DPCPU_PTR(idlestat);
	stat->oldidlecalls = stat->idlecalls;
	stat->idlecalls = 0;
}
|
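
/*
 * Illustrative note (not part of the original source): the exponential
 * decay referred to above is applied roughly once a second by schedcpu(),
 * earlier in this file, using a factor of approximately
 * (2 * loadav) / (2 * loadav + 1).  After 5 * loadav seconds that leaves
 * about (2L / (2L + 1))^(5L) ~ e^-2.5 ~ 10% of the accumulated ts_estcpu,
 * which is where the "90% forget" figure in the comment above comes from.
 */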

void
sched_clock(struct thread *td, int cnt)
{

        for ( ; cnt > 0; cnt--)
                sched_clock_tick(td);
}

/*
 * Charge child's scheduling CPU usage to parent.
 */
void
sched_exit(struct proc *p, struct thread *td)
{

        KTR_STATE1(KTR_SCHED, "thread", sched_tdname(td), "proc exit",
            "prio:%d", td->td_priority);

        PROC_LOCK_ASSERT(p, MA_OWNED);
        sched_exit_thread(FIRST_THREAD_IN_PROC(p), td);
}

void
sched_exit_thread(struct thread *td, struct thread *child)
{

        KTR_STATE1(KTR_SCHED, "thread", sched_tdname(child), "exit",
            "prio:%d", child->td_priority);
        thread_lock(td);
        td_get_sched(td)->ts_estcpu = ESTCPULIM(td_get_sched(td)->ts_estcpu +
            td_get_sched(child)->ts_estcpu);
        thread_unlock(td);
        thread_lock(child);
        if ((child->td_flags & TDF_NOLOAD) == 0)
                sched_load_rem();
        thread_unlock(child);
}

void
sched_fork(struct thread *td, struct thread *childtd)
{
        sched_fork_thread(td, childtd);
}

void
sched_fork_thread(struct thread *td, struct thread *childtd)
{
        struct td_sched *ts, *tsc;

        childtd->td_oncpu = NOCPU;
        childtd->td_lastcpu = NOCPU;
        childtd->td_lock = &sched_lock;
        childtd->td_cpuset = cpuset_ref(td->td_cpuset);
        childtd->td_domain.dr_policy = td->td_cpuset->cs_domain;
        childtd->td_priority = childtd->td_base_pri;
        ts = td_get_sched(childtd);
        bzero(ts, sizeof(*ts));
        tsc = td_get_sched(td);
        ts->ts_estcpu = tsc->ts_estcpu;
        ts->ts_flags |= (tsc->ts_flags & TSF_AFFINITY);
        ts->ts_slice = 1;
}

void
sched_nice(struct proc *p, int nice)
{
        struct thread *td;

        PROC_LOCK_ASSERT(p, MA_OWNED);
        p->p_nice = nice;
        FOREACH_THREAD_IN_PROC(p, td) {
                thread_lock(td);
                resetpriority(td);
                resetpriority_thread(td);
                thread_unlock(td);
        }
}

void
sched_class(struct thread *td, int class)
{
        THREAD_LOCK_ASSERT(td, MA_OWNED);
        td->td_pri_class = class;
}

/*
 * Adjust the priority of a thread.
 */
static void
sched_priority(struct thread *td, u_char prio)
{

        KTR_POINT3(KTR_SCHED, "thread", sched_tdname(td), "priority change",
            "prio:%d", td->td_priority, "new prio:%d", prio, KTR_ATTR_LINKED,
            sched_tdname(curthread));
        SDT_PROBE3(sched, , , change__pri, td, td->td_proc, prio);
        if (td != curthread && prio > td->td_priority) {
                KTR_POINT3(KTR_SCHED, "thread", sched_tdname(curthread),
                    "lend prio", "prio:%d", td->td_priority, "new prio:%d",
                    prio, KTR_ATTR_LINKED, sched_tdname(td));
                SDT_PROBE4(sched, , , lend__pri, td, td->td_proc, prio,
                    curthread);
        }
        THREAD_LOCK_ASSERT(td, MA_OWNED);
        if (td->td_priority == prio)
                return;
        td->td_priority = prio;
        if (TD_ON_RUNQ(td) && td->td_rqindex != (prio / RQ_PPQ)) {
                sched_rem(td);
                sched_add(td, SRQ_BORING | SRQ_HOLDTD);
        }
}
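
/*
 * Illustrative note (not part of the original source): with the stock
 * 64 run queues covering 256 priority levels, RQ_PPQ is 4, so a thread
 * already on a run queue is only dequeued and re-added above when the
 * new priority lands in a different queue, e.g. a change from 100 to
 * 102 stays in queue 25, while a change from 100 to 96 moves the thread
 * to queue 24.
 */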

/*
 * Update a thread's priority when it is lent another thread's
 * priority.
 */
void
sched_lend_prio(struct thread *td, u_char prio)
{

        td->td_flags |= TDF_BORROWING;
        sched_priority(td, prio);
}

/*
 * Restore a thread's priority when priority propagation is
 * over.  The prio argument is the minimum priority the thread
 * needs to have to satisfy other possible priority lending
 * requests.  If the thread's regular priority is less
 * important than prio the thread will keep a priority boost
 * of prio.
 */
void
sched_unlend_prio(struct thread *td, u_char prio)
{
        u_char base_pri;

        if (td->td_base_pri >= PRI_MIN_TIMESHARE &&
            td->td_base_pri <= PRI_MAX_TIMESHARE)
                base_pri = td->td_user_pri;
        else
                base_pri = td->td_base_pri;
        if (prio >= base_pri) {
                td->td_flags &= ~TDF_BORROWING;
                sched_prio(td, base_pri);
        } else
                sched_lend_prio(td, prio);
}
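
/*
 * Illustrative example (hypothetical values, not from the original
 * source): a timeshare thread whose user priority is 160 and which is
 * currently borrowing priority 80 keeps borrowing (now at 100) if the
 * turnstile asks for sched_unlend_prio(td, 100), but drops TDF_BORROWING
 * and returns to 160 if the remaining waiters only require
 * sched_unlend_prio(td, 200).
 */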

void
sched_prio(struct thread *td, u_char prio)
{
        u_char oldprio;

        /* First, update the base priority. */
        td->td_base_pri = prio;

        /*
         * If the thread is borrowing another thread's priority, don't ever
         * lower the priority.
         */
        if (td->td_flags & TDF_BORROWING && td->td_priority < prio)
                return;

        /* Change the real priority. */
        oldprio = td->td_priority;
        sched_priority(td, prio);

        /*
         * If the thread is on a turnstile, then let the turnstile update
         * its state.
         */
        if (TD_ON_LOCK(td) && oldprio != prio)
                turnstile_adjust(td, oldprio);
}

void
sched_user_prio(struct thread *td, u_char prio)
{

        THREAD_LOCK_ASSERT(td, MA_OWNED);
        td->td_base_user_pri = prio;
        if (td->td_lend_user_pri <= prio)
                return;
        td->td_user_pri = prio;
}

void
sched_lend_user_prio(struct thread *td, u_char prio)
{

        THREAD_LOCK_ASSERT(td, MA_OWNED);
        td->td_lend_user_pri = prio;
        td->td_user_pri = min(prio, td->td_base_user_pri);
        if (td->td_priority > td->td_user_pri)
                sched_prio(td, td->td_user_pri);
        else if (td->td_priority != td->td_user_pri)
                td->td_flags |= TDF_NEEDRESCHED;
}

/*
 * Like the above but first check if there is anything to do.
 */
void
sched_lend_user_prio_cond(struct thread *td, u_char prio)
{

        if (td->td_lend_user_pri != prio)
                goto lend;
        if (td->td_user_pri != min(prio, td->td_base_user_pri))
                goto lend;
        if (td->td_priority >= td->td_user_pri)
                goto lend;
        return;

lend:
        thread_lock(td);
        sched_lend_user_prio(td, prio);
        thread_unlock(td);
}
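
/*
 * Illustrative note (not part of the original source): the unlocked
 * checks above approximate the work sched_lend_user_prio() would do,
 * so the thread lock is only taken when one of them suggests the call
 * would actually change the lending state.
 */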

void
sched_sleep(struct thread *td, int pri)
{

        THREAD_LOCK_ASSERT(td, MA_OWNED);
        td->td_slptick = ticks;
        td_get_sched(td)->ts_slptime = 0;
        if (pri != 0 && PRI_BASE(td->td_pri_class) == PRI_TIMESHARE)
                sched_prio(td, pri);
        if (TD_IS_SUSPENDED(td) || pri >= PSOCK)
                td->td_flags |= TDF_CANSWAP;
}

void
sched_switch(struct thread *td, int flags)
{
        struct thread *newtd;
        struct mtx *tmtx;
        struct td_sched *ts;
        struct proc *p;
        int preempted;

        tmtx = &sched_lock;
        ts = td_get_sched(td);
        p = td->td_proc;

        THREAD_LOCK_ASSERT(td, MA_OWNED);

        td->td_lastcpu = td->td_oncpu;
        preempted = (td->td_flags & TDF_SLICEEND) == 0 &&
            (flags & SW_PREEMPT) != 0;
        td->td_flags &= ~(TDF_NEEDRESCHED | TDF_SLICEEND);
        td->td_owepreempt = 0;
        td->td_oncpu = NOCPU;

        /*
         * At the last moment, if this thread is still marked RUNNING,
         * then put it back on the run queue as it has not been suspended
         * or stopped or anything else similar.  We never put the idle
         * threads on the run queue, however.
         */
        if (td->td_flags & TDF_IDLETD) {
                TD_SET_CAN_RUN(td);
#ifdef SMP
                CPU_CLR(PCPU_GET(cpuid), &idle_cpus_mask);
#endif
        } else {
                if (TD_IS_RUNNING(td)) {
                        /* Put us back on the run queue. */
                        sched_add(td, preempted ?
                            SRQ_HOLDTD|SRQ_OURSELF|SRQ_YIELDING|SRQ_PREEMPTED :
                            SRQ_HOLDTD|SRQ_OURSELF|SRQ_YIELDING);
                }
        }

        /*
         * Switch to the sched lock to fix things up and pick
         * a new thread.  Block the td_lock in order to avoid
         * breaking the critical path.
         */
        if (td->td_lock != &sched_lock) {
                mtx_lock_spin(&sched_lock);
                tmtx = thread_lock_block(td);
                mtx_unlock_spin(tmtx);
        }

        if ((td->td_flags & TDF_NOLOAD) == 0)
                sched_load_rem();

        newtd = choosethread();
        MPASS(newtd->td_lock == &sched_lock);

#if (KTR_COMPILE & KTR_SCHED) != 0
        if (TD_IS_IDLETHREAD(td))
                KTR_STATE1(KTR_SCHED, "thread", sched_tdname(td), "idle",
                    "prio:%d", td->td_priority);
        else
                KTR_STATE3(KTR_SCHED, "thread", sched_tdname(td), KTDSTATE(td),
                    "prio:%d", td->td_priority, "wmesg:\"%s\"", td->td_wmesg,
                    "lockname:\"%s\"", td->td_lockname);
#endif

        if (td != newtd) {
#ifdef HWPMC_HOOKS
                if (PMC_PROC_IS_USING_PMCS(td->td_proc))
                        PMC_SWITCH_CONTEXT(td, PMC_FN_CSW_OUT);
#endif

                SDT_PROBE2(sched, , , off__cpu, newtd, newtd->td_proc);

                /* I feel sleepy */
                lock_profile_release_lock(&sched_lock.lock_object);
#ifdef KDTRACE_HOOKS
                /*
                 * If DTrace has set the active vtime enum to anything
                 * other than INACTIVE (0), then it should have set the
                 * function to call.
                 */
                if (dtrace_vtime_active)
                        (*dtrace_vtime_switch_func)(newtd);
#endif

                cpu_switch(td, newtd, tmtx);
                lock_profile_obtain_lock_success(&sched_lock.lock_object,
                    0, 0, __FILE__, __LINE__);
                /*
                 * Where am I?  What year is it?
                 * We are in the same thread that went to sleep above,
                 * but any amount of time may have passed.  All our context
                 * will still be available as will local variables.
                 * PCPU values however may have changed as we may have
                 * changed CPU so don't trust cached values of them.
                 * New threads will go to fork_exit() instead of here
                 * so if you change things here you may need to change
                 * things there too.
                 *
                 * If the thread above was exiting it will never wake
                 * up again here, so either it has saved everything it
                 * needed to, or the thread_wait() or wait() will
                 * need to reap it.
                 */

                SDT_PROBE0(sched, , , on__cpu);
#ifdef HWPMC_HOOKS
                if (PMC_PROC_IS_USING_PMCS(td->td_proc))
                        PMC_SWITCH_CONTEXT(td, PMC_FN_CSW_IN);
#endif
        } else {
                td->td_lock = &sched_lock;
                SDT_PROBE0(sched, , , remain__cpu);
        }

        KTR_STATE1(KTR_SCHED, "thread", sched_tdname(td), "running",
            "prio:%d", td->td_priority);

#ifdef SMP
        if (td->td_flags & TDF_IDLETD)
                CPU_SET(PCPU_GET(cpuid), &idle_cpus_mask);
#endif
        sched_lock.mtx_lock = (uintptr_t)td;
        td->td_oncpu = PCPU_GET(cpuid);
        spinlock_enter();
        mtx_unlock_spin(&sched_lock);
}
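
/*
 * Illustrative note (taken from the commit history, not from the original
 * source comments): SW_PREEMPT is only passed for genuine preemptions,
 * e.g. critical_exit() noticing td_owepreempt, or sched_preempt(), so a
 * switch that merely follows an expired time slice (TDF_SLICEEND set by
 * sched_clock_tick() above) is re-queued without SRQ_PREEMPTED.
 */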

void
sched_wakeup(struct thread *td, int srqflags)
{
        struct td_sched *ts;

        THREAD_LOCK_ASSERT(td, MA_OWNED);
        ts = td_get_sched(td);
        td->td_flags &= ~TDF_CANSWAP;
        if (ts->ts_slptime > 1) {
                updatepri(td);
                resetpriority(td);
        }
        td->td_slptick = 0;
        ts->ts_slptime = 0;
        ts->ts_slice = sched_slice;
        sched_add(td, srqflags);
}
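
/*
 * Illustrative note (not part of the original source): updatepri(),
 * defined earlier in this file, decays ts_estcpu for the time the thread
 * spent asleep, so a thread that slept for more than a second wakes up
 * with a correspondingly better timeshare priority, and its time slice
 * is refilled before it is put back on a run queue.
 */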
2004-09-03 09:15:10 +00:00
|
|
|
#ifdef SMP
|
2004-09-03 07:42:31 +00:00
|
|
|
static int
|
2008-07-28 15:52:02 +00:00
|
|
|
forward_wakeup(int cpunum)
|
2004-09-03 07:42:31 +00:00
|
|
|
{
|
|
|
|
struct pcpu *pc;
|
2011-07-04 12:04:52 +00:00
|
|
|
cpuset_t dontuse, map, map2;
|
|
|
|
u_int id, me;
|
Commit the support for removing cpumask_t and replacing it directly with
cpuset_t objects.
That is going to offer the underlying support for a simple bump of
MAXCPU and then support for number of cpus > 32 (as it is today).
Right now, cpumask_t is an int, 32 bits on all our supported architecture.
cpumask_t on the other side is implemented as an array of longs, and
easilly extendible by definition.
The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN
while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.
Some technical notes:
- This commit may be considered an ABI nop for all the architectures
different from amd64 and ia64 (and sparc64 in the future)
- per-cpu members, which are now converted to cpuset_t, needs to be
accessed avoiding migration, because the size of cpuset_t should be
considered unknown
- size of cpuset_t objects is different from kernel and userland (this is
primirally done in order to leave some more space in userland to cope
with KBI extensions). If you need to access kernel cpuset_t from the
userland please refer to example in this patch on how to do that
correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now
The patch has been tested by sbruno and Nicholas Esborn on opteron
4 x 12 pack CPUs. More testing on big SMP is expected to came soon.
pluknet tested the patch with his 8-ways on both amd64 and i386.
Tested by: pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by: jeff, jhb, sbruno
2011-05-05 14:39:14 +00:00
|
|
|
int iscpuset;
|
2004-09-03 07:42:31 +00:00
|
|
|
|
|
|
|
mtx_assert(&sched_lock, MA_OWNED);
|
|
|
|
|
2004-09-05 02:09:54 +00:00
|
|
|
CTR0(KTR_RUNQ, "forward_wakeup()");
|
2004-09-03 07:42:31 +00:00
|
|
|
|
|
|
|
if ((!forward_wakeup_enabled) ||
|
|
|
|
(forward_wakeup_use_mask == 0 && forward_wakeup_use_loop == 0))
|
|
|
|
return (0);
|
2020-01-12 06:07:54 +00:00
|
|
|
if (!smp_started || KERNEL_PANICKED())
|
2004-09-03 07:42:31 +00:00
|
|
|
return (0);
|
|
|
|
|
|
|
|
forward_wakeups_requested++;
|
|
|
|
|
2008-07-28 15:52:02 +00:00
|
|
|
/*
|
|
|
|
* Check the idle mask we received against what we calculated
|
|
|
|
* before in the old version.
|
2004-09-03 07:42:31 +00:00
|
|
|
*/
|
2011-07-04 12:04:52 +00:00
|
|
|
me = PCPU_GET(cpuid);
|
2008-07-28 15:52:02 +00:00
|
|
|
|
|
|
|
/* Don't bother if we should be doing it ourself. */
|
2011-07-04 12:04:52 +00:00
|
|
|
if (CPU_ISSET(me, &idle_cpus_mask) &&
|
|
|
|
(cpunum == NOCPU || me == cpunum))
|
2004-09-03 07:42:31 +00:00
|
|
|
return (0);
|
|
|
|
|
2011-07-04 12:04:52 +00:00
|
|
|
CPU_SETOF(me, &dontuse);
|
Commit the support for removing cpumask_t and replacing it directly with
cpuset_t objects.
That is going to offer the underlying support for a simple bump of
MAXCPU and then support for number of cpus > 32 (as it is today).
Right now, cpumask_t is an int, 32 bits on all our supported architecture.
cpumask_t on the other side is implemented as an array of longs, and
easilly extendible by definition.
The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN
while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.
Some technical notes:
- This commit may be considered an ABI nop for all the architectures
different from amd64 and ia64 (and sparc64 in the future)
- per-cpu members, which are now converted to cpuset_t, needs to be
accessed avoiding migration, because the size of cpuset_t should be
considered unknown
- size of cpuset_t objects is different from kernel and userland (this is
primirally done in order to leave some more space in userland to cope
with KBI extensions). If you need to access kernel cpuset_t from the
userland please refer to example in this patch on how to do that
correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now
The patch has been tested by sbruno and Nicholas Esborn on opteron
4 x 12 pack CPUs. More testing on big SMP is expected to came soon.
pluknet tested the patch with his 8-ways on both amd64 and i386.
Tested by: pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by: jeff, jhb, sbruno
2011-05-05 14:39:14 +00:00
|
|
|
    CPU_OR(&dontuse, &stopped_cpus);
    CPU_OR(&dontuse, &hlt_cpus_mask);
    CPU_ZERO(&map2);
    if (forward_wakeup_use_loop) {
        STAILQ_FOREACH(pc, &cpuhead, pc_allcpu) {
            id = pc->pc_cpuid;
            if (!CPU_ISSET(id, &dontuse) &&
                pc->pc_curthread == pc->pc_idlethread) {
                CPU_SET(id, &map2);
            }
        }
    }

    if (forward_wakeup_use_mask) {
        map = idle_cpus_mask;
        CPU_ANDNOT(&map, &dontuse);

        /* If they are both on, compare and use loop if different. */
        if (forward_wakeup_use_loop) {
            if (CPU_CMP(&map, &map2)) {
                printf("map != map2, loop method preferred\n");
                map = map2;
            }
        }
    } else {
        map = map2;
    }

    /* If we only allow a specific CPU, then mask off all the others. */
    if (cpunum != NOCPU) {
        KASSERT((cpunum <= mp_maxcpus), ("forward_wakeup: bad cpunum."));
        iscpuset = CPU_ISSET(cpunum, &map);
        if (iscpuset == 0)
            CPU_ZERO(&map);
        else
            CPU_SETOF(cpunum, &map);
    }
    if (!CPU_EMPTY(&map)) {
        forward_wakeups_delivered++;
        STAILQ_FOREACH(pc, &cpuhead, pc_allcpu) {
            id = pc->pc_cpuid;
            if (!CPU_ISSET(id, &map))
                continue;
            if (cpu_idle_wakeup(pc->pc_cpuid))
                CPU_CLR(id, &map);
        }
        if (!CPU_EMPTY(&map))
            ipi_selected(map, IPI_AST);
        return (1);
    }
    if (cpunum == NOCPU)
        printf("forward_wakeup: Idle processor not found\n");
    return (0);
}
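
/*
 * Nudge another CPU so that it notices a newly runnable thread: an idle
 * CPU gets an idle wakeup (or an AST IPI), while a busy CPU is asked to
 * preempt or reschedule only if the new thread's priority beats the one
 * it is currently running.
 */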
static void
kick_other_cpu(int pri, int cpuid)
{
    struct pcpu *pcpu;
    int cpri;

    pcpu = pcpu_find(cpuid);
    if (CPU_ISSET(cpuid, &idle_cpus_mask)) {
        forward_wakeups_delivered++;
        if (!cpu_idle_wakeup(cpuid))
            ipi_cpu(cpuid, IPI_AST);
        return;
    }

    cpri = pcpu->pc_curthread->td_priority;
    if (pri >= cpri)
        return;

#if defined(IPI_PREEMPTION) && defined(PREEMPTION)
#if !defined(FULL_PREEMPTION)
    if (pri <= PRI_MAX_ITHD)
#endif /* ! FULL_PREEMPTION */
    {
        ipi_cpu(cpuid, IPI_PREEMPT);
        return;
    }
#endif /* defined(IPI_PREEMPTION) && defined(PREEMPTION) */

    pcpu->pc_curthread->td_flags |= TDF_NEEDRESCHED;
    ipi_cpu(cpuid, IPI_AST);
    return;
}
#endif /* SMP */

#ifdef SMP
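/*
 * Pick a CPU from the thread's affinity set for a per-CPU run queue:
 * start from the CPU it last ran on (if still allowed) and switch to
 * any allowed CPU with a shorter run queue.
 */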
static int
sched_pickcpu(struct thread *td)
{
    int best, cpu;

    mtx_assert(&sched_lock, MA_OWNED);

    if (td->td_lastcpu != NOCPU && THREAD_CAN_SCHED(td, td->td_lastcpu))
        best = td->td_lastcpu;
    else
        best = NOCPU;
    CPU_FOREACH(cpu) {
        if (!THREAD_CAN_SCHED(td, cpu))
            continue;

        if (best == NOCPU)
            best = cpu;
        else if (runq_length[cpu] < runq_length[best])
            best = cpu;
    }
    KASSERT(best != NOCPU, ("no valid CPUs"));

    return (best);
}
#endif
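
/*
 * Put a thread on its run queue: the global queue, or a per-CPU queue
 * when the thread is pinned, bound, or restricted by its CPU affinity
 * set. After queueing, another CPU may be woken or preempted, or a
 * local preemption/reschedule considered.
 */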
void
sched_add(struct thread *td, int flags)
#ifdef SMP
{
    cpuset_t tidlemsk;
    struct td_sched *ts;
    u_int cpu, cpuid;
    int forwarded = 0;
    int single_cpu = 0;

    ts = td_get_sched(td);
    THREAD_LOCK_ASSERT(td, MA_OWNED);
    KASSERT((td->td_inhibitors == 0),
        ("sched_add: trying to run inhibited thread"));
    KASSERT((TD_CAN_RUN(td) || TD_IS_RUNNING(td)),
        ("sched_add: bad thread state"));
    KASSERT(td->td_flags & TDF_INMEM,
        ("sched_add: thread swapped out"));

    KTR_STATE2(KTR_SCHED, "thread", sched_tdname(td), "runq add",
        "prio:%d", td->td_priority, KTR_ATTR_LINKED,
        sched_tdname(curthread));
    KTR_POINT1(KTR_SCHED, "thread", sched_tdname(curthread), "wokeup",
        KTR_ATTR_LINKED, sched_tdname(td));
    SDT_PROBE4(sched, , , enqueue, td, td->td_proc, NULL,
        flags & SRQ_PREEMPTED);

    /*
     * Now that the thread is moving to the run-queue, set the lock
     * to the scheduler's lock.
     */
    if (td->td_lock != &sched_lock) {
        mtx_lock_spin(&sched_lock);
        if ((flags & SRQ_HOLD) != 0)
            td->td_lock = &sched_lock;
        else
            thread_lock_set(td, &sched_lock);
    }
    TD_SET_RUNQ(td);

    /*
     * If SMP is started and the thread is pinned or otherwise limited to
     * a specific set of CPUs, queue the thread to a per-CPU run queue.
     * Otherwise, queue the thread to the global run queue.
     *
     * If SMP has not yet been started we must use the global run queue
     * as per-CPU state may not be initialized yet and we may crash if we
     * try to access the per-CPU run queues.
     */
    if (smp_started && (td->td_pinned != 0 || td->td_flags & TDF_BOUND ||
        ts->ts_flags & TSF_AFFINITY)) {
        if (td->td_pinned != 0)
            cpu = td->td_lastcpu;
        else if (td->td_flags & TDF_BOUND) {
            /* Find CPU from bound runq. */
            KASSERT(SKE_RUNQ_PCPU(ts),
                ("sched_add: bound td_sched not on cpu runq"));
            cpu = ts->ts_runq - &runq_pcpu[0];
        } else
            /* Find a valid CPU for our cpuset */
            cpu = sched_pickcpu(td);
        ts->ts_runq = &runq_pcpu[cpu];
        single_cpu = 1;
        CTR3(KTR_RUNQ,
            "sched_add: Put td_sched:%p(td:%p) on cpu%d runq", ts, td,
            cpu);
    } else {
        CTR2(KTR_RUNQ,
            "sched_add: adding td_sched:%p (td:%p) to gbl runq", ts,
            td);
        cpu = NOCPU;
        ts->ts_runq = &runq;
    }

    if ((td->td_flags & TDF_NOLOAD) == 0)
        sched_load_add();
    runq_add(ts->ts_runq, td, flags);
    if (cpu != NOCPU)
        runq_length[cpu]++;
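
    /*
     * If the thread went on another CPU's run queue, kick that CPU.
     * Otherwise, for the global queue, try to forward the wakeup to an
     * idle CPU; failing that, check for a local preemption/reschedule.
     */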
    cpuid = PCPU_GET(cpuid);
    if (single_cpu && cpu != cpuid) {
        kick_other_cpu(td->td_priority, cpu);
    } else {
        if (!single_cpu) {
            tidlemsk = idle_cpus_mask;
            CPU_ANDNOT(&tidlemsk, &hlt_cpus_mask);
            CPU_CLR(cpuid, &tidlemsk);

            if (!CPU_ISSET(cpuid, &idle_cpus_mask) &&
                ((flags & SRQ_INTR) == 0) &&
                !CPU_EMPTY(&tidlemsk))
                forwarded = forward_wakeup(cpu);
        }

        if (!forwarded) {
            if (!maybe_preempt(td))
                maybe_resched(td);
        }
    }
    if ((flags & SRQ_HOLDTD) == 0)
        thread_unlock(td);
}
#else /* SMP */
{
    struct td_sched *ts;

    ts = td_get_sched(td);
    THREAD_LOCK_ASSERT(td, MA_OWNED);
    KASSERT((td->td_inhibitors == 0),
        ("sched_add: trying to run inhibited thread"));
    KASSERT((TD_CAN_RUN(td) || TD_IS_RUNNING(td)),
        ("sched_add: bad thread state"));
    KASSERT(td->td_flags & TDF_INMEM,
        ("sched_add: thread swapped out"));
    KTR_STATE2(KTR_SCHED, "thread", sched_tdname(td), "runq add",
        "prio:%d", td->td_priority, KTR_ATTR_LINKED,
        sched_tdname(curthread));
    KTR_POINT1(KTR_SCHED, "thread", sched_tdname(curthread), "wokeup",
        KTR_ATTR_LINKED, sched_tdname(td));
    SDT_PROBE4(sched, , , enqueue, td, td->td_proc, NULL,
        flags & SRQ_PREEMPTED);

    /*
     * Now that the thread is moving to the run-queue, set the lock
     * to the scheduler's lock.
     */
    if (td->td_lock != &sched_lock) {
        mtx_lock_spin(&sched_lock);
        if ((flags & SRQ_HOLD) != 0)
            td->td_lock = &sched_lock;
        else
            thread_lock_set(td, &sched_lock);
    }
    TD_SET_RUNQ(td);
    CTR2(KTR_RUNQ, "sched_add: adding td_sched:%p (td:%p) to runq", ts, td);
    ts->ts_runq = &runq;

    if ((td->td_flags & TDF_NOLOAD) == 0)
        sched_load_add();
    runq_add(ts->ts_runq, td, flags);
    if (!maybe_preempt(td))
        maybe_resched(td);
    if ((flags & SRQ_HOLDTD) == 0)
        thread_unlock(td);
}
#endif /* SMP */
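
/*
 * Remove a thread from its run queue and drop the load accounting that
 * sched_add() charged for it.
 */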
void
sched_rem(struct thread *td)
{
    struct td_sched *ts;

    ts = td_get_sched(td);
    KASSERT(td->td_flags & TDF_INMEM,
        ("sched_rem: thread swapped out"));
    KASSERT(TD_ON_RUNQ(td),
        ("sched_rem: thread not on run queue"));
    mtx_assert(&sched_lock, MA_OWNED);
    KTR_STATE2(KTR_SCHED, "thread", sched_tdname(td), "runq rem",
        "prio:%d", td->td_priority, KTR_ATTR_LINKED,
        sched_tdname(curthread));
    SDT_PROBE3(sched, , , dequeue, td, td->td_proc, NULL);

    if ((td->td_flags & TDF_NOLOAD) == 0)
        sched_load_rem();
#ifdef SMP
    if (ts->ts_runq != &runq)
        runq_length[ts->ts_runq - runq_pcpu]--;
#endif
    runq_remove(ts->ts_runq, td);
    TD_SET_CAN_RUN(td);
}

/*
 * Select threads to run. Note that running threads still consume a
 * slot.
 */
struct thread *
sched_choose(void)
{
    struct thread *td;
    struct runq *rq;

    mtx_assert(&sched_lock, MA_OWNED);
#ifdef SMP
    struct thread *tdcpu;

    rq = &runq;
    td = runq_choose_fuzz(&runq, runq_fuzz);
    tdcpu = runq_choose(&runq_pcpu[PCPU_GET(cpuid)]);

    if (td == NULL ||
        (tdcpu != NULL &&
         tdcpu->td_priority < td->td_priority)) {
        CTR2(KTR_RUNQ, "choosing td %p from pcpu runq %d", tdcpu,
            PCPU_GET(cpuid));
        td = tdcpu;
        rq = &runq_pcpu[PCPU_GET(cpuid)];
    } else {
        CTR1(KTR_RUNQ, "choosing td_sched %p from main runq", td);
    }

#else
    rq = &runq;
    td = runq_choose(&runq);
#endif

    if (td) {
#ifdef SMP
        if (td == tdcpu)
            runq_length[PCPU_GET(cpuid)]--;
#endif
        runq_remove(rq, td);
        td->td_flags |= TDF_DIDRUN;

        KASSERT(td->td_flags & TDF_INMEM,
            ("sched_choose: thread swapped out"));
        return (td);
    }
    return (PCPU_GET(idlethread));
}
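
/*
 * Surrender the CPU to a higher-priority thread. If the thread is in a
 * critical section, just note that a preemption is owed; otherwise
 * force an involuntary context switch.
 */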
void
sched_preempt(struct thread *td)
{

    SDT_PROBE2(sched, , , surrender, td, td->td_proc);
    if (td->td_critnest > 1) {
        td->td_owepreempt = 1;
    } else {
        thread_lock(td);
        mi_switch(SW_INVOL | SW_PREEMPT | SWT_PREEMPT);
    }
}
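
/*
 * Called on return to user mode when the thread's priority was changed
 * while in the kernel: drop back to the base user priority.
 */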
void
sched_userret_slowpath(struct thread *td)
{

    thread_lock(td);
    td->td_priority = td->td_user_pri;
    td->td_base_pri = td->td_user_pri;
    thread_unlock(td);
}
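
/*
 * Bind the current thread to a single CPU: mark it bound, point it at
 * that CPU's run queue, and switch away if we are not already there.
 */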
void
sched_bind(struct thread *td, int cpu)
{
    struct td_sched *ts;

    THREAD_LOCK_ASSERT(td, MA_OWNED|MA_NOTRECURSED);
    KASSERT(td == curthread, ("sched_bind: can only bind curthread"));

    ts = td_get_sched(td);

    td->td_flags |= TDF_BOUND;
#ifdef SMP
    ts->ts_runq = &runq_pcpu[cpu];
    if (PCPU_GET(cpuid) == cpu)
        return;

    mi_switch(SW_VOL);
    thread_lock(td);
#endif
}

void
sched_unbind(struct thread* td)
{
    THREAD_LOCK_ASSERT(td, MA_OWNED);
    KASSERT(td == curthread, ("sched_unbind: can only bind curthread"));
    td->td_flags &= ~TDF_BOUND;
}

int
sched_is_bound(struct thread *td)
{
    THREAD_LOCK_ASSERT(td, MA_OWNED);
    return (td->td_flags & TDF_BOUND);
}
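
/*
 * Voluntarily give up the CPU and let another thread run.
 */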
void
sched_relinquish(struct thread *td)
{
    thread_lock(td);
    mi_switch(SW_VOL | SWT_RELINQUISH);
}
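
/*
 * Return the number of runnable threads currently charged to the
 * scheduler's load (threads flagged TDF_NOLOAD are not counted).
 */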
int
sched_load(void)
{
    return (sched_tdcnt);
}

int
sched_sizeof_proc(void)
{
    return (sizeof(struct proc));
}

int
sched_sizeof_thread(void)
{
    return (sizeof(struct thread) + sizeof(struct td_sched));
}
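
/*
 * Return the thread's decayed %CPU estimate as a fixed-point value.
 */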
fixpt_t
sched_pctcpu(struct thread *td)
{
    struct td_sched *ts;

    THREAD_LOCK_ASSERT(td, MA_OWNED);
    ts = td_get_sched(td);
    return (ts->ts_pctcpu);
}

#ifdef RACCT
/*
 * Calculates the contribution to the thread cpu usage for the latest
 * (unfinished) second.
 */
fixpt_t
sched_pctcpu_delta(struct thread *td)
{
    struct td_sched *ts;
    fixpt_t delta;
    int realstathz;

    THREAD_LOCK_ASSERT(td, MA_OWNED);
    ts = td_get_sched(td);
    delta = 0;
    realstathz = stathz ? stathz : hz;
    if (ts->ts_cpticks != 0) {
#if (FSHIFT >= CCPU_SHIFT)
        delta = (realstathz == 100)
            ? ((fixpt_t) ts->ts_cpticks) <<
            (FSHIFT - CCPU_SHIFT) :
            100 * (((fixpt_t) ts->ts_cpticks)
            << (FSHIFT - CCPU_SHIFT)) / realstathz;
#else
        delta = ((FSCALE - ccpu) *
            (ts->ts_cpticks *
            FSCALE / realstathz)) >> FSHIFT;
#endif
    }

    return (delta);
}
#endif
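
/*
 * Return the scheduler's estimated CPU usage for the thread, which the
 * 4BSD priority calculation is based on.
 */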
u_int
sched_estcpu(struct thread *td)
{

    return (td_get_sched(td)->ts_estcpu);
}

/*
 * The actual idle process.
 */
void
sched_idletd(void *dummy)
{
    struct pcpuidlestat *stat;

    THREAD_NO_SLEEPING();
    stat = DPCPU_PTR(idlestat);
    for (;;) {
        mtx_assert(&Giant, MA_NOTOWNED);

        while (sched_runnable() == 0) {
            cpu_idle(stat->idlecalls + stat->oldidlecalls > 64);
            stat->idlecalls++;
        }

        mtx_lock_spin(&sched_lock);
        mi_switch(SW_VOL | SWT_IDLE);
    }
}

/*
 * A CPU is entering for the first time or a thread is exiting.
 */
void
sched_throw(struct thread *td)
{
    /*
     * Correct spinlock nesting. The idle thread context that we are
     * borrowing was created so that it would start out with a single
     * spin lock (sched_lock) held in fork_trampoline(). Since we've
     * explicitly acquired locks in this function, the nesting count
     * is now 2 rather than 1. Since we are nested, calling
     * spinlock_exit() will simply adjust the counts without allowing
     * spin lock using code to interrupt us.
     */
    if (td == NULL) {
        mtx_lock_spin(&sched_lock);
        spinlock_exit();
        PCPU_SET(switchtime, cpu_ticks());
        PCPU_SET(switchticks, ticks);
    } else {
        lock_profile_release_lock(&sched_lock.lock_object);
        MPASS(td->td_lock == &sched_lock);
        td->td_lastcpu = td->td_oncpu;
        td->td_oncpu = NOCPU;
    }
    mtx_assert(&sched_lock, MA_OWNED);
    KASSERT(curthread->td_md.md_spinlock_count == 1, ("invalid count"));
    cpu_throw(td, choosethread());	/* doesn't return */
}

void
sched_fork_exit(struct thread *td)
{

    /*
     * Finish setting up thread glue so that it begins execution in a
     * non-nested critical section with sched_lock held but not recursed.
     */
    td->td_oncpu = PCPU_GET(cpuid);
    sched_lock.mtx_lock = (uintptr_t)td;
    lock_profile_obtain_lock_success(&sched_lock.lock_object,
        0, 0, __FILE__, __LINE__);
    THREAD_LOCK_ASSERT(td, MA_OWNED | MA_NOTRECURSED);

    KTR_STATE1(KTR_SCHED, "thread", sched_tdname(td), "running",
        "prio:%d", td->td_priority);
    SDT_PROBE0(sched, , , on__cpu);
}
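
/*
 * Return a printable name for the thread; with KTR enabled the name is
 * cached in the td_sched on first use.
 */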
char *
sched_tdname(struct thread *td)
{
#ifdef KTR
    struct td_sched *ts;

    ts = td_get_sched(td);
    if (ts->ts_name[0] == '\0')
        snprintf(ts->ts_name, sizeof(ts->ts_name),
            "%s tid %d", td->td_name, td->td_tid);
    return (ts->ts_name);
#else
    return (td->td_name);
#endif
}

#ifdef KTR
void
sched_clear_tdname(struct thread *td)
{
    struct td_sched *ts;

    ts = td_get_sched(td);
    ts->ts_name[0] = '\0';
}
#endif
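
/*
 * React to a change in the thread's CPU affinity set: note whether the
 * thread is restricted at all and, if it is, move it off any run queue
 * or CPU that is no longer allowed.
 */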
void
sched_affinity(struct thread *td)
{
#ifdef SMP
    struct td_sched *ts;
    int cpu;

    THREAD_LOCK_ASSERT(td, MA_OWNED);

    /*
     * Set the TSF_AFFINITY flag if there is at least one CPU this
     * thread can't run on.
     */
    ts = td_get_sched(td);
    ts->ts_flags &= ~TSF_AFFINITY;
    CPU_FOREACH(cpu) {
        if (!THREAD_CAN_SCHED(td, cpu)) {
            ts->ts_flags |= TSF_AFFINITY;
            break;
        }
    }

    /*
     * If this thread can run on all CPUs, nothing else to do.
     */
    if (!(ts->ts_flags & TSF_AFFINITY))
        return;

    /* Pinned threads and bound threads should be left alone. */
    if (td->td_pinned != 0 || td->td_flags & TDF_BOUND)
        return;

    switch (td->td_state) {
    case TDS_RUNQ:
        /*
         * If we are on a per-CPU runqueue that is in the set,
         * then nothing needs to be done.
         */
        if (ts->ts_runq != &runq &&
            THREAD_CAN_SCHED(td, ts->ts_runq - runq_pcpu))
            return;

        /* Put this thread on a valid per-CPU runqueue. */
        sched_rem(td);
        sched_add(td, SRQ_HOLDTD | SRQ_BORING);
        break;
    case TDS_RUNNING:
        /*
         * See if our current CPU is in the set. If not, force a
         * context switch.
         */
        if (THREAD_CAN_SCHED(td, td->td_oncpu))
            return;

        td->td_flags |= TDF_NEEDRESCHED;
        if (td != curthread)
            ipi_cpu(cpu, IPI_AST);
        break;
    default:
        break;
    }
#endif
}