freebsd-skq/sys/kern/kern_thread.c

/*-
* Copyright (C) 2001 Julian Elischer <julian@freebsd.org>.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice(s), this list of conditions and the following disclaimer as
* the first lines of this file unmodified other than the possible
* addition of one or more copyright notices.
* 2. Redistributions in binary form must reproduce the above copyright
* notice(s), this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
* DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/resourcevar.h>
#include <sys/smp.h>
#include <sys/sysctl.h>
#include <sys/sched.h>
#include <sys/sleepqueue.h>
#include <sys/turnstile.h>
#include <sys/ktr.h>
#include <sys/umtx.h>
#include <security/audit/audit.h>
#include <vm/vm.h>
#include <vm/vm_extern.h>
#include <vm/uma.h>
/*
* thread related storage.
*/
static uma_zone_t thread_zone;
SYSCTL_NODE(_kern, OID_AUTO, threads, CTLFLAG_RW, 0, "thread allocation");
int max_threads_per_proc = 1500;
SYSCTL_INT(_kern_threads, OID_AUTO, max_threads_per_proc, CTLFLAG_RW,
&max_threads_per_proc, 0, "Limit on threads per proc");
int max_threads_hits;
SYSCTL_INT(_kern_threads, OID_AUTO, max_threads_hits, CTLFLAG_RD,
&max_threads_hits, 0, "");
#ifdef KSE
int virtual_cpu;
#endif
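/*
* Threads that have exited but have not yet been reclaimed by
* thread_reap().  The list is protected by kse_zombie_lock; a spin
* mutex is used (rather than a sleep mutex) presumably so that
* thread_stash() can be called from contexts that may not sleep.
*/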
TAILQ_HEAD(, thread) zombie_threads = TAILQ_HEAD_INITIALIZER(zombie_threads);
struct mtx kse_zombie_lock;
MTX_SYSINIT(kse_zombie_lock, &kse_zombie_lock, "kse zombie lock", MTX_SPIN);
#ifdef KSE
static int
sysctl_kse_virtual_cpu(SYSCTL_HANDLER_ARGS)
{
int error, new_val;
int def_val;
def_val = mp_ncpus;
if (virtual_cpu == 0)
new_val = def_val;
else
new_val = virtual_cpu;
error = sysctl_handle_int(oidp, &new_val, 0, req);
if (error != 0 || req->newptr == NULL)
return (error);
if (new_val < 0)
return (EINVAL);
virtual_cpu = new_val;
return (0);
}
/* DEBUG ONLY */
SYSCTL_PROC(_kern_threads, OID_AUTO, virtual_cpu, CTLTYPE_INT|CTLFLAG_RW,
0, sizeof(virtual_cpu), sysctl_kse_virtual_cpu, "I",
"debug virtual cpus");
#endif
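/*
* Illustrative usage (not part of the original file): the SYSCTL_PROC
* above exports the handler as kern.threads.virtual_cpu, so it can be
* read or tuned from userland with sysctl(8), e.g.:
*
*	sysctl kern.threads.virtual_cpu
*	sysctl kern.threads.virtual_cpu=4
*
* Reading it while virtual_cpu is 0 reports the default, mp_ncpus.
*/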
struct mtx tid_lock;
static struct unrhdr *tid_unrhdr;
/*
* Prepare a thread for use.
*/
static int
thread_ctor(void *mem, int size, void *arg, int flags)
{
struct thread *td;
td = (struct thread *)mem;
td->td_state = TDS_INACTIVE;
td->td_oncpu = NOCPU;
td->td_tid = alloc_unr(tid_unrhdr);
/*
* Note that td_critnest begins life as 1 because the thread is not
* running and is thereby implicitly waiting to be on the receiving
* end of a context switch. A context switch must occur inside a
* critical section, and in fact, includes hand-off of the sched_lock.
* After a context switch to a newly created thread, it will release
* sched_lock for the first time, and its td_critnest will hit 0 for
* the first time. This happens on the far end of a context switch,
* and when it context switches away from itself, it will in fact go
* back into a critical section, and hand off the sched lock to the
* next thread.
*/
td->td_critnest = 1;
#ifdef AUDIT
audit_thread_alloc(td);
#endif
umtx_thread_alloc(td);
return (0);
}
/*
* Reclaim a thread after use.
*/
static void
thread_dtor(void *mem, int size, void *arg)
{
struct thread *td;
td = (struct thread *)mem;
#ifdef INVARIANTS
/* Verify that this thread is in a safe state to free. */
switch (td->td_state) {
case TDS_INHIBITED:
case TDS_RUNNING:
case TDS_CAN_RUN:
case TDS_RUNQ:
/*
* We must never unlink a thread that is in one of
* these states, because it is currently active.
*/
panic("bad state for thread unlinking");
/* NOTREACHED */
case TDS_INACTIVE:
break;
default:
panic("bad thread state");
/* NOTREACHED */
}
#endif
#ifdef AUDIT
audit_thread_free(td);
#endif
free_unr(tid_unrhdr, td->td_tid);
sched_newthread(td);
}
/*
* Initialize type-stable parts of a thread (when newly created).
*/
static int
thread_init(void *mem, int size, int flags)
{
struct thread *td;
td = (struct thread *)mem;
vm_thread_new(td, 0);
cpu_thread_setup(td);
td->td_sleepqueue = sleepq_alloc();
td->td_turnstile = turnstile_alloc();
td->td_sched = (struct td_sched *)&td[1];
sched_newthread(td);
umtx_thread_init(td);
return (0);
}
/*
* Tear down type-stable parts of a thread (just before being discarded).
*/
static void
thread_fini(void *mem, int size)
{
struct thread *td;
td = (struct thread *)mem;
turnstile_free(td->td_turnstile);
sleepq_free(td->td_sleepqueue);
umtx_thread_fini(td);
vm_thread_dispose(td);
}
/*
* For a newly created process,
* link up all the structures and its initial threads etc.
* called from:
* {arch}/{arch}/machdep.c ia64_init(), init386() etc.
* proc_dtor() (should go away)
* proc_init()
*/
void
proc_linkup(struct proc *p, struct thread *td)
{
TAILQ_INIT(&p->p_threads); /* all threads in proc */
TAILQ_INIT(&p->p_upcalls); /* upcall list */
sigqueue_init(&p->p_sigqueue, p);
p->p_ksi = ksiginfo_alloc(1);
if (p->p_ksi != NULL) {
/* XXX p_ksi may be null if ksiginfo zone is not ready */
p->p_ksi->ksi_flags = KSI_EXT | KSI_INS;
}
LIST_INIT(&p->p_mqnotifier);
p->p_numthreads = 0;
thread_link(td, p);
}
/*
* Initialize global thread allocation resources.
*/
void
threadinit(void)
{
mtx_init(&tid_lock, "TID lock", NULL, MTX_DEF);
tid_unrhdr = new_unrhdr(PID_MAX + 1, INT_MAX, &tid_lock);
thread_zone = uma_zcreate("THREAD", sched_sizeof_thread(),
thread_ctor, thread_dtor, thread_init, thread_fini,
UMA_ALIGN_CACHE, 0);
#ifdef KSE
kseinit();	/* set up kse specific stuff e.g. upcall zone */
#endif
}
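/*
* Summary of the zone callbacks registered above (descriptive only):
* UMA runs thread_init() once when backing memory for a thread is
* first allocated and thread_fini() only when that memory is finally
* released, while thread_ctor()/thread_dtor() run on every
* thread_alloc()/thread_free() cycle.  The type-stable state created
* in thread_init() (kernel stack, sleep queue, turnstile) is thus
* reused across allocations from the zone cache.
*/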
/*
* Stash an embarrassingly extra thread into the zombie thread queue.
* Use the slpq as that must be unused by now.
*/
void
thread_stash(struct thread *td)
{
mtx_lock_spin(&kse_zombie_lock);
TAILQ_INSERT_HEAD(&zombie_threads, td, td_slpq);
mtx_unlock_spin(&kse_zombie_lock);
}
/*
* Reap zombie kse resources.
*/
void
thread_reap(void)
{
struct thread *td_first, *td_next;
/*
* Don't even bother to lock if there are none at this instant;
* we really don't care about the next instant..
*/
if (!TAILQ_EMPTY(&zombie_threads)) {
mtx_lock_spin(&kse_zombie_lock);
td_first = TAILQ_FIRST(&zombie_threads);
if (td_first)
TAILQ_INIT(&zombie_threads);
mtx_unlock_spin(&kse_zombie_lock);
while (td_first) {
td_next = TAILQ_NEXT(td_first, td_slpq);
if (td_first->td_ucred)
crfree(td_first->td_ucred);
thread_free(td_first);
td_first = td_next;
}
}
}
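/*
* Note on the unlocked TAILQ_EMPTY() check above: it may race with a
* concurrent thread_stash(), but the worst case is that a freshly
* stashed zombie is missed and reclaimed on the next call, so taking
* kse_zombie_lock only when the list looks non-empty is safe.
*/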
/*
* Allocate a thread.
*/
struct thread *
thread_alloc(void)
{
thread_reap(); /* check if any zombies to get */
return (uma_zalloc(thread_zone, M_WAITOK));
}
/*
* Deallocate a thread.
*/
void
thread_free(struct thread *td)
{
cpu_thread_clean(td);
uma_zfree(thread_zone, td);
}
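/*
* Illustrative sketch (an assumption, not code from this file): a
* thread creation path is expected to pair the two functions above
* roughly as follows; "error" and the initialization step are
* schematic.
*/
#if 0
struct thread *newtd;

newtd = thread_alloc();	/* may reap zombies; ctor assigns a tid */
/* ... caller-specific initialization of newtd ... */
if (error) {
	thread_free(newtd);	/* dtor returns the tid to the unr pool */
	return (error);
}
#endif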
/*
* Discard the current thread and exit from its context.
* Always called with scheduler locked.
*
* Because we can't free a thread while we're operating under its context,
* push the current thread into our CPU's deadthread holder. This means
* we needn't worry about someone else grabbing our context before we
* do a cpu_throw(). This may not be needed now as we are under schedlock.
* Maybe we can just do a thread_stash() as thr_exit1 does.
*/
/* XXX
* libthr expects its thread exit to return for the last
* thread, meaning that the program is back to non-threaded
* mode I guess. Because we do this (cpu_throw) unconditionally
* here, they have their own version of it. (thr_exit1())
* that doesn't do it all if this was the last thread.
* It is also called from thread_suspend_check().
* Of course in the end, they end up coming here through exit1
* anyhow.. After fixing 'thr' to play by the rules we should be able
* to merge these two functions together.
*
* called from:
* exit1()
* kse_exit()
* thr_exit()
* ifdef KSE
* thread_user_enter()
* thread_userret()
* endif
* thread_suspend_check()
*/
void
thread_exit(void)
{
uint64_t new_switchtime;
struct thread *td;
struct proc *p;
td = curthread;
p = td->td_proc;
mtx_assert(&sched_lock, MA_OWNED);
mtx_assert(&Giant, MA_NOTOWNED);
PROC_LOCK_ASSERT(p, MA_OWNED);
KASSERT(p != NULL, ("thread exiting without a process"));
CTR3(KTR_PROC, "thread_exit: thread %p (pid %ld, %s)", td,
(long)p->p_pid, p->p_comm);
KASSERT(TAILQ_EMPTY(&td->td_sigqueue.sq_list), ("signal pending"));
#ifdef AUDIT
AUDIT_SYSCALL_EXIT(0, td);
#endif
#ifdef KSE
if (td->td_standin != NULL) {
/*
* Note that we don't need to free the cred here as it
* is done in thread_reap().
*/
thread_stash(td->td_standin);
td->td_standin = NULL;
}
#endif
umtx_thread_exit(td);
/*
* Drop FPU & debug register state storage, or any other
* architecture-specific resources that would not be
* present in a new, untouched process.
*/
cpu_thread_exit(td); /* XXXSMP */
#ifdef KSE
/*
 * The thread is exiting.  The scheduler can release its
 * resources and collect stats, etc.
 * XXX this is not quite right, since PROC_UNLOCK may still
 * need scheduler state.
*/
sched_thread_exit(td);
#endif
/* Do the same timestamp bookkeeping that mi_switch() would do. */
new_switchtime = cpu_ticks();
p->p_rux.rux_runtime += (new_switchtime - PCPU_GET(switchtime));
p->p_rux.rux_uticks += td->td_uticks;
p->p_rux.rux_sticks += td->td_sticks;
p->p_rux.rux_iticks += td->td_iticks;
PCPU_SET(switchtime, new_switchtime);
PCPU_SET(switchticks, ticks);
cnt.v_swtch++;
/* Add our usage into the usage of all our children. */
if (p->p_numthreads == 1)
ruadd(p->p_ru, &p->p_rux, &p->p_stats->p_cru, &p->p_crux);
/*
* The last thread is left attached to the process
 * so that the whole bundle gets recycled. Skip
* all this stuff if we never had threads.
 * exit1() clears all signs of other threads when
* it goes to single threading, so the last thread always
* takes the short path.
*/
if (p->p_flag & P_HADTHREADS) {
if (p->p_numthreads > 1) {
thread_unlink(td);
sched_exit_thread(FIRST_THREAD_IN_PROC(p), td);
/*
* The test below is NOT true if we are the
 * sole exiting thread. P_STOPPED_SINGLE is unset
* in exit1() after it is the only survivor.
*/
if (P_SHOULDSTOP(p) == P_STOPPED_SINGLE) {
if (p->p_numthreads == p->p_suspcount) {
thread_unsuspend_one(p->p_singlethread);
}
}
#ifdef KSE
/*
 * Because each upcall structure has an owner thread,
 * the owner thread exits only when the process is in the
 * exiting state, so an upcall to userland is no longer
 * needed and deleting the upcall structure is safe here.
 * Thus, when all threads in a group have exited, all
 * upcalls in the group should be automatically freed.
 * XXXKSE This is a KSE thing and should be exported
 * there somehow.
 */
upcall_remove(td);
#endif
PROC_UNLOCK(p);
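/*
 * Stash ourselves as this CPU's dead thread: we cannot free
 * our own stack while still running on it, so the next thread
 * to run on this CPU reaps us instead.
 */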
PCPU_SET(deadthread, td);
} else {
/*
 * The last thread is exiting... but not through exit().
 * What should we do? Theoretically this can't happen:
 * exit1() - clears threading flags before coming here
 * kse_exit() - treats last thread specially
 * thr_exit() - treats last thread specially
 * ifdef KSE
* thread_user_enter() - only if more exist
* thread_userret() - only if more exist
* endif
* thread_suspend_check() - only if more exist
*/
panic("thread_exit: Last thread exiting on its own");
}
} else {
/*
 * A non-threaded process comes here.
 * This includes a formerly threaded process that is coming
 * here via exit1() (exit1() dethreads the proc first).
 */
PROC_UNLOCK(p);
}
td->td_state = TDS_INACTIVE;
CTR1(KTR_PROC, "thread_exit: cpu_throw() thread %p", td);
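/*
 * Switch away for the last time.  cpu_throw() discards the
 * current context rather than saving it, and never returns.
 */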
cpu_throw(td, choosethread());
panic("I'm a teapot!");
/* NOTREACHED */
}
/*
 * Do any thread-specific cleanups that may be needed in wait().
 * Called with Giant, proc and schedlock not held.
 */
void
thread_wait(struct proc *p)
{
struct thread *td;
mtx_assert(&Giant, MA_NOTOWNED);
KASSERT((p->p_numthreads == 1), ("Multiple threads in thread_wait()"));
FOREACH_THREAD_IN_PROC(p, td) {
#ifdef KSE
if (td->td_standin != NULL) {
if (td->td_standin->td_ucred != NULL) {
crfree(td->td_standin->td_ucred);
td->td_standin->td_ucred = NULL;
}
thread_free(td->td_standin);
td->td_standin = NULL;
}
#endif
cpu_thread_clean(td);
crfree(td->td_ucred);
}
thread_reap(); /* check for zombie threads etc. */
}
/*
* Link a thread to a process.
 * Set up anything that needs to be initialized for it to
* be used by the process.
*
* Note that we do not link to the proc's ucred here.
* The thread is linked as if running but no KSE assigned.
* Called from:
* proc_linkup()
* thread_schedule_upcall()
* thr_create()
*/
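/*
 * Illustrative sketch only (not part of this file): a creation
 * path such as thr_create() roughly does
 *
 *	newtd = thread_alloc();
 *	...
 *	PROC_LOCK(p);
 *	thread_link(newtd, p);
 *	PROC_UNLOCK(p);
 *
 * before making the new thread runnable.
 */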
void
thread_link(struct thread *td, struct proc *p)
2002-06-29 07:04:59 +00:00
{
td->td_state = TDS_INACTIVE;
td->td_proc = p;
td->td_flags = 0;
LIST_INIT(&td->td_contested);
sigqueue_init(&td->td_sigqueue, p);
callout_init(&td->td_slpcallout, CALLOUT_MPSAFE);
TAILQ_INSERT_HEAD(&p->p_threads, td, td_plist);
p->p_numthreads++;
}
/*
* Convert a process with one thread to an unthreaded process.
* Called from:
* thread_single(exit) (called from execve and exit)
* kse_exit() XXX may need cleaning up wrt KSE stuff
*/
void
thread_unthread(struct thread *td)
{
struct proc *p = td->td_proc;
KASSERT((p->p_numthreads == 1), ("Unthreading with >1 threads"));
#ifdef KSE
upcall_remove(td);
p->p_flag &= ~(P_SA|P_HADTHREADS);
td->td_mailbox = NULL;
td->td_pflags &= ~(TDP_SA | TDP_CAN_UNBIND);
if (td->td_standin != NULL) {
thread_stash(td->td_standin);
td->td_standin = NULL;
}
sched_set_concurrency(p, 1);
#else
p->p_flag &= ~P_HADTHREADS;
#endif
}
/*
* Called from:
* thread_exit()
*/
void
thread_unlink(struct thread *td)
{
struct proc *p = td->td_proc;
mtx_assert(&sched_lock, MA_OWNED);
TAILQ_REMOVE(&p->p_threads, td, td_plist);
p->p_numthreads--;
/* could clear a few other things here */
/* Must NOT clear links to proc! */
}
/*
* Enforce single-threading.
*
* Returns 1 if the caller must abort (another thread is waiting to
* exit the process or similar). Process is locked!
* Returns 0 when you are successfully the only thread running.
 * A process has successfully single-threaded in the suspend mode when
 * there are no threads in user mode. Threads in the kernel must be
 * allowed to continue until they get to the user boundary. They may even
 * copy out their return values and data before suspending. They may,
 * however, be accelerated in reaching the user boundary as we will wake up
 * any sleeping threads that are interruptible (PCATCH).
*/
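/*
 * Illustrative sketch only (not part of this file): exit1() and
 * execve() single-thread the process before proceeding, roughly
 *
 *	PROC_LOCK(p);
 *	if (thread_single(SINGLE_EXIT) != 0)
 *		... abort or retry ...
 *
 * while every other thread is herded out via
 * thread_suspend_check() below.
 */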
int
thread_single(int mode)
{
struct thread *td;
struct thread *td2;
struct proc *p;
int remaining;
td = curthread;
p = td->td_proc;
mtx_assert(&Giant, MA_NOTOWNED);
PROC_LOCK_ASSERT(p, MA_OWNED);
KASSERT((td != NULL), ("curthread is NULL"));
if ((p->p_flag & P_HADTHREADS) == 0)
return (0);
/* Is someone already single threading? */
if (p->p_singlethread != NULL && p->p_singlethread != td)
return (1);
if (mode == SINGLE_EXIT) {
p->p_flag |= P_SINGLE_EXIT;
p->p_flag &= ~P_SINGLE_BOUNDARY;
} else {
p->p_flag &= ~P_SINGLE_EXIT;
if (mode == SINGLE_BOUNDARY)
p->p_flag |= P_SINGLE_BOUNDARY;
else
p->p_flag &= ~P_SINGLE_BOUNDARY;
}
p->p_flag |= P_STOPPED_SINGLE;
mtx_lock_spin(&sched_lock);
p->p_singlethread = td;
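/*
 * Count how many threads we still have to deal with: for
 * SINGLE_EXIT every other thread must go away entirely; for
 * SINGLE_BOUNDARY threads already held at the user boundary do
 * not count; otherwise suspended threads do not count.
 */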
if (mode == SINGLE_EXIT)
remaining = p->p_numthreads;
else if (mode == SINGLE_BOUNDARY)
remaining = p->p_numthreads - p->p_boundary_count;
else
remaining = p->p_numthreads - p->p_suspcount;
while (remaining != 1) {
if (P_SHOULDSTOP(p) != P_STOPPED_SINGLE)
goto stopme;
FOREACH_THREAD_IN_PROC(p, td2) {
if (td2 == td)
continue;
td2->td_flags |= TDF_ASTPENDING;
if (TD_IS_INHIBITED(td2)) {
switch (mode) {
case SINGLE_EXIT:
if (td->td_flags & TDF_DBSUSPEND)
td->td_flags &= ~TDF_DBSUSPEND;
if (TD_IS_SUSPENDED(td2))
thread_unsuspend_one(td2);
if (TD_ON_SLEEPQ(td2) &&
(td2->td_flags & TDF_SINTR))
sleepq_abort(td2, EINTR);
break;
case SINGLE_BOUNDARY:
if (TD_IS_SUSPENDED(td2) &&
!(td2->td_flags & TDF_BOUNDARY))
thread_unsuspend_one(td2);
if (TD_ON_SLEEPQ(td2) &&
(td2->td_flags & TDF_SINTR))
sleepq_abort(td2, ERESTART);
break;
default:
if (TD_IS_SUSPENDED(td2))
continue;
/*
 * maybe other inhibited states too?
*/
if ((td2->td_flags & TDF_SINTR) &&
(td2->td_inhibitors &
(TDI_SLEEPING | TDI_SWAPPED)))
thread_suspend_one(td2);
break;
}
}
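/*
 * A thread actually running on another CPU will not notice
 * TDF_ASTPENDING until it re-enters the kernel, so prod it
 * with an IPI.
 */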
#ifdef SMP
else if (TD_IS_RUNNING(td2) && td != td2) {
forward_signal(td2);
}
#endif
}
if (mode == SINGLE_EXIT)
remaining = p->p_numthreads;
else if (mode == SINGLE_BOUNDARY)
remaining = p->p_numthreads - p->p_boundary_count;
else
remaining = p->p_numthreads - p->p_suspcount;
/*
 * Maybe we suspended some threads... was it enough?
*/
if (remaining == 1)
break;
stopme:
/*
* Wake us up when everyone else has suspended.
 * In the meantime we suspend as well.
*/
thread_stopped(p);
thread_suspend_one(td);
PROC_UNLOCK(p);
mi_switch(SW_VOL, NULL);
mtx_unlock_spin(&sched_lock);
PROC_LOCK(p);
mtx_lock_spin(&sched_lock);
if (mode == SINGLE_EXIT)
remaining = p->p_numthreads;
else if (mode == SINGLE_BOUNDARY)
remaining = p->p_numthreads - p->p_boundary_count;
else
remaining = p->p_numthreads - p->p_suspcount;
}
if (mode == SINGLE_EXIT) {
/*
* We have gotten rid of all the other threads and we
* are about to either exit or exec. In either case,
* we try our utmost to revert to being a non-threaded
* process.
*/
p->p_singlethread = NULL;
p->p_flag &= ~(P_STOPPED_SINGLE | P_SINGLE_EXIT);
thread_unthread(td);
}
mtx_unlock_spin(&sched_lock);
return (0);
}
/*
 * Called from locations that can safely check to see
* whether we have to suspend or at least throttle for a
* single-thread event (e.g. fork).
*
* Such locations include userret().
* If the "return_instead" argument is non zero, the thread must be able to
* accept 0 (caller may continue), or 1 (caller must abort) as a result.
*
* The 'return_instead' argument tells the function if it may do a
* thread_exit() or suspend, or whether the caller must abort and back
* out instead.
*
* If the thread that set the single_threading request has set the
* P_SINGLE_EXIT bit in the process flags then this call will never return
* if 'return_instead' is false, but will exit.
*
 * P_SINGLE_EXIT | return_instead == 0| return_instead != 0
 *---------------+--------------------+---------------------
 *       0       | returns 0          |   returns 0 or error
 *               | when ST ends       |   immediately
 *---------------+--------------------+---------------------
 *       1       | thread exits       |   returns error
 *               |                    |   immediately
* 0 = thread_exit() or suspension ok,
* other = return error instead of stopping the thread.
*
 * While a full suspension is in effect, even a single-threading
* thread would be suspended if it made this call (but it shouldn't).
* This call should only be made from places where
* thread_exit() would be safe as that may be the outcome unless
* return_instead is set.
*/
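/*
 * Illustrative sketch only (not part of this file): a loop that
 * may block for long periods typically polls like this
 *
 *	PROC_LOCK(p);
 *	if ((error = thread_suspend_check(1)) != 0) {
 *		PROC_UNLOCK(p);
 *		return (error);		(EINTR or ERESTART)
 *	}
 *	PROC_UNLOCK(p);
 *
 * while userret() calls thread_suspend_check(0) and simply parks
 * here until single-threading ends.
 */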
int
thread_suspend_check(int return_instead)
{
struct thread *td;
struct proc *p;
td = curthread;
p = td->td_proc;
mtx_assert(&Giant, MA_NOTOWNED);
PROC_LOCK_ASSERT(p, MA_OWNED);
while (P_SHOULDSTOP(p) ||
((p->p_flag & P_TRACED) && (td->td_flags & TDF_DBSUSPEND))) {
if (P_SHOULDSTOP(p) == P_STOPPED_SINGLE) {
KASSERT(p->p_singlethread != NULL,
("singlethread not set"));
/*
* The only suspension in action is a
 * single-threading. The single threader need not stop.
* XXX Should be safe to access unlocked
* as it can only be set to be true by us.
*/
if (p->p_singlethread == td)
2002-06-29 07:04:59 +00:00
return (0); /* Exempt from stopping. */
}
if ((p->p_flag & P_SINGLE_EXIT) && return_instead)
return (EINTR);
/* Should we go to the user boundary if we didn't come from there? */
if (P_SHOULDSTOP(p) == P_STOPPED_SINGLE &&
(p->p_flag & P_SINGLE_BOUNDARY) && return_instead)
return (ERESTART);
/* If the thread will exit, flush its pending signals. */
if ((p->p_flag & P_SINGLE_EXIT) && (p->p_singlethread != td))
sigqueue_flush(&td->td_sigqueue);
mtx_lock_spin(&sched_lock);
thread_stopped(p);
/*
* If the process is waiting for us to exit,
* this thread should just suicide.
* Assumes that P_SINGLE_EXIT implies P_STOPPED_SINGLE.
*/
if ((p->p_flag & P_SINGLE_EXIT) && (p->p_singlethread != td))
thread_exit();
/*
* When a thread suspends, it just
* gets taken off all queues.
*/
thread_suspend_one(td);
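/*
 * Callers that cannot back out (return_instead == 0, i.e. the
 * userret() path) are counted as parked at the user boundary,
 * so a SINGLE_BOUNDARY single-threader need not wait for them.
 */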
if (return_instead == 0) {
p->p_boundary_count++;
td->td_flags |= TDF_BOUNDARY;
}
if (P_SHOULDSTOP(p) == P_STOPPED_SINGLE) {
if (p->p_numthreads == p->p_suspcount)
thread_unsuspend_one(p->p_singlethread);
}
PROC_UNLOCK(p);
mi_switch(SW_INVOL, NULL);
if (return_instead == 0) {
p->p_boundary_count--;
td->td_flags &= ~TDF_BOUNDARY;
}
mtx_unlock_spin(&sched_lock);
PROC_LOCK(p);
}
return (0);
}
void
thread_suspend_one(struct thread *td)
{
struct proc *p = td->td_proc;
mtx_assert(&sched_lock, MA_OWNED);
PROC_LOCK_ASSERT(p, MA_OWNED);
KASSERT(!TD_IS_SUSPENDED(td), ("already suspended"));
p->p_suspcount++;
TD_SET_SUSPENDED(td);
}
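
/*
 * Clear a thread's suspended state, decrement its process's suspend
 * count and make the thread runnable again.  The caller must hold
 * sched_lock and the proc lock.
 */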
void
thread_unsuspend_one(struct thread *td)
{
struct proc *p = td->td_proc;
mtx_assert(&sched_lock, MA_OWNED);
PROC_LOCK_ASSERT(p, MA_OWNED);
KASSERT(TD_IS_SUSPENDED(td), ("Thread not suspended"));
TD_CLR_SUSPENDED(td);
p->p_suspcount--;
setrunnable(td);
}
/*
* Allow all threads blocked by single threading to continue running.
*/
void
thread_unsuspend(struct proc *p)
{
struct thread *td;
mtx_assert(&sched_lock, MA_OWNED);
PROC_LOCK_ASSERT(p, MA_OWNED);
if (!P_SHOULDSTOP(p)) {
FOREACH_THREAD_IN_PROC(p, td) {
if (TD_IS_SUSPENDED(td)) {
thread_unsuspend_one(td);
}
}
} else if ((P_SHOULDSTOP(p) == P_STOPPED_SINGLE) &&
(p->p_numthreads == p->p_suspcount)) {
/*
* Stopping everything also did the job for the single
* threading request. Now we've downgraded to single-threaded,
* let it continue.
*/
thread_unsuspend_one(p->p_singlethread);
}
}

Refactor a bunch of scheduler code to give basically the same behaviour but with slightly cleaned up interfaces.

The KSE structure has become the same as the "per-thread scheduler private data" structure. In order to not make the diffs too great, one is #defined as the other at this time. The KSE (or td_sched) structure is now allocated per thread and has no allocation code of its own.

Concurrency for a KSEGRP is now kept track of via a simple pair of counters rather than using KSE structures as tokens. Since the KSE structure is different in each scheduler, kern_switch.c is now included at the end of each scheduler. Nothing outside the scheduler knows the contents of the KSE (aka td_sched) structure. The fields in the ksegrp structure that have to do with the scheduler's queueing mechanisms are now moved to the kg_sched structure (the per-ksegrp scheduler private data structure). In other words, how the scheduler queues and keeps track of threads is no one's business except the scheduler's. This should allow people to write experimental schedulers with completely different internal structuring.

A scheduler call, sched_set_concurrency(kg, N), has been added that notifies the scheduler that no more than N threads from that ksegrp should be allowed to be scheduled concurrently; see the sketch after this message. This is also used to enforce 'fairness' at this time, so that a ksegrp with 10000 threads cannot swamp the run queue and force out a process with 1 thread, since the current code will not set the concurrency above NCPU, and both schedulers will not allow more than that many onto the system run queue at a time. Each scheduler should eventually develop its own methods to do this now that they are effectively separated.

Rejig libthr's kernel interface to follow the same code paths as libkse for scope system threads. This has slightly hurt libthr's performance, but I will work to recover as much of it as I can.

Thread exit code has been cleaned up greatly. The exit and exec code now transitions a process back to 'standard non-threaded mode' before taking the next step.

Reviewed by: scottl, peter
MFC after: 1 week
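
A minimal sketch of how the sched_set_concurrency() call described above might be invoked; the ksegrp pointer "kg" and the clamping policy shown are illustrative assumptions, not code from this file:

	/*
	 * Illustrative assumption: cap the ksegrp's scheduling
	 * concurrency at the machine's CPU count (mp_ncpus) so that a
	 * heavily threaded ksegrp cannot swamp the system run queue.
	 */
	sched_set_concurrency(kg, mp_ncpus);
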
/*
 * End the single threading mode.
*/
void
thread_single_end(void)
{
struct thread *td;
struct proc *p;
td = curthread;
p = td->td_proc;
PROC_LOCK_ASSERT(p, MA_OWNED);
p->p_flag &= ~(P_STOPPED_SINGLE | P_SINGLE_EXIT | P_SINGLE_BOUNDARY);
mtx_lock_spin(&sched_lock);
p->p_singlethread = NULL;
/*
 * If there are other threads, they may now run,
 * unless of course there is a blanket 'stop order'
 * on the process.  The single threader must be allowed
 * to continue, however, as this is a bad place to stop.
*/
if ((p->p_numthreads != 1) && (!P_SHOULDSTOP(p))) {
FOREACH_THREAD_IN_PROC(p, td) {
if (TD_IS_SUSPENDED(td)) {
thread_unsuspend_one(td);
}
}
}
mtx_unlock_spin(&sched_lock);
}
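
/*
 * Locate a thread within a process by its thread ID; returns NULL if
 * no thread with that tid exists.  The caller must hold the proc lock.
 */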
struct thread *
thread_find(struct proc *p, lwpid_t tid)
{
struct thread *td;
PROC_LOCK_ASSERT(p, MA_OWNED);
mtx_lock_spin(&sched_lock);
FOREACH_THREAD_IN_PROC(p, td) {
if (td->td_tid == tid)
break;
}
mtx_unlock_spin(&sched_lock);
return (td);
}
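
/*
 * A hypothetical usage sketch for thread_find(), not part of this
 * file: resolve a tid to a thread while holding the proc lock, as a
 * syscall operating on another thread might.  "p" and "tid" are
 * assumed to be supplied by the caller.
 */
static int
example_lookup_tid(struct proc *p, lwpid_t tid)
{
	struct thread *td;

	PROC_LOCK(p);
	td = thread_find(p, tid);
	if (td == NULL) {
		PROC_UNLOCK(p);
		return (ESRCH);	/* No such thread in this process. */
	}
	/* ... operate on td while the proc lock is held ... */
	PROC_UNLOCK(p);
	return (0);
}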