/*
 * Copyright (c) 1995-1998 John Birrell <jb@cimlogic.com.au>.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by John Birrell.
 * 4. Neither the name of the author nor the names of any co-contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY JOHN BIRRELL AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * Private thread definitions for the uthread kernel.
 *
 * $FreeBSD$
 */

#ifndef _THR_PRIVATE_H
#define _THR_PRIVATE_H

/*
 * Include files.
 */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/queue.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/cdefs.h>
#include <sys/kse.h>
#include <sched.h>
#include <ucontext.h>
#include <unistd.h>
#include <pthread.h>
#include <pthread_np.h>

#ifndef LIBTHREAD_DB
#include "lock.h"
#include "pthread_md.h"
#endif

/*
 * Evaluate the storage class specifier.
 */
#ifdef GLOBAL_PTHREAD_PRIVATE
#define SCLASS
#define SCLASS_PRESET(x...)	= x
#else
#define SCLASS			extern
#define SCLASS_PRESET(x...)
#endif
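
/*
 * For illustration (the global shown is hypothetical, not one of this
 * library's variables): a variable declared once in this header with
 * SCLASS is defined, with its preset initializer, in the single unit
 * that defines GLOBAL_PTHREAD_PRIVATE, and declared extern everywhere
 * else:
 *
 *	SCLASS int _example_flag SCLASS_PRESET(0);
 */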

/*
 * Kernel fatal error handler macro.
 */
#define PANIC(string)		_thr_exit(__FILE__, __LINE__, string)

/* Output debug messages like this: */
#define stdout_debug(args...)	_thread_printf(STDOUT_FILENO, ##args)
#define stderr_debug(args...)	_thread_printf(STDERR_FILENO, ##args)

#define DBG_MUTEX	0x0001
#define DBG_SIG		0x0002

#ifdef _PTHREADS_INVARIANTS
#define THR_ASSERT(cond, msg) do {	\
	if (!(cond))			\
		PANIC(msg);		\
} while (0)
#else
#define THR_ASSERT(cond, msg)
#endif
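
/*
 * Usage sketch (the condition shown is hypothetical): invariant checks
 * cost nothing in normal builds and panic with the given message when
 * compiled with _PTHREADS_INVARIANTS defined:
 *
 *	THR_ASSERT(curthread != NULL, "No current thread");
 */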

/*
 * State change macro without scheduling queue change:
 */
#define THR_SET_STATE(thrd, newstate) do {	\
	(thrd)->state = newstate;		\
	(thrd)->fname = __FILE__;		\
	(thrd)->lineno = __LINE__;		\
} while (0)

#define TIMESPEC_ADD(dst, src, val)				\
	do {							\
		(dst)->tv_sec = (src)->tv_sec + (val)->tv_sec;	\
		(dst)->tv_nsec = (src)->tv_nsec + (val)->tv_nsec; \
		if ((dst)->tv_nsec >= 1000000000) {		\
			(dst)->tv_sec++;			\
			(dst)->tv_nsec -= 1000000000;		\
		}						\
	} while (0)

#define TIMESPEC_SUB(dst, src, val)				\
	do {							\
		(dst)->tv_sec = (src)->tv_sec - (val)->tv_sec;	\
		(dst)->tv_nsec = (src)->tv_nsec - (val)->tv_nsec; \
		if ((dst)->tv_nsec < 0) {			\
			(dst)->tv_sec--;			\
			(dst)->tv_nsec += 1000000000;		\
		}						\
	} while (0)
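
/*
 * Worked example: forming an absolute deadline from a relative timeout
 * (caller-side sketch; the variable names are illustrative).  With
 * now = { 10, 900000000 } and rel = { 1, 500000000 }, the result is
 * deadline = { 12, 400000000 }:
 *
 *	struct timespec now, deadline;
 *	struct timespec rel = { 1, 500000000 };
 *
 *	clock_gettime(CLOCK_REALTIME, &now);
 *	TIMESPEC_ADD(&deadline, &now, &rel);
 */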

/*
 * Priority queues.
 *
 * XXX It'd be nice if these were contained in uthread_priority_queue.[ch].
 */
typedef struct pq_list {
	TAILQ_HEAD(, pthread)	pl_head;   /* list of threads at this priority */
	TAILQ_ENTRY(pq_list)	pl_link;   /* link for queue of priority lists */
	int			pl_prio;   /* the priority of this list */
	int			pl_queued; /* is this in the priority queue */
} pq_list_t;

typedef struct pq_queue {
	TAILQ_HEAD(, pq_list)	pq_queue;  /* queue of priority lists */
	pq_list_t		*pq_lists; /* array of all priority lists */
	int			pq_size;   /* number of priority lists */
#define	PQF_ACTIVE	0x0001
	int			pq_flags;
	int			pq_threads;
} pq_queue_t;
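
/*
 * Sketch of how a lookup on this structure can work (illustrative
 * only, not the library's actual routine): because pq_queue links
 * only the non-empty priority lists, kept in priority order, the
 * next thread to run is the first thread on the first list:
 *
 *	static struct pthread *
 *	pq_first_sketch(pq_queue_t *pq)
 *	{
 *		pq_list_t *pl = TAILQ_FIRST(&pq->pq_queue);
 *
 *		return ((pl == NULL) ? NULL : TAILQ_FIRST(&pl->pl_head));
 *	}
 */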

/*
 * Each KSEG has a scheduling queue.  For now, threads that exist in their
 * own KSEG (system scope) will get a full priority queue.  In the future
 * this can be optimized for the single thread per KSEG case.
 */
struct sched_queue {
	pq_queue_t		sq_runq;
	TAILQ_HEAD(, pthread)	sq_waitq;	/* waiting in userland */
};

typedef struct kse_thr_mailbox	*kse_critical_t;

struct kse_group;

#define MAX_KSE_LOCKLEVEL	5
struct kse {
	/* -- location and order specific items for gdb -- */
	struct kcb		*k_kcb;
	struct pthread		*k_curthread;	/* current thread */
	struct kse_group	*k_kseg;	/* parent KSEG */
	struct sched_queue	*k_schedq;	/* scheduling queue */
	/* -- end of location and order specific items -- */
	TAILQ_ENTRY(kse)	k_qe;		/* KSE list link entry */
	TAILQ_ENTRY(kse)	k_kgqe;		/* KSEG's KSE list entry */
	/*
	 * Items that are only modified by the kse, or that otherwise
	 * don't need to be locked when accessed.
	 */
	struct lock		k_lock;
	struct lockuser		k_lockusers[MAX_KSE_LOCKLEVEL];
	int			k_locklevel;
	stack_t			k_stack;
	int			k_flags;
#define	KF_STARTED	0x0001		/* kernel kse created */
#define	KF_INITIALIZED	0x0002		/* initialized on 1st upcall */
#define	KF_TERMINATED	0x0004		/* kse is terminated */
#define	KF_IDLE		0x0008		/* kse is idle */
#define	KF_SWITCH	0x0010		/* thread switch in UTS */
	int			k_error;	/* syscall errno in critical */
	int			k_cpu;		/* CPU ID when bound */
	int			k_sigseqno;	/* signal buffered count */
};

#define KSE_SET_IDLE(kse)	((kse)->k_flags |= KF_IDLE)
#define KSE_CLEAR_IDLE(kse)	((kse)->k_flags &= ~KF_IDLE)
#define KSE_IS_IDLE(kse)	(((kse)->k_flags & KF_IDLE) != 0)
#define KSE_SET_SWITCH(kse)	((kse)->k_flags |= KF_SWITCH)
#define KSE_CLEAR_SWITCH(kse)	((kse)->k_flags &= ~KF_SWITCH)
#define KSE_IS_SWITCH(kse)	(((kse)->k_flags & KF_SWITCH) != 0)

/*
 * Each KSE group contains one or more KSEs in which threads can run.
 * At least for now, there is one scheduling queue per KSE group; KSEs
 * within the same KSE group compete for threads from the same scheduling
 * queue.  A scope system thread has one KSE in one KSE group; the group
 * does not use its scheduling queue.
 */
struct kse_group {
	TAILQ_HEAD(, kse)	kg_kseq;	/* list of KSEs in group */
	TAILQ_HEAD(, pthread)	kg_threadq;	/* list of threads in group */
	TAILQ_ENTRY(kse_group)	kg_qe;		/* link entry */
	struct sched_queue	kg_schedq;	/* scheduling queue */
	struct lock		kg_lock;
	int			kg_threadcount;	/* # of assigned threads */
	int			kg_ksecount;	/* # of assigned KSEs */
	int			kg_idle_kses;
	int			kg_flags;
#define	KGF_SINGLE_THREAD	0x0001		/* scope system kse group */
#define	KGF_SCHEDQ_INITED	0x0002		/* has an initialized schedq */
};

/*
 * Add/remove threads from a KSE's scheduling queue.
 * For now the scheduling queue is hung off the KSEG.
 */
#define KSEG_THRQ_ADD(kseg, thr)			\
	do {						\
		TAILQ_INSERT_TAIL(&(kseg)->kg_threadq, thr, kle); \
		(kseg)->kg_threadcount++;		\
	} while (0)

#define KSEG_THRQ_REMOVE(kseg, thr)			\
	do {						\
		TAILQ_REMOVE(&(kseg)->kg_threadq, thr, kle); \
		(kseg)->kg_threadcount--;		\
	} while (0)

/*
 * Lock acquire and release for KSEs.
 */
#define KSE_LOCK_ACQUIRE(kse, lck)					\
	do {								\
		if ((kse)->k_locklevel < MAX_KSE_LOCKLEVEL) {		\
			(kse)->k_locklevel++;				\
			_lock_acquire((lck),				\
			    &(kse)->k_lockusers[(kse)->k_locklevel - 1], 0); \
		}							\
		else							\
			PANIC("Exceeded maximum lock level");		\
	} while (0)

#define KSE_LOCK_RELEASE(kse, lck)					\
	do {								\
		if ((kse)->k_locklevel > 0) {				\
			_lock_release((lck),				\
			    &(kse)->k_lockusers[(kse)->k_locklevel - 1]); \
			(kse)->k_locklevel--;				\
		}							\
	} while (0)
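
/*
 * k_locklevel lets a KSE hold up to MAX_KSE_LOCKLEVEL low-level locks
 * at once, each tracked by its own lockuser slot; because the release
 * indexes k_lockusers by the current level, locks must be released in
 * LIFO order.  A hypothetical nesting (lock names are illustrative):
 *
 *	KSE_LOCK_ACQUIRE(curkse, &outer);	// k_locklevel: 0 -> 1
 *	KSE_LOCK_ACQUIRE(curkse, &inner);	// k_locklevel: 1 -> 2
 *	KSE_LOCK_RELEASE(curkse, &inner);	// k_locklevel: 2 -> 1
 *	KSE_LOCK_RELEASE(curkse, &outer);	// k_locklevel: 1 -> 0
 */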

/*
 * Lock our own KSEG.
 */
#define	KSE_LOCK(curkse)		\
	KSE_LOCK_ACQUIRE(curkse, &(curkse)->k_kseg->kg_lock)
#define	KSE_UNLOCK(curkse)		\
	KSE_LOCK_RELEASE(curkse, &(curkse)->k_kseg->kg_lock)

/*
 * Lock a potentially different KSEG.
 */
#define	KSE_SCHED_LOCK(curkse, kseg)	\
	KSE_LOCK_ACQUIRE(curkse, &(kseg)->kg_lock)
#define	KSE_SCHED_UNLOCK(curkse, kseg)	\
	KSE_LOCK_RELEASE(curkse, &(kseg)->kg_lock)

/*
 * Waiting queue manipulation macros (using pqe link):
 */
#define KSE_WAITQ_REMOVE(kse, thrd)					\
	do {								\
		if (((thrd)->flags & THR_FLAGS_IN_WAITQ) != 0) {	\
			TAILQ_REMOVE(&(kse)->k_schedq->sq_waitq, thrd, pqe); \
			(thrd)->flags &= ~THR_FLAGS_IN_WAITQ;		\
		}							\
	} while (0)
#define KSE_WAITQ_INSERT(kse, thrd)	kse_waitq_insert(thrd)
#define KSE_WAITQ_FIRST(kse)		TAILQ_FIRST(&(kse)->k_schedq->sq_waitq)

#define	KSE_WAKEUP(kse)		kse_wakeup(&(kse)->k_kcb->kcb_kmbx)

/*
 * TailQ initialization values.
 */
#define TAILQ_INITIALIZER	{ NULL, NULL }

/*
 * Lock initialization values.
 */
#define	LCK_INITIALIZER		{ NULL, NULL, LCK_DEFAULT }

struct pthread_mutex {
	/*
	 * Lock for accesses to this structure.
	 */
	struct lock			m_lock;
	enum pthread_mutextype		m_type;
	int				m_protocol;
	TAILQ_HEAD(mutex_head, pthread)	m_queue;
	struct pthread			*m_owner;
	long				m_flags;
	int				m_count;
	int				m_refcount;

	/*
	 * Used for priority inheritance and protection.
	 *
	 *   m_prio       - For priority inheritance, the highest active
	 *                  priority (threads locking the mutex inherit
	 *                  this priority).  For priority protection, the
	 *                  ceiling priority of this mutex.
	 *   m_saved_prio - The mutex owner's inherited priority before
	 *                  taking the mutex, restored when the owner
	 *                  unlocks the mutex.
	 */
	int				m_prio;
	int				m_saved_prio;

	/*
	 * Link for list of all mutexes a thread currently owns.
	 */
	TAILQ_ENTRY(pthread_mutex)	m_qe;
};

/*
 * Flags for mutexes.
 */
#define MUTEX_FLAGS_PRIVATE	0x01
#define MUTEX_FLAGS_INITED	0x02
#define MUTEX_FLAGS_BUSY	0x04

/*
 * Static mutex initialization values.
 */
#define PTHREAD_MUTEX_STATIC_INITIALIZER				\
	{ LCK_INITIALIZER, PTHREAD_MUTEX_DEFAULT, PTHREAD_PRIO_NONE,	\
	TAILQ_INITIALIZER, NULL, MUTEX_FLAGS_PRIVATE, 0, 0, 0, 0,	\
	TAILQ_INITIALIZER }

struct pthread_mutex_attr {
	enum pthread_mutextype	m_type;
	int			m_protocol;
	int			m_ceiling;
	long			m_flags;
};

#define PTHREAD_MUTEXATTR_STATIC_INITIALIZER \
	{ PTHREAD_MUTEX_DEFAULT, PTHREAD_PRIO_NONE, 0, MUTEX_FLAGS_PRIVATE }

/*
 * Condition variable definitions.
 */
enum pthread_cond_type {
	COND_TYPE_FAST,
	COND_TYPE_MAX
};

struct pthread_cond {
	/*
	 * Lock for accesses to this structure.
	 */
	struct lock			c_lock;
	enum pthread_cond_type		c_type;
	TAILQ_HEAD(cond_head, pthread)	c_queue;
	struct pthread_mutex		*c_mutex;
	long				c_flags;
	long				c_seqno;
};

struct pthread_cond_attr {
	enum pthread_cond_type	c_type;
	long			c_flags;
};

struct pthread_barrier {
	pthread_mutex_t	b_lock;
	pthread_cond_t	b_cond;
	int		b_count;
	int		b_waiters;
	int		b_generation;
};

struct pthread_barrierattr {
	int		pshared;
};
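
/*
 * These fields are the ingredients of a generation-count barrier:
 * each arrival increments b_waiters under b_lock, the last arrival
 * resets the count, bumps b_generation and broadcasts, and earlier
 * arrivals wait for the generation to change (which guards against
 * spurious wakeups).  A plausible sketch of a wait, not necessarily
 * this library's exact code:
 *
 *	pthread_mutex_lock(&b->b_lock);
 *	if (++b->b_waiters == b->b_count) {
 *		b->b_waiters = 0;
 *		b->b_generation++;
 *		pthread_cond_broadcast(&b->b_cond);
 *	} else {
 *		int gen = b->b_generation;
 *
 *		while (gen == b->b_generation)
 *			pthread_cond_wait(&b->b_cond, &b->b_lock);
 *	}
 *	pthread_mutex_unlock(&b->b_lock);
 */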

struct pthread_spinlock {
	volatile int	s_lock;
	pthread_t	s_owner;
};

/*
 * Flags for condition variables.
 */
#define COND_FLAGS_PRIVATE	0x01
#define COND_FLAGS_INITED	0x02
#define COND_FLAGS_BUSY		0x04

/*
 * Static cond initialization values.
 */
#define PTHREAD_COND_STATIC_INITIALIZER				\
	{ LCK_INITIALIZER, COND_TYPE_FAST, TAILQ_INITIALIZER,	\
	NULL, NULL, 0, 0 }

/*
 * Cleanup definitions.
 */
struct pthread_cleanup {
	struct pthread_cleanup	*next;
	void			(*routine) ();
	void			*routine_arg;
};
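
/*
 * These records back the standard POSIX cleanup API.  Caller-side
 * example (the handler shown is hypothetical):
 *
 *	pthread_cleanup_push(unlock_it, &some_mutex);
 *	... code that may hit a cancellation point ...
 *	pthread_cleanup_pop(1);		// 1 = run the handler now
 */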

struct pthread_atfork {
	TAILQ_ENTRY(pthread_atfork)	qe;
	void				(*prepare)(void);
	void				(*parent)(void);
	void				(*child)(void);
};
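
/*
 * One record is kept per pthread_atfork(3) registration; around a
 * fork(), all prepare handlers run first, then the parent handlers
 * run in the parent and the child handlers in the child.  Typical
 * use (handler names are illustrative):
 *
 *	pthread_atfork(lock_all_state, unlock_all_state,
 *	    reinit_state_in_child);
 */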

struct pthread_attr {
	int	sched_policy;
	int	sched_inherit;
	int	sched_interval;
	int	prio;
	int	suspend;
#define	THR_STACK_USER		0x100	/* 0xFF reserved for <pthread.h> */
#define	THR_SIGNAL_THREAD	0x200	/* This is a signal thread */
	int	flags;
	void	*arg_attr;
	void	(*cleanup_attr) ();
	void	*stackaddr_attr;
	size_t	stacksize_attr;
	size_t	guardsize_attr;
};

/*
 * Thread creation state attributes.
 */
#define THR_CREATE_RUNNING		0
#define THR_CREATE_SUSPENDED		1

/*
 * Miscellaneous definitions.
 */
#define THR_STACK_DEFAULT		65536

/*
 * Maximum size of the initial thread's stack.  This perhaps deserves
 * to be larger than the stacks of other threads, since many
 * applications are likely to run almost entirely on this stack.
 */
#define THR_STACK_INITIAL		0x100000

/*
 * Define the different priority ranges.  All applications have thread
 * priorities constrained within 0-31.  The threads library raises the
 * priority when delivering signals in order to ensure that signal
 * delivery happens (from the POSIX spec) "as soon as possible".
 * In the future, the threads library will also be able to map specific
 * threads into real-time (cooperating) processes or kernel threads.
 * The RT and SIGNAL priorities will be used internally and added to
 * thread base priorities so that the scheduling queue can handle both
 * normal and RT priority threads with and without signal handling.
 *
 * The approach taken is that, within each class, signal delivery
 * always has priority over thread execution.
 */
#define THR_DEFAULT_PRIORITY		15
#define THR_MIN_PRIORITY		0
#define THR_MAX_PRIORITY		31	/* 0x1F */
#define THR_SIGNAL_PRIORITY		32	/* 0x20 */
#define THR_RT_PRIORITY			64	/* 0x40 */
#define THR_FIRST_PRIORITY		THR_MIN_PRIORITY
#define THR_LAST_PRIORITY	\
	(THR_MAX_PRIORITY + THR_SIGNAL_PRIORITY + THR_RT_PRIORITY)
#define THR_BASE_PRIORITY(prio)	((prio) & THR_MAX_PRIORITY)
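
/*
 * Worked example of the encoding: a thread with base priority 15
 * boosted for signal delivery is queued at 15 + THR_SIGNAL_PRIORITY
 * = 47, which sorts above any plain 0-31 priority, and
 * THR_BASE_PRIORITY(47) recovers 15 (47 & 0x1F == 15).
 */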

/*
 * Clock resolution in microseconds.
 */
#define CLOCK_RES_USEC		10000

/*
 * Time slice period in microseconds.
 */
#define TIMESLICE_USEC		20000

/*
 * XXX - Define a thread-safe macro to get the current time of day
 *       which is updated at regular intervals by something.
 *
 * For now, we just make the system call to get the time.
 */
#define	KSE_GET_TOD(curkse, tsp)				\
	do {							\
		*(tsp) = (curkse)->k_kcb->kcb_kmbx.km_timeofday; \
		if ((tsp)->tv_sec == 0)				\
			clock_gettime(CLOCK_REALTIME, tsp);	\
	} while (0)
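
/*
 * Combined with TIMESPEC_ADD above, this is enough to turn a relative
 * timeout into an absolute wakeup time inside the UTS; a sketch
 * (variable names are illustrative):
 *
 *	struct timespec now;
 *
 *	KSE_GET_TOD(curkse, &now);
 *	TIMESPEC_ADD(&thread->wakeup_time, &now, &rel_timeout);
 */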

struct pthread_rwlockattr {
	int		pshared;
};

struct pthread_rwlock {
	pthread_mutex_t	lock;		/* monitor lock */
	pthread_cond_t	read_signal;
	pthread_cond_t	write_signal;
	int		state;	/* 0 = idle, >0 = # of readers, -1 = writer */
	int		blocked_writers;
};
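
/*
 * The state field encodes the whole lock: readers acquire by
 * incrementing it while it is non-negative (deferring to blocked
 * writers to avoid starvation); a writer sets it to -1.  A plausible
 * reader-acquire sketch under the monitor lock, not necessarily the
 * library's exact code:
 *
 *	pthread_mutex_lock(&rwlock->lock);
 *	while ((rwlock->blocked_writers > 0) || (rwlock->state < 0))
 *		pthread_cond_wait(&rwlock->read_signal, &rwlock->lock);
 *	rwlock->state++;	// one more reader
 *	pthread_mutex_unlock(&rwlock->lock);
 */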

/*
 * Thread states.
 */
enum pthread_state {
	PS_RUNNING,
	PS_LOCKWAIT,
	PS_MUTEX_WAIT,
	PS_COND_WAIT,
	PS_SLEEP_WAIT,
	PS_SIGSUSPEND,
	PS_SIGWAIT,
	PS_JOIN,
	PS_SUSPENDED,
	PS_DEAD,
	PS_DEADLOCK,
	PS_STATE_MAX
};

struct sigwait_data {
	sigset_t	*waitset;
	siginfo_t	*siginfo;	/* used to save siginfo for sigwaitinfo() */
};

union pthread_wait_data {
	pthread_mutex_t		mutex;
	pthread_cond_t		cond;
	struct lock		*lock;
	struct sigwait_data	*sigwait;
};

/*
 * Define a continuation routine that can be used to perform a
 * transfer of control:
 */
typedef void	(*thread_continuation_t) (void *);

/*
 * This stores a thread's state prior to running a signal handler.
 * It is used when a signal is delivered to a thread blocked in
 * userland.  If the signal handler returns normally, the thread's
 * state is restored from here.
 */
struct pthread_sigframe {
	int			psf_valid;
	int			psf_flags;
	int			psf_interrupted;
	int			psf_timeout;
	int			psf_signo;
	enum pthread_state	psf_state;
	union pthread_wait_data	psf_wait_data;
	struct timespec		psf_wakeup_time;
	sigset_t		psf_sigset;
	sigset_t		psf_sigmask;
	int			psf_seqno;
};

struct join_status {
	struct pthread	*thread;
	void		*ret;
	int		error;
};

struct pthread_specific_elem {
	const void	*data;
	int		seqno;
};

struct pthread_key {
	volatile int	allocated;
	volatile int	count;
	int		seqno;
	void		(*destructor) (void *);
};
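
/*
 * One pthread_key backs each key from pthread_key_create(3); the
 * destructor runs at thread exit for any non-NULL value still bound
 * to the key.  Standard caller-side use (names are illustrative):
 *
 *	static pthread_key_t buf_key;
 *
 *	pthread_key_create(&buf_key, free);
 *	pthread_setspecific(buf_key, malloc(BUFSIZ));
 *	char *buf = pthread_getspecific(buf_key);
 */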

#define MAX_THR_LOCKLEVEL	5

/*
 * Thread structure.
 */
struct pthread {
	/* Thread control block */
	struct tcb		*tcb;

	/*
	 * Magic value to help recognize a valid thread structure
	 * from an invalid one:
	 */
#define	THR_MAGIC		((u_int32_t) 0xd09ba115)
	u_int32_t		magic;
	char			*name;
	u_int64_t		uniqueid;	/* for gdb */

	/* Queue entry for list of all threads: */
	TAILQ_ENTRY(pthread)	tle;	/* link for all threads in process */
	TAILQ_ENTRY(pthread)	kle;	/* link for all threads in KSE/KSEG */

	/* Queue entry for GC lists: */
	TAILQ_ENTRY(pthread)	gcle;

	/* Hash queue entry */
	LIST_ENTRY(pthread)	hle;

	/*
	 * Lock for accesses to this thread structure.
	 */
	struct lock		lock;
	struct lockuser		lockusers[MAX_THR_LOCKLEVEL];
	int			locklevel;
	kse_critical_t		critical[MAX_KSE_LOCKLEVEL];
	struct kse		*kse;
	struct kse_group	*kseg;

	/*
	 * Thread start routine, argument, stack pointer and thread
	 * attributes.
	 */
	void			*(*start_routine)(void *);
	void			*arg;
	struct pthread_attr	attr;

	int			active;		/* thread running */
	int			blocked;	/* thread blocked in kernel */
	int			need_switchout;

	/*
	 * Used for tracking delivery of signal handlers.
	 */
	struct pthread_sigframe	*curframe;
	siginfo_t		*siginfo;

	/*
	 * Cancelability flags - the lower 2 bits are used by cancel
	 * definitions in pthread.h
	 */
#define	THR_AT_CANCEL_POINT	0x0004
#define	THR_CANCELLING		0x0008
#define	THR_CANCEL_NEEDED	0x0010
	int			cancelflags;

	thread_continuation_t	continuation;

	/*
	 * The thread's base and pending signal masks.  The active
	 * signal mask is stored in the thread's context (in mailbox).
	 */
	sigset_t		sigmask;
	sigset_t		sigpend;
	sigset_t		*oldsigmask;
	volatile int		check_pending;
	int			refcount;

	/* Thread state: */
	enum pthread_state	state;
	volatile int		lock_switch;

	/*
	 * Number of microseconds accumulated by this thread when
	 * time slicing is active.
	 */
	long			slice_usec;

	/*
	 * Time to wake up thread.  This is used for sleeping threads and
	 * for any operation which may time out (such as select).
	 */
	struct timespec		wakeup_time;

	/* TRUE if operation has timed out. */
	int			timeout;

	/*
	 * Error variable used instead of errno.  The function __error()
	 * returns a pointer to this.
	 */
	int			error;

	/*
	 * The joiner is the thread that is joining to this thread.  The
	 * join status keeps track of a join operation to another thread.
	 */
	struct pthread		*joiner;
	struct join_status	join_status;
|
1996-01-22 00:23:58 +00:00
|
|
|
|
|
|
|
/*
|
In the words of the author:
o The polling mechanism for I/O readiness was changed from
select() to poll(). In additon, a wrapped version of poll()
is now provided.
o The wrapped select routine now converts each fd_set to a
poll array so that the thread scheduler doesn't have to
perform a bitwise search for selected fds each time file
descriptors are polled for I/O readiness.
o The thread scheduler was modified to use a new queue (_workq)
for threads that need work. Threads waiting for I/O readiness
and spinblocks are added to the work queue in addition to the
waiting queue. This reduces the time spent forming/searching
the array of file descriptors being polled.
o The waiting queue (_waitingq) is now maintained in order of
thread wakeup time. This allows the thread scheduler to
find the nearest wakeup time by looking at the first thread
in the queue instead of searching the entire queue.
o Removed file descriptor locking for select/poll routines. An
application should not rely on the threads library for providing
this locking; if necessary, the application should use mutexes
to protect selecting/polling of file descriptors.
o Retrieve and use the kernel clock rate/resolution at startup
instead of hardcoding the clock resolution to 10 msec (tested
with kernel running at 1000 HZ).
o All queues have been changed to use queue.h macros. These
include the queues of all threads, dead threads, and threads
waiting for file descriptor locks.
o Added reinitialization of the GC mutex and condition variable
after a fork. Also prevented reallocation of the ready queue
after a fork.
o Prevented the wrapped close routine from closing the thread
kernel pipes.
o Initialized file descriptor table for stdio entries at thread
init.
o Provided additional flags to indicate to what queues threads
belong.
o Moved TAILQ initialization for statically allocated mutex and
condition variables to after the spinlock.
o Added dispatching of signals to pthread_kill. Removing the
dispatching of signals from thread activation broke sigsuspend
when pthread_kill was used to send a signal to a thread.
o Temporarily set the state of a thread to PS_SUSPENDED when it
is first created and placed in the list of threads so that it
will not be accidentally scheduled before becoming a member
of one of the scheduling queues.
o Change the signal handler to queue signals to the thread kernel
pipe if the scheduling queues are protected. When scheduling
queues are unprotected, signals are then dequeued and handled.
o Ensured that all installed signal handlers block the scheduling
signal and that the scheduling signal handler blocks all
other signals. This ensures that the signal handler is only
interruptible for and by non-scheduling signals. An atomic
lock is used to decide which instance of the signal handler
will handle pending signals.
o Removed _lock_thread_list and _unlock_thread_list as they are
no longer used to protect the thread list.
o Added missing RCS IDs to modified files.
o Added checks for appropriate queue membership and activity when
adding, removing, and searching the scheduling queues. These
checks add very little overhead and are enabled when compiled
with _PTHREADS_INVARIANTS defined. Suggested and implemented
by Tor Egge with some modification by me.
o Close a race condition in uthread_close. (Tor Egge)
o Protect the scheduling queues while modifying them in
pthread_cond_signal and _thread_fd_unlock. (Tor Egge)
o Ensure that when a thread gets a mutex, the mutex is on that
threads list of owned mutexes. (Tor Egge)
o Set the kernel-in-scheduler flag in _thread_kern_sched_state
and _thread_kern_sched_state_unlock to prevent a scheduling
signal from calling the scheduler again. (Tor Egge)
o Don't use TAILQ_FOREACH macro while searching the waiting
queue for threads in a sigwait state, because a change of
state destroys the TAILQ link. It is actually safe to do
so, though, because once a sigwaiting thread is found, the
loop ends and the function returns. (Tor Egge)
o When dispatching signals to threads, make the thread inherit
the signal deferral flag of the currently running thread.
(Tor Egge)
Submitted by: Daniel Eischen <eischen@vigrid.com> and
Tor Egge <Tor.Egge@fast.no>
1999-06-20 08:28:48 +00:00
|
|
|
* The current thread can belong to only one scheduling queue at
|
2000-10-13 22:12:32 +00:00
|
|
|
* a time (ready or waiting queue). It can also belong to:
|
1996-01-22 00:23:58 +00:00
|
|
|
*
|
In the words of the author:
o The polling mechanism for I/O readiness was changed from
select() to poll(). In additon, a wrapped version of poll()
is now provided.
o The wrapped select routine now converts each fd_set to a
poll array so that the thread scheduler doesn't have to
perform a bitwise search for selected fds each time file
descriptors are polled for I/O readiness.
o The thread scheduler was modified to use a new queue (_workq)
for threads that need work. Threads waiting for I/O readiness
and spinblocks are added to the work queue in addition to the
waiting queue. This reduces the time spent forming/searching
the array of file descriptors being polled.
o The waiting queue (_waitingq) is now maintained in order of
thread wakeup time. This allows the thread scheduler to
find the nearest wakeup time by looking at the first thread
in the queue instead of searching the entire queue.
o Removed file descriptor locking for select/poll routines. An
application should not rely on the threads library for providing
this locking; if necessary, the application should use mutexes
to protect selecting/polling of file descriptors.
o Retrieve and use the kernel clock rate/resolution at startup
instead of hardcoding the clock resolution to 10 msec (tested
with kernel running at 1000 HZ).
o All queues have been changed to use queue.h macros. These
include the queues of all threads, dead threads, and threads
waiting for file descriptor locks.
o Added reinitialization of the GC mutex and condition variable
after a fork. Also prevented reallocation of the ready queue
after a fork.
o Prevented the wrapped close routine from closing the thread
kernel pipes.
o Initialized file descriptor table for stdio entries at thread
init.
o Provided additional flags to indicate to what queues threads
belong.
o Moved TAILQ initialization for statically allocated mutex and
condition variables to after the spinlock.
o Added dispatching of signals to pthread_kill. Removing the
dispatching of signals from thread activation broke sigsuspend
when pthread_kill was used to send a signal to a thread.
o Temporarily set the state of a thread to PS_SUSPENDED when it
is first created and placed in the list of threads so that it
will not be accidentally scheduled before becoming a member
of one of the scheduling queues.
o Change the signal handler to queue signals to the thread kernel
pipe if the scheduling queues are protected. When scheduling
queues are unprotected, signals are then dequeued and handled.
o Ensured that all installed signal handlers block the scheduling
signal and that the scheduling signal handler blocks all
other signals. This ensures that the signal handler is only
interruptible for and by non-scheduling signals. An atomic
lock is used to decide which instance of the signal handler
will handle pending signals.
o Removed _lock_thread_list and _unlock_thread_list as they are
no longer used to protect the thread list.
o Added missing RCS IDs to modified files.
o Added checks for appropriate queue membership and activity when
adding, removing, and searching the scheduling queues. These
checks add very little overhead and are enabled when compiled
with _PTHREADS_INVARIANTS defined. Suggested and implemented
by Tor Egge with some modification by me.
o Close a race condition in uthread_close. (Tor Egge)
o Protect the scheduling queues while modifying them in
pthread_cond_signal and _thread_fd_unlock. (Tor Egge)
o Ensure that when a thread gets a mutex, the mutex is on that
threads list of owned mutexes. (Tor Egge)
o Set the kernel-in-scheduler flag in _thread_kern_sched_state
and _thread_kern_sched_state_unlock to prevent a scheduling
signal from calling the scheduler again. (Tor Egge)
o Don't use TAILQ_FOREACH macro while searching the waiting
queue for threads in a sigwait state, because a change of
state destroys the TAILQ link. It is actually safe to do
so, though, because once a sigwaiting thread is found, the
loop ends and the function returns. (Tor Egge)
o When dispatching signals to threads, make the thread inherit
the signal deferral flag of the currently running thread.
(Tor Egge)
Submitted by: Daniel Eischen <eischen@vigrid.com> and
Tor Egge <Tor.Egge@fast.no>
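To make the _waitingq ordering above concrete, here is a minimal
standalone sketch of an ordered insert keyed on wakeup time; the
waiting_thread type and waitq_insert helper are illustrative
stand-ins, not code from this library.

#include <sys/queue.h>
#include <time.h>

struct waiting_thread {
	TAILQ_ENTRY(waiting_thread) qe;
	struct timespec wakeup;		/* absolute wakeup time */
};
TAILQ_HEAD(waitingq, waiting_thread);

/* Return non-zero if time a is strictly earlier than time b. */
static int
ts_before(const struct timespec *a, const struct timespec *b)
{
	return (a->tv_sec < b->tv_sec ||
	    (a->tv_sec == b->tv_sec && a->tv_nsec < b->tv_nsec));
}

/*
 * Insert in wakeup order so the scheduler can find the nearest
 * wakeup time by looking only at the first element.
 */
static void
waitq_insert(struct waitingq *wq, struct waiting_thread *wt)
{
	struct waiting_thread *it;

	TAILQ_FOREACH(it, wq, qe) {
		if (ts_before(&wt->wakeup, &it->wakeup)) {
			TAILQ_INSERT_BEFORE(it, wt, qe);
			return;
		}
	}
	TAILQ_INSERT_TAIL(wq, wt, qe);
}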
1999-06-20 08:28:48 +00:00
|
|
|
* o A queue of threads waiting for a mutex
|
|
|
|
* o A queue of threads waiting for a condition variable
|
2001-05-20 23:08:33 +00:00
|
|
|
*
|
2003-04-18 05:04:16 +00:00
|
|
|
* It is possible for a thread to belong to more than one of the
|
|
|
|
* above queues if it is handling a signal. A thread may only
|
|
|
|
* enter a mutex or condition variable queue when it is not
|
|
|
|
 * running a signal handler. If a thread is a member
|
|
|
|
* of one of these queues when a signal handler is invoked, it
|
|
|
|
* must be removed from the queue before invoking the handler
|
|
|
|
* and then added back to the queue after return from the handler.
|
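A minimal standalone model of the dequeue/re-enqueue discipline
described above; the waiter type, its queued flag, and the
deliver_signal helper are hypothetical illustrations, not this
library's code.

#include <sys/queue.h>
#include <stdbool.h>

struct waiter {
	TAILQ_ENTRY(waiter) link;
	bool queued;			/* is the waiter on the queue? */
};
TAILQ_HEAD(waiter_queue, waiter);

/*
 * If the thread sits on a mutex/CV queue when a signal arrives,
 * take it off the queue, run the handler, then put it back so
 * the wait can resume.
 */
static void
deliver_signal(struct waiter_queue *wq, struct waiter *w,
    void (*handler)(void))
{
	bool requeue = w->queued;

	if (requeue) {
		TAILQ_REMOVE(wq, w, link);
		w->queued = false;
	}
	handler();			/* invoke the signal handler */
	if (requeue) {
		TAILQ_INSERT_TAIL(wq, w, link);
		w->queued = true;
	}
}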
2000-10-13 22:12:32 +00:00
|
|
|
*
|
1999-06-20 08:28:48 +00:00
|
|
|
* Use pqe for the scheduling queue link (both ready and waiting),
|
2003-04-18 05:04:16 +00:00
|
|
|
* sqe for synchronization (mutex, condition variable, and join)
|
|
|
|
* queue links, and qe for all other links.
|
1996-01-22 00:23:58 +00:00
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
TAILQ_ENTRY(pthread) pqe; /* priority, wait queues link */
|
2000-10-13 22:12:32 +00:00
|
|
|
TAILQ_ENTRY(pthread) sqe; /* synchronization queue link */
|
1998-04-11 07:47:22 +00:00
|
|
|
|
1996-01-22 00:23:58 +00:00
|
|
|
/* Wait data. */
|
|
|
|
union pthread_wait_data data;
|
|
|
|
|
1997-02-05 23:26:09 +00:00
|
|
|
/*
|
|
|
|
* Set to TRUE if a blocking operation was
|
|
|
|
* interrupted by a signal:
|
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
int interrupted;
|
1997-02-05 23:26:09 +00:00
|
|
|
|
1999-03-23 05:07:56 +00:00
|
|
|
/*
|
2003-04-18 05:04:16 +00:00
|
|
|
* Set to non-zero when this thread has entered a critical
|
|
|
|
* region. We allow for recursive entries into critical regions.
|
1999-03-23 05:07:56 +00:00
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
int critical_count;
|
1999-03-23 05:07:56 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/*
|
|
|
|
* Set to TRUE if this thread should yield after leaving a
|
|
|
|
* critical region to check for signals, messages, etc.
|
|
|
|
*/
|
|
|
|
int critical_yield;
|
|
|
|
|
|
|
|
int sflags;
|
|
|
|
#define THR_FLAGS_IN_SYNCQ 0x0001
|
|
|
|
|
|
|
|
/* Miscellaneous flags; only set with scheduling lock held. */
|
|
|
|
int flags;
|
|
|
|
#define THR_FLAGS_PRIVATE 0x0001
|
|
|
|
#define THR_FLAGS_IN_WAITQ 0x0002 /* in waiting queue using pqe link */
|
|
|
|
#define THR_FLAGS_IN_RUNQ 0x0004 /* in run queue using pqe link */
|
|
|
|
#define THR_FLAGS_EXITING 0x0008 /* thread is exiting */
|
|
|
|
#define THR_FLAGS_SUSPENDED 0x0010 /* thread is suspended */
|
|
|
|
#define THR_FLAGS_GC_SAFE 0x0020 /* thread safe for cleaning */
|
|
|
|
#define THR_FLAGS_IN_TDLIST 0x0040 /* thread in all thread list */
|
|
|
|
#define THR_FLAGS_IN_GCLIST 0x0080 /* thread in gc list */
|
1999-03-23 05:07:56 +00:00
|
|
|
/*
|
|
|
|
 * Base priority is the user-settable and retrievable priority
|
|
|
|
* of the thread. It is only affected by explicit calls to
|
|
|
|
* set thread priority and upon thread creation via a thread
|
|
|
|
* attribute or default priority.
|
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
char base_priority;
|
1999-03-23 05:07:56 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Inherited priority is the priority a thread inherits by
|
|
|
|
 * taking a priority inheritance or protection mutex. It
|
|
|
|
* is not affected by base priority changes. Inherited
|
|
|
|
* priority defaults to and remains 0 until a mutex is taken
|
|
|
|
* that is being waited on by any other thread whose priority
|
|
|
|
* is non-zero.
|
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
char inherited_priority;
|
1999-03-23 05:07:56 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
 * Active priority is always the maximum of the thread's base
|
|
|
|
* priority and inherited priority. When there is a change
|
1999-06-20 08:28:48 +00:00
|
|
|
* in either the base or inherited priority, the active
|
1999-03-23 05:07:56 +00:00
|
|
|
* priority must be recalculated.
|
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
char active_priority;
|
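A small sketch of the recalculation rule spelled out in the comments
above; the prio_model struct and recalc helper are hypothetical, and
the requeue note refers to the priority-queue macros defined later
in this header.

/* Model of the rule: active = max(base, inherited). */
struct prio_model {
	char	base_priority;
	char	inherited_priority;
	char	active_priority;
};

static void
recalc_active_priority(struct prio_model *t)
{
	t->active_priority = (t->base_priority > t->inherited_priority) ?
	    t->base_priority : t->inherited_priority;
	/*
	 * If the thread sits on a priority-ordered run queue, it would
	 * also need to be removed and re-inserted at the new priority.
	 */
}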
1999-03-23 05:07:56 +00:00
|
|
|
|
|
|
|
/* Number of priority ceiling or protection mutexes owned. */
|
2003-04-18 05:04:16 +00:00
|
|
|
int priority_mutex_count;
|
1999-03-23 05:07:56 +00:00
|
|
|
|
2004-01-08 15:37:09 +00:00
|
|
|
/* Number of rwlock read locks held. */
|
|
|
|
int rdlock_count;
|
|
|
|
|
1999-03-23 05:07:56 +00:00
|
|
|
/*
|
|
|
|
* Queue of currently owned mutexes.
|
|
|
|
*/
|
2000-05-26 02:09:24 +00:00
|
|
|
TAILQ_HEAD(, pthread_mutex) mutexq;
|
1999-03-23 05:07:56 +00:00
|
|
|
|
2002-03-19 22:58:56 +00:00
|
|
|
void *ret;
|
|
|
|
struct pthread_specific_elem *specific;
|
|
|
|
int specific_data_count;
|
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-12-29 23:33:51 +00:00
|
|
|
/* Alternative stack for sigaltstack() */
|
|
|
|
stack_t sigstk;
|
|
|
|
|
2003-05-30 00:21:52 +00:00
|
|
|
/*
|
|
|
|
 * Bitmap of rtld locks currently held.
|
|
|
|
*/
|
|
|
|
int rtld_bits;
|
|
|
|
|
1996-01-22 00:23:58 +00:00
|
|
|
/* Linked list of cleanup handlers. */
|
|
|
|
struct pthread_cleanup *cleanup;
|
1997-02-05 23:26:09 +00:00
|
|
|
char *fname; /* Ptr to source file name */
|
|
|
|
int lineno; /* Source line number. */
|
1996-01-22 00:23:58 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
/*
|
2003-04-18 05:04:16 +00:00
|
|
|
 * Critical regions can also be detected by looking at the thread's
|
|
|
|
* current lock level. Ensure these macros increment and decrement
|
|
|
|
 * the lock levels such that locks cannot be held with a lock level
|
|
|
|
* of 0.
|
1996-01-22 00:23:58 +00:00
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
#define THR_IN_CRITICAL(thrd) \
|
|
|
|
(((thrd)->locklevel > 0) || \
|
|
|
|
((thrd)->critical_count > 0))
|
|
|
|
|
|
|
|
#define THR_YIELD_CHECK(thrd) \
|
|
|
|
do { \
|
2004-07-13 22:49:58 +00:00
|
|
|
if (!THR_IN_CRITICAL(thrd)) { \
|
|
|
|
if (__predict_false(_libkse_debug)) \
|
|
|
|
_thr_debug_check_yield(thrd); \
|
|
|
|
if ((thrd)->critical_yield != 0) \
|
|
|
|
_thr_sched_switch(thrd); \
|
|
|
|
if ((thrd)->check_pending != 0) \
|
|
|
|
_thr_sig_check_pending(thrd); \
|
|
|
|
} \
|
2003-04-18 05:04:16 +00:00
|
|
|
} while (0)
|
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
#define THR_LOCK_ACQUIRE(thrd, lck) \
|
|
|
|
do { \
|
2003-11-29 14:22:29 +00:00
|
|
|
if ((thrd)->locklevel < MAX_THR_LOCKLEVEL) { \
|
2003-05-16 19:58:30 +00:00
|
|
|
THR_DEACTIVATE_LAST_LOCK(thrd); \
|
2003-04-18 05:04:16 +00:00
|
|
|
(thrd)->locklevel++; \
|
|
|
|
_lock_acquire((lck), \
|
|
|
|
&(thrd)->lockusers[(thrd)->locklevel - 1], \
|
|
|
|
(thrd)->active_priority); \
|
2003-11-29 14:22:29 +00:00
|
|
|
} else \
|
|
|
|
PANIC("Exceeded maximum lock level"); \
|
2003-04-18 05:04:16 +00:00
|
|
|
} while (0)
|
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
#define THR_LOCK_RELEASE(thrd, lck) \
|
|
|
|
do { \
|
|
|
|
if ((thrd)->locklevel > 0) { \
|
|
|
|
_lock_release((lck), \
|
|
|
|
&(thrd)->lockusers[(thrd)->locklevel - 1]); \
|
|
|
|
(thrd)->locklevel--; \
|
2003-05-16 19:58:30 +00:00
|
|
|
THR_ACTIVATE_LAST_LOCK(thrd); \
|
|
|
|
if ((thrd)->locklevel == 0) \
|
2003-04-28 23:56:12 +00:00
|
|
|
THR_YIELD_CHECK(thrd); \
|
2003-04-18 05:04:16 +00:00
|
|
|
} \
|
|
|
|
} while (0)
|
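The lockusers[] indexing above implies strictly LIFO lock handling;
the following standalone model (hypothetical names, simplified error
handling) shows the bookkeeping that THR_LOCK_ACQUIRE and
THR_LOCK_RELEASE perform on locklevel.

/*
 * Each acquired lock occupies slot locklevel-1 in a per-thread
 * lockusers[] array, so releases must happen in LIFO order.
 */
#define MODEL_MAX_LOCKLEVEL	5

struct lock_model {
	int	locklevel;
	void   *lockusers[MODEL_MAX_LOCKLEVEL];	/* per-level lock users */
};

static int
model_lock_acquire(struct lock_model *t, void *lock)
{
	if (t->locklevel >= MODEL_MAX_LOCKLEVEL)
		return (-1);		/* the library would PANIC here */
	t->lockusers[t->locklevel] = lock;
	t->locklevel++;
	return (0);
}

static void
model_lock_release(struct lock_model *t)
{
	if (t->locklevel > 0) {
		t->locklevel--;
		t->lockusers[t->locklevel] = NULL;
		/* With locklevel back at 0 the thread may now yield. */
	}
}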
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-05-16 19:58:30 +00:00
|
|
|
#define THR_ACTIVATE_LAST_LOCK(thrd) \
|
2003-04-28 23:56:12 +00:00
|
|
|
do { \
|
2003-05-16 19:58:30 +00:00
|
|
|
if ((thrd)->locklevel > 0) \
|
|
|
|
_lockuser_setactive( \
|
|
|
|
&(thrd)->lockusers[(thrd)->locklevel - 1], 1); \
|
2003-04-28 23:56:12 +00:00
|
|
|
} while (0)
|
|
|
|
|
2003-05-16 19:58:30 +00:00
|
|
|
#define THR_DEACTIVATE_LAST_LOCK(thrd) \
|
2003-04-28 23:56:12 +00:00
|
|
|
do { \
|
2003-05-16 19:58:30 +00:00
|
|
|
if ((thrd)->locklevel > 0) \
|
|
|
|
_lockuser_setactive( \
|
|
|
|
&(thrd)->lockusers[(thrd)->locklevel - 1], 0); \
|
2003-04-28 23:56:12 +00:00
|
|
|
} while (0)
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/*
|
|
|
|
* For now, threads will have their own lock separate from their
|
|
|
|
* KSE scheduling lock.
|
|
|
|
*/
|
|
|
|
#define THR_LOCK(thr) THR_LOCK_ACQUIRE(thr, &(thr)->lock)
|
|
|
|
#define THR_UNLOCK(thr) THR_LOCK_RELEASE(thr, &(thr)->lock)
|
|
|
|
#define THR_THREAD_LOCK(curthrd, thr) THR_LOCK_ACQUIRE(curthrd, &(thr)->lock)
|
|
|
|
#define THR_THREAD_UNLOCK(curthrd, thr) THR_LOCK_RELEASE(curthrd, &(thr)->lock)
|
1999-03-23 05:07:56 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/*
|
|
|
|
* Priority queue manipulation macros (using pqe link). We use
|
|
|
|
* the thread's kseg link instead of the kse link because a thread
|
|
|
|
* does not (currently) have a statically assigned kse.
|
|
|
|
*/
|
|
|
|
#define THR_RUNQ_INSERT_HEAD(thrd) \
|
|
|
|
_pq_insert_head(&(thrd)->kseg->kg_schedq.sq_runq, thrd)
|
|
|
|
#define THR_RUNQ_INSERT_TAIL(thrd) \
|
|
|
|
_pq_insert_tail(&(thrd)->kseg->kg_schedq.sq_runq, thrd)
|
|
|
|
#define THR_RUNQ_REMOVE(thrd) \
|
|
|
|
_pq_remove(&(thrd)->kseg->kg_schedq.sq_runq, thrd)
|
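A sketch of how the membership flags and the pqe-linked queues stay
consistent when a thread becomes runnable. The flag values mirror
the defines above; the queue operations named in the comments are
assumed to be performed elsewhere in the library, and the model
below only tracks the flag transitions.

#define MODEL_FLAG_IN_WAITQ	0x0002
#define MODEL_FLAG_IN_RUNQ	0x0004

struct qthread_model {
	int	flags;
};

static void
model_make_runnable(struct qthread_model *t)
{
	if (t->flags & MODEL_FLAG_IN_WAITQ) {
		/* The waiting-queue entry (pqe link) is unlinked here. */
		t->flags &= ~MODEL_FLAG_IN_WAITQ;
	}
	if ((t->flags & MODEL_FLAG_IN_RUNQ) == 0) {
		/* THR_RUNQ_INSERT_TAIL(t) links via the same pqe. */
		t->flags |= MODEL_FLAG_IN_RUNQ;
	}
}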
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/*
|
|
|
|
* Macros to insert/remove threads to the all thread list and
|
|
|
|
* the gc list.
|
|
|
|
*/
|
|
|
|
#define THR_LIST_ADD(thrd) do { \
|
|
|
|
if (((thrd)->flags & THR_FLAGS_IN_TDLIST) == 0) { \
|
|
|
|
TAILQ_INSERT_HEAD(&_thread_list, thrd, tle); \
|
2003-07-17 23:02:30 +00:00
|
|
|
_thr_hash_add(thrd); \
|
2003-04-18 05:04:16 +00:00
|
|
|
(thrd)->flags |= THR_FLAGS_IN_TDLIST; \
|
|
|
|
} \
|
|
|
|
} while (0)
|
|
|
|
#define THR_LIST_REMOVE(thrd) do { \
|
|
|
|
if (((thrd)->flags & THR_FLAGS_IN_TDLIST) != 0) { \
|
|
|
|
TAILQ_REMOVE(&_thread_list, thrd, tle); \
|
2003-07-17 23:02:30 +00:00
|
|
|
_thr_hash_remove(thrd); \
|
2003-04-18 05:04:16 +00:00
|
|
|
(thrd)->flags &= ~THR_FLAGS_IN_TDLIST; \
|
|
|
|
} \
|
|
|
|
} while (0)
|
|
|
|
#define THR_GCLIST_ADD(thrd) do { \
|
|
|
|
if (((thrd)->flags & THR_FLAGS_IN_GCLIST) == 0) { \
|
2003-04-18 07:09:43 +00:00
|
|
|
TAILQ_INSERT_HEAD(&_thread_gc_list, thrd, gcle);\
|
2003-04-18 05:04:16 +00:00
|
|
|
(thrd)->flags |= THR_FLAGS_IN_GCLIST; \
|
2003-04-18 07:09:43 +00:00
|
|
|
_gc_count++; \
|
2003-04-18 05:04:16 +00:00
|
|
|
} \
|
|
|
|
} while (0)
|
|
|
|
#define THR_GCLIST_REMOVE(thrd) do { \
|
|
|
|
if (((thrd)->flags & THR_FLAGS_IN_GCLIST) != 0) { \
|
2003-04-18 07:09:43 +00:00
|
|
|
TAILQ_REMOVE(&_thread_gc_list, thrd, gcle); \
|
2003-04-18 05:04:16 +00:00
|
|
|
(thrd)->flags &= ~THR_FLAGS_IN_GCLIST; \
|
2003-04-18 07:09:43 +00:00
|
|
|
_gc_count--; \
|
2003-04-18 05:04:16 +00:00
|
|
|
} \
|
|
|
|
} while (0)
|
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-04-18 07:09:43 +00:00
|
|
|
#define GC_NEEDED() (atomic_load_acq_int(&_gc_count) >= 5)
|
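An illustrative GC pass built from the macros above: once GC_NEEDED()
reports enough exited threads, walk _thread_gc_list and reclaim the
safe ones. thread_is_gc_safe() and reclaim_thread() are hypothetical
helpers, and locking of _gc_lock is elided for brevity.

static int	thread_is_gc_safe(struct pthread *);	/* hypothetical */
static void	reclaim_thread(struct pthread *);	/* hypothetical */

static void
model_gc_pass(void)
{
	struct pthread *td, *td_next;

	if (!GC_NEEDED())
		return;
	td = TAILQ_FIRST(&_thread_gc_list);
	while (td != NULL) {
		/* Save the successor before unlinking td. */
		td_next = TAILQ_NEXT(td, gcle);
		if (thread_is_gc_safe(td)) {
			/* THR_GCLIST_REMOVE also decrements _gc_count. */
			THR_GCLIST_REMOVE(td);
			reclaim_thread(td);
		}
		td = td_next;
	}
}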
|
|
|
|
2000-10-13 22:12:32 +00:00
|
|
|
/*
|
2003-04-18 05:04:16 +00:00
|
|
|
* Locking the scheduling queue for another thread uses that thread's
|
|
|
|
* KSEG lock.
|
2000-10-13 22:12:32 +00:00
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
#define THR_SCHED_LOCK(curthr, thr) do { \
|
|
|
|
(curthr)->critical[(curthr)->locklevel] = _kse_critical_enter(); \
|
|
|
|
(curthr)->locklevel++; \
|
|
|
|
KSE_SCHED_LOCK((curthr)->kse, (thr)->kseg); \
|
|
|
|
} while (0)
|
2000-10-13 22:12:32 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
#define THR_SCHED_UNLOCK(curthr, thr) do { \
|
|
|
|
KSE_SCHED_UNLOCK((curthr)->kse, (thr)->kseg); \
|
|
|
|
(curthr)->locklevel--; \
|
|
|
|
_kse_critical_leave((curthr)->critical[(curthr)->locklevel]); \
|
|
|
|
} while (0)
|
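A hedged usage sketch of the macros above: a hypothetical priority
change on another thread, taking the target's KSEG scheduling lock
before touching its scheduling state. The requeue step is only
noted, not implemented.

static void
model_change_priority(struct pthread *curthread, struct pthread *target,
    char newprio)
{
	THR_SCHED_LOCK(curthread, target);
	target->base_priority = newprio;	/* scheduling state */
	/* Requeueing the target at its new priority would happen here. */
	THR_SCHED_UNLOCK(curthread, target);
}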
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-05-16 19:58:30 +00:00
|
|
|
/* Take the scheduling lock with the intent to call the scheduler. */
|
|
|
|
#define THR_LOCK_SWITCH(curthr) do { \
|
|
|
|
(void)_kse_critical_enter(); \
|
|
|
|
KSE_SCHED_LOCK((curthr)->kse, (curthr)->kseg); \
|
|
|
|
} while (0)
|
2003-07-18 02:46:55 +00:00
|
|
|
#define THR_UNLOCK_SWITCH(curthr) do { \
|
|
|
|
KSE_SCHED_UNLOCK((curthr)->kse, (curthr)->kseg);\
|
|
|
|
} while (0)
|
2003-05-16 19:58:30 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
#define THR_CRITICAL_ENTER(thr) (thr)->critical_count++
|
|
|
|
#define THR_CRITICAL_LEAVE(thr) do { \
|
|
|
|
(thr)->critical_count--; \
|
|
|
|
if (((thr)->critical_yield != 0) && \
|
|
|
|
((thr)->critical_count == 0)) { \
|
|
|
|
(thr)->critical_yield = 0; \
|
|
|
|
_thr_sched_switch(thr); \
|
|
|
|
} \
|
|
|
|
} while (0)
|
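A short usage sketch of the macros above: critical entries nest via
critical_count, and a yield requested inside the region is deferred
until the outermost leave. The function name is illustrative.

static void
model_critical_section(struct pthread *curthread)
{
	THR_CRITICAL_ENTER(curthread);
	THR_CRITICAL_ENTER(curthread);	/* recursive entry is allowed */
	/* ... touch state that must not be interrupted ... */
	THR_CRITICAL_LEAVE(curthread);	/* still critical, no switch */
	THR_CRITICAL_LEAVE(curthread);	/* may _thr_sched_switch() now */
}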
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
#define THR_IS_ACTIVE(thrd) \
|
|
|
|
	(((thrd)->kse != NULL) && ((thrd)->kse->k_curthread == (thrd)))
|
1996-01-22 00:23:58 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
#define THR_IN_SYNCQ(thrd) (((thrd)->sflags & THR_FLAGS_IN_SYNCQ) != 0)
|
2003-05-04 16:17:01 +00:00
|
|
|
|
|
|
|
#define THR_IS_SUSPENDED(thrd) \
|
|
|
|
(((thrd)->state == PS_SUSPENDED) || \
|
|
|
|
(((thrd)->flags & THR_FLAGS_SUSPENDED) != 0))
|
|
|
|
#define THR_IS_EXITING(thrd) (((thrd)->flags & THR_FLAGS_EXITING) != 0)
|
2004-07-13 22:49:58 +00:00
|
|
|
#define DBG_CAN_RUN(thrd) (((thrd)->tcb->tcb_tmbx.tm_dflags & \
|
2004-08-03 02:23:06 +00:00
|
|
|
TMDF_SUSPEND) == 0)
|
2003-12-09 02:20:56 +00:00
|
|
|
|
|
|
|
extern int __isthreaded;
|
|
|
|
|
|
|
|
static inline int
|
|
|
|
_kse_isthreaded(void)
|
|
|
|
{
|
|
|
|
return (__isthreaded != 0);
|
|
|
|
}
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/*
|
|
|
|
* Global variables for the pthread kernel.
|
|
|
|
*/
|
1996-11-11 09:07:05 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
SCLASS void *_usrstack SCLASS_PRESET(NULL);
|
|
|
|
SCLASS struct kse *_kse_initial SCLASS_PRESET(NULL);
|
|
|
|
SCLASS struct pthread *_thr_initial SCLASS_PRESET(NULL);
|
2004-07-13 22:49:58 +00:00
|
|
|
/* For debugger */
|
|
|
|
SCLASS int _libkse_debug SCLASS_PRESET(0);
|
|
|
|
SCLASS int _thread_activated SCLASS_PRESET(0);
|
2004-08-07 15:15:38 +00:00
|
|
|
SCLASS int _thread_scope_system SCLASS_PRESET(0);
|
1997-02-05 23:26:09 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* List of all threads: */
|
|
|
|
SCLASS TAILQ_HEAD(, pthread) _thread_list
|
|
|
|
SCLASS_PRESET(TAILQ_HEAD_INITIALIZER(_thread_list));
|
1999-06-20 08:28:48 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* List of threads needing GC: */
|
|
|
|
SCLASS TAILQ_HEAD(, pthread) _thread_gc_list
|
|
|
|
SCLASS_PRESET(TAILQ_HEAD_INITIALIZER(_thread_gc_list));
|
1998-09-30 06:22:07 +00:00
|
|
|
|
2004-07-13 22:49:58 +00:00
|
|
|
SCLASS int _thread_active_threads SCLASS_PRESET(1);
|
2003-09-14 22:52:16 +00:00
|
|
|
|
2003-11-04 20:04:45 +00:00
|
|
|
SCLASS TAILQ_HEAD(atfork_head, pthread_atfork) _thr_atfork_list;
|
|
|
|
SCLASS pthread_mutex_t _thr_atfork_mutex;
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* Default thread attributes: */
|
|
|
|
SCLASS struct pthread_attr _pthread_attr_default
|
|
|
|
SCLASS_PRESET({
|
|
|
|
SCHED_RR, 0, TIMESLICE_USEC, THR_DEFAULT_PRIORITY,
|
|
|
|
THR_CREATE_RUNNING, PTHREAD_CREATE_JOINABLE, NULL,
|
2003-07-18 02:46:55 +00:00
|
|
|
NULL, NULL, THR_STACK_DEFAULT, /* guardsize */0
|
2003-04-18 05:04:16 +00:00
|
|
|
});
|
2003-02-17 10:05:18 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* Default mutex attributes: */
|
|
|
|
SCLASS struct pthread_mutex_attr _pthread_mutexattr_default
|
|
|
|
SCLASS_PRESET({PTHREAD_MUTEX_DEFAULT, PTHREAD_PRIO_NONE, 0, 0 });
|
1999-03-23 05:07:56 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* Default condition variable attributes: */
|
|
|
|
SCLASS struct pthread_cond_attr _pthread_condattr_default
|
|
|
|
SCLASS_PRESET({COND_TYPE_FAST, 0});
|
1999-06-20 08:28:48 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* Clock resolution in usec. */
|
|
|
|
SCLASS int _clock_res_usec SCLASS_PRESET(CLOCK_RES_USEC);
|
1999-06-20 08:28:48 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* Array of signal actions for this process: */
|
o Use a daemon thread to monitor signal events in the kernel. If the
  pending signals change in the kernel, it retrieves the pending set
  and tries to find a thread to dispatch each signal to. The dispatch
  can be rolled back if the signal is no longer pending in the kernel.
o Create two functions, _thr_signal_init() and _thr_signal_deinit().
  All signal action settings are retrieved from the kernel when
  threading mode is turned on; after a fork(), the child process
  resets them to the user's settings by calling _thr_signal_deinit().
  When threading mode is not turned on, all signal operations are
  passed directly to the kernel.
o When a thread generates a synchronous signal and its context returns
  from the completed list, the UTS retrieves the signal from the
  thread's mailbox and tries to deliver it to the thread.
o The context signal mask is now only used when delivering signals;
  the thread's current signal mask is always the one in the pthread
  structure.
o Remove the have_signals field in the pthread structure and replace
  it with psf_valid in pthread_signal_frame. When psf_valid is true,
  at context switch time the thread backs itself out of any
  mutex/condition internal queues and then begins to process signals.
  When a thread is running rather than blocked, check_pending
  indicates that there are signals for the thread; after it is
  preempted and then resumed, the UTS tries to deliver them.
o At signal delivery time, not only the thread's pending signals but
  also the process's pending signals are scanned.
o Change the sigwait code a bit: remove the sigwait field in
  pthread_wait_data and replace it with oldsigmask in the pthread
  structure. When a thread calls sigwait(), its current signal mask
  is backed up to oldsigmask and the wait set is copied to its signal
  mask; when the thread gets a signal in the wait set, its current
  signal mask is restored from oldsigmask. These steps are done
  atomically (see the sketch below).
o Two additional POSIX APIs are implemented: sigwaitinfo() and
  sigtimedwait().
o Signal code locking is better than before; there are fewer race
  conditions.
o Temporarily disable most of the code in _kse_single_thread, as it
  is not safe after fork().
2003-06-28 09:55:02 +00:00
|
|
|
SCLASS struct sigaction _thread_sigact[_SIG_MAXSIG];
|
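A standalone model of the sigwait() mask swap described in the change
log above; the struct and helpers are illustrative, not the library's
pthread structure, and the atomicity the log mentions is not modeled.

#include <signal.h>

struct sigwait_model {
	sigset_t sigmask;	/* thread's current signal mask */
	sigset_t oldsigmask;	/* saved mask while in sigwait() */
};

static void
model_sigwait_enter(struct sigwait_model *t, const sigset_t *waitset)
{
	t->oldsigmask = t->sigmask;	/* back up the current mask */
	t->sigmask = *waitset;		/* wait only for these signals */
}

static void
model_sigwait_leave(struct sigwait_model *t)
{
	t->sigmask = t->oldsigmask;	/* restore on wakeup */
}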
1999-03-23 05:07:56 +00:00
|
|
|
|
2000-10-13 22:12:32 +00:00
|
|
|
/*
|
2003-04-18 05:04:16 +00:00
|
|
|
 * Lock protecting the process signal
|
|
|
|
* mask and pending signal sets.
|
2000-10-13 22:12:32 +00:00
|
|
|
*/
|
2003-04-18 05:04:16 +00:00
|
|
|
SCLASS struct lock _thread_signal_lock;
|
2000-10-13 22:12:32 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* Pending signals and mask for this process: */
|
|
|
|
SCLASS sigset_t _thr_proc_sigpending;
|
2003-06-28 09:55:02 +00:00
|
|
|
SCLASS siginfo_t _thr_proc_siginfo[_SIG_MAXSIG];
|
2000-10-13 22:12:32 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
SCLASS pid_t _thr_pid SCLASS_PRESET(0);
|
2002-11-12 00:55:01 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* Garbage collector lock. */
|
|
|
|
SCLASS struct lock _gc_lock;
|
|
|
|
SCLASS int _gc_check SCLASS_PRESET(0);
|
2003-04-18 07:09:43 +00:00
|
|
|
SCLASS int _gc_count SCLASS_PRESET(0);
|
2002-11-12 00:55:01 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
SCLASS struct lock _mutex_static_lock;
|
|
|
|
SCLASS struct lock _rwlock_static_lock;
|
|
|
|
SCLASS struct lock _keytable_lock;
|
|
|
|
SCLASS struct lock _thread_list_lock;
|
|
|
|
SCLASS int _thr_guard_default;
|
|
|
|
SCLASS int _thr_page_size;
|
2003-06-28 09:55:02 +00:00
|
|
|
SCLASS pthread_t _thr_sig_daemon;
|
2003-04-18 05:04:16 +00:00
|
|
|
SCLASS int _thr_debug_flags SCLASS_PRESET(0);
|
1999-06-20 08:28:48 +00:00
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* Undefine the storage class and preset specifiers: */
|
1996-01-22 00:23:58 +00:00
|
|
|
#undef SCLASS
|
2003-04-18 05:04:16 +00:00
|
|
|
#undef SCLASS_PRESET
|
|
|
|
|
1996-01-22 00:23:58 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Function prototype definitions.
|
|
|
|
*/
|
|
|
|
__BEGIN_DECLS
|
2003-04-18 05:04:16 +00:00
|
|
|
int _cond_reinit(pthread_cond_t *);
|
|
|
|
void _cond_wait_backout(struct pthread *);
|
2003-07-17 23:02:30 +00:00
|
|
|
struct kse *_kse_alloc(struct pthread *, int sys_scope);
|
2003-04-18 05:04:16 +00:00
|
|
|
kse_critical_t _kse_critical_enter(void);
|
|
|
|
void _kse_critical_leave(kse_critical_t);
|
2003-04-23 21:46:50 +00:00
|
|
|
int _kse_in_critical(void);
|
2003-04-18 07:09:43 +00:00
|
|
|
void _kse_free(struct pthread *, struct kse *);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _kse_init(void);
|
2003-04-18 07:09:43 +00:00
|
|
|
struct kse_group *_kseg_alloc(struct pthread *);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _kse_lock_wait(struct lock *, struct lockuser *lu);
|
|
|
|
void _kse_lock_wakeup(struct lock *, struct lockuser *lu);
|
|
|
|
void _kse_single_thread(struct pthread *);
|
2003-04-21 04:02:56 +00:00
|
|
|
int _kse_setthreaded(int);
|
2003-04-22 20:28:33 +00:00
|
|
|
void _kseg_free(struct kse_group *);
|
1999-03-23 05:07:56 +00:00
|
|
|
int _mutex_cv_lock(pthread_mutex_t *);
|
|
|
|
int _mutex_cv_unlock(pthread_mutex_t *);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _mutex_lock_backout(struct pthread *);
|
|
|
|
void _mutex_notify_priochange(struct pthread *, struct pthread *, int);
|
|
|
|
int _mutex_reinit(struct pthread_mutex *);
|
|
|
|
void _mutex_unlock_private(struct pthread *);
|
|
|
|
void _libpthread_init(struct pthread *);
|
1999-06-20 08:28:48 +00:00
|
|
|
int _pq_alloc(struct pq_queue *, int, int);
|
2003-04-18 07:09:43 +00:00
|
|
|
void _pq_free(struct pq_queue *);
|
1999-06-20 08:28:48 +00:00
|
|
|
int _pq_init(struct pq_queue *);
|
1999-03-23 05:07:56 +00:00
|
|
|
void _pq_remove(struct pq_queue *pq, struct pthread *);
|
|
|
|
void _pq_insert_head(struct pq_queue *pq, struct pthread *);
|
|
|
|
void _pq_insert_tail(struct pq_queue *pq, struct pthread *);
|
|
|
|
struct pthread *_pq_first(struct pq_queue *pq);
|
2004-07-13 22:49:58 +00:00
|
|
|
struct pthread *_pq_first_debug(struct pq_queue *pq);
|
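Read together, the _pq_* prototypes above form the scheduler's
priority-queue interface.  A hedged usage sketch follows; the
(minprio, maxprio) reading of _pq_alloc()'s two int arguments and the
0..31 range are assumptions for illustration, and error handling is
abbreviated:

        /*
         * Hypothetical scheduler fragment; assumes struct pq_queue is
         * complete at this point and _pq_alloc(pq, min, max) takes an
         * inclusive priority range.
         */
        static struct pq_queue  sketch_runq;

        static int
        sketch_runq_init(void)
        {
                return (_pq_alloc(&sketch_runq, 0, 31)); /* assumed bounds */
        }

        static struct pthread *
        sketch_pick_next(struct pthread *newly_runnable)
        {
                struct pthread  *td;

                _pq_insert_tail(&sketch_runq, newly_runnable); /* FIFO per prio */
                td = _pq_first(&sketch_runq);   /* best-priority thread */
                if (td != NULL)
                        _pq_remove(&sketch_runq, td); /* about to run */
                return (td);
        }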
2001-01-24 13:03:38 +00:00
|
|
|
void *_pthread_getspecific(pthread_key_t);
|
|
|
|
int _pthread_key_create(pthread_key_t *, void (*) (void *));
|
|
|
|
int _pthread_key_delete(pthread_key_t);
|
|
|
|
int _pthread_mutex_destroy(pthread_mutex_t *);
|
|
|
|
int _pthread_mutex_init(pthread_mutex_t *, const pthread_mutexattr_t *);
|
|
|
|
int _pthread_mutex_lock(pthread_mutex_t *);
|
|
|
|
int _pthread_mutex_trylock(pthread_mutex_t *);
|
|
|
|
int _pthread_mutex_unlock(pthread_mutex_t *);
|
|
|
|
int _pthread_mutexattr_init(pthread_mutexattr_t *);
|
|
|
|
int _pthread_mutexattr_destroy(pthread_mutexattr_t *);
|
|
|
|
int _pthread_mutexattr_settype(pthread_mutexattr_t *, int);
|
|
|
|
int _pthread_once(pthread_once_t *, void (*) (void));
|
2003-05-30 00:21:52 +00:00
|
|
|
int _pthread_rwlock_init(pthread_rwlock_t *, const pthread_rwlockattr_t *);
|
|
|
|
int _pthread_rwlock_destroy (pthread_rwlock_t *);
|
2003-04-18 05:04:16 +00:00
|
|
|
struct pthread *_pthread_self(void);
|
2001-01-24 13:03:38 +00:00
|
|
|
int _pthread_setspecific(pthread_key_t, const void *);
|
2003-09-09 06:57:51 +00:00
|
|
|
void _pthread_yield(void);
|
The original pthread_once code leaks memory if a pthread_once_t is used
in a shared library or any other dynamically allocated data block: once
the pthread_once_t is initialized, a mutex is allocated, and if the
shared library is unloaded or the data block freed, there is no way to
deallocate that mutex.
To fix this problem, don't use the mutex field in pthread_once_t;
instead, use its state field together with an internal mutex and
condition variable in libkse for synchronization, and introduce a third
state, IN_PROGRESS, to wait on if another thread is already invoking
init_routine() (see the sketch after this log entry).
Also, while here, make pthread_once() conform to the pthread
cancellation point specification.
Reviewed by: deischen
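A minimal sketch of the state machine described above, assuming three
state encodings and library-internal lock names; the cancellation
handling the change also adds (a cleanup handler that rolls the state
back to the initial value and wakes waiters) is omitted for brevity:

        #include <pthread.h>

        #define ONCE_NEVER_DONE         0       /* assumed encodings */
        #define ONCE_IN_PROGRESS        1
        #define ONCE_DONE               2

        /* Library-internal; nothing is allocated per once-object. */
        static pthread_mutex_t  once_lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t   once_cv = PTHREAD_COND_INITIALIZER;

        static int
        sketch_once(int *state, void (*init_routine)(void))
        {
                pthread_mutex_lock(&once_lock);
                while (*state == ONCE_IN_PROGRESS) /* another thread is in it */
                        pthread_cond_wait(&once_cv, &once_lock);
                if (*state == ONCE_NEVER_DONE) {
                        *state = ONCE_IN_PROGRESS;
                        pthread_mutex_unlock(&once_lock);
                        init_routine();      /* run without the lock held */
                        pthread_mutex_lock(&once_lock);
                        *state = ONCE_DONE;
                        pthread_cond_broadcast(&once_cv);
                }
                pthread_mutex_unlock(&once_lock);
                return (0);
        }

Because the only mutex and condition variable live in the library
itself, freeing or unloading whatever holds the pthread_once_t no
longer strands an allocation.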
2003-09-09 22:38:12 +00:00
|
|
|
void _pthread_cleanup_push(void (*routine) (void *), void *routine_arg);
|
|
|
|
void _pthread_cleanup_pop(int execute);
|
2003-04-18 07:09:43 +00:00
|
|
|
struct pthread *_thr_alloc(struct pthread *);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _thr_exit(char *, int, char *);
|
|
|
|
void _thr_exit_cleanup(void);
|
|
|
|
void _thr_lock_wait(struct lock *lock, struct lockuser *lu);
|
|
|
|
void _thr_lock_wakeup(struct lock *lock, struct lockuser *lu);
|
2003-11-04 20:04:45 +00:00
|
|
|
void _thr_mutex_reinit(pthread_mutex_t *);
|
2003-04-18 05:04:16 +00:00
|
|
|
int _thr_ref_add(struct pthread *, struct pthread *, int);
|
|
|
|
void _thr_ref_delete(struct pthread *, struct pthread *);
|
2003-11-04 20:04:45 +00:00
|
|
|
void _thr_rtld_init(void);
|
|
|
|
void _thr_rtld_fini(void);
|
2003-04-22 20:28:33 +00:00
|
|
|
int _thr_schedule_add(struct pthread *, struct pthread *);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _thr_schedule_remove(struct pthread *, struct pthread *);
|
|
|
|
void _thr_setrunnable(struct pthread *curthread, struct pthread *thread);
|
2003-07-23 02:11:07 +00:00
|
|
|
struct kse_mailbox *_thr_setrunnable_unlocked(struct pthread *thread);
|
|
|
|
struct kse_mailbox *_thr_sig_add(struct pthread *, int, siginfo_t *);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _thr_sig_dispatch(struct kse *, int, siginfo_t *);
|
|
|
|
int _thr_stack_alloc(struct pthread_attr *);
|
|
|
|
void _thr_stack_free(struct pthread_attr *);
|
2003-04-18 07:09:43 +00:00
|
|
|
void _thr_free(struct pthread *, struct pthread *);
|
|
|
|
void _thr_gc(struct pthread *);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _thr_panic_exit(char *, int, char *);
|
1996-01-22 00:23:58 +00:00
|
|
|
void _thread_cleanupspecific(void);
|
|
|
|
void _thread_dump_info(void);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _thread_printf(int, const char *, ...);
|
|
|
|
void _thr_sched_switch(struct pthread *);
|
2003-05-16 19:58:30 +00:00
|
|
|
void _thr_sched_switch_unlocked(struct pthread *);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _thr_set_timeout(const struct timespec *);
|
2003-05-29 17:10:45 +00:00
|
|
|
void _thr_seterrno(struct pthread *, int);
|
2003-04-18 05:04:16 +00:00
|
|
|
void _thr_sig_handler(int, siginfo_t *, ucontext_t *);
|
|
|
|
void _thr_sig_check_pending(struct pthread *);
|
|
|
|
void _thr_sig_rundown(struct pthread *, ucontext_t *,
|
|
|
|
struct pthread_sigframe *);
|
|
|
|
void _thr_sig_send(struct pthread *pthread, int sig);
|
|
|
|
void _thr_sigframe_restore(struct pthread *thread, struct pthread_sigframe *psf);
|
2003-05-29 17:10:45 +00:00
|
|
|
void _thr_spinlock_init(void);
|
2003-12-09 02:20:56 +00:00
|
|
|
void _thr_cancel_enter(struct pthread *);
|
|
|
|
void _thr_cancel_leave(struct pthread *, int);
|
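Taken together with the __sys_* prototypes later in this file,
_thr_cancel_enter() and _thr_cancel_leave() suggest the usual shape of
a cancellation-point wrapper.  A hedged sketch; reading
_thr_cancel_leave()'s int argument as "check again for a pending
cancel" is an assumption, and the real wrappers do more:

        /*
         * Hypothetical wrapper; assumes this header is included, so
         * the _thr_* and __sys_* prototypes are visible and
         * <unistd.h> has already been pulled in.
         */
        ssize_t
        sketch_read(int fd, void *buf, size_t nbytes)
        {
                struct pthread  *curthread = _pthread_self();
                ssize_t          ret;

                _thr_cancel_enter(curthread);  /* may act on a pending cancel */
                ret = __sys_read(fd, buf, nbytes); /* may block in the kernel */
                _thr_cancel_leave(curthread, 1);   /* assumed: re-check now */
                return (ret);
        }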
2003-04-28 23:56:12 +00:00
|
|
|
int _thr_setconcurrency(int new_level);
|
|
|
|
int _thr_setmaxconcurrency(void);
|
2003-07-17 23:02:30 +00:00
|
|
|
void _thr_critical_enter(struct pthread *);
|
|
|
|
void _thr_critical_leave(struct pthread *);
|
o Use a daemon thread to monitor signal events in the kernel: if the
  set of pending signals changes in the kernel, the daemon retrieves
  the pending set and tries to find a thread to dispatch each signal
  to.  The dispatch can be rolled back if the signal is no longer
  pending in the kernel.
o Create two functions, _thr_signal_init() and _thr_signal_deinit().
  All signal action settings are retrieved from the kernel when
  threading mode is turned on; after a fork(), the child process
  resets them to the user settings by calling _thr_signal_deinit().
  When threading mode is not turned on, all signal operations are
  passed directly to the kernel.
o When a thread generates a synchronous signal and its context returns
  from the completed list, the UTS retrieves the signal from the
  thread's mailbox and tries to deliver it to the thread.
o The context signal mask is now used only when delivering signals;
  the thread's current signal mask is always the one in the pthread
  structure.
o Remove the have_signals field from the pthread structure and replace
  it with psf_valid in pthread_signal_frame.  When psf_valid is true,
  the thread backs itself out of any internal mutex/condition queues
  at context switch time and then begins processing signals.  When a
  thread is running rather than blocked, check_pending indicates that
  there are signals for it; after it has been preempted and resumed,
  the UTS will try to deliver them.
o At signal delivery time, scan not only the thread's pending signals
  but the process's pending signals as well.
o Change the sigwait code a bit: remove the sigwait field from
  pthread_wait_data and replace it with oldsigmask in the pthread
  structure.  When a thread calls sigwait(), its current signal mask
  is backed up to oldsigmask and waitset is copied to its signal
  mask; when the thread gets a signal in the waitset range, its
  current signal mask is restored from oldsigmask.  These steps are
  done atomically (see the sketch after this log entry).
o Implement two additional POSIX APIs: sigwaitinfo() and
  sigtimedwait().
o Signal code locking is better than before; there are fewer race
  conditions.
o Temporarily disable most of the code in _kse_single_thread, as it
  is not safe after fork().
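A hedged outline of the sigwait() mask swap described above, with a
hypothetical structure and field layout; the real code keeps this
state in the pthread structure and performs both steps under the
thread's lock so they are atomic with respect to signal delivery:

        #include <signal.h>

        struct sketch_thread {
                sigset_t        sigmask;        /* current signal mask */
                sigset_t        oldsigmask;     /* saved across sigwait() */
        };

        /* On entry to sigwait(): wait on exactly the caller's set. */
        static void
        sketch_sigwait_enter(struct sketch_thread *td, const sigset_t *waitset)
        {
                td->oldsigmask = td->sigmask;   /* back up the current mask */
                td->sigmask = *waitset;         /* per the log entry above */
        }

        /* On wakeup with a signal in the waitset range: restore. */
        static void
        sketch_sigwait_leave(struct sketch_thread *td)
        {
                td->sigmask = td->oldsigmask;
        }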
2003-06-28 09:55:02 +00:00
|
|
|
int _thr_start_sig_daemon(void);
|
|
|
|
int _thr_getprocsig(int sig, siginfo_t *siginfo);
|
|
|
|
int _thr_getprocsig_unlocked(int sig, siginfo_t *siginfo);
|
|
|
|
void _thr_signal_init(void);
|
|
|
|
void _thr_signal_deinit(void);
|
2003-07-17 23:02:30 +00:00
|
|
|
void _thr_hash_add(struct pthread *);
|
|
|
|
void _thr_hash_remove(struct pthread *);
|
|
|
|
struct pthread *_thr_hash_find(struct pthread *);
|
2003-10-08 00:20:50 +00:00
|
|
|
void _thr_finish_cancellation(void *arg);
|
2003-12-29 23:33:51 +00:00
|
|
|
int _thr_sigonstack(void *sp);
|
2004-07-13 22:49:58 +00:00
|
|
|
void _thr_debug_check_yield(struct pthread *);
|
2003-04-18 05:04:16 +00:00
|
|
|
|
2003-05-30 00:21:52 +00:00
|
|
|
/*
|
|
|
|
* Aliases for _pthread functions. Should be called instead of
|
|
|
|
* originals if PLT resolution is unwanted at runtime.
|
|
|
|
*/
|
|
|
|
int _thr_cond_broadcast(pthread_cond_t *);
|
|
|
|
int _thr_cond_signal(pthread_cond_t *);
|
|
|
|
int _thr_cond_wait(pthread_cond_t *, pthread_mutex_t *);
|
|
|
|
int _thr_mutex_lock(pthread_mutex_t *);
|
|
|
|
int _thr_mutex_unlock(pthread_mutex_t *);
|
|
|
|
int _thr_rwlock_rdlock (pthread_rwlock_t *);
|
|
|
|
int _thr_rwlock_wrlock (pthread_rwlock_t *);
|
|
|
|
int _thr_rwlock_unlock (pthread_rwlock_t *);
|
|
|
|
|
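A common FreeBSD way to provide such alias pairs is a weak exported
symbol plus a strong internal one, as sketched below with a stand-in
implementation; whether libkse wires these particular names up with
exactly these macros is an assumption:

        #include <sys/cdefs.h>
        #include <pthread.h>

        int     __sketch_mutex_lock(pthread_mutex_t *);

        int
        __sketch_mutex_lock(pthread_mutex_t *m)
        {
                (void)m;                /* real locking work elided */
                return (0);
        }

        /* Exported name: weak, so applications may interpose it. */
        __weak_reference(__sketch_mutex_lock, pthread_mutex_lock);
        /* Internal name: strong alias the library calls directly. */
        __strong_reference(__sketch_mutex_lock, _thr_mutex_lock);

The library then calls the _thr_* name, which applications do not
override, instead of routing its own locking through the interposable
public symbol.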
2001-01-29 03:24:23 +00:00
|
|
|
/* #include <sys/aio.h> */
|
|
|
|
#ifdef _SYS_AIO_H_
|
2001-01-29 18:59:53 +00:00
|
|
|
int __sys_aio_suspend(const struct aiocb * const[], int, const struct timespec *);
|
2001-01-24 13:03:38 +00:00
|
|
|
#endif
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* #include <fcntl.h> */
|
|
|
|
#ifdef _SYS_FCNTL_H_
|
|
|
|
int __sys_fcntl(int, int, ...);
|
|
|
|
int __sys_open(const char *, int, ...);
|
1996-01-22 00:23:58 +00:00
|
|
|
#endif
|
|
|
|
|
2001-10-26 18:45:02 +00:00
|
|
|
/* #include <sys/ioctl.h> */
|
|
|
|
#ifdef _SYS_IOCTL_H_
|
|
|
|
int __sys_ioctl(int, unsigned long, ...);
|
1996-01-22 00:23:58 +00:00
|
|
|
#endif
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* #include <sched.h> */
|
|
|
|
#ifdef _SCHED_H_
|
|
|
|
int __sys_sched_yield(void);
|
2001-10-26 18:45:02 +00:00
|
|
|
#endif
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* #include <signal.h> */
|
|
|
|
#ifdef _SIGNAL_H_
|
|
|
|
int __sys_kill(pid_t, int);
|
|
|
|
int __sys_sigaction(int, const struct sigaction *, struct sigaction *);
|
|
|
|
int __sys_sigpending(sigset_t *);
|
|
|
|
int __sys_sigprocmask(int, const sigset_t *, sigset_t *);
|
|
|
|
int __sys_sigsuspend(const sigset_t *);
|
|
|
|
int __sys_sigreturn(ucontext_t *);
|
|
|
|
int __sys_sigaltstack(const struct sigaltstack *, struct sigaltstack *);
|
1996-01-22 00:23:58 +00:00
|
|
|
#endif
|
|
|
|
|
2001-01-24 13:03:38 +00:00
|
|
|
/* #include <sys/socket.h> */
|
2001-10-26 18:45:02 +00:00
|
|
|
#ifdef _SYS_SOCKET_H_
|
2003-12-09 15:16:27 +00:00
|
|
|
int __sys_accept(int, struct sockaddr *, socklen_t *);
|
|
|
|
int __sys_connect(int, const struct sockaddr *, socklen_t);
|
2003-04-18 05:04:16 +00:00
|
|
|
int __sys_sendfile(int, int, off_t, size_t, struct sf_hdtr *,
|
|
|
|
off_t *, int);
|
1996-01-22 00:23:58 +00:00
|
|
|
#endif
|
|
|
|
|
2001-10-26 18:45:02 +00:00
|
|
|
/* #include <sys/uio.h> */
|
2003-04-18 05:04:16 +00:00
|
|
|
#ifdef _SYS_UIO_H_
|
|
|
|
ssize_t __sys_readv(int, const struct iovec *, int);
|
|
|
|
ssize_t __sys_writev(int, const struct iovec *, int);
|
1996-01-22 00:23:58 +00:00
|
|
|
#endif
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* #include <time.h> */
|
|
|
|
#ifdef _TIME_H_
|
|
|
|
int __sys_nanosleep(const struct timespec *, struct timespec *);
|
1996-01-22 00:23:58 +00:00
|
|
|
#endif
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* #include <unistd.h> */
|
|
|
|
#ifdef _UNISTD_H_
|
|
|
|
int __sys_close(int);
|
|
|
|
int __sys_execve(const char *, char * const *, char * const *);
|
|
|
|
int __sys_fork(void);
|
|
|
|
int __sys_fsync(int);
|
|
|
|
pid_t __sys_getpid(void);
|
|
|
|
int __sys_select(int, fd_set *, fd_set *, fd_set *, struct timeval *);
|
|
|
|
ssize_t __sys_read(int, void *, size_t);
|
|
|
|
ssize_t __sys_write(int, const void *, size_t);
|
|
|
|
void __sys_exit(int);
|
2003-06-28 09:55:02 +00:00
|
|
|
int __sys_sigwait(const sigset_t *, int *);
|
|
|
|
int __sys_sigtimedwait(sigset_t *, siginfo_t *, struct timespec *);
|
1996-01-22 00:23:58 +00:00
|
|
|
#endif
|
1999-06-20 08:28:48 +00:00
|
|
|
|
|
|
|
/* #include <poll.h> */
|
|
|
|
#ifdef _SYS_POLL_H_
|
2003-04-18 05:04:16 +00:00
|
|
|
int __sys_poll(struct pollfd *, unsigned, int);
|
2001-10-26 18:45:02 +00:00
|
|
|
#endif
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
/* #include <sys/mman.h> */
|
|
|
|
#ifdef _SYS_MMAN_H_
|
|
|
|
int __sys_msync(void *, size_t, int);
|
2000-01-19 07:04:50 +00:00
|
|
|
#endif
|
|
|
|
|
2003-04-18 05:04:16 +00:00
|
|
|
#endif /* !_THR_PRIVATE_H */
|