Commit Graph

83 Commits

Author SHA1 Message Date
Mike Makonnen
f2c3dd08ec Preparations to make libthr work in multi-threaded fork()ing applications.
o Factor the code duplicated between _thread_init(), which is run once
  to initialize libthr and the initial thread, and pthread_create(), which
  initializes newly created threads, into a new function called from both
  places: init_td_common()
o Move initialization of certain parts of libthr into a separate
  function. These include:
	- Active threads list and its lock
	- Dead threads list and its lock & condition variable
	- Naming and insertion of the initial thread into the
	  active threads list.
2003-12-26 08:16:17 +00:00
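
A minimal sketch of the split described above. Every name here (struct pthread_st, init_td_common(), init_tdlist(), the list and lock variables) is illustrative rather than the actual libthr identifier, and the naming/insertion of the initial thread into the active list is omitted:

    #include <pthread.h>
    #include <sys/queue.h>

    struct pthread_st {
            pthread_mutex_t         lock;           /* per-thread lock */
            TAILQ_ENTRY(pthread_st) entries;        /* list linkage */
    };

    static TAILQ_HEAD(, pthread_st) active_list;    /* active threads */
    static TAILQ_HEAD(, pthread_st) dead_list;      /* dead threads */
    static pthread_mutex_t active_list_lock;
    static pthread_mutex_t dead_list_lock;
    static pthread_cond_t  dead_list_cond;          /* wakes the GC thread */

    /* Shared setup, called from both _thread_init() and pthread_create(). */
    static void
    init_td_common(struct pthread_st *td)
    {
            pthread_mutex_init(&td->lock, NULL);
            /* further per-thread state would be initialized here */
    }

    /* One-time library setup, called only from initialization. */
    static void
    init_tdlist(void)
    {
            TAILQ_INIT(&active_list);
            TAILQ_INIT(&dead_list);
            pthread_mutex_init(&active_list_lock, NULL);
            pthread_mutex_init(&dead_list_lock, NULL);
            pthread_cond_init(&dead_list_cond, NULL);
    }
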
Mike Makonnen
8657fd166c Remove _giant_mutex and its associated macros. 2003-12-15 12:38:06 +00:00
Mike Makonnen
2543fd4700 Comment out most of pthread_setschedparam. Pthread priorities didn't
work before anyway, and I didn't want to fix broken code I had no
way of testing. It was necessary, however, in order to get rid of GIANT_LOCK.
Pthread priorities will have to wait a little longer to get fixed.
2003-12-15 12:31:46 +00:00
Mike Makonnen
c830473999 When creating a pthread in the suspended state there were two
problems: (1) The wrong flag was being checked for in the attribute
	  (2) The pthread's state was not being set to indicate it was
	      suspended.

Noticed by: Igor Sysoev <is@rambler-co.ru>
2003-12-15 09:35:02 +00:00
Mike Makonnen
099fe19901 Doh! Lock the thread passed in by the caller, not the current thread. 2003-12-12 09:51:39 +00:00
Mike Makonnen
f318a5206c Remove uses of GIANT_LOCK and replace with appropriate thread
and thread list locks.
2003-12-11 08:34:07 +00:00
Mike Makonnen
d214f02991 Take a stab at fixing some of the macro-nightmare.
PTHREAD_NEW_STATE should work as expected now: a thread
marked PS_RUNNING will get sent a SIGTHR.
Still more cleanups necessary.
2003-12-09 11:20:01 +00:00
Mike Makonnen
8955220107 Fix the wrapper function around signals so that a signal handling
thread on one of the mutex or condition variable queues is removed
from those queues before the real signal handler is called.
2003-12-09 11:12:11 +00:00
Mike Makonnen
6fedbb4e37 Ugghh, cvs add the functions necessary to lock the global signal action
table.
2003-12-09 11:06:55 +00:00
Mike Makonnen
4a7709c540 o Add a wrapper around sigaction(2), so we can insert our own wrapper
around signals.
o Lock the process global signal action table.
2003-12-09 11:04:36 +00:00
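
A rough sketch of the interposition idea from the two sigaction-related commits above, using made-up names (_thr_sigact, _thr_sigact_lock, _thr_sig_wrapper, _thr_sigaction). The real libthr wrapper also removes the thread from any mutex or condition variable queue before dispatching, which is only indicated by a comment here, and SA_SIGINFO handlers are ignored for brevity:

    #include <pthread.h>
    #include <signal.h>

    static struct sigaction _thr_sigact[NSIG];      /* process-global action table */
    static pthread_mutex_t  _thr_sigact_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Wrapper installed in place of the application's handler. */
    static void
    _thr_sig_wrapper(int sig)
    {
            void (*handler)(int);

            /* This is where the current thread would be dequeued from any
             * mutex or condition variable queue it is sitting on. */
            handler = _thr_sigact[sig].sa_handler;
            if (handler != SIG_DFL && handler != SIG_IGN)
                    handler(sig);
    }

    /* sigaction(2) wrapper: remember the real handler, install ours. */
    int
    _thr_sigaction(int sig, const struct sigaction *act, struct sigaction *oact)
    {
            struct sigaction wrapped;

            if (sig <= 0 || sig >= NSIG)
                    return (-1);
            pthread_mutex_lock(&_thr_sigact_lock);  /* lock the global table */
            if (oact != NULL)
                    *oact = _thr_sigact[sig];
            if (act != NULL) {
                    _thr_sigact[sig] = *act;
                    wrapped = *act;
                    if (act->sa_handler != SIG_DFL && act->sa_handler != SIG_IGN)
                            wrapped.sa_handler = _thr_sig_wrapper;
                    sigaction(sig, &wrapped, NULL);
            }
            pthread_mutex_unlock(&_thr_sigact_lock);
            return (0);
    }
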
Mike Makonnen
d63466e954 Enable cancellation points around some syscalls. 2003-12-09 11:01:09 +00:00
Mike Makonnen
a4be5b10a2 Use dynamic instead of static LDT allocation.
Approved by: re (scottl)
2003-12-02 16:00:26 +00:00
Marcel Moolenaar
ed74e02776 Relink libc_r.a, libc_r.so and libc_r_p.so from libthr to libkse.
On ia64, where there's no libc_r at all, libkse is now the default
thread library by virtue of these links.

The reasons for this change are:
1. libkse is slated to become the default thread library anyway,
2. active development and maintenance is only present for libkse,
3. GNOME and KDE, both in the process of being supported on ia64,
   work better with KSE; even on ia64.
2003-09-27 23:27:19 +00:00
Marcel Moolenaar
c24297c0f2 Implement _get_curthread and _set_curthread. We use GCC's builtin
function for this, which expands to PAL calls (rduniq and wruniq).
This needs adjustment when TLS is implemented.
2003-07-24 07:51:49 +00:00
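
On alpha the thread pointer lives in the per-process "unique" value, which GCC exposes through thread-pointer builtins that expand to the rduniq/wruniq PAL calls. A sketch of what the pair might look like; this is alpha-only and will not compile for other targets:

    struct pthread;                         /* opaque in this sketch */

    struct pthread *
    _get_curthread(void)
    {
            /* expands to the rduniq PAL call on alpha */
            return ((struct pthread *)__builtin_thread_pointer());
    }

    void
    _set_curthread(struct pthread *thr)
    {
            /* expands to the wruniq PAL call on alpha */
            __builtin_set_thread_pointer(thr);
    }
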
Mike Makonnen
9e222aaf7a The MD framework for libthr on alpha 2003-07-19 15:57:52 +00:00
Mike Makonnen
393441d43b When _PTHREADSINVARIANTS is defined, SIGABRT is not included
in the set of signals to block.
Also, make the PANIC macro call abort() instead of simply
exiting.
2003-07-08 09:58:23 +00:00
Mike Makonnen
659045ffbf Change all instances of THR_LOCK/UNLOCK, etc. to UMTX_*.
It is a more accurate description of the locks they
operate on.
2003-07-06 10:18:48 +00:00
Mike Makonnen
9644071977 There's no need for _umtxtrylock to be a separate function.
Roll it into the pre-existing macro that's used to call it.
2003-07-06 10:10:32 +00:00
Mike Makonnen
e921a3c976 _pthread_mutex_trylock() is another internal libc function that must block
signals.
2003-07-03 13:28:53 +00:00
Mike Makonnen
f493d09ae7 Begin making libthr async signal safe.
Create a private, single-underscore version of pthread_mutex_unlock for libc.
pthread_mutex_lock already has one. These versions are different from the
ones that applications will link against because they block all signals
from the time a call to lock the mutex is made until it is successfully
unlocked.
2003-07-02 02:05:23 +00:00
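
A minimal sketch of the signal-blocking bracket described above, written against the public pthread API with hypothetical names (_thr_mutex_lock_private/_thr_mutex_unlock_private); the real libc-internal versions differ, but the point is that signals stay blocked from the lock attempt until the unlock succeeds:

    #include <pthread.h>
    #include <signal.h>

    /* Block every signal before taking the lock... */
    static int
    _thr_mutex_lock_private(pthread_mutex_t *m, sigset_t *saved)
    {
            sigset_t all;

            sigfillset(&all);
            pthread_sigmask(SIG_BLOCK, &all, saved);
            return (pthread_mutex_lock(m));
    }

    /* ...and keep them blocked until the unlock has succeeded, so an
     * async signal handler can never run while the lock is held. */
    static int
    _thr_mutex_unlock_private(pthread_mutex_t *m, const sigset_t *saved)
    {
            int error;

            error = pthread_mutex_unlock(m);
            pthread_sigmask(SIG_SETMASK, saved, NULL);
            return (error);
    }
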
Mike Makonnen
745a4a9ef8 Do not attempt to requeue a thread on a mutex queue. It may be that
a thread receives a spurious wakeup from sigtimedwait(), so make sure
that the queueing code is called only once, before entering
the loop (not in the loop). This should fix some fatal errors people
are seeing with messages stating the thread is already on the mutex queue.
These errors may still be triggered from signal handlers, however, since
that part of the code is not locked down yet.
2003-07-01 15:52:09 +00:00
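
The pattern being enforced, in a self-contained sketch (the waiter structure and all names are made up): queue the waiter exactly once, then loop on the wait, so a spurious wakeup only re-checks the predicate instead of re-queueing and corrupting the queue:

    #include <pthread.h>
    #include <stdbool.h>

    struct waiter {
            bool    granted;                /* set by the waker */
    };

    static void
    wait_for_grant(struct waiter *w, pthread_mutex_t *qlock, pthread_cond_t *cv)
    {
            pthread_mutex_lock(qlock);
            /* Enqueue w on the wait queue here, once, before the loop
             * (queue manipulation not shown). */
            while (!w->granted)             /* tolerate spurious wakeups */
                    pthread_cond_wait(cv, qlock);
            pthread_mutex_unlock(qlock);
    }
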
Ruslan Ermilov
0b3cbc5c38 Axe AINC.
Submitted by:	bde
2003-07-01 15:07:01 +00:00
Mike Makonnen
fadd82e367 Catchup with _thread_suspend() changes. 2003-06-30 12:35:31 +00:00
Mike Makonnen
dbc6f4c07d Sweep through pthread locking and use the new locking primitives for
libthr.
2003-06-29 23:51:04 +00:00
Mike Makonnen
2234d5bea2 Locking primitives and operations in libthr should use struct umtx,
not spinlock_t. Spinlock_t and the associated functions and macros may
require blocking signals in order for async-safe libc functions to behave
appropriately in libthr. This is undesirable for libthr internal locking.
So, this is the first step in completely separating libthr from libc's
locking primitives.

Three new macros should be used for internal libthr locking from now on:
THR_LOCK, THR_TRYLOCK, THR_UNLOCK.
2003-06-29 23:49:41 +00:00
Mike Makonnen
c36507007f In a critical section, separate the acquisition of the thread lock
and the disabling of signals. What we are really interested in is
keeping track of recursive disabling of signals. We should not
be recursively acquiring thread locks. Any such situations should
be reorganized to not require a recursive lock.

Separating the two out also allows us to block signals independent of
acquiring thread locks. This will be needed in libthr in the near future when
we put the pieces together to protect libc functions that use pthread mutexes
and low level locks.
2003-06-29 21:21:52 +00:00
John Polstra
7c916264aa Make _thread_suspend work with both the old broken sigtimedwait
implementation and the new improved one.  We now precompute the
signal set passed to sigtimedwait, using an inverted set when
necessary for compatibility with older kernels.
2003-06-29 15:55:44 +00:00
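
A sketch of precomputing the wait set once, with a stand-in SIGTHR and a hypothetical flag standing in for the kernel-version check the commit describes:

    #include <signal.h>

    #ifndef SIGTHR
    #define SIGTHR  SIGUSR1                 /* stand-in for libthr's internal signal */
    #endif

    static sigset_t _thr_waitset;           /* passed to sigtimedwait() */

    static void
    init_waitset(int new_sigtimedwait)      /* hypothetical kernel-version flag */
    {
            if (new_sigtimedwait) {
                    /* Fixed kernels: wait for SIGTHR only. */
                    sigemptyset(&_thr_waitset);
                    sigaddset(&_thr_waitset, SIGTHR);
            } else {
                    /* Older kernels: pass the inverted set for compatibility. */
                    sigfillset(&_thr_waitset);
                    sigdelset(&_thr_waitset, SIGTHR);
            }
    }
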
Mike Makonnen
7e2160688c The move to _retire() a thread in the GC instead of in the thread's
exit function has obviated the need for _spin[un]lock_pthread().
The _spin[un]lock() functions can now dereference curthread without
the danger that the LDT entry containing the pointer to the thread
has been cleared out from under them.
2003-06-29 00:12:40 +00:00
Marcel Moolenaar
6128a735ea Create compatibility links for libc_r on ia64 to prevent build-time
breakages. Note that runtime compatibility is not guaranteed. Future
changes to setjmp/longjmp in libc will break threaded applications
linked against libc_r.so.5 on ia64. We pull our "tier 2" card once
more...

Reviewed by: ru
2003-06-27 18:07:47 +00:00
Mike Makonnen
05e948d996 _thread_printf() is only used for debugging or in cases where something's
screwed beyond all help, so it can just skip the pthreads wrapper
for write(2) and call directly into it.
2003-06-09 17:58:15 +00:00
Mike Makonnen
4dee39fea0 Make C applications statically compiled with libthr work. Previously,
an application compiled -static with libthr would dump core in
malloc(3) because the stub thread initialization routine in libc would
be used instead of the libthr supplied one.
2003-06-04 08:23:05 +00:00
Mike Makonnen
b9662ddd18 Teach the libthr initializer about recent changes to the umtx structure
in the kernel.

Found by:	tinderbox
2003-06-03 09:31:33 +00:00
Mike Makonnen
cff6c3cab1 Unwind the _giant_mutex from pthread_detach(). When detaching a joiner thread
it's important that the correct lock order is observed: first lock the joined
thread and then the joiner.
2003-06-02 11:01:00 +00:00
Mike Makonnen
4384412030 Consolidate static_init() and static_init_private into one function.
The behaviour of this function is controlled by the argument: private.
2003-06-02 10:04:18 +00:00
David E. O'Brien
a23e5f4d43 .S comments must be C comments, not ASM ones. 2003-06-02 02:32:56 +00:00
Mike Makonnen
6e1aa51e9e I botched one of my commits in the last round. Fix it. 2003-05-31 14:38:22 +00:00
Mike Makonnen
1b2a19ce0e Make the mutex static initializers look more like the one for
condition variables. Cosmetic.

Explicitly compare against PTHREAD_MUTEX_INITIALIZER. We shouldn't
encourage calls to the mutex functions with null pointers to mutexes.

Approved by:	re/jhb
2003-05-29 20:58:31 +00:00
Mike Makonnen
41f2bd859f Use a static lock to make sure pthread_cond_* functions called
from multiple threads don't initialize the same condition variable
more than once.

Explicitly compare cond pointers with PTHREAD_COND_INITIALIZER instead
of NULL. Just because it happens to be defined as NULL is no reason
to encourage the idea that people can call those functions with
NULL pointers to a condition variable.

Approved by:	re/jhb
2003-05-29 20:54:00 +00:00
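
A sketch of the once-only initialization, assuming the FreeBSD convention of the time where pthread_cond_t is a pointer and PTHREAD_COND_INITIALIZER is its static-initializer value; the static lock name and helper are made up:

    #include <pthread.h>

    static pthread_mutex_t _cond_static_lock = PTHREAD_MUTEX_INITIALIZER;

    static int
    cond_check_init(pthread_cond_t *cond)
    {
            int error = 0;

            /* Compare against the static initializer explicitly, not NULL. */
            if (*cond == PTHREAD_COND_INITIALIZER) {
                    pthread_mutex_lock(&_cond_static_lock);
                    if (*cond == PTHREAD_COND_INITIALIZER)  /* re-check under the lock */
                            error = pthread_cond_init(cond, NULL);
                    pthread_mutex_unlock(&_cond_static_lock);
            }
            return (error);
    }
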
Mike Makonnen
0e335eaeb5 Missing unlock.
Approved by:	re/jhb
2003-05-29 20:49:17 +00:00
Mike Makonnen
b3cdf7ae2e Don't hold the active thread list lock when signaling the gc thread.
The dead thread list lock is sufficient for synchronization.

Retire the arch_id (LDT array slot) in the gc thread instead of
doing it in the thread itself.

Approved by:	re/jhb
2003-05-29 20:46:53 +00:00
Mike Makonnen
981f4968f0 It's unnecessary to lock the thread during creation. Simply extend
the scope of the active thread list lock.

Approved by:	re/jhb
2003-05-29 20:40:50 +00:00
Mike Makonnen
a09d02f780 Minimize the potential for deadlocks between an exiting thread and its
joiner by making sure all locks and unlocks occur in the same order. For
the record the lock order is: DEAD_LIST, THREAD_LIST, exiting thread, joiner
thread.

Approved by: re/rwatson
2003-05-27 21:48:42 +00:00
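
A sketch of an exit path taking the locks in the stated order (DEAD_LIST, THREAD_LIST, exiting thread, joiner); every name below is illustrative, not the actual libthr code:

    #include <pthread.h>

    struct thr {
            pthread_mutex_t lock;
            struct thr      *joiner;        /* thread joining us, if any */
    };

    static pthread_mutex_t dead_list_lock   = PTHREAD_MUTEX_INITIALIZER;  /* DEAD_LIST   */
    static pthread_mutex_t thread_list_lock = PTHREAD_MUTEX_INITIALIZER;  /* THREAD_LIST */

    static void
    exit_locks(struct thr *exiting)
    {
            pthread_mutex_lock(&dead_list_lock);     /* 1: DEAD_LIST      */
            pthread_mutex_lock(&thread_list_lock);   /* 2: THREAD_LIST    */
            pthread_mutex_lock(&exiting->lock);      /* 3: exiting thread */
            if (exiting->joiner != NULL)
                    pthread_mutex_lock(&exiting->joiner->lock);  /* 4: joiner */
            /* ... move the thread to the dead list, wake the joiner,
             * then unlock in a matching order ... */
    }
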
Mike Makonnen
ee98e9be9f Revert part of the last commit. I don't know what I was smoking.
Approved by: re/rwatson
2003-05-27 21:43:49 +00:00
Mike Makonnen
ca1c469cc7 Decouple the thread stack [de]allocating functions from the 'dead threads list'
lock. It's not really necessary and we don't need the added complexity
or potential for deadlocks.

Approved by:	re/blanket libthr
2003-05-26 00:37:07 +00:00
Mike Makonnen
2387af9962 Revise the unlock order in _pthread_join(). Also, if the joined
thread is not dead, the join loop is guaranteed to execute at least
once, so there is no need to pick up the thread list lock after
we return from suspension only to release it after the loop.

Approved by:	re/blanket libthr
2003-05-26 00:28:49 +00:00
Mike Makonnen
12c407a424 Return gracefully, rather than aborting, when the maximum concurrent
threads per process has been reached. Return EAGAIN, as per spec.

Approved by:	re/blanket libthr
2003-05-25 22:40:57 +00:00
Mike Makonnen
d39d651258 _pthread_cancel() breaks the normal lock order of first locking the
joined and then the joiner thread. There isn't an easy (sane?) way
to make it use the correct order without introducing races involving
the target thread and finding which (active or dead) list it is on. So,
after locking the canceled thread it will try to lock the joined thread
and if it fails release the first lock and try again from the top.

Introduce a new function, _spintrylock, which is simply a wrapper around
umtx_trylock(), to help accomplish this.

Approved by: re/blanket libthr
2003-05-25 08:48:11 +00:00
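
A sketch of the retry loop described above, with made-up structure and member names: lock the canceled thread, try-lock the thread it has joined, and if that fails drop everything and start over rather than violate the joined-then-joiner order:

    #include <pthread.h>
    #include <sched.h>

    struct thread_desc {
            pthread_mutex_t         lock;
            struct thread_desc      *joinee;        /* thread we are joining, if any */
    };

    static void
    cancel_locks(struct thread_desc *canceled)
    {
            for (;;) {
                    pthread_mutex_lock(&canceled->lock);
                    if (canceled->joinee == NULL ||
                        pthread_mutex_trylock(&canceled->joinee->lock) == 0)
                            break;          /* holding everything we need */
                    /* Out-of-order acquisition failed: release and retry. */
                    pthread_mutex_unlock(&canceled->lock);
                    sched_yield();
            }
            /* ... deliver the cancellation, then unlock both ... */
    }
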
Mike Makonnen
4393f2c4ec Part of the last patch.
Modify the thread creation and thread searching routines
to lock the thread lists with the new locks instead of GIANT_LOCK.

Approved by:	re/blanket libthr
2003-05-25 08:35:37 +00:00
Mike Makonnen
71d09bc86a Start locking up the active and dead threads lists. The active threads
list is protected by a spinlock_t, but the dead list uses a pthread_mutex
because it is necessary to synchronize other threads with the garbage
collector thread. Lock/Unlock macros are used so it's easier to make
changes to the locks in the future.

The 'dead thread list' lock is intended to replace the gc mutex.
This doesn't have any practical ramifications. It simply makes it
clearer what the purpose of the lock is. The gc will use this lock,
instead of the gc mutex, to synchronize access to the dead list with
other threads.

Modify _pthread_exit() to use these two new locks instead of GIANT_LOCK,
and also to properly lock and protect thread state changes,
especially with respect to a joining thread.

The gc thread was also re-arranged to be more organized and less nested.

_pthread_join() was also modified to use the thread list locks. However,
locking and unlocking here needs special care because a thread could find
itself in a position where it's joining an exiting thread that is
waiting on the dead list lock, which this thread (joiner) holds. If the
joiner doesn't take care to lock *and* unlock in the same order, they
(the joiner and the joinee) could deadlock against each other.

Approved by:	re/blanket libthr
2003-05-25 08:31:33 +00:00
Mike Makonnen
6a1899ed5c The libthr code makes use of higher-level primitives (pthread_mutex_t and
pthread_cond_t) internally in addition to the low-level spinlock_t. The
garbage collector mutex and condition variable are two such examples. This
might lead to critical sections nested within critical sections. Implement
a reference counting mechanism so that signals are masked only on the first
entry and unmasked on the last exit.

I'm not sure I like the idea of nested critical sections, but if
the library is going to use the pthread primitives it might be necessary.

Approved by:	re/blanket libthr
2003-05-25 07:58:22 +00:00
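
A sketch of such a reference count. In the real library the count would live in per-thread state; here a single structure and hypothetical names stand in:

    #include <pthread.h>
    #include <signal.h>

    struct crit_state {
            sigset_t        saved;          /* mask to restore on last exit */
            int             depth;          /* nesting level */
    };

    static void
    enter_critical(struct crit_state *cs)
    {
            sigset_t all;

            if (cs->depth++ == 0) {         /* mask only on the first entry */
                    sigfillset(&all);
                    pthread_sigmask(SIG_BLOCK, &all, &cs->saved);
            }
    }

    static void
    leave_critical(struct crit_state *cs)
    {
            if (--cs->depth == 0)           /* unmask only on the last exit */
                    pthread_sigmask(SIG_SETMASK, &cs->saved, NULL);
    }

Nesting then costs only an increment and a decrement; the signal mask is touched just twice per outermost critical section.
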