not spinlock_t. The spinlock_t type and its associated functions and macros
may require blocking signals in order for async-safe libc functions to behave
appropriately in libthr. This is undesirable for libthr internal locking.
So, this is the first step in completely separating libthr from libc's
locking primitives.
Three new macros should be used for internal libthr locking from now on:
THR_LOCK, THR_TRYLOCK, THR_UNLOCK.
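For illustration only, such macros might be layered over the existing spin
lock routines roughly like this; the macro names come from this entry, but the
expansions are hypothetical, not the actual libthr definitions:

    #define THR_LOCK(lck)    _spinlock(lck)     /* acquire; no signal-mask juggling */
    #define THR_TRYLOCK(lck) _spintrylock(lck)  /* non-blocking attempt, 0 on success */
    #define THR_UNLOCK(lck)  _spinunlock(lck)   /* release */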
and the disabling of signals. What we are really interested in is
keeping track of recursive disabling of signals. We should not
be recursively acquiring thread locks. Any such situations should
be reorganized to not require a recursive lock.
Separating the two out also allows us to block signals independent of
acquiring thread locks. This will be needed in libthr in the near future when
we put the pieces together to protect libc functions that use pthread mutexes
and low level locks.
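As a rough sketch only (every name here other than THR_LOCK/THR_UNLOCK is
made up), separating the two concerns lets a caller do either independently:

    void
    example(struct pthread *curthread, spinlock_t *lck)
    {
            _thr_signal_block(curthread);   /* bump the per-thread block count */
            THR_LOCK(lck);                  /* lock acquisition no longer touches the mask */
            /* ... */
            THR_UNLOCK(lck);
            _thr_signal_unblock(curthread); /* restore the mask when the count hits zero */
    }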
implementation and the new improved one. We now precompute the
signal set passed to sigtimedwait, using an inverted set when
necessary for compatibility with older kernels.
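Illustrative sketch only (the names are made up): the set is built once at
initialization instead of on every call:

    #include <signal.h>

    static sigset_t wait_set;                /* handed to sigtimedwait() */

    static void
    init_wait_set(void)
    {
            sigfillset(&wait_set);           /* compute once, reuse for every wait */
            /* On older kernels the complement of this set would be
             * computed and passed instead. */
    }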
exit function has removed the need for _spin[un]lock_pthread().
The _spin[un]lock() functions can now dereference curthread without
the danger that the LDT entry containing the pointer to the thread
has been cleared out from under them.
an application compiled -static with libthr would dump core in
malloc(3) because the stub thread initialization routine in libc would
be used instead of the libthr supplied one.
condition variables. Cosmetic.
Explicitly compare against PTHREAD_MUTEX_INITIALIZER. We shouldn't
encourage calls to the mutex functions with null pointers to mutexes.
Approved by: re/jhb
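For illustration (the init helper is hypothetical), the change in style is:

    /* Before: relied on PTHREAD_MUTEX_INITIALIZER happening to be NULL. */
    if (*mutex == NULL)
            mutex_init(mutex);

    /* After: states the intent explicitly. */
    if (*mutex == PTHREAD_MUTEX_INITIALIZER)
            mutex_init(mutex);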
from multiple threads don't initialize the same condition variable
more than once.
Explicitly compare cond pointers with PTHREAD_COND_INITIALIZER instead
of NULL. Just because it happens to be defined as NULL is no reason
to encourage the idea that people can call those functions with
NULL pointers to a condition variable.
Approved by: re/jhb
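A sketch of the once-only initialization described above, using a hypothetical
static-initializer lock rather than the actual libthr code:

    int ret = 0;

    if (*cond == PTHREAD_COND_INITIALIZER) {
            _spinlock(&static_init_lock);
            /* Re-check under the lock so only the first caller initializes. */
            if (*cond == PTHREAD_COND_INITIALIZER)
                    ret = pthread_cond_init(cond, NULL);
            _spinunlock(&static_init_lock);
    }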
The dead thread list is sufficient for synchronization.
Retire the arch_id (ldt array slot) in the gc thread instead of
doing it in the thread itself.
Approved by: re/jhb
joiner by making sure all locks and unlocks occur in the same order. For
the record, the lock order is: DEAD_LIST, THREAD_LIST, exiting thread, joiner
thread.
Approved by: re/rwatson
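In sketch form (the list-lock macro names are hypothetical; the per-thread
locks are taken via _thread_critical_enter()), the documented order is:

    DEAD_LIST_LOCK;                            /* 1. dead thread list   */
    THREAD_LIST_LOCK;                          /* 2. active thread list */
    _thread_critical_enter(exiting_thread);    /* 3. exiting thread     */
    _thread_critical_enter(joiner_thread);     /* 4. joiner thread      */
    /* ... and unlocks happen in the same order everywhere. */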
thread is not dead, the join loop is guaranteed to execute at least
once, so there is no need to pick up the thread list lock after
we return from suspension only to release it after the loop.
Approved by: re/blanket libthr
joined and then the joiner thread. There isn't an easy (sane?) way
to make it use the correct order without introducing races involving
the target thread and finding which (active or dead) list it is on. So,
after locking the canceled thread it will try to lock the joined thread
and, if that fails, release the first lock and try again from the top.
Introduce a new function, _spintrylock, which is simply a wrapper around
umtx_trylock(), to help accomplish this.
Approved by: re/blanket libthr
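The retry pattern being described looks roughly like this; the variable names
are illustrative and error handling is omitted:

    for (;;) {
            _spinlock(&canceled->lock);              /* wrong-order lock first */
            if (_spintrylock(&joined->lock) == 0)
                    break;                           /* got both locks; proceed */
            _spinunlock(&canceled->lock);            /* back off and retry */
    }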
Modify the thread creation and thread searching routines
to lock the thread lists with the new locks instead of GIANT_LOCK.
Approved by: re/blanket libthr
list is protected by a spinlock_t, but the dead list uses a pthread_mutex
because it is necessary to synchronize other threads with the garbage
collector thread. Lock/Unlock macros are used so it's easier to make
changes to the locks in the future.
The 'dead thread list' lock is intended to replace the gc mutex.
This doesn't have any practical ramifications. It simply makes it
clearer what the purpose of the lock is. The gc will use this lock,
instead of the gc mutex, to synchronize access to the dead list with
other threads.
Modify _pthread_exit() to use these two new locks instead of GIANT_LOCK,
and also to properly lock and protect thread state changes,
especially with respect to a joining thread.
The gc thread was also re-arranged to be more organized and less nested.
_pthread_join() was also modified to use the thread list locks. However,
locking and unlocking here needs special care because a thread could find
itself in a position where it's joining an exiting thread that is
waiting on the dead list lock, which this thread (joiner) holds. If the
joiner doesn't take care to lock *and* unlock in the same order, they
(the joiner and the joinee) could deadlock against each other.
Approved by: re/blanket libthr
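A sketch of the lock/unlock macros in the spirit described above, with a
spinlock_t protecting the active list and a pthread_mutex protecting the dead
list; the names and expansions are illustrative only:

    #define THREAD_LIST_LOCK    _spinlock(&thread_list_lock)
    #define THREAD_LIST_UNLOCK  _spinunlock(&thread_list_lock)
    #define DEAD_LIST_LOCK      pthread_mutex_lock(&dead_list_mutex)
    #define DEAD_LIST_UNLOCK    pthread_mutex_unlock(&dead_list_mutex)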
pthread_cond_t) internally in addition to the low-level spinlock_t. The
garbage collector mutex and condition variable are two such examples. This
might lead to critical sections nested within critical sections. Implement
a reference counting mechanism so that signals are masked only on the first
entry and unmasked on the last exit.
I'm not sure I like the idea of nested critical sections, but if
the library is going to use the pthread primitives it might be necessary.
Approved by: re/blanket libthr
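A minimal sketch of the reference counting, with hypothetical field and helper
names and only the signal-mask bookkeeping shown:

    void
    _thread_critical_enter(struct pthread *thr)
    {
            if (thr->crit_count++ == 0)
                    mask_signals(thr);      /* mask only on the first entry */
    }

    void
    _thread_critical_exit(struct pthread *thr)
    {
            if (--thr->crit_count == 0)
                    unmask_signals(thr);    /* unmask only on the last exit */
    }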
Access to the thread's flags and state is protected by
_thread_critical_enter/exit(). When a thread is signaled with a condition,
its state must be protected by locking it and disabling
signals before it is taken off the waiters' queue.
Move the implementation of pthread_cond_signal() and pthread_cond_broadcast()
into one function, cond_signal(). Its behaviour is determined by the
last argument, int broadcast. If this is set to 1 it will remove all
waiters, otherwise it will wake up only the first waiter thread.
Remove an extraneous call to pthread_testcancel().
Approved by: re/blanket libthr
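In outline, the consolidation looks like this; the helper name comes from the
entry above, the bodies are illustrative:

    static int cond_signal(pthread_cond_t *cond, int broadcast);

    int
    pthread_cond_signal(pthread_cond_t *cond)
    {
            return (cond_signal(cond, 0));    /* wake only the first waiter */
    }

    int
    pthread_cond_broadcast(pthread_cond_t *cond)
    {
            return (cond_signal(cond, 1));    /* wake every waiter */
    }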
that take the address of a struct pthread as their first argument.
_spin[un]lock() just become wrappers around these two functions.
These new functions are for use in situations where curthread can't be
used. One example is _thread_retire(), where we invalidate the array index
curthread uses to get its pointer.
Approved by: re/blanket libthr
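A sketch of the relationship described above; the exact prototypes may differ:

    void
    _spinlock(spinlock_t *lck)
    {
            _spinlock_pthread(curthread, lck);
    }

    void
    _spinunlock(spinlock_t *lck)
    {
            _spinunlock_pthread(curthread, lck);
    }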
Prevent one thread from messing up another thread's saved signal
mask by saving it in struct pthread instead of leaving it as a
global variable. D'oh!
Approved by: re/blanket libthr
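In sketch form (the field name is hypothetical), the saved mask simply becomes
a member of struct pthread instead of a file-scope global:

    struct pthread {
            /* ... existing fields ... */
            sigset_t        saved_sigmask;   /* per-thread saved signal mask */
    };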
When, in either the mutex or cond queue, we notice that the thread
is already on one of the queues, don't just abort(). Print
out the thread's identifiers and which queue it was on.
Approved by: markm/mentor, re/blanket libthr
is the *only* remaining thread in the application, in which case we
should not core dump, and instead exit gracefully.
Approved by: markm/mentor, re/blanket libthr
of pthread_cond_timedwait() is moved into cond_wait_common().
Pthread_cond_wait() and pthread_cond_timedwait() are now wrappers around
this function. Previously, the former called the latter with the abstime
pointing to 0 time. This violated POSIX semantics: should an application
have reason to call it with that argument, instead of returning
immediately it would have waited indefinitely for the cv to be signaled.
Approved by: markm/mentor, re/blanket libthr
Reviewed by: jeff
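In outline (the helper name comes from the entry above; the bodies are
illustrative):

    static int cond_wait_common(pthread_cond_t *cond, pthread_mutex_t *mutex,
        const struct timespec *abstime);

    int
    pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
    {
            /* NULL means wait forever, not an absolute time of 0. */
            return (cond_wait_common(cond, mutex, NULL));
    }

    int
    pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
        const struct timespec *abstime)
    {
            return (cond_wait_common(cond, mutex, abstime));
    }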
respect to other threads and signal handlers by moving to
the _thread_critical_enter/exit functions.
o Introduce a static function, testcancel(), that is used by
the other functions in this module. This allows it to make
locking assumptions that the top-level functions can't.
o Rework the code flow a bit to reduce indentation levels.
Approved by: markm/mentor, re/blanket libthr
Reviewed by: jeff
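For illustration only (this wrapper is a guess at how the static helper might
be used, not the actual code):

    static void testcancel(struct pthread *thr);    /* caller already holds thr locked */

    void
    pthread_testcancel(void)
    {
            struct pthread *curthread = _get_curthread();

            _thread_critical_enter(curthread);
            testcancel(curthread);                  /* act on any pending cancel */
            _thread_critical_exit(curthread);
    }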
libthr. No changes were made to libpthread at the request of deischen,
who will soon commit a real implementation for that library.
PR: standards/50848
Submitted by: Sergey A. Osokin <osa@freebsd.org.ru>
MFC after: 1 week
as curthread in the new context, so that it will be set automatically when
the thread is switched to. This fixes a race where we'd run for a little
while with curthread unset in _thread_start.
Reviewed by: jeff
_get_curthread(). This is similar to the kernel's curthread. Doing
this saves stack overhead and is more convenient to the programmer.
- Pass the pointer to the newly created thread to _thread_init().
- Remove _get_curthread_slow().
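For illustration, the usage pattern this enables is roughly the following; the
caller shown is hypothetical:

    void
    some_libthr_function(void)
    {
            struct pthread *curthread = _get_curthread();

            /* use curthread directly; no slow lookup, no extra argument */
    }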
This was changed because originally we were blocking on the umtx and
allowing the kernel to do the queueing. It was decided that the
library should queue and start the threads in the order it decides, and that
the umtx code would just be used like spinlocks.