state for amd64 was twice as large as necessary. Peter
recently fixed this, so the comment no longer applies.
Also, since the size of struct mcontext changed, adjust
the threads library version of get&set context to match.
FYI, any layout/size change to any arch's struct
mcontext will likely need some minor changes in libpthread.
to avoid a potential memory leak. Also fix a bug in pthread_create():
the contention scope should be inherited when PTHREAD_INHERIT_SCHED is set,
and the correct field should be checked for PTHREAD_INHERIT_SCHED, since the
scheduling inherit flag is stored in sched_inherit (a small illustration
follows this list).
2. Execute hooks registered by atexit() on the thread stack rather than on
the scheduler stack.
3. Simplify some code in _kse_single_thread by calling xxx_destroy functions.
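
A minimal user-level sketch of the inheritance behavior in item 1, assuming
nothing about the library internals: with PTHREAD_INHERIT_SCHED the new
thread should take its contention scope from the creating thread, and any
scope stored in the attribute object is ignored.

    #include <pthread.h>

    static void *
    worker(void *arg)
    {
        (void)arg;
        return (NULL);
    }

    int
    main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;

        pthread_attr_init(&attr);
        /* Inherit scheduling attributes (including contention scope)
           from the creating thread instead of from attr. */
        pthread_attr_setinheritsched(&attr, PTHREAD_INHERIT_SCHED);
        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return (0);
    }
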
Reviewed by: deischen
should be the value passed to pthread_attr_setguardsize(), not a rounded-up
value.
Also fix a stack size matching bug in thr_stack.c: the stack matching code now
compares stack sizes in pages rather than bytes, so, for example, requests of
512 bytes and 513 bytes both match a one-page stack.
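
A hedged sketch (not the thr_stack.c code) of matching cached stacks by page
count rather than byte length; stack_pages() is an illustrative helper.

    #include <unistd.h>

    static size_t
    stack_pages(size_t size)
    {
        size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);

        /* round the request up to whole pages */
        return ((size + pagesize - 1) / pagesize);
    }
    /* stack_pages(512) == stack_pages(513) == 1 with 4K pages, so both
       requests can reuse the same cached one-page stack. */
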
Reviewed by: deischen
a shared library or any other dynamically allocated data block: once a
pthread_once_t is initialized, a mutex is allocated, and if we unload the
shared library or free that data block there is no way to deallocate the
mutex; the result is a memory leak.
To fix this problem, we no longer use the mutex field in pthread_once_t.
Instead, we use its state field together with an internal mutex and condition
variable in libkse for synchronization, and introduce a third state,
IN_PROGRESS, on which to wait if another thread is already invoking
init_routine().
Also, while I am here, make pthread_once() conform to the pthread cancellation
point specification.
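
A simplified sketch of the approach, assuming illustrative names for the
internal lock, condition variable and state values (the real libkse code
differs, e.g. in its cancellation handling): only the state word in
pthread_once_t is used, so nothing is allocated per pthread_once_t.

    #include <pthread.h>

    #define ONCE_NEVER_DONE  0
    #define ONCE_IN_PROGRESS 1
    #define ONCE_DONE        2

    static pthread_mutex_t once_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  once_cv   = PTHREAD_COND_INITIALIZER;

    static void
    once_sketch(int *state, void (*init_routine)(void))
    {
        pthread_mutex_lock(&once_lock);
        /* wait while another thread is running init_routine() */
        while (*state == ONCE_IN_PROGRESS)
            pthread_cond_wait(&once_cv, &once_lock);
        if (*state == ONCE_NEVER_DONE) {
            *state = ONCE_IN_PROGRESS;
            pthread_mutex_unlock(&once_lock);
            init_routine();             /* run without the lock held */
            pthread_mutex_lock(&once_lock);
            *state = ONCE_DONE;
            pthread_cond_broadcast(&once_cv);
        }
        pthread_mutex_unlock(&once_lock);
    }
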
Reviewed by: deischen
instead of long types for low-level locks.
Add prototypes for some internal libc functions that are
wrapped by the library as cancellation points.
Add memory barriers to alpha atomic swap functions (submitted
by davidxu).
Requested by: bde
critical region. We wrap some syscalls as thread cancellation points, and
when a syscall returns we call _thr_leave_cancellation_point; if a signal
comes in at that time it is buffered, and the buffered signals are processed
when the thread leaves _thr_leave_cancellation_point. To avoid messing up the
syscall's errno, we should save and restore errno around the signal handling
code.
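
Illustrative only, with a hypothetical helper name: the point is simply that
errno is preserved across whatever deferred signal handling happens when the
cancellation point is left.

    #include <errno.h>

    static void
    leave_cancellation_point_sketch(void)
    {
        int saved_errno = errno;

        /* ... process any signals buffered during the critical region ... */

        errno = saved_errno;    /* keep the wrapped syscall's errno intact */
    }
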
yet, so we can protect some locking code from being interrupted by signal
handling. When KSE mode is turned on, reset the thread flag to scope process,
except when we are running in 1:1 mode, where it need not be turned off.
Also remove some unused member variables in structure kse.
Tested by: deischen
have execute permissions. Run "perl verify" instead. Replace all
occurrences of the hardcoding of ./verify with $(VERIFY) to allow
it to be overridden as well.
otherwise mask all signals until fork() returns; in the child process,
we reset library state before restoring the signal mask, keeping signals
blocked until we reach a safe point.
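
A hedged sketch of the described fork() wrapping; reset_library_state() stands
in for the real re-initialization and is not an actual function name.

    #include <signal.h>
    #include <unistd.h>

    static pid_t
    fork_sketch(void)
    {
        sigset_t all, saved;
        pid_t pid;

        sigfillset(&all);
        sigprocmask(SIG_SETMASK, &all, &saved); /* block everything */
        pid = fork();
        if (pid == 0) {
            /* child: get library state back to a safe point before
               signals can be delivered again */
            /* reset_library_state(); */
        }
        sigprocmask(SIG_SETMASK, &saved, NULL); /* parent and child */
        return (pid);
    }
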
Reviewed by: deischen
happens, the context of the interrupted thread is exported to
userland. Unlike most contexts, it will be an async context and
we cannot easily use our existing functions to set such a
context.
To avoid a lot of complexity that may possibly interfere with
the common case, we simply let the kernel deal with it. However,
we don't use the EPC based syscall path to invoke setcontext(2).
No, we use the break-based syscall path. That way the trapframe
will be compatible with the context we're trying to restore and
we save the kernel a lot of trouble. The kind of trouble we did
not want to go through ourselves...
However, we also need to set the threads mailbox and there's no
syscall to help us out. To avoid creating a new syscall, we use
the context itself to pass the information to the kernel so that
the kernel can update the mailbox. This involves setting a flag
(_MC_FLAGS_KSE_SET_MBOX) and setting ifa (the address) and isr
(the value).
TCB. We know that the thread pointer points to &tcb->tcb_tp, so all
we have to do is subtract offsetof(struct tcb, tcb_tp) from the
thread pointer to get to the TCB. Any reasonably smart compiler will
translate accesses to fields in the TCB as negative offsets from TP.
In _tcb_set() make sure the fake TCB gets a pointer to the current
KCB, just like any other TCB. This fixes a NULL-pointer dereference
in _thr_ref_add() when it tried to get the current KSE.
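
An illustrative layout (the real definitions live in the MD headers) showing
how the TCB is recovered from the thread pointer with a constant negative
offset:

    #include <stddef.h>

    struct tcb_sketch {
        long    tcb_other;      /* fields preceding the tp block */
        struct {
            long    tp_word0;
            long    tp_word1;
        }       tcb_tp;         /* the thread pointer points here */
    };

    static inline struct tcb_sketch *
    tcb_from_tp(void *tp)
    {
        /* TP points at &tcb->tcb_tp, so step back to the TCB itself */
        return ((struct tcb_sketch *)
            ((char *)tp - offsetof(struct tcb_sketch, tcb_tp)));
    }
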
makecontext(). We only supply 3, not 4. This is mostly harmless,
except that on ia64 the garbage can include NaT bits, resulting
in NaT consumption faults.
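
A generic illustration of the rule being applied (not the libpthread call
site): if the target function is declared with four parameters, makecontext()
must be given all four, or the missing one is read as garbage.

    #include <ucontext.h>

    static char stack[65536];
    static ucontext_t uc, back;

    static void
    entry(int a, int b, int c, int d)
    {
        (void)a; (void)b; (void)c; (void)d;
    }

    static void
    setup(void)
    {
        getcontext(&uc);
        uc.uc_stack.ss_sp = stack;
        uc.uc_stack.ss_size = sizeof(stack);
        uc.uc_link = &back;
        /* entry() takes four arguments, so pass four, not three */
        makecontext(&uc, (void (*)(void))entry, 4, 1, 2, 3, 4);
    }
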
that the TLS is 16-byte aligned, as well as guarantee that the thread
pointer is 16-byte aligned as it points to struct ia64_tp. Likewise,
struct tcb and struct ksd are also guaranteed to be 16-byte aligned
(if they weren't already).
archs that can (or are required to) have per-thread registers.
Tested on i386, amd64; marcel is testing on ia64 and will
have some follow-up commits.
Reviewed by: davidxu
context functions. We don't need to enter the kernel anymore. The
contexts are compatible (ie a context created by getcontext() can
be restored by _ia64_restore_context()).
While here, make the use of THR_ALIGNBYTES and THR_ALIGN a no-op.
They are going to be removed anyway.
We write 1 for r8 in the context so that _ia64_restore_context()
will return with a non-zero value. _ia64_save_context() always
returns 0.
o In _ia64_restore_context(), don't restore the thread pointer. It
is not normally part of the context. Also, restore the return
registers. We get called for contexts created by getcontext(),
which means we have to restore all the syscall return values.
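
A usage sketch of the convention described above, assuming a prototype that
takes an mcontext_t pointer: the save path returns 0 and a resume via
_ia64_restore_context() makes the same call appear to return non-zero, much
like setjmp()/longjmp().

    #include <ucontext.h>

    int _ia64_save_context(mcontext_t *mc);     /* assumed prototype */

    static void
    switch_away_sketch(mcontext_t *mc)
    {
        if (_ia64_save_context(mc) == 0) {
            /* 0: state just saved; go run something else */
        } else {
            /* non-zero (the 1 written to r8): we were resumed via
               _ia64_restore_context() on this same context */
        }
    }
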
the userland version of [gs]etcontext to switch between a thread
and the UTS scheduler (and back again). This also fixes a bug
in i386 _thr_setcontext() which wasn't properly restoring the
context.
Reviewed by: davidxu
This eliminates ping-ponging of locks, where the idle KSE wakes
up only to find the lock it needs is being held. This gives
little or no gain to M:N mode but greatly speeds up 1:1 mode.
Reviewed & Tested by: davidxu
handed-off/signaled to a higher priority thread. Note that when
there are idle KSEs that could run the higher priority thread,
we still add the preemption point because it seems to take the
kernel a while to schedule an idle KSE. The drawbacks are that
threads will be swapped more often between CPUs (KSEs) and
that there will be an extra userland context switch (the idle
KSE is still woken and will probably resume the preempted
thread). We'll revisit this if and when idle CPU/KSE wakeup
times improve.
Inspired by: Petri Helenius <pete@he.iki.fi>
Reviewed by: davidxu
is a system-bound thread and when it is blocked, no upcall is generated.
o Add the ability for libkse to run in pure 1:1 threading mode;
defining SYSTEM_SCOPE_ONLY in the Makefile turns on this option.
o Eliminate code for installing dummy signal handler for sigwait call.
o Add hash table to find thread.
Reviewed by: deischen
its waitset; but if the signal is not masked by the thread, the signal
can interrupt the thread and the signal action can be invoked by it, in
which case sigwait should return with errno set to EINTR.
Also save and restore thread-internal state (timeout and interrupted)
around signal handler invocation.
signals were changed in the kernel, it will retrieve the pending set and
try to find a thread to dispatch the signal to. The dispatching process
can be rolled back if the signal is no longer pending in the kernel.
o Create two functions, _thr_signal_init() and _thr_signal_deinit(). All
signal action settings are retrieved from the kernel when threading mode
is turned on; after a fork(), the child process resets them to the user
settings by calling _thr_signal_deinit(). When threading mode is not
turned on, all signal operations are passed directly to the kernel.
o When a thread generates a synchronous signal and its context is returned
from the completed list, the UTS will retrieve the signal from its mailbox
and try to deliver the signal to the thread.
o The context signal mask is now only used when delivering signals; a
thread's current signal mask is always the one in the pthread structure.
o Remove the have_signals field in the pthread structure and replace it
with psf_valid in pthread_signal_frame. When psf_valid is true, at context
switch time the thread backs itself out of any internal mutex/condition
queues and then begins to process signals. When a thread is running and
not blocked, check_pending indicates there are signals for the thread;
after it is preempted and later resumed, the UTS will try to deliver the
signals to it.
o At signal delivery time, not only the thread's pending signals are
scanned; the process's pending signals are scanned too.
o Change the sigwait code a bit: remove the sigwait field in
pthread_wait_data and replace it with oldsigmask in the pthread structure.
When a thread calls sigwait(), its current signal mask is backed up to
oldsigmask and the waitset is copied into its signal mask; when the thread
gets a signal in the waitset range, its current signal mask is restored
from oldsigmask. These steps are done atomically (see the sketch after
this list).
o Two additional POSIX APIs are implemented, sigwaitinfo() and sigtimedwait().
o Signal code locking is better than before; there are fewer race conditions.
o Temporarily disable most of the code in _kse_single_thread as it is not safe
after fork().
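
A hedged sketch of the sigwait() masking dance from the list above, using
pthread_sigmask() and a hypothetical helper name; the real code performs the
swap atomically inside the library.

    #include <pthread.h>
    #include <signal.h>

    static void
    sigwait_mask_sketch(const sigset_t *waitset, sigset_t *oldsigmask)
    {
        /* back up the current mask and install the waitset */
        pthread_sigmask(SIG_SETMASK, waitset, oldsigmask);

        /* ... sleep until a signal in *waitset is delivered ... */

        /* restore the original mask once the signal has been taken */
        pthread_sigmask(SIG_SETMASK, oldsigmask, NULL);
    }
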
functions are derived from the swapctx() and restorectx() (resp)
from sys/ia64/ia64/context.s. The code is expected to be 99%
correct, but has not yet been tested.
Note that with these functions operating on mcontext_t, we also
created the foundation upon which we can implement getcontext(2)
and setcontext(2) replacements. It's not guaranteed that the use
of these syscalls and _ia64_{save|restore}_context() on the same
ucontext_t is actually going to work. Replacing the syscalls is
now trivially achieved.
This commit completes the ia64 port of libpthread itself (modulo
testing and bugfixes).
the register stack and memory stack and call the function given to it.
While here, provide empty, non-working, stubs for the context functions
(_ia64_save_context() and _ia64_restore_context()) so that anyone can at
least compile libkse from CVS sources. Real implementations will follow
soon.
minimize the amount and complexity of assembly code that needs to be
written. This way the core functionality is spread over 3 elementary
functions that don't have to do anything that can more easily and
more safely be done in C. As such, assembly code will only have to
know about the definition of mcontext_t.
The runtime cost of not having these functions being inlined is less
important than the cleanliness and maintainability of the code at
this stage of the implementation.
platforms the compiler warns about incompatible integer/pointer casts
and on ia64 this generally is bad news. We know that what we're doing
here is valid/correct, so suppress the warning. No functional change.
Sleeps better: marcel
by moving the definition of struct ksd to pthread_md.h and removing
the inclusion of ksd.h from thr_private.h (which has the definition
of struct kse and kse_critical_t). This allows ksd.h to have inline
functions that use struct kse and kse_critical_t and generally
yields a cleaner implementation at the cost of not having all ksd
related types/definitions in one header.
Implement the ksd functionality on ia64 by using inline functions
and permanently remove ksd.c from the ia64 specific makefile.
This change does not clean up the i386 specific version of ksd.h.
NOTE: The ksd code on ia64 abuses the tp register in the same way
as it is abused in libthr in that it is incompatible with the
runtime specification. This will be addressed when support for TLS
hits the tree.
_ksd_readandclear_tmbx to be function-like. That way we
can define them as inline functions or create prototypes
for them.
This change allows the ksd interface on ia64 to be fully
inlined.
the chance of getting the same thread id when allocating a
new thread is reduced. This won't work if the application
creates a new thread for every time a thread exits, but
we're still within the allowances of POSIX.
debugging is enabled so the symbol needs to be resolved before rtld
locking is enabled. I may not really know what I'm talking about,
but it works.
Submitted by: kan
path, making them suitable for direct use by the dynamic loader.
Register libpthread-specific locking API with rtld on startup.
This still has some rough edges with signals which should be
addressed later.
Approved by: re (scottl)
is called and the application is not threaded. This works around
a problem when an application that hasn't yet become threaded
tries to jump out of a signal handler.
Reported by: mbr
Approved by: re@ (rwatson)
low-level lock used by the libpthread implementation. In the
future, we'll eliminate spinlocks from libc but that will wait
until after 5.1-release.
Don't call an application signal handler if the handler is
the same as the library-installed handler. This seems to
be possible after a fork and is the cause of konsole hangs.
Approved by: re@ (jhb)
a lock is being waited on.
Fix races in join and cancellation.
When trying to wait on a CV and the library is not yet
threaded, make it threaded so that waiting actually works.
When trying to nanosleep() and we're not threaded, just
call the system call nanosleep instead of adding the thread
to the wait queue.
Clean up adding/removing new threads to the "all threads queue",
assigning them unique ids, and tracking how many active threads
there are. Do it all when the thread is added to the scheduling
queue instead of making pthread_create() know how to do it.
Fix a race where a thread could be marked for signal delivery
but it could be exited before we actually add the signal to it.
Other minor cleanups and bug fixes.
Submitted by: davidxu
Approved by: re@ (blanket for libpthread)
be external (initialize()!).
Remove cancellation points from _pthread_cond_wait and
_pthread_cond_timedwait (single underscore versions are
libc private functions). Point the weak reference(!) for
these functions to the versions with cancellation points.
Approved by: re@(blanket till 5/19)
Pointed out by: kan (cancellation point bug)
lock held (_thr_sched_switch_unlocked()) and use this to avoid
dropping the scheduler lock and having the scheduler retake the
same lock again.
Add a better way of detecting if a low-level lock is in use.
When switching out a thread due to blocking in the UTS, don't
switch to the KSE's scheduler stack only to switch back to
another thread. If possible switch to the new thread directly
from the old thread and avoid the overhead of the extra
context switch.
Check for pending signals on a thread when entering the scheduler
and add them to the thread's signal frame. This includes some
other minor signal fixes.
Most of this was a joint effort between davidxu and myself.
Reviewed by: davidxu
Approved by: re@ (blanket for libpthread)
a thread lock).
Better protect access to thread state while searching for
threads to handle a signal.
Better protect access to process pending signals while processing
a thread in sigwait().
Submitted by: davidxu
KSEs when its thread exits; allow the GC handler to do that.
o Make spinlock/spinlock critical regions.
The following were submitted by davidxu
o Allow thr_switch() to take a null mailbox argument.
o Better protect cancellation checks.
o Don't set KSE specific data when creating new KSEs; rely on the
first upcall of the KSE to set it.
o Add the ability to set the maximum concurrency level and do this
automatically. We should have a way to enable/disable this with
some sort of tunable because some applications may not want this
to be the default.
o Hold the scheduling lock across thread switch calls.
o If scheduling of a thread fails, make sure to remove it from the list
of active threads.
o Better protect accesses to a joining thread when the target thread has
exited and been detached.
o Remove some macro definitions that are now provided by <sys/kse.h>.
o Don't leave the library in threaded mode if creation of the initial
KSE fails.
o Wakeup idle KSEs when there are threads ready to run.
o Maintain the number of threads active in the priority queue.
While I'm here, use the TAILQ_FOREACH macro instead of a more
manual method which was inherited from libc_r (so we could
remove elements from the list which isn't needed for libpthread).
Submitted by: Kazuaki Oda <kaakun@highway.ne.jp>
provided by Sergey A. Osokin <osa@freebsd.org.ru>.
In order to test this on a single CPU machine, you need to:
sysctl kern.threads.debug=1
sysctl kern.threads.virtual_cpu=2
lock level is 0. Thus far, the threads implementation doesn't use
mutexes or condition variables so the lock level should be 0.
Save the return value when trying to schedule a new thread and
use this to return an error from pthread_create().
Change the max sleep time for an idle KSE to 1 minute from 2 minutes.
Maintain a count of the number of KSEs within a KSEG.
With these changes scope system threads seem to work, but heavy
use of them crash the kernel (supposedly VM bugs).
to be instances where the kernel doesn't properly save and/or
restore it.
Use noupcall and nocompleted flags in the KSE mailbox. These
require kernel changes to work which will be committed sometime
later. Things still work without the changes.
Remove the general kse entry function and use two different
functions -- one for scope system threads and one for scope
process threads. The scope system function is not yet enabled
and we use the same function for all threads at the moment.
Keep a copy of the KSE stack for the case that a KSE runs
a scope system thread and uses the same stack as the thread
(no upcalls are generated, so a separate stack isn't needed).
This isn't enabled yet.
Use a separate field for the KSE waiting flag. It isn't
correct to use the mailbox flags field.
The following fixes were provided by David Xu:
o Initialize condition variable locks with thread versions
of the low-level locking functions instead of the kse versions.
o Enable threading before creating the first thread instead
of after.
o Don't enter critical regions when trying to malloc/free
or call functions that malloc/free.
o Take the scheduling lock when inheriting thread attributes.
o Check the attribute's stack pointer instead of the
attribute's stack size for null when allocating a
thread's stack.
o Add a kseg reinit function so we don't have to destroy and
then recreate the same lock.
o Check the return value of kse_create() and return an
appropriate error if it fails.
o Don't forget to destroy a thread's locks when freeing it.
o Examine the correct flags word for checking to see if
a thread is in a synchronization queue.
Things should now work on an SMP kernel.
environment. This includes support for multiple KSEs and KSEGs.
The ability to create more than 1 KSE via pthread_setconcurrency()
is in the works as well as support for PTHREAD_SCOPE_SYSTEM threads.
Those should come shortly.
There are still some known issues which davidxu and I are working
on, but it'll make it easier for us by committing what we have.
This library now passes all of the ACE tests that libc_r passes
with the exception of one. It also seems to work OK with KDE
including konqueror, kwrite, etc. I haven't been able to get
mozilla to run due to lack of java plugin, so I'd be interested
to see how it works with that.
Reviewed by: davidxu
more complicated things than just setting the lock to 0.
- Implement stubs for this function in libc and the two threading libraries
that are currently in the tree.
In _thread_switch, set the current thread pointer in the kse mailbox
only after all registers have been copied out of the thread mailbox; the
kernel will do an upcall at trap time, and if the current thread pointer is
set before all registers are loaded from the thread mailbox, the thread
mailbox data will be overwritten by the kernel at trap time, with the result
that junk data is loaded into the CPU.
`sigprocmask', `sigaltstack', and `sigwait' as well as to the
prototypes of the apparently unimplemented functions `sigtimedwait'
and `sigwaitinfo'. This complies with IEEE Std 1003.1-2001.
The new libpthread will provide POSIX threading support using KSE.
These files were previously repo-copied from src/lib/libc_r.
Reviewed by: deischen
Approved by: -arch
at file flags and replace it with functions that will avoid null
pointer checks.
MFC to be done by archie ;-)
PR: 42100
Reviewed by: archie, robert
MFC after: 3 days
file descriptor bit if poll() returns POLLERR, POLLHUP, or POLLNVAL.
Otherwise, it's possible for select() to return successfully but
with no bits set.
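
An illustrative check of the poll() result bits being described; fd_is_ready()
is a made-up helper, not the library function.

    #include <poll.h>

    static int
    fd_is_ready(const struct pollfd *pfd)
    {
        /* error conditions count as ready so select() never reports
           success with an empty descriptor set */
        return ((pfd->revents &
            (POLLIN | POLLERR | POLLHUP | POLLNVAL)) != 0);
    }
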
Reviewed by: deischen
MFC after: 3 days
PR: bin/42175
on behalf of a thread, we should check the POLLERR, POLLHUP, and
POLLNVAL flags as well to wake up the thread in these cases.
Suggested by: deischen
MFC after: 3 days
and pthread_resume_all_np(). These suspend and resume all threads except
the current thread, respectively. The existing functions pthread_single_np()
and pthread_multi_np(), which formerly had no effect, now exhibit the same
behaviour as pthread_suspend_all_np() and pthread_resume_all_np(). These
functions have been added mostly for the native java port.
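
A usage sketch of the new non-portable interfaces (FreeBSD's <pthread_np.h>):

    #include <pthread.h>
    #include <pthread_np.h>

    static void
    stop_the_world_sketch(void)
    {
        pthread_suspend_all_np();   /* every thread but the caller stops */
        /* ... inspect or modify now-quiescent shared state ... */
        pthread_resume_all_np();    /* let the other threads run again */
    }
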
Don't allow the uthread kernel pipe to use the same descriptors as
stdio. Mostly submitted by Oswald Buddenhagen <ossi@kde.org>.
Correct some minor style nits.
startup code rather than a static C++ object since c++ seems to be broken.
This doesn't seem to work for statically linked programs just yet; I'll
give that some more work when I get a chance.
Treat the case of POLLNVAL as an error.
Remove POLLHUP and POLLERR from one case, their place is most likely
amongst read events.
PR: 33723
Submitted by: Alexander Litvin <archer@whichever.org>
Reviewed by: deischen [Provided a small change to the PR patch as well]
MFC after: 4 weeks
Also, make an internal _getprogname() that is used only inside
libc. For libc, getprogname(3) is a weak symbol in case a
function of the same name is defined in userland.
of an alternate signal stack for handling signals. Let the kernel
send signals on the stack of the current thread and teach the threads
signal handler how to deliver signals to the current thread if it
needs to. Also, always store a thread's context as a jmp_buf. Eventually
this will change to be a ucontext_t or mcontext_t.
Other small nits. Use struct pthread * instead of pthread_t in internal
library routines. The threads code wants struct pthread *, and pthread_t
doesn't necessarily have to be the same.
Reviewed by: jasone
return address when modifying a jmp_buf to create a new thread context.
Also set t12 with the return address.
This should fix libc_r on alpha.
With much detective work by: Bernd Walter <ticso@cicely.de>
the target thread of the join operation. This allows the cancelled
thread to detach the target thread in its cancellation handler.
This bug was found by Butenhof's cancel_subcontract test.
Reviewed by: jasone
kernel #defines to figure out where the stack is located. This stops
libc_r from exploding when the kernel is compiled with a different
KVM size. IMHO this is all kinda bogus, it would be better to just
check %esp and work from that.
- uthread_signal.c; libc_r does not wrap signal() since 1998/04/29.
- uthread_attr_setprio.c; it was never connected to the build, and
pthread_attr_setprio() does not exist in POSIX.
- uthread_sigblock.c and uthread_sigsetmask.c; these were no-ops
bloating libc_r's space.
pthread_private.h:
- Removed prototypes of non-syscalls: send().
- Removed prototypes of unused syscalls: sigpending(), sigsuspend(),
and select().
- Fixed prototype of fork().
- MFS: Fixed prototypes of <sys/socket.h> syscalls.
Reviewed by: deischen
Approved by: deischen, jasone
be malloc()ed, but they are now allocated using mmap(), just as the
default-size stacks are. A separate cache of stacks is kept for
non-default-size stacks.
Collaboration with: deischen
atomically:
1) Search _thread_list for the thread to join.
2) Search _dead_list for the thread to join.
3) Set the running thread as the joiner.
While we're at it, fix a race in the case where multiple threads try to
join on the same thread. POSIX says that the behavior of multiple joiners
is undefined, but the fix is cheap as a result of the other fix.
keep track of a joiner. POSIX only supports a single joiner, so this
simplification is acceptable.
At the same time, make sure to mark a joined thread as detached so that
its resources can be freed.
Reviewed by: deischen
PR: 24345
there is no need to wake all waiters to assure that the highest priority
thread is run. As the semaphore code is written, there was no correctness
problem, but the change improves sem_post() performance.
Pointed out by: deischen
process on fork(2).
It is the supposed behavior stated in the manpage of sigaction(2), and
Solaris, NetBSD and FreeBSD 3-STABLE correctly do so.
The previous fix against libc_r/uthread/uthread_fork.c fixed the
problem only for the programs linked with libc_r, so back it out and
fix fork(2) itself to help those not linked with libc_r as well.
PR: kern/26705
Submitted by: KUROSAWA Takahiro <fwkg7679@mb.infoweb.ne.jp>
Tested by: knu, GOTOU Yuuzou <gotoyuzo@notwork.org>,
and some other people
Not objected by: hackers
MFC in: 3 days
placed in any scheduling queue(s). The process of dispatching
signals to a thread can change its state which will attempt to add
or remove the thread from any scheduling queue to which it belongs.
This can break some assertions if the thread isn't in the queue(s)
implied by its state.
When dispatching a pending signal to a thread, be sure to
remove the signal from the thread's set of pending signals.
PR: 27035
Tested by: brian
MFC in: 1 week
a "#pragma weak" directive linking the external symbol. This matches
the other pthread_* definitions, and ensures that users of this
function from within libc get the real version, not the stub.
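
A sketch of the convention, using a hypothetical function name: the
implementation lives under the single-underscore name and the public name is
only a weak alias, so libc-internal callers bind to the real function rather
than the stub.

    int _pthread_example_np(void);

    #pragma weak pthread_example_np=_pthread_example_np

    int
    _pthread_example_np(void)
    {
        /* real implementation goes here */
        return (0);
    }
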
Suggested by: deischen
Reviewed by: deischen, alfred
associated changes that had to happen to make this possible as well as
bugs fixed along the way.
Bring in required TLI library routines to support this.
Since we don't support TLI we've essentially copied what NetBSD
has done, adding a thin layer to map the TLI calls directly onto
BSD socket calls.
This is mostly from Sun's tirpc release that was made in 1994,
however some fixes were backported from the 1999 release (supposedly
only made available after this porting effort was underway).
The submitter has agreed to continue on and bring us up to the
1999 release.
Several key features are introduced with this update:
Client calls are thread safe. (The 1999 code makes the server side
thread safe as well.)
Updated, a more modern interface.
Many userland updates were done to bring the code up to par with
the recent RPC API.
There is an update to the pthreads library, a function
pthread_main_np() was added to emulate a function of Sun's threads
library.
While we're at it, bring in NetBSD's lockd, it's been far too
long of a wait.
New rpcbind(8) replaces portmap(8) (supporting communication over
an authenticated Unix-domain socket, and by default only allowing
set and unset requests over that channel). It's much more secure
than the old portmapper.
Umount(8), mountd(8), mount_nfs(8), nfsd(8) have also been upgraded
to support TI-RPC and to support IPV6.
Umount(8) is also fixed to unmount pathnames longer than 80 chars,
which are currently truncated by the Kernel statfs structure.
Submitted by: Martin Blapp <mb@imp.ch>
Manpage review: ru
Secure RPC implemented by: wpaul
application to provide locking for I/O operations. This doesn't
break any of my tests, but the old behavior can be restored by
compiling with _FDLOCKS_ENABLED. This will eventually be removed
when it is obvious it does not cause any problems.
Remove most of flockfile implementation, with the exception of
flockfile_debug.
Make error messages more informational (submitted by Mike Heffner
<spock@techfour.net>, who's now known as mikeh@FreeBSD.org).
Add another check for thread library initialization (jdp, we
really need a way to get _thread_init called at program start
before any constructors are run).
_foo - wrapped system call
foo - weak definition to _foo
and for cancellation points:
_foo - wrapped system call
__foo - enter cancellation point, call _foo(), leave
cancellation point
foo - weak definition to __foo
Change use of global _thread_run to call a function to get the
currently running thread.
Make all pthread_foo functions weak definitions to _pthread_foo,
where _pthread_foo is the implementation. This allows an application
to provide its own pthread functions.
Provide slightly different versions of pthread_mutex_lock and
pthread_mutex_init so that we can tell the difference between
a libc mutex and an application mutex. Threads holding mutexes
internal to libc should never be allowed to exit, call signal
handlers, or cancel.
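
A hedged sketch of the naming scheme above for a hypothetical syscall "foo";
the cancellation-point helper names are illustrative and __weak_reference()
is the <sys/cdefs.h> macro.

    #include <sys/cdefs.h>

    void _thread_enter_cancellation_point(void);    /* illustrative */
    void _thread_leave_cancellation_point(void);

    int _foo(int arg);              /* the wrapped system call */

    int
    __foo(int arg)
    {
        int ret;

        _thread_enter_cancellation_point();
        ret = _foo(arg);
        _thread_leave_cancellation_point();
        return (ret);
    }

    __weak_reference(__foo, foo);   /* public name is a weak alias */
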
Approved by: -arch
referenced by libgcc.a.
This is needed when linking statically as SVR4 (ie, ELF) behavior is to only
link in a module if it satisfies an undefined strong reference from somewhere.
(this surprises a lot of people) Things are different when using shared libs,
the entire library and its modules and their symbols are available at run-time
(when the weak reference is seen to still be unsatisfied and is satisfied on
the spot), this is not the case with static libs.
Thus one can have a static binary with unresolved weak references, and at
run-time dereference a NULL pointer.
Submitted by: eischen
global time of day. This costs us nothing, but is a bit of a hack
to work around a process blocking and not having the time updated
by an ITIMER_PROF signal.
PR: 23679
executed at least once, fixing pthread_mutex_lock() for recursive
mutex lock attempts.
Correctly set a thread's signal mask while it is executing a signal
handler. The mask should be the union of its current mask, the
signal being handled, and the mask from the signal action.
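
An illustrative computation of the handler-time mask (assuming NSIG is
available from <signal.h>); handler_mask() is not the library's function.

    #include <signal.h>

    static void
    handler_mask(sigset_t *out, const sigset_t *current,
        const struct sigaction *act, int sig)
    {
        int i;

        *out = *current;
        sigaddset(out, sig);                /* the signal being handled */
        for (i = 1; i < NSIG; i++)          /* union in sa_mask */
            if (sigismember(&act->sa_mask, i))
                sigaddset(out, i);
    }
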
Reported by: Dan Nelson <dnelson@emsphone.com>
MFC Candidate
was not getting properly initialized in pthread_cond_signal()
and pthread_cond_broadcast(). Reportedly, this can cause
an application to die.
MFC candidate
Submitted by: ade
the kernel to (re)use the alternate signal stack. In this
case, we don't return normally from the signal handler,
so the kernel still thinks we are using the signal stack.
This fixes a nasty bug where the signal handler can start
fiddling with the stack of a thread while the handler is
actually running on the same stack.
MFC candidate
file descriptors needing to be polled (Doh!). Reported
by Dan Nelson <dnelson@emsphone.com>.
Don't install and start the scheduling timer until the
first thread is created. This prevents the overhead of
having a periodic scheduling signal in a single threaded
program. Reported by Dan Nelson <dnelson@emsphone.com>.
Allow builtin longjmps out of application installed
signal handlers without the need to perform any post-handler
cleanup:
o Change signal handling to save the thread's interrupted
context on the stack. The thread's current context is
now always stored in the same place (in the pthread).
If and when a signal handler returns, the interrupted
context is copied back to the storage area in the pthread.
o Before invoking a signal handler for a thread,
back the thread out of any internal waiting queues
(mutex, CV, join, etc) to which it belongs.
Rework uthread_info.c a bit to make it easier to change
the format of a thread dump.
Use an alternate signal stack for the thread library's
signal handler. This allows us to fiddle with the main
thread's stack without fear of it being in use.
Reviewed by: jasone
by sigwait(). This prevents a signal from being sent to the process
when there are no application installed signal handlers.
Correct a typo in sigwait (foo -> foo[i]).
adding a signal frame to a thread, be sure to label the context
correctly so we don't restore an uninitialized process mask.
Reported by: kimc@W8HD.ORG and Andrey Rouskol <anry@sovintel.ru>