for interrupted field.
Also in _thr_sig_handler, retrieve the current signal mask from the kernel,
not from ucp; the latter is the pre-unioned mask, not the current signal mask.
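
A minimal sketch of that second point (not the libkse handler; the demo signal
numbers and the printf are for illustration only): inside a handler, ask the
kernel for the mask in effect right now instead of reading ucp->uc_sigmask.

#include <signal.h>
#include <stdio.h>
#include <ucontext.h>

static void
sig_handler(int sig, siginfo_t *info, void *ucp_arg)
{
        ucontext_t *ucp = ucp_arg;
        sigset_t cur;

        (void)info;
        (void)ucp;      /* ucp->uc_sigmask is only the saved, pre-union mask */

        /* Query the kernel for the mask installed right now for this thread. */
        sigprocmask(SIG_SETMASK, NULL, &cur);
        /* Demo only; printf is not async-signal-safe in general. */
        printf("in handler for %d, SIGUSR2 currently %sblocked\n",
            sig, sigismember(&cur, SIGUSR2) ? "" : "not ");
}

int
main(void)
{
        struct sigaction sa;

        sa.sa_sigaction = sig_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaddset(&sa.sa_mask, SIGUSR2);        /* blocked while handler runs */
        sigaction(SIGUSR1, &sa, NULL);
        raise(SIGUSR1);
        return (0);
}
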
pthread_md.h. This commit only moves the definition; it does not
change it for any of the platforms. This more easily allows 64-bit
architectures (in particular) to pick a slightly larger stack size.
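
A hedged illustration of what such a per-architecture definition can look like
in a pthread_md.h-style header; the macro name and sizes below are placeholders,
not the values this commit moves.

/* Illustrative values only. */
#if defined(__LP64__)
#define THR_STACK_DEFAULT       (2 * 1024 * 1024)       /* roomier default on 64-bit */
#else
#define THR_STACK_DEFAULT       (1 * 1024 * 1024)       /* smaller default on 32-bit */
#endif
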
to avoid a potential memory leak. Also fix a bug in pthread_create(): the
contention scope should be inherited when PTHREAD_INHERIT_SCHED is set, and
the right field should be checked for PTHREAD_INHERIT_SCHED; the
scheduling-inherit flag lives in sched_inherit (see the sketch after this entry).
2. Execute hooks registered by atexit() on the thread stack, not on the
scheduler stack.
3. Simplify some code in _kse_single_thread by calling xxx_destroy functions.
Reviewed by: deischen
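
The sketch referenced above, using only the public API: with
PTHREAD_INHERIT_SCHED set on the attribute object, pthread_create() must take
the scheduling policy, priority and contention scope from the creating thread
rather than from the attribute, which is the behaviour the fix restores.

#include <pthread.h>
#include <stdio.h>

static void *
worker(void *arg)
{
        (void)arg;
        return (NULL);
}

int
main(void)
{
        pthread_t td;
        pthread_attr_t attr;

        pthread_attr_init(&attr);
        /* Ask for inheritance: policy, priority and contention scope must
         * all come from the creating thread, not from this attribute. */
        pthread_attr_setinheritsched(&attr, PTHREAD_INHERIT_SCHED);
        if (pthread_create(&td, &attr, worker, NULL) != 0) {
                fprintf(stderr, "pthread_create failed\n");
                return (1);
        }
        pthread_join(td, NULL);
        pthread_attr_destroy(&attr);
        return (0);
}
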
should be the value passed to pthread_attr_setguardsize(), not a rounded-up
value.
Also fix a stack-size matching bug in thr_stack.c: the stack matching code now
uses the number of pages rather than the byte length to match stack sizes, so,
for example, requests of 512 bytes and 513 bytes both match a one-page stack.
Reviewed by: deischen
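
A minimal sketch of the page-granularity comparison, assuming a simple
round-up helper; the thr_stack.c cache structures themselves are not shown.

#include <stdio.h>
#include <unistd.h>

/* Round a byte count up to whole pages. */
static size_t
stack_pages(size_t size)
{
        size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);

        return ((size + pagesize - 1) / pagesize);
}

int
main(void)
{
        /* 512 and 513 bytes both need one page, so a cached one-page
         * stack can satisfy either request. */
        printf("512 -> %zu page(s), 513 -> %zu page(s)\n",
            stack_pages(512), stack_pages(513));
        return (0);
}
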
a shared library or any other dynamically allocated data block: once a
pthread_once_t is initialized, a mutex is allocated, and if we unload the
shared library or free those data blocks there is no way to deallocate the
mutex; the result is a memory leak.
To fix this problem, we don't use the mutex field in pthread_once_t. Instead,
we use its state field together with an internal mutex and condition variable
in libkse to do any synchronization, and we introduce a third state,
IN_PROGRESS, to wait on if another thread is already invoking init_routine().
Also, while I am here, make pthread_once() conform to the pthread
cancellation-point specification.
Reviewed by: deischen
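
A self-contained sketch of the scheme described above; the state names and the
once-object layout are illustrative rather than the libkse definitions, and the
cancellation-point handling is omitted.

#include <pthread.h>
#include <stdio.h>

#define ONCE_NEVER_DONE         0
#define ONCE_IN_PROGRESS        1       /* another thread is running the routine */
#define ONCE_DONE               2

/* One internal mutex/condvar shared by all once objects: nothing is
 * allocated per once object, so freeing or unloading it leaks nothing. */
static pthread_mutex_t  once_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t   once_cv = PTHREAD_COND_INITIALIZER;

static void
my_once(int *state, void (*init_routine)(void))
{
        pthread_mutex_lock(&once_lock);
        while (*state == ONCE_IN_PROGRESS)
                pthread_cond_wait(&once_cv, &once_lock);
        if (*state == ONCE_NEVER_DONE) {
                *state = ONCE_IN_PROGRESS;
                pthread_mutex_unlock(&once_lock);
                init_routine();         /* run without the internal lock held */
                pthread_mutex_lock(&once_lock);
                *state = ONCE_DONE;
                pthread_cond_broadcast(&once_cv);
        }
        pthread_mutex_unlock(&once_lock);
}

static int      once_state = ONCE_NEVER_DONE;

static void
init(void)
{
        printf("initialized once\n");
}

int
main(void)
{
        my_once(&once_state, init);
        my_once(&once_state, init);     /* second call is a no-op */
        return (0);
}
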
instead of long types for low-level locks.
Add prototypes for some internal libc functions that are
wrapped by the library as cancellation points.
Add memory barriers to alpha atomic swap functions (submitted
by davidxu).
Requested by: bde
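
A generic illustration of the barrier requirement using C11 atomics (the commit
itself touches the alpha assembly, which is not reproduced here): the swap that
hands a lock over must also order surrounding memory accesses.

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long lock_word = 0;

static long
atomic_swap_barrier(_Atomic long *p, long newval)
{
        /* memory_order_acq_rel keeps earlier stores before the swap and
         * later loads after it, which is what a lock hand-off needs. */
        return (atomic_exchange_explicit(p, newval, memory_order_acq_rel));
}

int
main(void)
{
        long old = atomic_swap_barrier(&lock_word, 1);

        printf("old value: %ld\n", old);
        return (0);
}
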
critical region, we wrap some syscalls as thread cancellation points, and
when a syscall returns we call _thr_leave_cancellation_point. If a signal
comes in at that time it is buffered, and when the thread leaves
_thr_leave_cancellation_point the buffered signals are processed. To avoid
messing up the normal syscall errno, we should save and restore errno around
the signal handling code.
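
A minimal sketch of that errno save/restore; handle_buffered_signals() is a
hypothetical stand-in for the deferred-signal processing, not a libkse function.

#include <errno.h>
#include <stdio.h>

/* Stand-in for processing signals buffered while the thread was inside the
 * cancellation-point wrapper; a real handler may clobber errno. */
static void
handle_buffered_signals(void)
{
        errno = EINTR;
}

static void
thr_leave_cancellation_point(void)
{
        int saved_errno = errno;        /* preserve the wrapped syscall's errno */

        handle_buffered_signals();
        errno = saved_errno;            /* hand the original errno back */
}

int
main(void)
{
        errno = 0;      /* pretend the wrapped syscall succeeded */
        thr_leave_cancellation_point();
        printf("errno after leaving: %d\n", errno);     /* still 0 */
        return (0);
}
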
yet, so we can protect some locking code from being interrupted by signal
handling. When KSE mode is turned on, reset the thread flag to process scope,
except when we are running in 1:1 mode, where we need not turn it off.
Also remove some unused member variables from struct kse.
Tested by: deischen
otherwise mask all signals until fork() returns; in the child process,
we reset library state before restoring the signal mask, keeping signals
masked until we reach a safe point.
Reviewed by: deischen
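
A sketch of the masking scheme under the stated assumptions (a single-threaded
demo using sigprocmask(); the real wrapper would use the library's own mask
primitives and reset far more state in the child).

#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        sigset_t all, saved;
        pid_t pid;

        sigfillset(&all);
        /* Block everything so no handler runs while fork() splits the
         * process in two. */
        sigprocmask(SIG_SETMASK, &all, &saved);
        pid = fork();
        if (pid == 0) {
                /* Child: reset library/thread state here first, then
                 * restore the mask once it is safe to take signals. */
                sigprocmask(SIG_SETMASK, &saved, NULL);
                _exit(0);
        }
        /* Parent: simply restore the previous mask. */
        sigprocmask(SIG_SETMASK, &saved, NULL);
        if (pid > 0)
                printf("forked child %d\n", (int)pid);
        return (0);
}
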
makecontext(). We only supply 3, not 4. This is mostly harmless,
except that on ia64 the garbage can include NaT bits, resulting
in NaT consumption faults.
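
A usage sketch of the rule in question: the argument count given to
makecontext() has to match the number of arguments actually supplied, or the
callee reads garbage (on ia64, possibly carrying NaT bits).

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, thr_ctx;

static void
entry(int a, int b, int c)
{
        printf("entry(%d, %d, %d)\n", a, b, c);
}

int
main(void)
{
        getcontext(&thr_ctx);
        thr_ctx.uc_stack.ss_sp = malloc(64 * 1024);
        thr_ctx.uc_stack.ss_size = 64 * 1024;
        thr_ctx.uc_link = &main_ctx;    /* return here when entry() finishes */
        /* Three arguments supplied, so argc must be 3, not 4. */
        makecontext(&thr_ctx, (void (*)(void))entry, 3, 1, 2, 3);
        swapcontext(&main_ctx, &thr_ctx);
        free(thr_ctx.uc_stack.ss_sp);
        return (0);
}
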
archs that can (or are required to) have per-thread registers.
Tested on i386, amd64; marcel is testing on ia64 and will
have some follow-up commits.
Reviewed by: davidxu
This eliminates ping-ponging of locks, where the idle KSE wakes
up only to find the lock it needs is being held. This gives
little or no gain to M:N mode but greatly speeds up 1:1 mode.
Reviewed & Tested by: davidxu