state for amd64 was twice as large as necessary. Peter
recently fixed this, so the comment no longer applies.
Also, since the size of struct mcontext changed, adjust
the threads library version of get&set context to match.
FYI, any layout or size change to any arch's struct mcontext will
likely require some minor changes in libpthread.
to avoid a potential memory leak. Also fix a bug in pthread_create(): the
contention scope should be inherited when PTHREAD_INHERIT_SCHED is set, and
the check must look at the right field, since the scheduling inherit flag
lives in sched_inherit (a sketch follows the list below).
2. Execute hooks registered by atexit() on the thread stack rather than on
the scheduler stack.
3. Simplify some code in _kse_single_thread by calling xxx_destroy functions.
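A minimal sketch of the corrected inheritance check referenced in item 1
(the structure below is a pared-down stand-in, not the real thr_private.h
layout):

  #include <pthread.h>

  struct thr_sched_attr {
      int sched_inherit;     /* PTHREAD_INHERIT_SCHED / PTHREAD_EXPLICIT_SCHED */
      int sched_policy;
      int prio;
      int contentionscope;   /* PTHREAD_SCOPE_SYSTEM / PTHREAD_SCOPE_PROCESS */
  };

  static void
  inherit_sched_attrs(struct thr_sched_attr *new,
      const struct thr_sched_attr *creator)
  {
      /* Test sched_inherit, not any other field, for inheritance. */
      if (new->sched_inherit == PTHREAD_INHERIT_SCHED) {
          new->sched_policy = creator->sched_policy;
          new->prio = creator->prio;
          /* The contention scope must be inherited as well. */
          new->contentionscope = creator->contentionscope;
      }
  }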
Reviewed by: deischen
should be the value passed to pthread_attr_setguardsize(), not a rounded-up
value.
Also fix a stack size matching bug in thr_stack.c: the stack matching code
now uses the number of pages rather than the byte length to match stack
sizes, so, for example, sizes of 512 bytes and 513 bytes both match a
one-page stack.
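A sketch of the page-based comparison (illustrative code, not the actual
thr_stack.c logic):

  #include <unistd.h>

  /* Two sizes match the same cached stack when they round up to the
   * same number of pages; 512 and 513 both need a single page. */
  static int
  stack_size_matches(size_t cached, size_t requested)
  {
      size_t pgsz = (size_t)getpagesize();

      return ((cached + pgsz - 1) / pgsz == (requested + pgsz - 1) / pgsz);
  }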
Reviewed by: deischen
a shared library or any other dynamically allocated data block: once the
pthread_once_t is initialized, a mutex is allocated, and if we unload the
shared library or free that data block there is no way to deallocate the
mutex; the result is a memory leak.
To fix this problem, we no longer use the mutex field in pthread_once_t;
instead, we use its state field together with an internal mutex and
condition variable in libkse for all synchronization, and introduce a third
state, IN_PROGRESS, on which callers wait while another thread is invoking
init_routine().
Also, while here, make pthread_once() conform to the pthread cancellation
point specification.
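A simplified sketch of the scheme (names are illustrative and the
cancellation handling is omitted):

  #include <pthread.h>

  #define ONCE_NEVER_DONE   0
  #define ONCE_IN_PROGRESS  1
  #define ONCE_DONE         2

  /* The once control carries only a state word; all blocking uses a
   * mutex and condition variable owned by the library, so nothing is
   * left allocated inside the object that holds the once control. */
  static pthread_mutex_t once_lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  once_cv = PTHREAD_COND_INITIALIZER;

  static int
  once_sketch(int *statep, void (*init_routine)(void))
  {
      pthread_mutex_lock(&once_lock);
      while (*statep == ONCE_IN_PROGRESS)
          pthread_cond_wait(&once_cv, &once_lock);
      if (*statep == ONCE_NEVER_DONE) {
          *statep = ONCE_IN_PROGRESS;
          pthread_mutex_unlock(&once_lock);
          init_routine();        /* run without the internal lock held */
          pthread_mutex_lock(&once_lock);
          *statep = ONCE_DONE;
          pthread_cond_broadcast(&once_cv);
      }
      pthread_mutex_unlock(&once_lock);
      return (0);
  }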
Reviewed by: deischen
instead of long types for low-level locks.
Add prototypes for some internal libc functions that are
wrapped by the library as cancellation points.
Add memory barriers to alpha atomic swap functions (submitted
by davidxu).
Requested by: bde
critical region. We wrap some syscalls as thread cancellation points, and
when a syscall returns we call _thr_leave_cancellation_point(); if a signal
arrives in the meantime it is buffered, and the buffered signals are
processed when the thread leaves _thr_leave_cancellation_point(). To avoid
clobbering the errno of the syscall that just returned, we save and restore
errno around the signal handling code.
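The errno handling, sketched (illustrative, not the actual cancellation
point code):

  #include <errno.h>

  static void
  leave_cancellation_point_sketch(void)
  {
      int saved_errno;

      saved_errno = errno;   /* errno set by the wrapped syscall */
      /* ... process the signals buffered while the thread was
       * inside the cancellation point ... */
      errno = saved_errno;   /* hand the caller the syscall's errno */
  }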
yet, so we can protect some locking code from being interrupted by signal
handling. When KSE mode is turned on, reset the thread flag to process
scope, except when we are running in 1:1 mode, in which case it need not be
turned off.
Also remove some unused member variables in struct kse.
Tested by: deischen
have execute permissions. Run "perl verify" instead. Replace all
occurrences of the hardcoded ./verify with $(VERIFY) to allow
it to be overridden as well.
otherwise masks all signals until fork() returns; in the child process,
we reset the library state before restoring the signal mask, once we have
reached a safe point.
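A sketch of the ordering (stand-in code; the real wrapper works on libkse
internals):

  #include <signal.h>
  #include <unistd.h>

  static pid_t
  fork_sketch(void)
  {
      sigset_t set, oset;
      pid_t pid;

      sigfillset(&set);
      sigprocmask(SIG_SETMASK, &set, &oset);   /* mask everything */
      pid = fork();
      if (pid == 0) {
          /* Child: rebuild the library state (single thread, fresh
           * locks) while signals are still masked, so no handler can
           * run against half-reset state. */
      }
      sigprocmask(SIG_SETMASK, &oset, NULL);   /* safe point reached */
      return (pid);
  }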
Reviewed by: deischen
happens, the context of the interrupted thread is exported to
userland. Unlike most contexts, it will be an async context and
we cannot easily use our existing functions to set such a
context.
To avoid a lot of complexity that may possibly interfere with
the common case, we simply let the kernel deal with it. However,
we don't use the EPC based syscall path to invoke setcontext(2).
No, we use the break-based syscall path. That way the trapframe
will be compatible with the context we're trying to restore and
we save the kernel a lot of trouble. The kind of trouble we did
not want to go through ourselves...
However, we also need to set the thread's mailbox and there's no
syscall to help us out. To avoid creating a new syscall, we use
the context itself to pass the information to the kernel so that
the kernel can update the mailbox. This involves setting a flag
(_MC_FLAGS_KSE_SET_MBOX) and setting ifa (the address) and isr
(the value).
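Roughly, the userland side does something like the following (the
mc_flags/mc_special field names are assumptions about the ia64 mcontext
layout; only ifa, isr and the flag itself come from this change):

  #include <ucontext.h>

  static void
  request_mbox_update(ucontext_t *ucp, void *mbox_addr, unsigned long mbox_value)
  {
      /* Ask the kernel to perform the mailbox store while it restores
       * the context. */
      ucp->uc_mcontext.mc_flags |= _MC_FLAGS_KSE_SET_MBOX;
      ucp->uc_mcontext.mc_special.ifa = (unsigned long)mbox_addr;  /* address */
      ucp->uc_mcontext.mc_special.isr = mbox_value;                /* value */
  }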
TCB. We know that the thread pointer points to &tcb->tcb_tp, so all
we have to do is subtract offsetof(struct tcb, tcb_tp) from the
thread pointer to get to the TCB. Any reasonably smart compiler will
translate accesses to fields in the TCB as negative offsets from TP.
In _tcb_set() make sure the fake TCB gets a pointer to the current
KCB, just like any other TCB. This fixes a NULL-pointer dereference
in _thr_ref_add() when it tried to get the current KSE.
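The pointer arithmetic, sketched (the structures are pared-down stand-ins
for the real ones):

  #include <stddef.h>

  struct ia64_tp {
      long tp_words[2];          /* what the thread pointer addresses */
  };

  struct tcb {
      /* ... library fields, at negative offsets from the thread pointer ... */
      struct ia64_tp tcb_tp;     /* tp == &tcb->tcb_tp */
  };

  static struct tcb *
  tcb_from_tp(struct ia64_tp *tp)
  {
      /* Step back from the thread pointer by the field offset. */
      return ((struct tcb *)((char *)tp - offsetof(struct tcb, tcb_tp)));
  }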
makecontext(). We only supply 3, not 4. This is mostly harmless,
except that on ia64 the garbage can include NaT bits, resulting
in NaT consumption faults.
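The general rule, illustrated with a standalone example (not the libkse
call site):

  #include <ucontext.h>

  static void
  entry(int a, int b, int c, int d)
  {
      /* a thread entry point that uses all four arguments */
      (void)a; (void)b; (void)c; (void)d;
  }

  static void
  setup(ucontext_t *ucp)
  {
      /* Wrong: claims four arguments but supplies only three, so the
       * fourth is read as garbage (on ia64 possibly a NaT value):
       *     makecontext(ucp, (void (*)(void))entry, 4, 1, 2, 3);
       * Right: the count and the argument list agree. */
      makecontext(ucp, (void (*)(void))entry, 4, 1, 2, 3, 4);
  }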
that the TLS is 16-byte aligned, as well as guarantee that the thread
pointer is 16-byte aligned as it points to struct ia64_tp. Likewise,
struct tcb and struct ksd are also guaranteed to be 16-byte aligned
(if they weren't already).
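One way the guarantee can be expressed (the attribute form is illustrative;
the real headers may arrange this differently):

  /* Force both the structure the thread pointer addresses and the TCB
   * that contains it onto 16-byte boundaries. */
  struct ia64_tp {
      long tp_words[2];
  } __attribute__((aligned(16)));

  struct tcb {
      /* ... library fields ... */
      struct ia64_tp tcb_tp;
  } __attribute__((aligned(16)));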