the signal mask and pending signals of the calling thread. These
are stored in userland in libpthread.
There is a small race condition in this patch which could cause
problems if a signal arrives after setting the (kernel) signal
mask and before exec'ing. The thread's set of pending signals
is also not yet installed in the exec'd process. Both of these
will be corrected with the addition of a special syscall.
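Schematically, the window looks like this (a minimal sketch, not the
actual patch; the wrapper shape and mask handling are assumptions):

#include <signal.h>
#include <unistd.h>

static int
thr_execve(const char *path, char *const argv[], char *const envp[],
    const sigset_t *thread_mask)
{
    /* Install the thread's userland mask as the kernel mask. */
    sigprocmask(SIG_SETMASK, thread_mask, NULL);
    /*
     * RACE: a signal arriving here is handled with the new mask,
     * and the thread's pending signals still live only in userland;
     * the special syscall mentioned above closes this window.
     */
    return (execve(path, argv, envp));
}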
Reported & Tested by: Joost Bekkers <joost at jodocus dot org>
Reviewed by: julian, davidxu
a knob to force process scope threads. If the environment variable
LIBPTHREAD_PROCESS_SCOPE is set, force all threads to be process
scope threads regardless of how the application creates them. If
LIBPTHREAD_SYSTEM_SCOPE is set (forcing system scope threads), it
overrides LIBPTHREAD_PROCESS_SCOPE.
$ # To force system scope threads
$ LIBPTHREAD_SYSTEM_SCOPE=anything threaded_app
$ # To force process scope threads
$ LIBPTHREAD_PROCESS_SCOPE=anything threaded_app
LIBPTHREAD_SYSTEM_SCOPE in the environment.
You can still force libpthread to be built in strictly 1:1 by
adding -DSYSTEM_SCOPE_ONLY to CFLAGS. This is kept for archs
that don't yet support M:N mode.
Requested by: rwatson
Reviewed by: davidxu
1. Add a global variable _libkse_debug; the debugger uses this variable
to identify libpthread. When the variable is set non-zero by the
debugger, libpthread takes special action at context switch time: it
checks the TMDF_DOTRUNUSER flag, and a thread that has the flag set by
the debugger won't be scheduled. When a thread leaves a KSE critical
region, it checks the flag and relinquishes the CPU if it is set (see
the sketch after this list).
2. Add pq_first_debug to select a thread which is allowed to run by
the debugger.
3. Some names prefixed with _thr are renamed to the _thread prefix.
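A minimal sketch of the gate in (1); apart from _libkse_debug and
TMDF_DOTRUNUSER, the names and structure layout here are assumptions:

/* Written non-zero by the debugger to enable the checks below. */
volatile int _libkse_debug;

/* Consulted at context switch time and when a thread leaves a KSE
   critical region. */
static int
thread_may_run(struct pthread *td)
{
    if (_libkse_debug == 0)
        return (1);    /* no debugger attached */
    /* Debugger set TMDF_DOTRUNUSER: keep the thread off the CPU. */
    return ((td->tmbx.tm_dflags & TMDF_DOTRUNUSER) == 0);
}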
sigsuspend, the thread shouldn't wait; in the old code, it may be
ignored.
When a signal handler is invoked in sigsuspend, the thread gets
two different signal masks. One is in the thread structure, and
sigprocmask() can retrieve it; the other is in the ucontext passed
as the third parameter of the signal handler. The former is the
sigsuspend mask ORed with sigaction's sa_mask and the current
signal; the latter is the mask the thread structure held before
sigsuspend was called. After the signal handler returns, the mask
in the ucontext should be copied into the thread structure and
become the CURRENT signal mask, and then sigsuspend returns to
user code.
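In handler terms (a sketch; only the standard sigprocmask() and
ucontext interfaces are assumed):

#include <signal.h>
#include <ucontext.h>

static void
handler(int sig, siginfo_t *info, void *ucp)
{
    ucontext_t *uc = ucp;
    sigset_t in_handler, before_suspend;

    /* The mask in the thread structure while the handler runs:
       sigsuspend's mask ORed with sa_mask and the current signal. */
    sigprocmask(SIG_SETMASK, NULL, &in_handler);

    /* The mask from before sigsuspend() was called; this is what
       must be copied back into the thread structure on return. */
    before_suspend = uc->uc_sigmask;
    (void)in_handler;
    (void)before_suspend;
}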
Reviewed by: deischen
Tested by: Sean McNeil <sean@mcneil.com>
mode (where the forked thread is the one and only thread and
is marked as system scope), set the system scope flag before
initializing the signal mask. This prevents trying to use
internal locks that haven't yet been initialized.
Reported by: Dan Nelson <dnelson at allantgroup.com>
Reviewed by: davidxu
These files had tags after the copyright notice,
inside the comment block (incorrect, removed),
and outside the comment block (correct).
Approved by: rwatson (mentor)
thread library for i386, amd64, and ia64. For alpha
and sparc64 the library is not changed and remains libkse,
and links are installed so that libpthread -> libc_r.
The gcc -pthread option will be changed in a separate
commit so that it links to -lpthread instead of -lc_r.
Approved by: re@
on a rwlock while there are writers waiting. We normally favor
writers, but when a reader already holds at least one other read lock,
we favor the reader. We don't track all the rwlocks owned by a
thread, nor all the threads that own a rwlock -- we just keep
a count of all the read locks owned by a thread.
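The idea, as a sketch (rdlock_count is the per-thread count described
above; the other field names are illustrative):

static int
rdlock_may_grant(struct rwlock *rwl, struct pthread *curthread)
{
    /* Normally favor waiting writers... */
    if (rwl->blocked_writers > 0 && curthread->rdlock_count == 0)
        return (0);    /* block this reader */
    /* ...but a thread already holding a read lock must be let
       through, or it could deadlock against the waiting writer. */
    return (1);
}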
PR: 24641
likely to be non-zero. When leaving the cancellation point, check
the return value against -1 to see if cancellation should be
checked. While I'm here, make the same change to connect() just
to be consistent.
Pointed out by: davidxu
_thr_leave_cancellation_point to _thr_cancel_leave, and add a parameter
to _thr_cancel_leave to indicate whether the cancellation point should
be checked. This gives us the option of not checking the cancellation
point when a syscall returns successfully, to avoid any leaks;
currently I have creat(), open() and fcntl(F_DUPFD) not checking the
cancellation point after they successfully return.
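A wrapped syscall then looks roughly like this (sketch; the wrapper
shape is an assumption, though _get_curthread() and __sys_open() are
existing internals):

int
__open(const char *path, int flags, int mode)
{
    struct pthread *curthread = _get_curthread();
    int ret;

    _thr_cancel_enter(curthread);
    ret = __sys_open(path, flags, mode);
    /* Check cancellation only on failure; acting on it after a
       successful open() would leak the new descriptor. */
    _thr_cancel_leave(curthread, ret == -1);
    return (ret);
}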
Replace some members in structure kse with bit flags to save some
memory.
Conditionally compile THR_ASSERT to nothing if _PTHREAD_INVARIANTS is
not defined.
Inline some small functions in thr_cancel.c.
Use __predict_false in thr_kern.c for some code executed only once
(sketch below).
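A minimal illustration (the flag and function names are made up):

/* Tell gcc the branch is rarely taken, keeping the once-only path
   off the hot path. */
if (__predict_false(!_kse_started))
    kse_start_once();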
Reviewed by: deischen
flags. We now create asynchronous contexts or syscall contexts only.
Syscall contexts differ from the minimal ABI-dictated contexts by
having the scratch registers saved and restored because that's where
we keep the syscall arguments and syscall return values.
Since this change affects KSE, have it use kse_switchin(2) for the
"new" syscall context.
UTS with the stack correctly aligned. Also, while here, use an indirect
jump rather than the pushq/ret hack.
This fixes threaded apps that use floating point for me, although
it hasn't solved all the problems. It is an improvement though.
Preservation of the 128 byte red zone hasn't been resolved yet.
Approved by: re (scottl)
through branch prediction as suggested in the Intel IA32 optimization
guide.
2. Allocate the siginfo array separately to avoid the pthread structure
being allocated on a 2K boundary, which hits an L1 address alias
problem and slows down context switches.
3. Simplify context switch code by removing redundant code; the code
size is reduced, so it is expected to run faster.
Reviewed by: deischen
Approved by: re (scottl)
in init_main_thread. Also don't initialize lock and lockuser again for
the initial thread; it is already done by _thr_alloc().
Reviewed by: deischen
Approved by: re (scottl)
signal handling mode, there is no chance to handle the signal;
something must be wrong in the library, so just call kse_thr_interrupt
to dump its core. I have had this code for a long time but forgot to
commit it.
Aside from the POSIX requirements for pthread_atfork(), when
fork()ing, take the malloc lock to keep malloc state consistent
in the child.
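In outline (a sketch; the helper names are illustrative, and only the
pthread_atfork() semantics and the malloc lock idea come from this
change):

pid_t
_fork(void)
{
    pid_t pid;

    _run_atfork_prepare();    /* POSIX pthread_atfork() handlers */
    _malloc_lock();           /* freeze malloc state for the child */
    pid = __sys_fork();
    _malloc_unlock();         /* runs in both parent and child */
    if (pid == 0)
        _run_atfork_child();
    else
        _run_atfork_parent();
    return (pid);
}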
Reviewed by: davidxu
about the fpu code here. It should be using fxsave/fxrstor instead of
saving/restoring the control word. The SSE registers are used a lot in
gcc generated code on amd64. I'm not sure how this all fits together
though.
On ia64, where there's no libc_r at all, libkse is now the default
thread library by virtue of these links.
The reasons for this change are:
1. libkse is slated to become the default thread library anyway,
2. active development and maintenance is only present for libkse,
3. GNOME and KDE, both in the process of being supported on ia64,
work better with KSE; even on ia64.
can clear the pointer to the mutex, not the thread doing the mutex
handoff. Because _mutex_lock_backout does not hold the scheduler
lock while testing THR_FLAGS_IN_SYNCQ and then reading the mutex
pointer, it is possible for the mutex owner to begin unlocking and
handing off the mutex to the current thread; the mutex pointer can
then be cleared to NULL before the current thread reads it, so the
current thread ends up dereferencing a NULL pointer.
Fix the race by making mutex waiters clear their own mutex pointers.
While I am here, also save the inherited priority in the mutex for
PTHREAD_PRIO_INHERIT mutexes in mutex_trylock_common, just like we
do in mutex_lock_common.
for the interrupted field.
Also in _thr_sig_handler, retrieve the current signal mask from the
kernel, not from ucp; the latter is the pre-unioned mask, not the
current signal mask.
pthread_md.h. This commit only moves the definition; it does not
change it for any of the platforms. This more easily allows 64-bit
architectures (in particular) to pick a slightly larger stack size.
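For instance, a 64-bit architecture can now do something like this in
its pthread_md.h (macro name and value shown only as an example):

/* <arch>/include/pthread_md.h */
#define THR_STACK_DEFAULT    (2 * 1024 * 1024)    /* example value */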
THR_SETCONTEXT as PANIC(). The THR_SETCONTEXT macro is currently not
used, which means that the definition we had could be wrong, overly
pessimistic or unknowingly right. I don't like the odds...
The new _ia64_break_setcontext() and corresponding kernel fixes make
KSE mostly usable. There's still a case where we don't properly
restore a context and end up with a NaT consumption fault (typically
an indication of not handling NaT collection points correctly),
but at least now mutex_d works...
state for amd64 was twice as large as necessary. Peter
recently fixed this, so the comment no longer applies.
Also, since the size of struct mcontext changed, adjust
the threads library version of get&set context to match.
FYI, any layout/size change to any arch's struct
mcontext will likely need some minor changes in libpthread.
to avoid a potential memory leak. Also fix a bug in pthread_create:
contention scope should be inherited when PTHREAD_INHERIT_SCHED is set,
and check the right field for PTHREAD_INHERIT_SCHED; the scheduling
inherit flag is in sched_inherit.
2. Execute hooks registered by atexit() on the thread stack, not on
the scheduler stack.
3. Simplify some code in _kse_single_thread by calling xxx_destroy functions.
Reviewed by: deischen
should be the value passed to pthread_attr_setguardsize, not a
rounded-up value.
Also fix a stack size matching bug in thr_stack.c: the stack matching
code now uses the number of pages rather than the byte length to match
stack sizes, so, for example, sizes of 512 bytes and 513 bytes both
match a one-page stack.
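That is (sketch):

/* Match cached stacks by page count, not byte length. */
size_t pages = (stacksize + pagesize - 1) / pagesize;
/* 512-byte and 513-byte requests both yield pages == 1. */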
Reviewed by: deischen
a shared library or any other dynamically allocated data block: once
a pthread_once_t is initialized, a mutex is allocated; if we unload
the shared library or free the data block, there is no way to
deallocate the mutex, and the result is a memory leak.
To fix this problem, we don't use the mutex field in pthread_once_t;
instead, we use its state field plus an internal mutex and condition
variable in libkse for any synchronization, and we introduce a third
state, IN_PROGRESS, to wait on if another thread is already invoking
init_routine().
Also, while I am here, make pthread_once() conform to the pthread
cancellation point specification.
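A sketch of the reworked logic (the state names and the internal lock
and condition variable are illustrative, and the cancellation handling
is omitted):

static pthread_mutex_t once_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t once_cv = PTHREAD_COND_INITIALIZER;

int
pthread_once(pthread_once_t *once, void (*init_routine)(void))
{
    pthread_mutex_lock(&once_lock);
    while (once->state == ONCE_IN_PROGRESS)   /* another thread runs it */
        pthread_cond_wait(&once_cv, &once_lock);
    if (once->state == ONCE_NEVER_DONE) {
        once->state = ONCE_IN_PROGRESS;
        pthread_mutex_unlock(&once_lock);
        init_routine();    /* no per-object mutex left to leak */
        pthread_mutex_lock(&once_lock);
        once->state = ONCE_DONE;
        pthread_cond_broadcast(&once_cv);
    }
    pthread_mutex_unlock(&once_lock);
    return (0);
}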
Reviewed by: deischen
instead of long types for low-level locks.
Add prototypes for some internal libc functions that are
wrapped by the library as cancellation points.
Add memory barriers to alpha atomic swap functions (submitted
by davidxu).
Requested by: bde
critical region, we wrap some syscalls as thread cancellation points,
and when a syscall returns, we call _thr_leave_cancellation_point. If
a signal comes in at that time, it is buffered, and the buffered
signals are processed when the thread leaves
_thr_leave_cancellation_point. To avoid messing up the normal syscall
errno, we should save and restore errno around the signal handling
code.
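The pattern is simply (sketch):

int saved_errno = errno;    /* the syscall's errno must survive */
/* ... run the buffered signal handlers here ... */
errno = saved_errno;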
yet, so we can protect some locking code from being interrupted by
signal handling. When KSE mode is turned on, reset the thread flag to
process scope, except when we are running in 1:1 mode, where we
needn't turn it off.
Also remove some unused member variables in structure kse.
Tested by: deischen
have execute permissions. Run "perl verify" instead. Replace all
occurrences of the hardcoded ./verify with $(VERIFY) to allow
it to be overridden as well.
otherwise masks all signals until fork() returns; in the child
process, we reset library state before restoring the signal mask,
keeping it blocked until we reach a safe point.
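Schematically (a sketch; the __sys_* stubs and _kse_single_thread
exist, but the exact sequence here is an assumption):

sigset_t all, saved;
pid_t pid;

sigfillset(&all);
__sys_sigprocmask(SIG_SETMASK, &all, &saved);    /* block everything */
if ((pid = __sys_fork()) == 0)
    _kse_single_thread(_get_curthread());    /* child: reset state */
__sys_sigprocmask(SIG_SETMASK, &saved, NULL);    /* at a safe point */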
Reviewed by: deischen
happens, the context of the interrupted thread is exported to
userland. Unlike most contexts, it will be an async context and
we cannot easily use our existing functions to set such a
context.
To avoid a lot of complexity that may possibly interfere with
the common case, we simply let the kernel deal with it. However,
we don't use the EPC based syscall path to invoke setcontext(2).
No, we use the break-based syscall path. That way the trapframe
will be compatible with the context we're trying to restore and
we save the kernel a lot of trouble. The kind of trouble we did
not want to go through ourselves...
However, we also need to set the thread's mailbox and there's no
syscall to help us out. To avoid creating a new syscall, we use
the context itself to pass the information to the kernel so that
the kernel can update the mailbox. This involves setting a flag
(_MC_FLAGS_KSE_SET_MBOX) and setting ifa (the address) and isr
(the value).
TCB. We know that the thread pointer points to &tcb->tcb_tp, so all
we have to do is subtract offsetof(struct tcb, tcb_tp) from the
thread pointer to get to the TCB. Any reasonably smart compiler will
translate accesses to fields in the TCB as negative offsets from TP.
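Concretely (sketch):

#include <stddef.h>

/* The thread pointer is &tcb->tcb_tp, so step back to the TCB. */
static __inline struct tcb *
tcb_from_tp(struct ia64_tp *tp)
{
    return ((struct tcb *)((char *)tp - offsetof(struct tcb, tcb_tp)));
}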
In _tcb_set() make sure the fake TCB gets a pointer to the current
KCB, just like any other TCB. This fixes a NULL-pointer dereference
in _thr_ref_add() when it tried to get the current KSE.
makecontext(). We only supply 3, not 4. This is mostly harmless,
except that on ia64 the garbage can include NaT bits, resulting
in NaT consumption faults.
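For reference, makecontext()'s argc must match the arguments actually
supplied (sketch):

/* Declaring 4 while passing 3 leaves the 4th slot as stack garbage,
   which on ia64 can carry NaT bits. */
makecontext(&uc, (void (*)(void))entry, 3, arg1, arg2, arg3);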
that the TLS is 16-byte aligned, as well as guarantee that the thread
pointer is 16-byte aligned as it points to struct ia64_tp. Likewise,
struct tcb and struct ksd are also guaranteed to be 16-byte aligned
(if they weren't already).
archs that can (or are required to) have per-thread registers.
Tested on i386, amd64; marcel is testing on ia64 and will
have some follow-up commits.
Reviewed by: davidxu
context functions. We don't need to enter the kernel anymore. The
contexts are compatible (ie a context created by getcontext() can
be restored by _ia64_restore_context()).
While here, make the use of THR_ALIGNBYTES and THR_ALIGN a no-op.
They are going to be removed anyway.