the whole RLIMIT_STACK-sized region of the kernel-allocated stack as
the stack of the main thread.
By default, the main thread stack is clamped at 2MB (4MB on 64-bit
ABIs) and the rest of the region is used for other threads' stack
allocations. Since there is no programmatic way to adjust the size of
the main thread stack (pthread_attr_setstacksize() is too late), the
knob allows the user to manage the main stack size with the rlimit,
for both single-threaded and multi-threaded processes.
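For illustration, a hedged sketch of driving the main stack size through
the rlimit from a small wrapper program; the 64MB value and the target
path are examples only, not part of the change:

#include <sys/resource.h>
#include <err.h>
#include <unistd.h>

/*
 * Hedged sketch: raise the RLIMIT_STACK soft limit before exec'ing the
 * target, so the new process can be given a larger main thread stack.
 * The 64MB figure and the program path are illustrative.
 */
int
main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_STACK, &rl) == -1)
		err(1, "getrlimit");
	rl.rlim_cur = 64UL * 1024 * 1024;
	if (rl.rlim_cur > rl.rlim_max)
		rl.rlim_cur = rl.rlim_max;	/* cannot exceed the hard limit */
	if (setrlimit(RLIMIT_STACK, &rl) == -1)
		err(1, "setrlimit");
	execl("/usr/local/bin/bigstack-app", "bigstack-app", (char *)NULL);
	err(1, "execl");
}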
Reported by: "Ivan A. Kosarev" <ivan@ivan-labs.com>
Tested by: dim
Sponsored by: The FreeBSD Foundation
MFC after: 3 days
This includes:
o All directories named *ia64*
o All files named *ia64*
o All ia64-specific code guarded by __ia64__
o All ia64-specific makefile logic
o Mention of ia64 in comments and documentation
This excludes:
o Everything under contrib/
o Everything under crypto/
o sys/xen/interface
o sys/sys/elf_common.h
Discussed at: BSDCan
mode. This allows the binder to be functional in the child after the
fork (assuming no lazy loading of a filter is needed), but other rtld
services which require the write lock on rtld_bind_lock cause a
deadlock if called by the child.
Change _rtld_atfork() to lock the bind lock in write mode, making
rtld fully functional after the fork.
Pre-resolve the symbols which are called by the libthr fork()
interposer, since dynamic resolution causes a deadlock due to the
rtld_bind_lock already being owned in write mode.
Reported and tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
the signal a second time, by adding the missing else before the if
statement.
While there, postpone initializing the local curthread variable until
the passed signal number is checked for validity.
Submitted by: John Wolfe <jlw@xinuos.com>
PR: threads/186309
MFC after: 1 week
unlocking the rtld bind lock results in the processing of the ast and
recursion into check_deferred_signal(). The nested execution of
check_deferred_signal() delivers the signal to user code and clears
si_signo. On return, the top-level check_deferred_signal() frame
continues delivering the same signal one more time, but now with a
zero si_signo.
Fix this by adding a flag to indicate that deferred delivery is
running, so that check_deferred_signal() avoids doing anything while
the flag is set. The user signal handler is allowed to modify the
passed machine context to make the return from the signal handler
perform an arbitrary jump, or to do longjmp(); for this case, also
clear the flag in thr_sighandler(), since kernel signal delivery means
that the nested delivery code should not run right now.
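A minimal sketch of the idea; the per-thread structure and the helper
names below are illustrative, not the actual libthr code:

#include <signal.h>

/* Illustrative per-thread state for the sketch. */
struct deferred_state {
	siginfo_t	info;		/* si_signo != 0 while a signal is deferred */
	int		running;	/* set while deferred delivery is in progress */
};

/* Illustrative stand-in for invoking the recorded user handler. */
static void
deliver_to_user_handler(siginfo_t *si)
{
	(void)si;
}

static void
check_deferred_signal_sketch(struct deferred_state *ds)
{
	siginfo_t si;

	/*
	 * A nested invocation (e.g. caused by unlocking the rtld bind lock
	 * and processing the ast) finds the flag set and does nothing.
	 */
	if (ds->running || ds->info.si_signo == 0)
		return;

	ds->running = 1;
	si = ds->info;
	ds->info.si_signo = 0;		/* consume the deferred signal */
	deliver_to_user_handler(&si);
	ds->running = 0;		/* thr_sighandler() would also clear this
					   on a fresh kernel delivery */
}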
Reported by: Vitaly Magerya <vmagerya@gmail.com>
Reviewed by: davidxu, jilles
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
cancellation point. When enabling cancellation, only process a
pending cancellation in asynchronous mode.
Reported and reviewed by: Kohji Okuno <okuno.kohji@jp.panasonic.com>
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
identified, unify the code of check_deferred_signal() for all
architectures, making the variant under #ifdef x86 common.
Tested by: marius (sparc64)
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
check_deferred_signal() returns twice, since handle_signal() emulates
the return from a normal signal handler by sigreturn(2)ing the passed
context. The second return is performed on a destroyed stack frame,
because __fillcontextx() has already returned. This causes undefined
and bad behaviour; usually the victim thread gets SIGSEGV.
Avoid the nested frame and the need to return from it by calling
getcontext() directly in check_deferred_signal() and using a new
private libc helper __fillcontextx2() to complement the context with
the extended CPU state if the deferred signal is still present.
The __fillcontextx() is now unused, but is kept to allow older
libthr.so to be used with the new libc.
Mark __fillcontextx() as returning twice [1].
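A hedged sketch of the resulting pattern; the private helper prototypes
are declared here as I understand them, and the surrounding libthr
details are omitted:

#include <signal.h>
#include <stdlib.h>
#include <ucontext.h>

int	__getcontextx_size(void);	/* private libc helpers; prototypes assumed */
int	__fillcontextx2(char *ctx);

static void
check_deferred_signal_sketch(siginfo_t *deferred)
{
	ucontext_t *uc;

	/* Room for the ucontext plus the extended CPU state. */
	uc = alloca(__getcontextx_size());

	/*
	 * getcontext() is called directly, so there is no nested helper
	 * frame for the later sigreturn(2)-style return to land on after
	 * that frame has been destroyed.
	 */
	if (getcontext(uc) != 0)
		return;

	if (deferred->si_signo != 0) {
		/* Complete uc with the extended state only when still needed. */
		__fillcontextx2((char *)uc);
		/* ... hand uc and *deferred to the signal handling code ... */
	}
}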
Reported by: pgj
Pointy hat to: kib
Discussed with: dim
Tested by: pgj, dim
Suggested by: jilles [1]
MFC after: 1 week
The accept4() function, compared to accept(), allows setting the new file
descriptor atomically close-on-exec and explicitly controlling the
non-blocking status on the new socket. (Note that the latter point means
that accept() is not equivalent to any form of accept4().)
The linuxulator's accept4 implementation leaves a race window where the new
file descriptor is not close-on-exec because it calls sys_accept(). This
implementation leaves no such race window (by using falloc() flags). The
linuxulator could be fixed and simplified by using the new code.
Like accept(), accept4() is async-signal-safe, a cancellation point and
permitted in capability mode.
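A minimal usage sketch:

#include <sys/socket.h>
#include <err.h>

/*
 * Accept a connection with the new descriptor atomically marked
 * close-on-exec and non-blocking.
 */
static int
accept_conn(int lfd)
{
	int fd;

	fd = accept4(lfd, NULL, NULL, SOCK_CLOEXEC | SOCK_NONBLOCK);
	if (fd == -1)
		err(1, "accept4");
	return (fd);
}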
The threaded rtld lock implementation is faster even in the single-threaded
case because it postpones signal handlers via THR_CRITICAL_ENTER and
THR_CRITICAL_LEAVE instead of calling sigprocmask(2).
As a result, exception handling becomes faster in single-threaded
applications linked with libthr.
Reviewed by: kib
pthread_suspend_all_np() may have already suspended its parent thread.
Add locking code in pthread_suspend_all_np() to allow only one thread
to suspend other threads; this eliminates a deadlock where two or more
threads try to suspend each other.
Enqueue the thread in LIFO order; this can cause starvation, but it
gives better performance. Use _thr_queuefifo to control the frequency
of FIFO vs. LIFO insertion; the environment string
LIBPTHREAD_QUEUE_FIFO can be used to configure the variable.
a mutex after a thread has unlocked it; it even writes data to the
mutex memory to clear the contention bit. There is a race in which
other threads can lock it, unlock it and then destroy it, so the
unlocking thread should not write data to the mutex memory if there
isn't any waiter.
The new operation UMTX_OP_MUTEX_WAKE2 tries to fix the problem. It
requires the thread library to clear the lock word entirely, then
call the WAKE2 operation to check whether there is any waiter in the
kernel and try to wake up a thread; if necessary, the contention bit
is set again by the operation. This also mitigates the chance that
other threads find the contention bit and try to enter the kernel to
compete with each other to wake up the sleeping thread, which is
unnecessary. With this change, the mutex owner no longer holds the
mutex until it reaches a point where the kernel umtx queue is locked;
it releases the mutex as soon as possible.
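A hedged sketch of the unlock path under the new scheme; the constants,
the umutex layout and _umtx_op(2) are the real interfaces, but the
function itself is illustrative rather than the actual libthr code:

#include <sys/types.h>
#include <sys/umtx.h>
#include <machine/atomic.h>
#include <errno.h>

static int
umutex_unlock_sketch(struct umutex *mtx, uint32_t id)
{
	uint32_t owner;

	do {
		owner = mtx->m_owner;
		if ((owner & ~UMUTEX_CONTESTED) != id)
			return (EPERM);		/* caller is not the owner */
		/* Clear the lock word entirely, contention bit included. */
	} while (!atomic_cmpset_rel_32((volatile uint32_t *)&mtx->m_owner,
	    owner, UMUTEX_UNOWNED));

	if ((owner & UMUTEX_CONTESTED) != 0) {
		/*
		 * There may be sleepers: let the kernel wake one up and
		 * re-set the contention bit if more waiters remain.
		 */
		(void)_umtx_op(mtx, UMTX_OP_MUTEX_WAKE2, mtx->m_flags,
		    NULL, NULL);
	}
	return (0);
}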
Performance is improved when the mutex is contested heavily. On an
Intel i3-2310M, the runtime of a benchmark program is reduced from
26.87 seconds to 2.39 seconds; it is even better than
UMTX_OP_MUTEX_WAKE, which is deprecated now.
http://people.freebsd.org/~davidxu/bench/mutex_perf.c
example, it uses a serialization point like the following:
pthread_mutex_lock(&mutex);
pthread_mutex_unlock(&mutex);
pthread_mutex_destroy(&mutex);
They think a previous lock holder should have already left the mutex
and is no longer referencing it, so they destroy it. To be maximally
compatible with such code, we use the IA64 version to unlock the mutex
in the kernel and remove the two-step unlocking code.
according to the POSIX document, the clock ID may be dynamically
allocated; it is unlikely to stay within 64K forever. To make it
future compatible, we pack all timeout information into a new
structure called _umtx_time, and use the fourth argument as a size
indication: zero means old code passing a plain timespec as the
timeout value, while the new structure, which also includes flags and
a clock ID, yields a different, non-zero size. With this change, it
is possible for a thread to sleep on any supported clock, though the
current kernel code does not have such a POSIX clock driver system.
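The layout, as a sketch based on the description above (see sys/umtx.h
for the authoritative definition):

#include <stdint.h>
#include <time.h>

struct _umtx_time {
	struct timespec	_timeout;	/* relative or absolute timeout */
	uint32_t	_flags;		/* e.g. UMTX_ABSTIME */
	uint32_t	_clockid;	/* CLOCK_REALTIME, CLOCK_MONOTONIC, ... */
};

The non-zero size passed in the fourth _umtx_op(2) argument is what lets
the kernel tell this format apart from the old bare timespec.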
UMTX_OP_WAIT. The upper 16 bits are enough to hold a clock id, and
the lower 16 bits are used to pass flags. The change saves a
clock_gettime() syscall from libthr.
the signal handlers with the context information in the deferred
case.
Only enable the use of getcontextx(3) in the deferred signal delivery
code on amd64 and i386. Sparc64 seems to have some undetermined issues
with the interaction of alloca(3) and signal delivery.
Tested by: flo (who also provided sparc64 hardware access for me), pho
Discussed with: marius
MFC after: 1 month
versions of pthread_md.h have a special case of dereferencing a null
pointer. Clang warns about this with:
In file included from lib/libthr/arch/i386/i386/pthread_md.c:36:
lib/libthr/arch/i386/include/pthread_md.h:96:10: error: indirection of non-volatile null pointer will be deleted, not trap [-Werror,-Wnull-dereference]
return (TCB_GET32(tcb_self));
^~~~~~~~~~~~~~~~~~~
lib/libthr/arch/i386/include/pthread_md.h:73:13: note: expanded from:
: "m" (*(u_int *)(__tcb_offset(name)))); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/libthr/arch/i386/include/pthread_md.h:96:10: note: consider using __builtin_trap() or qualifying pointer with 'volatile'
Since this indirection is done relative to the fs or gs segment, to
retrieve thread-specific data, it is an exception to the rule.
Therefore, add a volatile qualifier to tell the compiler we really want
to dereference a zero address.
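A hedged sketch of the shape of the fix; the macro is simplified from
the i386 pthread_md.h quoted in the diagnostic above, and __tcb_offset()
comes from that same header:

#include <sys/types.h>

/*
 * The %gs-relative load still "dereferences" the small offset returned
 * by __tcb_offset(name), but qualifying the pointer as volatile keeps
 * clang from deleting the zero-address indirection.
 */
#define	TCB_GET32(name)							\
({	u_int __res;							\
	__asm __volatile("movl %%gs:%1, %0"				\
	    : "=r" (__res)						\
	    : "m" (*(volatile u_int *)(__tcb_offset(name))));		\
	__res;								\
})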
MFC after: 1 week
o Set TP using inline assembly to avoid dead code elimination.
o Eliminate _tcb.
Merge from r161840:
Stylize: avoid using a global register variable.
Merge from r157461:
Simplify _get_curthread() and _tcb_ctor because libc and rtld now
already allocate the thread pointer space in the tls block for the
initial thread.
Merge from r177853:
Replace the function _umtx_op with _umtx_op_err; the latter directly
returns the errno value. Because errno can be mucked by the user's
signal handler and most of the pthread API heavily depends on errno
being correct, this change should improve the stability of the thread
library.
MFC after: 1 week
used a global pthread_mutex_t for synchronization. r179417 replaced that
with an implementation that directly used atomic instructions and thr_*
syscalls to synchronize callers to pthread_once. However, calling
pthread_mutex_lock on the global mutex implicitly ensured that
_thr_check_init() had been called but with r179417 this was no longer
guaranteed. This meant that if you were unlucky enough to have your first
call into libthr be a call to pthread_once(), you would segfault when
trying to access the pointer returned by _get_curthread().
The fix is to explicitly call _thr_check_init() from pthread_once().
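The shape of the fix, as a hedged sketch; the once-state machine and its
locking are elided, and the sketch relies on libthr's private header:

#include "thr_private.h"	/* libthr-internal declarations */

int
_pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
{
	/*
	 * The actual fix: make sure libthr (and thus _get_curthread()) is
	 * initialized even when pthread_once() is the very first libthr
	 * entry point the program calls.
	 */
	_thr_check_init();

	/*
	 * The pre-existing implementation (atomic once-state handling and
	 * the call of init_routine) continues here unchanged; it is elided
	 * from this sketch.
	 */
	(void)once_control;
	(void)init_routine;
	return (0);
}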
Reviewed by: davidxu
Approved by: emaste (mentor)
MFC after: 1 week
calling thread's unique integral ID, which is similar to the AIX
function of the same name. Bump __FreeBSD_version to note its
introduction.
Reviewed by: kib
Stack creation code is changed to call _rtld_get_stack_prot() to get
the stack protection right. There is a race when a thread is created
during dlopen() of a dso that requires executable stacks:
_rtld_get_stack_prot() may return PROT_READ | PROT_WRITE while the
thread is still not linked into the thread list. In this case, the
callback misses the thread stack, and the required protection is
rechecked afterward.
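A hedged sketch of the stack-creation side; MAP_STACK and the rtld
export _rtld_get_stack_prot() are real, the rest is illustrative:

#include <sys/mman.h>
#include <stddef.h>

int	_rtld_get_stack_prot(void);	/* rtld export; prototype assumed here */

/*
 * Map a new thread stack with whatever protection rtld currently
 * requires (PROT_EXEC may have been added by a dlopen()ed dso that
 * needs executable stacks).
 */
static void *
alloc_thread_stack(size_t size)
{
	void *p;

	p = mmap(NULL, size, _rtld_get_stack_prot(), MAP_STACK, -1, 0);
	return (p == MAP_FAILED ? NULL : p);
}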
Reviewed by: davidxu
- Add the flags CVWAIT_ABSTIME and CVWAIT_CLOCKID for the umtx
  kernel-based condition variable; this should eliminate an extra
  system call to get the current time.
- Add the sub-function UMTX_OP_NWAKE_PRIVATE to wake up N channels in a
  single system call. Create a userland sleep queue for the condition
  variable; in most cases a thread will wait in the queue, and
  pthread_cond_signal will defer the thread wakeup until the mutex is
  unlocked. This tries to avoid an extra system call and an extra
  context switch in the time window between pthread_cond_signal and
  pthread_mutex_unlock (see the sketch after this list).
The changes are part of the process-shared mutex project.
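A hedged sketch of the deferred-wakeup idea; UMTX_OP_NWAKE_PRIVATE and
_umtx_op(2) are the real interfaces, while the per-thread bookkeeping
below is illustrative:

#include <sys/types.h>
#include <sys/umtx.h>

#define	MAX_DEFERRED	16		/* illustrative bound */

/* Per-thread list of sleep-queue addresses whose wakeup was deferred. */
static __thread void	*deferred[MAX_DEFERRED];
static __thread int	 ndeferred;

/*
 * pthread_mutex_unlock() path: after the mutex has been released, wake
 * all deferred channels with a single system call.
 */
static void
flush_deferred_wakes(void)
{
	if (ndeferred > 0) {
		(void)_umtx_op(deferred, UMTX_OP_NWAKE_PRIVATE, ndeferred,
		    NULL, NULL);
		ndeferred = 0;
	}
}

/*
 * pthread_cond_signal() path: remember the waiter channel instead of
 * issuing a wakeup syscall while the mutex is still held.
 */
static void
defer_wake(void *chan)
{
	if (ndeferred == MAX_DEFERRED)
		flush_deferred_wakes();
	deferred[ndeferred++] = chan;
}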
a silly rwlock deadlock problem; the deadlock is caused by writer
waiters: if a thread has already locked a reader lock and wants to
acquire another reader lock, it will be blocked by the writer waiters.
But we had already fixed that years ago.
functions set or get the pthread_rwlock type; the currently supported
types are:
	PTHREAD_RWLOCK_PREFER_READER_NP,
	PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP,
	PTHREAD_RWLOCK_PREFER_WRITER_NP,
The default is PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP; this
maintains binary compatibility with old code.
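These are presumably the glibc-compatible
pthread_rwlockattr_setkind_np()/getkind_np() accessors; assuming so, a
minimal usage sketch (the function names and header are assumptions,
not confirmed by the text above):

#include <pthread.h>

/* Hedged sketch: request reader preference before initializing the lock. */
static int
init_reader_preferring_rwlock(pthread_rwlock_t *rw)
{
	pthread_rwlockattr_t attr;
	int error;

	pthread_rwlockattr_init(&attr);
	pthread_rwlockattr_setkind_np(&attr, PTHREAD_RWLOCK_PREFER_READER_NP);
	error = pthread_rwlock_init(rw, &attr);
	pthread_rwlockattr_destroy(&attr);
	return (error);
}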
same null value, the code cannot distinguish between them. To fix
the problem, a destroyed object is now assigned a non-null value, and
it will be rejected by some pthread functions.
PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP is changed to the number 1, so
that an adaptive mutex can be statically initialized correctly.
it is incompatible with the stack unwinding code; if they are
invoked, disable stack unwinding for the current thread, and print a
warning message when the thread is exiting.
for them, two functions, _pthread_cancel_enter and
_pthread_cancel_leave, are added to let a thread enter and leave a
cancellation point; this also makes it possible for other functions in
libraries to be cancellation points without having to be rewritten in
libthr.
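A hedged sketch of how a library routine can become a cancellation
point with these hooks; the blocking primitive is invented for the
example, and the exact meaning of the integer arguments is an
assumption:

void	_pthread_cancel_enter(int maycancel);	/* prototypes assumed */
void	_pthread_cancel_leave(int maycancel);

/* Hypothetical blocking primitive provided by some library. */
extern int	my_blocking_wait(void *obj);

int
my_blocking_wait_cancelpt(void *obj)
{
	int ret;

	_pthread_cancel_enter(1);	 /* safe to cancel: no side effects yet */
	ret = my_blocking_wait(obj);
	_pthread_cancel_leave(ret != 0); /* keep the result if work was done */
	return (ret);
}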
whether asynchronous mode is turned on or not; this always gives us a
chance to decide whether the thread should be canceled or not at
cancellation points.
defer-mode cancellation works; asynchronous mode does not work because
it lacks libunwind's support. Stack unwinding is not enabled unless
LIBTHR_UNWIND_STACK is defined in the Makefile.
For all libthr contexts, use ${MACHINE_CPUARCH};
for all libc contexts, use ${MACHINE_ARCH} if it exists, otherwise use
${MACHINE_CPUARCH}.
Move some common code up a layer (the .PATH statement was the same in
all the arch submakefiles).
# Hope she hasn't busted powerpc64 with this...
add a wrapper for it in libc and rework the code in libthr. The
system call can still return EINTR; we keep this feature.
Discussed on: thread
Reviewed by: jilles
module-private type; when a private-type mutex is locked or unlocked,
a thread critical region is entered or left. These changes make
fork() async-signal safe, which is required by POSIX. Note that the
user's atfork handler still needs to be async-signal safe, but that is
not a problem of libthr; it is the user's responsibility.
some cases we want to improve:
1) If a thread got a signal while in a cancellation point, it is
   possible that the TDP_WAKEUP may be eaten by the signal handler if
   the handler called some interruptible system calls.
2) In a signal handler, we want to disable cancellation.
3) When a thread is holding some low-level locks, it is better to
   disable signals; that code need not worry about reentrancy, and the
   sigprocmask system call is avoided because it is a bit expensive.
The signal handler wrapper works in this way (a sketch of step 2
follows the list):
1) libthr installs its own signal handler if user code invokes
   sigaction to install a handler; the user handler is recorded in an
   internal array.
2) When a signal is delivered, libthr's signal handler is invoked.
   libthr checks whether the thread holds some low-level lock or is in
   a critical region; if so, the signal is buffered and all signals
   are masked. Once the thread leaves the critical region, the correct
   signal mask is restored and the buffered signal is processed.
3) Before the user signal handler is invoked, cancellation is
   temporarily disabled; after the user signal handler returns, the
   cancellation state is restored and any pending cancellation is
   rescheduled.
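A hedged sketch of the deferral path in step 2; the per-thread
structure and helper names are illustrative, not the actual libthr
code:

#include <signal.h>
#include <ucontext.h>

/* Illustrative per-thread state for the sketch. */
struct wrapper_state {
	int		 critical_count;/* > 0 while in a critical region */
	siginfo_t	 deferred_si;	/* buffered signal, si_signo == 0 if none */
	struct sigaction user_act;	/* recorded user handler (one slot per
					   signal in the real implementation) */
};

/* Illustrative stand-ins for the cancellation bookkeeping of step 3. */
static void disable_cancellation_sketch(void) { }
static void restore_cancellation_sketch(void) { }

static void
thr_sighandler_sketch(struct wrapper_state *t, int sig, siginfo_t *si,
    void *ucp)
{
	ucontext_t *uc = ucp;

	if (t->critical_count > 0) {
		/*
		 * Buffer the signal and mask everything in the interrupted
		 * context, so no further signals arrive until the thread
		 * leaves the critical region and replays the buffered one.
		 */
		t->deferred_si = *si;
		sigfillset(&uc->uc_sigmask);
		return;
	}

	/* Step 3: run the user handler with cancellation temporarily off. */
	disable_cancellation_sketch();
	t->user_act.sa_sigaction(sig, si, ucp);
	restore_cancellation_sketch();
}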
atexit and __cxa_atexit handlers that are either installed by an
unloaded dso, or point to functions provided by the dso.
Use _rtld_addr_phdr to locate segment information from the address of
a private variable belonging to the dso, supplied by crtstuff.c.
Provide the utility function __elf_phdr_match_addr to match an address
against the dso's executable segment.
Call back into libthr from __cxa_finalize using the weak
__pthread_cxa_finalize symbol to remove any atfork handler whose
function points into the unloaded object.
The rtld needs a private __pthread_cxa_finalize symbol so that it does
not require resolution of the weak undefined symbol at initialization
time. This cannot work, since rtld is relocated before sym_zero is set
up.
Idea by: kan
Reviewed by: kan (previous version)
MFC after: 3 weeks
which does not know what the state of the interrupted system call is.
For example, the open() system call opened a file but the thread is
still cancelled, and the result is a descriptor leak; there are other
problems which can cause a resource leak or an undeterminable side
effect when a thread is cancelled. However, this is no longer true in
the new implementation.
In deferred mode, a thread is canceled if a cancellation request is
pending and the thread later enters a cancellation point; otherwise, a
later pthread_cancel() just causes SIGCANCEL to be sent to the target
thread and makes the target thread abort the system call. Userland
code in libthr then checks the cancellation state and cancels the
thread if needed. For example, in the cancellation point open(), the
thread may be canceled at the start, but later, once it has opened a
file descriptor, it is not canceled; this avoids a file handle leak.
Another example is read(): a thread may be canceled at the start of
the function, but later, once it has read some bytes from a socket,
the thread is not canceled. The caller can then decide whether to keep
cancellation enabled, or disable it and continue reading data until it
thinks it has read all bytes of a packet, keeping the protocol stream
in a healthy state; if the user ignores a partial read of a packet
without disabling cancellation, the second iteration of the read loop
may cause the thread to be cancelled.
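For illustration, a hedged sketch of the reader-side discipline just
described; the fixed-size packet framing is invented for the example:

#include <pthread.h>
#include <unistd.h>

static ssize_t
read_packet(int fd, unsigned char *buf, size_t pktlen)
{
	size_t got = 0;
	ssize_t n = 0;
	int oldstate = PTHREAD_CANCEL_ENABLE;

	while (got < pktlen) {
		n = read(fd, buf + got, pktlen - got);	/* cancellation point */
		if (n <= 0)
			break;				/* error or EOF */
		if (got == 0) {
			/*
			 * Part of a packet is now consumed: block
			 * cancellation until the frame is complete, so a
			 * cancel cannot leave the stream mid-frame.
			 */
			pthread_setcancelstate(PTHREAD_CANCEL_DISABLE,
			    &oldstate);
		}
		got += (size_t)n;
	}
	if (got > 0)
		pthread_setcancelstate(oldstate, NULL);
	return (got > 0 ? (ssize_t)got : n);
}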
An exception is that the close() cancellation point always closes the
file handle, regardless of whether the thread is cancelled or not.
The old mechanism is still kept; for functions whose cancellation
problems are not so easy to fix, the rough mechanism is used.
Reviewed by: kib@
is acted upon, or when a thread calls pthread_exit(), the thread first
disables cancellation by setting its cancelability state to
PTHREAD_CANCEL_DISABLE and its cancelability type to
PTHREAD_CANCEL_DEFERRED. The cancelability state remains set to
PTHREAD_CANCEL_DISABLE until the thread has terminated.
It has no effect if a cancellation cleanup handler or thread-specific
data destructor routine changes the cancelability state to
PTHREAD_CANCEL_ENABLE.