the whole RLIMIT_STACK-sized region of the kernel-allocated stack as
the stack of the main thread.
By default, the main thread stack is clamped at 2MB (4MB on 64-bit
ABIs) and the rest of the region is used for other threads' stack
allocations. Since there is no programmatic way to adjust the size of
the main thread stack (pthread_attr_setstacksize() comes too late),
the knob allows the user to manage the main stack size for both
single-threaded and multi-threaded processes with the rlimit.
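For illustration only (not part of the change), a minimal sketch of
managing the main-thread stack via the rlimit: the limit has to be in
place before the program starts, here set by a parent process before
exec, since the already-mapped main stack cannot be grown from inside
the process. The program path is hypothetical and error handling is
omitted.

#include <sys/resource.h>
#include <unistd.h>

int
main(void)
{
        struct rlimit rl = {
                .rlim_cur = 64UL << 20,         /* 64MB stack */
                .rlim_max = 64UL << 20,
        };

        /*
         * Set RLIMIT_STACK before exec; with the knob described above
         * enabled, the exec'ed program can use the whole region for
         * its main thread stack.
         */
        setrlimit(RLIMIT_STACK, &rl);
        execl("/usr/local/bin/recursive-tool", "recursive-tool",
            (char *)NULL);
        return (1);
}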
Reported by: "Ivan A. Kosarev" <ivan@ivan-labs.com>
Tested by: dim
Sponsored by: The FreeBSD Foundation
MFC after: 3 days
This includes:
o All directories named *ia64*
o All files named *ia64*
o All ia64-specific code guarded by __ia64__
o All ia64-specific makefile logic
o Mention of ia64 in comments and documentation
This excludes:
o Everything under contrib/
o Everything under crypto/
o sys/xen/interface
o sys/sys/elf_common.h
Discussed at: BSDcan
mode. This allows the binder to be functional in the child after the
fork (assuming no lazy loading of a filter is needed), but other rtld
services which require the write lock on rtld_bind_lock deadlock if
called by the child.
Change the _rtld_atfork() to lock the bind lock in write mode, making
the rtld fully functional after the fork.
Pre-resolve the symbols which are called by the libthr fork()
interposer, since dynamic resolution would deadlock because
rtld_bind_lock is already owned in write mode.
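As a hypothetical illustration of what the change enables (not code
from the tree): an atfork child handler that needs an rtld service
taking the write lock, such as dlopen(3), no longer deadlocks in the
forked child.

#include <dlfcn.h>
#include <pthread.h>
#include <stddef.h>

static void
child_handler(void)
{
        void *h;

        /*
         * dlopen() needs rtld services protected by the write lock;
         * with the change above this works in the fork child.
         */
        h = dlopen("libm.so", RTLD_NOW);
        if (h != NULL)
                dlclose(h);
}

static void
setup_atfork(void)
{
        (void)pthread_atfork(NULL, NULL, child_handler);
}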
Reported and tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
the signal a second time, by adding the missing else before the if
statement. While there, postpone initializing the local curthread
variable until the passed signal number has been checked for validity.
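A generic, hypothetical illustration of the control-flow bug (not the
libthr sources): without the else, the branch that handles the current
context also falls through into the second send, so the signal is
delivered twice; the validity check also runs before anything else.

#include <sys/types.h>
#include <signal.h>

static void
send_sig(pid_t target, pid_t self, int sig)
{
        if (sig <= 0)
                return;                 /* validate before anything else */
        if (target == self)
                raise(sig);             /* deliver to ourselves */
        else                            /* the previously missing else */
                kill(target, sig);      /* deliver to the other process */
}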
Submitted by: John Wolfe <jlw@xinuos.com>
PR: threads/186309
MFC after: 1 week
unlocking the rtld bind lock results in the processing of the ast and
recursion into check_deferred_signal(). The nested execution of
check_deferred_signal() delivers the signal to user code and clears
si_signo. On return, the top-level check_deferred_signal() frame
continues delivering the same signal one more time, but now with a
zero si_signo.
Fix this by adding a flag to indicate that deferred delivery is
already running, so a nested check_deferred_signal() does nothing.
Since the user signal handler is allowed to modify the passed machine
context to make the return from the signal handler perform an
arbitrary jump, or to do longjmp(), also clear the flag in
thr_sighandler(): kernel signal delivery means that the nested
delivery code should not run right now.
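A minimal sketch of the guard, assuming a flag field on the thread
structure; the names 'deferred_run' and 'deferred_signo' and the
surrounding details are illustrative, not the exact libthr code.

struct sketch_thread {
        int     deferred_run;           /* delivery in progress */
        int     deferred_signo;         /* pending deferred signal */
};

static void
check_deferred_signal(struct sketch_thread *curthread)
{
        if (curthread->deferred_run)
                return;                 /* nested call: do nothing */
        curthread->deferred_run = 1;
        /* ... build a context and deliver deferred_signo to user code ... */
        curthread->deferred_signo = 0;
        curthread->deferred_run = 0;    /* also cleared in thr_sighandler() */
}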
Reported by: Vitaly Magerya <vmagerya@gmail.com>
Reviewed by: davidxu, jilles
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
cancellation point. When enabling cancellation, only act upon a
pending cancellation request if the cancellation type is asynchronous.
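For illustration (a usage sketch, not part of the change): code
written for the default deferred cancellation type relies on exactly
this, i.e. re-enabling cancellation is not itself a cancellation
point, so a pending cancel is only honoured at the next cancellation
point.

#include <pthread.h>
#include <unistd.h>

void
update_state(void)
{
        int oldstate;

        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
        /* ... modify state that must never be left half-updated ... */
        pthread_setcancelstate(oldstate, NULL); /* no cancel acted on here */
        sleep(1);       /* a pending cancel is honoured here instead */
}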
Reported and reviewed by: Kohji Okuno <okuno.kohji@jp.panasonic.com>
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
identified, unify the code of check_deferred_signal() for all
architectures, making the variant under #ifdef x86 common.
Tested by: marius (sparc64)
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
check_deferred_signal() returns twice, since handle_signal() emulates
the return from a normal signal handler by sigreturn(2)ing the passed
context. The second return is performed on a destroyed stack frame,
because __fillcontextx() has already returned. This causes undefined
and bad behaviour; usually the victim thread gets SIGSEGV.
Avoid the nested frame and the need to return from it by calling
getcontext() directly in check_deferred_signal() and using a new
private libc helper, __fillcontextx2(), to complement the context with
the extended CPU state if the deferred signal is still present.
The __fillcontextx() is now unused, but is kept to allow older
libthr.so to be used with the new libc.
Mark __fillcontextx() as returning twice [1].
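A self-contained illustration of the returns-twice property the fix
relies on (a sketch, not the libthr code): after the saved context is
resumed, control comes back to the getcontext() call site, and the
caller must distinguish the two returns.

#include <stdio.h>
#include <ucontext.h>

static volatile int delivered;

int
main(void)
{
        ucontext_t uc;

        getcontext(&uc);
        if (!delivered) {
                delivered = 1;          /* mark before resuming the context */
                printf("first return: pretend to deliver the signal\n");
                setcontext(&uc);        /* stands in for sigreturn(2) of uc */
        }
        printf("second return: nothing left to deliver\n");
        return (0);
}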
Reported by: pgj
Pointy hat to: kib
Discussed with: dim
Tested by: pgj, dim
Suggested by: jilles [1]
MFC after: 1 week
The accept4() function, compared to accept(), allows setting the new file
descriptor atomically close-on-exec and explicitly controlling the
non-blocking status on the new socket. (Note that the latter point means
that accept() is not equivalent to any form of accept4().)
The linuxulator's accept4 implementation leaves a race window where the new
file descriptor is not close-on-exec because it calls sys_accept(). This
implementation leaves no such race window (by using falloc() flags). The
linuxulator could be fixed and simplified by using the new code.
Like accept(), accept4() is async-signal-safe, a cancellation point and
permitted in capability mode.
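For illustration, a typical call combining both flags (a usage
example, not part of the change):

#include <sys/socket.h>

int
accept_cloexec_nonblock(int s)
{
        struct sockaddr_storage ss;
        socklen_t sslen = sizeof(ss);

        /*
         * The new descriptor is atomically close-on-exec and
         * non-blocking; there is no window for a concurrent exec to
         * leak it and no extra fcntl(2) calls are needed.
         */
        return (accept4(s, (struct sockaddr *)&ss, &sslen,
            SOCK_CLOEXEC | SOCK_NONBLOCK));
}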
The threaded rtld lock implementation is faster even in the single-threaded
case because it postpones signal handlers via THR_CRITICAL_ENTER and
THR_CRITICAL_LEAVE instead of calling sigprocmask(2).
As a result, exception handling becomes faster in single-threaded
applications linked with libthr.
Reviewed by: kib
pthread_suspend_all_np() may have already suspended its parent thread.
Add locking code in pthread_suspend_all_np() to allow only one thread
to suspend other threads; this eliminates a deadlock where two or more
threads try to suspend each other.
Enqueue the thread in LIFO order; this can cause starvation, but gives
better performance. Use _thr_queuefifo to control the frequency of
FIFO vs. LIFO queuing; the environment variable LIBPTHREAD_QUEUE_FIFO
can be used to configure it.
a mutex after a thread has unlocked it; it even writes data to the
mutex memory afterwards to clear the contention bit. There is a race
where other threads can lock the mutex, unlock it, and then destroy
it, so it should not write data to the mutex memory if there is no
waiter.
The new operation UMTX_OP_MUTEX_WAKE2 tries to fix the problem. It
requires the thread library to clear the lock word entirely, then
call the WAKE2 operation to check whether there is any waiter in the
kernel and try to wake up a thread; if necessary, the contention bit
is set again by the operation. This also mitigates the chance that
other threads find the contention bit set and enter the kernel to
compete with each other to wake up the sleeping thread, which is
unnecessary. With this change, the mutex owner no longer holds the
mutex until it reaches the point where the kernel umtx queue is
locked; it releases the mutex as soon as possible.
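A minimal sketch of the unlock path this enables; the real libthr code
handles protocol variants and errors that are omitted here, and the
helper below is purely illustrative.

#include <sys/types.h>
#include <sys/umtx.h>
#include <machine/atomic.h>

static void
unlock_sketch(struct umutex *m)
{
        uint32_t old;

        /*
         * Clear the whole lock word; from this point on the owner no
         * longer references the mutex memory.
         */
        old = atomic_swap_32((volatile uint32_t *)&m->m_owner,
            UMUTEX_UNOWNED);
        if ((old & UMUTEX_CONTESTED) != 0)
                /*
                 * Ask the kernel to wake one waiter; it sets the
                 * contention bit again itself if waiters remain.
                 */
                (void)_umtx_op(m, UMTX_OP_MUTEX_WAKE2, m->m_flags,
                    NULL, NULL);
}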
Performance is improved when the mutex is heavily contended. On an
Intel i3-2310M, the runtime of a benchmark program is reduced from
26.87 seconds to 2.39 seconds; it is even better than
UMTX_OP_MUTEX_WAKE, which is now deprecated.
http://people.freebsd.org/~davidxu/bench/mutex_perf.c
example, it uses a serialization point like the following:
pthread_mutex_lock(&mutex);
pthread_mutex_unlock(&mutex);
pthread_mutex_destroy(&mutex);
They think a previous lock holder should have already left the mutex
and is no longer referencing it, so they destroy it. To be maximally
compatible with such code, we use the IA64 version to unlock the mutex
in the kernel, and remove the two-step unlocking code.