Commit Graph

541 Commits

Author SHA1 Message Date
Jilles Tjoelker
b18943f3b4 libthr: Always use the threaded rtld lock implementation.
The threaded rtld lock implementation is faster even in the single-threaded
case because it postpones signal handlers via THR_CRITICAL_ENTER and
THR_CRITICAL_LEAVE instead of calling sigprocmask(2).

As a result, exception handling becomes faster in single-threaded
applications linked with libthr.

Reviewed by:	kib
2013-01-18 23:08:40 +00:00
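A minimal sketch of the idea behind the speedup, assuming the usual critical-section-counter pattern; the names below are illustrative only, and the real THR_CRITICAL_ENTER/THR_CRITICAL_LEAVE macros live in libthr's thr_private.h:

	#include <signal.h>

	static __thread int critical_count;	/* per-thread nesting counter */

	/* Old single-threaded path: mask and restore signals around each rtld
	 * lock, costing two sigprocmask(2) syscalls per acquire/release pair. */
	static void
	rtld_lock_masked(sigset_t *oldset)
	{
		sigset_t all;

		sigfillset(&all);
		sigprocmask(SIG_BLOCK, &all, oldset);
	}

	static void
	rtld_unlock_masked(const sigset_t *oldset)
	{
		sigprocmask(SIG_SETMASK, oldset, NULL);
	}

	/* Threaded path: only bump a counter; the libthr signal wrapper records
	 * any signal that arrives while the counter is nonzero and delivers it
	 * when the counter drops back to zero.  No syscall on the fast path. */
	static void
	rtld_lock_deferred(void)
	{
		critical_count++;
	}

	static void
	rtld_unlock_deferred(void)
	{
		if (--critical_count == 0) {
			/* deliver any signal deferred during the critical section */
		}
	}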
David Xu
a7b84c6512 In suspend_common(), don't wait for a thread that is still being created, because
pthread_suspend_all_np() may have already suspended its parent thread.
Add locking code in pthread_suspend_all_np() so that only one thread
can suspend other threads; this eliminates a deadlock where two or more
threads try to suspend each other.
2012-08-27 03:09:39 +00:00
David Xu
0aa81bff0b Eliminate redundant code: _thr_spinlock_init() has already been called
in init_private(), so don't call it again in the fork() wrapper.
2012-08-23 05:15:15 +00:00
David Xu
d65f1abca7 Implement the syscall clock_getcpuclockid2, so we can get a clock ID
for a process, a thread, or anything else we want to support.
Use the syscall to implement the POSIX APIs clock_getcpuclockid and
pthread_getcpuclockid.

PR:	168417
2012-08-17 02:26:31 +00:00
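A small usage sketch of the POSIX-facing side this syscall now backs; only standard APIs are used, nothing libthr-internal is assumed:

	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>

	int
	main(void)
	{
		clockid_t cid;
		struct timespec ts;

		/* Obtain the CPU-time clock of the calling thread... */
		if (pthread_getcpuclockid(pthread_self(), &cid) != 0)
			return (1);
		/* ...and read it like any other clock. */
		if (clock_gettime(cid, &ts) != 0)
			return (1);
		printf("thread CPU time: %jd.%09ld s\n",
		    (intmax_t)ts.tv_sec, ts.tv_nsec);
		return (0);
	}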
Oleksandr Tymoshenko
89e757fe6f Merging of projects/armv6, part 2
Handle TLS for ARMv6 and ARMv7
2012-08-15 03:08:29 +00:00
David Xu
aa75bc577a Do deferred mutex wakeup once. 2012-08-12 00:56:56 +00:00
David Xu
e220a13ab9 MFp4:
Further decrease unexpected context switches by deferring mutex wakeup
until the internal sleep-queue lock is released.
2012-08-11 23:17:02 +00:00
David Xu
5674256c7f Don't forget to initialize the return value. 2012-07-20 05:47:12 +00:00
David Xu
ec225efc58 Simplify code by replacing _thr_ref_add() with _thr_find_thread(). 2012-07-20 03:37:19 +00:00
David Xu
340e384de9 Eliminate duplicated code. 2012-07-20 03:27:07 +00:00
David Xu
30dd4f448c Don't assign the same value. 2012-07-20 03:22:17 +00:00
David Xu
670bc18dfe Eliminate duplicated code. 2012-07-20 03:16:52 +00:00
David Xu
7e0cf81bc9 Eliminate duplicated code. 2012-07-20 03:00:41 +00:00
David Xu
12dbbf86f8 Don't forget to release a thread reference count: replace _thr_ref_add()
with _thr_find_thread(), so the reference count is no longer needed.

MFC after:	3 days
2012-07-20 01:56:14 +00:00
David Xu
e3b090f037 Return EBUSY for PTHREAD_MUTEX_ADAPTIVE_NP too when the mutex could not
be acquired.

PR:	168317
MFC after:	3 days
2012-05-27 01:24:51 +00:00
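A hedged sketch of the behavior this change establishes, assuming PTHREAD_MUTEX_ADAPTIVE_NP is available as a mutex type on the target system:

	#include <errno.h>
	#include <pthread.h>
	#include <stdio.h>

	int
	main(void)
	{
		pthread_mutexattr_t attr;
		pthread_mutex_t m;

		pthread_mutexattr_init(&attr);
		pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
		pthread_mutex_init(&m, &attr);

		pthread_mutex_lock(&m);
		/* The mutex is already held, so trylock cannot acquire it and
		 * should fail with EBUSY, just like the other mutex types. */
		if (pthread_mutex_trylock(&m) == EBUSY)
			printf("EBUSY, as expected\n");

		pthread_mutex_unlock(&m);
		pthread_mutex_destroy(&m);
		pthread_mutexattr_destroy(&attr);
		return (0);
	}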
David Xu
fa782a2611 Create a common function, lookup(), to search for a chan; this eliminates
redundant SC_LOOKUP() calls.
2012-05-10 09:30:37 +00:00
David Xu
173943ace3 Fix a mis-merged line: move the SC_LOOKUP() call to the
upper level.
2012-05-05 23:51:24 +00:00
David Xu
84ac0fb8ca MFp4:
Enqueue threads in LIFO order; this can cause starvation, but it gives better
performance. Use _thr_queuefifo to control the frequency of FIFO vs. LIFO
queueing; the environment variable LIBPTHREAD_QUEUE_FIFO can be used to
configure the setting.
2012-05-03 09:17:31 +00:00
George V. Neville-Neil
6e047a2426 Set SIGCANCEL to SIGTHR as part of some cleanup of DTrace code.
Reviewed by:	davidxu@
MFC after:	1 week
2012-04-18 16:29:55 +00:00
David Xu
17ce606321 The umtx operation UMTX_OP_MUTEX_WAKE has a side effect: it accesses
a mutex after a thread has unlocked it, and it even writes to the mutex
memory to clear the contention bit. There is a race in which other threads
can lock the mutex, unlock it, and then destroy it, so the operation should
not write to the mutex memory if there are no waiters.
The new operation UMTX_OP_MUTEX_WAKE2 tries to fix the problem. It
requires the thread library to clear the lock word entirely, then
call the WAKE2 operation to check whether there is any waiter in the kernel
and try to wake one up; if necessary, the contention bit is set again
by the operation. This also mitigates the chance that other threads see
the contention bit and enter the kernel to compete with each other
to wake up the sleeping thread, which is unnecessary. With this change, the
mutex owner no longer holds the mutex until it reaches the point where the
kernel umtx queue is locked; it releases the mutex as soon as possible.
Performance is improved when the mutex is heavily contested. On an Intel
i3-2310M, the runtime of a benchmark program is reduced from 26.87 seconds
to 2.39 seconds, which is even better than UMTX_OP_MUTEX_WAKE, now
deprecated. http://people.freebsd.org/~davidxu/bench/mutex_perf.c
2012-04-05 02:24:08 +00:00
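A simplified sketch of the unlock protocol described above; this is not the actual libthr code, and atomic_readandclear_32() merely stands in for whatever atomic the real implementation uses:

	#include <sys/types.h>
	#include <sys/umtx.h>
	#include <machine/atomic.h>

	static void
	mutex_unlock_sketch(struct umutex *m, uint32_t flags)
	{
		uint32_t owner;

		/* Clear the lock word entirely; ownership is gone from here on. */
		owner = atomic_readandclear_32((volatile uint32_t *)&m->m_owner);

		/* Only if the old word carried the contention bit does the kernel
		 * need to look at its sleep queue.  UMTX_OP_MUTEX_WAKE2 wakes one
		 * waiter and re-sets the contention bit itself if more remain, so
		 * userland never writes to a possibly-destroyed mutex. */
		if ((owner & UMUTEX_CONTESTED) != 0)
			_umtx_op(m, UMTX_OP_MUTEX_WAKE2, flags, NULL, NULL);
	}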
Jilles Tjoelker
91792417bb libthr: In the atfork handlers for signals, do not skip the last signal.
_SIG_MAXSIG works a bit unexpectedly: signals 1 through _SIG_MAXSIG are valid,
both bounds inclusive.

Reviewed by:	davidxu
MFC after:	1 week
2012-03-26 17:05:26 +00:00
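The corrected iteration pattern, as a minimal sketch (assuming _SIG_MAXSIG is visible via <signal.h>, as it is on FreeBSD):

	#include <signal.h>

	static void
	for_each_signal(void (*fn)(int))
	{
		int sig;

		/* Signals 1.._SIG_MAXSIG are valid, both bounds inclusive,
		 * so the loop condition must be <=, not <. */
		for (sig = 1; sig <= _SIG_MAXSIG; sig++)
			fn(sig);
	}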
David Xu
81cd726a95 Use the clockid parameter instead of the hard-coded CLOCK_REALTIME.
Reported by:	pjd
2012-03-19 00:07:10 +00:00
David Xu
1b008f5e51 Some software assumes a mutex can be destroyed right after it has been
owned; for example, it uses a serialization point like the following:
	pthread_mutex_lock(&mutex);
	pthread_mutex_unlock(&mutex);
	pthread_mutex_destroy(&mutex);
It assumes that a previous lock holder must have already left the mutex and
is no longer referencing it, so it destroys the mutex. To be maximally
compatible with such code, use the IA64 version to unlock the mutex in the
kernel and remove the two-step unlocking code.
2012-03-18 00:22:29 +00:00
David Xu
e70bf9d5eb When destroying a barrier, wait for all threads to exit the barrier;
this makes it possible for a thread that received PTHREAD_BARRIER_SERIAL_THREAD
to immediately free the memory area of the barrier.
2012-03-16 04:35:52 +00:00
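The usage pattern this change makes safe, sketched with a heap-allocated barrier (the malloc(3)-backed allocation is an assumption for illustration):

	#include <pthread.h>
	#include <stdlib.h>

	/* Called by every thread participating in the barrier.  The one thread
	 * that gets PTHREAD_BARRIER_SERIAL_THREAD may tear the barrier down
	 * immediately, because the implementation now waits for the others to
	 * leave the barrier first. */
	static void
	sync_and_maybe_destroy(pthread_barrier_t *b)
	{
		if (pthread_barrier_wait(b) == PTHREAD_BARRIER_SERIAL_THREAD) {
			pthread_barrier_destroy(b);
			free(b);
		}
	}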
Oleksandr Tymoshenko
34e3f7e717 - Switch to saving a non-offset pointer to the TLS block in order to keep things simple 2012-03-06 03:27:58 +00:00
David Xu
24c209494a Follow the changes made in revision 232144 and pass an absolute timeout to the
kernel; this eliminates a clock_gettime() syscall.
2012-02-27 13:38:52 +00:00
David Xu
df1f1bae9e In revision 231989, we passed a 16-bit clock ID into the kernel; however,
according to the POSIX document, clock IDs may be dynamically allocated,
so they are unlikely to stay below 64K forever. To make this future-compatible,
pack all timeout information into a new structure called _umtx_time, and
use the fourth argument as a size indication: zero means old code is
passing a timespec as the timeout value, while the new structure, which also
includes flags and a clock ID, has a different, non-zero size. With this
change, it is possible for a thread to sleep on any supported clock, though
the current kernel code does not have such a POSIX clock driver system.
2012-02-25 02:12:17 +00:00
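The structure as described reads roughly as follows; this is a sketch only, and the authoritative definition lives in the kernel's umtx headers:

	#include <stdint.h>
	#include <time.h>

	struct _umtx_time {
		struct timespec	_timeout;	/* relative or absolute timeout */
		uint32_t	_flags;		/* e.g. absolute-time flag */
		uint32_t	_clockid;	/* clock the timeout is measured on */
	};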
David Xu
b13a8fa78f Use the unused fourth argument of umtx_op to pass flags to the kernel for the
UMTX_OP_WAIT operation. The upper 16 bits are enough to hold a clock ID, and
the lower 16 bits are used to pass flags. The change saves a clock_gettime()
syscall in libthr.
2012-02-22 03:22:49 +00:00
David Xu
879d152454 Check that both seconds and nanoseconds are zero; checking only that
nanoseconds are zero may trigger the timeout too early. It seems to be a
copy-and-paste bug.
2012-02-19 08:17:14 +00:00
Oleksandr Tymoshenko
8ecdc98b5b Add thread-local storage support for arm:
- Switch to Variant I TCB layout
- Use function from rtld for TCB allocation/deallocation
2012-02-14 00:17:43 +00:00
David Xu
4c91ddd690 Make the code more stable by checking for NULL pointers. 2012-02-11 04:12:12 +00:00
Oleksandr Tymoshenko
dda3ee8770 Switch the MIPS TLS implementation to Variant I:
save the pointer to the TLS structure, taking into account TP_OFFSET
and the TCB structure size.
2012-02-10 06:53:25 +00:00
David Xu
e7004bf44d Plug a memory leak. When a cached thread is reused, don't clear its sleep
queue pointers; just reuse them.

PR:		164828
MFC after:	1 week
2012-02-07 02:57:36 +00:00
Konstantin Belousov
10280ca601 Use the getcontextx(3) internal API instead of getcontext(2) to provide
the signal handlers with the context information in the deferred
case.

Only enable the use of getcontextx(3) in the deferred signal delivery
code on amd64 and i386. Sparc64 seems to have some undetermined issues
with the interaction of alloca(3) and signal delivery.

Tested by:	flo (who also provided sparc64 hardware access for me), pho
Discussed with:	marius
MFC after:	1 month
2012-01-21 18:06:18 +00:00
Dimitry Andric
b34d83a709 The TCB_GET32() and TCB_GET64() macros in the i386 and amd64-specific
versions of pthread_md.h have a special case of dereferencing a null
pointer.  Clang warns about this with:

In file included from lib/libthr/arch/i386/i386/pthread_md.c:36:
lib/libthr/arch/i386/include/pthread_md.h:96:10: error: indirection of non-volatile null pointer will be deleted, not trap [-Werror,-Wnull-dereference]
        return (TCB_GET32(tcb_self));
                ^~~~~~~~~~~~~~~~~~~
lib/libthr/arch/i386/include/pthread_md.h:73:13: note: expanded from:
            : "m" (*(u_int *)(__tcb_offset(name))));            \
                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/libthr/arch/i386/include/pthread_md.h:96:10: note: consider using __builtin_trap() or qualifying pointer with 'volatile'

Since this indirection is done relative to the fs or gs segment in order to
retrieve thread-specific data, it is an exception to the rule.

Therefore, add a volatile qualifier to tell the compiler we really want
to dereference a zero address.

MFC after:	1 week
2011-12-15 19:42:25 +00:00
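A simplified i386 illustration of the pattern after the fix; this is illustrative only, and the real code is the TCB_GET32()/TCB_GET64() macro definitions in pthread_md.h:

	#include <sys/types.h>

	/* Read the 32-bit word at offset 0 of the TCB.  The address is
	 * %gs-relative, so it is not a real null-pointer dereference; the
	 * volatile qualifier keeps clang from deleting the access. */
	static inline u_int
	tcb_read_word0(void)
	{
		u_int val;

		__asm __volatile("movl %%gs:%1, %0"
		    : "=r" (val)
		    : "m" (*(volatile u_int *)0));
		return (val);
	}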
David Xu
7859df8e67 Pass CVWAIT flags to the kernel; this should handle the
timeout correctly for pthread_cond_timedwait when
it uses a kernel-based condition variable.

PR:	162403
Submitted by: jilles
MFC after: 3 days
2011-11-17 01:43:50 +00:00
Alexander Kabaev
a805bbe21a Do not set the thread name to the less-than-informative 'initial thread'. 2011-06-19 13:35:36 +00:00
Marius Strobl
6ce2f878d3 Merge from r161730:
o  Set TP using inline assembly to avoid dead code elimination.
o  Eliminate _tcb.

Merge from r161840:
Stylize: avoid using a global register variable.

Merge from r157461:
Simplify _get_curthread() and _tcb_ctor because libc and rtld now
already allocate thread pointer space in the TLS block for the initial thread.

Merge from r177853:
Replace the function _umtx_op with _umtx_op_err; the latter returns the error
number directly. Because errno can be clobbered by a user's signal handler and
most of the pthread API heavily depends on errno being correct, this change
should improve the stability of the thread library.

MFC after:	1 week
2011-06-18 11:07:09 +00:00
Ryan Stone
aad93b043a r179417 introduced a bug into pthread_once(). Previously pthread_once()
used a global pthread_mutex_t for synchronization.  r179417 replaced that
with an implementation that directly used atomic instructions and thr_*
syscalls to synchronize callers to pthread_once.  However, calling
pthread_mutex_lock on the global mutex implicitly ensured that
_thr_check_init() had been called but with r179417 this was no longer
guaranteed.  This meant that if you were unlucky enough to have your first
call into libthr be a call to pthread_once(), you would segfault when
trying to access the pointer returned by _get_curthread().

The fix is to explicitly call _thr_check_init() from pthread_once().

Reviewed by:	davidxu
Approved by:	emaste (mentor)
MFC after:	1 week
2011-04-20 14:19:34 +00:00
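The failing pattern the fix addresses, as a minimal sketch: a program whose very first call into libthr is pthread_once().

	#include <pthread.h>

	static pthread_once_t once = PTHREAD_ONCE_INIT;

	static void
	do_init(void)
	{
		/* one-time setup */
	}

	int
	main(void)
	{
		/* Before the fix, libthr could reach _get_curthread() here
		 * without _thr_check_init() having run, and crash. */
		pthread_once(&once, do_init);
		return (0);
	}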
Jung-uk Kim
678b238c85 Introduce a non-portable function, pthread_getthreadid_np(3), to retrieve
the calling thread's unique integral ID, which is similar to the AIX function
of the same name.  Bump __FreeBSD_version to note its introduction.

Reviewed by:	kib
2011-02-07 21:26:46 +00:00
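A minimal usage sketch of the new interface, which is declared in <pthread_np.h> on FreeBSD:

	#include <pthread.h>
	#include <pthread_np.h>
	#include <stdio.h>

	int
	main(void)
	{
		/* Retrieve the calling thread's unique integral ID. */
		printf("thread id: %d\n", pthread_getthreadid_np());
		return (0);
	}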
David Xu
65a6aaf1f3 Fix a typo.
Submitted by:	avg
2011-01-11 01:57:02 +00:00
Konstantin Belousov
fad128db86 For a process that has already loaded libthr but has not yet initialized
threading, fall back to the libc method of performing the
__pthread_map_stacks_exec() job.

Reported and tested by:	Mykola Dzham <i levsha me>
2011-01-10 16:10:25 +00:00
Konstantin Belousov
da2fcff746 Implement __pthread_map_stacks_exec() for libthr.
The stack creation code is changed to call _rtld_get_stack_prot() to get
the stack protection right. There is a race where a thread is created
during dlopen() of a DSO that requires executable stacks. Then,
_rtld_get_stack_prot() may return PROT_READ | PROT_WRITE while the thread
is still not linked into the thread list. In this case, the callback
misses the thread stack, and the required protection is rechecked
afterward.

Reviewed by:	davidxu
2011-01-09 12:38:40 +00:00
Konstantin Belousov
6c69d05232 Add a .note.GNU-stack section to the assembly files used by i386 and amd64. 2011-01-07 16:09:33 +00:00
David Xu
ebc8e8fd7f Return 0 instead of a garbage value.
Found by:	clang static analyzer
2011-01-06 08:13:30 +00:00
David Xu
1f6f22dfec Because the sleepqueue may still be in use, we should always check wchan
with the queue locked.
2011-01-04 05:35:19 +00:00
David Xu
e29ba4c2db Always clear the PMUTEX_FLAG_DEFERED flag when unlocking, as it is only
significant for the lock owner.
2010-12-24 07:41:39 +00:00
David Xu
0126aea6ad Add sleep queue code. 2010-12-22 05:03:24 +00:00
David Xu
d1078b0b03 MFp4:
- Add flags CVWAIT_ABSTIME and CVWAIT_CLOCKID for the umtx kernel-based
  condition variable; this should eliminate an extra system call to get
  the current time.

- Add a sub-function UMTX_OP_NWAKE_PRIVATE to wake up N channels in a single
  system call. Create a userland sleep queue for condition variables; in most
  cases, a thread will wait in the queue, and pthread_cond_signal will defer
  the thread wakeup until the mutex is unlocked. This avoids an extra
  system call and an extra context switch in the time window between
  pthread_cond_signal and pthread_mutex_unlock.

The changes are part of the process-shared mutex project.
2010-12-22 05:01:52 +00:00
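The window the deferred wakeup targets, as a small sketch: signal while still holding the mutex, then unlock. Waking the waiter only at unlock time keeps it from waking up just to block on the still-held mutex.

	#include <pthread.h>

	static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
	static int ready;

	static void
	produce(void)
	{
		pthread_mutex_lock(&m);
		ready = 1;
		pthread_cond_signal(&cv);	/* wakeup recorded, not yet issued */
		pthread_mutex_unlock(&m);	/* deferred wakeup happens here */
	}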
David Xu
1d1486408b Use the sysctl kern.sched.cpusetsize to retrieve the size of the kernel cpuset. 2010-11-02 02:13:13 +00:00