Commit Graph

516 Commits

David Xu
24c209494a Follow the changes made in revision 232144: pass an absolute timeout to the kernel.
This eliminates a clock_gettime() syscall.
2012-02-27 13:38:52 +00:00
David Xu
df1f1bae9e In revision 231989, we passed a 16-bit clock ID into the kernel; however,
according to the POSIX specification, clock IDs may be dynamically allocated,
so they are unlikely to stay within 64K values forever. To make this
future-compatible, pack all timeout information into a new structure called
_umtx_time, and use the fourth argument as a size indication: zero means old
code passing a plain timespec as the timeout value, while the new structure
also carries flags and a clock ID, so its size differs from before and is
non-zero. With this change, a thread can sleep on any supported clock,
though the current kernel code does not yet have such a POSIX clock driver
system.
2012-02-25 02:12:17 +00:00
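
For illustration, a minimal sketch of the packing scheme described above; the field names follow the commit text, and the authoritative definitions live in the kernel's sys/umtx.h:

    /* Sketch only: field names follow the commit text. */
    struct _umtx_time {
            struct timespec _timeout;  /* relative or absolute timeout */
            uint32_t        _flags;    /* e.g. an "absolute time" flag */
            uint32_t        _clockid;  /* CLOCK_REALTIME, CLOCK_MONOTONIC, ... */
    };

    /* The fourth _umtx_op() argument now carries a size: zero means the old
     * convention (the timeout pointer is a plain struct timespec); a
     * non-zero size means it points at a struct _umtx_time. */
    struct _umtx_time tmo = { /* ... */ };
    _umtx_op(obj, UMTX_OP_WAIT, val,
        (void *)(uintptr_t)sizeof(tmo), &tmo);
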
David Xu
b13a8fa78f Use the unused fourth argument of umtx_op to pass flags to the kernel for
operation UMTX_OP_WAIT. The upper 16 bits are enough to hold a clock ID, and
the lower 16 bits are used to pass flags. This change saves a clock_gettime()
syscall in libthr.
2012-02-22 03:22:49 +00:00
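
As a rough sketch of that (since superseded) encoding, assuming the clock ID occupies the upper half and the flags the lower half of a 32-bit word:

    #include <stdint.h>

    /* Pack a clock ID (upper 16 bits) and flags (lower 16 bits), as the
     * commit describes; superseded by struct _umtx_time in the later change. */
    static inline uint32_t
    pack_wait_arg(uint16_t clock_id, uint16_t flags)
    {
            return (((uint32_t)clock_id << 16) | flags);
    }
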
David Xu
879d152454 Check that both seconds and nanoseconds are zero; checking only that
nanoseconds are zero may trigger the timeout too early. This appears to be a
copy-and-paste bug.
2012-02-19 08:17:14 +00:00
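
A tiny sketch of the corrected condition (variable name illustrative):

    /* Treat the timeout as already expired only when both fields are zero;
     * checking tv_nsec alone would misfire for e.g. { 5, 0 }. */
    if (abstime->tv_sec == 0 && abstime->tv_nsec == 0)
            return (ETIMEDOUT);
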
Oleksandr Tymoshenko
8ecdc98b5b Add thread-local storage support for arm:
- Switch to Variant I TCB layout
- Use functions from rtld for TCB allocation/deallocation
2012-02-14 00:17:43 +00:00
David Xu
4c91ddd690 Make the code more robust by checking for NULL pointers. 2012-02-11 04:12:12 +00:00
Oleksandr Tymoshenko
dda3ee8770 Switch the MIPS TLS implementation to Variant I:
Save the pointer to the TLS structure, taking into account TP_OFFSET
and the TCB structure size.
2012-02-10 06:53:25 +00:00
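
A minimal sketch of the Variant I arithmetic the commit refers to; the two-word TCB layout and the 0x7000 bias are illustrative assumptions, not copied from the FreeBSD headers:

    #include <stddef.h>

    /* Illustrative Variant I TCB: the thread pointer is biased past it. */
    struct tcb {
            void    *tcb_dtv;       /* dynamic thread vector */
            void    *tcb_thread;    /* libthr's struct pthread */
    };

    #define TP_OFFSET       0x7000  /* assumed ABI bias, for illustration */

    /* Value to install in the TLS register for a given TCB. */
    static inline void *
    tcb_to_tp(struct tcb *tcb)
    {
            return ((char *)tcb + TP_OFFSET + sizeof(struct tcb));
    }

    /* Recover the TCB from the TLS register value. */
    static inline struct tcb *
    tp_to_tcb(void *tp)
    {
            return ((struct tcb *)((char *)tp - TP_OFFSET - sizeof(struct tcb)));
    }
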
David Xu
e7004bf44d Plug a memory leak. When a cached thread is reused, don't clear its sleep
queue pointers; just reuse them.

PR:		164828
MFC after:	1 week
2012-02-07 02:57:36 +00:00
Konstantin Belousov
10280ca601 Use the getcontextx(3) internal API instead of getcontext(2) to provide
the signal handlers with the context information in the deferred
case.

Only enable the use of getcontextx(3) in the deferred signal delivery
code on amd64 and i386. Sparc64 seems to have some undetermined issues
with the interaction of alloca(3) and signal delivery.

Tested by:	flo (who also provided sparc64 hardware access for me), pho
Discussed with:	marius
MFC after:	1 month
2012-01-21 18:06:18 +00:00
Dimitry Andric
b34d83a709 The TCB_GET32() and TCB_GET64() macros in the i386 and amd64-specific
versions of pthread_md.h have a special case of dereferencing a null
pointer.  Clang warns about this with:

In file included from lib/libthr/arch/i386/i386/pthread_md.c:36:
lib/libthr/arch/i386/include/pthread_md.h:96:10: error: indirection of non-volatile null pointer will be deleted, not trap [-Werror,-Wnull-dereference]
        return (TCB_GET32(tcb_self));
                ^~~~~~~~~~~~~~~~~~~
lib/libthr/arch/i386/include/pthread_md.h:73:13: note: expanded from:
            : "m" (*(u_int *)(__tcb_offset(name))));            \
                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/libthr/arch/i386/include/pthread_md.h:96:10: note: consider using __builtin_trap() or qualifying pointer with 'volatile'

Since this indirection is done relative to the fs or gs segment to
retrieve thread-specific data, it is an exception to the rule.

Therefore, add a volatile qualifier to tell the compiler that we really
want to dereference a zero address.

MFC after:	1 week
2011-12-15 19:42:25 +00:00
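
In concrete terms, the change amounts to qualifying the operand in the inline-assembly constraint; a before/after sketch based on the snippet quoted in the build error above:

    /* Before: clang sees a dereference of address 0 and deletes it. */
        : "m" (*(u_int *)(__tcb_offset(name))));
    /* After: volatile tells clang the segment-relative load must stay. */
        : "m" (*(volatile u_int *)(__tcb_offset(name))));
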
David Xu
7859df8e67 Pass CVWAIT flags to the kernel; this should handle
the timeout correctly for pthread_cond_timedwait when
it uses a kernel-based condition variable.

PR:	162403
Submitted by: jilles
MFC after: 3 days
2011-11-17 01:43:50 +00:00
Alexander Kabaev
a805bbe21a Do not set the thread name to the less-than-informative 'initial thread'. 2011-06-19 13:35:36 +00:00
Marius Strobl
6ce2f878d3 Merge from r161730:
o  Set TP using inline assembly to avoid dead code elimination.
o  Eliminate _tcb.

Merge from r161840:
Stylize: avoid using a global register variable.

Merge from r157461:
Simplify _get_curthread() and _tcb_ctor because libc and rtld now
already allocate thread pointer space in the TLS block for the initial thread.

Merge from r177853:
Replace the function _umtx_op with _umtx_op_err; the latter returns the
error number directly. Because errno can be clobbered by a user's signal
handler, and most of the pthread API heavily depends on errno being correct,
this change should improve the stability of the thread library.

MFC after:	1 week
2011-06-18 11:07:09 +00:00
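
A sketch of the distinction, with prototypes as used inside libthr (the call site shown is illustrative):

    /* _umtx_op() is the public syscall stub: it returns -1 on failure and
     * stores the error in errno, which a user's signal handler can clobber
     * before libthr inspects it.  _umtx_op_err() is a private raw wrapper
     * that returns the error number itself, bypassing errno. */
    int     _umtx_op(void *obj, int op, u_long val, void *uaddr, void *uaddr2);
    int     _umtx_op_err(void *obj, int op, u_long val, void *uaddr, void *uaddr2);

    /* Illustrative call site: */
    error = _umtx_op_err(&mutex->m_lock, UMTX_OP_MUTEX_WAIT, 0, NULL, tsp);
    if (error != 0)
            return (error);   /* e.g. ETIMEDOUT, without consulting errno */
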
Ryan Stone
aad93b043a r179417 introduced a bug into pthread_once(). Previously pthread_once()
used a global pthread_mutex_t for synchronization.  r179417 replaced that
with an implementation that directly used atomic instructions and thr_*
syscalls to synchronize callers to pthread_once.  However, calling
pthread_mutex_lock on the global mutex implicitly ensured that
_thr_check_init() had been called but with r179417 this was no longer
guaranteed.  This meant that if you were unlucky enough to have your first
call into libthr be a call to pthread_once(), you would segfault when
trying to access the pointer returned by _get_curthread().

The fix is to explicitly call _thr_check_init() from pthread_once().

Reviewed by:	davidxu
Approved by:	emaste (mentor)
MFC after:	1 week
2011-04-20 14:19:34 +00:00
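
The shape of the fix, as a simplified sketch (the actual once-state machine and wake-up logic are omitted):

    int
    _pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
    {
            struct pthread *curthread;

            _thr_check_init();              /* ensure libthr is initialized */
            curthread = _get_curthread();   /* now safe: never NULL here */
            /* ... atomic once-state handling, thr_* wait/wake, and the
             *     call to init_routine() with cancellation cleanup ... */
            return (0);
    }
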
Jung-uk Kim
678b238c85 Introduce a non-portable function, pthread_getthreadid_np(3), to retrieve
the calling thread's unique integral ID, similar to the AIX function of
the same name.  Bump __FreeBSD_version to note its introduction.

Reviewed by:	kib
2011-02-07 21:26:46 +00:00
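
A small usage example for the new interface (the prototype is assumed to come from <pthread_np.h>):

    #include <pthread_np.h>
    #include <stdio.h>

    static void *
    worker(void *arg)
    {
            /* Returns the calling thread's system-wide integral thread ID. */
            printf("worker running as thread id %d\n", pthread_getthreadid_np());
            return (NULL);
    }
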
David Xu
65a6aaf1f3 Fix a typo.
Submitted by:	avg
2011-01-11 01:57:02 +00:00
Konstantin Belousov
fad128db86 For a process that has already loaded libthr but has not yet initialized
threading, fall back to the libc method of performing the
__pthread_map_stacks_exec() job.

Reported and tested by:	Mykola Dzham <i levsha me>
2011-01-10 16:10:25 +00:00
Konstantin Belousov
da2fcff746 Implement __pthread_map_stacks_exec() for libthr.
The stack creation code is changed to call _rtld_get_stack_prot() to get
the stack protection right. There is a race where a thread is created
during dlopen() of a DSO that requires executable stacks: then
_rtld_get_stack_prot() may return PROT_READ | PROT_WRITE while the thread
is not yet linked into the thread list. In this case, the callback
misses the thread stack, and the required protection is rechecked
afterward.

Reviewed by:	davidxu
2011-01-09 12:38:40 +00:00
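
A rough sketch of what the callback does; the thread-list helpers and the stack field names are illustrative, not libthr's actual identifiers:

    #include <sys/mman.h>

    void
    __pthread_map_stacks_exec(void)
    {
            struct pthread *thrd;
            int prot = _rtld_get_stack_prot();      /* from rtld */

            thread_list_lock();                     /* illustrative helpers */
            for (thrd = thread_list_first(); thrd != NULL;
                thrd = thread_list_next(thrd))
                    mprotect(thrd->stack_base, thrd->stack_size, prot);
            thread_list_unlock();
            /* A thread created during dlopen() can be missed here (the race
             * described above); the required protection is rechecked later. */
    }
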
Konstantin Belousov
6c69d05232 Add the section .note.GNU-stack to the assembly files used by i386 and amd64. 2011-01-07 16:09:33 +00:00
David Xu
ebc8e8fd7f Return 0 instead of a garbage value.
Found by:	clang static analyzer
2011-01-06 08:13:30 +00:00
David Xu
1f6f22dfec Because the sleepqueue may still be in use, we should always check wchan with
the queue locked.
2011-01-04 05:35:19 +00:00
David Xu
e29ba4c2db Always clear the flag PMUTEX_FLAG_DEFERED when unlocking, as it is only
significant for the lock owner.
2010-12-24 07:41:39 +00:00
David Xu
0126aea6ad Add sleep queue code. 2010-12-22 05:03:24 +00:00
David Xu
d1078b0b03 MFp4:
- Add flags CVWAIT_ABSTIME and CVWAIT_CLOCKID for the umtx kernel-based
  condition variable; this should eliminate an extra system call to get
  the current time.

- Add sub-function UMTX_OP_NWAKE_PRIVATE to wake up N channels in a single
  system call. Create a userland sleep queue for condition variables; in most
  cases a thread will wait in the queue, and pthread_cond_signal will defer
  the thread wakeup until the mutex is unlocked. This avoids an extra
  system call and an extra context switch in the window between
  pthread_cond_signal and pthread_mutex_unlock.

The changes are part of process-shared mutex project.
2010-12-22 05:01:52 +00:00
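
A sketch of the deferred-wakeup path described above; the field and helper names are illustrative, and only UMTX_OP_NWAKE_PRIVATE and _umtx_op_err come from the surrounding commits:

    /* pthread_cond_signal(): record the waiter's wake address instead of
     * issuing a syscall right away (illustrative field names). */
    curthread->defer_waiters[curthread->nwaiter_defer++] = &waiter->wake_addr;

    /* pthread_mutex_unlock(): after the mutex word is released, wake all
     * deferred waiters with a single system call. */
    _umtx_op_err(curthread->defer_waiters, UMTX_OP_NWAKE_PRIVATE,
        curthread->nwaiter_defer, NULL, NULL);
    curthread->nwaiter_defer = 0;
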
David Xu
1d1486408b Use the sysctl kern.sched.cpusetsize to retrieve the size of the kernel cpuset. 2010-11-02 02:13:13 +00:00
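
For example, a sketch of querying that sysctl (the value type is assumed here to be an int holding the byte size):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stddef.h>

    /* Ask the kernel how large its cpuset is, in bytes. */
    static size_t
    kern_cpusetsize(void)
    {
            int sz;
            size_t len = sizeof(sz);

            if (sysctlbyname("kern.sched.cpusetsize", &sz, &len, NULL, 0) != 0)
                    return (sizeof(long));  /* illustrative fallback */
            return ((size_t)sz);
    }
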
David Xu
6ed79f06f4 Return previous sigaction correctly.
Submitted by:	avg
2010-10-29 09:35:36 +00:00
David Xu
322a8adaa3 Remove the local variable 'first'; check the signal number in memory instead.
The variable can be kept in a register, so checking it a second time may
still return true, which is unexpected.
2010-10-29 07:04:45 +00:00
David Xu
67753965a8 Check for a too-small set and reject it; this is what the kernel does. Always
use the size the kernel is using.
2010-10-27 09:59:43 +00:00
David Xu
4a5478709b - Revert r214409.
- Use a long word to figure out the size of the kernel cpuset; hopefully it works.
2010-10-27 09:29:03 +00:00
David Xu
e96b4de80e Remove the lock and unlock in pthread_mutex_destroy, because
they cannot fix a race condition in application code; as a result,
the problem described in PR threads/151767 is avoided.
2010-10-27 04:19:07 +00:00
David Xu
65df457797 Fix typo. 2010-10-25 11:16:50 +00:00
David Xu
7f25f6c72d Get cpuset in pthread_attr_get_np() and free it in pthread_attr_destroy().
MFC after:	7 days
2010-10-25 09:16:04 +00:00
David Xu
de1e74c6a5 Revert revision 214007. I realized that MySQL wants to work around
a silly rwlock deadlock problem: the deadlock is caused by writer
waiters, where a thread that has already locked a reader lock and wants to
acquire another reader lock is blocked by the writer waiters,
but we already fixed that years ago.
2010-10-20 02:34:02 +00:00
David Xu
a24bcc04b2 Set the default type to PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP; this
is the type we are using.
2010-10-18 23:37:56 +00:00
David Xu
bc15e58058 Sort function names. 2010-10-18 05:16:44 +00:00
David Xu
7047ff7588 s/||/&& 2010-10-18 05:15:26 +00:00
David Xu
a6b9b59e04 Add pthread_rwlockattr_setkind_np and pthread_rwlockattr_getkind_np; the
functions set or get the pthread_rwlock type. The currently supported types are:
   PTHREAD_RWLOCK_PREFER_READER_NP,
   PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP,
   PTHREAD_RWLOCK_PREFER_WRITER_NP.
The default is PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP, which maintains
binary compatibility with old code.
2010-10-18 05:09:22 +00:00
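
A usage example for the new attribute functions (the prototypes are assumed to come from <pthread_np.h>):

    #include <pthread.h>
    #include <pthread_np.h>

    /* Create a reader-preferring rwlock and read the kind back. */
    static int
    make_reader_preferring_rwlock(pthread_rwlock_t *lock)
    {
            pthread_rwlockattr_t attr;
            int kind, error;

            pthread_rwlockattr_init(&attr);
            pthread_rwlockattr_setkind_np(&attr, PTHREAD_RWLOCK_PREFER_READER_NP);
            pthread_rwlockattr_getkind_np(&attr, &kind);  /* kind == ..._READER_NP */
            error = pthread_rwlock_init(lock, &attr);
            pthread_rwlockattr_destroy(&attr);
            return (error);
    }
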
David Xu
0935fc89af Oops, don't remove -fexceptions flag. 2010-10-08 01:53:33 +00:00
David Xu
0c3c9625a0 unwind.h was imported; the gcc directory is no longer needed. 2010-10-08 01:47:14 +00:00
David Xu
722488013d Change the code to use unwind.h. 2010-09-30 12:59:56 +00:00
David Xu
ec92603cf9 Check for an invalid mutex in _mutex_cv_unlock. 2010-09-29 06:06:58 +00:00
David Xu
bbb64c2143 In the current code, a statically initialized object and a destroyed object
have the same null value, so the code cannot distinguish between them. To
fix the problem, a destroyed object is now assigned a non-null
value, and it will be rejected by some pthread functions.
PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP is changed to the number 1, so that
adaptive mutexes can be statically initialized correctly.
2010-09-28 04:57:56 +00:00
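
A sketch of the scheme; the sentinel names are illustrative, and only the new value 1 for the adaptive-mutex initializer comes from the commit text:

    /* Distinct sentinel values stored in the user's pthread_mutex_t
     * (a pointer in libthr); names are illustrative. */
    #define THR_MUTEX_INITIALIZER           ((struct pthread_mutex *)NULL)
    #define THR_ADAPTIVE_MUTEX_INITIALIZER  ((struct pthread_mutex *)1)
    #define THR_MUTEX_DESTROYED             ((struct pthread_mutex *)2)

    /* Functions can now reject use-after-destroy instead of mistaking the
     * object for a statically initialized one: */
    if (*mutex == THR_MUTEX_DESTROYED)
            return (EINVAL);
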
David Xu
1d5b5089aa Report the death event to the debugger before moving the thread to the gc list;
otherwise, the debugger may not be able to find it on the thread list.
2010-09-26 06:45:24 +00:00
David Xu
8be6abcdc6 Only access unwind_disabled when _PTHREAD_FORCED_UNWIND is defined. 2010-09-25 09:43:24 +00:00
David Xu
9f1dc4c107 Add missing field. 2010-09-25 08:36:46 +00:00
David Xu
8690b9f6dd Because the old _pthread_cleanup_push/pop do not record a frame address,
they are incompatible with the stack unwinding code. If they are invoked,
disable stack unwinding for the current thread, and print a warning message
when the thread is exiting.
2010-09-25 06:27:09 +00:00
David Xu
6f066bb387 Simplify the code, and in the while loop, fix the operator to match the
unwinding direction.
2010-09-25 04:21:31 +00:00
David Xu
f4213b9006 To support stack unwinding for cancellation points, add the -fexceptions flag
for them. Two functions, _pthread_cancel_enter and _pthread_cancel_leave,
are added to let a thread enter and leave a cancellation point; this also
makes it possible for functions in other libraries to be cancellation points
without having to be rewritten in libthr.
2010-09-25 01:57:47 +00:00
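
A sketch of how a library function can become a cancellation point with the new helpers; the wrapped syscall stub and the argument convention are illustrative:

    #include <sys/types.h>
    #include <unistd.h>

    ssize_t
    read_cancelpoint(int fd, void *buf, size_t nbytes)
    {
            ssize_t ret;

            _pthread_cancel_enter(1);          /* enter cancellation point */
            ret = __sys_read(fd, buf, nbytes); /* may block; may be cancelled */
            _pthread_cancel_leave(ret == -1);  /* leave; act on a pending cancel */
            return (ret);
    }
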
David Xu
e5c66a0d9e Inline testcancel() into thr_cancel_leave(); because cancel_pending is
almost always false, this gives slightly better branch prediction.
2010-09-24 13:01:01 +00:00
David Xu
93ea4a71bf In most cases, cancel_point and cancel_async need not be checked again,
because cancellation is almost always checked at cancellation points.
2010-09-24 07:52:07 +00:00