Commit Graph

617 Commits

davidxu
96aacc2279 Follow the changes made in revision 232144 and pass an absolute timeout to
the kernel; this eliminates a clock_gettime() syscall.
2012-02-27 13:38:52 +00:00
davidxu
61033245ae In revision 231989 we passed a 16-bit clock ID into the kernel; however,
according to the POSIX document, the clock ID may be dynamically allocated,
so it is unlikely to stay within 64K forever. To make this future compatible, we
pack all timeout information into a new structure called _umtx_time and
use the fourth argument as a size indication: zero means old code
using a plain timespec as the timeout value, while the new structure also
includes flags and a clock ID, so its size argument is different from before and
non-zero. With this change it is possible for a thread to sleep
on any supported clock, though the current kernel code does not have such a
POSIX clock driver system.
2012-02-25 02:12:17 +00:00
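
For reference, the packed timeout container described above can be pictured roughly as follows; this is a sketch only, the authoritative definition lives in sys/umtx.h and may differ in detail:

    #include <stdint.h>
    #include <time.h>

    /* Sketch of the timeout argument passed to _umtx_op(); illustrative only. */
    struct _umtx_time {
            struct timespec _timeout;       /* relative or absolute timeout */
            uint32_t        _flags;         /* e.g. an absolute-time flag   */
            uint32_t        _clockid;       /* clock the thread sleeps on   */
    };
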
davidxu
d177303078 Use the unused fourth argument of umtx_op to pass flags to the kernel for operation
UMTX_OP_WAIT. The upper 16 bits are enough to hold a clock ID, and the lower
16 bits are used to pass flags. The change saves a clock_gettime() syscall
in libthr.
2012-02-22 03:22:49 +00:00
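
A minimal sketch of the packing scheme described above; the helper name is hypothetical, not the actual libthr code:

    #include <stdint.h>

    /* Pack a clock ID and flags into the otherwise unused 32-bit argument. */
    static inline uint32_t
    umtx_wait_arg(uint16_t clockid, uint16_t flags)
    {
            return (((uint32_t)clockid << 16) | flags);
    }
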
davidxu
828ee5105e Check that both the seconds and nanoseconds are zero; checking only that the
nanoseconds are zero may trigger the timeout too early. It seems to be a copy & paste bug.
2012-02-19 08:17:14 +00:00
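
The fix amounts to testing both fields of the timespec; a minimal self-contained illustration of the corrected condition (the surrounding code is hypothetical):

    #include <time.h>

    /*
     * Return non-zero only when the timeout is truly zero; checking just
     * tv_nsec (the old bug) would also fire for e.g. { 5, 0 }.
     */
    static int
    timeout_is_zero(const struct timespec *timeout)
    {
            return (timeout->tv_sec == 0 && timeout->tv_nsec == 0);
    }
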
gonzo
04a89c6a3d Add thread-local storage support for arm:
- Switch to Variant I TCB layout
- Use function from rtld for TCB allocation/deallocation
2012-02-14 00:17:43 +00:00
davidxu
508a5a2b93 Make code more stable by checking NULL pointers. 2012-02-11 04:12:12 +00:00
gonzo
d9d5ba0daf Switch MIPS TLS implementation to Variant I:
Save pointer to the TLS structure taking into account TP_OFFSET
and TCB structure size.
2012-02-10 06:53:25 +00:00
davidxu
f738f62c3a Plug a memory leak. When a cached thread is reused, don't clear sleep
queue pointers, just reuse it.

PR:		164828
MFC after:	1 week
2012-02-07 02:57:36 +00:00
kib
fc150a4b54 Use getcontextx(3) internal API instead of getcontext(2) to provide
the signal handlers with the context information in the deferred
case.

Only enable the use of getcontextx(3) in the deferred signal delivery
code on amd64 and i386. Sparc64 seems to have some undetermined issues
with interaction of alloca(3) and signal delivery.

Tested by:	flo (who also provided sparc64 hardware access for me), pho
Discussed with:	marius
MFC after:	1 month
2012-01-21 18:06:18 +00:00
dim
366d0e5e6b The TCB_GET32() and TCB_GET64() macros in the i386 and amd64-specific
versions of pthread_md.h have a special case of dereferencing a null
pointer.  Clang warns about this with:

In file included from lib/libthr/arch/i386/i386/pthread_md.c:36:
lib/libthr/arch/i386/include/pthread_md.h:96:10: error: indirection of non-volatile null pointer will be deleted, not trap [-Werror,-Wnull-dereference]
        return (TCB_GET32(tcb_self));
                ^~~~~~~~~~~~~~~~~~~
lib/libthr/arch/i386/include/pthread_md.h:73:13: note: expanded from:
            : "m" (*(u_int *)(__tcb_offset(name))));            \
                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/libthr/arch/i386/include/pthread_md.h:96:10: note: consider using __builtin_trap() or qualifying pointer with 'volatile'

Since this indirection is done relative to the fs or gs segment, to
retrieve thread-specific data, it is an exception to the rule.

Therefore, add a volatile qualifier to tell the compiler we really want
to dereference a zero address.

MFC after:	1 week
2011-12-15 19:42:25 +00:00
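
A simplified, illustrative variant of the i386 idea (not the verbatim macro from pthread_md.h): the volatile cast keeps clang from deleting the %gs-relative "null" dereference.

    #include <sys/types.h>

    /* Load a 32-bit word at offset 'off' from the %gs segment. */
    #define TCB_GET32_AT(off)                                               \
    ({      u_int __res;                                                    \
            __asm __volatile("movl %%gs:%1, %0"                             \
                : "=r" (__res)                                              \
                : "m" (*(volatile u_int *)(off)));   /* volatile! */        \
            __res;                                                          \
    })
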
davidxu
d0004038ae Pass CVWAIT flags to kernel, this should handle
timeout correctly for pthread_cond_timedwait when
it uses kernel-based condition variable.

PR:	162403
Submitted by: jilles
MFC after: 3 days
2011-11-17 01:43:50 +00:00
kan
7427a39f73 Do not set thread name to less than informative 'initial thread'. 2011-06-19 13:35:36 +00:00
marius
44781533ff Merge from r161730:
o  Set TP using inline assembly to avoid dead code elimination.
o  Eliminate _tcb.

Merge from r161840:
Stylize: avoid using a global register variable.

Merge from r157461:
Simplify _get_curthread() and _tcb_ctor because libc and rtld now
already allocate thread pointer space in tls block for initial thread.

Merge from r177853:
Replace function _umtx_op with _umtx_op_err; the latter function directly
returns the error number. Because errno can be clobbered by a user's signal handler, and
most of the pthread API heavily depends on errno being correct, this change
should improve the stability of the thread library.

MFC after:	1 week
2011-06-18 11:07:09 +00:00
rstone
488ea46b7d r179417 introduced a bug into pthread_once(). Previously pthread_once()
used a global pthread_mutex_t for synchronization.  r179417 replaced that
with an implementation that directly used atomic instructions and thr_*
syscalls to synchronize callers to pthread_once.  However, calling
pthread_mutex_lock on the global mutex implicitly ensured that
_thr_check_init() had been called but with r179417 this was no longer
guaranteed.  This meant that if you were unlucky enough to have your first
call into libthr be a call to pthread_once(), you would segfault when
trying to access the pointer returned by _get_curthread().

The fix is to explicitly call _thr_check_init() from pthread_once().

Reviewed by:	davidxu
Approved by:	emaste (mentor)
MFC after:	1 week
2011-04-20 14:19:34 +00:00
jkim
365538a229 Introduce a non-portable function pthread_getthreadid_np(3) to retrieve the
calling thread's unique integral ID, which is similar to the AIX function of
the same name.  Bump __FreeBSD_version to note its introduction.

Reviewed by:	kib
2011-02-07 21:26:46 +00:00
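
A short usage example for the new interface (assumes FreeBSD's pthread_np.h header):

    #include <stdio.h>
    #include <pthread.h>
    #include <pthread_np.h>

    static void *
    worker(void *arg)
    {
            (void)arg;
            /* Unique integral ID of the calling thread. */
            printf("thread id: %d\n", pthread_getthreadid_np());
            return (NULL);
    }

    int
    main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, worker, NULL);
            pthread_join(t, NULL);
            return (0);
    }
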
davidxu
3e5f77d5aa Fix a typo.
Submitted by:	avg
2011-01-11 01:57:02 +00:00
kib
2325865369 For a process that has already loaded libthr but has not yet initialized
threading, fall back to the libc method of performing
the __pthread_map_stacks_exec() job.

Reported and tested by:	Mykola Dzham <i levsha me>
2011-01-10 16:10:25 +00:00
kib
b2e3ee7d07 Implement the __pthread_map_stacks_exec() for libthr.
The stack creation code is changed to call _rtld_get_stack_prot() to get
the stack protection right. There is a race where a thread is created
during dlopen() of a dso that requires executable stacks. Then,
_rtld_get_stack_prot() may return PROT_READ | PROT_WRITE while the thread
is still not linked into the thread list. In this case, the callback
misses the thread stack, and the required protection is rechecked
afterward.

Reviewed by:	davidxu
2011-01-09 12:38:40 +00:00
kib
53f046ef30 Add section .note.GNU-stack for assembly files used by 386 and amd64. 2011-01-07 16:09:33 +00:00
davidxu
bbfe229093 Return 0 instead of garbage value.
Found by:	clang static analyzer
2011-01-06 08:13:30 +00:00
davidxu
dd12e67260 Because the sleepqueue may still be in use, we should always check wchan with
the queue locked.
2011-01-04 05:35:19 +00:00
davidxu
1d89f14a5c Always clear flag PMUTEX_FLAG_DEFERED when unlocking, as it is only
significant for lock owner.
2010-12-24 07:41:39 +00:00
davidxu
72cc2acc23 Add sleep queue code. 2010-12-22 05:03:24 +00:00
davidxu
437ad27f9c MFp4:
- Add flags CVWAIT_ABSTIME and CVWAIT_CLOCKID for the umtx kernel-based
  condition variable; this should eliminate an extra system call to get the
  current time.

- Add sub-function UMTX_OP_NWAKE_PRIVATE to wake up N channels in a single
  system call. Create a userland sleep queue for condition variables; in most
  cases a thread will wait in the queue, and pthread_cond_signal will defer the
  thread wakeup until the mutex is unlocked. This tries to avoid an extra
  system call and an extra context switch in the window between pthread_cond_signal
  and pthread_mutex_unlock.

The changes are part of process-shared mutex project.
2010-12-22 05:01:52 +00:00
davidxu
344bdcae88 Use sysctl kern.sched.cpusetsize to retrieve size of kernel cpuset. 2010-11-02 02:13:13 +00:00
davidxu
f8b62a4759 Return previous sigaction correctly.
Submitted by:	avg
2010-10-29 09:35:36 +00:00
davidxu
fa0e722e16 Remove the local variable 'first' and instead check the signal number in memory,
because the variable can live in a register; a second check of the variable
may then still return true, which is unexpected.
2010-10-29 07:04:45 +00:00
davidxu
8475a5bf0d Check for a too-small set and reject it, as the kernel does. Always use the
size the kernel is using.
2010-10-27 09:59:43 +00:00
davidxu
f8f25f57e2 - Revert r214409.
- Use a long word to figure out the size of the kernel cpuset; hope it works.
2010-10-27 09:29:03 +00:00
davidxu
d60836560b Remove the locking and unlocking in pthread_mutex_destroy, because
it cannot fix a race condition in application code; as a result,
the problem described in PR threads/151767 is avoided.
2010-10-27 04:19:07 +00:00
davidxu
b4fee3c1ed Fix typo. 2010-10-25 11:16:50 +00:00
davidxu
36f64247c7 Get cpuset in pthread_attr_get_np() and free it in pthread_attr_destroy().
MFC after:	7 days
2010-10-25 09:16:04 +00:00
davidxu
1126acd0dc Revert revision 214007. I realized that MySQL wants to work around
a silly rwlock deadlock problem: the deadlock is caused by writer
waiters; if a thread has already locked a reader lock and wants to
acquire another reader lock, it will be blocked by writer waiters.
But we had already fixed that years ago.
2010-10-20 02:34:02 +00:00
davidxu
86b8c070d5 Set default type to PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP, this
is the type we are using.
2010-10-18 23:37:56 +00:00
davidxu
92672df66c sort function name. 2010-10-18 05:16:44 +00:00
davidxu
a7a5cfca82 s/||/&& 2010-10-18 05:15:26 +00:00
davidxu
c6d578b870 Add pthread_rwlockattr_setkind_np and pthread_rwlockattr_getkind_np; the
functions set or get the pthread_rwlock type. The currently supported types are:
   PTHREAD_RWLOCK_PREFER_READER_NP,
   PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP,
   PTHREAD_RWLOCK_PREFER_WRITER_NP,
the default is PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP; this maintains
binary compatibility with old code.
2010-10-18 05:09:22 +00:00
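
A usage sketch for the new attribute functions; FreeBSD declares them in pthread_np.h, which is assumed here:

    #include <pthread.h>
    #include <pthread_np.h>

    /* Create an rwlock with the writer-preferring kind named above. */
    int
    make_writer_preferring_rwlock(pthread_rwlock_t *rw)
    {
            pthread_rwlockattr_t attr;
            int kind, error;

            pthread_rwlockattr_init(&attr);
            pthread_rwlockattr_setkind_np(&attr,
                PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
            pthread_rwlockattr_getkind_np(&attr, &kind);  /* reads the preference back */
            error = pthread_rwlock_init(rw, &attr);
            pthread_rwlockattr_destroy(&attr);
            return (error);
    }
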
davidxu
de54e693fd Oops, don't remove -fexceptions flag. 2010-10-08 01:53:33 +00:00
davidxu
3bf7c8781d unwind.h was imported, gcc directory is no longer needed. 2010-10-08 01:47:14 +00:00
davidxu
86d9958ce8 change code to use unwind.h. 2010-09-30 12:59:56 +00:00
davidxu
df6acfd7c4 Check invalid mutex in _mutex_cv_unlock. 2010-09-29 06:06:58 +00:00
davidxu
f329bc965c In the current code, a statically initialized object and a destroyed object have the
same null value, so the code cannot distinguish between them. To
fix the problem, a destroyed object is now assigned a non-null
value, and it will be rejected by some pthread functions.
PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP is changed to the number 1, so that an
adaptive mutex can be statically initialized correctly.
2010-09-28 04:57:56 +00:00
davidxu
56cf3f4638 Report the death event to the debugger before moving the thread to the gc list;
otherwise the debugger may not find it on the thread list.
2010-09-26 06:45:24 +00:00
davidxu
6b8f97b128 Only access unwind_disabled when _PTHREAD_FORCED_UNWIND is defined. 2010-09-25 09:43:24 +00:00
davidxu
6405ab4b6c Add missing field. 2010-09-25 08:36:46 +00:00
davidxu
121f2e2d60 Because the old _pthread_cleanup_push/pop do not record a frame address,
they are incompatible with the stack unwinding code; if they are invoked,
disable stack unwinding for the current thread and, when the thread is
exiting, print a warning message.
2010-09-25 06:27:09 +00:00
davidxu
2aedc66f12 Simplify code, and in while loop, fix operator to match the unwinding
direction.
2010-09-25 04:21:31 +00:00
davidxu
74604ed9c4 To support stack unwinding for cancellation points, add the -fexceptions flag
for them. Two functions, _pthread_cancel_enter and _pthread_cancel_leave,
are added to let a thread enter and leave a cancellation point; this also
makes it possible for other functions in libraries to be cancellation points
without having to be rewritten in libthr.
2010-09-25 01:57:47 +00:00
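
The intended wrapper pattern, as I read the description above, is a sketch like the following; the prototypes of _pthread_cancel_enter()/_pthread_cancel_leave() are libthr-internal and assumed here, and the wrapper name is hypothetical:

    #include <sys/types.h>
    #include <unistd.h>

    /* Assumed prototypes; the real declarations are internal to libthr. */
    void _pthread_cancel_enter(int maycancel);
    void _pthread_cancel_leave(int maycancel);

    ssize_t
    my_cancelable_read(int fd, void *buf, size_t nbytes)
    {
            ssize_t ret;

            _pthread_cancel_enter(1);           /* a pending cancel may act here  */
            ret = read(fd, buf, nbytes);
            _pthread_cancel_leave(ret == -1);   /* don't cancel after a partial read */
            return (ret);
    }
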
davidxu
b0052272aa Inline testcancel() into thr_cancel_leave(); because cancel_pending is
almost always false, this gives slightly better branch prediction.
2010-09-24 13:01:01 +00:00
davidxu
722a516400 In most cases, cancel_point and cancel_async need not be checked again,
because cancellation is almost always checked at cancellation points.
2010-09-24 07:52:07 +00:00
davidxu
585e6320d2 If we are at a cancellation point, always work in deferred mode regardless of
whether asynchronous mode is turned on or not; this always gives us a
chance to decide at cancellation points whether the thread should be
canceled or not.
2010-09-21 06:47:04 +00:00
davidxu
fe5567c8f1 Because the atfork lock is held while forking, a thread cancellation triggered
by an atfork handler is unsafe; use the internal flag no_cancel to disable it.
2010-09-19 09:03:11 +00:00
davidxu
5418a23597 Fix typo. 2010-09-19 08:55:36 +00:00
davidxu
4c94bb3829 - The _Unwind_Resume function is not used; remove it.
- Use a store barrier to make sure uwl_forcedunwind is the latest thing
  other threads can see.
- Add some comments.
2010-09-19 05:42:29 +00:00
davidxu
ad24f558bd Fix a race condition when finding stack unwinding functions. 2010-09-19 05:19:47 +00:00
davidxu
b00fcaa22c Add code to support stack unwinding when a thread exits. Note that only
deferred-mode cancellation works; asynchronous mode does not work because
it lacks libunwind support. Stack unwinding is not enabled unless
LIBTHR_UNWIND_STACK is defined in the Makefile.
2010-09-15 02:56:32 +00:00
davidxu
6be463abbb Move the IN_GCLIST flag back into the tlflags field, since the thread list and
gc list still share the same lock.
2010-09-15 01:21:30 +00:00
davidxu
b79ad9b341 Don't compare thread pointers again. 2010-09-13 11:58:42 +00:00
davidxu
95869dd2a2 Fix copy&paste problem. 2010-09-13 11:57:46 +00:00
davidxu
ac1cdddd7f Update symbol. 2010-09-13 09:23:38 +00:00
davidxu
71456632de The PS_DEAD state need not be checked because _thr_find_thread() has already
checked it.
2010-09-13 07:18:00 +00:00
davidxu
e87e922f31 Convert thread list lock from mutex to rwlock. 2010-09-13 07:03:01 +00:00
imp
6a8c774078 Merge from tbemd, with a small amount of rework:
For all libthr contexts, use ${MACHINE_CPUARCH}.
For all libc contexts, use ${MACHINE_ARCH} if it exists, otherwise use
${MACHINE_CPUARCH}.
Move some common code up a layer (the .PATH statement was the same in
all the arch submakefiles).

# Hope she hasn't busted powerpc64 with this...
2010-09-13 01:43:10 +00:00
davidxu
e129c18a83 Because POSIX does not allow EINTR to be returned from sigwait(),
add a wrapper for it in libc and rework the code in libthr; the
system call can still return EINTR, and we keep this feature.

Discussed on: thread
Reviewed by:  jilles
2010-09-10 01:47:37 +00:00
davidxu
417820202c To avoid a possible race condition, SIGCANCEL is always sent unless the
thread is dead.
2010-09-08 02:18:20 +00:00
davidxu
bc33915543 Fix an off-by-one error in the function _thr_sigact_unload, and also disable the
function; it seems some GNOME applications tend to crash if we
unregister the sigaction automatically.
2010-09-06 03:00:54 +00:00
davidxu
f21ffb282d Remove incorrect comments, also make sure signal is
disabled when unregistering sigaction.
2010-09-01 13:22:55 +00:00
davidxu
c19c7fe99f In the function __pthread_cxa_finalize(), also make the code for removing
atfork handlers async-signal safe.
2010-09-01 07:09:46 +00:00
davidxu
7852e2095b pthread_atfork should acquire writer lock and protect the code
with critical region.
2010-09-01 03:55:10 +00:00
davidxu
5f00b957ae Change the atfork lock from a mutex to an rwlock, and also make the mutexes used by the malloc()
module the private type; when a private-type mutex is locked/unlocked, a thread
critical region is entered or left. These changes make fork()
async-signal safe, as required by POSIX. Note that the user's atfork handler
still needs to be async-signal safe, but that is not a problem of libthr; it
is the user's responsibility.
2010-09-01 03:11:21 +00:00
davidxu
4dcb50723a Add a signal handler wrapper; the reason to add it is that there are
some cases we want to improve:
  1) if a thread got a signal while in a cancellation point,
     it is possible that the TDP_WAKEUP may be eaten by the signal handler
     if the handler called some interruptible system calls.
  2) In a signal handler, we want to disable cancellation.
  3) When a thread holds some low-level locks, it is better to
     disable signals; that code then need not worry about reentrancy, and the
     sigprocmask system call is avoided because it is a bit expensive.
The signal handler wrapper works in this way:
  1) libthr installs its own signal handler when user code invokes sigaction
     to install its handler; the user handler is recorded in an internal
     array.
  2) when a signal is delivered, libthr's signal handler is invoked;
     libthr checks whether the thread holds some low-level lock or is in a critical
     region, and if so, the signal is buffered and all signals are
     masked; once the thread leaves the critical region, the correct signal
     mask is restored and the buffered signal is processed.
  3) before the user signal handler is invoked, cancellation is temporarily
     disabled; after the user signal handler returns, the cancellation state
     is restored, and any pending cancellation is rescheduled.
2010-09-01 02:18:33 +00:00
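
A highly simplified sketch of the deferral logic in step 2; every name here is hypothetical, and the real libthr code keeps this state per thread together with a per-signal array of user handlers:

    #include <signal.h>
    #include <stdbool.h>

    /* Hypothetical per-thread bookkeeping for the buffering idea above. */
    struct thr_sigstate {
            bool            in_critical;    /* holding a low-level lock?   */
            bool            sig_pending;    /* a signal has been buffered  */
            siginfo_t       deferred_info;  /* buffered signal information */
    };

    static void
    thr_sig_wrapper(int sig, siginfo_t *info, void *ucp, struct thr_sigstate *st,
        void (*user_handler)(int, siginfo_t *, void *))
    {
            if (st->in_critical) {
                    /* Buffer the signal; it is replayed when the thread
                     * leaves the critical region and unmasks signals. */
                    st->deferred_info = *info;
                    st->sig_pending = true;
                    return;
            }
            /* Cancellation would be disabled here and restored afterwards. */
            user_handler(sig, info, ucp);
    }
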
davidxu
ca3cfe473f Unregister thread specific data destructor when a corresponding dso
is unloaded.
2010-08-27 05:20:22 +00:00
davidxu
4190ab4bbd clear lock to zero state if it is destroyed. 2010-08-27 03:23:07 +00:00
davidxu
c360399299 eliminate unused code. 2010-08-26 09:04:27 +00:00
davidxu
4dfe518936 Decrease rdlock count only when thread unlocked a reader lock.
MFC after:	3 days
2010-08-26 07:09:48 +00:00
nwhitehorn
d3a20dc0b9 Unify 32-bit and 64-bit PowerPC libthr support. This reduces code
duplication, and simplifies the TBEMD import.

Requested by:	imp
2010-08-24 20:50:08 +00:00
kib
b779af3734 Remove unused source.
MFC after:	2 weeks
2010-08-24 11:55:25 +00:00
kib
ead6fcadfd The __hidden definition is provided by sys/cdefs.h.
MFC after:	2 weeks
2010-08-24 11:54:48 +00:00
davidxu
14556ea479 Add wrappers for setcontext() and swapcontext(); the wrappers
unblock SIGCANCEL, which is needed by thread cancellation.
2010-08-24 09:57:06 +00:00
kib
df9bc4850f On shared object unload, in __cxa_finalize, call and clear all installed
atexit and __cxa_atexit handlers that are either installed by the unloaded
dso or point to functions provided by the dso.

Use _rtld_addr_phdr to locate segment information from the address of
private variable belonging to the dso, supplied by crtstuff.c. Provide
utility function __elf_phdr_match_addr to do the match of address against
dso executable segment.

Call back into libthr from __cxa_finalize using the weak
__pthread_cxa_finalize symbol to remove any atfork handler whose
function points into the unloaded object.

The rtld needs private __pthread_cxa_finalize symbol to not require
resolution of the weak undefined symbol at initialization time. This
cannot work, since rtld is relocated before sym_zero is set up.

Idea by:	kan
Reviewed by:	kan (previous version)
MFC after:	3 weeks
2010-08-23 15:38:02 +00:00
davidxu
9d41da2950 Reduce redundant code.
Submitted by: kib
2010-08-20 13:42:48 +00:00
davidxu
5958e39b3f In the current implementation, thread cancellation is done in the signal handler,
which does not know the state of the interrupted system call; for
example, the open() system call may have opened a file while the thread is still
cancelled, and the result is a descriptor leak. There are other problems which can
cause resource leaks or indeterminate side effects when a thread is cancelled.
However, this is no longer true in the new implementation.

  In deferred mode, a thread is canceled if a cancellation request is pending and
the thread later enters a cancellation point; otherwise, a later
pthread_cancel() just causes SIGCANCEL to be sent to the target thread and
causes the target thread to abort the system call, and userland code in libthr then
checks the cancellation state and cancels the thread if needed. For example, in the
cancellation point open(), the thread may be canceled at the start,
but later, if it has opened a file descriptor, it is not canceled; this avoids a
file handle leak. Another example is read(): a thread may be canceled at the start
of the function, but later, if it has read some bytes from a socket, the thread
is not canceled. The caller can then decide whether it should keep cancellation
enabled, or disable it and continue reading data until it thinks it has read all the
bytes of a packet, keeping the protocol stream in a healthy state; if the user
ignores a partial read of a packet without disabling cancellation, the second
iteration of the read loop causes the thread to be cancelled.
An exception is that the close() cancellation point always closes the file handle,
regardless of whether the thread is cancelled or not.

  The old mechanism is still kept; for functions whose cancellation problems are
not so easy to fix, the rough mechanism is used.

Reviewed by: kib@
2010-08-20 05:15:39 +00:00
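
The read() example above suggests a usage pattern on the application side: temporarily disable cancellation once data has started arriving so the protocol stream stays consistent. The calls are standard POSIX; the framing logic is illustrative only.

    #include <pthread.h>
    #include <unistd.h>

    /* Read exactly 'len' bytes, refusing cancellation once data has arrived. */
    ssize_t
    read_packet(int fd, char *buf, size_t len)
    {
            size_t  got = 0;
            ssize_t n = 0;
            int     oldstate = PTHREAD_CANCEL_ENABLE;

            while (got < len) {
                    n = read(fd, buf + got, len - got);      /* cancellation point */
                    if (n <= 0)
                            break;
                    if (got == 0)   /* first bytes arrived: protect the rest */
                            pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
                    got += (size_t)n;
            }
            if (got > 0)
                    pthread_setcancelstate(oldstate, NULL);  /* restore */
            return (got > 0 ? (ssize_t)got : n);
    }
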
davidxu
07b66dcf06 According to the specification, the fcntl() function is a cancellation point only
when the cmd argument is F_SETLKW.
2010-08-20 04:15:05 +00:00
davidxu
8eb397b4ca Tweak the code a bit to be POSIX compatible: when a cancellation request
is acted upon, or when a thread calls pthread_exit(), the thread first
disables cancellation by setting its cancelability state to
PTHREAD_CANCEL_DISABLE and its cancelability type to
PTHREAD_CANCEL_DEFERRED. The cancelability state remains set to
PTHREAD_CANCEL_DISABLE until the thread has terminated.

It has no effect if a cancellation cleanup handler or thread-specific
data destructor routine changes the cancelability state to
PTHREAD_CANCEL_ENABLE.
2010-08-17 02:50:12 +00:00
kib
48eef70c5e Use _SIG_VALID instead of expanded form of the macro.
Submitted by:	Garrett Cooper <yanegomi gmail com>
MFC after:	1 week
2010-07-12 10:15:33 +00:00
nwhitehorn
9712e9de77 Fix SVN mismerge. We somehow ended up with the 32-bit powerpc version
in arch/powerpc64 instead of the 64-bit one.
2010-07-11 05:13:38 +00:00
nwhitehorn
980b46c696 Powerpc64 thread libraries support. 2010-07-10 15:13:49 +00:00
deischen
95cb40b038 Coalesce one more broken line. 2010-05-24 13:44:39 +00:00
deischen
f55ea98e9c Coalesce a couple of broken lines since they can fit within 80
characters.  Little nit found while looking at a bug report.
2010-05-24 13:43:11 +00:00
uqs
3960614646 mdoc: order prologue macros consistently by Dd/Dt/Os
Although groff_mdoc(7) gives another impression, this is the ordering
most widely used and also required by mdocml/mandoc.

Reviewed by:	ru
Approved by:	philip, ed (mentors)
2010-04-14 19:08:06 +00:00
imp
c27b492e47 Merge r195129 from project/mips to head by hand:
r195129 | gonzo | 2009-06-27 17:28:56 -0600 (Sat, 27 Jun 2009) | 2 lines
- Use sysarch(2) in MIPS version of _tcb_set/_tcb_get
2010-01-09 00:07:47 +00:00
davidxu
46ad5872cf remove file thr_sem_new.c. 2010-01-05 07:50:31 +00:00
davidxu
451e3b67a4 Remove extra new semaphore stubs, because libc already has them, and
ld can find the newest version which is default.

Poked by: kan@
2010-01-05 06:21:29 +00:00
davidxu
87c8a1faf2 Use umtx to implement process-shared semaphores. To make this work,
type sema_t is now a structure which can be put in a shared memory area,
and multiple processes can operate on it concurrently.
Users can either use mmap(MAP_SHARED) + sem_init(pshared=1) or use sem_open()
to initialize a shared semaphore.
A named semaphore uses the file system and is located in the /tmp directory, and its
file name is prefixed with 'SEMD', so it is now chroot and jail friendly.
In the simplest cases, for both named and unnamed semaphores, userland code
does not have to enter the kernel to decrease/increase the semaphore's count.
The semaphore is designed to be crash-safe: even if an application
crashes in the middle of operating on a semaphore, the semaphore state is
still safely recovered by later use; there is no waiter counter maintained
by userland code.
The main semaphore code is in libc, and libthr only has some necessary stubs;
this makes it possible for a non-threaded application to use semaphores
without linking to the thread library.
The old semaphore implementation is kept in libc to maintain binary compatibility.
The kernel ksem API is no longer used in the new implementation.

Discussed on: threads@
2010-01-05 02:37:59 +00:00
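
A usage example of the two initialization paths mentioned above, using standard POSIX calls:

    #include <fcntl.h>
    #include <semaphore.h>
    #include <sys/mman.h>

    /* Anonymous shared memory + sem_init(pshared = 1). */
    sem_t *
    make_shared_sem(void)
    {
            sem_t *s;

            s = mmap(NULL, sizeof(*s), PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANON, -1, 0);
            if (s == MAP_FAILED || sem_init(s, 1, 0) != 0)
                    return (NULL);
            return (s);
    }

    /* Or a named semaphore, backed by a file under /tmp as described above.
     * Callers should compare the result against SEM_FAILED. */
    sem_t *
    make_named_sem(void)
    {
            return (sem_open("/mysem", O_CREAT, 0600, 0));
    }
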
marcel
5c1d0ca7f5 Work-around a race condition on ia64 while unlocking a contested lock.
The race condition is believed to be in UMTX_OP_MUTEX_WAKE. On ia64,
we simply go to the kernel to unlock.
The big question is why this is only a race condition on ia64...

MFC after:	3 days
2009-12-14 01:26:01 +00:00
kib
63a17ed232 Revert r199830 for now. Too many ports dlopen() libraries linked with
libthr, but forgot to link main binary with it.
2009-11-28 14:34:28 +00:00
kib
ac88979666 Libthr cannot be dynamically loaded into the running process.
Mark it with -z nodlopen for now.

Discussed with:	jhb, kan
MFC after:	3 weeks
2009-11-26 14:01:14 +00:00
kib
08e5013938 The current pselect(3) is implemented in usermode and is thus vulnerable to the
well-known race condition whose elimination was the reason for the
function's appearance in the first place. If the sigmask supplied as an argument to
pselect() enables a signal, the signal might be delivered before the thread
called select(2), causing a lost wakeup. Reimplement pselect() in the kernel,
making the change of sigmask and the sleep atomic.

Since the signal shall be delivered to usermode, but the sigmask restored,
set TDP_OLDMASK and save the old mask in td_oldsigmask. TDP_OLDMASK
should be cleared by ast() in case the signal was not delivered during
syscall execution.

Reviewed by:	davidxu
Tested by:	pho
MFC after:	1 month
2009-10-27 10:55:34 +00:00
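
The race this closes is the classic one; the safe pattern, now actually atomic, looks like this (standard POSIX usage, names illustrative):

    #include <sys/select.h>
    #include <signal.h>

    /* Wait for fd readability while atomically enabling SIGUSR1. */
    int
    wait_readable(int fd, const sigset_t *mask_without_usr1)
    {
            fd_set rfds;

            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            /* With a userland emulation, SIGUSR1 could arrive between the
             * sigprocmask() and select() it expands to; the in-kernel
             * pselect() makes the mask switch and the sleep one atomic step. */
            return (pselect(fd + 1, &rfds, NULL, NULL, NULL, mask_without_usr1));
    }
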
marcel
fff54c20f8 Implement _umtx_op_err() for ia64. 2009-10-24 20:07:17 +00:00
jilles
874a086f97 Make openat(2) a cancellation point.
This is required by POSIX and matches open(2).

Reviewed by:	kib, jhb
MFC after:	1 month
2009-10-11 20:19:45 +00:00
davidxu
d9aeefb9ce don't report error if key was deleted.
PR:	threads/135462
2009-09-25 00:15:30 +00:00
attilio
a18d0e5adb The rwlock implementation in libthr needs to fall through to the 'hard path' and
query umtx also if the shared-waiters bit is set on a shared lock.
The writer starvation avoidance technique can, in fact, lead to shared
waiters on a shared lock, which can lead to a missed wakeup and thus
to a deadlock if the right bit is not checked (a notable case is the
writers' counterpart being handled through expired timeouts).

Fix that by checking for the shared-waiters bit also when unlocking
shared locks.

That bug was causing a reported MySQL deadlock.
Many thanks go to Nick Esborn and his employer DesertNet, which provided
time and machines to identify and fix this issue.

PR:		thread/135673
Reported by:	Nick Esborn <nick at desert dot net>
Tested by:	Nick Esborn <nick at desert dot net>
Reviewed by:	jeff
2009-09-23 21:38:57 +00:00
attilio
daff94f8a6 In the current code, rdlock_count is not correctly handled for some cases.
The most notable is that it is not bumped in rwlock_rdlock_common() when
the hard path (__thr_rwlock_rdlock()) returns successfully.
This can lead to deadlocks in libthr when rwlock recursion in read mode
happens.
Fix the affected parts by correctly handling rdlock_count.

PR:		threads/136345
Reported by:	rink
Tested by:	rink
Reviewed by:	jeff
Approved by:	re (kib)
MFC:		2 weeks
2009-07-06 09:31:04 +00:00
green
f397f112f7 These are some cosmetic changes to improve the clarity of libthr's fork implementation. 2009-05-11 16:45:53 +00:00
rwatson
9d69b9825b Now that the kernel defines CACHE_LINE_SIZE in machine/param.h, use
that definition in the custom locking code for the run-time linker
rather than local definitions.

Pointed out by:	tinderbox
MFC after:	2 weeks
2009-04-19 23:02:50 +00:00
davidxu
949d94d036 Turn on the nodelete linker flag because libthr cannot be unloaded safely;
it hooks into libc.
2009-03-31 02:50:18 +00:00
kib
2ca0e1eded Forcibly unlock the malloc() locks in the child process after fork(),
by temporarily pretending that the process is still multithreaded.
The current malloc lock primitives do nothing for a single-threaded process.

Reviewed by:	davidxu, deischen
2009-03-19 10:32:25 +00:00
davidxu
0d25ff31c6 Don't ignore other fcntl functions, directly call __sys_fcntl if
WITHOUT_SYSCALL_COMPAT is not defined.

Reviewed by:	deischen
2009-03-09 05:54:43 +00:00
davidxu
2adf4999ea Don't reference non-existent __fcntl_compat if WITHOUT_SYSCALL_COMPAT is defined.
Submitted by:	Pawel Worach "pawel dot worach at gmail dot com"
2009-03-09 02:34:02 +00:00
ru
ae7b564b50 With only one threading library, simplify the logic of setting SHLIBDIR. 2009-02-24 16:23:34 +00:00
ru
21f7074ade Fix build when WITH_SSP is set explicitly.
Submitted by:	Jeremie Le Hen
2009-02-21 15:04:31 +00:00
jkim
56ef1bde13 Honor WITHOUT_INSTALLLIB in some places. 2009-02-13 16:51:36 +00:00
peter
b14c2e0572 When libthr and rtld start up, there are a number of magic spells cast
in order to get the symbol binding state "just so".  This is to allow
locking to be activated and not run into recursion problems later.

However, one of the magic bits involves an explicit call to _umtx_op()
to force symbol resolution.  It does a wakeup operation on a fake,
uninitialized (ie: random contents) umtx.  Since libthr isn't active, this
is harmless.  Nothing can match the random wakeup.

However, valgrind finds this and is not amused.  Normally I'd just
write a suppression record for it, but the idea of passing random
args to syscalls (on purpose) just doesn't feel right.
2008-12-07 02:32:49 +00:00
kib
af7a67c13c Provide custom simple allocator for rtld locks in libthr. The allocator
does not use any external symbols, thus avoiding possible recursion into
rtld to resolve symbols, when called.

Reviewed by:	kan, davidxu
Tested by:	rink
MFC after:	1 month
2008-12-02 11:58:31 +00:00
kan
5542bcbfb0 Invoke _rtld_atfork_post earlier, before we reinitialize rtld locks
by switching into single-thread mode.

libthr ignores the broken use of lock bitmaps used by the default rtld locking
implementation; this in turn turns the lock handoff in _rtld_thread_init
into a NOP. This in turn makes child processes of forked multi-threaded
programs run with _thr_signal_block still in effect, with most
signals blocked.

Reported by: phk, kib
2008-12-01 21:00:25 +00:00
kib
58f888b28c Unlock the malloc() locks in the child process after fork(). This gives
us working malloc in the fork child of the multithreaded process.

Although POSIX requires that only async-signal-safe functions shall be
operable after fork in a multithreaded process, not having malloc lowers
the quality of our implementation.

Tested by:	rink
Discussed with:	kan, davidxu
Reviewed by:	kan
MFC after:	1 month
2008-11-29 21:46:28 +00:00
kib
b683fcf692 Add two rtld exported symbols, _rtld_atfork_pre and _rtld_atfork_post.
The threading library calls _pre before the fork, allowing the rtld to
lock itself to ensure that other threads of the process are out of the
dynamic linker. _post releases the locks.

This allows the rtld to have a consistent state in the child. Although the
child may legitimately call only async-safe functions, the call may
need plt relocation resolution, and this requires a working rtld.

Reported and debugging help by:	rink
Reviewed by:	kan, davidxu
MFC after:	1 month (anyway, not before 7.1 is out)
2008-11-27 11:27:59 +00:00
marcel
ead754945e Allow psaddr_t to be widened by using thr_pread_{int,long,ptr},
where critical. Some places still use ps_pread/ps_pwrite directly,
but only need changed when byte-order comes into the picture.
Also, change th_p in td_event_msg_t from a pointer type to
psaddr_t, so that events also work when psaddr_t is widened.
2008-09-14 16:07:21 +00:00
jasone
c30fff5419 Move call to _malloc_thread_cleanup() so that if this is the last thread,
the call never happens.  This is necessary because malloc may be used
during exit handler processing.

Submitted by:	davidxu
2008-09-09 17:14:32 +00:00
jasone
a734052e9c Add thread-specific caching for small size classes, based on magazines.
This caching allows for completely lock-free allocation/deallocation in the
steady state, at the expense of likely increased memory use and
fragmentation.

Reduce the default number of arenas to 2*ncpus, since thread-specific
caching typically reduces arena contention.

Modify size class spacing to include ranges of 2^n-spaced, quantum-spaced,
cacheline-spaced, and subpage-spaced size classes.  The advantages are:
fewer size classes, reduced false cacheline sharing, and reduced internal
fragmentation for allocations that are slightly over 512, 1024, etc.

Increase RUN_MAX_SMALL, in order to limit fragmentation for the
subpage-spaced size classes.

Add a size-->bin lookup table for small sizes to simplify translating sizes
to size classes.  Include a hard-coded constant table that is used unless
custom size class spacing is specified at run time.

Add the ability to disable tiny size classes at compile time via
MALLOC_TINY.
2008-08-27 02:00:53 +00:00
davidxu
0cc238e339 In function pthread_condattr_getpshared, store result correctly.
PR:		kern/126128
2008-08-01 01:21:49 +00:00
ru
8735fdbd4c Enable GCC stack protection (aka Propolice) for userland:
- It is opt-out for now so as to give it maximum testing, but it may be
  turned opt-in for stable branches depending on the consensus.  You
  can turn it off with WITHOUT_SSP.
- WITHOUT_SSP was previously used to disable the build of GNU libssp.
  It is harmless to steal the knob as SSP symbols have been provided
  by libc for a long time, GNU libssp should not have been much used.
- SSP is disabled in a few corners such as system bootstrap programs
  (sys/boot), process bootstrap code (rtld, csu) and SSP symbols themselves.
- It should be safe to use -fstack-protector-all to build world, however
  libc will be automatically downgraded to -fstack-protector because it
  breaks rtld otherwise.
- This option is unavailable on ia64.

Enable GCC stack protection (aka Propolice) for kernel:
- It is opt-out for now so as to give it maximum testing.
- Do not compile your kernel with -fstack-protector-all, it won't work.

Submitted by:	Jeremie Le Hen <jeremie@le-hen.org>
2008-06-25 21:33:28 +00:00
davidxu
70dd244f26 Add two commands to the _umtx_op system call to allow a simple mutex to be
locked and unlocked completely in userland. Locking and unlocking the mutex
in userland reduces the total time a mutex is held by a thread;
in some application code a mutex only protects a small piece of code whose
execution time is less than a simple system call. When lock contention
happens, in the current implementation the lock holder has to extend its
holding time and enter the kernel to unlock it. The change avoids this disadvantage:
it first sets the mutex to the free state and then enters the kernel and wakes one
waiter up. This improves performance dramatically in some sysbench mutex tests.

Tested by: kris
Sounds great: jeff
2008-06-24 07:32:12 +00:00
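
A rough sketch of the userland fast path this enables; the lock-word constants and helper names are illustrative stand-ins, not the actual umtx protocol or _umtx_op commands:

    #include <sys/types.h>
    #include <machine/atomic.h>

    #define MTX_UNOWNED   0u    /* illustrative lock-word states */
    #define MTX_LOCKED    1u
    #define MTX_CONTESTED 2u

    /* Fast paths: a single CAS in userland; the kernel is entered only on
     * contention (the actual _umtx_op commands are not spelled out here). */
    static int
    simple_trylock(volatile u_int *m)
    {
            return (atomic_cmpset_acq_int(m, MTX_UNOWNED, MTX_LOCKED) ? 0 : -1);
    }

    static void
    simple_unlock(volatile u_int *m)
    {
            /* Set the mutex free first ... */
            if (atomic_cmpset_rel_int(m, MTX_LOCKED, MTX_UNOWNED))
                    return;         /* nobody waiting: no syscall needed */
            /* ... the word was MTX_CONTESTED: clear it and wake one waiter
             * through the kernel via _umtx_op(). */
    }
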
davidxu
f4d6ff9c5e Make pthread_cleanup_push() and pthread_cleanup_pop() a pair of macros and
use stack space to keep the cleanup information; this eliminates the overhead of
calling malloc() and free() in the thread library.

Discussed on: thread@
2008-06-09 01:14:10 +00:00
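
One visible consequence of a macro-pair implementation is that push and pop must appear as a lexical pair in the same block (POSIX already requires this); the usage below is standard, and the on-stack bookkeeping mentioned above is hidden inside the macros:

    #include <pthread.h>

    static void
    unlock_it(void *arg)
    {
            pthread_mutex_unlock(arg);
    }

    void
    work(pthread_mutex_t *mtx)
    {
            pthread_mutex_lock(mtx);
            /* push/pop are macros sharing on-stack cleanup state, so they
             * must be used as a matched pair within one lexical block. */
            pthread_cleanup_push(unlock_it, mtx);
            /* ... code that may hit a cancellation point ... */
            pthread_cleanup_pop(1);         /* run the handler */
    }
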
dfr
a6bd1d1955 Call the fcntl compatibility wrapper from the thread library fcntl wrappers
so that they get the benefit of the (limited) forward ABI compatibility.

MFC after: 1 week
2008-05-30 14:47:42 +00:00
davidxu
8951bcd14c Eliminate global mutex by using pthread_once's state field as
a semaphore.
2008-05-30 00:02:59 +00:00
davidxu
c0f6b35a3a - Reduce function call overhead for uncontended case.
- Remove unused flags MUTEX_FLAGS_* and their code.
- Check validity of the timeout parameter in mutex_self_lock().
2008-05-29 07:57:33 +00:00
imp
b9242ed45b Commit missing mips libthr support that I thought I'd committed earlier 2008-05-11 05:54:52 +00:00
davidxu
fc58e99cef Remove libc_r's remnant code. 2008-05-06 07:27:11 +00:00
davidxu
0e9d39ae8f Use UMTX_OP_WAIT_UINT_PRIVATE and UMTX_OP_WAKE_PRIVATE to save
time in the kernel (avoid a VM lookup).
2008-04-29 03:58:18 +00:00
kris
4a87c82b19 Increase the default MUTEX_ADAPTIVE_SPINS to 2000, after further
testing it turns out 200 was too short to give good adaptive
performance.

Reviewed by:   jeff
MFC after:     1 week
2008-04-26 13:19:07 +00:00
imp
92f18b23d5 Bring in mips threads support from perforce mips2-jnpr branch. 2008-04-26 12:17:57 +00:00
delphij
6b7d752076 Avoid various shadowed variables. libthr is now almost WARNS=4 clean except
for some const dequalifiers that need more careful investigation.

Ok'ed by:	davidxu
2008-04-23 21:06:51 +00:00
davidxu
b794176bc5 Use native rwlock. 2008-04-22 06:44:11 +00:00
davidxu
6a349c6771 _vfork is not in libthr, remove the reference. 2008-04-16 03:19:11 +00:00
davidxu
a1371575f4 don't include pthread_np.h, it is not used. 2008-04-14 08:08:40 +00:00
davidxu
8d9f007088 put THR_CRITICAL_LEAVE into do .. while statement. 2008-04-03 02:47:35 +00:00
davidxu
2d5bf7e6fc add __hidden suffix to _umtx_op_err, this eliminates PLT. 2008-04-03 02:13:51 +00:00
davidxu
f1df18eb48 Non-portable functions are in pthread_np.h, fix compiling problem. 2008-04-02 11:41:12 +00:00
davidxu
d00a98a4e8 Add pthread_setaffinity_np and pthread_getaffinity_np to libc namespace. 2008-04-02 08:53:18 +00:00
davidxu
aa2019ec00 Remove unused functions. 2008-04-02 08:33:42 +00:00
davidxu
570834290d Replace function _umtx_op with _umtx_op_err; the latter function directly
returns the error number. Because errno can be clobbered by a user's signal handler, and
most of the pthread API heavily depends on errno being correct, this change
should improve the stability of the thread library.
2008-04-02 07:41:25 +00:00
davidxu
8d50c29c72 Replace the userland rwlock with a purely kernel-based rwlock; the new
implementation does not switch pointers when it resumes waiters.

Asked by: jeff
2008-04-02 04:32:31 +00:00
davidxu
111802a962 Restore normal pthread_cond_signal path to avoid some obscure races. 2008-04-01 06:23:08 +00:00
davidxu
664ef9f365 Return EAGAIN early rather than running a bunch of code later; micro-optimize
static branch prediction.
2008-04-01 00:21:49 +00:00
davidxu
220584ad61 Rewrite rwlock to use atomic operations to change the rwlock state; this
eliminates internal mutex lock contention when most rwlock operations
are reads.

Original patch provided by: jeff
2008-03-31 02:55:49 +00:00
ru
0f0375e36a Remove options MK_LIBKSE and DEFAULT_THREAD_LIB now that we no longer
build libkse.  This should fix WITHOUT_LIBTHR builds as a side effect.
2008-03-29 17:44:40 +00:00
ru
e8df07e5aa Compile libthr with warnings. 2008-03-25 13:28:12 +00:00
ru
e2f2131976 Fixed mis-implementation of pthread_mutex_get{spin,yield}loops_np().
Reviewed by:	davidxu
2008-03-25 09:48:10 +00:00
davidxu
1423f22a4c Add POSIX pthread API pthread_getcpuclockid() to get a thread's cpu
time clock id.
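
A usage example for the new interface, using standard POSIX calls:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Print the CPU time consumed so far by a given thread. */
    void
    print_thread_cpu_time(pthread_t which)
    {
            clockid_t       cid;
            struct timespec ts;

            if (pthread_getcpuclockid(which, &cid) == 0 &&
                clock_gettime(cid, &ts) == 0)
                    printf("cpu time: %jd.%09ld s\n",
                        (intmax_t)ts.tv_sec, ts.tv_nsec);
    }
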
2008-03-22 09:59:20 +00:00