Commit Graph

43 Commits

Author SHA1 Message Date
davidxu
86d9958ce8 change code to use unwind.h. 2010-09-30 12:59:56 +00:00
davidxu
56cf3f4638 Report death event to debugger before moving to gc list; otherwise the
debugger may not be able to find it on the thread list.
2010-09-26 06:45:24 +00:00
davidxu
121f2e2d60 Because the old _pthread_cleanup_push/pop do not record a frame address,
they are incompatible with the stack unwinding code. If they are invoked,
disable stack unwinding for the current thread, and print a warning message
when the thread is exiting.
2010-09-25 06:27:09 +00:00
davidxu
2aedc66f12 Simplify code, and in the while loop, fix the operator to match the
unwinding direction.
2010-09-25 04:21:31 +00:00
davidxu
fe5567c8f1 Because the atfork lock is held while forking, a thread cancellation
triggered by an atfork handler is unsafe; use the internal flag no_cancel to
disable it.
2010-09-19 09:03:11 +00:00
davidxu
4c94bb3829 - The _Unwind_Resume function is not used; remove it.
- Use a store barrier to make sure uwl_forcedunwind is the latest thing
  other threads can see.
- Add some comments.
2010-09-19 05:42:29 +00:00
davidxu
ad24f558bd Fix a race condition when finding stack unwinding functions. 2010-09-19 05:19:47 +00:00
davidxu
b00fcaa22c Add code to support stack unwinding when a thread exits. Note that only
deferred-mode cancellation works; asynchronous mode does not work because
it lacks libunwind support. Stack unwinding is not enabled unless
LIBTHR_UNWIND_STACK is defined in the Makefile.
2010-09-15 02:56:32 +00:00
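
A minimal sketch of the forced-unwind idea this refers to, using the standard
unwind.h interface. The names thread_unwind, thread_unwind_stop and exit_thread
are illustrative only and the cleanup-handler bookkeeping is omitted; this is
not the committed libthr code.

	#include <unwind.h>
	#include <unistd.h>

	/* Illustrative: in libthr the exception object lives in the thread
	 * structure and the exit path releases the thread properly. */
	static struct _Unwind_Exception thr_unwind_ex;

	static void
	exit_thread(void)
	{
		/* Stand-in for the internal thread-exit path. */
		_exit(0);
	}

	static _Unwind_Reason_Code
	thread_unwind_stop(int version, _Unwind_Action actions,
	    _Unwind_Exception_Class exc_class, struct _Unwind_Exception *exc,
	    struct _Unwind_Context *context, void *stop_arg)
	{
		(void)version; (void)exc_class; (void)exc;
		(void)context; (void)stop_arg;

		/* Cleanup handlers registered for the frame currently being
		 * unwound would run here; once the whole stack has been
		 * unwound, finish exiting the thread. */
		if (actions & _UA_END_OF_STACK)
			exit_thread();
		return (_URC_NO_REASON);
	}

	/* Called from the deferred-cancellation / pthread_exit() path; as the
	 * commit notes, asynchronous cancellation is not covered. */
	void
	thread_unwind(void)
	{
		_Unwind_ForcedUnwind(&thr_unwind_ex, thread_unwind_stop, NULL);
	}
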
davidxu
e87e922f31 Convert thread list lock from mutex to rwlock. 2010-09-13 07:03:01 +00:00
davidxu
4dcb50723a Add a signal handler wrapper. The reason for adding it is that there are
some cases we want to improve:
  1) If a thread got a signal while in a cancellation point, it is
     possible that TDP_WAKEUP may be eaten by the signal handler
     if the handler calls some interruptible system calls.
  2) In a signal handler, we want to disable cancellation.
  3) When a thread is holding some low-level locks, it is better to
     disable signals; that code then need not worry about reentrancy,
     and the sigprocmask system call is avoided because it is a bit expensive.
The signal handler wrapper works in this way:
  1) libthr installs its own signal handler if user code invokes sigaction
     to install a handler; the user handler is recorded in an internal
     array.
  2) When a signal is delivered, libthr's signal handler is invoked.
     libthr checks whether the thread holds some low-level lock or is in a
     critical region; if so, the signal is buffered and all signals are
     masked. Once the thread leaves the critical region, the correct signal
     mask is restored and the buffered signal is processed.
  3) Before the user signal handler is invoked, cancellation is temporarily
     disabled; after the user signal handler returns, the cancellation state
     is restored and any pending cancellation is rescheduled.
2010-09-01 02:18:33 +00:00
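
A rough sketch of the buffering step described in 2) above: the wrapper can
defer the signal and leave everything masked by editing the interrupted
context's signal mask. The struct and field names here (thread_sigstate,
critical_count, deferred_*) are invented for illustration and are not libthr's.

	#include <signal.h>
	#include <ucontext.h>

	/* Hypothetical per-thread state; libthr keeps the equivalent in its
	 * thread structure. */
	struct thread_sigstate {
		int		critical_count;	/* > 0: holding low-level locks */
		int		deferred_sig;	/* buffered signal, 0 if none */
		siginfo_t	deferred_info;
		sigset_t	saved_mask;	/* mask to restore on leaving */
	};

	static __thread struct thread_sigstate tstate;

	/* Installed in place of the user handler (which is recorded in an
	 * internal array, not shown). */
	void
	thr_sig_wrapper(int sig, siginfo_t *info, void *ucp)
	{
		ucontext_t *uc = ucp;

		if (tstate.critical_count > 0) {
			/* In a critical region: buffer the signal and leave
			 * all signals blocked when the handler returns, by
			 * editing the mask the kernel will restore. */
			tstate.deferred_sig = sig;
			tstate.deferred_info = *info;
			tstate.saved_mask = uc->uc_sigmask;
			sigfillset(&uc->uc_sigmask);
			return;
		}
		/* Otherwise: disable cancellation, call the recorded user
		 * handler, restore the cancellation state and reschedule any
		 * pending cancellation (omitted). */
	}
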
davidxu
c360399299 eliminate unused code. 2010-08-26 09:04:27 +00:00
davidxu
8eb397b4ca Tweak code a bit to be POSIX-compatible: when a cancellation request
is acted upon, or when a thread calls pthread_exit(), the thread first
disables cancellation by setting its cancelability state to
PTHREAD_CANCEL_DISABLE and its cancelability type to
PTHREAD_CANCEL_DEFERRED. The cancelability state remains set to
PTHREAD_CANCEL_DISABLE until the thread has terminated.

It has no effect if a cancellation cleanup handler or thread-specific
data destructor routine changes the cancelability state to
PTHREAD_CANCEL_ENABLE.
2010-08-17 02:50:12 +00:00
jasone
c30fff5419 Move call to _malloc_thread_cleanup() so that if this is the last thread,
the call never happens.  This is necessary because malloc may be used
during exit handler processing.

Submitted by:	davidxu
2008-09-09 17:14:32 +00:00
jasone
a734052e9c Add thread-specific caching for small size classes, based on magazines.
This caching allows for completely lock-free allocation/deallocation in the
steady state, at the expense of likely increased memory use and
fragmentation.

Reduce the default number of arenas to 2*ncpus, since thread-specific
caching typically reduces arena contention.

Modify size class spacing to include ranges of 2^n-spaced, quantum-spaced,
cacheline-spaced, and subpage-spaced size classes.  The advantages are:
fewer size classes, reduced false cacheline sharing, and reduced internal
fragmentation for allocations that are slightly over 512, 1024, etc.

Increase RUN_MAX_SMALL, in order to limit fragmentation for the
subpage-spaced size classes.

Add a size-->bin lookup table for small sizes to simplify translating sizes
to size classes.  Include a hard-coded constant table that is used unless
custom size class spacing is specified at run time.

Add the ability to disable tiny size classes at compile time via
MALLOC_TINY.
2008-08-27 02:00:53 +00:00
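
To illustrate the size-to-bin lookup idea (this is not jemalloc's real table or
class spacing, and the tiny, cacheline- and subpage-spaced classes are left
out), a quantum-spaced table might look like this:

	#include <stddef.h>
	#include <stdint.h>

	/* Hypothetical constants: 16-byte quantum, 512-byte small limit. */
	#define	QUANTUM		16
	#define	SMALL_MAX	512

	/* size2bin[i] gives the bin for requests of at most i * QUANTUM bytes. */
	static const uint8_t size2bin[SMALL_MAX / QUANTUM + 1] = {
		 0,			/* index 0 unused */
		 0,  1,  2,  3,  4,  5,  6,  7,
		 8,  9, 10, 11, 12, 13, 14, 15,
		16, 17, 18, 19, 20, 21, 22, 23,
		24, 25, 26, 27, 28, 29, 30, 31,
	};

	static inline unsigned
	small_size_class(size_t size)
	{
		/* Round the request up to the quantum, then index the table. */
		return (size2bin[(size + QUANTUM - 1) / QUANTUM]);
	}
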
davidxu
fc58e99cef Remove libc_r's remnant code. 2008-05-06 07:27:11 +00:00
davidxu
0e9d39ae8f Use UMTX_OP_WAIT_UINT_PRIVATE and UMTX_OP_WAKE_PRIVATE to save
time in the kernel (avoiding a VM lookup).
2008-04-29 03:58:18 +00:00
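
A rough sketch of the private wait/wake pattern on FreeBSD's _umtx_op(2); the
wrapper names are made up and error handling is omitted:

	#include <sys/types.h>
	#include <sys/umtx.h>

	/* Sleep while *addr still holds the expected value.  The *_PRIVATE ops
	 * tell the kernel the word is process-private, so it can skip the VM
	 * object lookup needed for process-shared words. */
	static int
	thr_umtx_wait_uint(u_int *addr, u_int expected)
	{
		return (_umtx_op(addr, UMTX_OP_WAIT_UINT_PRIVATE,
		    (u_long)expected, NULL, NULL));
	}

	/* Wake up to nwake threads sleeping on addr. */
	static int
	thr_umtx_wake(u_int *addr, int nwake)
	{
		return (_umtx_op(addr, UMTX_OP_WAKE_PRIVATE,
		    (u_long)nwake, NULL, NULL));
	}
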
ru
e8df07e5aa Compile libthr with warnings. 2008-03-25 13:28:12 +00:00
davidxu
9dbfb036ab - Copy the signal mask out before THR_UNLOCK(), because THR_UNLOCK() may call
_thr_suspend_check(), which messes up the sigmask saved in the thread structure.
- Don't suspend a thread that has force_exit set.
- In pthread_exit(), if there is a suspension flag set, wake up the waiting
  thread after setting PS_DEAD; this causes the waiting thread to break out of
  the loop in suspend_common().
2008-03-18 02:06:51 +00:00
davidxu
6b41341850 Don't report death event to debugger if it is a forced exit. 2008-03-06 02:07:18 +00:00
davidxu
cbdadd6e3e Call the underscore version of pthread_cleanup_pop instead. 2007-12-20 04:40:12 +00:00
imp
9109b1ceb8 Remove 3rd clause, renumber, ok per email 2007-01-12 07:26:21 +00:00
davidxu
add205129c Eliminate atomic operations in thread cancellation functions; this should
reduce the overhead of cancellation points.
2006-11-24 09:57:38 +00:00
davidxu
31f2b819c6 WARNS level 4 cleanup. 2006-04-04 02:57:49 +00:00
davidxu
d6c88c0f27 Refine thread suspension code: thread suspension is now a blocking
operation, and the caller is blocked until the target threads are really
suspended; also avoid suspending a thread when it is holding a
critical lock.
Fix a bug in _thr_ref_delete which tests a never-set flag.
2006-01-05 13:51:22 +00:00
davidxu
0929747005 Follow the change in the kernel: the joiner thread just waits at the thread
id address and lets the kernel wake it up.
2005-10-26 07:11:43 +00:00
davidxu
2cf5eeb001 Add debugger event reporting support; currently only TD_CREATE and TD_DEATH
events are reported.
2005-04-12 03:00:28 +00:00
davidxu
f066519e91 Import my recent 1:1 threading work. Some of the improved features include:
1. Fast simple-type mutexes.
 2. __thread TLS works.
 3. Asynchronous cancellation works (using a signal).
 4. Thread synchronization is fully based on umtx; mainly, condition
    variables and other synchronization objects were rewritten to use
    umtx directly. Those objects can be shared between processes via
    shared memory, but that requires an ABI change which has not happened yet.
 5. The default stack size is increased to 1M on 32-bit platforms, 2M on
    64-bit platforms.
As a result, some mysql super-smack benchmarks show performance is
improved massively.

Okayed by: jeff, mtm, rwatson, scottl
2005-04-02 01:20:00 +00:00
mtm
ce193bdc46 1. Now that a thread's state is changed from within the kernel, where
   no userland locks are held, the dead thread lock can no longer protect
   access to it. Therefore, instead of using an if (!dead)...else clause
   after walking the active threads list, test the thread pointer before
   deciding not to walk the dead threads list. If the thread pointer is null,
   it means it was not found in the active threads list and the dead threads
   list should be checked.

2. Do not free the stack of a thread that is not marked dead. This is the
   2nd and final part of eliminating the race to free a thread's stack.

MFC after: 3 days
2004-10-13 11:42:20 +00:00
mtm
9c7869cb6f Remove a reference to a non-existent syscall: _thr_exit(). The
actual name is thr_exit(). How this ever worked is beyond me.
2004-10-08 14:48:02 +00:00
mtm
0a21f474dc Close a race between a thread exiting and the freeing of its stack.
After some discussion the best option seems to be to signal the thread's
death from within the kernel. This requires that thr_exit() take an
argument.

Discussed with: davidxu, deischen, marcel
MFC after: 3 days
2004-10-06 14:23:00 +00:00
mtm
d835c0621f Make libthr async-signal-safe without costly signal masking. The guidelines I
followed are: Only 3 functions (pthread_cancel, pthread_setcancelstate,
pthread_setcanceltype) are required to be async-signal-safe by POSIX. None of
the rest of the pthread api is required to be async-signal-safe. This means
that only the three mentioned functions are safe to use from inside
signal handlers.
However, there are certain system/libc calls that are
cancellation points that a caller may call from within a signal handler,
and since they are cancellation points, calls have to be made into libthr
to test for cancellation and exit the thread if necessary. So, the
cancellation test and thread exit code paths must be async-signal-safe
as well. A summary of the changes follows:

o Almost all of the code paths that both masked signals and locked the
  pthread structure now lock only the pthread structure.
o Signals are masked (and left that way) as soon as a thread enters
  pthread_exit().
o The active and dead threads locks now explicitly require that signals
  are masked.
o Access to the isdead field of the pthread structure is protected by both
  the active and dead list locks for writing. Either one is sufficient for
  reading.
o The thread state and type fields have been combined into one three-state
  switch to make it easier to read without requiring a lock. It doesn't need
  a lock for writing (and therefore for reading either) because only the
  current thread can write to it and it is an integer value.
o The thread state field of the pthread structure has been eliminated. It
  was an unnecessary field that mostly duplicated the flags field, but
  required additional locking that would make a lot more code paths require
  signal masking. Any truly unique values (such as PS_DEAD) have been
  reborn as separate members of the pthread structure.
o Since the mutex and condvar pthread functions are not async-signal-safe
  there is no need to muck about with the wait queues when handling
  a signal ...
o ... which also removes the need for wrapping signal handlers and sigaction(2).
o The condvar and mutex async-cancellation code had to be revised as a result
  of some of these changes, which resulted in semi-unrelated changes which
  would have been difficult to work on as a separate commit, so they are
  included as well.

The only part of the changes I am worried about is related to locking for
the pthread joining fields. But, I will take a closer look at them once this
mega-patch is committed.
2004-05-20 12:06:16 +00:00
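
One of the bullets above, masking signals as soon as a thread enters
pthread_exit() and leaving them masked, roughly amounts to this sketch (libthr
would issue the sigprocmask syscall through its own internal wrapper):

	#include <signal.h>

	static void
	thr_exit_block_signals(void)
	{
		sigset_t set;

		/* Block everything; the mask stays in place until the thread
		 * terminates, so later exit-path code needs no extra masking. */
		sigfillset(&set);
		sigprocmask(SIG_SETMASK, &set, NULL);
	}
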
mtm
c715901410 Remove the garbage collector thread. All resources are freed
in-line. If the exiting thread cannot release a resource, then
the next thread to exit will release it.
2004-03-28 14:05:28 +00:00
mtm
2d3f49ebd3 Implement reference counting of read-write locks. This uses
a list in the thread structure to keep track of the locks and
how many times they have been locked. This list is checked
on every lock and unlock. The traversal through the list is
O(n). Most applications don't hold so many locks at once that
this will become a problem. However, if it does become a problem
it might be a good idea to review this once libthr is
off probation and in the optimization cycle.
This fixes:
	o deadlock when a thread tries to recursively acquire a
	  read lock when a writer is waiting on the lock.
	o a thread could previously successfully unlock a lock it did not own
	o deadlock when a thread tries to acquire a write lock on
	  a lock it already owns for reading or writing [ this is admittedly
	  not required by POSIX, but is nice to have ]
2004-01-19 14:51:45 +00:00
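
A simplified sketch of the per-thread bookkeeping this describes; the type and
function names are invented, and the real list lives in libthr's thread
structure:

	#include <stdlib.h>
	#include <pthread.h>

	/* One record per rwlock the thread currently holds. */
	struct rwlock_held {
		struct rwlock_held	*next;
		pthread_rwlock_t	*lock;
		int			rdcount;	/* read-lock depth */
		int			wrcount;	/* write-lock depth */
	};

	/* Hypothetical per-thread list head. */
	static __thread struct rwlock_held *held_locks;

	/* O(n) scan of the locks this thread holds, as the commit notes. */
	static struct rwlock_held *
	rwlock_held_find(pthread_rwlock_t *lock)
	{
		struct rwlock_held *h;

		for (h = held_locks; h != NULL; h = h->next)
			if (h->lock == lock)
				return (h);
		return (NULL);
	}

	/* Record a successful read lock.  Checking this list first is what
	 * lets the library refuse to unlock a lock the thread never acquired
	 * and grant recursive read locks even while a writer is queued. */
	static void
	rwlock_held_note_rdlock(pthread_rwlock_t *lock)
	{
		struct rwlock_held *h = rwlock_held_find(lock);

		if (h == NULL) {
			if ((h = calloc(1, sizeof(*h))) == NULL)
				return;		/* sketch: ignore failure */
			h->lock = lock;
			h->next = held_locks;
			held_locks = h;
		}
		h->rdcount++;
	}
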
mtm
2ced154094 Change all instances of THR_LOCK/UNLOCK, etc. to UMTX_*.
It is a more accurate description of the locks they
operate on.
2003-07-06 10:18:48 +00:00
mtm
7f75cc45e9 Sweep through pthread locking and use the new locking primitives for
libthr.
2003-06-29 23:51:04 +00:00
mtm
e0fdf7a90d Don't hold the active thread list lock when signaling the gc thread.
The dead thread list lock is sufficient for synchronization.

Retire the arch_id (ldt array slot) in the gc thread instead of
doing it in the thread itself.

Approved by:	re/jhb
2003-05-29 20:46:53 +00:00
mtm
f36826da25 Minimize the potential for deadlocks between an exiting thread and its
joiner by making sure all locks and unlocks occur in the same order. For
the record the lock order is: DEAD_LIST, THREAD_LIST, exiting thread, joiner
thread.

Approved by: re/rwatson
2003-05-27 21:48:42 +00:00
mtm
9b84ee274a Start locking up the active and dead threads lists. The active threads
list is protected by a spinlock_t, but the dead list uses a pthread_mutex
because it is necessary to synchronize other threads with the garbage
collector thread. Lock/Unlock macros are used so it's easier to make
changes to the locks in the future.

The 'dead thread list' lock is intended to replace the gc mutex.
This doesn't have any practical ramifications. It simply makes it
clearer what the purpose of the lock is. The gc will use this lock,
instead of the gc mutex, to synchronize access to the dead list with
other threads.

Modify _pthread_exit() to use these two new locks instead of GIANT_LOCK,
and also to properly lock and protect thread state changes,
especially with respect to a joining thread.

The gc thread was also re-arranged to be more organized and less nested.

_pthread_join() was also modified to use the thread list locks. However,
locking and unlocking here needs special care because a thread could find
itself in a position where it's joining an exiting thread that is
waiting on the dead list lock, which this thread (joiner) holds. If the
joiner doesn't take care to lock *and* unlock in the same order they
(the joiner and the joinee) could deadlock against each other.

Approved by:	re/blanket libthr
2003-05-25 08:31:33 +00:00
mtm
6356a9c88e Make WARNS2 clean. The fixes mostly included:
o removed unused variables
	o explicit inclusion of header files
	o prototypes for externally defined functions

Approved by:    re/blanket libthr
2003-05-23 09:48:20 +00:00
mtm
7f8de979fc note to self: do not confuse void* with int.
Approved by:	re/blanket libthr
2003-05-23 08:13:24 +00:00
mtm
ef39a958df When a thread exits it does not return from the kernel unless it
is the *only* remaining thread in the application; in that case we
should not core dump but instead exit gracefully.

Approved by: markm/mentor, re/blanket libthr
2003-05-21 03:29:18 +00:00
jeff
bb610376a5 - Define curthread as _get_curthread() and remove all direct calls to
_get_curthread().  This is similar to the kernel's curthread.  Doing
   this saves stack overhead and is more convenient to the programmer.
 - Pass the pointer to the newly created thread to _thread_init().
 - Remove _get_curthread_slow().
2003-04-02 03:05:39 +00:00
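
The convention being described, sketched here rather than copied from the
libthr headers:

	/* In a private header: make curthread read like the kernel's, so call
	 * sites neither repeat _get_curthread() nor keep a local variable. */
	struct pthread;

	struct pthread	*_get_curthread(void);

	#define	curthread	_get_curthread()

	/* A call site then simply writes, e.g.:
	 *	curthread->cancelflags |= ...;
	 * (field name illustrative). */
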
jeff
08f648d4cd - Add libthr but don't hook it up to the regular build yet. This is an
adaptation of libc_r for the thr system call interface.  This is beta
   quality code.
2003-04-01 03:46:29 +00:00