Commit Graph

31 Commits

Author SHA1 Message Date
Jason Evans
5b3842aefa Move call to _malloc_thread_cleanup() so that if this is the last thread,
the call never happens.  This is necessary because malloc may be used
during exit handler processing.

Submitted by:	davidxu
2008-09-09 17:14:32 +00:00
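
A minimal sketch of the ordering this commit establishes. The wrapper function and its last-thread flag are hypothetical; _malloc_thread_cleanup() is the real libc-private hook:

    void    _malloc_thread_cleanup(void);   /* libc-private prototype */

    /*
     * Tear down the malloc thread cache only when other threads
     * remain. The last thread must skip the call, because atexit(3)
     * handlers that run after this point may still call malloc().
     */
    static void
    thread_exit_sketch(int last_thread)
    {
            if (!last_thread)
                    _malloc_thread_cleanup();
            /* ... proceed with kernel-level thread exit ... */
    }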
Jason Evans
d6742bfbd3 Add thread-specific caching for small size classes, based on magazines.
This caching allows for completely lock-free allocation/deallocation in the
steady state, at the expense of likely increased memory use and
fragmentation.

Reduce the default number of arenas to 2*ncpus, since thread-specific
caching typically reduces arena contention.

Modify size class spacing to include ranges of 2^n-spaced, quantum-spaced,
cacheline-spaced, and subpage-spaced size classes.  The advantages are:
fewer size classes, reduced false cacheline sharing, and reduced internal
fragmentation for allocations that are slightly over 512, 1024, etc.

Increase RUN_MAX_SMALL, in order to limit fragmentation for the
subpage-spaced size classes.

Add a size-->bin lookup table for small sizes to simplify translating sizes
to size classes.  Include a hard-coded constant table that is used unless
custom size class spacing is specified at run time.

Add the ability to disable tiny size classes at compile time via
MALLOC_TINY.
2008-08-27 02:00:53 +00:00
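
A minimal sketch of the size-->bin lookup described above. The constants and table contents are illustrative assumptions, not the committed values; the point is the O(1) translation from a small request size to its bin index:

    #include <stddef.h>
    #include <stdint.h>

    #define TINY_MIN_2POW   3       /* assumed smallest spacing: 8 bytes */
    #define SMALL_MAX       512     /* assumed table coverage limit */

    /* Hard-coded at compile time unless custom spacing is requested
     * at run time, in which case the table is regenerated. */
    static const uint8_t size2bin[SMALL_MAX >> TINY_MIN_2POW] = {
            0 /* ... bin indices for the default class spacing ... */
    };

    static inline unsigned
    small_size2bin(size_t size)
    {
            /* One shift and one load replace per-call arithmetic over
             * the 2^n-, quantum-, cacheline-, and subpage-spaced runs. */
            return (size2bin[(size - 1) >> TINY_MIN_2POW]);
    }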
David Xu
cf181aee60 Remove libc_r's remnant code. 2008-05-06 07:27:11 +00:00
David Xu
8d6a11a070 Use UMTX_OP_WAIT_UINT_PRIVATE and UMTX_OP_WAKE_PRIVATE to save
time in the kernel (avoids a VM lookup).
2008-04-29 03:58:18 +00:00
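
A minimal sketch of the pairing these ops enable; wait_on() and wake_one() are hypothetical wrappers, while _umtx_op(2) and the op names are real:

    #include <sys/types.h>
    #include <sys/umtx.h>

    static void
    wait_on(volatile u_int *addr, u_int expected)
    {
            /* Sleep only while *addr still equals 'expected'. The
             * PRIVATE op keys the sleep queue on the virtual address,
             * letting the kernel skip the VM object lookup. */
            _umtx_op(__DEVOLATILE(void *, addr),
                UMTX_OP_WAIT_UINT_PRIVATE, expected, NULL, NULL);
    }

    static void
    wake_one(volatile u_int *addr)
    {
            /* Wake at most one thread blocked on this address. */
            _umtx_op(__DEVOLATILE(void *, addr), UMTX_OP_WAKE_PRIVATE,
                1, NULL, NULL);
    }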
Ruslan Ermilov
e03efb02bc Compile libthr with warnings. 2008-03-25 13:28:12 +00:00
David Xu
2ea1f90a18 - Copy the signal mask out before THR_UNLOCK(), because THR_UNLOCK() may
  call _thr_suspend_check(), which clobbers the sigmask saved in the thread
  structure.
- Don't suspend a thread that has force_exit set.
- In pthread_exit(), if a suspension flag is set, wake up the waiting
  thread after setting PS_DEAD; this causes the waiting thread to break out
  of the loop in suspend_common().
2008-03-18 02:06:51 +00:00
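
A minimal sketch of the first fix's ordering, with hypothetical stand-ins for the libthr internals:

    #include <signal.h>

    struct pthread_sketch {
            sigset_t        sigmask;        /* saved mask, assumed field */
    };
    void    thr_lock(struct pthread_sketch *);
    void    thr_unlock(struct pthread_sketch *);    /* may run the
                                                     * suspend check */

    /*
     * Snapshot the mask while the thread lock is still held: the
     * unlock path may run _thr_suspend_check(), which clobbers the
     * sigmask saved in the thread structure.
     */
    void
    copy_sigmask_sketch(struct pthread_sketch *t, sigset_t *oset)
    {
            thr_lock(t);
            *oset = t->sigmask;     /* copy out before unlocking */
            thr_unlock(t);          /* safe: we hold a private copy */
    }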
David Xu
697b4b49be Don't report death event to debugger if it is a forced exit. 2008-03-06 02:07:18 +00:00
David Xu
9ba01c866b call underscore version of pthread_cleanup_pop instead. 2007-12-20 04:40:12 +00:00
Warner Losh
fed32d7544 Remove 3rd clause, renumber, ok per email 2007-01-12 07:26:21 +00:00
David Xu
f08e1bf682 Eliminate atomic operations in thread cancellation functions, it should
reduce overheads of cancellation points.
2006-11-24 09:57:38 +00:00
David Xu
37a6356bbe WARNS level 4 cleanup. 2006-04-04 02:57:49 +00:00
David Xu
bc414752d3 Refine the thread suspension code: thread suspension is now a blockable
operation, and the caller is blocked until the target threads are really
suspended; also avoid suspending a thread while it is holding a
critical lock.
Fix a bug in _thr_ref_delete, which tested a flag that is never set.
2006-01-05 13:51:22 +00:00
David Xu
d7f119abd5 Follow the change in the kernel: the joiner thread now just waits at the
thread id address and lets the kernel wake it up.
2005-10-26 07:11:43 +00:00
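
A minimal sketch of the join-side loop under this scheme. The terminated-tid value of 0 and the field layout are assumptions; UMTX_OP_WAIT is the real op:

    #include <sys/types.h>
    #include <sys/umtx.h>

    /*
     * The kernel rewrites the tid word and issues an umtx wake when
     * the thread exits, so the joiner simply sleeps on that address
     * until the value changes; no userland handshake is needed.
     */
    static void
    join_wait_sketch(volatile long *tidp)
    {
            long tid;

            while ((tid = *tidp) != 0)      /* 0 == terminated, assumed */
                    _umtx_op(__DEVOLATILE(void *, tidp), UMTX_OP_WAIT,
                        (u_long)tid, NULL, NULL);
    }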
David Xu
d245d9e13f Add debugger event reporting support; currently only TD_CREATE and TD_DEATH
events are reported.
2005-04-12 03:00:28 +00:00
David Xu
a091d823ad Import my recent 1:1 threading work. Improved features include:
1. fast, simple mutex type.
 2. __thread TLS works.
 3. asynchronous cancellation works (using signals).
 4. thread synchronization is fully based on umtx; notably, condition
    variables and other synchronization objects were rewritten to use
    umtx directly. Those objects can be shared between processes via
    shared memory, but that requires an ABI change which has not happened yet.
 5. the default stack size is increased to 1M on 32-bit platforms, 2M on
    64-bit platforms.
As a result, some MySQL super-smack benchmarks show massively improved
performance.

Okayed by: jeff, mtm, rwatson, scottl
2005-04-02 01:20:00 +00:00
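
A minimal sketch in the spirit of items 1 and 4: a mutex whose uncontended paths are single atomic ops and whose contended paths sleep in the kernel via umtx. The two-state encoding and the unconditional wake are simplifications, not the committed design:

    #include <sys/types.h>
    #include <machine/atomic.h>
    #include <sys/umtx.h>

    struct simple_mutex {
            volatile u_int  state;  /* 0 = unlocked, 1 = locked */
    };

    static void
    simple_lock(struct simple_mutex *m)
    {
            while (!atomic_cmpset_acq_int(&m->state, 0, 1)) {
                    /* Contended: sleep while the word reads "locked". */
                    _umtx_op(__DEVOLATILE(void *, &m->state),
                        UMTX_OP_WAIT_UINT, 1, NULL, NULL);
            }
    }

    static void
    simple_unlock(struct simple_mutex *m)
    {
            atomic_store_rel_int(&m->state, 0);
            /* The unconditional wake keeps the sketch short; a real
             * implementation tracks contention to skip the syscall. */
            _umtx_op(__DEVOLATILE(void *, &m->state), UMTX_OP_WAKE,
                1, NULL, NULL);
    }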
Mike Makonnen
5dbd7addb0 1. Now that a thread's state is changed from within the kernel, where
   no userland locks are held, the dead thread lock can no longer protect
   access to it. Therefore, instead of using an if (!dead)...else clause
   after walking the active threads list, test the thread pointer before
   deciding not to walk the dead threads list. If the thread pointer is NULL,
   it means the thread was not found in the active threads list and the dead
   threads list should be checked.

2. Do not free the stack of a thread that is not marked dead. This is the
   2nd and final part of eliminating the race to free a thread's stack.

MFC after: 3 days
2004-10-13 11:42:20 +00:00
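
A minimal sketch of the lookup order in item 1; the list type and helper are hypothetical:

    struct pthread;                         /* opaque here */
    struct thr_list;                        /* hypothetical list type */
    extern struct thr_list  active_threads, dead_threads;
    struct pthread  *list_find(struct thr_list *, long);

    /*
     * Walk the active list first; a NULL result -- not a dead flag
     * read without its lock -- decides whether the dead threads list
     * must be searched as well.
     */
    struct pthread *
    find_thread_sketch(long tid)
    {
            struct pthread *t;

            t = list_find(&active_threads, tid);
            if (t == NULL)
                    t = list_find(&dead_threads, tid);
            return (t);
    }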
Mike Makonnen
4cdb7f14ed Remove a reference to a non-existent syscall: _thr_exit(). The
actual name is thr_exit(). How this ever worked is beyond me.
2004-10-08 14:48:02 +00:00
Mike Makonnen
401901ac43 Close a race between a thread exiting and the freeing of its stack.
After some discussion the best option seems to be to signal the thread's
death from within the kernel. This requires that thr_exit() take an
argument.

Discussed with: davidxu, deischen, marcel
MFC after: 3 days
2004-10-06 14:23:00 +00:00
Mike Makonnen
4cd18a22d5 Make libthr async-signal-safe without costly signal masking. The guidelines I
followed are: only 3 functions (pthread_cancel, pthread_setcancelstate,
pthread_setcanceltype) are required to be async-signal-safe by POSIX; none of
the rest of the pthread API is required to be async-signal-safe. This means
that only the three mentioned functions are safe to use from inside
signal handlers.
However, certain system/libc calls that are cancellation points may be
called from within a signal handler, and since they are cancellation
points, calls have to be made into libthr to test for cancellation and
exit the thread if necessary. So the cancellation test and thread exit
code paths must be async-signal-safe as well. A summary of the changes
follows:

o Almost all of the code paths that used to mask signals as well as lock the
  pthread structure now lock only the pthread structure.
o Signals are masked (and left that way) as soon as a thread enters
  pthread_exit().
o The active and dead threads locks now explicitly require that signals
  are masked.
o Access to the isdead field of the pthread structure is protected by both
  the active and dead list locks for writing. Either one is sufficient for
  reading.
o The thread state and type fields have been combined into one three-state
  switch to make it easier to read without requiring a lock. It doesn't need
  a lock for writing (and therefore for reading either) because only the
  current thread can write to it and it is an integer value.
o The thread state field of the pthread structure has been eliminated. It
  was an unnecessary field that mostly duplicated the flags field, but
  required additional locking that would make a lot more code paths require
  signal masking. Any truly unique values (such as PS_DEAD) have been
  reborn as separate members of the pthread structure.
o Since the mutex and condvar pthread functions are not async-signal-safe
  there is no need to muck about with the wait queues when handling
  a signal ...
o ... which also removes the need for wrapping signal handlers and sigaction(2).
o The condvar and mutex async-cancellation code had to be revised as a result
  of some of these changes, which resulted in semi-unrelated changes which
  would have been difficult to work on as a separate commit, so they are
  included as well.

The only part of the changes I am worried about is related to locking for
the pthread joining fields. But, I will take a closer look at them once this
mega-patch is committed.
2004-05-20 12:06:16 +00:00
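
A minimal sketch of the lock-free state word described above; the names are hypothetical:

    /*
     * Only the owning thread ever stores to its own state, and an
     * aligned int is read and written atomically, so other threads
     * may read it at any time without taking a lock.
     */
    enum thr_state_sketch { THR_RUNNING, THR_EXITING, THR_DEAD };

    struct pthread_sketch {
            volatile int    state;          /* written only by owner */
    };

    /* Owner, in pthread_exit():   self->state = THR_EXITING;      */
    /* Any reader, e.g. a joiner:  if (t->state == THR_DEAD) ...   */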
Mike Makonnen
1c6f63018d Remove the garbage collector thread. All resources are freed
in-line. If the exiting thread cannot release a resource, then
the next thread to exit will release it.
2004-03-28 14:05:28 +00:00
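
A minimal sketch of the hand-off this implies, with assumed names: a thread cannot unmap the stack it is still running on, so it parks it for whoever exits next:

    #include <stddef.h>

    static void *stack_to_free;     /* assumed: guarded by a list lock */

    static void
    exit_free_sketch(void *my_stack, void (*release)(void *))
    {
            /* Drain the previous exiter's deferred stack... */
            if (stack_to_free != NULL)
                    release(stack_to_free);
            /* ...and park our own; the next exiter frees it. */
            stack_to_free = my_stack;
    }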
Mike Makonnen
c40bafac85 Implement reference counting of read-write locks. This uses
a list in the thread structure to keep track of the locks and
how many times they have been locked. This list is checked
on every lock and unlock. The traversal through the list is
O(n). Most applications don't hold so many locks at once that
this will become a problem. However, if it does become a problem
it might be a good idea to review this once libthr is
off probation and in the optimization cycle.
This fixes:
	o deadlock when a thread tries to recursively acquire a
	  read lock when a writer is waiting on the lock.
	o a thread could previously successfully unlock a lock it did not own
	o deadlock when a thread tries to acquire a write lock on
	  a lock it already owns for reading or writing [ this is admittedly
	  not required by POSIX, but is nice to have ]
2004-01-19 14:51:45 +00:00
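
A minimal sketch of the per-thread held-lock list; structure and field names are assumptions:

    #include <pthread.h>
    #include <sys/queue.h>

    struct rwlock_held {
            SLIST_ENTRY(rwlock_held) link;
            pthread_rwlock_t        *lock;
            int                      rdcount;  /* recursive read depth */
    };

    /* One list head per thread; every lock and unlock walks it, so
     * the cost is O(n) in locks this thread currently holds. */
    SLIST_HEAD(held_list_sketch, rwlock_held);

    /*
     * On rdlock: a lock already on the list just gets rdcount bumped
     * instead of queueing behind a waiting writer, which breaks the
     * recursive-read deadlock. On unlock: a lock not on the list is
     * not owned, so fail with EPERM rather than unlocking it.
     */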
Mike Makonnen
659045ffbf Change all instances of THR_LOCK/UNLOCK, etc. to UMTX_*.
It is a more accurate description of the locks they
operate on.
2003-07-06 10:18:48 +00:00
Mike Makonnen
dbc6f4c07d Sweep through pthread locking and use the new locking primitives for
libthr.
2003-06-29 23:51:04 +00:00
Mike Makonnen
b3cdf7ae2e Don't hold the active thread list lock when signaling the gc thread.
The dead thread list lock is sufficient for synchronization.

Retire the arch_id (ldt array slot) in the gc thread instead of the
doing it in the thread itself.

Approved by:	re/jhb
2003-05-29 20:46:53 +00:00
Mike Makonnen
a09d02f780 Minimize the potential for deadlocks between an exiting thread and its
joiner by making sure all locks and unlocks occur in the same order. For
the record the lock order is: DEAD_LIST, THREAD_LIST, exiting thread, joiner
thread.

Approved by: re/rwatson
2003-05-27 21:48:42 +00:00
Mike Makonnen
71d09bc86a Start locking up the active and dead threads lists. The active threads
list is protected by a spinlock_t, but the dead list uses a pthread_mutex
because it is necessary to synchronize other threads with the garbage
collector thread. Lock/Unlock macros are used so it's easier to make
changes to the locks in the future.

The 'dead thread list' lock is intended to replace the gc mutex.
This doesn't have any practical ramifications. It simply makes it
clearer what the purpose of the lock is. The gc will use this lock,
instead of the gc mutex, to synchronize access to the dead list with
other threads.

Modify _pthread_exit() to use these two new locks instead of GIANT_LOCK,
and also to properly lock and protect thread state changes,
especially with respect to a joining thread.

The gc thread was also re-arranged to be more organized and less nested.

_pthread_join() was also modified to use the thread list locks. However,
locking and unlocking here needs special care because a thread could find
itself in a position where it's joining an exiting thread that is
waiting on the dead list lock, which this thread (joiner) holds. If the
joiner doesn't take care to lock *and* unlock in the same order they
(the joiner and the joinee) could deadlock against each other.

Approved by:	re/blanket libthr
2003-05-25 08:31:33 +00:00
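
A minimal sketch of the two-lock split; the macro bodies are assumptions in the spirit of the commit, and spinlock.h is libc's in-tree header:

    #include <pthread.h>
    #include <spinlock.h>           /* libc's spinlock_t, assumed here */

    static spinlock_t       active_lock;
    static pthread_mutex_t  dead_lock;

    /* Hot path: the active threads list takes a spinlock. */
    #define THREAD_LIST_LOCK()      _SPINLOCK(&active_lock)
    #define THREAD_LIST_UNLOCK()    _SPINUNLOCK(&active_lock)

    /* The dead list takes a mutex, so a thread can block while it
     * synchronizes with the garbage collector thread. */
    #define DEAD_LIST_LOCK()        pthread_mutex_lock(&dead_lock)
    #define DEAD_LIST_UNLOCK()      pthread_mutex_unlock(&dead_lock)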
Mike Makonnen
7d9d7ca2ed Make WARNS2 clean. The fixes mostly included:
	o removed unused variables
	o explicit inclusion of header files
	o prototypes for externally defined functions

Approved by:    re/blanket libthr
2003-05-23 09:48:20 +00:00
Mike Makonnen
4e3f7b6ede note to self: do not confuse void* with int.
Approved by:	re/blanket libthr
2003-05-23 08:13:24 +00:00
Mike Makonnen
f97591bf25 When a thread exits it does not return from the kernel unless it
is the *only* remaining thread in the application, in which case we
should not core dump, and instead exit gracefully.

Approved by: markm/mentor, re/blanket libthr
2003-05-21 03:29:18 +00:00
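
A minimal sketch of the resulting decision at the tail of the exit path; the last-thread test is assumed to happen under the thread list lock:

    #include <stdlib.h>
    #include <sys/thr.h>

    static void
    exit_tail_sketch(int last_thread)
    {
            if (last_thread)
                    exit(0);        /* graceful whole-process exit */
            thr_exit(NULL);         /* kernel reaps this thread only;
                                     * does not return */
    }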
Jeff Roberson
26f52e2f8b - Define curthread as _get_curthread() and remove all direct calls to
_get_curthread().  This is similar to the kernel's curthread.  Doing
   this saves stack overhead and is more convenient to the programmer.
 - Pass the pointer to the newly created thread to _thread_init().
 - Remove _get_curthread_slow().
2003-04-02 03:05:39 +00:00
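
The convention itself is one line; the accessor declaration and call site below are illustrative:

    struct pthread;                         /* opaque here */
    struct pthread  *_get_curthread(void);  /* the real accessor */

    /* Mirror the kernel's curthread: */
    #define curthread       _get_curthread()

    /* Call sites then pick up the thread pointer implicitly, with no
     * explicit temporary kept on the stack: */
    void
    example_use(void)
    {
            struct pthread *self = curthread;       /* expands to a call */
            (void)self;
    }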
Jeff Roberson
bb535300dd - Add libthr but don't hook it up to the regular build yet. This is an
adaptation of libc_r for the thr system call interface.  This is beta
   quality code.
2003-04-01 03:46:29 +00:00