Commit Graph

73 Commits

David Xu
7de1ecef2d Add two commands to the _umtx_op system call that allow a simple mutex to be
locked and unlocked entirely in userland. Locking and unlocking the mutex in
userland reduces the total time a mutex is held by a thread. In some
application code, a mutex protects only a small piece of code whose execution
time is shorter than a simple system call; yet when lock contention happens,
the current implementation forces the lock holder to extend its hold and enter
the kernel to unlock the mutex. This change avoids that disadvantage: the
holder first sets the mutex to the free state and only then enters the kernel
to wake one waiter up. This improves performance dramatically in some sysbench
mutex tests.

Tested by: kris
Sounds great: jeff
2008-06-24 07:32:12 +00:00
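
For illustration, a minimal sketch of the lock/unlock fast path described
above, using C11 atomics; the state constants and the two kernel entry points
are hypothetical stand-ins for the new _umtx_op commands, not the actual
libthr code.

    #include <stdatomic.h>

    #define UNOWNED   0u
    #define OWNED     1u
    #define CONTESTED 2u

    /* Hypothetical stand-ins for the two new _umtx_op commands;
     * the real interface lives in <sys/umtx.h>. */
    void kernel_sleep(atomic_uint *m);
    void kernel_wake_one(atomic_uint *m);

    static void
    simple_lock(atomic_uint *m)
    {
        unsigned expected = UNOWNED;

        /* Fast path: acquire entirely in userland with one CAS. */
        if (atomic_compare_exchange_strong(m, &expected, OWNED))
            return;
        /* Slow path: mark the lock contested and sleep in the kernel. */
        while (atomic_exchange(m, CONTESTED) != UNOWNED)
            kernel_sleep(m);
    }

    static void
    simple_unlock(atomic_uint *m)
    {
        /* Free the lock in userland first; only then enter the kernel
         * to wake one waiter, so the mutex is never held across the
         * unlock syscall. */
        if (atomic_exchange(m, UNOWNED) == CONTESTED)
            kernel_wake_one(m);
    }
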
David Xu
850f4d66cb - Reduce function call overhead for uncontended case.
- Remove unused flags MUTEX_FLAGS_* and their code.
- Check validity of the timeout parameter in mutex_self_lock().
2008-05-29 07:57:33 +00:00
Kris Kennaway
dd77f9f7f2 Increase the default MUTEX_ADAPTIVE_SPINS to 2000; further testing
showed that 200 was too short to give good adaptive performance.

Reviewed by:   jeff
MFC after:     1 week
2008-04-26 13:19:07 +00:00
Ruslan Ermilov
7e0e78248e Fixed mis-implementation of pthread_mutex_get{spin,yield}loops_np().
Reviewed by:	davidxu
2008-03-25 09:48:10 +00:00
Dag-Erling Smørgrav
096ba44775 _pthread_mutex_isowned_np(): use a more reliable method; the current code
will work in simple cases, but may fail in more complicated ones.

Reviewed by:	davidxu
2008-02-14 12:37:58 +00:00
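
A plausible sketch of the ownership test such a function can perform, assuming
the mutex records its owner; the structure and field names are illustrative,
not the actual libthr internals.

    struct pthread;                         /* opaque thread handle   */
    struct pthread *_get_curthread(void);   /* returns calling thread */

    /* Illustrative mutex layout: the owner field is written by the
     * thread that holds the lock. */
    struct hypo_mutex {
        struct pthread *m_owner;
    };

    /* Returns nonzero iff the calling thread owns the mutex; comparing
     * the recorded owner works even in complicated cases such as
     * recursive locks. */
    static int
    isowned(struct hypo_mutex *m)
    {
        return (m->m_owner == _get_curthread());
    }
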
Dag-Erling Smørgrav
d29b081fee Remove unnecessary prototype. 2008-02-06 20:43:19 +00:00
Dag-Erling Smørgrav
1cbdac2668 Per discussion on -threads, rename _islocked_np() to _isowned_np(). 2008-02-06 19:34:31 +00:00
Dag-Erling Smørgrav
fcd61d9141 After careful consideration (and a brief discussion with attilio@), change
the semantics of pthread_mutex_islocked_np() to return true if and only if
the mutex is held by the current thread.

Obviously, change the regression test to match.

MFC after:	2 weeks
2008-02-04 12:35:23 +00:00
Dag-Erling Smørgrav
5fd410a787 Add pthread_mutex_islocked_np(), a cheap way to verify that a mutex is
locked.  This is intended primarily to support the userland equivalent
of the various *_ASSERT_LOCKED() macros we have in the kernel.

MFC after:	2 weeks
2008-02-03 22:38:10 +00:00
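
For example, a userland analogue of the kernel assertion macros might be built
on the new call like this; the macro name is hypothetical, and the function
was later renamed to pthread_mutex_isowned_np (see the rename commit above).

    #include <assert.h>
    #include <pthread.h>

    /* The new call, as introduced by this commit. */
    int pthread_mutex_islocked_np(pthread_mutex_t *mutex);

    /* Hypothetical userland counterpart of the kernel's
     * *_ASSERT_LOCKED() macros. */
    #define MUTEX_ASSERT_LOCKED(m) assert(pthread_mutex_islocked_np(m))
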
David Xu
c5f411515c Add function prototypes. 2007-12-17 02:53:11 +00:00
David Xu
093fcf1694 1. Add function pthread_mutex_setspinloops_np to tune a mutex's spin
loop count.
2. Add function pthread_mutex_setyieldloops_np to tune a mutex's yield
   loop count.
3. Make environment variables PTHREAD_SPINLOOPS and PTHREAD_YIELDLOOPS
   be used only for tuning PTHREAD_MUTEX_ADAPTIVE_NP mutexes.
2007-12-14 06:25:57 +00:00
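
A usage sketch of the new per-mutex tuning calls; the exact prototypes are
assumed from the usual _np pattern (set takes a count, get returns it through
a pointer).

    #include <pthread.h>
    #include <pthread_np.h>

    /* Tune one mutex: spin 1000 times, then yield 10 times, before
     * sleeping in the kernel (prototypes assumed). */
    static int
    tune_mutex(pthread_mutex_t *m)
    {
        int spins, yields, error;

        error = pthread_mutex_setspinloops_np(m, 1000);
        if (error == 0)
            error = pthread_mutex_setyieldloops_np(m, 10);
        /* Read the values back to confirm. */
        if (error == 0)
            error = pthread_mutex_getspinloops_np(m, &spins);
        if (error == 0)
            error = pthread_mutex_getyieldloops_np(m, &yields);
        return (error);
    }
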
David Xu
6a663207e7 Enclose all code for the ENQUEUE_MUTEX macro in a do { } while
statement, and add missing brackets.

MFC: after 1 day
2007-12-11 08:00:58 +00:00
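
The do { ... } while (0) idiom referenced here makes a multi-statement macro
behave as a single statement; a generic illustration, not the actual
ENQUEUE_MUTEX body:

    /* Multi-statement macro without the wrapper: */
    #define ENQUEUE_BAD(count, flag)  (count)++; (flag) = 1

    /* if (cond) ENQUEUE_BAD(n, f);   <- only (count)++ is guarded! */

    /* Wrapped in do { } while (0), the macro acts as one statement
     * and is safe in unbraced if/else bodies: */
    #define ENQUEUE_GOOD(count, flag) do { \
        (count)++;                         \
        (flag) = 1;                        \
    } while (0)
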
Jason Evans
b6b7fd3e2a Fix pointer dereferencing problems in _pthread_mutex_init_calloc_cb() that
were obscured by pseudo-opaque pthreads API pointer casting.
2007-11-28 00:16:24 +00:00
Jason Evans
e1636e1f97 Add _pthread_mutex_init_calloc_cb() to libthr and libkse, so that malloc(3)
(part of libc) can use pthreads mutexes without causing infinite recursion
during initialization.
2007-11-27 03:16:44 +00:00
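
A sketch of the bootstrap problem this solves: initializing a mutex normally
allocates with calloc(), but calloc() lives in malloc, which itself needs the
mutex. The callback variant lets malloc supply its own allocator. The exact
signature should be treated as an assumption, and the arena allocator below
is hypothetical.

    #include <pthread.h>
    #include <stddef.h>

    int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
        void *(*calloc_cb)(size_t, size_t));

    /* Hypothetical bootstrap allocator: carves zeroed memory out of a
     * static arena instead of calling back into malloc/calloc. */
    static char   arena[4096];
    static size_t used;

    static void *
    bootstrap_calloc(size_t number, size_t size)
    {
        size_t total = number * size;

        if (total > sizeof(arena) - used)
            return (NULL);
        void *p = &arena[used];
        used += total;
        return (p);          /* static storage is pre-zeroed */
    }

    static pthread_mutex_t malloc_lock;

    static void
    init_malloc_lock(void)
    {
        /* Initializing via the callback never re-enters malloc. */
        _pthread_mutex_init_calloc_cb(&malloc_lock, bootstrap_calloc);
    }
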
David Xu
9e1ddd5fa0 Convert the ceiling type to unsigned integer before comparing; fixes
compiler warnings.
2007-11-21 05:25:27 +00:00
David Xu
56b45d9067 Avoid adaptive spinning for priority-protected mutexes; the current
implementation always locks them in the kernel.
2007-10-31 01:50:48 +00:00
David Xu
55f18e070f Don't do adaptive spinning when running on a UP (uniprocessor) kernel. 2007-10-31 01:44:50 +00:00
David Xu
e8ef3c283b Restore revision 1.55, kris's adaptive mutex type. 2007-10-31 01:37:13 +00:00
Kris Kennaway
83941f797f Adaptive mutexes should have the same deadlock detection properties that
default (errorcheck) mutexes do.

Noticed by:          davidxu
2007-10-30 09:24:23 +00:00
David Xu
7416cdabcd Add my recent work on adaptive spin mutex code. Two environment
variables tune pthread mutex performance:
1. LIBPTHREAD_SPINLOOPS
	If a pthread mutex is being locked by another thread, this environment
	variable sets the total number of spin loops before the current thread
	sleeps in the kernel. This saves the syscall overhead if the mutex will
	be unlocked very soon (well-written application code).
2. LIBPTHREAD_YIELDLOOPS
	If a pthread mutex is being locked by other threads, this environment
	variable sets the total number of sched_yield() loops before the current
	thread sleeps in the kernel. If the pthread mutex is locked, the current
	thread gives up the CPU but does not sleep in the kernel. This means the
	current thread does not set the contention bit in the mutex, but lets
	the lock owner run again if the owner is on the kernel's run queue; when
	the lock owner unlocks the mutex, it does not need to enter the kernel
	and do lots of work to resume mutex waiters. In some cases this saves
	lots of syscall overhead for the mutex owner.

In my experience, LIBPTHREAD_YIELDLOOPS can sometimes improve performance far
more than LIBPTHREAD_SPINLOOPS does; this depends on the application. These
two environment variables are global to all pthread mutexes; there is no
interface to set them per mutex. The default values are zero, which means
spinning is turned off by default.
2007-10-30 05:57:37 +00:00
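
A minimal sketch of the spin / yield / sleep ladder the two variables control;
the counts, lock representation, and kernel-sleep call are illustrative, not
the libthr implementation.

    #include <sched.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    void kernel_sleep(atomic_bool *lock);   /* hypothetical umtx wait */

    static int spinloops;    /* from LIBPTHREAD_SPINLOOPS, default 0  */
    static int yieldloops;   /* from LIBPTHREAD_YIELDLOOPS, default 0 */

    static bool
    try_lock(atomic_bool *lock)
    {
        bool expected = false;
        return atomic_compare_exchange_strong(lock, &expected, true);
    }

    static void
    adaptive_lock(atomic_bool *lock)
    {
        /* 1. Spin: cheapest, the thread stays on the CPU. */
        for (int i = 0; i < spinloops; i++)
            if (try_lock(lock))
                return;
        /* 2. Yield: give up the CPU without marking contention,
         * letting a runnable owner finish and unlock cheaply. */
        for (int i = 0; i < yieldloops; i++) {
            sched_yield();
            if (try_lock(lock))
                return;
        }
        /* 3. Sleep: last resort; the owner must wake us. */
        while (!try_lock(lock))
            kernel_sleep(lock);
    }
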
Kris Kennaway
2017a7cdfe Add a new "non-portable" mutex type, PTHREAD_MUTEX_ADAPTIVE_NP. This
is also implemented in glibc and is used by a number of existing
applications (mysql, firefox, etc).

This mutex type is a default mutex with the additional property that
it spins briefly when attempting to acquire a contested lock, doing
trylock operations in userland before entering the kernel to block if
eventually unsuccessful.

The expectation is that applications requesting this mutex type know
that the mutex is likely to be only held for very brief periods, so it
is faster to spin in userland and probably succeed in acquiring the
mutex, than to enter the kernel and sleep, only to be woken up almost
immediately.  This can help significantly in certain cases when
pthread mutexes are heavily contended and held for brief durations
(such as mysql).

Spin up to 200 times before entering the kernel, which represents only
a few microseconds on modern CPUs.  No performance degradation was observed
with this value, and it is sufficient to avoid a large drop in mysql
performance in the heavily contended pthread mutex case.

The libkse implementation is a NOP.

Reviewed by:      jeff
MFC after:        3 days
2007-10-29 21:01:47 +00:00
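
Requesting the new type goes through the standard mutex-attribute interface;
a brief usage sketch, assuming PTHREAD_MUTEX_ADAPTIVE_NP is visible via
<pthread.h>:

    #include <pthread.h>

    static pthread_mutex_t busy_lock;

    static int
    make_adaptive(void)
    {
        pthread_mutexattr_t attr;
        int error;

        pthread_mutexattr_init(&attr);
        /* Spin briefly in userland on contention before blocking
         * in the kernel. */
        error = pthread_mutexattr_settype(&attr,
            PTHREAD_MUTEX_ADAPTIVE_NP);
        if (error == 0)
            error = pthread_mutex_init(&busy_lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return (error);
    }
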
David Xu
00784f8b10 Back out the experimental adaptive spinning mutex from production use. 2007-05-09 08:39:33 +00:00
David Xu
03779e5c2b Insert mutex at tail if it has highest ceiling. 2007-01-05 03:57:11 +00:00
David Xu
da20a63dbb Oops, don't corrupt the list. 2007-01-05 03:33:47 +00:00
David Xu
5470bb56fc Check if the PP mutex is recursive; if we have already locked it, place the
mutex in the right order, sorted by priority ceiling.
2007-01-05 03:29:15 +00:00
David Xu
842a092b74 Check environment variable PTHREAD_ADAPTIVE_SPIN, if it is set, use
it as a default spin cycle count.
2006-12-20 04:43:34 +00:00
David Xu
8a8178c010 Create inline function _thr_umutex_trylock2 that tries only one atomic
operation; if that fails, we call the syscall directly. This saves
one atomic operation per lock contention.
2006-12-14 13:22:02 +00:00
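
A sketch of the idea: exactly one compare-and-swap in userland, then straight
to the kernel on failure; the names are illustrative, not the real
_thr_umutex_trylock2.

    #include <stdatomic.h>

    #define UMUTEX_UNOWNED 0u

    /* Hypothetical kernel fallback standing in for the umutex syscall. */
    int umutex_lock_syscall(atomic_uint *m);

    static inline int
    umutex_trylock2(atomic_uint *m, unsigned tid)
    {
        unsigned expected = UMUTEX_UNOWNED;

        /* Exactly one atomic attempt in userland... */
        if (atomic_compare_exchange_strong(m, &expected, tid))
            return (0);
        /* ...then straight into the kernel, saving the second atomic
         * operation a retry loop would cost on every contention. */
        return (umutex_lock_syscall(m));
    }
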
David Xu
58c7bab332 Move code calculating new inherited priority into single function. 2006-11-11 13:33:47 +00:00
David Xu
ddaf6689e3 Use return value of _thr_umutex_lock instead of using zero. 2006-09-08 09:29:14 +00:00
David Xu
8ab9d78b9d Use umutex APIs to implement pthread_mutex. Member pp_mutexq is added
to the pthread structure to keep track of locked PTHREAD_PRIO_PROTECT mutexes.
No real mutex code is changed; the mutex locking and unlocking code should
have the same performance as before.
2006-08-28 04:52:50 +00:00
David Xu
065dbdc130 Axe unused member field. 2006-08-08 05:04:43 +00:00
Xin LI
da84584390 Unexpand two TAILQ_FOREACH_SAFE cases.
Ok'ed by:	davidxu
2006-07-17 09:23:44 +00:00
David Xu
b971a73040 Remove unused member field m_queue. 2006-06-02 08:37:01 +00:00
David Xu
a97944597b Do not check the validity of the timeout if a mutex can be acquired
immediately. Completely drop a recursive mutex in pthread_cond_wait and
restore the recursion count after resumption. Reorganize code so that gcc
generates better code.
2006-04-08 13:24:44 +00:00
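
A sketch of the recursive-mutex handling in the condition wait, assuming a
recursion-count field; the structure and field names are illustrative.

    /* Illustrative fields only, not the libthr structures. */
    struct hypo_mutex {
        int m_count;                 /* recursion depth beyond one hold */
    };

    static void
    cond_wait_recursive(struct hypo_mutex *m)
    {
        int saved = m->m_count;      /* remember full recursion depth */

        m->m_count = 0;              /* reduce to a single hold ...   */
        /* ... fully unlock, sleep on the condition variable,
         * and re-acquire the mutex once ... */
        m->m_count = saved;          /* ... then restore the depth.   */
    }
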
David Xu
37a6356bbe WARNS level 4 cleanup. 2006-04-04 02:57:49 +00:00
David Xu
9ad4b64459 Remove priority mutex code because it does not work correctly. To make
it work, a turnstile-like mechanism supporting priority propagation and
other realtime scheduling options in the kernel would have to be made
available to userland mutexes; for the moment, I just want to keep
libthr a simple and efficient thread library.

Discussed with: deischen, julian
2006-03-27 23:50:21 +00:00
David Xu
7b8797d300 Reimplement mutex_init to get rid of compile warning. 2006-02-28 06:06:19 +00:00
David Xu
1c9db39d88 Eliminate unused code. 2006-01-16 05:33:48 +00:00
David Xu
add2da0db1 Enable the mutex inheritance code in mutex_fork; I forgot to turn it on.
While here, add some comments about process-shared mutexes.
2006-01-14 11:33:43 +00:00
David Xu
e6262545cf Let _mutex_cv_lock call the internal function mutex_lock_common. 2005-12-21 05:14:07 +00:00
David Xu
e6a9baa280 Remove unused _get_curthread() call. 2005-12-12 07:14:57 +00:00
Stefan Farfeleder
ad7c49168f - Prefix MUTEX_TYPE_MAX with PTHREAD_ to avoid namespace pollution.
- Remove the macros MUTEX_TYPE_FAST and MUTEX_TYPE_COUNTING_FAST.

OK'ed by:	deischen
2005-08-19 21:31:42 +00:00
David Xu
a091d823ad Import my recent 1:1 threading work. Improved features include:
 1. Fast simple-type mutexes.
 2. __thread TLS works.
 3. Asynchronous cancellation works (using signals).
 4. Thread synchronization is fully based on umtx; mainly, condition
    variables and other synchronization objects were rewritten to use
    umtx directly. Those objects could be shared between processes via
    shared memory, but that requires an ABI change which has not
    happened yet.
 5. The default stack size is increased to 1M on 32-bit platforms and
    2M on 64-bit platforms.
As a result, some mysql super-smack benchmarks show massively improved
performance.

Okayed by: jeff, mtm, rwatson, scottl
2005-04-02 01:20:00 +00:00
Mike Makonnen
2d79470f8a Remove vestiges of libthr's signal mangling past. This fixes the last
known problem with mysql on libthr: not being able to kill mysqld.
2004-09-22 18:51:16 +00:00
Mike Makonnen
ff9af45a01 SUSv3 says that the affected functions MAY FAIL if the
specified mutex is invalid. In spec parlance, 'MAY FAIL' means it's
up to the implementor. So, remove the check for NULL pointers for two
reasons:
	1. A mutex may be invalid without necessarily being NULL.
	2. If the pointer to the mutex is NULL core-dumping in the
	   vicinity of the problem is much much much better than failing
	   in some other part of the code (especially when the application
	   doesn't check the return value of the function that you oh so
	   helpfully set to EINVAL).
2004-09-22 16:53:23 +00:00
Mike Makonnen
0feabab576 o Add assertions to ensure that stuff that shouldn't happen is not happening.
o In the rwlock code: move a duplicated check inside an if..else to after
  the if...else clause.
o When initializing a static rwlock, move the initialization check
  inside the lock.
o In thr_setschedparam.c: when breaking out of the trylock...retry-if-busy
  loop, make sure to reset the mtx pointer to NULL if the mutex is no longer
  in a queue.
2004-07-30 17:13:00 +00:00
Marcel Moolenaar
cd28f17da2 Change the thread ID (thr_id_t) used for 1:1 threading from being a
pointer to the corresponding struct thread to the thread ID (lwpid_t)
assigned to that thread. The primary reason for this change is that
libthr now internally uses the same ID as the debugger and the kernel
when referring to a kernel thread. This allows us to implement
support for debugging without additional translations and/or mappings.

To preserve the ABI, the 1:1 threading syscalls, including the umtx
locking API have not been changed to work on a lwpid_t. Instead the
1:1 threading syscalls operate on long and the umtx locking API has
not been changed except for the contested bit. Previously this was
the least significant bit. Now it's the most significant bit. Since
the contested bit should not be tested by userland, this change is
not expected to be visible. Just to be sure, UMTX_CONTESTED has been
removed from <sys/umtx.h>.

Reviewed by: mtm@
ABI preservation tested on: i386, ia64
2004-07-02 00:40:07 +00:00
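
The contested-bit move can be pictured as a change of mask; illustrative
32-bit values, not the actual <sys/umtx.h> constants:

    /* Before: contested flag in the least significant bit. */
    #define UMTX_CONTESTED_OLD 0x00000001UL
    /* After: contested flag in the most significant bit, so the
     * remaining bits can carry the owner's lwpid_t unchanged:
     * owner == (word & ~UMTX_CONTESTED_NEW). */
    #define UMTX_CONTESTED_NEW 0x80000000UL
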
Mike Makonnen
4cd18a22d5 Make libthr async-signal-safe without costly signal masking. The guidelines I
followed are: only 3 functions (pthread_cancel, pthread_setcancelstate,
pthread_setcanceltype) are required to be async-signal-safe by POSIX. None of
the rest of the pthread API is required to be async-signal-safe. This means
that only the three mentioned functions are safe to use from inside
signal handlers.
However, there are certain system/libc calls that are
cancellation points and that a caller may call from within a signal handler;
since they are cancellation points, calls have to be made into libthr
to test for cancellation and exit the thread if necessary. So, the
cancellation test and thread exit code paths must be async-signal-safe
as well. A summary of the changes follows:

o Almost all of the code paths that masked signals, as well as locking the
  pthread structure now lock only the pthread structure.
o Signals are masked (and left that way) as soon as a thread enters
  pthread_exit().
o The active and dead threads locks now explicitly require that signals
  are masked.
o Access to the isdead field of the pthread structure is protected by both
  the active and dead list locks for writing. Either one is sufficient for
  reading.
o The thread state and type fields have been combined into one three-state
  switch to make it easier to read without requiring a lock. It doesn't need
  a lock for writing (and therefore for reading either) because only the
  current thread can write to it and it is an integer value.
o The thread state field of the pthread structure has been eliminated. It
  was an unnecessary field that mostly duplicated the flags field, but
  required additional locking that would make a lot more code paths require
  signal masking. Any truly unique values (such as PS_DEAD) have been
  reborn as separate members of the pthread structure.
o Since the mutex and condvar pthread functions are not async-signal-safe
  there is no need to muck about with the wait queues when handling
  a signal ...
o ... which also removes the need for wrapping signal handlers and sigaction(2).
o The condvar and mutex async-cancellation code had to be revised as a result
  of some of these changes, which resulted in semi-unrelated changes which
  would have been difficult to work on as a separate commit, so they are
  included as well.

The only part of the changes I am worried about is related to locking for
the pthread joining fields. But, I will take a closer look at them once this
mega-patch is committed.
2004-05-20 12:06:16 +00:00
Mike Makonnen
7295f69667 q§ 2004-05-20 11:55:04 +00:00
Mike Makonnen
0c3a942692 The thread suspend function now returns ETIMEDOUT, not EAGAIN. 2004-03-29 09:35:07 +00:00