336 Commits

Author SHA1 Message Date
davidxu
9acb5351ab The SYSTEM_SCOPE_ONLY flag is no longer needed; system scope is the only mode libthr
supports.
2008-01-18 04:29:36 +00:00
davidxu
70ef09a695 sem_post() is required to return -1 on error. 2008-01-07 02:26:29 +00:00
davidxu
cbdadd6e3e call underscore version of pthread_cleanup_pop instead. 2007-12-20 04:40:12 +00:00
davidxu
cd0acadf51 Remove vfork() overloading, it is no longer needed. 2007-12-20 04:32:28 +00:00
davidxu
42e5c27a87 Add function prototypes. 2007-12-17 02:53:11 +00:00
davidxu
5340a5f502 1. Add function pthread_mutex_setspinloops_np to tune a mutex's spin
loop count.
2. Add function pthread_mutex_setyieldloops_np to tune a mutex's yield
   loop count.
3. Make environment variables PTHREAD_SPINLOOPS and PTHREAD_YIELDLOOPS
   apply only to PTHREAD_MUTEX_ADAPTIVE_NP mutexes.
2007-12-14 06:25:57 +00:00
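The commit above exposes per-mutex tuning knobs. A minimal sketch of how an application might use them, assuming the functions are declared in <pthread_np.h> and take a pthread_mutex_t pointer plus an int count (check pthread_np(3) on the target system before relying on this):

#include <pthread.h>
#include <pthread_np.h>
#include <stdio.h>

int
main(void)
{
	pthread_mutex_t m;

	pthread_mutex_init(&m, NULL);

	/* Spin up to 200 times, then sched_yield() up to 100 times,
	 * before finally sleeping in the kernel (values are arbitrary). */
	if (pthread_mutex_setspinloops_np(&m, 200) != 0)
		fprintf(stderr, "setspinloops_np failed\n");
	if (pthread_mutex_setyieldloops_np(&m, 100) != 0)
		fprintf(stderr, "setyieldloops_np failed\n");

	/* ... lock/unlock the mutex from multiple threads as usual ... */

	pthread_mutex_destroy(&m);
	return (0);
}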
davidxu
50439b0991 Enclose all code for the ENQUEUE_MUTEX macro in a do/while statement, and
add missing brackets.

MFC: after 1 day
2007-12-11 08:00:58 +00:00
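For readers unfamiliar with the idiom, wrapping a multi-statement macro in do { ... } while (0) makes it behave like a single statement, so it composes safely with if/else. A generic illustration (not the actual ENQUEUE_MUTEX definition from libthr):

/* Hypothetical multi-statement macro, wrapped so it acts as one statement. */
#define ENQUEUE_EXAMPLE(head, elem)		\
do {						\
	(elem)->next = (head);			\
	(head) = (elem);			\
} while (0)

/*
 * Without the do { } while (0) wrapper, only the first statement of the
 * macro body would be guarded by the if below, and the trailing semicolon
 * would break the else branch:
 *
 *	if (need_enqueue)
 *		ENQUEUE_EXAMPLE(head, elem);
 *	else
 *		drop(elem);
 */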
jasone
600513aa8d Fix pointer dereferencing problems in _pthread_mutex_init_calloc_cb() that
were obscured by pseudo-opaque pthreads API pointer casting.
2007-11-28 00:16:24 +00:00
jasone
21bb948195 Add _pthread_mutex_init_calloc_cb() to libthr and libkse, so that malloc(3)
(part of libc) can use pthreads mutexes without causing infinite recursion
during initialization.
2007-11-27 03:16:44 +00:00
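A hedged sketch of how an allocator can use this hook to initialize its internal lock without recursing into malloc(3); the _pthread_mutex_init_calloc_cb() prototype shown is an assumption based on how jemalloc consumes it, so verify against the libthr sources:

#include <pthread.h>
#include <stddef.h>

/* Assumed prototype; confirm against libthr before relying on it. */
int	_pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
	    void *(calloc_cb)(size_t, size_t));

/* Tiny allocator-internal arena, so lock setup never calls malloc(3). */
static char	base_pool[4096];
static size_t	base_used;

static void *
base_calloc(size_t number, size_t size)
{
	size_t	total = number * size;
	void	*ret;

	if (total > sizeof(base_pool) - base_used)
		return (NULL);
	ret = base_pool + base_used;	/* static storage is already zeroed */
	base_used += total;
	return (ret);
}

static pthread_mutex_t	malloc_mutex;

static int
malloc_mutex_init(void)
{
	/* Any storage the mutex needs comes from base_calloc(), not malloc(). */
	return (_pthread_mutex_init_calloc_cb(&malloc_mutex, base_calloc));
}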
davidxu
e24c6a3904 Simplify code, fix a thread cancellation bug in sem_wait and sem_timedwait. 2007-11-23 05:42:52 +00:00
davidxu
df28d4b72f Reuse the nwaiter member field to record the number of waiters; in sem_post(),
this should reduce the chance of having to do a syscall when the semaphore
has no waiters.
2007-11-21 06:01:02 +00:00
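The idea behind the commit above, as a generic sketch: keep a count of sleeping waiters and only make the wake-up syscall when that count is non-zero. The struct layout and wake_one_waiter() are hypothetical, not libthr's actual code:

#include <stdatomic.h>

struct toy_sem {
	atomic_uint	value;		/* semaphore count */
	atomic_uint	nwaiters;	/* threads sleeping in the kernel */
};

/* Stand-in for the kernel wake-up primitive (e.g. an _umtx_op wake). */
static void
wake_one_waiter(struct toy_sem *sem)
{
	(void)sem;	/* real code would enter the kernel here */
}

void
toy_sem_post(struct toy_sem *sem)
{
	atomic_fetch_add(&sem->value, 1);

	/* Skip the syscall entirely when nobody is waiting. */
	if (atomic_load(&sem->nwaiters) > 0)
		wake_one_waiter(sem);
}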
davidxu
6a4c1655fd Remove warning level and aliasing restrictions. 2007-11-21 05:29:57 +00:00
davidxu
a0922e4b91 Convert ceiling type to unsigned integer before comparing, fix compiler
warnings.
2007-11-21 05:25:27 +00:00
davidxu
e90ed63757 Add some function prototypes. 2007-11-21 05:23:54 +00:00
davidxu
c63688968a Remove the umtx_t definition and use type long directly; add wrapper function
_thr_umtx_wait_uint() for the umtx operation UMTX_OP_WAIT_UINT and use the
function in semaphore operations. This fixes compiler warnings.
2007-11-21 05:21:58 +00:00
jb
5582e69034 These are the things that the tinderbox has problems with because it
doesn't use the default CFLAGS which contain -fno-strict-aliasing.

Until the code is cleaned up, just add -fno-strict-aliasing to the
CFLAGS of these for the tinderboxes' sake, allowing the rest of the
tree to have -Werror enabled again.
2007-11-20 02:07:30 +00:00
marius
af79ab51ec In _pthread_key_create() ensure that libthr is initialized. This
fixes a NULL dereference of curthread when libstdc++ initializes
the exception handling globals on archs where we can't yet use GNU TLS
due to lack of support in binutils 2.15 (i.e. arm and sparc64),
thus making threaded C++ programs compiled with GCC 4.2.1 work
again on these archs.

Reviewed by:	davidxu
MFC after:	3 days
2007-11-06 21:50:43 +00:00
davidxu
c01b764192 Avoid doing adaptive spinning for priority-protected mutexes; the current
implementation always locks in the kernel.
2007-10-31 01:50:48 +00:00
davidxu
674cdbbcee Don't do adaptive spinning when running on a UP kernel. 2007-10-31 01:44:50 +00:00
davidxu
e199852bb6 Restore revision 1.55, kris's adaptive mutex type. 2007-10-31 01:37:13 +00:00
kris
9e91eb96b8 Adaptive mutexes should have the same deadlock detection properties that
default (errorcheck) mutexes do.

Noticed by:          davidxu
2007-10-30 09:24:23 +00:00
davidxu
97a20b1db7 Add my recent work on adaptive spin mutex code. Use two environment variables
to tune pthread mutex performance:
1. LIBPTHREAD_SPINLOOPS
	If a pthread mutex is being locked by another thread, this environment
	variable sets the total number of spin loops before the current thread
	sleeps in the kernel; this saves syscall overhead if the mutex will be
	unlocked very soon (well-written application code).
2. LIBPTHREAD_YIELDLOOPS
	If a pthread mutex is being locked by other threads, this environment
	variable sets the total number of sched_yield() loops before the current
	thread sleeps in the kernel. If a pthread mutex is locked, the current
	thread gives up the CPU but does not sleep in the kernel; this means the
	current thread does not set the contention bit in the mutex, but lets the
	lock owner run again if the owner is on the kernel's run queue. When the
	lock owner unlocks the mutex, it does not need to enter the kernel and do
	lots of work to resume mutex waiters; in some cases this saves a lot of
	syscall overhead for the mutex owner.

In practice, LIBPTHREAD_YIELDLOOPS can sometimes improve performance far more
than LIBPTHREAD_SPINLOOPS; this depends on the application. These two environment
variables are global to all pthread mutexes; there is no interface to set them
per mutex. The default values are zero, which means spinning is turned off
by default.
2007-10-30 05:57:37 +00:00
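The two variables describe a three-stage acquisition strategy: spin in userland, then yield the CPU, then (only as a last resort) sleep in the kernel. A self-contained toy sketch of that strategy, not libthr's actual mutex code, using a simple atomic flag in place of the real umtx-based lock:

#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>

static int spinloops = 200;	/* would come from LIBPTHREAD_SPINLOOPS */
static int yieldloops = 100;	/* would come from LIBPTHREAD_YIELDLOOPS */

struct toy_mutex {
	atomic_flag locked;	/* initialize with ATOMIC_FLAG_INIT */
};

static bool
trylock_fast(struct toy_mutex *m)
{
	/* One userland atomic attempt; no syscall. */
	return (!atomic_flag_test_and_set_explicit(&m->locked,
	    memory_order_acquire));
}

void
toy_mutex_lock(struct toy_mutex *m)
{
	int	i;

	for (i = 0; i < spinloops; i++)		/* stage 1: busy-spin */
		if (trylock_fast(m))
			return;

	for (i = 0; i < yieldloops; i++) {	/* stage 2: give up the CPU */
		if (trylock_fast(m))
			return;
		sched_yield();
	}

	for (;;) {				/* stage 3: last resort */
		if (trylock_fast(m))
			return;
		/* Real libthr would sleep in the kernel here (umtx wait). */
		sched_yield();
	}
}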
kris
bbfd76f872 Add a new "non-portable" mutex type, PTHREAD_MUTEX_ADAPTIVE_NP. This
is also implemented in glibc and is used by a number of existing
applications (mysql, firefox, etc).

This mutex type is a default mutex with the additional property that
it spins briefly when attempting to acquire a contested lock, doing
trylock operations in userland before entering the kernel to block if
eventually unsuccessful.

The expectation is that applications requesting this mutex type know
that the mutex is likely to be only held for very brief periods, so it
is faster to spin in userland and probably succeed in acquiring the
mutex, than to enter the kernel and sleep, only to be woken up almost
immediately.  This can help significantly in certain cases when
pthread mutexes are heavily contended and held for brief durations
(such as mysql).

Spin up to 200 times before entering the kernel, which represents only
a few microseconds on modern CPUs.  No performance degradation was observed with
this value and it is sufficient to avoid a large performance drop in
mysql performance in the heavily contended pthread mutex case.

The libkse implementation is a NOP.

Reviewed by:      jeff
MFC after:        3 days
2007-10-29 21:01:47 +00:00
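Application code opts into the new type through a mutex attribute. Since PTHREAD_MUTEX_ADAPTIVE_NP is a non-portable extension, it is worth guarding the request so the code still builds where the symbol is absent:

#include <pthread.h>
#include <stdio.h>

int
main(void)
{
	pthread_mutexattr_t	attr;
	pthread_mutex_t		m;

	pthread_mutexattr_init(&attr);
#ifdef PTHREAD_MUTEX_ADAPTIVE_NP
	if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP) != 0)
		fprintf(stderr, "adaptive mutexes not supported\n");
#endif
	pthread_mutex_init(&m, &attr);
	pthread_mutexattr_destroy(&attr);

	/* Briefly held, heavily contended critical sections benefit most. */
	pthread_mutex_lock(&m);
	/* ... short critical section ... */
	pthread_mutex_unlock(&m);

	pthread_mutex_destroy(&m);
	return (0);
}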
ru
8ea97e9ef7 - Stop calling libthr an alternative, as it's now the default
threading library.

- Now that libpthread is a symlink, it's no longer possible
  to link applications with libpthread and have libmap.conf(5)
  select the desired threading library; applications will be
  linked to the default threading library, libkse or libthr.
  Remove an obsolete paragraph.

- Mention that improvements can be seen compared to libkse.

Reviewed by:	deischen, davidxu
2007-10-22 10:13:38 +00:00
davidxu
5523a9bb34 Use macro THR_CLEANUP_PUSH/POP, they are cheaper than pthread_cleanup_push/pop. 2007-10-16 07:46:15 +00:00
davidxu
38b01da7d2 Reverse the logic of UP and SMP.
Submitted by: jasone
2007-10-16 07:36:02 +00:00
obrien
a1598920aa Tweak the handling of "WITHOUT_LIBPTHREAD". Also remove the accidental
treatment of 'LIBKSE' as an "old style" knob.

Submitted by:	ru
Approved by:	re(kensmith)
2007-10-09 23:31:11 +00:00
ru
08a6766d3a Always install libpthread.* symlinks if at least one of
the threading libraries is built.  This simplifies the
logic in makefiles that need to check if the pthreads
support is present.  It also fixes a bug where we would
build a threading library that we shouldn't have built:
for example, building with WITHOUT_LIBTHR and the default
value of DEFAULT_THREADING_LIB (libthr) would mistakenly
build the libthr library, but not install it.

Approved by:	re (kensmith)
2007-10-01 18:29:55 +00:00
davidxu
52bd1282cd Output error message to STDERR_FILENO.
Approved by: re (bmah)
2007-08-07 04:50:14 +00:00
davidxu
abf8c6f8db Set warning level to 2. 2007-06-08 02:21:13 +00:00
deischen
ff36458e08 Bump library versions in preparation for 7.0.
Ok'd by:	kan
2007-05-21 02:49:08 +00:00
ru
3be5d73f3d Fix a logic bug I re-introduced in my patch I sent to Daniel
that would cause the selected shared threading library to be
overwritten with its 32-bit version on amd64.

PR:		amd64/112509
2007-05-18 12:25:48 +00:00
deischen
7d5de3a8a3 Allow DEFAULT_THREAD_LIB to be set from /etc/src.conf.
Submitted by:	ru
2007-05-17 04:54:35 +00:00
deischen
bf3a79274d Enable symbol versioning by default. Use WITHOUT_SYMVER to disable it.
Warning, after symbol versioning is enabled, going back is not easy
(use WITHOUT_SYMVER at your own risk).

Change the default thread library to libthr.

There most likely still needs to be a version bump for at least the
thread libraries.  If necessary, this will happen later.
2007-05-13 14:12:40 +00:00
davidxu
d97c4f1e52 Back out experimental adaptive spinning mutex code; it is not ready for production use. 2007-05-09 08:39:33 +00:00
deischen
2a7306fdc5 Use C comments since we now preprocess these files with CPP. 2007-04-29 14:05:22 +00:00
davidxu
2dd85eafce If the thread whose name is being set is not the current thread, use the macros
THR_THREAD_LOCK and THR_THREAD_UNLOCK instead; this should fix the wrong
lock-level problem.

Bug reported by: ed dot maste at gmail dot com
2007-04-05 07:20:31 +00:00
imp
9109b1ceb8 Remove 3rd clause, renumber, ok per email 2007-01-12 07:26:21 +00:00
davidxu
9770c4c640 Insert mutex at tail if it has highest ceiling. 2007-01-05 03:57:11 +00:00
davidxu
dec2b546dd Oops, don't corrupt the list. 2007-01-05 03:33:47 +00:00
davidxu
190109deab Check if the PP mutex is recursive; if we have already locked it, place the
mutex in the right order, sorted by priority ceiling.
2007-01-05 03:29:15 +00:00
davidxu
d305995d66 get LIBPTHREAD_ADAPTIVE_SPIN early, so it can be used for some global
mutexes.
2006-12-20 05:05:44 +00:00
davidxu
e034ab54f2 Check the environment variable PTHREAD_ADAPTIVE_SPIN; if it is set, use
it as the default spin cycle count.
2006-12-20 04:43:34 +00:00
davidxu
ca27871833 - Remove variable _thr_scope_system, all threads are system scope.
- Rename _thr_smp_cpus to boolean variable _thr_is_smp.
- Define CPU_SPINWAIT macro for each arch, only X86 supports it.
2006-12-15 11:52:01 +00:00
davidxu
26cbb63b3f Create inline function _thr_umutex_trylock2 to try only one atomic
operation; if it fails, we call the syscall directly. This saves
one atomic operation per lock contention.
2006-12-14 13:22:02 +00:00
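The pattern described above, in a generic self-contained sketch: make exactly one compare-and-swap attempt in userland, and on failure fall through to the kernel path rather than retrying the atomic. The names here are illustrative, not the actual libthr/umtx interfaces:

#include <stdatomic.h>
#include <stdbool.h>

#define LOCK_UNOWNED	0
#define LOCK_OWNED	1

struct toy_umutex {
	_Atomic int owner;
};

/* Stand-in for the real kernel locking path (e.g. an _umtx_op syscall). */
static void
lock_in_kernel(struct toy_umutex *m)
{
	int expected;

	for (;;) {
		expected = LOCK_UNOWNED;
		if (atomic_compare_exchange_strong(&m->owner, &expected,
		    LOCK_OWNED))
			return;
		/* Real code would sleep in the kernel until woken. */
	}
}

static bool
trylock2(struct toy_umutex *m)
{
	int expected = LOCK_UNOWNED;

	/* Exactly one atomic attempt; no retry loop in userland. */
	return (atomic_compare_exchange_strong(&m->owner, &expected,
	    LOCK_OWNED));
}

void
toy_umutex_lock(struct toy_umutex *m)
{
	if (!trylock2(m))
		lock_in_kernel(m);
}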
davidxu
229aca4634 Correctly check failed syscall. 2006-12-12 05:26:39 +00:00
davidxu
fcda4340a4 Move the check for c_has_waiters into the low-level _thr_ucond_signal and
_thr_ucond_broadcast. Clear the condition variable pointer in the cancellation
info after returning from _thr_ucond_wait; since the kernel has already
dropped the internal lock, we don't need to unlock it again in the
cancellation handler.
2006-12-12 03:08:49 +00:00
davidxu
33168bd24f Test cancel_pending to save a thr_wake call in some special cases. 2006-12-06 00:15:35 +00:00
davidxu
b01e86cf0a _thr_ucond_wait drops the lock; we should pick it up again. 2006-12-05 23:46:11 +00:00
davidxu
3f73376426 c_has_waiters is lazily updated; temporarily disable the false-alarm
code.
2006-12-05 07:23:58 +00:00