Commit Graph

61 Commits

Author SHA1 Message Date
David Xu
42ce445fed Backout experimental adaptive-spin umtx code. 2007-06-06 07:35:08 +00:00
Jeff Roberson
3c2e44364e Commit 8/14 of sched_lock decomposition.
- Use a global umtx spinlock to protect the sleep queues now that there
  is no global scheduler lock.
- Use thread_lock() to protect thread state.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:50 +00:00
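A kernel-context sketch of the locking pattern this commit describes; the helper function is hypothetical, while thread_lock()/thread_unlock() are the real interfaces:

    /* sketch: per-thread locks now guard thread state instead of the
     * old global sched_lock */
    #include <sys/param.h>
    #include <sys/proc.h>

    static void
    inspect_thread_state(struct thread *td)     /* hypothetical helper */
    {
            thread_lock(td);
            /* ... read or modify td's scheduling/sleep state here ... */
            thread_unlock(td);
    }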
Robert Watson
873fbcd776 Further system call comment cleanup:
- Also remove "MP SAFE" after the prior "MPSAFE" pass. (suggested by bde)
- Remove extra blank lines in some cases.
- Add extra blank lines in some cases.
- Remove no-op comments consisting solely of the function name, the word
  "syscall", or the system call name.
- Add punctuation.
- Re-wrap some comments.
2007-03-05 13:10:58 +00:00
David Xu
4e32b7b3cc Add an lwpid field to the per-cpu structure; it holds the id of the
thread currently running on each cpu. This allows us to add in-kernel adaptive
spinning for user-level mutexes. While spinning in user space is possible, it
can hardly be implemented efficiently without wasting cpu cycles unless the
kernel exports correct thread running state, and exporting that state is
unlikely to happen soon, since interfaces would have to be designed and
stabilized. This implementation is transparent to user space and can be
disabled dynamically. With this change, a mutex ping-pong program's
performance improves massively on SMP machines, and the mysql super-smack
select benchmark improves by about 7% on an Intel dual dual-core2 Xeon
machine. This indicates that on systems with many cpus and low system-call
overhead (athlon64, opteron, and core-2 are known to be fast), adaptive
spinning does help performance.

Added sysctls:
    kern.threads.umtx_dflt_spins
        if the sysctl value is non-zero, a zero umutex.m_spincount will
        cause the sysctl value to be used as the spin cycle count.
    kern.threads.umtx_max_spins
        the sysctl sets the upper limit of the spin cycle count.

Tested on: Athlon64 X2 3800+, Dual Xeon 5130
2006-12-20 04:40:39 +00:00
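A minimal userland sketch that inspects the two sysctls named above; the sysctl names come from the commit, the program itself is illustrative:

    /* sketch: query the adaptive-spin tunables from userland */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int dflt, max;
            size_t len;

            len = sizeof(dflt);
            if (sysctlbyname("kern.threads.umtx_dflt_spins", &dflt, &len,
                NULL, 0) == 0)
                    printf("default spin count: %d\n", dflt);
            len = sizeof(max);
            if (sysctlbyname("kern.threads.umtx_max_spins", &max, &len,
                NULL, 0) == 0)
                    printf("maximum spin count: %d\n", max);
            return (0);
    }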
Julian Elischer
ad1e7d285a Threading cleanup.. part 2 of several.
Make part of John Birrell's KSE patch permanent..
Specifically, remove:
All references to the ksegrp structure. This feature was
never fully utilised and made things overly complicated.
All code in the scheduler that tried to make threaded programs
fair to unthreaded programs.  Libpthread processes will already
do this to some extent and libthr processes already disable it.

Also:
Since this makes such a big change to the scheduler(s), take the opportunity
to rename some structures and elements that had to be moved anyhow.
This makes the code a lot more readable.

The ULE scheduler compiles again but I have no idea if it works.

The 4bsd scheduler still requires a little cleaning, and some functions that now do
ALMOST nothing will go away, but I thought I'd do that as a separate commit.

Tested by David Xu, and Dan Eischen using libthr and libpthread.
2006-12-06 06:34:57 +00:00
David Xu
745fbd3a72 If a thread blocked on a userland condition variable is
pthread_cancel()ed, it is expected not to consume a pthread_cond_signal();
therefore, we use thr_wake() to set a flag that tells a thread calling
do_cv_wait() in the umtx code not to block on the condition variable.
The thread library is expected to call thr_wake() on itself from its
SIGCANCEL handler once a thread detects that it is inside
pthread_cond_wait().
2006-12-04 14:15:12 +00:00
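A hedged sketch of what the commit expects the thread library's SIGCANCEL handler to do; the handler name is hypothetical, while thr_self() and thr_wake() are the real syscalls:

    /* sketch: the cancel handler wakes itself so a pending do_cv_wait()
     * will not block in the kernel */
    #include <sys/types.h>
    #include <sys/thr.h>

    static void
    sigcancel_handler(int sig)                  /* hypothetical name */
    {
            long tid;

            (void)sig;
            thr_self(&tid);
            thr_wake(tid);  /* marks the "do not block in do_cv_wait" flag */
    }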
David Xu
a6abdf322d Introduce a userspace condition variable. Since POSIX priority
mutexes are already implemented, it is time to introduce this: umutex and
ucond can now be used together to implement pthread's condition
wait/signal.
2006-12-03 01:49:22 +00:00
Robert Watson
acd3428b7d Sweep kernel replacing suser(9) calls with priv(9) calls, assigning
specific privilege names to a broad range of privileges.  These may
require some future tweaking.

Sponsored by:           nCircle Network Security, Inc.
Obtained from:          TrustedBSD Project
Discussed on:           arch@
Reviewed (at least in part) by: mlaier, jmg, pjd, bde, ceri,
                        Alex Lyashkov <umka at sevcity dot net>,
                        Skip Ford <skip dot ford at verizon dot net>,
                        Antoine Brodin <antoine dot brodin at laposte dot net>
2006-11-06 13:42:10 +00:00
John Birrell
8460a577a4 Make KSE a kernel option, turned on by default in all GENERIC
kernel configs except sun4v (which doesn't process signals properly
with KSE).

Reviewed by:	davidxu@
2006-10-26 21:42:22 +00:00
David Xu
4c9b02c253 Optimize umtx_lock_pi() a bit by moving some heavy code out of the loop,
and add a fast path for the case where a umtx_pi can be allocated without blocking.
2006-10-26 09:33:34 +00:00
David Xu
7c24ae418a In order to eliminate a branch, convert opcode to unsigned integer. 2006-10-25 06:38:46 +00:00
David Xu
91d0b4d615 Eliminate an unnecessary `if' statement. 2006-10-25 06:28:23 +00:00
David Xu
5f641fc0fb o Add the volatile keyword to the user mutex owner field.
o Fix a type-consistency problem by using type long for the old
  umtx and wait channel.
o Rename casuptr to casuword.
2006-10-17 02:24:47 +00:00
David Xu
ae7d8a6766 Implement 32-bit umtx_lock and umtx_unlock system calls. These two system
calls are not used by libthr in RELENG_6 and HEAD; they are only used by
the libthr in RELENG_5. The _umtx_op system call can absorb incremental
dirty work better than these two system calls, without having to introduce
new system calls or throw old ones away as things evolve.
2006-10-06 08:22:08 +00:00
David Xu
e58b17ea53 Fix umtx command order error for freebsd 32bit. 2006-09-22 14:59:10 +00:00
David Xu
1eec02f538 Add umtx support for 32-bit processes on AMD64 machines. 2006-09-22 00:52:54 +00:00
David Xu
654d6b2e0b Merge the code of do_lock_normal, do_lock_pi, and do_lock_pp into
a single function, do_lock_umutex.
2006-09-05 12:01:09 +00:00
David Xu
295ce693b9 Check whether the caller is the root user in do_unlock_pp. 2006-09-03 00:07:37 +00:00
David Xu
81273e0632 Make sure we get the new m_owner value if we cannot unlock it in the
uncontested case. Reorder statements in do_unlock_umutex.
2006-09-02 02:41:33 +00:00
David Xu
8a156460bf Reorder some statements. Fix a typo and remove stale comments. 2006-08-30 23:59:45 +00:00
David Xu
a324b5ecd3 Update comments about interrupted mutex locking. 2006-08-28 07:09:27 +00:00
David Xu
d10183d94d This is the initial version of POSIX priority mutex support. A new userland
mutex structure is added:
struct umutex {
        __lwpid_t       m_owner;
        uint32_t        m_flags;
        uint32_t        m_ceilings[2];
        uint32_t        m_spare[4];
};
The m_owner field holds the owner's thread id. In the non-contested case,
userland can simply use atomic_cmpset_int to lock the mutex; if the mutex
is contested, the high-order bit is set and locking and unlocking must go
through a kernel syscall. The flag UMUTEX_PRIO_INHERIT marks pthread's
PTHREAD_PRIO_INHERIT mutex, for which the kernel does priority propagation
when contention happens. The flag UMUTEX_PRIO_PROTECT marks pthread's
PTHREAD_PRIO_PROTECT mutex; userland should initialize m_owner to the
contested state UMUTEX_CONTESTED so that atomic_cmpset_int fails and the
kernel syscall is invoked to do the locking. This is because, for such a
mutex, the kernel must always boost the thread's priority before it can
lock the mutex. m_ceilings is used by PTHREAD_PRIO_PROTECT mutexes: the
first element is used to boost the thread's priority when it locks the
mutex, and the second is used when the mutex is unlocked. The
PTHREAD_PRIO_PROTECT mutex's linked list is kept in userland, and
m_ceilings[1] is managed by the thread library, so the kernel need not
allocate memory to keep the list. When such a mutex is unlocked, the
kernel resets m_owner to UMUTEX_CONTESTED.
The flag USYNC_PROCESS_SHARED indicates whether the synchronization object
is process shared; if the flag is not set, a vm_map_lookup() call is saved.

The umtx chain is still used as a sleep queue. When a thread blocks on a
PTHREAD_PRIO_INHERIT mutex, a umtx_pi is allocated to support priority
propagation; it is dynamically allocated and reference counted. It is not
optimized but works well in my tests. While the umtx chain has its own
locking protocol, the priority propagation paths are all protected by
sched_lock, because the priority propagation function is called from the
scheduler with sched_lock held.

No visible performance degradation was found with these changes. Some
parameter names in the _umtx_op syscall were renamed.
2006-08-28 04:24:51 +00:00
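A hedged userland sketch of the uncontested fast path described above. The struct layout mirrors the commit; the constant values, the helper name, and the use of atomic_cmpset_acq_32 in place of atomic_cmpset_int are assumptions for illustration:

    /* sketch: CAS our thread id into m_owner; fall back to the kernel
     * when the mutex is contested (high-order bit) or was initialized
     * as contested */
    #include <sys/types.h>
    #include <stdint.h>
    #include <machine/atomic.h>

    #define UMUTEX_UNOWNED          0               /* assumed value */
    #define UMUTEX_OWNER_CONTESTED  0x80000000U     /* assumed: MSB flag */

    struct umutex_sketch {                  /* layout from the commit;
                                             * owner kept as a 32-bit word */
            volatile uint32_t m_owner;
            uint32_t          m_flags;
            uint32_t          m_ceilings[2];
            uint32_t          m_spare[4];
    };

    static int
    umutex_trylock_fast(struct umutex_sketch *m, uint32_t tid)
    {
            if (atomic_cmpset_acq_32(&m->m_owner, UMUTEX_UNOWNED, tid))
                    return (0);     /* locked without entering the kernel */
            return (-1);            /* contested: caller must use the syscall */
    }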
David Xu
3db720fdce Add user priority loaning code to support priority propagation for
1:1 threading's POSIX priority mutexes. The code is a no-op until the
priority-aware umtx code is committed.
2006-08-25 06:12:53 +00:00
David Xu
7b8d821268 Move the TDF_UMTXQ flag into the umtxq structure; this eliminates the need
for the scheduler lock in some umtx code.
2006-05-18 08:43:46 +00:00
David Xu
005efcdb0e Use wakeup_one to avoid thundering herd.
Tested by: kris
2006-05-09 13:00:46 +00:00
John Baldwin
0f180a7cce Change msleep() and tsleep() to not alter the calling thread's priority
if the specified priority is zero.  This avoids a race where the calling
thread could read a snapshot of its current priority, then a different
thread could change the first thread's priority, then the original thread
would call sched_prio() inside msleep() undoing the change made by the
second thread.  I used a priority of zero as no thread that calls msleep()
or tsleep() should be specifying a priority of zero anyway.

The various places that passed 'curthread->td_priority' or some variant
as the priority now pass 0.
2006-04-17 18:20:38 +00:00
David Xu
a99f7ca21e Axe unused code. 2006-02-04 06:36:39 +00:00
David Xu
4938faa635 Do umtx_wake at the userland thread exit address, so that other userland
threads can wait for a thread to exit and safely assume that it has left
userland and is no longer using its userland stack. This is necessary for
pthread_join when a thread is waiting for another thread with a
user-supplied stack to exit: after pthread_join returns, that userland
stack can be reused for other purposes. Without this change, the joiner
thread has to spin at the address to ensure the thread has really exited.
2005-10-26 06:55:46 +00:00
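A hedged sketch of the joiner-side pattern this wake-up enables, expressed with the _umtx_op(obj, op, val, uaddr, uaddr2) wait operation; the helper and the tid-word layout are assumptions, not libthr's actual code:

    /* sketch: sleep on the exiting thread's tid word; the kernel's
     * umtx_wake at thread exit releases the waiter */
    #include <sys/types.h>
    #include <sys/umtx.h>

    static void
    join_wait(long *tidp)                       /* hypothetical helper */
    {
            long tid;

            while ((tid = *tidp) != 0)
                    _umtx_op(tidp, UMTX_OP_WAIT, (u_long)tid, NULL, NULL);
    }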
David Xu
bc8e6d817d Allocate umtx_q from the heap instead of the stack; this avoids
a page fault panic in the kernel under heavy swapping.
2005-03-05 09:15:03 +00:00
David Xu
a2cc61fa6e Revert my previous errno hack. That is certainly an issue, and always
has been, but the system call itself returns errno in a register, so the
problem is really a function of libc, not the system call.

Discussed with: Matthew Dillon <dillon@apollo.backplane.com>
2005-01-18 13:53:10 +00:00
David Xu
b7be40d612 Make the umtx timeout relative so userland can select a different clock
type, e.g., CLOCK_REALTIME or CLOCK_MONOTONIC.
Merge umtx_wait and umtx_timedwait into a single function.
2005-01-14 13:38:15 +00:00
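A small userland sketch of what a relative timeout implies for callers: the library converts an absolute deadline on its chosen clock into the relative interval handed to the kernel (the helper name is hypothetical):

    /* sketch: absolute deadline on any clock -> relative timeout */
    #include <time.h>

    static struct timespec
    abstime_to_rel(clockid_t clockid, const struct timespec *abstime)
    {
            struct timespec now, rel;

            clock_gettime(clockid, &now);
            rel.tv_sec = abstime->tv_sec - now.tv_sec;
            rel.tv_nsec = abstime->tv_nsec - now.tv_nsec;
            if (rel.tv_nsec < 0) {
                    rel.tv_sec--;
                    rel.tv_nsec += 1000000000L;
            }
            if (rel.tv_sec < 0) {
                    rel.tv_sec = 0;
                    rel.tv_nsec = 0;        /* deadline already passed */
            }
            return (rel);
    }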
Poul-Henning Kamp
3963baec64 Comment out debugging printf which doesn't compile on amd64. 2005-01-12 10:11:31 +00:00
David Xu
333d4875cd Let _umtx_op return error codes directly rather than through errno, because
errno can potentially be tampered with by a nested signal handler.
Now all error codes are returned as negative values; positive values are
reserved for future expansion.
2005-01-12 05:55:52 +00:00
David Xu
3e380f0d3d Break out of the loop earlier if it is not a timeout. 2005-01-08 06:57:46 +00:00
Warner Losh
9454b2d864 /* -> /*- for copyright notices, minor format tweaks as necessary 2005-01-06 23:35:40 +00:00
David Xu
476e1d077e Return ETIMEDOUT when a thread times out, since POSIX thread
APIs expect ETIMEDOUT, not EAGAIN; this simplifies userland code a
bit.
2005-01-06 02:08:34 +00:00
David Xu
cc1000ac5b Make umtx_wait and umtx_wake work more like Linux futexes; this is
more general than before. It also lets me implement cancellation points
in the thread library. In theory, umtx_lock and umtx_unlock can also
be implemented using umtx_wait and umtx_wake, with all atomic operations
done in userland without the kernel's casuptr() function.
2004-12-30 02:56:17 +00:00
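A hedged sketch of the futex-style pair this commit describes, using the _umtx_op(obj, op, val, uaddr, uaddr2) interface; the wrapper names are hypothetical and error handling is elided:

    /* sketch: sleep while a word still holds an expected value,
     * and wake up to 'nwake' sleepers on that word */
    #include <sys/types.h>
    #include <sys/umtx.h>

    static void
    wait_on_word(u_long *addr, u_long expected)
    {
            /* Returns immediately if *addr no longer equals 'expected'. */
            _umtx_op(addr, UMTX_OP_WAIT, expected, NULL, NULL);
    }

    static void
    wake_word(u_long *addr, int nwake)
    {
            _umtx_op(addr, UMTX_OP_WAKE, (u_long)nwake, NULL, NULL);
    }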
David Xu
c180db2bce Make _umtx_op() a more general interface: the final parameter needn't be a
timespec pointer, and every parameter is interpreted according to its opcode.
2004-12-25 13:02:50 +00:00
David Xu
8b37fbabb4 1. Introduce umtx_owner to get the owner of a umtx.
2. Add const qualifiers to umtx_timedlock and umtx_timedwait.
3. Add missing brackets in umtx do_unlock_and_wait.
2004-12-25 12:49:35 +00:00
David Xu
3dd213f160 Add umtxq_lock/unlock around umtx_signal, fix compilation of debug kernels,
and let umtx_lock return EINTR instead of ERESTART; this gives
userland a chance to back off in the mutex lock code when needed.
2004-12-24 11:59:20 +00:00
David Xu
a08c214a72 1. Fix a race condition between umtx lock and unlock; heavy testing
on SMP can expose the bug.
2. Let umtx_wake return the number of threads that have been woken.
2004-12-24 11:30:55 +00:00
David Xu
839f811c6a 1. msleep returns EWOULDBLOCK, not ETIMEDOUT; use EWOULDBLOCK instead.
2. Eliminate a possible lock leak in the timed wait loop.
2004-12-18 13:43:16 +00:00
David Xu
50586e8b6b 1. Make umtx sharable between processes: two or more processes call mmap()
   to create a shared space and then initialize a umtx in it; after that,
   threads in different processes can use the umtx the same way threads in
   the same process do.
2. Introduce a new syscall, _umtx_op, to support timed lock and condition
   variable semantics. The original umtx_lock and umtx_unlock inline
   functions are now reimplemented using _umtx_op, and _umtx_op can use an
   arbitrary id, not just a thread id.
2004-12-18 12:52:44 +00:00
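A hedged sketch of the sharing setup point 1 describes, reduced to a bare lock word in an anonymous shared mapping; the helper is hypothetical, and a real umtx structure from <sys/umtx.h> would live in the mapping instead of a plain u_long:

    /* sketch: create an anonymous shared mapping to hold the lock word;
     * after fork(), parent and child operate on the same word */
    #include <sys/mman.h>
    #include <sys/types.h>

    static u_long *
    make_shared_lock_word(void)
    {
            u_long *word;

            word = mmap(NULL, sizeof(*word), PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANON, -1, 0);
            if (word == MAP_FAILED)
                    return (NULL);
            *word = 0;      /* unowned; a real umtx would be initialized
                             * here instead of a bare zero */
            return (word);
    }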
David Xu
d111b34081 Forgot to inline umtxq_unlock. 2004-11-30 12:18:53 +00:00
David Xu
3f76af0f4a 1. Use a per-chain mutex instead of a global mutex to reduce
lock collisions.
2. Fix two race conditions. One is between _umtx_unlock and signal
   delivery: after a thread is marked TDF_UMTXWAKEUP by _umtx_unlock, a
   signal delivered to the thread can cause msleep to return EINTR, and
   the thread breaks out of its loop, so umtx ownership is not transferred
   to it. The other is in _umtx_unlock itself: when the function sets the
   umtx to the UMTX_UNOWNED state, a new thread can come in and lock the
   umtx, and the function's subsequent attempt to set the contested bit
   fails. Although the function wakes a blocked thread, if that thread
   breaks out of its loop because of a signal, no contested bit is ever
   set.
2004-11-30 12:02:53 +00:00
Mike Makonnen
c21e3b38bd writers must hold both sched_lock and the process lock; therefore, readers
need only obtain the process lock.
2004-07-12 15:28:31 +00:00
Marcel Moolenaar
cd28f17da2 Change the thread ID (thr_id_t) used for 1:1 threading from being a
pointer to the corresponding struct thread to the thread ID (lwpid_t)
assigned to that thread. The primary reason for this change is that
libthr now internally uses the same ID as the debugger and the kernel
when referencing a kernel thread. This allows us to implement the
support for debugging without additional translations and/or mappings.

To preserve the ABI, the 1:1 threading syscalls, including the umtx
locking API have not been changed to work on a lwpid_t. Instead the
1:1 threading syscalls operate on long and the umtx locking API has
not been changed except for the contested bit. Previously this was
the least significant bit. Now it's the most significant bit. Since
the contested bit should not be tested by userland, this change is
not expected to be visible. Just to be sure, UMTX_CONTESTED has been
removed from <sys/umtx.h>.

Reviewed by: mtm@
ABI preservation tested on: i386, ia64
2004-07-02 00:40:07 +00:00
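An illustration of the contested-bit move the commit describes, on a long-sized lock word; the macro names and values are illustrative, not the historical definitions:

    /* sketch: the contested flag moves from the LSB to the MSB */
    #include <limits.h>

    #define UMTX_CONTESTED_OLD  0x1L        /* old: least significant bit */
    #define UMTX_CONTESTED_NEW  LONG_MIN    /* new: most significant bit */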
Mike Makonnen
0af67a2ef9 Use the proc lock to sleep on a libthr umtx. 2004-03-27 14:32:03 +00:00
Tim J. Robbins
f05a427aa6 Return EINVAL if the contested bit is not set on the umtx passed to
_umtx_unlock() instead of firing a KASSERT.
2003-09-07 11:14:52 +00:00
Peter Wemm
80611144e4 Initialize 'blocked' to NULL. I think this was a real problem, but I
am not sure about that.  The lack of -Werror and the inline noise hid
this for a while.
2003-07-23 20:29:13 +00:00