Commit Graph

638 Commits

Author SHA1 Message Date
davidxu
c7be5e14dc Code clean up, remove unused MACROS and function prototypes. 2003-12-19 12:57:08 +00:00
deischen
1f8c5c54fd accept() returns a file descriptor when it succeeds, which is very
likely to be non-zero.  When leaving the cancellation point, check
the return value against -1 to see if cancellation should be
checked.  While I'm here, make the same change to connect() just
to be consistent.

Pointed out by: davidxu
2003-12-09 23:40:27 +00:00
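
A minimal C sketch of the cancellation-point wrapper pattern this commit (and the _thr_cancel_enter/_thr_cancel_leave commit below) describes; the helper prototypes are assumptions modeled on those names, not the actual libkse source:

    #include <sys/types.h>
    #include <sys/socket.h>

    struct pthread;
    extern struct pthread *_get_curthread(void);
    extern void _thr_cancel_enter(struct pthread *);
    extern void _thr_cancel_leave(struct pthread *, int check);
    extern int __sys_accept(int, struct sockaddr *, socklen_t *);

    int
    accept(int s, struct sockaddr *addr, socklen_t *addrlen)
    {
        struct pthread *curthread = _get_curthread();
        int ret;

        _thr_cancel_enter(curthread);
        ret = __sys_accept(s, addr, addrlen);
        /* A valid descriptor is very likely non-zero, so compare against -1
         * and only re-check cancellation on failure; a successful return is
         * never discarded by acting on a pending cancellation here. */
        _thr_cancel_leave(curthread, ret == -1);
        return (ret);
    }
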
deischen
ed86f0d4d5 Remove an unused struct definition. 2003-12-09 15:18:40 +00:00
deischen
212e86fbe9 Add cancellation points for accept() and connect(). 2003-12-09 15:16:27 +00:00
davidxu
7813286b21 Use a mutex instead of the low-level thread lock to implement spinlock; this
avoids blocking signals that could otherwise be handled.
2003-12-09 02:37:40 +00:00
davidxu
22c52834eb Rename _thr_enter_cancellation_point to _thr_cancel_enter, rename
_thr_leave_cancellation_point to _thr_cancel_leave, and add a parameter
to _thr_cancel_leave to indicate whether the cancellation point should be
checked.  This gives us the option of not checking the cancellation point
when a syscall returns successfully, to avoid any leaks; currently
creat(), open() and fcntl(F_DUPFD) do not check the cancellation point
after they return successfully.

Replace some members in structure kse with bit flags to save some
memory.

Conditionally compile THR_ASSERT to nothing if _PTHREAD_INVARIANTS is
not defined.

Inline some small functions in thr_cancel.c.

Use __predict_false in thr_kern.c for some code executed only once.

Reviewed by: deischen
2003-12-09 02:20:56 +00:00
davidxu
3f57a77355 More reliably check timeout for pthread_mutex_timedlock. 2003-12-09 00:52:28 +00:00
deischen
e2ed712394 Go back to using rev 1.18 where thread locks are used instead of KSE
locks for the [libc] spinlock implementation.  This was previously backed
out because it exposed a bug in the ia64 implementation.

OK'd by:	marcel
2003-12-08 13:33:20 +00:00
marcel
4b6eafd82f Simplify the contexts created by the kernel and remove the related
flags. We now create asynchronous contexts or syscall contexts only.
Syscall contexts differ from the minimal ABI dictated contexts by
having the scratch registers saved and restored because that's where
we keep the syscall arguments and syscall return values.
Since this change affects KSE, have it use kse_switchin(2) for the
"new" syscall context.
2003-12-07 20:47:33 +00:00
peter
bf613741f0 Apply a second fix for stack alignment with libkse. This time, enter the
UTS with the stack correctly aligned.  Also, while here, use an indirect
jump rather than the pushq/ret hack.

This fixes threaded apps that use floating point for me, although
it hasn't solved all the problems.  It is an improvement though.
Preservation of the 128 byte red zone hasn't been resolved yet.

Approved by:  re (scottl)
2003-12-05 01:41:43 +00:00
davidxu
639818d0f3 Eliminate two pushl instructions by using the call instruction directly; this really
helps branch prediction a lot on the Intel P4.

Approved by: re (scottl)
2003-11-29 14:25:43 +00:00
davidxu
2cc3179bff 1. Optimize the KSE_LOCK_ACQUIRE and THR_LOCK_ACQUIRE macros to use static
fall-through branch prediction as suggested in the Intel IA-32 optimization guide.

2. Allocate the siginfo array separately to avoid the pthread structure being
allocated on a 2K boundary, which hits the L1 address-alias problem and causes
context switches to slow down.

3. Simplify the context switch code by removing redundant code; code size is
reduced, so it is expected to run faster.

Reviewed by: deischen
Approved by: re (scottl)
2003-11-29 14:22:29 +00:00
davidxu
e9f088c469 Remove the surplus mmap() call for the stack guard page in init_private; it is done
in init_main_thread.  Also don't initialize the lock and lockuser again for the initial
thread; that is already done by _thr_alloc().

Reviewed by: deischen
Approved by: re (scottl)
2003-11-29 14:10:02 +00:00
deischen
b1926e392e Back out last change and go back to using KSE locks instead of thread
locks until we know why this breaks ia64.

Reported by:	marcel
2003-11-16 15:01:26 +00:00
davidxu
27be031526 If a thread in a critical region gets a synchronous signal, then under the current
signal handling mode there is no chance to handle the signal; something
must be wrong in the library, so just call kse_thr_interrupt to dump its core.
I have had this code for a long time but forgot to commit it.
2003-11-09 00:37:14 +00:00
davidxu
48868cdc76 Use a THR lock instead of a KSE lock to avoid the scheduler being blocked in a spinlock.
Reviewed by: deischen
2003-11-08 06:07:04 +00:00
deischen
75b2a9cea3 style(9)
Reviewed by:	bde
2003-11-05 18:19:24 +00:00
deischen
c99795abd5 Don't declare the malloc lock; use the declaration provided in libc.
Noticed by:	bde
2003-11-05 18:18:45 +00:00
davidxu
66e5a49572 Add pthread_atfork() source code. Dan forgot to commit this file. 2003-11-05 03:42:10 +00:00
deischen
1191fa7e32 Add an implementation for pthread_atfork().
Aside from the POSIX requirements for pthread_atfork(), when
fork()ing, take the malloc lock to keep malloc state consistent
in the child.

Reviewed by:	davidxu
2003-11-04 20:04:45 +00:00
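
A small, self-contained usage example of the semantics described above; my_lock and library_init() are hypothetical names, not part of this commit:

    #include <pthread.h>

    static pthread_mutex_t my_lock = PTHREAD_MUTEX_INITIALIZER;

    static void prepare(void) { pthread_mutex_lock(&my_lock); }   /* before fork() */
    static void parent(void)  { pthread_mutex_unlock(&my_lock); } /* in the parent */
    static void child(void)   { pthread_mutex_unlock(&my_lock); } /* in the child  */

    int
    library_init(void)
    {
        /* Keep my_lock consistent across fork(), in the same spirit as the
         * malloc-lock handling described in the commit. */
        return (pthread_atfork(prepare, parent, child));
    }
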
deischen
573b809044 Add the ability to reinitialize libpthread's internal FIFO-queueing
locks.

Reviewed by:	davidxu
2003-11-04 20:01:38 +00:00
deischen
3153a078d8 Add the ability to reinitialize a spinlock (libc/libpthread
internal lock, not a pthread spinlock).

Reviewed by:	davidxu
2003-11-04 19:59:22 +00:00
deischen
b961438622 s/foo()/foo(void)/
Add a blank line after a variable declaration.
2003-11-04 19:58:12 +00:00
deischen
06593e0c1a Libpthread uses the convention that all of its (non-weak) symbols
begin with underscores, with weak definitions provided without
underscores.  Make the pthread spinlock conform to this convention.
2003-11-04 19:56:12 +00:00
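
A sketch of that convention using FreeBSD's __weak_reference() macro from <sys/cdefs.h>; the function body here is only a placeholder, not the real spinlock code:

    #include <sys/cdefs.h>
    #include <pthread.h>

    int
    _pthread_spin_lock(pthread_spinlock_t *lock)
    {
        (void)lock;
        return (0);             /* real locking elided */
    }

    /* The exported, non-underscore name is a weak alias for the strong symbol. */
    __weak_reference(_pthread_spin_lock, pthread_spin_lock);
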
deischen
3ccfc8a9e4 Add the ability to reinitialize a mutex (internally, not a userland
API).

Reviewed by:	davidxu
2003-11-04 19:53:32 +00:00
peter
9fdc368a9c Use amd64_set_fsbase() instead of calling sysarch() directly. 2003-10-23 06:12:57 +00:00
deischen
ef384a447b This test relies on the concurrency level being 1; make it so. 2003-10-20 04:23:49 +00:00
peter
aa62482800 Update context code for my last ABI breakage of mcontext. I'm worried
about the fpu code here.  It should be using fxsave/fxrstor instead of
saving/restoring the control word.  The SSE registers are used a lot in
gcc generated code on amd64.  I'm not sure how this all fits together
though.
2003-10-17 16:30:09 +00:00
deischen
b5f43f9f88 Don't forget to initialize the fake tcb when the kcb is allocated. 2003-10-12 16:50:45 +00:00
deischen
8df72a4176 Reverse the order of the first two arguments to _sparc64_enter_uts().
The first argument is the UTS function, the second argument is the
first argument to the UTS function.  Who's on first.
2003-10-09 20:52:17 +00:00
deischen
69eb1f0d34 Convert a couple of hardcoded values to constants. Make thr_getcontext()
return 0 when called the first time, and return 1 when resumed by
thr_setcontext().
2003-10-09 14:48:09 +00:00
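
A hedged sketch of the setjmp-like return convention described above; the prototypes are simplified assumptions, not the real sparc64 declarations:

    #include <ucontext.h>

    extern int  thr_getcontext(mcontext_t *);
    extern void thr_setcontext(mcontext_t *);

    void
    switch_away(mcontext_t *self, mcontext_t *other)
    {
        if (thr_getcontext(self) == 0) {
            /* First return (0): context saved, switch to the other one. */
            thr_setcontext(other);
        } else {
            /* Returned 1: we were resumed by a later thr_setcontext(self). */
        }
    }
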
deischen
801ac4642e Add preliminary sparc64 support to libpthread. This does not
yet work, but hopefully someone familiar with the sparc64
port can pick up the reins.

Submitted by:	jake
With mods by:	deischen
2003-10-09 02:32:28 +00:00
davidxu
97f8d9d1b5 Fix some comments for last commit. 2003-10-08 00:30:38 +00:00
davidxu
debdb208b6 Complete cancellation support for M:N threads: check the cancelling flag when
the thread state is changed from RUNNING to WAIT, and perform cancellation
operations for every cancellable state.

Reviewed by: deischen
2003-10-08 00:20:50 +00:00
davidxu
3ea48105a7 Use thread lock instead of scheduler lock to eliminate lock contention
for all wrapped syscalls under SMP.

Reviewed by: deischen
2003-10-08 00:17:13 +00:00
davidxu
bde7cb88e1 Only generate code for _LCK_ASSERT if _LCK_DEBUG is defined. 2003-10-02 03:24:26 +00:00
davidxu
a0841bce6f When the concurrency level is reduced and a kse is exiting, make sure no other
threads are still referencing the kse by migrating them to the initial kse.

Reviewed by: deischen
2003-09-29 06:25:04 +00:00
davidxu
5910a67bb3 Remove unused variable. 2003-09-28 13:47:29 +00:00
marcel
4d437893ac Relink libc_r.a, libc_r.so and libc_r_p.so from libthr to libkse.
On ia64, where there's no libc_r at all, libkse is now the default
thread library by virtue of these links.

The reasons for this change are:
1. libkse is slated to become the default thread library anyway,
2. active development and maintenance is only present for libkse,
3. GNOME and KDE, both in the process of being supported on ia64,
   work better with KSE; even on ia64.
2003-09-27 23:27:19 +00:00
davidxu
528c35bfb3 The pthread API should return the error code in the return value, not in errno.
Reviewed by: deischen
2003-09-25 13:53:49 +00:00
davidxu
c0197faefe If the syscall failed, restore the old sigaction and return the error to the thread. 2003-09-25 06:23:40 +00:00
davidxu
13f5fe3849 As the comments in _mutex_lock_backout state, only the current thread
can clear its pointer to the mutex, not the thread doing the mutex
handoff.  Because _mutex_lock_backout does not hold the scheduler
lock while testing THR_FLAGS_IN_SYNCQ and then reading the mutex
pointer, it is possible for the mutex owner to begin unlocking and
handing off the mutex to the current thread, and the mutex pointer
may be cleared to NULL before the current thread reads it, so the
current thread ends up dereferencing a NULL pointer.
Fix the race by making mutex waiters clear their own mutex pointers.
While I am here, also save the inherited priority in the mutex for
PTHREAD_PRIO_INHERIT mutexes in mutex_trylock_common, just like what
we do in mutex_lock_common.
2003-09-24 12:52:57 +00:00
davidxu
0414766399 Free the thread name memory if there is any. 2003-09-23 04:02:23 +00:00
davidxu
eed69ab82a Save and restore the timeout field in the signal frame just like we do
for the interrupted field.
Also, in _thr_sig_handler, retrieve the current signal mask from the kernel, not
from ucp; the latter is the pre-unioned mask, not the current signal mask.
2003-09-22 14:40:36 +00:00
davidxu
b429e20750 Fix FPU state restoring bug by jumping to right position. 2003-09-22 14:34:02 +00:00
davidxu
96155432c1 Print waitset correctly. 2003-09-22 00:40:23 +00:00
marcel
265b2be119 Make KSE_STACKSIZE machine dependent by moving it from thr_kern.c to
pthread_md.h. This commit only moves the definition; it does not
change it for any of the platforms. This more easily allows 64-bit
architectures (in particular) to pick a slightly larger stack size.
2003-09-19 23:28:13 +00:00
marcel
5888af272d _ia64_break_setcontext() now takes an mcontext_t. While here, define
THR_SETCONTEXT as PANIC(). The THR_SETCONTEXT macro is currently not
used, which means that the definition we had could be wrong, overly
pessimistic or unknowingly right. I don't like the odds...

The new _ia64_break_setcontext() and corresponding kernel fixes make
KSE mostly usable. There's still a case where we don't properly
restore a context and end up with a NaT consumption fault (typically
an indication for not handling NaT collection points correctly),
but at least now mutex_d works...
2003-09-19 23:00:28 +00:00
marcel
ef50cc82f8 Stop using the setcontext() syscall to restore an async context.
Instead use the break instruction with an immediate specially
created for us.
2003-09-19 22:54:05 +00:00
davidxu
85f4a8612b The pthread API should return the error code in the return value, not in errno. 2003-09-18 12:19:28 +00:00
davidxu
e7f6efc66e Fix a typo. Also turn on PTHREAD_SCOPE_SYSTEM after fork(). 2003-09-16 02:03:39 +00:00
deischen
7332e364d0 Remove a comment that questioned why the size of the FPU
state for amd64 was twice as large as necessary.  Peter
recently fixed this, so the comment no longer applies.

Also, since the size of struct mcontext changed, adjust
the threads library version of get&set context to match.

FYI, any layout/size change to any arch's struct
mcontext will likely need some minor changes in libpthread.
2003-09-16 00:00:53 +00:00
davidxu
1d188498a3 Fix a bogus comment and assign sigmask within a critical region; use
SIG_CANTMASK to remove unmaskable signals from the mask.
2003-09-15 00:08:48 +00:00
davidxu
8d7f2fb390 Fix a bogus comment; sigmask must be maintained correctly, as
it will be inherited in pthread_create.
2003-09-15 00:06:46 +00:00
davidxu
42c958f29f 1. Allocate and free lock-related resources in _thr_alloc and _thr_free
   to avoid a potential memory leak.  Also fix a bug in pthread_create: contention
   scope should be inherited when PTHREAD_INHERIT_SCHED is set, and the right field
   should be checked for PTHREAD_INHERIT_SCHED; the scheduling inherit flag is in sched_inherit.
2. Execute hooks registered by atexit() on the thread stack, not on the scheduler
   stack.
3. Simplify some code in _kse_single_thread by calling the xxx_destroy functions.

Reviewed by: deischen
2003-09-14 22:52:16 +00:00
davidxu
40d912b465 When invoking an old-style signal handler, use the true traditional BSD style to
invoke the signal handler.

Reviewed by: deischen
2003-09-14 22:42:39 +00:00
davidxu
b622332e18 Respect the POSIX specification: the value returned from pthread_attr_getguardsize
should be the value passed to pthread_attr_setguardsize, not a rounded-up value.
Also fix a stack size matching bug in thr_stack.c; the stack matching code now
uses the number of pages rather than the byte length to match stack sizes, so, for example,
sizes of 512 bytes and 513 bytes both match a one-page stack.

Reviewed by: deischen
2003-09-14 22:39:44 +00:00
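
A sketch of the page-granularity matching described above, assuming howmany() and PAGE_SIZE from <sys/param.h>; this is illustrative, not the actual thr_stack.c code:

    #include <sys/param.h>
    #include <stddef.h>

    static int
    stack_size_matches(size_t requested, size_t cached)
    {
        /* 512 bytes and 513 bytes both round up to one page, so they match. */
        return (howmany(requested, PAGE_SIZE) == howmany(cached, PAGE_SIZE));
    }
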
davidxu
d08173b50b Avoid garbage bits in c_flags by directly assigning the value.
Reviewed by: deischen
2003-09-14 22:33:32 +00:00
davidxu
b117b7ad58 If the user is setting the process scope flag, clear the PTHREAD_SCOPE_SYSTEM bit
accordingly.

Reviewed by: deischen
2003-09-14 22:32:28 +00:00
davidxu
be9fb27b11 Check for invalid parameters and return EINVAL.
Reviewed by: deischen
2003-09-14 22:28:13 +00:00
davidxu
707186a985 The original pthread_once code has a memory leak if a pthread_once_t is used in
a shared library or any other dynamically allocated data block: once the
pthread_once_t is initialized, a mutex is allocated, and if we unload the
shared library or free that data block, there is no way to deallocate
the mutex, so the result is a memory leak.
To fix this problem, we don't use the mutex field in pthread_once_t; instead
we use its state field and an internal mutex and condition variable in
libkse to do any synchronization, and we introduce a third state, IN_PROGRESS, to
wait on if another thread is already invoking init_routine().
Also, while I am here, make pthread_once() conform to the pthread cancellation
point specification.

Reviewed by: deischen
2003-09-09 22:38:12 +00:00
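
A hedged sketch of the state-based pthread_once() scheme described above; the state names and the internal lock are illustrative, not the exact libkse code (cancellation handling is omitted):

    #include <pthread.h>

    #define ONCE_NEVER_DONE   0
    #define ONCE_IN_PROGRESS  1
    #define ONCE_DONE         2

    static pthread_mutex_t once_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  once_cv   = PTHREAD_COND_INITIALIZER;

    int
    once_sketch(int *state, void (*init_routine)(void))
    {
        pthread_mutex_lock(&once_lock);
        while (*state == ONCE_IN_PROGRESS)      /* someone else is running init */
            pthread_cond_wait(&once_cv, &once_lock);
        if (*state == ONCE_NEVER_DONE) {
            *state = ONCE_IN_PROGRESS;
            pthread_mutex_unlock(&once_lock);
            init_routine();                     /* run outside the internal lock */
            pthread_mutex_lock(&once_lock);
            *state = ONCE_DONE;
            pthread_cond_broadcast(&once_cv);
        }
        pthread_mutex_unlock(&once_lock);
        /* No per-once mutex is ever allocated, so nothing leaks when the
         * pthread_once_t's memory goes away. */
        return (0);
    }
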
davidxu
f2dd9e6365 Add code to support pthread spin lock.
Reviewed by: deischen
2003-09-09 06:57:51 +00:00
davidxu
521fd6195a Add a small piece of code to support pthread_rwlock_timedrdlock and
pthread_rwlock_timedwrlock.
2003-09-06 00:07:52 +00:00
kan
3cec797a7b The caller is expected to set up the PIC register correctly before
jumping to .cerror.  This means .cerror has to be present in the
same module as its consumers, or bad things will happen.
2003-09-05 18:08:19 +00:00
davidxu
82aeb9fc85 Add code to support the barrier synchronization object and implement
pthread_mutex_timedlock().

Reviewed by: deischen
2003-09-04 14:06:43 +00:00
davidxu
480778ce65 Remove repeated macro THR_IN_CONDQ. 2003-09-04 07:46:26 +00:00
davidxu
962cbaebac Allow hooks registered by atexit() to run with the current thread pointer set;
without this change, my atexit test dumps core.
2003-09-04 05:24:53 +00:00
deischen
919bc52171 Don't assume sizeof(long) == sizeof(int) on x86; use int
instead of long types for low-level locks.

Add prototypes for some internal libc functions that are
wrapped by the library as cancellation points.

Add memory barriers to alpha atomic swap functions (submitted
by davidxu).

Requested by:	bde
2003-09-03 17:56:26 +00:00
davidxu
f61d4432c4 Move kse_wakeup_multi call to just before KSE_SCHED_UNLOCK.
Tested on: SMP
2003-09-03 00:21:10 +00:00
kan
6d12300f90 Rethink the way thr_libc.So is generated. Relying on GCC to extract
only needed symbols from libc_pic is not working on sparc64.

Requested by: jake
2003-09-02 19:37:11 +00:00
deischen
e8c434f7c5 Allow the concurrency level to be reduced.
Reviewed by:	davidxu
2003-08-30 12:09:16 +00:00
davidxu
6c1cd2954d Repost a masked signal to the kernel for a scope system thread; this hardly happens
in the real world.

Reviewed by: deischen
2003-08-21 22:02:18 +00:00
davidxu
5546ba5027 _thr_sig_check_pending is also called by a scope system thread when it leaves
a critical region.  We wrap some syscalls as thread cancellation points, and
when a syscall returns, we call _thr_leave_cancellation_point; if
a signal comes in at that time, it is buffered, and when the thread leaves
_thr_leave_cancellation_point, the buffered signals are processed.  To avoid
messing up the normal syscall errno, we should save and restore errno around
the signal handling code.
2003-08-20 13:43:35 +00:00
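
A sketch of the errno preservation described above; only the routine name _thr_sig_check_pending comes from the commit, the surrounding code is illustrative:

    #include <errno.h>

    struct pthread;
    extern void _thr_sig_check_pending(struct pthread *);

    void
    leave_cancellation_point_sketch(struct pthread *curthread)
    {
        int saved_errno;

        saved_errno = errno;                    /* keep the wrapped syscall's result   */
        _thr_sig_check_pending(curthread);      /* may run handlers that clobber errno */
        errno = saved_errno;
    }
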
deischen
9371b61282 Add back a loop for up to PTHREAD_DESTRUCTOR_ITERATIONS to
destroy thread-specific data.  Display a warning when thread-specific
data remains after PTHREAD_DESTRUCTOR_ITERATIONS.

Reviewed by:	davidxu
2003-08-20 02:34:14 +00:00
davidxu
00e9963efb Support printing 64-bit pointers and long integers.
Reviewed by: deischen
2003-08-19 08:29:33 +00:00
davidxu
0e9eb96aee Save and restore errno around sigprocmask. 2003-08-19 03:33:51 +00:00
davidxu
49037ec1b2 Call exit directly if no thread was ever created. This makes it safe to call
pthread_exit in main() without creating any threads.

Tested by: deischen
2003-08-18 04:03:08 +00:00
davidxu
3203dde90e Treat the initial thread as a scope system thread when KSE mode is not yet activated,
so we can protect some locking code from being interrupted by signal
handling. When KSE mode is turned on, reset the thread flag to process scope,
except when we are running in 1:1 mode, in which case we needn't turn it off.
Also remove some unused member variables in structure kse.

Tested by: deischen
2003-08-18 03:58:29 +00:00
davidxu
76d162d172 If threaded mode is not turned on yet, call __sys_sched_yield directly. 2003-08-16 13:02:45 +00:00
davidxu
e695327086 Keep the initial kse and kse group just like we keep the initial thread;
don't free them, so some code can still reference them.

Reviewed by: deischen
2003-08-16 05:22:20 +00:00
davidxu
206928815a Access the user-provided pointer outside the lock, and also check the case when
a key is less than 0.
2003-08-16 05:19:00 +00:00
marcel
8e205a2f04 Don't run verify directly as that would require the perl script to
have execute permissions. Run "perl verify" instead. Replace all
occurrences of the hardcoding of ./verify with $(VERIFY) to allow
it to be overridden as well.
2003-08-13 03:59:18 +00:00
davidxu
f9f5048801 Always set the tcb for a bound thread, and switch the tcb for an M:N thread at the correct
time.
2003-08-13 01:49:07 +00:00
davidxu
2bb2e522ee Don't forget to set kcb_self. 2003-08-12 22:13:06 +00:00
davidxu
c5eb02d502 Correctly set current tcb. This fixes some IA64/KSE problems.
Reviewed by: deischen, julian
2003-08-12 08:01:34 +00:00
davidxu
09416455b1 Add some quick paths to exit the process when the signal action is the default and
the signal causes the process to exit.

Reviewed by: deischen
2003-08-10 22:35:46 +00:00
davidxu
4d41804986 Initialize rtld lock just before turning on thread mode and
uninitialize rtld lock after thread mode shutdown.
2003-08-10 22:30:20 +00:00
davidxu
b614f44d78 If thread mode is not activated yet, just call __sys_fork() directly;
otherwise mask all signals until fork() returns.  In the child process,
we reset the library state before restoring the signal masks, once we
reach a safe point.

Reviewed by: deischen
2003-08-10 22:20:41 +00:00
davidxu
349a8e7e9b Tweak the rtld lock to allow recursion on the reader lock and detect recursion
on the writer lock. This is a first cut at an rwlock for rtld.

Submitted by: deischen
2003-08-10 22:15:03 +00:00
davidxu
b213a2432e If thread mode is not activated yet, don't do extra work.
Reviewed by: deischen
2003-08-10 22:07:28 +00:00
davidxu
6245694474 o Add code to GC freed KSEs and KSE groups.
o Fix a bug in kse_free_unlocked(): kcb_dtor shouldn't be called because
  the KSE is cached and will be reused in _kse_alloc().

Reviewed by: deischen
2003-08-08 22:20:59 +00:00
kan
0faaaf6049 Allow the gcc driver to process the -r option itself; do not use -Wl,-r to
bypass it. Doing otherwise did not allow the compiler to detect and disable
conflicting options generated from specs.

Reported by:	jake
2003-08-08 03:41:13 +00:00
marcel
0dae148272 Grok async contexts. When a thread is interrupted and an upcall
happens, the context of the interrupted thread is exported to
userland. Unlike most contexts, it will be an async context and
we cannot easily use our existing functions to set such a
context.
To avoid a lot of complexity that may possibly interfere with
the common case, we simply let the kernel deal with it. However,
we don't use the EPC based syscall path to invoke setcontext(2).
No, we use the break-based syscall path. That way the trapframe
will be compatible with the context we're trying to restore and
we save the kernel a lot of trouble. The kind of trouble we did
not want to go though ourselves...

However, we also need to set the threads mailbox and there's no
syscall to help us out. To avoid creating a new syscall, we use
the context itself to pass the information to the kernel so that
the kernel can update the mailbox. This involves setting a flag
(_MC_FLAGS_KSE_SET_MBOX) and setting ifa (the address) and isr
(the value).
2003-08-07 08:03:05 +00:00
deischen
9dd8477032 Fix a typo. s/Line/Like/ 2003-08-06 06:12:54 +00:00
marcel
37f3261fb0 Avoid a level of indirection to get from the thread pointer to the
TCB. We know that the thread pointer points to &tcb->tcb_tp, so all
we have to do is subtract offsetof(struct tcb, tcb_tp) from the
thread pointer to get to the TCB. Any reasonably smart compiler will
translate accesses to fields in the TCB as negative offsets from TP.

In _tcb_set() make sure the fake TCB gets a pointer to the current
KCB, just like any other TCB. This fixes a NULL-pointer dereference
in _thr_ref_add() when it tried to get the current KSE.
2003-08-06 04:17:42 +00:00
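
A sketch of the pointer arithmetic described above; the struct layout is a simplified assumption, not the real ia64 pthread_md.h definition:

    #include <stddef.h>

    struct tcb {
        long    tcb_other;              /* fields reached at negative offsets from tp */
        struct {
            long    tp_value;
        } tcb_tp;                       /* the thread pointer points here */
    };

    static inline struct tcb *
    tcb_from_tp(void *tp)
    {
        /* No extra indirection: back up from the thread pointer to the TCB. */
        return ((struct tcb *)((char *)tp - offsetof(struct tcb, tcb_tp)));
    }
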
deischen
b024e2e419 Don't call kse_set_curthread() when scheduling a new bound
thread.  It should only be called by the current kse and
never by a KSE on behalf of another.

Submitted by:	davidxu
2003-08-06 00:43:28 +00:00
marcel
5dfa7ee632 Fix an off by one error in the number of arguments passed to
makecontext(). We only supply 3, not 4. This is mostly harmless,
except that on ia64 the garbage can include NaT bits, resulting
in NaT consumption faults.
2003-08-06 00:23:40 +00:00
marcel
e45924adac Define the static TLS as an array of long double. This will guarantee
that the TLS is 16-byte aligned, as well as guarantee that the thread
pointer is 16-byte aligned as it points to struct ia64_tp. Likewise,
struct tcb and struct ksd are also guaranteed to be 16-byte aligned
(if they weren't already).
2003-08-06 00:17:15 +00:00
deischen
aff30d7ad6 Use auto LDT allocation for i386. 2003-08-05 23:09:22 +00:00
deischen
73db9e759e Rethink the MD interfaces for libpthread to account for
archs that can (or are required to) have per-thread registers.

Tested on i386, amd64; marcel is testing on ia64 and will
have some follow-up commits.

Reviewed by:	davidxu
2003-08-05 22:46:00 +00:00
marcel
ad82646deb Define THR_GETCONTEXT and THR_SETCONTEXT in terms of the userland
context functions. We don't need to enter the kernel anymore. The
contexts are compatible (ie a context created by getcontext() can
be restored by _ia64_restore_context()).

While here, make the use of THR_ALIGNBYTES and THR_ALIGN a no-op.
They are going to be removed anyway.
2003-08-05 19:37:20 +00:00
marcel
2de934f7e3 o  In _ia64_save_context() clear the return registers except for r8.
   We write 1 for r8 in the context so that _ia64_restore_context()
   will return with a non-zero value. _ia64_save_context() always
   returns 0.
o  In _ia64_restore_context(), don't restore the thread pointer. It
   is not normally part of the context. Also, restore the return
   registers. We get called for contexts created by getcontext(),
   which means we have to restore all the syscall return values.
2003-08-05 19:33:01 +00:00
davidxu
6bcc87d469 Using -15 to align the stack to 16 bytes is incorrect; use ~15 instead. 2003-08-02 22:39:10 +00:00
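
Why ~15 works and -15 does not: in two's complement ~15 is ...11110000 and clears the low four bits, while -15 is ...11110001 and leaves bit 0 set. A tiny illustrative program (not libkse code):

    #include <stdio.h>

    int
    main(void)
    {
        unsigned long sp = 0x1007;

        printf("%#lx %#lx\n", sp & ~15UL, sp & (unsigned long)-15L);
        /* prints 0x1000 0x1001: only the ~15 mask yields a 16-byte aligned value */
        return (0);
    }
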
deischen
db2a04e50d Take the same approach for i386 as that for ia64 and amd64. Use
the userland version of [gs]etcontext to switch between a thread
and the UTS scheduler (and back again).  This also fixes a bug
in i386 _thr_setcontext() which wasn't properly restoring the
context.

Reviewed by:	davidxu
2003-07-31 21:09:11 +00:00
davidxu
c970c3739c Fix some typos, correctly jump into UTS. 2003-07-31 08:50:01 +00:00
davidxu
33260f08b8 sysctlbyname needs size_t type, not int. 2003-07-31 08:26:58 +00:00
deischen
1c6cde237e Don't forget to unlock the scheduler lock. Somehow this got removed
from one of my last commits.  This only affected priority ceiling
mutexes.

Pointy hat to:	deischen
2003-07-30 13:28:05 +00:00
davidxu
2947f8c61f Simplify sigwait code a bit by using a waitset and removing oldsigmask.
Reviewed by: deischen
2003-07-27 06:46:34 +00:00
davidxu
2891e1e63b Fix typo. 2003-07-26 02:36:50 +00:00
deischen
9f8651cad6 Move idle kse wakeup to outside of regions where locks are held.
This eliminates ping-ponging of locks, where the idle KSE wakes
up only to find the lock it needs is being held.  This gives
little or no gain to M:N mode but greatly speeds up 1:1 mode.

Reviewed & Tested by:	davidxu
2003-07-23 02:11:07 +00:00
deischen
23a97de297 Add missing arguments to _amd64_restore_context() when called from
THR_SETCONTEXT().
2003-07-20 12:41:38 +00:00
davidxu
d1d23a75b8 Override the libc function raise(); in threading mode, raise() will
send the signal to the current thread.

Reviewed by: deischen
2003-07-19 05:25:49 +00:00
deischen
f34d7dc27d Add some very beta amd64 bits. These will also need some tweaking. 2003-07-19 04:44:21 +00:00
deischen
8c86a69beb Clean up thread accounting. Don't reset a thread's timeslice
when it blocks; it only gets reset when it yields.

Properly set a thread's default stack guardsize.

Reviewed by:	davidxu
2003-07-18 02:46:55 +00:00
deischen
875c5215cc Add a preemption point when a mutex or condition variable is
handed-off/signaled to a higher priority thread.  Note that when
there are idle KSEs that could run the higher priority thread,
we still add the preemption point because it seems to take the
kernel a while to schedule an idle KSE.  The drawbacks are that
threads will be swapped more often between CPUs (KSEs) and
that there will be an extra userland context switch (the idle
KSE is still woken and will probably resume the preempted
thread).  We'll revisit this if and when idle CPU/KSE wakeup
times improve.

Inspired by:	Petri Helenius <pete@he.iki.fi>
Reviewed by:	davidxu
2003-07-18 02:46:30 +00:00
davidxu
8cbb5ce673 o Eliminate the upcall for PTHREAD_SCOPE_SYSTEM threads; such a thread
is now a system-bound thread, and when it is blocked, no upcall is generated.

o Add the ability for libkse to run in pure 1:1 threading mode;
  defining SYSTEM_SCOPE_ONLY in the Makefile turns on this option.

o Eliminate the code for installing a dummy signal handler for the sigwait call.

o Add a hash table to find threads.

Reviewed by: deischen
2003-07-17 23:02:30 +00:00
davidxu
93d7f2a880 Don't resume a sigwait thread if the signal is masked. 2003-07-09 22:30:55 +00:00
davidxu
b14a2d89ea POSIX says that if a thread is in the sigwait state, then even though a signal may not be
in its waitset, if the signal is not masked by the thread, the signal
can interrupt the thread and the signal action can be invoked by the thread, and
sigwait should return with errno set to EINTR.
Also save and restore the thread's internal state (timeout and interrupted)
around signal handler invocation.
2003-07-09 14:30:51 +00:00
davidxu
9687583ade Restore signal mask correctly after fork(). 2003-07-09 01:39:24 +00:00
davidxu
54fcf3f7fa Save and restore thread's error code around signal handling.
Reviewed by: deischen
2003-07-09 01:06:12 +00:00
davidxu
ac66c4d2c8 Correctly print the signal mask; the bug was introduced by cut and paste
in the last commit.
2003-07-07 12:12:33 +00:00
davidxu
0dc21a981d Add a newline to debug message. 2003-07-07 04:32:17 +00:00
davidxu
8aa4e2d685 Avoid accessing user-provided parameters in a critical region.
Reviewed by: deischen
2003-07-07 04:28:23 +00:00
davidxu
a727b4630b Print the thread's scope; also print the signal mask for every thread and print
it on one line.
2003-07-07 03:08:11 +00:00
davidxu
edda827315 Correctly lock/unlock the signal lock. I must be in a bad state; I need to sleep. 2003-07-04 08:51:37 +00:00
davidxu
5a2adbe709 Always check and restore the previously set sigaction; also access the user parameter
outside of the lock.
2003-07-04 07:49:06 +00:00
davidxu
2d545c0325 If select() is only used for sleeping, convert it to nanosleep;
it then only needs to wait purely in user space.
2003-07-03 13:36:29 +00:00
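
A sketch of the conversion described above; this is illustrative, not the actual libkse wrapper:

    #include <sys/select.h>
    #include <stddef.h>
    #include <time.h>

    int
    select_sleep_only(int nfds, fd_set *rfds, fd_set *wfds, fd_set *efds,
        struct timeval *tv)
    {
        struct timespec ts;

        if (nfds == 0 && rfds == NULL && wfds == NULL && efds == NULL &&
            tv != NULL) {
            /* Nothing to poll: sleep purely in user space. */
            ts.tv_sec = tv->tv_sec;
            ts.tv_nsec = tv->tv_usec * 1000;
            return (nanosleep(&ts, NULL));
        }
        return (select(nfds, rfds, wfds, efds, tv));
    }
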
davidxu
9fdfee8413 Check whether the thread is in a critical region; only testing check_pending
is not enough.
2003-07-03 10:12:21 +00:00
ru
78334a2ff7 Style. 2003-07-02 20:52:39 +00:00
ru
e00593397e Take thr_support.c out of SRCS so that it does not end up in libraries.
Record the missing dependency of thr_libc.So on the libc_pic.a library.

OK'ed by:	kan
2003-07-02 20:51:30 +00:00
davidxu
edf662e5b9 Set unlock_mutex to 1 after locking the mutex.
Use THR_CONDQ_CLEAR, not THR_CONDQ_SET, in cond_queue_deq; currently
cond_queue_deq is not used.
2003-07-02 14:12:37 +00:00
davidxu
7bd396d912 Fix typo. 2003-07-02 13:23:03 +00:00
ru
40bbdc5330 Unbreak "make checkdpadd". 2003-07-01 15:37:35 +00:00
ru
38009c82b2 Axe AINC.
Submitted by:	bde
2003-07-01 15:07:01 +00:00
davidxu
67068d4816 Because there are only _SIG_MAXSIG elements in the thread siginfo array,
use [signal number - 1] as the subscript to access the array.
2003-06-30 06:16:50 +00:00
davidxu
623b392ac5 Remove surplus unlocking code I accidentally checked in. This won't be
triggered until the LDT entries are exhausted.
2003-06-30 05:49:06 +00:00
davidxu
7b25bda563 o Use a daemon thread to monitor signal events in the kernel; if the pending
  signals are changed in the kernel, it will retrieve the pending set and
  try to find a thread to dispatch the signal to.  The dispatching process
  can be rolled back if the signal is no longer pending in the kernel.

o Create two functions, _thr_signal_init() and _thr_signal_deinit().  All
  signal action settings are retrieved from the kernel when threading
  mode is turned on; after a fork(), the child process will reset them to the
  user settings by calling _thr_signal_deinit().  When threading mode
  is not turned on, all signal operations are passed directly to the kernel.

o When a thread generates a synchronous signal and its context is returned
  from the completed list, the UTS will retrieve the signal from its mailbox
  and try to deliver the signal to the thread.

o The context signal mask is now only used when delivering signals; the
  thread's current signal mask is always the one in the pthread structure.

o Remove the have_signals field in the pthread structure and replace it with
  psf_valid in pthread_signal_frame.  When psf_valid is true, at context
  switch time the thread will back itself out of any mutex/condition
  internal queues, then begin to process signals.  When a thread is not
  in a blocked state and is running, check_pending indicates there are signals
  for the thread; after it is preempted and then resumed, the UTS will try to
  deliver the signals to the thread.

o At signal delivery time, not only will the thread's pending signals be
  scanned, the process's pending signals will be scanned too.

o Change the sigwait code a bit: remove the sigwait field in pthread_wait_data
  and replace it with oldsigmask in the pthread structure.  When a thread calls
  sigwait(), its current signal mask is backed up to oldsigmask, and the waitset
  is copied to its signal mask; when the thread gets a signal in the
  waitset range, its current signal mask is restored from oldsigmask.
  These operations are done in an atomic fashion (a small sketch follows
  this entry).

o Two additional POSIX APIs are implemented: sigwaitinfo() and sigtimedwait().

o Signal code locking is better than before; there are fewer race conditions.

o Temporarily disable most of the code in _kse_single_thread as it is not safe
  after fork().
2003-06-28 09:55:02 +00:00
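
A sketch of the sigwait() masking dance from the sigwait bullet above; struct pthread here is a tiny illustrative stand-in, not the real structure:

    #include <signal.h>

    struct pthread {
        sigset_t    sigmask;            /* thread's current signal mask */
        sigset_t    oldsigmask;         /* backup while in sigwait()    */
    };

    void
    sigwait_enter(struct pthread *curthread, const sigset_t *waitset)
    {
        /* Done atomically (under the signal lock) in the real code. */
        curthread->oldsigmask = curthread->sigmask;     /* back up the current mask */
        curthread->sigmask = *waitset;                  /* wait for the waitset     */
    }

    void
    sigwait_leave(struct pthread *curthread)
    {
        /* A signal in the waitset arrived: restore the original mask. */
        curthread->sigmask = curthread->oldsigmask;
    }
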
davidxu
edb09b0c7f Use the value returned by mmap().
Reviewed by: deischen
2003-06-28 09:48:05 +00:00
davidxu
2a76d14fc7 Temporarily disable the rwlock-based code and replace it with low-level KSE locking
code; once rtld-elf and libkse can cooperate better, that code can be
restored.

Reviewed by: deischen
2003-06-28 09:47:22 +00:00
davidxu
27c0f951b2 Write the new thread pointer back only on success.
Reviewed by: deischen
2003-06-28 09:41:59 +00:00
davidxu
e7f0429dc8 After a thread is interrupted by a signal, it should relock the mutex.
Reviewed by: deischen
2003-06-28 09:40:57 +00:00
davidxu
cec78b1ad3 If the thread is exiting, just return. The kse_thr_interrupt interface
was changed; it needs a signal parameter, so pass -1 to it, which indicates that
the syscall should be interrupted.

Reviewed by: deischen
2003-06-28 09:39:35 +00:00
marcel
e381861265 Implement _ia64_save_context() and _ia64_restore_context(). Both
functions are derived from the swapctx() and restorectx() (resp)
from sys/ia64/ia64/context.s. The code is expected to be 99%
correct, but has not yet been tested.

Note that with these functions operating on mcontext_t, we also
created the foundation upon which we can implement getcontext(2)
and setcontext(2) replacements. It's not guaranteed that the use
of these syscalls and _ia64_{save|restore}_context() on the same
uicontext_t is actually going to work. Replacing the syscalls is
now trivially achieved.

This commit completes the ia64 port of libpthread itself (modulo
testing and bugfixes).
2003-06-27 06:15:13 +00:00
marcel
d53d28ab16 Implement _ia64_enter_uts(). The purpose of this function is to switch
the register stack and memory stack and call the function given to it.

While here, provide empty, non-working, stubs for the context functions
(_ia64_save_context() and _ia64_restore_context()) so that anyone can at
least compile libkse from CVS sources. Real implementations will follow
soon.
2003-06-26 05:40:15 +00:00
marcel
a6cc46f9c7 Implement _thr_enter_uts() and _thr_switch() as inline functions to
minimize the amount and complexity of assembly code that needs to be
written. This way the core functionality is spread over 3 elementary
functions that don't have to do anything that can more easily and
more safely be done in C. As such, assembly code will only have to
know about the definition of mcontext_t.
The runtime cost of not having these functions being inlined is less
important than the cleanliness and maintainability of the code at
this stage of the implementation.
2003-06-26 03:55:58 +00:00
marcel
73291d87a7 Explicitly widen int types before casting to pointer types. On 64-bit
platforms the compiler warns about incompatible integer/pointer casts
and on ia64 this generally is bad news. We know that what we're doing
here is valid/correct, so suppress the warning. No functional change.

Sleeps better: marcel
2003-06-24 00:37:26 +00:00
marcel
931dcaae39 Untangle the inter-dependency of kse types and ksd types/functions
by moving the definition of struct ksd to pthread_md.h and removing
the inclusion of ksd.h from thr_private.h (which has the definition
of struct kse and kse_critical_t). This allows ksd.h to have inline
functions that use struct kse and kse_critical_t and generally
yields a cleaner implementation at the cost of not having all ksd
related types/definitions in one header.

Implement the ksd functionality on ia64 by using inline functions
and permanently remove ksd.c from the ia64 specific makefile.

This change does not clean up the i386 specific version of ksd.h.

NOTE: The ksd code on ia64 abuses the tp register in the same way
as it is abused in libthr in that it is incompatible with the
runtime specification. This will be address when support for TLS
hits the tree.
2003-06-23 23:15:06 +00:00
marcel
7958c0f19c Change the definition of _ksd_curkse, _ksd_curthread and
_ksd_readandclear_tmbx to be function-like. That way we
can define them as inline functions or create prototypes
for them.

This change allows the ksd interface on ia64 to be fully
inlined.
2003-06-23 09:49:16 +00:00
marcel
71d03b2c99 Define THR_{G|S}ETCONTEXT to expand to {g|s}etcontext(2).
Define THR_ALIGN to align at 16-byte boundaries.
2003-06-23 04:52:09 +00:00
marcel
b2e802d7cc Implement atomic_swap_{int|long|ptr}. Define atomic_swap_ptr as a
macro that expands to atomic_swap_long() to avoid compiler warnings
caused by incompatible pointer passing.
2003-06-23 04:44:43 +00:00
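
A sketch of the macro described above; the three-argument atomic_swap_long() form is an assumption based on the commit message:

    extern void atomic_swap_long(long *dst, long val, long *res);

    /* Swap pointer-sized values through the long variant to avoid
     * incompatible-pointer warnings; on ia64, long and pointers have the
     * same width. */
    #define atomic_swap_ptr(dst, val, res) \
        atomic_swap_long((long *)(dst), (long)(val), (long *)(res))
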
marcel
8ff698074b Move the machine specific files from sys/Makefile.inc and put them
in a machine specific makefile. While here, sort the sub-directories
in Makefile and remove _atomic_lock.S from all makefiles.
2003-06-23 04:28:31 +00:00
davidxu
51cf58c433 Don't lock scheduler lock twice. 2003-06-18 06:08:03 +00:00
deischen
7a2a53be28 After selecting a thread to handle a signal and taking
its scheduling lock, make sure that the thread still has
the signal unmasked.

Make a debug statement conditional on debugging being
enabled.
2003-06-08 17:37:21 +00:00
deischen
08f5ae2179 Insert threads at the end of the free thread list so that
the chance of getting the same thread id when allocating a
new thread is reduced.  This won't work if the application
creates a new thread for every time a thread exits, but
we're still within the allowances of POSIX.
2003-06-08 17:35:11 +00:00
deischen
49fd6082dc Provide a reference to __sys_write. The implementation uses this when
debugging is enabled so the symbol needs to be resolved before rtld
locking is enabled.  I may not really know what I'm talking about,
but it works.

Submitted by:	kan
2003-06-08 17:29:32 +00:00
imp
0dde8b2eeb Don't force -L/usr/lib. This is incorrect because we should not be
looking at the host environment for anything.  This breaks building
-CURRENT on 4.x as well.

Submitted by: kan@
2003-06-08 03:58:20 +00:00
davidxu
4bb0c351d9 Only init _thread_sigact once; there is no need to init it again after a fork().
Obtained from: deischen
2003-06-04 12:40:21 +00:00
davidxu
35d309a617 Regardless of whether threaded mode is turned on, always save the thread's
signal mask.
2003-06-04 12:38:21 +00:00
davidxu
490170d1ea KMF_DONE is now in /sys/sys/kse.h; there is no longer any need to define it here. 2003-06-04 03:22:59 +00:00
davidxu
ddbadd783d Free the memory of the internal low-level lock when mutexes and condition variables
are destroyed.

Submitted by: tegge
2003-06-03 02:21:01 +00:00
davidxu
7ab0ceb361 Save THR_FLAGS_IN_TDLIST in the signal frame; otherwise a thread that received
a signal cannot be removed from the thread list after it exits.

Reviewed by: deischen
Approved by: re (jhb)
2003-05-30 14:50:16 +00:00
kan
a22cad65cc Attempt to eliminate PLT relocations from the rwlock acquire/release
path, making them suitable for direct use by the dynamic loader.

Register the libpthread-specific locking API with rtld on startup.

This still has some rough edges with signals which should be
addressed later.

Approved by:	re (scottl)
2003-05-30 00:21:52 +00:00
deischen
51dd2dd2b3 Call __sys_sigprocmask (the system call) when sigprocmask()
is called and the application is not threaded.  This works around
a problem when an application that hasn't yet become threaded
tries to jump out of a signal handler.

Reported by:	mbr
Approved by:	re@ (rwatson)
2003-05-30 00:09:22 +00:00
deischen
cccb8a3418 Don't really spin on a spinlock; silently convert it to the same
low-level lock used by the libpthread implementation.  In the
future, we'll eliminate spinlocks from libc but that will wait
until after 5.1-release.

Don't call an application signal handler if the handler is
the same as the library-installed handler.  This seems to
be possible after a fork and is the cause of konsole hangs.

Approved by:	re@ (jhb)
2003-05-29 17:10:45 +00:00
deischen
dc5114efb5 Change low-level locking a bit so that we can tell if
a lock is being waited on.

Fix races in join and cancellation.

When trying to wait on a CV and the library is not yet
threaded, make it threaded so that waiting actually works.

When trying to nanosleep() and we're not threaded, just
call the system call nanosleep instead of adding the thread
to the wait queue.

Clean up adding/removing new threads to the "all threads queue",
assigning them unique ids, and tracking how many active threads
there are.  Do it all when the thread is added to the scheduling
queue instead of making pthread_create() know how to do it.

Fix a race where a thread could be marked for signal delivery
but it could be exited before we actually add the signal to it.

Other minor cleanups and bug fixes.

Submitted by:	davidxu
Approved by:	re@ (blanket for libpthread)
2003-05-24 02:29:25 +00:00
deischen
b26e5b44e0 Eek, staticize a couple of functions that shouldn't
be external (initialize()!).

Remove cancellation points from _pthread_cond_wait and
_pthread_cond_timedwait (single underscore versions are
libc private functions).  Point the weak reference(!) for
these functions to the versions with cancellation points.

Approved by:	re@(blanket till 5/19)
Pointed out by:	kan (cancellation point bug)
2003-05-19 23:04:50 +00:00
deischen
7f206ad4bb Add a method of yielding the current thread with the scheduler
lock held (_thr_sched_switch_unlocked()) and use this to avoid
dropping the scheduler lock and having the scheduler retake the
same lock again.

Add a better way of detecting if a low-level lock is in use.

When switching out a thread due to blocking in the UTS, don't
switch to the KSE's scheduler stack only to switch back to
another thread.  If possible switch to the new thread directly
from the old thread and avoid the overhead of the extra
context switch.

Check for pending signals on a thread when entering the scheduler
and add them to the threads signal frame.  This includes some
other minor signal fixes.

Most of this was a joint effort between davidxu and myself.

Reviewed by:	davidxu
Approved by:	re@ (blanket for libpthread)
2003-05-16 19:58:30 +00:00
deischen
e98d7c2f51 Make pthread_join() async-cancel-safe. David was going to commit
this, but I think he's asleep and want to be sure it gets in before
the freeze.

Submitted by:	davidxu
2003-05-06 00:02:54 +00:00
davidxu
c4b2696073 Call dump_queues() only when DEBUG_THREAD_KERN is defined, to save some
CPU cycles.
2003-05-05 05:01:19 +00:00
deischen
3cb9ba9e43 Protect against a race between granting a lock and accessing
other parts of the lock.

Submitted by:	davidxu
2003-05-04 22:29:09 +00:00
deischen
ca059a5aea Fix suspend and resume.
Submitted (in part) by:	Kazuaki Oda <kaakun@highway.ne.jp>
2003-05-04 16:17:01 +00:00
davidxu
10a5205738 Handle the thread-canceled case; it is the same as a signal-caused backout,
but it will break out of the loop.
2003-05-02 11:39:00 +00:00
deischen
63fd7f7479 Move the mailbox to the beginning of the thread and align the
thread so that the context (SSE FPU state) is also aligned.
2003-04-30 15:05:17 +00:00
davidxu
148de0f38f Call kse_wakeup_multi() after removing the current thread from the RUNQ to avoid
doing unnecessary idle kse wakeups.
2003-04-30 01:15:21 +00:00
davidxu
d0537b4f65 Call kse_wakeup_multi() to wakeup idle KSEs when there are threads ready
to run.
2003-04-30 01:03:58 +00:00
deischen
6bd4376dc8 Create the thread signal lock as a KSE lock (as opposed to
a thread lock).

Better protect access to thread state while searching for
threads to handle a signal.

Better protect access to process pending signals while processing
a thread in sigwait().

Submitted by:	davidxu
2003-04-29 21:03:33 +00:00
deischen
6deabb439b o Don't add a scope system thread's KSE to the list of available
KSEs when its thread exits; allow the GC handler to do that.

o Make spinlock/spinlock critical regions.

The following were submitted by davidxu

  o Allow thr_switch() to take a null mailbox argument.

  o Better protect cancellation checks.

  o Don't set KSE specific data when creating new KSEs; rely on the
    first upcall of the KSE to set it.

  o Add the ability to set the maximum concurrency level and do this
    automatically.  We should have a way to enable/disable this with
    some sort of tunable because some applications may not want this
    to be the default.

  o Hold the scheduling lock across thread switch calls.

  o If scheduling of a thread fails, make sure to remove it from the list
    of active threads.

  o Better protect accesses to a joining threads when the target thread is
    exited and detached.

  o Remove some macro definitions that are now provided by <sys/kse.h>.

  o Don't leave the library in threaded mode if creation of the initial
    KSE fails.

  o Wakeup idle KSEs when there are threads ready to run.

  o Maintain the number of threads active in the priority queue.
2003-04-28 23:56:12 +00:00
deischen
4d8bc81d7d Use the correct link entry for walking the list of threads.
While I'm here, use the TAILQ_FOREACH macro instead of a more
manual method which was inherited from libc_r (so we could
remove elements from the list which isn't needed for libpthread).

Submitted by:	Kazuaki Oda <kaakun@highway.ne.jp>
2003-04-28 21:35:06 +00:00
deischen
bfee33ea7a Remove the %gs restoring hack (already commented out).
Don't install man pages.

Temporarily (again) rename the library to libkse.  It will be put back
to libpthread after more wide-spread testing.
2003-04-25 01:31:56 +00:00
deischen
4dac1223a3 Remove the i386-specific hack (well, we only run on i386 anyways)
to always set %gs when resuming a thread.

Install this library as libpthread instead of libkse.
2003-04-23 21:48:29 +00:00
deischen
f8027ceb95 Protect thread errno from being changed while operating
on behalf of the KSE.

Add a kse_reinit function to reinitialize a reused KSE.

Submitted by:	davidxu
2003-04-23 21:46:50 +00:00
deischen
6fa234aec9 Set the quantum for scope system threads to 0 (no quantum). 2003-04-22 21:32:32 +00:00
deischen
07d9646454 Add a working pthread_[gs]etconcurrency. Initial null implementation
provided by Sergey A. Osokin <osa@freebsd.org.ru>.

In order to test this on a single CPU machine, you need to:

    sysctl kern.threads.debug=1
    sysctl kern.threads.virtual_cpu=2
2003-04-22 20:29:16 +00:00
deischen
23350dd1f5 Add a couple asserts to pthread_cond_foo to ensure the (low-level)
lock level is 0.  Thus far, the threads implementation doesn't use
mutexes or condition variables so the lock level should be 0.

Save the return value when trying to schedule a new thread and
use this to return an error from pthread_create().

Change the max sleep time for an idle KSE to 1 minute from 2 minutes.

Maintain a count of the number of KSEs within a KSEG.

With these changes scope system threads seem to work, but heavy
use of them crash the kernel (supposedly VM bugs).
2003-04-22 20:28:33 +00:00
deischen
7b2d1b3027 Add an i386-specific hack to always set %gs.  There still seem
to be instances where the kernel doesn't properly save and/or
restore it.

Use noupcall and nocompleted flags in the KSE mailbox.  These
require kernel changes to work which will be committed sometime
later.  Things still work without the changes.

Remove the general kse entry function and use two different
functions -- one for scope system threads and one for scope
process threads.  The scope system function is not yet enabled
and we use the same function for all threads at the moment.

Keep a copy of the KSE stack for the case that a KSE runs
a scope system thread and uses the same stack as the thread
(no upcalls are generated, so a separate stack isn't needed).
This isn't enabled yet.

Use a separate field for the KSE waiting flag.  It isn't
correct to use the mailbox flags field.

The following fixes were provided by David Xu:

  o Initialize condition variable locks with thread versions
    of the low-level locking functions instead of the kse versions.

  o Enable threading before creating the first thread instead
    of after.

  o Don't enter critical regions when trying to malloc/free
    or call functions that malloc/free.

  o Take the scheduling lock when inheriting thread attributes.

  o Check the attribute's stack pointer instead of the
    attribute's stack size for null when allocating a
    thread's stack.

  o Add a kseg reinit function so we don't have to destroy and
    then recreate the same lock.

  o Check the return value of kse_create() and return an
    appropriate error if it fails.

  o Don't forget to destroy a thread's locks when freeing it.

  o Examine the correct flags word for checking to see if
    a thread is in a synchronization queue.

Things should now work on an SMP kernel.
2003-04-21 04:02:56 +00:00
deischen
4819c92a75 Remove duplicate $FreeBSD$ id. 2003-04-18 07:45:03 +00:00
deischen
d729efd111 Sorry folks; I accidentally committed a patch from what I was working
on a couple of days ago.  This should be the most recent changes.

Noticed by:	davidxu
2003-04-18 07:09:43 +00:00
deischen
f3007d8862 Comment out the addition of -g to CFLAGS. This snuck in from
my local version.
2003-04-18 05:06:56 +00:00
deischen
5d56aa9cb2 Revamp libpthread so that it has a chance of working in an SMP
environment.  This includes support for multiple KSEs and KSEGs.

The ability to create more than 1 KSE via pthread_setconcurrency()
is in the works as well as support for PTHREAD_SCOPE_SYSTEM threads.
Those should come shortly.

There are still some known issues which davidxu and I are working
on, but it'll make it easier for us by committing what we have.

This library now passes all of the ACE tests that libc_r passes
with the exception of one.  It also seems to work OK with KDE
including konqueror, kwrite, etc.  I haven't been able to get
mozilla to run due to lack of java plugin, so I'd be interested
to see how it works with that.

Reviewed by:	davidxu
2003-04-18 05:04:16 +00:00
deischen
e68f624d87 Add FIFO queueing locking operations based on atomic swap.
Modify thread errno for the new libpthread changes.

Reviewed by:	davidxu
2003-04-18 05:02:39 +00:00
deischen
9fe0b6bde6 Add architecture dependent atomic ops (atomic_swap only), KSE specific
data, and userland versions of [gs]etcontext().

Modify the UTS entry and exit functions to account for FPU validity
and format.
2003-04-18 05:00:52 +00:00
jeff
b16324e722 - Define a _spinunlock() function so that threading implementations may do
more complicated things than just setting the lock to 0.
 - Implement stubs for this function in libc and the two threading libraries
   that are currently in the tree.
2003-03-26 04:02:24 +00:00
davidxu
71e2d62a49 Backout last commit.
Requested by: jhb
2003-03-15 04:45:42 +00:00
davidxu
496ff1af45 Fix a bug in rwlock. When an rwlock was locked by reader threads, a
writer thread can block reader threads trying to get the read lock.
2003-03-14 01:02:47 +00:00
phantom
c4af5dc854 Fix cut'n'paste error
Noticed by:	julian
2003-03-05 20:50:03 +00:00
phantom
4f2d1b56c7 MFlibc_r: add and document pthread_attr_get_np() function. 2003-03-03 22:40:20 +00:00
davidxu
212ceca6b8 Fix compiling error. 2003-02-26 08:28:28 +00:00
mini
9318f6d82c Insert threads interrupted by a signal while running onto the run queue. 2003-02-23 21:15:25 +00:00
mini
dd3fb86399 Add signal logic to the build. 2003-02-23 21:14:08 +00:00
mini
f410bbff9b Deliver signals posted via an upcall to the appropriate thread. 2003-02-17 10:05:18 +00:00