Commit Graph

637 Commits

Author SHA1 Message Date
davidxu
0a270f4e6a Allocate the red zone and stack space together and then split the red zone
from the allocated space. The original code left the red zone unallocated, but that
space could be allocated by user code, so it provided no protection.
2004-10-06 08:11:07 +00:00
deischen
e06ce725da Add a wrapper for execve(). The exec'd process must be started with
the signal mask and pending signals of the calling thread.  These
are stored in userland in libpthread.

There is a small race condition in this patch which could cause
problems if a signal arrives after setting the (kernel) signal
mask and before exec'ing.  The thread's set of pending signals
also are not yet installed in the exec'd process.  Both of these
will be corrected with the addition of a special syscall.

Reported & Tested by:	Joost Bekkers <joost at jodocus dot org>
Reviewed by:	julian, davidxu
2004-09-26 06:50:15 +00:00
cognet
e06c72a787 _tcb_ctor takes two args. 2004-09-24 13:02:30 +00:00
ssouhlal
650224c287 Make sure we don't call _thr_start_sig_daemon() when SYSTEM_SCOPE_ONLY is defined. This makes libpthread usable on powerpc.
Approved by:	grehan (mentor), deischen
2004-09-24 06:36:31 +00:00
davidxu
8eb09c15b9 Add missing brackets. It was committed from wrong tree. 2004-08-26 02:41:01 +00:00
davidxu
a4dbc4af0d gcc -O2 cleanup. Tested for a long time.
Reviewed by: deischen
2004-08-25 23:42:40 +00:00
davidxu
f62c438c72 Pull debug symbols in for statically linked binary.
Reviewed by: deischen
2004-08-21 11:49:19 +00:00
davidxu
776807c108 Fix compile, s/tp_dtv/tp_tdv/g. 2004-08-16 14:07:38 +00:00
grehan
a14d72d426 Bring PPC up to date with latest TLS changes. 2004-08-16 05:41:39 +00:00
davidxu
4872917430 1. Add macro DTV_OFFSET to calculate dtv offset in tcb.
2. Export symbols needed by debugger.
2004-08-16 03:27:29 +00:00
davidxu
62ead65343 Add a file to collect all symbols that will be needed by the debugger. 2004-08-16 03:25:07 +00:00
dfr
4dd05c8c57 Add TLS support for i386 and amd64. 2004-08-15 16:28:05 +00:00
deischen
0340faafa1 As long as we have a knob to force system scope threads, why not have
a knob to force process scope threads.  If the environment variable
LIBPTHREAD_PROCESS_SCOPE is set, force all threads to be process
scope threads regardless of how the application creates them.  If
LIBPTHREAD_SYSTEM_SCOPE is set (forcing system scope threads), it
overrides LIBPTHREAD_PROCESS_SCOPE.

        $ # To force system scope threads
        $ LIBPTHREAD_SYSTEM_SCOPE=anything threaded_app
        $ # To force process scope threads
        $ LIBPTHREAD_PROCESS_SCOPE=anything threaded_app
2004-08-12 12:12:12 +00:00
davidxu
be9db1f7cb Check debugger suspending flag for system scope thread.
Reviewed by: deischen
2004-08-08 22:42:11 +00:00
deischen
628d407e84 Add a way to force 1:1 mode for libpthread. To do this, define
LIBPTHREAD_SYSTEM_SCOPE in the environment.

You can still force libpthread to be built in strictly 1:1 by
adding -DSYSTEM_SCOPE_ONLY to CFLAGS.  This is kept for archs
that don't yet support M:N mode.

Requested by:   rwatson
Reviewed by:    davidxu
2004-08-07 15:15:38 +00:00
davidxu
6f2afa324d s/TMDF_DONOTRUNUSER/TMDF_SUSPEND/g
Discussed with: deischen
2004-08-03 02:23:06 +00:00
davidxu
7ef444d527 Save context in kernel fashion, so it can be restored by
kse_switchin syscall.
2004-07-31 14:18:26 +00:00
davidxu
7dea99e896 Remove unused field. 2004-07-31 14:14:55 +00:00
davidxu
ab04048f48 Macro optimization; this increases context switch speed by about 2% on my
athlon64 machine.
2004-07-31 01:53:21 +00:00
grehan
2de38b8015 PPC MD bits for KSE. Runs test cases OK. Crippled to 1:1 mode for
the time being.
2004-07-19 12:19:04 +00:00
marcel
9514d00916 Don't include lock.h and pthread_md.h when we're being included by
libthread_db. Both headers are included separately.
2004-07-18 04:22:01 +00:00
davidxu
d2c424e30c Copy lwp id to thread mailbox. 2004-07-14 00:58:53 +00:00
davidxu
b241de2523 Call kse_switchin to switch context when being debugged. 2004-07-13 22:54:23 +00:00
davidxu
16b35fe1ee Remove unused symbols. 2004-07-13 22:53:56 +00:00
davidxu
5d24033b92 Let the debugger check signals; make SIGINFO work. 2004-07-13 22:52:11 +00:00
davidxu
4eaa7c96d8 If _libkse_debug is not zero, activate thread mode. 2004-07-13 22:51:03 +00:00
davidxu
2e3100b547 Add code to support thread debugging.
1. Add a global variable, _libkse_debug, which the debugger uses to identify
   libpthread. When the debugger writes a non-zero value to the variable,
   libpthread takes special action at context switch time: it checks the
   TMDF_DONOTRUNUSER flag, and if a thread has the flag set by the debugger,
   that thread won't be scheduled. When a thread leaves a KSE critical region,
   it checks the flag and, if the flag is set, relinquishes the CPU.

2. Add pq_first_debug to select a thread that the debugger allows to run.

3. Some names prefixed with _thr are renamed to use the _thread prefix.
2004-07-13 22:49:58 +00:00
davidxu
404e9eb472 kse_switchin ABI was changed in kernel. 2004-07-12 07:41:01 +00:00
davidxu
c72d8a4de4 Check pending signals: if a signal will be unblocked by
sigsuspend, the thread shouldn't wait; in the old code it could be
ignored.
When a signal handler is invoked during sigsuspend, the thread sees
two different signal masks. One is in the thread structure and can
be retrieved with sigprocmask(); the other is in the ucontext passed
as the third parameter of the signal handler. The former is the
sigsuspend mask ORed with sigaction's sa_mask and the current signal;
the latter is the mask in the thread structure before sigsuspend was
called. After the signal handler returns, the mask in the ucontext
should be copied into the thread structure and becomes the CURRENT
signal mask; then sigsuspend returns to user code.

Reviewed by: deischen
Tested by: Sean McNeil <sean@mcneil.com>
2004-06-12 07:40:01 +00:00
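
For illustration, a minimal standalone C sketch of the POSIX sigsuspend() mask
behavior described above (example code only, not part of the library):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void handler(int sig)
    {
        /* While this handler runs, the effective mask is the sigsuspend
           mask ORed with sa_mask and the delivered signal. */
        (void)sig;
    }

    int main(void)
    {
        struct sigaction sa;
        sigset_t blocked, suspend_mask, cur;

        sa.sa_handler = handler;
        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        sigemptyset(&blocked);
        sigaddset(&blocked, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &blocked, NULL); /* block SIGUSR1 before waiting */
        kill(getpid(), SIGUSR1);                    /* now pending */

        sigemptyset(&suspend_mask);                 /* unblock everything while waiting */
        sigsuspend(&suspend_mask);                  /* handler runs, then returns -1/EINTR */

        /* The pre-sigsuspend mask must be back in place afterwards. */
        pthread_sigmask(SIG_SETMASK, NULL, &cur);
        printf("SIGUSR1 blocked again: %d\n", sigismember(&cur, SIGUSR1));
        return 0;
    }
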
tjr
bdd43780eb Avoid clobbering the red zone when running on the new context's stack in
_amd64_restore_context().
2004-06-07 21:25:16 +00:00
cognet
0ff01bbcf8 Arm bits for libpthread. It has no chance of working and should be considered
stubs.
2004-05-14 12:21:29 +00:00
deischen
6920f21350 After forking and initializing the library to single-threaded
mode (where the forked thread is the one and only thread and
is marked as system scope), set the system scope flag before
initializing the signal mask.  This prevents trying to use
internal locks that haven't yet been initialized.

Reported by:	Dan Nelson <dnelson at allantgroup.com>
Reviewed by:	davidxu
2004-04-08 23:16:21 +00:00
davidxu
12db4373da Fix a POSIX conformance bug. POSIX says sigwait should return the error number
as its return value, not in errno.
2004-03-17 02:12:19 +00:00
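
A short example of the conforming behavior (the error number comes back as
sigwait()'s return value; errno is left alone):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        sigset_t set;
        int sig, err;

        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &set, NULL); /* waited-for signals must be blocked */
        raise(SIGUSR1);

        err = sigwait(&set, &sig);              /* POSIX: returns the error number */
        if (err != 0)
            fprintf(stderr, "sigwait: %s\n", strerror(err));
        else
            printf("received signal %d\n", sig);
        return 0;
    }
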
bde
1e78a65d3a Fixed a misspelling of 0 as NULL. 2004-03-14 05:27:26 +00:00
cperciva
81f9b2b83a style cleanup: Remove duplicate $FreeBSD$ tags.
These files had tags after the copyright notice,
inside the comment block (incorrect, removed),
and outside the comment block (correct).

Approved by:	rwatson (mentor)
2004-02-10 20:42:33 +00:00
deischen
ba84b16ebb Add cancellation point to sem_wait() and sem_timedwait() for pshared
semaphores.  Also add cancellation cleanup handlers to keep semaphores
in a consistent state.

Submitted in part by:	davidxu
Reviewed by:		davidxu
2004-02-06 15:20:56 +00:00
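
As a rough user-level analog of what this commit does inside the library (the
my_sem type and names here are hypothetical, not the library's code), a
cancellation cleanup handler keeps bookkeeping consistent if the thread is
cancelled while blocked in sem_wait():

    #include <pthread.h>
    #include <semaphore.h>

    struct my_sem {
        sem_t           sem;       /* the pshared backing semaphore */
        int             nwaiters;  /* state that must stay consistent */
        pthread_mutex_t lock;
    };

    static void wait_cleanup(void *arg)
    {
        struct my_sem *s = arg;

        /* Runs if the thread is cancelled inside sem_wait(), and on the
           normal path via pthread_cleanup_pop(1) below. */
        pthread_mutex_lock(&s->lock);
        s->nwaiters--;
        pthread_mutex_unlock(&s->lock);
    }

    int my_sem_wait(struct my_sem *s)
    {
        int ret;

        pthread_mutex_lock(&s->lock);
        s->nwaiters++;
        pthread_mutex_unlock(&s->lock);

        pthread_cleanup_push(wait_cleanup, s);
        ret = sem_wait(&s->sem);        /* cancellation point */
        pthread_cleanup_pop(1);         /* always undo the waiter count */
        return ret;
    }
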
davidxu
c03701303f libkse was renamed to libpthread. 2004-02-05 02:55:20 +00:00
deischen
aea62cb1af Provide a userland version of non-pshared semaphores and add cancellation
points to sem_wait() and sem_timedwait().  Also make sem_post signal-safe.
2004-02-03 05:50:07 +00:00
marcel
288be8413e Now that libpthread is the default threading library, remove the
compatibility link from libc_r to libpthread (previously a link
from libc_r to libkse).
2004-01-31 05:05:45 +00:00
deischen
674bc2cc3f Change libkse back to libpthread and make it the default
thread library for i386, amd64, and ia64.  For alpha
and sparc64 the library is not changed and remains libkse,
and links are installed so that libpthread -> libc_r.

The gcc -pthread option will be changed in a separate
commit so that it links to -lpthread instead of -lc_r.

Approved by:	re@
2004-01-30 12:13:17 +00:00
davidxu
47933921d0 Return EPERM if the current thread is not the mutex owner but tries to
unlock the mutex; the old code confused some programs by returning EINVAL.

Noticed by: bland
2004-01-17 03:09:57 +00:00
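
A small test of the behavior this commit establishes; with an error-checking
mutex, the non-owner's unlock must fail with EPERM rather than EINVAL:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static pthread_mutex_t m;

    static void *thr(void *arg)
    {
        (void)arg;
        /* This thread does not own the mutex; unlocking it must fail
           with EPERM rather than EINVAL. */
        int err = pthread_mutex_unlock(&m);
        printf("unlock by non-owner: %s (%d)\n", strerror(err), err);
        return NULL;
    }

    int main(void)
    {
        pthread_mutexattr_t attr;
        pthread_t t;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&m, &attr);

        pthread_mutex_lock(&m);
        pthread_create(&t, NULL, thr, NULL);
        pthread_join(t, NULL);
        pthread_mutex_unlock(&m);

        pthread_mutex_destroy(&m);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }
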
deischen
b5d5c1e2e6 Add a simple work-around for deadlocking on recursive read locks
on a rwlock while there are writers waiting.  We normally favor
writers but when a reader already has at least one other read lock,
we favor the reader.  We don't track all the rwlocks owned by a
thread, nor all the threads that own a rwlock -- we just keep
a count of all the read locks owned by a thread.

PR:	24641
2004-01-08 15:37:09 +00:00
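
The locking pattern from PR 24641 that the work-around addresses, shown as a
minimal sketch:

    #include <pthread.h>

    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

    void recursive_read(void)
    {
        pthread_rwlock_rdlock(&rw);  /* first read lock */
        /* If a writer queues up here, favoring it would deadlock us on
           the next line, so the library favors the existing reader. */
        pthread_rwlock_rdlock(&rw);  /* recursive read lock */
        pthread_rwlock_unlock(&rw);
        pthread_rwlock_unlock(&rw);
    }
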
davidxu
f3e2bcfe4c The kernel now supports a per-thread sigaltstack; follow that change to
enable sigaltstack for system scope threads.
2004-01-03 02:40:27 +00:00
davidxu
32583a78cf Return error code in errno, not in return value. 2004-01-02 00:38:42 +00:00
davidxu
abb0f21d58 Fix a typo. 2004-01-02 00:27:30 +00:00
davidxu
d6b72f0aa3 Forgot to commit this file for last commit. :( 2003-12-29 23:33:51 +00:00
davidxu
8d750bfb1b Implement sigaltstack() as per-thread. Currently only process scope threads
are supported; for system scope threads, the kernel signal bits need to be
changed.

Reviewed by: deischen
Tested on  : i386 amd64 ia64
2003-12-29 23:21:09 +00:00
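
A sketch of what per-thread sigaltstack() makes possible; each thread installs
and later disables its own alternate stack without affecting the others:

    #include <pthread.h>
    #include <signal.h>
    #include <stdlib.h>

    static void *worker(void *arg)
    {
        stack_t ss, off;

        (void)arg;
        ss.ss_sp = malloc(SIGSTKSZ);
        ss.ss_size = SIGSTKSZ;
        ss.ss_flags = 0;
        if (ss.ss_sp == NULL)
            return NULL;
        sigaltstack(&ss, NULL);      /* affects only the calling thread */

        /* ... work whose SA_ONSTACK handlers run on this stack ... */

        off = ss;
        off.ss_flags = SS_DISABLE;   /* disable before freeing the memory */
        sigaltstack(&off, NULL);
        free(ss.ss_sp);
        return NULL;
    }
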
davidxu
d2abf7e6f9 Correctly retrieve sigaction flags. 2003-12-28 12:20:04 +00:00
davidxu
7b3cca8521 Replace a comment with a more accurate one; the memory heap is now protected
by the new fork() wrapper.
2003-12-19 13:24:54 +00:00
davidxu
c7be5e14dc Code cleanup: remove unused macros and function prototypes. 2003-12-19 12:57:08 +00:00
deischen
1f8c5c54fd accept() returns a file descriptor when it succeeds, which is very
likely to be non-zero.  When leaving the cancellation point, check
the return value against -1 to see if cancellation should be
checked.  While I'm here, make the same change to connect() just
to be consistent.

Pointed out by: davidxu
2003-12-09 23:40:27 +00:00
deischen
ed86f0d4d5 Remove an unused struct definition. 2003-12-09 15:18:40 +00:00
deischen
212e86fbe9 Add cancellation points for accept() and connect(). 2003-12-09 15:16:27 +00:00
davidxu
7813286b21 Use a mutex instead of a low-level thread lock to implement spinlock; this
avoids blocking a signal that could otherwise be handled.
2003-12-09 02:37:40 +00:00
davidxu
22c52834eb Rename _thr_enter_cancellation_point to _thr_cancel_enter and rename
_thr_leave_cancellation_point to _thr_cancel_leave; add a parameter
to _thr_cancel_leave to indicate whether the cancellation point should be
checked. This gives us an option to skip the cancellation point check when
a syscall returns successfully, to avoid any leaks; currently creat(),
open() and fcntl(F_DUPFD) do not check the cancellation point after they
return successfully.

Replace some members in structure kse with bit flags to save some
memory.

Conditionally compile THR_ASSERT to nothing if _PTHREAD_INVARIANTS is
not defined.

Inline some small functions in thr_cancel.c.

Use __predict_false in thr_kern.c for some code that is executed only once.

Reviewed by: deischen
2003-12-09 02:20:56 +00:00
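
A self-contained sketch of the wrapper pattern described in the two commits
above (the names and signatures here are illustrative stand-ins; the real
_thr_cancel_enter/_thr_cancel_leave live inside the library and take the
current thread): cancellation is only acted on when the wrapped call failed,
so a successfully returned descriptor is never leaked.

    #include <fcntl.h>
    #include <stdbool.h>

    static _Thread_local bool cancel_pending;  /* stand-in for the per-thread flag */

    static void cancel_enter(void)
    {
        /* defer asynchronous cancellation, mark the cancellation point, ... */
    }

    static void cancel_leave(bool check)
    {
        if (check && cancel_pending) {
            /* act on the pending cancel request here */
        }
    }

    int wrapped_open(const char *path, int flags)
    {
        int ret;

        cancel_enter();
        ret = open(path, flags);
        cancel_leave(ret == -1);  /* never cancel after a successful open() */
        return ret;
    }
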
davidxu
3f57a77355 More reliably check timeout for pthread_mutex_timedlock. 2003-12-09 00:52:28 +00:00
deischen
e2ed712394 Go back to using rev 1.18 where thread locks are used instead of KSE
locks for [libc] spinlock implementation.  This was previously backed
out because it exposed a bug in ia64 implementation.

OK'd by:	marcel
2003-12-08 13:33:20 +00:00
marcel
4b6eafd82f Simplify the contexts created by the kernel and remove the related
flags. We now create asynchronous contexts or syscall contexts only.
Syscall contexts differ from the minimal ABI dictated contexts by
having the scratch registers saved and restored because that's where
we keep the syscall arguments and syscall return values.
Since this change affects KSE, have it use kse_switchin(2) for the
"new" syscall context.
2003-12-07 20:47:33 +00:00
peter
bf613741f0 Apply a second fix for stack alignment with libkse. This time, enter the
UTS with the stack correctly aligned.  Also, while here, use an indirect
jump rather than the pushq/ret hack.

This fixes threaded apps that use floating point for me, although
it hasn't solved all the problems.  It is an improvement though.
Preservation of the 128 byte red zone hasn't been resolved yet.

Approved by:  re (scottl)
2003-12-05 01:41:43 +00:00
davidxu
639818d0f3 Eliminate two pushls by using the call instruction directly; this really
helps branch prediction on the Intel P4.

Approved by: re (scottl)
2003-11-29 14:25:43 +00:00
davidxu
2cc3179bff 1. Optimize the KSE_LOCK_ACQUIRE and THR_LOCK_ACQUIRE macros to use static
fall-through branch prediction, as suggested in the Intel IA32 optimization guide.

2. Allocate the siginfo array separately to avoid the pthread being allocated at a
2K boundary, which hits an L1 address alias problem and slows down context
switches.

3. Simplify the context switch code by removing redundant code; the code size is
reduced, so it is expected to run faster.

Reviewed by: deischen
Approved by: re (scottl)
2003-11-29 14:22:29 +00:00
davidxu
e9f088c469 Remove a surplus mmap() call for the stack guard page in init_private; it is
done in init_main_thread. Also don't initialize the lock and lockuser again for
the initial thread; that is already done by _thr_alloc().

Reviewed by: deischen
Approved by: re (scottl)
2003-11-29 14:10:02 +00:00
deischen
b1926e392e Back out last change and go back to using KSE locks instead of thread
locks until we know why this breaks ia64.

Reported by:	marcel
2003-11-16 15:01:26 +00:00
davidxu
27be031526 If a thread in a critical region gets a synchronous signal, then under the
current signal handling mode there is no chance to handle the signal; something
must be wrong in the library, so just call kse_thr_interrupt to dump its core.
I have had this code for a long time but forgot to commit it.
2003-11-09 00:37:14 +00:00
davidxu
48868cdc76 Use a THR lock instead of a KSE lock to avoid the scheduler being blocked in a spinlock.
Reviewed by: deischen
2003-11-08 06:07:04 +00:00
deischen
75b2a9cea3 style(9)
Reviewed by:	bde
2003-11-05 18:19:24 +00:00
deischen
c99795abd5 Don't declare the malloc lock; use the declaration provided in libc.
Noticed by:	bde
2003-11-05 18:18:45 +00:00
davidxu
66e5a49572 Add pthread_atfork() source code. Dan forgot to commit this file. 2003-11-05 03:42:10 +00:00
deischen
1191fa7e32 Add an implementation for pthread_atfork().
Aside from the POSIX requirements for pthread_atfork(), when
fork()ing, take the malloc lock to keep malloc state consistent
in the child.

Reviewed by:	davidxu
2003-11-04 20:04:45 +00:00
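
The standard pthread_atfork() pattern this commit implements for the malloc
lock, shown here with a generic lock (the library itself pairs this with the
lock reinitialization added around the same time):

    #include <pthread.h>

    static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

    static void prepare(void) { pthread_mutex_lock(&state_lock); }   /* before fork() */
    static void parent(void)  { pthread_mutex_unlock(&state_lock); } /* in the parent */
    static void child(void)   { pthread_mutex_unlock(&state_lock); } /* in the child */

    void install_fork_handlers(void)
    {
        /* Holding the lock across fork() guarantees the child never
           inherits it in a locked, inconsistent state. */
        pthread_atfork(prepare, parent, child);
    }
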
deischen
573b809044 Add the ability to reinitialize libpthread's internal FIFO-queueing
locks.

Reviewed by:	davidxu
2003-11-04 20:01:38 +00:00
deischen
3153a078d8 Add the ability to reinitialize a spinlock (libc/libpthread
internal lock, not a pthread spinlock).

Reviewed by:	davidxu
2003-11-04 19:59:22 +00:00
deischen
b961438622 s/foo()/foo(void)/
Add a blank line after a variable declaration.
2003-11-04 19:58:12 +00:00
deischen
06593e0c1a Libpthread uses the convention that all of its (non-weak) symbols
begin with underscores and provide weak definitions without
underscores.  Make the pthread spinlock conform to this convention.
2003-11-04 19:56:12 +00:00
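
A minimal sketch of that convention using FreeBSD's __weak_reference() macro;
pthread_spin_lock is used here only as an example name and the body is a
placeholder:

    #include <sys/cdefs.h>
    #include <pthread.h>

    int
    _pthread_spin_lock(pthread_spinlock_t *lock)
    {
        /* ... the real, underscore-prefixed implementation ... */
        (void)lock;
        return (0);
    }

    /* The public name is only a weak alias, so applications can
       interpose their own pthread_spin_lock without clashing. */
    __weak_reference(_pthread_spin_lock, pthread_spin_lock);
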
deischen
3ccfc8a9e4 Add the ability to reinitialize a mutex (internally, not a userland
API).

Reviewed by:	davidxu
2003-11-04 19:53:32 +00:00
peter
9fdc368a9c Use amd64_set_fsbase() instead of calling sysarch() directly. 2003-10-23 06:12:57 +00:00
deischen
ef384a447b This test relies on the concurrency level being 1; make it so. 2003-10-20 04:23:49 +00:00
peter
aa62482800 Update context code for my last ABI breakage of mcontext. I'm worried
about the fpu code here.  It should be using fxsave/fxrstor instead of
saving/restoring the control word.  The SSE registers are used a lot in
gcc generated code on amd64.  I'm not sure how this all fits together
though.
2003-10-17 16:30:09 +00:00
deischen
b5f43f9f88 Don't forget to initialize the fake tcb when the kcb is allocated. 2003-10-12 16:50:45 +00:00
deischen
8df72a4176 Reverse the order of the first two arguments to _sparc64_enter_uts().
The first argument is the UTS function, the second argument is the
first argument to the UTS function.  Who's on first.
2003-10-09 20:52:17 +00:00
deischen
69eb1f0d34 Convert a couple of hardcoded values to constants. Make thr_getcontext()
return 0 when called the first time, and return 1 when resumed by
thr_setcontext().
2003-10-09 14:48:09 +00:00
deischen
801ac4642e Add preliminary sparc64 support to libpthread. This does not
yet work, but hopefully someone familiar with the sparc64
port can pick up the reins.

Submitted by:	jake
With mods by:	deischen
2003-10-09 02:32:28 +00:00
davidxu
97f8d9d1b5 Fix some comments for last commit. 2003-10-08 00:30:38 +00:00
davidxu
debdb208b6 Complete cancellation support for M:N threads: check the cancelling flag when
the thread state changes from RUNNING to WAIT and perform cancellation
operations for every cancellable state.

Reviewed by: deischen
2003-10-08 00:20:50 +00:00
davidxu
3ea48105a7 Use thread lock instead of scheduler lock to eliminate lock contention
for all wrapped syscalls under SMP.

Reviewed by: deischen
2003-10-08 00:17:13 +00:00
davidxu
bde7cb88e1 Only generate code for _LCK_ASSERT if _LCK_DEBUG is defined. 2003-10-02 03:24:26 +00:00
davidxu
a0841bce6f When the concurrency level is reduced and a KSE is exiting, make sure no other
threads still reference the KSE by migrating them to the initial KSE.

Reviewed by: deischen
2003-09-29 06:25:04 +00:00
davidxu
5910a67bb3 Remove unused variable. 2003-09-28 13:47:29 +00:00
marcel
4d437893ac Relink libc_r.a, libc_r.so and libc_r_p.so from libthr to libkse.
On ia64, where there's no libc_r at all, libkse is now the default
thread library by virtue of these links.

The reasons for this change are:
1. libkse is slated to become the default thread library anyway,
2. active development and maintenance is only present for libkse,
3. GNOME and KDE, both in the process of being supported on ia64,
   work better with KSE; even on ia64.
2003-09-27 23:27:19 +00:00
davidxu
528c35bfb3 pthread API should return error code in return value not in errno.
Reviewed by: deischen
2003-09-25 13:53:49 +00:00
davidxu
c0197faefe If syscall failed, restore old sigaction and return error to thread. 2003-09-25 06:23:40 +00:00
davidxu
13f5fe3849 As the comments in _mutex_lock_backout state, only the current thread
can clear its pointer to the mutex, not the thread doing the mutex
handoff. Because _mutex_lock_backout does not hold the scheduler
lock while testing THR_FLAGS_IN_SYNCQ and then reading the mutex
pointer, it is possible for the mutex owner to begin unlocking and
hand the mutex off to the current thread, with the mutex pointer
being cleared to NULL before the current thread reads it, so the
current thread ends up dereferencing a NULL pointer.
Fix the race by making mutex waiters clear their own mutex pointers.
While I am here, also save the inherited priority in the mutex for
PTHREAD_PRIO_INHERIT mutexes in mutex_trylock_common, just like we
do in mutex_lock_common.
2003-09-24 12:52:57 +00:00
davidxu
0414766399 Free the thread name memory if there is any. 2003-09-23 04:02:23 +00:00
davidxu
eed69ab82a Save and restore the timeout field for the signal frame just like we do
for the interrupted field.
Also, in _thr_sig_handler, retrieve the current signal mask from the kernel,
not from ucp; the latter is the pre-unioned mask, not the current signal mask.
2003-09-22 14:40:36 +00:00
davidxu
b429e20750 Fix FPU state restoring bug by jumping to right position. 2003-09-22 14:34:02 +00:00
davidxu
96155432c1 Print waitset correctly. 2003-09-22 00:40:23 +00:00
marcel
265b2be119 Make KSE_STACKSIZE machine dependent by moving it from thr_kern.c to
pthread_md.h. This commit only moves the definition; it does not
change it for any of the platforms. This more easily allows 64-bit
architectures (in particular) to pick a slightly larger stack size.
2003-09-19 23:28:13 +00:00
marcel
5888af272d _ia64_break_setcontext() now takes a mcontext_t. While here, define
THR_SETCONTEXT as PANIC(). The THR_SETCONTEXT macro is currently not
used, which means that the definition we had could be wrong, overly
pessimistic or unknowingly right. I don't like the odds...

The new _ia64_break_setcontext() and corresponding kernel fixes make
KSE mostly usable. There's still a case where we don't properly
restore a context and end up with a NaT consumption fault (typically
an indication for not handling NaT collection points correctly),
but at least now mutex_d works...
2003-09-19 23:00:28 +00:00
marcel
ef50cc82f8 Stop using the setcontext() syscall to restore an async context.
Instead use the break instruction with an immediate specially
created for us.
2003-09-19 22:54:05 +00:00
davidxu
85f4a8612b pthread api should return error code in return value, not in errno. 2003-09-18 12:19:28 +00:00
davidxu
e7f6efc66e Fix a typo. Also turn on PTHREAD_SCOPE_SYSTEM after fork(). 2003-09-16 02:03:39 +00:00
deischen
7332e364d0 Remove a comment that questioned why the size of the FPU
state for amd64 was twice as large as necessary.  Peter
recently fixed this, so the comment no longer applies.

Also, since the size of struct mcontext changed, adjust
the threads library version of get&set context to match.

FYI, any layout/size change to any arch's struct
mcontext will likely need some minor changes in libpthread.
2003-09-16 00:00:53 +00:00
davidxu
1d188498a3 Fix a bogus comment and assign sigmask in a critical region; use
SIG_CANTMASK to remove unmaskable signals from the mask.
2003-09-15 00:08:48 +00:00
davidxu
8d7f2fb390 Fix a bogus comment; sigmask must be maintained correctly since
it is inherited in pthread_create.
2003-09-15 00:06:46 +00:00
davidxu
42c958f29f 1. Allocate and free lock-related resources in _thr_alloc and _thr_free
   to avoid a potential memory leak. Also fix a bug in pthread_create: contention
   scope should be inherited when PTHREAD_INHERIT_SCHED is set, and check the
   right field for PTHREAD_INHERIT_SCHED; the scheduling inherit flag is in sched_inherit.
2. Execute hooks registered by atexit() on the thread stack, not on the scheduler
   stack.
3. Simplify some code in _kse_single_thread by calling the xxx_destroy functions.

Reviewed by: deischen
2003-09-14 22:52:16 +00:00
davidxu
40d912b465 When invoking an old-style signal handler, use the true traditional BSD style
to invoke it.

Reviewed by: deischen
2003-09-14 22:42:39 +00:00
davidxu
b622332e18 Respect the POSIX specification: the value returned from pthread_attr_getguardsize
should be the value passed to pthread_attr_setguardsize, not a rounded-up value.
Also fix a stack size matching bug in thr_stack.c: the stack matching code now
uses the number of pages rather than the byte length to match stack sizes, so,
for example, sizes of 512 bytes and 513 bytes both match a one-page stack.

Reviewed by: deischen
2003-09-14 22:39:44 +00:00
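
A quick check of the conforming behavior: getguardsize must echo back the
exact value given to setguardsize, with any page rounding kept internal:

    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_attr_t attr;
        size_t guard;

        pthread_attr_init(&attr);
        pthread_attr_setguardsize(&attr, 1000);      /* not page aligned */
        pthread_attr_getguardsize(&attr, &guard);
        printf("guard size reported: %zu\n", guard); /* 1000, not the page-rounded size */
        pthread_attr_destroy(&attr);
        return 0;
    }
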
davidxu
d08173b50b Avoid garbage bits in c_flags by directly assigning the value.
Reviewed by: deischen
2003-09-14 22:33:32 +00:00
davidxu
b117b7ad58 If the user is setting the process scope flag, clear the PTHREAD_SCOPE_SYSTEM
bit accordingly.

Reviewed by: deischen
2003-09-14 22:32:28 +00:00
davidxu
be9fb27b11 Check for invalid parameters and return EINVAL.
Reviewed by: deischen
2003-09-14 22:28:13 +00:00
davidxu
707186a985 The original pthread_once code has a memory leak if the pthread_once_t is used
in a shared library or any other dynamically allocated data block: once the
pthread_once_t is initialized, a mutex is allocated, and if we unload the shared
library or free that data block, there is no way to deallocate the mutex; the
result is a memory leak.
To fix this problem, we don't use the mutex field in pthread_once_t; instead we
use its state field plus an internal mutex and condition variable in libkse to
do any synchronization, and we introduce a third state, IN_PROGRESS, to wait on
if another thread is already invoking init_routine().
Also, while I am here, make pthread_once() conform to the pthread cancellation
point specification.

Reviewed by: deischen
2003-09-09 22:38:12 +00:00
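
A hedged, self-contained sketch of the approach described above, not the
library's actual code: the once object carries only a state word, and one
library-wide mutex and condition variable (plus an IN_PROGRESS state) do the
synchronization. The state and function names are illustrative, and the
cancellation handling the real code needs is omitted.

    #include <pthread.h>

    enum { ONCE_NEVER_DONE, ONCE_IN_PROGRESS, ONCE_DONE };

    struct my_once { int state; };

    static pthread_mutex_t once_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  once_cv   = PTHREAD_COND_INITIALIZER;

    int my_once(struct my_once *o, void (*init_routine)(void))
    {
        pthread_mutex_lock(&once_lock);
        while (o->state == ONCE_IN_PROGRESS)      /* someone else is running it */
            pthread_cond_wait(&once_cv, &once_lock);
        if (o->state == ONCE_NEVER_DONE) {
            o->state = ONCE_IN_PROGRESS;
            pthread_mutex_unlock(&once_lock);
            init_routine();                       /* run without the lock held */
            pthread_mutex_lock(&once_lock);
            o->state = ONCE_DONE;
            pthread_cond_broadcast(&once_cv);
        }
        pthread_mutex_unlock(&once_lock);
        return 0;
    }
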
davidxu
f2dd9e6365 Add code to support pthread spin lock.
Reviewed by: deischen
2003-09-09 06:57:51 +00:00
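
Basic usage of the pthread spin lock API added here:

    #include <pthread.h>

    static pthread_spinlock_t sl;
    static long counter;

    void counter_init(void)
    {
        pthread_spin_init(&sl, PTHREAD_PROCESS_PRIVATE);
    }

    void counter_bump(void)
    {
        pthread_spin_lock(&sl);     /* busy-waits instead of sleeping */
        counter++;
        pthread_spin_unlock(&sl);
    }

    void counter_fini(void)
    {
        pthread_spin_destroy(&sl);
    }
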
davidxu
521fd6195a Add a small piece of code to support pthread_rwlock_timedrdlock and
pthread_rwlock_timedwrlock.
2003-09-06 00:07:52 +00:00
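
Example usage of the timed read lock; the timeout is an absolute
CLOCK_REALTIME time:

    #include <pthread.h>
    #include <time.h>

    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

    int read_with_timeout(void)
    {
        struct timespec abstime;

        clock_gettime(CLOCK_REALTIME, &abstime); /* absolute deadline */
        abstime.tv_sec += 2;                     /* give up after about 2 seconds */

        int err = pthread_rwlock_timedrdlock(&rw, &abstime);
        if (err != 0)
            return -1;                           /* ETIMEDOUT or another error */
        /* ... read the shared data ... */
        pthread_rwlock_unlock(&rw);
        return 0;
    }
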
kan
3cec797a7b The caller is expected to set up the PIC register correctly before
jumping to .cerror. This means .cerror has to be present in the
same module as its consumers, or bad things will happen.
2003-09-05 18:08:19 +00:00
davidxu
82aeb9fc85 Add code to support the barrier synchronization object and implement
pthread_mutex_timedlock().

Reviewed by: deischen
2003-09-04 14:06:43 +00:00
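
Example usage of the barrier object added here; all threads rendezvous at
pthread_barrier_wait() and exactly one of them gets
PTHREAD_BARRIER_SERIAL_THREAD:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static pthread_barrier_t barrier;

    static void *worker(void *arg)
    {
        (void)arg;
        /* ... per-thread setup ... */
        int rc = pthread_barrier_wait(&barrier);
        if (rc == PTHREAD_BARRIER_SERIAL_THREAD)
            printf("exactly one thread gets the serial return value\n");
        /* ... phase that requires everyone to have finished setup ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[NTHREADS];

        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tids[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tids[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }
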
davidxu
480778ce65 Remove repeated macro THR_IN_CONDQ. 2003-09-04 07:46:26 +00:00
davidxu
962cbaebac Allow hooks registered by atexit() to run with the current thread pointer set;
without this change, my atexit test dumps core.
2003-09-04 05:24:53 +00:00
deischen
919bc52171 Don't assume sizeof(long) = sizeof(int) on x86; use int
instead of long types for low-level locks.

Add prototypes for some internal libc functions that are
wrapped by the library as cancellation points.

Add memory barriers to alpha atomic swap functions (submitted
by davidxu).

Requested by:	bde
2003-09-03 17:56:26 +00:00
davidxu
f61d4432c4 Move kse_wakeup_multi call to just before KSE_SCHED_UNLOCK.
Tested on: SMP
2003-09-03 00:21:10 +00:00
kan
6d12300f90 Rethink the way thr_libc.So is generated. Relying on GCC to extract
only needed symbols from libc_pic is not working on sparc64.

Requested by: jake
2003-09-02 19:37:11 +00:00
deischen
e8c434f7c5 Allow the concurrency level to be reduced.
Reviewed by:	davidxu
2003-08-30 12:09:16 +00:00
davidxu
6c1cd2954d Repost a masked signal to the kernel for system scope threads; this hardly ever
happens in the real world.

Reviewed by: deischen
2003-08-21 22:02:18 +00:00
davidxu
5546ba5027 _thr_sig_check_pending is also called by system scope threads when they leave
a critical region. We wrap some syscalls as thread cancellation points, and
when a syscall returns we call _thr_leave_cancellation_point; if a signal
arrives at that time it is buffered, and the buffered signals are processed
when the thread leaves _thr_leave_cancellation_point. To avoid messing up the
normal syscall errno, we should save and restore errno around the signal
handling code.
2003-08-20 13:43:35 +00:00
deischen
9371b61282 Add back a loop for up to PTHREAD_DESTRUCTOR_ITERATIONS to
destroy thread-specific data.  Display a warning when thread
specific data remains after PTHREAD_DESTRUCTOR_ITERATIONS.

Reviewed by:	davidxu
2003-08-20 02:34:14 +00:00
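
For context, the destructor mechanism that loop drives: at thread exit the
library keeps re-running destructors like this one, for up to
PTHREAD_DESTRUCTOR_ITERATIONS passes, because a destructor may itself install
fresh thread-specific data.

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_key_t key;

    static void free_buf(void *p)
    {
        free(p);    /* called at thread exit for each non-NULL value */
    }

    void tsd_setup(void)
    {
        pthread_key_create(&key, free_buf);
    }

    void tsd_use(void)
    {
        if (pthread_getspecific(key) == NULL)
            pthread_setspecific(key, malloc(256));
        /* ... the buffer is released by free_buf() when the thread exits ... */
    }
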
davidxu
00e9963efb Support printing 64-bit pointers and long integers.
Reviewed by: deischen
2003-08-19 08:29:33 +00:00
davidxu
0e9eb96aee Save and restore errno around sigprocmask. 2003-08-19 03:33:51 +00:00
davidxu
49037ec1b2 Directly call exit if no thread was ever created. This makes it safe to call
pthread_exit in main() without creating any threads.

Tested by: deischen
2003-08-18 04:03:08 +00:00
davidxu
3203dde90e Treat the initial thread as a system scope thread while KSE mode is not yet
activated, so we can protect some locking code from being interrupted by signal
handling. When KSE mode is turned on, reset the thread flag to process scope,
except when we are running in 1:1 mode, where we needn't turn it off.
Also remove some unused member variables in structure kse.

Tested by: deischen
2003-08-18 03:58:29 +00:00
davidxu
76d162d172 If threaded mode is not turned on yet, directly call __sys_sched_yield. 2003-08-16 13:02:45 +00:00
davidxu
e695327086 Keep the initial KSE and KSE group just like we keep the initial thread;
don't free them, so some code can still reference them.

Reviewed by: deischen
2003-08-16 05:22:20 +00:00
davidxu
206928815a Access the user-provided pointer outside the lock, and also check the case
where a key is less than 0.
2003-08-16 05:19:00 +00:00
marcel
8e205a2f04 Don't run verify directly as that would require the perl script to
have execute permissions. Run "perl verify" instead. Replace all
occurrences of the hardcoding of ./verify with $(VERIFY) to allow
it to be overridden as well.
2003-08-13 03:59:18 +00:00
davidxu
f9f5048801 Always set tcb for bound thread, and switch tcb for M:N thread at correct
time.
2003-08-13 01:49:07 +00:00
davidxu
2bb2e522ee Don't forget to set kcb_self. 2003-08-12 22:13:06 +00:00
davidxu
c5eb02d502 Correctly set current tcb. This fixes some IA64/KSE problems.
Reviewed by: deischen, julian
2003-08-12 08:01:34 +00:00
davidxu
09416455b1 Add some quick paths to exit the process when the signal action is the default
and the signal causes the process to exit.

Reviewed by: deischen
2003-08-10 22:35:46 +00:00
davidxu
4d41804986 Initialize rtld lock just before turning on thread mode and
uninitialize rtld lock after thread mode shutdown.
2003-08-10 22:30:20 +00:00
davidxu
b614f44d78 If thread mode is not activated yet, just call __sys_fork() directly;
otherwise mask all signals until fork() returns. In the child process,
we reset library state and do not restore the signal masks until we reach
a safe point.

Reviewed by: deischen
2003-08-10 22:20:41 +00:00
davidxu
349a8e7e9b Tweak the rtld lock to allow recursion on the reader lock and detect recursion
on the writer lock. This is a first cut at an rwlock for rtld.

Submitted by: deischen
2003-08-10 22:15:03 +00:00
davidxu
b213a2432e If thread mode is not activated yet, don't do extra work.
Reviewed by: deischen
2003-08-10 22:07:28 +00:00
davidxu
6245694474 o Add code to GC freed KSEs and KSE groups.
o Fix a bug in kse_free_unlocked(): kcb_dtor shouldn't be called because
  the KSE is cached and will be reused in _kse_alloc().

Reviewed by: deischen
2003-08-08 22:20:59 +00:00
kan
0faaaf6049 Allow the gcc driver to process the -r option itself; do not use -Wl,-r to
bypass it. Doing otherwise did not allow the compiler to detect and disable
conflicting options generated from the specs.

Reported by:	jake
2003-08-08 03:41:13 +00:00
marcel
0dae148272 Grok async contexts. When a thread is interrupted and an upcall
happens, the context of the interrupted thread is exported to
userland. Unlike most contexts, it will be an async context and
we cannot easily use our existing functions to set such a
context.
To avoid a lot of complexity that may possibly interfere with
the common case, we simply let the kernel deal with it. However,
we don't use the EPC based syscall path to invoke setcontext(2).
No, we use the break-based syscall path. That way the trapframe
will be compatible with the context we're trying to restore and
we save the kernel a lot of trouble. The kind of trouble we did
not want to go though ourselves...

However, we also need to set the thread's mailbox and there's no
syscall to help us out. To avoid creating a new syscall, we use
the context itself to pass the information to the kernel so that
the kernel can update the mailbox. This involves setting a flag
(_MC_FLAGS_KSE_SET_MBOX) and setting ifa (the address) and isr
(the value).
2003-08-07 08:03:05 +00:00
deischen
9dd8477032 Fix a typo. s/Line/Like/ 2003-08-06 06:12:54 +00:00
marcel
37f3261fb0 Avoid a level of indirection to get from the thread pointer to the
TCB. We know that the thread pointer points to &tcb->tcb_tp, so all
we have to do is subtract offsetof(struct tcb, tcb_tp) from the
thread pointer to get to the TCB. Any reasonably smart compiler will
translate accesses to fields in the TCB as negative offsets from TP.

In _tcb_set() make sure the fake TCB gets a pointer to the current
KCB, just like any other TCB. This fixes a NULL-pointer dereference
in _thr_ref_add() when it tried to get the current KSE.
2003-08-06 04:17:42 +00:00
deischen
b024e2e419 Don't call kse_set_curthread() when scheduling a new bound
thread.  It should only be called by the current kse and
never by a KSE on behalf of another.

Submitted by:	davidxu
2003-08-06 00:43:28 +00:00
marcel
5dfa7ee632 Fix an off by one error in the number of arguments passed to
makecontext(). We only supply 3, not 4. This is mostly harmless,
except that on ia64 the garbage can include NaT bits, resulting
in NaT consumption faults.
2003-08-06 00:23:40 +00:00
marcel
e45924adac Define the static TLS as an array of long double. This will guarantee
that the TLS is 16-byte aligned, as well as guarantee that the thread
pointer is 16-byte aligned as it points to struct ia64_tp. Likewise,
struct tcb and struct ksd are also guaranteed to be 16-byte aligned
(if they weren't already).
2003-08-06 00:17:15 +00:00
deischen
aff30d7ad6 Use auto LDT allocation for i386. 2003-08-05 23:09:22 +00:00
deischen
73db9e759e Rethink the MD interfaces for libpthread to account for
archs that can (or are required to) have per-thread registers.

Tested on i386, amd64; marcel is testing on ia64 and will
have some follow-up commits.

Reviewed by:	davidxu
2003-08-05 22:46:00 +00:00
marcel
ad82646deb Define THR_GETCONTEXT and THR_SETCONTEXT in terms of the userland
context functions. We don't need to enter the kernel anymore. The
contexts are compatible (ie a context created by getcontext() can
be restored by _ia64_restore_context()).

While here, make the use of THR_ALIGNBYTES and THR_ALIGN a no-op.
They are going to be removed anyway.
2003-08-05 19:37:20 +00:00