Commit Graph

593 Commits

Author SHA1 Message Date
David Xu
fca6ccde6e Check unhandled signals before a thread marks itself as DEAD;
this reduces the chance of the signal-loss problem found by
Peter Holm <peter@holm.cc>
2004-10-23 23:37:54 +00:00
David Xu
b4f9f84b96 1. Move thread list flags into a new separate member, and atomically
put DEAD threads on the GC list; this closes a race between pthread_join
   and thr_cleanup.
2. Introduce a mutex to protect tcb initialization; the TLS allocation and
   deallocation code in rtld appears to have no lock protection (or it is broken),
   and under stress testing memory gets corrupted.

Reviewed by: deischen
patch partly provided by: deischen
2004-10-23 23:28:36 +00:00
David Xu
39454d368f Decrease the reference count if we won't use the thread; this avoids a
memory leak in some cases.
2004-10-21 03:42:24 +00:00
David Xu
42c7735ce5 If a system scope thread didn't set a timeout, don't call the clock_gettime
syscall before and after sleeping.

Reviewed by: deischen
2004-10-08 22:57:30 +00:00
David Xu
2dad2d6bfc Use PTHREAD_SCOPE_SYSTEM to decide what should be done. 2004-10-07 14:23:15 +00:00
David Xu
e897f51327 Follow the kernel change: restore the signal mask correctly by using a
kse_thr_interrupt command.
2004-10-07 13:52:18 +00:00
David Xu
de97eeddd3 Allocate the red zone and stack space together and then split the red zone
from the allocated space. The original code left the red zone unallocated, but that
space could be allocated by user code, so it ended up providing no protection.
2004-10-06 08:11:07 +00:00
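A minimal sketch of the allocation pattern this commit describes, with illustrative
sizes and helper names (this is not the libpthread code itself): map the red zone and
the stack in one call, then revoke access to the red zone so nothing else can be
placed there.

    #include <sys/mman.h>
    #include <stddef.h>

    /* Hedged sketch: allocate the guard (red zone) and stack together, then
     * split the red zone off by removing its permissions. */
    void *
    alloc_thread_stack(size_t stacksize, size_t guardsize)
    {
        char *base;

        base = mmap(NULL, guardsize + stacksize, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);
        if (base == MAP_FAILED)
            return (NULL);
        /* make the red zone inaccessible; a stack overflow now faults */
        if (mprotect(base, guardsize, PROT_NONE) != 0) {
            munmap(base, guardsize + stacksize);
            return (NULL);
        }
        return (base + guardsize);  /* usable stack starts above the red zone */
    }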
Daniel Eischen
862e463a75 Add a wrapper for execve(). The exec'd process must be started with
the signal mask and pending signals of the calling thread.  These
are stored in userland in libpthread.

There is a small race condition in this patch which could cause
problems if a signal arrives after setting the (kernel) signal
mask and before exec'ing.  The thread's set of pending signals
also are not yet installed in the exec'd process.  Both of these
will be corrected with the addition of a special syscall.

Reported & Tested by:	Joost Bekkers <joost at jodocus dot org>
Reviewed by:	julian, davidxu
2004-09-26 06:50:15 +00:00
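A hedged illustration of the wrapper's intent (not the actual libpthread code): the
per-thread signal mask kept in userland has to be pushed into the kernel before the
exec, and the gap between the two steps is the race mentioned above.

    #include <signal.h>
    #include <unistd.h>
    #include <pthread.h>

    /* Sketch only: mirror the calling thread's userland signal mask into the
     * kernel right before exec'ing, so the new image inherits it. */
    int
    execve_wrapper(const char *path, char *const argv[], char *const envp[])
    {
        sigset_t mask;

        /* the mask the threading library tracks for the calling thread */
        pthread_sigmask(SIG_SETMASK, NULL, &mask);
        /* install it as the kernel mask; a signal arriving after this
         * point but before execve() is the race noted in the commit */
        sigprocmask(SIG_SETMASK, &mask, NULL);
        return (execve(path, argv, envp));
    }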
Olivier Houchard
99feca3bae _tcb_ctor takes two args. 2004-09-24 13:02:30 +00:00
Suleiman Souhlal
eea4bca56b Make sure we don't call _thr_start_sig_daemon() when SYSTEM_SCOPE_ONLY is defined. This makes libpthread usable on powerpc.
Approved by:	grehan (mentor), deischen
2004-09-24 06:36:31 +00:00
David Xu
1db81dc074 Add missing brackets. It was committed from the wrong tree. 2004-08-26 02:41:01 +00:00
David Xu
28f5d1b766 gcc -O2 cleanup. Tested for a long time.
Reviewed by: deischen
2004-08-25 23:42:40 +00:00
David Xu
0dabb2c8a0 Pull debug symbols in for statically linked binary.
Reviewed by: deischen
2004-08-21 11:49:19 +00:00
David Xu
f914e34db6 Fix compile, s/tp_dtv/tp_tdv/g. 2004-08-16 14:07:38 +00:00
Peter Grehan
391d4a3856 Bring PPC up to date with latest TLS changes. 2004-08-16 05:41:39 +00:00
David Xu
a002d437ea 1. Add macro DTV_OFFSET to calculate the dtv offset in the tcb.
2. Export symbols needed by the debugger.
2004-08-16 03:27:29 +00:00
David Xu
497c17e0ae Add a file to collect all symbols that will be needed by the debugger. 2004-08-16 03:25:07 +00:00
Doug Rabson
99c8d0836d Add TLS support for i386 and amd64. 2004-08-15 16:28:05 +00:00
Daniel Eischen
b9de27c005 As long as we have a knob to force system scope threads, why not have
a knob to force process scope threads.  If the environment variable
LIBPTHREAD_PROCESS_SCOPE is set, force all threads to be process
scope threads regardless of how the application creates them.  If
LIBPTHREAD_SYSTEM_SCOPE is set (forcing system scope threads), it
overrides LIBPTHREAD_PROCESS_SCOPE.

        $ # To force system scope threads
        $ LIBPTHREAD_SYSTEM_SCOPE=anything threaded_app
        $ # To force process scope threads
        $ LIBPTHREAD_PROCESS_SCOPE=anything threaded_app
2004-08-12 12:12:12 +00:00
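A sketch of how an init path could honor these knobs (assumed logic and a hypothetical
helper, not the committed code); LIBPTHREAD_SYSTEM_SCOPE wins when both are set.

    #include <stdlib.h>
    #include <pthread.h>

    /* Hypothetical helper: pick the default contention scope from the
     * environment, with the system-scope knob overriding the other. */
    static int
    default_contention_scope(void)
    {
        if (getenv("LIBPTHREAD_SYSTEM_SCOPE") != NULL)
            return (PTHREAD_SCOPE_SYSTEM);
        if (getenv("LIBPTHREAD_PROCESS_SCOPE") != NULL)
            return (PTHREAD_SCOPE_PROCESS);
        return (PTHREAD_SCOPE_PROCESS);     /* assumed library default */
    }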
David Xu
78f687539a Check the debugger suspend flag for system scope threads.
Reviewed by: deischen
2004-08-08 22:42:11 +00:00
Daniel Eischen
00be1d3d12 Add a way to force 1:1 mode for libpthread. To do this, define
LIBPTHREAD_SYSTEM_SCOPE in the environment.

You can still force libpthread to be built in strictly 1:1 by
adding -DSYSTEM_SCOPE_ONLY to CFLAGS.  This is kept for archs
that don't yet support M:N mode.

Requested by:   rwatson
Reviewed by:    davidxu
2004-08-07 15:15:38 +00:00
David Xu
4513fb36aa s/TMDF_DONOTRUNUSER/TMDF_SUSPEND/g
Discussed with: deischen
2004-08-03 02:23:06 +00:00
David Xu
aa087e0e12 Save the context in kernel fashion, so it can be restored by the
kse_switchin syscall.
2004-07-31 14:18:26 +00:00
David Xu
5f0d8cc327 Remove unused field. 2004-07-31 14:14:55 +00:00
David Xu
df6978352a Macro optimization; this increases context switch speed by about 2% on my
athlon64 machine.
2004-07-31 01:53:21 +00:00
Peter Grehan
0f47890401 PPC MD bits for KSE. Runs test cases OK. Crippled to 1:1 mode for
the time being.
2004-07-19 12:19:04 +00:00
Marcel Moolenaar
3271031518 Don't include lock.h and pthread_md.h when we're being included by
libthread_db. Both headers are included separately.
2004-07-18 04:22:01 +00:00
David Xu
dd094c943d Copy lwp id to thread mailbox. 2004-07-14 00:58:53 +00:00
David Xu
e378b41cb4 Call kse_switchin to switch context when being debugged. 2004-07-13 22:54:23 +00:00
David Xu
63db3fb215 Remove unused symbols. 2004-07-13 22:53:56 +00:00
David Xu
c7f5b2dbc5 Let the debugger check signals; make SIGINFO work. 2004-07-13 22:52:11 +00:00
David Xu
099e4630c1 If _libkse_debug is not zero, activate thread mode. 2004-07-13 22:51:03 +00:00
David Xu
566382df0a Add code to support thread debugging.
1. Add a global variable _libkse_debug; the debugger uses the variable to identify
   libpthread. When the variable is written to a non-zero value by the debugger,
   libpthread takes special action at context switch time: it checks the
   TMDF_DONOTRUNUSER flag, and if a thread has the flag set by the debugger, it
   won't be scheduled. When a thread leaves a KSE critical region, it checks
   the flag, and if the flag is set, the thread relinquishes the CPU.

2. Add pq_first_debug to select a thread which is allowed to run by the debugger.

3. Some names prefixed with _thr are renamed to a _thread prefix.
2004-07-13 22:49:58 +00:00
David Xu
a5dc4a8255 kse_switchin ABI was changed in kernel. 2004-07-12 07:41:01 +00:00
David Xu
5321c2a9b0 Check pending signals: if there is a signal that will be unblocked by
sigsuspend, the thread shouldn't wait; in the old code it might be
ignored.
When a signal handler is invoked in sigsuspend, the thread gets
two different signal masks: one is in the thread structure and
can be retrieved with sigprocmask(); the other is in the ucontext
passed as the third parameter of the signal handler. The former is
the sigsuspend mask ORed with the sigaction's sa_mask
and the current signal; the latter is the mask in the thread structure
before sigsuspend was called. After the signal handler returns,
the mask in the ucontext should be copied into the thread structure
and become the CURRENT signal mask, and then sigsuspend returns to
user code.

Reviewed by: deischen
Tested by: Sean McNeil <sean@mcneil.com>
2004-06-12 07:40:01 +00:00
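For reference, a small stand-alone example of the sigsuspend() semantics described
above: inside the handler the ucontext carries the pre-suspend mask, and once
sigsuspend() returns, that mask is current again (generic POSIX usage, not libpthread
internals).

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void
    handler(int sig, siginfo_t *info, void *ucp)
    {
        /* ((ucontext_t *)ucp)->uc_sigmask is the mask that was in effect
         * before sigsuspend(); sigprocmask() queried here would instead
         * show the suspend mask ORed with sa_mask and the current signal. */
        (void)sig; (void)info; (void)ucp;
    }

    int
    main(void)
    {
        struct sigaction sa;
        sigset_t suspend_mask;

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGALRM, &sa, NULL);

        sigemptyset(&suspend_mask);   /* wait with everything unblocked */
        alarm(1);                     /* arrange for a signal to arrive */
        sigsuspend(&suspend_mask);    /* returns -1/EINTR after the handler */
        /* the mask from before the sigsuspend() call is current again here */
        return (0);
    }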
Tim J. Robbins
bf1d6a62b0 Avoid clobbering the red zone when running on the new context's stack in
_amd64_restore_context().
2004-06-07 21:25:16 +00:00
Olivier Houchard
cbed470d9c ARM bits for libpthread. It has no chance of working yet and should be
considered stubs.
2004-05-14 12:21:29 +00:00
Daniel Eischen
b8bbeeda02 After forking and initializing the library to single-threaded
mode (where the forked thread is the one and only thread and
is marked as system scope), set the system scope flag before
initializing the signal mask.  This prevents trying to use
internal locks that haven't yet been initialized.

Reported by:	Dan Nelson <dnelson at allantgroup.com>
Reviewed by:	davidxu
2004-04-08 23:16:21 +00:00
David Xu
3128c7b24e Fix a POSIX conformance bug. POSIX says sigwait should return the error
number as its return value, not in errno.
2004-03-17 02:12:19 +00:00
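Since this conformance detail trips up callers, a short example of checking sigwait()
the POSIX way, with the error taken from the return value rather than errno (generic
usage, not library code).

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        sigset_t set;
        int sig, err;

        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        sigprocmask(SIG_BLOCK, &set, NULL); /* must be blocked before sigwait */
        raise(SIGUSR1);                     /* make it pending so sigwait returns */

        err = sigwait(&set, &sig);          /* error comes back as the return value */
        if (err != 0)
            fprintf(stderr, "sigwait: %s\n", strerror(err));
        else
            printf("got signal %d\n", sig);
        return (0);
    }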
Bruce Evans
2dc8d58f59 Fixed a misspelling of 0 as NULL. 2004-03-14 05:27:26 +00:00
Colin Percival
d623b765cf style cleanup: Remove duplicate $FreeBSD$ tags.
These files had tags after the copyright notice,
inside the comment block (incorrect, removed),
and outside the comment block (correct).

Approved by:	rwatson (mentor)
2004-02-10 20:42:33 +00:00
Daniel Eischen
4b4d63bdfe Add cancellation point to sem_wait() and sem_timedwait() for pshared
semaphores.  Also add cancellation cleanup handlers to keep semaphores
in a consistent state.

Submitted in part by:	davidxu
Reviewed by:		davidxu
2004-02-06 15:20:56 +00:00
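An application-side sketch of why cleanup handlers matter now that sem_wait() is a
cancellation point: once the semaphore has been acquired, a cleanup handler keeps it
consistent if the thread is cancelled during the guarded work (illustrative usage,
not the library's internal handlers).

    #include <pthread.h>
    #include <semaphore.h>

    static void
    release_sem(void *arg)
    {
        sem_post((sem_t *)arg);     /* run if cancelled during the work below */
    }

    void *
    worker(void *arg)
    {
        sem_t *sem = arg;

        sem_wait(sem);                          /* now a cancellation point */
        pthread_cleanup_push(release_sem, sem); /* keep the count consistent */
        /* ... work while holding the semaphore ... */
        pthread_cleanup_pop(1);                 /* pop and release normally too */
        return (NULL);
    }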
David Xu
cb10cbc878 libkse was renamed to libpthread. 2004-02-05 02:55:20 +00:00
Daniel Eischen
6bf50f98b1 Provide a userland version of non-pshared semaphores and add cancellation
points to sem_wait() and sem_timedwait().  Also make sem_post signal-safe.
2004-02-03 05:50:07 +00:00
Marcel Moolenaar
a99e07ba17 Now that libpthread is the default threading library, remove the
compatibility link from libc_r to libpthread (previously a link
from libc_r to libkse).
2004-01-31 05:05:45 +00:00
Daniel Eischen
bd224d495e Change libkse back to libpthread and make it the default
thread library for i386, amd64, and ia64.  For alpha
and sparc64 the library is not changed and remains libkse,
and links are installed so that libpthread -> libc_r.

The gcc -pthread option will be changed in a separate
commit so that it links to -lpthread instead of -lc_r.

Approved by:	re@
2004-01-30 12:13:17 +00:00
David Xu
e4dcaa6ee9 Return EPERM if the mutex owner is not the current thread but it tries to
unlock the mutex; the old code confused some programs by returning EINVAL.

Noticed by: bland
2004-01-17 03:09:57 +00:00
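A quick way to see the expected behavior from application code, using an
error-checking mutex so the failed unlock is reported rather than undefined (a hedged
example of the EPERM case, not a test from the tree).

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutex_t m;
        int err;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&m, &attr);

        err = pthread_mutex_unlock(&m);   /* never locked by this thread */
        if (err != 0)
            printf("unlock failed as expected: %s\n", strerror(err)); /* EPERM */

        pthread_mutex_destroy(&m);
        pthread_mutexattr_destroy(&attr);
        return (0);
    }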
Daniel Eischen
24f33bca1c Add a simple work-around for deadlocking on recursive read locks
on a rwlock while there are writers waiting.  We normally favor
writers but when a reader already has at least one other read lock,
we favor the reader.  We don't track all the rwlocks owned by a
thread, nor all the threads that own a rwlock -- we just keep
a count of all the read locks owned by a thread.

PR:	24641
2004-01-08 15:37:09 +00:00
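A toy model of the policy described above, using made-up structures (not libpthread's
own): writers are normally favored, but a thread that already holds read locks is
granted further read locks so a recursive rdlock cannot deadlock behind a waiting
writer.

    #include <stdbool.h>

    struct toy_rwlock {
        int readers;            /* read locks currently granted */
        int writers_waiting;    /* writers queued for the lock */
    };

    /* per-thread count of read locks held, as the commit describes */
    static __thread int my_rdlock_count;

    static bool
    may_grant_read(const struct toy_rwlock *rw)
    {
        if (rw->writers_waiting > 0 && my_rdlock_count == 0)
            return false;       /* new reader defers to waiting writers */
        return true;            /* recursive reader proceeds to avoid deadlock */
    }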
David Xu
7a29c72c07 The kernel now supports per-thread sigaltstack; follow the change to
enable sigaltstack for system scope threads.
2004-01-03 02:40:27 +00:00
David Xu
ac4476923c Return error code in errno, not in return value. 2004-01-02 00:38:42 +00:00
David Xu
f909113819 Fix a typo. 2004-01-02 00:27:30 +00:00
David Xu
4560f4f0b1 Forgot to commit this file for last commit. :( 2003-12-29 23:33:51 +00:00
David Xu
02eead1d0a Implement sigaltstack() as per-thread. Currently only process scope
threads are supported; for system scope threads, the kernel signal bits need to be
changed.

Reviewed by: deischen
Tested on  : i386 amd64 ia64
2003-12-29 23:21:09 +00:00
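With sigaltstack() now per-thread for process scope threads, each thread can install
its own alternate stack; a small generic illustration (standard API usage, not the
library change itself).

    #include <signal.h>
    #include <stdlib.h>

    /* Each thread that runs this installs its own alternate signal stack;
     * handlers registered with SA_ONSTACK will then use it in this thread. */
    void *
    thread_main(void *arg)
    {
        stack_t ss;

        ss.ss_sp = malloc(SIGSTKSZ);
        ss.ss_size = SIGSTKSZ;
        ss.ss_flags = 0;
        if (ss.ss_sp != NULL)
            sigaltstack(&ss, NULL);   /* affects only the calling thread */
        /* ... thread work ... */
        return (arg);
    }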
David Xu
fff5bd9ed9 Correctly retrieve sigaction flags. 2003-12-28 12:20:04 +00:00
David Xu
c7148de1a6 Replace a comment with a more accurate one; the memory heap is now
protected by the new fork() wrapper.
2003-12-19 13:24:54 +00:00
David Xu
22df7d650a Code cleanup: remove unused macros and function prototypes. 2003-12-19 12:57:08 +00:00
Daniel Eischen
6ed6ccb310 accept() returns a file descriptor when it succeeds, which is very
likely to be non-zero.  When leaving the cancellation point, check
the return value against -1 to see if cancellation should be
checked.  While I'm here, make the same change to connect() just
to be consistent.

Pointed out by: davidxu
2003-12-09 23:40:27 +00:00
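A sketch of the wrapper pattern being described, with stand-in helpers for the
library's internal enter/leave routines (hypothetical names): cancellation is only
re-tested when the call failed, so a successfully accepted descriptor cannot be
leaked.

    #include <sys/socket.h>
    #include <pthread.h>

    /* Stand-ins for the library's internal cancellation helpers. */
    static void cancel_enter(void) { /* mark entry to a cancellation point */ }
    static void cancel_leave(int check) { if (check) pthread_testcancel(); }

    int
    accept_wrapper(int s, struct sockaddr *addr, socklen_t *addrlen)
    {
        int ret;

        cancel_enter();
        ret = accept(s, addr, addrlen);
        /* a valid descriptor may well be 0 or any non-negative value, so
         * compare against -1 and only act on cancellation when it failed */
        cancel_leave(ret == -1);
        return (ret);
    }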
Daniel Eischen
fcebdd871d Remove an unused struct definition. 2003-12-09 15:18:40 +00:00
Daniel Eischen
cf25ae6974 Add cancellation points for accept() and connect(). 2003-12-09 15:16:27 +00:00
David Xu
cdbc3e83fa Use a mutex instead of a low-level thread lock to implement the spinlock;
this avoids blocking a signal when it could otherwise be handled.
2003-12-09 02:37:40 +00:00
David Xu
71679e629d Rename _thr_enter_cancellation_point to _thr_cancel_enter, rename
_thr_leave_cancellation_point to _thr_cancel_leave, and add a parameter
to _thr_cancel_leave to indicate whether the cancellation point should be
checked. This gives us an option not to check the cancellation point if
a syscall returns successfully, to avoid any leaks; currently I have
creat(), open() and fcntl(F_DUPFD) not check the cancellation point
after they successfully return.

Replace some members in structure kse with bit flags to save some
memory.

Conditionally compile THR_ASSERT to nothing if _PTHREAD_INVARIANTS is
not defined.

Inline some small functions in thr_cancel.c.

Use __predict_false in thr_kern.c for some code that is executed only once.

Reviewed by: deischen
2003-12-09 02:20:56 +00:00
David Xu
d5c854e890 More reliably check timeout for pthread_mutex_timedlock. 2003-12-09 00:52:28 +00:00
Daniel Eischen
80fecc4d18 Go back to using rev 1.18 where thread locks are used instead of KSE
locks for [libc] spinlock implementation.  This was previously backed
out because it exposed a bug in ia64 implementation.

OK'd by:	marcel
2003-12-08 13:33:20 +00:00
Marcel Moolenaar
47eb01b822 Simplify the contexts created by the kernel and remove the related
flags. We now create asynchronous contexts or syscall contexts only.
Syscall contexts differ from the minimal ABI dictated contexts by
having the scratch registers saved and restored because that's where
we keep the syscall arguments and syscall return values.
Since this change affects KSE, have it use kse_switchin(2) for the
"new" syscall context.
2003-12-07 20:47:33 +00:00
Peter Wemm
30a62d30f4 Apply a second fix for stack alignment with libkse. This time, enter the
UTS with the stack correctly aligned.  Also, while here, use an indirect
jump rather than the pushq/ret hack.

This fixes threaded apps that use floating point for me, although
it hasn't solved all the problems.  It is an improvement though.
Preservation of the 128 byte red zone hasn't been resolved yet.

Approved by:  re (scottl)
2003-12-05 01:41:43 +00:00
David Xu
508f442784 Eliminate two pushl instructions by using the call instruction directly;
this really helps branch prediction on the Intel P4.

Approved by: re (scottl)
2003-11-29 14:25:43 +00:00
David Xu
170422c2ef 1. Optimize the KSE_LOCK_ACQUIRE and THR_LOCK_ACQUIRE macros to use static
fall-through branch prediction as suggested in the Intel IA-32 optimization guide.

2. Allocate the siginfo array separately to avoid the pthread structure being
allocated on a 2K boundary, which hits an L1 address aliasing problem and slows
down context switches.

3. Simplify the context switch code by removing redundant code; the code size is
reduced, so it is expected to run faster.

Reviewed by: deischen
Approved by: re (scottl)
2003-11-29 14:22:29 +00:00
David Xu
5a8fe60d7e Remove a surplus mmap() call for the stack guard page in init_private; it
is done in init_main_thread. Also don't initialize the lock and lockuser again for
the initial thread; it is already done by _thr_alloc().

Reviewed by: deischen
Approved by: re (scottl)
2003-11-29 14:10:02 +00:00
Daniel Eischen
5303e94607 Back out last change and go back to using KSE locks instead of thread
locks until we know why this breaks ia64.

Reported by:	marcel
2003-11-16 15:01:26 +00:00
David Xu
0e17930dd7 If a thread in a critical region gets a synchronous signal, then under the
current signal handling mode there is no chance to handle the signal; something
must be wrong in the library, so just call kse_thr_interrupt to dump its core.
I have had this code for a long time, but forgot to commit it.
2003-11-09 00:37:14 +00:00
David Xu
38a53c6206 Use a THR lock instead of a KSE lock to avoid the scheduler being blocked in a spinlock.
Reviewed by: deischen
2003-11-08 06:07:04 +00:00
Daniel Eischen
7a1192c1d3 style(9)
Reviewed by:	bde
2003-11-05 18:19:24 +00:00
Daniel Eischen
94db4dd759 Don't declare the malloc lock; use the declaration provided in libc.
Noticed by:	bde
2003-11-05 18:18:45 +00:00
David Xu
dfde783410 Add pthread_atfork() source code. Dan forgot to commit this file. 2003-11-05 03:42:10 +00:00
Daniel Eischen
4c1123c1c0 Add an implementation for pthread_atfork().
Aside from the POSIX requirements for pthread_atfork(), when
fork()ing, take the malloc lock to keep malloc state consistent
in the child.

Reviewed by:	davidxu
2003-11-04 20:04:45 +00:00
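A generic example of the pthread_atfork() pattern this commit applies to the malloc
lock: take the lock before fork() and release it in both parent and child so the
protected state stays consistent in the child (standard usage; the lock here is a
stand-in, not malloc's actual lock).

    #include <pthread.h>

    static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

    static void prepare(void) { pthread_mutex_lock(&state_lock); }   /* before fork */
    static void parent(void)  { pthread_mutex_unlock(&state_lock); } /* in parent */
    static void child(void)   { pthread_mutex_unlock(&state_lock); } /* in child */

    static void
    install_fork_handlers(void)
    {
        pthread_atfork(prepare, parent, child);
    }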
Daniel Eischen
d6b826bac7 Add the ability to reinitialize libpthread's internal FIFO-queueing
locks.

Reviewed by:	davidxu
2003-11-04 20:01:38 +00:00
Daniel Eischen
15a06fd231 Add the ability to reinitialize a spinlock (libc/libpthread
internal lock, not a pthread spinlock).

Reviewed by:	davidxu
2003-11-04 19:59:22 +00:00
Daniel Eischen
264978955e s/foo()/foo(void)/
Add a blank line after a variable declaration.
2003-11-04 19:58:12 +00:00
Daniel Eischen
dc17710e7c Libpthread uses the convention that all of its (non-weak) symbols
begin with underscores and provide weak definitions without
underscores.  Make the pthread spinlock conform to this convention.
2003-11-04 19:56:12 +00:00
Daniel Eischen
ee574ccc3e Add the ability to reinitialize a mutex (internally, not a userland
API).

Reviewed by:	davidxu
2003-11-04 19:53:32 +00:00
Peter Wemm
d1a499ad2a Use amd64_set_fsbase() instead of calling sysarch() directly. 2003-10-23 06:12:57 +00:00
Daniel Eischen
5bb9c67cc7 This test relies on the concurrency level being 1; make it so. 2003-10-20 04:23:49 +00:00
Peter Wemm
eaa9864401 Update context code for my last ABI breakage of mcontext. I'm worried
about the fpu code here.  It should be using fxsave/fxrstor instead of
saving/restoring the control word.  The SSE registers are used a lot in
gcc generated code on amd64.  I'm not sure how this all fits together
though.
2003-10-17 16:30:09 +00:00
Daniel Eischen
077af0a4b4 Don't forget to initialize the fake tcb when the kcb is allocated. 2003-10-12 16:50:45 +00:00
Daniel Eischen
1f2215bcc4 Reverse the order of the first two arguments to _sparc64_enter_uts().
The first argument is the UTS function, the second argument is the
first argument to the UTS function.  Who's on first.
2003-10-09 20:52:17 +00:00
Daniel Eischen
97576c1c61 Convert a couple of hardcoded values to constants. Make thr_getcontext()
return 0 when called the first time, and return 1 when resumed by
thr_setcontext().
2003-10-09 14:48:09 +00:00
Daniel Eischen
203a51090b Add preliminary sparc64 support to libpthread. This does not
yet work, but hopefully someone familiar with the sparc64
port can pick up the reins.

Submitted by:	jake
With mods by:	deischen
2003-10-09 02:32:28 +00:00
David Xu
3128827980 Fix some comments for last commit. 2003-10-08 00:30:38 +00:00
David Xu
6e812b65c6 Complete cancellation support for M:N threads: check the cancelling flag
when the thread state changes from RUNNING to WAIT, and perform cancellation
operations for every cancellable state.

Reviewed by: deischen
2003-10-08 00:20:50 +00:00
David Xu
eb0fa623b7 Use thread lock instead of scheduler lock to eliminate lock contention
for all wrapped syscalls under SMP.

Reviewed by: deischen
2003-10-08 00:17:13 +00:00
David Xu
28e2ce478d Only generate code for _LCK_ASSERT if _LCK_DEBUG is defined. 2003-10-02 03:24:26 +00:00
David Xu
ee74732c91 When the concurrency level is reduced and a kse is exiting, make sure no
other threads are still referencing the kse by migrating them to the initial kse.

Reviewed by: deischen
2003-09-29 06:25:04 +00:00
David Xu
bd193b8ba7 Remove unused variable. 2003-09-28 13:47:29 +00:00
Marcel Moolenaar
ed74e02776 Relink libc_r.a, libc_r.so and libc_r_p.so from libthr to libkse.
On ia64, where there's no libc_r at all, libkse is now the default
thread library by virtue of these links.

The reasons for this change are:
1. libkse is slated to become the default thread library anyway,
2. active development and maintenance is only present for libkse,
3. GNOME and KDE, both in the process of being supported on ia64,
   work better with KSE; even on ia64.
2003-09-27 23:27:19 +00:00
David Xu
58effe49ae The pthread API should return the error code as the return value, not in errno.
Reviewed by: deischen
2003-09-25 13:53:49 +00:00
David Xu
a551fe8dc1 If the syscall failed, restore the old sigaction and return the error to the thread. 2003-09-25 06:23:40 +00:00
David Xu
3d10572d1a As the comments in _mutex_lock_backout state, only the current thread
can clear the pointer to the mutex, not the thread doing the mutex
handoff. Because _mutex_lock_backout does not hold the scheduler
lock while testing THR_FLAGS_IN_SYNCQ and then reading the mutex
pointer, it is possible for the mutex owner to begin unlocking and
handing the mutex off to the current thread, with the mutex pointer
cleared to NULL before the current thread reads it, so the
current thread ends up dereferencing a NULL pointer.
Fix the race by making mutex waiters clear their own mutex pointers.
While I am here, also save the inherited priority in the mutex for
PTHREAD_PRIO_INHERIT mutexes in mutex_trylock_common, just like what
we did in mutex_lock_common.
2003-09-24 12:52:57 +00:00
David Xu
cc640f7aaa Free the thread name memory if there is any. 2003-09-23 04:02:23 +00:00
David Xu
4841159528 Save and restore the timeout field for the signal frame, just like we did
for the interrupted field.
Also, in _thr_sig_handler, retrieve the current signal mask from the kernel, not
from ucp; the latter is the pre-unioned mask, not the current signal mask.
2003-09-22 14:40:36 +00:00
David Xu
b1f054a092 Fix FPU state restoring bug by jumping to right position. 2003-09-22 14:34:02 +00:00