Commit Graph

69 Commits

Author SHA1 Message Date
Daniel Eischen
c0addafac3 Fix a race condition introduced when redzones were added. Use an
atomic operation to return and adjust the stack.

Submitted by:	luoqi
2006-02-24 22:03:10 +00:00
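
The race above comes from updating the shared stack bookkeeping with separate load and store steps; the fix collapses the update into one atomic operation. Below is a minimal sketch of the idea using C11 atomics; the real code uses the machine-dependent atomic_*() primitives, and the structure and function names here are hypothetical:

    #include <stdatomic.h>

    /* Hypothetical free-stack list node; the real layout lives elsewhere. */
    struct stack_node {
            struct stack_node *next;
    };

    static _Atomic(struct stack_node *) free_stacks;

    /*
     * Return a stack to the free list with a single atomic update, so a
     * concurrent allocation can never observe a half-updated list head.
     */
    static void
    stack_return(struct stack_node *sp)
    {
            struct stack_node *head = atomic_load(&free_stacks);

            do {
                    sp->next = head;
            } while (!atomic_compare_exchange_weak(&free_stacks, &head, sp));
    }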
Daniel Eischen
e059b9ce6c Remove an unused variable. 2005-07-29 21:49:47 +00:00
Peter Wemm
3b4399f6a7 Clean out the leftovers from the i386_set_gsbase() TLS conversion.
Like on libthr, there is an i386_set_gsbase() stub implementation here
to avoid libc.so.5 issues.  This should likely be a weak symbol and I
expect this will be fixed soon.

Approved by:	re
2005-06-29 23:15:36 +00:00
Peter Wemm
1ec622fdd6 Remove the special _amd64_set_gsbase() code for #ifdef COMPAT_32BIT, now
that the amd64 kernel implements i386_get/set_gsbase().  All the rest of
the ldt backwards compat code should go away soon.
2005-04-26 20:41:48 +00:00
Peter Wemm
72a79166ea Use the i386_set_gsbase() syscall if it is implemented in the kernel.
This is a little hairy here because the allocation and usage of this
functionality is split into two places in libpthread.
2005-04-14 00:13:20 +00:00
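
A sketch of the runtime fallback described above: prefer the direct i386_set_gsbase() syscall and only fall back to the older LDT route when the kernel does not implement it. The set_gsbase_via_ldt() helper is hypothetical; the real LDT setup lives in the i386 pthread_md code:

    #include <machine/sysarch.h>
    #include <errno.h>

    /* Hypothetical fallback that sets up %gs via an LDT descriptor the
     * old way. */
    extern int set_gsbase_via_ldt(void *base);

    static int
    set_tcb_base(void *base)
    {
            /* Newer kernels can set the %gs base directly. */
            if (i386_set_gsbase(base) == 0)
                    return (0);

            /* Older kernels: fall back to allocating an LDT slot. */
            if (errno == ENOSYS)
                    return (set_gsbase_via_ldt(base));
            return (-1);
    }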
Olivier Houchard
e0d6cac076 Use the new atomic_cmpset_32(). 2005-04-07 22:06:05 +00:00
Olivier Houchard
788d6eeca0 Bring in a more healthy version of the libpthread for arm, which uses
ARM_TP_ADDRESS.
2005-02-26 19:06:49 +00:00
Peter Wemm
9e6d5a03d4 i386_set_ldt() is not available when running 32 bit binaries on amd64
kernels.  Use the recently exposed direct-set routines instead.  This is
only activated for when we compile i386 support libraries on amd64.
2004-11-06 03:35:51 +00:00
Peter Wemm
2cdea9a39f Cosmetic tweaks to reduce diffs to the i386 counterpart. 2004-11-06 03:33:19 +00:00
Olivier Houchard
b341b08336 Partial support of KSE for arm. 2004-11-05 23:49:21 +00:00
Olivier Houchard
99feca3bae _tcb_ctor takes two args. 2004-09-24 13:02:30 +00:00
David Xu
1db81dc074 Add missing brackets. It was committed from the wrong tree. 2004-08-26 02:41:01 +00:00
David Xu
28f5d1b766 gcc -O2 cleanup. Tested for a long time.
Reviewed by: deischen
2004-08-25 23:42:40 +00:00
David Xu
f914e34db6 Fix compile, s/tp_dtv/tp_tdv/g. 2004-08-16 14:07:38 +00:00
Peter Grehan
391d4a3856 Bring PPC up to date with latest TLS changes. 2004-08-16 05:41:39 +00:00
David Xu
a002d437ea 1. Add macro DTV_OFFSET to calculate dtv offset in tcb.
2. Export symbols needed by debugger.
2004-08-16 03:27:29 +00:00
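
The DTV_OFFSET macro mentioned above boils down to an offsetof() computation, so the debugger (libthread_db) can locate the dtv pointer inside the TCB without hard-coded numbers. The layout below is simplified and illustrative; the real per-arch definitions are in pthread_md.h:

    #include <stddef.h>

    /* Simplified TCB; the real per-arch layout is in pthread_md.h. */
    struct tcb {
            struct tcb      *tcb_self;      /* required self pointer */
            void            *tcb_dtv;       /* dynamic thread vector */
            /* ... remaining per-thread fields ... */
    };

    /* Offset of the dtv pointer inside the TCB, exported for the debugger. */
    #define DTV_OFFSET      offsetof(struct tcb, tcb_dtv)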
Doug Rabson
99c8d0836d Add TLS support for i386 and amd64. 2004-08-15 16:28:05 +00:00
David Xu
aa087e0e12 Save context in kernel fashion, so it can be restored by
kse_switchin syscall.
2004-07-31 14:18:26 +00:00
David Xu
5f0d8cc327 Remove unused field. 2004-07-31 14:14:55 +00:00
David Xu
df6978352a Macro optimization; this increases context switch speed by about 2% on my
athlon64 machine.
2004-07-31 01:53:21 +00:00
Peter Grehan
0f47890401 PPC MD bits for KSE. Runs test cases OK. Crippled to 1:1 mode for
the time being.
2004-07-19 12:19:04 +00:00
David Xu
dd094c943d Copy lwp id to thread mailbox. 2004-07-14 00:58:53 +00:00
David Xu
e378b41cb4 Call kse_switchin to switch context when being debugged. 2004-07-13 22:54:23 +00:00
David Xu
a5dc4a8255 kse_switchin ABI was changed in kernel. 2004-07-12 07:41:01 +00:00
Tim J. Robbins
bf1d6a62b0 Avoid clobbering the red zone when running on the new context's stack in
_amd64_restore_context().
2004-06-07 21:25:16 +00:00
Olivier Houchard
cbed470d9c Arm bits for libpthread. It has no chance of working yet and should be
considered stubs.
2004-05-14 12:21:29 +00:00
Marcel Moolenaar
47eb01b822 Simplify the contexts created by the kernel and remove the related
flags. We now create asynchronous contexts or syscall contexts only.
Syscall contexts differ from the minimal ABI dictated contexts by
having the scratch registers saved and restored because that's where
we keep the syscall arguments and syscall return values.
Since this change affects KSE, have it use kse_switchin(2) for the
"new" syscall context.
2003-12-07 20:47:33 +00:00
Peter Wemm
30a62d30f4 Apply a second fix for stack alignment with libkse. This time, enter the
UTS with the stack correctly aligned.  Also, while here, use an indirect
jump rather than the pushq/ret hack.

This fixes threaded apps that use floating point for me, although
it hasn't solved all the problems.  It is an improvement though.
Preservation of the 128 byte red zone hasn't been resolved yet.

Approved by:  re (scottl)
2003-12-05 01:41:43 +00:00
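
The alignment fix above matters because the amd64 ABI requires a 16-byte aligned stack at function entry, and gcc emits aligned SSE stores that misbehave on a misaligned stack. A hedged C-level sketch of how a UTS entry stack might be prepared; the names are illustrative and the real work happens in the assembly context code:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Compute the initial stack pointer for entering the UTS: start at the
     * top of the stack and round down to a 16-byte boundary, as the amd64
     * ABI requires at function entry.
     */
    static void *
    uts_stack_top(void *stack_base, size_t stack_size)
    {
            uintptr_t sp = (uintptr_t)stack_base + stack_size;

            sp &= ~(uintptr_t)15;
            return ((void *)sp);
    }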
David Xu
508f442784 Eliminate two pushl instructions by using the call instruction directly;
this really helps branch prediction a lot on the Intel P4.

Approved by: re (scottl)
2003-11-29 14:25:43 +00:00
Peter Wemm
d1a499ad2a Use amd64_set_fsbase() instead of calling sysarch() directly. 2003-10-23 06:12:57 +00:00
Peter Wemm
eaa9864401 Update context code for my last ABI breakage of mcontext. I'm worried
about the fpu code here.  It should be using fxsave/fxrstor instead of
saving/restoring the control word.  The SSE registers are used a lot in
gcc generated code on amd64.  I'm not sure how this all fits together
though.
2003-10-17 16:30:09 +00:00
Daniel Eischen
077af0a4b4 Don't forget to initialize the fake tcb when the kcb is allocated. 2003-10-12 16:50:45 +00:00
Daniel Eischen
1f2215bcc4 Reverse the order of the first two arguments to _sparc64_enter_uts().
The first argument is the UTS function, the second argument is the
first argument to the UTS function.  Who's on first.
2003-10-09 20:52:17 +00:00
Daniel Eischen
97576c1c61 Convert a couple of hardcoded values to constants. Make thr_getcontext()
return 0 when called the first time, and return 1 when resumed by
thr_setcontext().
2003-10-09 14:48:09 +00:00
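
The 0/1 return convention described above mirrors setjmp()/longjmp(): the caller can tell whether it is on the original pass or has been resumed. A hypothetical call site is sketched below; the prototypes and helper names are assumptions, not the exact libpthread code:

    #include <stdint.h>
    #include <ucontext.h>

    /* Illustrative prototypes for the internal context primitives. */
    extern int      _thr_getcontext(mcontext_t *);
    extern void     _thr_setcontext(mcontext_t *, intptr_t, intptr_t *);
    extern void     run_scheduler(void);            /* hypothetical */

    static mcontext_t saved_ctx;

    static void
    checkpoint(void)
    {
            if (_thr_getcontext(&saved_ctx) == 0) {
                    /* First return: the context was just saved. */
                    run_scheduler();
            } else {
                    /* Second return: resumed here via _thr_setcontext(). */
            }
    }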
Daniel Eischen
203a51090b Add preliminary sparc64 support to libpthread. This does not
yet work, but hopefully someone familiar with the sparc64
port can pick up the reins.

Submitted by:	jake
With mods by:	deischen
2003-10-09 02:32:28 +00:00
David Xu
b1f054a092 Fix an FPU state restore bug by jumping to the right position. 2003-09-22 14:34:02 +00:00
Marcel Moolenaar
302e193264 Make KSE_STACKSIZE machine dependent by moving it from thr_kern.c to
pthread_md.h. This commit only moves the definition; it does not
change it for any of the platforms. This more easily allows 64-bit
architectures (in particular) to pick a slightly larger stack size.
2003-09-19 23:28:13 +00:00
Marcel Moolenaar
aec40a4c57 _ia64_break_setcontext() now takes a mcontext_t. While here, define
THR_SETCONTEXT as PANIC(). The THR_SETCONTEXT macro is currently not
used, which means that the definition we had could be wrong, overly
pessimistic or unknowingly right. I don't like the odds...

The new _ia64_break_setcontext() and corresponding kernel fixes make
KSE mostly usable. There's still a case where we don't properly
restore a context and end up with a NaT consumption fault (typically
an indication for not handling NaT collection points correctly),
but at least now mutex_d works...
2003-09-19 23:00:28 +00:00
Marcel Moolenaar
492eea0dcd Stop using the setcontext() syscall to restore an async context.
Instead use the break instruction with an immediate specially
created for us.
2003-09-19 22:54:05 +00:00
Daniel Eischen
a87a12715c Remove a comment that questioned why the size of the FPU
state for amd64 was twice as large as necessary.  Peter
recently fixed this, so the comment no longer applies.

Also, since the size of struct mcontext changed, adjust
the threads library version of get&set context to match.

FYI, any layout/size change to any arch's struct
mcontext will likely need some minor changes in libpthread.
2003-09-16 00:00:53 +00:00
Daniel Eischen
850108c0b4 Don't assume sizeof(long) == sizeof(int) on x86; use int
instead of long types for low-level locks.

Add prototypes for some internal libc functions that are
wrapped by the library as cancellation points.

Add memory barriers to alpha atomic swap functions (submitted
by davidxu).

Requested by:	bde
2003-09-03 17:56:26 +00:00
David Xu
44498c15e0 Don't forget to set kcb_self. 2003-08-12 22:13:06 +00:00
Marcel Moolenaar
778a4a9dd4 Grok async contexts. When a thread is interrupted and an upcall
happens, the context of the interrupted thread is exported to
userland. Unlike most contexts, it will be an async context and
we cannot easily use our existing functions to set such a
context.
To avoid a lot of complexity that may possibly interfere with
the common case, we simply let the kernel deal with it. However,
we don't use the EPC based syscall path to invoke setcontext(2).
No, we use the break-based syscall path. That way the trapframe
will be compatible with the context we're trying to restore and
we save the kernel a lot of trouble. The kind of trouble we did
not want to go through ourselves...

However, we also need to set the thread's mailbox and there's no
syscall to help us out. To avoid creating a new syscall, we use
the context itself to pass the information to the kernel so that
the kernel can update the mailbox. This involves setting a flag
(_MC_FLAGS_KSE_SET_MBOX) and setting ifa (the address) and isr
(the value).
2003-08-07 08:03:05 +00:00
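
The mailbox trick above reuses fields of the ia64 machine context to smuggle the store into the kernel. A hedged sketch of what the userland side amounts to; the field names follow the commit's description of the ia64 mcontext, and both they and the entry point should be treated as illustrative:

    #include <stdint.h>
    #include <sys/ucontext.h>

    /* Illustrative: restore an async (interrupted) context and have the
     * kernel update the thread mailbox on the way in, since there is no
     * separate syscall for that. */
    extern void _ia64_break_setcontext(mcontext_t *mc);

    static void
    restore_async_context(mcontext_t *mc, void *mbox_addr, intptr_t val)
    {
            mc->mc_flags |= _MC_FLAGS_KSE_SET_MBOX;   /* kernel: also set mailbox */
            mc->mc_special.ifa = (intptr_t)mbox_addr; /* address to store to */
            mc->mc_special.isr = val;                 /* value to store */
            _ia64_break_setcontext(mc);               /* break-based syscall path */
    }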
Daniel Eischen
39521cdb3c Fix a typo. s/Line/Like/ 2003-08-06 06:12:54 +00:00
Marcel Moolenaar
d7c68311ee Avoid a level of indirection to get from the thread pointer to the
TCB. We know that the thread pointer points to &tcb->tcb_tp, so all
we have to do is subtract offsetof(struct tcb, tcb_tp) from the
thread pointer to get to the TCB. Any reasonably smart compiler will
translate accesses to fields in the TCB as negative offsets from TP.

In _tcb_set() make sure the fake TCB gets a pointer to the current
KCB, just like any other TCB. This fixes a NULL-pointer dereference
in _thr_ref_add() when it tried to get the current KSE.
2003-08-06 04:17:42 +00:00
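
The indirection removal above works because the TCB embeds the structure the thread pointer addresses, so the TCB start can be recovered with pure pointer arithmetic. A minimal sketch with a simplified layout; the real definitions are in the ia64 pthread_md.h:

    #include <stddef.h>

    struct ia64_tp {
            void            *tp_dtv;        /* what the thread pointer addresses */
    };

    struct tcb {
            /* ... other per-thread fields ... */
            struct ia64_tp  tcb_tp;         /* TP points to &tcb->tcb_tp */
    };

    /*
     * Recover the TCB from the thread pointer without an extra load; the
     * compiler can fold field accesses into negative offsets from TP.
     */
    static inline struct tcb *
    tcb_from_tp(struct ia64_tp *tp)
    {
            return ((struct tcb *)((char *)tp - offsetof(struct tcb, tcb_tp)));
    }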
Marcel Moolenaar
119fb38770 Define the static TLS as an array of long double. This will guarantee
that the TLS is 16-byte aligned, as well as guarantee that the thread
pointer is 16-byte aligned as it points to struct ia64_tp. Likewise,
struct tcb and struct ksd are also guaranteed to be 16-byte aligned
(if they weren't already).
2003-08-06 00:17:15 +00:00
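
Using long double as the element type is a simple way to force 16-byte alignment on ia64 (where sizeof(long double) is 16) without compiler-specific attributes. A sketch, with sizes chosen only for illustration:

    /* Sizes are illustrative; the point is the element type, which makes
     * the block (and the thread pointer carved out of it) 16-byte aligned. */
    #define TLS_TCB_SIZE    16
    #define TLS_STATIC_SIZE 1024

    static long double static_tls[(TLS_STATIC_SIZE + TLS_TCB_SIZE) /
        sizeof(long double)];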
Daniel Eischen
199d58cbfc Use auto LDT allocation for i386. 2003-08-05 23:09:22 +00:00
Daniel Eischen
59c3b99b8f Rethink the MD interfaces for libpthread to account for
archs that can (or are required to) have per-thread registers.

Tested on i386, amd64; marcel is testing on ia64 and will
have some follow-up commits.

Reviewed by:	davidxu
2003-08-05 22:46:00 +00:00
Marcel Moolenaar
9a3ea63e79 Define THR_GETCONTEXT and THR_SETCONTEXT in terms of the userland
context functions. We don't need to enter the kernel anymore. The
contexts are compatible (ie a context created by getcontext() can
be restored by _ia64_restore_context()).

While here, make the use of THR_ALIGNBYTES and THR_ALIGN a no-op.
They are going to be removed anyway.
2003-08-05 19:37:20 +00:00
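
After this change the two macros stay entirely in userland: they call the assembly save/restore routines on the embedded mcontext, since getcontext()-style contexts and _ia64_restore_context() now agree on layout. Roughly as below, though the exact argument lists are an assumption:

    #define THR_GETCONTEXT(ucp)     \
            _ia64_save_context(&(ucp)->uc_mcontext)
    #define THR_SETCONTEXT(ucp)     \
            _ia64_restore_context(&(ucp)->uc_mcontext, 0, NULL)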
Marcel Moolenaar
50be3a75cc o In _ia64_save_context() clear the return registers except for r8.
   We write 1 for r8 in the context so that _ia64_restore_context()
   will return with a non-zero value. _ia64_save_context() always
   returns 0.
o  In _ia64_restore_context(), don't restore the thread pointer. It
   is not normally part of the context. Also, restore the return
   registers. We get called for contexts created by getcontext(),
   which means we have to restore all the syscall return values.
2003-08-05 19:33:01 +00:00