Commit Graph

77 Commits

Author SHA1 Message Date
imp
1e5e0a78ab Merge from tbemd:
Convert from using MACHINE_ARCH to MACHINE_CPUARCH.  Hoist path statement
up into the top Makefile rather than repeating it on every arch Makefile.
2010-06-13 01:27:29 +00:00
rwatson
2368fc3e65 For un-prototyped static inline functions declared in pthread_md.h on
ia64, powerpc, and sparc64, use ANSI function headers and specifically
indicate the lack of arguments with 'void'.  Otherwise, warnings are
generated at WARNS=3, leading to a compile failure with -Werror.
2007-12-01 14:23:29 +00:00
deischen
f1ac56b04d WARNS=3'ify. 2007-11-30 17:20:29 +00:00
marcel
9dfca48522 Stylize:
o  avoid using a global register variable.
o  redefine struct ia64_tp as a union. We don't have to get to the
   fields themselves. We just need it to be of the right size with
   the right alignment.
2006-09-01 21:25:22 +00:00
marcel
657a4c30ac The ucontext is 16-byte aligned, which means that struct tcb is
16-byte aligned. Consequently, struct tcb is a multiple of 16
bytes in size. We need to make sure there's no padding after
struct ppc32_tp. We do this by explicitly adding the necessary
padding in front of it.
2006-09-01 19:13:36 +00:00
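The padding trick described in the commit above is easiest to see in a sketch. The structure and field names below are hypothetical, not the actual pthread_md.h definitions; the point is only that explicit padding placed before the final member keeps the compiler from adding implicit padding after it.

#include <stddef.h>
#include <stdint.h>

struct pthread;

/* Thread-pointer words that must be the very last thing in the TCB. */
struct ppc32_tp {
	void		*tp_dtv;	/* dynamic thread vector */
	uint32_t	_tp_pad;
	uint32_t	tp_tls;		/* word the thread pointer is derived from */
};

/*
 * The 16-byte alignment rounds sizeof(struct tcb) up to a multiple of 16.
 * Absorbing that rounding with explicit padding in front of tcb_tp means
 * no implicit padding can appear after it.
 */
struct tcb {
	struct pthread	*tcb_thread;
	void		*tcb_addr;
	char		_tcb_pad[16 - (2 * sizeof(void *) +
			    sizeof(struct ppc32_tp)) % 16];
	struct ppc32_tp	tcb_tp;		/* ends exactly at sizeof(struct tcb) */
} __attribute__((aligned(16)));

_Static_assert(offsetof(struct tcb, tcb_tp) + sizeof(struct ppc32_tp) ==
    sizeof(struct tcb), "no padding may follow tcb_tp");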
marcel
a081b45ede Stylize. Introduce ppc_{get|set}_tp() and ppc_{get|set}_tcb() to
abstract the magic that happens when deriving one or the other.
2006-09-01 17:52:13 +00:00
marcel
74d4bf1cd1 Implement TLS. 2006-09-01 06:17:16 +00:00
davidxu
bbcf536040 So that newer binutils can compile it, replace movl with movw
for segment register saving and restoring.

Submitted by: Diego 'Flameeyes' Petteno flameeyes at gentoo dot org
2006-05-07 08:19:04 +00:00
deischen
2c3357a0a6 Fix a race condition introduced when redzones were added. Use an
atomic operation to return and adjust the stack.

Submitted by:	luoqi
2006-02-24 22:03:10 +00:00
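The general pattern behind "use an atomic operation to return and adjust the stack" is a single atomic update in place of a racy read-modify-write. A minimal sketch using C11 atomics (the structure and list names are hypothetical; the actual fix used the kernel's atomic primitives):

#include <stdatomic.h>
#include <stddef.h>

struct stack {
	struct stack	*next;
	void		*base;
	size_t		 size;
};

static _Atomic(struct stack *) free_stacks;

/* Push a stack back onto the shared free list without taking a lock. */
static void
stack_free(struct stack *sp)
{
	struct stack *head = atomic_load(&free_stacks);

	do {
		sp->next = head;	/* retried if another thread raced us */
	} while (!atomic_compare_exchange_weak(&free_stacks, &head, sp));
}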
deischen
f0435773fa Remove an unused variable. 2005-07-29 21:49:47 +00:00
peter
db8830bc2d Clean out the leftovers from the i386_set_gsbase() TLS conversion.
Like on libthr, there is an i386_set_gsbase() stub implementation here
to avoid libc.so.5 issues.  This should likely be a weak symbol and I
expect this will be fixed soon.

Approved by:	re
2005-06-29 23:15:36 +00:00
peter
408a98eda0 Remove the special _amd64_set_gsbase() code for #ifdef COMPAT_32BIT, now
that the amd64 kernel implements i386_get/set_gsbase().  All the rest of
the ldt backwards compat code should go away soon.
2005-04-26 20:41:48 +00:00
peter
03d84df307 Use the i386_set_gsbase() syscall if it is implemented in the kernel.
This is a little hairy here because the allocation and usage of this
functionality is split into two places in libpthread.
2005-04-14 00:13:20 +00:00
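A minimal sketch of the runtime choice this describes (not the actual libpthread code): try the i386_set_gsbase() wrapper from <machine/sysarch.h>, and fall back to the older LDT-based setup on kernels that lack it. The fallback path is elided here.

#include <machine/sysarch.h>

static int
set_gs_base(void *tcb)
{
	if (i386_set_gsbase(tcb) == 0)
		return (0);		/* kernel sets the %gs base directly */

	/*
	 * Older kernel: the real code would allocate an LDT descriptor
	 * with i386_set_ldt() and load %gs with the resulting selector.
	 * That path is elided in this sketch.
	 */
	return (-1);
}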
cognet
c7b04f713a Use the new atomic_cmpset_32(). 2005-04-07 22:06:05 +00:00
cognet
23a2642656 Bring in a healthier version of libpthread for arm, which uses
ARM_TP_ADDRESS.
2005-02-26 19:06:49 +00:00
peter
09f7cb0cec i386_set_ldt() is not available when running 32 bit binaries on amd64
kernels.  Use the recently exposed direct-set routines instead.  This is
only activated for when we compile i386 support libraries on amd64.
2004-11-06 03:35:51 +00:00
peter
1028f02dbd Cosmetic tweaks to reduce diffs to the i386 counterpart. 2004-11-06 03:33:19 +00:00
cognet
2310011fe2 Partial support of KSE for arm. 2004-11-05 23:49:21 +00:00
cognet
e06c72a787 _tcb_ctor takes two args. 2004-09-24 13:02:30 +00:00
davidxu
8eb09c15b9 Add missing brackets. It was committed from the wrong tree. 2004-08-26 02:41:01 +00:00
davidxu
a4dbc4af0d gcc -O2 cleanup. Tested for a long time.
Reviewed by: deischen
2004-08-25 23:42:40 +00:00
davidxu
776807c108 Fix compile, s/tp_dtv/tp_tdv/g. 2004-08-16 14:07:38 +00:00
grehan
a14d72d426 Bring PPC up to date with latest TLS changes. 2004-08-16 05:41:39 +00:00
davidxu
4872917430 1. Add macro DTV_OFFSET to calculate dtv offset in tcb.
2. Export symbols needed by debugger.
2004-08-16 03:27:29 +00:00
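A DTV_OFFSET-style macro is just the byte offset of the dynamic thread vector pointer within the TCB, so a debugger holding only a raw TCB address can locate per-thread TLS data. A hypothetical sketch (the struct and field names are illustrative, not the real definitions):

#include <stddef.h>

struct tcb {
	struct tcb	*tcb_self;
	void		*tcb_dtv;	/* dynamic thread vector */
};

#define	DTV_OFFSET	offsetof(struct tcb, tcb_dtv)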
dfr
4dd05c8c57 Add TLS support for i386 and amd64. 2004-08-15 16:28:05 +00:00
davidxu
7ef444d527 Save the context in kernel fashion, so it can be restored by the
kse_switchin syscall.
2004-07-31 14:18:26 +00:00
davidxu
7dea99e896 Remove unused field. 2004-07-31 14:14:55 +00:00
davidxu
ab04048f48 Optimize macros; this increases context switch speed by about 2% on my
athlon64 machine.
2004-07-31 01:53:21 +00:00
grehan
2de38b8015 PPC MD bits for KSE. Runs test cases OK. Crippled to 1:1 mode for
the time being.
2004-07-19 12:19:04 +00:00
davidxu
d2c424e30c Copy lwp id to thread mailbox. 2004-07-14 00:58:53 +00:00
davidxu
b241de2523 Call kse_switchin to switch context when being debugged. 2004-07-13 22:54:23 +00:00
davidxu
404e9eb472 kse_switchin ABI was changed in kernel. 2004-07-12 07:41:01 +00:00
tjr
bdd43780eb Avoid clobbering the red zone when running on the new context's stack in
_amd64_restore_context().
2004-06-07 21:25:16 +00:00
cognet
0ff01bbcf8 Arm bits for libpthread. It has no chance of working yet and should be
considered stubs.
2004-05-14 12:21:29 +00:00
marcel
4b6eafd82f Simplify the contexts created by the kernel and remove the related
flags. We now create asynchronous contexts or syscall contexts only.
Syscall contexts differ from the minimal ABI dictated contexts by
having the scratch registers saved and restored because that's where
we keep the syscall arguments and syscall return values.
Since this change affects KSE, have it use kse_switchin(2) for the
"new" syscall context.
2003-12-07 20:47:33 +00:00
peter
bf613741f0 Apply a second fix for stack alignment with libkse. This time, enter the
UTS with the stack correctly aligned.  Also, while here, use an indirect
jump rather than the pushq/ret hack.

This fixes threaded apps that use floating point for me, although
it hasn't solved all the problems.  It is an improvement though.
Preservation of the 128 byte red zone hasn't been resolved yet.

Approved by:  re (scottl)
2003-12-05 01:41:43 +00:00
davidxu
639818d0f3 Eliminate two pushl instructions by using the call instruction directly;
this really helps branch prediction on the Intel P4.

Approved by: re (scottl)
2003-11-29 14:25:43 +00:00
peter
9fdc368a9c Use amd64_set_fsbase() instead of calling sysarch() directly. 2003-10-23 06:12:57 +00:00
peter
aa62482800 Update context code for my last ABI breakage of mcontext. I'm worried
about the fpu code here.  It should be using fxsave/fxrstor instead of
saving/restoring the control word.  The SSE registers are used a lot in
gcc generated code on amd64.  I'm not sure how this all fits together
though.
2003-10-17 16:30:09 +00:00
deischen
b5f43f9f88 Don't forget to initialize the fake tcb when the kcb is allocated. 2003-10-12 16:50:45 +00:00
deischen
8df72a4176 Reverse the order of the first two arguments to _sparc64_enter_uts().
The first argument is the UTS function, the second argument is the
first argument to the UTS function.  Who's on first.
2003-10-09 20:52:17 +00:00
deischen
69eb1f0d34 Convert a couple of hardcoded values to constants. Make thr_getcontext()
return 0 when called the first time, and return 1 when resumed by
thr_setcontext().
2003-10-09 14:48:09 +00:00
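The convention is analogous to setjmp()/longjmp(): thr_getcontext() returns 0 when it first saves the context and 1 when that saved context is resumed via thr_setcontext(). The prototypes below are paraphrased for the sketch, not copied from pthread_md.h.

#include <stddef.h>
#include <stdint.h>
#include <ucontext.h>

int	thr_getcontext(mcontext_t *);
void	thr_setcontext(mcontext_t *, intptr_t, intptr_t *);

static void
switch_away(mcontext_t *self, mcontext_t *other)
{
	if (thr_getcontext(self) == 0) {
		/* First return: our context is saved; run the other one. */
		thr_setcontext(other, 0, NULL);
	} else {
		/* Returned 1: another thread resumed us via thr_setcontext(). */
	}
}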
deischen
801ac4642e Add preliminary sparc64 support to libpthread. This does not
yet work, but hopefully someone familiar with the sparc64
port can pick up the reins.

Submitted by:	jake
With mods by:	deischen
2003-10-09 02:32:28 +00:00
davidxu
b429e20750 Fix an FPU state restore bug by jumping to the right position. 2003-09-22 14:34:02 +00:00
marcel
265b2be119 Make KSE_STACKSIZE machine dependent by moving it from thr_kern.c to
pthread_md.h. This commit only moves the definition; it does not
change it for any of the platforms. This more easily allows 64-bit
architectures (in particular) to pick a slightly larger stack size.
2003-09-19 23:28:13 +00:00
marcel
5888af272d _ia64_break_setcontext() now takes a mcontext_t. While here, define
THR_SETCONTEXT as PANIC(). The THR_SETCONTEXT macro is currently not
used, which means that the definition we had could be wrong, overly
pessimistic or unknowingly right. I don't like the odds...

The new _ia64_break_setcontext() and corresponding kernel fixes make
KSE mostly usable. There's still a case where we don't properly
restore a context and end up with a NaT consumption fault (typically
an indication for not handling NaT collection points correctly),
but at least now mutex_d works...
2003-09-19 23:00:28 +00:00
marcel
ef50cc82f8 Stop using the setcontext() syscall to restore an async context.
Instead use the break instruction with an immediate specially
created for us.
2003-09-19 22:54:05 +00:00
deischen
7332e364d0 Remove a comment that questioned why the size of the FPU
state for amd64 was twice as large as necessary.  Peter
recently fixed this, so the comment no longer applies.

Also, since the size of struct mcontext changed, adjust
the threads library version of get&set context to match.

FYI, any layout or size change to any arch's struct mcontext
will likely require minor changes in libpthread.
2003-09-16 00:00:53 +00:00
deischen
919bc52171 Don't assume sizeof(long) == sizeof(int) on x86; use int
instead of long types for low-level locks.

Add prototypes for some internal libc functions that are
wrapped by the library as cancellation points.

Add memory barriers to alpha atomic swap functions (submitted
by davidxu).

Requested by:	bde
2003-09-03 17:56:26 +00:00
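The memory-barrier requirement for a swap-based low-level lock can be sketched with C11 atomics rather than the alpha assembler the commit actually touched (names are illustrative): the swap that takes the lock needs acquire semantics, and the store that releases it needs release semantics.

#include <stdatomic.h>

static atomic_int lock_word;

static void
low_level_lock(void)
{
	/* Acquire: later loads/stores may not be reordered before the swap. */
	while (atomic_exchange_explicit(&lock_word, 1, memory_order_acquire))
		;	/* spin until the previous value was 0 */
}

static void
low_level_unlock(void)
{
	/* Release: earlier loads/stores may not be reordered after the store. */
	atomic_store_explicit(&lock_word, 0, memory_order_release);
}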
davidxu
2bb2e522ee Don't forget to set kcb_self. 2003-08-12 22:13:06 +00:00