Commit Graph

453 Commits

David Xu
5f1a6df490 Add some quick paths to exit the process when the signal action is the default and
the signal can cause the process to exit.

Reviewed by: deischen
2003-08-10 22:35:46 +00:00
David Xu
b2674f96cc Initialize rtld lock just before turning on thread mode and
uninitialize rtld lock after thread mode shutdown.
2003-08-10 22:30:20 +00:00
David Xu
7292e8d174 If thread mode is not activated yet, just call __sys_fork() directly;
otherwise mask all signals until fork() returns. In the child process,
we reset library state and do not restore the signal mask until we reach
a safe point.

Reviewed by: deischen
2003-08-10 22:20:41 +00:00
David Xu
3fceb84efd Tweak the rtld lock to allow recursion on the reader lock and detect recursion
on the writer lock. This is a first cut at an rwlock for rtld.

Submitted by: deischen
2003-08-10 22:15:03 +00:00
David Xu
94fd4648c3 If thread mode is not activated yet, don't do extra work.
Reviewed by: deischen
2003-08-10 22:07:28 +00:00
David Xu
1771242836 o Add code to GC freed KSEs and KSE groups
o Fix a bug in kse_free_unlocked(): kcb_dtor shouldn't be called because
  the KSE is cached and will be reused in _kse_alloc().

Reviewed by: deischen
2003-08-08 22:20:59 +00:00
Alexander Kabaev
dd83c5f0a2 Allow the gcc driver to process the -r option itself; do not use -Wl,-r to
bypass it. Doing otherwise did not allow the compiler to detect and disable
conflicting options generated from specs.

Reported by:	jake
2003-08-08 03:41:13 +00:00
Marcel Moolenaar
778a4a9dd4 Grok async contexts. When a thread is interrupted and an upcall
happens, the context of the interrupted thread is exported to
userland. Unlike most contexts, it will be an async context and
we cannot easily use our existing functions to set such a
context.
To avoid a lot of complexity that may possibly interfere with
the common case, we simply let the kernel deal with it. However,
we don't use the EPC based syscall path to invoke setcontext(2).
No, we use the break-based syscall path. That way the trapframe
will be compatible with the context we're trying to restore and
we save the kernel a lot of trouble. The kind of trouble we did
not want to go through ourselves...

However, we also need to set the thread's mailbox and there's no
syscall to help us out. To avoid creating a new syscall, we use
the context itself to pass the information to the kernel so that
the kernel can update the mailbox. This involves setting a flag
(_MC_FLAGS_KSE_SET_MBOX) and setting ifa (the address) and isr
(the value).
2003-08-07 08:03:05 +00:00
Daniel Eischen
39521cdb3c Fix a typo. s/Line/Like/ 2003-08-06 06:12:54 +00:00
Marcel Moolenaar
d7c68311ee Avoid a level of indirection to get from the thread pointer to the
TCB. We know that the thread pointer points to &tcb->tcb_tp, so all
we have to do is subtract offsetof(struct tcb, tcb_tp) from the
thread pointer to get to the TCB. Any reasonably smart compiler will
translate accesses to fields in the TCB as negative offsets from TP.

In _tcb_set() make sure the fake TCB gets a pointer to the current
KCB, just like any other TCB. This fixes a NULL-pointer dereference
in _thr_ref_add() when it tried to get the current KSE.
2003-08-06 04:17:42 +00:00
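
    A minimal sketch of the pointer arithmetic described above. The struct
    layouts are illustrative placeholders (the real struct tcb carries more
    state); only the tcb_tp member and the offsetof() subtraction come from
    the commit text.

        #include <stddef.h>

        struct ia64_tp {
                void    *tp_self;               /* illustrative member */
        };

        struct tcb {
                long            tcb_spare;      /* illustrative member */
                struct ia64_tp  tcb_tp;         /* the thread pointer points here */
        };

        /* Recover the TCB from the thread pointer by a constant offset,
         * avoiding an extra memory load. */
        static inline struct tcb *
        tp_to_tcb(void *tp)
        {
                return (struct tcb *)((char *)tp - offsetof(struct tcb, tcb_tp));
        }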
Daniel Eischen
fc40494359 Don't call kse_set_curthread() when scheduling a new bound
thread.  It should only be called by the current kse and
never by a KSE on behalf of another.

Submitted by:	davidxu
2003-08-06 00:43:28 +00:00
Marcel Moolenaar
4a997ca96e Fix an off by one error in the number of arguments passed to
makecontext(). We only supply 3, not 4. This is mostly harmless,
except that on ia64 the garbage can include NaT bits, resulting
in NaT consumption faults.
2003-08-06 00:23:40 +00:00
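
    A hedged illustration of the off-by-one: the count passed to makecontext()
    must equal the number of arguments that actually follow it. The entry
    function and its three int arguments are placeholders, not the library's
    real names.

        #include <ucontext.h>

        static void
        thr_entry(int a, int b, int c)
        {
                /* placeholder thread entry */
        }

        static void
        setup_ctx(ucontext_t *ucp, int a, int b, int c)
        {
                /* ucp is assumed to be initialized with getcontext() and a
                 * stack. Correct: exactly three arguments follow, so argc
                 * is 3, not 4. */
                makecontext(ucp, (void (*)(void))thr_entry, 3, a, b, c);
        }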
Marcel Moolenaar
119fb38770 Define the static TLS as an array of long double. This will guarantee
that the TLS is 16-byte aligned, as well as guarantee that the thread
pointer is 16-byte aligned as it points to struct ia64_tp. Likewise,
struct tcb and struct ksd are also guaranteed to be 16-byte aligned
(if they weren't already).
2003-08-06 00:17:15 +00:00
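
    A minimal sketch of the trick described above, with an arbitrary size:
    an array of long double is 16-byte aligned on ia64, so anything carved
    out of it (static TLS, struct tcb, struct ksd) inherits that alignment.

        /* Arbitrary size; only the element type matters for alignment. */
        static long double static_tls[1024 / sizeof(long double)];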
Daniel Eischen
199d58cbfc Use auto LDT allocation for i386. 2003-08-05 23:09:22 +00:00
Daniel Eischen
59c3b99b8f Rethink the MD interfaces for libpthread to account for
archs that can (or are required to) have per-thread registers.

Tested on i386, amd64; marcel is testing on ia64 and will
have some follow-up commits.

Reviewed by:	davidxu
2003-08-05 22:46:00 +00:00
Marcel Moolenaar
9a3ea63e79 Define THR_GETCONTEXT and THR_SETCONTEXT in terms of the userland
context functions. We don't need to enter the kernel anymore. The
contexts are compatible (ie a context created by getcontext() can
be restored by _ia64_restore_context()).

While here, make the use of THR_ALIGNBYTES and THR_ALIGN a no-op.
They are going to be removed anyway.
2003-08-05 19:37:20 +00:00
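
    A hedged sketch of what the macros might now expand to; the exact macro
    bodies and the argument lists of the _ia64_*_context() functions are
    assumptions here, not taken from pthread_md.h.

        /* Save/restore the userland context directly; no syscall needed. */
        #define THR_GETCONTEXT(ucp)     _ia64_save_context(&(ucp)->uc_mcontext)
        #define THR_SETCONTEXT(ucp)     _ia64_restore_context(&(ucp)->uc_mcontext)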
Marcel Moolenaar
50be3a75cc o In _ia64_save_context() clear the return registers except for r8.
   We write 1 for r8 in the context so that _ia64_restore_context()
   will return with a non-zero value. _ia64_save_context() always
   returns 0.
o  In _ia64_restore_context(), don't restore the thread pointer. It
   is not normally part of the context. Also, restore the return
   registers. We get called for contexts created by getcontext(),
   which means we have to restore all the syscall return values.
2003-08-05 19:33:01 +00:00
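
    The r8 handling above gives the pair a setjmp/longjmp-like convention.
    A small usage sketch; the prototype is an assumption and the control
    flow is the only point being illustrated.

        #include <ucontext.h>

        int _ia64_save_context(mcontext_t *);   /* assumed prototype */

        static void
        save_restore_example(mcontext_t *mc)
        {
                if (_ia64_save_context(mc) == 0) {
                        /* First return: context captured; always returns 0 here. */
                } else {
                        /* Resumed via _ia64_restore_context(): the saved r8 of 1
                         * makes the original call appear to return non-zero. */
                }
        }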
David Xu
3664d35cd5 -15 is incorrect for aligning the stack to 16 bytes; use ~15 instead. 2003-08-02 22:39:10 +00:00
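
    In two's complement, ~15 is ...11110000 while -15 is ...11110001, so only
    ~15 clears all four low bits. A small illustration with a hypothetical
    helper:

        #include <stdint.h>

        static uintptr_t
        align16(uintptr_t sp)
        {
                /* sp & -15 can still leave bit 0 set; sp & ~15 rounds the
                 * address down to a 16-byte boundary. */
                return (sp & ~(uintptr_t)15);
        }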
Daniel Eischen
51200f9b7c Take the same approach for i386 as that for ia64 and amd64. Use
the userland version of [gs]etcontext to switch between a thread
and the UTS scheduler (and back again).  This also fixes a bug
in i386 _thr_setcontext() which wasn't properly restoring the
context.

Reviewed by:	davidxu
2003-07-31 21:09:11 +00:00
David Xu
3807b4840c Fix some typos, correctly jump into UTS. 2003-07-31 08:50:01 +00:00
David Xu
64e64426d7 sysctlbyname needs size_t type, not int. 2003-07-31 08:26:58 +00:00
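
    A minimal example of the fix, using a hypothetical query of hw.ncpu:
    the length argument of sysctlbyname() is a size_t *, so an int variable
    is the wrong type for it.

        #include <sys/types.h>
        #include <sys/sysctl.h>

        static int
        get_ncpu(void)
        {
                int ncpu;
                size_t len = sizeof(ncpu);      /* must be size_t, not int */

                if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) == -1)
                        return (-1);
                return (ncpu);
        }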
Daniel Eischen
24dc93d4c2 Don't forget to unlock the scheduler lock. Somehow this got removed
from one of my last commits.  This only affected priority ceiling
mutexes.

Pointy hat to:	deischen
2003-07-30 13:28:05 +00:00
David Xu
41282b992f Simplify sigwait code a bit by using a waitset and removing oldsigmask.
Reviewed by: deischen
2003-07-27 06:46:34 +00:00
David Xu
36144f1e6e Fix typo. 2003-07-26 02:36:50 +00:00
Daniel Eischen
cc24e83605 Move idle kse wakeup to outside of regions where locks are held.
This eliminates ping-ponging of locks, where the idle KSE wakes
up only to find the lock it needs is being held.  This gives
little or no gain to M:N mode but greatly speeds up 1:1 mode.

Reviewed & Tested by:	davidxu
2003-07-23 02:11:07 +00:00
Daniel Eischen
f4c57e7baf Add missing arguments to _amd64_restore_context() when called from
THR_SETCONTEXT().
2003-07-20 12:41:38 +00:00
David Xu
1aa2ee9714 Override the libc function raise(); in threading mode, raise() will
send the signal to the current thread.

Reviewed by: deischen
2003-07-19 05:25:49 +00:00
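
    A minimal sketch of the overridden behavior, assuming the usual
    pthread_kill()-based approach; the real wrapper in the library may be
    structured differently.

        #include <errno.h>
        #include <pthread.h>
        #include <signal.h>

        int
        raise(int sig)
        {
                int err;

                /* In threaded mode the signal must go to the calling thread,
                 * not to the process at large. */
                err = pthread_kill(pthread_self(), sig);
                if (err != 0) {
                        errno = err;
                        return (-1);
                }
                return (0);
        }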
Daniel Eischen
5a201fddb7 Add some very beta amd64 bits. These will also need some tweaking. 2003-07-19 04:44:21 +00:00
Daniel Eischen
a735c7a6ea Clean up thread accounting. Don't reset a thread's timeslice
when it blocks; it only gets reset when it yields.

Properly set a thread's default stack guardsize.

Reviewed by:	davidxu
2003-07-18 02:46:55 +00:00
Daniel Eischen
596ea21c7f Add a preemption point when a mutex or condition variable is
handed-off/signaled to a higher priority thread.  Note that when
there are idle KSEs that could run the higher priority thread,
we still add the preemption point because it seems to take the
kernel a while to schedule an idle KSE.  The drawbacks are that
threads will be swapped more often between CPUs (KSEs) and
that there will be an extra userland context switch (the idle
KSE is still woken and will probably resume the preempted
thread).  We'll revisit this if and when idle CPU/KSE wakeup
times improve.

Inspired by:	Petri Helenius <pete@he.iki.fi>
Reviewed by:	davidxu
2003-07-18 02:46:30 +00:00
David Xu
090b336154 o Eliminate upcalls for PTHREAD_SYSTEM_SCOPE threads; such a thread
is now system-bound, and no upcall is generated when it blocks.

o Add the ability for libkse to run in pure 1:1 threading mode;
  defining SYSTEM_SCOPE_ONLY in the Makefile turns this option on.

o Eliminate the code that installed a dummy signal handler for the sigwait call.

o Add a hash table for finding threads.

Reviewed by: deischen
2003-07-17 23:02:30 +00:00
David Xu
6fbddb9816 Don't resume a sigwait thread if the signal is masked. 2003-07-09 22:30:55 +00:00
David Xu
d80384bc8d POSIX says that if a thread is waiting in sigwait() and a signal arrives
that is not in its wait set but is also not masked by the thread, the signal
may interrupt the thread and the signal action may be invoked by that thread;
sigwait() should then return with errno set to EINTR.
Also save and restore thread internal state (timeout and interrupted)
around signal handler invocation.
2003-07-09 14:30:51 +00:00
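
    One caller-visible consequence, as a hedged usage sketch that follows the
    error-reporting convention described above (failure with errno set to
    EINTR): callers that only care about signals in the wait set simply retry.
    The helper name is illustrative.

        #include <errno.h>
        #include <signal.h>

        static int
        wait_in_set(const sigset_t *set, int *sig)
        {
                for (;;) {
                        if (sigwait(set, sig) == 0)
                                return (0);
                        if (errno != EINTR)     /* per the behavior above */
                                return (-1);
                }
        }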
David Xu
9efd29f394 Restore signal mask correctly after fork(). 2003-07-09 01:39:24 +00:00
David Xu
0527fc8806 Save and restore thread's error code around signal handling.
Reviewed by: deischen
2003-07-09 01:06:12 +00:00
David Xu
db6104d462 Correctly print the signal mask; the bug was introduced by cut and paste
in the last commit.
2003-07-07 12:12:33 +00:00
David Xu
ace6720e77 Add a newline to debug message. 2003-07-07 04:32:17 +00:00
David Xu
91f7616aff Avoid accessing user-provided parameters in a critical region.
Reviewed by: deischen
2003-07-07 04:28:23 +00:00
David Xu
62e74c0cb2 Print the thread's scope; also print the signal mask for every thread, and print
it on one line.
2003-07-07 03:08:11 +00:00
David Xu
a1a9b0071e Correctly lock/unlock the signal lock. I must be in a bad state; I need to sleep. 2003-07-04 08:51:37 +00:00
David Xu
dfde101719 Always check and restore the previously set sigaction; also access user parameters
outside of the lock.
2003-07-04 07:49:06 +00:00
David Xu
f399623004 If select() is only used to sleep, convert it to nanosleep();
it then only needs to wait purely in user space.
2003-07-03 13:36:29 +00:00
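
    A hedged sketch of the conversion described above (names and structure are
    illustrative, not the library's wrapper): a select() call with no
    descriptor sets is only a timed sleep and can be satisfied without
    blocking in the kernel's select path.

        #include <sys/select.h>
        #include <sys/time.h>
        #include <time.h>

        static int
        sleepy_select(int nfds, fd_set *rfds, fd_set *wfds, fd_set *efds,
            struct timeval *tv)
        {
                struct timespec ts;

                if (nfds == 0 && rfds == NULL && wfds == NULL && efds == NULL &&
                    tv != NULL) {
                        TIMEVAL_TO_TIMESPEC(tv, &ts);
                        return (nanosleep(&ts, NULL));  /* plain sleep path */
                }
                return (select(nfds, rfds, wfds, efds, tv));
        }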
David Xu
8b258c151d Check whether the thread is in a critical region; only testing check_pending
is not enough.
2003-07-03 10:12:21 +00:00
Ruslan Ermilov
a8f9b6fdbf Style. 2003-07-02 20:52:39 +00:00
Ruslan Ermilov
cdae046749 Take thr_support.c out of SRCS so that it does not end up in libraries.
Record the missing dependency of thr_libc.So on the libc_pic.a library.

OK'ed by:	kan
2003-07-02 20:51:30 +00:00
David Xu
98c3b7810b Set unlock_mutex to 1 after locking the mutex.
Use THR_CONDQ_CLEAR, not THR_COND_SET, in cond_queue_deq; currently
cond_queue_deq is not used.
2003-07-02 14:12:37 +00:00
David Xu
eb2bb9e574 Fix typo. 2003-07-02 13:23:03 +00:00
Ruslan Ermilov
dfebdcdf7c Unbreak "make checkdpadd". 2003-07-01 15:37:35 +00:00
Ruslan Ermilov
0b3cbc5c38 Axe AINC.
Submitted by:	bde
2003-07-01 15:07:01 +00:00
David Xu
5af40bb68a Because there are only _SIG_MAXSIG elements in the thread siginfo array,
use [signal number - 1] as the subscript to access the array.
2003-06-30 06:16:50 +00:00
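
    A small illustration of the indexing rule: the array has _SIG_MAXSIG slots
    covering signals 1 through _SIG_MAXSIG, so signal sig lives in slot
    sig - 1. The array and helper names are illustrative, and _SIG_MAXSIG is
    assumed to come from the FreeBSD signal headers.

        #include <signal.h>

        static siginfo_t thr_siginfo[_SIG_MAXSIG];

        static void
        store_siginfo(int sig, const siginfo_t *info)
        {
                thr_siginfo[sig - 1] = *info;   /* not thr_siginfo[sig] */
        }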
David Xu
a913c5dd9d Remove surplus unlocking code I accidentally checked in. This won't be
triggered until the LDT entries are exhausted.
2003-06-30 05:49:06 +00:00
David Xu
a772047bc6 o Use a daemon thread to monitor signal events in the kernel; if the pending
signals change in the kernel, it will retrieve the pending set and
  try to find a thread to dispatch each signal to. The dispatch
  can be rolled back if the signal is no longer pending in the kernel.

o Create two functions, _thr_signal_init() and _thr_signal_deinit().
  All signal action settings are retrieved from the kernel when threading
  mode is turned on; after a fork(), the child process resets them to the
  user settings by calling _thr_signal_deinit(). When threading mode
  is not turned on, all signal operations are passed directly to the kernel.

o When a thread generates a synchronous signal and its context returns
  from the completed list, the UTS will retrieve the signal from its mailbox and try
  to deliver the signal to the thread.

o The context signal mask is now only used when delivering signals; a thread's
  current signal mask is always the one in the pthread structure.

o Remove the have_signals field in the pthread structure and replace it with
  psf_valid in pthread_signal_frame. When psf_valid is true, at context
  switch time the thread will back itself out of any mutex/condition
  internal queues and then begin to process signals. When a thread is
  running rather than blocked, check_pending indicates there are signals
  for the thread; after it is preempted and then resumed, the UTS will try to
  deliver signals to the thread.

o At signal delivery time, not only the thread's pending signals are
  scanned; the process's pending signals are scanned too.

o Change the sigwait code a bit: remove the sigwait field from pthread_wait_data
  and replace it with oldsigmask in the pthread structure. When a thread calls
  sigwait(), its current signal mask is backed up to oldsigmask and the waitset
  is copied to its signal mask; when the thread gets a signal in the
  waitset range, its current signal mask is restored from oldsigmask.
  These steps are done in an atomic fashion.

o Two additional POSIX APIs are implemented: sigwaitinfo() and sigtimedwait().

o Signal code locking is better than before; there are fewer race conditions.

o Temporarily disable most of the code in _kse_single_thread, as it is not safe
  after fork().
2003-06-28 09:55:02 +00:00
David Xu
d15cbd7dc0 Use the value returned by mmap().
Reviewed by: deischen
2003-06-28 09:48:05 +00:00
David Xu
8d5f23a1f9 Temporarily disable the rwlock-based code and replace it with low-level KSE locking
code; once rtld-elf and libkse can cooperate better, that code can be
restored.

Reviewed by: deischen
2003-06-28 09:47:22 +00:00
David Xu
52d9c77df4 Write the new thread pointer back only on success.
Reviewed by: deischen
2003-06-28 09:41:59 +00:00
David Xu
a56b526b51 After a thread is interrupted by a signal, it should relock the mutex.
Reviewed by: deischen
2003-06-28 09:40:57 +00:00
David Xu
a07576d63c If the thread is exiting, just return. The kse_thr_interrupt interface
was changed and now needs a signal parameter; pass -1 to it, which indicates
that the syscall should be interrupted.

Reviewed by: deischen
2003-06-28 09:39:35 +00:00
Marcel Moolenaar
fd62f5ca46 Implement _ia64_save_context() and _ia64_restore_context(). Both
functions are derived from the swapctx() and restorectx() (resp)
from sys/ia64/ia64/context.s. The code is expected to be 99%
correct, but has not yet been tested.

Note that with these functions operating on mcontext_t, we also
created the foundation upon which we can implement getcontext(2)
and setcontext(2) replacements. It's not guaranteed that the use
of these syscalls and _ia64_{save|restore}_context() on the same
ucontext_t is actually going to work. Replacing the syscalls is
now trivially achieved.

This commit completes the ia64 port of libpthread itself (modulo
testing and bugfixes).
2003-06-27 06:15:13 +00:00
Marcel Moolenaar
b51f305ec1 Implement _ia64_enter_uts(). The purpose of this function is to switch
the register stack and memory stack and call the function given to it.

While here, provide empty, non-working, stubs for the context functions
(_ia64_save_context() and _ia64_restore_context()) so that anyone can at
least compile libkse from CVS sources. Real implementations will follow
soon.
2003-06-26 05:40:15 +00:00
Marcel Moolenaar
6351f43f14 Implement _thr_enter_uts() and _thr_switch() as inline functions to
minimize the amount and complexity of assembly code that needs to be
written. This way the core functionality is spread over 3 elementary
functions that don't have to do anything that can more easily and
more safely be done in C. As such, assembly code will only have to
know about the definition of mcontext_t.
The runtime cost of not having these functions being inlined is less
important than the cleanliness and maintainability of the code at
this stage of the implementation.
2003-06-26 03:55:58 +00:00
Marcel Moolenaar
5858b0cea8 Explicitly widen int types before casting to pointer types. On 64-bit
platforms the compiler warns about incompatible integer/pointer casts
and on ia64 this generally is bad news. We know that what we're doing
here is valid/correct, so suppress the warning. No functional change.

Sleeps better: marcel
2003-06-24 00:37:26 +00:00
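
    The pattern in question, as a minimal example; intptr_t stands in here for
    whatever pointer-sized integer type the code actually uses.

        #include <stdint.h>

        static void *
        int_to_ptr(int v)
        {
                /* Widen to a pointer-sized integer first; a direct
                 * (void *)v draws a warning on 64-bit targets. */
                return ((void *)(intptr_t)v);
        }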
Marcel Moolenaar
82468d1f27 Untangle the inter-dependency of kse types and ksd types/functions
by moving the definition of struct ksd to pthread_md.h and removing
the inclusion of ksd.h from thr_private.h (which has the definition
of struct kse and kse_critical_t). This allows ksd.h to have inline
functions that use struct kse and kse_critical_t and generally
yields a cleaner implementation at the cost of not having all ksd
related types/definitions in one header.

Implement the ksd functionality on ia64 by using inline functions
and permanently remove ksd.c from the ia64 specific makefile.

This change does not clean up the i386 specific version of ksd.h.

NOTE: The ksd code on ia64 abuses the tp register in the same way
as it is abused in libthr in that it is incompatible with the
runtime specification. This will be addressed when support for TLS
hits the tree.
2003-06-23 23:15:06 +00:00
Marcel Moolenaar
46559d7101 Change the definition of _ksd_curkse, _ksd_curthread and
_ksd_readandclear_tmbx to be function-like. That way we
can define them as inline functions or create prototypes
for them.

This change allows the ksd interface on ia64 to be fully
inlined.
2003-06-23 09:49:16 +00:00
Marcel Moolenaar
ca4b6c293b Define THR_{G|S}ETCONTEXT to expand to {g|s}etcontext(2).
Define THR_ALIGN to align at 16-byte boundaries.
2003-06-23 04:52:09 +00:00
Marcel Moolenaar
97caaa6522 Implement atomic_swap_{int|long|ptr}. Define atomic_swap_ptr as a
macro that expands to atomic_swap_long() to avoid compiler warnings
caused by incompatible pointer passing.
2003-06-23 04:44:43 +00:00
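
    A hedged sketch of the macro shape described above; the argument list of
    atomic_swap_long() (value swapped in, old value returned through a
    pointer) is an assumption, not taken from the ia64 atomic header.

        /* Expand the pointer flavor to the long flavor so the compiler does
         * not warn about incompatible pointer arguments. */
        #define atomic_swap_ptr(dst, val, res)                          \
                atomic_swap_long((volatile long *)(dst), (long)(val),   \
                    (long *)(res))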
Marcel Moolenaar
842728619a Move the machine specific files from sys/Makefile.inc and put them
in a machine specific makefile. While here, sort the sub-directories
in Makefile and remove _atomic_lock.S from all makefiles.
2003-06-23 04:28:31 +00:00
David Xu
eb90369fa6 Don't lock scheduler lock twice. 2003-06-18 06:08:03 +00:00
Daniel Eischen
690f13f3c3 After selecting a thread to handle a signal and taking
its scheduling lock, make sure that the thread still has
the signal unmasked.

Make a debug statement conditional on debugging being
enabled.
2003-06-08 17:37:21 +00:00
Daniel Eischen
f91de797ce Insert threads at the end of the free thread list so that
the chance of getting the same thread id when allocating a
new thread is reduced.  This won't work if the application
creates a new thread for every time a thread exits, but
we're still within the allowances of POSIX.
2003-06-08 17:35:11 +00:00
Daniel Eischen
4d6f145a3b Provide a reference to __sys_write. The implementation uses this when
debugging is enabled so the symbol needs to be resolved before rtld
locking is enabled.  I may not really know what I'm talking about,
but it works.

Submitted by:	kan
2003-06-08 17:29:32 +00:00
Warner Losh
cedfd4f63f Don't force -L/usr/lib. This is incorrect because we should not be
looking at the host environment for anything.  This breaks building
-CURRENT on 4.x as well.

Submitted by: kan@
2003-06-08 03:58:20 +00:00
David Xu
a05fa0abea Only initialize _thread_sigact once; there is no need to initialize it again after a fork().
Obtained from: deischen
2003-06-04 12:40:21 +00:00
David Xu
cd0a0c267b Regardless of whether threaded mode is turned on, always save the thread's
signal mask.
2003-06-04 12:38:21 +00:00
David Xu
e84a8d0d65 KMF_DONE is now in /sys/sys/kse.h, no longer need to define it here. 2003-06-04 03:22:59 +00:00
David Xu
a4c69f224b Free the memory of the internal low-level lock when a mutex or condition variable
is destroyed.

Submitted by: tegge
2003-06-03 02:21:01 +00:00
David Xu
9abece6475 Save THR_FLAGS_IN_TDLIST in the signal frame; otherwise a thread that received
a signal cannot be removed from the thread list after it exits.

Reviewed by: deischen
Approved by: re (jhb)
2003-05-30 14:50:16 +00:00
Alexander Kabaev
84d55c7fad Attempt to eliminate PLT relocations from the rwlock acquire/release
path, making them suitable for direct use by the dynamic loader.

Register libpthread-specific locking API with rtld on startup.

This still has some rough edges with signals which should be
addressed later.

Approved by:	re (scottl)
2003-05-30 00:21:52 +00:00
Daniel Eischen
43dd76d242 Call __sys_sigprocmask() (the system call) when sigprocmask()
is called and the application is not threaded.  This works around
a problem when an application that hasn't yet become threaded
tries to jump out of a signal handler.

Reported by:	mbr
Approved by:	re@ (rwatson)
2003-05-30 00:09:22 +00:00
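
    A hedged sketch of the workaround: when the process has not gone threaded,
    hand the call straight to the system call so a handler can jump out of
    itself without confusing the library. __isthreaded is libc's "threads
    active" flag; _thr_sigprocmask() is a hypothetical stand-in for the
    threaded path, and the __sys_sigprocmask prototype is assumed.

        #include <signal.h>

        extern int __isthreaded;                                /* libc flag */
        int __sys_sigprocmask(int, const sigset_t *, sigset_t *);
        int _thr_sigprocmask(int, const sigset_t *, sigset_t *); /* hypothetical */

        int
        sigprocmask(int how, const sigset_t *set, sigset_t *oset)
        {
                if (__isthreaded == 0)
                        return (__sys_sigprocmask(how, set, oset));
                return (_thr_sigprocmask(how, set, oset));
        }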
Daniel Eischen
28362a5c80 Don't really spin on a spinlock; silently convert it to the same
low-level lock used by the libpthread implementation.  In the
future, we'll eliminate spinlocks from libc but that will wait
until after 5.1-release.

Don't call an application signal handler if the handler is
the same as the library-installed handler.  This seems to
be possible after a fork and is the cause of konsole hangs.

Approved by:	re@ (jhb)
2003-05-29 17:10:45 +00:00
Daniel Eischen
1cb570c531 Change low-level locking a bit so that we can tell if
a lock is being waited on.

Fix races in join and cancellation.

When trying to wait on a CV and the library is not yet
threaded, make it threaded so that waiting actually works.

When trying to nanosleep() and we're not threaded, just
call the system call nanosleep instead of adding the thread
to the wait queue.

Clean up adding/removing new threads to the "all threads queue",
assigning them unique ids, and tracking how many active threads
there are.  Do it all when the thread is added to the scheduling
queue instead of making pthread_create() know how to do it.

Fix a race where a thread could be marked for signal delivery
but could exit before we actually add the signal to it.

Other minor cleanups and bug fixes.

Submitted by:	davidxu
Approved by:	re@ (blanket for libpthread)
2003-05-24 02:29:25 +00:00
Daniel Eischen
28f318b941 Eek, staticize a couple of functions that shouldn't
be external (initialize()!).

Remove cancellation points from _pthread_cond_wait and
_pthread_cond_timedwait (single underscore versions are
libc private functions).  Point the weak reference(!) for
these functions to the versions with cancellation points.

Approved by:	re@(blanket till 5/19)
Pointed out by:	kan (cancellation point bug)
2003-05-19 23:04:50 +00:00
Daniel Eischen
fd626336fd Add a method of yielding the current thread with the scheduler
lock held (_thr_sched_switch_unlocked()) and use this to avoid
dropping the scheduler lock and having the scheduler retake the
same lock again.

Add a better way of detecting if a low-level lock is in use.

When switching out a thread due to blocking in the UTS, don't
switch to the KSE's scheduler stack only to switch back to
another thread.  If possible switch to the new thread directly
from the old thread and avoid the overhead of the extra
context switch.

Check for pending signals on a thread when entering the scheduler
and add them to the thread's signal frame.  This includes some
other minor signal fixes.

Most of this was a joint effort between davidxu and myself.

Reviewed by:	davidxu
Approved by:	re@ (blanket for libpthread)
2003-05-16 19:58:30 +00:00
Daniel Eischen
07e6b1c7a3 Make pthread_join() async-cancel-safe. David was going to commit
this, but I think he's asleep and I want to be sure it gets in before
the freeze.

Submitted by:	davidxu
2003-05-06 00:02:54 +00:00
David Xu
f508d26091 Call dump_queues() only when DEBUG_THREAD_KERN is defined, saving some
CPU cycles.
2003-05-05 05:01:19 +00:00
Daniel Eischen
c72cd7c9e2 Protect against a race between granting a lock and accessing
other parts of the lock.

Submitted by:	davidxu
2003-05-04 22:29:09 +00:00
Daniel Eischen
40791d9d15 Fix suspend and resume.
Submitted (in part) by:	Kazuaki Oda <kaakun@highway.ne.jp>
2003-05-04 16:17:01 +00:00
David Xu
99c883294c Handle the thread-canceled case; it is the same as a signal-caused backout,
but it breaks out of the loop.
2003-05-02 11:39:00 +00:00
Daniel Eischen
d143dde438 Move the mailbox to the beginning of the thread and align the
thread so that the context (SSE FPU state) is also aligned.
2003-04-30 15:05:17 +00:00
David Xu
d1021be03f Call kse_wakeup_multi() after removing the current thread from the RUNQ to avoid
an unnecessary idle KSE wakeup.
2003-04-30 01:15:21 +00:00
David Xu
30a2952c90 Call kse_wakeup_multi() to wakeup idle KSEs when there are threads ready
to run.
2003-04-30 01:03:58 +00:00
Daniel Eischen
6cc13fa9ad Create the thread signal lock as a KSE lock (as opposed to
a thread lock).

Better protect access to thread state while searching for
threads to handle a signal.

Better protect access to process pending signals while processing
a thread in sigwait().

Submitted by:	davidxu
2003-04-29 21:03:33 +00:00
Daniel Eischen
55613576f5 o Don't add a scope system thread's KSE to the list of available
KSEs when its thread exits; allow the GC handler to do that.

o Make spinlock/spinlock critical regions.

The following were submitted by davidxu

  o Allow thr_switch() to take a null mailbox argument.

  o Better protect cancellation checks.

  o Don't set KSE specific data when creating new KSEs; rely on the
    first upcall of the KSE to set it.

  o Add the ability to set the maximum concurrency level and do this
    automatically.  We should have a way to enable/disable this with
    some sort of tunable because some applications may not want this
    to be the default.

  o Hold the scheduling lock across thread switch calls.

  o If scheduling of a thread fails, make sure to remove it from the list
    of active threads.

  o Better protect accesses to a joining threads when the target thread is
    exited and detached.

  o Remove some macro definitions that are now provided by <sys/kse.h>.

  o Don't leave the library in threaded mode if creation of the initial
    KSE fails.

  o Wakeup idle KSEs when there are threads ready to run.

  o Maintain the number of threads active in the priority queue.
2003-04-28 23:56:12 +00:00
Daniel Eischen
76f344139c Use the correct link entry for walking the list of threads.
While I'm here, use the TAILQ_FOREACH macro instead of a more
manual method which was inherited from libc_r (so we could
remove elements from the list which isn't needed for libpthread).

Submitted by:	Kazuaki Oda <kaakun@highway.ne.jp>
2003-04-28 21:35:06 +00:00
Daniel Eischen
fd47bf962d Remove the %gs restoring hack (already commented out).
Don't install man pages.

Temporarily (again) rename the library to libkse.  It will be put back
to libpthread after more wide-spread testing.
2003-04-25 01:31:56 +00:00
Daniel Eischen
c159269082 Remove the i386-specific hack (well, we only run on i386 anyways)
to always set %gs when resuming a thread.

Install this library as libpthread instead of libkse.
2003-04-23 21:48:29 +00:00
Daniel Eischen
f1c8192fd4 Protect thread errno from being changed while operating
on behalf of the KSE.

Add a kse_reinit function to reinitialize a reused KSE.

Submitted by:	davidxu
2003-04-23 21:46:50 +00:00
Daniel Eischen
29fde418c1 Set the quantum for scope system threads to 0 (no quantum). 2003-04-22 21:32:32 +00:00
Daniel Eischen
42a5f6248b Add a working pthread_[gs]etconcurrency. Initial null implementation
provided by Sergey A. Osokin <osa@freebsd.org.ru>.

In order to test this on a single CPU machine, you need to:

    sysctl kern.threads.debug=1
    sysctl kern.threads.virtual_cpu=2
2003-04-22 20:29:16 +00:00
Daniel Eischen
6dee371a55 Add a couple asserts to pthread_cond_foo to ensure the (low-level)
lock level is 0.  Thus far, the threads implementation doesn't use
mutexes or condition variables so the lock level should be 0.

Save the return value when trying to schedule a new thread and
use this to return an error from pthread_create().

Change the max sleep time for an idle KSE to 1 minute from 2 minutes.

Maintain a count of the number of KSEs within a KSEG.

With these changes scope system threads seem to work, but heavy
use of them crashes the kernel (supposedly VM bugs).
2003-04-22 20:28:33 +00:00
Daniel Eischen
02245e6120 Add an i386-specific hack to always set %gs. There still seem
to be instances where the kernel doesn't properly save and/or
restore it.

Use noupcall and nocompleted flags in the KSE mailbox.  These
require kernel changes to work which will be committed sometime
later.  Things still work without the changes.

Remove the general kse entry function and use two different
functions -- one for scope system threads and one for scope
process threads.  The scope system function is not yet enabled
and we use the same function for all threads at the moment.

Keep a copy of the KSE stack for the case that a KSE runs
a scope system thread and uses the same stack as the thread
(no upcalls are generated, so a separate stack isn't needed).
This isn't enabled yet.

Use a separate field for the KSE waiting flag.  It isn't
correct to use the mailbox flags field.

The following fixes were provided by David Xu:

  o Initialize condition variable locks with thread versions
    of the low-level locking functions instead of the kse versions.

  o Enable threading before creating the first thread instead
    of after.

  o Don't enter critical regions when trying to malloc/free
    or call functions that malloc/free.

  o Take the scheduling lock when inheriting thread attributes.

  o Check the attribute's stack pointer instead of the
    attribute's stack size for null when allocating a
    thread's stack.

  o Add a kseg reinit function so we don't have to destroy and
    then recreate the same lock.

  o Check the return value of kse_create() and return an
    appropriate error if it fails.

  o Don't forget to destroy a thread's locks when freeing it.

  o Examine the correct flags word for checking to see if
    a thread is in a synchronization queue.

Things should now work on an SMP kernel.
2003-04-21 04:02:56 +00:00