/*-
 * Copyright (C) 2001 Julian Elischer <julian@freebsd.org>.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice(s), this list of conditions and the following disclaimer as
 *    the first lines of this file unmodified other than the possible
 *    addition of one or more copyright notices.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice(s), this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY
 * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
 * DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

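/*
 * KSE (kernel-scheduled entity) upcall management: allocation, removal
 * and reaping of upcall structures, plus the kse_*() system call entry
 * points used by a userland thread scheduler (UTS).
 */
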
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/imgact.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/ptrace.h>
#include <sys/smp.h>
#include <sys/syscallsubr.h>
#include <sys/sysproto.h>
#include <sys/sched.h>
#include <sys/signalvar.h>
#include <sys/sleepqueue.h>
#include <sys/syslog.h>
#include <sys/kse.h>
#include <sys/ktr.h>
#include <vm/uma.h>

#ifdef KSE
static uma_zone_t upcall_zone;

/* DEBUG ONLY */
extern int virtual_cpu;
extern int thread_debug;

extern int max_threads_per_proc;
extern int max_groups_per_proc;
extern int max_threads_hits;
extern struct mtx kse_lock;

TAILQ_HEAD(, kse_upcall) zombie_upcalls =
    TAILQ_HEAD_INITIALIZER(zombie_upcalls);

static int thread_update_usr_ticks(struct thread *td);
static int thread_alloc_spare(struct thread *td);
static struct thread *thread_schedule_upcall(struct thread *td, struct kse_upcall *ku);
static struct kse_upcall *upcall_alloc(void);

struct mtx kse_lock;
MTX_SYSINIT(kse_lock, &kse_lock, "kse lock", MTX_SPIN);

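/*
 * Allocate a zeroed kse_upcall structure from the upcall UMA zone.
 */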
struct kse_upcall *
upcall_alloc(void)
{
	struct kse_upcall *ku;

	ku = uma_zalloc(upcall_zone, M_WAITOK | M_ZERO);
	return (ku);
}

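/*
 * Free any zombie upcalls: splice the global zombie list onto a local
 * list under kse_lock, then release each entry back to the upcall zone
 * with the lock dropped.
 */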
void
upcall_reap(void)
{
	TAILQ_HEAD(, kse_upcall) zupcalls;
	struct kse_upcall *ku_item, *ku_tmp;

	TAILQ_INIT(&zupcalls);
	mtx_lock_spin(&kse_lock);
	if (!TAILQ_EMPTY(&zombie_upcalls)) {
		TAILQ_CONCAT(&zupcalls, &zombie_upcalls, ku_link);
		TAILQ_INIT(&zombie_upcalls);
	}
	mtx_unlock_spin(&kse_lock);
	TAILQ_FOREACH_SAFE(ku_item, &zupcalls, ku_link, ku_tmp)
		uma_zfree(upcall_zone, ku_item);
}

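/*
 * Detach the upcall from a thread and park it on the zombie list for
 * upcall_reap().  For an SA (unbound) thread this also drops the count
 * of possible upcall sources.  Called with the process spinlock and the
 * thread lock held.
 */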
void
upcall_remove(struct thread *td)
{

	PROC_SLOCK_ASSERT(td->td_proc, MA_OWNED);
	THREAD_LOCK_ASSERT(td, MA_OWNED);
	if (td->td_upcall != NULL) {
		/*
		 * If we are not a bound thread then decrement the count of
		 * possible upcall sources
		 */
		if (td->td_pflags & TDP_SA)
			td->td_proc->p_numupcalls--;
		mtx_lock_spin(&kse_lock);
		td->td_upcall->ku_owner = NULL;
		TAILQ_REMOVE(&td->td_upcall->ku_proc->p_upcalls, td->td_upcall,
		    ku_link);
		TAILQ_INSERT_HEAD(&zombie_upcalls, td->td_upcall, ku_link);
		mtx_unlock_spin(&kse_lock);
		td->td_upcall = NULL;
	}
}
#endif

#ifndef _SYS_SYSPROTO_H_
struct kse_switchin_args {
	struct kse_thr_mailbox *tmbx;
	int flags;
};
#endif

#ifdef KSE
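/*
 * Unlink a thread from its process (under kse_lock) and detach its
 * upcall.
 */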
void
kse_unlink(struct thread *td)
{
	mtx_lock_spin(&kse_lock);
	thread_unlink(td);
	mtx_unlock_spin(&kse_lock);
	upcall_remove(td);
}
#endif

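/*
 * kse_switchin: switch the calling thread into the user thread context
 * held in uap->tmbx.  With KSE_SWITCHIN_SETTMBX the mailbox also becomes
 * the thread's current mailbox and km_curthread is updated.  When the
 * process is traced, the debugger flags in tm_dflags select single
 * stepping and/or a forced upcall (suspend).
 */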
int
kse_switchin(struct thread *td, struct kse_switchin_args *uap)
{
#ifdef KSE
	struct kse_thr_mailbox tmbx;
	struct kse_upcall *ku;
	int error;

	thread_lock(td);
	if ((ku = td->td_upcall) == NULL || TD_CAN_UNBIND(td)) {
		thread_unlock(td);
		return (EINVAL);
	}
	thread_unlock(td);
	error = (uap->tmbx == NULL) ? EINVAL : 0;
	if (!error)
		error = copyin(uap->tmbx, &tmbx, sizeof(tmbx));
	if (!error && (uap->flags & KSE_SWITCHIN_SETTMBX))
		error = (suword(&ku->ku_mailbox->km_curthread,
		    (long)uap->tmbx) != 0 ? EINVAL : 0);
	if (!error)
		error = set_mcontext(td, &tmbx.tm_context.uc_mcontext);
	if (!error) {
		suword32(&uap->tmbx->tm_lwp, td->td_tid);
		if (uap->flags & KSE_SWITCHIN_SETTMBX) {
			td->td_mailbox = uap->tmbx;
			td->td_pflags |= TDP_CAN_UNBIND;
		}
		PROC_LOCK(td->td_proc);
		if (td->td_proc->p_flag & P_TRACED) {
			_PHOLD(td->td_proc);
			if (tmbx.tm_dflags & TMDF_SSTEP)
				ptrace_single_step(td);
			else
				ptrace_clear_single_step(td);
			if (tmbx.tm_dflags & TMDF_SUSPEND) {
				thread_lock(td);
				/* fuword can block, check again */
				if (td->td_upcall)
					ku->ku_flags |= KUF_DOUPCALL;
				thread_unlock(td);
			}
			_PRELE(td->td_proc);
		}
		PROC_UNLOCK(td->td_proc);
	}
	return ((error == 0) ? EJUSTRETURN : error);
#else /* !KSE */
	return (EOPNOTSUPP);
#endif
}

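/*
 * kse_thr_interrupt: operate on a user thread identified by its mailbox.
 * The cmd argument selects the action: deliver a signal, interrupt or
 * restart a sleeping thread, exit with a signal, suspend a bound thread
 * for the debugger, or execve on behalf of the process.
 */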
/*
struct kse_thr_interrupt_args {
	struct kse_thr_mailbox * tmbx;
	int cmd;
	long data;
};
*/
int
kse_thr_interrupt(struct thread *td, struct kse_thr_interrupt_args *uap)
{
#ifdef KSE
	struct kse_execve_args args;
	struct image_args iargs;
	struct proc *p;
	struct thread *td2;
	struct kse_upcall *ku;
	struct kse_thr_mailbox *tmbx;
	uint32_t flags;
	int error;

	p = td->td_proc;

	PROC_LOCK(p);
	if (!(p->p_flag & P_SA)) {
		PROC_UNLOCK(p);
		return (EINVAL);
	}
	PROC_UNLOCK(p);

	switch (uap->cmd) {
	case KSE_INTR_SENDSIG:
		if (uap->data < 0 || uap->data > _SIG_MAXSIG)
			return (EINVAL);
	case KSE_INTR_INTERRUPT:
	case KSE_INTR_RESTART:
		PROC_LOCK(p);
		PROC_SLOCK(p);
		FOREACH_THREAD_IN_PROC(p, td2) {
			if (td2->td_mailbox == uap->tmbx)
				break;
		}
		if (td2 == NULL) {
			PROC_SUNLOCK(p);
			PROC_UNLOCK(p);
			return (ESRCH);
		}
		thread_lock(td2);
		PROC_SUNLOCK(p);
		if (uap->cmd == KSE_INTR_SENDSIG) {
			if (uap->data > 0) {
				td2->td_flags &= ~TDF_INTERRUPT;
				thread_unlock(td2);
				tdsignal(p, td2, (int)uap->data, NULL);
			} else {
				thread_unlock(td2);
			}
		} else {
			td2->td_flags |= TDF_INTERRUPT | TDF_ASTPENDING;
			if (TD_CAN_UNBIND(td2))
				td2->td_upcall->ku_flags |= KUF_DOUPCALL;
			if (uap->cmd == KSE_INTR_INTERRUPT)
				td2->td_intrval = EINTR;
			else
				td2->td_intrval = ERESTART;
			if (TD_ON_SLEEPQ(td2) && (td2->td_flags & TDF_SINTR))
				sleepq_abort(td2, td2->td_intrval);
			thread_unlock(td2);
		}
		PROC_UNLOCK(p);
		break;
	case KSE_INTR_SIGEXIT:
		if (uap->data < 1 || uap->data > _SIG_MAXSIG)
			return (EINVAL);
		PROC_LOCK(p);
		sigexit(td, (int)uap->data);
		break;

	case KSE_INTR_DBSUSPEND:
		/* this sub-function is only for a bound thread */
		if (td->td_pflags & TDP_SA)
			return (EINVAL);
		thread_lock(td);
		ku = td->td_upcall;
		thread_unlock(td);
		tmbx = (void *)fuword((void *)&ku->ku_mailbox->km_curthread);
		if (tmbx == NULL || tmbx == (void *)-1)
			return (EINVAL);
		flags = 0;
		PROC_LOCK(p);
		while ((p->p_flag & P_TRACED) && !(p->p_flag & P_SINGLE_EXIT)) {
			flags = fuword32(&tmbx->tm_dflags);
			if (!(flags & TMDF_SUSPEND))
				break;
			PROC_SLOCK(p);
			thread_stopped(p);
			PROC_UNLOCK(p);
			thread_lock(td);
			thread_suspend_one(td);
			PROC_SUNLOCK(p);
			mi_switch(SW_VOL, NULL);
			thread_unlock(td);
			PROC_LOCK(p);
		}
		PROC_UNLOCK(p);
		return (0);

	case KSE_INTR_EXECVE:
		error = copyin((void *)uap->data, &args, sizeof(args));
		if (error)
			return (error);
		error = exec_copyin_args(&iargs, args.path, UIO_USERSPACE,
		    args.argv, args.envp);
		if (error == 0)
			error = kern_execve(td, &iargs, NULL);
		if (error == 0) {
			PROC_LOCK(p);
			SIGSETOR(td->td_siglist, args.sigpend);
			PROC_UNLOCK(p);
			kern_sigprocmask(td, SIG_SETMASK, &args.sigmask, NULL,
			    0);
		}
		return (error);

	default:
		return (EINVAL);
	}
	return (0);
#else /* !KSE */
	return (EOPNOTSUPP);
#endif
}

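/*
 * kse_exit: called by an upcall (UTS) thread that is done with its
 * mailbox and wants to exit.  The last non-exiting upcall may not exit
 * while other threads remain in the process, so EDEADLK is returned in
 * that case.
 */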
/*
struct kse_exit_args {
	register_t dummy;
};
*/
int
kse_exit(struct thread *td, struct kse_exit_args *uap)
{
#ifdef KSE
	struct proc *p;
	struct kse_upcall *ku, *ku2;
	int error, count;

	p = td->td_proc;
	/*
	 * Ensure that this is only called from the UTS.
	 */
	thread_lock(td);
	if ((ku = td->td_upcall) == NULL || TD_CAN_UNBIND(td)) {
		thread_unlock(td);
		return (EINVAL);
	}
	thread_unlock(td);

	/*
	 * Calculate the existing non-exiting upcalls in this process.
	 * If we are the last upcall but there are still other threads,
	 * then do not exit. We need the other threads to be able to
	 * complete whatever they are doing.
	 * XXX This relies on the userland knowing what to do if we return.
	 * It may be a better choice to convert ourselves into a kse_release
	 * (or similar) and wait in the kernel to be needed.
	 * XXX Where are those other threads? I suppose they are waiting in
	 * the kernel. We should wait for them all at the user boundary after
	 * turning into an exit.
	 */
	count = 0;
	PROC_LOCK(p);
	PROC_SLOCK(p);
	FOREACH_UPCALL_IN_PROC(p, ku2) {
		if ((ku2->ku_flags & KUF_EXITING) == 0)
			count++;
	}
	if (count == 1 && (p->p_numthreads > 1)) {
		PROC_SUNLOCK(p);
		PROC_UNLOCK(p);
		return (EDEADLK);
	}
	ku->ku_flags |= KUF_EXITING;
	PROC_SUNLOCK(p);
	PROC_UNLOCK(p);

	/*
	 * Mark the UTS mailbox as having been finished with.
	 * If that fails then just go for a segfault.
	 * XXX need to check that it can be delivered without a mailbox.
	 */
thread, thread will check the flag in thread_suspend_check,
enters a loop, unless it is cleared by debugger, process is
detached or process is existing. The flag is also checked in
ptracestop, so debugger can temporarily suspend a thread even
if the thread wants to exchange signal.
4. Current, in ptrace, we always resume all threads, but if a thread
has already a TDF_DBSUSPEND flag set by debugger, it won't run.
Encouraged by: marcel, julian, deischen
2004-07-13 07:33:40 +00:00
|
|
|
error = suword32(&ku->ku_mailbox->km_flags, ku->ku_mflags|KMF_DONE);
|
2004-08-08 22:32:20 +00:00
|
|
|
if (!(td->td_pflags & TDP_SA))
|
|
|
|
if (suword32(&td->td_mailbox->tm_lwp, 0))
|
|
|
|
error = EFAULT;
|
2003-06-04 00:12:57 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
if (error)
|
|
|
|
psignal(p, SIGSEGV);
|
1. Change prototype of trapsignal and sendsig to use ksiginfo_t *, most
changes in MD code are trivial, before this change, trapsignal and
sendsig use discrete parameters, now they uses member fields of
ksiginfo_t structure. For sendsig, this change allows us to pass
POSIX realtime signal value to user code.
2. Remove cpu_thread_siginfo, it is no longer needed because we now always
generate ksiginfo_t data and feed it to libpthread.
3. Add p_sigqueue to proc structure to hold shared signals which were
blocked by all threads in the proc.
4. Add td_sigqueue to thread structure to hold all signals delivered to
thread.
5. i386 and amd64 now return POSIX standard si_code, other arches will
be fixed.
6. In this sigqueue implementation, pending signal set is kept as before,
an extra siginfo list holds additional siginfo_t data for signals.
kernel code uses psignal() still behavior as before, it won't be failed
even under memory pressure, only exception is when deleting a signal,
we should call sigqueue_delete to remove signal from sigqueue but
not SIGDELSET. Current there is no kernel code will deliver a signal
with additional data, so kernel should be as stable as before,
a ksiginfo can carry more information, for example, allow signal to
be delivered but throw away siginfo data if memory is not enough.
SIGKILL and SIGSTOP have fast path in sigqueue_add, because they can
not be caught or masked.
The sigqueue() syscall allows user code to queue a signal to target
process, if resource is unavailable, EAGAIN will be returned as
specification said.
Just before thread exits, signal queue memory will be freed by
sigqueue_flush.
Current, all signals are allowed to be queued, not only realtime signals.
Earlier patch reviewed by: jhb, deischen
Tested on: i386, amd64
2005-10-14 12:43:47 +00:00
|
|
|
sigqueue_flush(&td->td_sigqueue);
|
Commit 7/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
sychronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
- Use a global kse spinlock to protect upcall and thread assignment. The
per-process spinlock can not be used because this lock must be acquired
via mi_switch() where we already hold a thread lock. The kse spinlock
is a leaf lock ordered after the process and thread spinlocks.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SLOCK(p);
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_lock(td);
|
2003-02-17 05:14:26 +00:00
|
|
|
upcall_remove(td);
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_unlock(td);
|
2004-09-05 02:09:54 +00:00
|
|
|
if (p->p_numthreads != 1) {
|
2003-03-11 00:07:53 +00:00
|
|
|
thread_stopped(p);
|
2002-10-24 08:46:34 +00:00
|
|
|
thread_exit();
|
|
|
|
/* NOTREACHED */
|
|
|
|
}
|
2004-09-05 02:09:54 +00:00
|
|
|
/*
|
|
|
|
* This is the last thread. Just return to the user.
|
|
|
|
* Effectively we have left threading mode..
|
|
|
|
* The only real thing left to do is ensure that the
|
|
|
|
* scheduler sets out concurrency back to 1 as that may be a
|
|
|
|
* resource leak otherwise.
|
|
|
|
* This is an A[PB]I issue.. what SHOULD we do?
|
|
|
|
* One possibility is to return to the user. It may not cope well.
|
|
|
|
* The other possibility would be to let the process exit.
|
|
|
|
*/
|
2004-10-05 20:39:26 +00:00
|
|
|
thread_unthread(td);
|
Commit 7/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
sychronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
- Use a global kse spinlock to protect upcall and thread assignment. The
per-process spinlock can not be used because this lock must be acquired
via mi_switch() where we already hold a thread lock. The kse spinlock
is a leaf lock ordered after the process and thread spinlocks.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SUNLOCK(p);
|
2004-09-05 02:09:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2006-12-12 08:01:55 +00:00
|
|
|
#if 0
|
2002-10-30 03:01:28 +00:00
|
|
|
return (0);
|
2004-09-05 02:09:54 +00:00
|
|
|
#else
|
2006-12-12 08:01:55 +00:00
|
|
|
printf("kse_exit: called on last thread. Calling exit1()");
|
2004-09-05 02:09:54 +00:00
|
|
|
exit1(td, 0);
|
|
|
|
#endif
|
2006-10-26 21:42:22 +00:00
|
|
|
#else /* !KSE */
|
|
|
|
return (EOPNOTSUPP);
|
|
|
|
#endif
|
2002-10-24 08:46:34 +00:00
|
|
|
}

/*
 * Either becomes an upcall or waits for an awakening event and
 * then becomes an upcall. Only error cases return.
 */
/*
struct kse_release_args {
	struct timespec *timeout;
};
*/
int
kse_release(struct thread *td, struct kse_release_args *uap)
{
#ifdef KSE
	struct proc *p;
	struct kse_upcall *ku;
	struct timespec timeout;
	struct timeval tv;
	sigset_t sigset;
	int error;

	p = td->td_proc;
	thread_lock(td);
	if ((ku = td->td_upcall) == NULL || TD_CAN_UNBIND(td)) {
		thread_unlock(td);
		printf("kse_release: called outside of threading. exiting");
		exit1(td, 0);
	}
	thread_unlock(td);
	if (uap->timeout != NULL) {
		if ((error = copyin(uap->timeout, &timeout, sizeof(timeout))))
			return (error);
		TIMESPEC_TO_TIMEVAL(&tv, &timeout);
	}
	if (td->td_pflags & TDP_SA)
		td->td_pflags |= TDP_UPCALLING;
	else {
		ku->ku_mflags = fuword32(&ku->ku_mailbox->km_flags);
		if (ku->ku_mflags == -1) {
			PROC_LOCK(p);
			sigexit(td, SIGSEGV);
		}
	}
	PROC_LOCK(p);
	if (ku->ku_mflags & KMF_WAITSIGEVENT) {
		/* UTS wants to wait for signal event */
		if (!(p->p_flag & P_SIGEVENT) &&
		    !(ku->ku_flags & KUF_DOUPCALL)) {
			td->td_kflags |= TDK_KSERELSIG;
			error = msleep(&p->p_siglist, &p->p_mtx, PPAUSE|PCATCH,
			    "ksesigwait", (uap->timeout ? tvtohz(&tv) : 0));
			td->td_kflags &= ~(TDK_KSERELSIG | TDK_WAKEUP);
		}
		p->p_flag &= ~P_SIGEVENT;
		sigset = p->p_siglist;
		PROC_UNLOCK(p);
		error = copyout(&sigset, &ku->ku_mailbox->km_sigscaught,
		    sizeof(sigset));
	} else {
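		/* Otherwise sleep until there is something for the UTS to do. */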
		if ((ku->ku_flags & KUF_DOUPCALL) == 0 &&
		    ((ku->ku_mflags & KMF_NOCOMPLETED) ||
		    (p->p_completed == NULL))) {
			p->p_upsleeps++;
			td->td_kflags |= TDK_KSEREL;
			error = msleep(&p->p_completed, &p->p_mtx,
			    PPAUSE|PCATCH, "kserel",
			    (uap->timeout ? tvtohz(&tv) : 0));
			td->td_kflags &= ~(TDK_KSEREL | TDK_WAKEUP);
			p->p_upsleeps--;
		}
		PROC_UNLOCK(p);
	}
	if (ku->ku_flags & KUF_DOUPCALL) {
		PROC_SLOCK(p);
		ku->ku_flags &= ~KUF_DOUPCALL;
		PROC_SUNLOCK(p);
	}
	return (0);
#else /* !KSE */
	return (EOPNOTSUPP);
#endif
}
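
/*
 * Wake a thread sleeping in kse_release().  If a mailbox is given,
 * target the upcall that owns that mailbox; otherwise wake any sleeping
 * upcall (or fall back to the first upcall in the process).  If the
 * owner is not sleeping, just flag the upcall as needing to be run.
 */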
/* struct kse_wakeup_args {
	struct kse_mailbox *mbx;
}; */
int
kse_wakeup(struct thread *td, struct kse_wakeup_args *uap)
{
#ifdef KSE
	struct proc *p;
	struct kse_upcall *ku;
	struct thread *td2;

	p = td->td_proc;
	td2 = NULL;
	ku = NULL;
	/* KSE-enabled processes only, please. */
	PROC_LOCK(p);
	if (!(p->p_flag & P_SA)) {
		PROC_UNLOCK(p);
		return (EINVAL);
	}
	PROC_SLOCK(p);
	if (uap->mbx) {
		FOREACH_UPCALL_IN_PROC(p, ku) {
			if (ku->ku_mailbox == uap->mbx)
				break;
		}
	} else {
		if (p->p_upsleeps) {
			PROC_SUNLOCK(p);
			wakeup(&p->p_completed);
			PROC_UNLOCK(p);
			return (0);
		}
		ku = TAILQ_FIRST(&p->p_upcalls);
	}
	if (ku == NULL) {
		PROC_SUNLOCK(p);
		PROC_UNLOCK(p);
		return (ESRCH);
	}
	mtx_lock_spin(&kse_lock);
	if ((td2 = ku->ku_owner) == NULL) {
		mtx_unlock_spin(&kse_lock);
		PROC_SUNLOCK(p);
		PROC_UNLOCK(p);
		panic("%s: no owner", __func__);
	} else if (td2->td_kflags & (TDK_KSEREL | TDK_KSERELSIG)) {
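		/* The owner is asleep in kse_release(); take it off its sleep queue. */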
		mtx_unlock_spin(&kse_lock);
		if (!(td2->td_kflags & TDK_WAKEUP)) {
			td2->td_kflags |= TDK_WAKEUP;
			if (td2->td_kflags & TDK_KSEREL)
				sleepq_remove(td2, &p->p_completed);
			else
				sleepq_remove(td2, &p->p_siglist);
		}
	} else {
		ku->ku_flags |= KUF_DOUPCALL;
		mtx_unlock_spin(&kse_lock);
	}
	PROC_SUNLOCK(p);
	PROC_UNLOCK(p);
	return (0);
#else /* !KSE */
	return (EOPNOTSUPP);
#endif
}

/*
 * newgroup == 0: first call: use current KSE, don't schedule an upcall
 * All other situations, do allocate max new KSEs and schedule an upcall.
 *
 * XXX should be changed so that 'first' behaviour lasts for as long
 * as you have not made a thread in this proc. i.e. as long as we do not have
 * a mailbox..
 */
/* struct kse_create_args {
	struct kse_mailbox *mbx;
	int newgroup;
}; */
int
kse_create(struct thread *td, struct kse_create_args *uap)
{
#ifdef KSE
	struct proc *p;
	struct kse_mailbox mbx;
	struct kse_upcall *newku;
	int err, ncpus, sa = 0, first = 0;
	struct thread *newtd;

	p = td->td_proc;

	/*
	 * Processes using the other threading model can't
	 * suddenly start calling this one
	 * XXX maybe...
	 */
	PROC_LOCK(p);
	if ((p->p_flag & (P_SA|P_HADTHREADS)) == P_HADTHREADS) {
		PROC_UNLOCK(p);
		return (EINVAL);
	}
	if (!(p->p_flag & P_SA)) {
		first = 1;
		p->p_flag |= P_SA|P_HADTHREADS;
	}
	PROC_UNLOCK(p);

	if ((err = copyin(uap->mbx, &mbx, sizeof(mbx))))
		return (err);
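
	/*
	 * Default the number of upcall contexts to the number of CPUs,
	 * or to virtual_cpu if that has been set.
	 */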
	ncpus = mp_ncpus;
	if (virtual_cpu != 0)
		ncpus = virtual_cpu;
	/*
	 * If the new UTS mailbox says that this
	 * will be a BOUND lwp, then it had better
	 * have its thread mailbox already there.
	 */
	if ((mbx.km_flags & KMF_BOUND) || uap->newgroup) {
		/* It's a bound thread (1:1) */
		if (mbx.km_curthread == NULL)
			return (EINVAL);
		ncpus = 1;
		if (!(uap->newgroup || first))
			return (EINVAL);
	} else {
		/* It's an upcall capable thread */
		sa = TDP_SA;
		PROC_LOCK(p);
		/*
		 * Limit it to NCPU upcall contexts per proc in any case.
		 * numupcalls will soon be numkse or something
		 * as it will represent the number of
		 * non-bound upcalls available.  (i.e. ones that can
		 * actually call up).
		 */
		if (p->p_numupcalls >= ncpus) {
			PROC_UNLOCK(p);
			return (EPROCLIM);
		}
		p->p_numupcalls++;
		PROC_UNLOCK(p);
	}

	/*
	 * For the first call this may not have been set.
	 * Of course nor may it actually be needed.
	 * thread_schedule_upcall() will look for it.
	 */
	if (td->td_standin == NULL) {
		if (!thread_alloc_spare(td))
			return (ENOMEM);
	}

	/*
	 * Even bound LWPs get a mailbox and an upcall to hold it.
	 * XXX This should change.
	 */
	newku = upcall_alloc();
	newku->ku_mailbox = uap->mbx;
	newku->ku_func = mbx.km_func;
	bcopy(&mbx.km_stack, &newku->ku_stack, sizeof(stack_t));

	PROC_LOCK(p);
	PROC_SLOCK(p);
	/*
	 * If we are the first time, and a normal thread,
	 * then transfer all the signals back to the 'process'.
	 * SA threading will make a special thread to handle them.
	 */
	if (first) {
		sigqueue_move_set(&td->td_sigqueue, &p->p_sigqueue,
		    &td->td_sigqueue.sq_signals);
		SIGFILLSET(td->td_sigmask);
		SIG_CANTMASK(td->td_sigmask);
	}

	/*
	 * Make the new upcall available to the process.
	 * It may or may not use it, but it's available.
	 */
	TAILQ_INSERT_TAIL(&p->p_upcalls, newku, ku_link);
	newku->ku_proc = p;
	PROC_UNLOCK(p);
	if (mbx.km_quantum)
		/* XXX should this be in the thread? */
		p->p_upquantum = max(1, mbx.km_quantum / tick);

	/*
	 * Each upcall structure has an owner thread, find which
	 * one owns it.
	 */
	thread_lock(td);
	mtx_lock_spin(&kse_lock);
	if (uap->newgroup) {
		/*
		 * The newgroup parameter now means
		 * "bound, non SA, system scope"
		 * It is only used for the interrupt thread at the
		 * moment I think.. (or system scope threads dopey).
		 * We'll rename it later.
		 */
		newtd = thread_schedule_upcall(td, newku);
	} else {
		/*
		 * If the current thread hasn't an upcall structure,
		 * just assign the upcall to it.
		 * It'll just return.
		 */
		if (td->td_upcall == NULL) {
			newku->ku_owner = td;
			td->td_upcall = newku;
			newtd = td;
		} else {
			/*
			 * Create a new upcall thread to own it.
			 */
			newtd = thread_schedule_upcall(td, newku);
		}
	}
	mtx_unlock_spin(&kse_lock);
	thread_unlock(td);
	PROC_SUNLOCK(p);

	/*
	 * Let the UTS instance know its LWPID.
	 * It doesn't really care. But the debugger will.
	 * XXX warning.. remember that this moves.
	 */
	suword32(&newku->ku_mailbox->km_lwp, newtd->td_tid);

	/*
	 * In the same manner, if the UTS has a current user thread,
	 * then it is also running on this LWP so set it as well.
	 * The library could do that of course.. but why not..
	 * XXX I'm not sure this can ever happen but ...
	 * XXX does the UTS ever set this in the mailbox before calling this?
	 */
	if (mbx.km_curthread)
		suword32(&mbx.km_curthread->tm_lwp, newtd->td_tid);

	if (sa) {
		newtd->td_pflags |= TDP_SA;
		/*
		 * If we are starting a new thread, kick it off.
		 */
		if (newtd != td) {
			thread_lock(newtd);
			sched_add(newtd, SRQ_BORING);
			thread_unlock(newtd);
		}
	} else {
		newtd->td_pflags &= ~TDP_SA;

		/*
		 * Since a library will use the mailbox pointer to
		 * identify even a bound thread, and the mailbox pointer
		 * will never be allowed to change after this syscall
		 * for a bound thread, set it here so the library can
		 * find the thread after the syscall returns.
		 */
		newtd->td_mailbox = mbx.km_curthread;

		if (newtd != td) {
			/*
			 * If we did create a new thread then
			 * make sure it goes to the right place
			 * when it starts up, and make sure that it runs
			 * at full speed when it gets there.
			 * thread_schedule_upcall() copies all cpu state
			 * to the new thread, so we should clear single step
			 * flag here.
			 */
			cpu_set_upcall_kse(newtd, newku->ku_func,
			    newku->ku_mailbox, &newku->ku_stack);
			PROC_LOCK(p);
			if (p->p_flag & P_TRACED) {
				_PHOLD(p);
				ptrace_clear_single_step(newtd);
				_PRELE(p);
			}
			PROC_UNLOCK(p);
			thread_lock(newtd);
			sched_add(newtd, SRQ_BORING);
			thread_unlock(newtd);
		}
	}
	return (0);
#else /* !KSE */
	return (EOPNOTSUPP);
#endif
}

#ifdef KSE
/*
 * Initialize global thread allocation resources.
 */
void
kseinit(void)
{
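
	/* Zone from which kse_upcall structures are allocated. */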
|
|
|
|
|
2003-02-17 05:14:26 +00:00
|
|
|
upcall_zone = uma_zcreate("UPCALL", sizeof(struct kse_upcall),
|
|
|
|
NULL, NULL, NULL, NULL, UMA_ALIGN_CACHE, 0);
|
2002-06-29 07:04:59 +00:00
|
|
|
}
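
/*
 * Note on the export protocol used below: the context is copied out
 * first, then the mailbox is pushed onto the process's completed list
 * with an optimistic retry loop.  The user-visible tm_next link is
 * written with suword() before p_completed is re-checked under the
 * proc lock, and the new list head is only installed if no other
 * thread changed it in the meantime; otherwise the link is rewritten
 * and the check repeated.
 */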

/*
 * Store the thread context in the UTS's mailbox, then add the mailbox
 * to the head of a list we are building in user space.
 * The list is anchored in the proc structure.
 */
int
thread_export_context(struct thread *td, int willexit)
{
	struct proc *p;
	uintptr_t mbx;
	void *addr;
	int error = 0, sig;
	mcontext_t mc;

	p = td->td_proc;

	/*
	 * Post sync signal, or process SIGKILL and SIGSTOP.
	 * For sync signal, it is only possible when the signal is not
	 * caught by userland or process is being debugged.
	 */
	PROC_LOCK(p);
	if (td->td_flags & TDF_NEEDSIGCHK) {
		thread_lock(td);
		td->td_flags &= ~TDF_NEEDSIGCHK;
		thread_unlock(td);
		mtx_lock(&p->p_sigacts->ps_mtx);
		while ((sig = cursig(td)) != 0)
			postsig(sig);
		mtx_unlock(&p->p_sigacts->ps_mtx);
	}
	if (willexit)
		SIGFILLSET(td->td_sigmask);
	PROC_UNLOCK(p);

	/* Export the user/machine context. */
	get_mcontext(td, &mc, 0);
	addr = (void *)(&td->td_mailbox->tm_context.uc_mcontext);
	error = copyout(&mc, addr, sizeof(mcontext_t));
	if (error)
		goto bad;

	addr = (caddr_t)(&td->td_mailbox->tm_lwp);
	if (suword32(addr, 0)) {
		error = EFAULT;
		goto bad;
	}

	/* Get address in latest mbox of list pointer */
	addr = (void *)(&td->td_mailbox->tm_next);
	/*
	 * Put the saved address of the previous first
	 * entry into this one.
	 */
	for (;;) {
		mbx = (uintptr_t)p->p_completed;
		if (suword(addr, mbx)) {
			error = EFAULT;
			goto bad;
		}
		PROC_LOCK(p);
		if (mbx == (uintptr_t)p->p_completed) {
			thread_lock(td);
			p->p_completed = td->td_mailbox;
			/*
			 * The thread context may be taken away by
			 * other upcall threads when we unlock the
			 * process lock; it's no longer valid to
			 * use it again anywhere else.
			 */
			td->td_mailbox = NULL;
			thread_unlock(td);
			PROC_UNLOCK(p);
			break;
		}
		PROC_UNLOCK(p);
	}
	td->td_usticks = 0;
	return (0);

bad:
	PROC_LOCK(p);
	sigexit(td, SIGILL);
	return (error);
}
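
/*
 * thread_link_mboxes() below hands the process's completed-context list
 * to a particular upcall: it writes the current p_completed pointer into
 * the upcall mailbox's km_completed field, then clears p_completed under
 * the proc lock only if the list head is still the value it just linked,
 * retrying otherwise.  A failed user-space write is reported by sending
 * the process SIGSEGV.
 */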

/*
 * Take the list of completed mailboxes for this Process and put them on this
 * upcall's mailbox as it's the next one going up.
 */
static int
thread_link_mboxes(struct proc *p, struct kse_upcall *ku)
{
	void *addr;
	uintptr_t mbx;

	addr = (void *)(&ku->ku_mailbox->km_completed);
	for (;;) {
		mbx = (uintptr_t)p->p_completed;
		if (suword(addr, mbx)) {
			PROC_LOCK(p);
			psignal(p, SIGSEGV);
			PROC_UNLOCK(p);
			return (EFAULT);
		}
		PROC_LOCK(p);
		if (mbx == (uintptr_t)p->p_completed) {
			p->p_completed = NULL;
			PROC_UNLOCK(p);
			break;
		}
		PROC_UNLOCK(p);
	}
	return (0);
}
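
/*
 * Statistics ticks for SA threads are only counted here and exported to
 * the UTS later.  A user tick also requests an AST so the counts can be
 * pushed out on the next return to user mode; system ticks are counted
 * only while the thread still owns a mailbox.
 */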

/*
 * This function should be called at statclock interrupt time
 */
int
thread_statclock(int user)
{
	struct thread *td = curthread;

	if (!(td->td_pflags & TDP_SA))
		return (0);
	if (user) {
		/* Currently always done via ast() */
		thread_lock(td);
		td->td_flags |= TDF_ASTPENDING;
		thread_unlock(td);
		td->td_uuticks++;
	} else if (td->td_mailbox != NULL)
		td->td_usticks++;
	return (0);
}
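
/*
 * thread_update_usr_ticks() below flushes the counts accumulated above
 * into the thread's mailbox using fuword32()/suword32() read-modify-write
 * updates of tm_uticks and tm_sticks.  If the thread no longer has a
 * mailbox there is nothing to account to and -1 is returned; a failed
 * user-space update sends the process SIGSEGV and returns -2.
 */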

/*
 * Export state clock ticks for userland
 */
static int
thread_update_usr_ticks(struct thread *td)
{
	struct proc *p = td->td_proc;
	caddr_t addr;
	u_int uticks;

	thread_lock(td);
	if (td->td_mailbox == NULL) {
		thread_unlock(td);
		return (-1);
	}
	thread_unlock(td);

	if ((uticks = td->td_uuticks) != 0) {
		td->td_uuticks = 0;
		addr = (caddr_t)&td->td_mailbox->tm_uticks;
		if (suword32(addr, uticks+fuword32(addr)))
			goto error;
	}
	if ((uticks = td->td_usticks) != 0) {
		td->td_usticks = 0;
		addr = (caddr_t)&td->td_mailbox->tm_sticks;
		if (suword32(addr, uticks+fuword32(addr)))
			goto error;
	}
	return (0);

error:
	PROC_LOCK(p);
	psignal(p, SIGSEGV);
	PROC_UNLOCK(p);
	return (-2);
}
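
/*
 * thread_alloc_spare() below returns 1 if the thread already has, or
 * could be given, a standin thread and 0 if the allocation failed.
 * The spare is prepared up front (zeroed, credential held, marked
 * TDF_INMEM) so that thread_schedule_upcall() can use it without
 * allocating or taking additional locks.
 */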

/*
 * This function is intended to be used to initialize a spare thread
 * for upcall. Initialize thread's large data area outside the thread lock
 * for thread_schedule_upcall(). The crhold is also here to get it out
 * from the schedlock as it has a mutex op itself.
 * XXX BUG.. we need to get the cr ref after the thread has
 * checked and changed its own, not 6 months before...
 */
int
thread_alloc_spare(struct thread *td)
{
	struct thread *spare;

	if (td->td_standin)
		return (1);
	spare = thread_alloc();
	if (spare == NULL)
		return (0);
	td->td_standin = spare;
	bzero(&spare->td_startzero,
	    __rangeof(struct thread, td_startzero, td_endzero));
	spare->td_proc = td->td_proc;
	spare->td_ucred = crhold(td->td_ucred);
	spare->td_flags = TDF_INMEM;
	return (1);
}
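
/*
 * thread_schedule_upcall() below must be entered with the caller's
 * thread lock and kse_lock held (see the assertions).  It consumes
 * td_standin, makes the new thread the owner of the upcall, and leaves
 * it in TDS_CAN_RUN; the caller either puts it on a run queue or
 * switches to it directly.
 */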

/*
 * Create a thread and schedule it for upcall on the KSE given.
 * Use our thread's standin so that we don't have to allocate one.
 */
struct thread *
thread_schedule_upcall(struct thread *td, struct kse_upcall *ku)
{
	struct thread *td2;

	THREAD_LOCK_ASSERT(td, MA_OWNED);
	mtx_assert(&kse_lock, MA_OWNED);
	/*
	 * Schedule an upcall thread on specified kse_upcall,
	 * the kse_upcall must be free.
	 * td must have a spare thread.
	 */
	KASSERT(ku->ku_owner == NULL, ("%s: upcall has owner", __func__));
	if ((td2 = td->td_standin) != NULL) {
		td->td_standin = NULL;
	} else {
		panic("no reserve thread when scheduling an upcall");
		return (NULL);
	}
	CTR3(KTR_PROC, "thread_schedule_upcall: thread %p (pid %d, %s)",
	    td2, td->td_proc->p_pid, td->td_name);
	/*
	 * Bzero already done in thread_alloc_spare() because we can't
	 * do the crhold here because we are in schedlock already.
	 */
	bcopy(&td->td_startcopy, &td2->td_startcopy,
	    __rangeof(struct thread, td_startcopy, td_endcopy));
	sched_fork_thread(td, td2);
	thread_link(td2, ku->ku_proc);
	bcopy(ku->ku_proc->p_comm, td2->td_name, sizeof(td2->td_name));
	/* inherit parts of blocked thread's context as a good template */
	cpu_set_upcall(td2, td);
	/* Let the new thread become owner of the upcall */
	ku->ku_owner = td2;
	td2->td_upcall = ku;
	td2->td_pflags = TDP_SA|TDP_UPCALLING;
	td2->td_state = TDS_CAN_RUN;
	td2->td_inhibitors = 0;
	SIGFILLSET(td2->td_sigmask);
	SIG_CANTMASK(td2->td_sigmask);
	return (td2);	/* bogus.. should be a void function */
}
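
/*
 * thread_signal_add() below is entered with the proc lock and ps_mtx
 * held (see the assertions).  It drops ps_mtx, adds the signal to
 * td_sigmask, and drops the proc lock around the copyout of the siginfo
 * into the mailbox's tm_syncsig, reacquiring both locks before
 * returning; a failed copyout terminates the process via
 * sigexit(SIGSEGV).
 */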

/*
 * It is only used when a thread has generated a trap and the process is
 * being debugged.
 */
void
thread_signal_add(struct thread *td, ksiginfo_t *ksi)
{
	struct proc *p;
	struct sigacts *ps;
	int error;

	p = td->td_proc;
	PROC_LOCK_ASSERT(p, MA_OWNED);
	ps = p->p_sigacts;
	mtx_assert(&ps->ps_mtx, MA_OWNED);

	mtx_unlock(&ps->ps_mtx);
	SIGADDSET(td->td_sigmask, ksi->ksi_signo);
	PROC_UNLOCK(p);
	error = copyout(&ksi->ksi_info, &td->td_mailbox->tm_syncsig,
	    sizeof(siginfo_t));
	if (error) {
		PROC_LOCK(p);
		sigexit(td, SIGSEGV);
	}
	PROC_LOCK(p);
	mtx_lock(&ps->ps_mtx);
}
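
/*
 * thread_switchout() below runs at context-switch time with the thread
 * lock held.  When an unbound SA thread with a standin blocks on a
 * sleep queue, or is suspended for a reason other than a process-wide
 * stop, its upcall is handed to a newly scheduled upcall thread so the
 * UTS can be told about the blockage.  For an involuntary switch, or
 * when a next thread has already been chosen, that new thread is simply
 * added to the run queue; otherwise it is returned so the caller
 * switches to it directly.
 */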

#include "opt_sched.h"
struct thread *
thread_switchout(struct thread *td, int flags, struct thread *nextthread)
{
	struct kse_upcall *ku;
	struct thread *td2;

	THREAD_LOCK_ASSERT(td, MA_OWNED);

	/*
	 * If the outgoing thread is in threaded group and has never
	 * scheduled an upcall, decide whether this is a short
	 * or long term event and thus whether or not to schedule
	 * an upcall.
	 * If it is a short term event, just suspend it in
	 * a way that takes its KSE with it.
	 * Select the events for which we want to schedule upcalls.
	 * For now it's just sleep or if thread is suspended but
	 * process wide suspending flag is not set (debugger
	 * suspends thread).
	 * XXXKSE eventually almost any inhibition could do.
	 */
	if (TD_CAN_UNBIND(td) && (td->td_standin) &&
	    (TD_ON_SLEEPQ(td) || (TD_IS_SUSPENDED(td) &&
	    !P_SHOULDSTOP(td->td_proc)))) {
		/*
		 * Release ownership of upcall, and schedule an upcall
		 * thread, this new upcall thread becomes the owner of
		 * the upcall structure. It will be ahead of us in the
		 * run queue, so as we are stopping, it should either
		 * start up immediately, or at least before us if
		 * we release our slot.
		 */
		mtx_lock_spin(&kse_lock);
		ku = td->td_upcall;
		ku->ku_owner = NULL;
		td->td_upcall = NULL;
		td->td_pflags &= ~TDP_CAN_UNBIND;
		td2 = thread_schedule_upcall(td, ku);
		mtx_unlock_spin(&kse_lock);
		if (flags & SW_INVOL || nextthread) {
			thread_lock(td2);
			sched_add(td2, SRQ_YIELDING);
			thread_unlock(td2);
		} else {
			/* Keep up with reality.. we have one extra thread
			 * in the picture.. and it's 'running'.
			 */
			return td2;
		}
	}
	return (nextthread);
}
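
/*
 * thread_user_enter() below is called when a thread enters the kernel
 * from user mode.  It first honours a pending process stop via
 * thread_suspend_check(), returns immediately for threads not running
 * under the SA machinery, and then looks up the thread's upcall and
 * mailbox for the remainder of the kernel-entry path.
 */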
|
|
|
|
|
2002-10-24 23:09:48 +00:00
|
|
|
/*
|
2003-02-17 05:14:26 +00:00
|
|
|
* Setup done on the thread when it enters the kernel.
|
2002-10-24 23:09:48 +00:00
|
|
|
*/
|
|
|
|
void
|
2004-08-31 07:34:54 +00:00
|
|
|
thread_user_enter(struct thread *td)
|
2002-10-24 23:09:48 +00:00
|
|
|
{
|
2004-10-06 00:49:41 +00:00
|
|
|
struct proc *p = td->td_proc;
|
2003-02-17 05:14:26 +00:00
|
|
|
struct kse_upcall *ku;
|
2003-04-21 07:27:59 +00:00
|
|
|
struct kse_thr_mailbox *tmbx;
|
Add code to support debugging threaded process.
1. Add tm_lwpid into kse_thr_mailbox to indicate which kernel
thread current user thread is running on. Add tm_dflags into
kse_thr_mailbox, the flags is written by debugger, it tells
UTS and kernel what should be done when the process is being
debugged, current, there two flags TMDF_SSTEP and TMDF_DONOTRUNUSER.
TMDF_SSTEP is used to tell kernel to turn on single stepping,
or turn off if it is not set.
TMDF_DONOTRUNUSER is used to tell kernel to schedule upcall
whenever possible, to UTS, it means do not run the user thread
until debugger clears it, this behaviour is necessary because
gdb wants to resume only one thread when the thread's pc is
at a breakpoint, and thread needs to go forward, in order to
avoid other threads sneak pass the breakpoints, it needs to remove
breakpoint, only wants one thread to go. Also, add km_lwp to
kse_mailbox, the lwp id is copied to kse_thr_mailbox at context
switch time when process is not being debugged, so when process
is attached, debugger can map kernel thread to user thread.
2. Add p_xthread to proc strcuture and td_xsig to thread structure.
p_xthread is used by a thread when it wants to report event
to debugger, every thread can set the pointer, especially, when
it is used in ptracestop, it is the last thread reporting event
will win the race. Every thread has a td_xsig to exchange signal
with debugger, thread uses TDF_XSIG flag to indicate it is reporting
signal to debugger, if the flag is not cleared, thread will keep
retrying until it is cleared by debugger, p_xthread may be
used by debugger to indicate CURRENT thread. The p_xstat is still
in proc structure to keep wait() to work, in future, we may
just use td_xsig.
3. Add TDF_DBSUSPEND flag, the flag is used by debugger to suspend
a thread. When process stops, debugger can set the flag for
thread, thread will check the flag in thread_suspend_check,
enters a loop, unless it is cleared by debugger, process is
detached or process is existing. The flag is also checked in
ptracestop, so debugger can temporarily suspend a thread even
if the thread wants to exchange signal.
4. Current, in ptrace, we always resume all threads, but if a thread
has already a TDF_DBSUSPEND flag set by debugger, it won't run.
Encouraged by: marcel, julian, deischen
2004-07-13 07:33:40 +00:00
|
|
|
uint32_t flags;
|
2003-04-21 07:27:59 +00:00
|
|
|
|
2004-10-06 00:49:41 +00:00
|
|
|
/*
|
|
|
|
* First check that we shouldn't just abort. we
|
|
|
|
* can suspend it here or just exit.
|
|
|
|
*/
|
|
|
|
if (__predict_false(P_SHOULDSTOP(p))) {
|
|
|
|
PROC_LOCK(p);
|
|
|
|
thread_suspend_check(0);
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
}
|
|
|
|
|
Add code to support debugging threaded process.
1. Add tm_lwpid into kse_thr_mailbox to indicate which kernel
thread current user thread is running on. Add tm_dflags into
kse_thr_mailbox, the flags is written by debugger, it tells
UTS and kernel what should be done when the process is being
debugged, current, there two flags TMDF_SSTEP and TMDF_DONOTRUNUSER.
TMDF_SSTEP is used to tell kernel to turn on single stepping,
or turn off if it is not set.
TMDF_DONOTRUNUSER is used to tell kernel to schedule upcall
whenever possible, to UTS, it means do not run the user thread
until debugger clears it, this behaviour is necessary because
gdb wants to resume only one thread when the thread's pc is
at a breakpoint, and thread needs to go forward, in order to
avoid other threads sneak pass the breakpoints, it needs to remove
breakpoint, only wants one thread to go. Also, add km_lwp to
kse_mailbox, the lwp id is copied to kse_thr_mailbox at context
switch time when process is not being debugged, so when process
is attached, debugger can map kernel thread to user thread.
2. Add p_xthread to proc strcuture and td_xsig to thread structure.
p_xthread is used by a thread when it wants to report event
to debugger, every thread can set the pointer, especially, when
it is used in ptracestop, it is the last thread reporting event
will win the race. Every thread has a td_xsig to exchange signal
with debugger, thread uses TDF_XSIG flag to indicate it is reporting
signal to debugger, if the flag is not cleared, thread will keep
retrying until it is cleared by debugger, p_xthread may be
used by debugger to indicate CURRENT thread. The p_xstat is still
in proc structure to keep wait() to work, in future, we may
just use td_xsig.
3. Add TDF_DBSUSPEND flag, the flag is used by debugger to suspend
a thread. When process stops, debugger can set the flag for
thread, thread will check the flag in thread_suspend_check,
enters a loop, unless it is cleared by debugger, process is
detached or process is existing. The flag is also checked in
ptracestop, so debugger can temporarily suspend a thread even
if the thread wants to exchange signal.
4. Current, in ptrace, we always resume all threads, but if a thread
has already a TDF_DBSUSPEND flag set by debugger, it won't run.
Encouraged by: marcel, julian, deischen
2004-07-13 07:33:40 +00:00
|
|
|
if (!(td->td_pflags & TDP_SA))
|
|
|
|
return;
|
|
|
|
|
2002-10-24 23:09:48 +00:00
|
|
|
/*
|
|
|
|
* If we are doing a syscall in a KSE environment,
|
Add code to support debugging threaded process.
1. Add tm_lwpid into kse_thr_mailbox to indicate which kernel
thread current user thread is running on. Add tm_dflags into
kse_thr_mailbox, the flags is written by debugger, it tells
UTS and kernel what should be done when the process is being
debugged, current, there two flags TMDF_SSTEP and TMDF_DONOTRUNUSER.
TMDF_SSTEP is used to tell kernel to turn on single stepping,
or turn off if it is not set.
TMDF_DONOTRUNUSER is used to tell kernel to schedule upcall
whenever possible, to UTS, it means do not run the user thread
until debugger clears it, this behaviour is necessary because
gdb wants to resume only one thread when the thread's pc is
at a breakpoint, and thread needs to go forward, in order to
avoid other threads sneak pass the breakpoints, it needs to remove
breakpoint, only wants one thread to go. Also, add km_lwp to
kse_mailbox, the lwp id is copied to kse_thr_mailbox at context
switch time when process is not being debugged, so when process
is attached, debugger can map kernel thread to user thread.
2. Add p_xthread to proc strcuture and td_xsig to thread structure.
p_xthread is used by a thread when it wants to report event
to debugger, every thread can set the pointer, especially, when
it is used in ptracestop, it is the last thread reporting event
will win the race. Every thread has a td_xsig to exchange signal
with debugger, thread uses TDF_XSIG flag to indicate it is reporting
signal to debugger, if the flag is not cleared, thread will keep
retrying until it is cleared by debugger, p_xthread may be
used by debugger to indicate CURRENT thread. The p_xstat is still
in proc structure to keep wait() to work, in future, we may
just use td_xsig.
3. Add TDF_DBSUSPEND flag, the flag is used by debugger to suspend
a thread. When process stops, debugger can set the flag for
thread, thread will check the flag in thread_suspend_check,
enters a loop, unless it is cleared by debugger, process is
detached or process is existing. The flag is also checked in
ptracestop, so debugger can temporarily suspend a thread even
if the thread wants to exchange signal.
4. Current, in ptrace, we always resume all threads, but if a thread
has already a TDF_DBSUSPEND flag set by debugger, it won't run.
Encouraged by: marcel, julian, deischen
2004-07-13 07:33:40 +00:00
|
|
|
* note where our mailbox is.
|
2002-10-24 23:09:48 +00:00
|
|
|
*/
|
2004-07-13 07:33:40 +00:00
|
|
|
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_lock(td);
|
2004-07-13 07:33:40 +00:00
|
|
|
ku = td->td_upcall;
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_unlock(td);
|
2004-07-13 07:33:40 +00:00
|
|
|
|
|
|
|
KASSERT(ku != NULL, ("no upcall owned"));
|
|
|
|
KASSERT(ku->ku_owner == td, ("wrong owner"));
|
|
|
|
KASSERT(!TD_CAN_UNBIND(td), ("can unbind"));
|
|
|
|
|
2007-11-05 11:36:16 +00:00
|
|
|
if (td->td_standin == NULL) {
|
|
|
|
if (!thread_alloc_spare(td)) {
|
|
|
|
PROC_LOCK(p);
|
|
|
|
if (kern_logsigexit)
|
|
|
|
log(LOG_INFO,
|
|
|
|
"pid %d (%s), uid %d: thread_alloc_spare failed\n",
|
|
|
|
p->p_pid, p->p_comm,
|
|
|
|
td->td_ucred ? td->td_ucred->cr_uid : -1);
|
|
|
|
sigexit(td, SIGSEGV); /* XXX ? */
|
|
|
|
/* panic("thread_user_enter: thread_alloc_spare failed"); */
|
|
|
|
}
|
|
|
|
}
|
2004-07-13 07:33:40 +00:00
|
|
|
ku->ku_mflags = fuword32((void *)&ku->ku_mailbox->km_flags);
|
|
|
|
tmbx = (void *)fuword((void *)&ku->ku_mailbox->km_curthread);
|
|
|
|
if ((tmbx == NULL) || (tmbx == (void *)-1L) ||
|
|
|
|
(ku->ku_mflags & KMF_NOUPCALL)) {
|
|
|
|
td->td_mailbox = NULL;
|
|
|
|
} else {
|
|
|
|
flags = fuword32(&tmbx->tm_flags);
|
|
|
|
/*
|
|
|
|
* On some architectures, the TP register points to the thread
|
|
|
|
* mailbox but not to the kse mailbox, and userland
|
|
|
|
* cannot atomically clear km_curthread; instead it can
|
|
|
|
* use the TP register and set TMF_NOUPCALL in the thread
|
|
|
|
* flags to indicate a critical region.
|
|
|
|
*/
|
|
|
|
if (flags & TMF_NOUPCALL) {
|
2003-02-17 05:14:26 +00:00
|
|
|
td->td_mailbox = NULL;
|
2002-10-24 23:09:48 +00:00
|
|
|
} else {
|
2004-07-13 07:33:40 +00:00
|
|
|
td->td_mailbox = tmbx;
|
2004-08-28 04:08:05 +00:00
|
|
|
td->td_pflags |= TDP_CAN_UNBIND;
|
2007-07-23 14:52:22 +00:00
|
|
|
PROC_LOCK(p);
|
2004-10-06 00:49:41 +00:00
|
|
|
if (__predict_false(p->p_flag & P_TRACED)) {
|
2004-07-13 07:33:40 +00:00
|
|
|
flags = fuword32(&tmbx->tm_dflags);
|
2004-08-03 02:23:06 +00:00
|
|
|
if (flags & TMDF_SUSPEND) {
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_lock(td);
|
2004-07-13 07:33:40 +00:00
|
|
|
/* fuword can block, check again */
|
|
|
|
if (td->td_upcall)
|
|
|
|
ku->ku_flags |= KUF_DOUPCALL;
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_unlock(td);
|
2004-07-13 07:33:40 +00:00
|
|
|
}
|
2003-08-05 12:00:55 +00:00
|
|
|
}
|
2007-07-23 14:52:22 +00:00
|
|
|
PROC_UNLOCK(p);
|
2002-10-24 23:09:48 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2002-09-16 19:26:48 +00:00
|
|
|
/*
|
|
|
|
* The extra work we go through if we are a threaded process when we
|
|
|
|
* return to userland.
|
2002-06-29 07:04:59 +00:00
|
|
|
*
|
|
|
|
* If we are a KSE process and returning to user mode, check for
|
|
|
|
* extra work to do before we return (e.g. for more syscalls
|
|
|
|
* to complete first). If we were in a critical section, we should
|
|
|
|
* just return to let it finish. Same if we were in the UTS (in
|
2002-09-16 19:26:48 +00:00
|
|
|
* which case the mailbox's context's busy indicator will be set).
|
|
|
|
* The only traps we support will have set the mailbox.
|
|
|
|
* We will clear it here.
|
2002-06-29 07:04:59 +00:00
|
|
|
*/
|
|
|
|
int
|
2002-09-23 06:14:30 +00:00
|
|
|
thread_userret(struct thread *td, struct trapframe *frame)
|
2002-06-29 07:04:59 +00:00
|
|
|
{
|
2003-02-17 05:14:26 +00:00
|
|
|
struct kse_upcall *ku;
|
2002-10-09 02:33:36 +00:00
|
|
|
struct proc *p;
|
2002-11-18 12:28:15 +00:00
|
|
|
struct timespec ts;
|
2006-12-06 06:34:57 +00:00
|
|
|
int error = 0, uts_crit;
|
2002-06-29 07:04:59 +00:00
|
|
|
|
2003-06-15 12:51:26 +00:00
|
|
|
/* Nothing to do for a bound thread */
|
2004-06-02 07:52:36 +00:00
|
|
|
if (!(td->td_pflags & TDP_SA))
|
2003-02-17 05:14:26 +00:00
|
|
|
return (0);
|
2002-10-09 02:33:36 +00:00
|
|
|
|
2002-09-16 19:26:48 +00:00
|
|
|
/*
|
2004-08-31 11:52:05 +00:00
|
|
|
* Update stat clock count for userland
|
2002-09-16 19:26:48 +00:00
|
|
|
*/
|
2004-08-31 11:52:05 +00:00
|
|
|
if (td->td_mailbox != NULL) {
|
|
|
|
thread_update_usr_ticks(td);
|
|
|
|
uts_crit = 0;
|
|
|
|
} else {
|
|
|
|
uts_crit = 1;
|
2003-02-17 05:14:26 +00:00
|
|
|
}
|
2002-12-28 01:23:07 +00:00
|
|
|
|
2004-08-31 11:52:05 +00:00
|
|
|
p = td->td_proc;
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_lock(td);
|
2004-08-31 11:52:05 +00:00
|
|
|
ku = td->td_upcall;
|
|
|
|
|
2004-01-10 18:34:01 +00:00
|
|
|
/*
|
2003-02-17 05:14:26 +00:00
|
|
|
* Optimisation:
|
|
|
|
* This thread has not started any upcall.
|
|
|
|
* If there is no work to report other than ourselves,
|
|
|
|
* then it can return directly to userland.
|
|
|
|
*/
|
2002-12-28 01:23:07 +00:00
|
|
|
if (TD_CAN_UNBIND(td)) {
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_unlock(td);
|
2004-08-28 04:08:05 +00:00
|
|
|
td->td_pflags &= ~TDP_CAN_UNBIND;
|
2003-03-31 22:49:17 +00:00
|
|
|
if ((td->td_flags & TDF_NEEDSIGCHK) == 0 &&
|
2006-12-06 06:34:57 +00:00
|
|
|
(p->p_completed == NULL) &&
|
2003-03-19 05:49:38 +00:00
|
|
|
(ku->ku_flags & KUF_DOUPCALL) == 0 &&
|
2006-12-06 06:34:57 +00:00
|
|
|
(p->p_upquantum && ticks < p->p_nextupcall)) {
|
2003-03-14 03:52:16 +00:00
|
|
|
nanotime(&ts);
|
2003-03-19 05:49:38 +00:00
|
|
|
error = copyout(&ts,
|
2003-03-14 03:52:16 +00:00
|
|
|
(caddr_t)&ku->ku_mailbox->km_timeofday,
|
|
|
|
sizeof(ts));
|
2003-03-11 02:59:50 +00:00
|
|
|
td->td_mailbox = 0;
|
2003-04-21 07:27:59 +00:00
|
|
|
ku->ku_mflags = 0;
|
2003-03-14 03:52:16 +00:00
|
|
|
if (error)
|
|
|
|
goto out;
|
2003-03-11 02:59:50 +00:00
|
|
|
return (0);
|
2002-12-28 01:23:07 +00:00
|
|
|
}
|
2003-07-17 22:45:33 +00:00
|
|
|
thread_export_context(td, 0);
|
2002-09-27 07:11:11 +00:00
|
|
|
/*
|
2003-02-17 05:14:26 +00:00
|
|
|
* There is something to report, and we own an upcall
|
2005-06-23 21:55:43 +00:00
|
|
|
* structure, so we can go to userland.
|
2003-02-17 05:14:26 +00:00
|
|
|
* Turn ourselves into an upcall thread.
|
2002-09-27 07:11:11 +00:00
|
|
|
*/
|
2003-06-15 03:18:58 +00:00
|
|
|
td->td_pflags |= TDP_UPCALLING;
|
2003-04-21 07:27:59 +00:00
|
|
|
} else if (td->td_mailbox && (ku == NULL)) {
|
2007-07-23 14:52:22 +00:00
|
|
|
thread_unlock(td);
|
2003-07-17 22:45:33 +00:00
|
|
|
thread_export_context(td, 1);
|
2003-03-11 00:07:53 +00:00
|
|
|
PROC_LOCK(p);
|
2006-12-06 06:34:57 +00:00
|
|
|
if (p->p_upsleeps)
|
|
|
|
wakeup(&p->p_completed);
|
2007-03-21 21:20:51 +00:00
|
|
|
WITNESS_WARN(WARN_PANIC, &p->p_mtx.lock_object,
|
2005-09-02 20:20:01 +00:00
|
|
|
"thread exiting in userret");
|
1. Change the prototypes of trapsignal and sendsig to take a ksiginfo_t *;
most changes in MD code are trivial.  Before this change, trapsignal and
sendsig took discrete parameters; now they use the member fields of the
ksiginfo_t structure.  For sendsig, this allows us to pass a POSIX
realtime signal value to user code.  (A hedged sketch of the new
trapsignal() path appears after this log message.)
2. Remove cpu_thread_siginfo; it is no longer needed because we now always
generate ksiginfo_t data and feed it to libpthread.
3. Add p_sigqueue to the proc structure to hold shared signals that were
blocked by all threads in the proc.
4. Add td_sigqueue to the thread structure to hold all signals delivered to
the thread.
5. i386 and amd64 now return POSIX standard si_code values; other arches
will be fixed.
6. In this sigqueue implementation, the pending signal set is kept as
before, and an extra siginfo list holds additional siginfo_t data for
signals.  Kernel code still uses psignal(), which behaves as before and
won't fail even under memory pressure; the only exception is when
deleting a signal, where sigqueue_delete should be called to remove the
signal from the sigqueue rather than SIGDELSET.  Currently no kernel
code delivers a signal with additional data, so the kernel should be as
stable as before; a ksiginfo can carry more information, and a signal
may, for example, still be delivered with its siginfo data thrown away
if memory is short.  SIGKILL and SIGSTOP have a fast path in
sigqueue_add because they cannot be caught or masked.
The sigqueue() syscall allows user code to queue a signal to a target
process; if resources are unavailable, EAGAIN is returned as the
specification requires.
Just before a thread exits, its signal queue memory is freed by
sigqueue_flush.
Currently, all signals may be queued, not only realtime signals.
Earlier patch reviewed by: jhb, deischen
Tested on: i386, amd64
2005-10-14 12:43:47 +00:00
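A hedged sketch of the delivery path the log above describes: a trap
handler builds a ksiginfo_t and hands it to trapsignal(), which now takes
a ksiginfo_t * instead of discrete arguments.  The wrapper name
deliver_fault_signal() and the choice of si_code value are illustrative
assumptions, not code from this revision.

static void
deliver_fault_signal(struct thread *td, int sig, int code, void *addr)
{
        ksiginfo_t ksi;

        ksiginfo_init_trap(&ksi);       /* zero the structure, mark it trap-generated */
        ksi.ksi_signo = sig;            /* e.g. SIGSEGV */
        ksi.ksi_code = code;            /* POSIX si_code, e.g. SEGV_MAPERR */
        ksi.ksi_addr = addr;            /* faulting address reported to userland */
        trapsignal(td, &ksi);           /* new prototype: ksiginfo_t * */
}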
|
|
|
sigqueue_flush(&td->td_sigqueue);
|
Commit 7/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
synchronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
- Use a global kse spinlock to protect upcall and thread assignment.  The
per-process spinlock cannot be used because this lock must be acquired
via mi_switch(), where we already hold a thread lock.  The kse spinlock
is a leaf lock ordered after the process and thread spinlocks.
(A small sketch of the resulting locking split follows this log message.)
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:27 +00:00
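A minimal sketch of the locking split described above, using a
hypothetical helper request_upcall() invented here for illustration:
per-thread scheduling state is protected by thread_lock(td) rather than
the old global sched_lock, and per-process scheduling state by the
process spinlock, mirroring the pattern visible in the code below.

static void
request_upcall(struct thread *td, struct kse_upcall *ku)
{
        struct proc *p = td->td_proc;

        thread_lock(td);                /* per-thread scheduling state */
        td->td_flags |= TDF_ASTPENDING;
        thread_unlock(td);

        PROC_SLOCK(p);                  /* per-process scheduling state */
        ku->ku_flags |= KUF_DOUPCALL;
        PROC_SUNLOCK(p);
}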
|
|
|
PROC_SLOCK(p);
|
2003-03-11 00:07:53 +00:00
|
|
|
thread_stopped(p);
|
2002-12-28 01:23:07 +00:00
|
|
|
thread_exit();
|
2003-02-17 05:14:26 +00:00
|
|
|
/* NOTREACHED */
|
2007-07-23 14:52:22 +00:00
|
|
|
} else
|
|
|
|
thread_unlock(td);
|
2002-10-09 02:33:36 +00:00
|
|
|
|
2004-08-28 04:16:32 +00:00
|
|
|
KASSERT(ku != NULL, ("upcall is NULL"));
|
2003-02-20 01:11:17 +00:00
|
|
|
KASSERT(TD_CAN_UNBIND(td) == 0, ("can unbind"));
|
|
|
|
|
2007-07-23 14:52:22 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
PROC_SLOCK(p);
|
2003-02-20 01:11:17 +00:00
|
|
|
if (p->p_numthreads > max_threads_per_proc) {
|
|
|
|
max_threads_hits++;
|
|
|
|
while (p->p_numthreads > max_threads_per_proc) {
|
2006-12-06 06:34:57 +00:00
|
|
|
if (p->p_numupcalls >= max_threads_per_proc)
|
2003-02-20 01:11:17 +00:00
|
|
|
break;
|
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SUNLOCK(p);
|
2003-06-10 02:21:32 +00:00
|
|
|
if (msleep(&p->p_numthreads, &p->p_mtx, PPAUSE|PCATCH,
|
2005-09-30 06:09:41 +00:00
|
|
|
"maxthreads", hz/10) != EWOULDBLOCK) {
|
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SLOCK(p);
|
2003-06-11 01:08:33 +00:00
|
|
|
break;
|
2007-07-23 14:52:22 +00:00
|
|
|
} else
|
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SLOCK(p);
|
2003-02-20 01:11:17 +00:00
|
|
|
}
|
|
|
|
}
|
2007-07-23 14:52:22 +00:00
|
|
|
PROC_SUNLOCK(p);
|
|
|
|
PROC_UNLOCK(p);
|
2003-02-20 01:11:17 +00:00
|
|
|
|
2003-06-15 03:18:58 +00:00
|
|
|
if (td->td_pflags & TDP_UPCALLING) {
|
2003-04-21 07:27:59 +00:00
|
|
|
uts_crit = 0;
|
2006-12-06 06:34:57 +00:00
|
|
|
p->p_nextupcall = ticks + p->p_upquantum;
|
2004-01-10 18:34:01 +00:00
|
|
|
/*
|
2002-12-28 01:23:07 +00:00
|
|
|
* There is no more work to do and we are going to ride
|
2003-02-17 05:14:26 +00:00
|
|
|
* this thread up to userland as an upcall.
|
2002-12-28 01:23:07 +00:00
|
|
|
* Do the last parts of the setup needed for the upcall.
|
|
|
|
*/
|
|
|
|
CTR3(KTR_PROC, "userret: upcall thread %p (pid %d, %s)",
|
2007-11-14 06:21:24 +00:00
|
|
|
td, td->td_proc->p_pid, td->td_name);
|
2002-10-09 02:33:36 +00:00
|
|
|
|
2003-06-15 03:18:58 +00:00
|
|
|
td->td_pflags &= ~TDP_UPCALLING;
|
2003-06-15 12:51:26 +00:00
|
|
|
if (ku->ku_flags & KUF_DOUPCALL) {
|
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SLOCK(p);
|
2003-02-17 05:14:26 +00:00
|
|
|
ku->ku_flags &= ~KUF_DOUPCALL;
|
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SUNLOCK(p);
|
2003-06-15 12:51:26 +00:00
|
|
|
}
|
2003-04-21 07:27:59 +00:00
|
|
|
/*
|
|
|
|
* Set user context to the UTS
|
|
|
|
*/
|
|
|
|
if (!(ku->ku_mflags & KMF_NOUPCALL)) {
|
2005-04-23 02:32:32 +00:00
|
|
|
cpu_set_upcall_kse(td, ku->ku_func, ku->ku_mailbox,
|
|
|
|
&ku->ku_stack);
|
Close some races between procfs/ptrace and exit(2):
- Reorder the events in exit(2) slightly so that we trigger the S_EXIT
stop event earlier.  After we have signalled that, we set P_WEXIT and
then wait for any processes with a hold on the vmspace via PHOLD to
release it.  PHOLD now KASSERT()'s that P_WEXIT is clear when it is
invoked, and PRELE now does a wakeup if P_WEXIT is set and p_lock drops
to zero.
- Change proc_rwmem() to require that the process being read from has its
vmspace held via PHOLD by the caller, and get rid of all the junk to
screw around with the vmspace reference count, as we no longer need it.
(A hedged sketch of the resulting calling convention follows this log
message.)
- In ptrace() and pseudofs(), treat a process with P_WEXIT set as if it
doesn't exist.
- Only do one PHOLD in kern_ptrace() now, and do it earlier so it covers
FIX_SSTEP() (since on alpha at least this can end up calling proc_rwmem()
to clear an earlier single-step simulated via a breakpoint).  We only
do one to avoid races.  Also, by making the EINVAL error for unknown
requests part of the default: case in the switch, the various
switch cases can now just break out to return, which removes a _lot_ of
duplicated PRELE and proc unlocks, etc.  It also fixes at least one bug
where an LWP ptrace command could return EINVAL with the proc lock still
held.
- Changed the locking for ptrace_single_step(), ptrace_set_pc(), and
ptrace_clear_single_step() to always be called with the proc lock
held (it was a mixed bag previously).  Alpha and arm have to drop
the lock while they mess around with breakpoints, but other archs
avoid extra lock release/acquires in ptrace().  I did have to fix a
couple of other consumers in kern_kse and a few other places to
hold the proc lock and PHOLD.
Tested by: ps (1 mostly, but some bits of 2-4 as well)
MFC after: 1 week
2006-02-22 18:57:50 +00:00
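A hedged sketch of the calling convention described above: a caller of
proc_rwmem() holds the target process via PHOLD so its vmspace cannot go
away, and a process with P_WEXIT set is treated as if it did not exist.
The wrapper name read_target_word() and its error handling are
illustrative assumptions, not code from this revision.

static int
read_target_word(struct proc *p, void *uaddr, long *valp)
{
        struct uio uio;
        struct iovec iov;
        int error;

        PROC_LOCK(p);
        if ((p->p_flag & P_WEXIT) != 0) {       /* exiting: treat as gone */
                PROC_UNLOCK(p);
                return (ESRCH);
        }
        _PHOLD(p);                              /* pin the vmspace */
        PROC_UNLOCK(p);

        iov.iov_base = (caddr_t)valp;
        iov.iov_len = sizeof(*valp);
        uio.uio_iov = &iov;
        uio.uio_iovcnt = 1;
        uio.uio_offset = (off_t)(uintptr_t)uaddr;
        uio.uio_resid = sizeof(*valp);
        uio.uio_segflg = UIO_SYSSPACE;
        uio.uio_rw = UIO_READ;
        uio.uio_td = curthread;
        error = proc_rwmem(p, &uio);            /* vmspace is held by us */

        PRELE(p);
        return (error);
}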
|
|
|
PROC_LOCK(p);
|
|
|
|
if (p->p_flag & P_TRACED) {
|
|
|
|
_PHOLD(p);
|
2004-07-13 07:33:40 +00:00
|
|
|
ptrace_clear_single_step(td);
|
2006-02-22 18:57:50 +00:00
|
|
|
_PRELE(p);
|
|
|
|
}
|
|
|
|
PROC_UNLOCK(p);
|
2004-07-13 07:33:40 +00:00
|
|
|
error = suword32(&ku->ku_mailbox->km_lwp,
|
2004-08-09 21:57:30 +00:00
|
|
|
td->td_tid);
|
2004-07-13 07:33:40 +00:00
|
|
|
if (error)
|
|
|
|
goto out;
|
2003-04-21 07:27:59 +00:00
|
|
|
error = suword(&ku->ku_mailbox->km_curthread, 0);
|
|
|
|
if (error)
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2003-02-17 05:14:26 +00:00
|
|
|
/*
|
2002-12-28 01:23:07 +00:00
|
|
|
* Unhook the list of completed threads.
|
2004-01-10 18:34:01 +00:00
|
|
|
* Anything that completes after this gets to
|
2002-12-28 01:23:07 +00:00
|
|
|
* come in next time.
|
|
|
|
* Put the list of completed thread mailboxes on
|
|
|
|
* this KSE's mailbox.
|
|
|
|
*/
|
2003-04-21 07:27:59 +00:00
|
|
|
if (!(ku->ku_mflags & KMF_NOCOMPLETED) &&
|
2006-12-06 06:34:57 +00:00
|
|
|
(error = thread_link_mboxes(p, ku)) != 0)
|
2003-02-19 04:01:55 +00:00
|
|
|
goto out;
|
2003-04-21 07:27:59 +00:00
|
|
|
}
|
|
|
|
if (!uts_crit) {
|
2002-11-18 12:28:15 +00:00
|
|
|
nanotime(&ts);
|
2003-04-21 07:27:59 +00:00
|
|
|
error = copyout(&ts, &ku->ku_mailbox->km_timeofday, sizeof(ts));
|
2003-02-19 04:01:55 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
if (error) {
|
|
|
|
/*
|
2003-02-19 13:40:24 +00:00
|
|
|
* Things are going to be so screwed we should just kill
|
|
|
|
* the process.
|
2003-02-19 04:01:55 +00:00
|
|
|
* How do we do that?
|
|
|
|
*/
|
2004-10-06 00:49:41 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
psignal(p, SIGSEGV);
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* Optimisation:
|
|
|
|
* Ensure that we have a spare thread available,
|
|
|
|
* for when we re-enter the kernel.
|
|
|
|
*/
|
|
|
|
if (td->td_standin == NULL)
|
2007-11-05 11:36:16 +00:00
|
|
|
thread_alloc_spare(td); /* XXX care of failure ? */
|
2002-11-18 12:28:15 +00:00
|
|
|
}
|
2002-12-28 01:23:07 +00:00
|
|
|
|
2003-04-21 07:27:59 +00:00
|
|
|
ku->ku_mflags = 0;
|
2002-12-28 01:23:07 +00:00
|
|
|
td->td_mailbox = NULL;
|
2003-02-17 05:14:26 +00:00
|
|
|
td->td_usticks = 0;
|
2002-10-09 02:33:36 +00:00
|
|
|
return (error); /* go sync */
|
2002-06-29 07:04:59 +00:00
|
|
|
}
|
|
|
|
|
2004-07-13 07:33:40 +00:00
|
|
|
/*
|
|
|
|
* Called after ptrace has resumed a process: force all
|
|
|
|
* virtual CPUs to schedule an upcall for an SA process,
|
|
|
|
* because the debugger may have changed something in userland
|
|
|
|
* and the UTS should notice it as soon as possible.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
thread_continued(struct proc *p)
|
|
|
|
{
|
|
|
|
struct kse_upcall *ku;
|
|
|
|
struct thread *td;
|
|
|
|
|
|
|
|
PROC_LOCK_ASSERT(p, MA_OWNED);
|
2005-08-19 22:30:13 +00:00
|
|
|
KASSERT(P_SHOULDSTOP(p), ("process not stopped"));
|
2004-07-13 07:33:40 +00:00
|
|
|
|
|
|
|
if (!(p->p_flag & P_SA))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (p->p_flag & P_TRACED) {
|
2006-12-06 06:34:57 +00:00
|
|
|
td = TAILQ_FIRST(&p->p_threads);
|
|
|
|
if (td && (td->td_pflags & TDP_SA)) {
|
|
|
|
FOREACH_UPCALL_IN_PROC(p, ku) {
|
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SLOCK(p);
|
2004-07-13 07:33:40 +00:00
|
|
|
ku->ku_flags |= KUF_DOUPCALL;
|
2007-06-04 23:54:27 +00:00
|
|
|
PROC_SUNLOCK(p);
|
2006-12-06 06:34:57 +00:00
|
|
|
wakeup(&p->p_completed);
|
2004-07-13 07:33:40 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2006-10-26 21:42:22 +00:00
|
|
|
#endif
|