/*
 * Copyright (c) 1982, 1986, 1989, 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 * (c) UNIX System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of UNIX System Laboratories, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)kern_exit.c	8.7 (Berkeley) 2/12/94
 * $FreeBSD$
 */

#include "opt_compat.h"
#include "opt_ktrace.h"

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/sysproto.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/pioctl.h>
#include <sys/tty.h>
#include <sys/wait.h>
#include <sys/vnode.h>
#include <sys/resourcevar.h>
#include <sys/signalvar.h>
#include <sys/ptrace.h>
#include <sys/acct.h>		/* for acct_process() function prototype */
#include <sys/filedesc.h>
#include <sys/shm.h>
#include <sys/sem.h>
#include <sys/aio.h>
#include <sys/jail.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <sys/lock.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_zone.h>
#include <sys/user.h>

/* Required to be non-static for SysVR4 emulator */
MALLOC_DEFINE(M_ZOMBIE, "zombie", "zombie proc status");

static MALLOC_DEFINE(M_ATEXIT, "atexit", "atexit callback");

static int wait1 __P((struct proc *, struct wait_args *, int));

/*
 * callout list for things to do at exit time
 */
struct exitlist {
	exitlist_fn function;
	TAILQ_ENTRY(exitlist) next;
};

TAILQ_HEAD(exit_list_head, exitlist);
static struct exit_list_head exit_list = TAILQ_HEAD_INITIALIZER(exit_list);
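
/*
 * Usage sketch for the exit callout list (illustrative only; "foo" is a
 * hypothetical subsystem, not something defined in this file).  A
 * loadable module that keeps per-process state registers a handler of
 * type exitlist_fn so exit1() will run it for every dying process, and
 * removes it again before unloading:
 *
 *	static void foo_proc_exit(struct proc *p) { ... }
 *
 *	error = at_exit(foo_proc_exit);		at load time (may ENOMEM)
 *	rm_at_exit(foo_proc_exit);		at unload time
 *
 * at_exit() and rm_at_exit() themselves are defined near the bottom of
 * this file.
 */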

/*
 * exit --
 *	Death of process.
 */
void
sys_exit(p, uap)
	struct proc *p;
	struct sys_exit_args /* {
		int	rval;
	} */ *uap;
{

	exit1(p, W_EXITCODE(uap->rval, 0));
	/* NOTREACHED */
}
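
/*
 * For reference, a sketch of the classic BSD wait-status encoding used
 * by W_EXITCODE() above (see <sys/wait.h> for the real macros): the low
 * 7 bits hold the terminating signal and the next 8 bits the exit code,
 * so a normal exit is
 *
 *	W_EXITCODE(rval, 0) == ((rval) << 8) | 0
 *
 * WEXITSTATUS() and WTERMSIG(), used when init dies in exit1() below,
 * unpack those same fields, and W_STOPCODE(sig) in wait1() marks a
 * stopped child by placing 0177 in the signal field.
 */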

/*
 * Exit: deallocate address space and other resources, change proc state
 * to zombie, and unlink proc from allproc and parent's lists.  Save exit
 * status and rusage for wait().  Check for child processes and orphan them.
 */
void
exit1(p, rv)
	register struct proc *p;
	int rv;
{
	register struct proc *q, *nq;
	register struct vmspace *vm;
	struct exitlist *ep;

	if (p->p_pid == 1) {
		printf("init died (signal %d, exit %d)\n",
		    WTERMSIG(rv), WEXITSTATUS(rv));
		panic("Going nowhere without my init!");
	}

	aio_proc_rundown(p);

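	/*
	 * A note on the task-leader logic that follows: p_peers chains
	 * the processes created by rfork(2) with RFTHREAD sharing, and
	 * p_leader points at the head of that group.  The leader is
	 * expected to outlive its peers, so on its way out it sends each
	 * peer SIGKILL and sleeps until the peer list drains.
	 */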
	/* are we a task leader? */
	PROC_LOCK(p);
	if (p == p->p_leader) {
		struct kill_args killArgs;

		killArgs.signum = SIGKILL;
		q = p->p_peers;
		while (q) {
			killArgs.pid = q->p_pid;
			/*
			 * The interface for kill is better
			 * than the internal signal
			 */
			PROC_UNLOCK(p);
			kill(p, &killArgs);
			PROC_LOCK(p);
			nq = q;
			q = q->p_peers;
		}
		while (p->p_peers)
			msleep((caddr_t)p, &p->p_mtx, PWAIT, "exit1", 0);
	}
	PROC_UNLOCK(p);

#ifdef PGINPROF
	vmsizmon();
#endif
	STOPEVENT(p, S_EXIT, rv);
	wakeup(&p->p_stype);	/* Wakeup anyone in procfs' PIOCWAIT */

	/*
	 * Check if any loadable modules need anything done at process exit.
	 * e.g. SYSV IPC stuff
	 * XXX what if one of these generates an error?
	 */
	TAILQ_FOREACH(ep, &exit_list, next)
		(*ep->function)(p);

	stopprofclock(p);

	MALLOC(p->p_ru, struct rusage *, sizeof(struct rusage),
		M_ZOMBIE, M_WAITOK);
	/*
	 * If parent is waiting for us to exit or exec,
	 * P_PPWAIT is set; we will wakeup the parent below.
	 */
	PROC_LOCK(p);
	p->p_flag &= ~(P_TRACED | P_PPWAIT);
	p->p_flag |= P_WEXIT;
	SIGEMPTYSET(p->p_siglist);
	PROC_UNLOCK(p);
	if (timevalisset(&p->p_realtimer.it_value))
		callout_stop(&p->p_itcallout);
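
	/*
	 * From this point on, p carries P_WEXIT: the process is committed
	 * to exiting, and the remaining teardown steps below may still
	 * block.
	 */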

	/*
	 * Reset any sigio structures pointing to us as a result of
	 * F_SETOWN with our pid.
	 */
	funsetownlst(&p->p_sigiolst);

	/*
	 * Close open files and release open-file table.
	 * This may block!
	 */
	fdfree(p);

	/*
	 * Remove ourself from our leader's peer list and wake our leader.
	 */
	PROC_LOCK(p);
	if (p->p_leader->p_peers) {
		q = p->p_leader;
		while (q->p_peers != p)
			q = q->p_peers;
		q->p_peers = p->p_peers;
		wakeup((caddr_t)p->p_leader);
	}
	PROC_UNLOCK(p);

	/*
	 * XXX Shutdown SYSV semaphores
	 */
	semexit(p);

	/* The next two chunks should probably be moved to vmspace_exit. */
	vm = p->p_vmspace;
	/*
	 * Release user portion of address space.
	 * This releases references to vnodes,
	 * which could cause I/O if the file has been unlinked.
	 * Need to do this early enough that we can still sleep.
	 * Can't free the entire vmspace as the kernel stack
	 * may be mapped within that space also.
	 */
	if (vm->vm_refcnt == 1) {
		if (vm->vm_shm)
			shmexit(p);
		pmap_remove_pages(vmspace_pmap(vm), VM_MIN_ADDRESS,
		    VM_MAXUSER_ADDRESS);
		(void) vm_map_remove(&vm->vm_map, VM_MIN_ADDRESS,
		    VM_MAXUSER_ADDRESS);
	}

	PROC_LOCK(p);
	if (SESS_LEADER(p)) {
		register struct session *sp = p->p_session;

		PROC_UNLOCK(p);
		if (sp->s_ttyvp) {
			/*
			 * Controlling process.
			 * Signal foreground pgrp,
			 * drain controlling terminal
			 * and revoke access to controlling terminal.
			 */
			if (sp->s_ttyp && (sp->s_ttyp->t_session == sp)) {
				if (sp->s_ttyp->t_pgrp)
					pgsignal(sp->s_ttyp->t_pgrp, SIGHUP, 1);
				(void) ttywait(sp->s_ttyp);
				/*
				 * The tty could have been revoked
				 * if we blocked.
				 */
				if (sp->s_ttyvp)
					VOP_REVOKE(sp->s_ttyvp, REVOKEALL);
			}
			if (sp->s_ttyvp)
				vrele(sp->s_ttyvp);
			sp->s_ttyvp = NULL;
			/*
			 * s_ttyp is not zero'd; we use this to indicate
			 * that the session once had a controlling terminal.
			 * (for logging and informational purposes)
			 */
		}
		sp->s_leader = NULL;
	} else
		PROC_UNLOCK(p);
	fixjobc(p, p->p_pgrp, 0);
	(void)acct_process(p);
#ifdef KTRACE
	/*
	 * release trace file
	 */
	p->p_traceflag = 0;	/* don't trace the vrele() */
	if (p->p_tracep)
		vrele(p->p_tracep);
#endif
	/*
	 * Remove proc from allproc queue and pidhash chain.
	 * Place onto zombproc.  Unlink from parent's child list.
	 */
	ALLPROC_LOCK(AP_EXCLUSIVE);
	LIST_REMOVE(p, p_list);
	LIST_INSERT_HEAD(&zombproc, p, p_list);
	LIST_REMOVE(p, p_hash);
	ALLPROC_LOCK(AP_RELEASE);

	PROCTREE_LOCK(PT_EXCLUSIVE);
	q = LIST_FIRST(&p->p_children);
	if (q != NULL)		/* only need this if any child is S_ZOMB */
		wakeup((caddr_t) initproc);
	for (; q != NULL; q = nq) {
		nq = LIST_NEXT(q, p_sibling);
		PROC_LOCK(q);
		proc_reparent(q, initproc);
		q->p_sigparent = SIGCHLD;
		/*
		 * Traced processes are killed
		 * since their existence means someone is screwing up.
		 */
		if (q->p_flag & P_TRACED) {
			q->p_flag &= ~P_TRACED;
			psignal(q, SIGKILL);
		}
		PROC_UNLOCK(q);
	}
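
	/*
	 * The children handed to init above will be reaped by init,
	 * which always waits for its children; that is also why initproc
	 * is woken first, in case one of them is already a zombie that
	 * can be collected immediately.
	 */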

	/*
	 * Save exit status and final rusage info, adding in child rusage
	 * info and self times.
	 */
	p->p_xstat = rv;
	*p->p_ru = p->p_stats->p_ru;
	mtx_lock_spin(&sched_lock);
	calcru(p, &p->p_ru->ru_utime, &p->p_ru->ru_stime, NULL);
	mtx_unlock_spin(&sched_lock);
	ruadd(p->p_ru, &p->p_stats->p_cru);

	/*
	 * Pretend that an mi_switch() to the next process occurs now.  We
	 * must set `switchtime' directly since we will call cpu_switch()
	 * directly.  Set it now so that the rest of the exit time gets
	 * counted somewhere if possible.
	 */
	mtx_lock_spin(&sched_lock);
	microuptime(PCPU_PTR(switchtime));
	PCPU_SET(switchticks, ticks);
	mtx_unlock_spin(&sched_lock);

	/*
	 * notify interested parties of our demise.
	 */
	PROC_LOCK(p);
	KNOTE(&p->p_klist, NOTE_EXIT);

	/*
	 * Notify parent that we're gone.  If parent has the PS_NOCLDWAIT
	 * flag set, notify process 1 instead (and hope it will handle
	 * this situation).
	 */
	if (p->p_pptr->p_procsig->ps_flag & PS_NOCLDWAIT) {
		struct proc *pp = p->p_pptr;

		proc_reparent(p, initproc);
		/*
		 * If this was the last child of our parent, notify
		 * parent, so in case he was wait(2)ing, he will
		 * continue.
		 */
		if (LIST_EMPTY(&pp->p_children))
			wakeup((caddr_t)pp);
	}

	PROC_LOCK(p->p_pptr);
	if (p->p_sigparent && p->p_pptr != initproc)
		psignal(p->p_pptr, p->p_sigparent);
	else
		psignal(p->p_pptr, SIGCHLD);
	PROC_UNLOCK(p->p_pptr);

	PROC_UNLOCK(p);
	PROCTREE_LOCK(PT_RELEASE);

	/*
	 * Clear curproc after we've done all operations
	 * that could block, and before tearing down the rest
	 * of the process state that might be used from clock, etc.
	 * Also, can't clear curproc while we're still runnable,
	 * as we're not on a run queue (we are current, just not
	 * a proper proc any longer!).
	 *
	 * Other substructures are freed from wait().
	 */
	mtx_assert(&Giant, MA_OWNED);
	if (--p->p_limit->p_refcnt == 0) {
		FREE(p->p_limit, M_SUBPROC);
		p->p_limit = NULL;
	}

	/*
	 * Finally, call machine-dependent code to release the remaining
	 * resources including address space, the kernel stack and pcb.
	 * The address space is released by "vmspace_free(p->p_vmspace)";
	 * This is machine-dependent, as we may have to change stacks
	 * or ensure that the current one isn't reallocated before we
	 * finish.  cpu_exit will end with a call to cpu_switch(), finishing
	 * our execution (pun intended).
	 */
	cpu_exit(p);
}

#ifdef COMPAT_43
int
owait(p, uap)
	struct proc *p;
	register struct owait_args /* {
		int	dummy;
	} */ *uap;
{
	struct wait_args w;

	w.options = 0;
	w.rusage = NULL;
	w.pid = WAIT_ANY;
	w.status = NULL;
	return (wait1(p, &w, 1));
}
#endif /* COMPAT_43 */

int
wait4(p, uap)
	struct proc *p;
	struct wait_args *uap;
{

	return (wait1(p, uap, 0));
}
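
/*
 * Userland view (illustrative only, not part of this file): wait4() is
 * the primitive that libc builds wait(2) and waitpid(2) on, e.g.
 *
 *	int status;
 *	pid_t pid = waitpid(-1, &status, WNOHANG);
 *	if (pid > 0 && WIFEXITED(status))
 *		printf("child %ld exited with %d\n",
 *		    (long)pid, WEXITSTATUS(status));
 *
 * Both entry points funnel into wait1() below; the "compat" argument
 * selects the old 4.3BSD convention of returning the status in the
 * second return register rather than copying it out through a pointer.
 */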

static int
wait1(q, uap, compat)
	register struct proc *q;
	register struct wait_args /* {
		int	pid;
		int	*status;
		int	options;
		struct	rusage *rusage;
	} */ *uap;
	int compat;
{
	register int nfound;
	register struct proc *p, *t;
	int status, error;

	if (uap->pid == 0)
		uap->pid = -q->p_pgid;
	if (uap->options &~ (WUNTRACED|WNOHANG|WLINUXCLONE))
		return (EINVAL);
loop:
	nfound = 0;
	PROCTREE_LOCK(PT_SHARED);
	LIST_FOREACH(p, &q->p_children, p_sibling) {
		if (uap->pid != WAIT_ANY &&
		    p->p_pid != uap->pid && p->p_pgid != -uap->pid)
			continue;

		/*
		 * This special case handles a kthread spawned by linux_clone
		 * (see linux_misc.c).  The linux_wait4 and linux_waitpid
		 * functions need to be able to distinguish between waiting
		 * on a process and waiting on a thread.  It is a thread if
		 * p_sigparent is not SIGCHLD, and the WLINUXCLONE option
		 * signifies we want to wait for threads and not processes.
		 */
		PROC_LOCK(p);
		if ((p->p_sigparent != SIGCHLD) ^
		    ((uap->options & WLINUXCLONE) != 0)) {
			PROC_UNLOCK(p);
			continue;
		}

		nfound++;
		mtx_lock_spin(&sched_lock);
		if (p->p_stat == SZOMB) {
			/* charge child's scheduling CPU usage to parent */
			if (curproc->p_pid != 1) {
				curproc->p_estcpu =
				    ESTCPULIM(curproc->p_estcpu + p->p_estcpu);
			}

			mtx_unlock_spin(&sched_lock);
			PROC_UNLOCK(p);
			PROCTREE_LOCK(PT_RELEASE);

			q->p_retval[0] = p->p_pid;
#ifdef COMPAT_43
			if (compat)
				q->p_retval[1] = p->p_xstat;
			else
#endif
			if (uap->status) {
				status = p->p_xstat;	/* convert to int */
				if ((error = copyout((caddr_t)&status,
				    (caddr_t)uap->status, sizeof(status))))
					return (error);
			}
			if (uap->rusage && (error = copyout((caddr_t)p->p_ru,
			    (caddr_t)uap->rusage, sizeof (struct rusage))))
				return (error);
			/*
			 * If we got the child via a ptrace 'attach',
			 * we need to give it back to the old parent.
			 */
			PROCTREE_LOCK(PT_EXCLUSIVE);
			if (p->p_oppid) {
				if ((t = pfind(p->p_oppid)) != NULL) {
					PROC_LOCK(p);
					p->p_oppid = 0;
					proc_reparent(p, t);
					PROC_UNLOCK(p);
					PROC_LOCK(t);
					psignal(t, SIGCHLD);
					PROC_UNLOCK(t);
					PROCTREE_LOCK(PT_RELEASE);
					wakeup((caddr_t)t);
					return (0);
				}
			}
			PROCTREE_LOCK(PT_RELEASE);
			PROC_LOCK(p);
			p->p_xstat = 0;
			PROC_UNLOCK(p);
			ruadd(&q->p_stats->p_cru, p->p_ru);
			FREE(p->p_ru, M_ZOMBIE);
			p->p_ru = NULL;

			/*
			 * Decrement the count of procs running with this uid.
			 */
			(void)chgproccnt(p->p_cred->p_uidinfo, -1, 0);

			/*
			 * Release reference to text vnode
			 */
			if (p->p_textvp)
				vrele(p->p_textvp);

			/*
			 * Free up credentials.
			 */
			PROC_LOCK(p);
			if (--p->p_cred->p_refcnt == 0) {
				crfree(p->p_ucred);
				uifree(p->p_cred->p_uidinfo);
				FREE(p->p_cred, M_SUBPROC);
				p->p_cred = NULL;
			}

			/*
			 * Remove unused arguments
			 */
			if (p->p_args && --p->p_args->ar_ref == 0)
				FREE(p->p_args, M_PARGS);
			PROC_UNLOCK(p);

			/*
			 * Finally finished with old proc entry.
			 * Unlink it from its process group and free it.
			 */
			leavepgrp(p);

			ALLPROC_LOCK(AP_EXCLUSIVE);
			LIST_REMOVE(p, p_list);	/* off zombproc */
			ALLPROC_LOCK(AP_RELEASE);

			PROCTREE_LOCK(PT_EXCLUSIVE);
			LIST_REMOVE(p, p_sibling);
			PROCTREE_LOCK(PT_RELEASE);

			PROC_LOCK(p);
			if (--p->p_procsig->ps_refcnt == 0) {
				if (p->p_sigacts != &p->p_addr->u_sigacts)
					FREE(p->p_sigacts, M_SUBPROC);
				FREE(p->p_procsig, M_SUBPROC);
				p->p_procsig = NULL;
			}
			PROC_UNLOCK(p);

			/*
			 * Give machine-dependent layer a chance
			 * to free anything that cpu_exit couldn't
			 * release while still running in process context.
			 */
			cpu_wait(p);
			mtx_destroy(&p->p_mtx);
			zfree(proc_zone, p);
			nprocs--;
			return (0);
		}
		if (p->p_stat == SSTOP && (p->p_flag & P_WAITED) == 0 &&
		    (p->p_flag & P_TRACED || uap->options & WUNTRACED)) {
			mtx_unlock_spin(&sched_lock);
			p->p_flag |= P_WAITED;
			PROC_UNLOCK(p);
			PROCTREE_LOCK(PT_RELEASE);
			q->p_retval[0] = p->p_pid;
#ifdef COMPAT_43
			if (compat) {
				q->p_retval[1] = W_STOPCODE(p->p_xstat);
				error = 0;
			} else
#endif
			if (uap->status) {
				status = W_STOPCODE(p->p_xstat);
				error = copyout((caddr_t)&status,
				    (caddr_t)uap->status, sizeof(status));
			} else
				error = 0;
			return (error);
		}
		mtx_unlock_spin(&sched_lock);
		PROC_UNLOCK(p);
	}
	PROCTREE_LOCK(PT_RELEASE);
	if (nfound == 0)
		return (ECHILD);
	if (uap->options & WNOHANG) {
		q->p_retval[0] = 0;
		return (0);
	}
	if ((error = tsleep((caddr_t)q, PWAIT | PCATCH, "wait", 0)))
		return (error);
	goto loop;
}

/*
 * Make process 'parent' the new parent of process 'child'.
 * Must be called with an exclusive hold of proctree lock.
 */
void
proc_reparent(child, parent)
	register struct proc *child;
	register struct proc *parent;
{

	PROCTREE_ASSERT(PT_EXCLUSIVE);
	PROC_LOCK_ASSERT(child, MA_OWNED);
	if (child->p_pptr == parent)
		return;

	LIST_REMOVE(child, p_sibling);
	LIST_INSERT_HEAD(&parent->p_children, child, p_sibling);
	child->p_pptr = parent;
}

/*
 * The next two functions are to handle adding/deleting items on the
 * exit callout list.
 *
 * at_exit():
 * Take the arguments given and put them onto the exit callout list;
 * however, first make sure that it's not already there.
 * Returns 0 on success.
 */
int
at_exit(function)
	exitlist_fn function;
{
	struct exitlist *ep;

#ifdef INVARIANTS
	/* Be noisy if the programmer has lost track of things */
	if (rm_at_exit(function))
		printf("WARNING: exit callout entry (%p) already present\n",
		    function);
#endif
	ep = malloc(sizeof(*ep), M_ATEXIT, M_NOWAIT);
	if (ep == NULL)
		return (ENOMEM);
	ep->function = function;
	TAILQ_INSERT_TAIL(&exit_list, ep, next);
	return (0);
}

/*
 * Scan the exit callout list for the given item and remove it.
 * Returns the number of items removed (0 or 1).
 */
int
rm_at_exit(function)
	exitlist_fn function;
{
	struct exitlist *ep;

	TAILQ_FOREACH(ep, &exit_list, next) {
		if (ep->function == function) {
			TAILQ_REMOVE(&exit_list, ep, next);
			free(ep, M_ATEXIT);
			return (1);
		}
	}
	return (0);
}
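
/*
 * check_sigacts() below pulls the signal-action table back into the
 * process's u. area once it is no longer shared (ps_refcnt == 1), so
 * that the separately allocated struct sigacts can be freed.
 */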
void
check_sigacts(void)
{
	struct proc *p = curproc;
	struct sigacts *pss;
	int s;

	PROC_LOCK(p);
	if (p->p_procsig->ps_refcnt == 1 &&
	    p->p_sigacts != &p->p_addr->u_sigacts) {
		pss = p->p_sigacts;
		s = splhigh();
		p->p_addr->u_sigacts = *pss;
		p->p_sigacts = &p->p_addr->u_sigacts;
		splx(s);
		FREE(pss, M_SUBPROC);
	}
	PROC_UNLOCK(p);
}