/*
* Copyright (c) 1982, 1986, 1989, 1991, 1993
* The Regents of the University of California. All rights reserved.
* (c) UNIX System Laboratories, Inc.
* All or some portions of this file are derived from material licensed
* to the University of California by American Telephone and Telegraph
* Co. or Unix System Laboratories, Inc. and are reproduced herein with
* the permission of UNIX System Laboratories, Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by the University of
* California, Berkeley and its contributors.
* 4. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)kern_fork.c 8.6 (Berkeley) 4/8/94
* $FreeBSD$
*/
#include "opt_ktrace.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/sysproto.h>
#include <sys/filedesc.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/resourcevar.h>
#include <sys/syscall.h>
#include <sys/vnode.h>
#include <sys/acct.h>
#include <sys/ktr.h>
#include <sys/ktrace.h>
#include <sys/kthread.h>
#include <sys/unistd.h>
#include <sys/jail.h>
#include <sys/sx.h>
#include <vm/vm.h>
#include <sys/lock.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_extern.h>
#include <vm/vm_zone.h>
#include <sys/vmmeter.h>
#include <sys/user.h>
static MALLOC_DEFINE(M_ATFORK, "atfork", "atfork callback");
static int fast_vfork = 1;
SYSCTL_INT(_kern, OID_AUTO, fast_vfork, CTLFLAG_RW, &fast_vfork, 0,
"flag to indicate whether we have a fast vfork()");
/*
* These are the structures used to create a callout list for things to do
* when forking a process
*/
struct forklist {
forklist_fn function;
TAILQ_ENTRY(forklist) next;
};
static struct sx fork_list_lock;
TAILQ_HEAD(forklist_head, forklist);
static struct forklist_head fork_list = TAILQ_HEAD_INITIALIZER(fork_list);
#ifndef _SYS_SYSPROTO_H_
struct fork_args {
int dummy;
};
#endif
static void
init_fork_list(void *data __unused)
{
sx_init(&fork_list_lock, "fork list");
}
SYSINIT(fork_list, SI_SUB_INTRINSIC, SI_ORDER_ANY, init_fork_list, NULL);
/* ARGSUSED */
int
fork(p, uap)
struct proc *p;
struct fork_args *uap;
{
int error;
struct proc *p2;
error = fork1(p, RFFDG | RFPROC, &p2);
if (error == 0) {
p->p_retval[0] = p2->p_pid;
p->p_retval[1] = 0;
}
return error;
}
/* ARGSUSED */
int
vfork(p, uap)
struct proc *p;
struct vfork_args *uap;
{
int error;
struct proc *p2;
error = fork1(p, RFFDG | RFPROC | RFPPWAIT | RFMEM, &p2);
if (error == 0) {
p->p_retval[0] = p2->p_pid;
p->p_retval[1] = 0;
}
return error;
}
int
rfork(p, uap)
struct proc *p;
struct rfork_args *uap;
{
int error;
struct proc *p2;
/* mask kernel only flags out of the user flags */
error = fork1(p, uap->flags & ~RFKERNELONLY, &p2);
if (error == 0) {
p->p_retval[0] = p2 ? p2->p_pid : 0;
p->p_retval[1] = 0;
}
return error;
}
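/*
 * For reference: all three wrappers above funnel into fork1() with
 * different flag sets.  fork(2) passes RFFDG | RFPROC; vfork(2) adds
 * RFPPWAIT | RFMEM so the child shares the parent's address space and
 * the parent waits for it; rfork(2) passes the caller's flags through
 * after masking out the kernel-only bits (RFKERNELONLY).
 */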
int nprocs = 1; /* process 0 */
static int nextpid = 0;
SYSCTL_INT(_kern, OID_AUTO, lastpid, CTLFLAG_RD, &nextpid, 0,
"Last used PID");
/*
* Random component to nextpid generation. We mix in a random factor to make
* it a little harder to predict. We sanity check the modulus value to avoid
* doing it in critical paths. Don't let it be too small or we pointlessly
* waste randomness entropy, and don't let it be impossibly large. Using a
* modulus that is too big causes a LOT more process table scans and slows
* down fork processing as the pidchecked caching is defeated.
*/
static int randompid = 0;
static int
sysctl_kern_randompid(SYSCTL_HANDLER_ARGS)
{
int error, pid;
pid = randompid;
error = sysctl_handle_int(oidp, &pid, 0, req);
if (error || !req->newptr)
return (error);
if (pid < 0 || pid > PID_MAX - 100) /* out of range */
pid = PID_MAX - 100;
else if (pid < 2) /* NOP */
pid = 0;
else if (pid < 100) /* Make it reasonable */
pid = 100;
randompid = pid;
return (error);
}
SYSCTL_PROC(_kern, OID_AUTO, randompid, CTLTYPE_INT|CTLFLAG_RW,
0, 0, sysctl_kern_randompid, "I", "Random PID modulus");
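/*
 * Illustrative example (values hypothetical): after "sysctl
 * kern.randompid=100", fork1() below starts its PID search at
 * nextpid + 1 + (arc4random() % 100) rather than at nextpid + 1,
 * so consecutively forked processes no longer get consecutive PIDs.
 */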
int
fork1(p1, flags, procp)
struct proc *p1; /* parent proc */
int flags;
struct proc **procp; /* child proc */
{
struct proc *p2, *pptr;
uid_t uid;
struct proc *newproc;
int trypid;
int ok;
static int pidchecked = 0;
struct forklist *ep;
struct filedesc *fd;
/* Can't copy and clear */
if ((flags & (RFFDG|RFCFDG)) == (RFFDG|RFCFDG))
return (EINVAL);
/*
* Here we don't create a new process, but we divorce
* certain parts of a process from itself.
*/
if ((flags & RFPROC) == 0) {
vm_fork(p1, 0, flags);
/*
* Close all file descriptors.
*/
if (flags & RFCFDG) {
struct filedesc *fdtmp;
fdtmp = fdinit(p1);
PROC_LOCK(p1);
fdfree(p1);
p1->p_fd = fdtmp;
PROC_UNLOCK(p1);
}
/*
* Unshare file descriptors (from parent.)
*/
if (flags & RFFDG) {
if (p1->p_fd->fd_refcnt > 1) {
struct filedesc *newfd;
newfd = fdcopy(p1);
PROC_LOCK(p1);
fdfree(p1);
p1->p_fd = newfd;
PROC_UNLOCK(p1);
}
}
*procp = NULL;
return (0);
}
/*
* Although process entries are dynamically created, we still keep
* a global limit on the maximum number we will create. Don't allow
* a nonprivileged user to use the last process; don't let root
* exceed the limit. The variable nprocs is the current number of
* processes, maxproc is the limit.
*/
uid = p1->p_cred->p_ruid;
if ((nprocs >= maxproc - 1 && uid != 0) || nprocs >= maxproc) {
tablefull("proc");
return (EAGAIN);
}
/*
* Increment the nprocs resource before blocking can occur. There
* are hard limits on the number of processes that can run.
*/
nprocs++;
/*
* Increment the count of procs running with this uid. Don't allow
* a nonprivileged user to exceed their current limit.
*/
ok = chgproccnt(p1->p_cred->p_uidinfo, 1,
(uid != 0) ? p1->p_rlimit[RLIMIT_NPROC].rlim_cur : 0);
if (!ok) {
/*
* Back out the process count
*/
nprocs--;
return (EAGAIN);
}
/* Allocate new proc. */
newproc = zalloc(proc_zone);
/*
* Setup linkage for kernel based threading
*/
if ((flags & RFTHREAD) != 0) {
newproc->p_peers = p1->p_peers;
p1->p_peers = newproc;
newproc->p_leader = p1->p_leader;
} else {
newproc->p_peers = NULL;
newproc->p_leader = newproc;
}
newproc->p_vmspace = NULL;
/*
* Find an unused process ID. We remember a range of unused IDs
* ready to use (from nextpid+1 through pidchecked-1).
*
* If RFHIGHPID is set (used during system boot), do not allocate
* low-numbered pids.
*/
ALLPROC_LOCK(AP_EXCLUSIVE);
trypid = nextpid + 1;
if (flags & RFHIGHPID) {
if (trypid < 10) {
trypid = 10;
}
} else {
if (randompid)
trypid += arc4random() % randompid;
}
retry:
/*
* If the process ID prototype has wrapped around,
* restart somewhat above 0, as the low-numbered procs
* tend to include daemons that don't exit.
*/
if (trypid >= PID_MAX) {
trypid = trypid % PID_MAX;
if (trypid < 100)
trypid += 100;
pidchecked = 0;
}
if (trypid >= pidchecked) {
int doingzomb = 0;
pidchecked = PID_MAX;
/*
* Scan the active and zombie procs to check whether this pid
* is in use. Remember the lowest pid that's greater
* than trypid, so we can avoid checking for a while.
*/
p2 = LIST_FIRST(&allproc);
again:
for (; p2 != NULL; p2 = LIST_NEXT(p2, p_list)) {
while (p2->p_pid == trypid ||
p2->p_pgrp->pg_id == trypid ||
p2->p_session->s_sid == trypid) {
trypid++;
if (trypid >= pidchecked)
goto retry;
}
if (p2->p_pid > trypid && pidchecked > p2->p_pid)
pidchecked = p2->p_pid;
if (p2->p_pgrp->pg_id > trypid &&
pidchecked > p2->p_pgrp->pg_id)
pidchecked = p2->p_pgrp->pg_id;
if (p2->p_session->s_sid > trypid &&
pidchecked > p2->p_session->s_sid)
pidchecked = p2->p_session->s_sid;
}
if (!doingzomb) {
doingzomb = 1;
p2 = LIST_FIRST(&zombproc);
goto again;
}
}
/*
* RFHIGHPID does not mess with the nextpid counter during boot.
*/
if (flags & RFHIGHPID)
pidchecked = 0;
else
nextpid = trypid;
p2 = newproc;
p2->p_intr_nesting_level = 0;
p2->p_stat = SIDL; /* protect against others */
p2->p_pid = trypid;
LIST_INSERT_HEAD(&allproc, p2, p_list);
LIST_INSERT_HEAD(PIDHASH(p2->p_pid), p2, p_hash);
ALLPROC_LOCK(AP_RELEASE);
/*
* Make a proc table entry for the new process.
* Start by zeroing the section of proc that is zero-initialized,
* then copy the section that is copied directly from the parent.
*/
bzero(&p2->p_startzero,
(unsigned) ((caddr_t)&p2->p_endzero - (caddr_t)&p2->p_startzero));
PROC_LOCK(p1);
bcopy(&p1->p_startcopy, &p2->p_startcopy,
(unsigned) ((caddr_t)&p2->p_endcopy - (caddr_t)&p2->p_startcopy));
PROC_UNLOCK(p1);
mtx_init(&p2->p_mtx, "process lock", MTX_DEF);
PROC_LOCK(p2);
p2->p_aioinfo = NULL;
/*
* Duplicate sub-structures as needed.
* Increase reference counts on shared objects.
* The p_stats and p_sigacts substructs are set in vm_fork.
*/
p2->p_flag = 0;
mtx_lock_spin(&sched_lock);
p2->p_sflag = PS_INMEM;
if (p1->p_sflag & PS_PROFIL)
startprofclock(p2);
mtx_unlock_spin(&sched_lock);
/*
* We start off holding one spinlock after fork: sched_lock.
*/
p2->p_spinlocks = 1;
PROC_UNLOCK(p2);
MALLOC(p2->p_cred, struct pcred *, sizeof(struct pcred),
M_SUBPROC, M_WAITOK);
PROC_LOCK(p2);
PROC_LOCK(p1);
bcopy(p1->p_cred, p2->p_cred, sizeof(*p2->p_cred));
p2->p_cred->p_refcnt = 1;
crhold(p1->p_ucred);
uihold(p1->p_cred->p_uidinfo);
if (p2->p_args)
p2->p_args->ar_ref++;
if (flags & RFSIGSHARE) {
p2->p_procsig = p1->p_procsig;
p2->p_procsig->ps_refcnt++;
if (p1->p_sigacts == &p1->p_addr->u_sigacts) {
struct sigacts *newsigacts;
PROC_UNLOCK(p1);
PROC_UNLOCK(p2);
/* Create the shared sigacts structure */
MALLOC(newsigacts, struct sigacts *,
sizeof(struct sigacts), M_SUBPROC, M_WAITOK);
PROC_LOCK(p2);
PROC_LOCK(p1);
/*
* Set p_sigacts to the new shared structure.
* Note that this is updating p1->p_sigacts at the
* same time, since p_sigacts is just a pointer to
* the shared p_procsig->ps_sigacts.
*/
p2->p_sigacts = newsigacts;
bcopy(&p1->p_addr->u_sigacts, p2->p_sigacts,
sizeof(*p2->p_sigacts));
}
} else {
PROC_UNLOCK(p1);
PROC_UNLOCK(p2);
MALLOC(p2->p_procsig, struct procsig *, sizeof(struct procsig),
M_SUBPROC, M_WAITOK);
PROC_LOCK(p2);
PROC_LOCK(p1);
bcopy(p1->p_procsig, p2->p_procsig, sizeof(*p2->p_procsig));
p2->p_procsig->ps_refcnt = 1;
p2->p_sigacts = NULL; /* finished in vm_fork() */
}
if (flags & RFLINUXTHPN)
p2->p_sigparent = SIGUSR1;
else
p2->p_sigparent = SIGCHLD;
/* bump references to the text vnode (for procfs) */
p2->p_textvp = p1->p_textvp;
PROC_UNLOCK(p1);
PROC_UNLOCK(p2);
if (p2->p_textvp)
VREF(p2->p_textvp);
if (flags & RFCFDG)
fd = fdinit(p1);
else if (flags & RFFDG)
fd = fdcopy(p1);
else
fd = fdshare(p1);
PROC_LOCK(p2);
p2->p_fd = fd;
/*
* If p_limit is still copy-on-write, bump refcnt,
* otherwise get a copy that won't be modified.
* (If PL_SHAREMOD is clear, the structure is shared
* copy-on-write.)
*/
PROC_LOCK(p1);
if (p1->p_limit->p_lflags & PL_SHAREMOD)
p2->p_limit = limcopy(p1->p_limit);
else {
p2->p_limit = p1->p_limit;
p2->p_limit->p_refcnt++;
}
/*
* Preserve some more flags in subprocess. PS_PROFIL has already
* been preserved.
*/
p2->p_flag |= p1->p_flag & P_SUGID;
if (p1->p_session->s_ttyvp != NULL && p1->p_flag & P_CONTROLT)
p2->p_flag |= P_CONTROLT;
if (flags & RFPPWAIT)
p2->p_flag |= P_PPWAIT;
LIST_INSERT_AFTER(p1, p2, p_pglist);
PROC_UNLOCK(p1);
PROC_UNLOCK(p2);
/*
* Attach the new process to its parent.
*
* If RFNOWAIT is set, the newly created process becomes a child
* of init. This effectively disassociates the child from the
* parent.
*/
if (flags & RFNOWAIT)
pptr = initproc;
else
pptr = p1;
PROCTREE_LOCK(PT_EXCLUSIVE);
PROC_LOCK(p2);
p2->p_pptr = pptr;
PROC_UNLOCK(p2);
LIST_INSERT_HEAD(&pptr->p_children, p2, p_sibling);
PROCTREE_LOCK(PT_RELEASE);
PROC_LOCK(p2);
LIST_INIT(&p2->p_children);
LIST_INIT(&p2->p_heldmtx);
LIST_INIT(&p2->p_contested);
callout_init(&p2->p_itcallout, 0);
callout_init(&p2->p_slpcallout, 1);
PROC_LOCK(p1);
#ifdef KTRACE
/*
* Copy traceflag and tracefile if enabled.
* If not inherited, these were zeroed above.
*/
if (p1->p_traceflag & KTRFAC_INHERIT) {
p2->p_traceflag = p1->p_traceflag;
if ((p2->p_tracep = p1->p_tracep) != NULL) {
PROC_UNLOCK(p1);
PROC_UNLOCK(p2);
VREF(p2->p_tracep);
PROC_LOCK(p2);
PROC_LOCK(p1);
}
}
#endif
/*
* set priority of child to be that of parent
*/
mtx_lock_spin(&sched_lock);
p2->p_estcpu = p1->p_estcpu;
mtx_unlock_spin(&sched_lock);
/*
* This begins the section where we must prevent the parent
* from being swapped.
*/
_PHOLD(p1);
PROC_UNLOCK(p1);
PROC_UNLOCK(p2);
/*
* Finish creating the child process. It will return via a different
* execution path later. (ie: directly into user mode)
*/
vm_fork(p1, p2, flags);
if (flags == (RFFDG | RFPROC)) {
cnt.v_forks++;
cnt.v_forkpages += p2->p_vmspace->vm_dsize + p2->p_vmspace->vm_ssize;
} else if (flags == (RFFDG | RFPROC | RFPPWAIT | RFMEM)) {
cnt.v_vforks++;
cnt.v_vforkpages += p2->p_vmspace->vm_dsize + p2->p_vmspace->vm_ssize;
} else if (p1 == &proc0) {
cnt.v_kthreads++;
cnt.v_kthreadpages += p2->p_vmspace->vm_dsize + p2->p_vmspace->vm_ssize;
} else {
cnt.v_rforks++;
cnt.v_rforkpages += p2->p_vmspace->vm_dsize + p2->p_vmspace->vm_ssize;
}
/*
* Both processes are set up, now check if any loadable modules want
* to adjust anything.
* What if they have an error? XXX
*/
sx_slock(&fork_list_lock);
TAILQ_FOREACH(ep, &fork_list, next) {
(*ep->function)(p1, p2, flags);
}
sx_sunlock(&fork_list_lock);
/*
* If RFSTOPPED not requested, make child runnable and add to
* run queue.
*/
microtime(&(p2->p_stats->p_start));
p2->p_acflag = AFORK;
if ((flags & RFSTOPPED) == 0) {
mtx_lock_spin(&sched_lock);
p2->p_stat = SRUN;
setrunqueue(p2);
mtx_unlock_spin(&sched_lock);
}
/*
* Now can be swapped.
*/
PROC_LOCK(p1);
_PRELE(p1);
/*
* tell any interested parties about the new process
*/
KNOTE(&p1->p_klist, NOTE_FORK | p2->p_pid);
PROC_UNLOCK(p1);
/*
* Preserve synchronization semantics of vfork. If waiting for
* child to exec or exit, set P_PPWAIT on child, and sleep on our
* proc (in case of exit).
*/
PROC_LOCK(p2);
while (p2->p_flag & P_PPWAIT)
msleep(p1, &p2->p_mtx, PWAIT, "ppwait", 0);
PROC_UNLOCK(p2);
/*
* Return child proc pointer to parent.
*/
*procp = p2;
return (0);
}
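/*
 * Note: fork1() is also called directly by kernel code.  kthread_create(9),
 * for instance, forks from proc0 (which is why the p1 == &proc0 case above
 * is accounted as a kernel thread); such callers may pass RFSTOPPED to keep
 * the new process off the run queue and RFHIGHPID to avoid handing out
 * low-numbered PIDs during boot.
 */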
/*
* The next two functions are general routines to handle adding/deleting
* items on the fork callout list.
*
* at_fork():
* Take the arguments given and put them onto the fork callout list;
* however, first make sure that it's not already there.
* Returns 0 on success or a standard error number.
*/
int
at_fork(function)
forklist_fn function;
{
struct forklist *ep;
#ifdef INVARIANTS
/* let the programmer know if he's been stupid */
if (rm_at_fork(function))
printf("WARNING: fork callout entry (%p) already present\n",
function);
#endif
ep = malloc(sizeof(*ep), M_ATFORK, M_NOWAIT);
if (ep == NULL)
return (ENOMEM);
ep->function = function;
sx_xlock(&fork_list_lock);
TAILQ_INSERT_TAIL(&fork_list, ep, next);
sx_xunlock(&fork_list_lock);
return (0);
}
/*
* Scan the fork callout list for the given item and remove it.
* Returns the number of items removed (0 or 1).
*/
int
rm_at_fork(function)
forklist_fn function;
{
struct forklist *ep;
sx_xlock(&fork_list_lock);
TAILQ_FOREACH(ep, &fork_list, next) {
if (ep->function == function) {
TAILQ_REMOVE(&fork_list, ep, next);
sx_xunlock(&fork_list_lock);
free(ep, M_ATFORK);
return(1);
}
}
sx_xunlock(&fork_list_lock);
return (0);
}
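/*
 * Usage sketch (hypothetical module; the hook name is illustrative).  A
 * forklist_fn has the shape of the calls made from fork1() above, i.e. it
 * receives the parent, the child and the fork flags:
 *
 *	static void
 *	mymod_fork_hook(struct proc *p1, struct proc *p2, int flags)
 *	{
 *		(propagate per-process module state from p1 to p2 here)
 *	}
 *
 *	error = at_fork(mymod_fork_hook);	returns 0 or ENOMEM
 *	...
 *	(void)rm_at_fork(mymod_fork_hook);	returns 1 if removed, else 0
 */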
/*
* Handle the return of a child process from fork1(). This function
* is called from the MD fork_trampoline() entry point.
*/
void
fork_exit(callout, arg, frame)
void (*callout)(void *, struct trapframe *);
void *arg;
struct trapframe *frame;
{
struct proc *p;
p = curproc;
/*
* Setup the sched_lock state so that we can release it.
*/
sched_lock.mtx_lock = (uintptr_t)p;
sched_lock.mtx_recurse = 0;
/*
* XXX: We really shouldn't have to do this.
*/
mtx_intr_enable(&sched_lock);
mtx_unlock_spin(&sched_lock);
#ifdef SMP
if (PCPU_GET(switchtime.tv_sec) == 0)
microuptime(PCPU_PTR(switchtime));
PCPU_SET(switchticks, ticks);
#endif
/*
* cpu_set_fork_handler intercepts this function call to
* have this call a non-return function to stay in kernel mode.
* initproc has its own fork handler, but it does return.
*/
KASSERT(callout != NULL, ("NULL callout in fork_exit"));
callout(arg, frame);
/*
* Check if a kernel thread misbehaved and returned from its main
* function.
*/
PROC_LOCK(p);
if (p->p_flag & P_KTHREAD) {
PROC_UNLOCK(p);
mtx_lock(&Giant);
printf("Kernel thread \"%s\" (pid %d) exited prematurely.\n",
p->p_comm, p->p_pid);
kthread_exit(0);
}
PROC_UNLOCK(p);
mtx_assert(&Giant, MA_NOTOWNED);
}
/*
* Simplified back end of syscall(), used when returning from fork()
* directly into user mode. Giant is not held on entry, and must not
* be held on return. This function is passed in to fork_exit() as the
* first parameter and is called when returning to a new userland process.
*/
void
fork_return(p, frame)
struct proc *p;
struct trapframe *frame;
{
userret(p, frame, 0);
#ifdef KTRACE
if (KTRPOINT(p, KTR_SYSRET)) {
if (!mtx_owned(&Giant))
mtx_lock(&Giant);
ktrsysret(p->p_tracep, SYS_fork, 0, 0);
}
#endif
if (mtx_owned(&Giant))
mtx_unlock(&Giant);
mtx_assert(&Giant, MA_NOTOWNED);
}
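/*
 * Note: fork_return() is not called from this file; machine-dependent code
 * (e.g. cpu_fork() and fork_trampoline()) is expected to arrange for the
 * child's first context switch to end up in fork_exit(fork_return, p, frame),
 * which then drops the new process into user mode for the first time.
 */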