/*-
 * SPDX-License-Identifier: BSD-4-Clause
 *
 * Copyright (c) 1995 Terrence R. Lambert
 * All rights reserved.
 *
 * Copyright (c) 1982, 1986, 1989, 1991, 1992, 1993
 *	The Regents of the University of California.  All rights reserved.
 * (c) UNIX System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of UNIX System Laboratories, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)init_main.c	8.9 (Berkeley) 1/21/94
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_ddb.h"
#include "opt_init_path.h"
#include "opt_verbose_sysinit.h"

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/epoch.h>
#include <sys/exec.h>
#include <sys/file.h>
#include <sys/filedesc.h>
#include <sys/imgact.h>
#include <sys/jail.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/loginclass.h>
#include <sys/mount.h>
#include <sys/mutex.h>
#include <sys/syscallsubr.h>
#include <sys/sysctl.h>
#include <sys/proc.h>
#include <sys/racct.h>
#include <sys/resourcevar.h>
#include <sys/systm.h>
#include <sys/signalvar.h>
#include <sys/vnode.h>
#include <sys/sysent.h>
#include <sys/reboot.h>
#include <sys/sched.h>
#include <sys/sx.h>
#include <sys/sysproto.h>
#include <sys/vmmeter.h>
#include <sys/unistd.h>
#include <sys/malloc.h>
#include <sys/conf.h>
#include <sys/cpuset.h>

#include <machine/cpu.h>

#include <security/audit/audit.h>
#include <security/mac/mac_framework.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/vm_extern.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>

#include <sys/copyright.h>

#include <ddb/ddb.h>
#include <ddb/db_sym.h>

void mi_startup(void);		/* Should be elsewhere */

/* Components of the first process -- never freed. */
static struct session session0;
static struct pgrp pgrp0;
struct proc proc0;
struct thread0_storage thread0_st __aligned(32);
struct vmspace vmspace0;
struct proc *initproc;

#ifndef BOOTHOWTO
#define	BOOTHOWTO	0
#endif
int	boothowto = BOOTHOWTO;	/* initialized so that it can be patched */
SYSCTL_INT(_debug, OID_AUTO, boothowto, CTLFLAG_RD, &boothowto, 0,
	"Boot control flags, passed from loader");

#ifndef BOOTVERBOSE
#define	BOOTVERBOSE	0
#endif
int	bootverbose = BOOTVERBOSE;
SYSCTL_INT(_debug, OID_AUTO, bootverbose, CTLFLAG_RW, &bootverbose, 0,
	"Control the output of verbose kernel messages");

#ifdef VERBOSE_SYSINIT
/*
 * We'll use the defined value of VERBOSE_SYSINIT from the kernel config to
 * dictate the default VERBOSE_SYSINIT behavior.  Significant values for this
 * option and associated tunable are:
 * - 0, 'compiled in but silent by default'
 * - 1, 'compiled in but verbose by default' (default)
 */
int	verbose_sysinit = VERBOSE_SYSINIT;
TUNABLE_INT("debug.verbose_sysinit", &verbose_sysinit);
#endif

#ifdef INVARIANTS
FEATURE(invariants, "Kernel compiled with INVARIANTS, may affect performance");
#endif

/*
 * This ensures that there is at least one entry so that the sysinit_set
 * symbol is not undefined.  A subsystem ID of SI_SUB_DUMMY is never
 * executed.
 */
SYSINIT(placeholder, SI_SUB_DUMMY, SI_ORDER_ANY, NULL, NULL);

/*
 * The sysinit table itself.  Items are checked off as they are run.
 * If we want to register new sysinit types, add them to newsysinit.
 */
SET_DECLARE(sysinit_set, struct sysinit);
struct sysinit **sysinit, **sysinit_end;
struct sysinit **newsysinit, **newsysinit_end;

EVENTHANDLER_LIST_DECLARE(process_init);
EVENTHANDLER_LIST_DECLARE(thread_init);
EVENTHANDLER_LIST_DECLARE(process_ctor);
EVENTHANDLER_LIST_DECLARE(thread_ctor);

/*
 * Merge a new sysinit set into the current set, reallocating it if
 * necessary.  This can only be called after malloc is running.
 */
void
sysinit_add(struct sysinit **set, struct sysinit **set_end)
{
	struct sysinit **newset;
	struct sysinit **sipp;
	struct sysinit **xipp;
	int count;

	count = set_end - set;
	if (newsysinit)
		count += newsysinit_end - newsysinit;
	else
		count += sysinit_end - sysinit;
	newset = malloc(count * sizeof(*sipp), M_TEMP, M_NOWAIT);
	if (newset == NULL)
		panic("cannot malloc for sysinit");
	xipp = newset;
	if (newsysinit)
		for (sipp = newsysinit; sipp < newsysinit_end; sipp++)
			*xipp++ = *sipp;
	else
		for (sipp = sysinit; sipp < sysinit_end; sipp++)
			*xipp++ = *sipp;
	for (sipp = set; sipp < set_end; sipp++)
		*xipp++ = *sipp;
	if (newsysinit)
		free(newsysinit, M_TEMP);
	newsysinit = newset;
	newsysinit_end = newset + count;
}
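The merge in sysinit_add() simply concatenates the existing table with the newly loaded set into one fresh allocation; the re-sort happens later in mi_startup(). A minimal userspace sketch of that concatenation, using plain ints and libc malloc/free in place of the kernel allocator (the name merge_sets and the int payload are hypothetical, for illustration only):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical stand-in for sysinit_add()'s core: copy the current
 * table followed by the new set into a freshly allocated array.
 * The caller re-sorts afterwards, as mi_startup() does.
 */
static int *
merge_sets(const int *cur, size_t ncur, const int *set, size_t nset,
    size_t *ntotal)
{
	int *newset;

	*ntotal = ncur + nset;
	newset = malloc(*ntotal * sizeof(*newset));
	if (newset == NULL)
		return (NULL);
	memcpy(newset, cur, ncur * sizeof(*newset));
	memcpy(newset + ncur, set, nset * sizeof(*newset));
	return (newset);
}
```

The kernel version must tolerate allocation failure at boot (it panics); the sketch just returns NULL and leaves the policy to the caller.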

#if defined(DDB) && defined(VERBOSE_SYSINIT)
static const char *
symbol_name(vm_offset_t va, db_strategy_t strategy)
{
	const char *name;
	c_db_sym_t sym;
	db_expr_t offset;

	if (va == 0)
		return (NULL);
	sym = db_search_symbol(va, strategy, &offset);
	if (offset != 0)
		return (NULL);
	db_symbol_values(sym, &name, NULL);
	return (name);
}
#endif

/*
 * System startup; initialize the world, create process 0, mount root
 * filesystem, and fork to create init and pagedaemon.  Most of the
 * hard work is done in the lower-level initialization routines including
 * startup(), which does memory initialization and autoconfiguration.
 *
 * This allows simple addition of new kernel subsystems that require
 * boot time initialization.  It also allows substitution of subsystem
 * (for instance, a scheduler, kernel profiler, or VM system) by object
 * module.  Finally, it allows for optional "kernel threads".
 */
void
mi_startup(void)
{

	struct sysinit **sipp;	/* system initialization*/
	struct sysinit **xipp;	/* interior loop of sort*/
	struct sysinit *save;	/* bubble*/

#if defined(VERBOSE_SYSINIT)
	int last;
	int verbose;
#endif

	TSENTER();

	if (boothowto & RB_VERBOSE)
		bootverbose++;

	if (sysinit == NULL) {
		sysinit = SET_BEGIN(sysinit_set);
		sysinit_end = SET_LIMIT(sysinit_set);
	}

restart:
	/*
	 * Perform a bubble sort of the system initialization objects by
	 * their subsystem (primary key) and order (secondary key).
	 */
	for (sipp = sysinit; sipp < sysinit_end; sipp++) {
		for (xipp = sipp + 1; xipp < sysinit_end; xipp++) {
			if ((*sipp)->subsystem < (*xipp)->subsystem ||
			    ((*sipp)->subsystem == (*xipp)->subsystem &&
			    (*sipp)->order <= (*xipp)->order))
				continue;	/* skip*/
			save = *sipp;
			*sipp = *xipp;
			*xipp = save;
		}
	}

#if defined(VERBOSE_SYSINIT)
	last = SI_SUB_COPYRIGHT;
	verbose = 0;
#if !defined(DDB)
	printf("VERBOSE_SYSINIT: DDB not enabled, symbol lookups disabled.\n");
#endif
#endif

	/*
	 * Traverse the (now) ordered list of system initialization tasks.
	 * Perform each task, and continue on to the next task.
	 */
	for (sipp = sysinit; sipp < sysinit_end; sipp++) {

		if ((*sipp)->subsystem == SI_SUB_DUMMY)
			continue;	/* skip dummy task(s)*/

		if ((*sipp)->subsystem == SI_SUB_DONE)
			continue;

#if defined(VERBOSE_SYSINIT)
		if ((*sipp)->subsystem > last && verbose_sysinit != 0) {
			verbose = 1;
			last = (*sipp)->subsystem;
			printf("subsystem %x\n", last);
		}
		if (verbose) {
#if defined(DDB)
			const char *func, *data;

			func = symbol_name((vm_offset_t)(*sipp)->func,
			    DB_STGY_PROC);
			data = symbol_name((vm_offset_t)(*sipp)->udata,
			    DB_STGY_ANY);
			if (func != NULL && data != NULL)
				printf("   %s(&%s)... ", func, data);
			else if (func != NULL)
				printf("   %s(%p)... ", func, (*sipp)->udata);
			else
#endif
				printf("   %p(%p)... ", (*sipp)->func,
				    (*sipp)->udata);
		}
#endif

		/* Call function */
		(*((*sipp)->func))((*sipp)->udata);

#if defined(VERBOSE_SYSINIT)
		if (verbose)
			printf("done.\n");
#endif

		/* Check off the one we're just done */
		(*sipp)->subsystem = SI_SUB_DONE;

		/* Check if we've installed more sysinit items via KLD */
		if (newsysinit != NULL) {
			if (sysinit != SET_BEGIN(sysinit_set))
				free(sysinit, M_TEMP);
			sysinit = newsysinit;
			sysinit_end = newsysinit_end;
			newsysinit = NULL;
			newsysinit_end = NULL;
			goto restart;
		}
	}

	TSEXIT();	/* Here so we don't overlap with start_init. */

	mtx_assert(&Giant, MA_OWNED | MA_NOTRECURSED);
	mtx_unlock(&Giant);

	/*
	 * Now hand over this thread to swapper.
	 */
	swapper();
	/* NOTREACHED*/
}
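The ordering pass in mi_startup() keys on subsystem first and order second, and uses a bubble sort because it runs before malloc() is available. A standalone userspace sketch of that two-key comparison (struct item and sort_items are hypothetical names; the loop structure mirrors the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct sysinit's two sort keys. */
struct item {
	int subsystem;	/* primary key */
	int order;	/* secondary key */
};

/*
 * Bubble sort of an array of pointers, mirroring mi_startup():
 * swap only when the pair is strictly out of (subsystem, order) order.
 */
static void
sort_items(struct item **v, size_t n)
{
	struct item *save;
	size_t i, j;

	for (i = 0; i < n; i++)
		for (j = i + 1; j < n; j++) {
			if (v[i]->subsystem < v[j]->subsystem ||
			    (v[i]->subsystem == v[j]->subsystem &&
			    v[i]->order <= v[j]->order))
				continue;	/* already in order */
			save = v[i];
			v[i] = v[j];
			v[j] = save;
		}
}
```

An O(n^2) sort is acceptable here: the table is sorted only at boot (and after a KLD adds entries), and simplicity matters more than speed at that point.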

static void
print_caddr_t(void *data)
{

	printf("%s", (char *)data);
}

static void
print_version(void *data __unused)
{
	int len;

	/* Strip a trailing newline from version. */
	len = strlen(version);
	while (len > 0 && version[len - 1] == '\n')
		len--;
	printf("%.*s %s\n", len, version, machine);
	printf("%s\n", compiler_version);
}
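print_version() trims trailing newlines by shortening the printed length rather than modifying the string, then prints with "%.*s". The same trimming in isolation, as a hypothetical userspace helper:

```c
#include <assert.h>
#include <string.h>

/*
 * Return the length of s with any trailing newlines excluded, the way
 * print_version() computes the precision for its "%.*s" format.
 * The string itself is left untouched.
 */
static int
trimmed_len(const char *s)
{
	int len;

	len = (int)strlen(s);
	while (len > 0 && s[len - 1] == '\n')
		len--;
	return (len);
}
```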

SYSINIT(announce, SI_SUB_COPYRIGHT, SI_ORDER_FIRST, print_caddr_t,
    copyright);
SYSINIT(trademark, SI_SUB_COPYRIGHT, SI_ORDER_SECOND, print_caddr_t,
    trademark);
SYSINIT(version, SI_SUB_COPYRIGHT, SI_ORDER_THIRD, print_version, NULL);

#ifdef WITNESS
static char wit_warn[] =
     "WARNING: WITNESS option enabled, expect reduced performance.\n";
SYSINIT(witwarn, SI_SUB_COPYRIGHT, SI_ORDER_THIRD + 1,
   print_caddr_t, wit_warn);
SYSINIT(witwarn2, SI_SUB_LAST, SI_ORDER_THIRD + 1,
   print_caddr_t, wit_warn);
#endif

#ifdef DIAGNOSTIC
static char diag_warn[] =
     "WARNING: DIAGNOSTIC option enabled, expect reduced performance.\n";
SYSINIT(diagwarn, SI_SUB_COPYRIGHT, SI_ORDER_THIRD + 2,
    print_caddr_t, diag_warn);
SYSINIT(diagwarn2, SI_SUB_LAST, SI_ORDER_THIRD + 2,
    print_caddr_t, diag_warn);
#endif

static int
null_fetch_syscall_args(struct thread *td __unused)
{

	panic("null_fetch_syscall_args");
}

static void
null_set_syscall_retval(struct thread *td __unused, int error __unused)
{

	panic("null_set_syscall_retval");
}

struct sysentvec null_sysvec = {
	.sv_size	= 0,
	.sv_table	= NULL,
	.sv_errsize	= 0,
	.sv_errtbl	= NULL,
	.sv_transtrap	= NULL,
	.sv_fixup	= NULL,
	.sv_sendsig	= NULL,
	.sv_sigcode	= NULL,
	.sv_szsigcode	= NULL,
	.sv_name	= "null",
	.sv_coredump	= NULL,
	.sv_imgact_try	= NULL,
	.sv_minsigstksz	= 0,
	.sv_pagesize	= PAGE_SIZE,
	.sv_minuser	= VM_MIN_ADDRESS,
	.sv_maxuser	= VM_MAXUSER_ADDRESS,
	.sv_usrstack	= USRSTACK,
	.sv_psstrings	= PS_STRINGS,
	.sv_stackprot	= VM_PROT_ALL,
	.sv_copyout_strings	= NULL,
	.sv_setregs	= NULL,
	.sv_fixlimit	= NULL,
	.sv_maxssiz	= NULL,
	.sv_flags	= 0,
	.sv_set_syscall_retval	= null_set_syscall_retval,
	.sv_fetch_syscall_args	= null_fetch_syscall_args,
	.sv_syscallnames	= NULL,
	.sv_schedtail	= NULL,
	.sv_thread_detach	= NULL,
	.sv_trap	= NULL,
};
|
2002-07-20 02:56:12 +00:00
|
|
|
|
1995-08-28 09:19:25 +00:00
|
|
|
/*
|
2016-10-20 01:19:37 +00:00
|
|
|
* The two following SYSINIT's are proc0 specific glue code. I am not
|
|
|
|
* convinced that they can not be safely combined, but their order of
|
|
|
|
* operation has been maintained as the same as the original init_main.c
|
|
|
|
* for right now.
|
1995-08-28 09:19:25 +00:00
|
|
|
*/
|
|
|
|
/* ARGSUSED*/
|
1995-12-10 13:45:30 +00:00
|
|
|
static void
|
2000-08-11 09:05:12 +00:00
|
|
|
proc0_init(void *dummy __unused)
|
1995-08-28 09:19:25 +00:00
|
|
|
{
|
2004-11-07 12:39:28 +00:00
|
|
|
struct proc *p;
|
2001-09-12 08:38:13 +00:00
|
|
|
struct thread *td;
|
2015-03-16 00:10:03 +00:00
|
|
|
struct ucred *newcred;
|
2017-11-01 05:51:20 +00:00
|
|
|
struct uidinfo tmpuinfo;
|
2017-11-01 06:12:14 +00:00
|
|
|
struct loginclass tmplc = {
|
|
|
|
.lc_name = "",
|
|
|
|
};
|
2010-04-11 16:26:07 +00:00
|
|
|
vm_paddr_t pageablemem;
|
|
|
|
int i;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-07-04 16:20:28 +00:00
|
|
|
GIANT_REQUIRED;
|
1994-05-24 10:09:53 +00:00
|
|
|
p = &proc0;
|
2002-02-07 20:58:47 +00:00
|
|
|
td = &thread0;
|
2007-11-18 13:56:51 +00:00
|
|
|
|
2000-09-07 01:33:02 +00:00
|
|
|
/*
|
2007-12-04 12:28:07 +00:00
|
|
|
* Initialize magic number and osrel.
|
2000-09-07 01:33:02 +00:00
|
|
|
*/
|
|
|
|
p->p_magic = P_MAGIC;
|
2007-12-04 12:28:07 +00:00
|
|
|
p->p_osrel = osreldate;
|
2000-09-07 01:33:02 +00:00
|
|
|
|
2006-10-26 21:42:22 +00:00
|
|
|
/*
|
|
|
|
* Initialize thread and process structures.
|
|
|
|
*/
|
|
|
|
procinit(); /* set up proc zone */
|
|
|
|
threadinit(); /* set up UMA zones */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialise scheduler resources.
|
|
|
|
* Add scheduler specific parts to proc, thread as needed.
|
|
|
|
*/
|
2004-09-05 02:09:54 +00:00
|
|
|
schedinit(); /* scheduler gets its house in order */
|
1996-07-31 09:26:54 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Create process 0 (the swapper).
|
|
|
|
*/
|
1996-03-11 06:14:38 +00:00
|
|
|
LIST_INSERT_HEAD(&allproc, p, p_list);
|
2001-04-11 18:50:50 +00:00
|
|
|
LIST_INSERT_HEAD(PIDHASH(0), p, p_hash);
|
2002-04-04 21:03:38 +00:00
|
|
|
mtx_init(&pgrp0.pg_mtx, "process group", NULL, MTX_DEF | MTX_DUPOK);
|
1994-05-24 10:09:53 +00:00
|
|
|
p->p_pgrp = &pgrp0;
|
1996-03-11 06:14:38 +00:00
|
|
|
LIST_INSERT_HEAD(PGRPHASH(0), &pgrp0, pg_hash);
|
|
|
|
LIST_INIT(&pgrp0.pg_members);
|
|
|
|
LIST_INSERT_HEAD(&pgrp0.pg_members, p, p_pglist);
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
pgrp0.pg_session = &session0;
|
2002-04-04 21:03:38 +00:00
|
|
|
mtx_init(&session0.s_mtx, "session", NULL, MTX_DEF);
|
Integrate the new MPSAFE TTY layer to the FreeBSD operating system.
The last half year I've been working on a replacement TTY layer for the
FreeBSD kernel. The new TTY layer was designed to improve the following:
- Improved driver model:
The old TTY layer has a driver model that is not abstract enough to
make it friendly to use. A good example is the output path, where the
device drivers directly access the output buffers. This means that an
in-kernel PPP implementation must always convert network buffers into
TTY buffers.
If a PPP implementation would be built on top of the new TTY layer
(still needs a hooks layer, though), it would allow the PPP
implementation to directly hand the data to the TTY driver.
- Improved hotplugging:
With the old TTY layer, it isn't entirely safe to destroy TTY's from
the system. This implementation has a two-step destructing design,
where the driver first abandons the TTY. After all threads have left
the TTY, the TTY layer calls a routine in the driver, which can be
used to free resources (unit numbers, etc).
The pts(4) driver also implements this feature, which means
posix_openpt() will now return PTY's that are created on the fly.
- Improved performance:
One of the major improvements is the per-TTY mutex, which is expected
to improve scalability when compared to the old Giant locking.
Another change is the unbuffered copying to userspace, which is both
used on TTY device nodes and PTY masters.
Upgrading should be quite straightforward. Unlike previous versions,
existing kernel configuration files do not need to be changed, except
when they reference device drivers that are listed in UPDATING.
Obtained from: //depot/projects/mpsafetty/...
Approved by: philip (ex-mentor)
Discussed: on the lists, at BSDCan, at the DevSummit
Sponsored by: Snow B.V., the Netherlands
dcons(4) fixed by: kan
2008-08-20 08:31:58 +00:00
|
|
|
refcount_init(&session0.s_count, 1);
|
1994-05-24 10:09:53 +00:00
|
|
|
session0.s_leader = p;
|
|
|
|
|
2002-07-20 02:56:12 +00:00
|
|
|
p->p_sysent = &null_sysvec;
|
2016-02-09 16:30:16 +00:00
|
|
|
p->p_flag = P_SYSTEM | P_INMEM | P_KPROC;
|
2013-09-19 18:53:42 +00:00
|
|
|
p->p_flag2 = 0;
|
Part 1 of KSE-III
The ability to schedule multiple threads per process
(one one cpu) by making ALL system calls optionally asynchronous.
to come: ia64 and power-pc patches, patches for gdb, test program (in tools)
Reviewed by: Almost everyone who counts
(at various times, peter, jhb, matt, alfred, mini, bernd,
and a cast of thousands)
NOTE: this is still Beta code, and contains lots of debugging stuff.
expect slight instability in signals..
2002-06-29 17:26:22 +00:00
|
|
|
p->p_state = PRS_NORMAL;
|
When filt_proc() removes event from the knlist due to the process
exiting (NOTE_EXIT->knlist_remove_inevent()), two things happen:
- knote kn_knlist pointer is reset
- INFLUX knote is removed from the process knlist.
And, there are two consequences:
- KN_LIST_UNLOCK() on such knote is nop
- there is nothing which would block exit1() from processing past the
knlist_destroy() (and knlist_destroy() resets knlist lock pointers).
Both consequences result either in leaked process lock, or
dereferencing NULL function pointers for locking.
Handle this by no longer embedding the process knlist in struct proc.
Instead, the knlist is allocated together with struct proc, but marked
as autodestroy on the zombie reap, by knlist_detach() function. The
knlist is freed when last kevent is removed from the list, in
particular, at the zombie reap time if the list is empty. As a result,
the knlist_remove_inevent() is no longer needed and removed.
Other changes:
In filt_procattach(), clear NOTE_EXEC and NOTE_FORK desired events
from kn_sfflags for knote registered by kernel to only get NOTE_CHILD
notifications. The flags leak resulted in excessive
NOTE_EXEC/NOTE_FORK reports.
Fix immediate note activation in filt_procattach(). Condition should
be either the immediate CHILD_NOTE activation, or immediate NOTE_EXIT
report for the exiting process.
In knote_fork(), do not perform racy check for KN_INFLUX before kq
lock is taken. Besides being racy, it did not account for notes
just added by scan (KN_SCAN).
Some minor and incomplete style fixes.
Analyzed and tested by: Eric Badger <eric@badgerio.us>
Reviewed by: jhb
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
Approved by: re (gjb)
Differential revision: https://reviews.freebsd.org/D6859
2016-06-27 21:52:17 +00:00
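The detach/autodestroy handoff described above can be sketched outside the kernel: whichever side finishes last (the owner detaching at zombie reap, or the last event being removed) frees the list. This is a toy model with invented names, not the kernel knlist API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy stand-in for the kernel knlist: freed by whichever side
 * finishes last -- the owner detaching, or the last note removal. */
struct notelist {
	int nnotes;		/* notes currently on the list */
	bool autodestroy;	/* owner detached; last remover frees */
};

struct notelist *
notelist_alloc(void)
{
	return (calloc(1, sizeof(struct notelist)));
}

/* Owner is going away (zombie reap): free now if empty,
 * otherwise mark for destruction on last removal. */
struct notelist *
notelist_detach(struct notelist *nl)
{
	if (nl->nnotes == 0) {
		free(nl);
		return (NULL);
	}
	nl->autodestroy = true;
	return (nl);
}

void
notelist_add(struct notelist *nl)
{
	nl->nnotes++;
}

/* Returns NULL when the removal also destroyed the list. */
struct notelist *
notelist_remove(struct notelist *nl)
{
	assert(nl->nnotes > 0);
	if (--nl->nnotes == 0 && nl->autodestroy) {
		free(nl);
		return (NULL);
	}
	return (nl);
}
```

Because destruction is deferred to the last removal, nothing like knlist_remove_inevent() is needed: exit1() can proceed past teardown without racing the filter.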
|
|
|
p->p_klist = knlist_alloc(&p->p_mtx);
|
Moderate rewrite of kernel ktrace code to attempt to generally improve
reliability when tracing fast-moving processes or writing traces to
slow file systems by avoiding unbounded queueing and dropped records.
Record loss was previously possible when the global pool of records
became depleted as a result of record generation outstripping record
commit, which occurred quickly in many common situations.
These changes partially restore the 4.x model of committing ktrace
records at the point of trace generation (synchronous), but maintain
the 5.x deferred record commit behavior (asynchronous) for situations
where entering VFS and sleeping is not possible (i.e., in the
scheduler). Records are now queued per-process as opposed to
globally, with processes responsible for committing records from their
own context as required.
- Eliminate the ktrace worker thread and global record queue, as they
are no longer used. Keep the global free record list, as records
are still used.
- Add a per-process record queue, which will hold any asynchronously
generated records, such as from context switches. This replaces the
global queue as the place to submit asynchronous records to.
- When a record is committed asynchronously, simply queue it to the
process.
- When a record is committed synchronously, first drain any pending
per-process records in order to maintain ordering as best we can.
Currently ordering between competing threads is provided via a global
ktrace_sx, but a per-process flag or lock may be desirable in the
future.
- When a process returns to user space following a system call, trap,
signal delivery, etc, flush any pending records.
- When a process exits, flush any pending records.
- Assert on process tear-down that there are no pending records.
- Slightly abstract the notion of being "in ktrace", which is used to
prevent the recursive generation of records, as well as generating
traces for ktrace events.
Future work here might look at changing the set of events marked for
synchronous and asynchronous record generation, re-balancing queue
depth, timeliness of commit to disk, and so on. I.e., performing a
drain every (n) records.
MFC after: 1 month
Discussed with: jhb
Requested by: Marc Olzheim <marcolz at stack dot nl>
2005-11-13 13:27:44 +00:00
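The ordering rule above (drain pending per-process records before a synchronous commit) can be modeled in a few lines. The struct, names, and fixed-size queue are invented for illustration:

```c
#include <assert.h>
#include <string.h>

#define	QMAX	16

/* Toy model of the ordering rule: asynchronous records are queued
 * per process; a synchronous commit drains the queue before writing
 * its own record, so committed order matches event order. */
struct tracer {
	int pending[QMAX];		/* per-process async record ids */
	int npending;
	int committed[QMAX * 2];	/* what reached the "file" */
	int ncommitted;
};

void
trace_async(struct tracer *t, int id)
{
	assert(t->npending < QMAX);
	t->pending[t->npending++] = id;	/* defer: cannot sleep here */
}

static void
drain(struct tracer *t)
{
	int i;

	for (i = 0; i < t->npending; i++)
		t->committed[t->ncommitted++] = t->pending[i];
	t->npending = 0;
}

void
trace_sync(struct tracer *t, int id)
{
	drain(t);	/* flush older async records first */
	t->committed[t->ncommitted++] = id;
}
```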
|
|
|
STAILQ_INIT(&p->p_ktr);
|
2004-06-16 00:26:31 +00:00
|
|
|
p->p_nice = NZERO;
|
2012-08-16 13:01:56 +00:00
|
|
|
/* pid_max cannot be greater than PID_MAX */
|
2007-12-22 04:56:48 +00:00
|
|
|
td->td_tid = PID_MAX + 1;
|
2010-10-17 11:01:52 +00:00
|
|
|
LIST_INSERT_HEAD(TIDHASH(td->td_tid), td, td_hash);
|
Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one CPU) by making ALL system calls optionally asynchronous.
To come: ia64 and PowerPC patches, patches for gdb, test program (in tools)
Reviewed by: Almost everyone who counts
(at various times, peter, jhb, matt, alfred, mini, bernd,
and a cast of thousands)
NOTE: this is still Beta code, and contains lots of debugging stuff.
Expect slight instability in signals.
2002-06-29 17:26:22 +00:00
|
|
|
td->td_state = TDS_RUNNING;
|
2006-10-26 21:42:22 +00:00
|
|
|
td->td_pri_class = PRI_TIMESHARE;
|
|
|
|
td->td_user_pri = PUSER;
|
2006-11-12 11:48:37 +00:00
|
|
|
td->td_base_user_pri = PUSER;
|
2010-12-09 02:42:02 +00:00
|
|
|
td->td_lend_user_pri = PRI_MAX;
|
Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one CPU) by making ALL system calls optionally asynchronous.
To come: ia64 and PowerPC patches, patches for gdb, test program (in tools)
Reviewed by: Almost everyone who counts
(at various times, peter, jhb, matt, alfred, mini, bernd,
and a cast of thousands)
NOTE: this is still Beta code, and contains lots of debugging stuff.
Expect slight instability in signals.
2002-06-29 17:26:22 +00:00
|
|
|
td->td_priority = PVM;
|
2011-01-06 22:26:00 +00:00
|
|
|
td->td_base_pri = PVM;
|
2016-07-11 21:25:28 +00:00
|
|
|
td->td_oncpu = curcpu;
|
2012-01-22 11:01:36 +00:00
|
|
|
td->td_flags = TDF_INMEM;
|
|
|
|
td->td_pflags = TDP_KTHREAD;
|
Add cpuset, an api for thread to cpu binding and cpu resource grouping
and assignment.
- Add a reference to a struct cpuset in each thread that is inherited from
the thread that created it.
- Release the reference when the thread is destroyed.
- Add prototypes for syscalls and macros for manipulating cpusets in
sys/cpuset.h
- Add syscalls to create, get, and set new numbered cpusets:
cpuset(), cpuset_{get,set}id()
- Add syscalls for getting and setting affinity masks for cpusets or
individual threads: cpuset_{get,set}affinity()
- Add types for the 'level' and 'which' parameters for the cpuset. This
will permit expansion of the api to cover cpu masks for other objects
identifiable with an id_t integer. For example, IRQs and Jails may be
coming soon.
- The root set 0 contains all valid cpus. All threads initially belong to
cpuset 1. This permits migrating all threads off certain cpus to
reserve them for special applications.
Sponsored by: Nokia
Discussed with: arch, rwatson, brooks, davidxu, deischen
Reviewed by: antoine
2008-03-02 07:39:22 +00:00
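The "child set can only restrict its parent" relationship behind sets 0 and 1 can be shown with a plain bitmask. The real API works on cpuset_t values and numbered set ids; this only demonstrates the containment rule:

```c
#include <assert.h>
#include <stdint.h>

/* Toy affinity mask: the root set contains all CPUs; a child set
 * can only restrict its parent, never widen beyond it. */
typedef uint64_t cpumask_t;

#define	CPU_MASK(n)	((cpumask_t)1 << (n))
#define	ALL_CPUS(n)	(((n) == 64) ? ~(cpumask_t)0 : CPU_MASK(n) - 1)

/* A set's effective mask is clipped to its parent's. */
cpumask_t
cpuset_child(cpumask_t parent, cpumask_t requested)
{
	return (parent & requested);
}

int
cpu_in_set(cpumask_t set, int cpu)
{
	return ((set & CPU_MASK(cpu)) != 0);
}
```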
|
|
|
td->td_cpuset = cpuset_thread0();
|
2018-01-12 22:48:23 +00:00
|
|
|
td->td_domain.dr_policy = td->td_cpuset->cs_domain;
|
2018-11-14 19:10:35 +00:00
|
|
|
epoch_thread_init(td);
|
2015-02-27 16:28:55 +00:00
|
|
|
prison0_init();
|
1997-06-16 00:29:36 +00:00
|
|
|
p->p_peers = 0;
|
|
|
|
p->p_leader = p;
|
2014-12-15 12:01:42 +00:00
|
|
|
p->p_reaper = p;
|
2018-07-05 16:16:28 +00:00
|
|
|
p->p_treeflag |= P_TREE_REAPER;
|
2014-12-15 12:01:42 +00:00
|
|
|
LIST_INIT(&p->p_reaplist);
|
1997-06-16 00:29:36 +00:00
|
|
|
|
2007-10-26 08:00:41 +00:00
|
|
|
strncpy(p->p_comm, "kernel", sizeof (p->p_comm));
|
|
|
|
strncpy(td->td_name, "swapper", sizeof (td->td_name));
|
1994-05-24 10:09:53 +00:00
|
|
|
|
Fix a race between kern_setitimer() and realitexpire(), where the
callout is started before kern_setitimer() acquires process mutex, but
loses the race and kern_setitimer() gets the process mutex before the
callout. Then, assuming that new specified struct itimerval has
it_interval zero, but it_value non-zero, the callout, after it starts
executing again, clears p->p_realtimer.it_value, but kern_setitimer()
already rescheduled the callout.
As a result of the race, p_realtimer is zero, yet the callout
is rescheduled. Then, in exit1(), the exit code sees that it_value
is zero and does not even try to stop the callout. This allows the
struct proc to be reused and eventually the armed callout is
re-initialized. The consequence is the corrupted callwheel tailq.
Use process mutex to interlock the callout start, which fixes the race.
Reported and tested by: pho
Reviewed by: jhb
MFC after: 2 weeks
2012-12-04 20:49:39 +00:00
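The race logic can be illustrated with a generation counter as a stand-in for the lock-protected state. The actual fix interlocks the callout start with the process mutex; this sketch only models the invariant that a stale callout must do nothing:

```c
#include <assert.h>

/* Logic of the interlock: each kern_setitimer() bumps a generation
 * under the lock; a callout scheduled before the bump sees a stale
 * generation and does nothing.  (The real fix uses the process
 * mutex itself; the generation here is only a model.) */
struct itimer {
	int gen;	/* bumped by each setitimer */
	long it_value;	/* pending expiry, 0 = disarmed */
};

void
setitimer_locked(struct itimer *it, long value)
{
	it->gen++;
	it->it_value = value;
}

/* Runs with the lock held; mygen was captured at schedule time. */
void
callout_locked(struct itimer *it, int mygen)
{
	if (mygen != it->gen)
		return;		/* lost the race to a newer setitimer */
	it->it_value = 0;	/* normal one-shot expiry */
}
```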
|
|
|
callout_init_mtx(&p->p_itcallout, &p->p_mtx, 0);
|
2007-06-01 01:12:45 +00:00
|
|
|
callout_init_mtx(&p->p_limco, &p->p_mtx, 0);
|
2015-05-22 17:05:21 +00:00
|
|
|
callout_init(&td->td_slpcallout, 1);
|
2000-11-27 22:52:31 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/* Create credentials. */
|
2015-03-16 00:10:03 +00:00
|
|
|
newcred = crget();
|
|
|
|
newcred->cr_ngroups = 1; /* group 0 */
|
2017-11-01 05:51:20 +00:00
|
|
|
/* A hack to prevent uifind from tripping over NULL pointers. */
|
|
|
|
curthread->td_ucred = newcred;
|
|
|
|
tmpuinfo.ui_uid = 1;
|
|
|
|
newcred->cr_uidinfo = newcred->cr_ruidinfo = &tmpuinfo;
|
2015-03-16 00:10:03 +00:00
|
|
|
newcred->cr_uidinfo = uifind(0);
|
|
|
|
newcred->cr_ruidinfo = uifind(0);
|
2017-11-01 06:12:14 +00:00
|
|
|
newcred->cr_loginclass = &tmplc;
|
|
|
|
newcred->cr_loginclass = loginclass_find("default");
|
2017-11-01 05:51:20 +00:00
|
|
|
/* End hack. Creds are properly set later with thread_cow_get_proc(). */
|
|
|
|
curthread->td_ucred = NULL;
|
2015-03-16 00:10:03 +00:00
|
|
|
newcred->cr_prison = &prison0;
|
2015-03-21 20:24:54 +00:00
|
|
|
proc_set_cred_init(p, newcred);
|
2006-02-02 01:16:31 +00:00
|
|
|
#ifdef AUDIT
|
2015-03-16 00:10:03 +00:00
|
|
|
audit_cred_kproc0(newcred);
|
2006-02-02 01:16:31 +00:00
|
|
|
#endif
|
2002-07-31 00:39:19 +00:00
|
|
|
#ifdef MAC
|
2015-03-16 00:10:03 +00:00
|
|
|
mac_cred_create_swapper(newcred);
|
2002-07-31 00:39:19 +00:00
|
|
|
#endif
|
- Merge struct procsig with struct sigacts.
- Move struct sigacts out of the u-area and malloc() it using the
M_SUBPROC malloc bucket.
- Add a small sigacts_*() API for managing sigacts structures: sigacts_alloc(),
sigacts_free(), sigacts_copy(), sigacts_share(), and sigacts_shared().
- Remove the p_sigignore, p_sigacts, and p_sigcatch macros.
- Add a mutex to struct sigacts that protects all the members of the struct.
- Add sigacts locking.
- Remove Giant from nosys(), kill(), killpg(), and kern_sigaction() now
that sigacts is locked.
- Several in-kernel functions such as psignal(), tdsignal(), trapsignal(),
and thread_stopped() are now MP safe.
Reviewed by: arch@
Approved by: re (rwatson)
2003-05-13 20:36:02 +00:00
|
|
|
/* Create sigacts. */
|
|
|
|
p->p_sigacts = sigacts_alloc();
|
1998-12-19 02:55:34 +00:00
|
|
|
|
2000-08-11 09:05:12 +00:00
|
|
|
/* Initialize signal state for process 0. */
|
|
|
|
siginit(&proc0);
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/* Create the file descriptor table. */
|
2014-11-13 21:15:09 +00:00
|
|
|
p->p_fd = fdinit(NULL, false);
|
2003-06-02 16:05:32 +00:00
|
|
|
p->p_fdtol = NULL;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/* Create the limits structures. */
|
2004-02-04 21:52:57 +00:00
|
|
|
p->p_limit = lim_alloc();
|
|
|
|
for (i = 0; i < RLIM_NLIMITS; i++)
|
|
|
|
p->p_limit->pl_rlimit[i].rlim_cur =
|
|
|
|
p->p_limit->pl_rlimit[i].rlim_max = RLIM_INFINITY;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_NOFILE].rlim_cur =
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_NOFILE].rlim_max = maxfiles;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_NPROC].rlim_cur =
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_NPROC].rlim_max = maxproc;
|
2010-04-11 16:26:07 +00:00
|
|
|
p->p_limit->pl_rlimit[RLIMIT_DATA].rlim_cur = dfldsiz;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_DATA].rlim_max = maxdsiz;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_STACK].rlim_cur = dflssiz;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_STACK].rlim_max = maxssiz;
|
|
|
|
/* Cast to avoid overflow on i386/PAE. */
|
2018-02-06 22:10:07 +00:00
|
|
|
pageablemem = ptoa((vm_paddr_t)vm_free_count());
|
2010-04-11 16:26:07 +00:00
|
|
|
p->p_limit->pl_rlimit[RLIMIT_RSS].rlim_cur =
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_RSS].rlim_max = pageablemem;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_MEMLOCK].rlim_cur = pageablemem / 3;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_MEMLOCK].rlim_max = pageablemem;
|
2002-10-09 17:17:24 +00:00
|
|
|
p->p_cpulimit = RLIM_INFINITY;
|
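The initialization above defaults every limit to RLIM_INFINITY and then clamps the specific ones. Userspace can exercise the same clamp pattern with the POSIX rlimit API; RLIMIT_NOFILE is an arbitrary example here:

```c
#include <assert.h>
#include <sys/resource.h>

/* Userspace analogue of the clamp pattern above: a process may always
 * lower its soft limit toward rlim_max without privilege. */
int
clamp_soft_nofile(rlim_t want)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
		return (-1);
	rl.rlim_cur = (want < rl.rlim_max) ? want : rl.rlim_max;
	return (setrlimit(RLIMIT_NOFILE, &rl));
}
```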
1994-05-24 10:09:53 +00:00
|
|
|
|
2015-06-10 10:43:59 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
thread_cow_get_proc(td, p);
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
|
2011-03-29 17:47:25 +00:00
|
|
|
/* Initialize resource accounting structures. */
|
|
|
|
racct_create(&p->p_racct);
|
|
|
|
|
2004-11-20 02:28:48 +00:00
|
|
|
p->p_stats = pstats_alloc();
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/* Allocate a prototype map so we have something to fork. */
|
|
|
|
p->p_vmspace = &vmspace0;
|
|
|
|
vmspace0.vm_refcnt = 1;
|
2015-05-15 08:30:29 +00:00
|
|
|
pmap_pinit0(vmspace_pmap(&vmspace0));
|
2009-10-02 17:48:51 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* proc0 is not expected to enter usermode, so there is no special
|
|
|
|
* handling for sv_minuser here, like is done for exec_new_vmspace().
|
|
|
|
*/
|
2010-04-03 19:07:05 +00:00
|
|
|
vm_map_init(&vmspace0.vm_map, vmspace_pmap(&vmspace0),
|
|
|
|
p->p_sysent->sv_minuser, p->p_sysent->sv_maxuser);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2010-07-22 05:42:29 +00:00
|
|
|
/*
|
|
|
|
* Call the init and ctor for the new thread and proc. We wait
|
|
|
|
* to do this until all other structures are fairly sane.
|
2007-11-18 13:56:51 +00:00
|
|
|
*/
|
2017-11-09 22:51:48 +00:00
|
|
|
EVENTHANDLER_DIRECT_INVOKE(process_init, p);
|
|
|
|
EVENTHANDLER_DIRECT_INVOKE(thread_init, td);
|
|
|
|
EVENTHANDLER_DIRECT_INVOKE(process_ctor, p);
|
|
|
|
EVENTHANDLER_DIRECT_INVOKE(thread_ctor, td);
|
2007-11-18 13:56:51 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1996-03-11 06:14:38 +00:00
|
|
|
* Charge root for one process.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
o Merge contents of struct pcred into struct ucred. Specifically, add the
real uid, saved uid, real gid, and saved gid to ucred, as well as the
pcred->pc_uidinfo, which was associated with the real uid, only rename
it to cr_ruidinfo so as not to conflict with cr_uidinfo, which
corresponds to the effective uid.
o Remove p_cred from struct proc; add p_ucred to struct proc, replacing
the original macro that pointed p->p_ucred to p->p_cred->pc_ucred.
o Universally update code so that it makes use of ucred instead of pcred,
p->p_ucred instead of p->p_pcred, cr_ruidinfo instead of p_uidinfo,
cr_{r,sv}{u,g}id instead of p_*, etc.
o Remove pcred0 and its initialization from init_main.c; initialize
cr_ruidinfo there.
o Restructure many credential modification chunks to always crdup while
we figure out locking and optimizations; generally speaking, this
means moving to a structure like this:
newcred = crdup(oldcred);
...
p->p_ucred = newcred;
crfree(oldcred);
It's not race-free, but better than nothing. There are also races
in sys_process.c, all inter-process authorization, fork, exec, and
exit.
o Remove sigio->sio_ruid since sigio->sio_ucred now contains the ruid;
remove comments indicating that the old arrangement was a problem.
o Restructure exec1() a little to use newcred/oldcred arrangement, and
use improved uid management primitives.
o Clean up exit1() so as to do less work in credential cleanup due to
pcred removal.
o Clean up fork1() so as to do less work in credential cleanup and
allocation.
o Clean up ktrcanset() to take into account changes, and move to using
suser_xxx() instead of performing a direct uid==0 comparison.
o Improve commenting in various kern_prot.c credential modification
calls to better document current behavior. In a couple of places,
current behavior is a little questionable and we need to check
POSIX.1 to make sure it's "right". More commenting work still
remains to be done.
o Update credential management calls, such as crfree(), to take into
account new ruidinfo reference.
o Modify or add the following uid and gid helper routines:
change_euid()
change_egid()
change_ruid()
change_rgid()
change_svuid()
change_svgid()
In each case, the call now acts on a credential not a process, and as
such no longer requires more complicated process locking/etc. They
now assume the caller will do any necessary allocation of an
exclusive credential reference. Each is commented to document its
reference requirements.
o CANSIGIO() is simplified to require only credentials, not processes
and pcreds.
o Remove lots of (p_pcred==NULL) checks.
o Add an XXX to authorization code in nfs_lock.c, since it's
questionable, and needs to be considered carefully.
o Simplify posix4 authorization code to require only credentials, not
processes and pcreds. Note that this authorization, as well as
CANSIGIO(), needs to be updated to use the p_cansignal() and
p_cansched() centralized authorization routines, as they currently
do not take into account some desirable restrictions that are handled
by the centralized routines, as well as being inconsistent with other
similar authorization instances.
o Update libkvm to take these changes into account.
Obtained from: TrustedBSD Project
Reviewed by: green, bde, jhb, freebsd-arch, freebsd-audit
2001-05-25 16:59:11 +00:00
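The crdup/crfree idiom shown above (copy, modify the copy, publish it, drop the old reference) looks roughly like this. The struct and helpers are toy versions with kernel-flavored names, not the kernel's own:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcounted credential showing the idiom:
 *	newcred = crdup(oldcred); ... ; publish; crfree(oldcred);
 * Holders of the old reference keep seeing a consistent snapshot. */
struct ucred {
	int cr_ref;
	int cr_uid;
};

struct ucred *
crget_uid(int uid)
{
	struct ucred *cr = malloc(sizeof(*cr));

	cr->cr_ref = 1;
	cr->cr_uid = uid;
	return (cr);
}

struct ucred *
crdup(const struct ucred *old)
{
	return (crget_uid(old->cr_uid));
}

/* Returns 1 when the last reference was dropped and cr was freed. */
int
crfree(struct ucred *cr)
{
	if (--cr->cr_ref == 0) {
		free(cr);
		return (1);
	}
	return (0);
}

/* Modify by copy-and-swap: holders of the old ref are unaffected. */
struct ucred *
cred_set_uid(struct ucred *old, int uid)
{
	struct ucred *new = crdup(old);

	new->cr_uid = uid;
	crfree(old);
	return (new);
}
```

As the commit message notes, this copy-then-swap is not race-free by itself, but it keeps readers of the old credential consistent while locking is worked out.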
|
|
|
(void)chgproccnt(p->p_ucred->cr_ruidinfo, 1, 0);
|
2011-03-31 19:22:11 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
racct_add_force(p, RACCT_NPROC, 1);
|
|
|
|
PROC_UNLOCK(p);
|
1995-08-28 09:19:25 +00:00
|
|
|
}
|
2008-03-16 10:58:09 +00:00
|
|
|
SYSINIT(p0init, SI_SUB_INTRINSIC, SI_ORDER_FIRST, proc0_init, NULL);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1995-08-28 09:19:25 +00:00
|
|
|
/* ARGSUSED*/
|
1995-12-10 13:45:30 +00:00
|
|
|
static void
|
2000-08-11 09:05:12 +00:00
|
|
|
proc0_post(void *dummy __unused)
|
1995-08-28 09:19:25 +00:00
|
|
|
{
|
1998-04-04 13:26:20 +00:00
|
|
|
struct timespec ts;
|
2000-08-11 09:05:12 +00:00
|
|
|
struct proc *p;
|
2007-06-09 18:56:11 +00:00
|
|
|
struct rusage ru;
|
2008-01-10 22:11:20 +00:00
|
|
|
struct thread *td;
|
1996-09-23 04:37:54 +00:00
|
|
|
|
1994-05-25 09:21:21 +00:00
|
|
|
/*
|
1999-02-25 11:03:08 +00:00
|
|
|
* Now we can look at the time, having had a chance to verify the
|
2002-05-16 21:28:32 +00:00
|
|
|
* time from the filesystem. Pretend that proc0 started now.
|
1994-05-25 09:21:21 +00:00
|
|
|
*/
|
2001-03-28 11:52:56 +00:00
|
|
|
sx_slock(&allproc_lock);
|
2007-01-17 14:58:53 +00:00
|
|
|
FOREACH_PROC_IN_SYSTEM(p) {
|
2018-06-15 00:36:41 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
if (p->p_state == PRS_NEW) {
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
continue;
|
|
|
|
}
|
2003-05-01 16:59:23 +00:00
|
|
|
microuptime(&p->p_stats->p_start);
|
2014-11-26 14:10:00 +00:00
|
|
|
PROC_STATLOCK(p);
|
2007-06-09 18:56:11 +00:00
|
|
|
rufetch(p, &ru); /* Clears thread stats */
|
2006-02-07 21:22:02 +00:00
|
|
|
p->p_rux.rux_runtime = 0;
|
2007-06-09 18:56:11 +00:00
|
|
|
p->p_rux.rux_uticks = 0;
|
|
|
|
p->p_rux.rux_sticks = 0;
|
|
|
|
p->p_rux.rux_iticks = 0;
|
2018-06-15 00:36:41 +00:00
|
|
|
PROC_STATUNLOCK(p);
|
2008-01-10 22:11:20 +00:00
|
|
|
FOREACH_THREAD_IN_PROC(p, td) {
|
|
|
|
td->td_runtime = 0;
|
|
|
|
}
|
2018-06-15 00:36:41 +00:00
|
|
|
PROC_UNLOCK(p);
|
2000-08-11 09:05:12 +00:00
|
|
|
}
|
2001-03-28 11:52:56 +00:00
|
|
|
sx_sunlock(&allproc_lock);
|
2006-02-07 21:22:02 +00:00
|
|
|
PCPU_SET(switchtime, cpu_ticks());
|
2000-09-07 01:33:02 +00:00
|
|
|
PCPU_SET(switchticks, ticks);
|
1995-08-28 09:19:25 +00:00
|
|
|
|
1996-09-23 04:37:54 +00:00
|
|
|
/*
|
|
|
|
* Give the ``random'' number generator a thump.
|
|
|
|
*/
|
1998-04-04 13:26:20 +00:00
|
|
|
nanotime(&ts);
|
|
|
|
srandom(ts.tv_sec ^ ts.tv_nsec);
|
1995-08-28 09:19:25 +00:00
|
|
|
}
|
2008-03-16 10:58:09 +00:00
|
|
|
SYSINIT(p0post, SI_SUB_INTRINSIC_POST, SI_ORDER_FIRST, proc0_post, NULL);
|
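proc0_post() above reseeds random(9) with ts.tv_sec ^ ts.tv_nsec once the clock can be trusted. The same fold works for a userspace libc srandom() seed; this is a demo of the shape only, nothing cryptographic:

```c
#define _DEFAULT_SOURCE
#include <assert.h>
#include <stdlib.h>
#include <time.h>

/* Fold full-resolution time into one seed word: low-order
 * nanoseconds perturb the seed even when seconds barely move. */
unsigned long
time_seed(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_REALTIME, &ts);
	return ((unsigned long)(ts.tv_sec ^ ts.tv_nsec));
}

long
seeded_random(void)
{
	srandom((unsigned int)time_seed());
	return (random());
}
```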
1994-05-24 10:09:53 +00:00
|
|
|
|
2009-10-20 16:36:51 +00:00
|
|
|
static void
|
|
|
|
random_init(void *dummy __unused)
|
|
|
|
{
|
|
|
|
|
|
|
|
/*
|
|
|
|
* After CPU has been started we have some randomness on most
|
|
|
|
* platforms via get_cyclecount(). For platforms that don't
|
|
|
|
* we will reseed random(9) in proc0_post() as well.
|
|
|
|
*/
|
|
|
|
srandom(get_cyclecount());
|
|
|
|
}
|
|
|
|
SYSINIT(random, SI_SUB_RANDOM, SI_ORDER_FIRST, random_init, NULL);
|
|
|
|
|
1995-08-28 09:19:25 +00:00
|
|
|
/*
|
|
|
|
***************************************************************************
|
|
|
|
****
|
|
|
|
**** The following SYSINIT's and glue code should be moved to the
|
|
|
|
**** respective files on a per subsystem basis.
|
|
|
|
****
|
|
|
|
***************************************************************************
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* List of paths to try when searching for "init".
|
|
|
|
*/
|
1999-04-20 21:15:13 +00:00
|
|
|
static char init_path[MAXPATHLEN] =
|
1999-05-05 12:20:23 +00:00
|
|
|
#ifdef INIT_PATH
|
|
|
|
__XSTRING(INIT_PATH);
|
|
|
|
#else
|
2011-10-27 10:25:11 +00:00
|
|
|
"/sbin/init:/sbin/oinit:/sbin/init.bak:/rescue/init";
|
1999-05-05 12:20:23 +00:00
|
|
|
#endif
|
2001-12-16 16:07:20 +00:00
|
|
|
SYSCTL_STRING(_kern, OID_AUTO, init_path, CTLFLAG_RD, init_path, 0,
|
|
|
|
"Path used to search the init process");
|
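start_init() below walks init_path with strsep() over a disposable copy (free_init_path/tmp_init_path), leaving the original string intact for the sysctl. A userspace sketch of the same walk; path_index() is a made-up helper that just locates a component:

```c
#define _DEFAULT_SOURCE
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Walk a colon-separated path list the way the loop below does:
 * strsep() consumes a writable copy, the original stays intact.
 * Returns the position of `want`, or -1 if absent. */
int
path_index(const char *pathlist, const char *want)
{
	char *copy, *freecopy, *path;
	int i = 0, found = -1;

	freecopy = copy = strdup(pathlist);
	while ((path = strsep(&copy, ":")) != NULL) {
		if (found < 0 && strcmp(path, want) == 0)
			found = i;
		i++;
	}
	free(freecopy);
	return (found);
}
```

Note that strsep() advances the copy pointer, which is why a second pointer (freecopy here, free_init_path in the kernel) must be kept for the final free().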
1994-05-24 10:09:53 +00:00
|
|
|
|
2005-09-15 13:16:07 +00:00
|
|
|
/*
|
|
|
|
* Shutdown timeout of init(8).
|
|
|
|
* Unused within kernel, but used to control init(8), hence do not remove.
|
|
|
|
*/
|
|
|
|
#ifndef INIT_SHUTDOWN_TIMEOUT
|
|
|
|
#define INIT_SHUTDOWN_TIMEOUT 120
|
|
|
|
#endif
|
|
|
|
static int init_shutdown_timeout = INIT_SHUTDOWN_TIMEOUT;
|
|
|
|
SYSCTL_INT(_kern, OID_AUTO, init_shutdown_timeout,
|
2010-08-09 14:48:31 +00:00
|
|
|
CTLFLAG_RW, &init_shutdown_timeout, 0, "Shutdown timeout of init(8). "
|
|
|
|
"Unused within kernel, but used to control init(8)");
|
2005-09-15 13:16:07 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1999-04-20 21:15:13 +00:00
|
|
|
* Start the initial user process; try exec'ing each pathname in init_path.
|
1994-05-24 10:09:53 +00:00
|
|
|
* The program is invoked with one argument containing the boot flags.
|
|
|
|
*/
|
|
|
|
static void
|
2000-08-11 09:05:12 +00:00
|
|
|
start_init(void *dummy)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
2018-12-04 00:15:47 +00:00
|
|
|
struct image_args args;
|
2018-12-05 19:18:16 +00:00
|
|
|
int error;
|
2018-05-17 23:07:51 +00:00
|
|
|
char *var, *path;
|
|
|
|
char *free_init_path, *tmp_init_path;
|
2001-09-12 08:38:13 +00:00
|
|
|
struct thread *td;
|
1999-07-01 13:21:46 +00:00
|
|
|
struct proc *p;
|
2018-12-04 00:15:47 +00:00
|
|
|
struct vmspace *oldvmspace;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2017-12-31 09:22:31 +00:00
|
|
|
TSENTER(); /* Here so we don't overlap with mi_startup. */
|
|
|
|
|
2001-09-12 08:38:13 +00:00
|
|
|
td = curthread;
|
|
|
|
p = td->td_proc;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2002-07-03 08:52:37 +00:00
|
|
|
vfs_mountroot();
|
2002-03-08 10:33:11 +00:00
|
|
|
|
2015-04-16 20:53:15 +00:00
|
|
|
/* Wipe GELI passphrase from the environment. */
|
|
|
|
kern_unsetenv("kern.geom.eli.passphrase");
|
|
|
|
|
2014-10-16 18:04:43 +00:00
|
|
|
if ((var = kern_getenv("init_path")) != NULL) {
|
2002-10-17 20:03:38 +00:00
|
|
|
strlcpy(init_path, var, sizeof(init_path));
|
2002-04-17 13:06:36 +00:00
|
|
|
freeenv(var);
|
1999-04-20 21:15:13 +00:00
|
|
|
}
|
2018-05-17 23:07:51 +00:00
|
|
|
free_init_path = tmp_init_path = strdup(init_path, M_TEMP);
|
1999-04-20 21:15:13 +00:00
|
|
|
|
2018-05-17 23:07:51 +00:00
|
|
|
while ((path = strsep(&tmp_init_path, ":")) != NULL) {
|
1999-04-20 21:15:13 +00:00
|
|
|
if (bootverbose)
|
2018-05-17 23:07:51 +00:00
|
|
|
printf("start_init: trying %s\n", path);
|
1999-04-20 21:15:13 +00:00
|
|
|
|
2018-12-04 00:15:47 +00:00
|
|
|
memset(&args, 0, sizeof(args));
|
|
|
|
error = exec_alloc_args(&args);
|
|
|
|
if (error != 0)
|
|
|
|
panic("%s: Can't allocate space for init arguments %d",
|
|
|
|
__func__, error);
|
|
|
|
|
|
|
|
error = exec_args_add_fname(&args, path, UIO_SYSSPACE);
|
|
|
|
if (error != 0)
|
|
|
|
panic("%s: Can't add fname %d", __func__, error);
|
|
|
|
error = exec_args_add_arg(&args, path, UIO_SYSSPACE);
|
|
|
|
if (error != 0)
|
|
|
|
panic("%s: Can't add argv[0] %d", __func__, error);
|
2018-12-05 19:18:16 +00:00
|
|
|
if (boothowto & RB_SINGLE)
|
|
|
|
error = exec_args_add_arg(&args, "-s", UIO_SYSSPACE);
|
2018-12-04 00:15:47 +00:00
|
|
|
if (error != 0)
|
|
|
|
panic("%s: Can't add -s %d", __func__, error);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Now try to exec the program. If can't for any reason
|
|
|
|
* other than it doesn't exist, complain.
|
1995-08-28 09:19:25 +00:00
|
|
|
*
|
2000-08-11 09:05:12 +00:00
|
|
|
* Otherwise, return via fork_trampoline() all the way
|
1999-07-01 13:21:46 +00:00
|
|
|
* to user mode as init!
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2018-12-04 00:15:47 +00:00
|
|
|
KASSERT((td->td_pflags & TDP_EXECVMSPC) == 0,
|
|
|
|
("nested execve"));
|
|
|
|
oldvmspace = td->td_proc->p_vmspace;
|
|
|
|
error = kern_execve(td, &args, NULL);
|
|
|
|
KASSERT(error != 0,
|
|
|
|
("kern_execve returned success, not EJUSTRETURN"));
|
|
|
|
if (error == EJUSTRETURN) {
|
|
|
|
if ((td->td_pflags & TDP_EXECVMSPC) != 0) {
|
|
|
|
KASSERT(p->p_vmspace != oldvmspace,
|
|
|
|
("oldvmspace still used"));
|
|
|
|
vmspace_free(oldvmspace);
|
|
|
|
td->td_pflags &= ~TDP_EXECVMSPC;
|
|
|
|
}
|
2018-05-17 23:07:51 +00:00
|
|
|
free(free_init_path, M_TEMP);
|
2017-12-31 09:22:31 +00:00
|
|
|
TSEXIT();
|
1994-05-24 10:09:53 +00:00
|
|
|
return;
|
2000-09-15 19:25:29 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
if (error != ENOENT)
|
2018-05-17 23:07:51 +00:00
|
|
|
printf("exec %s: error %d\n", path, error);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2018-05-17 23:07:51 +00:00
|
|
|
free(free_init_path, M_TEMP);
|
1999-12-20 02:50:49 +00:00
|
|
|
printf("init: not found in path %s\n", init_path);
|
1994-05-24 10:09:53 +00:00
|
|
|
panic("no init");
|
|
|
|
}
|
2000-08-11 09:05:12 +00:00
|
|
|
|
|
|
|
/*
|
2016-11-08 23:59:41 +00:00
|
|
|
* Like kproc_create(), but runs in its own address space.
|
2000-08-11 09:05:12 +00:00
|
|
|
* We do this early to reserve pid 1.
|
|
|
|
*
|
|
|
|
* Note special case - do not make it runnable yet. Other work
|
|
|
|
* in progress will change this more.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
create_init(const void *udata __unused)
|
|
|
|
{
|
2016-02-04 04:22:18 +00:00
|
|
|
struct fork_req fr;
|
2002-04-19 13:35:53 +00:00
|
|
|
struct ucred *newcred, *oldcred;
|
2015-07-16 14:30:11 +00:00
|
|
|
struct thread *td;
|
2000-08-11 09:05:12 +00:00
|
|
|
int error;
|
|
|
|
|
2016-02-04 04:22:18 +00:00
|
|
|
bzero(&fr, sizeof(fr));
|
|
|
|
fr.fr_flags = RFFDG | RFPROC | RFSTOPPED;
|
|
|
|
fr.fr_procp = &initproc;
|
|
|
|
error = fork1(&thread0, &fr);
|
2000-08-11 09:05:12 +00:00
|
|
|
if (error)
|
|
|
|
panic("cannot fork init: %d\n", error);
|
2004-01-16 20:29:23 +00:00
|
|
|
KASSERT(initproc->p_pid == 1, ("create_init: initproc->p_pid != 1"));
|
2002-04-19 13:35:53 +00:00
|
|
|
/* divorce init's credentials from the kernel's */
|
|
|
|
newcred = crget();
|
2014-12-15 12:01:42 +00:00
|
|
|
sx_xlock(&proctree_lock);
|
2001-01-24 10:40:56 +00:00
|
|
|
PROC_LOCK(initproc);
|
2007-09-17 05:31:39 +00:00
|
|
|
initproc->p_flag |= P_SYSTEM | P_INMEM;
|
2014-12-15 12:01:42 +00:00
|
|
|
initproc->p_treeflag |= P_TREE_REAPER;
|
2002-04-19 13:35:53 +00:00
|
|
|
oldcred = initproc->p_ucred;
|
|
|
|
crcopy(newcred, oldcred);
|
2002-07-31 00:39:19 +00:00
|
|
|
#ifdef MAC
|
2008-10-28 11:33:06 +00:00
|
|
|
mac_cred_create_init(newcred);
|
2006-02-02 01:16:31 +00:00
|
|
|
#endif
|
|
|
|
#ifdef AUDIT
|
2007-06-07 22:27:15 +00:00
|
|
|
audit_cred_proc1(newcred);
|
2002-07-31 00:39:19 +00:00
|
|
|
#endif
|
2015-03-16 00:10:03 +00:00
|
|
|
proc_set_cred(initproc, newcred);
|
2015-07-16 14:30:11 +00:00
|
|
|
td = FIRST_THREAD_IN_PROC(initproc);
|
|
|
|
crfree(td->td_ucred);
|
|
|
|
td->td_ucred = crhold(initproc->p_ucred);
|
2001-01-24 10:40:56 +00:00
|
|
|
PROC_UNLOCK(initproc);
|
2014-12-15 12:01:42 +00:00
|
|
|
sx_xunlock(&proctree_lock);
|
2002-04-19 13:35:53 +00:00
|
|
|
crfree(oldcred);
|
2016-06-16 12:05:44 +00:00
|
|
|
cpu_fork_kthread_handler(FIRST_THREAD_IN_PROC(initproc),
|
|
|
|
start_init, NULL);
|
2000-08-11 09:05:12 +00:00
|
|
|
}
|
2008-03-16 10:58:09 +00:00
|
|
|
SYSINIT(init, SI_SUB_CREATE_INIT, SI_ORDER_FIRST, create_init, NULL);
|
2000-08-11 09:05:12 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Make it runnable now.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
kick_init(const void *udata __unused)
|
|
|
|
{
|
2002-02-07 20:58:47 +00:00
|
|
|
struct thread *td;
|
2001-01-24 10:40:56 +00:00
|
|
|
|
2002-02-07 20:58:47 +00:00
|
|
|
td = FIRST_THREAD_IN_PROC(initproc);
|
Commit 14/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
synchronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-05 00:00:57 +00:00
|
|
|
thread_lock(td);
|
2002-09-11 08:13:56 +00:00
|
|
|
TD_SET_CAN_RUN(td);
|
2007-01-23 08:46:51 +00:00
|
|
|
sched_add(td, SRQ_BORING);
|
Commit 14/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
synchronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-05 00:00:57 +00:00
|
|
|
thread_unlock(td);
|
2000-08-11 09:05:12 +00:00
|
|
|
}
|
This is the much-discussed major upgrade to the random(4) device, known to you all as /dev/random.
This code has had an extensive rewrite and a good series of reviews, both by the author and other parties. This means a lot of code has been simplified. Pluggable structures for high-rate entropy generators are available, and it is most definitely not the case that /dev/random can be driven by only a hardware source any more. This has been designed out of the device. Hardware sources are stirred into the CSPRNG (Yarrow, Fortuna) like any other entropy source. Pluggable modules may be written by third parties for additional sources.
The harvesting structures and consequently the locking have been simplified. Entropy harvesting is done in a more general way (the documentation for this will follow). There is some GREAT entropy to be had in the UMA allocator, but it is disabled for now as messing with that is likely to annoy many people.
The venerable (but effective) Yarrow algorithm, which is no longer supported by its authors, now has an alternative, Fortuna. For now, Yarrow is retained as the default algorithm, but this may be changed using a kernel option. It is intended to make Fortuna the default algorithm for 11.0. Interested parties are encouraged to read ISBN 978-0-470-47424-2 "Cryptography Engineering" by Ferguson, Schneier and Kohno for Fortuna's gory details. Heck, read it anyway.
Many thanks to Arthur Mesh who did early grunt work, and who got caught in the crossfire rather more than he deserved to.
My thanks also to folks who helped me thresh this out on whiteboards and in the odd "Hallway track", or otherwise.
My Nomex pants are on. Let the feedback commence!
Reviewed by: trasz,des(partial),imp(partial?),rwatson(partial?)
Approved by: so(des)
2014-10-30 21:21:53 +00:00
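A toy illustration of "stirred into the CSPRNG like any other entropy source": every sample, hardware or not, folds into shared pool state rather than replacing it. Real Fortuna hashes samples into multiple numbered pools; this rotate-and-XOR is only the shape of the idea:

```c
#include <assert.h>
#include <stdint.h>

/* Toy "stirring": each sample rotates-and-XORs into pool state,
 * so no single source determines the state alone. */
struct pool {
	uint64_t state;
};

void
stir(struct pool *p, uint64_t sample)
{
	p->state = ((p->state << 7) | (p->state >> 57)) ^ sample;
}
```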
|
|
|
SYSINIT(kickinit, SI_SUB_KTHREAD_INIT, SI_ORDER_MIDDLE, kick_init, NULL);
|