/*-
 * Copyright (c) 1995 Terrence R. Lambert
 * All rights reserved.
 *
 * Copyright (c) 1982, 1986, 1989, 1991, 1992, 1993
 *	The Regents of the University of California.  All rights reserved.
 * (c) UNIX System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of UNIX System Laboratories, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)init_main.c	8.9 (Berkeley) 1/21/94
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_ddb.h"
#include "opt_init_path.h"
#include "opt_verbose_sysinit.h"

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/exec.h>
#include <sys/file.h>
#include <sys/filedesc.h>
#include <sys/jail.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/loginclass.h>
#include <sys/mount.h>
#include <sys/mutex.h>
#include <sys/syscallsubr.h>
#include <sys/sysctl.h>
#include <sys/proc.h>
#include <sys/racct.h>
#include <sys/resourcevar.h>
#include <sys/systm.h>
#include <sys/signalvar.h>
#include <sys/vnode.h>
#include <sys/sysent.h>
#include <sys/reboot.h>
#include <sys/sched.h>
#include <sys/sx.h>
#include <sys/sysproto.h>
#include <sys/vmmeter.h>
#include <sys/unistd.h>
#include <sys/malloc.h>
#include <sys/conf.h>
#include <sys/cpuset.h>

#include <machine/cpu.h>

#include <security/audit/audit.h>
#include <security/mac/mac_framework.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_domain.h>

#include <sys/copyright.h>

#include <ddb/ddb.h>
#include <ddb/db_sym.h>

void mi_startup(void); /* Should be elsewhere */

/* Components of the first process -- never freed. */
static struct session session0;
static struct pgrp pgrp0;
struct proc proc0;
struct thread0_storage thread0_st __aligned(16);
struct vmspace vmspace0;
struct proc *initproc;

#ifndef BOOTHOWTO
#define BOOTHOWTO	0
#endif
int boothowto = BOOTHOWTO;	/* initialized so that it can be patched */
SYSCTL_INT(_debug, OID_AUTO, boothowto, CTLFLAG_RD, &boothowto, 0,
    "Boot control flags, passed from loader");

#ifndef BOOTVERBOSE
#define BOOTVERBOSE	0
#endif
int bootverbose = BOOTVERBOSE;
SYSCTL_INT(_debug, OID_AUTO, bootverbose, CTLFLAG_RW, &bootverbose, 0,
    "Control the output of verbose kernel messages");

#ifdef INVARIANTS
FEATURE(invariants, "Kernel compiled with INVARIANTS, may affect performance");
#endif

/*
 * This ensures that there is at least one entry so that the sysinit_set
 * symbol is not undefined.  A subsystem ID of SI_SUB_DUMMY is never
 * executed.
 */
SYSINIT(placeholder, SI_SUB_DUMMY, SI_ORDER_ANY, NULL, NULL);

/*
 * The sysinit table itself.  Items are checked off as they are run.
 * If we want to register new sysinit types, add them to newsysinit.
 */
SET_DECLARE(sysinit_set, struct sysinit);
struct sysinit **sysinit, **sysinit_end;
struct sysinit **newsysinit, **newsysinit_end;

/*
 * Merge a new sysinit set into the current set, reallocating it if
 * necessary.  This can only be called after malloc is running.
 */
void
sysinit_add(struct sysinit **set, struct sysinit **set_end)
{
    struct sysinit **newset;
    struct sysinit **sipp;
    struct sysinit **xipp;
    int count;

    count = set_end - set;
    if (newsysinit)
        count += newsysinit_end - newsysinit;
    else
        count += sysinit_end - sysinit;
    newset = malloc(count * sizeof(*sipp), M_TEMP, M_NOWAIT);
    if (newset == NULL)
        panic("cannot malloc for sysinit");
    xipp = newset;
    if (newsysinit)
        for (sipp = newsysinit; sipp < newsysinit_end; sipp++)
            *xipp++ = *sipp;
    else
        for (sipp = sysinit; sipp < sysinit_end; sipp++)
            *xipp++ = *sipp;
    for (sipp = set; sipp < set_end; sipp++)
        *xipp++ = *sipp;
    if (newsysinit)
        free(newsysinit, M_TEMP);
    newsysinit = newset;
    newsysinit_end = newset + count;
}

#if defined(DDB) && defined(VERBOSE_SYSINIT)
static const char *
symbol_name(vm_offset_t va, db_strategy_t strategy)
{
    const char *name;
    c_db_sym_t sym;
    db_expr_t offset;

    if (va == 0)
        return (NULL);
    sym = db_search_symbol(va, strategy, &offset);
    if (offset != 0)
        return (NULL);
    db_symbol_values(sym, &name, NULL);
    return (name);
}
#endif

/*
 * System startup; initialize the world, create process 0, mount root
 * filesystem, and fork to create init and pagedaemon.  Most of the
 * hard work is done in the lower-level initialization routines including
 * startup(), which does memory initialization and autoconfiguration.
 *
 * This allows simple addition of new kernel subsystems that require
 * boot time initialization.  It also allows substitution of subsystem
 * (for instance, a scheduler, kernel profiler, or VM system) by object
 * module.  Finally, it allows for optional "kernel threads".
 */
void
mi_startup(void)
{

    register struct sysinit **sipp;	/* system initialization*/
    register struct sysinit **xipp;	/* interior loop of sort*/
    register struct sysinit *save;	/* bubble*/

#if defined(VERBOSE_SYSINIT)
    int last;
    int verbose;
#endif

    if (boothowto & RB_VERBOSE)
        bootverbose++;

    if (sysinit == NULL) {
        sysinit = SET_BEGIN(sysinit_set);
        sysinit_end = SET_LIMIT(sysinit_set);
    }

restart:
    /*
     * Perform a bubble sort of the system initialization objects by
     * their subsystem (primary key) and order (secondary key).
     */
    for (sipp = sysinit; sipp < sysinit_end; sipp++) {
        for (xipp = sipp + 1; xipp < sysinit_end; xipp++) {
            if ((*sipp)->subsystem < (*xipp)->subsystem ||
                ((*sipp)->subsystem == (*xipp)->subsystem &&
                (*sipp)->order <= (*xipp)->order))
                continue;	/* skip*/
            save = *sipp;
            *sipp = *xipp;
            *xipp = save;
        }
    }

#if defined(VERBOSE_SYSINIT)
    last = SI_SUB_COPYRIGHT;
    verbose = 0;
#if !defined(DDB)
    printf("VERBOSE_SYSINIT: DDB not enabled, symbol lookups disabled.\n");
#endif
#endif

    /*
     * Traverse the (now) ordered list of system initialization tasks.
     * Perform each task, and continue on to the next task.
     */
    for (sipp = sysinit; sipp < sysinit_end; sipp++) {

        if ((*sipp)->subsystem == SI_SUB_DUMMY)
            continue;	/* skip dummy task(s)*/

        if ((*sipp)->subsystem == SI_SUB_DONE)
            continue;

#if defined(VERBOSE_SYSINIT)
        if ((*sipp)->subsystem > last) {
            verbose = 1;
            last = (*sipp)->subsystem;
            printf("subsystem %x\n", last);
        }
        if (verbose) {
#if defined(DDB)
            const char *func, *data;

            func = symbol_name((vm_offset_t)(*sipp)->func,
                DB_STGY_PROC);
            data = symbol_name((vm_offset_t)(*sipp)->udata,
                DB_STGY_ANY);
            if (func != NULL && data != NULL)
                printf("   %s(&%s)... ", func, data);
            else if (func != NULL)
                printf("   %s(%p)... ", func, (*sipp)->udata);
            else
#endif
                printf("   %p(%p)... ", (*sipp)->func,
                    (*sipp)->udata);
        }
#endif

        /* Call function */
        (*((*sipp)->func))((*sipp)->udata);

#if defined(VERBOSE_SYSINIT)
        if (verbose)
            printf("done.\n");
#endif

        /* Check off the one we've just done */
        (*sipp)->subsystem = SI_SUB_DONE;

        /* Check if we've installed more sysinit items via KLD */
        if (newsysinit != NULL) {
            if (sysinit != SET_BEGIN(sysinit_set))
                free(sysinit, M_TEMP);
            sysinit = newsysinit;
            sysinit_end = newsysinit_end;
            newsysinit = NULL;
            newsysinit_end = NULL;
            goto restart;
        }
    }

    mtx_assert(&Giant, MA_OWNED | MA_NOTRECURSED);
    mtx_unlock(&Giant);

    /*
     * Now hand over this thread to swapper.
     */
    swapper();
    /* NOTREACHED*/
}

/*
 ***************************************************************************
 ****
 **** The following SYSINIT's belong elsewhere, but have not yet
 **** been moved.
 ****
 ***************************************************************************
 */
static void
print_caddr_t(void *data)
{
    printf("%s", (char *)data);
}

static void
print_version(void *data __unused)
{
    int len;

    /* Strip a trailing newline from version. */
    len = strlen(version);
    while (len > 0 && version[len - 1] == '\n')
        len--;
    printf("%.*s %s\n", len, version, machine);
    printf("%s\n", compiler_version);
}

SYSINIT(announce, SI_SUB_COPYRIGHT, SI_ORDER_FIRST, print_caddr_t,
    copyright);
SYSINIT(trademark, SI_SUB_COPYRIGHT, SI_ORDER_SECOND, print_caddr_t,
    trademark);
SYSINIT(version, SI_SUB_COPYRIGHT, SI_ORDER_THIRD, print_version, NULL);

#ifdef WITNESS
static char wit_warn[] =
    "WARNING: WITNESS option enabled, expect reduced performance.\n";
SYSINIT(witwarn, SI_SUB_COPYRIGHT, SI_ORDER_THIRD + 1,
    print_caddr_t, wit_warn);
SYSINIT(witwarn2, SI_SUB_LAST, SI_ORDER_THIRD + 1,
    print_caddr_t, wit_warn);
#endif

#ifdef DIAGNOSTIC
static char diag_warn[] =
    "WARNING: DIAGNOSTIC option enabled, expect reduced performance.\n";
SYSINIT(diagwarn, SI_SUB_COPYRIGHT, SI_ORDER_THIRD + 2,
    print_caddr_t, diag_warn);
SYSINIT(diagwarn2, SI_SUB_LAST, SI_ORDER_THIRD + 2,
    print_caddr_t, diag_warn);
#endif

static int
null_fetch_syscall_args(struct thread *td __unused,
    struct syscall_args *sa __unused)
{

    panic("null_fetch_syscall_args");
}

static void
null_set_syscall_retval(struct thread *td __unused, int error __unused)
{

    panic("null_set_syscall_retval");
}

struct sysentvec null_sysvec = {
    .sv_size	= 0,
    .sv_table	= NULL,
    .sv_mask	= 0,
    .sv_errsize	= 0,
    .sv_errtbl	= NULL,
    .sv_transtrap	= NULL,
    .sv_fixup	= NULL,
    .sv_sendsig	= NULL,
    .sv_sigcode	= NULL,
    .sv_szsigcode	= NULL,
    .sv_name	= "null",
    .sv_coredump	= NULL,
    .sv_imgact_try	= NULL,
    .sv_minsigstksz	= 0,
    .sv_pagesize	= PAGE_SIZE,
    .sv_minuser	= VM_MIN_ADDRESS,
    .sv_maxuser	= VM_MAXUSER_ADDRESS,
    .sv_usrstack	= USRSTACK,
    .sv_psstrings	= PS_STRINGS,
    .sv_stackprot	= VM_PROT_ALL,
    .sv_copyout_strings	= NULL,
    .sv_setregs	= NULL,
    .sv_fixlimit	= NULL,
    .sv_maxssiz	= NULL,
    .sv_flags	= 0,
    .sv_set_syscall_retval = null_set_syscall_retval,
    .sv_fetch_syscall_args = null_fetch_syscall_args,
    .sv_syscallnames = NULL,
    .sv_schedtail	= NULL,
    .sv_thread_detach	= NULL,
    .sv_trap	= NULL,
};
|
2002-07-20 02:56:12 +00:00
|
|
|
|
1995-08-28 09:19:25 +00:00
|
|
|
/*
|
|
|
|
***************************************************************************
|
|
|
|
****
|
2002-11-30 22:15:30 +00:00
|
|
|
**** The two following SYSINIT's are proc0 specific glue code. I am not
|
1995-08-28 09:19:25 +00:00
|
|
|
**** convinced that they can not be safely combined, but their order of
|
|
|
|
**** operation has been maintained as the same as the original init_main.c
|
|
|
|
**** for right now.
|
|
|
|
****
|
|
|
|
**** These probably belong in init_proc.c or kern_proc.c, since they
|
|
|
|
**** deal with proc0 (the fork template process).
|
|
|
|
****
|
|
|
|
***************************************************************************
|
|
|
|
*/
|
|
|
|
/* ARGSUSED*/
|
1995-12-10 13:45:30 +00:00
|
|
|
static void
|
2000-08-11 09:05:12 +00:00
|
|
|
proc0_init(void *dummy __unused)
|
1995-08-28 09:19:25 +00:00
|
|
|
{
|
2004-11-07 12:39:28 +00:00
|
|
|
struct proc *p;
|
2001-09-12 08:38:13 +00:00
|
|
|
struct thread *td;
|
2015-03-16 00:10:03 +00:00
|
|
|
struct ucred *newcred;
|
2010-04-11 16:26:07 +00:00
|
|
|
vm_paddr_t pageablemem;
|
|
|
|
int i;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-07-04 16:20:28 +00:00
|
|
|
GIANT_REQUIRED;
|
1994-05-24 10:09:53 +00:00
|
|
|
p = &proc0;
|
2002-02-07 20:58:47 +00:00
|
|
|
td = &thread0;
|
2007-11-18 13:56:51 +00:00
|
|
|
|
2000-09-07 01:33:02 +00:00
|
|
|
/*
|
2007-12-04 12:28:07 +00:00
|
|
|
* Initialize magic number and osrel.
|
2000-09-07 01:33:02 +00:00
|
|
|
*/
|
|
|
|
p->p_magic = P_MAGIC;
|
2007-12-04 12:28:07 +00:00
|
|
|
p->p_osrel = osreldate;
|
2000-09-07 01:33:02 +00:00
|
|
|
|
2006-10-26 21:42:22 +00:00
|
|
|
/*
|
|
|
|
* Initialize thread and process structures.
|
|
|
|
*/
|
|
|
|
procinit(); /* set up proc zone */
|
|
|
|
threadinit(); /* set up UMA zones */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialise scheduler resources.
|
|
|
|
* Add scheduler specific parts to proc, thread as needed.
|
|
|
|
*/
|
2004-09-05 02:09:54 +00:00
|
|
|
schedinit(); /* scheduler gets its house in order */
|
1996-07-31 09:26:54 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Create process 0 (the swapper).
|
|
|
|
*/
|
1996-03-11 06:14:38 +00:00
|
|
|
LIST_INSERT_HEAD(&allproc, p, p_list);
|
2001-04-11 18:50:50 +00:00
|
|
|
LIST_INSERT_HEAD(PIDHASH(0), p, p_hash);
|
2002-04-04 21:03:38 +00:00
|
|
|
mtx_init(&pgrp0.pg_mtx, "process group", NULL, MTX_DEF | MTX_DUPOK);
|
1994-05-24 10:09:53 +00:00
|
|
|
p->p_pgrp = &pgrp0;
|
1996-03-11 06:14:38 +00:00
|
|
|
LIST_INSERT_HEAD(PGRPHASH(0), &pgrp0, pg_hash);
|
|
|
|
LIST_INIT(&pgrp0.pg_members);
|
|
|
|
LIST_INSERT_HEAD(&pgrp0.pg_members, p, p_pglist);
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
pgrp0.pg_session = &session0;
|
2002-04-04 21:03:38 +00:00
|
|
|
mtx_init(&session0.s_mtx, "session", NULL, MTX_DEF);
|
Integrate the new MPSAFE TTY layer to the FreeBSD operating system.
The last half year I've been working on a replacement TTY layer for the
FreeBSD kernel. The new TTY layer was designed to improve the following:
- Improved driver model:
The old TTY layer has a driver model that is not abstract enough to
make it friendly to use. A good example is the output path, where the
device drivers directly access the output buffers. This means that an
in-kernel PPP implementation must always convert network buffers into
TTY buffers.
If a PPP implementation would be built on top of the new TTY layer
(still needs a hooks layer, though), it would allow the PPP
implementation to directly hand the data to the TTY driver.
- Improved hotplugging:
With the old TTY layer, it isn't entirely safe to destroy TTY's from
the system. This implementation has a two-step destructing design,
where the driver first abandons the TTY. After all threads have left
the TTY, the TTY layer calls a routine in the driver, which can be
used to free resources (unit numbers, etc).
The pts(4) driver also implements this feature, which means
posix_openpt() will now return PTY's that are created on the fly.
- Improved performance:
One of the major improvements is the per-TTY mutex, which is expected
to improve scalability when compared to the old Giant locking.
Another change is the unbuffered copying to userspace, which is both
used on TTY device nodes and PTY masters.
Upgrading should be quite straightforward. Unlike previous versions,
existing kernel configuration files do not need to be changed, except
when they reference device drivers that are listed in UPDATING.
Obtained from: //depot/projects/mpsafetty/...
Approved by: philip (ex-mentor)
Discussed: on the lists, at BSDCan, at the DevSummit
Sponsored by: Snow B.V., the Netherlands
dcons(4) fixed by: kan
2008-08-20 08:31:58 +00:00
|
|
|
refcount_init(&session0.s_count, 1);
|
1994-05-24 10:09:53 +00:00
|
|
|
session0.s_leader = p;
|
|
|
|
|
2002-07-20 02:56:12 +00:00
|
|
|
p->p_sysent = &null_sysvec;
|
2016-02-09 16:30:16 +00:00
|
|
|
p->p_flag = P_SYSTEM | P_INMEM | P_KPROC;
|
2013-09-19 18:53:42 +00:00
|
|
|
p->p_flag2 = 0;
|
Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one CPU) by making ALL system calls optionally asynchronous.
to come: ia64 and power-pc patches, patches for gdb, test program (in tools)
Reviewed by: Almost everyone who counts
(at various times, peter, jhb, matt, alfred, mini, bernd,
and a cast of thousands)
NOTE: this is still Beta code, and contains lots of debugging stuff.
Expect slight instability in signals.
2002-06-29 17:26:22 +00:00
|
|
|
p->p_state = PRS_NORMAL;
|
When filt_proc() removes event from the knlist due to the process
exiting (NOTE_EXIT->knlist_remove_inevent()), two things happen:
- knote kn_knlist pointer is reset
- INFLUX knote is removed from the process knlist.
And, there are two consequences:
- KN_LIST_UNLOCK() on such knote is nop
- there is nothing which would block exit1() from processing past the
knlist_destroy() (and knlist_destroy() resets knlist lock pointers).
Both consequences result either in leaked process lock, or
dereferencing NULL function pointers for locking.
Handle this by stopping embedding the process knlist into struct proc.
Instead, the knlist is allocated together with struct proc, but marked
as autodestroy on the zombie reap, by knlist_detach() function. The
knlist is freed when last kevent is removed from the list, in
particular, at the zombie reap time if the list is empty. As a result,
knlist_remove_inevent() is no longer needed and has been removed.
Other changes:
In filt_procattach(), clear NOTE_EXEC and NOTE_FORK desired events
from kn_sfflags for knote registered by kernel to only get NOTE_CHILD
notifications. The flags leak resulted in excessive
NOTE_EXEC/NOTE_FORK reports.
Fix immediate note activation in filt_procattach(). Condition should
be either the immediate CHILD_NOTE activation, or immediate NOTE_EXIT
report for the exiting process.
In knote_fork(), do not perform racy check for KN_INFLUX before kq
lock is taken. Besides being racy, it did not account for notes
just added by scan (KN_SCAN).
Some minor and incomplete style fixes.
Analyzed and tested by: Eric Badger <eric@badgerio.us>
Reviewed by: jhb
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
Approved by: re (gjb)
Differential revision: https://reviews.freebsd.org/D6859
2016-06-27 21:52:17 +00:00
|
|
|
p->p_klist = knlist_alloc(&p->p_mtx);
|
Moderate rewrite of kernel ktrace code to attempt to generally improve
reliability when tracing fast-moving processes or writing traces to
slow file systems by avoiding unbounded queueing and dropped records.
Record loss was previously possible when the global pool of records
became depleted as a result of record generation outstripping record
commit, which occurred quickly in many common situations.
These changes partially restore the 4.x model of committing ktrace
records at the point of trace generation (synchronous), but maintain
the 5.x deferred record commit behavior (asynchronous) for situations
where entering VFS and sleeping is not possible (i.e., in the
scheduler). Records are now queued per-process as opposed to
globally, with processes responsible for committing records from their
own context as required.
- Eliminate the ktrace worker thread and global record queue, as they
are no longer used. Keep the global free record list, as records
are still used.
- Add a per-process record queue, which will hold any asynchronously
generated records, such as from context switches. This replaces the
global queue as the place to submit asynchronous records to.
- When a record is committed asynchronously, simply queue it to the
process.
- When a record is committed synchronously, first drain any pending
per-process records in order to maintain ordering as best we can.
Currently ordering between competing threads is provided via a global
ktrace_sx, but a per-process flag or lock may be desirable in the
future.
- When a process returns to user space following a system call, trap,
signal delivery, etc, flush any pending records.
- When a process exits, flush any pending records.
- Assert on process tear-down that there are no pending records.
- Slightly abstract the notion of being "in ktrace", which is used to
prevent the recursive generation of records, as well as generating
traces for ktrace events.
Future work here might look at changing the set of events marked for
synchronous and asynchronous record generation, re-balancing queue
depth, timeliness of commit to disk, and so on. I.e., performing a
drain every (n) records.
MFC after: 1 month
Discussed with: jhb
Requested by: Marc Olzheim <marcolz at stack dot nl>
2005-11-13 13:27:44 +00:00
|
|
|
STAILQ_INIT(&p->p_ktr);
|
2004-06-16 00:26:31 +00:00
|
|
|
p->p_nice = NZERO;
|
2012-08-16 13:01:56 +00:00
|
|
|
/* pid_max cannot be greater than PID_MAX */
|
2007-12-22 04:56:48 +00:00
|
|
|
td->td_tid = PID_MAX + 1;
|
2010-10-17 11:01:52 +00:00
|
|
|
LIST_INSERT_HEAD(TIDHASH(td->td_tid), td, td_hash);
|
Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one CPU) by making ALL system calls optionally asynchronous.
to come: ia64 and power-pc patches, patches for gdb, test program (in tools)
Reviewed by: Almost everyone who counts
(at various times, peter, jhb, matt, alfred, mini, bernd,
and a cast of thousands)
NOTE: this is still Beta code, and contains lots of debugging stuff.
Expect slight instability in signals.
2002-06-29 17:26:22 +00:00
|
|
|
td->td_state = TDS_RUNNING;
|
2006-10-26 21:42:22 +00:00
|
|
|
td->td_pri_class = PRI_TIMESHARE;
|
|
|
|
td->td_user_pri = PUSER;
|
2006-11-12 11:48:37 +00:00
|
|
|
td->td_base_user_pri = PUSER;
|
2010-12-09 02:42:02 +00:00
|
|
|
td->td_lend_user_pri = PRI_MAX;
|
Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one CPU) by making ALL system calls optionally asynchronous.
to come: ia64 and power-pc patches, patches for gdb, test program (in tools)
Reviewed by: Almost everyone who counts
(at various times, peter, jhb, matt, alfred, mini, bernd,
and a cast of thousands)
NOTE: this is still Beta code, and contains lots of debugging stuff.
Expect slight instability in signals.
2002-06-29 17:26:22 +00:00
|
|
|
td->td_priority = PVM;
|
2011-01-06 22:26:00 +00:00
|
|
|
td->td_base_pri = PVM;
|
2016-07-11 21:25:28 +00:00
|
|
|
td->td_oncpu = curcpu;
|
2012-01-22 11:01:36 +00:00
|
|
|
td->td_flags = TDF_INMEM;
|
|
|
|
td->td_pflags = TDP_KTHREAD;
|
Add cpuset, an api for thread to cpu binding and cpu resource grouping
and assignment.
- Add a reference to a struct cpuset in each thread that is inherited from
the thread that created it.
- Release the reference when the thread is destroyed.
- Add prototypes for syscalls and macros for manipulating cpusets in
sys/cpuset.h
- Add syscalls to create, get, and set new numbered cpusets:
cpuset(), cpuset_{get,set}id()
- Add syscalls for getting and setting affinity masks for cpusets or
individual threads: cpuset_{get,set}affinity()
- Add types for the 'level' and 'which' parameters for the cpuset. This
will permit expansion of the api to cover cpu masks for other objects
identifiable with an id_t integer. For example, IRQs and Jails may be
coming soon.
- The root set 0 contains all valid cpus. All threads initially belong to
cpuset 1. This permits migrating all threads off of certain cpus to
reserve them for special applications.
Sponsored by: Nokia
Discussed with: arch, rwatson, brooks, davidxu, deischen
Reviewed by: antoine
2008-03-02 07:39:22 +00:00
|
|
|
td->td_cpuset = cpuset_thread0();
|
Add an initial NUMA affinity/policy configuration for threads and processes.
This is based on work done by jeff@ and jhb@, as well as the numa.diff
patch that has been circulating whenever someone asks for first-touch NUMA
on -10 or -11.
* Introduce a simple set of VM policy and iterator types.
* tie the policy types into the vm_phys path for now, mirroring how
the initial first-touch allocation work was enabled.
* add syscalls to control changing thread and process defaults.
* add a global NUMA VM domain policy.
* implement a simple cascade policy order - if a thread policy exists, use it;
if a process policy exists, use it; use the default policy.
* processes inherit policies from their parent processes, threads inherit
policies from their parent threads.
* add a simple tool (numactl) to query and modify default thread/process
policies.
* add documentation for the new syscalls, for numa and for numactl.
* re-enable first touch NUMA again by default, as now policies can be
set in a variety of methods.
This is only relevant for very specific workloads.
This doesn't pretend to be a final NUMA solution.
The previous defaults in -HEAD (with MAXMEMDOM set) can be achieved by
'sysctl vm.default_policy=rr'.
This is only relevant if MAXMEMDOM is set to something other than 1.
I.e., if you're using GENERIC or a modified kernel with non-NUMA, then
this is a glorified no-op for you.
Thank you to Norse Corp for giving me access to rather large
(for FreeBSD!) NUMA machines in order to develop and verify this.
Thank you to Dell for providing me with dual socket sandybridge
and westmere v3 hardware to do NUMA development with.
Thank you to Scott Long at Netflix for providing me with access
to the two-socket, four-domain haswell v3 hardware.
Thank you to Peter Holm for running the stress testing suite
against the NUMA branch during various stages of development!
Tested:
* MIPS (regression testing; non-NUMA)
* i386 (regression testing; non-NUMA GENERIC)
* amd64 (regression testing; non-NUMA GENERIC)
* westmere, 2 socket (thankyou norse!)
* sandy bridge, 2 socket (thankyou dell!)
* ivy bridge, 2 socket (thankyou norse!)
* westmere-EX, 4 socket / 1TB RAM (thankyou norse!)
* haswell, 2 socket (thankyou norse!)
* haswell v3, 2 socket (thankyou dell)
* haswell v3, 2x18 core (thankyou scott long / netflix!)
* Peter Holm ran a stress test suite on this work and found one
issue, but has not been able to verify it (it doesn't look NUMA
related, and he only saw it once over many testing runs.)
* I've tested bhyve instances running in fixed NUMA domains and cpusets;
all seems to work correctly.
Verified:
* intel-pcm - pcm-numa.x and pcm-memory.x, whilst selecting different
NUMA policies for processes under test.
Review:
This was reviewed through phabricator (https://reviews.freebsd.org/D2559)
as well as privately and via emails to freebsd-arch@. The git history
with specific attributes is available at https://github.com/erikarn/freebsd/
in the NUMA branch (https://github.com/erikarn/freebsd/compare/local/adrian_numa_policy).
This has been reviewed by a number of people (stas, rpaulo, kib, ngie,
wblock) but not achieved a clear consensus. My hope is that with further
exposure and testing more functionality can be implemented and evaluated.
Notes:
* The VM doesn't handle unbalanced domains very well, and if you have an overly
unbalanced memory setup whilst under high memory pressure, VM page allocation
may fail leading to a kernel panic. This was a problem in the past, but it's
much more easily triggered now with these tools.
* This work only controls the path through vm_phys; it doesn't yet strongly/predictably
affect contigmalloc, KVA placement, UMA, etc. So, driver placement of memory
isn't really guaranteed in any way. That's next on my plate.
Sponsored by: Norse Corp, Inc.; Dell
2015-07-11 15:21:37 +00:00
|
|
|
vm_domain_policy_init(&td->td_vm_dom_policy);
|
|
|
|
vm_domain_policy_set(&td->td_vm_dom_policy, VM_POLICY_NONE, -1);
|
|
|
|
vm_domain_policy_init(&p->p_vm_dom_policy);
|
|
|
|
vm_domain_policy_set(&p->p_vm_dom_policy, VM_POLICY_NONE, -1);
|
2015-02-27 16:28:55 +00:00
|
|
|
prison0_init();
|
1997-06-16 00:29:36 +00:00
|
|
|
p->p_peers = 0;
|
|
|
|
p->p_leader = p;
|
2014-12-15 12:01:42 +00:00
|
|
|
p->p_reaper = p;
|
|
|
|
LIST_INIT(&p->p_reaplist);
|
1997-06-16 00:29:36 +00:00
|
|
|
|
2007-10-26 08:00:41 +00:00
|
|
|
strncpy(p->p_comm, "kernel", sizeof (p->p_comm));
|
|
|
|
strncpy(td->td_name, "swapper", sizeof (td->td_name));
|
1994-05-24 10:09:53 +00:00
|
|
|
|
Fix a race between kern_setitimer() and realitexpire(), where the
callout is started before kern_setitimer() acquires process mutex, but
loses the race and kern_setitimer() gets the process mutex before the
callout. Then, assuming that new specified struct itimerval has
it_interval zero, but it_value non-zero, the callout, after it starts
executing again, clears p->p_realtimer.it_value, but kern_setitimer()
already rescheduled the callout.
As a result of the race, p_realtimer is zero, yet the callout
is rescheduled. Then, in exit1(), the exit code sees that it_value
is zero and does not even try to stop the callout. This allows the
struct proc to be reused and eventually the armed callout is
re-initialized. The consequence is the corrupted callwheel tailq.
Use process mutex to interlock the callout start, which fixes the race.
Reported and tested by: pho
Reviewed by: jhb
MFC after: 2 weeks
2012-12-04 20:49:39 +00:00
|
|
|
callout_init_mtx(&p->p_itcallout, &p->p_mtx, 0);
|
2007-06-01 01:12:45 +00:00
|
|
|
callout_init_mtx(&p->p_limco, &p->p_mtx, 0);
|
2015-05-22 17:05:21 +00:00
|
|
|
callout_init(&td->td_slpcallout, 1);
|
2000-11-27 22:52:31 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/* Create credentials. */
|
2015-03-16 00:10:03 +00:00
|
|
|
newcred = crget();
|
|
|
|
newcred->cr_ngroups = 1; /* group 0 */
|
|
|
|
newcred->cr_uidinfo = uifind(0);
|
|
|
|
newcred->cr_ruidinfo = uifind(0);
|
|
|
|
newcred->cr_prison = &prison0;
|
|
|
|
newcred->cr_loginclass = loginclass_find("default");
|
2015-03-21 20:24:54 +00:00
|
|
|
proc_set_cred_init(p, newcred);
|
2006-02-02 01:16:31 +00:00
|
|
|
#ifdef AUDIT
|
2015-03-16 00:10:03 +00:00
|
|
|
audit_cred_kproc0(newcred);
|
2006-02-02 01:16:31 +00:00
|
|
|
#endif
|
2002-07-31 00:39:19 +00:00
|
|
|
#ifdef MAC
|
2015-03-16 00:10:03 +00:00
|
|
|
mac_cred_create_swapper(newcred);
|
2002-07-31 00:39:19 +00:00
|
|
|
#endif
|
- Merge struct procsig with struct sigacts.
- Move struct sigacts out of the u-area and malloc() it using the
M_SUBPROC malloc bucket.
- Add a small sigacts_*() API for managing sigacts structures: sigacts_alloc(),
sigacts_free(), sigacts_copy(), sigacts_share(), and sigacts_shared().
- Remove the p_sigignore, p_sigacts, and p_sigcatch macros.
- Add a mutex to struct sigacts that protects all the members of the struct.
- Add sigacts locking.
- Remove Giant from nosys(), kill(), killpg(), and kern_sigaction() now
that sigacts is locked.
- Several in-kernel functions such as psignal(), tdsignal(), trapsignal(),
and thread_stopped() are now MP safe.
Reviewed by: arch@
Approved by: re (rwatson)
2003-05-13 20:36:02 +00:00
|
|
|
/* Create sigacts. */
|
|
|
|
p->p_sigacts = sigacts_alloc();
|
1998-12-19 02:55:34 +00:00
|
|
|
|
2000-08-11 09:05:12 +00:00
|
|
|
/* Initialize signal state for process 0. */
|
|
|
|
siginit(&proc0);
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/* Create the file descriptor table. */
|
2014-11-13 21:15:09 +00:00
|
|
|
p->p_fd = fdinit(NULL, false);
|
2003-06-02 16:05:32 +00:00
|
|
|
p->p_fdtol = NULL;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/* Create the limits structures. */
|
2004-02-04 21:52:57 +00:00
|
|
|
p->p_limit = lim_alloc();
|
|
|
|
for (i = 0; i < RLIM_NLIMITS; i++)
|
|
|
|
p->p_limit->pl_rlimit[i].rlim_cur =
|
|
|
|
p->p_limit->pl_rlimit[i].rlim_max = RLIM_INFINITY;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_NOFILE].rlim_cur =
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_NOFILE].rlim_max = maxfiles;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_NPROC].rlim_cur =
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_NPROC].rlim_max = maxproc;
|
2010-04-11 16:26:07 +00:00
|
|
|
p->p_limit->pl_rlimit[RLIMIT_DATA].rlim_cur = dfldsiz;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_DATA].rlim_max = maxdsiz;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_STACK].rlim_cur = dflssiz;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_STACK].rlim_max = maxssiz;
|
|
|
|
/* Cast to avoid overflow on i386/PAE. */
|
2014-03-22 10:26:09 +00:00
|
|
|
pageablemem = ptoa((vm_paddr_t)vm_cnt.v_free_count);
|
2010-04-11 16:26:07 +00:00
|
|
|
p->p_limit->pl_rlimit[RLIMIT_RSS].rlim_cur =
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_RSS].rlim_max = pageablemem;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_MEMLOCK].rlim_cur = pageablemem / 3;
|
|
|
|
p->p_limit->pl_rlimit[RLIMIT_MEMLOCK].rlim_max = pageablemem;
|
2002-10-09 17:17:24 +00:00
|
|
|
p->p_cpulimit = RLIM_INFINITY;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2015-06-10 10:43:59 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
thread_cow_get_proc(td, p);
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
|
2011-03-29 17:47:25 +00:00
|
|
|
/* Initialize resource accounting structures. */
|
|
|
|
racct_create(&p->p_racct);
|
|
|
|
|
2004-11-20 02:28:48 +00:00
|
|
|
p->p_stats = pstats_alloc();
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/* Allocate a prototype map so we have something to fork. */
|
|
|
|
p->p_vmspace = &vmspace0;
|
|
|
|
vmspace0.vm_refcnt = 1;
|
2015-05-15 08:30:29 +00:00
|
|
|
pmap_pinit0(vmspace_pmap(&vmspace0));
|
2009-10-02 17:48:51 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* proc0 is not expected to enter usermode, so there is no special
|
|
|
|
* handling for sv_minuser here, like is done for exec_new_vmspace().
|
|
|
|
*/
|
2010-04-03 19:07:05 +00:00
|
|
|
vm_map_init(&vmspace0.vm_map, vmspace_pmap(&vmspace0),
|
|
|
|
p->p_sysent->sv_minuser, p->p_sysent->sv_maxuser);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2010-07-22 05:42:29 +00:00
|
|
|
/*
|
|
|
|
* Call the init and ctor for the new thread and proc. We wait
|
|
|
|
* to do this until all other structures are fairly sane.
|
2007-11-18 13:56:51 +00:00
|
|
|
*/
|
|
|
|
EVENTHANDLER_INVOKE(process_init, p);
|
|
|
|
EVENTHANDLER_INVOKE(thread_init, td);
|
|
|
|
EVENTHANDLER_INVOKE(process_ctor, p);
|
|
|
|
EVENTHANDLER_INVOKE(thread_ctor, td);
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1996-03-11 06:14:38 +00:00
|
|
|
* Charge root for one process.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
o Merge contents of struct pcred into struct ucred. Specifically, add the
real uid, saved uid, real gid, and saved gid to ucred, as well as the
pcred->pc_uidinfo, which was associated with the real uid, only rename
it to cr_ruidinfo so as not to conflict with cr_uidinfo, which
corresponds to the effective uid.
o Remove p_cred from struct proc; add p_ucred to struct proc, replacing
the original macro that pointed p->p_ucred to p->p_cred->pc_ucred.
o Universally update code so that it makes use of ucred instead of pcred,
p->p_ucred instead of p->p_pcred, cr_ruidinfo instead of p_uidinfo,
cr_{r,sv}{u,g}id instead of p_*, etc.
o Remove pcred0 and its initialization from init_main.c; initialize
cr_ruidinfo there.
o Restructure many credential modification chunks to always crdup while
we figure out locking and optimizations; generally speaking, this
means moving to a structure like this:
newcred = crdup(oldcred);
...
p->p_ucred = newcred;
crfree(oldcred);
It's not race-free, but better than nothing. There are also races
in sys_process.c, all inter-process authorization, fork, exec, and
exit.
o Remove sigio->sio_ruid since sigio->sio_ucred now contains the ruid;
remove comments indicating that the old arrangement was a problem.
o Restructure exec1() a little to use newcred/oldcred arrangement, and
use improved uid management primitives.
o Clean up exit1() so as to do less work in credential cleanup due to
pcred removal.
o Clean up fork1() so as to do less work in credential cleanup and
allocation.
o Clean up ktrcanset() to take into account changes, and move to using
suser_xxx() instead of performing a direct uid==0 comparison.
o Improve commenting in various kern_prot.c credential modification
calls to better document current behavior. In a couple of places,
current behavior is a little questionable and we need to check
POSIX.1 to make sure it's "right". More commenting work still
remains to be done.
o Update credential management calls, such as crfree(), to take into
account new ruidinfo reference.
o Modify or add the following uid and gid helper routines:
change_euid()
change_egid()
change_ruid()
change_rgid()
change_svuid()
change_svgid()
In each case, the call now acts on a credential not a process, and as
such no longer requires more complicated process locking/etc. They
now assume the caller will do any necessary allocation of an
exclusive credential reference. Each is commented to document its
reference requirements.
o CANSIGIO() is simplified to require only credentials, not processes
and pcreds.
o Remove lots of (p_pcred==NULL) checks.
o Add an XXX to authorization code in nfs_lock.c, since it's
questionable, and needs to be considered carefully.
o Simplify posix4 authorization code to require only credentials, not
processes and pcreds. Note that this authorization, as well as
CANSIGIO(), needs to be updated to use the p_cansignal() and
p_cansched() centralized authorization routines, as they currently
do not take into account some desirable restrictions that are handled
by the centralized routines, as well as being inconsistent with other
similar authorization instances.
o Update libkvm to take these changes into account.
Obtained from: TrustedBSD Project
Reviewed by: green, bde, jhb, freebsd-arch, freebsd-audit
2001-05-25 16:59:11 +00:00
|
|
|
(void)chgproccnt(p->p_ucred->cr_ruidinfo, 1, 0);
|
2011-03-31 19:22:11 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
racct_add_force(p, RACCT_NPROC, 1);
|
|
|
|
PROC_UNLOCK(p);
|
1995-08-28 09:19:25 +00:00
|
|
|
}
|
2008-03-16 10:58:09 +00:00
|
|
|
SYSINIT(p0init, SI_SUB_INTRINSIC, SI_ORDER_FIRST, proc0_init, NULL);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1995-08-28 09:19:25 +00:00
|
|
|
/* ARGSUSED*/
|
1995-12-10 13:45:30 +00:00
|
|
|
static void
|
2000-08-11 09:05:12 +00:00
|
|
|
proc0_post(void *dummy __unused)
|
1995-08-28 09:19:25 +00:00
|
|
|
{
|
1998-04-04 13:26:20 +00:00
|
|
|
struct timespec ts;
|
2000-08-11 09:05:12 +00:00
|
|
|
struct proc *p;
|
2007-06-09 18:56:11 +00:00
|
|
|
struct rusage ru;
|
2008-01-10 22:11:20 +00:00
|
|
|
struct thread *td;
|
1996-09-23 04:37:54 +00:00
|
|
|
|
1994-05-25 09:21:21 +00:00
|
|
|
/*
|
1999-02-25 11:03:08 +00:00
|
|
|
* Now we can look at the time, having had a chance to verify the
|
2002-05-16 21:28:32 +00:00
|
|
|
* time from the filesystem. Pretend that proc0 started now.
|
1994-05-25 09:21:21 +00:00
|
|
|
*/
|
2001-03-28 11:52:56 +00:00
|
|
|
sx_slock(&allproc_lock);
|
2007-01-17 14:58:53 +00:00
|
|
|
FOREACH_PROC_IN_SYSTEM(p) {
|
2003-05-01 16:59:23 +00:00
|
|
|
microuptime(&p->p_stats->p_start);
|
2014-11-26 14:10:00 +00:00
|
|
|
PROC_STATLOCK(p);
|
2007-06-09 18:56:11 +00:00
|
|
|
rufetch(p, &ru); /* Clears thread stats */
|
2014-11-26 14:10:00 +00:00
|
|
|
PROC_STATUNLOCK(p);
|
2006-02-07 21:22:02 +00:00
|
|
|
p->p_rux.rux_runtime = 0;
|
2007-06-09 18:56:11 +00:00
|
|
|
p->p_rux.rux_uticks = 0;
|
|
|
|
p->p_rux.rux_sticks = 0;
|
|
|
|
p->p_rux.rux_iticks = 0;
|
2008-01-10 22:11:20 +00:00
|
|
|
FOREACH_THREAD_IN_PROC(p, td) {
|
|
|
|
td->td_runtime = 0;
|
|
|
|
}
|
2000-08-11 09:05:12 +00:00
|
|
|
}
|
2001-03-28 11:52:56 +00:00
|
|
|
sx_sunlock(&allproc_lock);
|
2006-02-07 21:22:02 +00:00
|
|
|
PCPU_SET(switchtime, cpu_ticks());
|
2000-09-07 01:33:02 +00:00
|
|
|
PCPU_SET(switchticks, ticks);
|
1995-08-28 09:19:25 +00:00
|
|
|
|
1996-09-23 04:37:54 +00:00
|
|
|
/*
|
|
|
|
* Give the ``random'' number generator a thump.
|
|
|
|
*/
|
1998-04-04 13:26:20 +00:00
|
|
|
nanotime(&ts);
|
|
|
|
srandom(ts.tv_sec ^ ts.tv_nsec);
|
1995-08-28 09:19:25 +00:00
|
|
|
}
|
2008-03-16 10:58:09 +00:00
|
|
|
SYSINIT(p0post, SI_SUB_INTRINSIC_POST, SI_ORDER_FIRST, proc0_post, NULL);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2009-10-20 16:36:51 +00:00
|
|
|
static void
|
|
|
|
random_init(void *dummy __unused)
|
|
|
|
{
|
|
|
|
|
|
|
|
/*
|
|
|
|
* After CPU has been started we have some randomness on most
|
|
|
|
* platforms via get_cyclecount(). For platforms that don't,
|
|
|
|
* we will reseed random(9) in proc0_post() as well.
|
|
|
|
*/
|
|
|
|
srandom(get_cyclecount());
|
|
|
|
}
|
|
|
|
SYSINIT(random, SI_SUB_RANDOM, SI_ORDER_FIRST, random_init, NULL);
|
|
|
|
|
1995-08-28 09:19:25 +00:00
|
|
|
/*
|
|
|
|
***************************************************************************
|
|
|
|
****
|
|
|
|
**** The following SYSINIT's and glue code should be moved to the
|
|
|
|
**** respective files on a per subsystem basis.
|
|
|
|
****
|
|
|
|
***************************************************************************
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
|
1995-08-28 09:19:25 +00:00
|
|
|
/*
|
|
|
|
***************************************************************************
|
|
|
|
****
|
|
|
|
**** The following code probably belongs in another file, like
|
2000-08-11 09:05:12 +00:00
|
|
|
**** kern/init_init.c.
|
1995-08-28 09:19:25 +00:00
|
|
|
****
|
|
|
|
***************************************************************************
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* List of paths to try when searching for "init".
|
|
|
|
*/
|
1999-04-20 21:15:13 +00:00
|
|
|
static char init_path[MAXPATHLEN] =
|
1999-05-05 12:20:23 +00:00
|
|
|
#ifdef INIT_PATH
|
|
|
|
__XSTRING(INIT_PATH);
|
|
|
|
#else
|
2011-10-27 10:25:11 +00:00
|
|
|
"/sbin/init:/sbin/oinit:/sbin/init.bak:/rescue/init";
|
1999-05-05 12:20:23 +00:00
|
|
|
#endif
|
2001-12-16 16:07:20 +00:00
|
|
|
SYSCTL_STRING(_kern, OID_AUTO, init_path, CTLFLAG_RD, init_path, 0,
|
|
|
|
"Path used to search the init process");
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2005-09-15 13:16:07 +00:00
|
|
|
/*
|
|
|
|
* Shutdown timeout of init(8).
|
|
|
|
* Unused within kernel, but used to control init(8), hence do not remove.
|
|
|
|
*/
|
|
|
|
#ifndef INIT_SHUTDOWN_TIMEOUT
|
|
|
|
#define INIT_SHUTDOWN_TIMEOUT 120
|
|
|
|
#endif
|
|
|
|
static int init_shutdown_timeout = INIT_SHUTDOWN_TIMEOUT;
|
|
|
|
SYSCTL_INT(_kern, OID_AUTO, init_shutdown_timeout,
|
2010-08-09 14:48:31 +00:00
|
|
|
CTLFLAG_RW, &init_shutdown_timeout, 0, "Shutdown timeout of init(8). "
|
|
|
|
"Unused within kernel, but used to control init(8)");
|
2005-09-15 13:16:07 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
1999-04-20 21:15:13 +00:00
|
|
|
* Start the initial user process; try exec'ing each pathname in init_path.
|
1994-05-24 10:09:53 +00:00
|
|
|
* The program is invoked with one argument containing the boot flags.
|
|
|
|
*/
|
|
|
|
static void
|
2000-08-11 09:05:12 +00:00
|
|
|
start_init(void *dummy)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
vm_offset_t addr;
|
|
|
|
struct execve_args args;
|
1999-04-20 21:15:13 +00:00
|
|
|
int options, error;
|
|
|
|
char *var, *path, *next, *s;
|
|
|
|
char *ucp, **uap, *arg0, *arg1;
|
2001-09-12 08:38:13 +00:00
|
|
|
struct thread *td;
|
1999-07-01 13:21:46 +00:00
|
|
|
struct proc *p;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
similarily, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
2000-09-07 01:33:02 +00:00
|
|
|
|
2001-07-04 16:20:28 +00:00
|
|
|
GIANT_REQUIRED;
|
|
|
|
|
2001-09-12 08:38:13 +00:00
|
|
|
td = curthread;
|
|
|
|
p = td->td_proc;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2002-07-03 08:52:37 +00:00
|
|
|
vfs_mountroot();
|
2002-03-08 10:33:11 +00:00
|
|
|
|
2015-04-16 20:53:15 +00:00
|
|
|
/* Wipe GELI passphrase from the environment. */
|
|
|
|
kern_unsetenv("kern.geom.eli.passphrase");
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Need just enough stack to hold the faked-up "execve()" arguments.
|
|
|
|
*/
|
2002-09-21 22:07:17 +00:00
|
|
|
addr = p->p_sysent->sv_usrstack - PAGE_SIZE;
|
2013-09-09 18:11:59 +00:00
|
|
|
if (vm_map_find(&p->p_vmspace->vm_map, NULL, 0, &addr, PAGE_SIZE, 0,
|
|
|
|
VMFS_NO_SPACE, VM_PROT_ALL, VM_PROT_ALL, 0) != 0)
|
		panic("init: couldn't allocate argument space");
	p->p_vmspace->vm_maxsaddr = (caddr_t)addr;
	p->p_vmspace->vm_ssize = 1;

	if ((var = kern_getenv("init_path")) != NULL) {
		strlcpy(init_path, var, sizeof(init_path));
		freeenv(var);
	}

	for (path = init_path; *path != '\0'; path = next) {
		while (*path == ':')
			path++;
		if (*path == '\0')
			break;
		for (next = path; *next != '\0' && *next != ':'; next++)
			/* nothing */ ;
		if (bootverbose)
			printf("start_init: trying %.*s\n", (int)(next - path),
			    path);

		/*
		 * Move out the boot flag argument.
		 */
		options = 0;
		ucp = (char *)p->p_sysent->sv_usrstack;
		(void)subyte(--ucp, 0);		/* trailing zero */
		if (boothowto & RB_SINGLE) {
			(void)subyte(--ucp, 's');
			options = 1;
		}
#ifdef notyet
		if (boothowto & RB_FASTBOOT) {
			(void)subyte(--ucp, 'f');
			options = 1;
		}
#endif

#ifdef BOOTCDROM
		(void)subyte(--ucp, 'C');
		options = 1;
#endif

		if (options == 0)
			(void)subyte(--ucp, '-');
		(void)subyte(--ucp, '-');	/* leading hyphen */
		arg1 = ucp;

		/*
		 * Move out the file name (also arg 0).
		 */
		(void)subyte(--ucp, 0);
		for (s = next - 1; s >= path; s--)
			(void)subyte(--ucp, *s);
		arg0 = ucp;

		/*
		 * Move out the arg pointers.
		 */
		uap = (char **)rounddown2((intptr_t)ucp, sizeof(intptr_t));
		(void)suword((caddr_t)--uap, (long)0);	/* terminator */
		(void)suword((caddr_t)--uap, (long)(intptr_t)arg1);
		(void)suword((caddr_t)--uap, (long)(intptr_t)arg0);

		/*
		 * Point at the arguments.
		 */
		args.fname = arg0;
		args.argv = uap;
		args.envv = NULL;

		/*
		 * Now try to exec the program.  If it can't run for any
		 * reason other than that it doesn't exist, complain.
		 *
		 * Otherwise, return via fork_trampoline() all the way
		 * to user mode as init!
		 */
		if ((error = sys_execve(td, &args)) == 0) {
			mtx_unlock(&Giant);
			return;
		}
		if (error != ENOENT)
			printf("exec %.*s: error %d\n", (int)(next - path),
			    path, error);
	}
	printf("init: not found in path %s\n", init_path);
	panic("no init");
}

/*
 * Like kproc_create(), but runs in its own address space.
 * We do this early to reserve pid 1.
 *
 * Note special case - do not make it runnable yet.  Other work
 * in progress will change this more.
 */
static void
create_init(const void *udata __unused)
{
	struct fork_req fr;
	struct ucred *newcred, *oldcred;
	struct thread *td;
	int error;

	bzero(&fr, sizeof(fr));
	fr.fr_flags = RFFDG | RFPROC | RFSTOPPED;
	fr.fr_procp = &initproc;
	error = fork1(&thread0, &fr);
	if (error)
		panic("cannot fork init: %d\n", error);
	KASSERT(initproc->p_pid == 1, ("create_init: initproc->p_pid != 1"));
	/* divorce init's credentials from the kernel's */
	newcred = crget();
	sx_xlock(&proctree_lock);
	PROC_LOCK(initproc);
	initproc->p_flag |= P_SYSTEM | P_INMEM;
	initproc->p_treeflag |= P_TREE_REAPER;
	LIST_INSERT_HEAD(&initproc->p_reaplist, &proc0, p_reapsibling);
	oldcred = initproc->p_ucred;
	crcopy(newcred, oldcred);
#ifdef MAC
	mac_cred_create_init(newcred);
#endif
#ifdef AUDIT
	audit_cred_proc1(newcred);
#endif
	proc_set_cred(initproc, newcred);
	td = FIRST_THREAD_IN_PROC(initproc);
	crfree(td->td_ucred);
	td->td_ucred = crhold(initproc->p_ucred);
	PROC_UNLOCK(initproc);
	sx_xunlock(&proctree_lock);
	crfree(oldcred);
	cpu_fork_kthread_handler(FIRST_THREAD_IN_PROC(initproc),
	    start_init, NULL);
}
SYSINIT(init, SI_SUB_CREATE_INIT, SI_ORDER_FIRST, create_init, NULL);

/*
 * Make it runnable now.
 */
static void
kick_init(const void *udata __unused)
{
	struct thread *td;

	td = FIRST_THREAD_IN_PROC(initproc);
	thread_lock(td);
	TD_SET_CAN_RUN(td);
	sched_add(td, SRQ_BORING);
	thread_unlock(td);
}
SYSINIT(kickinit, SI_SUB_KTHREAD_INIT, SI_ORDER_MIDDLE, kick_init, NULL);