Commit Graph

161 Commits

Author SHA1 Message Date
Benno Rice
3e4409437f Copy the "implementation" of pmap_prefault from sparc64. 2002-03-07 12:22:08 +00:00
Benno Rice
9164438ed2 Move tunable initialisation so it can get access to physmem. 2002-03-07 10:15:17 +00:00
Benno Rice
d2c1f57685 Calculate physmem. 2002-03-07 10:09:24 +00:00
Andrew R. Reiter
66c862bc1b - Move a comment from being on the same line as a #ifdef to the line
following it.  This should have gone in the previous commit, but I
  misread Bruce's patch.

Requested by: bde
2002-02-28 21:52:08 +00:00
Benno Rice
677bcc872c cpu_switch now works, for kthreads at least. 2002-02-28 12:06:49 +00:00
Benno Rice
0e1338662a Various cleanups. 2002-02-28 12:00:24 +00:00
Benno Rice
b8160b5af4 - Prevent the decrementer interrupt handler from nesting.
- Catch some more cases of PSL_EE and PSL_RI getting out of sync.
2002-02-28 11:57:47 +00:00
Benno Rice
ac6ba8bd4a - Modify pmap_activate so it only marks the pmap as active.
- Add a pmap_deactivate function.
2002-02-28 11:55:44 +00:00
Benno Rice
3301d20ad9 GC an unused variable in cpu_fork(). 2002-02-28 08:48:58 +00:00
Andrew R. Reiter
216ae18217 - Fix panic() message and a couple style nits that snuck in from the
recent diagnostics commit (rev. 1.84).
2002-02-28 08:28:14 +00:00
Benno Rice
4eed0cf1be Make fork work, at least for kthreads. Switching still has some issues. 2002-02-28 03:24:07 +00:00
Benno Rice
b7ac845009 - Rearrange the sequence of events in powerpc_init() somewhat.
- Catch another instance of PSL_EE being cleared without PSL_RI.
2002-02-28 03:15:49 +00:00
Benno Rice
88afb2a31b Implement the following functions:
	- pmap_remove
	- pmap_kremove
	- pmap_qremove
2002-02-28 02:54:16 +00:00
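
For context, a minimal sketch of how callers typically use these three removal interfaces; the enclosing function and its names are illustrative, not from this commit, though the signatures match the machine-independent pmap interface of the period.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>

    /* Illustrative caller only; none of this code is from the commit itself. */
    static void
    unmap_examples(pmap_t pm, vm_offset_t uva, vm_offset_t kva, int npages)
    {
            /* Remove a range of mappings from a specific pmap. */
            pmap_remove(pm, uva, uva + npages * PAGE_SIZE);

            /* Drop a single wired kernel mapping made with pmap_kenter(). */
            pmap_kremove(kva);

            /* Drop a run of kernel mappings made with pmap_qenter(). */
            pmap_qremove(kva, npages);
    }
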
Benno Rice
54eb8bbc14 Remove most of the usage of critical_enter/exit.
I put these in to match the use of spl*() in the NetBSD code I was basing this
on, but it appears to cause problems.

I'm doing this in a separate commit so as to be able to refer back if locking
becomes an issue at a later stage.
2002-02-28 02:45:10 +00:00
Mike Silbersack
7f3a40933b Fix a horribly suboptimal algorithm in the vm_daemon.
In order to determine what to page out, the vm_daemon checks
reference bits on all pages belonging to all processes.  Unfortunately,
the algorithm used reacted badly with shared pages; each shared page
would be checked once per process sharing it; this caused an O(N^2)
growth of tlb invalidations.  The algorithm has been changed so that
each page will be checked only 16 times.

Prior to this change, a fork/sleepbomb of 1300 processes could cause
the vm_daemon to take over 60 seconds to complete, effectively
freezing the system for that time period.  With this change
in place, the vm_daemon completes in less than a second.  Any system
with hundreds of processes sharing pages should benefit from this change.

Note that the vm_daemon is only run when the system is under extreme
memory pressure.  It is likely that many people with loaded systems saw
no symptoms of this problem until they reached the point where swapping
began.

Special thanks go to dillon, peter, and Chuck Cranor, who helped me
get up to speed with vm internals.

PR:		33542, 20393
Reviewed by:	dillon
MFC after:	1 week
2002-02-27 18:03:02 +00:00
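
A schematic sketch of the bounding idea only, not the committed diff: the per-pass counter and the way it is threaded through are assumptions, and the real change works through the existing pmap reference-bit interfaces.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <vm/pmap.h>

    #define VM_DAEMON_REF_CAP       16      /* the cap named in the log message */

    /*
     * Sample (and clear) the referenced bits of a page, but give up once the
     * page has already been sampled VM_DAEMON_REF_CAP times in this pass, so
     * a page shared by hundreds of processes costs O(1) TLB work per pass
     * instead of O(number of sharing processes).
     */
    static int
    sample_referenced_bounded(vm_page_t m, int *checks_this_pass)
    {
            if (++(*checks_this_pass) > VM_DAEMON_REF_CAP)
                    return (0);             /* sampled enough; skip the TLB work */
            return (pmap_ts_referenced(m)); /* test-and-clear reference bits */
    }
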
Benno Rice
f8e03c1093 Don't call critical_enter()/critical_exit() around calls to pmap_pvo_enter()
as it does its own handling of critical sections.
2002-02-23 05:55:51 +00:00
Julian Elischer
77c4066424 Add some DIAGNOSTIC code.
While in userland, keep the thread's ucred reference in a shadow
field so that the usual place to store it is NULL.
If DIAGNOSTIC is not set, the thread ucred is kept valid until the next
kernel entry, at which time it is checked against the process cred
and possibly corrected. Produces a BIG speedup in
kernels with INVARIANTS set. (A previous commit corrected it
for the non INVARIANTS case already)

Reviewed by:	dillon@freebsd.org
2002-02-22 23:58:22 +00:00
Julian Elischer
bc08443ef1 Add change to the PPC to keep it in step with i386 and MI code
Pointy hat this direction please...
2002-02-19 03:27:08 +00:00
Benno Rice
5244eac968 Complete rework of the PowerPC pmap and a number of other bits in the early
boot sequence.

The new pmap.c is based on NetBSD's newer pmap.c (for the mpc6xx processors)
which is 70% faster than the older code that the original pmap.c was based
on.  It has also been based on the framework established by jake's initial
sparc64 pmap.c.

There is no change to how far the kernel gets (it makes it to the mountroot
prompt in psim) but the new pmap code is a lot cleaner.

Obtained from:	NetBSD (pmap code)
2002-02-14 01:39:11 +00:00
Julian Elischer
079b7badea Pre-KSE/M3 commit.
This is a low-functionality change that makes the kernel access the main
thread of a process via the linked list of threads rather than
assuming that it is embedded in the process. It IS still embedded there,
but all the code that assumed so has been removed, in preparation for the
next commit, which will actually move it out.

Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, benno rice
2002-02-07 20:58:47 +00:00
Bruce Evans
55a9536b65 Compile osigreturn() unconditionally since it will always be needed on
some arches and the syscall table is machine-independent.  It was
(bogusly) conditional on COMPAT_43, so this usually makes no difference.

ia64: in addition:
- replace the bogus cloned comment before osigreturn() by a correct one.
  osigreturn() is just a stub on ia64.
- fix the formatting of cloned comment before sigreturn().
- fix the return code.  use nosys() instead of returning ENOSYS to get
  the same semantics as if the syscall is not in the syscall table.
  Generating SIGSYS is actually correct here.
- fix style bugs.

powerpc: copy the cleaned up ia64 stub.  This mainly fixes a bogus comment.

sparc64: copy the cleaned-up ia64 stub, since there was no stub before.
2002-02-01 15:44:03 +00:00
Andrew Gallatin
350cb38b1f Simple fixes to get the powerpc kernel compiling again.
Reviewed by:	mp
2002-01-28 14:07:36 +00:00
John Baldwin
0bbc882680 Overhaul the per-CPU support a bit:
- The MI portions of struct globaldata have been consolidated into a MI
  struct pcpu.  The MD per-CPU data are specified via a macro defined in
  machine/pcpu.h.  A macro was chosen over a struct mdpcpu so that the
  interface would be cleaner (PCPU_GET(my_md_field) vs.
  PCPU_GET(md.md_my_md_field)).
- All references to globaldata are changed to pcpu instead.  In a UP kernel,
  this data was stored as global variables which is where the original name
  came from.  In an SMP world this data is per-CPU and ideally private to each
  CPU outside of the context of debuggers.  This also included combining
  machine/globaldata.h and machine/globals.h into machine/pcpu.h.
- The pointer to the thread using the FPU on i386 was renamed from
  npxthread to fpcurthread to be identical with other architectures.
- Make the show pcpu ddb command MI with a MD callout to display MD
  fields.
- The globaldata_register() function was renamed to pcpu_init() and now
  init's MI fields of a struct pcpu in addition to registering it with
  the internal array and list.
- A pcpu_destroy() function was added to remove a struct pcpu from the
  internal array and list.

Tested on:	alpha, i386
Reviewed by:	peter, jake
2001-12-11 23:33:44 +00:00
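
A small usage sketch of the accessor style described above; the header path and the particular fields read here are assumptions about kernels of this era, not part of the commit.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/pcpu.h>           /* MI struct pcpu + PCPU_* (header name assumed) */

    static void
    pcpu_example(void)
    {
            struct thread *td;
            u_int cpu;

            cpu = PCPU_GET(cpuid);          /* which CPU are we running on? */
            td = PCPU_GET(curthread);       /* thread running on this CPU */
            (void)td; (void)cpu;
    }
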
Matthew Dillon
66a11b9fb1 Allow maxusers to be specified as 0 in the kernel config, which will
cause the system to auto-size to between 32 and 512 depending on the
amount of memory.

MFC after:	1 week
2001-12-09 01:57:09 +00:00
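
In kernel-config terms, enabling the auto-sizing described above is a one-line example (not part of the commit itself):

    # In the kernel configuration file:
    maxusers        0       # auto-size between 32 and 512 based on memory
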
Mark Peek
e398651c46 Don't enable FP in the kernel. It is not needed when -msoft-float is used.
Reminded by:	benno
2001-11-13 00:44:21 +00:00
Mark Peek
0308a57783 Clean up the trap handling code and make it consistent with the other platforms.
Submitted by:	jhb
2001-11-05 00:49:03 +00:00
Mark Peek
8f718dfdc3 Add enable_fpu/save_fpu for handling the floating point registers in the PCB.
Obtained from:	NetBSD
2001-11-05 00:45:33 +00:00
Dag-Erling Smørgrav
1f04261973 [partially forced commit due to pilot error in earlier commit attempt]
{set,fill}_{,fp,db}regs() fixup:

 - Add dummy {set,fill}_dbregs() on architectures that don't have them.

 - KSEfy the powerpc versions (struct proc -> struct thread).

 - Some architectures had the prototypes in md_var.h, some in reg.h, and
   some in both; for consistency, move them to reg.h on all platforms.

These functions aren't really MD (the implementation is MD, but the interface
is MI), so they should move to an MI header, but I haven't figured out which
one yet.

Run-tested on i386, build-tested on Alpha, untested on other platforms.
2001-10-21 22:16:48 +00:00
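
After the change, the consolidated declarations look roughly like the following sketch of the post-KSE, thread-based prototypes that live in each architecture's reg.h; the exact formatting is from memory, not the diff, and struct reg/fpreg/dbreg remain machine dependent.

    /* <machine/reg.h> (sketch) */
    int     fill_regs(struct thread *, struct reg *);
    int     set_regs(struct thread *, struct reg *);
    int     fill_fpregs(struct thread *, struct fpreg *);
    int     set_fpregs(struct thread *, struct fpreg *);
    int     fill_dbregs(struct thread *, struct dbreg *);
    int     set_dbregs(struct thread *, struct dbreg *);
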
Mark Peek
94e0b85e76 Fix includes based on recent changes to lock.h, mutex.h and ktr.h. 2001-10-19 22:45:46 +00:00
Benno Rice
f3ca814471 Flesh out cpu_fork() and cpu_set_fork_handler(). This is a work in progress. 2001-10-15 12:24:43 +00:00
Benno Rice
d163144b45 - Correct the type of the argument to delay() so as to not conflict with
sys/boot/common/bootstrap.h.
- Add a prototype for fork_trampoline().
2001-10-15 12:23:10 +00:00
Mark Peek
585cf148b6 Fix typo. 2001-10-15 01:04:49 +00:00
Mark Peek
422ec2ace1 Save WIP. Partial rewrite of cpu_switch() and savectx(). This makes it closer
to working but still needs some work to properly switch the full context
(such as saving the fpu registers, switch stacks, etc.).  Also, remove some
dead code that was mixed in.
2001-10-15 00:37:45 +00:00
Benno Rice
bdf71f568b Implement pmap_mapdev. 2001-10-14 08:38:16 +00:00
Mark Peek
f57f841372 Modify a virtual address check to allow use of the openfirmware callback
used by the PowerPC simulator (PSIM).
2001-10-12 19:55:04 +00:00
Mark Peek
351bd3334f Add a call to init_param() to initialize some necessary variables. 2001-10-08 00:44:21 +00:00
Matt Jacob
22883e3c68 Fix problem where a user buffer outside of the area being tested
will be corrupted.

PR:		29194
Obtained from:	Tor.Egge@fast.no
MFC after:	2 weeks
2001-10-02 18:34:20 +00:00
Mark Peek
6e5e7555b3 Catch up to recent removal of curpcb from globals.h. 2001-09-24 02:58:49 +00:00
Mark Peek
d699b539c2 Add missing include file. 2001-09-20 15:32:56 +00:00
Mark Peek
06fdffd84d Use BATL/BATU macros instead of hardcoded hex constants. 2001-09-20 00:48:30 +00:00
Mark Peek
5fd2c51edb Update PowerPC MD code to compile and do initial bootstrap based on
recent changes (KSE and VM requiring physmem to be setup).

Reviewed by:	benno, jhb, julian
2001-09-20 00:47:17 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Peter Wemm
eb30c1c0b9 Rip some well duplicated code out of cpu_wait() and cpu_exit() and move
it to the MI area.  KSE touched cpu_wait() which had the same change
replicated five ways for each platform.  Now it can just do it once.
The only MD parts seemed to be dealing with fpu state cleanup and things
like vm86 cleanup on x86.  The rest was identical.

XXX: ia64 and powerpc did not have cpu_throw(), so I've put a functional
stub in place.

Reviewed by:	jake, tmm, dillon
2001-09-10 04:28:58 +00:00
Peter Wemm
660c5377fd Missing part of dillon's coredump commit. cpu_coredump() was still
passing IO_NODELOCKED to vn_rdwr(), this would cause operations on the
unlocked core vnode and softupdates nastiness if an a.out binary cored.
2001-09-08 22:18:58 +00:00
Peter Wemm
cfbf880deb Zap #if 0'ed map init code that got moved to the MI area.
Convert the powerpc tree to use the common code.
2001-09-04 08:42:35 +00:00
Peter Wemm
b53f9c45f9 Nuke #if 0'ed "setredzone()" stub. We never used it, and probably
never will.  I've implemented an optional redzone as part of the KSE
upage breakup.
2001-09-04 08:36:46 +00:00
John Baldwin
8c8a7645c7 Axe stale mp_fixme(). 2001-09-01 00:49:29 +00:00
Peter Wemm
b8603f0e57 Similar to changes on i386/alpha/etc pmap.c; converge on a similar
look/feel on pmap_new_proc() with some cosmetic style changes.
2001-08-31 06:42:45 +00:00
Peter Wemm
e8ebc08f80 Make COMPAT_43 optional again. XXX we need COMPAT_FBSD3 etc for this
stuff.
2001-08-21 02:32:59 +00:00
John Baldwin
bf3db3eacb FreeBSD doesn't use a want_resched variable. Instead, the PS_NEEDRESCHED
p_sflag is managed in a MI fashion.
2001-08-15 19:39:09 +00:00
John Baldwin
688ebe120c - Close races with signals and other AST's being triggered while we are in
the process of exiting the kernel.  The ast() function now loops as long
  as the PS_ASTPENDING or PS_NEEDRESCHED flags are set.  It returns with
  preemption disabled so that any further AST's that arrive via an
  interrupt will be delayed until the low-level MD code returns to user
  mode.
- Use u_int's to store the tick counts for profiling purposes so that we
  do not need sched_lock just to read p_sticks.  This also closes a
  problem where the call to addupc_task() could screw up the arithmetic
  due to non-atomic reads of p_sticks.
- Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
  clear_resched(), and resched_wanted() in favor of direct bit operations
  on p_sflag.
- Fix up locking with sched_lock some.  In addupc_intr(), use sched_lock
  to ensure pr_addr and pr_ticks are updated atomically with setting
  PS_OWEUPC.  In ast() we clear pr_ticks atomically with clearing
  PS_OWEUPC.  We also do not grab the lock just to test a flag.
- Simplify the handling of Giant in ast() slightly.

Reviewed by:	bde (mostly)
2001-08-10 22:53:32 +00:00
Peter Wemm
2aca0c28d3 Zap 'ptrace(PT_READ_U, ...)' and 'ptrace(PT_WRITE_U, ...)' since they
are a really nasty interface that should have been killed long ago
when 'ptrace(PT_[SG]ETREGS' etc came along.  The entity that they
operate on (struct user) will not be around much longer since it
is part-per-process and part-per-thread in a post-KSE world.

gdb does not actually use this except for the obscure 'info udot'
command which does a hexdump of as much of the child's 'struct user'
as it can get.  It carries its own #defines so it doesn't break
compiles.
2001-08-08 05:25:15 +00:00
Peter Wemm
0b27d7104f Make PMAP_SHPGPERPROC tunable. One shouldn't need to recompile a kernel
for this, since it is easy to run into with large systems with lots of
shared mmap space.

Obtained from:	yahoo
2001-07-27 01:08:59 +00:00
Matthew Dillon
7197571105 Move vm_page_zero_idle() from machine-dependent sections to a
machine-independent source file, vm/vm_zeroidle.c.  It was exactly the
same for all platforms and updating them all was getting annoying.
2001-07-05 01:32:42 +00:00
Matthew Dillon
6d03d577a5 Reorg vm_page.c into vm_page.c, vm_pageq.c, and vm_contig.c (for contigmalloc).
Also removed some spl's and added some VM mutexes, but they are not actually
used yet, so this commit does not really make any operational changes
to the system.

vm_page.c relates to vm_page_t manipulation, including high level deactivation,
activation, etc...  vm_pageq.c relates to finding free pages and acquiring
exclusive access to a page queue (exclusivity part not yet implemented).
And the world still builds... :-)
2001-07-04 23:27:09 +00:00
Matthew Dillon
0cddd8f023 With Alfred's permission, remove vm_mtx in favor of a fine-grained approach
(this commit is just the first stage).  Also add various GIANT_ macros to
formalize the removal of Giant, making it easy to test in a more piecemeal
fashion. These macros will allow us to test fine-grained locks to a degree
before removing Giant, and also after, and to remove Giant in a piecemeal
fashion via sysctl's on those subsystems which the authors believe can
operate without Giant.
2001-07-04 16:20:28 +00:00
John Baldwin
d2a5bcc3d3 Allow Giant to be recursed when a process terminates. 2001-07-03 05:09:48 +00:00
John Baldwin
7aa7260e4a Move ast() and userret() to sys/kern/subr_trap.c now that they are MI. 2001-06-29 19:51:37 +00:00
Benno Rice
111c77dcef Fix comment breakage. 2001-06-27 12:20:48 +00:00
Benno Rice
5fa25b1b0e More verbose version of identifycpu() which also contains many more CPU
versions/revisions.

Modified from the original patch to mark G3 and G4 processors as such.

Submitted by:	Jeff Schottmiller <jeff@neoscale.com>
2001-06-19 13:27:33 +00:00
Benno Rice
d27f1d4c12 This commit (along with one pending in sys/dev/ofw and one in sys/conf) give
us our first minimal glimpse of PowerPC support.

With this code we can get to the "mountroot>" prompt on my Apple iMac.  We
can't get any further due to lack of clock and interrupt handling, among other
things.  This does however mean that pmap and VM are initialising.

We're fairly dependent on OpenFirmware at this point, but I hope to add
support for other classes of firmware at a later stage.

Reviewed by:	obrien, dfr
2001-06-16 07:14:07 +00:00
Benno Rice
f9bac91b18 Bring in NetBSD code used in the PowerPC port.
Reviewed by:	obrien, dfr
Obtained from:	NetBSD
2001-06-10 02:39:37 +00:00
John Baldwin
d0605c94ed GC #if 0'd calls to releasing and acquiring the old style giant kernel
lock.
2001-05-29 23:35:48 +00:00
Andrew Gallatin
2ee7c91e2e Catch these files up to their i386 neighbors to make alpha boot
prior to the vm_mtx.
2001-05-21 16:04:24 +00:00
John Baldwin
bf4c03d0e9 Initialize p_md.md_kernnest to 1 for newly fork'd processes since they
start off in the kernel.
2001-04-26 23:52:40 +00:00
Andrew Gallatin
64007795de remove bogus check -- for kernel threads we fork off of proc0, not curproc
This was causing panics when modules which create kthreads were loaded
after boot.

pointed out by: jake, jhb
2001-03-15 02:32:26 +00:00
John Baldwin
ff655691d8 Use the proc lock to protect p_pptr when waking up our parent in cpu_exit()
and remove the mpfixme() message that is now fixed.
2001-03-07 03:20:15 +00:00
John Baldwin
938f15c7c4 Rename switch_trampoline() to fork_trampoline() on the alpha and ia64.
Suggested by:	dfr
2001-02-22 16:56:53 +00:00
John Baldwin
5813dc03bd - Don't call clear_resched() in userret(), instead, clear the resched flag
in mi_switch() just before calling cpu_switch() so that the first switch
  after a resched request will satisfy the request.
- While I'm at it, move a few things into mi_switch() and out of
  cpu_switch(), specifically set the p_oncpu and p_lastcpu members of
  proc in mi_switch(), and handle the sched_lock state change across a
  context switch in mi_switch().
- Since cpu_switch() no longer handles the sched_lock state change, we
  have to setup an initial state for sched_lock in fork_exit() before we
  release it.
2001-02-20 05:26:15 +00:00
Bosko Milekic
9ed346bab0 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
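
A before/after usage sketch of the renamed interface; the mutex foo_mtx is made up, while sched_lock and the MTX_QUIET wrappers are taken from the message above.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx foo_mtx;              /* an MTX_DEF (sleep) mutex */

    static void
    mutex_example(void)
    {
            /* Old: mtx_enter(&foo_mtx, MTX_DEF);  New: */
            mtx_lock(&foo_mtx);
            /* ... touch the data foo_mtx protects ... */
            mtx_unlock(&foo_mtx);

            /* Spin locks now use their own entry points. */
            mtx_lock_spin(&sched_lock);
            /* ... */
            mtx_unlock_spin(&sched_lock);

            /* Flags go through the explicit wrappers. */
            mtx_lock_flags(&foo_mtx, MTX_QUIET);
            mtx_unlock_flags(&foo_mtx, MTX_QUIET);
    }
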
John Baldwin
a0346459f1 Update some comments: s0 in the pcb of a child returning from fork1() is
now passed in as a0 to fork_exit(), and s2 is passed in as a1.
2001-01-26 23:32:38 +00:00
John Baldwin
2a36ec35ae - Change fork_exit() to take a pointer to a trapframe as its 3rd argument
instead of a trapframe directly.  (Requested by bde.)
- Convert the alpha switch_trampoline to call fork_exit() and use the MI
  fork_return() instead of child_return().
- Axe child_return().
2001-01-24 21:59:25 +00:00
Jake Burkholder
98f03f9030 Protect proc.p_pptr and proc.p_children/p_sibling with the
proctree_lock.

linprocfs not locked pending response from informal maintainer.

Reviewed by:	jhb, -smp@
2000-12-23 19:43:10 +00:00
Andrew Gallatin
4c0b7a9327 acquire/release Giant in vm_page_zero_idle(), like on i386
Discussed with: jhb
2000-12-01 18:55:58 +00:00
Doug Rabson
35cd89e7e5 Convert various calls to splhigh() to disable_intr() since splhigh() is
now a no-op.
2000-11-19 12:28:42 +00:00
John Baldwin
7e4b7c97de Don't perform an mi_switch() when we release Giant during cpu_exit(). We
are about to call cpu_switch() anyways.

Found by:	witness
2000-11-15 19:44:38 +00:00
John Baldwin
8088699f79 - Overhaul the software interrupt code to use interrupt threads for each
type of software interrupt.  Roughly, what used to be a bit in spending
  now maps to a swi thread.  Each thread can have multiple handlers, just
  like a hardware interrupt thread.
- Instead of using a bitmask of pending interrupts, we schedule the specific
  software interrupt thread to run, so spending, NSWI, and the shandlers
  array are no longer needed.  We can now have an arbitrary number of
  software interrupt threads.  When you register a software interrupt
  thread via sinthand_add(), you get back a struct intrhand that you pass
  to sched_swi() when you wish to schedule your swi thread to run.
- Convert the name of 'struct intrec' to 'struct intrhand' as it is a bit
  more intuitive.  Also, prefix all the members of struct intrhand with
  'ih_'.
- Make swi_net() a MI function since there is now no point in it being
  MD.

Submitted by:	cp
2000-10-25 05:19:40 +00:00
John Baldwin
35e0e5b311 Catch up to moving headers:
- machine/ipl.h -> sys/ipl.h
- machine/mutex.h -> sys/mutex.h
2000-10-20 07:58:15 +00:00
Doug Rabson
c703143638 Clear pcb_schednest in cpu_fork() for the child process. This is
necessary since the child's call stack only includes one recursive
hold of sched_lock.
2000-10-03 08:03:03 +00:00
Jason Evans
0384fff8c5 Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*().  See mutex(9).  (Note: The
  alpha port is still in transition and currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
  preempted (i386 only).

Partially contributed by:	BSDi (BSD/OS)
Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
2000-09-07 01:33:02 +00:00
Andrew Gallatin
49c0f52e11 Support bounce buffers for ISA DMA on the alpha. This is required for the
irongate chipset (used in the UP1000) which does not support scatter/gather
DMA.  We'll still use scatter gather if the core logic chipset supports it.

Reviewed by: dfr
2000-06-19 18:41:27 +00:00
Alan Cox
6fba331424 cpu_fork(): Check "flags" before dereferencing "p2". Otherwise,
the call "vm_fork(p1, 0, flags);" early in fork1 can cause a kernel
panic.
2000-06-11 06:22:01 +00:00
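
Conceptually the guard sits at the top of cpu_fork(), as in this sketch; the body of the real function is elided.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/unistd.h>         /* RFPROC */

    void
    cpu_fork(struct proc *p1, struct proc *p2, int flags)
    {
            if ((flags & RFPROC) == 0)
                    return;         /* vm_fork(p1, 0, flags): p2 may be NULL */

            /* ... set up the child's PCB and kernel stack using p2 ... */
    }
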
Poul-Henning Kamp
9626b608de Separate the struct bio related stuff out of <sys/buf.h> into
<sys/bio.h>.

<sys/bio.h> is now a prerequisite for <sys/buf.h> but it shall
not be made a nested include, according to bde's teachings on the
subject of nested includes.

Diskdrivers and similar stuff below specfs::strategy() should no
longer need to include <sys/buf.h> unless they need caching of data.

Still a few bogus uses of struct buf to track down.

Repocopy by:    peter
2000-05-05 09:59:14 +00:00
Poul-Henning Kamp
21144e3bf1 Remove B_READ, B_WRITE and B_FREEBUF and replace them with a new
field in struct buf: b_iocmd.  The b_iocmd is enforced to have
exactly one bit set.

B_WRITE was bogusly defined as zero giving rise to obvious coding
mistakes.

Also eliminate the redundant struct buf flag B_CALL, it can just
as efficiently be done by comparing b_iodone to NULL.

Should you get a panic or drop into the debugger, complaining about
"b_iocmd", don't continue.  It is likely to write on your disk
where it should have been reading.

This change is a step in the direction towards a stackable BIO capability.

A lot of this patch was machine generated (thanks to style(9) compliance!)

Vinum users:  Greg has not had time to test this yet, be careful.
2000-03-20 10:44:49 +00:00
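
A driver-side sketch of the new convention, assuming the BIO_READ/BIO_WRITE command values of this period; the strategy routine and its name are illustrative.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/buf.h>

    static void
    xx_strategy(struct buf *bp)
    {
            /* Exactly one command bit is set, so test b_iocmd, not b_flags. */
            if (bp->b_iocmd == BIO_READ) {
                    /* fill bp->b_data from the device */
            } else {
                    /* BIO_WRITE: push bp->b_data out to the device */
            }
            biodone(bp);
    }
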
Andrew Gallatin
a6db6c48bf The kernel side of per-process unaligned access control (printing, fixing &
delivering SIGBUS).  This will allow a non-superuser to control unaligned
access behaviour on a per-process basis once a userland control program
(uac) is written.

Reviewed by: obrien
Tested by:   obrien
2000-01-16 07:07:33 +00:00
Peter Wemm
453c03ac1b Make this compile again. (missing #include for RFPROC) 1999-12-06 18:12:29 +00:00
Luoqi Chen
91c28bfde0 User ldt sharing. 1999-12-06 04:53:08 +00:00
Doug Rabson
62a50bfa38 Re-organise the code which manages the owner of the FP state (fpcurproc).
The old code was spread out through the machdep code and was sloppy about
enabling and disabling the FEN bit (which controls access to the FP
register set). This caused a DIAGNOSTIC warning "DANGER WILL ROBINSON:
FEN SET IN cpu_fork!" sometimes when operating under high loads and could
conceivably lead to processes getting incorrect FP results.

The new code is much more strict about the FEN bit and makes sure that
*only* fpcurproc ever has it enabled. This also allows us to remove a
section of code from the exception_return path which might improve
performance marginally.

Reviewed by: gallatin
1999-11-10 21:14:25 +00:00
Alan Cox
be72f78813 The core of this patch is to vm/vm_page.h. The effects are two-fold: (1) to
eliminate an extra (useless) level of indirection in half of the page
queue accesses and (2) to use a single name for each queue throughout,
instead of, e.g., "vm_page_queue_active" in some places and
"vm_page_queues[PQ_ACTIVE]" in others.

Reviewed by:	dillon
1999-10-30 07:37:14 +00:00
Poul-Henning Kamp
923502ff91 useracc() the prequel:
Merge the contents (less some trivial bordering the silly comments)
of <vm/vm_prot.h> and <vm/vm_inherit.h> into <vm/vm.h>.  This puts
the #defines for the vm_inherit_t and vm_prot_t types next to their
typedefs.

This paves the road for the commit to follow shortly: change
useracc() to use VM_PROT_{READ|WRITE} rather than B_{READ|WRITE}
as argument.
1999-10-29 18:09:36 +00:00
Matthew Dillon
4cc712004c Fix a bug in the pipe code relating to writes of mmap'd but illegal address
spaces which cross a segment boundary in the page table.  pmap_kextract()
is not designed for access to the user space portion of the page
table and cannot handle the null-page-directory-entry case.

The fix is to have vm_fault_quick() return success or failure, which
is then used to avoid calling pmap_kextract().
1999-09-20 19:08:48 +00:00
Peter Wemm
c3aac50f28 $Id$ -> $FreeBSD$ 1999-08-28 01:08:13 +00:00
Andrew Gallatin
34067f9475 Fix the child's return path from fork so that fork will return 0
in the child.  This corrects a problem where linux/alpha binaries see
the child's return value of fork as the parent's pid.  This happens because
linux/alpha binaries apparently check the return value directly, rather
than looking for a non-zero value in a4, as *BSD & OSF/1 do.

Reviewed by:	dfr@nlsystems.com
1999-08-27 14:47:23 +00:00
John Polstra
0226f1b80d Sync with alc's revision 1.125 of i386/i386/vm_machdep.c. This
fixes the kernel build breakage.
1999-08-05 23:38:13 +00:00
Alan Cox
3b21348301 Reduce the number of "magic constants" used for page coloring
by one: PQ_PRIME2 and PQ_PRIME3 are used to accomplish the same
thing at different places in the kernel.  Drop PQ_PRIME3.
1999-07-22 06:04:17 +00:00
Peter Wemm
9c8b8baa38 Slight reorganization of kernel thread/process creation. Instead of using
SYSINIT_KT() etc (which is a static, compile-time procedure), use a
NetBSD-style kthread_create() interface.  kproc_start is still available
as a SYSINIT() hook.  This allowed simplification of chunks of the
sysinit code in the process.  This kthread_create() is our old kproc_start
internals, with the SYSINIT_KT fork hooks grafted in and tweaked to work
the same as the NetBSD one.

One thing I'd like to do shortly is get rid of nfsiod as a user initiated
process.  It makes sense for the nfs client code to create them on the
fly as needed up to a user settable limit.  This means that nfsiod
doesn't need to be in /sbin and is always "available".  This is a fair bit
easier to do outside of the SYSINIT_KT() framework.
1999-07-01 13:21:46 +00:00
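
A minimal usage sketch of the NetBSD-style interface described above; the daemon name, the header path, the SYSINIT ordering, and the exact kthread_create() parameter list are assumptions (the interface grew a flags argument in later revisions).

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/kthread.h>        /* kthread_create() (header name assumed) */
    #include <sys/proc.h>

    static struct proc *exampled_proc;      /* hypothetical kernel daemon */

    static void
    exampled_main(void *arg)
    {
            for (;;) {
                    /* ... do periodic work ... */
                    tsleep(&exampled_proc, PWAIT, "exampled", hz);
            }
    }

    static void
    exampled_start(void *dummy)
    {
            /* Create the kernel thread at boot; errors ignored in this sketch. */
            kthread_create(exampled_main, NULL, &exampled_proc, "exampled");
    }
    SYSINIT(exampled, SI_SUB_KTHREAD_IDLE, SI_ORDER_ANY, exampled_start, NULL)
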
Dmitrij Tejblum
23405ee478 Replace my previous fix of saving the FP state with a much simpler one: when
we swap out fpcurproc, save its FP state.

Suggested by:	bde
1999-06-10 20:40:59 +00:00
Dmitrij Tejblum
be960acd20 Keep fpcurproc locked in memory, so that we always can save the FP state
correctly.

This should fix the "pmap_changebit didn't" panic that some people see.

Reviewed by:	dfr
1999-06-08 16:42:19 +00:00
Dmitrij Tejblum
e415107779 Fixed several (not all) warnings. 1999-04-23 19:53:38 +00:00
Dmitrij Tejblum
f998a53420 Added consts to cpu_set_fork_handler prototype. (Follow i386 version.) 1999-04-20 22:53:54 +00:00