/*-
 * Copyright (C) 1994, David Greenman
 * Copyright (c) 1990, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * the University of Utah, and William Jolitz.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	from: @(#)trap.c	7.4 (Berkeley) 5/13/91
 * $FreeBSD$
 */

/*
 * 386 Trap and System call handling
 */

#include "opt_clock.h"
#include "opt_cpu.h"
#include "opt_ddb.h"
#include "opt_isa.h"
#include "opt_ktrace.h"
#include "opt_npx.h"
#include "opt_trap.h"

#include <sys/param.h>
#include <sys/bus.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/pioctl.h>
#include <sys/ipl.h>
#include <sys/kernel.h>
#include <sys/ktr.h>
#include <sys/mutex.h>
#include <sys/resourcevar.h>
#include <sys/signalvar.h>
#include <sys/syscall.h>
#include <sys/sysctl.h>
#include <sys/sysent.h>
#include <sys/uio.h>
#include <sys/vmmeter.h>
#ifdef KTRACE
#include <sys/ktrace.h>
#endif

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <sys/lock.h>
#include <vm/pmap.h>
#include <vm/vm_kern.h>
#include <vm/vm_map.h>
#include <vm/vm_page.h>
#include <vm/vm_extern.h>

#include <machine/cpu.h>
#include <machine/md_var.h>
#include <machine/pcb.h>
#ifdef SMP
#include <machine/smp.h>
#endif
#include <machine/tss.h>

#include <i386/isa/icu.h>
#include <i386/isa/intr_machdep.h>

#ifdef POWERFAIL_NMI
#include <sys/syslog.h>
#include <machine/clock.h>
#endif

#include <machine/vm86.h>

#include <ddb/ddb.h>
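
/*
 * Hook for an optional floating point emulator: filled in by the math
 * emulator, if one is configured, and called on T_DNA traps when no
 * hardware FPU can service the instruction.
 */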
int (*pmath_emulate) __P((struct trapframe *));
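
/*
 * Entry points reached from the assembly stubs in exception.s, plus
 * the local helpers for page faults and fatal traps.
 */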
extern void trap __P((struct trapframe frame));
extern int trapwrite __P((unsigned addr));
extern void syscall __P((struct trapframe frame));
extern void ast __P((struct trapframe frame));

static int trap_pfault __P((struct trapframe *, int, vm_offset_t));
static void trap_fatal __P((struct trapframe *, vm_offset_t));
void dblfault_handler __P((void));

extern inthand_t IDTVEC(syscall);
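
/*
 * Printable names for the hardware trap types, indexed by T_* trap
 * number; empty strings mark numbers that are never reported.
 * MAX_TRAP_MSG is the highest valid index into trap_msg[].
 */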
#define MAX_TRAP_MSG 28
static char *trap_msg[] = {
	"",					/*  0 unused */
	"privileged instruction fault",		/*  1 T_PRIVINFLT */
	"",					/*  2 unused */
	"breakpoint instruction fault",		/*  3 T_BPTFLT */
	"",					/*  4 unused */
	"",					/*  5 unused */
	"arithmetic trap",			/*  6 T_ARITHTRAP */
	"",					/*  7 unused */
	"",					/*  8 unused */
	"general protection fault",		/*  9 T_PROTFLT */
	"trace trap",				/* 10 T_TRCTRAP */
	"",					/* 11 unused */
	"page fault",				/* 12 T_PAGEFLT */
	"",					/* 13 unused */
	"alignment fault",			/* 14 T_ALIGNFLT */
	"",					/* 15 unused */
	"",					/* 16 unused */
	"",					/* 17 unused */
	"integer divide fault",			/* 18 T_DIVIDE */
	"non-maskable interrupt trap",		/* 19 T_NMI */
	"overflow trap",			/* 20 T_OFLOW */
	"FPU bounds check fault",		/* 21 T_BOUND */
	"FPU device not available",		/* 22 T_DNA */
	"double fault",				/* 23 T_DOUBLEFLT */
	"FPU operand fetch fault",		/* 24 T_FPOPFLT */
	"invalid TSS fault",			/* 25 T_TSSFLT */
	"segment not present fault",		/* 26 T_SEGNPFLT */
	"stack fault",				/* 27 T_STKFLT */
	"machine check trap",			/* 28 T_MCHK */
};
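
/*
 * Set during CPU identification if the processor is an original
 * Pentium affected by the "F00F" lockup erratum; the page fault
 * handling uses it to work around the bug.
 */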
#if defined(I586_CPU) && !defined(NO_F00F_HACK)
extern int has_f00f_bug;
#endif

#ifdef DDB
static int ddb_on_nmi = 1;
SYSCTL_INT(_machdep, OID_AUTO, ddb_on_nmi, CTLFLAG_RW,
	&ddb_on_nmi, 0, "Go to DDB on NMI");
#endif
static int panic_on_nmi = 1;
SYSCTL_INT(_machdep, OID_AUTO, panic_on_nmi, CTLFLAG_RW,
	&panic_on_nmi, 0, "Panic on NMI");

#ifdef WITNESS
extern char *syscallnames[];
#endif
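
/*
 * Return to user mode after a trap, interrupt or system call: deliver
 * any pending signals, restore the user priority and, if a reschedule
 * was requested while we were in the kernel, switch to another process
 * before resuming this one.
 */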
|
|
|
|
|
2001-01-24 09:53:49 +00:00
|
|
|
void
|
|
|
|
userret(p, frame, oticks)
|
1994-06-06 14:54:41 +00:00
|
|
|
struct proc *p;
|
|
|
|
struct trapframe *frame;
|
|
|
|
u_quad_t oticks;
|
|
|
|
{
|
2001-01-24 09:53:49 +00:00
|
|
|
int sig;
|
1994-06-06 14:54:41 +00:00
|
|
|
|
2000-03-28 07:16:37 +00:00
|
|
|
while ((sig = CURSIG(p)) != 0) {
|
2001-01-24 09:53:49 +00:00
|
|
|
if (!mtx_owned(&Giant))
|
Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
similarily, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
1994-06-06 14:54:41 +00:00
|
|
|
postsig(sig);
|
1997-08-09 10:13:32 +00:00
|
|
|
}
|
2000-03-28 07:16:37 +00:00
|
|
|
|
Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
similarily, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
2001-02-12 00:20:08 +00:00
|
|
|
p->p_pri.pri_level = p->p_pri.pri_user;
|
2000-03-28 07:16:37 +00:00
|
|
|
if (resched_wanted()) {
|
1994-06-06 14:54:41 +00:00
|
|
|
/*
|
|
|
|
* Since we are curproc, clock will normally just change
|
|
|
|
* our priority without moving us from one queue to another
|
|
|
|
* (since the running process is not on a queue.)
|
|
|
|
* If that happened after we setrunqueue ourselves but before we
|
|
|
|
* mi_switch()'ed, we might not be on the queue indicated by
|
|
|
|
* our priority.
|
|
|
|
*/
|
2000-11-17 18:09:18 +00:00
|
|
|
DROP_GIANT_NOSWITCH();
|
1994-06-06 14:54:41 +00:00
|
|
|
setrunqueue(p);
|
|
|
|
p->p_stats->p_ru.ru_nivcsw++;
|
|
|
|
mi_switch();
|
2001-02-09 06:11:45 +00:00
		mtx_unlock_spin(&sched_lock);
2000-11-16 02:16:44 +00:00
		PICKUP_GIANT();
2000-09-07 01:33:02 +00:00
		while ((sig = CURSIG(p)) != 0) {
2001-01-24 09:53:49 +00:00
			if (!mtx_owned(&Giant))
2001-02-09 06:11:45 +00:00
				mtx_lock(&Giant);
1994-06-06 14:54:41 +00:00
			postsig(sig);
2000-09-07 01:33:02 +00:00
		}
2001-02-09 06:11:45 +00:00
		mtx_lock_spin(&sched_lock);
1994-06-06 14:54:41 +00:00
	}
2001-01-24 09:53:49 +00:00

1995-02-10 06:25:14 +00:00
	/*
	 * Charge system time if profiling.
	 */
2001-01-24 09:53:49 +00:00
	if (p->p_sflag & PS_PROFIL) {
2001-02-09 06:11:45 +00:00
		mtx_unlock_spin(&sched_lock);
2001-01-24 09:53:49 +00:00
		/* XXX - do we need Giant? */
		if (!mtx_owned(&Giant))
2001-02-09 06:11:45 +00:00
			mtx_lock(&Giant);
2001-02-10 02:20:34 +00:00
		addupc_task(p, TRAPF_PC(frame),
1996-06-25 20:02:16 +00:00
		    (u_int)(p->p_sticks - oticks) * psratio);
2001-02-22 16:23:12 +00:00
	} else
		mtx_unlock_spin(&sched_lock);
1994-06-06 14:54:41 +00:00
}
1993-06-12 14:58:17 +00:00

/*
1995-10-09 04:36:01 +00:00
 * Exception, fault, and trap interface to the FreeBSD kernel.
1994-06-06 14:54:41 +00:00
 * This common code is called from assembly language IDT gate entry
1993-06-12 14:58:17 +00:00
 * routines that prepare a suitable stack frame, and restore this
1994-06-06 14:54:41 +00:00
 * frame after the exception has been processed.
1993-06-12 14:58:17 +00:00
 */

1993-11-25 01:38:01 +00:00
void
1993-06-12 14:58:17 +00:00
trap(frame)
	struct trapframe frame;
{
1994-06-06 14:54:41 +00:00
	struct proc *p = curproc;
1994-05-25 09:21:21 +00:00
	u_quad_t sticks = 0;
1994-10-08 22:19:51 +00:00
	int i = 0, ucode = 0, type, code;
1998-12-02 08:15:17 +00:00
	vm_offset_t eva;
2000-09-07 01:33:02 +00:00
#ifdef POWERFAIL_NMI
	static int lastalert = 0;
#endif
1998-12-02 08:15:17 +00:00

2000-09-07 01:33:02 +00:00
	atomic_add_int(&cnt.v_trap, 1);

	if ((frame.tf_eflags & PSL_I) == 0) {
1998-12-02 08:15:17 +00:00
		/*
2000-09-07 01:33:02 +00:00
		 * Buggy application or kernel code has disabled
		 * interrupts and then trapped. Enabling interrupts
		 * now is wrong, but it is better than running with
		 * interrupts disabled until they are accidentally
2001-01-24 09:53:49 +00:00
		 * enabled later. XXX This is really bad if we trap
		 * while holding a spin lock.
1998-12-02 08:15:17 +00:00
		 */
		type = frame.tf_trapno;
		if (ISPL(frame.tf_cs) == SEL_UPL || (frame.tf_eflags & PSL_VM))
			printf(
			    "pid %ld (%s): trap %d with interrupts disabled\n",
			    (long)curproc->p_pid, curproc->p_comm, type);
2001-02-08 00:10:07 +00:00
		else if (type != T_BPTFLT && type != T_TRCTRAP) {
1998-12-02 08:15:17 +00:00
			/*
			 * XXX not quite right, since this may be for a
			 * multiple fault in user mode.
			 */
			printf("kernel trap %d with interrupts disabled\n",
			    type);
2001-02-08 00:10:07 +00:00
			/*
			 * We should walk p_heldmtx here and see if any are
			 * spin mutexes, and not do this if so.
			 */
			enable_intr();
		}
1998-12-02 08:15:17 +00:00
	}

	eva = 0;
1993-06-12 14:58:17 +00:00

1997-12-04 14:35:40 +00:00
#if defined(I586_CPU) && !defined(NO_F00F_HACK)
1997-12-03 02:45:50 +00:00
restart:
#endif
2000-09-07 01:33:02 +00:00

1993-06-12 14:58:17 +00:00
	type = frame.tf_trapno;
1994-06-06 14:54:41 +00:00
	code = frame.tf_err;
1995-05-30 08:16:23 +00:00

2000-09-07 01:33:02 +00:00
	if ((ISPL(frame.tf_cs) == SEL_UPL) ||
	    ((frame.tf_eflags & PSL_VM) && !in_vm86call)) {
1994-06-06 14:54:41 +00:00
		/* user trap */
1994-01-14 16:25:31 +00:00
2001-02-09 06:11:45 +00:00
		mtx_lock_spin(&sched_lock);
1994-06-06 14:54:41 +00:00
		sticks = p->p_sticks;
2001-02-09 06:11:45 +00:00
		mtx_unlock_spin(&sched_lock);
1997-05-07 20:08:53 +00:00
		p->p_md.md_regs = &frame;
1994-01-14 16:25:31 +00:00

1994-06-06 14:54:41 +00:00
		switch (type) {
		case T_PRIVINFLT:	/* privileged instruction fault */
			ucode = type;
			i = SIGILL;
			break;

		case T_BPTFLT:		/* bpt instruction fault */
		case T_TRCTRAP:		/* trace trap */
			frame.tf_eflags &= ~PSL_T;
			i = SIGTRAP;
			break;

		case T_ARITHTRAP:	/* arithmetic trap */
			ucode = code;
			i = SIGFPE;
			break;

1997-08-09 00:04:06 +00:00
		/*
		 * The following two traps can happen in
		 * vm86 mode, and, if so, we want to handle
		 * them specially.
		 */
1994-06-06 14:54:41 +00:00
		case T_PROTFLT:		/* general protection fault */
		case T_STKFLT:		/* stack fault */
1997-08-28 14:36:56 +00:00
			if (frame.tf_eflags & PSL_VM) {
2001-02-09 06:11:45 +00:00
				mtx_lock(&Giant);
1997-08-28 14:36:56 +00:00
				i = vm86_emulate((struct vm86frame *)&frame);
2001-02-09 06:11:45 +00:00
				mtx_unlock(&Giant);
1997-08-09 00:04:06 +00:00
				if (i == 0)
2000-09-07 01:33:02 +00:00
					goto user;
1997-08-09 00:04:06 +00:00
				break;
			}
			/* FALL THROUGH */

		case T_SEGNPFLT:	/* segment not present fault */
Fix security holes in sigreturn(), ptrace() and procfs. sigreturn()
attempted to check for insecure and fatal eflags and segment
selectors, but missed many cases and got the IOPL check back to
front. The other syscalls didn't check at all.
sys_process.c, machdep.c:
Only allow PT_WRITE_U to write to the registers (ordinary and FP).
psl.h, locore.s, machdep.c:
Eliminate PSL_MBZ, PSL_MBO and PSL_USERCLR. We are not supposed
to assume anything about the reserved bits. Use PSL_USERCHANGE
and PSL_KERNEL instead. Rename PSL_USERSET to PSL_USER.
exception.s:
Define a private label for use by doreti when returning to user
mode fails.
machdep.c:
In syscalls, allow changing only the eflags that can be changed on
486's in user mode (no longer attempt to allow benign IOPL changes;
allow changing the nasty PSL_NT; don't allow changing the i586
bits).
Don't attempt to check all the cases involving invalid selectors
and %eip's. Just check for privilege violations and let the invalid
things cause a trap.
procfs_machdep.c:
Call the ptrace register functions to do all the work for reading
and writing ordinary registers and for single stepping.
trap.c:
Ignore traps caused by PSL_NT being set. Previously, users could
cause a fatal trap in user mode by setting PSL_NT and executing an
iret, and a fatal trap in kernel mode by setting PSL_NT and making
a syscall. PSL_NT was cleared too late and not in enough modes to
fix the problem.
Make all traps in user mode (except T_NMI) nonfatal.
Recover from traps caused by attempting to load invalid user
registers in doreti by restarting the traps so that they appear to
occur in user mode.
---
Fix bogons that I noticed while fixing the above:
psl.h:
Fix some comments.
Uniformize idempotency ifdef.
exception.s, machdep.c:
Remove rsvd[0-14]. rsvd0 hasn't been reserved since the 486 came
out. Replace rsvd0 by `align'. rsvd[0-11] used wrong (magic
non-unique) trap numbers. Replace rsvd[1-14] by rsvd.
locore.s:
Enable alignment check flag on 486's and 586's.
machdep.c:
Use a better type for kstack[].
Use TFREGP() to find the registers.
Reformat ptrace functions from SEF to something closer to KNF.
procfs_machdep.c:
The wrong pointer to the registers got fixed as a side effect.
Implement reading and writing of FP registers.
/proc/*/*regs now work (only) for processes that are in memory.
Clean up comments.
trap.c, trap.h:
Remove unused trap types.
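A minimal sketch of the eflags policy described above (illustrative
only; user_eflags is a hypothetical name for the user-supplied value,
while PSL_USERCHANGE is the mask cited in the message): on return to
user mode, accept only the user-changeable bits and keep everything
else from the current, known-good frame.

	/* Sketch: sanitize user-supplied eflags against PSL_USERCHANGE. */
	frame.tf_eflags = (user_eflags & PSL_USERCHANGE) |
	    (frame.tf_eflags & ~PSL_USERCHANGE);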
1995-01-14 13:20:26 +00:00
		case T_TSSFLT:		/* invalid TSS fault */
		case T_DOUBLEFLT:	/* double fault */
		default:
1994-06-06 14:54:41 +00:00
			ucode = code + BUS_SEGM_FAULT;
			i = SIGBUS;
			break;
1993-06-12 14:58:17 +00:00

1994-06-06 14:54:41 +00:00
		case T_PAGEFLT:		/* page fault */
2001-01-26 04:16:16 +00:00
			/*
			 * For some Cyrix CPUs, %cr2 is clobbered by
			 * interrupts. This problem is worked around by using
			 * an interrupt gate for the pagefault handler. We
			 * are finally ready to read %cr2 and then must
			 * reenable interrupts.
			 */
			eva = rcr2();
			enable_intr();
2001-02-09 06:11:45 +00:00
			mtx_lock(&Giant);
1998-12-02 08:15:17 +00:00
			i = trap_pfault(&frame, TRUE, eva);
2001-02-09 06:11:45 +00:00
			mtx_unlock(&Giant);
1997-12-04 14:35:40 +00:00
#if defined(I586_CPU) && !defined(NO_F00F_HACK)
2000-09-07 01:33:02 +00:00
			if (i == -2) {
				/*
				 * f00f hack workaround has triggered, treat
				 * as illegal instruction not page fault.
				 */
				frame.tf_trapno = T_PRIVINFLT;
1997-12-03 02:45:50 +00:00
				goto restart;
2000-09-07 01:33:02 +00:00
			}
1997-12-03 02:45:50 +00:00
#endif
2000-09-07 01:33:02 +00:00
			if (i == -1)
1994-06-06 14:54:41 +00:00
				goto out;
2000-09-07 01:33:02 +00:00
			if (i == 0)
				goto user;
1993-06-12 14:58:17 +00:00

1994-06-06 14:54:41 +00:00
			ucode = T_PAGEFLT;
			break;
1993-06-12 14:58:17 +00:00

1994-06-06 14:54:41 +00:00
		case T_DIVIDE:		/* integer divide fault */
1999-07-25 13:16:09 +00:00
			ucode = FPE_INTDIV;
1994-06-06 14:54:41 +00:00
			i = SIGFPE;
			break;
1993-06-12 14:58:17 +00:00

2001-01-29 09:38:39 +00:00
#ifdef DEV_ISA
1994-06-06 14:54:41 +00:00
		case T_NMI:
1995-07-16 10:31:26 +00:00
#ifdef POWERFAIL_NMI
2000-09-07 01:33:02 +00:00
#ifndef TIMER_FREQ
# define TIMER_FREQ 1193182
#endif
2001-02-09 06:11:45 +00:00
			mtx_lock(&Giant);
2000-09-07 01:33:02 +00:00
			if (time_second - lastalert > 10) {
				log(LOG_WARNING, "NMI: power fail\n");
				sysbeep(TIMER_FREQ/880, hz);
				lastalert = time_second;
			}
2001-02-09 06:11:45 +00:00
			mtx_unlock(&Giant);
2000-09-07 01:33:02 +00:00
			goto out;
1995-07-16 10:31:26 +00:00
#else /* !POWERFAIL_NMI */
2000-07-14 11:49:44 +00:00
			/* machine/parity/power fail/"kitchen sink" faults */
2001-01-26 04:16:16 +00:00
			/* XXX Giant */
2000-07-14 11:49:44 +00:00
			if (isa_nmi(code) == 0) {
1994-08-27 16:14:39 +00:00
#ifdef DDB
2000-08-06 14:17:21 +00:00
				/*
				 * NMI can be hooked up to a pushbutton
				 * for debugging.
				 */
				if (ddb_on_nmi) {
					printf ("NMI ... going to debugger\n");
					kdb_trap (type, 0, &frame);
				}
1995-07-16 10:31:26 +00:00
#endif /* DDB */
2000-09-07 01:33:02 +00:00
				goto out;
2000-08-06 14:17:21 +00:00
			} else if (panic_on_nmi)
				panic("NMI indicates hardware failure");
			break;
1995-07-16 10:31:26 +00:00
#endif /* POWERFAIL_NMI */
2001-01-29 09:38:39 +00:00
#endif /* DEV_ISA */
1993-06-12 14:58:17 +00:00

1994-06-06 14:54:41 +00:00
		case T_OFLOW:		/* integer overflow fault */
1999-07-25 13:16:09 +00:00
			ucode = FPE_INTOVF;
1994-06-06 14:54:41 +00:00
			i = SIGFPE;
			break;
1993-06-12 14:58:17 +00:00

1994-06-06 14:54:41 +00:00
		case T_BOUND:		/* bounds check fault */
1999-07-25 13:16:09 +00:00
			ucode = FPE_FLTSUB;
1994-06-06 14:54:41 +00:00
			i = SIGFPE;
			break;
1993-06-12 14:58:17 +00:00

1994-06-06 14:54:41 +00:00
		case T_DNA:
2001-01-19 13:19:02 +00:00
#ifdef DEV_NPX
2000-09-07 01:33:02 +00:00
			/* transparent fault (due to context switch "late") */
1994-06-06 14:54:41 +00:00
			if (npxdna())
2000-09-07 01:33:02 +00:00
				goto out;
1996-07-12 06:03:14 +00:00
#endif
1995-12-14 08:21:33 +00:00
			if (!pmath_emulate) {
				i = SIGFPE;
				ucode = FPE_FPU_NP_TRAP;
				break;
			}
2001-02-09 06:11:45 +00:00
			mtx_lock(&Giant);
1995-12-14 08:21:33 +00:00
			i = (*pmath_emulate)(&frame);
2001-02-09 06:11:45 +00:00
			mtx_unlock(&Giant);
1994-12-24 07:22:58 +00:00
			if (i == 0) {
				if (!(frame.tf_eflags & PSL_T))
2000-09-07 01:33:02 +00:00
					goto out;
1994-12-24 07:22:58 +00:00
				frame.tf_eflags &= ~PSL_T;
				i = SIGTRAP;
			}
			/* else ucode = emulator_only_knows() XXX */
1994-06-06 14:54:41 +00:00
			break;
1994-02-08 09:26:04 +00:00

1994-06-06 14:54:41 +00:00
		case T_FPOPFLT:		/* FPU operand fetch fault */
			ucode = T_FPOPFLT;
			i = SIGILL;
			break;
1994-01-14 16:25:31 +00:00
		}
1994-06-06 14:54:41 +00:00
	} else {
		/* kernel trap */
1994-01-14 16:25:31 +00:00

1994-06-06 14:54:41 +00:00
		switch (type) {
		case T_PAGEFLT:		/* page fault */
2001-01-26 04:16:16 +00:00
			/*
			 * For some Cyrix CPUs, %cr2 is clobbered by
			 * interrupts. This problem is worked around by using
			 * an interrupt gate for the pagefault handler. We
			 * are finally ready to read %cr2 and then must
			 * reenable interrupts.
			 */
			eva = rcr2();
			enable_intr();
2001-02-09 06:11:45 +00:00
			mtx_lock(&Giant);
1998-12-02 08:15:17 +00:00
			(void) trap_pfault(&frame, FALSE, eva);
2001-02-09 06:11:45 +00:00
			mtx_unlock(&Giant);
2000-09-07 01:33:02 +00:00
			goto out;
1994-03-24 23:12:48 +00:00

1996-06-13 07:17:21 +00:00
		case T_DNA:
2001-01-19 13:19:02 +00:00
#ifdef DEV_NPX
1996-07-12 06:03:14 +00:00
			/*
			 * The kernel is apparently using npx for copying.
			 * XXX this should be fatal unless the kernel has
			 * registered such use.
			 */
1996-06-13 07:17:21 +00:00
			if (npxdna())
2000-09-07 01:33:02 +00:00
				goto out;
1996-07-12 06:03:14 +00:00
#endif
1996-06-13 07:17:21 +00:00
			break;
1995-01-14 13:20:26 +00:00
		/*
2000-09-07 01:33:02 +00:00
		 * The following two traps can happen in
		 * vm86 mode, and, if so, we want to handle
		 * them specially.
1995-01-14 13:20:26 +00:00
		 */
2000-09-07 01:33:02 +00:00
		case T_PROTFLT:		/* general protection fault */
		case T_STKFLT:		/* stack fault */
			if (frame.tf_eflags & PSL_VM) {
2001-02-09 06:11:45 +00:00
				mtx_lock(&Giant);
2000-09-07 01:33:02 +00:00
				i = vm86_emulate((struct vm86frame *)&frame);
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
2000-09-07 01:33:02 +00:00
|
|
|
if (i != 0)
|
|
|
|
/*
|
|
|
|
* returns to original process
|
|
|
|
*/
|
|
|
|
vm86_trap((struct vm86frame *)&frame);
|
|
|
|
goto out;
|
|
|
|
}
|
2000-10-06 01:50:43 +00:00
|
|
|
if (type == T_STKFLT)
|
|
|
|
break;
|
|
|
|
|
2000-09-07 01:33:02 +00:00
|
|
|
/* FALL THROUGH */
|
|
|
|
|
|
|
|
case T_SEGNPFLT: /* segment not present fault */
|
|
|
|
if (in_vm86call)
|
|
|
|
break;
|
|
|
|
|
2001-01-21 19:25:07 +00:00
|
|
|
if (p->p_intr_nesting_level != 0)
|
2000-09-07 01:33:02 +00:00
|
|
|
break;
|
|
|
|
|
2000-10-06 01:55:07 +00:00
|
|
|
/*
|
|
|
|
* Invalid %fs's and %gs's can be created using
|
|
|
|
* procfs or PT_SETREGS or by invalidating the
|
|
|
|
* underlying LDT entry. This causes a fault
|
|
|
|
* in kernel mode when the kernel attempts to
|
|
|
|
* switch contexts. Lose the bad context
|
|
|
|
* (XXX) so that we can continue, and generate
|
|
|
|
* a signal.
|
|
|
|
*/
|
|
|
|
if (frame.tf_eip == (int)cpu_switch_load_gs) {
|
2001-01-10 04:43:51 +00:00
|
|
|
PCPU_GET(curpcb)->pcb_gs = 0;
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
2000-10-06 01:55:07 +00:00
|
|
|
psignal(p, SIGBUS);
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Invalid segment selectors and out of bounds
|
|
|
|
* %eip's and %esp's can be set up in user mode.
|
|
|
|
* This causes a fault in kernel mode when the
|
|
|
|
* kernel tries to return to user mode. We want
|
|
|
|
* to get this fault so that we can fix the
|
|
|
|
* problem here and not have to check all the
|
|
|
|
* selectors and pointers when the user changes
|
|
|
|
* them.
|
|
|
|
*/
|
|
|
|
if (frame.tf_eip == (int)doreti_iret) {
|
|
|
|
frame.tf_eip = (int)doreti_iret_fault;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
if (frame.tf_eip == (int)doreti_popl_ds) {
|
|
|
|
frame.tf_eip = (int)doreti_popl_ds_fault;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
if (frame.tf_eip == (int)doreti_popl_es) {
|
|
|
|
frame.tf_eip = (int)doreti_popl_es_fault;
|
|
|
|
goto out;
|
2001-01-10 04:43:51 +00:00
|
|
|
}
|
2000-09-07 01:33:02 +00:00
|
|
|
if (frame.tf_eip == (int)doreti_popl_fs) {
|
|
|
|
frame.tf_eip = (int)doreti_popl_fs_fault;
|
|
|
|
goto out;
|
|
|
|
}
|
2001-01-10 04:43:51 +00:00
|
|
|
if (PCPU_GET(curpcb) != NULL &&
|
|
|
|
PCPU_GET(curpcb)->pcb_onfault != NULL) {
|
|
|
|
frame.tf_eip =
|
|
|
|
(int)PCPU_GET(curpcb)->pcb_onfault;
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
1994-03-24 23:12:48 +00:00
|
|
|
}
|
1994-06-06 14:54:41 +00:00
|
|
|
break;
|
1994-03-24 23:12:48 +00:00
|
|
|
|
1995-01-14 13:20:26 +00:00
|
|
|
case T_TSSFLT:
|
|
|
|
/*
|
|
|
|
* PSL_NT can be set in user mode and isn't cleared
|
|
|
|
* automatically when the kernel is entered. This
|
|
|
|
* causes a TSS fault when the kernel attempts to
|
|
|
|
* `iret' because the TSS link is uninitialized. We
|
|
|
|
* want to get this fault so that we can fix the
|
|
|
|
* problem here and not every time the kernel is
|
|
|
|
* entered.
|
|
|
|
*/
|
|
|
|
if (frame.tf_eflags & PSL_NT) {
|
|
|
|
frame.tf_eflags &= ~PSL_NT;
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
1995-01-14 13:20:26 +00:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
|
1995-10-09 04:36:01 +00:00
|
|
|
case T_TRCTRAP: /* trace trap */
|
|
|
|
if (frame.tf_eip == (int)IDTVEC(syscall)) {
|
|
|
|
/*
|
|
|
|
* We've just entered system mode via the
|
|
|
|
* syscall lcall. Continue single stepping
|
|
|
|
* silently until the syscall handler has
|
|
|
|
* saved the flags.
|
|
|
|
*/
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
1995-10-09 04:36:01 +00:00
|
|
|
}
|
|
|
|
if (frame.tf_eip == (int)IDTVEC(syscall) + 1) {
|
|
|
|
/*
|
|
|
|
* The syscall handler has now saved the
|
|
|
|
* flags. Stop single stepping it.
|
|
|
|
*/
|
|
|
|
frame.tf_eflags &= ~PSL_T;
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
1995-10-09 04:36:01 +00:00
|
|
|
}
|
2000-07-01 02:40:13 +00:00
|
|
|
/*
|
|
|
|
* Ignore debug register trace traps due to
|
|
|
|
* accesses in the user's address space, which
|
|
|
|
* can happen under several conditions such as
|
|
|
|
* if a user sets a watchpoint on a buffer and
|
|
|
|
* then passes that buffer to a system call.
|
|
|
|
* We still want to get TRCTRAPS for addresses
|
|
|
|
* in kernel space because that is useful when
|
|
|
|
* debugging the kernel.
|
|
|
|
*/
|
2001-01-26 04:16:16 +00:00
|
|
|
/* XXX Giant */
|
2000-09-07 01:33:02 +00:00
|
|
|
if (user_dbreg_trap() && !in_vm86call) {
|
2000-07-01 02:40:13 +00:00
|
|
|
/*
|
|
|
|
* Reset breakpoint bits because the
|
|
|
|
* processor doesn't clear them itself.
|
|
|
|
*/
|
|
|
|
load_dr6(rdr6() & 0xfffffff0);
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
2000-07-01 02:40:13 +00:00
|
|
|
}
|
1995-10-09 04:36:01 +00:00
|
|
|
/*
|
2000-02-20 20:51:23 +00:00
|
|
|
* Fall through (TRCTRAP kernel mode, kernel address)
|
1995-10-09 04:36:01 +00:00
|
|
|
*/
|
1994-06-06 14:54:41 +00:00
|
|
|
case T_BPTFLT:
|
1995-10-09 04:36:01 +00:00
|
|
|
/*
|
|
|
|
* If DDB is enabled, let it handle the debugger trap.
|
|
|
|
* Otherwise, debugger traps "can't happen".
|
|
|
|
*/
|
|
|
|
#ifdef DDB
|
2001-01-26 04:16:16 +00:00
|
|
|
/* XXX Giant */
|
1994-06-06 14:54:41 +00:00
|
|
|
if (kdb_trap (type, 0, &frame))
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
1993-06-12 14:58:17 +00:00
|
|
|
#endif
|
1995-10-09 04:36:01 +00:00
|
|
|
break;
|
1995-05-30 08:16:23 +00:00
|
|
|
|
2001-01-29 09:38:39 +00:00
|
|
|
#ifdef DEV_ISA
|
1994-06-06 14:54:41 +00:00
|
|
|
case T_NMI:
|
1995-07-16 10:31:26 +00:00
|
|
|
#ifdef POWERFAIL_NMI
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
2000-09-07 01:33:02 +00:00
|
|
|
if (time_second - lastalert > 10) {
|
2000-10-06 01:55:07 +00:00
|
|
|
log(LOG_WARNING, "NMI: power fail\n");
|
|
|
|
sysbeep(TIMER_FREQ/880, hz);
|
|
|
|
lastalert = time_second;
|
|
|
|
}
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
1995-07-16 10:31:26 +00:00
|
|
|
#else /* !POWERFAIL_NMI */
|
2001-01-26 04:16:16 +00:00
|
|
|
/* XXX Giant */
|
2000-07-14 11:49:44 +00:00
|
|
|
/* machine/parity/power fail/"kitchen sink" faults */
|
|
|
|
if (isa_nmi(code) == 0) {
|
1994-08-27 16:14:39 +00:00
|
|
|
#ifdef DDB
|
2000-08-06 14:17:21 +00:00
|
|
|
/*
|
|
|
|
* NMI can be hooked up to a pushbutton
|
|
|
|
* for debugging.
|
|
|
|
*/
|
|
|
|
if (ddb_on_nmi) {
|
|
|
|
printf ("NMI ... going to debugger\n");
|
|
|
|
kdb_trap (type, 0, &frame);
|
|
|
|
}
|
1995-07-16 10:31:26 +00:00
|
|
|
#endif /* DDB */
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
2000-08-06 14:17:21 +00:00
|
|
|
} else if (panic_on_nmi == 0)
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
1994-06-06 14:54:41 +00:00
|
|
|
/* FALL THROUGH */
|
1995-07-16 10:31:26 +00:00
|
|
|
#endif /* POWERFAIL_NMI */
|
2001-01-29 09:38:39 +00:00
|
|
|
#endif /* DEV_ISA */
|
1994-06-06 14:54:41 +00:00
|
|
|
}
|
1994-02-01 23:07:35 +00:00
|
|
|
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
1998-12-02 08:15:17 +00:00
|
|
|
trap_fatal(&frame, eva);
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
2000-09-07 01:33:02 +00:00
|
|
|
goto out;
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
|
|
|
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
1998-04-28 18:15:08 +00:00
|
|
|
/* Translate fault for emulators (e.g. Linux) */
|
|
|
|
if (*p->p_sysent->sv_transtrap)
|
|
|
|
i = (*p->p_sysent->sv_transtrap)(i, type);
|
|
|
|
|
1993-06-12 14:58:17 +00:00
|
|
|
trapsignal(p, i, ucode);
|
1994-04-07 10:51:00 +00:00
|
|
|
|
1995-03-21 07:02:51 +00:00
|
|
|
#ifdef DEBUG
|
1994-06-06 14:54:41 +00:00
|
|
|
if (type <= MAX_TRAP_MSG) {
|
1995-05-30 08:16:23 +00:00
|
|
|
uprintf("fatal process exception: %s",
|
1994-06-06 14:54:41 +00:00
|
|
|
trap_msg[type]);
|
|
|
|
if ((type == T_PAGEFLT) || (type == T_PROTFLT))
|
1998-12-06 00:03:30 +00:00
|
|
|
uprintf(", fault VA = 0x%lx", (u_long)eva);
|
1994-04-07 10:51:00 +00:00
|
|
|
uprintf("\n");
|
|
|
|
}
|
|
|
|
#endif
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
1994-04-07 10:51:00 +00:00
|
|
|
|
2000-09-07 01:33:02 +00:00
|
|
|
user:
|
2001-01-24 09:53:49 +00:00
|
|
|
userret(p, &frame, sticks);
|
|
|
|
if (mtx_owned(&Giant))
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
2001-01-26 04:16:16 +00:00
|
|
|
out:
|
|
|
|
return;
|
1994-06-06 14:54:41 +00:00
|
|
|
}
|
|
|
|
|
1995-03-21 07:16:12 +00:00
|
|
|
#ifdef notyet
|
|
|
|
/*
|
|
|
|
* This version doesn't allow a page fault to user space while
|
|
|
|
* in the kernel. The rest of the kernel needs to be made "safe"
|
|
|
|
* before this can be used. I think the only things remaining
|
|
|
|
* to be made safe are the iBCS2 code and the process tracing/
|
|
|
|
* debugging code.
|
|
|
|
*/
|
1995-12-09 20:40:43 +00:00
|
|
|
static int
|
1998-12-02 08:15:17 +00:00
|
|
|
trap_pfault(frame, usermode, eva)
|
1995-03-21 07:16:12 +00:00
|
|
|
struct trapframe *frame;
|
|
|
|
int usermode;
|
1998-12-02 08:15:17 +00:00
|
|
|
vm_offset_t eva;
|
1995-03-21 07:16:12 +00:00
|
|
|
{
|
|
|
|
vm_offset_t va;
|
|
|
|
struct vmspace *vm = NULL;
|
|
|
|
vm_map_t map = 0;
|
|
|
|
int rv = 0;
|
|
|
|
vm_prot_t ftype;
|
|
|
|
struct proc *p = curproc;
|
|
|
|
|
|
|
|
if (frame->tf_err & PGEX_W)
|
2000-07-31 14:47:14 +00:00
|
|
|
ftype = VM_PROT_WRITE;
|
1995-03-21 07:16:12 +00:00
|
|
|
else
|
|
|
|
ftype = VM_PROT_READ;
|
|
|
|
|
1998-12-02 08:15:17 +00:00
|
|
|
va = trunc_page(eva);
|
1995-03-21 07:16:12 +00:00
|
|
|
if (va < VM_MIN_KERNEL_ADDRESS) {
|
|
|
|
vm_offset_t v;
|
1996-02-25 03:02:53 +00:00
|
|
|
vm_page_t mpte;
|
1995-03-21 07:16:12 +00:00
|
|
|
|
1995-07-30 17:49:24 +00:00
|
|
|
if (p == NULL ||
|
1995-03-21 07:16:12 +00:00
|
|
|
(!usermode && va < VM_MAXUSER_ADDRESS &&
|
2001-01-21 19:25:07 +00:00
|
|
|
(p->p_intr_nesting_level != 0 ||
|
2001-01-10 04:43:51 +00:00
|
|
|
PCPU_GET(curpcb) == NULL ||
|
|
|
|
PCPU_GET(curpcb)->pcb_onfault == NULL))) {
|
1998-12-02 08:15:17 +00:00
|
|
|
trap_fatal(frame, eva);
|
1995-03-21 07:16:12 +00:00
|
|
|
return (-1);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This is a fault on non-kernel virtual memory.
|
|
|
|
* vm is initialized above to NULL. If curproc is NULL
|
|
|
|
* or curproc->p_vmspace is NULL the fault is fatal.
|
|
|
|
*/
|
|
|
|
vm = p->p_vmspace;
|
|
|
|
if (vm == NULL)
|
|
|
|
goto nogo;
|
|
|
|
|
|
|
|
map = &vm->vm_map;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Keep swapout from messing with us during this
|
|
|
|
* critical time.
|
|
|
|
*/
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1995-03-21 07:16:12 +00:00
|
|
|
++p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1995-03-21 07:16:12 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Grow the stack if necessary
|
|
|
|
*/
|
1999-01-06 23:05:42 +00:00
|
|
|
/* grow_stack returns false only if va falls into
|
|
|
|
* a growable stack region and the stack growth
|
|
|
|
* fails. It returns true if va was not within
|
|
|
|
* a growable stack region, or if the stack
|
|
|
|
* growth succeeded.
|
|
|
|
*/
|
|
|
|
if (!grow_stack (p, va)) {
|
|
|
|
rv = KERN_FAILURE;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1999-01-06 23:05:42 +00:00
|
|
|
--p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1999-01-06 23:05:42 +00:00
|
|
|
goto nogo;
|
|
|
|
}
|
|
|
|
|
1995-03-21 07:16:12 +00:00
|
|
|
/* Fault in the user page: */
|
1997-04-06 02:29:45 +00:00
|
|
|
rv = vm_fault(map, va, ftype,
|
1999-11-09 01:44:28 +00:00
|
|
|
(ftype & VM_PROT_WRITE) ? VM_FAULT_DIRTY
|
|
|
|
: VM_FAULT_NORMAL);
|
1995-03-21 07:16:12 +00:00
|
|
|
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1995-03-21 07:16:12 +00:00
|
|
|
--p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1995-03-21 07:16:12 +00:00
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* Don't allow user-mode faults in kernel address space.
|
|
|
|
*/
|
|
|
|
if (usermode)
|
|
|
|
goto nogo;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Since we know that kernel virtual addresses
|
|
|
|
* always have pte pages mapped, we just have to fault
|
|
|
|
* the page.
|
|
|
|
*/
|
1999-11-09 01:44:28 +00:00
|
|
|
rv = vm_fault(kernel_map, va, ftype, VM_FAULT_NORMAL);
|
1995-03-21 07:16:12 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (rv == KERN_SUCCESS)
|
|
|
|
return (0);
|
|
|
|
nogo:
|
|
|
|
if (!usermode) {
|
2001-01-21 19:25:07 +00:00
|
|
|
if (p->p_intr_nesting_level == 0 &&
|
2001-01-10 04:43:51 +00:00
|
|
|
PCPU_GET(curpcb) != NULL &&
|
|
|
|
PCPU_GET(curpcb)->pcb_onfault != NULL) {
|
|
|
|
frame->tf_eip = (int)PCPU_GET(curpcb)->pcb_onfault;
|
1995-03-21 07:16:12 +00:00
|
|
|
return (0);
|
|
|
|
}
|
1998-12-02 08:15:17 +00:00
|
|
|
trap_fatal(frame, eva);
|
1995-03-21 07:16:12 +00:00
|
|
|
return (-1);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* kludge to pass faulting virtual address to sendsig */
|
|
|
|
frame->tf_err = eva;
|
|
|
|
|
|
|
|
return((rv == KERN_PROTECTION_FAILURE) ? SIGBUS : SIGSEGV);
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
1994-06-06 14:54:41 +00:00
|
|
|
int
|
1998-12-02 08:15:17 +00:00
|
|
|
trap_pfault(frame, usermode, eva)
|
1994-06-06 14:54:41 +00:00
|
|
|
struct trapframe *frame;
|
|
|
|
int usermode;
|
1998-12-02 08:15:17 +00:00
|
|
|
vm_offset_t eva;
|
1994-06-06 14:54:41 +00:00
|
|
|
{
|
|
|
|
vm_offset_t va;
|
1994-09-11 11:26:18 +00:00
|
|
|
struct vmspace *vm = NULL;
|
1994-06-06 14:54:41 +00:00
|
|
|
vm_map_t map = 0;
|
1994-10-08 22:19:51 +00:00
|
|
|
int rv = 0;
|
1994-06-06 14:54:41 +00:00
|
|
|
vm_prot_t ftype;
|
|
|
|
struct proc *p = curproc;
|
|
|
|
|
1998-12-02 08:15:17 +00:00
|
|
|
va = trunc_page(eva);
|
1994-09-11 11:26:18 +00:00
|
|
|
if (va >= KERNBASE) {
|
|
|
|
/*
|
|
|
|
* Don't allow user-mode faults in kernel address space.
|
1997-12-03 02:45:50 +00:00
|
|
|
* An exception: if the faulting address is the invalid
|
|
|
|
* instruction entry in the IDT, then the Intel Pentium
|
|
|
|
* F00F bug workaround was triggered, and we need to
|
|
|
|
* treat it as an illegal instruction, and not a page
|
|
|
|
* fault.
|
1994-09-11 11:26:18 +00:00
|
|
|
*/
|
1997-12-04 14:35:40 +00:00
|
|
|
#if defined(I586_CPU) && !defined(NO_F00F_HACK)
|
2000-09-07 01:33:02 +00:00
|
|
|
if ((eva == (unsigned int)&idt[6]) && has_f00f_bug)
|
1997-12-03 02:45:50 +00:00
|
|
|
return -2;
|
|
|
|
#endif
|
1994-09-11 11:26:18 +00:00
|
|
|
if (usermode)
|
|
|
|
goto nogo;
|
1994-06-06 14:54:41 +00:00
|
|
|
|
|
|
|
map = kernel_map;
|
|
|
|
} else {
|
1994-09-11 11:26:18 +00:00
|
|
|
/*
|
|
|
|
* This is a fault on non-kernel virtual memory.
|
|
|
|
* vm is initialized above to NULL. If curproc is NULL
|
|
|
|
* or curproc->p_vmspace is NULL the fault is fatal.
|
|
|
|
*/
|
|
|
|
if (p != NULL)
|
|
|
|
vm = p->p_vmspace;
|
|
|
|
|
|
|
|
if (vm == NULL)
|
|
|
|
goto nogo;
|
|
|
|
|
1994-06-06 14:54:41 +00:00
|
|
|
map = &vm->vm_map;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (frame->tf_err & PGEX_W)
|
2000-07-31 14:47:14 +00:00
|
|
|
ftype = VM_PROT_WRITE;
|
1994-06-06 14:54:41 +00:00
|
|
|
else
|
|
|
|
ftype = VM_PROT_READ;
|
|
|
|
|
|
|
|
if (map != kernel_map) {
|
1993-06-12 14:58:17 +00:00
|
|
|
/*
|
1994-06-06 14:54:41 +00:00
|
|
|
* Keep swapout from messing with us during this
|
|
|
|
* critical time.
|
1993-06-12 14:58:17 +00:00
|
|
|
*/
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1994-06-06 14:54:41 +00:00
|
|
|
++p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1994-06-06 14:54:41 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Grow the stack if necessary
|
|
|
|
*/
|
1999-01-06 23:05:42 +00:00
|
|
|
/* grow_stack returns false only if va falls into
|
|
|
|
* a growable stack region and the stack growth
|
|
|
|
* fails. It returns true if va was not within
|
|
|
|
* a growable stack region, or if the stack
|
|
|
|
* growth succeeded.
|
|
|
|
*/
|
|
|
|
if (!grow_stack (p, va)) {
|
|
|
|
rv = KERN_FAILURE;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1999-01-06 23:05:42 +00:00
|
|
|
--p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1999-01-06 23:05:42 +00:00
|
|
|
goto nogo;
|
|
|
|
}
|
1994-06-06 14:54:41 +00:00
|
|
|
|
|
|
|
/* Fault in the user page: */
|
1997-04-06 02:29:45 +00:00
|
|
|
rv = vm_fault(map, va, ftype,
|
1999-11-09 01:44:28 +00:00
|
|
|
(ftype & VM_PROT_WRITE) ? VM_FAULT_DIRTY
|
|
|
|
: VM_FAULT_NORMAL);
|
1994-06-06 14:54:41 +00:00
|
|
|
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1994-06-06 14:54:41 +00:00
|
|
|
--p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1994-06-06 14:54:41 +00:00
|
|
|
} else {
|
|
|
|
/*
|
2001-01-24 09:53:49 +00:00
|
|
|
* Don't have to worry about process locking or stacks in the
|
|
|
|
* kernel.
|
1994-06-06 14:54:41 +00:00
|
|
|
*/
|
1999-11-09 01:44:28 +00:00
|
|
|
rv = vm_fault(map, va, ftype, VM_FAULT_NORMAL);
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
|
|
|
|
1994-06-06 14:54:41 +00:00
|
|
|
if (rv == KERN_SUCCESS)
|
|
|
|
return (0);
|
|
|
|
nogo:
|
|
|
|
if (!usermode) {
|
2001-01-21 19:25:07 +00:00
|
|
|
if (p->p_intr_nesting_level == 0 &&
|
2001-01-10 04:43:51 +00:00
|
|
|
PCPU_GET(curpcb) != NULL &&
|
|
|
|
PCPU_GET(curpcb)->pcb_onfault != NULL) {
|
|
|
|
frame->tf_eip = (int)PCPU_GET(curpcb)->pcb_onfault;
|
1994-06-06 14:54:41 +00:00
|
|
|
return (0);
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
1998-12-02 08:15:17 +00:00
|
|
|
trap_fatal(frame, eva);
|
1994-10-30 20:25:21 +00:00
|
|
|
return (-1);
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
1994-06-06 14:54:41 +00:00
|
|
|
|
|
|
|
/* kludge to pass faulting virtual address to sendsig */
|
|
|
|
frame->tf_err = eva;
|
|
|
|
|
|
|
|
return((rv == KERN_PROTECTION_FAILURE) ? SIGBUS : SIGSEGV);
|
|
|
|
}
|
|
|
|
|
1995-12-09 20:40:43 +00:00
|
|
|
static void
|
1998-12-02 08:15:17 +00:00
|
|
|
trap_fatal(frame, eva)
|
1994-06-06 14:54:41 +00:00
|
|
|
struct trapframe *frame;
|
1998-12-02 08:15:17 +00:00
|
|
|
vm_offset_t eva;
|
1994-06-06 14:54:41 +00:00
|
|
|
{
|
1998-12-02 08:15:17 +00:00
|
|
|
int code, type, ss, esp;
|
1994-10-01 02:56:21 +00:00
|
|
|
struct soft_segment_descriptor softseg;
|
1994-06-06 14:54:41 +00:00
|
|
|
|
|
|
|
code = frame->tf_err;
|
|
|
|
type = frame->tf_trapno;
|
1994-10-30 20:25:21 +00:00
|
|
|
sdtossd(&gdt[IDXSEL(frame->tf_cs & 0xffff)].sd, &softseg);
|
1994-06-06 14:54:41 +00:00
|
|
|
|
|
|
|
if (type <= MAX_TRAP_MSG)
|
|
|
|
printf("\n\nFatal trap %d: %s while in %s mode\n",
|
|
|
|
type, trap_msg[type],
|
1997-08-09 00:04:06 +00:00
|
|
|
frame->tf_eflags & PSL_VM ? "vm86" :
|
1997-08-21 06:33:04 +00:00
|
|
|
ISPL(frame->tf_cs) == SEL_UPL ? "user" : "kernel");
|
1997-04-26 11:46:25 +00:00
|
|
|
#ifdef SMP
|
2001-02-06 11:21:58 +00:00
|
|
|
/* two separate prints in case of a trap on an unmapped page */
|
2001-01-10 04:43:51 +00:00
|
|
|
printf("cpuid = %d; ", PCPU_GET(cpuid));
|
1997-09-05 08:54:55 +00:00
|
|
|
printf("lapic.id = %08x\n", lapic.id);
|
1997-04-26 11:46:25 +00:00
|
|
|
#endif
|
1994-06-06 14:54:41 +00:00
|
|
|
if (type == T_PAGEFLT) {
|
|
|
|
printf("fault virtual address = 0x%x\n", eva);
|
|
|
|
printf("fault code = %s %s, %s\n",
|
|
|
|
code & PGEX_U ? "user" : "supervisor",
|
|
|
|
code & PGEX_W ? "write" : "read",
|
|
|
|
code & PGEX_P ? "protection violation" : "page not present");
|
|
|
|
}
|
1996-03-27 17:33:39 +00:00
|
|
|
printf("instruction pointer = 0x%x:0x%x\n",
|
|
|
|
frame->tf_cs & 0xffff, frame->tf_eip);
|
1997-08-21 06:33:04 +00:00
|
|
|
if ((ISPL(frame->tf_cs) == SEL_UPL) || (frame->tf_eflags & PSL_VM)) {
|
1996-03-27 17:33:39 +00:00
|
|
|
ss = frame->tf_ss & 0xffff;
|
|
|
|
esp = frame->tf_esp;
|
|
|
|
} else {
|
|
|
|
ss = GSEL(GDATA_SEL, SEL_KPL);
|
|
|
|
esp = (int)&frame->tf_esp;
|
|
|
|
}
|
|
|
|
printf("stack pointer = 0x%x:0x%x\n", ss, esp);
|
|
|
|
printf("frame pointer = 0x%x:0x%x\n", ss, frame->tf_ebp);
|
1994-10-01 02:56:21 +00:00
|
|
|
printf("code segment = base 0x%x, limit 0x%x, type 0x%x\n",
|
1996-03-27 17:33:39 +00:00
|
|
|
softseg.ssd_base, softseg.ssd_limit, softseg.ssd_type);
|
1994-10-01 02:56:21 +00:00
|
|
|
printf(" = DPL %d, pres %d, def32 %d, gran %d\n",
|
1996-03-27 17:33:39 +00:00
|
|
|
softseg.ssd_dpl, softseg.ssd_p, softseg.ssd_def32,
|
|
|
|
softseg.ssd_gran);
|
1994-06-06 14:54:41 +00:00
|
|
|
printf("processor eflags = ");
|
1994-09-08 11:49:04 +00:00
|
|
|
if (frame->tf_eflags & PSL_T)
|
1996-03-27 17:33:39 +00:00
|
|
|
printf("trace trap, ");
|
1994-09-08 11:49:04 +00:00
|
|
|
if (frame->tf_eflags & PSL_I)
|
1994-06-06 14:54:41 +00:00
|
|
|
printf("interrupt enabled, ");
|
1994-09-08 11:49:04 +00:00
|
|
|
if (frame->tf_eflags & PSL_NT)
|
1994-06-06 14:54:41 +00:00
|
|
|
printf("nested task, ");
|
1994-09-08 11:49:04 +00:00
|
|
|
if (frame->tf_eflags & PSL_RF)
|
1994-06-06 14:54:41 +00:00
|
|
|
printf("resume, ");
|
1994-09-08 11:49:04 +00:00
|
|
|
if (frame->tf_eflags & PSL_VM)
|
1994-06-06 14:54:41 +00:00
|
|
|
printf("vm86, ");
|
1994-09-08 11:49:04 +00:00
|
|
|
printf("IOPL = %d\n", (frame->tf_eflags & PSL_IOPL) >> 12);
|
1994-06-06 14:54:41 +00:00
|
|
|
printf("current process = ");
|
|
|
|
if (curproc) {
|
1994-10-08 22:19:51 +00:00
|
|
|
printf("%lu (%s)\n",
|
|
|
|
(u_long)curproc->p_pid, curproc->p_comm ?
|
1994-06-06 14:54:41 +00:00
|
|
|
curproc->p_comm : "");
|
|
|
|
} else {
|
|
|
|
printf("Idle\n");
|
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef KDB
|
|
|
|
if (kdb_trap(&psl))
|
|
|
|
return;
|
|
|
|
#endif
|
1994-08-27 16:14:39 +00:00
|
|
|
#ifdef DDB
|
2000-01-11 14:54:01 +00:00
|
|
|
if ((debugger_on_panic || db_active) && kdb_trap(type, 0, frame))
|
1994-06-06 14:54:41 +00:00
|
|
|
return;
|
|
|
|
#endif
|
1997-04-26 11:46:25 +00:00
|
|
|
printf("trap number = %d\n", type);
|
1994-06-06 14:54:41 +00:00
|
|
|
if (type <= MAX_TRAP_MSG)
|
|
|
|
panic(trap_msg[type]);
|
|
|
|
else
|
|
|
|
panic("unknown/reserved trap");
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
|
|
|
|
1995-12-19 14:30:50 +00:00
|
|
|
/*
|
|
|
|
* Double fault handler. Called when a fault occurs while writing
|
|
|
|
* a frame for a trap/exception onto the stack. This usually occurs
|
|
|
|
* when the stack overflows (such is the case with infinite recursion,
|
|
|
|
* for example).
|
|
|
|
*
|
|
|
|
* XXX Note that the current PTD gets replaced by IdlePTD when the
|
|
|
|
* task switch occurs. This means that the stack that was active at
|
|
|
|
* the time of the double fault is not available at <kstack> unless
|
1995-12-19 14:47:41 +00:00
|
|
|
* the machine was idle when the double fault occurred. The downside
|
1995-12-19 14:30:50 +00:00
|
|
|
* of this is that "trace <ebp>" in ddb won't work.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
dblfault_handler()
|
|
|
|
{
|
1997-04-14 13:52:52 +00:00
|
|
|
printf("\nFatal double fault:\n");
|
2001-01-10 04:43:51 +00:00
|
|
|
printf("eip = 0x%x\n", PCPU_GET(common_tss.tss_eip));
|
|
|
|
printf("esp = 0x%x\n", PCPU_GET(common_tss.tss_esp));
|
|
|
|
printf("ebp = 0x%x\n", PCPU_GET(common_tss.tss_ebp));
|
1997-06-22 16:04:22 +00:00
|
|
|
#ifdef SMP
|
2001-02-06 11:21:58 +00:00
|
|
|
/* two separate prints in case of a trap on an unmapped page */
|
2001-01-10 04:43:51 +00:00
|
|
|
printf("cpuid = %d; ", PCPU_GET(cpuid));
|
1997-09-05 08:54:55 +00:00
|
|
|
printf("lapic.id = %08x\n", lapic.id);
|
1997-04-26 11:46:25 +00:00
|
|
|
#endif
|
1995-12-19 14:30:50 +00:00
|
|
|
panic("double fault");
|
|
|
|
}
|
|
|
|
|
1993-06-12 14:58:17 +00:00
|
|
|
/*
|
1993-07-27 10:52:31 +00:00
|
|
|
* Compensate for 386 brain damage (missing URKR).
|
|
|
|
* This is a little simpler than the pagefault handler in trap() because
|
|
|
|
* the page tables have already been faulted in and high addresses
|
|
|
|
* are thrown out early for other reasons.
|
1993-06-12 14:58:17 +00:00
|
|
|
*/
|
1993-07-27 10:52:31 +00:00
|
|
|
int trapwrite(addr)
|
|
|
|
unsigned addr;
|
|
|
|
{
|
|
|
|
struct proc *p;
|
1997-01-23 01:30:59 +00:00
|
|
|
vm_offset_t va;
|
1993-07-27 10:52:31 +00:00
|
|
|
struct vmspace *vm;
|
1994-01-14 16:25:31 +00:00
|
|
|
int rv;
|
1993-06-12 14:58:17 +00:00
|
|
|
|
|
|
|
va = trunc_page((vm_offset_t)addr);
|
1993-07-27 10:52:31 +00:00
|
|
|
/*
|
|
|
|
* XXX - MAX is END. Changed > to >= for temp. fix.
|
|
|
|
*/
|
|
|
|
if (va >= VM_MAXUSER_ADDRESS)
|
|
|
|
return (1);
|
1994-02-08 09:26:04 +00:00
|
|
|
|
1993-07-27 10:52:31 +00:00
|
|
|
p = curproc;
|
|
|
|
vm = p->p_vmspace;
|
1994-01-14 16:25:31 +00:00
|
|
|
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1994-05-25 09:21:21 +00:00
|
|
|
++p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1994-01-14 16:25:31 +00:00
|
|
|
|
1999-01-06 23:05:42 +00:00
|
|
|
if (!grow_stack (p, va)) {
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1999-01-06 23:05:42 +00:00
|
|
|
--p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1999-01-06 23:05:42 +00:00
|
|
|
return (1);
|
|
|
|
}
|
1993-07-27 10:52:31 +00:00
|
|
|
|
1994-02-08 09:26:04 +00:00
|
|
|
/*
|
|
|
|
* fault the data page
|
|
|
|
*/
|
2000-07-31 14:47:14 +00:00
|
|
|
rv = vm_fault(&vm->vm_map, va, VM_PROT_WRITE, VM_FAULT_DIRTY);
|
1994-02-08 09:26:04 +00:00
|
|
|
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_LOCK(p);
|
1994-05-25 09:21:21 +00:00
|
|
|
--p->p_lock;
|
2001-01-24 09:53:49 +00:00
|
|
|
PROC_UNLOCK(p);
|
1994-01-14 16:25:31 +00:00
|
|
|
|
|
|
|
if (rv != KERN_SUCCESS)
|
|
|
|
return 1;
|
1994-02-08 09:26:04 +00:00
|
|
|
|
1993-07-27 10:52:31 +00:00
|
|
|
return (0);
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2001-02-10 02:20:34 +00:00
|
|
|
* syscall - MP aware system call request C handler
|
2000-03-28 07:16:37 +00:00
|
|
|
*
|
|
|
|
* A system call is essentially treated as a trap except that the
|
|
|
|
* MP lock is not held on entry or return. We are responsible for
|
|
|
|
* obtaining the MP lock if necessary and for handling ASTs
|
|
|
|
* (e.g. a task switch) prior to return.
|
|
|
|
*
|
|
|
|
* In general, only simple access and manipulation of curproc and
|
|
|
|
* the current stack is allowed without having to hold MP lock.
|
1993-06-12 14:58:17 +00:00
|
|
|
*/
|
1993-11-25 01:38:01 +00:00
|
|
|
void
|
2001-02-10 02:20:34 +00:00
|
|
|
syscall(frame)
|
1994-06-06 14:54:41 +00:00
|
|
|
struct trapframe frame;
|
1993-06-12 14:58:17 +00:00
|
|
|
{
|
1994-06-06 14:54:41 +00:00
|
|
|
caddr_t params;
|
|
|
|
int i;
|
|
|
|
struct sysent *callp;
|
|
|
|
struct proc *p = curproc;
|
1994-05-25 09:21:21 +00:00
|
|
|
u_quad_t sticks;
|
1995-08-21 18:06:48 +00:00
|
|
|
int error;
|
2000-03-28 07:16:37 +00:00
|
|
|
int narg;
|
1997-11-06 19:29:57 +00:00
|
|
|
int args[8];
|
1994-05-25 09:21:21 +00:00
|
|
|
u_int code;
|
1993-06-12 14:58:17 +00:00
|
|
|
|
2000-09-07 01:33:02 +00:00
|
|
|
atomic_add_int(&cnt.v_syscall, 1);
|
|
|
|
|
1997-11-24 13:25:37 +00:00
|
|
|
#ifdef DIAGNOSTIC
|
2000-03-28 07:16:37 +00:00
|
|
|
if (ISPL(frame.tf_cs) != SEL_UPL) {
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
1993-06-12 14:58:17 +00:00
|
|
|
panic("syscall");
|
2000-03-28 07:16:37 +00:00
|
|
|
/* NOT REACHED */
|
|
|
|
}
|
1997-11-24 13:25:37 +00:00
|
|
|
#endif
|
2000-03-28 07:16:37 +00:00
|
|
|
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
2001-01-24 09:53:49 +00:00
|
|
|
sticks = p->p_sticks;
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
2000-03-28 07:16:37 +00:00
|
|
|
|
1997-05-07 20:08:53 +00:00
|
|
|
p->p_md.md_regs = &frame;
|
1995-08-21 18:06:48 +00:00
|
|
|
params = (caddr_t)frame.tf_esp + sizeof(int);
|
|
|
|
code = frame.tf_eax;
|
2000-03-28 07:16:37 +00:00
|
|
|
|
Mega-commit for Linux emulator update.. This has been stress tested under
netscape-2.0 for Linux running all the Java stuff. The scrollbars are now
working, at least on my machine. (whew! :-)
I'm uncomfortable with the size of this commit, but it's too
inter-dependent to easily separate out.
The main changes:
COMPAT_LINUX is *GONE*. Most of the code has been moved out of the i386
machine dependent section into the linux emulator itself. The int 0x80
syscall code was almost identical to the lcall 7,0 code and a minor tweak
allows them to both be used with the same C code. All kernels can now
just modload the lkm and it'll DTRT without having to rebuild the kernel
first. Like IBCS2, you can statically compile it in with "options LINUX".
A pile of new syscalls implemented, including getdents(), llseek(),
readv(), writev(), msync(), personality(). The Linux-ELF libraries want
to use some of these.
linux_select() now obeys Linux semantics, ie: returns the time remaining
of the timeout value rather than leaving it the original value.
Quite a few bugs removed, including incorrect arguments being used in
syscalls, e.g. mixups between passing the sigset as an int vs. passing
it as a pointer and doing a copyin(), missing return values, unhandled
cases, SIOC* ioctls, etc.
The build for the code has changed. i386/conf/files now knows how
to build linux_genassym and generate linux_assym.h on the fly.
Supporting changes elsewhere in the kernel:
The user-mode signal trampoline has moved from the U area to immediately
below the top of the stack (below PS_STRINGS). This allows the different
binary emulations to have their own signal trampoline code (which gets rid
of the hardwired syscall 103 (sigreturn on BSD, syslog on Linux)) and so
that the emulator can provide the exact "struct sigcontext *" argument to
the program's signal handlers.
The sigstack's "ss_flags" now uses SS_DISABLE and SS_ONSTACK flags, which
have the same values as the re-used SA_DISABLE and SA_ONSTACK which are
intended for sigaction only. This enables the support of a SA_RESETHAND
flag to sigaction to implement the gross SYSV and Linux SA_ONESHOT signal
semantics where the signal handler is reset when it's triggered.
makesyscalls.sh no longer appends the struct sysentvec on the end of the
generated init_sysent.c code. It's a lot saner to have it in a separate
file rather than trying to update the structure inside the awk script. :-)
At exec time, the dozen bytes or so of signal trampoline code are copied
to the top of the user's stack, rather than obtaining the trampoline code
the old way by getting a clone of the parent's user area. This allows
Linux and native binaries to freely exec each other without getting
trampolines mixed up.
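The SA_RESETHAND semantics described above can be seen from userland
with a minimal sketch (standard sigaction(2); after the first delivery
the disposition reverts to SIG_DFL, which is the SysV/Linux SA_ONESHOT
behaviour):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static void
once(int sig)
{
	write(STDOUT_FILENO, "caught once\n", 12);
}

int
main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = once;
	sa.sa_flags = SA_RESETHAND;	/* one-shot handler */
	(void)sigaction(SIGINT, &sa, NULL);

	raise(SIGINT);			/* handled by once() */
	/* A second SIGINT would now take the default action. */
	return (0);
}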
1996-03-02 19:38:20 +00:00
|
|
|
if (p->p_sysent->sv_prepsyscall) {
|
2000-03-28 07:16:37 +00:00
|
|
|
/*
|
|
|
|
* The prep code is not MP aware.
|
|
|
|
*/
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
1996-03-02 19:38:20 +00:00
|
|
|
(*p->p_sysent->sv_prepsyscall)(&frame, args, &code, &params);
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
1996-03-02 19:38:20 +00:00
|
|
|
} else {
|
1994-05-25 09:21:21 +00:00
|
|
|
/*
|
1996-03-02 19:38:20 +00:00
|
|
|
* Need to check if this is a 32 bit or 64 bit syscall.
|
2000-03-28 07:16:37 +00:00
|
|
|
* fuword is MP aware.
|
1994-05-25 09:21:21 +00:00
|
|
|
*/
|
1996-03-02 19:38:20 +00:00
|
|
|
if (code == SYS_syscall) {
|
|
|
|
/*
|
|
|
|
* Code is first argument, followed by actual args.
|
|
|
|
*/
|
|
|
|
code = fuword(params);
|
|
|
|
params += sizeof(int);
|
|
|
|
} else if (code == SYS___syscall) {
|
|
|
|
/*
|
|
|
|
* Like syscall, but code is a quad, so as to maintain
|
|
|
|
* quad alignment for the rest of the arguments.
|
|
|
|
*/
|
|
|
|
code = fuword(params);
|
|
|
|
params += sizeof(quad_t);
|
|
|
|
}
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
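The indirect forms can be pictured with a small userland analog (a
sketch under assumed names; memcpy() on a local buffer stands in for
fuword() on the user stack, and the SYS_* values come from the system
headers):

#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>

static long
decode_indirect(const char **paramsp, long code)
{
	const char *params = *paramsp;

	if (code == SYS_syscall) {
		int c;

		/* Code is first argument, followed by actual args. */
		memcpy(&c, params, sizeof(c));
		code = c;
		params += sizeof(int);
	} else if (code == SYS___syscall) {
		int64_t q;

		/* Code is a quad, keeping the remaining args aligned. */
		memcpy(&q, params, sizeof(q));
		code = (long)q;
		params += sizeof(q);
	}
	*paramsp = params;
	return (code);
}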
1994-05-25 09:21:21 +00:00
|
|
|
|
1994-08-24 11:52:21 +00:00
|
|
|
if (p->p_sysent->sv_mask)
|
1995-08-21 18:06:48 +00:00
|
|
|
code &= p->p_sysent->sv_mask;
|
1995-05-30 08:16:23 +00:00
|
|
|
|
1994-08-28 16:16:33 +00:00
|
|
|
if (code >= p->p_sysent->sv_size)
|
1994-08-24 11:52:21 +00:00
|
|
|
callp = &p->p_sysent->sv_table[0];
|
|
|
|
else
|
|
|
|
callp = &p->p_sysent->sv_table[code];
|
1993-06-12 14:58:17 +00:00
|
|
|
|
2000-03-28 07:16:37 +00:00
|
|
|
narg = callp->sy_narg & SYF_ARGMASK;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* copyin is MP aware, but the tracing code is not
|
|
|
|
*/
|
|
|
|
if (params && (i = narg * sizeof(int)) &&
|
1993-06-12 14:58:17 +00:00
|
|
|
(error = copyin(params, (caddr_t)args, (u_int)i))) {
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
1993-06-12 14:58:17 +00:00
|
|
|
#ifdef KTRACE
|
|
|
|
if (KTRPOINT(p, KTR_SYSCALL))
|
2000-03-28 07:16:37 +00:00
|
|
|
ktrsyscall(p->p_tracep, code, narg, args);
|
1993-06-12 14:58:17 +00:00
|
|
|
#endif
|
1994-06-06 14:54:41 +00:00
|
|
|
goto bad;
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
2000-03-28 07:16:37 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Try to run the syscall without the MP lock if the syscall
|
|
|
|
* is MP safe. We have to obtain the MP lock no matter what if
|
|
|
|
* we are ktracing
|
|
|
|
*/
|
|
|
|
if ((callp->sy_narg & SYF_MPSAFE) == 0) {
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
2000-03-28 07:16:37 +00:00
|
|
|
}
|
|
|
|
|
1993-06-12 14:58:17 +00:00
|
|
|
#ifdef KTRACE
|
2000-03-28 07:16:37 +00:00
|
|
|
if (KTRPOINT(p, KTR_SYSCALL)) {
|
2001-01-24 09:53:49 +00:00
|
|
|
if (!mtx_owned(&Giant))
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
2000-03-28 07:16:37 +00:00
|
|
|
ktrsyscall(p->p_tracep, code, narg, args);
|
|
|
|
}
|
1993-06-12 14:58:17 +00:00
|
|
|
#endif
|
1997-11-06 19:29:57 +00:00
|
|
|
p->p_retval[0] = 0;
|
|
|
|
p->p_retval[1] = frame.tf_edx;
|
1994-06-06 14:54:41 +00:00
|
|
|
|
2000-03-28 07:16:37 +00:00
|
|
|
STOPEVENT(p, S_SCE, narg); /* MP aware */
|
1997-12-06 04:11:14 +00:00
|
|
|
|
1997-11-06 19:29:57 +00:00
|
|
|
error = (*callp->sy_call)(p, args);
|
1994-06-06 14:54:41 +00:00
|
|
|
|
2000-03-28 07:16:37 +00:00
|
|
|
/*
|
|
|
|
* MP SAFE (we may or may not have the MP lock at this point)
|
|
|
|
*/
|
1994-06-06 14:54:41 +00:00
|
|
|
switch (error) {
|
|
|
|
case 0:
|
1997-11-06 19:29:57 +00:00
|
|
|
frame.tf_eax = p->p_retval[0];
|
|
|
|
frame.tf_edx = p->p_retval[1];
|
1995-10-09 04:36:01 +00:00
|
|
|
frame.tf_eflags &= ~PSL_C;
|
1994-06-06 14:54:41 +00:00
|
|
|
break;
|
1993-06-12 14:58:17 +00:00
|
|
|
|
1994-06-06 14:54:41 +00:00
|
|
|
case ERESTART:
|
1995-08-21 18:06:48 +00:00
|
|
|
/*
|
1996-03-02 19:38:20 +00:00
|
|
|
* Reconstruct pc, assuming lcall $X,y is 7 bytes,
|
|
|
|
* int 0x80 is 2 bytes. We saved this in tf_err.
|
1995-08-21 18:06:48 +00:00
|
|
|
*/
|
1996-03-02 19:38:20 +00:00
|
|
|
frame.tf_eip -= frame.tf_err;
|
1994-06-06 14:54:41 +00:00
|
|
|
break;
|
|
|
|
|
|
|
|
case EJUSTRETURN:
|
|
|
|
break;
|
|
|
|
|
|
|
|
default:
|
1995-08-21 18:06:48 +00:00
|
|
|
bad:
|
1999-05-06 18:13:11 +00:00
|
|
|
if (p->p_sysent->sv_errsize) {
|
1994-10-10 07:33:01 +00:00
|
|
|
if (error >= p->p_sysent->sv_errsize)
|
|
|
|
error = -1; /* XXX */
|
1995-05-30 08:16:23 +00:00
|
|
|
else
|
1994-10-10 07:33:01 +00:00
|
|
|
error = p->p_sysent->sv_errtbl[error];
|
1999-05-06 18:13:11 +00:00
|
|
|
}
|
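The sv_errtbl lookup above is just a bounded table translation; a
sketch with a hypothetical identity table (real tables live in each
emulator's struct sysentvec as sv_errtbl/sv_errsize):

static const int example_errtbl[] = { 0, 1, 2, 3, 4, 5 };  /* hypothetical */

static int
translate_errno(int error)
{
	const int errsize = sizeof(example_errtbl) / sizeof(example_errtbl[0]);

	if (error >= errsize)
		return (-1);		/* XXX no mapping, as above */
	return (example_errtbl[error]);
}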
1994-06-06 14:54:41 +00:00
|
|
|
frame.tf_eax = error;
|
1995-10-09 04:36:01 +00:00
|
|
|
frame.tf_eflags |= PSL_C;
|
1994-06-06 14:54:41 +00:00
|
|
|
break;
|
1993-06-12 14:58:17 +00:00
|
|
|
}
|
1994-06-06 14:54:41 +00:00
|
|
|
|
2000-03-28 07:16:37 +00:00
|
|
|
/*
|
|
|
|
* Traced syscall. trapsignal() is not MP aware.
|
|
|
|
*/
|
1997-08-09 00:04:06 +00:00
|
|
|
if ((frame.tf_eflags & PSL_T) && !(frame.tf_eflags & PSL_VM)) {
|
2001-01-24 09:53:49 +00:00
|
|
|
if (!mtx_owned(&Giant))
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
1995-10-04 07:08:04 +00:00
|
|
|
frame.tf_eflags &= ~PSL_T;
|
1995-10-09 04:36:01 +00:00
|
|
|
trapsignal(p, SIGTRAP, 0);
|
1995-10-04 07:08:04 +00:00
|
|
|
}
|
1995-10-09 04:36:01 +00:00
|
|
|
|
2000-03-28 07:16:37 +00:00
|
|
|
/*
|
|
|
|
* Handle reschedule and other end-of-syscall issues
|
|
|
|
*/
|
2001-01-24 09:53:49 +00:00
|
|
|
userret(p, &frame, sticks);
|
1994-06-06 14:54:41 +00:00
|
|
|
|
1993-06-12 14:58:17 +00:00
|
|
|
#ifdef KTRACE
|
2000-03-28 07:16:37 +00:00
|
|
|
if (KTRPOINT(p, KTR_SYSRET)) {
|
2001-01-24 09:53:49 +00:00
|
|
|
if (!mtx_owned(&Giant))
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
1997-11-06 19:29:57 +00:00
|
|
|
ktrsysret(p->p_tracep, code, error, p->p_retval[0]);
|
2000-03-28 07:16:37 +00:00
|
|
|
}
|
1993-06-12 14:58:17 +00:00
|
|
|
#endif
|
1997-12-06 04:11:14 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* This works because errno is findable through the
|
|
|
|
* register set. If we ever support an emulation where this
|
|
|
|
* is not the case, this code will need to be revisited.
|
|
|
|
*/
|
|
|
|
STOPEVENT(p, S_SCX, code);
|
|
|
|
|
2000-03-28 07:16:37 +00:00
|
|
|
/*
|
2001-01-24 09:53:49 +00:00
|
|
|
* Release Giant if we had to get it
|
2000-03-28 07:16:37 +00:00
|
|
|
*/
|
2001-01-24 09:53:49 +00:00
|
|
|
if (mtx_owned(&Giant))
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
2000-09-07 01:33:02 +00:00
|
|
|
|
2000-12-12 01:14:32 +00:00
|
|
|
#ifdef WITNESS
|
|
|
|
if (witness_list(p)) {
|
|
|
|
panic("system call %s returning with mutex(s) held\n",
|
|
|
|
syscallnames[code]);
|
|
|
|
}
|
|
|
|
#endif
|
2001-01-24 09:53:49 +00:00
|
|
|
mtx_assert(&sched_lock, MA_NOTOWNED);
|
|
|
|
mtx_assert(&Giant, MA_NOTOWNED);
|
2000-09-07 01:33:02 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
ast(frame)
|
|
|
|
struct trapframe frame;
|
|
|
|
{
|
|
|
|
struct proc *p = CURPROC;
|
|
|
|
u_quad_t sticks;
|
|
|
|
|
2001-02-10 02:20:34 +00:00
|
|
|
KASSERT(TRAPF_USERMODE(&frame), ("ast in kernel mode"));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We check for a pending AST here rather than in the assembly as
|
|
|
|
* acquiring and releasing mutexes in assembly is not fun.
|
|
|
|
*/
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
2001-02-19 04:15:59 +00:00
|
|
|
if (!(astpending(p) || resched_wanted())) {
|
2001-02-10 02:20:34 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2001-01-24 09:53:49 +00:00
|
|
|
sticks = p->p_sticks;
|
2001-02-10 02:20:34 +00:00
|
|
|
|
2001-02-19 04:15:59 +00:00
|
|
|
astoff(p);
|
2001-02-10 02:20:34 +00:00
|
|
|
mtx_intr_enable(&sched_lock);
|
2000-09-07 01:33:02 +00:00
|
|
|
atomic_add_int(&cnt.v_soft, 1);
|
2001-01-24 09:53:49 +00:00
|
|
|
if (p->p_sflag & PS_OWEUPC) {
|
|
|
|
p->p_sflag &= ~PS_OWEUPC;
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
|
|
|
mtx_lock(&Giant);
|
|
|
|
mtx_lock_spin(&sched_lock);
|
2000-09-07 01:33:02 +00:00
|
|
|
addupc_task(p, p->p_stats->p_prof.pr_addr,
|
|
|
|
p->p_stats->p_prof.pr_ticks);
|
2000-10-06 01:55:07 +00:00
|
|
|
}
|
2001-01-24 09:53:49 +00:00
|
|
|
if (p->p_sflag & PS_ALRMPEND) {
|
|
|
|
p->p_sflag &= ~PS_ALRMPEND;
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
2000-10-06 02:20:21 +00:00
|
|
|
if (!mtx_owned(&Giant))
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
2000-10-06 02:20:21 +00:00
|
|
|
psignal(p, SIGVTALRM);
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
2000-10-06 02:20:21 +00:00
|
|
|
}
|
2001-01-24 09:53:49 +00:00
|
|
|
if (p->p_sflag & PS_PROFPEND) {
|
|
|
|
p->p_sflag &= ~PS_PROFPEND;
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
2000-10-06 02:20:21 +00:00
|
|
|
if (!mtx_owned(&Giant))
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock(&Giant);
|
2000-10-06 02:20:21 +00:00
|
|
|
psignal(p, SIGPROF);
|
2001-01-24 09:53:49 +00:00
|
|
|
} else
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
2001-01-24 09:53:49 +00:00
|
|
|
|
|
|
|
userret(p, &frame, sticks);
|
1997-04-07 07:16:06 +00:00
|
|
|
|
2001-01-24 09:53:49 +00:00
|
|
|
if (mtx_owned(&Giant))
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock(&Giant);
|
1997-04-07 07:16:06 +00:00
|
|
|
}
|
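Pieced together in source order, the code lines annotated above form the
tail of this trap-return path. A reconstruction for readability follows;
the indentation is assumed, and the opening of the first block lies
before this extract (presumably the matching virtual-timer pending
check, e.g. a PS_ALRMPEND test - an assumption, since only the closing
half of that block is visible here):

		mtx_unlock_spin(&sched_lock);
		if (!mtx_owned(&Giant))
			mtx_lock(&Giant);
		psignal(p, SIGVTALRM);
		mtx_lock_spin(&sched_lock);
	}
	if (p->p_sflag & PS_PROFPEND) {
		p->p_sflag &= ~PS_PROFPEND;
		mtx_unlock_spin(&sched_lock);
		if (!mtx_owned(&Giant))
			mtx_lock(&Giant);
		psignal(p, SIGPROF);
	} else
		mtx_unlock_spin(&sched_lock);

	userret(p, &frame, sticks);

	if (mtx_owned(&Giant))
		mtx_unlock(&Giant);
}

Note the locking pattern: sched_lock (a spin lock) is always dropped
before Giant (a sleep lock) is taken for psignal(), because sleepable
locks may not be acquired while a spin lock is held; Giant is both
acquired and finally released under an mtx_owned() guard, so it is
neither recursed on nor dropped when it was not taken.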