Commit Graph

1126 Commits

raj
05437e53d5 Rework and extend PowerPC headers definitions towards Book-E/e500 CPUs support.
Approved by:	cognet (mentor)
Obtained from:	Juniper, Semihalf
MFp4:		e500
2008-03-03 13:20:52 +00:00
raj
3dea77f93c Unify and generalize PowerPC headers, adjust AIM code accordingly.
Rework of this area is a prerequisite for importing e500 support (and
other PowerPC core variations in the future). Mainly, the following
headers are refactored so that we can cover the low-level differences between
various machines within the PowerPC architecture:

  <machine/pcpu.h>
  <machine/pcb.h>
  <machine/kdb.h>
  <machine/hid.h>
  <machine/frame.h>

Areas which use the above are adjusted and cleaned up.

Credits for this rework go to marcel@

Approved by:	cognet (mentor)
MFp4:		e500
2008-03-02 17:05:57 +00:00
jeff
ad2a31513f - Replace the old smp cpu topology specification with a new, more flexible
   tree structure that encodes the level of cache sharing and other
   properties.
 - Provide several convenience functions for creating one and two level
   cpu trees as well as a default flat topology.  The system now always
   has some topology.
 - On i386 and amd64 create a separate level in the hierarchy for HTT
   and multi-core cpus.  This will allow the scheduler to intelligently
   load balance non-uniform cores.  Presently we don't detect what level
   of the cache hierarchy is shared at each level in the topology.
 - Add a mechanism, via the kern.smp.topology tunable, for testing common
   topologies that have more information than the MD code is able to provide.
   This should be considered a debugging tool only and not a stable API.

Sponsored by:	Nokia
2008-03-02 07:58:42 +00:00
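
A minimal illustrative sketch of the kind of tree node such a topology implies;
the struct and field names below are assumptions loosely modeled on the
description above, not the committed layout:

    /* Sketch only: assumed shape of a CPU topology tree node. */
    typedef unsigned int cpumask_sketch_t;  /* stand-in for the kernel's cpumask_t */

    struct cpu_group_sketch {
        struct cpu_group_sketch *cg_parent;   /* enclosing group, NULL at the root */
        struct cpu_group_sketch *cg_child;    /* array of child groups */
        cpumask_sketch_t         cg_mask;     /* CPUs covered by this group */
        int                      cg_count;    /* number of CPUs in cg_mask */
        int                      cg_children; /* number of child groups */
        int                      cg_level;    /* shared-cache level, if detected */
        int                      cg_flags;    /* e.g. HTT or multi-core hints */
    };
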
marcel
bcd067ff4f Avoid hardcoding the kernel link address in the linker script.
Use KERNBASE instead. While here, move the text sections
forward to the beginning of the text segment.
2008-02-27 00:03:23 +00:00
raj
ec4d22c527 Teach PowerPC CPU identification routines to recognize e500 cores. Fix style
issues in this area.

Approved by:	cognet (mentor)
MFp4:		e500
2008-02-25 00:09:23 +00:00
raj
69575dab52 Let the PowerPC world optionally build with -msoft-float. For FPU-less PowerPC
variations (currently e500), this provides gcc-level FPU emulation and is an
alternative approach to the recently introduced kernel-level emulation
(FPU_EMU).

Approved by:	cognet (mentor)
MFp4:		e500
2008-02-24 19:22:53 +00:00
marcel
f0743db21b Don't define DEBUG. No debugging required.
Pointy hat: marcel
2008-02-24 17:10:30 +00:00
marcel
ded28f747f Resolve warnings exposed by LINT.
o  Put prototypes in a single header only.
o  Fix printf format specifiers.
2008-02-24 03:01:26 +00:00
marcel
bc21c9fa85 Add FPU_EMU. 2008-02-23 22:32:16 +00:00
marcel
7e57bff3e2 Add a floating-point emulator so that a single userland or single ABI
can run on processors that don't have a FPU. This is typically the
case for Book E processors. While a tuned system will probably want
to use soft-float (or use a processor that has an FPU if the usage is
FP-intensive enough), allowing hard-float on FPU-less systems gives
great portability and flexibility.

Obtained from: NetBSD
2008-02-23 20:05:26 +00:00
marcel
2c03940da7 Define the bootinfo structure for FreeBSD. It is not used on
AIM, but it's used for Book E.
2008-02-23 18:01:45 +00:00
marcel
b871fc793c Enable option WITNESS_SKIPSPIN by default. 2008-02-16 17:59:27 +00:00
marcel
06a30b50ee Remove SMP left-overs from NetBSD. 2008-02-12 20:55:51 +00:00
marcel
636c607d81 There's no need to suppress option GDB. 2008-02-12 19:38:39 +00:00
marcel
ab259d0a33 Add PIC support for IPIs. When registering an interrupt handler,
the PIC also informs the platform at which IRQ level it can start
assigning IPIs, since this can depend on the number of IRQs
supported for external interrupts.
2008-02-12 18:14:46 +00:00
julian
e106c6b62c One of my PowerBooks has this chip in it.
Confirmed by looking at NetBSD; they have also added this.
Checked by:	grehan
MFC after:	3 days
2008-01-26 05:11:09 +00:00
jhb
c7e0e41f73 Add COMPAT_FREEBSD7 and enable it in configs that have COMPAT_FREEBSD6. 2008-01-07 21:40:11 +00:00
alc
545d26e30b Add an access type parameter to pmap_enter(). It will be used to implement
superpage promotion.

Correct a style error in kmem_malloc(): pmap_enter()'s last parameter is
a Boolean.
2008-01-03 07:34:34 +00:00
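
A hedged sketch of the widened interface as a caller would see it; the exact
position of the new argument is an assumption based on the description above:

    /*
     * Sketch only: assumed before/after shape of pmap_enter().
     *
     * Old: void pmap_enter(pmap_t pm, vm_offset_t va, vm_page_t m,
     *          vm_prot_t prot, boolean_t wired);
     * New: void pmap_enter(pmap_t pm, vm_offset_t va, vm_prot_t access,
     *          vm_page_t m, vm_prot_t prot, boolean_t wired);
     *
     * A fault handler would then pass the faulting access type through,
     * e.g. pmap_enter(pmap, va, fault_type, m, prot, wired);
     */
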
alc
37cdbd87f5 Add configuration knobs for the superpage reservation system. Initially,
the reservation will only be enabled on amd64.
2007-12-27 16:45:39 +00:00
rwatson
bdee30611d Add a new 'why' argument to kdb_enter(), and a set of constants to use
for that argument.  This will allow DDB to detect the broad category of
reason why the debugger has been entered, which it can use for the
purposes of deciding which DDB script to run.

Assign approximate why values to all current consumers of the
kdb_enter() interface.
2007-12-25 17:52:02 +00:00
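
A small sketch of the new calling convention; the constant name used here is
an assumption, since the commit only says a set of constants was added:

    #include <sys/kdb.h>

    /* Old form: kdb_enter("panic");
     * New form: a broad 'why' category first, then the message.
     * KDB_WHY_PANIC is an assumed example constant. */
    static void
    enter_debugger_on_panic(void)
    {
        kdb_enter(KDB_WHY_PANIC, "panic");
    }
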
marcel
ad8ce572c1 Apply missing s/rv/res/g in previous commit. 2007-12-21 00:23:23 +00:00
jhb
de6536e34d MFamd64/ia64/i386: Only set the rman bus tags and handles in
bus_activate_resource() methods instead of splitting it up between
bus_alloc_resource() and bus_activate_resource().

Glanced at by:	marcel
2007-12-20 21:42:43 +00:00
marcel
c932130a31 Redefine bus_space_tag_t on PowerPC from a 32-bit integral type to
a pointer to struct bus_space. The structure contains function
pointers that do the actual bus space access.

The reason for this change is that previously all bus space
accesses were little endian (i.e. had an explicit byte-swap
for multi-byte accesses), because all busses on Macs are little
endian.
The upcoming support for Book E, and in particular the E500
core, requires support for big-endian busses because all
embedded peripherals are in the native byte-order.

With this change, there's no distinction between I/O port
space and memory mapped I/O. PowerPC doesn't have I/O port
space. Busses assign tags based on the byte-order only.
For that purpose, two global structures exist (bs_be_tag and
bs_le_tag), whose address can be taken to get a valid
tag.

Obtained from: Juniper, Semihalf
2007-12-19 18:00:50 +00:00
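
An illustrative sketch of the new arrangement, with names assumed from the
description above (the real struct bus_space carries many more methods):

    #include <stdint.h>

    /* Sketch only: a bus space tag becomes a pointer to a method table. */
    struct bus_space_sketch {
        uint8_t  (*bs_read_1)(void *handle, unsigned long offset);
        uint32_t (*bs_read_4)(void *handle, unsigned long offset);
        void     (*bs_write_4)(void *handle, unsigned long offset, uint32_t value);
        /* ... map/unmap, barriers and the remaining access widths ... */
    };

    /* Two global instances, one per byte order; a bus hands out a tag by
     * taking the address of the appropriate one, e.g. &bs_be_tag. */
    extern struct bus_space_sketch bs_be_tag_sketch;  /* big-endian peripherals */
    extern struct bus_space_sketch bs_le_tag_sketch;  /* little-endian (Mac) buses */
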
marcel
bdb53a4ffa Rename OEA to AIM. The former means nothing as it applies to all
processors (it's the PowerPC Operating Environment Architecture).
AIM designates the processors made by the Apple-IBM-Motorola
alliance and those we typically support.

While here, remove the NetBSD option IPKDB. It's not an option
used by us. Also, PPC_HAVE_FPU is not used by us either. Remove
that too.

Obtained from: Juniper, Semihalf
2007-12-16 00:45:56 +00:00
marcel
e1d637468a This file was repocopied to src/sys/powerpc/aim, where it will
live on -- an afterlife.
2007-12-14 23:03:48 +00:00
marcel
98c080364c Forced commit to record that this file was repocopied from
src/sys/powerpc/powerpc and modified for its new location.
2007-12-14 22:39:35 +00:00
marcel
6d9ac95311 Remove unused file. 2007-12-14 19:59:53 +00:00
jkoshy
39d4b4accf Add stubs to unbreak LINT. 2007-12-07 13:45:47 +00:00
rwatson
99285f7544 Break out stack(9) from ddb(4):
- Introduce per-architecture stack_machdep.c to hold stack_save(9).
- Introduce per-architecture machine/stack.h to capture any common
  definitions required between db_trace.c and stack_machdep.c.
- Add a new kernel option, "options STACK"; stack(9) will be built in if it
  is defined, and also if "options DDB" is defined, to provide compatibility
  with existing users of stack(9).

Add a new stack_save_td(9) function, which allows the capture of a stack trace
of another thread rather than the current thread, which the existing
stack_save(9) was limited to.  It requires that the thread be neither
swapped out nor running, which is the responsibility of the consumer to
enforce.

Update stack(9) man page.

Build tested:	amd64, arm, i386, ia64, powerpc, sparc64, sun4v
Runtime tested:	amd64 (rwatson), arm (cognet), i386 (rwatson)
2007-12-02 20:40:35 +00:00
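
A hedged sketch of a stack(9) consumer under this split; spellings are taken
from the interfaces named above and should be checked against the man page:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/stack.h>

    /* Sketch only: capture and print a thread's kernel stack. */
    static void
    dump_thread_stack(struct thread *td)
    {
        struct stack *st;

        st = stack_create();
        if (td == curthread)
            stack_save(st);        /* the pre-existing interface */
        else
            stack_save_td(st, td); /* new; td must be neither running nor
                                      swapped out (caller enforces this) */
        stack_print(st);
        stack_destroy(st);
    }
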
jasone
607f2953c0 Define atomic_readandclear_ptr. 2007-11-27 06:34:15 +00:00
jb
7cd7e3058e Implement the _long functions using u_long rather than trying to
cast to uint32_t, which is defined as unsigned int. gcc doesn't want to
consider that there might not be much difference between an int and
a long on a 32-bit architecture.
2007-11-26 05:52:45 +00:00
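
A small userland-compilable sketch of the underlying complaint: even where int
and long share the same width, their pointer types stay distinct, so building
the _long atomics on uint32_t casts draws warnings without them:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        unsigned long ul = 42;
        uint32_t *p32;

        /* Without the explicit cast this assignment is rejected:
         * "unsigned long *" and "uint32_t *" (unsigned int *) are
         * incompatible pointer types even when both are 32 bits wide. */
        p32 = (uint32_t *)&ul;
        (void)p32;
        printf("sizeof(long)=%zu sizeof(uint32_t)=%zu\n",
            sizeof(ul), sizeof(uint32_t));
        return (0);
    }
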
scottl
b607c8d8ad Extend critical section coverage in the low-level interrupt handlers to
include the ithread scheduling step.  Without this, a preemption might
occur in between the interrupt getting masked and the ithread getting
scheduled.  Since the interrupt handler runs in the context of curthread,
the scheudler might see it as having a such a low priority on a busy system
that it doesn't get to run for a _long_ time, leaving the interrupt stranded
in a disabled state.  The only way that the preemption can happen is by
a fast/filter handler triggering a scheduling event earlier in the handler,
so this problem can only happen for cases where an interrupt is being
shared by both a fast/filter handler and an ithread handler.  Unfortunately,
it seems to be common for this sharing to happen with network and USB
devices, for example.  This fixes many of the mysterious TCP session
timeouts and NIC watchdogs that were being reported.  Many thanks to Sam
Leffler for getting to the bottom of this problem.

Reviewed by: jhb, jeff, silby
2007-11-21 04:03:51 +00:00
jb
c0f07cdcc9 Define atomic_cmpset_acq_long and atomic_cmpset_rel_long so that
they use casts rather than just assuming that the compiler will DTRT
without complaining.
2007-11-19 03:16:16 +00:00
alc
d1ab859bdc Prevent the leakage of wired pages in the following circumstances:
First, a file is mmap(2)ed and then mlock(2)ed.  Later, it is truncated.
Under "normal" circumstances, i.e., when the file is not mlock(2)ed, the
pages beyond the EOF are unmapped and freed.  However, when the file is
mlock(2)ed, the pages beyond the EOF are unmapped but not freed because
they have a non-zero wire count.  This can be a mistake.  Specifically,
it is a mistake if the sole reason why the pages are wired is because of
wired, managed mappings.  Previously, unmapping the pages destroyed these
wired, managed mappings, but did not reduce the pages' wire count.
Consequently, when the file is unmapped, the pages are not unwired
because the wired mapping has been destroyed.  Moreover, when the vm
object is finally destroyed, the pages are leaked because they are still
wired.  The fix is to reduce the pages' wire count by the number of
wired, managed mappings destroyed.  To do this, I introduce a new pmap
function pmap_page_wired_mappings() that returns the number of managed
mappings to the given physical page that are wired, and I use this
function in vm_object_page_remove().

Reviewed by: tegge
MFC after: 6 weeks
2007-11-17 22:52:29 +00:00
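
A rough sketch of the call pattern the fix describes; this is illustrative
only, with locking and bounds checks elided and the helper name taken from
the text above:

    /* Sketch only: new helper and its intended use when removing pages.
     *
     *     int pmap_page_wired_mappings(vm_page_t m);
     *         returns the number of wired, managed mappings of the page
     *
     * In vm_object_page_remove()-style code:
     *
     *     wirings = pmap_page_wired_mappings(m);
     *     pmap_remove_all(m);                // destroys those mappings
     *     if (wirings != 0)
     *         m->wire_count -= wirings;      // page can now be freed later
     */
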
marcel
1e7c4f0a3f o Rename cpu_thread_setup() to cpu_thread_alloc() to better
communicate that it relates to (is called by) thread_alloc().
o  Add cpu_thread_free() which is called from thread_free()
   to counter-act cpu_thread_alloc().

i386:	Have cpu_thread_free() call cpu_thread_clean() to
	preserve behaviour.
ia64:	Have cpu_thread_free() call mtx_destroy() for the
	mutex initialized in cpu_thread_alloc().

PR: ia64/118024
2007-11-14 20:21:54 +00:00
julian
7ee6259be7 A bunch more files that should probably print out a thread name
instead of a process name.
2007-11-14 06:51:33 +00:00
julian
b2732e0c22 Generally we are interested in what thread did something, as
opposed to what process. Since threads by default have the name of the
process unless overwritten with more useful information, just print the
thread name instead.
2007-11-14 06:21:24 +00:00
grehan
243c15922e Split decr_init() into two, with the section that reads the timebase
frequency from OpenFirmware moved out and into a routine that is called
from cpu_startup().

This allows correct reporting of the CPU clock speed when printing out
CPU information at boot time.

Reported by:	numerous
Reviewed by:	marcel
MFC after:	1 day
2007-11-13 15:47:55 +00:00
kib
9ae733819b Fix for the panic("vm_thread_new: kstack allocation failed") and
silent NULL pointer dereference in the i386 and sparc64 pmap_pinit()
when the kmem_alloc_nofault() failed to allocate address space. Both
functions now return an error instead of panicking or dereferencing NULL.

As a consequence, vmspace_exec() and vmspace_unshare() return an errno
value. A struct vmspace arg was added to vm_forkproc() to avoid dealing
with a failed allocation when most of the fork1() job is already done.

The kernel stack for the thread is now set up in thread_alloc(),
which itself may return NULL. Also, allocation of the first process
thread is performed in fork1() to properly deal with stack
allocation failure. proc_linkup() is separated into proc_linkup(),
called from fork1(), and proc_linkup0(), which is used to set up the
kernel process (formerly known as the swapper).

In collaboration with:	Peter Holm
Reviewed by:	jhb
2007-11-05 11:36:16 +00:00
grehan
7c82e0520d Cut over to ULE on PowerPC
kern/sched_ule.c - Add __powerpc__ to the list of supported architectures

powerpc/conf/GENERIC - Swap SCHED_4BSD with SCHED_ULE

powerpc/powerpc/genassym.c - Export TD_LOCK field of thread struct

powerpc/powerpc/swtch.S - Handle new 3rd parameter to cpu_switch() by
 updating the old thread's lock. Note: uniprocessor-only, will require
 modification for MP support.

powerpc/powerpc/vm_machdep.c - Set the 3rd param of cpu_switch() to the
old thread's own lock, making the call a no-op.

Reviewed by:	marcel, jeffr (slightly older version)
2007-10-23 00:52:25 +00:00
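
A minimal sketch of the assumed new cpu_switch() calling convention referenced
above:

    struct thread;
    struct mtx;

    /* Sketch only: the third argument is the lock handed back to the old
     * thread; passing the old thread's own lock, as vm_machdep.c does here,
     * keeps the store a no-op on a uniprocessor kernel. */
    void cpu_switch(struct thread *old, struct thread *new, struct mtx *mtx);
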
marius
d60b8a3096 Make the PCI code aware of PCI domains (aka PCI segments) so we can
support machines that have multiple, independently numbered PCI domains
and that don't support reenumeration, without ambiguity amongst the
devices as seen by the OS and represented by PCI location strings.
This includes introducing a function pci_find_dbsf(9) which works
like pci_find_bsf(9) but additionally takes a domain number argument
and limiting pci_find_bsf(9) to only search devices in domain 0 (the
only domain in single-domain systems). Bge(4) and ofw_pcibus(4) are
changed to use pci_find_dbsf(9) instead of pci_find_bsf(9) in order
to no longer report false positives when searching for siblings and
dupe devices in the same domain respectively.
Along with this change the sole host-PCI bridge driver converted to
actually make use of PCI domain support is uninorth(4); the others
continue to use domain 0 only for now and need to be converted as
appropriate later on.
Note that this means that the format of the location strings as used
by pciconf(8) has been changed and that consumers of <sys/pciio.h>
potentially need to be recompiled.

Suggested by:	jhb
Reviewed by:	grehan, jhb, marcel
Approved by:	re (kensmith), jhb (PCI maintainer hat)
2007-09-30 11:05:18 +00:00
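
A hedged sketch of a lookup under the new domain-aware interface; the argument
order is an assumption based on the pci_find_dbsf(9)/pci_find_bsf(9) naming
above:

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <dev/pci/pcivar.h>

    /* Sketch only: find a PCI function by domain/bus/slot/function. */
    static device_t
    find_example_pci_device(void)
    {
        /* Old, domain-0-only form: pci_find_bsf(bus, slot, func). */
        return (pci_find_dbsf(1 /* domain */, 0 /* bus */,
            3 /* slot */, 0 /* func */));
    }
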
marius
4894db0d57 o Revert the part of if_gem.c rev. 1.35 which added a call to gem_stop()
to gem_attach() as the former accesses softc members not yet initialized
  at that time and gem_reset() actually is enough to stop the chip. [1]
o Revise the use of gem_bitwait(); add bus_barrier() calls before calling
  gem_bitwait() to ensure the respective bit has been written before we
  start polling on it, and poll for the right bits to change, e.g. even
  though we only reset RX we have to actually wait for both GEM_RESET_RX
  and GEM_RESET_TX to clear. Add some additional gem_bitwait() calls in
  places we've been missing them according to the GEM documentation.
  Along with this, some excessive DELAYs, which probably only were added
  because of bugs in gem_bitwait() and its use in the first place, as
  well as part of a gem_bitwait() reimplementation in gem_reset_tx(),
  were removed.
o Add gem_reset_rxdma() and use it to deal with GEM_MAC_RX_OVERFLOW errors
  more gracefully as unlike gem_init_locked() it resets the RX DMA engine
  only, causing no link loss and not clearing the FIFOs. Also use it to
  deal with GEM_INTR_RX_TAG_ERR errors, which previously were unhandled.
  This was based on information obtained from the Linux GEM and OpenSolaris
  ERI drivers.
o Turn on workarounds for silicon bugs in the Apple GMAC variants.
  This was based on information obtained from the Darwin GMAC and Linux GEM
  drivers.
o Turn on "infinite" (i.e. maximum 31 * 64 bytes in length) DMA bursts.
  This greatly improves RX performance in particular.
o Optimize the RX path, this consists of:
  - kicking the receiver as soon as we've a spare descriptor in gem_rint()
    again instead of just once after all the ready ones have been handled;
  - kicking the receiver the right way, i.e. as outlined in the GEM
    documentation in batches of 4 and by pointing it to the descriptor
    after the last valid one;
  - calling gem_rint() before gem_tint() in gem_intr() as gem_tint() may
    take quite a while;
  - doubling the size of the RX ring to 256 descriptors.
  Overall the RX performance of a GEM in a 1GHz Sun Fire V210 was improved
  from ~100Mbit/s to ~850Mbit/s.
o In gem_add_rxbuf() don't assign the newly allocated mbuf to rxs_mbuf
  before calling bus_dmamap_load_mbuf_sg(); otherwise, if
  bus_dmamap_load_mbuf_sg() fails we'll free the newly allocated mbuf and,
  unable to recycle the previous one, hit a NULL pointer dereference instead.
o In gem_init_locked() honor the return value of gem_meminit().
o Simplify gem_ringsize() and don't return garbage in the default case.
  Based on OpenBSD.
o Don't turn on MAC control, MIF and PCS interrupts unless GEM_DEBUG is
  defined as we don't need/use these interrupts for operation.
o In gem_start_locked() sync the DMA maps of the descriptor rings before
  every kick of the transmitter and not just once after enqueuing all
  packets as the NIC might instantly start transmitting after we kicked
  it the first time.
o Keep track of the link state and use it to enable or disable the MAC
  in gem_mii_statchg() accordingly as well as to return early from
  gem_start_locked() in case the link is down. [3]
o Initialize the maximum frame size to a sane value.
o In gem_mii_statchg() enable carrier extension if appropriate.
o Increment if_ierrors in case of a GEM_MAC_RX_OVERFLOW error and in
  gem_eint(). [3]
o Handle IFF_ALLMULTI correctly; don't set it if we've turned promiscuous
  group mode on and don't clear the flag if we've disabled promiscuous
  group mode (these were mostly NOPs though). [2]
o Let gem_eint() also report GEM_INTR_PERR errors.
o Move setting sc_variant from gem_pci_probe() to gem_pci_attach() as
  device probe methods are not supposed to touch the softc.
o Collapse sc_inited and sc_pci into bits for sc_flags.
o Add CTASSERTs ensuring that GEM_NRXDESC and GEM_NTXDESC are set to
  legal values.
o Correctly set up for 802.3x flow control, though #ifdef out the code
  that actually enables it as this needs more testing and mainly a proper
  framework to support it.
o Correct and add some conversions from hard-coded function names to
  __func__ which were borked or forgotten in if_gem.c rev. 1.42.
o Use PCIR_BAR instead of a homegrown macro.
o Replace sc_enaddr[6] with sc_enaddr[ETHER_ADDR_LEN].
o In gem_pci_attach(), in case attaching fails, release the resources in
  the opposite order they were allocated.
o Make gem_reset() static to if_gem.c as it's not needed outside that
  module.
o Remove the GEM_GIGABIT flag and the associated code; GEM_GIGABIT was
  never set and the associated code was in the wrong place.
o Remove sc_mif_config; it was only used to cache the contents of the
  respective register within gem_attach().
o Remove the #ifdef'ed out NetBSD/OpenBSD code for establishing a suspend
  hook as it will never be used on FreeBSD.
o Also probe Apple Intrepid 2 GMAC and Apple Shasta GMAC, add support for
  Apple K2 GMAC. Based on OpenBSD.
o Add support for Sun GBE/P cards, or in other words actually add support
  for cards based on GEM to gem(4). This mainly consists of adding support
  for the TBI of these chips. Along with this the PHY selection code was
  rewritten to hardcode the PHY number for certain configurations as, for
  example, the PHY of the on-board ERI of a Blade 1000 shows up twice,
  causing no link as the second incarnation is isolated.
  These changes were ported from OpenBSD with some additional improvements
  and modulo some bugs.
o Add code to if_gem_pci.c allowing the MAC address to be read from the VPD
  on systems without Open Firmware.
  This is an improved version of my variant of the respective code in
  if_hme_pci.c.
o Now that gem(4) is MI, enable it for all archs.

Pointed out by:	yongari [1]
Suggested by:	rwatson [2], yongari [3]
Tested on:	i386 (GEM), powerpc (GMACs by marcel and yongari),
		sparc64 (ERI and GEM)
Reviewed by:	yongari
Approved by:	re (kensmith)
2007-09-26 21:14:18 +00:00
brueffer
26461bf019 Use the correct expanded name for SCTP.
PR:		116496
Submitted by:	koitsu
Reviewed by:	rrs
Approved by:	re (kensmith)
2007-09-26 20:05:07 +00:00
alc
d1bce06c64 Change the management of cached pages (PQ_CACHE) in two fundamental
ways:

(1) Cached pages are no longer kept in the object's resident page
splay tree and memq.  Instead, they are kept in a separate per-object
splay tree of cached pages.  However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock.  Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.

This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE).  The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held.  Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.

Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case.  Cached pages
are reclaimed far, far more often than they are reactivated.  Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.

(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.

Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated.  Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page.  Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.

Discussed with: many over the course of the summer, including jeff@,
   Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
alc
20b10da706 It has been observed on the mailing lists that the different categories
of pages don't sum to anywhere near the total number of pages on amd64.
This is for the most part because uma_small_alloc() pages have never been
counted as wired pages, like their kmem_malloc() brethren.  They should
be.  This change fixes that.

It is no longer necessary for the page queues lock to be held to free
pages allocated by uma_small_alloc().  I removed the acquisition and
release of the page queues lock from uma_small_free() on amd64 and ia64
weeks ago.  This patch updates the other architectures that have
uma_small_alloc() and uma_small_free().

Approved by: re (kensmith)
2007-09-15 18:47:02 +00:00
marcel
b031fef0fe Revamp the interrupt handling in support of INTR_FILTER. This includes:
o  Revamp the PIC I/F to only abstract the PIC hardware. The
   resource handling has been moved to nexus, where it belongs.
o  Add EOI and MASK+EOI methods to the PIC I/F in support of
   INTR_FILTER.
o  With the allocation of interrupt resources and setup of
   interrupt handlers in the common platform code we can delay
   talking to the PIC hardware until after enumeration of all devices.
   Introduce a call to powerpc_intr_enable() in configure_final()
   to achieve that and have powerpc_setup_intr() only program the
   PIC when !cold.
o  As a consequence of the above, remove all early_attach() glue
   from the OpenPIC and Heathrow PIC drivers and have them
   register themselves when they're found during enumeration.
o  Decouple the interrupt vector from the interrupt request line.
   Allocate vectors increasingly so that they can be used for
   the intrcnt index as well. Extend the Heathrow PIC driver to
   translate between IRQ and vector. The OpenPIC driver already
   has the support for vectors in hardware.

Approved by: re (blanket)
2007-08-11 19:25:32 +00:00
marcel
8bda20dd9d Re-enable external interrupts for faults, traps and syscalls.
Approved by: re (blanket)
2007-08-08 01:19:12 +00:00
marcel
d6e4edefa7 Eliminate <machine/interruptvar.h> as it has only a single
prototype. In the future that prototype will not be needed
at all anyway, but for now it's moved to intr_machdep.h.

Approved by: re (blanket)
2007-08-07 23:33:35 +00:00
marcel
7e2354fa52 Remove redundant prototype.
Approved by: re (blanket)
2007-08-07 18:40:02 +00:00
marcel
2cb62192de Add prototype for trap().
Approved by: re (blanket)
2007-08-07 18:39:28 +00:00