Commit Graph

204 Commits

Author SHA1 Message Date
imp
ea8586b356 Per TRB vote: restore the acquire_timer0 and associated goo. This will
be gone in FreeBSD 6, so put BURN_BRIDGES around it.  The TRB also
felt that if something better comes along sooner, it can be used to
replace this code.

Delayed by: BSDcon and subsequent disk crash.
2003-09-24 15:33:33 +00:00
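BURN_BRIDGES, for context, is the kernel option that compiles out interfaces
already scheduled for deletion. A minimal sketch of how the restored code
would be bracketed, with approximated signatures rather than the committed diff:

    #ifndef BURN_BRIDGES
    /*
     * Deprecated i8254 timer0 interface, restored per the TRB vote.
     * Building with "options BURN_BRIDGES" omits it, flushing out any
     * remaining consumers before FreeBSD 6 deletes it for good.
     */
    int acquire_timer0(int rate, void (*function)(struct clockframe *frame));
    int release_timer0(void);
    #endif /* !BURN_BRIDGES */
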
bde
bc4334bd19 clock.c:
Quick fix for calling DELAY() for ddb input in some (atkbd-based)
console drivers.  ddb must not use any normal locks, but DELAY()
normally calls getit() which needs clock_lock.  One problem with using
normal locks in ddb is that deadlock is possible, but deadlock on
clock_lock is unlikely because clock_lock is bogusly recursive,
apparently just to hide the problem of ddb using it.  The i8254 clock
hardware has mostly write-only registers so it is important for it to
use a lock that gives exclusive access.  (atkbd hardware is also
unfriendly to reentrant software but that problem is more local and
already solved.)  I mostly saw the symptoms of the bug caused by
unlocking in getit() running cpu_unpend().  cpu_unpend() should not
be called while in ddb and Debugger() calls for failing assertions
about this caused a breakpoint within ddb.

ddb must also not call getit() because ddb may be being used to step
through clock initialization code that has stopped or otherwise mangled
the clock.  If the clock is stopped, then getit() always returns the
same value and DELAY() takes forever if it trusts getit().

The quick fix is to implement DELAY(n) as (n * timer_freq / 1000000)
inb(0x84)'s if ddb is active.

machdep.c:
Don't permit recursion on clock_lock.
2003-09-07 14:23:08 +00:00
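A sketch of the quick fix described above, assuming the db_active flag and the
inb() primitive from the i386 headers; simplified, not the committed code:

    void
    DELAY(int n)
    {
            if (db_active) {
                    /*
                     * ddb may have stopped or mangled the i8254, so getit()
                     * cannot be trusted (and clock_lock must not be taken).
                     * Burn time with dummy reads of port 0x84 instead,
                     * sized at n * timer_freq / 1000000 reads.
                     */
                    u_int ticks = (u_int)((uint64_t)n * timer_freq / 1000000);

                    while (ticks-- > 0)
                            (void)inb(0x84);
                    return;
            }
            /* ... normal path: lock clock_lock, count down via getit() ... */
    }
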
phk
34014d5261 Give timecounters a numeric quality field.
A timecounter will be selected when registered if its quality is
not negative and not less than the current timecounter's.

Add a sysctl to report all available timecounters and their qualities.

Give the dummy timecounter a solid negative quality of minus a million.

Give the i8254 a quality of zero and the ACPI timer 1000.

The TSC gets 800, unless APM or SMP forces it negative.

Other timecounters default to zero quality and thereby retain current
selection behaviour.
2003-08-16 08:23:53 +00:00
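A sketch of the new selection rule in the style of kern_tc.c; tc_quality and
the timecounter global match the message, the rest is assumed:

    void
    tc_init(struct timecounter *tc)
    {
            /* ... hook tc into the list of available timecounters ... */

            /*
             * Auto-select at registration: never pick a negative quality
             * (e.g. the dummy's -1000000), and only take over if at least
             * as good as the incumbent.
             */
            if (tc->tc_quality >= 0 &&
                tc->tc_quality >= timecounter->tc_quality)
                    timecounter = tc;
    }
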
phk
3ccfe6b622 Remove acquire_timer0() and release_timer0() and related stuff. 2003-08-15 15:50:49 +00:00
phk
3bd92bdf0a Don't initialize a TSC timecounter until we know if it is broken or not. 2003-08-06 15:05:27 +00:00
obrien
999f42eba3 Use __FBSDID(). 2003-06-02 16:32:55 +00:00
phk
0eb6112a0e Don't rely on a boolean expression evaluating to 1 or 0 by default.
Found by:       FlexeLint
2003-05-31 20:23:46 +00:00
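A hypothetical example of the pattern being fixed: a bitwise test assigned to
a flag yields the raw bit value rather than 0 or 1, so the comparison makes
the normalization explicit.

    /* Before: tsc_present gets 0 or CPUID_TSC (0x10), not 0 or 1. */
    tsc_present = cpu_feature & CPUID_TSC;

    /* After: explicitly 0 or 1, independent of the bit's position. */
    tsc_present = (cpu_feature & CPUID_TSC) != 0;
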
kan
9468fdaf14 Deprecate machine/limits.h in favor of new sys/limits.h.
Change all in-tree consumers to include <sys/limits.h>

Discussed on:	standards@
Partially submitted by: Craig Rodrigues <rodrigc@attbi.com>
2003-04-29 13:36:06 +00:00
mdodd
803a8a66ce Use repo-copied files in sys/i386/bios. 2003-03-24 19:14:46 +00:00
phk
e059b79437 Including <sys/stdint.h> is (almost?) universally done only to be able
to use %j in printfs, so put a nested include in <sys/systm.h>, where the
printf prototype lives, and save everybody else the trouble.
2003-03-18 08:45:25 +00:00
phk
2a777ce2e1 Switch to using the TSC code in i386/i386/tsc.c. 2003-02-11 11:43:25 +00:00
phk
3692879cc8 Split the global timezone structure into two integer fields to
prevent the compiler from optimizing assignments into byte-copy
operations which might make access to the individual fields non-atomic.

Use the individual fields throughout, and don't bother locking them with
Giant: it is no longer needed.

Inspired by:    tjr
2003-02-03 19:49:35 +00:00
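The shape of the change, sketched with field names borrowed from struct
timezone: each plain int is updated with a single word-sized store, where a
struct assignment may be compiled into a byte-wise copy.

    /* Before: one global struct, assigned (and copied) wholesale. */
    struct timezone tz;

    /* After: two independent ints, each written atomically on i386. */
    int tz_minuteswest;
    int tz_dsttime;
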
jake
6b3763a173 Split statclock into statclock and profclock, and made the method for
driving statclock based on profhz when profiling is enabled machine-dependent,
since most platforms don't use this anyway.  This removes the need for
statclock_process, whose
only purpose was to subdivide profhz, and gets the profiling clock running
outside of sched_lock on platforms that implement suswintr.
Also changed the interface for starting and stopping the profiling clock to
do just that, instead of changing the rate of statclock, since they can now
be separate.

Reviewed by:	jhb, tmm
Tested on:	i386, sparc64
2003-02-03 17:53:15 +00:00
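One plausible shape of the MD driver after the split; the stat_divider
counter is an invented name for illustration, not the committed code:

    /* MD RTC interrupt handler running at profhz while profiling: */
    static int stat_divider;

    profclock(frame);                       /* every tick, at profhz */
    if (--stat_divider <= 0) {
            stat_divider = profhz / stathz; /* subdivide down to stathz */
            statclock(frame);
    }
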
phk
36fe9fb493 Make tsc_freq a 64-bit quantity.
Inspired by:    http://www.theinquirer.net/?article=7481
2003-01-29 11:36:39 +00:00
phk
7beb8a484e Use the correct value when writing the Day Of Week byte in the CMOS.
The correct range is [1...7] with Sunday=1, but we have been writing
[0...6] with Sunday=0.

The Soekris computers flagged the zero and zapped the date, so if you
rebooted your Soekris on a Sunday, it would come up with a wrong
date.

Bruce has a more extensive rework of this code, but we will stick with
the minimalist fix for now.

Spotted by:	Soren Kristensen <soren@soekris.com>
Thanks to:	Michael Sierchio <kudzu@tenebras.com>.
Confirmed by:	bde
Approved by:	re
2002-12-04 13:46:49 +00:00
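The fix in miniature, assuming the writertc() helper and an RTC_WDAY-style
register index (name assumed); the MC146818 defines the day-of-week byte as
1..7 with Sunday = 1.

    /* Before: wrote 0..6 with Sunday = 0, outside the documented range. */
    writertc(RTC_WDAY, day_of_week);

    /* After: write 1..7 with Sunday = 1, as the datasheet specifies. */
    writertc(RTC_WDAY, day_of_week + 1);
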
iwasaki
787db7a9c1 1. Fix a comment. Locking _is_ needed (but not done).
2. Update a comment.  We now restore much more than RTC updates and
   interrupts.
3. Order change.  Stop interrupts by writing to RTC_STATUSB,
   restore rate bits for the interrupts by writing to RTC_STATUSA,
   then enable interrupts again.
   This seems to be done perfectly backwards in startrtclock().
   Otherwise, the idea for this change was obtained from
   startrtclock().
4. Don't stop the clock (RTCSB_HALT).  We only program some control bits
   and don't want to stop the clock.
5. (Not really related.)  Add caveats to the comment about timer_restore().
   The update is non-atomic since locking is not done.

On locking:
6. rtcin() and writertc() are locked adequately by splhigh() in RELENG_4,
   but this locking is null in -current.
7. Doing things in the correct order in (3) combined with (6) is probably
   enough locking for rtcrestore() in RELENG_4.  In -current, the
   writertc() calls race with rtcintr() unless the BIOS disables RTC interrupts.

Submitted by:	bde (including commit message)
MFC after:	1 week
2002-10-17 13:55:39 +00:00
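The ordering from point 3, sketched with writertc() and the status-register
names from the i386 RTC code; rtc_statusa/rtc_statusb are the saved register
images.

    /* 1. Stop RTC interrupt generation (interrupt enables live in B). */
    writertc(RTC_STATUSB, RTCSB_24HR);
    /* 2. Restore the rate-select bits while interrupts are off. */
    writertc(RTC_STATUSA, rtc_statusa);
    /* 3. Re-enable interrupts with the saved status B value. */
    writertc(RTC_STATUSB, rtc_statusb);
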
phk
9012cc741f Fix a 3-year-old oversight: Remove the #ifdef/#endif pair now that there
is nothing between them anymore.

Spotted by:	peter.
2002-09-21 07:59:06 +00:00
iwasaki
c30c4f6198 Restore status register A of RTC at resume time.
This should fix the 'too many RTC interrupts and statclock seems
broken after resume' problem.

MFC after:	1 week
2002-09-18 07:34:04 +00:00
mp
c7f81d7ebd Clock frequencies reported by sysctl should be unsigned values. Discovered
when machdep.tsc_freq returned a negative number on a 2.2GHz Xeon.

Submitted by:	Brian Harrison <bharrison@ironport.com>
Reviewed by:	phk
MFC after:	1 week
2002-06-22 16:30:18 +00:00
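Why the value went negative: 2.2 GHz is 2,200,000,000 Hz, which exceeds
INT32_MAX (2,147,483,647), so a signed 32-bit sysctl wraps it. A standalone
illustration:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            uint32_t tsc_freq = 2200000000u;        /* 2.2 GHz Xeon */

            printf("signed:   %d\n", (int32_t)tsc_freq);    /* -2094967296 */
            printf("unsigned: %u\n", tsc_freq);             /* 2200000000 */
            return (0);
    }
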
phk
26ffc19d1e Don't export timecounter structures under the debug. sysctl tree; they
contain no truly interesting data anymore.
2002-04-30 19:34:31 +00:00
phk
f227fb83e6 Remove the tc_update() function. Any frequency change to the
timecounter will be used starting at the next second, which is
good enough for sysctl purposes.  If better adjustment is needed
the NTP PLL should be used.
2002-04-26 10:06:26 +00:00
dillon
dc5aafeb94 Compromise for critical*()/cpu_critical*() recommit. Clean up the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Clean up the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
alfred
728484a745 Remove __P. 2002-03-20 07:51:46 +00:00
dillon
996781f17a Revert last commit temporarily due to whining on the lists. 2002-02-26 20:33:41 +00:00
dillon
57b097e18c STAGE-1 of 3 commit - allow (but do not require) interrupts to remain
enabled in critical sections and streamline critical_enter() and
critical_exit().

This commit allows an architecture to leave interrupts enabled inside
critical sections if it so wishes.  Architectures that do not wish to do
this are not affected by this change.

This commit implements the feature for the I386 architecture and provides
a sysctl, debug.critical_mode, which defaults to 1 (use the feature).  For
now you can turn the sysctl on and off at any time in order to test the
architectural changes or track down bugs.

This commit is just the first stage.  Some areas of the code, specifically
the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will
be cleaned up in the STAGE-2 commit when the critical_*() functions are
moved entirely into MD files.

The following changes have been made:

	* critical_enter() and critical_exit() for I386 now simply increment
	  and decrement curthread->td_critnest.  They no longer disable
	  hard interrupts.  When critical_exit() decrements the counter to
	  0 it effectively calls a routine to deal with whatever interrupts
	  were deferred during the time the code was operating in a critical
	  section.

	  Other architectures are unaffected.

	* fork_exit() has been conditionalized to remove MD assumptions for
	  the new code.  Old code will still use the old MD assumptions
	  in regards to hard interrupt disablement.  In STAGE-2 this will
	  be turned into a subroutine call into MD code rather than hardcoded
	  in MI code.

	  The new code places the burden of entering the critical section
	  in the trampoline code where it belongs.

	* I386: interrupts are now enabled while we are in a critical section.
	  The interrupt vector code has been adjusted to deal with the fact.
	  If it detects that we are in a critical section it currently defers
	  the interrupt by adding the appropriate bit to an interrupt mask.

	* In order to accomplish the deferral, icu_lock is required.  This
	  is i386-specific.  Thus icu_lock can only be obtained by mainline
	  i386 code while interrupts are hard disabled.  This change has been
	  made.

	* Because interrupts may or may not be hard disabled during a
	  context switch, cpu_switch() can no longer simply assume that
	  PSL_I will be in a consistent state.  Therefore, it now saves and
	  restores eflags.

	* FAST INTERRUPT PROVISION.  Fast interrupts are currently deferred.
	  The intention is to eventually allow them to operate either while
	  we are in a critical section or, if we are able to restrict the
	  use of sched_lock, while we are not holding the sched_lock.

	* ICU and APIC vector assembly for I386 cleaned up.  The ICU code
	  has been cleaned up to match the APIC code in regards to format
	  and macro availability.  Additionally, the code has been adjusted
	  to deal with deferred interrupts.

	* Deferred interrupts use a per-cpu boolean int_pending, and
	  masks ipending, spending, and fpending.  Being per-cpu variables,
	  it is not currently necessary to lock bus cycles modifying them.

	  Note that the same mechanism will enable preemption to be
	  incorporated as a true software interrupt without having to
	  further hack up the critical nesting code.

	* Note: the old critical_enter() code in kern/kern_switch.c is
	  currently #ifdef to be compatible with both the old and new
	  methodology.  In STAGE-2 it will be moved entirely to MD code.

Performance issues:

	One of the purposes of this commit is to enhance critical section
	performance, specifically to greatly reduce bus overhead to allow
	the critical section code to be used to protect per-cpu caches.
	These caches, such as Jeff's slab allocator work, can potentially
	operate very quickly making the effective savings of the new
	critical section code's performance very significant.

	The second purpose of this commit is to allow architectures to
	enable certain interrupts while in a critical section.  Specifically,
	the intention is to eventually allow certain FAST interrupts to
	operate rather than defer.

	The third purpose of this commit is to begin to clean up the
	critical_enter()/critical_exit()/cpu_critical_enter()/
	cpu_critical_exit() API which currently has serious cross pollution
	in MI code (in fork_exit() and ast() for example).

	The fourth purpose of this commit is to provide a framework that
	allows kernel-preempting software interrupts to be implemented
	cleanly.  This is currently used for two forward interrupts in I386.
	Other architectures will have the choice of using this infrastructure
	or building the functionality directly into critical_enter()/
	critical_exit().

	Finally, this commit is designed to greatly improve the flexibility
	of various architectures to manage critical section handling,
	software interrupts, preemption, and other highly integrated
	architecture-specific details.
2002-02-26 17:06:21 +00:00
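The core of the i386 scheme, sketched from the message; unpend() stands in
for the MD routine that replays deferred interrupts:

    void
    critical_enter(void)
    {
            /* No cli: interrupts stay hard-enabled inside the section. */
            curthread->td_critnest++;
    }

    void
    critical_exit(void)
    {
            if (--curthread->td_critnest == 0 && PCPU_GET(int_pending))
                    unpend();       /* run interrupts deferred meanwhile */
    }
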
bde
3371114993 Don't include <isa/isavar.h> or compile code depending on it when isa
is not configured.  Including <isa/isavar.h> when it is not used is
harmful as well as bogus, since it includes "isa_if.h" which is not
generated when isa is not configured.

This was fixed in 1999 but was broken by unconditionalizing PNPBIOS.
2002-01-30 12:41:12 +00:00
jhb
2463f40fc3 Introduce a standard name for the lock protecting an interrupt controller
and its associated state variables: icu_lock with the name "icu".  This
renames the imen_mtx for x86 SMP, but also uses the lock to protect
access to the 8259 PIC on x86 UP.  This also adds an appropriate lock to
the various Alpha chipsets which fixes problems with Alpha SMP machines
dropping interrupts with an SMP kernel.
2001-12-20 23:48:31 +00:00
iwasaki
f1842a13d8 Some fixes for the recent apm module changes.
 - The apm loadable module can now inform other kernel components of its
   existence (e.g. i386/isa/clock.c:startrtclock()'s TSC hack).
 - Exchange the priority of SI_SUB_CPU and SI_SUB_KLD for the above purpose.
 - Add a simple arbitration mechanism for APM vs. ACPI.  This prevents
   the kernel from enabling both of them.
 - Remove obsolete `#ifdef DEV_APM' related code.
 - Add an abstracted interface for power management operations.  Public
   apm(4) functions, such as apm_suspend(), should be replaced by the new
   interfaces.  Currently only power_pm_suspend (the successor of
   apm_suspend) is implemented.

Reviewed by:	peter, arch@ and audit@
2001-11-01 16:34:07 +00:00
robert
04f13118b9 Remove an unneeded variable declaration and statement.
Approved by:	jake
2001-10-09 16:06:28 +00:00
iwasaki
878a79c3e6 Reenable RTC interrupts after wakeup. Some laptops have a problem
with system statistics monitoring tools (such as systat, vmstat...)
because RTC interrupt generation is stopped.
Restore all the timers (RTC and i8254) atomically.

Reviewed by:	bde
MFC after:	1 week
2001-09-04 16:02:06 +00:00
msmith
d52fd88ca3 Add ACPI attachments. 2001-08-30 09:17:03 +00:00
jhb
3fbeaa9056 Remove unneeded includes of sys/ipl.h and machine/ipl.h. 2001-05-15 23:22:29 +00:00
jhb
82ea013b77 Add in a missing call to forward_hardclock() in the SMP case.
Submitted by:	bde
2001-04-28 01:37:44 +00:00
jhb
8bfdafc934 Overhaul of the SMP code. Several portions of the SMP kernel support have
been made machine independent and various other adjustments have been made
to support Alpha SMP.

- It splits the per-process portions of hardclock() and statclock() off
  into hardclock_process() and statclock_process() respectively.  hardclock()
  and statclock() call the *_process() functions for the current process so
  that UP systems will run as before.  For SMP systems, it is simply necessary
  to ensure that all other processors execute the *_process() functions when the
  main clock functions are triggered on one CPU by an interrupt.  For the alpha
  4100, clock interrupts are delivered in a staggered broadcast fashion, so
  we simply call hardclock/statclock on the boot CPU and call the *_process()
  functions on the secondaries.  For x86, we call statclock and hardclock as
  usual and then call forward_hardclock/statclock in the MD code to send an IPI
  to cause the APs to execute forward_hardclock/statclock, which then call the
  *_process() functions.
- forward_signal() and forward_roundrobin() have been reworked to be MI and to
  involve less hackery.  Now the cpu doing the forward sets any flags, etc. and
  sends a very simple IPI_AST to the other cpu(s).  AST IPIs now just basically
  return so that they can execute ast() and don't bother with setting the
  astpending or needresched flags themselves.  This also removes the loop in
  forward_signal() as sched_lock closes the race condition that the loop worked
  around.
- need_resched(), resched_wanted() and clear_resched() have been changed to take
  a process to act on rather than assuming curproc so that they can be used to
  implement forward_roundrobin() as described above.
- Various other SMP variables have been moved to a MI subr_smp.c and a new
  header sys/smp.h declares MI SMP variables and API's.   The IPI API's from
  machine/ipl.h have moved to machine/smp.h which is included by sys/smp.h.
- The globaldata_register() and globaldata_find() functions as well as the
  SLIST of globaldata structures has become MI and moved into subr_smp.c.
  Also, the globaldata list is only available if SMP support is compiled in.

Reviewed by:	jake, peter
Looked over by:	eivind
2001-04-27 19:28:25 +00:00
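In outline, the x86 side of the clock forwarding described above (sketch, not
the committed code):

    /* x86 MD clock interrupt: run our share, then kick the APs. */
    hardclock(&frame);      /* calls hardclock_process() for curproc */
    #ifdef SMP
    forward_hardclock();    /* IPI the APs to run hardclock_process() */
    #endif
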
jhb
b47bfbe544 Catch up to header include changes:
- <sys/mutex.h> now requires <sys/systm.h>
- <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
2001-03-28 09:17:56 +00:00
bde
405108c6cd Fixed style bugs in clock.c rev.1.164 and cpu.h rev.1.52-1.53 -- declare
tsc_present in the right places (together with other variables of the
same linkage), and don't use messy ifdefs just to avoid exporting it in
some cases.
2001-02-19 03:00:34 +00:00
jhb
1667b748b0 Catch up to changes to inthand_add(). 2001-02-09 17:48:33 +00:00
bmilekic
f364d4ac36 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

Similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
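What the conversion looks like at a typical call site in this file
(illustrative):

    /* Before: one entry point, the lock type passed on every call. */
    mtx_enter(&clock_lock, MTX_SPIN);
    /* ... exclusive i8254 register access ... */
    mtx_exit(&clock_lock, MTX_SPIN);

    /* After: the function name encodes the lock type. */
    mtx_lock_spin(&clock_lock);
    /* ... exclusive i8254 register access ... */
    mtx_unlock_spin(&clock_lock);
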
peter
ab46631b96 Convert mca (MicroChannel bus support) from something that we count
(bogus) to something whose presence we test for. 2001-01-29 11:57:27 +00:00
2001-01-29 11:57:27 +00:00
jasone
24d53563ed Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.

This change is necessary in order to avoid some circular bootstrapping
dependencies.
2001-01-21 07:52:20 +00:00
peter
802c028309 Convert apm from a bogus 'count' into a plain option. Clean out some
other cruft from the files.alpha and files.ia64 that were related to this.
2001-01-19 14:09:54 +00:00
markm
f0aab59cdd Namespace cleanup. Remove some #includes in favour of an explicit
declaration.

Asked for by:	bde
2000-12-02 17:59:41 +00:00
phk
2e92b95197 Revert two experimental changes which escaped from my devel machine. 2000-10-28 06:55:12 +00:00
phk
54ca48450c Convert all users of fldoff() to offsetof(). fldoff() is bad
because it only takes a struct tag which makes it impossible to
use unions, typedefs etc.

Define __offsetof() in <machine/ansi.h>

Define offsetof() in terms of __offsetof() in <stddef.h> and <sys/types.h>

Remove myriad of local offsetof() definitions.

Remove includes of <stddef.h> in kernel code.

NB: Kernel code should *never* include from /usr/include!

Make <sys/queue.h> include <machine/ansi.h> to avoid polluting the API.

Deprecate <struct.h> with a warning.  The warning turns into an error on
01-12-2000 and the file gets removed entirely on 01-01-2001.

Partial reviews by:   various.
Significant brucifications by:  bde
2000-10-27 11:45:49 +00:00
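The classic definition being centralized, roughly as it reads in
<machine/ansi.h> (modern compilers substitute a builtin):

    /*
     * Takes a full type name, so unions and typedefs work, unlike
     * fldoff()'s struct-tag-only interface.
     */
    #define __offsetof(type, field) ((size_t)(&((type *)0)->field))

    /* <stddef.h> and <sys/types.h> then simply define: */
    #define offsetof(type, field)   __offsetof(type, field)
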
jhb
ff18363a3e - Overhaul the software interrupt code to use interrupt threads for each
type of software interrupt.  Roughly, what used to be a bit in spending
  now maps to a swi thread.  Each thread can have multiple handlers, just
  like a hardware interrupt thread.
- Instead of using a bitmask of pending interrupts, we schedule the specific
  software interrupt thread to run, so spending, NSWI, and the shandlers
  array are no longer needed.  We can now have an arbitrary number of
  software interrupt threads.  When you register a software interrupt
  thread via sinthand_add(), you get back a struct intrhand that you pass
  to sched_swi() when you wish to schedule your swi thread to run.
- Convert the name of 'struct intrec' to 'struct intrhand' as it is a bit
  more intuitive.  Also, prefix all the members of struct intrhand with
  'ih_'.
- Make swi_net() a MI function since there is now no point in it being
  MD.

Submitted by:	cp
2000-10-25 05:19:40 +00:00
jhb
ed47777d05 - machine/mutex.h -> sys/mutex.h
- machine/ipl.h -> sys/ipl.h
- Use MUTEX_DECLARE() for clock_lock
2000-10-20 07:31:00 +00:00
jhb
fd275a78bd - Change fast interrupts on x86 to push a full interrupt frame and to
return through doreti to handle ast's.  This is necessary for the
  clock interrupts to work properly.
- Change the clock interrupts on the x86 to be fast instead of threaded.
  This is needed because both hardclock() and statclock() need to run in
  the context of the current process, not in a separate thread context.
- Kill the prevproc hack as it is no longer needed.
- We really need Giant when we call psignal(), but we don't want to block
  during the clock interrupt.  Instead, use two p_flag's in the proc struct
  to mark the current process as having a pending SIGVTALRM or a SIGPROF
  and let them be delivered during ast() when hardclock() has finished
  running.
- Remove CLKF_BASEPRI, which was #ifdef'd out on the x86 anyway.  It was
  broken on the x86 if it was turned on since cpl is gone.  Its only use
  was to bogusly run softclock() directly during hardclock() rather than
  scheduling an SWI.
- Remove the COM_LOCK simplelock and replace it with a clock_lock spin
  mutex.  Since the spin mutex already handles disabling/restoring
  interrupts appropriately, this also lets us axe all the *_intr() fu.
- Back out the hacks in the APIC_IO x86 cpu_initclocks() code to use
  temporary fast interrupts for the APIC trial.
- Add two new process flags P_ALRMPEND and P_PROFPEND to mark the pending
  signals in hardclock() that are to be delivered in ast().

Submitted by:	jakeb (making statclock safe in a fast interrupt)
Submitted by:	cp (concept of delaying signals until ast())
2000-10-06 02:20:21 +00:00
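The deferred-signal dance, sketched with the flag names from the message; the
AST-request step is elided since its exact form is MD:

    /* In hardclock(), now a fast interrupt: psignal() would need Giant. */
    p->p_flag |= P_ALRMPEND;        /* SIGVTALRM owed to this process */
    /* ... arrange for ast() to run on return from the interrupt ... */

    /* Later, in ast(), where taking Giant is safe: */
    if (p->p_flag & P_ALRMPEND) {
            p->p_flag &= ~P_ALRMPEND;
            psignal(p, SIGVTALRM);
    }
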
jhb
71938e9fcd - Heavyweight interrupt threads on the alpha for device I/O interrupts.
- Make softinterrupts (SWI's) almost completely MI, and divorce them
  completely from the x86 hardware interrupt code.
  - The ihandlers array is now gone.  Instead, there is a MI shandlers array
    that just contains SWI handlers.
  - Most of the former machine/ipl.h files have moved to a new sys/ipl.h.
- Stub out all the spl*() functions on all architectures.

Submitted by:	dfr
2000-10-05 23:09:57 +00:00
jhb
7013b83225 - Remove the inthand2_t type and use the equivalent driver_intr_t type from
newbus for referencing device interrupt handlers.
- Move the 'struct intrec' type which describes interrupt sources into
  sys/interrupt.h instead of making it just be a x86 structure.
- Don't create 'ithd' and 'intrec' typedefs, instead, just use 'struct ithd'
  and 'struct intrec'
- Move the code to translate new-bus interrupt flags into an interrupt thread
  priority out of the x86 nexus code and into a MI ithread_priority()
  function in sys/kern/kern_intr.c.
- Remove now-unneeded x86-specific headers from sys/dev/ata/ata-all.c and
  sys/pci/pci_compat.c.
2000-09-13 18:33:25 +00:00
jasone
769e0f974d Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*().  See mutex(9).  (Note: The
  alpha port is still in transition and currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
  preempted (i386 only).

Partially contributed by:	BSDi (BSD/OS)
Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
2000-09-07 01:33:02 +00:00