Commit Graph

155 Commits

Joerg Wunsch
a5f50ef9e4 netchild's mega-patch to isolate compiler dependencies into a central
place.

This moves the dependency on GCC's and other compilers' features into
the central sys/cdefs.h file, while the individual source files can
then refer to #ifdef __COMPILER_FEATURE_FOO where they until now used to
refer to #if __GNUC__ > 3.1415 && __BARC__ <= 42.

So far, GCC and ICC (the Intel compiler) have been actively tested on
IA32 platforms by netchild.  Extension to other compilers should
be possible, of course.

Submitted by:	netchild
Reviewed by:	various developers on arch@, some time ago
2005-03-02 21:33:29 +00:00
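
A minimal sketch of the pattern the commit above describes: probe the
compiler once, centrally, and let source files test a feature macro
instead of compiler versions.  The version test and __COMPILER_FEATURE_FOO
(taken from the message above) are illustrative, not the macros the patch
actually added.
---snip---
/* sys/cdefs.h (sketch): decide centrally which compiler features exist. */
#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 95))
#define	__COMPILER_FEATURE_FOO	1	/* hypothetical feature macro */
#endif

/* consumer.c (sketch): source files test the feature, not the compiler. */
#ifdef __COMPILER_FEATURE_FOO
	/* use the compiler feature */
#else
	/* portable fallback */
#endif
---snip---
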
David Schultz
6004362e66 Don't include sys/user.h merely for its side-effect of recursively
including other headers.
2004-11-27 06:51:39 +00:00
Nate Lawson
ccd582b0fd Let nexus print our flags for us. Also, clean up an obfuscated if stmt. 2004-10-14 22:37:51 +00:00
Nate Lawson
e5979322ba Remove local hacks to set flags now that the device probe does this for us.
Tested on every device except sio_pci and the pc98 fd.c.  Perhaps something
similar should be done for the "disabled" hints also.

MFC after:	2 weeks
2004-10-14 22:21:59 +00:00
Bruce Evans
6ed979574f Clear any pending exceptions before using frstor (in the non-FXSR case)
in npxsetregs() too.  npxsetregs() must overwrite the previous state, and
it is never paired with an npxgetregs() that would defuse the previous
state (since npxgetregs() would have fninit'ed the state, leaving nothing
to do).

PR:		68058 (this should complete the fix)
Tested by:	Simon Barner <barner@in.tum.de>
2004-06-19 22:24:16 +00:00
Bruce Evans
bd1a3f1a7e Fixed a panic caused by over-optimizing npxdrop() in the non-FXSR case.
frstor can trap despite it being a control instruction, since it bogusly
checks for pending exceptions in the state that it is overwriting.
This used to be a non-problem because frstor was always paired with a
previous fnsave, and fnsave does an implicit fninit so any pending
exceptions only remain live in the saved state.  Now frstor is sometimes
paired with npxdrop() and we must do a little more than just forget
that the npx was used in npxdrop() to avoid a trap later.  This is a
non-problem in the FXSR case because fxrstor doesn't do the bogus check.

FXSR is part of SSE, and npxdrop() is only in FreeBSD-5.x, so this bug
only affected old machines running FreeBSD-5.x.

PR:		68058
2004-06-18 02:10:55 +00:00
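
A hedged sketch of the fix described in the two commits above: clear any
pending exceptions with fnclex before frstor, so frstor's (bogus) check of
the state being overwritten cannot trap.  The function name and save-area
type here are illustrative, not the driver's actual ones.
---snip---
/* Sketch: load a saved x87 state without risking a trap from exceptions
 * still pending in the state about to be overwritten (non-FXSR case). */
static void
x87_load_state(struct save87 *addr)
{
	__asm __volatile("fnclex");			/* discard pending exceptions */
	__asm __volatile("frstor %0" : : "m" (*addr));	/* now safe to overwrite */
}
---snip---
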
Bruce Evans
bed16055e7 Fixed misclassification of npx interrupts caused by npx_probe().
Dividing by 0 in order to check for irq13/exception16 delivery apparently
always causes an irq13 even if we have configured for exception16 (by
setting CR0_NE).  This was expected, but the timing of the irq13 was
unexpected.  Without CR0_NE, the irq13 is delivered synchronously at
least on my test machine, but with CR0_NE it is delivered a little
later (about 250 nsec) in PIC mode and much later (5000-10000 nsec)
in APIC mode.  So especially in APIC mode, the irq13 may arrive after
it is supposed to be shut down.  It should then be masked, but the
shutdown is incomplete, so the irq goes to a null handler that just
reports it as stray.  The fix is to wait a bit after dividing by 0 to
give a good chance of the irq13 being handled by its proper handler.

Removed the hack that was supposed to recover from the incomplete shutdown
of irq13.  The shutdown is now even more incomplete, or perhaps just
incomplete in a different way, but the hack now has no effect because
irq13 is edge triggered and handling of edge triggered interrupts is
now optimized by skipping their masking.  The hack only worked due
to it accidentally not losing races.

The incomplete shutdown of irq13 still allows unprivileged users to
generate a stray irq13 (except on systems where irq13 is actually used)
by unmasking an npx exception and causing one.  The exception gets
handled properly by the exception 16 handler.  A spurious irq13 is
delivered asynchronously but is harmless (as in the probe) because it
is almost perfectly not handled by the null interrupt handler.
Perfectly not handling it involves mainly not resetting the npx busy
latch.  This prevents further irq13's despite them not being masked in
the [A]PIC.
2004-06-06 15:17:44 +00:00
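
A rough sketch of the probe flow described above, assuming a DELAY()-style
busy wait; the helper names, the delay constant, and the counter are all
illustrative, not the real npx_probe() code.
---snip---
/* Sketch: force an npx error and give the resulting irq13 time to arrive
 * while the probe's interrupt handler is still installed. */
static volatile u_int npx_probe_intrs;		/* bumped by the probe handler */

static int
npx_probe_irq13(void)
{
	fninit();			/* start from a clean FPU state */
	fp_divide_by_zero();		/* hypothetical: perform 1.0/0.0 on the FPU */
	DELAY(100);			/* the irq13 can arrive well after the
					 * fault, especially in APIC mode */
	return (npx_probe_intrs != 0);
}
---snip---
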
John Baldwin
d4d4ece72d Trim unused includes. 2004-05-11 20:14:53 +00:00
Warner Losh
f36cfd49ad Remove advertising clause from University of California Regent's
license, per letter dated July 22, 1999 and email from Peter Wemm,
Alan Cox and Robert Watson.

Approved by: core, peter, alc, rwatson
2004-04-07 20:46:16 +00:00
Tom Rhodes
a122cca953 These are changes to allow the Intel C/C++ compiler (lang/icc) to be used
to build the kernel. It doesn't affect the operation of gcc.

Most of the changes are just adding __INTEL_COMPILER to #ifdef's; since
icc v8 may define __GNUC__, some parts may look strange but are
necessary.

Additional changes:
 - in_cksum.[ch]:
   * use a generic C version instead of the assembly version in the !gcc
     case (ASM code breaks with the optimizations icc does)
     -> no bad checksums with an icc compiled kernel
     Help from:		andre, grehan, das
     Stolen from: 	alpha version via ppc version
     The entire checksum code should IMHO be replaced with the DragonFly
     version (because it isn't guaranteed that future revisions of gcc will
     include similar optimizations) as in:
        ---snip---
          Revision  Changes    Path
          1.12      +1 -0      src/sys/conf/files.i386
          1.4       +142 -558  src/sys/i386/i386/in_cksum.c
          1.5       +33 -69    src/sys/i386/include/in_cksum.h
          1.5       +2 -0      src/sys/netinet/igmp.c
          1.6       +0 -1      src/sys/netinet/in.h
          1.6       +2 -0      src/sys/netinet/ip_icmp.c

          1.4       +3 -4      src/contrib/ipfilter/ip_compat.h
          1.3       +1 -2      src/sbin/natd/icmp.c
          1.4       +0 -1      src/sbin/natd/natd.c
          1.48      +1 -0      src/sys/conf/files
          1.2       +0 -1      src/sys/conf/files.amd64
          1.13      +0 -1      src/sys/conf/files.i386
          1.5       +0 -1      src/sys/conf/files.pc98
          1.7       +1 -1      src/sys/contrib/ipfilter/netinet/fil.c
          1.10      +2 -3      src/sys/contrib/ipfilter/netinet/ip_compat.h
          1.10      +1 -1      src/sys/contrib/ipfilter/netinet/ip_fil.c
          1.7       +1 -1      src/sys/dev/netif/txp/if_txp.c
          1.7       +1 -1      src/sys/net/ip_mroute/ip_mroute.c
          1.7       +1 -2      src/sys/net/ipfw/ip_fw2.c
          1.6       +1 -2      src/sys/netinet/igmp.c
          1.4       +158 -116  src/sys/netinet/in_cksum.c
          1.6       +1 -1      src/sys/netinet/ip_gre.c
          1.7       +1 -2      src/sys/netinet/ip_icmp.c
          1.10      +1 -1      src/sys/netinet/ip_input.c
          1.10      +1 -2      src/sys/netinet/ip_output.c
          1.13      +1 -2      src/sys/netinet/tcp_input.c
          1.9       +1 -2      src/sys/netinet/tcp_output.c
          1.10      +1 -1      src/sys/netinet/tcp_subr.c
          1.10      +1 -1      src/sys/netinet/tcp_syncache.c
          1.9       +1 -2      src/sys/netinet/udp_usrreq.c

          1.5       +1 -2      src/sys/netinet6/ipsec.c
          1.5       +1 -2      src/sys/netproto/ipsec/ipsec.c
          1.5       +1 -1      src/sys/netproto/ipsec/ipsec_input.c
          1.4       +1 -2      src/sys/netproto/ipsec/ipsec_output.c

          and finally remove
            sys/i386/i386        in_cksum.c
            sys/i386/include     in_cksum.h
        ---snip---
 - endian.h:
   * DTRT in C++ mode
 - quad.h:
   * we don't use gcc v1 anymore, remove support for it
   Suggested by:	bde (long ago)
 - assym.h:
   * avoid zero-length arrays (remove dependency on a gcc specific
     feature)
     This changes the contents of the object file, but as it's
     only used to generate some values for a header, and the generator
     knows how to handle this, there's no impact in the gcc case.
   Explained by:	bde
   Submitted by:	Marius Strobl <marius@alchemy.franken.de>
 - aicasm.c:
   * minor change to teach it about the way icc spells "-nostdinc"
   Not approved by:	gibbs (no reply to my mail)
 - bump __FreeBSD_version (lang/icc needs to know about the changes)

Incarnations of this patch have survived gcc compiles for a loooong time;
I use it on my desktop.  An icc-compiled kernel has worked since Nov. 2003
(exceptions: snd_* if used as modules), and it survives a build of the
entire ports collection with icc.

Parts of this commit contain suggestions or submissions from
Marius Strobl <marius@alchemy.franken.de>.

Reviewed by:	-arch
Submitted by:	netchild
2004-03-12 21:45:33 +00:00
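
The #ifdef shape the in_cksum change above describes, sketched; the point is
that the hand-written inline assembly is only used when the compiler really
is gcc, since icc may define __GNUC__ as well.
---snip---
#if defined(__GNUC__) && !defined(__INTEL_COMPILER)
/* gcc: keep the hand-tuned inline assembly checksum routines. */
#else
/* icc (and anything else): use a plain C implementation, since icc's
 * optimizer breaks the assumptions the asm version relies on. */
#endif
---snip---
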
Bruce Evans
1203f5be25 Fixed a misplaced ifdef that prevented npx.c from building without "device
isa".  npx has few isa dependencies, but it does unconditional outb()'s to
the isa bus in the !SMP case, and it attaches to isa if "device isa" is
configured in order to support PNP-ISA.  The ifdef for the latter was
misplaced.

PR:		62595
2004-02-13 18:04:51 +00:00
John Baldwin
6f92bdd0c1 New APIC support code:
- The apic interrupt entry points have been rewritten so that each entry
  point can serve 32 different vectors.  When the entry is executed, it
  uses one of the 32-bit ISR registers to determine which vector in its
  assigned range was triggered.  Thus, the apic code can support 159
  different interrupt vectors with only 5 entry points.
- We now always disable the local APIC to work around an erratum in
  certain PPros and then re-enable it again if we decide to use the APICs
  to route interrupts.
- We no longer map IO APICs or local APICs using special page table
  entries.  Instead, we just use pmap_mapdev().  We also no longer
  export the virtual address of the local APIC as a global symbol to
  the rest of the system, but only in local_apic.c.  To aid this, the
  APIC ID of each CPU is exported as a per-CPU variable.
- Interrupt sources are provided for each intpin on each IO APIC.
  Currently, each source is given a unique interrupt vector meaning that
  PCI interrupts are not shared on most machines with an I/O APIC.
  The mapping of interrupt sources to interrupt vectors is left to the
  APIC enumerator driver, however.
- We no longer probe to see if we need to use mixed mode to route IRQ 0,
  instead we always use mixed mode to route IRQ 0 for now.  This can be
  disabled via the 'NO_MIXED_MODE' kernel option.
- The npx(4) driver now always probes to see if a built-in FPU is present
  since this test can now be performed with the new APIC code.  However,
  an SMP kernel will panic if there is more than one CPU and a built-in
  FPU is not found.
- PCI interrupts are now properly routed when using APICs to route
  interrupts, so remove the hack to pseudo-route interrupts when the
  intpin register was read.
- The apic.h header was moved to apicreg.h and a new apicvar.h header
  that declares the APIs used by the new APIC code was added.
2003-11-03 21:53:38 +00:00
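
A simplified sketch of the "one entry point serves 32 vectors" idea from the
first bullet above: read the matching 32-bit in-service register and derive
the vector from a set bit.  The register accessor is hypothetical, and real
code would scan from the highest bit; this only illustrates the arithmetic.
---snip---
/* Sketch: map an entry point's 32-vector group to the vector that fired. */
static int
apic_isr_to_vector(int group)			/* group 0..7 selects an ISR register */
{
	uint32_t isr;

	isr = lapic_read_isr(group);		/* hypothetical register accessor */
	if (isr == 0)
		return (-1);			/* spurious */
	return (group * 32 + ffs(isr) - 1);	/* bit position -> vector number */
}
---snip---
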
John Baldwin
0e85b19ba9 Add constants for entries in the IDT and use those instead of magic
numbers.
2003-09-10 01:07:04 +00:00
Peter Wemm
a35b33869d Initiate de-orbit burn for fpu-less operation. 386+387 is still
theoretically supportable, but you'd really be happier with FreeBSD 2.1.8
on it.
2003-07-22 08:11:17 +00:00
David E. O'Brien
006124d811 Use __FBSDID(). 2003-06-02 16:32:55 +00:00
Dag-Erling Smørgrav
9f45b2da8f Define ovbcopy() as a macro which expands to the equivalent bcopy() call,
to take care of the KAME IPv6 code which needs ovbcopy() because NetBSD's
bcopy() doesn't handle overlap like ours.

Remove all implementations of ovbcopy().

Previously, bzero was a function pointer on i386, to save a jmp to
bzero_vector.  Get rid of this microoptimization as it only confuses
things, adds machine-dependent code to an MD header, and doesn't really
save all that much.

This commit does not add my pagezero() / pagecopy() code.
2003-04-04 17:29:55 +00:00
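
The macro form described above amounts to something like the following;
since FreeBSD's bcopy() already handles overlapping regions, forwarding to
it loses nothing.
---snip---
#define	ovbcopy(from, to, len)	bcopy((from), (to), (len))
---snip---
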
Jeff Roberson
fb8aaa76c7 - In npxgetregs() use the td argument, not curthread, as the thread whose
fpu state is saved.  Nothing currently depends on this behavior.
 - Clean up an extra newline.

Obtained from:	bde
2003-04-01 00:16:32 +00:00
Jeff Roberson
4c8a7679d0 - In npxsetregs() decide whether to set the live floating point state based
on td == fpcurthread, not on curthread == fpcurthread.  This is important
   when we're setting the fp state for a thread other than curthread, as
   from set_mcontext().
2003-03-31 00:32:43 +00:00
Julian Elischer
4a338afd7a Move a bunch of flags from the KSE to the thread.
I was in two minds as to where to put them in the first place...
I should have listened to the other mind.

Submitted by:	 parts by davidxu@
Reviewed by:	jeff@ mini@
2003-02-17 09:55:10 +00:00
Daniel Eischen
2be05b70c9 Add getcontext, setcontext, and swapcontext as system calls.
Previously these were libc functions but were requested to
be made into system calls for atomicity and to coalesce what
might be two entrances into the kernel (signal mask setting
and floating point trap) into one.

A few style nits and comments from bde are also included.

Tested on alpha by: gallatin
2002-11-16 06:35:53 +00:00
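
For reference, the userland API these syscalls back; a small, self-contained
usage sketch of the interface (not the kernel implementation).
---snip---
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, work_ctx;
static char work_stack[16384];

static void
worker(void)
{
	printf("in worker context\n");
	/* returning resumes uc_link, i.e. main_ctx */
}

int
main(void)
{
	getcontext(&work_ctx);			/* start from the current state */
	work_ctx.uc_stack.ss_sp = work_stack;
	work_ctx.uc_stack.ss_size = sizeof(work_stack);
	work_ctx.uc_link = &main_ctx;		/* resumed when worker() returns */
	makecontext(&work_ctx, worker, 0);

	swapcontext(&main_ctx, &work_ctx);	/* save here, switch to worker */
	printf("back in main context\n");
	return (0);
}
---snip---
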
David Xu
1f82496322 Fix typo. ioport_rid should be irq_rid. 2002-11-05 04:03:42 +00:00
Peter Wemm
331e4823a2 Finish fixing the 5.x FPU code for dealing with signal handlers.
Obtained from:  bde
2002-10-25 19:12:16 +00:00
Poul-Henning Kamp
218565dc75 Hide inline assembly if lint is defined. 2002-10-20 17:30:30 +00:00
Poul-Henning Kamp
37c841831f Be consistent about "static" functions: if the function is marked
static in its prototype, mark it static at the definition too.

Inspired by:    FlexeLint warning #512
2002-09-28 17:15:38 +00:00
Jonathan Mini
30abe507c0 Add kernel support needed for the KSE-aware libpthread:
- Maintain fpu state across signals.
	- Save and restore FPU state properly in ucontext_t's.

Reviewed by:	bde, deischen, julian
Approved by:	-arch
2002-09-16 19:25:59 +00:00
Peter Wemm
f7749f924c Automatically enable CPU_ENABLE_SSE (detect and enable SSE instructions)
if compiling with I686_CPU as a target.  CPU_DISABLE_SSE will prevent
this from happening and will guarantee the code is not compiled in.

I am still not happy with this, but gcc is now generating code that uses
these instructions if you set CPUTYPE to p3/p4 or athlon-4/mp/xp or higher.
2002-09-07 07:02:12 +00:00
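
The option logic described above boils down to a conditional like the one
below; this is a sketch of the pattern, not necessarily the exact header it
lives in.
---snip---
#include "opt_cpu.h"

#if defined(I686_CPU) && !defined(CPU_DISABLE_SSE)
#define	CPU_ENABLE_SSE		/* detect SSE and use fxsave/fxrstor */
#endif
---snip---
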
Matthew Dillon
d74ac6819b Compromise for critical*()/cpu_critical*() recommit.  Clean up the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Clean up the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
Bruce Evans
ea1499bf8f Fixed some style bugs in the removal of __P(()). The main ones were
not removing tabs before "__P((", and not outdenting continuation lines
to preserve non-KNF lining up of code with parentheses.  Switch to KNF
formatting and/or rewrap the whole prototype in some cases.
2002-03-23 16:01:49 +00:00
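
For context, __P() is the old K&R-compatibility wrapper around prototype
argument lists, so its removal looks like the hypothetical prototype below;
the style bugs above concern the tabs and continuation-line indentation
surrounding such edits.
---snip---
/* before */
static int	npxattach __P((device_t dev));

/* after */
static int	npxattach(device_t dev);
---snip---
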
Warner Losh
ba74981e71 Fix abuses of cpu_critical_{enter,exit} by converting to
intr_{disable,restore} as well as providing an implementation of
intr_{disable,restore}.

Reviewed by: jake, rwatson, jhb
2002-03-21 06:19:08 +00:00
Alfred Perlstein
89c9a48352 Remove __P. 2002-03-20 07:51:46 +00:00
Matthew Dillon
181df8c9d4 revert last commit temporarily due to whining on the lists. 2002-02-26 20:33:41 +00:00
Matthew Dillon
f96ad4c223 STAGE-1 of 3 commit - allow (but do not require) interrupts to remain
enabled in critical sections and streamline critical_enter() and
critical_exit().

This commit allows an architecture to leave interrupts enabled inside
critical sections if it so wishes.  Architectures that do not wish to do
this are not affected by this change.

This commit implements the feature for the I386 architecture and provides
a sysctl, debug.critical_mode, which defaults to 1 (use the feature).  For
now you can turn the sysctl on and off at any time in order to test the
architectural changes or track down bugs.

This commit is just the first stage.  Some areas of the code, specifically
the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will
be cleaned up in the STAGE-2 commit when the critical_*() functions are
moved entirely into MD files.

The following changes have been made:

	* critical_enter() and critical_exit() for I386 now simply increment
	  and decrement curthread->td_critnest.  They no longer disable
	  hard interrupts.  When critical_exit() decrements the counter to
	  0 it effectively calls a routine to deal with whatever interrupts
	  were deferred during the time the code was operating in a critical
	  section.

	  Other architectures are unaffected.

	* fork_exit() has been conditionalized to remove MD assumptions for
	  the new code.  Old code will still use the old MD assumptions
	  in regards to hard interrupt disablement.  In STAGE-2 this will
	  be turned into a subroutine call into MD code rather than hardcoded
	  in MI code.

	  The new code places the burden of entering the critical section
	  in the trampoline code where it belongs.

	* I386: interrupts are now enabled while we are in a critical section.
	  The interrupt vector code has been adjusted to deal with the fact.
	  If it detects that we are in a critical section it currently defers
	  the interrupt by adding the appropriate bit to an interrupt mask.

	* In order to accomplish the deferral, icu_lock is required.  This
	  is i386-specific.  Thus icu_lock can only be obtained by mainline
	  i386 code while interrupts are hard disabled.  This change has been
	  made.

	* Because interrupts may or may not be hard disabled during a
	  context switch, cpu_switch() can no longer simply assume that
	  PSL_I will be in a consistent state.  Therefore, it now saves and
	  restores eflags.

	* FAST INTERRUPT PROVISION.  Fast interrupts are currently deferred.
	  The intention is to eventually allow them to operate either while
	  we are in a critical section or, if we are able to restrict the
	  use of sched_lock, while we are not holding the sched_lock.

	* ICU and APIC vector assembly for I386 cleaned up.  The ICU code
	  has been cleaned up to match the APIC code in regards to format
	  and macro availability.  Additionally, the code has been adjusted
	  to deal with deferred interrupts.

	* Deferred interrupts use a per-cpu boolean int_pending, and
	  masks ipending, spending, and fpending.  Because these are per-cpu
	  variables, no locked bus cycles are currently needed to modify them.

	  Note that the same mechanism will enable preemption to be
	  incorporated as a true software interrupt without having to
	  further hack up the critical nesting code.

	* Note: the old critical_enter() code in kern/kern_switch.c is
	  currently #ifdef to be compatible with both the old and new
	  methodology.  In STAGE-2 it will be moved entirely to MD code.

Performance issues:

	One of the purposes of this commit is to enhance critical section
	performance, specifically to greatly reduce bus overhead to allow
	the critical section code to be used to protect per-cpu caches.
	These caches, such as Jeff's slab allocator work, can potentially
	operate very quickly making the effective savings of the new
	critical section code's performance very significant.

	The second purpose of this commit is to allow architectures to
	enable certain interrupts while in a critical section.  Specifically,
	the intention is to eventually allow certain FAST interrupts to
	operate rather than defer.

	The third purpose of this commit is to begin to clean up the
	critical_enter()/critical_exit()/cpu_critical_enter()/
	cpu_critical_exit() API which currently has serious cross pollution
	in MI code (in fork_exit() and ast() for example).

	The fourth purpose of this commit is to provide a framework that
	allows kernel-preempting software interrupts to be implemented
	cleanly.  This is currently used for two forward interrupts in I386.
	Other architectures will have the choice of using this infrastructure
	or building the functionality directly into critical_enter()/
	critical_exit().

	Finally, this commit is designed to greatly improve the flexibility
	of various architectures to manage critical section handling,
	software interrupts, preemption, and other highly integrated
	architecture-specific details.
2002-02-26 17:06:21 +00:00
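
A stripped-down sketch of the i386 scheme described above: the critical
section functions only track per-thread nesting, and interrupts deferred
while nested are run once the count drops back to zero.  unpend() and the
int_pending access are stand-ins; the real code also hard-disables
interrupts around the unpend step.
---snip---
void
critical_enter(void)
{
	curthread->td_critnest++;		/* interrupts stay enabled */
}

void
critical_exit(void)
{
	struct thread *td = curthread;

	if (td->td_critnest == 1) {
		td->td_critnest = 0;
		if (PCPU_GET(int_pending))	/* anything deferred while nested? */
			unpend();		/* hypothetical: run deferred handlers */
	} else
		td->td_critnest--;
}
---snip---
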
Bruce Evans
586079cc26 Don't include <isa/isavar.h> or compile code depending on it when isa
is not configured.  Including <isa/isavar.h> when it is not used is
harmful as well as bogus, since it includes "isa_if.h" which is not
generated when isa is not configured.

This was fixed in 1999 but was broken by unconditionalizing PNPBIOS.
2002-01-30 12:41:12 +00:00
John Baldwin
98f9879242 Introduce a standard name for the lock protecting an interrupt controller
and its associated state variables: icu_lock with the name "icu".  This
renames the imen_mtx for x86 SMP, but also uses the lock to protect
access to the 8259 PIC on x86 UP.  This also adds an appropriate lock to
the various Alpha chipsets which fixes problems with Alpha SMP machines
dropping interrupts with an SMP kernel.
2001-12-20 23:48:31 +00:00
John Baldwin
7e1f6dfe9d Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_
  prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
  count and a per-thread critical section saved state set when entering
  a critical section while at nesting level 0 and restored when exiting
  to nesting level 0.  This moves the saved state out of spin mutexes so
  that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now use
  cpu_critical_enter/exit.  MI code such as device drivers and spin
  mutexes use the MI wrappers.  Note that since the MI wrappers store
  the state in the current thread, they do not have any return values or
  arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
  assigned to curthread->td_savecrit during fork_exit().

Tested on:	i386, alpha
2001-12-18 00:27:18 +00:00
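
A sketch of the wrapper scheme this commit describes: the MI functions keep
a per-thread nesting count and call the MD cpu_critical_*() primitives only
at the outermost level, so the saved state lives in the thread rather than
in a spin mutex.  Treat the signatures as illustrative.
---snip---
void
critical_enter(void)
{
	struct thread *td = curthread;

	if (td->td_critnest == 0)
		td->td_savecrit = cpu_critical_enter();	/* MD: returns saved state */
	td->td_critnest++;
}

void
critical_exit(void)
{
	struct thread *td = curthread;

	td->td_critnest--;
	if (td->td_critnest == 0)
		cpu_critical_exit(td->td_savecrit);	/* MD: restore saved state */
}
---snip---
Keeping the saved state in the thread is what lets interlocking spin mutexes
work properly, since each thread restores exactly what it saved.
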
John Baldwin
0bbc882680 Overhaul the per-CPU support a bit:
- The MI portions of struct globaldata have been consolidated into a MI
  struct pcpu.  The MD per-CPU data are specified via a macro defined in
  machine/pcpu.h.  A macro was chosen over a struct mdpcpu so that the
  interface would be cleaner (PCPU_GET(my_md_field) vs.
  PCPU_GET(md.md_my_md_field)).
- All references to globaldata are changed to pcpu instead.  In a UP kernel,
  this data was stored as global variables which is where the original name
  came from.  In an SMP world this data is per-CPU and ideally private to each
  CPU outside of the context of debuggers.  This also included combining
  machine/globaldata.h and machine/globals.h into machine/pcpu.h.
- The pointer to the thread using the FPU on i386 was renamed from
  npxthread to fpcurthread to be identical with other architectures.
- Make the show pcpu ddb command MI with a MD callout to display MD
  fields.
- The globaldata_register() function was renamed to pcpu_init() and now
  init's MI fields of a struct pcpu in addition to registering it with
  the internal array and list.
- A pcpu_destroy() function was added to remove a struct pcpu from the
  internal array and list.

Tested on:	alpha, i386
Reviewed by:	peter, jake
2001-12-11 23:33:44 +00:00
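
The accessor style this change standardizes on, in a two-line sketch; the MD
field name here is made up, and the point is only that MI and MD members read
the same way through PCPU_GET().
---snip---
	struct thread *td = PCPU_GET(curthread);	/* MI field of struct pcpu */
	u_int id = PCPU_GET(md_apic_id);		/* hypothetical MD field, declared
							 * via the machine/pcpu.h macro; no
							 * PCPU_GET(md.md_...) nesting needed */
---snip---
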
Bruce Evans
08b00f49c3 MFi386:
- sys/pc98/pc98/npx.c 1.87 (2001/09/15; author: imp)
  I don't think pc98 has acpi at all, so ifdef the acpi attachments for
  now.

This completes merging sys/pc98/pc98/npx.c into sys/i386/isa/npx.c so
that the former can be removed.
2001-10-21 06:05:08 +00:00
Bruce Evans
abfde38316 MFpc98: fundamental differences. The magic numbers for the i/o port
and the irq are different for pc98, and are not very well handled (we
use a historical mess of hard-coded values, values from header files
and values from hints).
2001-10-21 05:56:03 +00:00
Bruce Evans
40d8c8da95 MFpc98: all changes in sys/pc98/pc98/npx.c related to FPU_ERROR_BROKEN.
- 1.58 (2000/09/01; author: kato)
  Fixed FPU_ERROR_BROKEN code.  It had old-isa code.
- 1.33 (1998/03/09; author: kato)
  Make FPU_ERROR_BROKEN a new-style option.
- 1.7 (1996/10/09; author: asami)
  Make sure FPU is recognized for non-Intel CPUs.

The log for rev.1.7 should have said something like:
Added FPU_ERROR_BROKEN option.  This forces a successful probe for
exception 16, so that hardware with a broken FPU error signal can sort
of work.
2001-10-21 05:18:30 +00:00
Bruce Evans
265e95d904 Deleted most of npxprobe(), and merged npxprobe1() back into npxprobe().
Use the normal interrupt handler (npx_intr()) instead of a special
probe-time interrupt handler, although this causes problems due to
the bus_teardown_intr() not actually even tearing down the interrupt
(these problems were avoided by doing interrupt attachment for the
special interrupt handler directly).  Fixed minor bitrot in comments.

The reason for the npxprobe()/npxprobe1() split mostly went away at
about the same time it was made (in 1992 or 1993 just before the
beginning of history).  386BSD ran all probes with interrupts completely
masked, and I didn't want to disturb this when I added an irq probe
to npxprobe().  An irq (not necessarily npx) must be acked for at least
external npx's to take the cpu out of the wait state that it enters
when an npx error occurs, so the probe must be done with a suitable
irq unmasked.  npxprobe() went to great lengths to unmask precisely
the npx irq.

Running probes with all interrupts masked was never really needed in
FreeBSD, since FreeBSD always masked interrupts well enough using
splhigh(), but it wasn't until rev.1.48 (1995/12/12) of autoconf.c
that all probes were run with CPU interrupts enabled.  This permits
npxprobe() to probe its irq using normal interrupt resources.  Note
that most drivers still can't depend on this.  It depends on the
interrupt handler being fast and the irq not being shared.
2001-10-16 14:12:35 +00:00
Bruce Evans
2504f76272 Commit my old fixes for cosmetic bugs in npxprobe() so that they aren't
lost when the buggy code goes away completely:
- don't assume that the npx irq number is >= 8.  Rev.1.73 only reversed
  part of the hard-coding of it to 13 in rev.1.66.
- backed out the part of rev.1.84 that added a highly confused comment
  about an enable_intr() being "highly bogus".  The whole reason for
  existence of npxprobe() (separate from the main probe, npxprobe1())
  is to handle the complications to make this enable_intr() safe.
- backed out the part of rev.1.94 that modified npxprobe().  It mainly
  broke the enable_intr() to restore_intr().  Restoring the interrupt
  state in a nested way is precisely what is not wanted here.  It was
  harmless in practice because npxprobe() is called with interrupts
  enabled, so restoring the interrupt state enables interrupts.  Most
  of npxprobe() is a no-op for the same reason...
2001-10-16 12:55:38 +00:00
Tor Egge
4c8f0aced5 Explicitly initialize the fpu when SSE is enabled since this no
longer happens as a side effect of calling npxsave.

Reviewed by:	peter, bde
2001-10-15 20:18:06 +00:00
John Baldwin
659209e636 Whitespace fixes. 2001-09-18 21:05:04 +00:00
Warner Losh
8b8a72ee71 s/thread'/thread's/ 2001-09-14 04:40:44 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozie!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Mike Smith
5f063c7b09 Add ACPI attachments. 2001-08-30 09:17:03 +00:00
Peter Wemm
3b703181e3 Don't compile in SSE fxsave/fxrstor instructions if CPU_ENABLE_SSE isn't
active.
2001-08-23 01:03:56 +00:00
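
The conditional-compilation shape this implies, sketched along the lines of
the driver's save path; the function body is illustrative, with cpu_fxsr
standing for the "SSE detected" test.
---snip---
static void
fpusave(union savefpu *addr)
{
#ifdef CPU_ENABLE_SSE
	if (cpu_fxsr) {
		__asm __volatile("fxsave %0" : "=m" (*addr));
		return;
	}
#endif
	__asm __volatile("fnsave %0" : "=m" (*addr));
}
---snip---
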
John Baldwin
688ebe120c - Close races with signals and other AST's being triggered while we are in
the process of exiting the kernel.  The ast() function now loops as long
  as the PS_ASTPENDING or PS_NEEDRESCHED flags are set.  It returns with
  preemption disabled so that any further AST's that arrive via an
  interrupt will be delayed until the low-level MD code returns to user
  mode.
- Use u_int's to store the tick counts for profiling purposes so that we
  do not need sched_lock just to read p_sticks.  This also closes a
  problem where the call to addupc_task() could screw up the arithmetic
  due to non-atomic reads of p_sticks.
- Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
  clear_resched(), and resched_wanted() in favor of direct bit operations
  on p_sflag.
- Fix up locking with sched_lock some.  In addupc_intr(), use sched_lock
  to ensure pr_addr and pr_ticks are updated atomically with setting
  PS_OWEUPC.  In ast() we clear pr_ticks atomically with clearing
  PS_OWEUPC.  We also do not grab the lock just to test a flag.
- Simplify the handling of Giant in ast() slightly.

Reviewed by:	bde (mostly)
2001-08-10 22:53:32 +00:00
Peter Wemm
3e02a8711a MASK_FPU_SW didn't do what it was expected to do. 2001-07-26 23:47:04 +00:00
Tor Egge
e55bc0a096 The per-cpu temporary buffers are not needed since the pcb_save areas have
the proper alignment.  Change dummy variable in npxinit from stack to bss
to ensure proper alignment.

Reviewed by:	bde
2001-07-17 13:06:47 +00:00