Commit Graph

34 Commits

Author SHA1 Message Date
Matthew Dillon
93e70a5f37 Tab-out the backslashes in icu_vector.s to make it more readable and to
match it up with apic_vector.s.
2002-03-27 05:43:11 +00:00
Matthew Dillon
d74ac6819b Compromise for critical*()/cpu_critical*() recommit. Cleanup the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Cleanup the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
Matthew Dillon
181df8c9d4 revert last commit temporarily due to whining on the lists. 2002-02-26 20:33:41 +00:00
Matthew Dillon
f96ad4c223 STAGE-1 of 3 commit - allow (but do not require) interrupts to remain
enabled in critical sections and streamline critical_enter() and
critical_exit().

This commit allows an architecture to leave interrupts enabled inside
critical sections if it so wishes.  Architectures that do not wish to do
this are not affected by this change.

This commit implements the feature for the I386 architecture and provides
a sysctl, debug.critical_mode, which defaults to 1 (use the feature).  For
now you can turn the sysctl on and off at any time in order to test the
architectural changes or track down bugs.

This commit is just the first stage.  Some areas of the code, specifically
the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will
be cleaned up in the STAGE-2 commit when the critical_*() functions are
moved entirely into MD files.

The following changes have been made:

	* critical_enter() and critical_exit() for I386 now simply increment
	  and decrement curthread->td_critnest.  They no longer disable
	  hard interrupts.  When critical_exit() decrements the counter to
	  0 it effectively calls a routine to deal with whatever interrupts
	  were deferred during the time the code was operating in a critical
	  section (see the sketch after this list).

	  Other architectures are unaffected.

	* fork_exit() has been conditionalized to remove MD assumptions for
	  the new code.  Old code will still use the old MD assumptions
	  in regards to hard interrupt disablement.  In STAGE-2 this will
	  be turned into a subroutine call into MD code rather than hardcoded
	  in MI code.

	  The new code places the burden of entering the critical section
	  in the trampoline code where it belongs.

	* I386: interrupts are now enabled while we are in a critical section.
	  The interrupt vector code has been adjusted to deal with the fact.
	  If it detects that we are in a critical section it currently defers
	  the interrupt by adding the appropriate bit to an interrupt mask.

	* In order to accomplish the deferral, icu_lock is required.  This
	  is i386-specific.  Thus icu_lock can only be obtained by mainline
	  i386 code while interrupts are hard disabled.  This change has been
	  made.

	* Because interrupts may or may not be hard disabled during a
	  context switch, cpu_switch() can no longer simply assume that
	  PSL_I will be in a consistent state.  Therefore, it now saves and
	  restores eflags.

	* FAST INTERRUPT PROVISION.  Fast interrupts are currently deferred.
	  The intention is to eventually allow them to operate either while
	  we are in a critical section or, if we are able to restrict the
	  use of sched_lock, while we are not holding the sched_lock.

	* ICU and APIC vector assembly for I386 cleaned up.  The ICU code
	  has been cleaned up to match the APIC code in regards to format
	  and macro availability.  Additionally, the code has been adjusted
	  to deal with deferred interrupts.

	* Deferred interrupts use a per-cpu boolean int_pending, and
	  masks ipending, spending, and fpending.  Being per-cpu variables,
	  it is not currently necessary to use locked bus cycles to modify them.

	  Note that the same mechanism will enable preemption to be
	  incorporated as a true software interrupt without having to
	  further hack up the critical nesting code.

	* Note: the old critical_enter() code in kern/kern_switch.c is
	  currently #ifdef to be compatible with both the old and new
	  methodology.  In STAGE-2 it will be moved entirely to MD code.
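
A minimal C sketch of the new critical section flow described above (not
the committed code: unpend() stands in for whatever routine replays the
deferred interrupts, and the committed code must also close the race
between the decrement and the pending check):

	void
	critical_enter(void)
	{
		curthread->td_critnest++;	/* no hard interrupt disable */
	}

	void
	critical_exit(void)
	{
		if (--curthread->td_critnest == 0 && PCPU_GET(int_pending))
			unpend();	/* replay interrupts deferred above */
	}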

Performance issues:

	One of the purposes of this commit is to enhance critical section
	performance, specifically to greatly reduce bus overhead to allow
	the critical section code to be used to protect per-cpu caches.
	These caches, such as Jeff's slab allocator work, can potentially
	operate very quickly, making the effective savings of the new
	critical section code's performance very significant.

	The second purpose of this commit is to allow architectures to
	enable certain interrupts while in a critical section.  Specifically,
	the intention is to eventually allow certain FAST interrupts to
	operate rather than defer.

	The third purpose of this commit is to begin to clean up the
	critical_enter()/critical_exit()/cpu_critical_enter()/
	cpu_critical_exit() API which currently has serious cross pollution
	in MI code (in fork_exit() and ast() for example).

	The fourth purpose of this commit is to provide a framework that
	allows kernel-preempting software interrupts to be implemented
	cleanly.  This is currently used for two forward interrupts in I386.
	Other architectures will have the choice of using this infrastructure
	or building the functionality directly into critical_enter()/
	critical_exit().

	Finally, this commit is designed to greatly improve the flexibility
	of various architectures to manage critical section handling,
	software interrupts, preemption, and other highly integrated
	architecture-specific details.
2002-02-26 17:06:21 +00:00
John Baldwin
c86b6ff551 Change the preemption code for software interrupt thread schedules and
mutex releases to not require flags for the cases when preemption is
not allowed:

The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher-priority thread on mutex release and swi schedule,
respectively when that switch is not safe.  Now that the critical section
API maintains a per-thread nesting count, the kernel can easily check
whether or not it should switch without relying on flags from the
programmer.  This fixes a few bugs in that all current callers of
swi_sched() used SWI_NOSWITCH when, in fact, only the ones called from
fast interrupt handlers and the swi_sched of softclock needed this flag.
Note that to ensure that swi_sched()'s in clock and fast interrupt
handlers do not switch, these handlers have to be explicitly wrapped
in critical_enter/exit pairs.  Presently, just wrapping the handlers is
sufficient, but in the future with the fully preemptive kernel, the
interrupt must be EOI'd before critical_exit() is called.  (critical_exit()
can switch due to a deferred preemption in a fully preemptive kernel.)
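
A hedged sketch of the explicit wrapping described above; clk_handler()
and the softclock_ih cookie are illustrative stand-ins, not names from
this commit:

	/* fast interrupt handler: the critical section keeps the
	 * swi_sched() from switching before the handler returns */
	static void
	clk_handler(void *arg)
	{
		critical_enter();
		swi_sched(softclock_ih, 0);	/* SWI_NOSWITCH no longer needed */
		critical_exit();
	}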

I've tested the changes to the interrupt code on i386 and alpha.  I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine.  PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken.  Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.

Reviewed by:	peter
Tested on:	i386, alpha
2002-01-05 08:47:13 +00:00
Ian Dowse
3c7bcedd06 Remove the Xresume* labels from the i386 interrupt handlers; the
code in ipl.s and icu_ipl.s that used them was removed when the
interrupt thread system was committed. Debuggers also knew about
Xresume* because these labels hide the real names of the interrupt
handlers (Xintr*), and debuggers need to special-case interrupt
handlers to get the interrupt frame.

Both gdb and ddb will now use the Xintr* and Xfastintr* symbols to
detect interrupt frames. Fast interrupt frames were never identified
correctly before, so this fixes the problem of the running stack
frame getting lost in a ddb or gdb trace generated from a fast
interrupt - e.g. when debugging a simple infinite loop in the kernel
using a serial console, the frame containing the loop would never
appear in a gdb or ddb trace.

Reviewed by:	jhb, bde
2001-10-09 19:54:52 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
John Baldwin
ae383d0cc7 Don't enable interrupts before calling sched_ithd for threaded interrupts.
Tested by:	obrien
2001-03-05 04:37:54 +00:00
Jake Burkholder
02318dac2c Remove the leading underscore from all symbols defined in x86 asm
and used in C or vice versa.  The elf compiler uses the same names
for both.  Remove asnames.h with great prejudice; it has served its
purpose.

Note that this does not affect the ability to generate an aout kernel
due to gcc's -mno-underscores option.

moral support from:	peter, jhb
2001-02-25 06:29:04 +00:00
Jake Burkholder
a448b62ac9 Make intr_nesting_level per-process, rather than per-cpu. Setup
interrupt threads to run with it always >= 1, so that malloc can
detect M_WAITOK from "interrupt" context.  This is also necessary
in order to context switch from sched_ithd() directly.
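
A sketch of the check this enables inside malloc(); the per-process field
name (p_intr_nesting_level) and the rest of the shape are assumptions,
not the committed code:

	/* M_WAITOK cannot be honored in "interrupt" context, where the
	 * nesting level now always stays >= 1 */
	if ((flags & M_NOWAIT) == 0 && curproc->p_intr_nesting_level > 0)
		panic("malloc: M_WAITOK from interrupt context");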

Reviewed By:	peter
2001-01-21 19:25:07 +00:00
Jake Burkholder
41ed17bfec Use %fs to access per-cpu variables in uni-processor kernels the same
as multi-processor kernels.  The old way made it difficult for kernel
modules to be portable between uni-processor and multi-processor
kernels.  It is no longer necessary to jump through hoops.

- always load %fs with the private segment on entry to the kernel
- change the type of the self-referential pointer from struct privatespace
  to struct globaldata
- make the globaldata symbol have value 0 in all cases, so the symbols
  in globals.s are always offsets, not aliases for fields in globaldata
- define the globaldata space used for uniprocessor kernels in C, rather
  than assembler
- change the assembly language accessors to use %fs, add a macro
  PCPU_ADDR(member, reg), which loads the register reg with the address
  of the per-cpu variable member
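
One plausible shape for PCPU_ADDR (not the committed code): since the
globaldata symbols are plain offsets from %fs, the self-referential
pointer supplies the linear base address.  GD_SELF is a hypothetical
offset name for that pointer:

	#define	PCPU_ADDR(member, reg)					\
		movl	%fs:GD_SELF, reg ;				\
		addl	$GD_ ## member, reg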
2001-01-06 17:40:04 +00:00
Jake Burkholder
6d43764a10 Introduce a new potentially cleaner interface for accessing per-cpu
variables from i386 assembly language.  The syntax is PCPU(member)
where member is the capitalized name of the per-cpu variable, without
the gd_ prefix.  Example: movl %eax,PCPU(CURPROC).  The capitalization
is due to using the offsets generated by genassym rather than the symbols
provided by linking with globals.o.  asmacros.h is the wrong place for
this but it seemed as good a place as any for now.  The old implementation
in asnames.h has not been removed because it is still used to de-mangle
the symbols used by the C variables for the UP case.
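
A minimal sketch of the SMP flavor, assuming genassym emits GD_* byte
offsets relative to %fs:

	#define	PCPU(member)	%fs:GD_ ## member

	/* the example above, movl %eax,PCPU(CURPROC), thus expands to
	 * movl %eax,%fs:GD_CURPROC */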
2000-12-13 09:23:53 +00:00
Peter Wemm
5ee171d264 Clean up some leftover lint from the old interrupt system.
Also, while here, run up to 32 interrupt sources on APIC systems.
Normalize INTREN/INTRDIS so they are the same on both UP and SMP systems
rather than sometimes a macro, and sometimes a function.

Reviewed by:  jhb, jakeb
2000-12-04 21:15:14 +00:00
Jake Burkholder
21ad98bca8 Change doreti to take a trapframe instead of an intrframe.
Remove associated pushes of dummy units to convert frame.

Reviewed by:	jhb
2000-12-01 02:09:45 +00:00
John Baldwin
6c56727456 - Change fast interrupts on x86 to push a full interrupt frame and to
return through doreti to handle ast's.  This is necessary for the
  clock interrupts to work properly.
- Change the clock interrupts on the x86 to be fast instead of threaded.
  This is needed because both hardclock() and statclock() need to run in
  the context of the current process, not in a separate thread context.
- Kill the prevproc hack as it is no longer needed.
- We really need Giant when we call psignal(), but we don't want to block
  during the clock interrupt.  Instead, use two p_flag's in the proc struct
  to mark the current process as having a pending SIGVTALRM or a SIGPROF
  and let them be delivered during ast() when hardclock() has finished
  running.
- Remove CLKF_BASEPRI, which was #ifdef'd out on the x86 anyways.  It was
  broken on the x86 if it was turned on since cpl is gone.  Its only use
  was to bogusly run softclock() directly during hardclock() rather than
  scheduling an SWI.
- Remove the COM_LOCK simplelock and replace it with a clock_lock spin
  mutex.  Since the spin mutex already handles disabling/restoring
  interrupts appropriately, this also lets us axe all the *_intr() fu.
- Back out the hacks in the APIC_IO x86 cpu_initclocks() code to use
  temporary fast interrupts for the APIC trial.
- Add two new process flags P_ALRMPEND and P_PROFPEND to mark the pending
  signals in hardclock() that are to be delivered in ast().
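
A hedged sketch of the delayed-signal scheme, using the flag names from
the commit; the aston() call and surrounding details are illustrative:

	/* in hardclock(): remember the signal instead of calling
	 * psignal(), which would need Giant */
	p->p_flag |= P_ALRMPEND;
	aston();		/* force a trip through ast() on return */

	/* in ast(), where Giant can be acquired safely: */
	if (p->p_flag & P_ALRMPEND) {
		p->p_flag &= ~P_ALRMPEND;
		psignal(p, SIGVTALRM);
	}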

Submitted by:	jakeb (making statclock safe in a fast interrupt)
Submitted by:	cp (concept of delaying signals until ast())
2000-10-06 02:20:21 +00:00
John Baldwin
1931cf940a - Heavyweight interrupt threads on the alpha for device I/O interrupts.
- Make softinterrupts (SWI's) almost completely MI, and divorce them
  completely from the x86 hardware interrupt code.
  - The ihandlers array is now gone.  Instead, there is a MI shandlers array
    that just contains SWI handlers.
  - Most of the former machine/ipl.h files have moved to a new sys/ipl.h.
- Stub out all the spl*() functions on all architectures.

Submitted by:	dfr
2000-10-05 23:09:57 +00:00
Jason Evans
0384fff8c5 Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*().  See mutex(9) and the sketch
  below.  (Note: The alpha port is still in transition and currently uses
  both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
  preempted (i386 only).
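
A before/after sketch of the conversion, assuming the era's
mtx_enter()/mtx_exit() names and an illustrative foo_mtx:

	/* before: block interrupts to protect the data */
	s = splhigh();
	/* ... touch protected data ... */
	splx(s);

	/* after: take a mutex instead; see mutex(9) */
	mtx_enter(&foo_mtx, MTX_DEF);
	/* ... touch protected data ... */
	mtx_exit(&foo_mtx, MTX_DEF);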

Partially contributed by:	BSDi (BSD/OS)
Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
2000-09-07 01:33:02 +00:00
Bruce Evans
6b4a3ce242 Pack the SWI bits to save some time and space. 2000-05-31 16:36:20 +00:00
Doug Rabson
3e8965ac09 Add SWI_TQ_MASK to imask definition. 2000-05-29 19:40:42 +00:00
David E. O'Brien
fc81cf82e9 1. `movl' is for use with 32-bit operands. Do NOT use it with 16-bit
operands.  `movw' could be used, but instead let the assembler decide
   the right instruction to use.
2. AT&T asm syntax requires a leading '*' in front of the operand for
   indirect calls and jumps.
2000-05-10 01:24:23 +00:00
Peter Wemm
c3aac50f28 $Id$ -> $FreeBSD$ 1999-08-28 01:08:13 +00:00
Bruce Evans
eec2e836e9 Go back to the old (icu.s rev.1.7 1993) way of keeping the AST-pending
bit separate from ipending, since this is simpler and/or necessary for
SMP and may even be better for UP.

Reviewed by:	alc, luoqi, tegge
1999-07-10 15:28:01 +00:00
Bruce Evans
b50641ef9c Fixed glitches (jumps) of about 1/HZ seconds for the i8254 timecounter.
The old version only worked right when the time was read strictly
more often than every 1/HZ seconds, but we only guarantee reading
it every (1/HZ + epsilon) seconds.  Part of rev.1.126-1.127 attempted
to fix this but didn't succeed.  Detect counter rollover using the
heuristic from the old version of microtime() with additional
complications for supporting calls from fast interrupt handlers.
This works provided i8254 interrupts are not delayed by more than
1/(2*HZ) seconds.
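
A much-simplified sketch of the heuristic; the latch/read details and all
names here are illustrative, not the committed code:

	/* the i8254 counts down, so convert to an up-count since the last
	 * reload; if the reading went backwards and no clock interrupt has
	 * been serviced yet, the counter rolled over: credit a full period */
	count = i8254_max_count - i8254_latch_and_read();
	if (count < prev_count && !clock_intr_handled)
		count += i8254_max_count;
	prev_count = count;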

This needs more comments, and cleanups for the SMP case, and more
testing of the SMP case before it is merged into RELENG_3.

Tested by:		jhay
1999-05-28 14:08:59 +00:00
Luoqi Chen
5206bca10a Enable vmspace sharing on SMP. Major changes are,
- %fs register is added to trapframe and saved/restored upon kernel entry/exit.
- Per-cpu pages are no longer mapped at the same virtual address.
- Each cpu now has a separate gdt selector table. A new segment selector
  is added to point to per-cpu pages, per-cpu global variables are now
  accessed through this new selector (%fs). The selectors in gdt table are
  rearranged for cache line optimization.
- fast_vfork is now on as default for both UP and SMP.
- Some aio code cleanup.

Reviewed by:	Alan Cox	<alc@cs.rice.edu>
		John Dyson	<dyson@iquest.net>
		Julian Elischer	<julian@whistel.com>
		Bruce Evans	<bde@zeta.org.au>
		David Greenman	<dg@root.com>
1999-04-28 01:04:33 +00:00
Bruce Evans
6a9df7a421 Generate intrnames[] dynamically. This should be new-bus friendly.
Old version reviewed by: se
1999-04-14 14:26:36 +00:00
Bruce Evans
92971f1fd7 Register tty software interrupt handlers at run time using register_swi()
instead of at compile time using ifdefs.

Use _swi_null instead of dummycamisr.  CAM and dpt should call
register_swi() instead of hacking on ihandlers[] directly.
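
A sketch of the run-time registration, assuming the register_swi(intr,
handler) shape this commit describes:

	/* at attach time, instead of a compile-time ihandlers[] slot: */
	register_swi(SWI_TTY, siopoll);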
1998-08-11 17:01:32 +00:00
Bruce Evans
18c5a6c435 Implemented dynamic registration of software interrupt handlers. Not
used yet.

Use dummy SWI handlers to avoid some checks for null pointers.
1998-08-11 15:08:13 +00:00
Justin T. Gibbs
99117bb6ec Addition of splsoftvm and a VM SWI to handle bus dma related callbacks.
This SWI may be useful for other deferred VM tasks.
1998-01-15 07:34:01 +00:00
Justin T. Gibbs
643b3d0cf2 Fix a serious bug I introduced while adding in support for CAM interrupts.
It seems I didn't count my 0's properly when adding the new masks into
icu_vector.s, pushing SWI_AST_MASK off the end of the array and screwing
up the indexing for SWI_CLOCK_MASK.

Fix the bug in icu_vector.s and also reformat the code in both icu_vector.s and
apic_vector.s so that it will be much harder to make the same mistake in
the future.

Submitted by:	Bruce Evans <bde@zeta.org.au>
1997-09-28 19:30:01 +00:00
Justin T. Gibbs
02a199102d aha1542.c aic6360.c cy.c fd.c ft.c
if_ie.c if_wl.c if_zp.c isa.c isa_device.h
labpc.c mcd.c ncr5380.c scd.c seagate.c si.c
sio.c tw.c ultra14f.c wcd.c wd.c:

	Update for changes in the callout interface.

apic_vector.s icu_vector.s ipl.s ipl_funcs.c:

	Add CAM software/hardware interrupt support.
1997-09-21 21:41:49 +00:00
Peter Wemm
421f9ee1ef Change an assemble-time divide into a shift. Under binutils-2.8 gas in elf
mode, the slash is a comment leader, while under non-elf it is a divide
symbol (what a concept! :-).  Theoretically, #APP/#NO_APP can change this
but that doesn't seem to mesh too well with macros and line continuation.
1997-09-08 06:40:58 +00:00
Steve Passe
7f6ab10763 Removed the defunct GET_MPLOCK/REL_MPLOCK macros.
These are no-ops for UP, and should have been removed when vector.s
was split into UP and SMP subsets.
1997-07-24 03:24:57 +00:00
Peter Wemm
0589fe3b6e The SWI_NET_MASK and SWI_TTY_MASK handlers are now back adjacent to the
top of the hardware interrupt handlers.  Apparently this is slightly
faster with the bit scanning instruction that looks these up - this set of
changes reverts the original change.

Reviewed by: bde
1997-05-31 08:59:51 +00:00
Steve Passe
81c1993f87 Split vector.s into UP and SMP specific files:
- vector.s		<- stub called by i386/exception.s
 - icu_vector.s		<- UP
 - apic_vector.s	<- SMP

Split icu.s into UP and SMP specific files:
 - ipl.s		<- stub called by i386/exception.s (formerly icu.s)
 - icu_ipl.s		<- UP
 - apic_ipl.s		<- SMP

This was done in preparation for massive changes to the SMP INTerrupt
mechanisms.  More fine tuning, such as merging ipl.s into exception.s,
may be appropriate.
1997-05-26 17:58:27 +00:00