Commit Graph

30 Commits

Marius Strobl
4cf9e21ed8 - Give PIL_PREEMPT the lowest priority just above low/stray interrupts.
The reason for this is that the SPARC v9 architecture allows nested
  interrupts of higher priority/level than that of the current interrupt
  to occur (and we can't just entirely bypass this model; also, at least
  for tick interrupts, doing so wouldn't be wise). However, when a
  preemption interrupt interrupts another interrupt of lower priority,
  e.g. PIL_ITHREAD, and that one in turn is nested by a third interrupt,
  e.g. PIL_TICK, with SCHED_ULE the execution of interrupts higher than
  PIL_PREEMPT may be migrated to another CPU. In particular, tl1_ret(),
  which is responsible for restoring the state of the CPU prior to entry
  to the interrupt based on the (also migrated) trap frame, is then run
  on a CPU which actually didn't receive the interrupt in question,
  causing an inappropriate processor interrupt level to be "restored".
  In turn, this causes interrupts of the first level, i.e. PIL_ITHREAD
  in the above scenario, to be blocked on the target of the migration
  until the correct PIL happens to be restored on that CPU again.
  Making PIL_PREEMPT the lowest real priority effectively prevents
  this scenario from happening, as preemption interrupts can no longer
  interrupt any other interrupt besides stray ones (which is no issue).
  Thanks to attilio@ and especially mav@ for helping me to understand
  this problem at the 201208DevSummit.
- Give PIL_STOP (which is also used for IPI_STOP_HARD, given that there's
  no real equivalent to NMIs on SPARC v9) the highest possible priority
  just below the hardwired PIL_TICK, so it has a chance to interrupt
  more things.

MFC after:	1 week
2012-10-20 12:07:48 +00:00
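
A rough sketch of the resulting ordering (the names follow the commit
message, but the numeric levels are illustrative assumptions, not
necessarily the exact values in sys/sparc64/include/intr_machdep.h):

/* Illustrative priority levels only; see intr_machdep.h for the real ones. */
#define	PIL_LOW		1	/* stray/low priority interrupts */
#define	PIL_PREEMPT	2	/* preemption IPIs: lowest real priority */
#define	PIL_ITHREAD	3	/* interrupts dispatched to ithreads */
#define	PIL_RENDEZVOUS	4	/* SMP rendezvous IPIs */
#define	PIL_AST		5	/* asynchronous trap IPIs */
#define	PIL_STOP	14	/* stop-CPU IPIs, incl. IPI_STOP_HARD */
#define	PIL_TICK	15	/* hardwired tick interrupt */

With PIL_PREEMPT at the bottom of the real priorities, a preemption IPI can
no longer nest on top of a PIL_ITHREAD or PIL_TICK handler, so the
migration scenario described above cannot arise.
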
Attilio Rao
af2bdacafb Reverts r234074,234105,234564,234723,234989,235231-235232 and part of
r234247.
Use, instead, the static initializer introduced in r239923 for x86 and
sparc64 intr_cpus, unwinding the code to the initial version.

Reviewed by:	marius
2012-10-09 12:22:43 +00:00
Marius Strobl
d0cea659af Fix mismerge in r235231. 2012-05-10 15:23:20 +00:00
Marius Strobl
a6125979d3 Merge r234989 from x86:
Revert part of r234723 by re-enabling the SMP protection for intr_bind().
2012-05-10 15:17:21 +00:00
Attilio Rao
70dbd1604c Clean up the SMP dependency in the intr* MD KPI, removing a cause of
discrepancy between modules and the kernel, and deal with the SMP
differences within the functions themselves.

As an added bonus, this also improves code readability.

Requested by:	gibbs
Reviewed by:	jhb, marius
MFC after:	1 week
2012-04-26 20:24:25 +00:00
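
A minimal sketch of the resulting pattern (intr_vector_to_event() is a
hypothetical stand-in for the MD vector lookup): the intr_bind() symbol
exists for both SMP and non-SMP kernels, so modules link against one
consistent KPI, and the SMP-only work is confined to the function body.

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/bus.h>
#include <sys/interrupt.h>

struct intr_event *intr_vector_to_event(int vector);	/* hypothetical */

int
intr_bind(int vector, u_char cpu)
{
#ifdef SMP
	struct intr_event *ie;

	ie = intr_vector_to_event(vector);
	if (ie == NULL)
		return (EINVAL);
	return (intr_event_bind(ie, cpu));
#else
	/* Uniprocessor kernel: the KPI is still present, just a stub. */
	return (EOPNOTSUPP);
#endif
}
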
Marius Strobl
f4ff513c4b Reserve INTR_MD[1-4], similarly to what BUS_DMA_BUS[1-4] are intended for,
and switch sparc64 to using the first one for the bus error filter handlers
of bridge drivers instead of (ab)using INTR_FAST for that, so we can
eventually get rid of the latter.

Reviewed by:	jhb
MFC after:	1 month
2011-01-04 16:11:32 +00:00
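
A sketch of the intended usage (the INTR_BRIDGE alias, the foo_* names and
the softc layout are illustrative assumptions, not verbatim driver code):

#include <sys/param.h>
#include <sys/bus.h>

/* Alias one of the newly reserved MD flags for bridge bus error filters. */
#define	INTR_BRIDGE	INTR_MD1

struct foo_softc {
	struct resource	*sc_irq_res;
	void		*sc_ih;
};

static driver_filter_t foo_error_filter;

static int
foo_setup_error_intr(device_t dev, struct foo_softc *sc)
{

	/*
	 * Register the bridge's bus error filter with the reserved MD
	 * flag rather than (ab)using INTR_FAST for it.
	 */
	return (bus_setup_intr(dev, sc->sc_irq_res,
	    INTR_TYPE_MISC | INTR_BRIDGE, foo_error_filter, NULL, sc,
	    &sc->sc_ih));
}
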
Alexander Motin
a157e42516 Refactor the timer management code, giving priority to one-shot operation mode.
The main goal of this is to generate timer interrupts only when there is
some work to do. When the CPU is busy, interrupts are generated at the full
rate of hz + stathz to fulfill scheduler and timekeeping requirements. But
when the CPU is idle, only the minimum set of interrupts needed to handle
scheduled callouts (down to 8 interrupts per second per CPU now) is
executed. This significantly increases idle CPU sleep time, increasing the
effect of static power-saving technologies. It should also reduce host CPU
load on virtualized systems when the guest system is idle.

There is a set of tunables, also available as writable sysctls, that
control the behavior of the event timer subsystem:
  kern.eventtimer.timer - chooses the event timer hardware to use. On x86
there are up to 4 different kinds of timers. Depending on whether the
chosen timer is per-CPU, the behavior of the other options differs
slightly.
  kern.eventtimer.periodic - chooses between periodic and one-shot
operation mode. In periodic mode, the current timer hardware is taken as
the only source of time for time events. This mode is quite similar to the
previous kernel behavior. One-shot mode instead uses the currently selected
time counter hardware to schedule all needed events one by one and programs
the timer to generate an interrupt at exactly the specified time. The
default value depends on the capabilities of the chosen timer, but one-shot
mode is preferred unless another mode is forced by the user or the
hardware.
  kern.eventtimer.singlemul - in periodic mode, specifies how many times
higher the timer frequency should be, in order not to strictly alias the
hardclock() and statclock() events. Default values are 2 and 4, but they
can be reduced to 1 if extra interrupts are unwanted.
  kern.eventtimer.idletick - makes each CPU receive every timer interrupt
independently of whether it is busy or not. By default this option is
disabled. If the chosen timer is per-CPU and runs in periodic mode, this
option has no effect - all interrupts are generated anyway.

Since this patch modifies cpu_idle() on some platforms, I have also
refactored the x86 one. It now makes use of MONITOR/MWAIT instructions
(if supported) under high sleep/wakeup rates, as a fast alternative to the
other methods. This allows the SMP scheduler to wake up sleeping CPUs much
faster without using an IPI, significantly increasing performance on some
highly task-switching loads.

Tested by:	many (on i386, amd64, sparc64 and powerpc)
H/W donated by:	Gheorghe Ardelean
Sponsored by:	iXsystems, Inc.
2010-09-13 07:25:35 +00:00
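
As a small usage example (a sketch, not part of the commit), the tunables
described above can be read and changed at run time from C via
sysctlbyname(3), just as with sysctl(8); writing them requires root:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	char timer[64];
	size_t len = sizeof(timer);
	int periodic, oneshot = 0;

	/* Which event timer hardware is currently selected? */
	if (sysctlbyname("kern.eventtimer.timer", timer, &len, NULL, 0) == 0)
		printf("event timer: %s\n", timer);

	/* Request one-shot mode (kern.eventtimer.periodic = 0). */
	len = sizeof(periodic);
	if (sysctlbyname("kern.eventtimer.periodic", &periodic, &len,
	    &oneshot, sizeof(oneshot)) == 0)
		printf("periodic was %d, one-shot mode requested\n", periodic);
	return (0);
}
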
Alexander Motin
6c8dd81fa9 Adapt sparc64 and sun4v timer code for the new event timers infrastructure.
Reviewed by:	marius@
2010-07-29 12:08:46 +00:00
Alexander Motin
a448e0d827 Allocate a proper amount of memory for interrupt names on sparc64 and
sun4v, the same as done on other architectures. This removes garbage from
`vmstat -ia` output.

Reviewed by:	marius@
2010-07-16 22:09:29 +00:00
Marius Strobl
3438bdc53e Merge from amd64/i386:
Implement support for interrupt descriptions.
2009-12-24 15:43:37 +00:00
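
A brief sketch of what the merged support looks like from a driver,
assuming the bus_describe_intr(9) interface as on amd64/i386 (the foo_*
names and softc layout are hypothetical):

#include <sys/param.h>
#include <sys/bus.h>

struct foo_softc {
	struct resource	*rx_irq;
	void		*rx_ih;
};

static driver_intr_t foo_rx_intr;

static int
foo_setup_rx_intr(device_t dev, struct foo_softc *sc)
{
	int error;

	error = bus_setup_intr(dev, sc->rx_irq, INTR_TYPE_NET | INTR_MPSAFE,
	    NULL, foo_rx_intr, sc, &sc->rx_ih);
	if (error != 0)
		return (error);
	/* The description shows up as a suffix in vmstat -i output. */
	bus_describe_intr(dev, sc->rx_irq, sc->rx_ih, "rx");
	return (0);
}
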
Marius Strobl
e363ea0fab Use the interrupt level right below PIL_FAST for executing interrupt
filters instead of PIL_FAST, and allow special filters and handlers
for interrupts which need to be able to interrupt even filters, e.g.
bus error interrupts, to be registered with the revived INTR_FAST
at PIL_FAST.
2008-11-19 22:12:32 +00:00
Marius Strobl
526bd70425 o Rename ic_eoi to ic_clear to emphasize that the functions it points
  to don't send an EOI which, as on amd64/i386, blocks all interrupts
  on the relevant interrupt controller.
o Replace the post_filter and post_inthread hooks registered when
  creating the interrupt events with just ic_clear, as on sparc64 we
  don't need to do any disable->EOI->enable dance to unblock all but
  the relevant interrupt while running the filter or handler; just
  not clearing the interrupt already has the same effect.
o Merge from amd64/i386:
  - Split the intr_table_lock into an sx lock used for most things,
    and a spin lock to protect intrcnt_index.
  - Add support for binding interrupts to CPUs, including for the
    bus_bind_intr(9) interface, an assign_cpu hook and initially
    shuffling interrupts around in a round-robin fashion.

Reviewed by:	jhb
MFC after:	1 month
2008-04-23 20:04:38 +00:00
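
A short sketch of the newly merged CPU binding from a driver's point of
view (hypothetical foo_* driver; bus_bind_intr(9) is the interface named
above):

#include <sys/param.h>
#include <sys/bus.h>

struct foo_softc {
	struct resource	*irq_res;
	void		*ih;
};

static driver_intr_t foo_intr;

static int
foo_setup_intr(device_t dev, struct foo_softc *sc)
{
	int error;

	error = bus_setup_intr(dev, sc->irq_res, INTR_TYPE_NET | INTR_MPSAFE,
	    NULL, foo_intr, sc, &sc->ih);
	if (error != 0)
		return (error);
	/*
	 * Pin the vector to CPU 1; otherwise the MD code assigns it in
	 * round-robin fashion at setup time.
	 */
	return (bus_bind_intr(dev, sc->irq_res, 1));
}
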
Marius Strobl
a6c165e468 - Add support for IPI_PREEMPT. [1]
- Add my copyright to mp_machdep.c for having implemented support for
  USIII and up and some fixes.

Obtained from:	sun4v (modulo style(9) bugs) [1]
2008-04-09 21:14:01 +00:00
Marius Strobl
7439368f60 o Revamp the sparc64 interrupt code in order to be able to interface
with the INTR_FILTER-enabled MI code. Basically this consists of
  registering an interrupt controller (of which there can be multiple
  and optionally different ones either per host-to-foo bridge or shared
  amongst host-to-foo bridges in any one machine) along with an interrupt
  vector as specific argument for all the interrupt vectors used by a
  given host-to-foo bridge (roughly similar to registering interrupt
  sources on amd64 and i386), providing functions to enable, clear and
  disable the interrupts of the children beneath the bridge.
  This also includes:
  - No longer entering a critical section in tl0_intr() and tl1_intr()
    for executing interrupt handlers but rather letting the handlers enter
    it themselves, so in the case of intr_event_handle() we don't enter
    a nested critical section.
  - Adding infrastructure for binding delivery of interrupt vectors to
    specific CPUs which later on can be interfaced with the code from
    amd64/i386 for binding interrupts to specific CPUs.
  - Getting rid of the wrapper hack introduced along the lines of the
    API changes for INTR_FILTER, which as a side effect caused interrupts
    associated only with ithread handlers to get the elevated priority
    of those associated with filters ("fast handlers") (this removes the
    hack in the non-INTR_FILTER case as well).
  - Disabling (by not clearing) an interrupt in the interrupt controller
    until all associated handlers have been executed, which is crucial
    for the typical locking strategy of NIC drivers in order to work
    correctly in case of shared interrupts. This was a more or less
    theoretical problem on sparc64 though, as shared interrupts are
    rather uncommon there except for the on-board SCCs and UARTs.
  Note that due to the behavior of at least some of the interrupt
  controllers used on sparc64 an enable+EOI instead of a disable+EOI
  approach (as implied by the INTR_FILTER MI code and implemented on
  other architectures) is used as the latter can cause lost interrupts
  or in the worst case interrupt starvation.
o Correct a typo in sbus_alloc_resource() which caused (pass-through)
  allocations to only work down to the grandchildren of the bus, which
  wasn't a real problem so far, as we don't support any devices which are
  great-grandchildren or greater of a U2S bridge, yet.
o In fhc(4) use bus_{read,write}_4() instead of bus_space_{read,write}_4()
  in order to get rid of sc_bh and sc_bt in the fhc_softc. Also get rid
  of some other unneeded members in fhc_softc.

Reviewed by:	marcel (earlier version)
Approved by:	re (kensmith)
2007-09-06 19:16:30 +00:00
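
A rough sketch of the shape of this registration scheme (all names here
are illustrative, not the exact sparc64 KPI): a bridge driver supplies a
small set of function pointers plus a per-vector argument for every
interrupt vector it routes.

#include <sys/param.h>

static void foo_intr_enable(void *);
static void foo_intr_clear(void *);
static void foo_intr_disable(void *);

struct foo_intr_controller {
	void	(*ic_enable)(void *);	/* unmask the vector */
	void	(*ic_clear)(void *);	/* clear it once handled */
	void	(*ic_disable)(void *);	/* mask the vector */
};

/* Stands in for the real registration call provided by the MD code. */
void foo_intr_controller_register(u_int vec,
    const struct foo_intr_controller *ic, void *icarg);

/* Called by a host-to-foo bridge driver for each vector it owns. */
static void
foo_register_vector(u_int vec, void *icarg)
{
	static const struct foo_intr_controller foo_ic = {
		.ic_enable	= foo_intr_enable,
		.ic_clear	= foo_intr_clear,
		.ic_disable	= foo_intr_disable,
	};

	foo_intr_controller_register(vec, &foo_ic, icarg);
}
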
Paolo Pisati
ef544f6312 o break newbus api: add a new argument of type driver_filter_t to
bus_setup_intr()

o add an int return code to all fast handlers

o retire INTR_FAST/IH_FAST

For more info: http://docs.freebsd.org/cgi/getmsg.cgi?fetch=465712+0+current/freebsd-current

Reviewed by: many
Approved by: re@
2007-02-23 12:19:07 +00:00
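
A sketch of the API after this change (hypothetical foo_* driver; the
filter presumably returns the FILTER_* status codes from sys/bus.h):

#include <sys/param.h>
#include <sys/bus.h>

struct foo_softc {
	struct resource	*irq_res;
	void		*ih;
};

static int foo_intr_pending(struct foo_softc *);	/* hypothetical */
static void foo_ack_intr(struct foo_softc *);		/* hypothetical */

static int
foo_filter(void *arg)
{
	struct foo_softc *sc = arg;

	if (!foo_intr_pending(sc))
		return (FILTER_STRAY);
	foo_ack_intr(sc);
	return (FILTER_HANDLED);
}

static int
foo_attach_intr(device_t dev, struct foo_softc *sc)
{

	/* The filter is now its own argument instead of an INTR_FAST flag. */
	return (bus_setup_intr(dev, sc->irq_res, INTR_TYPE_MISC | INTR_MPSAFE,
	    foo_filter, NULL, sc, &sc->ih));
}
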
Marius Strobl
0ca3609e30 Convert the remainder of the low-hanging fruit regarding including
headers in .S files directly rather than getting to their macros through
genassym.c/assym.s, so there are fewer headers genassym.c has to be
kept in sync with.
While at it, fix some style(9) bugs (indentation, prototype format,
header sorting, etc.) and remove trailing whitespace.
2007-01-19 11:15:34 +00:00
John Baldwin
e0f66ef861 Reorganize the interrupt handling code a bit to make a few things cleaner
and increase flexibility to allow various different approaches to be tried
in the future.
- Split struct ithd up into two pieces.  struct intr_event holds the list
  of interrupt handlers associated with interrupt sources.
  struct intr_thread contains the data relative to an interrupt thread.
  Currently we still provide a 1:1 relationship of events to threads
  with the exception that events only have an associated thread if there
  is at least one threaded interrupt handler attached to the event.  This
  means that on x86 we no longer have 4 bazillion interrupt threads with
  no handlers.  It also means that interrupt events with only INTR_FAST
  handlers no longer have an associated thread either.
- Renamed struct intrhand to struct intr_handler to follow the struct
  intr_foo naming convention.  This did require renaming the powerpc
  MD struct intr_handler to struct ppc_intr_handler.
- INTR_FAST no longer implies INTR_EXCL on all architectures except for
  powerpc.  This means that multiple INTR_FAST handlers can attach to the
  same interrupt and that INTR_FAST and non-INTR_FAST handlers can attach
  to the same interrupt.  Sharing INTR_FAST handlers may not always be
  desirable, but having sio(4) and uhci(4) fight over an IRQ isn't fun
  either.  Drivers can always still use INTR_EXCL to ask for an interrupt
  exclusively.  The way this sharing works is that when an interrupt
  comes in, all the INTR_FAST handlers are executed first, and if any
  threaded handlers exist, the interrupt thread is scheduled afterwards.
  This type of layout also makes it possible to investigate using interrupt
  filters a la OS X where the filter determines whether or not its companion
  threaded handler should run.
- Aside from the INTR_FAST changes above, the impact on MD interrupt code
  is mostly just 's/ithread/intr_event/'.
- A new MI ddb command 'show intrs' walks the list of interrupt events
  dumping their state.  It also has a '/v' verbose switch which dumps
  info about all of the handlers attached to each event.
- We currently don't destroy an interrupt thread when the last threaded
  handler is removed because it would suck for things like ppbus(8)'s
  braindead behavior.  The code is present, though, it is just under
  #if 0 for now.
- Move the code to actually execute the threaded handlers for an interrupt
  event into a separate function so that ithread_loop() becomes more
  readable.  Previously this code was all in the middle of ithread_loop()
  and indented halfway across the screen.
- Made struct intr_thread private to kern_intr.c and replaced td_ithd
  with a thread private flag TDP_ITHREAD.
- In statclock, check curthread against idlethread directly rather than
  curthread's proc against idlethread's proc. (Not really related to intr
  changes)

Tested on:	alpha, amd64, i386, sparc64
Tested on:	arm, ia64 (older version of patch by cognet and marcel)
2005-10-25 19:48:48 +00:00
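
A greatly simplified sketch of the split (field names abbreviated and
illustrative, not the exact kern_intr.c definitions):

#include <sys/param.h>
#include <sys/queue.h>
#include <sys/bus.h>

struct intr_thread;

struct intr_handler {
	driver_filter_t	*ih_filter;	/* runs in interrupt context, or NULL */
	driver_intr_t	*ih_handler;	/* runs in the ithread, or NULL */
	void		*ih_argument;
	TAILQ_ENTRY(intr_handler) ih_next;
};

struct intr_event {
	TAILQ_HEAD(, intr_handler) ie_handlers;	/* all attached handlers */
	char			ie_name[64];	/* shown by `show intrs' */
	struct intr_thread	*ie_thread;	/* NULL unless a threaded
						   handler is attached */
};

struct intr_thread {
	struct intr_event	*it_event;	/* back pointer */
	struct thread		*it_thread;	/* flagged TDP_ITHREAD */
	int			it_need;	/* a run has been requested */
};
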
John-Mark Gurney
e93581dc7a add support for interrupt counting on sparc64. This copies part of the
code from i386.  The code has a slight bogon in that interrupts are counted
twice: once on the ithread dispatch and once on the dispatch for the vector.

vmstat -i and systat -vm now contain interrupt counts.

Reviewed by:	jake
2003-07-16 00:08:43 +00:00
Jake Burkholder
3297e8c990 Renamed intr_enqueue to intr_vector and intr_dequeue to intr_fast, to
better reflect how they are called.
2002-09-28 03:06:35 +00:00
Jake Burkholder
f7e0360261 Forward declare struct trapframe. 2002-05-29 19:25:14 +00:00
Jake Burkholder
4d574756ac Convert the interrupt queue from an array to a linked list. Implement
intr_dequeue in asm so that it can easily be modified to do light weight
context switching.
2002-05-25 02:39:28 +00:00
Jake Burkholder
856316e9c6 Forward declare struct trapframe. 2002-05-20 16:12:35 +00:00
Thomas Moestl
9a60579c15 Avoid crashing in early boot when WITNESS is enabled by moving the
mtx_init() for intr_table_lock after the globaldata pointer
initialization.
2002-02-13 16:36:44 +00:00
Jake Burkholder
6deb695c1d Add initial smp support. This gets as far as allowing the secondary
cpu(s) into the kernel, and sync-ing them up to "kernel" mode so we can
send them ipis, which also work.

Thanks to John Baldwin for providing me with access to the hardware
that made this possible.

Parts obtained from:	bsd/os
2002-01-08 05:50:26 +00:00
Jake Burkholder
dc551940fc Make it clear that IH_SHIFT is expected to be that of a pointer.
Make intr_handlers an array of function pointers instead of
small structures.
2001-12-29 06:57:55 +00:00
Jake Burkholder
6c3dcb9735 Change the stray count in struct intr_vector to a vector number that can
be used to index tables of counters.
Remove the intr_dispatch() inline; it is implemented directly in tl*_intr now.
Count stray interrupts in a table of counters like intrcnt.
Disable interrupts briefly when setting up the interrupt vector table.
We must disable interrupts completely, not just raise the pil.
Pass pointers to the intr_vector structures rather than a vector number
to sched_ithd and intr_stray.
2001-10-20 16:03:41 +00:00
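
An illustrative sketch of the counting arrangement described (field names
and sizes are assumptions): each vector structure carries its own index,
so both regular and stray interrupts bump a slot in a flat counter table,
and the dispatch helpers take the structure pointer rather than the raw
vector number.

#include <sys/param.h>

#define	FOO_IV_MAX	256		/* illustrative vector count */

struct intr_vector {
	void	(*iv_func)(void *);	/* dispatch target */
	void	*iv_arg;
	u_int	iv_vec;			/* index into the counter tables */
};

static u_long	foo_intrcnt[FOO_IV_MAX];	/* per-vector counts */
static u_long	foo_straycnt[FOO_IV_MAX];	/* strays, counted alike */

static void
foo_intr_dispatch(struct intr_vector *iv)
{

	foo_intrcnt[iv->iv_vec]++;
	iv->iv_func(iv->iv_arg);
}

static void
foo_intr_stray(struct intr_vector *iv)
{

	foo_straycnt[iv->iv_vec]++;
}
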
Thomas Moestl
84bcb99195 Add inthand_add() and inthand_remove() for use by the MD bus code and
some glue code.
2001-10-12 16:06:41 +00:00
Jake Burkholder
e5e8823f37 Split the low level trap code into trap, interrupt and syscall; it's
easier and hopefully this code is done changing radically.

Don't use the mmu tlb register to address the kernel page table, nor
the 8k pointer register.  The hardware will do some of the page table
lookup by storing the base address in an internal register and
calculating the address of the tte in the table.  However it is limited
to a 1 meg tsb, which only maps 512 megs.  The kernel page table only
has one level, so it's easy to just do it by hand, which has the advantage
of supporting arbitrary amounts of kvm and only costs a few more instructions.

Increase kvm to 1 gig now that it's easy to do so and so we don't waste
most of a 4 meg page.

Fix some traces.  Fix more proc locking.

Call tsb_stte_promote if we get a soft fault on a mapping in the upper
levels of the tsb.  If there is an invalid or unreferenced mapping
in the primary tsb, it will be replaced.

Immediately fail for faults occurring in {f,s}uswintr.
2001-09-30 19:41:20 +00:00
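
A conceptual sketch of the by-hand lookup this describes (constants and
names are illustrative, not the real sparc64 pmap/tsb code): with a
single-level kernel page table the TTE address is just a shift, a mask and
an array index, so the table can be sized for arbitrary amounts of kvm.

#include <stdint.h>

#define	FOO_PAGE_SHIFT	13		/* 8k pages, as on sparc64 */
#define	FOO_KPT_ENTRIES	(1UL << 17)	/* illustrative table size */
#define	FOO_KPT_MASK	(FOO_KPT_ENTRIES - 1)

struct tte {
	uint64_t	tte_tag;
	uint64_t	tte_data;
};

extern struct tte kernel_ttes[FOO_KPT_ENTRIES];	/* one-level kernel table */

/* A real implementation would first subtract the kernel VA base. */
static inline struct tte *
foo_kpt_lookup(uintptr_t va)
{

	return (&kernel_ttes[(va >> FOO_PAGE_SHIFT) & FOO_KPT_MASK]);
}
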
David E. O'Brien
bcc9d95fe0 style(9) the structure definitions. 2001-09-05 05:18:35 +00:00
Jake Burkholder
228fa56391 Add early code to support interrupts. 2001-08-10 04:48:48 +00:00