Commit Graph

15 Commits

John Baldwin
6c56727456 - Change fast interrupts on x86 to push a full interrupt frame and to
  return through doreti to handle ASTs.  This is necessary for the
  clock interrupts to work properly.
- Change the clock interrupts on the x86 to be fast instead of threaded.
  This is needed because both hardclock() and statclock() need to run in
  the context of the current process, not in a separate thread context.
- Kill the prevproc hack as it is no longer needed.
- We really need Giant when we call psignal(), but we don't want to block
  during the clock interrupt.  Instead, use two p_flag bits in the proc
  struct to mark the current process as having a pending SIGVTALRM or
  SIGPROF, and let them be delivered during ast() when hardclock() has
  finished running.
- Remove CLKF_BASEPRI, which was #ifdef'd out on the x86 anyway.  It was
  broken on the x86 if it was turned on, since cpl is gone.  Its only use
  was to bogusly run softclock() directly during hardclock() rather than
  scheduling an SWI.
- Remove the COM_LOCK simplelock and replace it with a clock_lock spin
  mutex.  Since the spin mutex already handles disabling/restoring
  interrupts appropriately, this also lets us axe all the *_intr() fu.
- Back out the hacks in the APIC_IO x86 cpu_initclocks() code to use
  temporary fast interrupts for the APIC trial.
- Add two new process flags, P_ALRMPEND and P_PROFPEND, to mark the pending
  signals in hardclock() that are to be delivered in ast() (see the sketch
  after this list).

Submitted by:	jakeb (making statclock safe in a fast interrupt)
Submitted by:	cp (concept of delaying signals until ast())
2000-10-06 02:20:21 +00:00
Paul Saab
92b123a002 Move MAXCPU from machine/smp.h to machine/param.h to fix breakage
with !SMP kernels.  Also, replace NCPUS with MAXCPU since they are
redundant.
2000-09-23 12:18:06 +00:00
Jason Evans
0384fff8c5 Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*().  See mutex(9) and the usage
  sketch after this list.  (Note: The alpha port is still in transition and
  currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
  preempted (i386 only).

Partially contributed by:	BSDi (BSD/OS)
Submissions by (at least):	cp, dfr, dillon, grog, jake, jhb, sheldonh
2000-09-07 01:33:02 +00:00
Matthew Dillon
36e9f877df Commit major SMP cleanups and move the BGL (big giant lock) in the
syscall path inward.  A system call may select whether it needs the MP
lock or not (the default being that it does need it).
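
A hedged sketch of that per-syscall choice; the SYF_MPSAFE flag, the
sysent_sketch struct, and the get_mplock()/rel_mplock() helpers are
illustrative names, not necessarily the exact symbols in the tree at the
time.

    struct proc;

    struct sysent_sketch {
        int   sy_flags;                     /* SYF_MPSAFE: skip the BGL */
        int (*sy_call)(struct proc *, void *);
    };

    #define SYF_MPSAFE 0x1

    void get_mplock(void);                  /* acquire the big giant lock */
    void rel_mplock(void);                  /* release it */

    static int
    syscall_dispatch(struct sysent_sketch *se, struct proc *p, void *args)
    {
        int error, mplocked = 0;

        if ((se->sy_flags & SYF_MPSAFE) == 0) {
            get_mplock();                   /* default: assume not MP safe */
            mplocked = 1;
        }
        error = se->sy_call(p, args);
        if (mplocked)
            rel_mplock();
        return (error);
    }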

A great deal of conditional SMP code for various dead-ended experiments
has been removed.  'cil' and 'cml' have been removed entirely, and the
locking around the cpl has been removed.  The conditional
separately-locked fast-interrupt code has been removed, meaning that
interrupts must hold the CPL now (but they pretty much had to anyway).
Another reason for doing this is that the original separate lock for
interrupts just doesn't apply to the interrupt thread mechanism being
contemplated.

Modifications to the cpl may now ONLY occur while holding the MP
lock.  For example, if an otherwise MP-safe syscall needs to mess with
the cpl, it must hold the MP lock for the duration and must (as usual)
save/restore the cpl in a nested fashion.

This is precursor work for the real meat coming later: avoiding having
to hold the MP lock for common syscalls, I/Os, and interrupt threads.
It is expected that the spl mechanisms and the new interrupt threading
mechanisms will be able to run in tandem, allowing a slow piecemeal
transition to occur.

This patch should result in a moderate performance improvement due to
the considerable amount of code that has been removed from the critical
path, especially the simplification of the spl*() calls.  The real
performance gains will come later.

Approved by: jkh
Reviewed by: current, bde (exception.s)
Some work taken from: luoqi's patch
2000-03-28 07:16:37 +00:00
Matthew Dillon
f94f8efc61 Optimize two cases in the MP locking code.  First, it is not necessary
to use a locked cmpxchg when unlocking a lock that we already hold, since
nobody else can touch the lock while we hold it.  Second, it is not
necessary to use a locked cmpxchg when locking a lock that we already
hold, for the same reason.  These changes will allow MP locks to be used
recursively without impacting performance.
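
A user-space illustration of that fast path, using C11 atomics; the
rec_lock type and the function names are invented for the sketch, and
pthread ids stand in for CPU ids.  The recursion and release paths touch
no locked (atomic read-modify-write) instruction because only the holder
can reach them.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <pthread.h>

    struct rec_lock {
        atomic_uintptr_t owner;   /* 0 == unowned, else holder's thread id */
        int              depth;   /* recursion count; owner-private */
    };

    static void
    rec_lock_acquire(struct rec_lock *l)
    {
        uintptr_t self = (uintptr_t)pthread_self();
        uintptr_t expected = 0;

        if (atomic_load_explicit(&l->owner, memory_order_relaxed) == self) {
            l->depth++;           /* already held: plain increment, no cmpxchg */
            return;
        }
        while (!atomic_compare_exchange_weak_explicit(&l->owner, &expected,
            self, memory_order_acquire, memory_order_relaxed))
            expected = 0;         /* contended path still pays for the cmpxchg */
        l->depth = 1;
    }

    static void
    rec_lock_release(struct rec_lock *l)
    {
        if (--l->depth == 0)      /* outermost release: a plain store suffices */
            atomic_store_explicit(&l->owner, 0, memory_order_release);
    }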

Modify two procedures that are called only by assembly and are already
NOPROF entries to pass a critical argument in %edx instead of on the
stack, removing a significant amount of code from the critical path
as a consequence.

Reviewed by:	Alfred Perlstein <bright@wintelcom.net>, Peter Wemm <peter@netplex.com.au>
1999-11-19 22:47:19 +00:00
Peter Wemm
c3aac50f28 $Id$ -> $FreeBSD$ 1999-08-28 01:08:13 +00:00
Alan Cox
a236cb64a9 Modify the macros IMASK_UNLOCK, CPL_UNLOCK, and REL_FAST_INTR_LOCK
to perform the s_unlock inline.
1999-08-23 19:14:18 +00:00
Alan Cox
e7dcbbe297 Make "s_unlock" an inline function. (Inlining this function takes
less space than calling it.  A callable version still exists for
use by some assembly code.)
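
A sketch of why inlining pays off here: releasing a simple lock is a
single store, which is shorter than the call/ret sequence needed to reach
an out-of-line copy.  The one-field struct below is a simplified model of
the historical simplelock, not its exact definition.

    struct simplelock {
        volatile int lock_data;     /* nonzero while held */
    };

    static __inline void
    s_unlock(struct simplelock *lkp)
    {
        lkp->lock_data = 0;         /* release: one store, no call overhead */
    }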
1999-08-22 05:37:18 +00:00
Kris Kennaway
e7647e6c20 Correct a couple of spelling errors in comments. 1999-07-12 15:02:51 +00:00
Poul-Henning Kamp
8184a0a4d1 Remove stuff related to microtime.s, which is gone. 1998-04-06 11:38:18 +00:00
Tor Egge
5c623cb649 Add support for low resolution SMP kernel profiling.
  - A nonprofiling version of s_lock (called s_lock_np) is used
    by mcount.

  - When profiling is active, more registers are clobbered in
    seemingly simple assembly routines. This means that some
    callers needed to save/restore extra registers.

  - The stack pointer must have space for a 'fake' return address
    in idle, to avoid stack underflow.
1997-12-15 02:18:35 +00:00
Steve Passe
20233f27f4 General cleanup of the lock pushdown code.  The pushdown levels are
grouped and enabled from machine/smptests.h:

#define PUSHDOWN_LEVEL_1
#define PUSHDOWN_LEVEL_2
#define PUSHDOWN_LEVEL_3
#define PUSHDOWN_LEVEL_4_NOT
1997-09-07 22:04:09 +00:00
Steve Passe
1de995bb1f General cleanup of the sub-system locking macros.
Eliminated the RECURSIVE_MPINTRLOCK.
clock.c and microtime use clock_lock.
sio.c and cy.c use com_lock.

Suggestions by:	Bruce Evans <bde@zeta.org.au>
1997-09-01 07:45:37 +00:00
Steve Passe
2645264a72 Debug version of simple_lock.  This will store the CPU id of the
holding CPU along with the lock.  When a CPU fails to get the lock,
it compares its own id to the holder id.  If they are the same, it
panic()s, as simple locks are binary and this would cause a deadlock.

Controlled by smptests.h: SL_DEBUG, ON by default.

Some minor cleanup.
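
An illustrative user-space version of that debug check, using C11 atomics;
the dbg_lock name and the cpuid parameter are invented for the sketch.
Storing the holder's id (plus one, so zero still means "free") in the lock
word is what lets a spinning CPU recognize its own deadlock.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct dbg_lock {
        atomic_int lock_data;       /* 0 == free, else holder cpu id + 1 */
    };

    static void
    dbg_lock_acquire(struct dbg_lock *l, int cpuid)
    {
        int expected = 0;

        while (!atomic_compare_exchange_weak_explicit(&l->lock_data,
            &expected, cpuid + 1, memory_order_acquire,
            memory_order_relaxed)) {
            if (expected == cpuid + 1) {
                /* We already hold this binary lock: guaranteed deadlock,
                 * so fail loudly (stands in for the kernel's panic()). */
                fprintf(stderr, "simple_lock: CPU %d deadlock\n", cpuid);
                abort();
            }
            expected = 0;           /* retry the acquire from scratch */
        }
    }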
1997-08-31 03:17:48 +00:00
Steve Passe
78292efeef Another round of lock pushdown.
Add a simplelock to deal with disable_intr()/enable_intr() as used in the
UP kernel.  The UP kernel expects that this is enough to guarantee
exclusive access to regions of code bracketed by these two functions.
Add a simplelock to bracket clock accesses in clock.c: clock_lock.
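
A sketch of the pattern being fixed, with illustrative names (s_lock and
s_unlock as in the historical simplelock API, and a hypothetical
clock_lock_sketch): disabling interrupts keeps only the local CPU out of
the region, so on SMP a lock must be taken as well.

    void disable_intr(void);            /* cli: blocks this CPU's interrupts */
    void enable_intr(void);             /* sti */
    struct simplelock;
    void s_lock(struct simplelock *);
    void s_unlock(struct simplelock *);

    static struct simplelock clock_lock_sketch;

    static void
    clock_region_enter(void)
    {
        disable_intr();                 /* sufficient on a UP kernel */
        s_lock(&clock_lock_sketch);     /* also needed once other CPUs exist */
    }

    static void
    clock_region_exit(void)
    {
        s_unlock(&clock_lock_sketch);
        enable_intr();
    }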

Help from:	Bruce Evans <bde@zeta.org.au>
1997-08-30 08:08:10 +00:00