578 Commits

Author SHA1 Message Date
markm
4e9c36b300 RIP <machine/lock.h>.
Some things needed bits of <i386/include/lock.h> - cy.c now has its
own (only) copy of the COM_(UN)LOCK() macros, and IMASK_(UN)LOCK()
has been moved to <i386/include/apic.h> (AKA <machine/apic.h>).
Reviewed by:	jhb
2001-02-11 10:44:09 +00:00
mjacob
7fa161d82a Temporary workaround to get things to compile. I could have updated
genassym here, but what I've also noticed is that we're dorking
with a mutex directly at the assembler level - I'm not sure that this
is wise at this stage in the SMP port - I think it's going to be much
safer for a while to do things in C until the SMP wunderkind figure out
what works and slow down this breakneck rate of change...
2001-02-10 23:22:49 +00:00
jake
34c4c58685 Clear the reschedule flag after finding it set in userret(). This
used to be in cpu_switch(), but I don't see any difference in
doing it here instead.
2001-02-10 20:33:35 +00:00
jhb
3896c29164 Reenable preemption on interrupts. My last commit accidentally reverted
it as I was playing with some other ways of doing kernel preemption.

You must still specify the PREEMPTION option in your config file to get a
preemptive kernel.
2001-02-10 02:46:50 +00:00
jhb
a9c84e00da - Make astpending and need_resched process attributes rather than CPU
attributes.  This is needed for AST's to be properly posted in a preemptive
  kernel.  They are backed by two new flags in p_sflag: PS_ASTPENDING and
  PS_NEEDRESCHED.  They are still accessed by their old macros:
  aston(), astoff(), etc.  For completeness, an astpending() macro has been
  added to check for a pending AST, and clear_resched() has been added to
  clear need_resched().
- Rename syscall2() on the x86 back to syscall() to be consistent with
  other architectures.
2001-02-10 02:20:34 +00:00
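A hedged sketch of how the per-process macros named above might expand; only the
flag and macro names are taken from the commit, and the exact definitions are
assumptions:

    /* Hypothetical reconstruction; the real definitions may differ in detail. */
    #define aston()          (curproc->p_sflag |= PS_ASTPENDING)
    #define astoff()         (curproc->p_sflag &= ~PS_ASTPENDING)
    #define astpending()     (curproc->p_sflag & PS_ASTPENDING)
    #define clear_resched()  (curproc->p_sflag &= ~PS_NEEDRESCHED)
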
jhb
4444722abe Use the MI ithread helper functions in the alpha hardware interrupt code. 2001-02-09 17:53:23 +00:00
jhb
b30904d840 - Catch up to the new swi API changes:
  - Use swi_* function names.
  - Use void * to hold cookies to handlers instead of struct intrhand *.
- In sio.c, use 'driver_name' instead of "sio" as the name of the driver
  lock to minimize diffs with cy(4).
2001-02-09 17:46:35 +00:00
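A minimal hedged illustration of the cookie change above (variable names are
hypothetical):

    /* Handler cookies are now opaque pointers rather than struct intrhand *. */
    static void *sio_fast_ih;   /* was: static struct intrhand *sio_fast_ih; */
    static void *sio_slow_ih;   /* filled in by the swi_* registration call and
                                 * passed back later when scheduling the swi */
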
jhb
6e847a265b Move the initialization of the proc lock for proc0 very early into the MD
startup code.
2001-02-09 16:25:16 +00:00
bmilekic
f364d4ac36 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

Similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
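A hedged before/after sketch of the renamed calls described above (the mutexes
foo_mtx and bar_mtx are hypothetical; the function names are the ones listed in
the commit):

    /* Sleep (MTX_DEF) lock: */
    mtx_lock(&foo_mtx);         /* was: mtx_enter(&foo_mtx, MTX_DEF); */
    mtx_unlock(&foo_mtx);       /* was: mtx_exit(&foo_mtx, MTX_DEF); */

    /* Spin (MTX_SPIN) lock: */
    mtx_lock_spin(&bar_mtx);    /* was: mtx_enter(&bar_mtx, MTX_SPIN); */
    mtx_unlock_spin(&bar_mtx);  /* was: mtx_exit(&bar_mtx, MTX_SPIN); */

    /* MTX_QUIET and MTX_NOSWITCH now go through the *_flags() wrappers: */
    mtx_lock_flags(&foo_mtx, MTX_QUIET);
    mtx_unlock_flags(&foo_mtx, MTX_NOSWITCH);
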
asmodai
010d68ca1c Fix typo: wierd -> weird. 2001-02-06 09:20:17 +00:00
jhb
c490404e3c - Minimize the amount of duplicated code for the PREEMPTION #ifdef; it now
only covers about 3-4 lines.
- Don't lower the IPL while we are on the interrupt stack.  Instead, save
  the raised IPL and change the saved IPL in sched_lock to IPL_0 before
  calling mi_switch().  When we are resumed, restore the saved IPL in
  sched_lock to the saved raised IPL so that when we release sched_lock
  we won't lower the IPL.  Without this, we would get nested interrupts
  that would overflow the kernel stack.

Tested by:	mjacob
2001-02-05 19:34:25 +00:00
peter
e014685526 Clean up some leftovers from the root mount cleanup that was done some
time ago.  FFS_ROOT and CD9660_ROOT are obsolete.
2001-02-04 15:35:10 +00:00
peter
fe7d89e3f2 All the world is not an i386. Merge rev 1.438 of i386/i386/machdep.c.
Make buffer_map a system map.
2001-02-04 07:00:47 +00:00
peter
bb46130566 Grumble, I broke this file with a vi accident before commit. :-(
Submitted by: Christian Weisgerber <naddy@mips.inka.de>
2001-02-04 04:13:12 +00:00
peter
73c13f2592 Conditionalize the alpha interrupt preemption for now to buy us some
time to sort out the quirks.  Add 'options PREEMPTION' to test it on
the Alpha.

Reviewed by: jhb
2001-02-03 03:26:39 +00:00
peter
4c9b874c28 Argh, I missed some #include "sio.h". I was looking primarily for NSIO
when I did my sweeps.

Submitted by: mjacob
2001-02-02 01:48:40 +00:00
mjacob
e2f5e56cc5 Remove inclusion of now vanished sio.h. 2001-02-01 21:59:00 +00:00
jake
a9f4853a90 Implement preemptive scheduling of hardware interrupt threads.
- If possible, context switch to the thread directly in sched_ithd(),
  rather than triggering a delayed ast reschedule.

- Disable interrupts while restoring fpu state in the trap handler,
  in order to ensure that we are not preempted in the middle, which
  could cause migration to another cpu.

Reviewed by:	peter
Tested by:	peter (alpha)
2001-02-01 03:34:20 +00:00
dfr
ba26212fc9 * Move exception_return to exception.s which is a more logical home for it.
* Optimise the return path for syscalls so that they only restore a minimal
  set of registers instead of performing a full exception_return.

A new flag in the trapframe indicates that the frame only holds partial
state. When it is necessary to perform a full state restore (e.g. after an
execve or signal), the flag is cleared to force a full restore.
2001-01-31 11:17:00 +00:00
peter
9b4aea27e5 Remove count for NSIO. The only places it was used were incorrect.
(alpha-gdbstub.c got sync'ed up a bit with the i386 version)
2001-01-31 10:54:45 +00:00
jhb
5f8a4ce890 Remove unnecessary locking to protect the p_upages_obj and p_addr
pointers.
2001-01-30 00:35:35 +00:00
peter
b7edc4f4e3 Send "#if NISA > 0" to the bit-bucket and replace it with an option.
These were compile-time "is the isa code present?" tests and not
'how many isa busses' tests.
2001-01-29 09:38:39 +00:00
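A hedged sketch of the kind of substitution described above; the option name
DEV_ISA follows later FreeBSD convention and the guarded call is a placeholder:

    #if NISA > 0          /* old: a device count misused as "is ISA compiled in?" */
            isa_configure();
    #endif

    #ifdef DEV_ISA        /* new: an explicit kernel option (e.g. via opt_isa.h) */
            isa_configure();
    #endif
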
jhb
17984f33e4 Update some comments: s0 in the pcb of a child returning from fork1() is
now passed in as a0 to fork_exit(), and s2 is passed in as a1.
2001-01-26 23:32:38 +00:00
jhb
af00dc8eef - Change fork_exit() to take a pointer to a trapframe as its 3rd argument
instead of a trapframe directly.  (Requested by bde.)
- Convert the alpha switch_trampoline to call fork_exit() and use the MI
  fork_return() instead of child_return().
- Axe child_return().
2001-01-24 21:59:25 +00:00
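A hedged sketch of the resulting prototypes (argument names are assumptions):

    /* The trapframe is now passed by pointer, and the MI fork_return()
     * replaces the MD child_return(). */
    void fork_exit(void (*callout)(void *, struct trapframe *), void *arg,
                   struct trapframe *frame);
    void fork_return(struct proc *p, struct trapframe *frame);
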
jhb
9f8dbccd56 - Remove some unused and unneeded atomic operations sitting in mp_machdep.c
that are already implemented in atomic.h.
- Fix SMP kernel builds.
2001-01-24 19:49:13 +00:00
jhb
cf8988088c Oops, when converting if (foo) panic() to a KASSERT(), you have to invert
the test case.

Spotted by:	peter, jasone
2001-01-24 13:10:17 +00:00
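A minimal illustration of the inversion (the condition is hypothetical):

    /* Original style: panic when the bad condition holds. */
    if (error != 0)
            panic("operation failed");

    /* KASSERT() asserts the condition that must hold, so the test inverts: */
    KASSERT(error == 0, ("operation failed"));
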
jasone
8d2ec1ebc4 Convert all simplelocks to mutexes and remove the simplelock implementations. 2001-01-24 12:35:55 +00:00
jhb
eabcf31b82 - Change userret() to take a struct trapframe * as its second argument and
to extract the PC from that to send to addupc_task() so that it can be
  called from MI code.
- Remove all traces of have_giant with extreme prejudice and use
  mtx_owned(&Giant) instead where appropriate.
- Proc locking.
- P_FOO -> PS_FOO.
- Don't grab Giant just to look in curproc's p_addr during a trap since we
  may choose to immediately exit.  Instead, delay grabbing Giant a bit
  until we actually need it.
- Don't reset 'p' to 'curproc' in syscall() to handle the case of a child
  returning from fork1() since children don't return via syscall().
- Remove an XXX comment in ast() that questions the correctness of the
  userland check.  The code is correct.
2001-01-24 10:23:21 +00:00
jhb
451c942dba - Proc locking.
- P_INMEM -> PS_INMEM.
2001-01-24 10:16:23 +00:00
jhb
8e27ac396a - Proc locking.
- Don't send IPIs for pmap_invalidate_page() or pmap_invalidate_all()
  in the UP case.
- Catch up to cpuno -> cpuid.
- Convert some sanity checks that were #ifdef DIAGNOSTIC to KASSERT()'s.
2001-01-24 10:16:01 +00:00
jhb
277ef6eaf3 - Adjust some whitespace to reduce diffs with the i386 version.
- Rename the per-CPU variable 'cpuno' to 'cpuid'.  This was done so that
  there is one consistent name across all architectures for a logical
  CPU id.
- Remove all traces of IRQ forwarding.
- Add globaldata_register() hook called by globaldata_init() to register
  globaldata structures in the cpuid_to_globaldata array.
- Catch up to P_FOO -> PS_FOO.
- Bring across some fixes for forwarded_statclock() from the i386 version
  to handle ithreads and idleproc properly.
- Rename addugd_intr_forwarded() to addupc_intr_forwarded() so that it is
  the same name on all architectures.
- Set flags in p_sflag instead of calling psignal() from
  forward_hardclock().
- Proc locking.
- When we handle an IPI, turn off its bit in the mask of IPI's we are
  currently handling so that an IPI doesn't send a CPU into an infinite
  loop.
2001-01-24 10:13:13 +00:00
jhb
a1ce5e9db8 - Initialize curproc, proc0.p_heldmtx, and proc0.p_contested earlier so
that mutex operations work.
- Enter Giant earlier so we hold it during boot.
- Proc locking.
- Move globaldata_init() into here from mp_machdep.c so that UP kernels
  don't depend on mp_machdep.c.  Use a callout in the SMP case to register
  the boot processor's globaldata in the cpuid_to_globaldata array.
2001-01-24 10:07:42 +00:00
jhb
8168399f03 - Wrap the IPI interrupt handler in #ifdef SMP.
- Catch up to cpuno -> cpuid change.
- Add parens around a subexpression to quiet a warning.
2001-01-24 10:05:24 +00:00
jhb
8c0af7d84b cpuno -> cpuid. 2001-01-24 10:04:32 +00:00
jhb
73a3d8450d Don't import the nonexistent astpending variable. 2001-01-24 10:03:05 +00:00
jhb
87deba6ead Wrap the startup code used by secondary processors in #ifdef SMP. 2001-01-24 10:01:53 +00:00
des
b3c27aaaf7 First step towards an MP-safe zone allocator:
- have zalloc() and zfree() always lock the vm_zone.
 - remove zalloci() and zfreei(), which are now redundant.

Reviewed by:	bmilekic, jasone
2001-01-21 22:23:11 +00:00
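A hedged sketch of the locking pattern described above; the lock member and the
internal helper are hypothetical, and the later mtx_lock() spelling is used for
readability:

    void
    zfree(vm_zone_t z, void *item)
    {
            mtx_lock(&z->zlock);    /* locking is now unconditional ... */
            _zfree(z, item);        /* ... so the separate zfreei() goes away */
            mtx_unlock(&z->zlock);
    }
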
jake
937122ae6d Make intr_nesting_level per-process, rather than per-cpu. Set up
interrupt threads to run with it always >= 1, so that malloc can
detect M_WAITOK from "interrupt" context.  This is also necessary
in order to context switch from sched_ithd() directly.

Reviewed By:	peter
2001-01-21 19:25:07 +00:00
jasone
24d53563ed Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.

This change is necessary in order to avoid some circular bootstrapping
dependencies.
2001-01-21 07:52:20 +00:00
peter
feb7598906 Remove the now-empty ipl_funcs.c file on all platforms. 2001-01-19 09:59:56 +00:00
peter
940f70431f Remove the static splXXX functions and replace them by static __inline
stubs.  Remove the xxx_imask variables which have been all but gone for
a while.
2001-01-19 09:57:29 +00:00
bmilekic
37decc93f5 Implement MTX_RECURSE flag for mtx_init().
All calls to mtx_init() for mutexes that recurse must now include
the MTX_RECURSE bit in the flag argument variable. This change is in
preparation for an upcoming (further) mutex API cleanup.
The witness code will call panic() if a lock is found to recurse but
the MTX_RECURSE bit was not set during the lock's initialization.

The old MTX_RECURSE "state" bit (in mtx_lock) has been renamed to
MTX_RECURSED, which is more appropriate given its meaning.

The following locks have been made "recursive," thus far:
eventhandler, Giant, callout, sched_lock, possibly some others declared
in the architecture-specific code, all of the network card driver locks
in pci/, as well as some other locks in dev/ stuff that I've found to
be recursive.

Reviewed by: jhb
2001-01-19 01:59:14 +00:00
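A hedged example of the new initialization requirement (the mutex and its
description string are hypothetical; the three-argument mtx_init() form is
assumed from that era's interface):

    struct mtx foo_mtx;

    /* A mutex that may be acquired recursively must now say so up front;
     * otherwise witness will panic() on the first recursive acquisition.
     * The runtime "currently recursed" state bit is now spelled MTX_RECURSED. */
    mtx_init(&foo_mtx, "foo driver lock", MTX_DEF | MTX_RECURSE);
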
jake
4f5d8ed825 Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other than curproc.
2001-01-10 04:43:51 +00:00
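A hedged usage sketch of the accessors named above (fields other than cpuid are
illustrative):

    int cpu;
    u_int *nestp;

    cpu = PCPU_GET(cpuid);                 /* read a per-cpu field */
    PCPU_SET(astpending, 0);               /* write a per-cpu field */
    nestp = PCPU_PTR(intr_nesting_level);  /* pointer to a per-cpu field */
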
jake
fa7a58ab48 Protect proc.p_pptr and proc.p_children/p_sibling with the
proctree_lock.

linprocfs not locked pending response from informal maintainer.

Reviewed by:	jhb, -smp@
2000-12-23 19:43:10 +00:00
jhb
473acfcf52 Remove the "machine dependent" KTR trace buffer ddb commands. The code was
exactly the same on all platforms.
2000-12-14 23:57:30 +00:00
jake
a4ad237eaa - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead
of explicit calls to lockmgr.  Also provides macros for the flags
  passed to specify shared, exclusive or release which map to the
  lockmgr flags.  This is so that the use of lockmgr can be easily
  replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
2000-12-13 00:17:05 +00:00
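A hedged usage sketch (the AP_* flag names are assumptions based on the
description above):

    /* Shared (read) access to the allproc list: */
    ALLPROC_LOCK(AP_SHARED);
    LIST_FOREACH(p, &allproc, p_list) {
            /* read-only traversal */
    }
    ALLPROC_LOCK(AP_RELEASE);
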
gallatin
7a735b1d2f enable the proper cascade irq on as1000a
tested by: wilko
2000-12-12 01:39:17 +00:00
gallatin
dd2ab47ed9 fix AS1000/AS1000A support. It turns out that the iobus depends on the
CPU version (apecs:ev4::cia:ev5) and the irq hardware depends on the systype.
Previously, only ev4 AS1000s and ev5 AS1000As would have worked.

tested by: wilko (in its -stable form)
noticed by: daniel
2000-12-12 01:36:26 +00:00
gallatin
feb51706ae fix various compiler warnings generated by previous commit 2000-12-12 01:32:36 +00:00
jake
e04de3cdaa - Add code to detect if a system call returns with locks other than Giant
held and panic if so (conditional on witness).
- Change witness_list to return the number of locks held so this is easier.
- Add kern/syscalls.c to the kernel build if witness is defined so that the
  panic message can contain the name of the offending system call.
- Add assertions that Giant and sched_lock are not held when returning from
  a system call, which were missing for alpha and ia64.
2000-12-12 01:14:32 +00:00
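A hedged sketch of the check described above, as it might appear at the end of
the MD syscall() routine after Giant has been released (placement and exact
calls are assumptions; the commit states that witness_list() now returns the
number of locks held and that kern/syscalls.c supplies the name table):

    #ifdef WITNESS
            if (witness_list(p) > 0)
                    panic("system call %s returning with held locks",
                        syscallnames[code]);
    #endif
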