Commit Graph

528 Commits

Author SHA1 Message Date
phk
87273d930a Centralize the "bootdev" and "dumpdev" variables. They are still pretty
bogus all things considered, but at least now they don't masquerade as
MD variables.
2002-03-31 07:15:28 +00:00
marcel
3d34c19920 Transition to a model where the loader passes the address of the
bootinfo block in register r8. In locore.s we save the address
in the global variable 'pa_bootinfo'. In machdep.c we compare
this value against the hardwired address, but don't depend on its
validity yet (ie: we still expect the bootinfo block to be at the
hardwired address). After a small amount of time, we'll flip the
switch and depend on the loader to pass us the address. From that
moment on the loader is free to put it anywhere it likes, provided
the machine itself likes it as well.

Add some verbosity to aid in the transition. We emit a message if
the loader didn't pass the address and we also emit a message if
there's no bootinfo block at the hardwired address.

While in locore.s, reduce the number of redundant serialization
instructions. A srlz.i is a proper superset of a srlz.d and thus
is a valid replacement. Also slightly reorder the movl instructions
to improve bundle density.
2002-03-30 23:25:22 +00:00
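
A minimal sketch of the transitional check described in the commit above; the BOOTINFO_HARDWIRED constant, the message wording and the function name are assumptions, only pa_bootinfo comes from the commit itself.

    /* Sketch only: BOOTINFO_HARDWIRED is a placeholder, not the real
     * address; pa_bootinfo is the global filled in from r8 by locore.s. */
    #include <sys/param.h>
    #include <sys/systm.h>

    #define BOOTINFO_HARDWIRED      0x508000UL      /* placeholder value */

    extern uint64_t pa_bootinfo;

    static void
    bootinfo_check(void)
    {
            if (pa_bootinfo == 0)
                    printf("WARNING: loader did not pass a bootinfo address\n");
            else if (pa_bootinfo != BOOTINFO_HARDWIRED)
                    printf("WARNING: bootinfo address %#lx differs from the hardwired address\n",
                        (u_long)pa_bootinfo);
            /* The commit also warns when no valid bootinfo block sits at the
             * hardwired address; that check (a magic-number test) is not
             * shown here.  For now the hardwired block is still the one that
             * gets used; once the switch is flipped, pa_bootinfo becomes
             * authoritative and the loader may place the block anywhere. */
    }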
jake
8f9ce8398d Remove abuse of intr_disable/restore in MI code by moving the loop in ast()
back into the calling MD code.  The MD code must ensure no races between
checking the astpending flag and returning to usermode.

Submitted by:	peter (ia64 bits)
Tested on:	alpha (peter, jeff), i386, ia64 (peter), sparc64
2002-03-29 16:35:26 +00:00
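
The race referred to above is the classic check-then-return window sketched here; astpending() is a hypothetical stand-in for each port's actual per-thread flag test, and the loop shape is illustrative rather than any port's committed code.

    /* Sketch only. */
    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/systm.h>
    #include <machine/cpufunc.h>

    int     astpending(struct thread *td);      /* hypothetical accessor */

    static void
    md_userret(struct thread *td)
    {
            register_t s;

            for (;;) {
                    s = intr_disable();     /* close the window between the
                                               check and the return */
                    if (!astpending(td))
                            break;          /* leave with interrupts off */
                    intr_restore(s);
                    ast(td->td_frame);      /* handle the AST, then recheck */
            }
            /* The low-level return path re-enables interrupts atomically
             * with the drop to usermode, so no freshly posted AST can be
             * missed. */
    }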
obrien
2b39669377 style(9)
Approved by:	jake
2002-03-28 02:54:44 +00:00
jeff
dff418f166 Add a new mtx_init option "MTX_DUPOK" which allows duplicate acquires of locks
with this flag.  Remove the dup_list and dup_ok code from subr_witness.  Now
we just check for the flag instead of doing string compares.

Also, switch the process lock, process group lock, and uma per cpu locks over
to this interface.  The original mechanism did not work well for uma because
per cpu lock names are unique to each zone.

Approved by:	jhb
2002-03-27 09:23:41 +00:00
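
A usage sketch of the new flag, assuming the three-argument mtx_init() of this era; the wrapper function is purely illustrative.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>

    /* With MTX_DUPOK, witness permits holding two locks that share a name
     * (e.g. two process locks) without an entry in the old dup_list. */
    static void
    example_init_locks(struct proc *p, struct pgrp *pg)
    {
            mtx_init(&p->p_mtx, "process lock", MTX_DEF | MTX_DUPOK);
            mtx_init(&pg->pg_mtx, "process group", MTX_DEF | MTX_DUPOK);
    }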
dillon
dc5aafeb94 Compromise for critical*()/cpu_critical*() recommit. Clean up the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Clean up the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
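
The icu_lock requirement above amounts to the pattern sketched here, since a critical section no longer implies that interrupts are hard-disabled; a hedged illustration, not the committed diff.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <machine/cpufunc.h>

    extern struct mtx icu_lock;

    static void
    example_icu_access(void)
    {
            register_t s;

            s = intr_disable();             /* true interrupt disablement */
            mtx_lock_spin(&icu_lock);
            /* ... reprogram the interrupt controller here ... */
            mtx_unlock_spin(&icu_lock);
            intr_restore(s);
    }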
marcel
4543f25906 o Revert previous commit in asm.h. There's no need to undefine
__FBSDID first, because it should not be defined at all,
o  Remove inclusion of cdefs.h in locore.s.

Pointed out by: peter
2002-03-27 02:20:09 +00:00
obrien
de491c5117 Get the guarding right. The IA-64 has a different organization for this
than our other platforms.
2002-03-26 02:59:00 +00:00
obrien
8842976cdd Guard against redefining __gnuc_va_list. 2002-03-24 11:25:46 +00:00
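
Presumably the guard takes the usual form sketched below; the exact spelling in the committed header, and the underlying type used, are assumptions.

    /* Only provide __gnuc_va_list if it has not been defined already. */
    #ifndef __GNUC_VA_LIST
    #define __GNUC_VA_LIST
    typedef _BSD_VA_LIST_   __gnuc_va_list;
    #endif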
marcel
985de052a1 Undefine __FBSDID before defining it as it's already defined at
that point.
2002-03-24 10:28:04 +00:00
obrien
4c2f517045 ASM versions of __FBSDID. 2002-03-23 02:01:27 +00:00
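
For assembly sources the macro presumably reduces to an .ident directive, roughly as below; the exact definition is not reproduced from the commit.

    /* In <machine/asm.h>: */
    #define __FBSDID(s)     .ident s

    /* Usage at the top of a .s or .S file: */
    __FBSDID("$FreeBSD$");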
dfr
0b5ce40729 Change critical_t to register_t for intr_disable/restore. 2002-03-21 09:50:11 +00:00
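
The resulting interface and a typical caller, sketched on the assumption that the prototypes live in the MD cpufunc.h:

    register_t      intr_disable(void);        /* returns the saved state */
    void            intr_restore(register_t);  /* restores it */

    static void
    example(void)
    {
            register_t s;

            s = intr_disable();
            /* ... code that must run with interrupts off ... */
            intr_restore(s);
    }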
dfr
35064c1d50 Change cpu_critical_enter/exit to intr_disable/restore. 2002-03-21 09:35:18 +00:00
peter
b460095c1d In UP mode, the primary cpu's per-cpu current_pmap was not initialized -
this was only done as a side effect of calling cpu_mp_start().  I haven't
actually tested that this fixes UP kernels, but it feels about right.
2002-03-21 07:41:02 +00:00
jeff
2b532bd407 Remove references to vm_zone.h and switch over to the new uma API.
Approved by:	peter
2002-03-21 02:46:56 +00:00
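
For comparison, the shape of the uma calls that replace the old vm_zone ones; the zone name and the struct are illustrative.

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <vm/uma.h>

    struct widget { int w_id; };                /* illustrative */

    static uma_zone_t widget_zone;

    static void
    widget_zone_init(void)
    {
            widget_zone = uma_zcreate("widgets", sizeof(struct widget),
                NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
    }

    static struct widget *
    widget_alloc(void)
    {
            return (uma_zalloc(widget_zone, M_WAITOK));
    }

    static void
    widget_free(struct widget *w)
    {
            uma_zfree(widget_zone, w);
    }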
alfred
cd2525164f Remove __P.
Reviewed by: peter
2002-03-20 23:30:31 +00:00
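
The change is mechanical; with an illustrative prototype:

    /* before */
    int     example_fn __P((struct thread *td, int flags));

    /* after */
    int     example_fn(struct thread *td, int flags);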
jhb
715dfdbcbe Change the way we ensure td_ucred is NULL if DIAGNOSTIC is defined.
Instead of caching the ucred reference, just go ahead and eat the
decrement and increment of the refcount.  Now that Giant is pushed down
into crfree(), we no longer have to get Giant in the common case.  In the
case when we are actually free'ing the ucred, we would normally free it on
the next kernel entry, so the cost there is not new, just in a different
place.  This also removes td_cache_ucred from struct thread.  This is
still only done #ifdef DIAGNOSTIC.

Tested on:	i386, alpha
2002-03-20 21:09:09 +00:00
dfr
b132efc524 Change intr_enable to intr_restore for consistency with sparc64. 2002-03-20 17:28:40 +00:00
dfr
d0f60a59c9 Replace calls to cpu_critical_enter/exit with code that either
explicitly disables interrupts or uses a real critical section, as
appropriate.
2002-03-20 10:04:08 +00:00
dfr
fb7bce66e9 Recreate intr_disable/intr_enable and implement cpu_critical_enter/exit
in terms of that (for now).
2002-03-20 10:00:05 +00:00
peter
cd06717f4b #if 0 some unused variables (only in #if 0 code) 2002-03-19 12:15:29 +00:00
peter
3e3a2565b1 Enabling the SKI option is a guaranteed breakage for me. Interrupts no
longer work.
I can only get a box to boot with 'options SMP'.
2002-03-19 11:21:12 +00:00
peter
1e692d325d My ia64 box for some reason likes to fragment the beginning/end of memory
a bit before handing it over to the OS.  I occasionally have 11
segments with several 8K or so fragments depending on nvram settings and
what I have done under loader(8) before booting.  This needs to be
revisited.
2002-03-19 11:18:47 +00:00
peter
27b30c3561 Fix some unused variables. 2002-03-19 11:15:26 +00:00
peter
89e10979b7 Move a couple of prototypes together instead of leaving them
scattered around.
2002-03-19 11:14:52 +00:00
peter
7581c95972 __func__ is a const char *, not a "string" that can be concatenated. 2002-03-19 11:11:37 +00:00
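
Concretely, with an illustrative message: adjacent string literals are concatenated at translation time, but __func__ is a predefined identifier, not a literal, so it has to be passed as an argument instead.

    #include <sys/param.h>
    #include <sys/systm.h>

    static void
    example_warn(void)
    {
            /* rejected by gcc 3.x: __func__ cannot be pasted onto a literal
             *      printf(__func__ ": bad argument\n");
             */

            /* portable form */
            printf("%s: bad argument\n", __func__);
    }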
peter
b6cc63860e Fix a pointer/int warning 2002-03-19 11:10:30 +00:00
peter
871754752f #ifdef SMP some variables that are only used elsewhere under #ifdef SMP
also.
2002-03-19 11:10:03 +00:00
peter
78e1e8f574 Work around an apparent compiler bug with gcc-3.1, although it might be
a language feature that I do not know about.  gcc is complaining about
a left shift >= sizeof type, even when shifting a (cast) 64 bit type left
by 43 bits.
2002-03-19 11:09:24 +00:00
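
The complaint fires on code like the commented-out form below even though the operand is cast to 64 bits; routing the value through a 64-bit temporary is one way to quiet it. The names, the shift count of 43, and the workaround shown are illustrative, not the committed change.

    #include <sys/types.h>

    static uint64_t
    example_shift(int region)
    {
            /* gcc 3.1 warned about the direct form, despite the cast:
             *      return ((uint64_t)region << 43);
             */

            /* workaround: make the 64-bit width explicit via a temporary */
            uint64_t wide = region;

            return (wide << 43);
    }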
peter
25f7063044 Believe it or not, I ran into the 32MB stack size limit using a natively
hosted gcc.
2002-03-19 11:07:09 +00:00
peter
ed85319f6f #if 0 out some unused code. 2002-03-19 11:06:01 +00:00
peter
aef96cffb1 Add some #includes after things got broken with the last round of
MI include file (<sys/smp.h> I think) tweaks.
2002-03-19 11:05:07 +00:00
peter
1d9448da3b Turn off the ia64 ITC timecounter when SMP is present since it has the
same problem as the TSC on the x86 - ie: it is not synchronized.
#if 0 out some unused functions, ia64 doesn't calibrate clocks yet.
2002-03-19 11:03:48 +00:00
jeff
2923687da3 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab-like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
dfr
e13126ed94 Fix spelling. 2002-03-18 09:29:16 +00:00
des
a032109782 Move the definition of PT_[GS]ET{,DB,FP}REGS from the MD ptrace.h to the
MI ptrace.h, since all platforms define them.  Keep the MD ptrace.h around
for FIX_SSTEP (which is currently only needed on Alpha).
2002-03-16 00:25:53 +00:00
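
With the requests defined machine-independently, a debugger can issue the same calls on every platform; a minimal userland sketch.

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <machine/reg.h>
    #include <err.h>

    static void
    fetch_and_store_regs(pid_t pid)
    {
            struct reg r;

            if (ptrace(PT_GETREGS, pid, (caddr_t)&r, 0) == -1)
                    err(1, "PT_GETREGS");
            /* ... inspect or modify r here ... */
            if (ptrace(PT_SETREGS, pid, (caddr_t)&r, 0) == -1)
                    err(1, "PT_SETREGS");
    }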
dfr
99e1353483 * Stop other cpus when one cpu enters DDB and restart them after it
leaves.
* Add a sync.i instruction to the code which writes out breakpoints to
  ensure that the breakpoint is seen by all cpus in the coherence domain.
2002-03-15 11:12:08 +00:00
dfr
e995bf6472 * Remove a breakpoint() I accidentally left in for debugging :-(.
* Make cpu_mp_probe() work before the VM system is available and
  initialise mp_maxid accordingly.
2002-03-15 09:47:16 +00:00
dfr
70e1c6d1de Tweak the AP startup code somewhat. With all the other recent changes,
this now works pretty well for two processors at least.

Submitted by: marcel, mostly.
2002-03-14 19:37:36 +00:00
dfr
c828b9dda4 * Initialise pcb_pmap for new threads.
* Add support for forking new threads from &thread0 as well as curthread.
2002-03-14 19:34:50 +00:00
dfr
3467bc8862 * Save and restore PCPU_GET(current_pmap) in pcb_pmap so that we don't
lose if a process is preempted while pmap is temporarily switched to
  another pmap.
* For SMP, drop the high-fp state when a thread is switched away from
  so that if another cpu resumes that thread, it doesn't have to play
  games with IPI to get ahold of the correct register values.
2002-03-14 19:33:03 +00:00
dfr
6b90e8d3c5 Add pcpu.pc_current_pmap and pcb.pcb_pmap. 2002-03-14 19:20:24 +00:00
dfr
b5f14974e3 Add a field to hold the current pmap of a thread. 2002-03-14 19:19:49 +00:00
dfr
614f8fef36 Add ia64_sync_i(), ia64_get_tpr() and ia64_set_tpr(). 2002-03-14 12:29:55 +00:00
dfr
2de0b2ebe2 * Add some KTR messages for IPIs.
* Don't call ast() from interrupt() - if we switch, then we will miss
  writing cr.eoi which will prevent the current cpu from receiving
  interrupts until the current thread is resumed. The call to ast()
  happens magically in exception_restore where it is safe.
* Add DDB 'show irq' command to examine interrupt hardware state.
2002-03-14 10:24:00 +00:00
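
The KTR additions presumably follow the usual CTRn pattern, roughly as below; the trace class, wording and surrounding function are assumptions.

    #include <sys/param.h>
    #include <sys/ktr.h>

    static void
    example_ipi_send(int cpuid, int ipi)
    {
            CTR2(KTR_SMP, "ipi_send: cpu=%d ipi=%#x", cpuid, ipi);
            /* ... write the vector into the target cpu's interrupt block ... */
    }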
dfr
7138bad96c Add debug code to print SAPIC registers. 2002-03-14 10:17:08 +00:00
dfr
c50266a708 * Use a mutex to protect the RID allocator.
* Use ptc.g instead of ptc.l so that TLB shootdowns are broadcast to the
  coherence domain.
* Use smp_rendezvous for pmap_invalidate_all to ensure it happens on all
  cpus.
* Dike out a DIAGNOSTIC printf which didn't compile.
* Protect the internals of pmap_install with cpu_critical_enter/exit.
2002-03-14 09:28:05 +00:00
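
A sketch of the rendezvous mentioned above; the local-flush helper is hypothetical and stands in for whatever the real per-cpu invalidation does.

    #include <sys/param.h>
    #include <sys/smp.h>

    void    local_tlb_flush(struct pmap *pm);   /* hypothetical helper */

    /* runs on every cpu: purge its TLB entries for the given pmap */
    static void
    pmap_invalidate_all_action(void *arg)
    {
            struct pmap *pm = arg;

            local_tlb_flush(pm);
    }

    static void
    example_invalidate_all(struct pmap *pm)
    {
            /* run the action on every cpu, including the caller's */
            smp_rendezvous(NULL, pmap_invalidate_all_action, NULL, pm);
    }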
dfr
8af7d2dfbb Move the call to pmap_bootstrap to after the initialisation of thread0.
This allows us to use mutexes in pmap safely. Also initialise fpcurthread
for cpu0 so that ia64_fpstate_check doesn't barf during boot.
2002-03-14 09:20:07 +00:00
dfr
a679e26ad5 Don't restore r13 when returning to kernel mode. We may have migrated to
a different cpu since the exception_save and r13 needs to point at the
current cpu's pcpu structure.
2002-03-14 00:28:10 +00:00
peter
e65affeb45 Fix some -Wunused warnings by "using" a macro argument 2002-03-12 00:19:14 +00:00
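
The usual trick here is to reference the otherwise-unused variable with a cast to void inside the macro, so the compiler counts it as used without generating code; an illustrative sketch, not the committed change.

    /* before: a variable passed only to this macro was flagged by -Wunused,
     * because the expansion ignored its argument:
     *
     *      #define EXAMPLE_HOOK(td)        do { } while (0)
     */

    /* after: the cast to void counts as a use but produces no code */
    #define EXAMPLE_HOOK(td)        do { (void)(td); } while (0)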