120 Commits

Author SHA1 Message Date
jhb
a084756a1b Catch comments up to child_return() -> fork_return() as well. 2001-02-22 16:49:36 +00:00
jhb
07789e60e9 Synch up with the other architectures:
- Remove unneeded spl()'s around mi_switch() in userret().
- Don't hold sched_lock across addupc_task().
- Remove the MD function child_return() now that the MI function
  fork_return() is used instead.
- Use TRAPF_USERMODE() instead of dinking with the trapframe directly to
  check for ast's in kernel mode.
- Check astpending(curproc) and resched_wanted() in ast() and return if
  neither is true.
- Use astoff() rather than setting the non-existent per-cpu variable
  astpending to 0 to clear an ast.
2001-02-22 16:27:03 +00:00
jhb
c591cfd9c6 Use the MI fork_return() fork trampoline callout function for child
processes instead of the MD child_return().
2001-02-22 16:05:48 +00:00
jhb
c728eb55b0 - Don't dink with sched_lock in cpu_switch() since mi_switch() does this
for us.
- Change the switch_trampoline() to call fork_exit() passing in the
  required arguments instead of calling the fork trampoline callout
  function directly.
Warning: this hasn't been tested.

Looked over by:	dfr
2001-02-22 16:05:09 +00:00
jhb
ed7d3539ba - Axe the now unused ASS_* assertions for interrupt status.
- Use ia64_get_psr() instead of save_intr() in mtx_legal2block().
2001-02-22 15:43:42 +00:00
jhb
1fcc4482e8 Add an inline function to read the psr. 2001-02-22 15:39:58 +00:00
jhb
57e83d23b7 Add a mtx_intr_enable() macro. 2001-02-22 15:37:57 +00:00
jhb
107493c81a Axe the astpending per-cpu variable. 2001-02-22 15:37:34 +00:00
jhb
4280c09262 Add TRAPF_PC() and TRAPF_USERMODE() macros and redefine CLKF_PC() and
CLKF_USERMODE() in terms of them.
2001-02-22 15:35:04 +00:00
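
A minimal sketch of the layering this describes, assuming a clockframe that simply wraps a trapframe; the cf_tf member name is illustrative rather than quoted from the tree:

    /* Hypothetical wrapper; the real clockframe layout may differ. */
    struct clockframe {
            struct trapframe cf_tf;
    };

    /* CLKF_* now just delegate to the TRAPF_* macros. */
    #define CLKF_PC(cf)             TRAPF_PC(&(cf)->cf_tf)
    #define CLKF_USERMODE(cf)       TRAPF_USERMODE(&(cf)->cf_tf)
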
jhb
073b1ca9aa Catch up to new MI astpending and need_resched handling. 2001-02-22 13:29:22 +00:00
ume
e4a288b688 Correct the ordering, corresponding to bde's fix to
i386/include/ansi.h.
2001-02-17 14:51:11 +00:00
ume
bf66c2eda8 Correct 2nd argument of getnameinfo(3) to socklen_t.
Reviewed by:	itojun
2001-02-15 10:35:55 +00:00
jake
55d5108ac5 Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propogate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It would otherwise have
  remained the same size while the layout changed, so userland processes
  that use it would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
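
A simplified sketch of the shape described above, with the run queue passed as a parameter and the old interface kept as a wrapper over the global instance; apart from RQ_NQS and runq.h, the identifiers (runq_add(), setrunqueue(), the field names) are assumptions, not quotes from the log:

    #include <sys/queue.h>

    #define RQ_NQS  64                      /* one unified array of 64 queues */

    TAILQ_HEAD(rqhead, proc);

    struct runq {
            u_int64_t       rq_status;      /* one bit per non-empty queue */
            struct rqhead   rq_queues[RQ_NQS];
    };

    void    runq_add(struct runq *, struct proc *); /* new interface: takes the runq */

    static struct runq runq;                /* the global run queue */

    /* Compatibility wrapper: old interface, global run queue. */
    void
    setrunqueue(struct proc *p)
    {
            runq_add(&runq, p);
    }
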
markm
4e9c36b300 RIP <machine/lock.h>.
Some things needed bits of <i386/include/lock.h> - cy.c now has its
own (only) copy of the COM_(UN)LOCK() macros, and IMASK_(UN)LOCK()
has been moved to <i386/include/apic.h> (AKA <machine/apic.h>).
Reviewed by:	jhb
2001-02-11 10:44:09 +00:00
jhb
6e847a265b Move the initialization of the proc lock for proc0 very early into the MD
startup code.
2001-02-09 16:25:16 +00:00
jhb
736bcaf4be Remove bogus #if 0'd code that dinked with the saved interrupt state in
sched_lock.
2001-02-09 14:50:52 +00:00
bmilekic
f364d4ac36 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two lock types because their
semantics are entirely different; this makes the distinction explicit
and, at the same time, rids us of the extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that the spin locks we do have and use heavily (e.g. sched_lock)
do recurse; therefore, to reduce function call overhead on some
architectures (such as alpha), we inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
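
A minimal caller-side sketch of the renamed interface; foo_mtx and the function are hypothetical, but the calls shown are the ones named above:

    static struct mtx foo_mtx;              /* a MTX_DEF (sleep) mutex */

    static void
    foo_example(void)
    {
            mtx_lock(&foo_mtx);             /* was mtx_enter(&foo_mtx, MTX_DEF) */
            /* ... touch the data foo_mtx protects ... */
            mtx_unlock(&foo_mtx);           /* was mtx_exit(&foo_mtx, MTX_DEF) */

            /* Flags such as MTX_QUIET now go through the _flags wrappers. */
            mtx_lock_flags(&foo_mtx, MTX_QUIET);
            mtx_unlock_flags(&foo_mtx, MTX_QUIET);
    }
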
peter
e014685526 Clean up some leftovers from the root mount cleanup that was done some
time ago.  FFS_ROOT and CD9660_ROOT are obsolete.
2001-02-04 15:35:10 +00:00
peter
fe7d89e3f2 All the world is not an i386. Merge rev 1.438 of i386/i386/machdep.c.
Make buffer_map a system map.
2001-02-04 07:00:47 +00:00
peter
9b4aea27e5 Remove count for NSIO. The only places it was used were incorrect.
(alpha-gdbstub.c got sync'ed up a bit with the i386 version)
2001-01-31 10:54:45 +00:00
dfr
66e581ea9f Flesh out EFI support somewhat. 2001-01-29 13:31:19 +00:00
peter
b7edc4f4e3 Send "#if NISA > 0" to the bit-bucket and replace it with an option.
These were compile-time "is the isa code present?" tests and not
'how many isa busses' tests.
2001-01-29 09:38:39 +00:00
marcel
f8be4d8cb4 Add gd_witness_spin_check. 2001-01-28 08:06:50 +00:00
marcel
8a14db568d Fix typo. 2001-01-28 08:05:55 +00:00
marcel
93f1d42808 Improve kernel bootstrapping:
o  Use objdump instead of gensetdefs(1) to build the linker sets.
o  Allow overriding of nm and objdump in genassym.sh and
   gensetdefs.pl, respectively, for non-native toolchains.

Reviewed by: arch
Perl improvements: Jos Backus <josb@cncdsl.com>, benno
2001-01-28 06:39:56 +00:00
dfr
8ac42e8bc9 Initialise proc0.p_heldmtx and proc0.p_contested and call
mtx_enter(&Giant, MTX_DEF) after Giant is initialised.

Reviewed by: jhb
2001-01-26 17:52:34 +00:00
dfr
82b0a074c7 Change cpuno to cpuid. 2001-01-24 17:12:37 +00:00
dfr
14133b4f7e Fix typo. 2001-01-24 17:11:33 +00:00
jasone
8d2ec1ebc4 Convert all simplelocks to mutexes and remove the simplelock implementations. 2001-01-24 12:35:55 +00:00
jhb
8dae9bbc6a - Proc locking.
- P_OWEUPC -> PS_OWEUPC.
2001-01-24 10:38:58 +00:00
jhb
af41b60632 - Proc locking.
- Update userret() to take a struct trapframe * as a second argument.
- Axe have_giant and use mtx_owned(&Giant) where appropriate.
2001-01-24 10:38:13 +00:00
jhb
9455afff84 - Proc locking.
- P_FOO -> PS_FOO.
2001-01-24 10:36:47 +00:00
jhb
4dc0bb19d1 - Proc locking.
- Bring across forwarded_statclock() fixes from i386 and alpha.
2001-01-24 10:36:21 +00:00
des
b3c27aaaf7 First step towards an MP-safe zone allocator:
- have zalloc() and zfree() always lock the vm_zone.
- remove zalloci() and zfreei(), which are now redundant.

Reviewed by:	bmilekic, jasone
2001-01-21 22:23:11 +00:00
jake
937122ae6d Make intr_nesting_level per-process, rather than per-cpu. Set up
interrupt threads to run with it always >= 1, so that malloc can
detect M_WAITOK from "interrupt" context.  This is also necessary
in order to context switch from sched_ithd() directly.

Reviewed By:	peter
2001-01-21 19:25:07 +00:00
jasone
24d53563ed Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.

This change is necessary in order to avoid some circular bootstrapping
dependencies.
2001-01-21 07:52:20 +00:00
peter
feb7598906 Remove the now-empty ipl_funcs.c file on all platforms. 2001-01-19 09:59:56 +00:00
peter
940f70431f Remove the static splXXX functions and replace them by static __inline
stubs.  Remove the xxx_imask variables which have been all but gone for
a while.
2001-01-19 09:57:29 +00:00
bmilekic
37decc93f5 Implement MTX_RECURSE flag for mtx_init().
All calls to mtx_init() for mutexes that recurse must now include
the MTX_RECURSE bit in the flag argument variable. This change is in
preparation for an upcoming (further) mutex API cleanup.
The witness code will call panic() if a lock is found to recurse but
the MTX_RECURSE bit was not set during the lock's initialization.

The old MTX_RECURSE "state" bit (in mtx_lock) has been renamed to
MTX_RECURSED, which is more appropriate given its meaning.

The following locks have been made "recursive," thus far:
eventhandler, Giant, callout, sched_lock, possibly some others declared
in the architecture-specific code, all of the network card driver locks
in pci/, as well as some other locks in dev/ stuff that I've found to
be recursive.

Reviewed by: jhb
2001-01-19 01:59:14 +00:00
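
An illustrative pair of mtx_init() calls reflecting the rule above; the function and foo_mtx are hypothetical, while the sched_lock line mirrors what the MD startup code would now have to pass:

    extern struct mtx sched_lock;
    static struct mtx foo_mtx;

    static void
    example_mtx_setup(void)
    {
            /* A lock that may recurse must say so up front, or witness panics. */
            mtx_init(&sched_lock, "sched lock", MTX_SPIN | MTX_RECURSE);

            /* A plain, non-recursing sleep mutex needs no extra flag. */
            mtx_init(&foo_mtx, "foo", MTX_DEF);
    }
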
phk
d147a07119 These files have been on deathrow for a couple of months, no appeal. 2001-01-16 10:01:56 +00:00
markm
fde03d4ba0 Remove NOBLOCKRANDOM as a compile-time option. Instead, provide
exactly the same functionality via a sysctl, making this feature
a run-time option.

The default is 1(ON), which means that the /dev/random device will
NOT block at startup.

Setting kern.random.sys.seeded to 0(OFF) will cause /dev/random
to block until the next reseed, at which stage the sysctl
will be changed back to 1(ON).

While I'm here, clean up the sysctls, and make them dynamic.
Reviewed by:		des
Tested on Alpha by:	obrien
2001-01-14 17:50:15 +00:00
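
A small userland sketch of driving the knob described above through sysctlbyname(3); the program itself is illustrative and error handling is minimal:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
            int seeded, zero = 0;
            size_t len = sizeof(seeded);

            if (sysctlbyname("kern.random.sys.seeded", &seeded, &len, NULL, 0) == -1)
                    err(1, "sysctlbyname(get)");
            printf("kern.random.sys.seeded = %d\n", seeded);

            /* Writing 0 makes /dev/random block until the next reseed. */
            if (sysctlbyname("kern.random.sys.seeded", NULL, NULL, &zero, sizeof(zero)) == -1)
                    err(1, "sysctlbyname(set)");
            return (0);
    }
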
jake
422162e650 Remove unused per-cpu variables inside_intr and ss_eflags. 2001-01-12 07:47:54 +00:00
jake
4f7710fc47 - Remove compatibility macros for accessing per-cpu variables.
__FreeBSD_version 500015 can be used to detect their disappearance.
- Move the symbols for SMP_prvspace and lapic from globals.s to
  locore.s.
- Remove globals.s with extreme prejudice.
2001-01-11 14:46:26 +00:00
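
A hedged sketch of how out-of-tree code might key off the version number mentioned above; the old-style direct access in the #else branch is illustrative only:

    #include <sys/param.h>

    #if __FreeBSD_version >= 500015
    #define MY_CPUID()      PCPU_GET(cpuid) /* compatibility macros are gone */
    #else
    #define MY_CPUID()      cpuid           /* old global-style per-cpu access */
    #endif
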
jake
4f5d8ed825 Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other than curproc.
2001-01-10 04:43:51 +00:00
obrien
50c1c6f1ac Put VCS ids in a consistent place and form. 2001-01-08 06:24:08 +00:00
obrien
464d16ce94 Remove seconds types we don't use that came in through the NetBSD heritage. 2001-01-08 06:17:11 +00:00
jake
d20b7bdad1 Implement accessors for per-cpu variables which don't depend on the
symbols in globals.s.

	PCPU_GET(name) returns the value of the per-cpu variable
	PCPU_PTR(name) returns a pointer to the per-cpu variable
	PCPU_SET(name, val) sets the value of the per-cpu variable

In general these are not yet used, compatibility macros remain.

Unifdef SMP in struct globaldata; this makes variables such as cpuid
available for UP as well.

Rebuilding modules is probably a good idea, but I believe old
modules will still work, as most of the old infrastructure
remains.
2001-01-06 19:55:42 +00:00
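
An illustrative kernel-context use of the three accessors, assuming the usual headers are in scope; switchticks as a per-cpu field and the function itself are assumptions for the example:

    static void
    show_pcpu(void)
    {
            u_int id;
            int *stp;

            id = PCPU_GET(cpuid);           /* read a per-cpu variable */
            stp = PCPU_PTR(switchticks);    /* take its address */
            PCPU_SET(switchticks, ticks);   /* store a new value */

            printf("cpu%u switched at tick %d\n", id, *stp);
    }
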
dfr
c0346114b4 Don't include <stddef.h> for offsetof() - it's also defined in <sys/types.h> 2000-12-30 13:07:58 +00:00
dfr
78040d3f19 Merge ALIGN changes from alpha code. 2000-12-30 13:06:01 +00:00
dfr
83798d65dc Fix typo. 2000-12-30 13:04:20 +00:00