3454 Commits

Author SHA1 Message Date
jhb
eef4e5d2f9 - Catch up to p_sflag changes.
- The MD code now initializes proc0.p_heldmtx, proc0.p_contested, and
  curproc.
- The MD code calls here with Giant already held.
- Proc locking.
2001-01-24 10:40:56 +00:00
jhb
a539ffb3e7 - Kill the have_giant parameter to userret() along with all instances of
  that name as a variable.  Use mtx_owned(&Giant) where appropriate
  instead.
- Proc locking.
- P_FOO -> PS_FOO.
- Update comments about enabling interrupts during trap and why this may be
  bad if we trap while holding a spin mutex.
- Don't bother resetting p to curproc in syscall() in case we are the child
  returning from fork.  The child hasn't returned from fork through syscall
  in a while.
- Remove fork_return() as it has been superseded by the MI version.
2001-01-24 09:53:49 +00:00
jhb
8e161d3269 - Relocate portions of this file to get it into an order closer to that of
  the alpha mp_machdep.c.
- Proc locking.
- Catch up to the P_FOO -> PS_FOO proc flags changes.
- Stick ap_init()'s prototype with the other prototypes.
- Remove the Xforwardirq IPI.
- Remove unused simplelocks.
- Don't try to psignal() from forward_statclock(), but set the appropriate
  signal pending flag in p_sflag instead.
- Add in KTR_SMP tracepoints for various SMP functions.   (Brought over
  from the alpha port)
2001-01-24 09:48:52 +00:00
jhb
558ee819ee Fix a typo.
Reported by:	albert
2001-01-24 08:42:39 +00:00
mckusick
28dc76bebe Never reuse AUTO_OID values.
Approved by:	Alfred Perlstein <bright@wintelcom.net>
2001-01-24 04:35:13 +00:00
jhb
fa0d2b97b0 Don't grab Giant when calling kmem_alloc/kmem_free as this is just
encouraging other people to follow the same practice.  If this is going
to be done, then it should be done inside of those two functions instead.
2001-01-24 00:36:03 +00:00
jhb
7c01c0a2c2 Proc locking. 2001-01-24 00:35:12 +00:00
jhb
1046fa2a8d - Proc locking.
- Protect calcru() with sched_lock.
2001-01-24 00:33:44 +00:00
jhb
065a35fbb6 - Proc locking.
- Protect calcru() with sched_lock.
2001-01-24 00:28:07 +00:00
jhb
a294d4e362 Proc locking. 2001-01-24 00:27:28 +00:00
mjacob
b4e8c12b24 Do not do the commenting out the way that saves bytes and looks cleaner
to you. Do it the way Vox Populi wants it.
2001-01-23 16:35:33 +00:00
ume
1ae749987d Add mibs to hold the number of forks since boot. New mibs are:
	vm.stats.vm.v_forks
	vm.stats.vm.v_vforks
	vm.stats.vm.v_rforks
	vm.stats.vm.v_kthreads
	vm.stats.vm.v_forkpages
	vm.stats.vm.v_vforkpages
	vm.stats.vm.v_rforkpages
	vm.stats.vm.v_kthreadpages

Submitted by:	Paul Herman <pherman@frenchfries.net>
Reviewed by:	alfred
2001-01-23 14:32:01 +00:00
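
For illustration, a minimal userland reader for one of these new counters via
sysctlbyname(3); it assumes the counter is exported as an unsigned int, the
usual width of a struct vmmeter field:

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdio.h>

	int
	main(void)
	{
		unsigned int forks;
		size_t len = sizeof(forks);

		if (sysctlbyname("vm.stats.vm.v_forks", &forks, &len,
		    NULL, 0) == -1) {
			perror("sysctlbyname");
			return (1);
		}
		printf("forks since boot: %u\n", forks);
		return (0);
	}
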
rwatson
a7fc696a51 o The move to using VADMIN under vaccess() resulted in some system
  calls returning EACCES instead of EPERM.  This patch modifies vaccess()
  to return EPERM instead of EACCES if VADMIN is among the requested
  rights.  This affects functions normally limited to the owners of
  a file, such as chmod(), as EPERM is the error indicating that
  privilege would allow the operation, rather than a change in mandatory
  or discretionary rights.

Reported by:	bde
2001-01-23 04:15:19 +00:00
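
A sketch of the error-selection rule this describes, not the actual vaccess()
internals; the VADMIN value below is a stand-in for the real flag in
sys/vnode.h:

	#include <errno.h>

	#define	VADMIN	0x10000		/* illustrative value only */

	static int
	vaccess_errno_sketch(int requested, int granted)
	{
		if ((requested & ~granted) == 0)
			return (0);	/* every requested right was granted */
		if (requested & VADMIN)
			return (EPERM);	/* only privilege could allow this */
		return (EACCES);	/* a rights change could allow it */
	}
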
mjacob
6c5381dc64 Move (now) unused variable declaration inside the block (now commented out). 2001-01-22 22:22:38 +00:00
jasone
cf0a6d372c Print correct file name and line number in mtx_assert().
Noticed by:	jake
2001-01-22 05:56:55 +00:00
jasone
ec55088093 Move most of sys/mutex.h into kern/kern_mutex.c, thereby making the mutex
inline functions non-inlined.  Hide parts of the mutex implementation that
should not be exposed.

Make sure that WITNESS code is not executed during boot until the mutexes
are fully initialized by SI_SUB_MUTEX (the original motivation for this
commit).

Submitted by:	peter
2001-01-21 22:34:43 +00:00
des
b3c27aaaf7 First step towards an MP-safe zone allocator:
- Have zalloc() and zfree() always lock the vm_zone.
- Remove zalloci() and zfreei(), which are now redundant.

Reviewed by:	bmilekic, jasone
2001-01-21 22:23:11 +00:00
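
A minimal sketch of the "always lock" rule, with a simplified zone rather
than the real struct vm_zone; once the free list is only ever touched with
the zone mutex held, the separate interrupt-safe zalloci()/zfreei() entry
points become unnecessary:

	struct vm_zone_sketch {
		struct mtx	 zlock;		/* protects zitems */
		void		*zitems;	/* singly linked free list */
	};

	static void *
	zalloc_sketch(struct vm_zone_sketch *z)
	{
		void *item;

		mtx_lock(&z->zlock);	/* unconditional, on every path */
		item = z->zitems;
		if (item != NULL)
			z->zitems = *(void **)item;	/* pop the head */
		mtx_unlock(&z->zlock);
		return (item);
	}
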
phk
ea80761e20 Convert a Debugger(3) to a panic(9) and an EINVAL.
Reminded by:	bde
2001-01-21 21:19:49 +00:00
jake
937122ae6d Make intr_nesting_level per-process, rather than per-cpu. Set up
interrupt threads to run with it always >= 1, so that malloc can
detect M_WAITOK from "interrupt" context.  This is also necessary
in order to context switch from sched_ithd() directly.

Reviewed By:	peter
2001-01-21 19:25:07 +00:00
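
The check this change makes possible, sketched with field names that follow
the era's conventions but are illustrative here; interrupt threads keep the
per-process nesting level >= 1, so a sleepable allocation from "interrupt"
context can be caught:

	static void
	malloc_guard_sketch(int flags)
	{
		if ((flags & M_WAITOK) != 0 &&
		    curproc->p_intr_nesting_level > 0)
			panic("malloc: M_WAITOK from interrupt context");
	}
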
jasone
c3fd76623a Make the order of the static initializer for all_mtx match the order of
fields in struct mtx.

Found by:	jake
2001-01-21 11:05:02 +00:00
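
The underlying C rule, shown with an illustrative struct rather than the real
struct mtx: positional (non-designated) initializers bind to fields strictly
in declaration order, so an initializer whose order drifts from the struct
silently misassigns every field after the mismatch:

	struct example_mtx {
		const char	*mtx_description;	/* must come first... */
		int		 mtx_flags;		/* ...then this... */
		int		 mtx_recurse;		/* ...then this */
	};

	/* Correct only while the field order above holds: */
	static struct example_mtx all_mtx_sketch =
	    { "All mutexes queued up", 0, 0 };
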
peter
0f92919216 Remove APIC_INTR_DIAGNOSTIC - this has been disabled for some time now.
Remove some leftovers of removed SMP options.
2001-01-21 07:54:10 +00:00
jasone
24d53563ed Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.

This change is necessary in order to avoid some circular bootstrapping
dependencies.
2001-01-21 07:52:20 +00:00
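
A sketch of the two-phase bootstrap this describes; the deferred list, its
linkage field, and mtx_finish_init() are hypothetical stand-ins for however
the real code tracks not-yet-complete mutexes:

	static LIST_HEAD(, mtx) deferred_mtx_list =
	    LIST_HEAD_INITIALIZER(deferred_mtx_list);

	static void
	mtx_init_finish(void *dummy)
	{
		struct mtx *m;

		/* malloc() is safe now; finish every early-boot mutex. */
		LIST_FOREACH(m, &deferred_mtx_list, mtx_deferred)
			mtx_finish_init(m);
	}
	SYSINIT(mtxfini, SI_SUB_MUTEX, SI_ORDER_FIRST, mtx_init_finish, NULL);
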
jake
c45422f874 Remove the per-cpu pages used for copy and zero-ing pages of memory
for SMP; just use the same ones as UP.  These weren't used without
holding Giant anyway, and the routines that use them would have to
be protected from pre-emption to avoid migrating cpus.
2001-01-21 06:50:03 +00:00
jhb
bf9da1eab7 - All of proc_compare needs sched_lock, so hold it for the for loop that
  calls it rather than obtaining and releasing it a lot in proc_compare.
- Collect all of the data gathering and stick it just after the
  proc_compare loop.  This way, we only have to grab sched_lock once now
  when handling SIGINFO.  All the printf's are done after the values are
  calculated.

Submitted mostly by:	bde
2001-01-20 23:03:20 +00:00
bmilekic
adf34d0448 When short of mbufs or mbuf clusters, we sleep on appropriate "counters."
The counters are incremented when a thread goes to sleep and decremented
either when a thread is woken up by another thread or when the sleep
times out. There existed a race where the sleep count could be decremented
twice, resulting in an eventual underflow.
Move the decrementing of the "counters" to the thread initiating the sleep
and thus remedy the problem.
2001-01-20 21:29:10 +00:00
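
The shape of the fix, with pthreads standing in for the kernel's sleep and
wakeup machinery (names illustrative): the sleeping thread itself performs
the one and only decrement, on both the wakeup and the timeout path, so the
count can never go down twice for a single sleep:

	#include <pthread.h>
	#include <time.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t avail = PTHREAD_COND_INITIALIZER;
	static int sleepers;	/* the "counter" from the commit */

	/* Returns 0 if woken, ETIMEDOUT if the sleep timed out. */
	static int
	wait_for_buffer(const struct timespec *deadline)
	{
		int error;

		pthread_mutex_lock(&lock);
		sleepers++;
		error = pthread_cond_timedwait(&avail, &lock, deadline);
		sleepers--;	/* sole decrement: runs on either path */
		pthread_mutex_unlock(&lock);
		return (error);
	}
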
jhb
5e8c3954d5 Temporarily disable the printf() for microuptime() going backwards, the
SIGXCPU signal, and killing of processes that exceed their allowed run
time until they can play nice with sched_lock.  Right now they are just
potential panics waiting to happen.  The printf() has bitten several
people.
2001-01-20 02:57:59 +00:00
jake
ea36052df5 - Make npx_intr INTR_MPSAFE and move acquiring Giant into the
  function itself.
- Remove a hack to allow acquiring Giant from the npx asm trap
  vector.
2001-01-20 02:30:58 +00:00
jhb
a11e21597d Be more careful with sched_lock in the SIGINFO handler. Specifically, do
not hold sched_lock while calling ttyprintf().  If we are on a serial
console, then ttyprintf() will end up getting the sio lock, resulting in
a lock order violation.

Noticed by:	des
2001-01-20 02:04:44 +00:00
peter
0d5e420364 Use #ifdef DEV_NPX from opt_npx.h instead of #if NNPX > 0 from npx.h 2001-01-19 13:19:02 +00:00
peter
c0bc1dba91 apic_itrace_splz[] is unused 2001-01-19 10:48:35 +00:00
peter
940f70431f Remove the static splXXX functions and replace them with static __inline
stubs.  Remove the xxx_imask variables which have been all but gone for
a while.
2001-01-19 09:57:29 +00:00
jhb
a4116607b8 Revert revision 1.102. I don't think p_nice needs to be protected with
sched_lock, and I'm fairly certain P_TRACED will be protected with the
proc lock instead.

Pointed out indirectly by:	bde
2001-01-19 08:23:22 +00:00
dillon
9b157601a0 Do not cluster with B_LOCKED buffers.
This is an odd one.  This patch appears to fix a panic related to background
bitmap writes (for FFS), though neither Kirk, Ian, nor I can figure out how
B_CLUSTEROK could possibly be set on a bitmap block to cause the clustering
code to improperly cluster with a buffer undergoing a background write.

In any case, the clustering code is very fragile and this patch helps with
that, as well as possibly fixing a bug Andre was having.

Suggested by: Ian Dowse <iedowse@maths.tcd.ie>
Testing by: Andre Albsmeier <andre.albsmeier@mchp.siemens.de>
2001-01-19 05:31:07 +00:00
bmilekic
37decc93f5 Implement MTX_RECURSE flag for mtx_init().
All calls to mtx_init() for mutexes that recurse must now include
the MTX_RECURSE bit in the flags argument. This change is in
preparation for an upcoming (further) mutex API cleanup.
The witness code will call panic() if a lock is found to recurse but
the MTX_RECURSE bit was not set during the lock's initialization.

The old MTX_RECURSE "state" bit (in mtx_lock) has been renamed to
MTX_RECURSED, which is more appropriate given its meaning.

The following locks have been made "recursive," thus far:
eventhandler, Giant, callout, sched_lock, possibly some others declared
in the architecture-specific code, all of the network card driver locks
in pci/, as well as some other locks under dev/ that I've found to
be recursive.

Reviewed by: jhb
2001-01-19 01:59:14 +00:00
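
A usage sketch under the new rule, assuming the three-argument mtx_init() of
this period: a mutex its owner may acquire again must declare MTX_RECURSE up
front, or WITNESS panics on the first recursive acquire:

	static void
	recursive_mtx_sketch(void)
	{
		static struct mtx m;

		mtx_init(&m, "example lock", MTX_DEF | MTX_RECURSE);
		mtx_lock(&m);
		mtx_lock(&m);	/* owner recursing: MTX_RECURSED gets set */
		mtx_unlock(&m);
		mtx_unlock(&m);
		mtx_destroy(&m);
	}
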
jhb
2c69cab2ed Protect p_stat and p_oncpu with sched_lock in forward_signal(). 2001-01-18 08:19:25 +00:00
bmilekic
3650624f86 Add some KASSERTs, valid only if WITNESS is defined, to verify that the mbuf
allocation routines are being called safely. Since we drop our relevant
mbuf mutex and acquire Giant before we call kmem_malloc(), we have
to make sure that this does not pave the way for a fatal lock order
reversal. Check that either Giant is already held (in which case it's safe
to grab it again and recurse on it) or, if Giant is not held, that no
other locks are held before we try to acquire Giant.

Similarly, add a KASSERT valid in the WITNESS case in m_reclaim() to
nail callers who end up in m_reclaim() and hold a lock.

Pointed out by: jhb
2001-01-16 01:53:13 +00:00
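
The invariant being asserted, sketched; held_lock_count() is a hypothetical
stand-in for however the WITNESS code counts the locks a thread holds:

	static void
	mbuf_alloc_precheck_sketch(void)
	{
	#ifdef WITNESS
		KASSERT(mtx_owned(&Giant) || held_lock_count() == 0,
		    ("mbuf alloc: would acquire Giant with a lock held"));
	#endif
	}
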
jasone
20a8a23d2b Implement condition variables. 2001-01-16 01:00:43 +00:00
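
A minimal wait/signal sketch against the new interface, assuming the
cv_wait()/cv_signal() names: cv_wait() atomically drops the mutex while
asleep and retakes it before returning, so the predicate is rechecked under
the lock:

	static struct mtx qlock;
	static struct cv qcv;
	static int qlen;

	static void
	consumer_sketch(void)
	{
		mtx_lock(&qlock);
		while (qlen == 0)
			cv_wait(&qcv, &qlock);	/* qlock dropped while asleep */
		qlen--;
		mtx_unlock(&qlock);
	}

	static void
	producer_sketch(void)
	{
		mtx_lock(&qlock);
		qlen++;
		cv_signal(&qcv);	/* wake one waiter */
		mtx_unlock(&qlock);
	}
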
phk
27cbeb0325 A bit of sanity-checking in bioqdisksort(): panic if we recurse. 2001-01-14 18:48:42 +00:00
des
3f582183f5 Use predictable internal names for the sysvipc modules, so we have a
chance of getting dependencies working.
2001-01-14 18:04:30 +00:00
jhb
f0aa56d3bf - Use sched_lock to prevent the mutex name from changing out from under us
  while we are copying it to the kinfo_proc structure.
- Test against p_stat to see if we are blocked on a mutex.
- Terminate ki_mtxname with a null char rather than ki_wmesg.
2001-01-13 23:08:34 +00:00
ben
a15d151520 Fix getsid() to use "==" instead of "=".
Not objected to by:	audit
2001-01-13 22:49:59 +00:00
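
The bug class behind this one-character fix, in a runnable illustration: "="
in a condition assigns and then tests the assigned value, silently replacing
the intended comparison:

	#include <stdio.h>

	int
	main(void)
	{
		int pid = 42;

		if (pid = 0)		/* BUG: assigns 0; branch never taken */
			printf("never reached\n");
		if (pid == 0)		/* intended comparison */
			printf("pid is now zero\n");
		return (0);
	}
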
jake
0a7d951162 Change return ??? to return -1 in some #if 0'ed code. 2001-01-12 08:24:25 +00:00
dwmalone
27e8ba2ba4 Style improvements for last fix. Should be functionally the same.
Submitted by:	bde
2001-01-11 00:13:54 +00:00
jake
4f5d8ed825 Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other than curproc.
2001-01-10 04:43:51 +00:00
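
A usage sketch of the accessors; the field names follow the era's struct
globaldata but are illustrative here:

	static void
	pcpu_sketch(void)
	{
		int id, *flagp;

		id = PCPU_GET(cpuid);		/* read a per-cpu scalar */
		PCPU_SET(astpending, 1);	/* write one */
		flagp = PCPU_PTR(astpending);	/* address-of, when needed */
		(void)id;
		(void)flagp;
	}
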
bmilekic
3726db774d In m_mballoc_wait(), drop the mmbfree mutex lock prior to calling
m_reclaim() and re-acquire it when m_reclaim() returns. This means that
we now call the drain routines without holding the mutex lock and
recursing into it. This was done for mainly two reasons:

(i) Avoid the long recursion; long recursions are typically bad and this
    is the case here because we block all other code from freeing mbufs
    if they need to. Doing that is kind of counter-productive, since we're
    really hoping that someone will free.

(ii) More importantly, avoid a potential lock order reversal. Right now,
     not all the locks have been added to our networking code; but
     without this change, we're introducing the possibility for deadlock.
     Consider for example ip_drain(). We will likely eventually introduce
     a lock for ipq there, and so ip_freef() will be called with ipq lock
     held. But, ip_freef() calls m_freem() which in turn acquires the
     mmbfree lock. Since we were previously calling ip_drain() with mmbfree
     held, our lock order would be: mmbfree->ipq->mmbfree. Some other code
     may very well lock ipq first and then call ip_freef(). This would
     result in the regular lock order, ipq->mmbfree. Clearly, we have
     deadlock if one thread acquires the ipq lock and sits waiting for
     mmbfree while another thread calling m_reclaim() acquires mmbfree
     and sits waiting for the ipq lock.

Also, make sure to add a comment above m_reclaim()'s definition briefly
explaining this. Also document this above the call to m_reclaim() in
m_mballoc_wait().

Suggested and reviewed by: alfred
2001-01-09 23:58:56 +00:00
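
The drop-before-drain pattern, simplified (lock and function names
illustrative): releasing the mbuf free-list mutex before calling the drain
routines keeps the system-wide order a consistent ipq -> mmbfree instead of
the reversible mmbfree -> ipq -> mmbfree described above:

	static void
	m_reclaim_sketch(void)
	{
		mtx_unlock(&mmbfree_mtx);	/* never held across the drains */
		ip_drain();			/* may take ipq, then mmbfree */
		mtx_lock(&mmbfree_mtx);		/* restore for the caller */
	}
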
wollman
70c88bb8da select() DKI is now in <sys/selinfo.h>. 2001-01-09 04:33:49 +00:00
n_hibma
cfd616810d Unset the devclass if the attach fails and the devclass was not set to
begin with.

Reviewed by:	dfr
2001-01-08 22:16:26 +00:00
dwmalone
e42ccf8d79 If we failed to allocate the file descriptor for the write end of
the pipe, then we were corrupting the pipe_zone free list by calling
pipeclose on rpipe twice. NULL out rpipe to avoid this.

Reviewed by:	dillon
Reviewed by:	iedowse
2001-01-08 22:14:48 +00:00
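
The shape of the fix, with the error handling simplified and names mirroring
the commit only for illustration: once the read side has been closed on the
first failure, clearing the pointer keeps the shared unwind path from closing
it a second time and corrupting the zone free list:

	static void
	pipe_error_sketch(struct pipe *rpipe, int wfd_failed)
	{
		if (wfd_failed) {
			pipeclose(rpipe);	/* first and only close */
			rpipe = NULL;		/* the fix: mark it done */
		}
		if (rpipe != NULL)	/* shared unwind: now safely skipped */
			pipeclose(rpipe);
	}
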
jake
470280a25e Fix a warning. The type of globaldata.gd_prvspace has changed. 2001-01-08 15:25:45 +00:00
alfred
88a3867c6e Don't use SCARG.
Pointed out by: bde
2001-01-08 07:22:06 +00:00