6012 Commits

Author SHA1 Message Date
davidxu
0330c19021 Move code for detecting PS_NEEDSIGCHK into thread_schedule_upcall,
I think it is a better place to handle it.
2003-02-17 14:41:22 +00:00
tjr
6ebeaa8ec8 Use the proc lock to protect p_realtimer instead of Giant, and obtain
sched_lock around accesses to p_stats->p_timer[] to avoid a potential
race with hardclock. getitimer(), setitimer() and the realitexpire()
callout are now Giant-free.
2003-02-17 10:03:02 +00:00
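
A minimal sketch of the locking split this entry describes, assuming a simplified getitimer()-style reader; only p_realtimer, p_stats->p_timer[] and the two locks come from the commit message, the helper itself is illustrative:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <sys/resourcevar.h>
    #include <sys/time.h>

    static void
    example_read_timer(struct proc *p, u_int which, struct itimerval *aitv)
    {
            if (which == ITIMER_REAL) {
                    /* p_realtimer is now protected by the proc lock, not Giant. */
                    PROC_LOCK(p);
                    *aitv = p->p_realtimer;
                    PROC_UNLOCK(p);
            } else {
                    /* p_stats->p_timer[] can race with hardclock(); take sched_lock. */
                    mtx_lock_spin(&sched_lock);
                    *aitv = p->p_stats->p_timer[which];
                    mtx_unlock_spin(&sched_lock);
            }
    }
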
jeff
5c29a640b8 - Add a new function, thread_signal_add(), that is called from postsig to
add a signal to a mailbox's pending set.
 - Add a new function, thread_signal_upcall(), this causes the current thread
   to upcall so that we can deliver pending signals.

Reviewed by:	mini
2003-02-17 09:58:11 +00:00
julian
af55753a06 Move a bunch of flags from the KSE to the thread.
I was in two minds as to where to put them in the first case..
I should have listened to the other mind.

Submitted by:	 parts by davidxu@
Reviewed by:	jeff@ mini@
2003-02-17 09:55:10 +00:00
jeff
590a39e29b - Split the struct kse into struct upcall and struct kse. struct kse will
soon be visible only to schedulers.  This greatly simplifies much of the
   KSE code.

Submitted by:	davidxu
2003-02-17 05:14:26 +00:00
jeff
aa384c931f - Move ke_sticks, ke_iticks, ke_uticks, ke_uu, ke_su, and ke_iu back into
the proc.  These counters are only examined through calcru.

Submitted by:	davidxu
Tested on:	x86, alpha, UP/SMP
2003-02-17 02:19:58 +00:00
alfred
81da3fbe2a Fix logic in loop so it actually executes.
Pointed out by: fjoe
2003-02-16 16:12:10 +00:00
phk
4bfb37f22e Remove #include <sys/dkstat.h> 2003-02-16 14:13:23 +00:00
phk
811b1cae1c Move the tty related statistics counters to live with the tty code. 2003-02-16 13:22:15 +00:00
jeff
de87d496d3 - Introduce a new function bremfreel() that does a bremfree with the buf
queue lock already held.
 - In getblk() and flushbufqueues() use bremfreel() while we still have the
   buf queue lock held to keep the lists consistent.
 - Add LK_NOWAIT to two cases where we're essentially asserting that the bufs
   are not locked while acquiring the locks.  This will make sure that we get
   the appropriate panic() and not another one for sleeping with a lock held.
2003-02-16 10:43:06 +00:00
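
An illustrative sketch of the bremfree()/bremfreel() split described above, assuming a single buffer-queue mutex; the queue names and the lock variable are assumptions, only the two function names come from the entry:

    void
    bremfreel(struct buf *bp)
    {
            /* Caller already holds the buffer queue lock. */
            mtx_assert(&bqlock, MA_OWNED);
            TAILQ_REMOVE(&bufqueues[bp->b_qindex], bp, b_freelist);
            bp->b_qindex = QUEUE_NONE;
    }

    void
    bremfree(struct buf *bp)
    {
            mtx_lock(&bqlock);
            bremfreel(bp);
            mtx_unlock(&bqlock);
    }
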
jeff
bdd1053074 - Add a WITNESS_SLEEP() for the appropriate cases in lockmgr(). 2003-02-16 10:39:49 +00:00
alfred
4e5b966b93 prevent overflow in shminfo.shmmax 2003-02-16 06:08:55 +00:00
hsu
762f64befa Remove extraneous FILEDESC_LOCK around atomic read. 2003-02-16 02:15:15 +00:00
arr
8e7322af29 - Update a couple of comments to make sense with what today's code is
doing (stale comments make arr something something ;)).
2003-02-15 23:25:12 +00:00
tegge
f360480a68 Avoid file lock leakage when linuxthreads port or rfork is used:
- Mark the process leader as having an advisory lock
  - Check if process leader is marked as having advisory lock when
    closing file
  - Check that file is still open after lock has been obtained
  - Don't allow file descriptor table sharing between processes
    with different leaders

PR:		10265
Reviewed by:	alfred
2003-02-15 22:43:05 +00:00
arr
e1e48e3d34 - Remove old comment for PURGE() as it no longer exists and implied it
was a comment to cache_zap().
- Add a comment to quickly state what cache_zap() does.

Reviewed by:	phk, mux
2003-02-15 18:58:06 +00:00
tjr
c831929bbb Acquire Giant around calls to kern_sigaction() in sigaction(),
freebsd4_sigaction() and osigaction() instead of around the whole
body of those functions. They now no longer hold Giant around calls
to copyin() and copyout(), and it is slightly more obvious what
Giant is protecting.
2003-02-15 09:56:09 +00:00
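
A hedged sketch of the narrowed Giant scope this entry describes: copyin()/copyout() run outside Giant and only the kern_sigaction() call is bracketed. The argument handling is simplified and the kern_sigaction() flags argument is assumed to be 0:

    int
    sigaction(struct thread *td, struct sigaction_args *uap)
    {
            struct sigaction act, oact;
            int error;

            if (uap->act != NULL) {
                    error = copyin(uap->act, &act, sizeof(act));
                    if (error != 0)
                            return (error);
            }
            mtx_lock(&Giant);
            error = kern_sigaction(td, uap->sig,
                uap->act != NULL ? &act : NULL,
                uap->oact != NULL ? &oact : NULL, 0);
            mtx_unlock(&Giant);
            if (error == 0 && uap->oact != NULL)
                    error = copyout(&oact, uap->oact, sizeof(oact));
            return (error);
    }
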
tjr
a000ef163a osigpending() no longer needs Giant, for the same reason sigpending()
does not.
2003-02-15 09:15:30 +00:00
tjr
f12b647e3e All uses of p_siglist are protected by the proc lock now, so there's
no need to acquire Giant in sigpending() anymore.
2003-02-15 08:42:02 +00:00
alfred
29fb7c2bce Do not allow kqueues to be passed via unix domain sockets. 2003-02-15 06:04:55 +00:00
alfred
d9a7e5d627 Fix LOR with PROC/filedesc. Introduce fdesc_mtx that will be used as a
barrier against free'ing filedesc structures out from under other
processes.  Basically if you want to
access another process's filedesc, you want to hold this mutex over the
entire operation.
2003-02-15 05:52:56 +00:00
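
A sketch of the convention this entry introduces: hold fdesc_mtx across the whole access to another process's filedesc so it cannot be freed underneath the caller. The helper and the fields it walks are illustrative:

    extern struct mtx fdesc_mtx;

    static int
    example_count_open_files(struct proc *p)
    {
            struct filedesc *fdp;
            int i, count;

            count = 0;
            mtx_lock(&fdesc_mtx);
            fdp = p->p_fd;
            if (fdp != NULL) {
                    FILEDESC_LOCK(fdp);
                    for (i = 0; i <= fdp->fd_lastfile; i++)
                            if (fdp->fd_ofiles[i] != NULL)
                                    count++;
                    FILEDESC_UNLOCK(fdp);
            }
            mtx_unlock(&fdesc_mtx);
            return (count);
    }
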
bmilekic
49e9cba72e Make m_getm() always return the top of the newly allocated chain, as
opposed to returning the top of the old chain when there was one and
the top of the newly allocated chain if there was no old chain.

Actually, it should be noted that prior to this fix, although the
comment above m_getm() advertised that m_getm() would return the
top of the old chain (if an old chain was being passed in) it
actually [wrongly] was returning the tail mbuf in the old chain
instead.  This is a bug but since the one use of m_getm() in
the tree luckily did not depend on the behavior, it happened
to work out without notice.

Harti Brandt pointed out that the advertised behavior was actually
not the real behavior and so this change makes m_getm() ALWAYS
return the newly allocated chain (and fixes the comment).  This
is less confusing and is the best course of action as then the
caller is always able to have both a reference to the top of
the original chain (because it passes it in to the call) and
a reference to the newly attached chain.  Although the API is
slightly modified, I don't think that any third-party code uses
m_getm() and if it does, it surely can't be working properly
because the old behavior was bogus.

API bug pointed out by: Harti Brandt <brandt@fokus.fraunhofer.de>
2003-02-14 16:50:13 +00:00
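
A usage sketch for the revised m_getm() contract: the return value is always the head of the newly allocated mbufs, while the caller keeps its own pointer to the original chain. The helper function is hypothetical:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/mbuf.h>

    /* Hypothetical helper: grow an existing chain by 'len' bytes of mbufs. */
    static int
    example_grow_chain(struct mbuf *top, int len)
    {
            struct mbuf *newchain;

            newchain = m_getm(top, len, M_TRYWAIT, MT_DATA);
            if (newchain == NULL)
                    return (ENOBUFS);
            /*
             * 'top' still heads the original chain (the new mbufs were
             * appended to its tail); 'newchain' points at the first of
             * the newly allocated mbufs.
             */
            return (0);
    }
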
des
c872f41146 Style nit. 2003-02-14 13:30:25 +00:00
alfred
f69f06f4e6 KASSERT format string does not need newline termination 2003-02-14 13:28:44 +00:00
alfred
766548f91e Add kasserts to catch bad API usage.
Submitted by: Hiten Pandya <hiten@unixdaemons.com>
2003-02-14 13:18:51 +00:00
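
An illustrative KASSERT in the spirit of the two entries above: the message is a parenthesized printf-style argument list and, per the earlier commit, needs no trailing newline. The checked structure and function are hypothetical:

    struct example_softc;

    static void
    example_consume(struct example_softc *sc, int len)
    {
            KASSERT(sc != NULL, ("example_consume: NULL softc"));
            KASSERT(len > 0, ("example_consume: bad length %d", len));
            /* ... */
    }
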
alfred
268bc18ef2 Fix crash dumps on ata and scsi.
To fix scsi, don't wait for ithreads if we're dumping, it makes the
debugger sad.

To fix ata, use what appears to be a polling method if we're dumping,
I stole this from tmm but added code to ensure that this change is
only in effect while dumping.

Tested by: des
2003-02-14 13:10:40 +00:00
alfred
6c48c14c49 style. 2003-02-14 12:44:48 +00:00
alfred
952ec160dd Print a backtrace in case we tsleep from inside of DDB. 2003-02-14 12:44:07 +00:00
alc
d3cfca777c Use atomic ops to update amountpipekva. Amountpipekva represents the
total kernel virtual address space used by all pipes.  It is, thus, outside
the scope of any individual pipe lock.
2003-02-13 19:39:54 +00:00
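
A minimal sketch of the accounting pattern this entry describes; the helper is hypothetical, the variable is declared as u_int for the sketch, and only amountpipekva and the use of atomic ops come from the commit message:

    #include <machine/atomic.h>

    extern u_int amountpipekva;     /* total pipe KVA, global to all pipes */

    static void
    example_pipe_kva_adjust(u_int size, int freeing)
    {
            /* No single pipe lock covers this counter, so update it atomically. */
            if (freeing)
                    atomic_subtract_int(&amountpipekva, size);
            else
                    atomic_add_int(&amountpipekva, size);
    }
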
des
0fb179fc08 It seems the extra precautions are no longer needed. 2003-02-13 10:05:20 +00:00
tjr
729d326fb2 Add an XXX comment noting that getrusage() accesses p_stats->p_ru
and p_stats->p_cru without holding the appropriate locks.
2003-02-13 09:53:59 +00:00
peter
30c571736e Add a 'debug.witness_trace' sysctl (and tunable) when DDB is present.
This causes LOR and could-sleep messages to come with a stack trace.
2003-02-13 01:35:56 +00:00
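
A sketch of how a combined loader tunable and sysctl knob like debug.witness_trace is typically declared; the backing variable name and description string are assumptions:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int witness_trace = 1;
    TUNABLE_INT("debug.witness_trace", &witness_trace);
    SYSCTL_INT(_debug, OID_AUTO, witness_trace, CTLFLAG_RW, &witness_trace, 0,
        "print a stack trace with witness LOR/could-sleep warnings");
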
peter
e6756cd99a Print "Stack backtrace:" right before dumping the backtrace. We cannot
expect end users to automatically recognize a stack trace for what it is.
2003-02-13 01:33:59 +00:00
imp
003f2e27d6 Implement rman_get_device
# I thought this was already implemented

Pointy hat on my head shown by: peter
2003-02-12 07:00:59 +00:00
alfred
2cadfd181f Don't lock FILEDESC under PROC.
The locking here needs to be revisited, but this ought to get rid of the
LOR messages that people are complaining about for now.  I imagine either
I or someone else interested in smp will eventually clear this up.
2003-02-11 07:20:52 +00:00
jeff
4d663017dc - Add a comment about a race that will happen without Giant. 2003-02-10 22:47:34 +00:00
jeff
2492221864 - Unlock the nblock after the loop in bwillwrite(). 2003-02-10 22:33:59 +00:00
jeff
1564002a4b - Enable STRICT_RESCHED until code that dynamically decides on resched
strictness based on the current workload is finished.
2003-02-10 14:11:23 +00:00
jeff
fbd09df3eb - Add a new variable 'kg_runtime' that tracks the amount of time we've run.
- Use the ratio of kg_runtime / kg_slptime to determine our dynamic priority.
 - Scale kg_runtime and kg_slptime back when the sum of the two exceeds
   SCHED_SLP_RUN_MAX.  This allows us to slowly forget old behavior.
 - Scale back the runtime and slptime in fork so that the new process has the
   same ratio but much less accumulated time.  This causes new behavior to be
   noticed more quickly.
2003-02-10 14:03:45 +00:00
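
A hedged sketch of the history-scaling step described above; only kg_runtime, kg_slptime and SCHED_SLP_RUN_MAX are taken from the entry, the halving loop and helper name are illustrative:

    static void
    example_sched_prune_history(struct ksegrp *kg)
    {
            /*
             * Keep the run/sleep ratio (which drives the dynamic priority)
             * while shrinking the absolute totals, so old behaviour is
             * slowly forgotten.
             */
            while (kg->kg_runtime + kg->kg_slptime > SCHED_SLP_RUN_MAX) {
                    kg->kg_runtime /= 2;
                    kg->kg_slptime /= 2;
            }
    }
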
tjr
ed08307335 Lock the proc around accessing p_siglist in ttycheckoutq() in the
unused wait != 0 case.
2003-02-10 06:06:46 +00:00
jeff
2de830f8f6 - In getnewbuf() unlock the bq lock prior to sleeping when we're out of
buffers.

Submitted by:	tegge
2003-02-10 06:02:51 +00:00
jake
d3a0473d61 Remove mtx_lock_giant from functions which are mp-safe. 2003-02-10 04:42:20 +00:00
jeff
6bab19f3ac - Correct another atomic op.
Spotted by:	alc
2003-02-09 22:39:51 +00:00
jeff
404c49d980 - Claim we're 'fsync' and not 'spec_fsync' in vop_stdfsync. 2003-02-09 12:29:38 +00:00
jeff
528cceebc4 - Move some code out from #ifdef INVARIANTS. 2003-02-09 12:11:37 +00:00
jeff
33fe54e557 - Update a printf format for b_flags. 2003-02-09 11:56:13 +00:00
jeff
87e306ad71 - Cleanup unlocked accesses to buf flags by introducing a new b_vflag member
that is protected by the vnode lock.
 - Move B_SCANNED into b_vflags and call it BV_SCANNED.
 - Create a vop_stdfsync() modeled after spec's sync.
 - Replace spec_fsync, msdos_fsync, and hpfs_fsync with the stdfsync and some
   fs specific processing.  This gives all of these filesystems proper
   behavior wrt MNT_WAIT/NOWAIT and the use of the B_SCANNED flag.
 - Annotate the locking in buf.h
2003-02-09 11:28:35 +00:00
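
A simplified sketch of the BV_SCANNED pass described above: each dirty buffer is marked under the vnode lock as it is scanned so a single fsync pass does not revisit it. The loop structure is illustrative:

    static void
    example_fsync_pass(struct vnode *vp)
    {
            struct buf *bp;

            TAILQ_FOREACH(bp, &vp->v_dirtyblkhd, b_vnbufs) {
                    if ((bp->b_vflags & BV_SCANNED) != 0)
                            continue;
                    bp->b_vflags |= BV_SCANNED;
                    /* ... lock and write the buffer out ... */
            }
    }
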
jeff
75e9ed76e4 - Spell 'add' and not 'subtract' in an atomic op.
Spotted by:	alc
Pointy hat to:	jeff
2003-02-09 11:21:40 +00:00
jeff
734283166f - Lock down the buffer cache's infrastructure code. This includes locks on
buf lists, synchronization variables, and atomic ops for the counters.
   This change does not remove giant from any code although some pushdown
   may be possible.
 - In vfs_bio_awrite() don't access buf fields without the buf lock.
2003-02-09 09:47:31 +00:00
julian
cf07da2f1a A little infrastructure, preceding some upcoming changes
to the profiling and statistics code.

Submitted by:	DavidXu@
Reviewed by:	peter@
2003-02-08 02:58:16 +00:00