Commit Graph

7750 Commits

njl
a39c5fd2f1 Set flags for devices before probing them. In the non-ISA case, flags set
via hints were not getting passed to the child.

PR:		kern/72489
MFC after:	1 day
2004-10-13 07:10:41 +00:00
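As a sketch of the usual newbus pattern the entry above describes (not the
committed diff; the bus and child variables here are illustrative):

    int flags;

    /* Pull the "flags" hint for this child, if any, and apply it before
     * the probe so the driver sees it in the non-ISA case as well. */
    if (resource_int_value(device_get_name(child), device_get_unit(child),
        "flags", &flags) == 0)
            device_set_flags(child, flags);
    device_probe_and_attach(child);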
phk
e606aa635e Don't call driver close unless we have one. 2004-10-12 21:40:41 +00:00
phk
e948dce998 Make !SMP kernels compile, and as far as I can tell, work again. 2004-10-12 20:57:37 +00:00
jhb
94ad1578b0 Whitespace fix. 2004-10-12 19:36:00 +00:00
jhb
a8c1c80ef5 Refine the turnstile and sleep queue interfaces just a bit:
- Add a new _lock() call to each API that locks the associated chain lock
  for a lock_object pointer or wait channel.  The _lookup() functions now
  require that the chain lock be locked via _lock() when they are called.
- Change sleepq_add(), turnstile_wait() and turnstile_claim() to lookup
  the associated queue structure internally via _lookup() rather than
  accepting a pointer from the caller.  For turnstiles, this means that
  the actual lookup of the turnstile in the hash table is only done when
  the thread actually blocks rather than being done on each loop iteration
  in _mtx_lock_sleep().  For sleep queues, this means that sleepq_lookup()
  is no longer used outside of the sleep queue code except to implement an
  assertion in cv_destroy().
- Change sleepq_broadcast() and sleepq_signal() to require that the chain
  lock is already held.  For condition variables, this lets the
  cv_broadcast() and cv_signal() functions lock the sleep queue chain lock
  while testing the waiters count.  This means that the waiters count
  internal to condition variables is no longer protected by the interlock
  mutex and cv_broadcast() and cv_signal() now no longer require that the
  interlock be held when they are called.  This lets consumers of condition
  variables drop the lock before waking other threads which can result in
  fewer context switches.

MFC after:	1 month
2004-10-12 18:36:20 +00:00
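As a sketch of the caller-side sleep queue pattern the entry above describes
(not the committed diff; the exact argument lists, flag values, and the
interlock type are assumptions here):

    /* Sleeper: lock the chain for the wait channel, queue, and block.
     * sleepq_add() now looks up the sleep queue internally. */
    sleepq_lock(wchan);                     /* the new _lock() call */
    sleepq_add(wchan, interlock, "example", 0 /* flags elided */);
    sleepq_wait(wchan);

    /* Waker: the chain lock must already be held when broadcasting, so a
     * condition variable can test its waiter count under it rather than
     * under the interlock mutex. */
    sleepq_lock(wchan);
    if (waiters != 0)
            sleepq_broadcast(wchan, 0 /* flags */, -1 /* pri placeholder */);
    else
            sleepq_release(wchan);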
jhb
30fc565c2d Add a WITNESS_WARN() to uiomove() to whine if locks are held when this
function is called.

MFC after:	1 month
2004-10-12 18:27:14 +00:00
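The check presumably takes the usual WITNESS_WARN() form, roughly as follows
(the exact flags are an assumption):

    /* Complain, under WITNESS, if any non-sleepable locks are held here,
     * since uiomove() may sleep on a page fault. */
    WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL, "uiomove()");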
ups
d193856a50 Directly modifying the priority of a thread that may be on the runqueue
can break the sorting order of the ksegrp run queue.

Tested   by: pho
Reviewed by: jhb, julian
Approved by: sam (mentor)
MFC: ASAP
2004-10-12 16:31:23 +00:00
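The safe pattern, consistent with the "Use scheduler api to adjust thread
priority" entry further down this log, is to let the scheduler reposition the
thread instead of writing td_priority directly (a sketch; the locking shown
is assumed):

    mtx_lock_spin(&sched_lock);
    /* Wrong: td stays sorted under its old priority if it is on a run queue.
     *     td->td_priority = new_pri;
     * Right: the scheduler dequeues and requeues as needed, so the ksegrp
     * run queue stays sorted. */
    sched_prio(td, new_pri);
    mtx_unlock_spin(&sched_lock);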
ups
5d0d8550e7 Prevent preemption in slot_fill.
Implement preemption between threads in the same ksegrp in out-of-slot
situations to prevent priority inversion.

Tested   by: pho
Reviewed by: jhb, julian
Approved by: sam (mentor)
MFC: ASAP
2004-10-12 16:30:20 +00:00
ups
2d1412dc2e Force MUTEX_WAKE_ALL.
A race condition in single thread wakeup may break priority inheritance.

Tested   by: pho
Reviewed by: jhb,julian
Approved by: sam (mentor)
MFC: ASAP
2004-10-12 16:28:18 +00:00
phk
589b4ae9cb Add missing zero flag arguments to calls to userland_sysctl() 2004-10-12 07:49:15 +00:00
peter
09964b7499 Put on my peril sensitive sunglasses and add a flags field to the internal
sysctl routines and state.  Add some code to use it for signalling the need
to downconvert a data structure to 32 bits on a 64 bit OS when requested by
a 32 bit app.

I tried to do this in a generic ABI wrapper that intercepted the sysctl
OIDs, or looked up the format string, etc., but it was a real can of worms
that turned into a fragile mess before I even got it partially working.

With this, we can now run 'sysctl -a' on a 32 bit sysctl binary and have
it not abort.  Things like netstat, ps, etc have a long way to go.

This also fixes a bug in the kern.ps_strings and kern.usrstack hacks.
These do matter very much because they are used by libc_r and other things.
2004-10-11 22:04:16 +00:00
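A sketch of what a downconverting handler looks like under this scheme; the
flag name and the structures below are assumptions, not the committed code:

    /* Hypothetical 64-bit and 32-bit layouts of the same exported struct. */
    struct foo_stat   { uint64_t fs_count; uint64_t fs_addr; };
    struct foo_stat32 { uint32_t fs_count; uint32_t fs_addr; };

    static int
    sysctl_foo_stat(SYSCTL_HANDLER_ARGS)
    {
            struct foo_stat fs;

            fill_foo_stat(&fs);                     /* hypothetical helper */
            if (req->flags & SCTL_MASK32) {         /* 32-bit app, 64-bit kernel */
                    struct foo_stat32 fs32;

                    fs32.fs_count = (uint32_t)fs.fs_count;
                    fs32.fs_addr = (uint32_t)fs.fs_addr;
                    return (SYSCTL_OUT(req, &fs32, sizeof(fs32)));
            }
            return (SYSCTL_OUT(req, &fs, sizeof(fs)));
    }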
glebius
e84a71d330 Rename _m_tag_free() to m_tag_free_default() and make it non-static.
Approved by:	sam
2004-10-11 18:40:19 +00:00
rwatson
5d60192ca7 Add entropy harvest mutex to hard-coded spin lock witness lock order,
remove previous entropy harvesting mutex names as they are no longer
present.  The commit to this file was omitted when randomdev_soft.c:1.5
was made.

Feet shot:	Robert Huff <roberthuff at rcn dot com>
2004-10-11 08:26:18 +00:00
rwatson
2b329ffd2f Rework sofree() logic to take into account a possible race with accept().
Sockets in the listen queues have reference counts of 0, so if the
protocol decides to disconnect the pcb and try to free the socket, this
triggered a race with accept() wherein accept() would bump the reference
count before sofree() had removed the socket from the listen queues,
resulting in a panic in sofree() when it discovered it was freeing a
referenced socket.  This might happen if a RST came in prior to accept()
on a TCP connection.

The fix is two-fold: to expand the coverage of the accept mutex earlier
in sofree() to prevent accept() from grabbing the socket after the "is it
really safe to free" tests, and to expand the logic of the "is it really
safe to free" tests to check that the refcount is still 0 (i.e., we
didn't race).

RELENG_5 candidate.

Much discussion with and work by:	green
Reported by:	Marc UBM Bocklet <ubm at u-boot-man dot de>
Reported by:	Vlad <marchenko at gmail dot com>
2004-10-11 08:11:26 +00:00
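The shape of the fix described above, greatly simplified (not the committed
diff; sofree()'s other "is it really safe to free" tests are omitted):

    /* Widen the accept-lock coverage and re-test the reference count, so a
     * racing accept() that already bumped so_count stops us from freeing. */
    ACCEPT_LOCK();
    SOCK_LOCK(so);
    if (so->so_count != 0) {        /* accept() won the race; don't free */
            SOCK_UNLOCK(so);
            ACCEPT_UNLOCK();
            return;
    }
    /* Still unreferenced: safe to pull it off the listen queue and free. */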
glebius
0dda31b4f9 Revert last commit since it breaks API.
Requested by:	sam
2004-10-10 09:16:48 +00:00
julian
30d2ba06b9 Don't release the slot twice; sched_rem() has already done it.
Submitted by:	stephan uphoff (ups at tree dot com)
MFC after:	3 days
2004-10-10 05:19:22 +00:00
julian
8c3d54b9e4 Remove duplicate line. 2004-10-10 05:07:43 +00:00
glebius
0c7bb9f633 Remove inlined m_tag_free(). Rename _m_tag_free() to m_tag_free()
and make it visible (same way as in OpenBSD). Describe usage in manpage.

This change is useful for creating custom free methods, which
call the default free method at their end.

While here, make malloc declaration for mbuf tags more informative.

Approved by:	julian (mentor), sam
MFC after:	1 month
2004-10-09 13:25:19 +00:00
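An example of the use case mentioned above: a private destructor that
releases its own resource and then chains to the default one.  Per the later
rename further up this log, the default ended up exported as
m_tag_free_default(); the tag layout and malloc type below are hypothetical:

    struct my_tag {
            struct m_tag    mt_tag;         /* must come first */
            void            *mt_extra;      /* resource this tag owns */
    };

    static void
    my_tag_free(struct m_tag *t)
    {
            struct my_tag *mt = (struct my_tag *)t;

            free(mt->mt_extra, M_TEMP);     /* release our own state */
            m_tag_free_default(t);          /* then run the stock destructor */
    }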
green
3a482df790 Don't "implicitly order all sleep locks before spin locks" in witness
when the spin lock in question isn't one -- it's the critical_enter() that
KDB set.  No more panic in DDB for console -> syscons -> tty -> knote
operations.
2004-10-09 08:16:37 +00:00
davidxu
94500a0336 Add an execve command for kse_thr_interrupt to allow libpthread to
restore the signal mask correctly; this is required by POSIX.

Reviewed by: deischen
2004-10-07 13:50:10 +00:00
davidxu
e85209d12c Regen to unbreak world.
Pointy hat to: mtm
2004-10-07 01:09:46 +00:00
das
35b6f981ab Back out rev 1.240; it is unnecessary. In particular,
p1 == curthread, so _PHOLD(p1) will not have to block
to swap in p1.

Noticed by:	jhb
2004-10-06 23:53:49 +00:00
mtm
0a21f474dc Close a race between a thread exiting and the freeing of its stack.
After some discussion the best option seems to be to signal the thread's
death from within the kernel. This requires that thr_exit() take an
argument.

Discussed with: davidxu, deischen, marcel
MFC after: 3 days
2004-10-06 14:23:00 +00:00
davidxu
e1ce006b64 Close a race between thr_create and sysctl -w: thr_scope_sys could
be changed while thr_create is running, and we test it several times.
2004-10-06 02:29:19 +00:00
grog
152055d94b vtryrecycle: Don't rely on type VBAD alone to mean that we don't need
	     to clean the vnode.  If v_data is set, we still need to
	     clean it.  This code change should catch all incidents of
	     the previous commit (INVARIANTS only).
2004-10-06 02:09:59 +00:00
grog
882d69104e getnewvnode: Weaken the panic "cleaned vnode isn't" to a warning.
Discussion: this panic (or warning) only occurs when the kernel is
  compiled with INVARIANTS.  Otherwise the problem (which means that
  the vp->v_data field isn't NULL, and represents a coding error and
  possibly a memory leak) is silently ignored by setting it to NULL
  later on.

  Panicking here isn't very helpful: by this time, we can only find
  the symptoms.  The panic occurs long after the reason for "not
  cleaning" has been forgotten; in the case in point, it was the
  result of severe file system corruption which left the v_type field
  set to VBAD.  That issue will be addressed by a separate commit.
2004-10-06 02:06:11 +00:00
davidxu
7acde29a24 Restore some code removed in revisions 1.193 and 1.194; julian said
he'd like to keep this code.
2004-10-06 00:49:41 +00:00
davidxu
793ea9317e In the original kern_execve() code, at the start of the function, it forces
all other threads to suicide.  The problem is that execve() could fail, and a
failed execve() would change a threaded process to an unthreaded one; this
side effect is unexpected.
The new code introduces a new single-threading mode, SINGLE_BOUNDARY; in
this mode, all threads should suspend themselves at the user boundary except
the single-threader.  We cannot use SINGLE_NO_EXIT because we want to start
from a clean state if execve() is successful; suspending other threads at an
unknown point and later resuming them from there and forcing them to exit at
the user boundary may cause the process to start from a dirty state.  If
execve() is successful, the current thread upgrades to SINGLE_EXIT mode and
forces the other threads to suicide at the user boundary; otherwise, the other
threads will be resumed and their interrupted syscalls will be restarted.

Reviewed by: julian
2004-10-06 00:40:41 +00:00
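A sketch of the control flow described above (simplified; the exact
thread_single()/thread_single_end() calling conventions of this era are
assumed, and do_the_exec_work() is a hypothetical stand-in for execve's body):

    static int
    exec_like_flow(struct proc *p)
    {
            int error;

            PROC_LOCK(p);
            thread_single(SINGLE_BOUNDARY); /* park siblings at the user boundary */
            PROC_UNLOCK(p);

            error = do_the_exec_work(p);

            PROC_LOCK(p);
            if (error == 0)
                    thread_single(SINGLE_EXIT);     /* safe now: make them exit */
            else
                    thread_single_end();    /* resume them; syscalls restart */
            PROC_UNLOCK(p);
            return (error);
    }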
julian
d5dfe59f9e Fix whitespace botch that only showed up in the commit message diff :-/
MFC after:	4 days
2004-10-05 22:14:02 +00:00
julian
b4640b18f7 Slight cleanup in the single threading code.
MFC after:	4 days
2004-10-05 22:05:25 +00:00
julian
57fb03da54 When preempting a thread, put it back on the HEAD of its run queue.
(Only really implemented in 4bsd)

MFC after:	4 days
2004-10-05 22:03:10 +00:00
julian
7d0504ed38 Oops. left out part of the diff.
MFC after:	4 days
2004-10-05 21:26:27 +00:00
julian
7b170fd9fa Use some macros to track available scheduler slots to allow
easier debugging.

MFC after:	4 days
2004-10-05 21:10:44 +00:00
julian
8587c9806d Light rearrangement of some code to get some locking
more correct.

MFC after:	4 days
2004-10-05 20:48:16 +00:00
julian
2094122f86 Break out into a separate function the code to revert a multithreaded
process back to officially being a non-threaded program.

MFC after:	4 days
2004-10-05 20:39:26 +00:00
jhb
ce2d3f89af Rework how we store process times in the kernel such that we always store
the raw values including for child process statistics and only compute the
system and user timevals on demand.

- Fix the various kern_wait() syscall wrappers to only pass in a rusage
  pointer if they are going to use the result.
- Add a kern_getrusage() function for the ABI syscalls to use so that they
  don't have to play stackgap games to call getrusage().
- Fix the svr4_sys_times() syscall to just call calcru() to calculate the
  times it needs rather than calling getrusage() twice with associated
  stackgap, etc.
- Add a new rusage_ext structure to store raw time stats such as tick counts
  for user, system, and interrupt time as well as a bintime of the total
  runtime.  A new p_rux field in struct proc replaces the same inline fields
  from struct proc (i.e. p_[isu]ticks, p_[isu]u, and p_runtime).  A new p_crux
  field in struct proc contains the "raw" child time usage statistics.
  ruadd() has been changed to handle adding the associated rusage_ext
  structures as well as the values in rusage.  Effectively, the values in
  rusage_ext replace the ru_utime and ru_stime values in struct rusage.  These
  two fields in struct rusage are no longer used in the kernel.
- calcru() has been split into a static worker function calcru1() that
  calculates appropriate timevals for user and system time as well as updating
  the rux_[isu]u fields of a passed in rusage_ext structure.  calcru() uses a
  copy of the process' p_rux structure to compute the timevals after updating
  the runtime appropriately if any of the threads in that process are
  currently executing.  It also now only locks sched_lock internally while
  doing the rux_runtime fixup.  calcru() now only requires the caller to
  hold the proc lock and calcru1() only requires the proc lock internally.
  calcru() also no longer allows callers to ask for an interrupt timeval
  since none of them actually did.
- calcru() now correctly handles threads executing on other CPUs.
- A new calccru() function computes the child system and user timevals by
  calling calcru1() on p_crux.  Note that this means that any code that wants
  child times must now call this function rather than reading from p_cru
  directly.  This function also requires the proc lock.
- This finishes the locking for rusage and friends so some of the Giant locks
  in exit1() and kern_wait() are now gone.
- The locking in ttyinfo() has been tweaked so that a shared lock of the
  proctree lock is used to protect the process group rather than the process
  group lock.  By holding this lock until the end of the function we now
  ensure that the process/thread that we pick to dump info about will no
  longer vanish while we are trying to output its info to the console.

Submitted by:	bde (mostly)
MFC after:	1 month
2004-10-05 18:51:11 +00:00
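A sketch of the layout and call pattern described above; the field types are
inferred from the text ("tick counts ... as well as a bintime of the total
runtime") rather than copied from the tree:

    struct rusage_ext {
            struct bintime  rux_runtime;    /* total runtime */
            u_int64_t       rux_uticks;     /* statclock ticks in user mode */
            u_int64_t       rux_sticks;     /* statclock ticks in system mode */
            u_int64_t       rux_iticks;     /* statclock ticks in interrupts */
            u_int64_t       rux_uu;         /* computed user time, microseconds */
            u_int64_t       rux_su;         /* computed system time, microseconds */
    };

    /* Callers now hold the proc lock and compute timevals on demand. */
    PROC_LOCK(p);
    calcru(p, &ru.ru_utime, &ru.ru_stime);          /* this process, from p_rux */
    calccru(p, &cru.ru_utime, &cru.ru_stime);       /* children, from p_crux */
    PROC_UNLOCK(p);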
jhb
9536269a6d Add a critical section in turnstile_unpend() from before dropping the
turnstile chain lock until after making all the awakened threads
runnable.  First, this fixes a priority inversion race.  Second, this
attempts to finish waking up all of the threads waiting on a turnstile
before doing a preemption.

Reviewed by:	Stephan Uphoff (who found the priority inversion race)
2004-10-05 18:00:30 +00:00
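The ordering, sketched from the description above (turnstile internals and
the exact scheduler wakeup call are elided or illustrative):

    critical_enter();               /* defer any preemption ... */
    mtx_unlock_spin(&tc->tc_lock);  /* ... before the chain lock is dropped */
    while ((td = TAILQ_FIRST(&pending)) != NULL) {
            TAILQ_REMOVE(&pending, td, td_lockq);
            /* make td runnable (scheduler call elided) */
    }
    critical_exit();                /* a deferred preemption, if any, fires here */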
pjd
c944ef39d6 Back out changes which were introduced to delay mounting the root file system.
Those changes were made for gmirror's needs, but now gmirror handles this
by itself.
2004-10-05 11:26:43 +00:00
davidxu
aa22b44625 Use scheduler api to adjust thread priority. 2004-10-05 09:10:30 +00:00
imp
cf32c9fe79 Add taskqueue_drain.  This waits for the specified task to finish if it
is running, and returns immediately otherwise.  The calling program is
responsible for making sure that nothing new is enqueued.

# man page coming soon.
2004-10-05 04:16:01 +00:00
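An illustrative teardown use of the new call; sc and sc_task are
hypothetical, and the caller must already have stopped whatever enqueues the
task:

    taskqueue_drain(taskqueue_swi, &sc->sc_task);   /* wait out any in-flight run */
    /* Nothing can be running or queued now; safe to free sc. */
    free(sc, M_DEVBUF);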
phk
bd3b1af9a6 Change the perfectly precise message
       printf("No buffers busy after final sync");
to
       printf("All buffers synced.");
in order not to leave the users wondering if there should be.
2004-10-04 13:13:23 +00:00
julian
395c906e95 Another case where we need to guard against a partially
constructed process.

Submitted by: Stephan Uphoff ( ups at tree.com	)
MFC after:	3 days
2004-10-04 06:45:48 +00:00
julian
96dbdb17db Always start out with an initialised ksegrp structure.
MFC after:	3 days
2004-10-03 20:06:11 +00:00
davidxu
33faeb8a73 Don't bother to turn off other P_STOPPED bits for SIGKILL; doing
so would cause the kernel to produce an unkillable process in some cases.
In particular, P_STOPPED_SINGLE has a single-threading thread, and turning
off the bit would mess up that state.
2004-10-03 13:23:49 +00:00
alc
ad2a4ca3e0 Add a SOCKBUF_LOCK() to a rarely executed path in do_sendfile(). 2004-10-02 05:37:47 +00:00
alfred
0efc91b067 Clear a process's procfs trace points upon delivery of SIGKILL.
MT5 candidate. (Desired features for 5.3-RELEASE "More truss problems")
2004-10-01 14:15:20 +00:00
phk
2e5b8b9883 Fix a LOR relating to freeing cdevs. 2004-10-01 06:33:39 +00:00
alfred
a72e384f52 Cover soreadable and sowriteable with the corresponding socket buffer locks. 2004-10-01 05:54:06 +00:00
das
e399d76f1b Avoid calling _PHOLD(p1) with p2's lock held, since _PHOLD()
may block to swap in p1.  Instead, call _PHOLD earlier, at a
point where the only lock held happens to be p1's.
2004-10-01 05:01:29 +00:00
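The shape of the pattern this entry describes, simplified (note the entry
further up backs this change out as unnecessary in this particular case,
since p1 == curthread there):

    PROC_LOCK(p1);
    _PHOLD(p1);             /* may sleep to swap in p1; only p1's lock is held */
    PROC_UNLOCK(p1);

    PROC_LOCK(p2);
    /* ... work that needs p2 locked, with p1 already held ... */
    PROC_UNLOCK(p2);

    PROC_LOCK(p1);
    _PRELE(p1);
    PROC_UNLOCK(p1);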
jhb
59af2fcb61 Fix a typo to fix the !DIAGNOSTIC build.
Submitted by:	many
2004-09-30 18:13:18 +00:00