Commit Graph

10066 Commits

Rong-En Fan
534046e301 - Remove the UMAP filesystem. It was disconnected from the build three years ago,
and it is seriously broken.

Discussed on:   freebsd-arch@
Approved by:	re (mux)
2007-06-25 05:06:57 +00:00
Konstantin Belousov
9bc911d4a2 devfs_free() calls free_unr(), which may sleep.
Postpone the call to devfs_free() until after the cdev mutex is dropped. Reuse
cdp_list link for queuing devices awaiting deletion in the
cdevp_free_list.

Reported by:	Hans Petter Selasky <hselasky c2i net>
Tested by:	Peter Holm
Approved by:	re (kensmith)
MFC after:	2 weeks
2007-06-19 13:19:23 +00:00
Konstantin Belousov
7550e3eac4 Add a witness warning for free_unr(). The function may sleep, thus callers
must not hold any non-sleepable locks.

Submitted by:	Hans Petter Selasky <hselasky c2i net>
Approved by:	re (kensmith)
2007-06-19 13:13:17 +00:00
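
A minimal sketch of the check this adds, assuming the stock WITNESS_WARN() macro from sys/lock.h; the placement inside free_unr() is illustrative:

    /*
     * Sketch: warn (under WITNESS) if the caller holds any non-sleepable
     * lock on entry, since free_unr() may sleep while releasing the unit.
     */
    void
    free_unr(struct unrhdr *uh, u_int item)
    {
            WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL, "free_unr");
            /* ... actual release of the unit number follows ... */
    }
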
Pawel Jakub Dawidek
dfe97ff4a5 We only flush entries related to the given file system. Currently there are
no 'invalid' cache entries - the file system is responsible for keeping it that
way. The comment should have been updated in rev.1.25.
2007-06-18 09:28:24 +00:00
Robert Watson
7251b7863c Rather than passing SUSER_RUID into priv_check_cred() to specify when
a privilege is checked against the real uid rather than the effective
uid, instead decide which uid to use in priv_check_cred() based on the
privilege passed in.  We use the real uid for PRIV_MAXFILES,
PRIV_MAXPROC, and PRIV_PROC_LIMIT.  Remove the definition of
SUSER_RUID; there are now no flags defined for priv_check_cred().

Obtained from:	TrustedBSD Project
2007-06-16 23:41:43 +00:00
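
A hedged sketch of the idea: the uid to test is now chosen inside priv_check_cred() from the privilege value itself, rather than from a caller-supplied SUSER_RUID flag. The helper below is hypothetical; only the three PRIV_* names come from the commit message.

    #include <sys/param.h>
    #include <sys/priv.h>
    #include <sys/ucred.h>

    /* Hypothetical helper: pick the uid a given privilege is checked against. */
    static __inline uid_t
    priv_check_uid(struct ucred *cred, int priv)
    {
            switch (priv) {
            case PRIV_MAXFILES:
            case PRIV_MAXPROC:
            case PRIV_PROC_LIMIT:
                    return (cred->cr_ruid);         /* resource limits use the real uid */
            default:
                    return (cred->cr_uid);          /* everything else uses the effective uid */
            }
    }
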
Marius Strobl
79be8b5082 - Remove zstty spin lock for no longer existing zs(4).
- Move the rtc_mtx spin lock out from under #ifdef SMP as it's just
  not SMP-specific.
- Add a new spin lock pcib_mtx for locking "fast" interrupt handlers
  of host-to-PCI bridge drivers on sparc64.
2007-06-16 23:30:57 +00:00
Jeff Roberson
dda713dfb8 - Fix an off-by-one error in sched_pri_range.
- In tdq_choose() only assert that a thread does not have too high a
   priority (low value) for the queue we removed it from.  This will catch
   bugs in priority elevation.  It's not a serious error for the thread
   to have too low a priority as we don't change queues in this case as
   an optimization.

Reported by:	kris
2007-06-15 19:33:58 +00:00
Robert Watson
7e273744a6 Remove the restriction that rtprio(2) cannot be used to set the realtime
or idle priority of another process owned by the same user.  This means
that privilege in rtprio(2) (and rtprio_thread(2)) is required indirectly
via p_cansched(9) or directly to set realtime/idle privilege, rather than
directly affecting target process authorization.
2007-06-14 23:31:52 +00:00
Robert Watson
b4be6ef22f Only require privilege to set the current time adjustment, not in order to
query it.
2007-06-14 18:37:58 +00:00
Robert Watson
3805385e3d Spell statistics more correctly in comments. 2007-06-14 03:02:33 +00:00
John Baldwin
34a9edafbc Improve the ktrace locking somewhat to reduce overhead:
- Depessimize userret() in kernels where KTRACE is enabled by doing an
  unlocked check of the per-process queue of pending events before
  acquiring any locks.  Previously ktr_userret() unconditionally acquired
  the global ktrace_sx lock on every return to userland for every thread,
  even if ktrace wasn't enabled for the thread.
- Optimize the locking in exit() to first perform an unlocked read of
  p_traceflag to see if ktrace is enabled and only acquire locks and
  teardown ktrace if the test succeeds.  Also, explicitly disable tracing
  before draining any pending events so the pending events actually get
  written out.  The unlocked read is safe because proc lock is acquired
  earlier after single-threading so p_traceflag can't change between then
  and this check (well, it can currently due to a bug in ktrace I will fix
  next, but that race existed prior to this change as well).

Reviewed by:	rwatson
2007-06-13 20:01:42 +00:00
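
The fast path described in the first bullet looks roughly like the sketch below: peek at the per-process request queue without any lock, and only take the global ktrace_sx lock when there may be events to write out. The drain helper name is an assumption; the fields are the ones the commit refers to.

    /* Sketch of the depessimized return-to-userland hook. */
    void
    ktr_userret(struct thread *td)
    {
            struct proc *p = td->td_proc;

            if (STAILQ_EMPTY(&p->p_ktr))    /* unlocked peek: nothing pending */
                    return;
            sx_xlock(&ktrace_sx);           /* slow path: global ktrace lock */
            ktr_drain(td);                  /* assumed helper that writes out pending requests */
            sx_xunlock(&ktrace_sx);
    }
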
John Baldwin
ce0be64687 Conditionally acquire Giant when dropping a reference on the ktrace vnode
during execve() when turning off tracing due to executing a setuid binary
as non-root.  Previously this could fail to acquire Giant and fail an
assertion if the ktrace file was on a non-MPSAFE filesystem and the
executable was on an MPSAFE filesystem.

MFC after:	3 days
Reported by:	kris
2007-06-13 19:41:47 +00:00
Jeff Roberson
3036ab79e3 - Include opt_sched.h for SCHED_STATS. 2007-06-12 23:27:31 +00:00
Jeff Roberson
671f2709ae - Garbage collect unused concurrency functions. 2007-06-12 19:50:31 +00:00
Jeff Roberson
e7c8d2e9fe - Garbage collect unused concurrency functions.
- Remove unused kse fields from struct proc.
 - Group remaining fields and #ifdef KSE them.
 - Move some kern_kse.c only prototypes out of proc and into kern_kse.

Discussed with:	Julian
2007-06-12 19:49:39 +00:00
Jeff Roberson
fe54587ffa - Move some common code out of sched_fork_exit() and back into fork_exit(). 2007-06-12 07:47:09 +00:00
Jeff Roberson
ff8fbcffcb Solve a complex exit race introduced with thread_lock:
- Add a count of exiting threads, p_exitthreads, to struct proc.
 - Increment p_exitthreads when we set the deadthread in thread_exit().
 - When we thread_stash() a deadthread use an atomic to drop the count.
 - Spin until the p_exitthreads count reaches 0 in thread_wait().
 - Lock the last exiting thread momentarily to be certain that it has
   exited cpu_throw().
 - Restructure thread_wait().  It does not need a loop as there will only
   ever be one thread.

Tested by:	moose@opera.com
Reported by:	kris, moose@opera.com
2007-06-12 07:24:46 +00:00
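
A simplified sketch of the counting scheme, using the field and function names from the commit message; the surrounding code is illustrative only:

    /* thread_exit(): this thread becomes the process's deadthread. */
    p->p_exitthreads++;                     /* process spinlock held here */

    /* thread_stash(): the dead thread is finally put away. */
    atomic_subtract_int(&td->td_proc->p_exitthreads, 1);

    /* thread_wait(): don't reclaim until every exiter has left cpu_throw(). */
    while (p->p_exitthreads > 0)
            cpu_spinwait();
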
Robert Watson
32f9753cfb Eliminate now-unused SUSER_ALLOWJAIL arguments to priv_check_cred(); in
some cases, move to priv_check() if it was an operation on a thread and
no other flags were present.

Eliminate caller-side jail exception checking (also now-unused); jail
privilege exception code now goes solely in kern_jail.c.

We can't yet eliminate suser() due to some cases in the KAME code where
a privilege check is performed and then used in many different deferred
paths.  Do, however, move those prototypes to priv.h.

Reviewed by:	csjp
Obtained from:	TrustedBSD Project
2007-06-12 00:12:01 +00:00
Jeff Roberson
efe641b939 - Add a missing PROC_SUNLOCK() in tdsignal() 2007-06-11 23:27:03 +00:00
Olivier Houchard
e411ce026a Re-acquire the PROC_SLOCK before calling calcru(), and release it after,
since calcru() expects it to be locked.

Reviewed by:	attilio
2007-06-11 21:05:41 +00:00
Sam Leffler
68e8e04e93 Update 802.11 wireless support:
o major overhaul of the way channels are handled: channels are now
  fully enumerated and uniquely identify the operating characteristics;
  these changes are visible to user applications which require changes
o make scanning support independent of the state machine to enable
  background scanning and roaming
o move scanning support into loadable modules based on the operating
  mode to enable different policies and reduce the memory footprint
  on systems w/ constrained resources
o add background scanning in station mode (no support for adhoc/ibss
  mode yet)
o significantly speedup sta mode scanning with a variety of techniques
o add roaming support when background scanning is supported; for now
  we use a simple algorithm to trigger a roam: we threshold the rssi
  and tx rate, if either drops too low we try to roam to a new ap
o add tx fragmentation support
o add first cut at 802.11n support: this code works with forthcoming
  drivers but is incomplete; it's included now to establish a baseline
  for other drivers to be developed and for user applications
o adjust max_linkhdr et al. to reflect 802.11 requirements; this eliminates
  prepending mbufs for traffic generated locally
o add support for Atheros protocol extensions; mainly the fast frames
  encapsulation (note this can be used with any card that can tx+rx
  large frames correctly)
o add sta support for ap's that beacon both WPA1 and WPA2
o change all data types from BSD-style to POSIX-style
o propagate noise floor data from drivers to net80211 and on to user apps
o correct various issues in the sta mode state machine related to handling
  authentication and association failures
o enable the addition of sta mode power save support for drivers that need
  net80211 support (not in this commit)
o remove old WI compatibility ioctls (wicontrol is officially dead)
o change the data structures returned for get sta info and get scan
  results so future additions will not break user apps
o fixed tx rate is now maintained internally as an ieee rate and not an
  index into the rate set; this needs to be extended to deal with
  multi-mode operation
o add extended channel specifications to radiotap to enable 11n sniffing

Drivers:
o ath: add support for bg scanning, tx fragmentation, fast frames,
       dynamic turbo (lightly tested), 11n (sniffing only and needs
       new hal)
o awi: compile tested only
o ndis: lightly tested
o ipw: lightly tested
o iwi: add support for bg scanning (well tested but may have some
       rough edges)
o ral, ural, rum: add support for bg scanning, calibrate rssi data
o wi: lightly tested

This work is based on contributions by Atheros, kmacy, sephe, thompsa,
mlaier, kevlo, and others.  Much of the scanning work was supported by
Atheros.  The 11n work was supported by Marvell.
2007-06-11 03:36:55 +00:00
Attilio Rao
393a081d42 Optimize vmmeter locking.
In particular:
- Add an explanatory table for the locking of struct vmmeter members
- Apply new rules for some of those members
- Remove some useless comments

Heavily reviewed by: alc, bde, jeff
Approved by: jeff (mentor)
2007-06-10 21:59:14 +00:00
Matt Jacob
a659386c7e Remove unused variable. 2007-06-10 01:50:05 +00:00
Matt Jacob
26756b7a58 The new compiler can't quite follow the logic of has_stime and
complains about using uninitialized tags in stime.
2007-06-10 01:49:17 +00:00
Matt Jacob
9b73d2396a Initialize ets to zero. This is arguably a gcc bug, in that ets is always
set to rts when timeout is non-NULL and timevalid is set, and ets is
only checked later when timervalid is set.
2007-06-10 01:43:11 +00:00
Attilio Rao
bdf08be439 Fix a bug coming from committing a pre-merge version of the patch
instead of a post-merge version (with respect to another rusage fix).

Reported by: marcel
Approved by: jeff(mentor)
2007-06-10 00:28:41 +00:00
Marcel Moolenaar
55b5660de4 Work around an integer overflow in expression `3 * maxbufspace / 4',
when maxbufspace is larger than INT_MAX / 3. The overflow causes a
hard hang on ia64 when physical memory is sufficiently large (8GB).
2007-06-09 23:41:14 +00:00
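
The overflow is ordinary C arithmetic: the product 3 * maxbufspace is formed before the division by 4, so once maxbufspace exceeds INT_MAX / 3 the intermediate value no longer fits in an int. A small userland illustration of the reordering that avoids it (the kernel fix itself may differ):

    #include <limits.h>
    #include <stdio.h>

    int
    main(void)
    {
            long long maxbufspace = (long long)INT_MAX / 3 + 1;  /* just past the threshold */

            /* In int arithmetic 3 * maxbufspace wraps before the division; the
             * cast below only displays the wrapped value without invoking the
             * undefined signed overflow itself. */
            int wrapped = (int)(3 * maxbufspace);

            /* Dividing first keeps every intermediate value <= maxbufspace. */
            long long watermark = maxbufspace / 4 * 3;

            printf("wrapped intermediate: %d, reordered watermark: %lld\n",
                wrapped, watermark);
            return (0);
    }
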
Attilio Rao
a1fe14bc33 rufetch() and calcru() sometimes need to be called together atomically.
This patch fixes the places where that is required, changes their locking
requirements (both now assume the per-process spinlock is held), and
introduces rufetchcalc(), which wraps both calls so that they are performed
atomically.

Reviewed by: jeff
Approved by: jeff (mentor)
2007-06-09 21:48:44 +00:00
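
A sketch of what the rufetchcalc() wrapper amounts to under the locking rules described above (per-process spinlock held across both calls); the signatures are assumed from the surrounding commits:

    /* Take the per-process spinlock once so the rusage snapshot and the
     * calcru() time calculation see a single consistent state. */
    void
    rufetchcalc(struct proc *p, struct rusage *ru,
        struct timeval *up, struct timeval *sp)
    {
            PROC_SLOCK(p);
            rufetch(p, ru);         /* aggregate rusage from all threads */
            calcru(p, up, sp);      /* derive user/system time from the same state */
            PROC_SUNLOCK(p);
    }
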
Attilio Rao
86a49dea5b Since the locking in kern/subr_prof.c has changed a bit, the time_lock
spinlock no longer needs to be exported.

Approved by: jeff (mentor)
2007-06-09 19:41:14 +00:00
Attilio Rao
a140976eb4 The current rusage code shows peculiar problems:
- Unsafeness of ruadd() in thread_exit()
- Non-atomicity of thread_exit() in the exit1() operations

This patch addresses these problems by allocating p_fd as part of the
process and modifying the way it is accessed.

A small chunk of this patch resolves a race on p_state in kern_wait(),
since we have to be sure about the zombifying process.

Submitted by: jeff
Approved by: jeff (mentor)
2007-06-09 18:56:11 +00:00
Matt Jacob
65d32cd8fb Propagate volatile qualifier to make gcc4.2 happy. 2007-06-09 18:09:37 +00:00
Attilio Rao
e682569165 Remove the MUTEX_WAKE_ALL option and make it the default behaviour for our
mutexes.
Currently we already force MUTEX_WAKE_ALL because of some problems with the
!MUTEX_WAKE_ALL case (unavoidable priority inversion).
2007-06-08 21:36:52 +00:00
Poul-Henning Kamp
7acfb0af82 Double the WITNESS and DIAGNOSTIC benchmark warnings right before we
go into userland to improve the chances of people noticing them.
2007-06-08 11:47:36 +00:00
Xin LI
7b8c8b858c In getblk(), before gbincore(), use BO_LOCK directly when locking
the bufobj, rather than using VI_LOCK, like what was done with
revision 1.453.
2007-06-08 07:05:08 +00:00
Robert Watson
faef53711b Move per-process audit state from a pointer in the proc structure to
embedded storage in struct ucred.  This allows audit state to be cached
with the thread, avoiding locking operations with each system call, and
makes it available in asynchronous execution contexts, such as deep in
the network stack or VFS.

Reviewed by:	csjp
Approved by:	re (kensmith)
Obtained from:	TrustedBSD Project
2007-06-07 22:27:15 +00:00
John Baldwin
a66fde8d35 - Remove unused variable from create_thread().
- Move kern_thr_*() prototype to <sys/syscallsubr.h> where all the other
  kern_*() prototypes live.
2007-06-07 19:45:19 +00:00
David Xu
42ce445fed Backout experimental adaptive-spin umtx code. 2007-06-06 07:35:08 +00:00
Jeff Roberson
710eacdc5f - Placing the 'volatile' on the right side of the * in the td_lock
declaration removes the need for __DEVOLATILE().

Pointed out by:	tegge
2007-06-06 03:40:47 +00:00
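
The distinction is standard C declarator syntax: which side of the '*' the qualifier sits on decides whether the pointer or the pointed-to object is volatile. A small illustration:

    struct mtx;

    /* Pointer to a volatile struct mtx: the lock contents are volatile, so
     * passing this to code expecting a plain "struct mtx *" needs a
     * __DEVOLATILE()-style cast. */
    volatile struct mtx *lock_a;

    /* Volatile pointer to a struct mtx: the pointer itself is what changes
     * underneath us (the thread migrating between containers), which is
     * what td_lock wants, and no cast is needed. */
    struct mtx *volatile lock_b;
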
Attilio Rao
d301eb10c7 Fix a problem with non-preemptive kernels coming from mis-merging
existing code with the new thread_lock patch.
This also cleans up the unlock operation for mutexes a bit.

Approved by: jhb, jeff(mentor)
2007-06-05 18:57:09 +00:00
Konstantin Belousov
b95b98b0bd Restore non-SMP build.
Reviewed by:	attilio
2007-06-05 14:20:13 +00:00
Jeff Roberson
95e3a0bca3 - Better fix for the previous error; use __DEVOLATILE() on the td_lock pointer;
it can actually sometimes be something other than sched_lock even on
   schedulers which rely on a global scheduler lock.

Tested by:	kan
2007-06-05 04:12:46 +00:00
Jeff Roberson
c219b097af - Pass &sched_lock as the third argument to cpu_switch() as this will
always be the correct lock and we don't get volatile warnings this
   way.

Pointed out by:	kan
2007-06-05 03:46:54 +00:00
Jeff Roberson
36b369163b - Define TDQ_ID() for the !SMP case.
- Default pick_pri to off.  It is not faster in most cases.
2007-06-05 02:53:51 +00:00
Jeff Roberson
8e0185f604 - Remove sched_core.c. The maintainer has lost interest in pursuing this
and it has been neglected in the recent ksegrp removal as well as
   the thread_lock() changes.

Discussed with:	davidxu
2007-06-05 00:12:37 +00:00
Jeff Roberson
982d11f836 Commit 14/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-05 00:00:57 +00:00
Jeff Roberson
bd43e47156 Commit 10/14 of sched_lock decomposition.
- Add new spinlocks to support thread_lock() and adjust ordering.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:55:45 +00:00
Jeff Roberson
07a61420ff Commit 9/14 of sched_lock decomposition.
- Attempt to return the ttyinfo() selection algorithm to something sane
   as it has been broken and disabled for some time.  Adapt this algorithm
   in such a way that it does not conflict with per-cpu scheduler locking.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:55:32 +00:00
Jeff Roberson
3c2e44364e Commit 8/14 of sched_lock decomposition.
- Use a global umtx spinlock to protect the sleep queues now that there
   is no global scheduler lock.
 - Use thread_lock() to protect thread state.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:50 +00:00
Jeff Roberson
765b2891e8 Commit 7/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Use a global kse spinlock to protect upcall and thread assignment.  The
   per-process spinlock can not be used because this lock must be acquired
   via mi_switch() where we already hold a thread lock.  The kse spinlock
   is a leaf lock ordered after the process and thread spinlocks.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:27 +00:00
Jeff Roberson
11bda9b8d5 Commit 6/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Replace the tail-end of fork_exit() with a scheduler specific routine
   which can do the appropriate lock manipulations.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:53:34 +00:00
Jeff Roberson
40acdeabab Commit 5/14 of sched_lock decomposition.
- Protect the cp_time tick counts with atomics instead of a global lock.
   There will only be one atomic per tick and this allows all processors
   to execute softclock concurrently.
 - In softclock, protect access to rusage and td_*tick data with the
   thread_lock(), expanding the scope of the thread lock over the whole
   function.
 - Do some creative re-arranging in hardclock() to avoid excess locking.
 - Protect the p_timer fields with the per-process spinlock.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:53:06 +00:00
Jeff Roberson
a54e85fdbf Commit 4/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Move some common code into thread_suspend_switch() to handle the
   mechanics of suspending a thread.  The locking here is incredibly
   convoluted and should be simplified.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:52:24 +00:00
Jeff Roberson
2502c107ba Commit 3/14 of sched_lock decomposition.
- Add a per-turnstile spinlock to solve potential priority propagation
   deadlocks that are possible with thread_lock().
 - The turnstile lock order is defined as the exact opposite of the
   lock order used with the sleep locks they represent.  This allows us
   to walk in reverse order in priority_propagate and this is the only
   place we wish to multiply acquire turnstile locks.
 - Use the turnstile_chain lock to protect assigning mutexes to turnstiles.
 - Change the turnstile interface to pass back turnstile pointers to the
   consumers.  This allows us to reduce some locking and makes it easier
   to cancel turnstile assignment while the turnstile chain lock is held.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:51:44 +00:00
Jeff Roberson
d72e80f09a Commit 2/14 of sched_lock decomposition.
- Adapt sleepqueues to the new thread_lock() mechanism.
 - Delay assigning the sleep queue spinlock as the thread lock until after
   we've checked for signals.  It is illegal for a thread to return in
   mi_switch() with any lock assigned to td_lock other than the scheduler
   locks.
 - Change sleepq_catch_signals() to do the switch if necessary to simplify
   the callers.
 - Simplify timeout handling now that locking a sleeping thread has the
   side-effect of locking the sleepqueue.  Some previous races are no
   longer possible.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:56 +00:00
Jeff Roberson
7b20fb19fb Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
   similar to Solaris's container locking.
 - A per-process spinlock is now used to protect the queue of threads,
   thread count, suspension count, p_sflags, and other process
   related scheduling fields.
 - The new thread lock is actually a pointer to a spinlock for the
   container that the thread is currently owned by.  The container may
   be a turnstile, sleepqueue, or run queue.
 - thread_lock() is now used to protect access to thread related scheduling
   fields.  thread_unlock() unlocks the lock and thread_set_lock()
   implements the transition from one lock to another.
 - A new "blocked_lock" is used in cases where it is not safe to hold the
   actual thread's lock yet we must prevent access to the thread.
 - sched_throw() and sched_fork_exit() are introduced to allow the
   schedulers to fix-up locking at these points.
 - Add some minor infrastructure for optionally exporting scheduler
   statistics that were invaluable in solving performance problems with
   this patch.  Generally these statistics allow you to differentiate
   between different causes of context switches.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
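
A heavily simplified sketch of the container-lock idea, not the actual FreeBSD implementation: the thread carries a pointer to the spinlock of whatever container currently owns it, and locking the thread means following that pointer and retrying if the thread migrated in the meantime.

    /* Conceptual sketch only; the real thread_lock() also deals with
     * blocked_lock and critical sections. */
    void
    thread_lock_sketch(struct thread *td)
    {
            struct mtx *m;

            for (;;) {
                    m = td->td_lock;        /* container lock at this instant */
                    mtx_lock_spin(m);
                    if (m == td->td_lock)   /* still the owner: thread is locked */
                            return;
                    mtx_unlock_spin(m);     /* thread moved containers; retry */
            }
    }
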
Attilio Rao
b4b7081961 Do proper "locking" for the missing vmmeter parts.
We no longer assume sched_lock protection for some of them and use the
distributed-loads method for vmmeter (distributed across CPUs).

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:45:18 +00:00
Attilio Rao
6759608248 Rework the PCPU_* (MD) interface:
- Rename PCPU_LAZY_INC to PCPU_INC
- Add the PCPU_ADD interface which just does an add on the pcpu member
  given a specific value.

Note that for most architectures PCPU_INC and PCPU_ADD are not safe.
This is a point that needs some discussion/work in the coming days.

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:38:48 +00:00
David Malone
041b706b2f Despite several examples in the kernel, the third argument of
sysctl_handle_int is not sizeof the int type you want to export.
The type must always be an int or an unsigned int.

Remove the instances where a sizeof(variable) is passed, to stop
people from accidentally cutting and pasting these examples.

In a few places sysctl_handle_int was being used on 64-bit
types, which would truncate the value being exported.  In these
cases use sysctl_handle_quad to export them and change the format
to Q so that sysctl(1) can still print them.
2007-06-04 18:25:08 +00:00
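
A sketch of the distinction, using a hypothetical 64-bit counter: the third argument of sysctl_handle_int() is not a size, and 64-bit values want sysctl_handle_quad() (format "Q") instead.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static uint64_t my_counter;     /* hypothetical 64-bit statistic */

    static int
    sysctl_my_counter(SYSCTL_HANDLER_ARGS)
    {
            /* Wrong for a 64-bit variable: sysctl_handle_int() copies an int,
             * truncating my_counter (and its third argument is not a sizeof):
             *      return (sysctl_handle_int(oidp, &my_counter, 0, req));
             */
            return (sysctl_handle_quad(oidp, &my_counter, 0, req));
    }

    SYSCTL_PROC(_kern, OID_AUTO, my_counter, CTLTYPE_QUAD | CTLFLAG_RD,
        NULL, 0, sysctl_my_counter, "Q", "example 64-bit counter");
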
David Malone
df82ff50ed Add a function for exporting 64 bit types. 2007-06-04 18:14:28 +00:00
Kris Kennaway
cdcc788a7e Revert some debugging KTRs that were added during development. 2007-06-03 18:24:31 +00:00
Konstantin Belousov
7a31868ed0 Change the VOP_OPEN(), vn_open() vnode operation and d_fdopen() cdev operation
argument from being a file descriptor index to a pointer to struct file:
part 2. Convert calls missed in the first big commit.

Noted by:	rwatson
Pointy hat to:	kib
2007-06-01 14:33:11 +00:00
Jeff Roberson
1c4bcd050a - Move rusage from being per-process in struct pstats to per-thread in
td_ru.  This removes the requirement for per-process synchronization in
   statclock() and mi_switch().  This was previously supported by
   sched_lock which is going away.  All modifications to rusage are now
   done in the context of the owning thread.  Reads proceed without locks.
 - Aggregate the exiting thread's rusage in thread_exit() such that the exiting
   thread's rusage is not lost.
 - Provide a new routine, rufetch() to fetch an aggregate of all rusage
   structures from all threads in a process.  This routine must be used
   in any place requiring a rusage from a process prior to its exit.  The
   exited process's rusage is still available via p_ru.
 - Aggregate tick statistics only on demand via rufetch() or when a thread
   exits.  Tick statistics are kept in the thread and protected by sched_lock
   until it exits.

Initial patch by:	attilio
Reviewed by:		attilio, bde (some objections), arch (mostly silent)
2007-06-01 01:12:45 +00:00
Attilio Rao
2feb50bf7d Revert VMCNT_* operations introduction.
Probably a general approach is not the best solution here, so we should
solve the sched_lock protection problems separately.

Requested by: alc
Approved by: jeff (mentor)
2007-05-31 22:52:15 +00:00
Paolo Pisati
3401f2c1df In some particular cases (like in pccard and pccbb), the real device
handler is wrapped in a couple of functions - a filter wrapper and an
ithread wrapper. In this case (and just in this case), the filter
wrapper could ask the system to schedule the ithread and mask the
interrupt source if the wrapped handler is composed of just an ithread
handler: modify the "old" interrupt code to make it support
this situation, while the "new" interrupt code is already ok.

Discussed with: jhb
2007-05-31 19:25:35 +00:00
Konstantin Belousov
9e223287c0 Revert UF_OPENING workaround for CURRENT.
Change the VOP_OPEN(), vn_open() vnode operation and d_fdopen() cdev operation
argument from being a file descriptor index to a pointer to struct file.

Proposed and reviewed by:	jhb
Reviewed by:	daichi (unionfs)
Approved by:	re (kensmith)
2007-05-31 11:51:53 +00:00
Robert Watson
049c3b6cdf Now that sx(9) locks support an interruptible lock acquire primitive,
properly observe the SB_NOINTR flag in sblock.  This restores the
required behavior that lock acquisition be interruptible on the socket
buffer I/O serialization lock to allow threads waiting for I/O to be
signaled even if they aren't the thread currently holding the I/O lock.
With this change, the sblock regression test is again passed.

Reported by:		alfred
sx(9) handiwork:	attilio
2007-05-31 11:51:22 +00:00
Attilio Rao
f9819486e5 Add functions sx_xlock_sig() and sx_slock_sig().
These functions are intended to do the same as sx_xlock() and
sx_slock(), with the difference that they perform an interruptible sleep, so
that the sleep can be interrupted by external events.
In order to support these new features, some code restructuring is needed,
but the external API won't be affected at all.

Note: cast the result of "int"-returning functions to "void" in order to keep
tools like Coverity Prevent from whining.

Requested by: rwatson
Tested by: rwatson
Reviewed by: jhb
Approved by: jeff (mentor)
2007-05-31 09:14:48 +00:00
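
The intended calling pattern, sketched: unlike plain sx_xlock(), the _sig variants return an errno-style value when the sleep is interrupted, so callers have to check it before assuming the lock is held.

    /* Illustrative caller; foo_lock is a hypothetical sx(9) lock. */
    int
    foo_operation(struct sx *foo_lock)
    {
            int error;

            error = sx_xlock_sig(foo_lock);
            if (error != 0)
                    return (error);         /* interrupted: lock never acquired */
            /* ... critical section ... */
            sx_xunlock(foo_lock);
            return (0);
    }
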
Attilio Rao
2c7289cbfa style(9) fixes for sx locks.
Approved by: jeff (mentor)
2007-05-29 19:46:37 +00:00
Attilio Rao
acf840c4bd Add a small fix for lock profiling in sx locks.
"0" cannot be a correct value since when the function is entered at least
one shared holder must be present and since we want the last one "1" is
the correct value.
Note that lock_profiling for sx locks is far from being perfect.
Expect further fixes for that.

Approved by: jeff (mentor)
2007-05-29 19:34:32 +00:00
Attilio Rao
02b0a160dc Fix some problems introduced with the last descriptor-tables locking
patch:
- Do the correct test for LDT allocation
- Drop dt_lock just before calling kmem_free() (since it acquires blocking
  locks internally)
- Solve a deadlock with smp_rendezvous() where another CPU would wait
  indefinitely for dt_lock acquisition
- Add dt_lock to the WITNESS list of spinlocks

While applying these changes, change the requirement for user_ldt_free()
so that it returns without dt_lock held.

Tested by: marcus, tegge
Reviewed by: tegge
Approved by: jeff (mentor)
2007-05-29 18:55:41 +00:00
Robert Watson
03c96c3176 Add DDB "show unpcb" command, allowing DDB to print out many pertinent
details from UNIX domain socket protocol layer state.
2007-05-29 12:36:00 +00:00
Ed Maste
911d16b8cd Revert 1.197 and instead avoid calling kdb_enter() if the KDB_UNATTENDED
option is in use.
2007-05-28 21:50:54 +00:00
Warner Losh
cfa7a8beea Simplify the kernel configuration file return code.
Reviewed by: wkoszek
2007-05-28 20:41:10 +00:00
Ed Maste
1e62d77c09 Eliminate explicit kdb_enter in the software watchdog handler (which
produced incorrect behaviour with the KDB_UNATTENDED option) and call
panic in both the KDB and non-KDB cases.  This change is consistent
with rwatson's current kdb/ddb work.
2007-05-28 19:51:12 +00:00
Robert Watson
dede2ab3b2 In kern_kevent(), unconditionally fdrop() fp once fget() has succeeded,
as we never have an opportunity to set it to NULL.

Found with:	Coverity Prevent(tm)
CID:		2161
2007-05-28 17:15:05 +00:00
Robert Watson
e1e8f51b85 Universally adopt most conventional spelling of acquire. 2007-05-27 20:50:23 +00:00
Robert Watson
87066f04c6 Select a more appealing spelling for the word acquire. 2007-05-27 19:24:00 +00:00
Robert Watson
1c293049d9 Add parens around *free in *free++ in mbp_count() so that mbp_count()
actually works.  mbp_count() turns out only to be used in debugging code
in if_patm_intr.c, so this bug did not affect much in practice.

Found with:	Coverity Prevent(tm)
CID:		1943
2007-05-27 17:38:36 +00:00
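
The underlying precedence issue is easy to reproduce in userland: postfix ++ binds tighter than unary *, so *free++ advances the pointer and only reads through its old value, while (*free)++ increments the counter it points to.

    #include <stdio.h>

    int
    main(void)
    {
            int count = 0;
            int *free_ctr = &count;         /* stand-in for mbp_count()'s 'free' argument */

            (void)*free_ctr++;              /* bumps the pointer; count stays 0 */
            free_ctr = &count;              /* reset for the corrected form */
            (*free_ctr)++;                  /* increments the pointed-to counter */

            printf("count = %d\n", count);  /* prints 1 */
            return (0);
    }
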
Robert Watson
097e1ea87f Remove amountpipes counter for pipes -- this replicates the function of
existing UMA statistics for pipes, and allows us to get rid of both the
per-pipe dtor and two atomic operations per pipe required to maintain
the counter.
2007-05-27 17:33:10 +00:00
Robert Watson
e4e80aa713 Remove #if 0'd check for 0-size allocations, which if enabled, called
kdb_enter().
2007-05-27 13:13:46 +00:00
Pawel Jakub Dawidek
6e042171bd To avoid a deadlock when handling the .. directory during a lookup, we unlock
the parent vnode and relock it after locking the child vnode. The problem was that
we always relocked it exclusively, even when it was share-locked.

Discussed with:	jeff
2007-05-25 22:23:38 +00:00
Pawel Jakub Dawidek
b4c85af977 We no longer need to put namecache entries onto temporary mplist.
It was useful in revision 1.86, but should have been removed in 1.89.
2007-05-25 22:19:49 +00:00
Pawel Jakub Dawidek
950afe9972 The cache_leaf_test() function seems to be unused, so remove it. 2007-05-25 22:16:17 +00:00
Sam Leffler
3c86b7cdad fix comment typo 2007-05-23 17:28:21 +00:00
Robert Watson
4dec0e67ea Comment that tdsignal() may be entered from the debugger. 2007-05-23 17:27:42 +00:00
Robert Watson
63d69d2592 Initialize time_lock before calling cpu_initclocks(). This corrects a
race condition in which hardclock fires before the mutex is initialized
leading to a "corrupt spinlock" panic.

Submitted by:	attilio
2007-05-23 17:27:01 +00:00
Olivier Houchard
302e130edc Remove duplicate includes.
Submitted by:   Cyril Nguyen Huu <cyril ci0 org>
2007-05-23 13:36:02 +00:00
Pawel Jakub Dawidek
f013ccb768 - Remove redundant initialization.
- Compare pointer with NULL.
2007-05-22 23:05:48 +00:00
Diomidis Spinellis
fdbe5babe4 Increase precision of time values in the process accounting
structure, while maintaining backward compatibility with legacy
file and record formats.
2007-05-22 06:51:38 +00:00
Jeff Roberson
8b98fec903 - Move clock synchronization into a separate clock lock so the global
scheduler lock is not involved.  sched_lock still protects the sched_clock
   call.  Another patch will remedy this.

Contributed by:	Attilio Rao <attilio@FreeBSD.org>
Tested by:	kris, jeff
2007-05-20 22:11:50 +00:00
John Baldwin
7ec137e5b0 Rename the macros for assertion flags passed to sx_assert() from SX_* to
SA_* to match mutexes and rwlocks.  The old flags still exist for
backwards compatibility.

Requested by:	attilio
2007-05-19 21:26:05 +00:00
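
Usage after the rename, as a sketch; foo_lock is a hypothetical sx(9) lock, and the SA_* names mirror the mtx_assert()/rw_assert() conventions:

    /* Caller must hold foo_lock exclusively; SA_XLOCKED replaces the old
     * SX_XLOCKED spelling, which remains as a compatibility alias. */
    sx_assert(&foo_lock, SA_XLOCKED);

    /* Other common assertions follow the same naming: */
    sx_assert(&foo_lock, SA_LOCKED);        /* held shared or exclusive */
    sx_assert(&foo_lock, SA_UNLOCKED);      /* not held by the caller */
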
Andre Oppermann
1c9dbd1567 In kern_sendfile() adjust byte accounting of the file sending loop to
ignore the size of any headers that were passed with the sendfile(2)
system call.  Otherwise the file sent will be truncated by the header
size if the nbytes parameter was provided.  The bug doesn't show up
when either nbytes is zero, meaning send the whole file, or no header
iovec is provided.

Resolve a potential error aliasing of errors from the VM and sf_buf
parts and the protocol send parts where an error of the latter over-
writes one of the former.

Update comments.

The byte accounting bug wasn't seen earlier because none of the popular
sendfile(2) consumers, Apache, lighttpd and our ftpd(8) use it in modes
that trigger it.  The varnish HTTP proxy makes full use of it and exposed
the problem.

Bug found by:	phk
Tested by:	phk
2007-05-19 20:50:59 +00:00
John Baldwin
71a95881d4 Expose sx_xholder() as a public macro. It returns a pointer to the thread
that holds the current exclusive lock, or NULL if no thread holds an
exclusive lock.

Requested by:	pjd
2007-05-19 20:18:12 +00:00
John Baldwin
46a8b9cbc9 Oops, didn't include SX_ADAPTIVESPIN in the list of valid flags for the
assert in sx_init_flags().

Submitted by:	attilio
2007-05-19 18:34:24 +00:00
John Baldwin
b0d673254d Add a new SX_RECURSE flag to make support for recursive exclusive locks
conditional.  By default, sx(9) locks are back to not supporting recursive
exclusive locks.

Submitted by:	attilio
2007-05-19 16:35:27 +00:00
Alexander Kabaev
ee9f46615e Add kern.arnd sysctl. SSP code uses it to initialize the stack guard
magic value.

Submitted by: Jeremie Le Hen <jeremie@le-hen.org>
2007-05-19 04:53:14 +00:00
Robert Watson
eefc497257 Remove unnecessary assignment.
CID:		2227
Found with:	Coverity Prevent(tm)
2007-05-18 21:10:08 +00:00
John Baldwin
da7d0d1e24 Fix a comment. 2007-05-18 15:05:41 +00:00
John Baldwin
c91fcee75d Move lock_profile_object_{init,destroy}() into lock_{init,destroy}(). 2007-05-18 15:04:59 +00:00
Konstantin Belousov
d413d21071 Since the renaming of vop_lock to _vop_lock, pre- and post-condition
function calls are no longer generated for vop_lock.
Rename _vop_lock to vop_lock1 to satisfy the tools/vnode_if.awk assumption
about vop naming conventions. This restores the pre/post-condition calls.
2007-05-18 13:02:13 +00:00