Commit Graph

7167 Commits

Author SHA1 Message Date
Peter Wemm
8a412f314e Move the process_fork event out from under Giant. This one is easy,
since there are no consumers in the tree.  Document this.
2004-03-14 01:48:32 +00:00
Peter Wemm
78c45c5d66 Regen for mpsafe kse_create() 2004-03-13 22:32:17 +00:00
Peter Wemm
37814395c1 Push Giant down a little further:
- no longer serialize on Giant for thread_single*() and family in fork,
  exit and exec
- thread_wait() is mpsafe, assert no Giant
- reduce scope of Giant in exit to not cover thread_wait and just do
  vm_waitproc().
- assert that thread_single() family are not called with Giant
- remove the DROP/PICKUP_GIANT macros from thread_single() family
- assert that thread_suspend_check() is not called with Giant
- remove manual drop_giant hack in thread_suspend_check since we know it
  isn't held.
- remove the DROP/PICKUP_GIANT macros from thread_suspend_check() family
- mark kse_create() mpsafe
2004-03-13 22:31:39 +00:00
Robert Watson
5d8dd01da2 Add annotations to mtx_lock(&Giant) in kern_select() and poll() noting that
we always grab Giant, even if we're actually only polling objects that
don't require Giant.  Once socket locking is merged, there will be
strong motivation to fix this.
2004-03-13 05:58:57 +00:00
Bruce Evans
0249823ecb Align the offset in vn_rdwr_inchunks() so that at most the first and
the last chunk are misaligned relative to a MAXBSIZE byte boundary.
vn_rdwr_inchunks() is used mainly for elf core dumps, and elf sections
are usually perfectly misaligned relative to MAXBSIZE, and chunking
prevents the file system from doing much realigning.

This gives a surprisingly large speedup for core dumps -- from 50 to
13 seconds for a 512MB core dump here.  The pessimization was mostly
from an interaction of the misalignment with IO_DIRECT.  It increased
the number of i/o's for each chunk by a factor of 5 (3 writes and 2
read-before-writes instead of 1 write).
2004-03-13 02:56:27 +00:00
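
A minimal sketch of the alignment idea described above (not the actual
vn_rdwr_inchunks() code; the helper name is illustrative): clamp each chunk
so it never crosses a MAXBSIZE boundary, which leaves every chunk after the
first, except possibly the last, MAXBSIZE-aligned.

    #include <sys/param.h>          /* MAXBSIZE */
    #include <sys/types.h>

    /* Illustrative only: length of the next chunk starting at 'offset'. */
    static size_t
    next_chunk_len(off_t offset, size_t resid)
    {
            size_t chunk;

            /* Never cross a MAXBSIZE boundary inside a single chunk. */
            chunk = MAXBSIZE - (size_t)(offset % MAXBSIZE);
            if (chunk > resid)
                    chunk = resid;
            return (chunk);
    }
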
Tom Rhodes
a122cca953 These are changes to allow the Intel C/C++ compiler (lang/icc) to be used
to build the kernel. It doesn't affect operation with gcc.

Most of the changes just add __INTEL_COMPILER to #ifdef's; as
icc v8 may define __GNUC__, some parts may look strange but are
necessary.

Additional changes:
 - in_cksum.[ch]:
   * use a generic C version instead of the assembly version in the !gcc
     case (ASM code breaks with the optimizations icc does)
     -> no bad checksums with an icc compiled kernel
     Help from:		andre, grehan, das
     Stolen from: 	alpha version via ppc version
     The entire checksum code should IMHO be replaced with the DragonFly
     version (because it isn't guaranteed future revisions of gcc will
     include similar optimizations) as in:
        ---snip---
          Revision  Changes    Path
          1.12      +1 -0      src/sys/conf/files.i386
          1.4       +142 -558  src/sys/i386/i386/in_cksum.c
          1.5       +33 -69    src/sys/i386/include/in_cksum.h
          1.5       +2 -0      src/sys/netinet/igmp.c
          1.6       +0 -1      src/sys/netinet/in.h
          1.6       +2 -0      src/sys/netinet/ip_icmp.c

          1.4       +3 -4      src/contrib/ipfilter/ip_compat.h
          1.3       +1 -2      src/sbin/natd/icmp.c
          1.4       +0 -1      src/sbin/natd/natd.c
          1.48      +1 -0      src/sys/conf/files
          1.2       +0 -1      src/sys/conf/files.amd64
          1.13      +0 -1      src/sys/conf/files.i386
          1.5       +0 -1      src/sys/conf/files.pc98
          1.7       +1 -1      src/sys/contrib/ipfilter/netinet/fil.c
          1.10      +2 -3      src/sys/contrib/ipfilter/netinet/ip_compat.h
          1.10      +1 -1      src/sys/contrib/ipfilter/netinet/ip_fil.c
          1.7       +1 -1      src/sys/dev/netif/txp/if_txp.c
          1.7       +1 -1      src/sys/net/ip_mroute/ip_mroute.c
          1.7       +1 -2      src/sys/net/ipfw/ip_fw2.c
          1.6       +1 -2      src/sys/netinet/igmp.c
          1.4       +158 -116  src/sys/netinet/in_cksum.c
          1.6       +1 -1      src/sys/netinet/ip_gre.c
          1.7       +1 -2      src/sys/netinet/ip_icmp.c
          1.10      +1 -1      src/sys/netinet/ip_input.c
          1.10      +1 -2      src/sys/netinet/ip_output.c
          1.13      +1 -2      src/sys/netinet/tcp_input.c
          1.9       +1 -2      src/sys/netinet/tcp_output.c
          1.10      +1 -1      src/sys/netinet/tcp_subr.c
          1.10      +1 -1      src/sys/netinet/tcp_syncache.c
          1.9       +1 -2      src/sys/netinet/udp_usrreq.c

          1.5       +1 -2      src/sys/netinet6/ipsec.c
          1.5       +1 -2      src/sys/netproto/ipsec/ipsec.c
          1.5       +1 -1      src/sys/netproto/ipsec/ipsec_input.c
          1.4       +1 -2      src/sys/netproto/ipsec/ipsec_output.c

          and finally remove
            sys/i386/i386        in_cksum.c
            sys/i386/include     in_cksum.h
        ---snip---
 - endian.h:
   * DTRT in C++ mode
 - quad.h:
   * we don't use gcc v1 anymore, remove support for it
   Suggested by:	bde (long ago)
 - assym.h:
   * avoid zero-length arrays (remove dependency on a gcc specific
     feature)
     This changes the contents of the object file, but as it's
     only used to generate some values for a header, and the generator
     knows how to handle this, there's no impact in the gcc case.
   Explained by:	bde
   Submitted by:	Marius Strobl <marius@alchemy.franken.de>
 - aicasm.c:
   * minor change to teach it about the way icc spells "-nostdinc"
   Not approved by:	gibbs (no reply to my mail)
 - bump __FreeBSD_version (lang/icc needs to know about the changes)

Incarnations of this patch have survived gcc compiles for a long time;
I use it on my desktop. An icc-compiled kernel has worked since Nov. 2003
(exceptions: snd_* if used as modules), and it survives a build of the
entire ports collection with icc.

Parts of this commit contain suggestions or submissions from
Marius Strobl <marius@alchemy.franken.de>.

Reviewed by:	-arch
Submitted by:	netchild
2004-03-12 21:45:33 +00:00
Ruslan Ermilov
7700eb86e7 Do what the execve(2) manpage says and enforce what a Strictly
Conforming POSIX application should do by disallowing the argv
argument to be NULL.

PR:		kern/33738
Submitted by:	Marc Olzheim, Serge van den Boom
OK'ed by:	nectar
2004-03-12 21:06:20 +00:00
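
For illustration (userland C, not part of the commit itself), a caller that
previously got away with a NULL argv now has to pass at least a one-element,
NULL-terminated vector:

    #include <unistd.h>

    static int
    run_prog(const char *path)
    {
            char *argv[] = { (char *)path, NULL }; /* argv may no longer be NULL */
            char *envp[] = { NULL };

            return (execve(path, argv, envp));     /* returns only on error */
    }
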
Ken Smith
db322c7eba This is a temporary fix to solve a regression issue on sparc64 that
is caused by the way sparc64 registers its CPUs.  Nate will work on
a real fix shortly.

Approved by:	njl
2004-03-12 20:35:21 +00:00
John Baldwin
1ed3e44f22 - Remove old sleep queues.
- Remove sleepqueue argument from sleepq_set_timeout() since it is not
  used.
2004-03-12 19:06:18 +00:00
John Baldwin
595bc82a1d Fixup a comment. 2004-03-12 19:05:46 +00:00
Dag-Erling Smørgrav
30a058027a Replace a manual check of a VMIO candidate with vn_canvmio(). This
silences an annoying warning in getblk() when VMIO'ing on a directory
vnode, which can happen when vfs.vmiodirenable is 1.

Bring the warning message in line with reality at the same time.

Submitted by:	hmp
2004-03-12 12:02:12 +00:00
Poul-Henning Kamp
ceb58ca58f When I was a kid my work table was one cluttered mess and cleaning it up
was a rather overwhelming task.  I soon learned that if you don't know
where you're going to store something, at least try to pile it next to
something slightly related in the hope that a pattern emerges.

Apply the same principle to the ffs/snapshot/softupdates code which has
leaked into specfs:  Add yet another buf quasi-method, call it from the
only two places I can see it making a difference, and implement the
magic in ffs_softdep.c where it belongs.

It's not pretty, but at least it's one less layer violated.
2004-03-11 18:50:33 +00:00
Poul-Henning Kamp
4d453ef101 Properly vector all bwrite() and BUF_WRITE() calls through the same path
and s/BUF_WRITE()/bwrite()/ since it now does the same as bwrite().
2004-03-11 18:02:36 +00:00
Poul-Henning Kamp
2b348f7429 Remove unused mnt_reservedvnlist field. 2004-03-11 16:59:57 +00:00
Poul-Henning Kamp
651b11eaf2 Remove unused second arg to vfinddev().
Don't call addaliasu() on VBLK nodes.
2004-03-11 16:33:11 +00:00
Poul-Henning Kamp
8666b655b5 Correctly account for extra bits in unit numbers when looking for
next free unit.
2004-03-11 14:11:02 +00:00
Poul-Henning Kamp
9397290e76 Add clone_setup() function rather than rely on lazy initialization.
Requested by:	rwatson
2004-03-11 12:58:55 +00:00
John-Mark Gurney
0235bf0261 Make sure we hold the filedesc lock when calling fdinit() when RFCFDG is set
on a call to rfork().

Submitted by:	Brian Buchanan
Semi-Reviewed by: rwatson
2004-03-10 00:27:36 +00:00
Nate Lawson
29f5b9a8c1 Hook CPUs up to newbus. CPUs will ultimately be a bus driver so that
multiple CPU-specific drivers can attach.  This is a work in progress
so children aren't supported yet.

Help from:	jhb
2004-03-09 03:37:21 +00:00
Robert Watson
ce89352952 Mark loadaverage callout as CALLOUT_MPSAFE.
Reviewed by:	jhb
2004-03-08 22:01:19 +00:00
Pawel Jakub Dawidek
dd604e2647 Add two new sysctls:
	- security.bsd.hardlink_check_uid, when set, means that unprivileged
		users are not permitted to create hard links to files not
		owned by them,
	- security.bsd.hardlink_check_gid, when set, means that unprivileged
		users are not permitted to create hard links to files owned
		by a group they don't belong to.

OK'ed by:	rwatson
2004-03-08 20:37:25 +00:00
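
A hedged userland example of flipping the new knobs via sysctlbyname(3); the
sysctl names come from the message above, the helper itself is illustrative:

    #include <sys/types.h>
    #include <sys/sysctl.h>

    /* Turn on both hard-link restrictions; returns 0 on success. */
    static int
    restrict_hardlinks(void)
    {
            int one = 1;

            if (sysctlbyname("security.bsd.hardlink_check_uid", NULL, NULL,
                &one, sizeof(one)) != 0)
                    return (-1);
            return (sysctlbyname("security.bsd.hardlink_check_gid", NULL, NULL,
                &one, sizeof(one)));
    }
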
Peter Wemm
a69d88af52 Move a vref call outside of proc locks and Giant. By virtue of the fact
that we (p1) are currently running, we hold a reference on p_textvp which
means the vnode cannot go away.  p2 cannot run yet (and hence cannot exit)
so this should be safe to do at this point.  As a bonus, it removes a
block of under-Giant code that was there to support the vref.
2004-03-08 00:32:34 +00:00
Alan Cox
3eba15c12e Remove GIANT_REQUIRED from vunmapbuf(). 2004-03-07 00:37:18 +00:00
Alan Cox
5fadbfeac2 Giant is not required for vm_thread_new_altkstack(). 2004-03-07 00:06:32 +00:00
Alexander Kabaev
ff85a3f0e1 Always call vn_finished_write after vn_start_write was called. All
occurrences of 'goto done' after vn_start_write invocation were cleaning
up incompletely.
2004-03-06 04:09:54 +00:00
Peter Wemm
5750ee293d Add a missing part of jhb's previous commit. It looks like he had a
patch chunk rejected that he missed.  This would manifest as a lock
assertion panic at boot (Giant not locked in kern_fork.c).

Obtained from:  jhb
2004-03-06 00:44:59 +00:00
John Baldwin
6074439965 kthread_exit() no longer requires Giant, so don't force callers to acquire
Giant just to call kthread_exit().

Requested by:	many
2004-03-05 22:42:17 +00:00
John Baldwin
4ae89b957c - Push down Giant in exit() and wait().
- Push Giant down a bit in coredump() and call coredump() with the proc
  lock already held rather than unlocking it only to turn around and
  relock it.

Requested by:	peter
2004-03-05 22:39:53 +00:00
John Baldwin
8144e3b884 Lock Giant around the single threading code in exec() to satisfy an
assertion in the single threading code.
2004-03-05 22:38:26 +00:00
John Baldwin
5ce2f67858 - Grab a share lock of the proctree lock while looking for a pid due to the
process group and session dereferences.  Also, check that p_pgrp and
  p_session are NULL before dereferencing them.
- Push down Giant in fork1().

Requested by:	peter
2004-03-05 22:37:32 +00:00
Don Lewis
169299398a Undo the merger of mlock()/vslock and munlock()/vsunlock() and the
introduction of kern_mlock() and kern_munlock() in
        src/sys/kern/kern_sysctl.c      1.150
        src/sys/vm/vm_extern.h          1.69
        src/sys/vm/vm_glue.c            1.190
        src/sys/vm/vm_mmap.c            1.179
because different resource limits are appropriate for transient and
"permanent" page wiring requests.

Retain the kern_mlock() and kern_munlock() API in the revived
vslock() and vsunlock() functions.

Combine the best parts of each of the original sets of implementations
with further code cleanup.  Make the mlock() and vslock()
implementations as similar as possible.

Retain the RLIMIT_MEMLOCK check in mlock().  Move the most stringent
test, which can return EAGAIN, last so that requests that have no
hope of ever being satisfied will not be retried unnecessarily.

Disable the test that can return EAGAIN in the vslock() implementation
because it will cause the sysctl code to wedge.

Tested by:	Cy Schubert <Cy.Schubert AT komquats.com>
2004-03-05 22:03:11 +00:00
Robert Watson
8cbec0c8dd The roundrobin callout from sched_4bsd is MPSAFE, so set up the
callout as MPSAFE to avoid grabbing Giant.

Reviewed by:	jhb
2004-03-05 19:27:04 +00:00
Robert Watson
16df17d062 Put "failed to set signal flags properly for ast()" check under
DIAGNOSTIC instead of INVARIANTS.  INVARIANTS is intended for tests
that don't substantially change code flow or behavior (passive), but
this test required locking both the proc lock and scheduler lock
in order to execute.  It also appears to be a very advisory diagnostic
as opposed to an invariant violation.

Following discussion with:	bde
2004-03-05 17:35:28 +00:00
Poul-Henning Kamp
1e0e79c993 Just because the timecounter reads the same value on two samples
after each other doesn't mean that nothing happened.
2004-03-04 14:14:23 +00:00
Bruce Evans
c8564ad433 Fixed some style bugs (mainly English usage errors in comments). 2004-03-04 09:56:29 +00:00
Bruce Evans
01e3f3ae4f Fixed some style bugs (mainly misplaced comments, and totally disordered
declarations in acct_process()).
2004-03-04 09:47:09 +00:00
Robert Watson
0b759971a2 Remove unneeded label 'done2' from socket(). We now grab Giant
only around socreate(), and don't need it for file descriptor
accesses.

Submitted by:	sam
2004-03-04 01:57:48 +00:00
Dag-Erling Smørgrav
86b5e56351 Use different dummy wait channels to avoid panic in msleep().
Reviewed by:	jhb
2004-03-03 23:03:18 +00:00
John Baldwin
efac7951fe Always assert that the passed in lock is the same as the saved lock in the
sleep queue now that the one abnormal case has been fixed.
2004-03-02 15:02:08 +00:00
John Baldwin
959c0c4122 Correct handling of PDROP in msleep() to just skip the mtx_lock() rather
than clear the lock pointer so that sleepq_add() still gets the correct
lock pointer and doesn't bogusly trip an assertion.
2004-03-02 14:58:33 +00:00
John Baldwin
707559e402 Check for TDF_SINTR before calling sleepq_abort() as there is a narrow
race in between sleepq_add() and sleepq_catch_signals() in that setting
td_wchan and TDF_SINTR is not atomic to sched_lock but only to the sleepq
lock.  This band-aid will stop assertion failures, but there is perhaps a
larger problem with the sleepq_add/sleepq_catch_signals race that I am not
sure how to solve.  For the signals case the race is harmless because we
always call cursig() after setting TDF_SINTR.  However, KSE doesn't do
anything in sleepq_catch_signals() to check that this race was lost, so I
am unsure if this race is harmful for this specific abort.
2004-03-01 23:07:58 +00:00
Robert Watson
746e5bf09b Rename dup_sockaddr() to sodupsockaddr() for consistency with other
functions in kern_socket.c.

Rename the "canwait" field to "mflags" and pass M_WAITOK and M_NOWAIT
in from the caller context rather than "1" or "0".

Correct mflags pass into mac_init_socket() from previous commit to not
include M_ZERO.

Submitted by:	sam
2004-03-01 03:14:23 +00:00
Scott Long
740d9ba692 Convert the other use of flags to mflags in soalloc(). 2004-03-01 01:14:28 +00:00
Robert Watson
2bc87dcfbe Modify soalloc() API so that it accepts a malloc flags argument rather
than a "waitok" argument.  Callers now passing M_WAITOK or M_NOWAIT
rather than 0 or 1.  This simplifies the soalloc() logic, and also
makes the waiting behavior of soalloc() more clear in the calling
context.

Submitted by:	sam
2004-02-29 17:54:05 +00:00
Poul-Henning Kamp
2cf6bdac50 Loudly announce WITNESS and DIAGNOSTIC options and warn about reduced
performance.
2004-02-29 16:56:54 +00:00
Poul-Henning Kamp
3d6e5ccb06 Make sure to disable the watchdog if we cannot honour the timeout. 2004-02-28 22:01:19 +00:00
Poul-Henning Kamp
4103b7652d Rename the WATCHDOG option to SW_WATCHDOG and make it use the
generic watchdog(9) interface.

Make watchdogd(8) perform as watchdog(8) as well, and make it
possible to specify a check command to run, timeout and sleep
periods.

Update watchdog(4) to talk about the generic interface and add
new watchdog(8) page.
2004-02-28 20:56:35 +00:00
John Baldwin
44f3b09204 Switch the sleep/wakeup and condition variable implementations to use the
sleep queue interface:
- Sleep queues attempt to merge some of the benefits of both sleep queues
  and condition variables.  Having sleep queues in a hash table avoids
  having to allocate a queue head for each wait channel.  Thus, struct cv
  has shrunk down to just a single char * pointer now.  However, the
  hash table does not hold threads directly, but queue heads.  This means
  that once you have located a queue in the hash bucket, you no longer have
  to walk the rest of the hash chain looking for threads.  Instead, you have
  a list of all the threads sleeping on that wait channel.
- Outside of the sleepq code and the sleep/cv code the kernel no longer
  differentiates between cv's and sleep/wakeup.  For example, calls to
  abortsleep() and cv_abort() are replaced with a call to sleepq_abort().
  Thus, the TDF_CVWAITQ flag is removed.  Also, calls to unsleep() and
  cv_waitq_remove() have been replaced with calls to sleepq_remove().
- The sched_sleep() function no longer accepts a priority argument as
  sleeps no longer inherently bump the priority.  Instead, this is solely
  a property of msleep() which explicitly calls sched_prio() before
  blocking.
- The TDF_ONSLEEPQ flag has been dropped as it was never used.  The
  associated TDF_SET_ONSLEEPQ and TDF_CLR_ON_SLEEPQ macros have also been
  dropped and replaced with a single explicit clearing of td_wchan.
  TD_SET_ONSLEEPQ() would really have only made sense if it had taken
  the wait channel and message as arguments anyway.  Now that that only
  happens in one place, a macro would be overkill.
2004-02-27 18:52:44 +00:00
John Baldwin
e5bb601d87 Drop sched_lock around the wakeup of the parent process after setting
the process state to zombie when a process exits to avoid a lock order
reversal with the sleepqueue locks.  This appears to be the only place
that we call wakeup() with sched_lock held.
2004-02-27 18:39:09 +00:00
John Baldwin
dd75b0a90d Add an implementation of a generic sleep queue abstraction that is used
to queue threads sleeping on a wait channel similar to how turnstiles are
used to queue threads waiting for a lock.  This subsystem will be used as
the backend for sleep/wakeup and condition variables initially.  Eventually
it will also be used to replace the ithread-specific iwait thread
inhibitor.

Sleep queues are also not locked by sched_lock, so this splits sched_lock
up a bit further increasing concurrency within the scheduler.  Sleep queues
also natively support timeouts on sleeps and interruptible sleeps allowing
for the reduction of a lot of duplicated code between the sleep/wakeup and
condition variable implementations.  For more details on the sleep queue
implementation, check the comments in sys/sleepqueue.h and
kern/subr_sleepqueue.c.
2004-02-27 18:33:09 +00:00
Dag-Erling Smørgrav
21885af505 Add sysctl_move_oid() which reparents an existing OID. 2004-02-27 17:13:23 +00:00
John Baldwin
5b7de7e19e Clarify and tweak some comments. 2004-02-27 16:14:27 +00:00
John Baldwin
03129ba97f Fix _sx_assert() to panic() rather than printf() when an assertion fails
and ignore assertions if we have already paniced.
2004-02-27 16:13:44 +00:00
John Baldwin
f4114c3d7f Replace the ktrace queue's semaphore with a condition variable instead as
it is slightly more efficient since we already have a mutex to protect the
queue.  Ktrace originally used a semaphore more as a proof of concept.
2004-02-26 19:30:22 +00:00
Don Lewis
47934cef8f Split the mlock() kernel code into two parts, mlock(), which unpacks
the syscall arguments and does the suser() permission check, and
kern_mlock(), which does the resource limit checking and calls
vm_map_wire().  Split munlock() in a similar way.

Enable the RLIMIT_MEMLOCK checking code in kern_mlock().

Replace calls to vslock() and vsunlock() in the sysctl code with
calls to kern_mlock() and kern_munlock() so that the sysctl code
will obey the wired memory limits.

Nuke the vslock() and vsunlock() implementations, which are no
longer used.

Add a member to struct sysctl_req to track the amount of memory
that is wired to handle the request.

Modify sysctl_wire_old_buffer() to return an error if its call to
kern_mlock() fails.  Only wire the minimum of the length specified
in the sysctl request and the length specified in its argument list.
Sysctl handlers that use sysctl_wire_old_buffer() should specify
reasonable estimates for the amount of data they want to return so
that only the minimum amount of memory is wired no matter what length
has been specified by the request.

Modify the callers of sysctl_wire_old_buffer() to look for the
error return.

Modify sysctl_old_user to obey the wired buffer length and clean up
its implementation.

Reviewed by:	bms
2004-02-26 00:27:04 +00:00
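
A rough sketch of how a handler uses the now error-returning
sysctl_wire_old_buffer(); the handler and its stats structure are hypothetical,
only the wire-then-copyout pattern reflects the change above.

    #include <sys/param.h>
    #include <sys/sysctl.h>

    /* Hypothetical stats blob; only the error check matters here. */
    struct example_stats {
            uint64_t        calls;
            uint64_t        errors;
    };

    static int
    sysctl_example(SYSCTL_HANDLER_ARGS)
    {
            struct example_stats stats = { 0, 0 };
            int error;

            /* Wire no more than we actually intend to return, and honor
               the error now that sysctl_wire_old_buffer() can fail. */
            error = sysctl_wire_old_buffer(req, sizeof(stats));
            if (error != 0)
                    return (error);

            return (SYSCTL_OUT(req, &stats, sizeof(stats)));
    }
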
Robert Watson
049ffe98a8 Assert pipe mutex in pipeselwakeup(), as we manipulate pipe_state
in a non-atomic manner.  It appears to always be called with the
mutex (good).
2004-02-26 00:18:22 +00:00
Robert Watson
094bdd260c Update comment regarding MAC labels: we no longer pass endpoints
into the MAC Framework, just the pipe pair.

GC 'hadpeer' used in pipedestroy(), which is no longer needed as
we check pipe_present flags on the pair.
2004-02-25 23:30:56 +00:00
Dag-Erling Smørgrav
854a417d92 Whitespace cleanup 2004-02-24 19:31:30 +00:00
Poul-Henning Kamp
652d04726d Fix two oversights here: don't trash the freelist, and properly cleanup
the cdevsw{}.

Submitted by:	tegge
2004-02-23 08:42:55 +00:00
Brian Feldman
240160d48b Correct some major SMP-harmful problems in the pipe implementation. First
of all, PIPE_EOF is not checked pervasively after everything that can drop
the pipe mutex and msleep(), so fix.  Additionally, though it might not
harm anything, pipelock() and pipeunlock() are not used consistently.
Third, the kqueue support functions do not use the pipe mutex correctly.
Last, but absolutely not least, is a race: if pipe_busy is not set on
the closing side of the pipe, the other side that is trying to write to
it will crash BECAUSE PIPE_EOF IS NOT SET!  Unconditionally set
PIPE_EOF, and get rid of all the lockups/crashes I have seen trying
to build ports.
2004-02-22 23:00:14 +00:00
Daniel Eischen
2648efa621 Add sysctls to allow showing threads for pgrp, tty, uid, ruid,
and pid.
2004-02-22 17:54:32 +00:00
Pawel Jakub Dawidek
63dba32b76 Reimplement sysctls handling by MAC framework.
Now I believe it is done in the right way.

Removed some XXMAC cases, we now assume 'high' integrity level for all
sysctls, except those with CTLFLAG_ANYBODY flag set. No more magic.

Reviewed by:	rwatson
Approved by:	rwatson, scottl (mentor)
Tested with:	LINT (compilation), mac_biba(4) (functionality)
2004-02-22 12:31:44 +00:00
Colin Percival
b17dd2bcc0 If we're going to panic(), do it before dereferencing a NULL pointer.
Reported by:	"Ted Unangst" <tedu@coverity.com>
Approved by:	rwatson (mentor)
2004-02-22 01:11:53 +00:00
Robert Watson
f6a4109212 Update my personal copyrights and NETA copyrights in the kernel
to use the "year1-year3" format, as opposed to "year1, year2, year3".
This seems to make lawyers more happy, but also prevents the
lines from getting excessively long as the years start to add up.

Suggested by:	imp
2004-02-22 00:33:12 +00:00
Poul-Henning Kamp
ded67d0f77 Check for NODEV return from udev2dev() 2004-02-21 23:52:03 +00:00
Poul-Henning Kamp
cd690b60de Device megapatch 6/6:
This is what we came here for:  Hang dev_t's from their cdevsw,
refcount cdevsw and dev_t and generally keep track of things a lot
better than we used to:

Hold a cdevsw reference around all entrances into the device driver,
this will be necessary to safely determine when we can unload driver
code.

Hold a dev_t reference while the device is open.

KASSERT that we do not enter the driver on a non-referenced dev_t.

Remove old D_NAG code, anonymous dev_t's are not a problem now.

When destroy_dev() is called on a referenced dev_t, move it to
dead_cdevsw's list.  When the refcount drops, free it.

Check that cdevsw->d_version is correct.  If not, set all methods
to the dead_*() methods to prevent entrance into driver.  Print
warning on console to this effect.  The device driver may still
explode if it is also incompatible with newbus, but in that case
we probably didn't get this far in the first place.
2004-02-21 21:57:26 +00:00
Poul-Henning Kamp
816d62bbb9 Device megapatch 5/6:
Remove the unused second argument from udev2dev().

Convert all remaining users of makedev() to use udev2dev().  The
semantic difference is that udev2dev() will only locate a pre-existing
dev_t; it will not, like makedev(), create a new one.

Apart from the tiny well-controlled window in D_PSEUDO drivers,
there should no longer be any "anonymous" dev_t's in the system
now, only dev_t's created with make_dev() and make_dev_alias().
2004-02-21 21:32:15 +00:00
Poul-Henning Kamp
dc08ffec87 Device megapatch 4/6:
Introduce d_version field in struct cdevsw, this must always be
initialized to D_VERSION.

Flip sense of D_NOGIANT flag to D_NEEDGIANT, this involves removing
four D_NOGIANT flags and adding 145 D_NEEDGIANT flags.
2004-02-21 21:10:55 +00:00
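
An illustrative cdevsw initialization showing the two points above (the now
mandatory d_version and the flipped Giant flag); the driver name is a
placeholder and the methods are omitted.

    #include <sys/param.h>
    #include <sys/conf.h>

    /* Placeholder driver; only the fields affected by this change are shown. */
    static struct cdevsw foo_cdevsw = {
            .d_version =    D_VERSION,      /* must always be initialized now */
            .d_flags =      D_NEEDGIANT,    /* replaces the old opt-out D_NOGIANT */
            .d_name =       "foo",
    };
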
Poul-Henning Kamp
8e1f1df080 Device megapatch 3/6:
Add missing D_TTY flags to various drivers.

Complete asserts that dev_t's passed to ttyread(), ttywrite(),
ttypoll() and ttykqwrite() have (d_flags & D_TTY) and a struct tty
pointer.

Make ttyread(), ttywrite(), ttypoll() and ttykqwrite() the default
cdevsw methods for D_TTY drivers and remove the explicit initializations
in various drivers cdevsw structures.
2004-02-21 20:41:11 +00:00
Poul-Henning Kamp
b0b0334878 Device megapatch 2/6:
This commit adds a couple of functions for pseudodrivers to use for
implementing cloning in a manner we will be able to lock down (shortly).

Basically what happens is that pseudo drivers get a way to ask for
"give me the dev_t with this unit number" or alternatively "give
me a dev_t with the lowest guaranteed free unit number" (there is
unfortunately a lot of non-POLA in the exact numeric value of this
number, just live with it for now)

Managing the unit number space this way removes the need to use
rman(9) to do so in the drivers; this greatly simplifies the code in
the drivers because even using rman(9) they still needed to manage
their dev_t's anyway.

I have taken the if_tun, if_tap, snp and nmdm drivers through the
mill, partly because they (ab)used makedev(), but mostly because
together they represent three different problems for device-cloning:

if_tun and snp are the plain case: just give me a device.

if_tap has two kinds of devices, with a flag for device type.

nmdm has paired devices (a la pty) and you can clone either of them.
2004-02-21 20:29:52 +00:00
Brian Feldman
fe5f3a72ac Make sure to wake up any select waiters when closing a kqueue (also, not
crash).  I am fairly sure that only people with SMP and multi-threaded
apps using kqueue will be affected by this, so I have a stress-testing
program on my web site:
<URL:http://green.homeunix.org/~green/getaddrinfo-pthreads-stresstest.c>
2004-02-20 04:00:48 +00:00
John Baldwin
712f57d8ab Tidy up the thread taskqueue implementation and close a lost wakeup race.
Instead of creating a mutex that we msleep on but don't actually lock when
doing the corresponding wakeup(), in the kthread, lock the mutex associated
with our taskqueue and msleep while the queue is empty.  Assert that the
queue is locked when the callback function is called to wake the kthread.
2004-02-19 22:03:52 +00:00
Jacques Vidrine
57f22bd4af Rework jail_attach(2) so that an already jailed process cannot hop
to another jail.

Submitted by:	rwatson
2004-02-19 21:03:20 +00:00
Pawel Jakub Dawidek
461167c289 Added sysctl security.jail.jailed.
It returns 1 if the process is inside a jail and 0 if it is not.
Whether or not we are in a jail is not a secret; there are plenty of
ways to discover it. Many people are using their own hacks to check this,
and this will be a legitimate way from now on.

It would be great if our startup scripts took advantage of this sysctl
to allow a clean "boot" inside a jail.

Approved by:	rwatson, scottl (mentor)
2004-02-19 14:29:14 +00:00
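
A small userland example of the advertised check; only the sysctl name is
taken from the commit, the helper is an illustrative sketch:

    #include <sys/types.h>
    #include <sys/sysctl.h>

    /* Returns 1 if running inside a jail, 0 if not, -1 on error. */
    static int
    in_jail(void)
    {
            int jailed;
            size_t len = sizeof(jailed);

            if (sysctlbyname("security.jail.jailed", &jailed, &len,
                NULL, 0) != 0)
                    return (-1);
            return (jailed);
    }
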
Pawel Jakub Dawidek
f6739b1ddc Simplify a check. We are only able to check the exclusive lock, and if
the 2nd condition is true, the first one is true for sure.

Approved by:	jhb, scottl (mentor)
2004-02-19 14:19:31 +00:00
Don Lewis
cf93aa166c When reparenting a process in the PT_DETACH code, only set p_sigparent
to SIGCHLD if the new parent process is initproc.

MFC after:	2 weeks
2004-02-19 10:39:42 +00:00
Don Lewis
6567eef757 A Linux thread created using clone() should not send SIGCHLD to its
parent if no signal is specified in the clone() flags argument.

PR:		42457
MFC after:	2 weeks
2004-02-19 06:43:48 +00:00
Nate Lawson
32869e71fb Add support for 'h' and 'hh' modifiers for printf(9).
Submitted by:	Bruno Ducrot <ducrot AT poupinou.org>
Reviewed by:	bde
2004-02-19 05:29:39 +00:00
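
A trivial hedged illustration of the new modifiers in kernel printf(9):
'h' for short-sized and 'hh' for char-sized arguments, so narrow values can
be printed without casts.

    #include <sys/param.h>
    #include <sys/systm.h>          /* printf(9) */

    static void
    print_ids(uint16_t vendor, uint8_t rev)
    {
            /* '%hx' consumes a short-sized value, '%hhu' a char-sized one. */
            printf("vendor 0x%hx, revision %hhu\n", vendor, rev);
    }
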
Colin Percival
3a1bdbf8d1 Don't ignore errors from vfs_allocate_syncvnode.
PR:		kern/18503
Submitted by:	Anatoly Vorobey <mellon@pobox.com>
Approved by:	rwatson (mentor)
2004-02-18 05:20:54 +00:00
Peter Wemm
df7c361e64 Checkpoint a hack to enable running i386 libc_r binaries on a 64 bit
kernel.  I'm not happy with it yet - refinements are to come.
This hack allows the kern.ps_strings and kern.usrstack sysctls to respond
to a 32 bit request, such as those coming from emulated i386 binaries.
2004-02-18 00:54:17 +00:00
David Malone
a1cc6206fb Correct a comment.
Reviewed by:	alfred, tanimura
2004-02-17 12:30:32 +00:00
Dag-Erling Smørgrav
963385cf22 Mechanical whitespace cleanup. 2004-02-17 10:21:03 +00:00
Dag-Erling Smørgrav
44f4b94b38 Don't bother storing a result when all you need are the side effects. 2004-02-16 18:38:46 +00:00
David Malone
a82294d01c In fdcheckstd the descriptor table should never be shared, so just
KASSERT this rather than trying to deal with what happens when file
descriptors change out from under us.
2004-02-15 21:14:48 +00:00
Bruce Evans
72632ef235 Fixed style bugs near previous commit (mainly formatting errors and
missing parentheses).  Use default handling (trap to debugger) for
udev2dev(x, 1) since it is an error and doesn't happen anywhere in
the sys tree except in bogusly commented out code in coda.
2004-02-15 20:14:47 +00:00
Colin Percival
a20e9655b9 Remove opv_desc_vector from vfs_add_vnodeops, since it is defined
and given a value, but never used.  This has no effect on the
resulting binaries, since gcc optimizes the variable away anyway.

PR:		kern/62684
Approved by:	rwatson (mentor)
2004-02-15 17:27:33 +00:00
Poul-Henning Kamp
2a3faf2fbd Split the initialization of the cdevsw into a separate function. 2004-02-15 10:35:33 +00:00
Robert Watson
402d7aa884 Remove excess brackets. 2004-02-15 00:43:22 +00:00
Poul-Henning Kamp
d60d18d491 Use standard style for cdevsw initialization. 2004-02-14 20:03:36 +00:00
Robert Watson
679a106075 By default, don't allow processes in a jail to list the set of
jails in the system.  Previous behavior (allowed) may be restored
by setting security.jail.list_allowed=1.
2004-02-14 19:19:47 +00:00
Robert Watson
7e440242e5 Fix mismerge in last commit: check that cred->cr_prison is NULL
before dereferencing the prison pointer.
2004-02-14 18:52:43 +00:00
Robert Watson
f08df373a3 By default, when a process in jail calls getfsstat(), only return the
data for the file system on which the jail's root vnode is located.
Previous behavior (show data for all mountpoints) can be restored
by setting security.jail.getfsstatroot_only to 0.  Note: this also
has the effect of hiding other mounts inside a jail, such as /dev,
/tmp, and /proc, but errs on the side of leaking less information.
2004-02-14 18:31:11 +00:00
Poul-Henning Kamp
d08c5d0b9b Remove the check which used to protect us against make_dev() being
called until DEVFS had a chance to initialize.  Since DEVFS is mandatory
and things over in that department coincidentally work without any
initialization now, this is safe.
2004-02-14 17:19:43 +00:00
Brian Feldman
a0ed09c0af T -CURRENT DO NOT CRASH UPON ^T K PLZ THX.
Also, use sched_pctcpu() instead of assuming td->td_kse is non-NULL.
2004-02-14 01:30:06 +00:00
Brian Feldman
f662a93197 Always socantsendmore() before deallocating a socket. This, in turn,
calls selwakeup() if necessary (which it is, if you don't want freed
memory hanging around on your td->td_selq).

Props to:	alfred
2004-02-12 01:48:40 +00:00
Don Lewis
55b5f2a202 When reparenting a process to init, make sure that p_sigparent is
set to SIGCHLD.  This avoids the creation of orphaned Linux-threaded
zombies that init is unable to reap.  This can occur when the parent
process sets its SIGCHLD to SIG_IGN.  Fix a similar situation in the
PT_DETACH code.

Tested by:	"Steven Hartland" <killing AT multiplay.co.uk>
2004-02-11 22:06:02 +00:00
John Baldwin
e7a44cace2 Argh! Fix a bogon. lim_cur() was returning the hard (max) limit rather
than the soft (cur) limit.

Submitted by:	bde
2004-02-11 18:04:13 +00:00
Mike Silbersack
b49d824e8b Add the SF_NODISKIO flag to sendfile. This flag causes sendfile to be
mindful of blocking on disk I/O and instead return EBUSY when such
blocking would occur.

Results from the DeBox project indicate that blocking on disk I/O
can slow the performance of a kqueue/poll based webserver.  Using
a flag such as SF_NODISKIO and throwing connections that would block
to helper processes/threads helped increase performance.

Currently, only the Flash webserver uses this flag, although it could
probably be applied to thttpd with relative ease.

Idea by:	Yaoping Ruan & Vivek Pai
2004-02-08 07:35:48 +00:00
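
A hedged userland sketch of the intended usage pattern: try the send with
SF_NODISKIO and hand the request to a helper if it would have blocked on
disk. The hand-off itself is only indicated, not implemented.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <errno.h>

    /* Returns 0 on success, 1 if the data was not resident, -1 on error. */
    static int
    send_file_nodiskio(int fd, int sock, off_t off, size_t len)
    {
            off_t sbytes;

            if (sendfile(fd, sock, off, len, NULL, &sbytes, SF_NODISKIO) == 0)
                    return (0);
            if (errno == EBUSY) {
                    /* Would have blocked on disk I/O; punt to a helper
                       process or thread instead of stalling the event loop. */
                    return (1);
            }
            return (-1);
    }
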
Alan Cox
c5aebf380c swp_pager_async_iodone() no longer requires Giant. Modify bufdone()
and swapgeom_done() to perform swp_pager_async_iodone() without Giant.

Reviewed by:	tegge
2004-02-07 08:54:50 +00:00
John Baldwin
a875f38546 - Convert the plimit lock to a pool mutex lock.
- Hide struct plimit from userland.

Submitted by:	bde (2)
2004-02-06 19:35:14 +00:00
John Baldwin
f4daf05619 - Correct the translation of old rlimit values to properly handle the old
RLIM_INFINITY case for ogetrlimit().
- Use %jd and intmax_t to output negative time in usec in calcru().
- Rework getrusage() to make a copy of the rusage struct into a local
  variable while holding Giant and then do the copyout from the local
  variable to avoid having to have the original process rusage struct
  locked while doing the copyout (which would not be safe).  This also
  includes a few style fixes from Bruce to getrusage().

Submitted by:	bde (1, parts of 3)
Suggested by:	bde (2)
2004-02-06 19:30:12 +00:00
John Baldwin
99b6e02ba6 A few more style fixes from Bruce including a few I missed last time.
Submitted by:	bde
2004-02-06 19:25:34 +00:00
John Baldwin
4c3558aa82 Always set a process' state to normal when it is fully constructed in
fork1() rather than only doing it for the RFSTOPPED case and then having
to fix it up in other places later on.
2004-02-05 21:01:37 +00:00
John Baldwin
b4323d7729 - A lot of style and whitespace fixes.
- Update a few comments regarding locking notes.

Submitted by:	bde (1, mostly)
2004-02-05 20:53:25 +00:00
Jacques Vidrine
b00a3c85da Correct a reference counting bug in shmat(2). If vm_map_find(9)
failed, the reference count for the virtual memory object referenced
by the specified shared memory segment would have been erroneously
incremented.

Reported by:	Joost Pol <joost@pine.nl>
2004-02-05 18:00:35 +00:00
Alexander Kabaev
dec8868dcc Rename cn_unavailable to cnunavailable for little more consistency.
Garbage collect unused cndebug() function.

Suggested by:	bde
2004-02-05 17:35:28 +00:00
Mike Silbersack
b711d74eaf Style fixes: don't indent variable names.
Submitted by:	bde
2004-02-05 08:29:27 +00:00
Alexander Kabaev
e99c09e2dc Eliminate global cons_unavailable flag and replace it by the status
bit maintained on a per-device basis. Single variable is inadequate
on machines running with multiple consoles enabled.
2004-02-05 01:56:43 +00:00
John Baldwin
91d5354a2c Locking for the per-process resource limits structure.
- struct plimit includes a mutex to protect a reference count.  The plimit
  structure is treated similarly to struct ucred in that it is always copy
  on write, so having a reference to a structure is sufficient to read from
  it without needing a further lock.
- The proc lock protects the p_limit pointer and must be held while reading
  limits from a process to keep the limit structure from changing out from
  under you while reading from it.
- Various global limits that are ints are not protected by a lock since
  int writes are atomic on all the archs we support and thus a lock
  wouldn't buy us anything.
- All accesses to individual resource limits from a process are abstracted
  behind a simple lim_rlimit(), lim_max(), and lim_cur() API that return
  either an rlimit, or the current or max individual limit of the specified
  resource from a process.
- dosetrlimit() was renamed to kern_setrlimit() to match existing style of
  other similar syscall helper functions.
- The alpha OSF/1 compat layer no longer calls getrlimit() and setrlimit()
  (it didn't use the stackgap when it should have) but uses lim_rlimit()
  and kern_setrlimit() instead.
- The svr4 compat no longer uses the stackgap for resource limits calls,
  but uses lim_rlimit() and kern_setrlimit() instead.
- The ibcs2 compat no longer uses the stackgap for resource limits.  It
  also no longer uses the stackgap for accessing sysctl's for the
  ibcs2_sysconf() syscall but uses kernel_sysctl() instead.  As a result,
  ibcs2_sysconf() no longer needs Giant.
- The p_rlimit macro no longer exists.

Submitted by:	mtm (mostly, I only did a few cleanups and catchups)
Tested on:	i386
Compiled on:	alpha, amd64
2004-02-04 21:52:57 +00:00
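
A minimal in-kernel sketch of the new accessor style described above, assuming
the proc-based lim_cur() signature implied by the message (the exact
prototypes may differ):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <sys/resource.h>
    #include <sys/resourcevar.h>

    /*
     * Read a process's current data-size limit.  The proc lock keeps
     * p_limit from changing underneath us while we read from it.
     */
    static rlim_t
    get_datasize_limit(struct proc *p)
    {
            rlim_t lim;

            PROC_LOCK(p);
            lim = lim_cur(p, RLIMIT_DATA);
            PROC_UNLOCK(p);
            return (lim);
    }
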
Mike Silbersack
ff5e43a3fd Rename iov_to_uio to uiofromiov to be more consistent with other
uio* functions.

Suggested by:	bde
2004-02-04 08:43:21 +00:00
Pawel Jakub Dawidek
19b0efd32d Allow assert that the current thread does not hold the sx(9) lock.
Reviewed by:		jhb
In cooperation with:	juli, jhb
Approved by:		jhb, scottl (mentor)
2004-02-04 08:14:58 +00:00
Mike Silbersack
2ccbe4b596 Style fixes
Submitted by:	bde
2004-02-04 08:14:47 +00:00
Robert Watson
5e312ddcc6 A variety of further cleanups to ttyinfo():
- Rename temporary variable names ("tmp", "tmp2") to more informative
  names ("load", "pctcpu", "rss", ...)

- Unclutter indentation and return paths: rather than lots of nested
  ifs, simply return earlier if it's not going to work out.  Simplify
  general structure and avoid "deep" code.

- Comment on the thread/process selection and locking.

- Correct handling of "running"/"runnable" states, avoid "unknown"
  that people were seeing for running processes.  This was due to
  a misunderstanding of the more complex state machine / inhibitors
  behavior of KSE.

- Do perform ttyinfo() printing on KSE (P_SA) processes; it seems
  generally to work.

While I initially attempted to formulate this as two commits (one
layout, the other content), I concluded that the layout changes were
really structural changes.

Many elements submitted by:  bde
2004-02-04 05:46:05 +00:00
John Baldwin
3e9ac3ebf2 Remove a bogus assertion.
Noticed by:	bde
Pointy hat to:	jhb
2004-02-03 15:14:27 +00:00
Daniel Eischen
b5426f096b Regen after adding ksem_timedwait(). 2004-02-03 05:11:31 +00:00
Daniel Eischen
aae94fbbb6 Add ksem_timedwait() to complement ksem_wait().
Glanced at by:	alfred
2004-02-03 05:08:32 +00:00
Robert Watson
4f638130c3 Don't dec/inc the amountpipes counter every time we resize a pipe --
instead, just dec/inc in the ctor/dtor.  For now, increment/decrement
in two's, since we're now performing the operation once per pair,
not once per pipe.  Not really any measurable performance change
in my micro-benchmarks, but doing less work is good, especially when
it comes to atomic operations.

Suggested by:	alc
2004-02-03 04:55:24 +00:00
Robert Watson
9a830ddc54 Catch instances of (pipe == NULL) that were obsoleted with recent
changes to jointly allocated pipe pairs.  Replace these checks
with pipe_present checks.  This avoids a NULL pointer dereference
when a pipe is half-closed.

Submitted by:	Peter Edwards <peter.edwards@openet-telecom.com>
2004-02-03 02:50:51 +00:00
John Baldwin
9c9c52a3ed - Assert that witness_cold is not true in enroll().
- Only check witness_watch once in enroll().

Reported by:	ru (2)
2004-02-02 22:15:17 +00:00
Pawel Jakub Dawidek
3410b19324 Fix many issues related to mount/unmount:
1. Root from inside a jail was able to unmount any file system
   (except /).
2. Unprivileged root was able to unmount file systems mounted by
   privileged root (except /).
3. A user from inside a jail was able to mount a file system when
   sysctl vfs.usermount was set to 1.
4. A user was able to mount a file system when vfs.usermount was set to 1
   (that's ok) and unmount it even if vfs.usermount was equal to 0
   (that's not correct).

Possibility from point 1 was reported by: Dariusz Kowalski <darek@76.pl>

Only a part of this fix will be MFC'ed (if approved).

PR:		kern/60149
Reviewed by:	rwatson
Approved by:	scottl (mentor)
MFC after:	3 days
2004-02-02 19:02:05 +00:00
Mike Silbersack
02ec600572 Remove debugging code that slipped into the previous commit.
Spotted by:	bde
2004-02-02 09:09:59 +00:00
Jeff Roberson
b209e5e3e4 - style fixes to the critical_exit() KASSERT().
Submitted by:	bde
2004-02-02 08:13:27 +00:00
Jeff Roberson
0392e39dff - Allow interactive tasks to use the maximum time-slice. This is not as
detrimental as I thought it would be in the case of massive process
   storms from a shell and it makes regular desktop usage noticeably
   better.
2004-02-01 10:38:13 +00:00
Mike Silbersack
beb699c7ba Rewrite sendfile's header support so that headers are now sent in the first
packet along with data, instead of in their own packet.  When serving files
of size (packetsize - headersize) or smaller, this will result in one less
packet crossing the network.  Quick testing with thttpd and http_load has
shown a noticeable performance improvement in this case (350 vs 330 fetches
per second.)

Included in this commit are two support routines, iov_to_uio, and m_uiotombuf;
these routines are used by sendfile to construct the header mbuf chain that
will be linked to the rest of the data in the socket buffer.
2004-02-01 07:56:44 +00:00
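
For context, a hedged userland example of the header path this optimizes: a
response header supplied through struct sf_hdtr now goes out in the first
packet together with file data. The helper is illustrative.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int
    send_with_header(int fd, int sock, void *hdr, size_t hdrlen, size_t filelen)
    {
            struct iovec iov = { .iov_base = hdr, .iov_len = hdrlen };
            struct sf_hdtr hdtr = {
                    .headers = &iov, .hdr_cnt = 1,
                    .trailers = NULL, .trl_cnt = 0,
            };
            off_t sbytes;

            /* Header mbufs are linked ahead of the file data in the
               socket buffer, so both can share the first packet. */
            return (sendfile(fd, sock, 0, filelen, &hdtr, &sbytes, 0));
    }
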
Jeff Roberson
f2f51f8ab8 - Disable ithread binding in all cases for now. This doesn't make as much
sense with sched_4bsd as it does with sched_ule.
 - Use P_NOLOAD instead of the absence of td->td_ithd to determine whether or
   not a thread should be accounted for in sched_tdcnt.
2004-02-01 06:20:18 +00:00
Robert Watson
4795b82c13 Coalesce pipe allocations and frees. Previously, the pipe code
would allocate two 'struct pipe's from the pipe zone, and malloc a
mutex.

- Create a new "struct pipepair" object holding the two 'struct
  pipe' instances, struct mutex, and struct label reference.  Pipe
  structures now have a back-pointer to the pipe pair, and a
  'pipe_present' flag to indicate whether the half has been
  closed.

- Perform mutex init/destroy in zone init/destroy, avoiding
  reallocating the mutex for each pipe.  Perform most pipe structure
  setup in zone constructor.

- VM memory mappings for pageable buffers are still done outside of
  the UMA zone.

- Change MAC API to speak 'struct pipepair' instead of 'struct pipe',
  update many policies.  MAC labels are also handled outside of the
  UMA zone for now.  Label-only policy modules don't have to be
  recompiled, but if a module is recompiled, its pipe entry points
  will need to be updated.  If a module actually reached into the
  pipe structures (unlikely), that would also need to be modified.

These changes substantially simplify failure handling in the pipe
code as there are many fewer possible failure modes.

On half-close, pipes no longer free the 'struct pipe' for the closed
half until a full-close takes place.  However, VM mapped buffers
are still released on half-close.

Some code refactoring is now possible to clean up some of the back
references, etc; this patch attempts not to change the structure
of most of the pipe implementation, only allocation/free code
paths, so as to avoid introducing bugs (hopefully).

This cuts about 8%-9% off the cost of sequential pipe allocation
and free in system call tests on UP and SMP in my micro-benchmarks.
May or may not make a difference in macro-benchmarks, but doing
less work is good.

Reviewed by:	juli, tjr
Testing help:	dwhite, fenestro, scottl, et al
2004-02-01 05:56:51 +00:00
Jeff Roberson
40ece05382 - Revert rev 1.240 we no longer need a kthread for loadav(). 2004-02-01 05:37:36 +00:00
Jeff Roberson
e7f004fe23 - Use sched_load() rather than grabbing the sx lock and traversing the proc
table to discover the load.
2004-02-01 02:51:33 +00:00
Jeff Roberson
33916c360e - Add a new member to struct kseq called ksq_sysload. This is intended to
track the load for the sched_load() function.  In the SMP case this member
   is not defined because it would be redundant with the ksg_load member
   which already tracks the non ithd load.
 - For sched_load() in the UP case simply return ksq_sysload.  In the SMP
   case traverse the list of kseq groups and sum up their ksg_load fields.
2004-02-01 02:48:36 +00:00
Jeff Roberson
ca59f15272 - Keep a variable 'sched_tdcnt' that is used for the local implementation
of sched_load().  This variable tracks the number of running and runnable
   non ithd threads.  This removes the need to traverse the proc table and
   discover how many threads are runnable.
2004-02-01 02:46:47 +00:00
Robert Watson
fca542bcaa Move KASSERT regarding td_critnest to after the value of td is set to
curthread, to avoid warning and incorrect behavior.

Hoped not to mind:	jeff
2004-02-01 02:31:36 +00:00
Jeff Roberson
6767c6547b - Assert that td_critnest > 0 in critical_exit() to catch cases of
unbalanced uses of the critical_* api.
2004-02-01 01:24:54 +00:00
Robert Watson
26518e8d8c Fix an error in a KASSERT string: it's pipe_free_kmem(), not
pipespace(), that contains this KASSERT.
2004-01-31 23:03:22 +00:00
Poul-Henning Kamp
be8a62e821 Introduce the SO_BINTIME option which takes a high-resolution timestamp
at packet arrival.

For benchmarking purposes SO_BINTIME is preferable to SO_TIMEVAL
since it has higher resolution and lower overhead.  Simultaneous
use of the two options is possible and they will return consistent
timestamps.

This introduces an extra test and a function call for SO_TIMEVAL, but I have
not been able to measure that.
2004-01-31 10:40:25 +00:00
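
A hedged sketch of requesting the new timestamps; the option name comes from
the commit, and the assumption here is that, like SO_TIMESTAMP, the stamp is
delivered with received data as control information.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>           /* struct bintime */

    static int
    enable_bintime_stamps(int s)
    {
            int on = 1;

            /* Packets received on s should now carry a struct bintime
               arrival timestamp in their control data. */
            return (setsockopt(s, SOL_SOCKET, SO_BINTIME, &on, sizeof(on)));
    }
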
Robert Watson
30a9f26db2 Assert process lock in ptracestop(), since we're going to rely
on it, and later unlock it.
2004-01-29 00:58:21 +00:00
Robert Watson
94ffb20d72 Add a reset sysctl for mutex profiling: zeros all of the mutex
profiling buffers and hash table.  This makes it a lot easier to
do multiple profiling runs without rebooting or performing
gratuitous arithmetic.  Sysctl is named debug.mutex.prof.reset.

Reviewed by:	jake
2004-01-28 22:11:53 +00:00
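
An illustrative way to trigger the reset from userland between profiling runs;
the sysctl name is from the commit, and the assumption is that any write to it
performs the reset.

    #include <sys/types.h>
    #include <sys/sysctl.h>

    /* Zero the mutex profiling buffers and hash table. */
    static int
    reset_mutex_profiling(void)
    {
            int one = 1;

            return (sysctlbyname("debug.mutex.prof.reset", NULL, NULL,
                &one, sizeof(one)));
    }
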
John Baldwin
d5b75694e7 Move the loadav() callout into its own kthread since it uses allproc_lock
which is a sleepable lock and thus is not safe to acquire from a callout
routine.
2004-01-28 20:44:41 +00:00
John Baldwin
8d768e7676 Rework witness_lock() to make it slightly more useful and flexible.
- witness_lock() is split into two pieces: witness_checkorder() and
  witness_lock().  Witness_checkorder() determines if acquiring a specified
  lock at the time it is called would result in a lock order violation.  It
  optionally adds a new lock order relationship as well.  witness_lock()
  updates witness's data structures to assume that a lock has been acquired
  by sticking a new lock instance in the appropriate lock instance list.
- The mutex and sx lock functions now call checkorder() prior to trying to
  acquire a lock and continue to call witness_lock() after the acquire is
  completed.  This will let witness catch a deadlock before it happens
  rather than trying to do so after the threads have deadlocked (i.e. never
  actually report it).
- A new function witness_defineorder() has been added that adds a lock
  order between two locks at runtime without having to acquire the locks.
  If the lock order cannot be added it will return an error.  This function
  is available to programmers via the WITNESS_DEFINEORDER() macro which
  accepts either two mutexes or two sx locks as its arguments.
- A few simple wrapper macros were added to allow developers to call
  witness_checkorder() anywhere as a way of enforcing locking assertions
  in code that might acquire a certain lock in some situations.  The
  macros are: witness_check_{mutex,shared_sx,exclusive_sx} and take an
  appropriate lock as the sole argument.
- The code to remove a lock instance from a lock list in witness_unlock()
  was unnested by using a goto to vastly improve the readability of this
  function.
2004-01-28 20:39:57 +00:00
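
A hedged sketch of the new WITNESS_DEFINEORDER() macro described above:
declaring a runtime lock order between two mutexes without acquiring them.
The mutexes and init routine are placeholders.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx foo_mtx, bar_mtx;

    static void
    foo_bar_order_init(void)
    {
            mtx_init(&foo_mtx, "foo", NULL, MTX_DEF);
            mtx_init(&bar_mtx, "bar", NULL, MTX_DEF);

            /* Teach witness that foo_mtx is always taken before bar_mtx,
               without actually acquiring either lock. */
            if (WITNESS_DEFINEORDER(&foo_mtx, &bar_mtx) != 0)
                    printf("could not add foo/bar lock order\n");
    }
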
John Baldwin
62a0fd943c Use mtx_assert() rather than using a home-rolled version. 2004-01-28 20:26:39 +00:00
Alexander Kabaev
975634280a Move the part of the comment which applies to osigsuspend where
it belongs. The current sigsuspend syscall does expect a pointer
to the mask as argument.

Submitted by:	Igor Sysoev <is at rambler-co dot ru>
2004-01-28 06:06:04 +00:00
Dag-Erling Smørgrav
84344f9fbf Rename the kern.vm.kmem.size tunable to the more logical vm.kmem_size. To
assure backward compatibility (conditional on !BURN_BRIDGES), look it up
by its old name first, and log a warning (but accept the setting) if it
was found.  If both the old and new name are defined, the new name takes
precedence.

Also export vm.kmem_size as a read-only sysctl variable; I find it hard to
tune a parameter when I don't know its default value, especially when that
default value is computed at boot time.
2004-01-27 15:59:38 +00:00
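
Since the value is now exported read-only, userland can query it directly; a
small hedged example (the u_long width is an assumption):

    #include <sys/types.h>
    #include <sys/sysctl.h>

    /* Returns the kernel's vm.kmem_size, or 0 on error. */
    static unsigned long
    kmem_size(void)
    {
            unsigned long sz;
            size_t len = sizeof(sz);

            if (sysctlbyname("vm.kmem_size", &sz, &len, NULL, 0) != 0)
                    return (0);
            return (sz);
    }
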
Robert Watson
6bea667f63 When aborting fork() due to a failure, if using MAC, make sure to clean
up the p_label field.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, McAfee Research
2004-01-25 18:42:18 +00:00
Ruslan Ermilov
33fe8fd0df Register the uart(4)'s spin lock with witness(4). 2004-01-25 15:04:37 +00:00
Jeff Roberson
c77ac1fdee - sched_strict has been dead for a long time now. Get rid of it. 2004-01-25 08:58:14 +00:00
Jeff Roberson
c494ddc8a1 - Clean up KASSERTS. 2004-01-25 08:57:38 +00:00
Jeff Roberson
5a2b158d8d - Correct function names listed in KASSERTs. These were copied from other
code and it was sloppy of me not to adjust these sooner.
2004-01-25 08:21:46 +00:00
Jeff Roberson
e17c57b14b - Implement cpu pinning and binding. This is accomplished by keeping a per-
cpu run queue that is only used for pinned or bound threads.

Submitted by:	Chris Bradfield <chrisb@ation.org>
2004-01-25 08:00:04 +00:00
Jeff Roberson
d1605f0ac9 - Use a unique string for the sched_setup SYSINIT and rename sched_setup to
synch_setup.  The schedulers use the sched_setup function name.
2004-01-25 07:49:45 +00:00
Jeff Roberson
29bcc4514f - Add a flags parameter to mi_switch. The value of flags may be SW_VOL or
SW_INVOL.  Assert that one of these is set in mi_switch() and properly
   adjust the rusage statistics.  This is to simplify the large number of
   users of this interface which were previously all required to adjust the
   proper counter prior to calling mi_switch().  This also facilitates more
   switch and locking optimizations.
 - Change all callers of mi_switch() to pass the appropriate parameter and
   remove direct references to the process statistics.
2004-01-25 03:54:52 +00:00
Robert Watson
8dc10be885 Add some basic support for measuring sleep mutex contention to the
mutex profiling code.  As with existing mutex profiling, measurement
is done with respect to mtx_lock() instances in the code, as opposed
to specific mutexes.  In particular, measure two things:

(1) Lock contention.  How often did this mtx_lock() call get made and
    have to sleep (or almost sleep) waiting for the lock.  This helps
    identify the "victims" of contention.

(2) Hold contention.  How often, while the lock was held by a thread
    as a result of this mtx_lock(), did another thread try to acquire
    the same mutex.  This helps identify the causes of contention.

I'm currently exploring adding measurement of "time waited for the
lock", but the current implementation has proven useful to me so far
so I figured I'd commit it so others could try it out.  Note that this
increases the size of mutexes when MUTEX_PROFILING is enabled, so you
might find you need to further bump UMA_BOOT_PAGES.  Fixes welcome.

The once over:	des, others
2004-01-25 01:59:27 +00:00