Commit Graph

3434 Commits

Dag-Erling Smørgrav
dd488b6dd8 Retire kernfs (kernel part). 2000-12-28 12:17:35 +00:00
Paul Saab
6a10f299b9 Send a SIGCONT when detaching or continuing the execution of a traced
process.  This fixes a problem where attaching to a process in gdb
left it in the STOP'd state after quitting gdb.
This whole process seems a bit suspect, but this seems to work.

Reviewed by:	peter
2000-12-28 08:34:21 +00:00
Peter Wemm
4058c0f013 Pull out the module path from the loader, i.e. if you boot from
/boot/kernel.foobar/* then that had better be in the path ahead of the
others.

Submitted by:  Daniel J. O'Connor <darius@dons.net.au>
PR: 23662
2000-12-28 08:14:58 +00:00
Matthew Dillon
2b6b0df712 This implements a better launder limiting solution. There was a solution
in 4.2-REL which I ripped out in -stable and -current when implementing the
low-memory handling solution.  However, maxlaunder turns out to be the saving
grace in certain very heavily loaded systems (e.g. newsreader box).  The new
algorithm limits the number of pages laundered in the first pageout daemon
pass.  If that is not sufficient then successive passes will be run without
any limit.

Write I/O is now pipelined using two sysctls, vfs.lorunningspace and
vfs.hirunningspace.  This prevents excessive buffered writes in the
disk queues which cause long (multi-second) delays for reads.  It leads
to more stable (less jerky) and generally faster I/O streaming to disk
by allowing required read ops (e.g. for indirect blocks and such) to occur
without interrupting the write stream, among other things.

NOTE: eventually, filesystem write I/O pipelining needs to be done on a
per-device basis.  At the moment it is globalized.
2000-12-26 19:41:38 +00:00
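
The two knobs are exported as plain integers via sysctl; the declarations would look roughly like the following sketch (the variable names come from the commit message, the flags and description strings are assumptions):

    int lorunningspace;     /* resume queueing writes below this much in-flight write I/O */
    int hirunningspace;     /* stall new writes once this much write I/O is in flight */

    SYSCTL_INT(_vfs, OID_AUTO, lorunningspace, CTLFLAG_RW, &lorunningspace, 0,
        "Minimum outstanding write I/O before more writes are queued");
    SYSCTL_INT(_vfs, OID_AUTO, hirunningspace, CTLFLAG_RW, &hirunningspace, 0,
        "Maximum outstanding write I/O before new writes are stalled");
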
Jake Burkholder
98f03f9030 Protect proc.p_pptr and proc.p_children/p_sibling with the
proctree_lock.

linprocfs not locked pending response from informal maintainer.

Reviewed by:	jhb, -smp@
2000-12-23 19:43:10 +00:00
Matt Jacob
661f2768f4 Make sure we have a non-null proc pointer before referring to fields
off of it.
2000-12-23 07:33:32 +00:00
Bosko Milekic
2a0c503e7a * Rename M_WAIT mbuf subsystem flag to M_TRYWAIT.
This is because calls with M_WAIT (now M_TRYWAIT) may not wait
  forever when nothing is available for allocation, and may end up
  returning NULL. Hopefully we now communicate more of the right thing
  to developers and make it very clear that it's necessary to check whether
  calls with M_(TRY)WAIT also resulted in a failed allocation.
  M_TRYWAIT basically means "try harder, block if necessary, but don't
  necessarily wait forever." The time spent blocking is tunable with
  the kern.ipc.mbuf_wait sysctl.
  M_WAIT is now deprecated but still defined for the next little while.

* Fix a typo in a comment in mbuf.h

* Fix some code that was actually passing the mbuf subsystem's M_WAIT to
  malloc(). Made it pass M_WAITOK instead. If we were ever to redefine the
  value of the M_WAIT flag, this could have become a big problem.
2000-12-21 21:44:31 +00:00
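
A minimal sketch of the calling convention the commit above asks for: even with M_TRYWAIT an mbuf allocation can fail, so the result must be checked (the surrounding error handling is hypothetical):

    struct mbuf *m;

    MGET(m, M_TRYWAIT, MT_DATA);    /* may block for a while, but can still return NULL */
    if (m == NULL)
        return (ENOBUFS);           /* the caller must handle the failed allocation */
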
Poul-Henning Kamp
b80e3b4191 A last minute brucification resulted in syntax errors in the previous commit. 2000-12-20 22:07:59 +00:00
Poul-Henning Kamp
e2a09b2649 Replace logwakeup() with "int msgbuftrigger". There is little
point in calling a function just to set a flag.

Keep better track of the syslog FAC/PRI code and try to DTRT if
they mingle.

Log all writes to /dev/console to syslog with <console.info>
priority.  The formatting is not preserved; there is no robust
way of doing it.  (Ideas with patches welcome).
2000-12-20 21:50:37 +00:00
John Baldwin
48786ef412 Fix another sched_sihand -> sched_swi in a KTR trace message. 2000-12-18 23:59:34 +00:00
Jake Burkholder
1156bc4de2 Whitespace. Fix a comment block and an if statement that were wider
than 80 characters.
2000-12-18 07:10:04 +00:00
Marcel Moolenaar
d96cfeae0c Fix a typo that allowed signals caused by traps to be delivered
to the process when said signal is masked.

PR: 23457
Submitted by: Yasuhiko Watanabe <yasu@mrit.mei.co.jp>
2000-12-16 21:03:48 +00:00
John Baldwin
a9b1370731 Delay waking up processes select'ing on the log device directly from
the kernel console.  Instead, change logwakeup() to set a flag in the
softc.  A callout then wakes up every so often and wakes up any processes
selecting on /dev/log (such as syslogd) if the flag is set.  By default
this callout fires 5 times a second, but that can be adjusted by the
sysctl kern.log_wakeups_per_second.

Reviewed by:	phk
2000-12-15 21:23:32 +00:00
John Baldwin
ffc831da27 Stick the kthread API in a kthread_* namespace, and the specialized kproc
functions in a kproc_* namespace.

Reviewed by:	-arch
2000-12-15 20:08:20 +00:00
Poul-Henning Kamp
f84ee0ff00 Don't clone impossible unit numbers for disks. 2000-12-15 17:55:24 +00:00
John Baldwin
de3622188a Add in MI implementations of the KTR trace buffer ddb commands. The
commands have also been slightly updated as follows:
- Use ktr_idx to find the newest entry rather than walking the buffer
  comparing timespecs.  Timespecs are not always unique after the change
  to use getnanotime(9).
- Add a new verbose setting.  When the verbose setting is on, then the
  timestamp is printed with each message.  If KTR_EXTEND is on, then the
  filename and line number are output as well.  By default this option is
  off.  It can be turned on with the 'v' modifier passed to the 'tbuf'
  and 'tall' commands.  For the 'tnext' command, the 'v' modifier toggles
  the verbose mode.
- Only display the cpu number for each message on SMP systems.
- Don't display anything for an empty entry that hasn't been used yet.
2000-12-15 00:01:20 +00:00
John Baldwin
562e4ffe86 - Add a new flag MTX_QUIET that can be passed to the various mtx_*
functions.  If this flag is set, then no KTR log messages are issued.
  This is useful for blocking excessive logging, such as with the internal
  mutex used by the witness code.
- Use MTX_QUIET on all of the mtx_enter/exit operations on the internal
  mutex used by the witness code.
- If we are in a panic, don't do witness checks in witness_enter(),
  witness_exit(), and witness_try_enter(), just return.
2000-12-13 21:53:42 +00:00
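
At a call site the new flag looks roughly like this; the name w_mtx for the witness code's internal spin lock is an assumption here:

    mtx_enter(&w_mtx, MTX_SPIN | MTX_QUIET);    /* no KTR log message for this acquire */
    /* ... update witness bookkeeping ... */
    mtx_exit(&w_mtx, MTX_SPIN | MTX_QUIET);
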
Dag-Erling Smørgrav
60ec413038 String buffer API 2000-12-13 19:51:07 +00:00
John Baldwin
05f9877c15 If we fail to emulate a vm86 trap in kernel mode, then we use
vm86_trap() to return to the calling program directly.  vm86_trap()
doesn't return, thus it was never returning to trap() to release
Giant.  Thus, release Giant before calling vm86_trap().
2000-12-13 18:57:15 +00:00
Kirk McKusick
0bf3b91d8a Use proper mutex locking when calling setrunnable from speedup_syncer().
Submitted by:	Tor.Egge@fast.no
2000-12-13 01:06:53 +00:00
Jake Burkholder
c0c2557090 - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead
of explicit calls to lockmgr.  Also provides macros for the flags
  passed to specify shared, exclusive or release which map to the
  lockmgr flags.  This is so that the use of lockmgr can be easily
  replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
2000-12-13 00:17:05 +00:00
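
A hedged sketch of what such a macro wrapper could look like; the flag names, the lockmgr() argument list and the CURPROC usage are assumptions, not the committed code:

    #define ALLPROC_LOCK(how)       lockmgr(&allproc_lock, (how), NULL, CURPROC)
    #define ALLPROC_LOCK_READ       LK_SHARED       /* shared (read) acquire */
    #define ALLPROC_LOCK_WRITE      LK_EXCLUSIVE    /* exclusive (write) acquire */
    #define ALLPROC_LOCK_RELEASE    LK_RELEASE      /* release */

    /* typical use while walking the process list */
    ALLPROC_LOCK(ALLPROC_LOCK_READ);
    LIST_FOREACH(p, &allproc, p_list) {
        /* read-only inspection of p */
    }
    ALLPROC_LOCK(ALLPROC_LOCK_RELEASE);
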
Matt Jacob
1426b70df8 only include sys/proc.h once 2000-12-12 21:20:48 +00:00
David E. O'Brien
184265fd42 Include sys/proc.h so this compiles [on the Alpha]. 2000-12-12 21:18:13 +00:00
Matt Jacob
093d32e535 We reference curproc, ergo need <sys/proc.h> 2000-12-12 21:14:29 +00:00
Kirk McKusick
1f7d250182 Change the proc information returned from the kernel so that it
no longer contains kernel specific data structures, but rather
only scalar values and structures that are already part of the
kernel/user interface, specifically rusage and rtprio. It no
longer contains proc, session, pcred, ucred, procsig, vmspace,
pstats, mtx, sigiolst, klist, callout, pasleep, or mdproc. If
any of these changed in size, ps, w, fstat, gcore, systat, and
top would all stop working. The new structure has over 200 bytes
of unassigned space for future values to be added, yet is nearly
100 bytes smaller per entry than the structure that it replaced.
2000-12-12 07:25:57 +00:00
John Baldwin
06592dd188 - Convert the per-eventhandler list mutex to a lockmgr lock so that it can
be safely held across an eventhandler function call.
- Fix an instance of the head of an eventhandler list being read without
  the lock being held.
- Break down and use a SYSINIT at the new SI_SUB_EVENTHANDLER to initialize
  the eventhandler global mutex and the eventhandler list of lists rather
  than using a non-MP safe initialization during the first call to
  eventhandler_register().
- Add in a KASSERT() to eventhandler_register() to ensure that we don't try
  to register an eventhandler before things have been initialized.
2000-12-12 04:01:35 +00:00
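
For context, the consumer-side API this initialization protects looks roughly like the sketch below; shutdown_post_sync and SHUTDOWN_PRI_DEFAULT are standard eventhandler(9) names, while the hook function is hypothetical.  After this change, registering before SI_SUB_EVENTHANDLER has run would trip the new KASSERT().

    static void
    example_shutdown_hook(void *arg, int howto)
    {
        printf("example: system going down (howto=0x%x)\n", howto);
    }

    /* e.g. from a module's MOD_LOAD handler, well after SI_SUB_EVENTHANDLER */
    EVENTHANDLER_REGISTER(shutdown_post_sync, example_shutdown_hook, NULL,
        SHUTDOWN_PRI_DEFAULT);
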
Jake Burkholder
92cf772d8d - Add code to detect if a system call returns with locks other than Giant
held and panic if so (conditional on witness).
- Change witness_list to return the number of locks held so this is easier.
- Add kern/syscalls.c to the kernel build if witness is defined so that the
  panic message can contain the name of the offending system call.
- Add assertions that Giant and sched_lock are not held when returning from
  a system call, which were missing for alpha and ia64.
2000-12-12 01:14:32 +00:00
John Baldwin
d664747bfa - Don't bother taking a trace message if we have panic'd since doing so
can lead to further panics.
- Call getnanotime() instead of nanotime() for the timestamp.  nanotime()
  is more precise, but it also calls into the timer code, which results
  in mutex operations on the i386 arch.  If KTR_LOCK is turned on, then
  ktr_tracepoint() recurses on itself until it exhausts the kernel stack.
  Eventually this should change to use get_cyclecount() instead, but that
  can't happen if get_cyclecount() is calling nanotime() instead of
  getnanotime().
2000-12-12 00:43:50 +00:00
John Baldwin
428b4b5562 Oops, the witness mutex is a spin lock, so use MTX_SPIN in the call to
mtx_init().  Since the witness code ignores its internal mutex, this
doesn't result in any functional change.
2000-12-12 00:37:18 +00:00
David E. O'Brien
1a37aa566b Add `_PATH_DEVZERO'.
Use _PATH_* where possible.
2000-12-09 09:35:55 +00:00
David Malone
7cc0979fd6 Convert more malloc+bzero to malloc+M_ZERO.
Submitted by:	josh@zipperup.org
Submitted by:	Robert Drehmel <robd@gmx.net>
2000-12-08 21:51:06 +00:00
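
The conversion pattern in question, sketched with a hypothetical softc allocation:

    /* before */
    sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK);
    bzero(sc, sizeof(*sc));

    /* after */
    sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK | M_ZERO);
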
Poul-Henning Kamp
959b7375ed Staticize some malloc M_ instances. 2000-12-08 20:09:00 +00:00
Poul-Henning Kamp
06b6617e0b Kill some bogus "register" keywords.
Go Ansi on the functions.
2000-12-08 06:57:39 +00:00
Matthew Dillon
a41ce5d30b Only call bwillwrite() for vnodes. Do not penalize devices or pipes. 2000-12-07 23:45:57 +00:00
Poul-Henning Kamp
5e1aea9fd7 Hide intrstate in the #ifdef where it belongs. 2000-12-07 22:38:22 +00:00
Matthew Dillon
9440653d07 Add necessary bwillwrite() in writev() entry point.
Deal with excessive dirty buffers when msync() syncs non-contiguous
dirty buffers by checking for the case in UFS *before* checking for
clusterability.
2000-12-06 20:55:09 +00:00
Peter Wemm
138e514cb5 Untangle vfsinit() a bit. Use separate sysinit functions rather than
having a super-function calling bits all over the place.
2000-12-06 07:09:08 +00:00
Peter Wemm
7ca7bbb36b Simplify this a bit so that it doesn't have to generate silly redundant
__P() prototypes when an ansi-style static inline is a prototype already.
Since vnode_if.[ch] are generated on the fly, there are no CVS diffs to
mess up.
2000-12-06 06:59:38 +00:00
Peter Wemm
4366ac52ad This is kind of a nasty hack, but it appears to solve the Compaq DL360
SMP problem.  Compaq, in their infinite wisdom, forgot to put the IO apic
intpin #0 connection to the 8259 PIC into the mptable.  This hack is to
look and see if intpin #0 has *no* table entry and adds a fake ExtInt
entry for the remap routines to use.  isa/clock.c will still test the
interrupts.  This entry is only ever used on an already broken system.
2000-12-06 03:47:14 +00:00
John Baldwin
960d3c68ed Pass RFSTOPPED to fork1() in kthread_create() to avoid a race condition
where fork1() could put the process on the run queue where it could be
snatched up by another CPU before kthread_create() had set the proper
fork handler.  Instead, we put the new kthread on the runqueue after its
fork handler has been set.

Noticed by:	jake
Looked over by:	peter
2000-12-06 03:45:15 +00:00
John Baldwin
7b29322c25 Add in #include of <sys/lock.h> since it was axed from <sys/proc.h>.
Noticed by:	Wesley Morgan <morganw@chemikals.org>
Pointy hat to:	me
2000-12-06 00:33:58 +00:00
Alfred Perlstein
89b54bffe9 Add forgotten SYSCALL_MODULE_HELPER() for msgsys() syscall.
Discovered by: Valentin Chopov <valentin@valcho.net>
2000-12-05 23:05:45 +00:00
Jake Burkholder
1eb44f0270 Remove the last of the MD netisr code. It is now all MI. Remove
spending, which was unused now that all software interrupts have
their own thread.  Make the legacy schednetisr use an atomic op
for setting bits in the netisr mask.

Reviewed by:	jhb
2000-12-05 00:36:00 +00:00
Peter Wemm
5ee171d264 Cleanup some leftover lint from the old interrupt system.
Also, while here, run up to 32 interrupt sources on APIC systems.
Normalize INTREN/INTRDIS so they are the same on both UP and SMP systems
rather than sometimes a macro, and sometimes a function.

Reviewed by:  jhb, jakeb
2000-12-04 21:15:14 +00:00
Jake Burkholder
8dd431fcf7 Whitespace. Fix indentation, align comments. 2000-12-04 10:23:29 +00:00
Jake Burkholder
f6a6e37a2c Whitespace. Fix an overly long line. 2000-12-04 09:52:39 +00:00
Jake Burkholder
85b039fe64 Remove if defined(tahoe) cobwebs. 2000-12-04 09:49:34 +00:00
David Greenman
8f9a5273a3 Changed second argument in a call to sf_buf_free() to be NULL instead of
PAGE_SIZE to match the prototype better. The argument is ignored, so this
is just to silence the compile-time warning.

Pointed out by:	jhb
2000-12-03 01:35:46 +00:00
John Baldwin
4971f62a86 - Add a mutex to the proc structure p_mtx that will be used to lock accesses
to each individual proc.
- Initialize the lock during fork1(), and destroy it in wait1().
2000-12-03 01:22:34 +00:00
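
A sketch of that lifecycle, assuming the three-argument mtx_init() of the day and a "process lock" description string (both details assumed):

    /* fork1(): the new proc gets its own mutex */
    mtx_init(&p2->p_mtx, "process lock", MTX_DEF);

    /* wait1(): the mutex is destroyed along with the proc */
    mtx_destroy(&p->p_mtx);
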
Andrew Gallatin
19f085228f Correct int/long type mismatch in the proper place this time. freevnodes
and numvnodes are longs in the kernel.  They should remain longs in systat,
what really needs to change is that they should be using SYSCTL_LONG rather
than SYSCTL_INT.   I also changed wantfreevnodes to SYSCTL_LONG because I
happened to notice it.

I wish there was a way to find all of these automatically..

Pointed out by: bde
2000-12-02 20:08:33 +00:00
Jake Burkholder
a4bd171dbf Regen. 2000-12-02 05:45:32 +00:00
Jake Burkholder
86360fee54 Remove thr_sleep and thr_wakeup. Remove fields p_nthread and p_wakeup
from struct proc, which are now unused (p_nthread already was).
Remove process flag P_KTHREADP which was untested and only set
in vfs_aio.c (it should use kthread_create).  Move the yield
system call to kern_synch.c as kern_threads.c has been removed
completely.

moral support from:	alfred, jhb
2000-12-02 05:41:30 +00:00
John Baldwin
0ebabc93a4 Protect p_stat with sched_lock. 2000-12-02 01:32:51 +00:00
Bosko Milekic
794cd879fe Make sure to free the sf_buf if we've allocated it but fail to allocate
an mbuf (ENOBUFS) before returning so that we don't leak sf_bufs in
the case where we're out of mbufs.

Submitted by: David Greenman (dg)
2000-12-02 00:40:57 +00:00
John Baldwin
1c32c37c06 Protect p_stat with sched_lock. 2000-12-01 23:43:15 +00:00
John Baldwin
2925cbe569 Protect p_stat with sched_lock. 2000-12-01 16:59:02 +00:00
Alfred Perlstein
78525ce318 sysvipc loadable.
new syscall entry lkmressys - "reserved loadable syscall"

Make syscall_register allow overwriting of such entries (lkmressys).
2000-12-01 08:57:47 +00:00
Alfred Perlstein
3a4d365463 Add reserved lkmressys keyword. I swear, this script will die the
next time I need to hack on it.
2000-12-01 08:47:54 +00:00
Alfred Perlstein
1dc8643099 Implement the NOSTD syscall type: this creates the syscall args, but sticks
an lkmnosys into the sysent table so that SYSCALL_MODULE() works.
2000-12-01 07:40:20 +00:00
Alfred Perlstein
c5a86b0ab9 Translate alfred to English.
Submitted by: bde
2000-12-01 06:59:18 +00:00
Jake Burkholder
1512b5d6ab Use an mp-safe callout for endtsleep. 2000-12-01 04:55:52 +00:00
John Baldwin
2191340786 Use msleep() instead of mtx_exit()/tsleep() so that we release the lock and
go to sleep as an "atomic" operation.
2000-12-01 03:43:33 +00:00
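
The difference between the two patterns, as a sketch (foo, foo_mtx and the wmesg string are hypothetical):

    /* racy: a wakeup can arrive between mtx_exit() and tsleep() and be lost */
    mtx_exit(&foo_mtx, MTX_DEF);
    tsleep(&foo, PRIBIO, "foowt", 0);

    /* safe: msleep() releases foo_mtx and sleeps atomically, then
     * reacquires the mutex before returning */
    msleep(&foo, &foo_mtx, PRIBIO, "foowt", 0);
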
John Baldwin
472fd56ea5 Don't update p_stat in exit1() to SZOMB until after releasing the allproc
lock.  Otherwise, if we block on the backing mutex while releasing the
allproc lock, then when we resume, we will be at SRUN, and we will stay
that way all the way through cpu_exit.  As a result, our parent will never
harvest us.
2000-12-01 03:42:17 +00:00
Jake Burkholder
96fde7da19 Use msleep instead of mtx_exit; tsleep; mtx_enter, which is not safe. 2000-12-01 02:18:38 +00:00
John Baldwin
6936206ebd Split the WITNESS and MUTEX_DEBUG options apart so that WITNESS does not
depend on MUTEX_DEBUG.  The MUTEX_DEBUG option turns on extra assertions
and checks to verify that mutexes themselves are implemented properly.
The WITNESS option uses extra checks and diagnostics to verify that other
code is using mutexes properly.
2000-12-01 00:10:59 +00:00
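
In kernel-configuration terms the split means the two options can now be chosen independently, roughly:

    options         MUTEX_DEBUG     # extra checks on the mutex implementation itself
    options         WITNESS         # checks that other code uses mutexes properly
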
Robert Watson
cf64863a1e o Add a comment to exec_check_permissions() to indicate that the
passed vnode must be locked; this is the case because of calls
  to VOP_GETATTR(), VOP_ACCESS(), and VOP_OPEN().  This becomes
  more of an issue when VOP_ACCESS() gets a bit more complicated,
  which it does when you introduce ACL, Capability, and MAC
  support.

Obtained from:	TrustedBSD Project
2000-11-30 21:06:05 +00:00
Alfred Perlstein
c6ab5768aa only call bwillwrite() to stall on IO when dealing with VNODEs otherwise
we will stall on non-disk IO for things like fifos and sockets
2000-11-30 20:23:14 +00:00
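
The shape of the check being described, as a sketch in the write path (exact placement assumed):

    if (fp->f_type == DTYPE_VNODE)
        bwillwrite();       /* throttle only I/O that dirties disk buffers */
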
Alfred Perlstein
237710275e This is a fix for a problem described in PR kern/19572. It was
recently discussed at -hackers.  The problem is a null-pointer
dereference that happens in kern/vfs_lookup.c when accessing ".."
while the v_mount entry of the current directory vnode is NULL.  This
happens when a volume is forcibly unmounted and the vnode for a
working directory in the mounted volume is cleared.

PR: 23191
Submitted by: Thomas Moestl <tmoestl@gmx.net>
2000-11-30 20:04:44 +00:00
Alfred Perlstein
1baf4aabbc use an opportunistic locking strategy with the uidinfo structures to avoid
locking the global hash on each uifree()

make struct uidinfo only visible to the kernel

make uihold() a function rather than a macro to reduce bloat

swap the order of a spl/mutex to maintain consistency
2000-11-30 19:15:22 +00:00
Alfred Perlstein
5c3f70d7c0 make crfree into a function rather than a macro to avoid bloat because of
the mutex acquire/release

reorder struct ucred
2000-11-30 19:09:48 +00:00
Kirk McKusick
6d984dfa6a Get rid of a bogus mtx_exit (it was attempting to release an
already released mutex).

Submitted by:	"Chris Knight" <chris@aims.com.au>
2000-11-30 19:09:29 +00:00
Marcel Moolenaar
d034d459da Don't use p->p_sigstk.ss_flags to keep state of whether the
process is on the alternate stack or not.  For compatibility
with sigstack(2), the state is still updated when needed.

We now determine whether the process is on the alternate
stack by looking at its stack pointer. This allows a process
to siglongjmp from a signal handler on the alternate stack
to the place of the sigsetjmp on the normal stack. When
maintaining state, this would have invalidated the state
information and caused a subsequent signal to be delivered
on the normal stack instead of the alternate stack.

PR: 22286
2000-11-30 05:23:49 +00:00
John Baldwin
1bd0eefb4c Fix up priority propagation:
- Use a better test for determining when a process is running.
- Convert some checks to assertions.
- Remove unnecessary tests.
- Save the priority before acquiring a mutex rather than in msleep(9).
2000-11-30 00:51:16 +00:00
John Baldwin
86327ad8a4 Set p_mtxname when blocking on a mutex and clear it when waking up. 2000-11-29 20:17:15 +00:00
John Baldwin
62ca2477d8 Save a copy of p_mtxname in e_mtxname when creating an eproc. 2000-11-29 20:14:50 +00:00
John Baldwin
f404050e44 Use an atomic operation with an appropriate memory barrier when releasing
a contested sleep mutex in the case that at least two processes are blocked
on the contested mutex.
2000-11-29 18:41:19 +00:00
John Baldwin
8f838cb563 The sched_lock mutex goes after the sio mutex in the locking order since
a software interrupt can be scheduled in the sio interrupt handler while
the sio mutex is held.
2000-11-29 18:38:14 +00:00
John Baldwin
bbc7a98a31 Save the line number and filename of the last mtx_enter operation for
spin locks.  We already do this for sleep locks.
2000-11-29 18:37:01 +00:00
John Baldwin
e2979dcc85 Don't drop Giant and the passed in mutex incorrectly in the
cold || panicstr case.  Do drop the passed in mutex in that case if
PDROP is specified.
2000-11-29 18:32:50 +00:00
John Baldwin
2bcc63c545 Only print out APIC info on an SMP system during a panic if APIC_IO is
defined.
2000-11-29 01:33:15 +00:00
John Baldwin
8d9888d37a Don't wait forever for CPUs to stop or restart. Instead, give up after a
timeout.  If DIAGNOSTIC is turned on, then display a message to the console
with a map of which CPUs failed to stop or restart.  This gives an SMP box
at least a fighting chance of getting into DDB if one of the other CPUs has
interrupts disabled.
2000-11-28 23:52:36 +00:00
Jordan K. Hubbard
7022a92395 Kernel support for erase2 character.
Submitted by:	Rui Pedro Mendes Salgueiro <rps@mat.uc.pt>
2000-11-28 20:03:23 +00:00
Matthew N. Dodd
46aa504e42 Alter the return value and arguments of the GET_RESOURCE_LIST bus method.
Alter consumers of this method to conform to the new convention.
Minor cosmetic adjustments to bus.h.

This isn't of concern as this interface isn't in use yet.
2000-11-28 06:49:15 +00:00
Jake Burkholder
4f55983606 Use callout_reset instead of timeout(9). Most callouts are statically
allocated; two have been added to struct proc for setitimer and sleep.

Reviewed by:	jhb, jlemon
2000-11-27 22:52:31 +00:00
John Baldwin
91b7c97713 Drop Giant around the mi_switch() call in yield().
Submitted by:	tegge
2000-11-27 18:48:13 +00:00
Alfred Perlstein
1e5d626ad9 ucred system overhaul:
1) mpsafe (protect the refcount with a mutex).
2) reduce duplicated code by removing the inlined crdup() from crcopy()
   and make crcopy() call crdup().
3) use M_ZERO flag when allocating initial structs instead of calling bzero
   after allocation.
4) expand the size of the refcount from a u_short to a u_int; by using
   shorts we might have an overflow.

Glanced at by: jake
2000-11-27 00:09:16 +00:00
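
Point 1 amounts to something like the crfree() sketch below; cr_ref is the real reference count field, while the cr_mtx member name and the exact free path are assumptions:

    void
    crfree(struct ucred *cr)
    {
        mtx_enter(&cr->cr_mtx, MTX_DEF);
        if (--cr->cr_ref == 0) {
            mtx_exit(&cr->cr_mtx, MTX_DEF);
            mtx_destroy(&cr->cr_mtx);
            FREE(cr, M_CRED);
        } else
            mtx_exit(&cr->cr_mtx, MTX_DEF);
    }
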
Alfred Perlstein
0931dcefb3 Move the #define of _KERN_MUTEX_C_ so that it's before any system headers
are included.  System headers can include sys/mutex.h and then certain
macros do not get defined.

Reviewed by: jake
2000-11-26 21:14:17 +00:00
Poul-Henning Kamp
a52585d77e Simplify the tprintf() API.
Lose the special <sys/tprintf.h> #include file.
2000-11-26 20:35:21 +00:00
Poul-Henning Kamp
4d88c4598f Make log(-1, ...) do what addlog(...) did.
Replace all uses of addlog(...) with log(-1, ...)

Remove bogus "register" keywords in subr_prf.c

Make log() return void.
2000-11-26 19:34:06 +00:00
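
Usage after the change, sketched (device name and message are hypothetical):

    log(LOG_ERR, "%s: read error on block %ld", devname, (long)blkno);
    log(-1, ", retrying\n");    /* continuation line; formerly addlog() */
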
Poul-Henning Kamp
cb7e609a3c Make diskerr() always log with printf. 2000-11-26 19:29:15 +00:00
Jake Burkholder
a5d5c61c12 Add uidinfo hash and uidinfo struct to the witness order list. 2000-11-26 15:05:46 +00:00
Alfred Perlstein
9c19bcddf0 Make uidinfo subsystem mpsafe
use a mutex lock when looking up/deleting entries on the hashlist
use a mutex lock on each uidinfo when updating fields

make uifree() a void function rather than 'int' since no one cares

allocate uidinfo structs with the M_ZERO flag and don't explicitly initialize
them

Assisted by: eivind, jhb, jakeb
2000-11-26 12:08:17 +00:00
Jonathan Lemon
e82ac18e52 Revert the last commit to the callout interface, and add a flag to
callout_init() indicating whether the callout is safe or not.  Update
the callers of callout_init() to reflect the new interface.

Okayed by: Jake
2000-11-25 06:22:16 +00:00
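
After this change a consumer states the MP-safety of its handler once, at initialization time, and then just uses callout_reset(); the flag value, handler and softc below are assumptions:

    static struct callout my_ch;

    callout_init(&my_ch, 0);                    /* 0: handler still runs under Giant */
    callout_reset(&my_ch, hz, my_timeout, sc);  /* run my_timeout(sc) in about one second */
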
Jake Burkholder
249849e0b9 - Rename callout_reset to _callout_reset and add a flags argument.
- Add macros callout_reset, which does the obvious, and
  mp_callout_reset, which passes the CALLOUT_MPSAFE flag.
2000-11-25 03:34:49 +00:00
Jake Burkholder
553629ebc9 Protect the following with a lockmgr lock:
allproc
	zombproc
	pidhashtbl
	proc.p_list
	proc.p_hash
	nextpid

Reviewed by:	jhb
Obtained from:	BSD/OS and netbsd
2000-11-22 07:42:04 +00:00
John Baldwin
0959cc6680 Ahem, fix the disclaimer portion of the copyright so it disclaims the
voices in my head.  You can sue the voices in Bill Paul's head all you
want.

Noticed by:	jhb
2000-11-21 21:10:15 +00:00
Jonathan Lemon
4a476efa51 Protect p_wchan with sched_lock in selwakeup(). 2000-11-21 20:22:34 +00:00
Alan Cox
c6fa9f78d2 Provide a new interface for the user of aio_read() and aio_write() to request
a kevent upon completion of the I/O.  Specifically, introduce a new type
of sigevent notification, SIGEV_EVENT.  If sigev_notify is SIGEV_EVENT,
then sigev_notify_kqueue names the kqueue that should receive the event
and sigev_value contains the "void *" that is copied into the kevent's udata
field.

In contrast to the existing interface, this one: 1) works on
the Alpha 2) avoids the extra copyin() call for the kevent because all
of the information needed is in the sigevent and 3) could be
applied to request a single kevent upon completion of an entire lio_listio().

Reviewed by:	jlemon
2000-11-21 19:36:36 +00:00
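
A userland sketch of the new interface using the field names from the commit message (SIGEV_EVENT, sigev_notify_kqueue, sigev_value); the helper function and the sival_ptr member spelling are assumptions:

    #include <sys/types.h>
    #include <sys/event.h>
    #include <aio.h>
    #include <signal.h>
    #include <string.h>

    /* Queue an async read whose completion is delivered as a kevent on kq. */
    static int
    queue_read(int kq, int fd, void *buf, size_t len, struct aiocb *acb)
    {
        memset(acb, 0, sizeof(*acb));
        acb->aio_fildes = fd;
        acb->aio_buf = buf;
        acb->aio_nbytes = len;
        acb->aio_sigevent.sigev_notify = SIGEV_EVENT;
        acb->aio_sigevent.sigev_notify_kqueue = kq;     /* which kqueue gets the event */
        acb->aio_sigevent.sigev_value.sival_ptr = acb;  /* shows up in the kevent's udata */
        return (aio_read(acb));
    }
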
Alfred Perlstein
830fedd28f Accept filters broke kernels compiled without options INET.
Make accept filters conditional on INET support to fix.

Pointed out by: bde
Tested and assisted by: Stephen J. Kiernan <sab@vegamuse.org>
2000-11-20 01:35:25 +00:00
Robert Watson
7f112b0489 o Export cp_time ("CPU time statistics") using SYSCTL_OPAQUE.
This removes a reason that systat requires setgid kmem.  More to
  come.
2000-11-20 00:44:58 +00:00