Commit Graph

3370 Commits

Author SHA1 Message Date
Poul-Henning Kamp
f84ee0ff00 Don't clone impossible unit numbers for disks. 2000-12-15 17:55:24 +00:00
John Baldwin
de3622188a Add in MI implementations of the KTR trace buffer ddb commands. The
commands have also been slightly updated as follows:
- Use ktr_idx to find the newest entry rather than walking the buffer
  comparing timespecs.  Timespecs are not always unique after the change
  to use getnanotime(9).
- Add a new verbose setting.  When the verbose setting is on, then the
  timestamp is printed with each message.  If KTR_EXTEND is on, then the
  filename and line number are output as well.  By default this option is
  off.  It can be turned on with the 'v' modifier passed to the 'tbuf'
  and 'tall' commands.  For the 'tnext' command, the 'v' modifier toggles
  the verbose mode.
- Only display the cpu number for each message on SMP systems.
- Don't display anything for an empty entry that hasn't been used yet.
2000-12-15 00:01:20 +00:00
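
A minimal sketch of the index-based lookup the first bullet above describes, assuming the stock KTR globals (ktr_buf, ktr_idx, KTR_ENTRIES); the helper itself is illustrative, not the actual ddb command code.

    #include <sys/param.h>
    #include <sys/ktr.h>

    /* Back up one slot from the write index to reach the newest record. */
    static struct ktr_entry *
    db_ktr_newest(void)
    {
            int newest;

            newest = (ktr_idx - 1 + KTR_ENTRIES) % KTR_ENTRIES;
            return (&ktr_buf[newest]);
    }
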
John Baldwin
562e4ffe86 - Add a new flag MTX_QUIET that can be passed to the various mtx_*
functions.  If this flag is set, then no KTR log messages are issued.
  This is useful for blocking excessive logging, such as with the internal
  mutex used by the witness code.
- Use MTX_QUIET on all of the mtx_enter/exit operations on the internal
  mutex used by the witness code.
- If we are in a panic, don't do witness checks in witness_enter(),
  witness_exit(), and witness_try_enter(), just return.
2000-12-13 21:53:42 +00:00
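
A rough sketch of both halves of this change, assuming the standard CTR*/KTR macros; the trace message and the witness mutex name (w_mtx) are illustrative.

    /* In the mutex code: skip the KTR record when the caller asked for quiet. */
    if ((flags & MTX_QUIET) == 0)
            CTR1(KTR_LOCK, "mtx_enter: %p", m);

    /* In the witness code: take the internal mutex without generating log spam. */
    mtx_enter(&w_mtx, MTX_SPIN | MTX_QUIET);
    /* ... witness bookkeeping ... */
    mtx_exit(&w_mtx, MTX_SPIN | MTX_QUIET);
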
Dag-Erling Smørgrav
60ec413038 String buffer API 2000-12-13 19:51:07 +00:00
John Baldwin
05f9877c15 If we fail to emulate a vm86 trap in kernel mode, then we use
vm86_trap() to return to the calling program directly.  vm86_trap()
doesn't return, thus it was never returning to trap() to release
Giant.  Thus, release Giant before calling vm86_trap().
2000-12-13 18:57:15 +00:00
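
Sketched below is the shape of the fix, with the surrounding trap() logic elided and vmf standing in for the vm86 frame argument; only the ordering of the Giant release matters here.

    if (vm86_emulate(vmf) != 0) {
            /*
             * vm86_trap() returns to the caller directly rather than
             * falling back into trap(), so Giant must be dropped first.
             */
            mtx_exit(&Giant, MTX_DEF);
            vm86_trap(vmf);
            /* NOTREACHED */
    }
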
Kirk McKusick
0bf3b91d8a Use proper mutex locking when calling setrunnable from speedup_syncer().
Submitted by:	Tor.Egge@fast.no
2000-12-13 01:06:53 +00:00
Jake Burkholder
c0c2557090 - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead
of explicit calls to lockmgr.  Also provides macros for the flags
passed to specify shared, exclusive or release which map to the
  lockmgr flags.  This is so that the use of lockmgr can be easily
  replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
2000-12-13 00:17:05 +00:00
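
An illustrative version of the wrapper this entry describes; the real macro and flag names in the tree may differ. The point is that callers state their intent and the mapping onto lockmgr(9), or later onto a reader-writer lock, lives in one place.

    #define ALLPROC_LOCK(how)       lockmgr(&allproc_lock, (how), NULL, curproc)
    #define ALLPROC_LOCK_SHARED     LK_SHARED      /* shared (read) access */
    #define ALLPROC_LOCK_EXCLUSIVE  LK_EXCLUSIVE   /* exclusive (write) access */
    #define ALLPROC_LOCK_RELEASE    LK_RELEASE     /* drop the lock */

    /* Typical traversal: */
    ALLPROC_LOCK(ALLPROC_LOCK_SHARED);
    LIST_FOREACH(p, &allproc, p_list) {
            /* ... */
    }
    ALLPROC_LOCK(ALLPROC_LOCK_RELEASE);
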
Matt Jacob
1426b70df8 only include sys/proc.h once 2000-12-12 21:20:48 +00:00
David E. O'Brien
184265fd42 Include sys/proc.h so this compiles [on the Alpha]. 2000-12-12 21:18:13 +00:00
Matt Jacob
093d32e535 We reference curproc, ergo need <sys/proc.h> 2000-12-12 21:14:29 +00:00
Kirk McKusick
1f7d250182 Change the proc information returned from the kernel so that it
no longer contains kernel specific data structures, but rather
only scalar values and structures that are already part of the
kernel/user interface, specifically rusage and rtprio. It no
longer contains proc, session, pcred, ucred, procsig, vmspace,
pstats, mtx, sigiolst, klist, callout, pasleep, or mdproc. If
any of these changed in size, ps, w, fstat, gcore, systat, and
top would all stop working. The new structure has over 200 bytes
of unassigned space for future values to be added, yet is nearly
100 bytes smaller per entry than the structure that it replaced.
2000-12-12 07:25:57 +00:00
John Baldwin
06592dd188 - Convert the per-eventhandler list mutex to a lockmgr lock so that it can
be safely held across an eventhandler function call.
- Fix an instance of the head of an eventhandler list being read without
  the lock being held.
- Break down and use a SYSINIT at the new SI_SUB_EVENTHANDLER to initialize
  the eventhandler global mutex and the eventhandler list of lists rather
  than using a non-MP safe initialization during the first call to
  eventhandler_register().
- Add in a KASSERT() to eventhandler_register() to ensure that we don't try
  to register an eventhandler before things have been initialized.
2000-12-12 04:01:35 +00:00
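
A sketch of the SYSINIT-driven setup described in the third bullet, assuming the then-current three-argument mtx_init(); the function and variable names are illustrative rather than the actual subr_eventhandler.c code.

    static void
    eventhandler_init(void *dummy)
    {
            TAILQ_INIT(&eventhandler_lists);
            mtx_init(&eventhandler_mutex, "eventhandler", MTX_DEF);
    }
    SYSINIT(eventhandlers, SI_SUB_EVENTHANDLER, SI_ORDER_FIRST,
        eventhandler_init, NULL);
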
Jake Burkholder
92cf772d8d - Add code to detect if a system call returns with locks other than Giant
held and panic if so (conditional on witness).
- Change witness_list to return the number of locks held so this is easier.
- Add kern/syscalls.c to the kernel build if witness is defined so that the
  panic message can contain the name of the offending system call.
- Add assertions that Giant and sched_lock are not held when returning from
  a system call, which were missing for alpha and ia64.
2000-12-12 01:14:32 +00:00
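
Roughly what the return-path check looks like, assuming witness_list() now returns a count, kern/syscalls.c supplies syscallnames[], and a mtx_assert()-style check is available; placement and wording are illustrative.

    #ifdef WITNESS
            if (witness_list(p) != 0)
                    panic("system call %s returning with mutex(s) held",
                        syscallnames[code]);
    #endif
            mtx_assert(&sched_lock, MA_NOTOWNED);
            mtx_assert(&Giant, MA_NOTOWNED);
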
John Baldwin
d664747bfa - Don't bother taking a trace message if we have panic'd since doing so
can lead to further panics.
- Call getnanotime() instead of nanotime() for the timestamp.  nanotime()
  is more precise, but it also calls into the timer code, which results
  in mutex operations on the i386 arch.  If KTR_LOCK is turned on, then
  ktr_tracepoint() recurses on itself until it exhausts the kernel stack.
  Eventually this should change to use get_cyclecount() instead, but that
  can't happen if get_cyclecount() is calling nanotime() instead of
  getnanotime().
2000-12-12 00:43:50 +00:00
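
Inside ktr_tracepoint() the two points above come down to roughly this (variable names illustrative):

    struct timespec ts;

    /* Tracing after a panic can panic again, so bail out early. */
    if (panicstr != NULL)
            return;

    /*
     * getnanotime(9) returns a cached, coarser timestamp but takes no
     * mutexes, so logging a KTR_LOCK event cannot recurse into the tracer.
     */
    getnanotime(&ts);
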
John Baldwin
428b4b5562 Oops, the witness mutex is a spin lock, so use MTX_SPIN in the call to
mtx_init().  Since the witness code ignores its internal mutex, this
doesn't result in any functional change.
2000-12-12 00:37:18 +00:00
David E. O'Brien
1a37aa566b Add `_PATH_DEVZERO'.
Use _PATH_* where possible.
2000-12-09 09:35:55 +00:00
David Malone
7cc0979fd6 Convert more malloc+bzero to malloc+M_ZERO.
Submitted by:	josh@zipperup.org
Submitted by:	Robert Drehmel <robd@gmx.net>
2000-12-08 21:51:06 +00:00
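
The pattern being converted, shown with an arbitrary malloc type (M_TEMP here):

    /* Before: allocate, then clear by hand. */
    p = malloc(size, M_TEMP, M_WAITOK);
    bzero(p, size);

    /* After: have malloc(9) return zeroed memory. */
    p = malloc(size, M_TEMP, M_WAITOK | M_ZERO);
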
Poul-Henning Kamp
959b7375ed Staticize some malloc M_ instances. 2000-12-08 20:09:00 +00:00
Poul-Henning Kamp
06b6617e0b Kill some bogus "register" keywords.
Go Ansi on the functions.
2000-12-08 06:57:39 +00:00
Matthew Dillon
a41ce5d30b Only call bwillwrite() for vnodes. Do not penalize devices or pipes. 2000-12-07 23:45:57 +00:00
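
The guard this amounts to, sketched in the write path using the usual struct file fields:

    /* Only vnode-backed writes can flood the buffer cache. */
    if (fp->f_type == DTYPE_VNODE)
            bwillwrite();
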
Poul-Henning Kamp
5e1aea9fd7 Hide intrstate in the #ifdef where it belongs. 2000-12-07 22:38:22 +00:00
Matthew Dillon
9440653d07 Add necessary bwillwrite() in writev() entry point.
Deal with excessive dirty buffers when msync() syncs non-contiguous
dirty buffers by checking for the case in UFS *before* checking for
clusterability.
2000-12-06 20:55:09 +00:00
Peter Wemm
138e514cb5 Untangle vfsinit() a bit. Use separate sysinit functions rather than
having a super-function calling bits all over the place.
2000-12-06 07:09:08 +00:00
Peter Wemm
7ca7bbb36b Simplify this a bit so that it doesn't have to generate silly redundant
__P() prototypes when an ansi-style static inline is a prototype already.
Since vnode_if.[ch] are generated on the fly, there are no CVS diffs to
mess up.
2000-12-06 06:59:38 +00:00
Peter Wemm
4366ac52ad This is kind of a nasty hack, but it appears to solve the Compaq DL360
SMP problem.  Compaq, in their infinite wisdom, forgot to put the IO apic
intpin #0 connection to the 8259 PIC into the mptable.  This hack is to
look and see if intpin #0 has *no* table entry and adds a fake ExtInt
entry for the remap routines to use.  isa/clock.c will still test the
interrupts.  This entry is only ever used on an already broken system.
2000-12-06 03:47:14 +00:00
John Baldwin
960d3c68ed Pass RFSTOPPED to fork1() in kthread_create() to avoid a race condition
where fork1() could put the process on the run queue where it could be
snatched up by another CPU before kthread_create() had set the proper
fork handler.  Instead, we put the new kthread on the runqueue after its
fork handler has been set.

Noticed by:	jake
Looked over by:	peter
2000-12-06 03:45:15 +00:00
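
A condensed sketch of the new ordering in kthread_create(); error handling and the full set of fork flags are omitted, and the proc variable names are illustrative.

    error = fork1(&proc0, RFSTOPPED | RFMEM | flags, &p2);
    if (error)
            return (error);

    /* Install the fork handler while the child cannot yet run. */
    cpu_set_fork_handler(p2, func, arg);

    /* Only now make the new kthread runnable. */
    mtx_enter(&sched_lock, MTX_SPIN);
    p2->p_stat = SRUN;
    setrunqueue(p2);
    mtx_exit(&sched_lock, MTX_SPIN);
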
John Baldwin
7b29322c25 Add in #include of <sys/lock.h> since it was axed from <sys/proc.h>.
Noticed by:	Wesley Morgan <morganw@chemikals.org>
Pointy hat to:	me
2000-12-06 00:33:58 +00:00
Alfred Perlstein
89b54bffe9 Add forgotten SYSCALL_MODULE_HELPER() for msgsys() syscall.
Discovered by: Valentin Chopov <valentin@valcho.net>
2000-12-05 23:05:45 +00:00
Jake Burkholder
1eb44f0270 Remove the last of the MD netisr code. It is now all MI. Remove
spending, which was unused now that all software interrupts have
their own thread.  Make the legacy schednetisr use an atomic op
for setting bits in the netisr mask.

Reviewed by:	jhb
2000-12-05 00:36:00 +00:00
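
Roughly what the legacy hook reduces to once the mask is set atomically; waking the corresponding software-interrupt thread is left out of the sketch.

    /* netisr is a plain bit mask; atomic_set_int() needs no spl protection. */
    #define schednetisr(anisr)      atomic_set_int(&netisr, 1 << (anisr))
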
Peter Wemm
5ee171d264 Cleanup some leftover lint from the old interrupt system.
Also, while here, run up to 32 interrupt sources on APIC systems.
Normalize INTREN/INTRDIS so they are the same on both UP and SMP systems
rather than sometimes a macro, and sometimes a function.

Reviewed by:  jhb, jakeb
2000-12-04 21:15:14 +00:00
Jake Burkholder
8dd431fcf7 Whitespace. Fix indentation, align comments. 2000-12-04 10:23:29 +00:00
Jake Burkholder
f6a6e37a2c Whitespace. Fix an overly long line. 2000-12-04 09:52:39 +00:00
Jake Burkholder
85b039fe64 Remove if defined(tahoe) cobwebs. 2000-12-04 09:49:34 +00:00
David Greenman
8f9a5273a3 Changed second argument in a call to sf_buf_free() to be NULL instead of
PAGE_SIZE to match the prototype better. The argument is ignored, so this
is just to silence the compile-time warning.

Pointed out by:	jhb
2000-12-03 01:35:46 +00:00
John Baldwin
4971f62a86 - Add a mutex to the proc structure p_mtx that will be used to lock accesses
to each individual proc.
- Initialize the lock during fork1(), and destroy it in wait1().
2000-12-03 01:22:34 +00:00
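
The lifecycle this adds, in sketch form, assuming the then-current three-argument mtx_init():

    /* fork1(): give the new process its own lock. */
    mtx_init(&p2->p_mtx, "process lock", MTX_DEF);

    /* wait1(): the proc is being reaped, so tear the lock down. */
    mtx_destroy(&p->p_mtx);
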
Andrew Gallatin
19f085228f Correct int/long type mismatch in the proper place this time. freevnodes
and numvnodes are longs in the kernel.  They should remain longs in systat,
what really needs to change is that they should be using SYSCTL_LONG rather
than SYSCTL_INT.   I also changed wantfreevnodes to SYSCTL_LONG because I
happened to notice it.

I wish there was a way to find all of these automatically..

Pointed out by: bde
2000-12-02 20:08:33 +00:00
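
The kernel-side half of the fix is just a matter of using the long-sized sysctl macro; the sketch below uses the later seven-argument form of SYSCTL_LONG and a guessed parent node, so treat the argument list as illustrative.

    /* Was SYSCTL_INT(), which only exports an int's worth of a long. */
    SYSCTL_LONG(_debug, OID_AUTO, numvnodes, CTLFLAG_RD, &numvnodes, 0, "");
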
Jake Burkholder
a4bd171dbf Regen. 2000-12-02 05:45:32 +00:00
Jake Burkholder
86360fee54 Remove thr_sleep and thr_wakeup. Remove fields p_nthread and p_wakeup
from struct proc, which are now unused (p_nthread already was).
Remove process flag P_KTHREADP which was untested and only set
in vfs_aio.c (it should use kthread_create).  Move the yield
system call to kern_synch.c as kern_threads.c has been removed
completely.

moral support from:	alfred, jhb
2000-12-02 05:41:30 +00:00
John Baldwin
0ebabc93a4 Protect p_stat with sched_lock. 2000-12-02 01:32:51 +00:00
Bosko Milekic
794cd879fe Make sure to free the sf_buf if we've allocated it but fail to allocate
an mbuf (ENOBUFS) before returning so that we don't leak sf_bufs in
the case where we're out of mbufs.

Submitted by: David Greenman (dg)
2000-12-02 00:40:57 +00:00
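
The shape of the fix, with the sf_buf allocator's argument list (the old two-argument sf_buf_free(), per the entry further up) treated as illustrative:

    MGETHDR(m, M_WAIT, MT_DATA);
    if (m == NULL) {
            error = ENOBUFS;
            sf_buf_free((void *)sf, NULL);  /* don't leak the sf_buf */
            goto done;
    }
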
John Baldwin
1c32c37c06 Protect p_stat with sched_lock. 2000-12-01 23:43:15 +00:00
John Baldwin
2925cbe569 Protect p_stat with sched_lock. 2000-12-01 16:59:02 +00:00
Alfred Perlstein
78525ce318 sysvipc loadable.
new syscall entry lkmressys - "reserved loadable syscall"

Make syscall_register allow overwriting of such entries (lkmressys).
2000-12-01 08:57:47 +00:00
Alfred Perlstein
3a4d365463 Add reserved lkmressys keyword. I swear, this script will die the
next time I need to hack on it.
2000-12-01 08:47:54 +00:00
Alfred Perlstein
1dc8643099 implement NOSTD syscall type, this creates the syscall args, but sticks
a lkmnosys into the sysent table so that SYSCALL_MODULE() works
2000-12-01 07:40:20 +00:00
Alfred Perlstein
c5a86b0ab9 Translate alfred to english.
Submitted by: bde
2000-12-01 06:59:18 +00:00
Jake Burkholder
1512b5d6ab Use an mp-safe callout for endtsleep. 2000-12-01 04:55:52 +00:00
John Baldwin
2191340786 Use msleep() instead of mtx_exit()/tsleep() so that we release the lock and
go to sleep as an "atomic" operation.
2000-12-01 03:43:33 +00:00
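
The before and after, with a made-up mutex and wait channel (foo_mtx, foo):

    /* Racy: the wakeup can slip in between the unlock and the sleep. */
    mtx_exit(&foo_mtx, MTX_DEF);
    tsleep(&foo, PVM, "foowt", 0);

    /* Atomic: msleep() releases the mutex and sleeps as one operation. */
    msleep(&foo, &foo_mtx, PVM, "foowt", 0);
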
John Baldwin
472fd56ea5 Don't update p_stat in exit1() to SZOMB until after releasing the allproc
lock.  Otherwise, if we block on the backing mutex while releasing the
allproc lock, then when we resume, we will be at SRUN, and we will stay
that way all the way through cpu_exit.  As a result, our parent will never
harvest us.
2000-12-01 03:42:17 +00:00
Jake Burkholder
96fde7da19 Use msleep instead of mtx_exit; tsleep; mtx_enter, which is not safe. 2000-12-01 02:18:38 +00:00