Commit Graph

4962 Commits

Maxime Henrion
0ae037eb45 Bring sys/kern/md5c.c in sync with the userland version.
Add a comment so that people don't forget to keep the
version in src/lib/libmd/md5c.c in sync with this one.
This fixes a warning on sparc64.

Reviewed by:	phk
2002-06-24 14:15:25 +00:00
Kirk McKusick
d374764fd3 Use proper size in bzero of stat structure.
Submitted by:	Jake Burkholder <jake@locore.ca>
Sponsored by:	DARPA & NAI Labs.
2002-06-24 07:14:44 +00:00
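A minimal sketch of the class of bug this fixes (the wrapper and its name
are illustrative, not the committed diff): bzero() must be given the size
of the structure, not of a pointer to it.

  #include <sys/types.h>
  #include <sys/stat.h>
  #include <strings.h>

  static void
  clear_stat(struct stat *sbp)
  {
          /* bzero(sbp, sizeof(sbp)) would clear only pointer-sized bytes;
           * the whole structure must be zeroed. */
          bzero(sbp, sizeof(*sbp));
  }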
Jonathan Mini
01ad8a53db Remove unused diagnostic function cread_free_thread().
Approved by:	alfred
2002-06-24 06:22:00 +00:00
Matthew Dillon
727300861d I noticed a defect in the way wakeup() scans the tailq. Tor noticed an
even worse defect in wakeup_one().  This patch cleans up both.

Submitted by:	tegge
MFC after:	3 days
2002-06-24 00:14:36 +00:00
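A hedged sketch of the scan pattern involved, assuming a simplified sleep
queue (types and names are illustrative, not the kernel's):

  #include <sys/queue.h>
  #include <stddef.h>

  struct sleeper {
          const void              *wchan;         /* wait channel slept on */
          TAILQ_ENTRY(sleeper)     link;
  };
  TAILQ_HEAD(slpq, sleeper);

  /* Wake every sleeper on 'q' waiting on channel 'ident'.  The next
   * pointer must be saved before removal, since TAILQ_REMOVE unlinks
   * the current entry. */
  static void
  wake_all(struct slpq *q, const void *ident)
  {
          struct sleeper *s, *next;

          for (s = TAILQ_FIRST(q); s != NULL; s = next) {
                  next = TAILQ_NEXT(s, link);
                  if (s->wchan == ident)
                          TAILQ_REMOVE(q, s, link); /* then make runnable */
          }
  }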
Maxime Henrion
0a84e7c8f7 More warning fixes for 64-bit platforms.
Reviewed by:	rwatson
2002-06-23 18:32:39 +00:00
Kirk McKusick
6524dddcd5 This patch fixes a size problem with the stat structure for
64-bit architectures that was introduced in the UFS2 code
merge two days ago. The stat structure change that caused
the problem was the addition of the file create time.

Submitted by:	Bruce Evans <bde@zeta.org.au>
Sponsored by:	DARPA & NAI Labs.
2002-06-22 22:01:13 +00:00
Maxime Henrion
2853bfa0df We don't need to check the return value of malloc() against
NULL when the M_WAITOK flag is specified.
2002-06-22 21:44:11 +00:00
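A minimal sketch of the guarantee relied on here (the malloc type and the
wrapper are illustrative): malloc(9) called with M_WAITOK sleeps until
memory is available, so it cannot return NULL.

  #include <sys/param.h>
  #include <sys/kernel.h>
  #include <sys/malloc.h>

  static MALLOC_DEFINE(M_EXAMPLE, "example", "example allocations");

  static void *
  alloc_waitok(size_t len)
  {
          /* No NULL check needed: M_WAITOK allocations do not fail. */
          return (malloc(len, M_EXAMPLE, M_WAITOK));
  }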
Matthew Dillon
ed22d6e948 Fix a bug in vfs_bio_clrbuf(). The single-page-clrbuf optimization was
improperly clearing more than just the invalid portions of the page.  (This
bug is not known to have been triggered by anything).

Submitted by:	tegge
MFC after:	7 days
2002-06-22 19:09:35 +00:00
Maxime Henrion
cacd1c9b49 o Remove the initialization of unused fields in the struct
uio now that we don't use uiomove() anymore.
o Enforce stricter checks on the length of the iov's in
  nmount(2) since we now malloc() them individually and
  corrupted iov's could make the kernel crash in malloc()
  with "kmem_map too small".

Reviewed by:	phk
2002-06-22 18:07:05 +00:00
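A hedged sketch of the kind of length check described (the bound and the
names are illustrative, not the committed nmount(2) code):

  #include <sys/types.h>
  #include <sys/uio.h>
  #include <errno.h>

  #define OPT_MAXLEN      (64 * 1024)     /* hypothetical per-iov limit */

  static int
  check_iovs(const struct iovec *iov, u_int iovcnt)
  {
          u_int i;

          /* Refuse corrupted lengths before handing them to malloc(). */
          for (i = 0; i < iovcnt; i++)
                  if (iov[i].iov_len > OPT_MAXLEN)
                          return (EINVAL);
          return (0);
  }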
Jonathan Mini
9718382d85 Always drop the p_args reference we held for copyout, even if we're about
to change it. This fixes a leak triggered by setproctitle(3).

Approved by:	alfred
Noticed by:	Peter Jeremy <peter.jeremy@alcatel.com.au>
2002-06-22 10:05:50 +00:00
Kirk McKusick
1c85e6a35d This commit adds basic support for the UFS2 filesystem. The UFS2
filesystem expands the inode to 256 bytes to make space for 64-bit
block pointers. It also adds a file-creation time field, an ability
to use jumbo blocks per inode to allow extent-like pointer density,
and space for extended attributes (up to twice the filesystem block
size worth of attributes, e.g., on a 16K filesystem, there is space
for 32K of attributes). UFS2 fully supports and runs existing UFS1
filesystems. New filesystems built using newfs can be built in either
UFS1 or UFS2 format using the -O option. In this commit UFS1 is
the default format, so if you want to build UFS2 format filesystems,
you must specify -O 2. This default will be changed to UFS2 when
UFS2 proves itself to be stable. In this commit the boot code for
reading UFS2 filesystems is not compiled (see /sys/boot/common/ufsread.c)
as there is insufficient space in the boot block. Once the size of the
boot block is increased, this code can be defined.

Things to note: the definition of SBSIZE has changed to SBLOCKSIZE.
The header file <ufs/ufs/dinode.h> must be included before
<ufs/ffs/fs.h> so as to get the definitions of ufs2_daddr_t and
ufs_lbn_t.

Still TODO:
Verify that the first level bootstraps work for all the architectures.
Convert the utility ffsinfo to understand UFS2 and test growfs.
Add support for the extended attribute storage. Update soft updates
to ensure integrity of extended attribute storage. Switch the
current extended attribute interfaces to use the extended attribute
storage. Add the extent-like functionality (framework is there,
but is currently never used).

Sponsored by: DARPA & NAI Labs.
Reviewed by:	Poul-Henning Kamp <phk@freebsd.org>
2002-06-21 06:18:05 +00:00
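The include-order requirement above, shown directly:

  /* <ufs/ufs/dinode.h> must come first: it defines ufs2_daddr_t and
   * ufs_lbn_t, which <ufs/ffs/fs.h> uses. */
  #include <ufs/ufs/dinode.h>
  #include <ufs/ffs/fs.h>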
Maxime Henrion
7d2d440991 Change the way we internally store the mount options to
a linked list.  This is to allow the merging of the mount
options in the MNT_UPDATE case, as the current data structure
is unsuitable for this.

There are no functional differences in this commit.

Reviewed by:	phk
2002-06-20 20:03:42 +00:00
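A hedged sketch of what a linked-list representation of mount options
might look like (field and type names are illustrative, not the
committed structures):

  #include <sys/queue.h>
  #include <stddef.h>

  struct mntopt {
          char                    *name;          /* e.g. "ro", "noexec" */
          void                    *value;         /* opaque option value */
          size_t                   len;           /* length of value */
          TAILQ_ENTRY(mntopt)      link;
  };
  TAILQ_HEAD(mntoptlist, mntopt);

A list makes the MNT_UPDATE merge straightforward: walk the incoming
options and replace or append matching entries, which a flat layout
cannot easily support.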
Alfred Perlstein
c33c825169 Implement SO_NOSIGPIPE option for sockets. This allows one to request that
an EPIPE error return not generate SIGPIPE on sockets.

Submitted by: lioux
Inspired by: Darwin
2002-06-20 18:52:54 +00:00
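A usage sketch for the new option (the wrapper is illustrative):

  #include <sys/types.h>
  #include <sys/socket.h>

  static int
  set_nosigpipe(int fd)
  {
          int on = 1;

          /* Writes to a closed peer now return EPIPE without raising
           * SIGPIPE on this socket. */
          return (setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &on,
              sizeof(on)));
  }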
Alfred Perlstein
69be5db96f Don't leak resources if fdcheckstd() fails during exec.
Submitted by: Mike Makonnen <makonnen@pacbell.net>
2002-06-20 17:27:28 +00:00
Ian Dowse
99568bcaf7 Display the mutex name in the ^T status line if the selected thread
is blocked on a mutex. Prepend a '*' to distinguish this case as
is done in top(1).
2002-06-20 14:03:36 +00:00
Peter Wemm
9d04103b7c Remove UIO_USERISPACE - we do not support any split instruction/data
address space machines (eg: pdp-11) and are not likely to ever do so.
Nothing in our kernel sets this.
2002-06-20 07:08:43 +00:00
Peter Wemm
2f9267ec23 Move the "- 1" into the RQB_FFS(mask) macro itself so that
implementations can provide a base zero ffs function if they wish.
This changes
  #define RQB_FFS(mask) (ffs64(mask))
  foo = RQB_FFS(mask) - 1;
to
  #define RQB_FFS(mask) (ffs64(mask) - 1)
  foo = RQB_FFS(mask);
On some platforms we can get the "- 1" for free, eg: those that use the
C code for ffs64().

Reviewed by:	jake (in principle)
2002-06-20 06:21:20 +00:00
Andrew R. Reiter
2eb7b21b00 - Remove the lock(9) protecting the kernel linker system.
- Add a mutex, kld_mtx, to protect the kernel linker system.  Note that
  while ``classes'' is global (to that file), it is effectively read-only
  after SI_SUB_KLD, SI_ORDER_ANY.
- Add a SYSINIT to flip a flag that disallows class registration after
  SI_SUB_KLD, SI_ORDER_ANY.

Idea for ``classes'' read only by:	jake
Reviewed by:	jake
2002-06-19 21:25:59 +00:00
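A hedged sketch of the "close registration after boot" idea (names and
the exact mechanism are illustrative, not the committed code):

  #include <sys/param.h>
  #include <sys/kernel.h>

  static int classes_frozen;      /* once set, 'classes' is read-only */

  static void
  freeze_classes(void *arg __unused)
  {
          classes_frozen = 1;     /* further class registration refused */
  }
  SYSINIT(kldfreeze, SI_SUB_KLD, SI_ORDER_ANY, freeze_classes, NULL);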
Poul-Henning Kamp
c4bacc1871 Remove the compat bits for the mis-aligned struct disklabel on alpha;
people got three times longer than I promised.

Sponsored by: DARPA & NAI Labs.
2002-06-19 08:37:02 +00:00
Alfred Perlstein
1419eacb86 Squish the "could sleep with process lock" messages caused by calling
uifind() with a proc lock held.

change_ruid() and change_euid() have been modified to take a uidinfo
structure which will be pre-allocated by callers, they will then
call uihold() on the uidinfo structure so that the caller's logic
is simplified.

This allows one to call uifind() before locking the proc struct and
thereby avoid a potential blocking allocation with the proc lock
held.

This may need revisiting, perhaps keeping a spare uidinfo allocated
per process to handle this situation or re-examining if the proc
lock needs to be held over the entire operation of changing real
or effective user id.

Submitted by: Don Lewis <dl-freebsd@catspoiler.org>
2002-06-19 06:39:25 +00:00
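A hedged sketch of the resulting call order (uifind()/uifree() per the
message; the wrapper and the change_euid() signature are simplified):

  #include <sys/param.h>
  #include <sys/lock.h>
  #include <sys/mutex.h>
  #include <sys/proc.h>
  #include <sys/resourcevar.h>

  static void
  set_euid_sketch(struct proc *p, uid_t uid)
  {
          struct uidinfo *uip;

          uip = uifind(uid);      /* may allocate and sleep: do it first */
          PROC_LOCK(p);
          change_euid(p->p_ucred, uip);   /* holds its own uidinfo ref */
          PROC_UNLOCK(p);
          uifree(uip);            /* drop the reference uifind() returned */
  }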
Alfred Perlstein
f2102dadf9 setsugid() touches p->p_flag, so assert that the proc is locked. 2002-06-18 22:41:35 +00:00
Seigo Tanimura
03e4918190 Remove so*_locked(), which were backed out by mistake. 2002-06-18 07:42:02 +00:00
Maxime Henrion
fe93750656 Change vfs_copyopt() so that the length argument passed to it
must be the exact same size as the mount option.  This makes
vfs_copyopt() much more useful.
2002-06-14 20:04:21 +00:00
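A usage sketch under the new contract (the option name and wrapper are
hypothetical):

  #include <sys/param.h>
  #include <sys/mount.h>

  static int
  get_flags(struct vfsoptlist *opts, int *flagsp)
  {
          /* Fails unless the stored option is exactly sizeof(*flagsp). */
          return (vfs_copyopt(opts, "flags", flagsp, sizeof(*flagsp)));
  }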
Bosko Milekic
ad1026f912 Set system_map for both mbuf_map and clust_map to 1 in mbuf_init().
Submitted by: Tor Egge (tegge)
Pointed out to me by: hsu
2002-06-13 23:53:42 +00:00
Robert Watson
6480dc743e Regen. 2002-06-13 23:44:50 +00:00
Robert Watson
65772a1a0a Keep POSIX.1e capabilities system call placeholders, but remove definitions. 2002-06-13 23:43:53 +00:00
Robert Watson
a3cce19f7d kern_cap.c no longer needed. 2002-06-13 23:19:34 +00:00
Robert Watson
4aaae52d99 opt_cap.c no longer needed. 2002-06-13 23:17:39 +00:00
Kelly Yancey
9ae6d334da Make nselcoll, the number of select collisions since boot, unsigned,
as negative collisions simply don't make sense.

PR:		(one small part of) 19720
Approved by:	alfred
2002-06-12 02:08:18 +00:00
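A sketch of exporting such a counter unsigned (the declaration is
illustrative, not the exact committed line):

  #include <sys/param.h>
  #include <sys/kernel.h>
  #include <sys/sysctl.h>

  static u_int nselcoll;          /* select collisions since boot */
  SYSCTL_UINT(_kern, OID_AUTO, nselcoll, CTLFLAG_RD,
      &nselcoll, 0, "Number of select collisions since boot");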
Kelly Yancey
e3f0c5755c Time counter stats are unsigned; advertise them to sysctl(8) that way.
PR:		(one small part of) 19720
Approved by:	phk
2002-06-11 19:47:44 +00:00
Kelly Yancey
3316a80bd1 Convert hit and miss counters to unsigned values. Surely negative values
for either do not make sense.

PR:		(one small part of) 19720
2002-06-10 22:40:26 +00:00
John Baldwin
d0c149fce8 We no longer need to acquire Giant in ast() for ktrpsig() in postsig() now
that ktrace no longer needs Giant.
2002-06-07 05:43:40 +00:00
John Baldwin
374a15aa55 - trapsignal() no longer needs to acquire Giant for ktrpsig().
- Catch up to new ktrace API.
2002-06-07 05:43:02 +00:00
John Baldwin
af300f2367 - Proper locking for p_tracep and p_traceflag.
- Catch up to new ktrace API.
2002-06-07 05:42:25 +00:00
John Baldwin
6c84de02e0 Properly lock accesses to p_tracep and p_traceflag. Also make a few
ktrace-only things #ifdef KTRACE that were not before.
2002-06-07 05:41:27 +00:00
John Baldwin
9ba7fe1b76 - Catch up to new ktrace API.
- ktrace trace points in msleep() and cv_wait() no longer need Giant.
2002-06-07 05:39:16 +00:00
John Baldwin
60a9bb197d Catch up to changes in ktrace API. 2002-06-07 05:37:18 +00:00
John Baldwin
ea3fc8e4cd Overhaul the ktrace subsystem a bit. For the most part, the actual vnode
operations to dump a ktrace event out to an output file are now handled
asynchronously by a ktrace worker thread.  This enables most ktrace events
to not need Giant once p_tracep and p_traceflag are suitably protected by
the new ktrace_lock.

There is a single todo list of pending ktrace requests.  The various
ktrace tracepoints allocate a ktrace request object and tack it onto the
end of the queue.  The ktrace kernel thread grabs requests off the head of
the queue and processes them using the trace vnode and credentials of the
thread triggering the event.

Since we cannot assume that the user memory referenced when doing a
ktrgenio() will be valid and since we can't access it from the ktrace
worker thread without a bit of hassle anyways, ktrgenio() requests are
still handled synchronously.  However, in order to ensure that the requests
from a given thread still maintain relative order to one another, when a
synchronous ktrace event (such as a genio event) is triggered, we still put
the request object on the todo list to synchronize with the worker thread.
The original thread blocks atomically with putting the item on the queue.
When the worker thread comes across such a synchronous request, it wakes up
the original thread and then blocks to ensure it doesn't manage to write a
later event before the original thread has a chance to write out the
synchronous event.  When the original thread wakes up, it writes out the
synchronous event using its own context and then finally wakes the worker
thread back up.  Yuck.  The synchronous events aren't pretty but they do work.

Since ktrace events can be triggered in fairly low-level areas (msleep()
and cv_wait() for example) the ktrace code is designed to use very few
locks when posting an event (currently just the ktrace_mtx lock and the
vnode interlock to bump the refcount on the trace vnode).  This also means
that we can't allocate a ktrace request object when an event is triggered.
Instead, ktrace request objects are allocated from a pre-allocated pool
and returned to the pool after a request is serviced.

The size of this pool defaults to 100 objects, which is about 13k on an
i386 kernel.  The size of the pool can be adjusted at compile time via the
KTRACE_REQUEST_POOL kernel option, at boot time via the
kern.ktrace_request_pool loader tunable, or at runtime via the
kern.ktrace_request_pool sysctl.

If the pool of request objects is exhausted, then a warning message is
printed to the console.  The message is rate-limited in that it is only
printed once until the size of the pool is adjusted via the sysctl.

I have tested all kernel traces but have not tested user traces submitted
by utrace(2), though they should work fine in theory.

Since a ktrace request has several properties (content of event, trace
vnode, details of originating process, credentials for I/O, etc.), I chose
to drop the first argument to the various ktrfoo() functions.  Currently
the functions just assume the event is posted from curthread.  If there is
a great desire to do so, I suppose I could instead put back the first
argument but this time make it a thread pointer instead of a vnode pointer.

Also, KTRPOINT() now takes a thread as its first argument instead of a
process.  This is because the check for a recursive ktrace event is now
per-thread instead of process-wide.

Tested on:	i386
Compiles on:	sparc64, alpha
2002-06-07 05:32:59 +00:00
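A hedged sketch of the producer side of the scheme described above
(structures and names simplified, not the committed code):

  #include <sys/param.h>
  #include <sys/systm.h>
  #include <sys/lock.h>
  #include <sys/mutex.h>
  #include <sys/queue.h>

  struct ktr_req {
          STAILQ_ENTRY(ktr_req)    link;
          /* event payload, trace vnode, credentials, ... */
  };

  static STAILQ_HEAD(, ktr_req) ktr_todo =
      STAILQ_HEAD_INITIALIZER(ktr_todo);
  static struct mtx ktrace_mtx;

  /* Tracepoint side: append the request and poke the worker thread. */
  static void
  ktr_submit(struct ktr_req *req)
  {
          mtx_lock(&ktrace_mtx);
          STAILQ_INSERT_TAIL(&ktr_todo, req, link);
          mtx_unlock(&ktrace_mtx);
          wakeup(&ktr_todo);
  }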
John Baldwin
48849938e8 Change the all locks list from a STAILQ to a TAILQ. This bloats struct
lock_object by another pointer (though all of lock_object should be
conditional on LOCK_DEBUG anyways) in exchange for an O(1) TAILQ_REMOVE()
in witness_destroy() (called for every mtx_destroy() and sx_destroy())
instead of an O(n) STAILQ_REMOVE.  Since WITNESS is so dog slow as it is,
the speed-up is worth the space cost.

Suggested by:	iedowse
2002-06-06 20:51:04 +00:00
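The trade-off in miniature: TAILQ_REMOVE() is O(1) because each entry
carries a back pointer, while STAILQ_REMOVE() must walk the list from
the head to find the predecessor.

  #include <sys/queue.h>

  struct lk {
          TAILQ_ENTRY(lk) link;
  };
  TAILQ_HEAD(lklist, lk);

  static void
  lock_destroy_sketch(struct lklist *all, struct lk *l)
  {
          TAILQ_REMOVE(all, l, link);     /* O(1), no list walk */
  }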
Chad David
ca18d53eae s/!SIGNOTEMPTY/SIGISEMPTY/
Reviewed by: marcel, jhb, alfred
2002-06-06 19:12:41 +00:00
John Baldwin
8dcb900b62 Handle "dead" witnesses better in the situation of several short term locks
being created and destroyed without a single long-term one around to ensure
the witness associated with that group of locks stays alive.  The pipe
mutexes are an example of this group.  For a dead witness we no longer
clear the witness name.  Instead, when looking up the witness for a lock,
if a dead witness' (a witness with a refcount of 0) w_name pointer is
identical to the witness name of the lock then we revive that witness
instead of using a new witness for the lock.  This results in far fewer
dead witness objects and also better preserves locking orders over the long
term resulting in more correct lock order checking.  Note that we can't
ever dereference w_name of a dead witness since we don't know if the string
it is pointing to has been free()'d or kldunload()'d out from under us.
2002-06-06 19:04:38 +00:00
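A hedged sketch of the revival test described above (stripped to the two
fields named in the message; not the full witness lookup):

  struct witness {
          const char      *w_name;
          int              w_refcount;
  };

  static struct witness *
  witness_try_revive(struct witness *w, const char *description)
  {
          /* Compare pointers, not strings: a dead witness' w_name may
           * point into memory that was free()'d or kldunload()'d. */
          if (w->w_refcount == 0 && w->w_name == description) {
                  w->w_refcount++;        /* revive rather than allocate */
                  return (w);
          }
          return (NULL);
  }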
Dag-Erling Smørgrav
edad3af28d Move some sysctls from the debug tree to the vfs tree. 2002-06-06 15:50:22 +00:00
Dag-Erling Smørgrav
4a357a32e0 Gratuitous whitespace cleanup. 2002-06-06 15:46:38 +00:00
Poul-Henning Kamp
53e645d90e Use "bwrbg" as description when we sleep for background writing,
"biord" was misleading in every possible way.
2002-06-06 08:56:10 +00:00
Bruce Evans
6438c894da Fixed overflow in the bounds checking in dscheck().  It assumed that
daddr_t is no larger than a long, and some other relatively harmless
things (*blush*).  Overflow when subtracting a daddr_t from a u_long
meant that, for attempts to access blocks beyond the end of the disk,
the intended "truncation" of the i/o actually expanded it to a
preposterous size.
2002-06-06 00:35:07 +00:00
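A worked example of the overflow class involved: subtracting past-the-end
block numbers in unsigned arithmetic wraps around instead of going
negative, turning what should be a truncated i/o into a huge one.

  #include <sys/types.h>
  #include <stdio.h>

  int
  main(void)
  {
          u_long endsec = 50;     /* slice is 50 sectors long ... */
          u_long secno = 100;     /* ... but the request starts at 100 */
          u_long count;

          count = endsec - secno; /* wraps to ~2^64 - 50, not -50 */
          printf("%lu\n", count);
          return (0);
  }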
John Baldwin
6a95e08f2f Replace thread_runnable() with thread_running() as the latter is more
accurate.

Suggested by:	julian
2002-06-04 22:36:24 +00:00
John Baldwin
7fcca6096f Optimize the adaptive mutex spin a bit. Use a simple while loop with
simple reads (and on IA32, a "pause" instruction for each iteration of the
loop) to spin until either the mutex owner field changes, or the lock owner
stops executing.

Suggested by:	tanimura
Tested on:	i386
2002-06-04 21:53:48 +00:00
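A hedged sketch of the spin (simplified to the owner-field check; the
"pause" hint is shown as inline asm rather than the kernel's MD macro):

  #include <stdatomic.h>
  #include <stdint.h>

  struct mtx_sketch {
          _Atomic(uintptr_t) mtx_lock;    /* current owner, or unowned */
  };

  static void
  adaptive_spin(struct mtx_sketch *m, uintptr_t owner)
  {
          /* Plain relaxed reads only: no bus-locked operations while
           * waiting for the owner field to change. */
          while (atomic_load_explicit(&m->mtx_lock,
              memory_order_relaxed) == owner) {
  #if defined(__i386__) || defined(__amd64__)
                  __asm __volatile("pause");
  #endif
          }
  }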
John Baldwin
5853d37d3b Add a private thread_runnable() macro to make the code more readable and
make the KSE diff easier to maintain.
2002-06-04 21:50:02 +00:00
Dag-Erling Smørgrav
f16a5176ad ANSIfy the one remaining K&R function. 2002-06-02 21:57:28 +00:00
Dag-Erling Smørgrav
e89efc02e0 Whitespace nits. 2002-06-02 21:55:58 +00:00