Commit Graph

8109 Commits

Jeff Roberson
66ca1b4878 - Use VFS_LOCK_GIANT() in place of mtx_lock(&Giant), etc.
Sponsored by:	Isilon Systems, Inc.
2005-01-24 10:19:31 +00:00
Robert Watson
471135a3af Style cleanup: with the removal of mutex operations, we can also remove
{}'s from securelevel_gt() and securelevel_ge().

MFC after:	1 week
2005-01-23 21:11:39 +00:00
Robert Watson
0b880542e6 When reading pr_securelevel from a prison, perform a lockless read,
as it's an integer read operation and the resulting slight race is
acceptable.

MFC after:	1 week
2005-01-23 21:01:00 +00:00
Robert Watson
4261ed50fd When retrieving the current per-jail securelevel for a sysctl read,
don't acquire the prison mutex, as it's an integer read and races
here don't make a difference.

MFC after:	1 week
2005-01-23 20:59:19 +00:00
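A minimal sketch of the lockless read these two commits describe; the helper
function is hypothetical, but pr_securelevel is the field the commit messages
name:

	static int
	prison_get_securelevel(struct prison *pr)
	{
		/*
		 * An aligned int load is atomic on all supported CPUs,
		 * so the prison mutex can be skipped; a slightly stale
		 * value is acceptable for a securelevel comparison.
		 */
		return (pr->pr_securelevel);
	}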
Robert Watson
5324bda309 When DDB is not defined, don't implement witness_thread_has_locks() and
witness_proc_has_locks(), as they are unused, which results in a compiler
error.  This problem was introduced with the implementation of "show
alllocks".

Spotted by:	Artem Kuchin <matrix at itlegion dot ru>
2005-01-22 21:14:21 +00:00
Robert Watson
14cedfc842 Invoke label initialization, creation, cleanup, and tear-down MAC
Framework entry points for System V IPC shared memory.

Submitted by:	Dandekar Hrishikesh <rishi_dandekar at sbcglobal dot net>
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, SPAWAR, McAfee Research
2005-01-22 19:10:25 +00:00
Robert Watson
a6009aa7c1 Invoke label initialization, creation, cleanup, and tear-down MAC
Framework entry points for System V IPC semaphores.

Submitted by:	Dandekar Hrishikesh <rishi_dandekar at sbcglobal dot net>
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, SPAWAR, McAfee Research
2005-01-22 19:04:17 +00:00
Robert Watson
e6a543f8db Invoke label initialization, creation, cleanup, and tear-down MAC
Framework entry points for System V IPC message queues.

Submitted by:	Dandekar Hrishikesh <rishi_dandekar at sbcglobal dot net>
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, SPAWAR, McAfee Research
2005-01-22 18:51:43 +00:00
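The three commits above wire the same label life cycle into each System V IPC
primitive.  A hedged sketch of that life cycle for message queues; the hook
names are illustrative and may not match the exact MAC Framework entry points:

	/* At msqid allocation: set up label storage. */
	mac_init_sysv_msgqueue(msqkptr);

	/* At msgget() creation time: derive the label from the creator. */
	mac_create_sysv_msgqueue(cred, msqkptr);

	/* At msgctl(IPC_RMID): reset the label for reuse. */
	mac_cleanup_sysv_msgqueue(msqkptr);

	/* At final tear-down: free the label storage. */
	mac_destroy_sysv_msgqueue(msqkptr);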
Bosko Milekic
e4eb384b47 Bring in MemGuard, a very simple and small replacement allocator
designed to help detect tamper-after-free scenarios, a problem that is
increasingly common and likely in multithreaded kernels where race
conditions are more prevalent.

Currently MemGuard can only take over malloc()/realloc()/free() for
one or more particular malloc types, and the code brought in with this
change manually instruments it to take over M_SUBPROC allocations
as an example.  If you are planning to use it, for now you must:

	1) Put "options DEBUG_MEMGUARD" in your kernel config.
	2) Edit src/sys/kern/kern_malloc.c manually, look for
	   "XXX CHANGEME" and replace the M_SUBPROC comparison with
	   the appropriate malloc type (this might require additional
	   but small/simple code modification if, say, the malloc type
	   is declared out of scope).
	3) Build and install your kernel.  Tune the vm.memguard_divisor
	   boot-time tunable, which is used to scale how much of kmem_map
	   you want to allot for MemGuard's use.  The default is 10,
	   so kmem_size/10.

ToDo:
	1) Bring in a memguard(9) man page.
	2) Better instrumentation (e.g., boot-time) of MemGuard taking
	   over malloc types.
	3) Teach UMA about MemGuard to allow MemGuard to override zone
	   allocations too.
	4) Improve MemGuard if necessary.

This work is partly based on some old patches from Ian Dowse.
2005-01-21 18:09:17 +00:00
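A hedged sketch of the manual instrumentation in step 2 above, following the
commit's description (the surrounding malloc() body is elided; only the
MemGuard hook is shown):

	void *
	malloc(unsigned long size, struct malloc_type *mtp, int flags)
	{
	#ifdef DEBUG_MEMGUARD
		/* XXX CHANGEME: route only the type under test to MemGuard. */
		if (mtp == M_SUBPROC)
			return (memguard_alloc(size, flags));
	#endif
		/* ... normal allocation path ... */
	}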
Colin Percival
7834081c88 Make "c->c_func = NULL" conditional on CALLOUT_LOCAL_ALLOC in both
places where it occurs, not just one. :-)

Pointed out by:	glebius
Pointy hat to:	cperciva
2005-01-19 21:15:58 +00:00
Colin Percival
0ceba3d69c Make "c->c_func = NULL" conditional on the CALLOUT_LOCAL_ALLOC flag,
i.e., only clear c->c_func if the callout c is being used via the old
timeout(9) interface.

Requested by:	glebius
2005-01-19 20:34:46 +00:00
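The conditional these two callout commits describe, sketched:

	if (c->c_flags & CALLOUT_LOCAL_ALLOC)
		/* Only old-style timeout(9) callouts lose their handler. */
		c->c_func = NULL;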
Colin Percival
86fd19de7b Clarify the description of the callout_active() macro: It is cleared by
callout_stop, callout_drain, and callout_deactivate, but is not
automatically cleared when a callout returns.
2005-01-19 19:46:35 +00:00
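A short illustration of the clarified semantics: a handler that wants the
active flag cleared must do so itself.  The driver names here are
hypothetical:

	static void
	foo_timeout(void *arg)
	{
		struct foo_softc *sc = arg;

		mtx_lock(&sc->sc_mtx);
		/* Not cleared automatically on return; clear it explicitly. */
		callout_deactivate(&sc->sc_callout);
		mtx_unlock(&sc->sc_mtx);
	}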
Paul Saab
efa42cbc93 move kern_nanosleep to sys/syscallsubr.h
Requested by:	jhb
2005-01-19 18:09:50 +00:00
Paul Saab
0e214fad37 Add a 32bit syscall wrapper for modstat
Obtained from:	Yahoo!
2005-01-19 17:53:06 +00:00
Paul Saab
7fdf2c856f - rename nanosleep1 to kern_nanosleep
- Add a 32bit syscall entry for nanosleep

Reviewed by:	peter
Obtained from:	Yahoo!
2005-01-19 17:44:59 +00:00
Warner Losh
234111d6d0 Introduce bus_free_resource. It is a convenience function which wraps
bus_release_resource by grabbing the rid from the resource.
2005-01-19 06:52:19 +00:00
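A plausible sketch of the wrapper, assuming rman_get_rid() is used to recover
the rid from the resource:

	int
	bus_free_resource(device_t dev, int type, struct resource *r)
	{
		if (r == NULL)
			return (0);
		return (bus_release_resource(dev, type, rman_get_rid(r), r));
	}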
David Xu
a2cc61fa6e Revert my previous errno hack; that is certainly an issue,
and always has been, but the system call itself returns
errno in a register, so the problem is really a function of
libc, not the system call.

Discussed with:	Matthew Dillon <dillon@apollo.backplane.com>
2005-01-18 13:53:10 +00:00
Poul-Henning Kamp
9fc6aa0618 Detect sign-extension bugs in the ioctl(2) command argument: Truncate
to 32 bits and print a warning.
2005-01-18 07:37:05 +00:00
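A hedged sketch of the check, assuming it sits at the top of the ioctl(2)
system call handler:

	/* Detect a sign-extended 32-bit command word. */
	if (cmd != (uint32_t)cmd) {
		printf("WARNING pid %d: sign-extended ioctl cmd %lx\n",
		    td->td_proc->p_pid, cmd);
		cmd = (uint32_t)cmd;	/* truncate to 32 bits */
	}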
Mike Silbersack
6792415119 Rearrange the kninit calls for both directions of a pipe so that
they both happen before pipe backing allocation occurs.  Previously,
a pipe memory shortage would cause a panic due to a KNOTE call
on an uninitialized si_note.

Reported by:	Peter Holm
MFC after:	1 week
2005-01-17 07:56:28 +00:00
Poul-Henning Kamp
7bf38aeae7 Fix a bug I introduced in 1.561 which has caused considerable filesystem
unhappiness lately.

As far as I can tell, no files that have made it safely to disk
have been endangered, but stuff in transit has been in peril.

Pointy hat:	phk
2005-01-16 21:09:39 +00:00
David Xu
b7be40d612 Make the umtx timeout relative so userland can select a different
clock type, e.g., CLOCK_REALTIME or CLOCK_MONOTONIC.
Merge umtx_wait and umtx_timedwait into a single function.
2005-01-14 13:38:15 +00:00
Poul-Henning Kamp
7c0745eeae Eliminate unused and unnecessary "cred" argument from vinvalbuf() 2005-01-14 07:33:51 +00:00
Poul-Henning Kamp
e39db32ab0 Ditch vfs_object_create() and make the callers call VOP_CREATEVOBJECT()
directly.
2005-01-13 12:25:19 +00:00
Poul-Henning Kamp
63f89abf4a Change the generated VOP_ macro implementations to improve type checking
and KASSERT coverage.

After this change there is only one "nasty" cast in this code but there
is a KASSERT to protect against the wrong argument structure behind
that cast.

Un-inlining the meat of VOP_FOO() saves 35kB of text segment on a typical
kernel with no change in performance.

We also now run the checking and tracing on VOP's which have been layered
by nullfs, umapfs, deadfs or unionfs.

    Add new (non-inline) VOP_FOO_AP() functions which take a "struct
    foo_args" argument and do everything the VOP_FOO() macros
    used to do with checks and debugging code.

    Add a KASSERT to VOP_FOO_AP() to check that the argument type is
    correct.

    Slim down VOP_FOO() inline functions to just stuff arguments
    into the struct foo_args and call VOP_FOO_AP().

    Put function pointer to VOP_FOO_AP() into vop_foo_desc structure
    and make VCALL() use it instead of the current offsetof() hack.

    Retire vcall(), which implemented the offsetof() hack.

    Make deadfs and unionfs use VOP_FOO_AP() calls instead of
    VCALL(), we know which specific call we want already.

    Remove unneeded arguments to VCALL() in nullfs and umapfs bypass
    functions.

    Remove unused vdesc_offset and VOFFSET().

    Generally improve style/readability of the generated code.
2005-01-13 07:53:01 +00:00
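A hedged sketch of the generated shape described above, using VOP_FSYNC() as
the example; the exact generated names may differ:

	/* Out-of-line worker: KASSERTs, debug/trace code, then dispatch. */
	int	VOP_FSYNC_AP(struct vop_fsync_args *ap);

	static __inline int
	VOP_FSYNC(struct vnode *vp, int waitfor, struct thread *td)
	{
		struct vop_fsync_args a;

		a.a_gen.a_desc = &vop_fsync_desc; /* lets the worker KASSERT the type */
		a.a_vp = vp;
		a.a_waitfor = waitfor;
		a.a_td = td;
		return (VOP_FSYNC_AP(&a));
	}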
Maxim Sobolev
fdf84ec4c6 When re-connecting an already connected datagram socket, make sure to
clean up its pending error state, which may be set under some rare conditions,
resulting in the connect() syscall returning that bogus error and making the
application believe that the attempt to change the association has failed,
while in fact it has not.

There is a sockets/reconnect regression test which exercises this bug.

MFC after:	2 weeks
2005-01-12 10:15:23 +00:00
Poul-Henning Kamp
3963baec64 Comment out a debugging printf which doesn't compile on amd64. 2005-01-12 10:11:31 +00:00
David Xu
333d4875cd Let _umtx_op return the error code directly rather than through errno,
because errno can potentially be tampered with by a nested signal handler.
Now all error codes are returned as negative values; positive values are
reserved for future expansion.
2005-01-12 05:55:52 +00:00
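A hypothetical userland wrapper following the convention described above; the
syscall's argument list at this point in time is assumed:

	static int
	umtx_op_err(struct umtx *umtx, int op, long id, void *uaddr, void *uaddr2)
	{
		int error;

		error = _umtx_op(umtx, op, id, uaddr, uaddr2);	/* signature assumed */
		if (error < 0) {
			errno = -error;	/* the kernel now negates error codes */
			return (-1);
		}
		return (error);		/* zero on success; positive reserved */
	}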
Poul-Henning Kamp
6ef8480a88 Add BO_SYNC() and add a default which uses the secret vnode pointer
and VOP_FSYNC() for now.
2005-01-11 10:43:08 +00:00
Poul-Henning Kamp
6afa350d53 More vnode -> bufobj migration. 2005-01-11 10:16:39 +00:00
Poul-Henning Kamp
8d785753bd Give flushbuflist() a struct bufv as first argument and avoid home-rolling
TAILQ_FOREACH_SAFE().

Lose the error pointer argument and return any errors the normal way.

Return EAGAIN for the case where more work needs to be done.
2005-01-11 10:01:54 +00:00
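What TAILQ_FOREACH_SAFE() buys over the home-rolled loop, sketched against the
struct bufv now passed in:

	struct buf *bp, *nbp;

	TAILQ_FOREACH_SAFE(bp, &bv->bv_hd, b_bobufs, nbp) {
		/*
		 * bp may be flushed and unlinked from the list in here;
		 * the next pointer was already saved in nbp.
		 */
	}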
Poul-Henning Kamp
8df6bac4c7 Remove the unused credential argument from VOP_FSYNC() and VFS_SYNC().
I'm not sure why a credential was added to these in the first place, it is
not used anywhere and it doesn't make much sense:

	The credentials for syncing a file (ability to write to the
	file) should be checked at the system call level.

	Credentials for syncing one or more filesystems ("none")
	should be checked at the system call level as well.

	If the filesystem implementation needs a particular credential
	to carry out the syncing, it would logically have to be the
	cached mount credential, or a credential cached along with
	any delayed write data.

Discussed with:	rwatson
2005-01-11 07:36:22 +00:00
David Xu
3e380f0d3d Break out of the loop earlier if it is not a timeout. 2005-01-08 06:57:46 +00:00
Robert Watson
2b05b557ff In acct_process(), do a lockless read of acctvp to see if it's NULL
before deciding to do more expensive locking to account for process
exit.  Accepting this minor race avoids two mutex operations in
the highly common case of accounting not being enabled.

MFC after:	2 weeks
2005-01-08 04:45:57 +00:00
Robert Watson
fd544ee8f7 In kern_wait(), let the compiler copy the rusage structure rather than
using an explicit bcopy() -- it probably does a better job.
2005-01-08 04:17:48 +00:00
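The style change described, in miniature (variable names hypothetical):

	/* Before: bcopy(src, dst, sizeof(struct rusage)); */
	*dst = *src;	/* struct assignment; the compiler picks the copy */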
Colin Percival
e9dec2c41b Adjust two of my comments to the new world order: Indent protection in
the first column is performed using /**, not /*-.
2005-01-07 03:25:45 +00:00
Warner Losh
60727d8b86 /* -> /*- for license, minor formatting changes 2005-01-07 02:29:27 +00:00
Warner Losh
9454b2d864 /* -> /*- for copyright notices, minor format tweaks as necessary 2005-01-06 23:35:40 +00:00
Warner Losh
73108a1664 Expand COPYRIGHT inline, per Matthew Dillon's earlier approval. 2005-01-06 23:34:38 +00:00
David Xu
476e1d077e Return ETIMEDOUT when a thread times out, since POSIX thread
APIs expect ETIMEDOUT, not EAGAIN; this simplifies userland code a
bit.
2005-01-06 02:08:34 +00:00
John Baldwin
c88379381b - Move the function prototypes for kern_setrlimit() and kern_wait() to
sys/syscallsubr.h where all the other kern_foo() prototypes live.
- Resort kern_execve() while I'm there.
2005-01-05 22:19:44 +00:00
John Baldwin
33fb8a386e Rework the optimization for spinlocks on UP to be slightly less drastic and
turn it back on.  Specifically, the actual changes are now less intrusive
in that the _get_spin_lock() and _rel_spin_lock() macros now have their
contents changed for UP vs SMP kernels which centralizes the changes.
Also, UP kernels do not use _mtx_lock_spin() and no longer include it.  The
UP versions of the spin lock functions do not use any atomic operations,
but simple compares and stores which allow mtx_owned() to still work for
spin locks while removing the overhead of atomic operations.

Tested on:	i386, alpha
2005-01-05 21:13:27 +00:00
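A hedged sketch of the UP variant described; the SMP version keeps its
atomic-operation body, and the exact macro text may differ:

	#ifndef SMP
	#define	_get_spin_lock(mp, tid, opts, file, line) do {	\
		critical_enter();				\
		if ((mp)->mtx_lock == (uintptr_t)(tid))		\
			(mp)->mtx_recurse++;			\
		else						\
			/* Plain store: no atomics needed on UP. */ \
			(mp)->mtx_lock = (uintptr_t)(tid);	\
	} while (0)
	#endif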
Poul-Henning Kamp
0b3e4fe239 Since we do not support forceful unmount of DEVFS we can do away with
the partially implemented vnode-readoption code in vgonechrl().
2005-01-04 08:49:14 +00:00
Marcel Moolenaar
3195113e2a Regen. 2005-01-03 00:47:23 +00:00
Marcel Moolenaar
fe0ef598b6 uuidgen(2) is MP safe. 2005-01-03 00:45:57 +00:00
Warner Losh
d0d4cc63e3 Implement device_quiesce.  This method means 'you are about to be
unloaded, clean up, or return EBUSY if that's inconvenient.'  The
default module handler for newbus will now call this when we get a
MOD_QUIESCE event, but in the future may call this at other times.

This shouldn't change any actual behavior until drivers start to use it.
2004-12-31 20:47:51 +00:00
Pawel Jakub Dawidek
46003fb337 Be consistent and always use the form 'return (value);' instead of 'return value;'.
We had (before this change) 84 lines where it was style(9)-clean and 15 lines
where it was not.
2004-12-31 14:52:53 +00:00
John Baldwin
50aaa791ba Fix a typo and two whitespace nits. 2004-12-30 22:17:00 +00:00
John Baldwin
f5c157d986 Rework the interface between priority propagation (lending) and the
schedulers a bit to ensure more correct handling of priorities and fewer
priority inversions:
- Add two functions to the sched(9) API to handle priority lending:
  sched_lend_prio() and sched_unlend_prio().  The turnstile code uses these
  functions to ask the scheduler to lend a thread a set priority and to
  tell the scheduler when it thinks it is ok for a thread to stop borrowing
  priority.  The unlend case is slightly complex in that the turnstile code
  tells the scheduler what the minimum priority of the thread needs to be
  to satisfy the requirements of any other threads blocked on locks owned
  by the thread in question.  The scheduler then decides whether the thread
  can go back to normal mode (if its normal priority is high enough to
  satisfy the pending lock requests) or if it should continue to use the
  priority specified in the sched_unlend_prio() call.  This involves adding
  a new per-thread flag TDF_BORROWING that replaces the ULE-only kse flag
  for priority elevation.
- Schedulers now refuse to lower the priority of a thread that is currently
  borrowing another thread's priority.
- If a scheduler changes the priority of a thread that is currently sitting
  on a turnstile, it will call a new function turnstile_adjust() to inform
  the turnstile code of the change.  This function resorts the thread on
  the priority list of the turnstile if needed, and if the thread ends up
  at the head of the list (due to having the highest priority) and its
  priority was raised, then it will propagate that new priority to the
  owner of the lock it is blocked on.

Some additional fixes specific to the 4BSD scheduler include:
- Common code for updating the priority of a thread when the user priority
  of its associated kse group has been consolidated in a new static
  function resetpriority_thread().  One change to this function is that
  it will now only adjust the priority of a thread if it already has a
  time sharing priority, thus preserving any boosts from a tsleep() until
  the thread returns to userland.  Also, resetpriority() no longer calls
  maybe_resched() on each thread in the group. Instead, the code calling
  resetpriority() is responsible for calling resetpriority_thread() on
  any threads that need to be updated.
- schedcpu() now uses resetpriority_thread() instead of just calling
  sched_prio() directly after it updates a kse group's user priority.
- sched_clock() now uses resetpriority_thread() rather than writing
  directly to td_priority.
- sched_nice() now updates all the priorities of the threads after the
  group priority has been adjusted.

Discussed with:	bde
Reviewed by:	ups, jeffr
Tested on:	4bsd, ule
Tested on:	i386, alpha, sparc64
2004-12-30 20:52:44 +00:00
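The lending API described above, sketched from the commit text; the call sites
are illustrative:

	void	sched_lend_prio(struct thread *td, u_char prio);
	void	sched_unlend_prio(struct thread *td, u_char prio);

	/* Turnstile code, when a higher-priority waiter blocks on a lock: */
	sched_lend_prio(owner, waiter_prio);

	/*
	 * When lending should stop, pass the minimum priority still needed
	 * by the remaining waiters; the scheduler decides whether the owner
	 * reverts to its normal priority or keeps the specified one.
	 */
	sched_unlend_prio(owner, min_waiter_prio);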
John Baldwin
99b808f461 Whitespace fix. 2004-12-30 20:30:58 +00:00
John Baldwin
63710c4d35 Stop explicitly touching td_base_pri outside of the scheduler and simply
set a thread's priority via sched_prio() when that is the desired action.
The schedulers will start managing td_base_pri internally shortly.
2004-12-30 20:29:58 +00:00
John Baldwin
9e6c867ccc Call tty_close() at the very end of ttyclose() since otherwise NULL
dereferences can occur, because tty_close() may end up freeing the tty structure
if it drops the last reference to it.

Glanced at by:	phk
2004-12-30 19:24:49 +00:00
Robert Watson
b36aab857b Make the sysctls kern.ipc.msgmnb and kern.ipc.msgtql into tunables as
is the case for most other sysctls in the System V IPC message queue
implementation.

PR:		75541
Submitted by:	Sergiy Vyshnevetskiy <serg at vostok dot net>
MFC after:	2 weeks
2004-12-30 13:56:34 +00:00
David Xu
cc1000ac5b Make umtx_wait and umtx_wake work more like Linux futexes do; this is
more general than before.  It also lets me implement cancellation points
in the thread library.  Also, in theory, umtx_lock and umtx_unlock can
be implemented using umtx_wait and umtx_wake, and all atomic operations
can be done in userland without the kernel's casuptr() function.
2004-12-30 02:56:17 +00:00
Alan Cox
956d03da83 Eliminate (now) unnecessary acquisition and release of the global page
queues lock.
2004-12-29 04:49:10 +00:00
John Baldwin
83ae089aab - Up the WITNESS_COUNT macro from 200 to 1024 to support the growing number
  of lock types in the kernel.  This results in an increase of witness
  data usage from ~145k to ~280k on i386 for kernels with
  'options WITNESS'.
- Remove the unused witness malloc bucket.

Submitted by:	Michal Mertl mime at traveller dot cz (1)
2004-12-28 21:21:27 +00:00
Robert Watson
6ce8940626 Attempt to slightly refine the print out from "show alllocks" -- list
the process and thread numbers/names on the same line rather than on
separate lines, and print the thread pointer not just the tid.
2004-12-27 10:47:08 +00:00
Alexander Kabaev
aa6f98d12f Do not vput(9) an unlocked vnode and do not VREF it with the sole purpose
of vputting it back immediately.

Complained by:	DEBUG_VFS_LOCKS
2004-12-27 05:17:11 +00:00
Jeff Roberson
2ebf8eb132 - Unintentionally checked in a debugging panic. Remove that. 2004-12-26 23:21:48 +00:00
Jeff Roberson
36996b3b7c - Remove a 4BSD specific hack since this will work on ULE too. 2004-12-26 22:56:51 +00:00
Jeff Roberson
598b368d6c - Fix a long standing problem where an ithread would not honor sched_pin().
- Remove the sched_add wrapper that used sched_add_internal() as a backend.
   Its only purpose was to interpret one flag and turn it into an int.  Do
   the right thing and interpret the flag in sched_add() instead.
 - Pass the flag argument to sched_add() to kseq_runq_add() so that we can
   get the SRQ_PREEMPT optimization too.
 - Add a KEF_INTERNAL flag.  If KEF_INTERNAL is set we don't adjust the SLOT
   counts, otherwise the slot counts are adjusted as soon as we enter
   sched_add() or sched_rem() rather than when the thread is actually placed
   on the run queue.  This greatly simplifies the handling of slots.
 - Remove the explicit prevention of migration for ithreads on non-x86
   platforms.  This was never shown to have any real benefit.
 - Remove the unused class argument to KSE_CAN_MIGRATE().
 - Add ktr points for thread migration events.
 - Fix a long standing bug on platforms which don't initialize the cpu
   topology.  The ksg_maxid variable was never correctly set on these
   platforms which caused the long term load balancer to never inspect
   more than the first group or processor.
 - Fix another bug which prevented the long term load balancer from working
   properly.  If stathz != hz we can't expect sched_clock() to be called
   on the exact tick count that we're anticipating.
 - Rearrange sched_switch() a bit to reduce indentation levels.
2004-12-26 22:56:08 +00:00
Robert Watson
b6dd9ef2fe Add "show alllocks" command to DDB, which dumps a list of processes
and threads currently holding sleep mutexes (and spin mutexes for
curthread).  This can be quite useful in looking for a lock condition
summary for a system, as it avoids manually iterating through threads
and processes to find all the interesting locks.

NB: "alllocks" is up there with "lockedvnods" for a bad argument for
show.

MFC after:	2 weeks
2004-12-26 22:52:24 +00:00
Jeff Roberson
6a98702001 - Run sched_userret() after thread_userret(). Before, sched_userret() would
lower the priority of the returning thread to a user priority before
   calling into thread_userret() which would call wakeup() which in turn would
   cause the returning thread to eventually context switch rather than
   completing its slice.  Allowing this thread to complete its slice first
   yields a 15% performance improvement in super-smack on my dual opteron with
   4BSD.
2004-12-26 07:30:35 +00:00
Jeff Roberson
907bdbc288 - Wrap the thread count adjustment in sched_load_add() and sched_load_rem()
so that we may place some ktr entries nearby.
 - Define other KTR_SCHED tracepoints so that we may graph the operation
   of the scheduler.
2004-12-26 00:16:24 +00:00
Jeff Roberson
81d47d3f4b - Remove earlier KTR_ULE tracepoints.
- Define new KTR_SCHED points so that we can graph the operation of the
   scheduler.
2004-12-26 00:15:33 +00:00
Jeff Roberson
85da7a569b - Define KTR points for KTR_SCHED. 2004-12-26 00:14:21 +00:00
David Xu
c180db2bce Make _umtx_op() a more general interface: the final parameter needn't be
a timespec pointer; every parameter will be interpreted according to its opcode.
2004-12-25 13:02:50 +00:00
David Xu
8b37fbabb4 1. Introduce umtx_owner to get the owner of a umtx.
2. Add a const qualifier to umtx_timedlock and umtx_timedwait.
3. Add missing brackets in umtx do_unlock_and_wait.
2004-12-25 12:49:35 +00:00
David Xu
3dd213f160 Add umtxq_lock/unlock around umtx_signal, fix debug kernel compilation,
and make umtx_lock return EINTR when it would return ERESTART; this gives
userland a chance to back off in the mtx lock code when needed.
2004-12-24 11:59:20 +00:00
David Xu
a08c214a72 1. Fix a race condition between umtx lock and unlock; heavy testing
on SMP can expose the bug.
2. Let umtx_wake return the number of threads that have been woken.
2004-12-24 11:30:55 +00:00
Robert Watson
0fddf92d72 Assert the sem lock in sem_ref() and sem_rel(), as it is required to
safely manipulate the reference count.
2004-12-23 02:22:47 +00:00
Robert Watson
38e6a58c77 Remove temporary debugging printf that was used to detect the presence
of a race that had previously caused a panic in order to determine if
the fix was for the right problem.  It was.

MFC after:	2 weeks
2004-12-23 01:19:27 +00:00
Robert Watson
1ef121cf6b In sonewconn(), the s/if/while/ change to wait for room at the tail of
the accept queue is a feature, not a bug/issue, so remove the XXXRW
from the comment.
2004-12-23 01:16:21 +00:00
Robert Watson
ba65391172 Remove an XXXRW indicating atomic operations might be used as a
substitute for a global mutex protecting the socket count and
generation number.

The observation that soreceive_rcvoob() can't return an mbuf
chain is a property, not a bug, so remove the XXXRW.

In sorflush, s/existing/previous/ for code when describing prior
behavior.

For SO_LINGER socket option retrieval, remove an XXXRW about why
we hold the mutex: this is correct and not dubious.

MFC after:	2 weeks
2004-12-23 01:07:12 +00:00
Robert Watson
81b5dbecd4 In soalloc(), simplify the mac_init_socket() handling to remove
unnecessary use of a global variable and simplify the return case.
While here, use ()'s around return values.

In sodealloc(), remove a comment about why we bump the gencnt and
decrement the socket count separately.  It doesn't add
substantially to the reading, and clutters the function.

MFC after:	2 weeks
2004-12-23 00:59:43 +00:00
Alan Cox
7abe2ac214 Add send buffer locking to uipc_send(). Without this locking a race can
occur between a reader and a writer that results in a panic upon close,
e.g.,
	"panic: sbflush_locked: cc 4 || mb 0xffffff0052afa400 || mbcnt 0"

Reviewed by: rwatson@
MFC after: 2 weeks
2004-12-22 20:28:46 +00:00
Poul-Henning Kamp
40b5a6f2c6 Include uio.h
Check O_NONBLOCK instead of IO_NDELAY
Don't include vnode.h
2004-12-22 17:37:14 +00:00
Poul-Henning Kamp
72e8dfe5a0 Hide/remove various printfs, now that root mounting doesn't seem to explode
on people.
2004-12-20 21:59:25 +00:00
Poul-Henning Kamp
118253ca24 fix a misleading sleep identifier. 2004-12-20 21:38:13 +00:00
Poul-Henning Kamp
e87047b437 We can only ever get to vgonechrl() from a devfs vnode, so we do not
need to reassign the vp->v_op to devfs_specops, we know that is the
value already.

Make devfs_specops private to devfs.
2004-12-20 21:34:29 +00:00
David Xu
839f811c6a 1. msleep returns EWOULDBLOCK, not ETIMEDOUT; use EWOULDBLOCK instead.
2. Eliminate a possible lock leak in the timed wait loop.
2004-12-18 13:43:16 +00:00
David Xu
50586e8b6b 1. Make umtx shareable between processes: two or more processes
   call mmap() to create a shared space and then initialize a umtx on it;
   after that, each thread in the different processes can use the umtx the
   same way threads in the same process do.
2. Introduce a new syscall, _umtx_op, to support timed lock and condition
   variable semantics.  Also, the original umtx_lock and umtx_unlock inline
   functions are now reimplemented using _umtx_op; _umtx_op can use an
   arbitrary id, not just a thread id.
2004-12-18 12:52:44 +00:00
Sam Leffler
a37c415e66 fix m_append for the case where additional mbufs are required 2004-12-15 19:04:07 +00:00
Poul-Henning Kamp
662d80dc23 Fix a deadlock I introduced this morning.
Mostly from:	tegge
2004-12-14 20:48:40 +00:00
Jeff Roberson
7842f65e7f - Garbage collect several unused members of struct kse and struct ksegrp.
As best as I can tell, some of these were never used.
2004-12-14 10:53:55 +00:00
Jeff Roberson
8ffb8f5558 - In kseq_choose(), don't recalculate slice values for processes with a
nice of 0.  Doing so can cause an infinite loop because they should be
   running, but a nice -20 process could prevent them from doing so.
 - Add a new flag KEF_PRIOELEV to flag a thread that has had its priority
   elevated due to priority propagation.  If a thread has had its priority
   elevated, we assume that it must go on the current queue and it must
   get a slice.
 - In sched_userret() if our priority was elevated and we shouldn't have
   a timeslice, yield here until we should.

Found/Tested by:	glebius
2004-12-14 10:34:27 +00:00
Poul-Henning Kamp
d986dbb448 Add a new kind of reference count (fd_holdcnt) to struct filedesc
which holds on to just the data structure and the mutex.  (The
existing refcount (fd_refcnt) holds onto the open files in the
descriptor.)

The fd_holdcnt is protected by fdesc_mtx, fd_refcnt by FILEDESC_LOCK.

Add fdhold(struct proc *) which gets a hold on the filedescriptors of
the specified proc.

Add fddrop(struct filedesc *) which drops the fd_holdcnt and if zero
destroys the mutex and frees the memory.

Initialize the fd_holdcnt to one in fdinit().  Normal operations on
the filedesc structure will not change it.

In fdfree() use fddrop() to dispose of the mutex and structure.  Hold
the FILEDESC_LOCK() until we have cleaned out the contents and carefully
set the fields to null values during cleanup.

Use fdhold()/fddrop() in mountcheckdirs() and sysctl_kern_file().
2004-12-14 09:09:51 +00:00
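A hedged sketch of the hold/drop pair the commit describes, with locking as
stated (fd_holdcnt under fdesc_mtx, structure freed on the last drop):

	struct filedesc *
	fdhold(struct proc *p)
	{
		struct filedesc *fdp;

		mtx_lock(&fdesc_mtx);
		fdp = p->p_fd;
		if (fdp != NULL)
			fdp->fd_holdcnt++;
		mtx_unlock(&fdesc_mtx);
		return (fdp);
	}

	void
	fddrop(struct filedesc *fdp)
	{
		int i;

		mtx_lock(&fdesc_mtx);
		i = --fdp->fd_holdcnt;
		mtx_unlock(&fdesc_mtx);
		if (i > 0)
			return;
		/* Last hold gone: destroy the mutex and free the memory. */
		mtx_destroy(&fdp->fd_mtx);
		free(fdp, M_FILEDESC);
	}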
Poul-Henning Kamp
30abaa53df Make fdesc_mtx private to kern_descrip.c now that the flock has come home. 2004-12-14 08:44:51 +00:00
Poul-Henning Kamp
12b18fdab4 Move the checkdirs() function from vfs_mount.c to kern_descrip.c and
call it mountcheckdirs().
2004-12-14 08:23:18 +00:00
Poul-Henning Kamp
c113083c5a Add new function fdunshare() which encapsulates the necessary light magic
for ensuring that a process' filedesc is not shared with anybody.

Use it in the two places which previously had private implementations.

This collects all fd_refcnt handling in kern_descrip.c
2004-12-14 07:20:03 +00:00
Jeff Roberson
3ef6ac3361 - If delivering a signal will result in killing a process that has a
nice value above 0, set it to 0 so that it may proceed with haste.
   This is especially important on ULE, where adjusting the priority
   does not guarantee that a thread will be granted a greater time slice.
2004-12-13 16:45:57 +00:00
Jeff Roberson
2d59a44dc0 - Take up a 'slot' while we're on the assigned queue, waiting to be
posted to another processor.  Otherwise, kern_switch() gets confused
   and tries to sched_add(NULL).
2004-12-13 13:09:33 +00:00
Pawel Jakub Dawidek
bf4843166f Add bioq_insert_head() function.
OK'd by:	phk
2004-12-13 12:57:21 +00:00
Alan Cox
db24060c25 Correct the handling of two unusual cases by the zero-copy receive path,
specifically, vm_pgmoveco():
1. If vm_pgmoveco() sleeps on a busy page, it must redo the look up
because the page may have been freed.
2. If the receive buffer is copy-on-write due to, for example, a fork,
then although the first vm object in the shadow chain may not contain
a page, there may still be one from a backing object that is mapped.
Thus, a pmap_remove() is required so that the new page, rather than the
backing object's page, is seen by the application.

Also, add some comments to vm_pgmoveco() and update some assertions.

Tested by: ken@
2004-12-13 06:24:14 +00:00
Poul-Henning Kamp
1ab58cc2df Copy the entire stats structure.  Let the compiler decide how. 2004-12-11 22:13:02 +00:00
Poul-Henning Kamp
e40da1f149 Fix whitespace.
Spotted by:	njl
2004-12-11 20:41:32 +00:00
Poul-Henning Kamp
494ea31a7d Remove the /dev/dev -> / symlink after we are done with it. 2004-12-11 12:48:37 +00:00
Alan Cox
c73e3e9223 Remove unneeded code from the zero-copy receive path.
Discussed with: gallatin@
Tested by: ken@
2004-12-10 04:49:13 +00:00
Max Laier
f8aabcb680 Start the protocol timeouts only after all domains have been initialized
completely.  For some reason (that I am still curious about) we started to
no longer manage to finish the initialization before the timeouts run for
the first time, leading to panics when using an uninitialized mutex, etc.

The root of this problem is that we currently first link a domain to the
domains list and only later initialize the domain's protocols. This should
be reworked in the future, but with the current API it is not possible in
all situations. We settle with this lazy fix for now.

Tested by:	gnn, ru, myself
2004-12-09 11:47:30 +00:00
Sam Leffler
4873d1754f add m_append utility function to be used in forthcoming changes 2004-12-08 05:42:02 +00:00
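A hedged usage sketch of the new utility; m_append(9) returns nonzero on
success and 0 if an additional mbuf could not be allocated (hdr is a
hypothetical payload):

	if (m_append(m, sizeof(hdr), (c_caddr_t)&hdr) == 0) {
		m_freem(m);
		return (ENOBUFS);
	}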
Alan Cox
1c4dbedac4 Tidy up the zero-copy receive path: Remove an unneeded argument to
uiomoveco() and userspaceco().
2004-12-08 05:25:08 +00:00