Commit Graph

89 Commits

Alfred Perlstein
eb20931127 Attempt to fix up select(2) and poll(2); this should fix some races with
other threads as well as speed up the interfaces.

To fix the race and accomplish the speedup, remove selholddrop and
pollholddrop.  The entire concept is somewhat bogus because holding
the individual struct file pointers offers us no guarantee that
another thread context won't close them on us, thereby removing our
access to our own reference.

Selholddrop and pollholddrop also would do multiple locks and unlocks
of mutexes _per-file_ in the fd arrays to be scanned; this needed to
be sped up.

Instead of using selholddrop and pollholddrop, simply hold the
filedesc lock over the selscan and pollscan functions.  This should
protect us against close(2)'s on the files and reduce the multiple
lock/unlock pairs per fd to a single lock over the filedesc.
2002-01-29 22:54:19 +00:00
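
A minimal sketch of the locking pattern described above. The mutex field
name fd_mtx and the helper do_scan_one() are illustrative assumptions, not
the actual sys_generic.c code:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/filedesc.h>

    static int do_scan_one(struct thread *, struct file *); /* hypothetical */

    /*
     * Hold the filedesc mutex across the whole scan instead of taking
     * and dropping a per-file hold for every descriptor.
     */
    static int
    selscan_sketch(struct thread *td, int nfd)
    {
            struct filedesc *fdp = td->td_proc->p_fd;
            int fd, n = 0;

            mtx_lock(&fdp->fd_mtx);         /* assumed field name */
            for (fd = 0; fd < nfd; fd++) {
                    /*
                     * A close(2) from another thread blocks on the
                     * filedesc lock, so fd_ofiles[fd] cannot be torn
                     * down underneath us during the scan.
                     */
                    if (fdp->fd_ofiles[fd] != NULL)
                            n += do_scan_one(td, fdp->fd_ofiles[fd]);
            }
            mtx_unlock(&fdp->fd_mtx);
            return (n);
    }
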
Alfred Perlstein
97fa4397d3 make pread use fget_read instead of holdfp. 2002-01-23 08:22:59 +00:00
Alfred Perlstein
aa11a498ff undo a bit of the Giant pushdown.
fdrop isn't SMP safe as it may call into the file's close routine, which
definitely is not SMP safe right now, so we hold Giant over calls to
fdrop now.
2002-01-19 01:03:54 +00:00
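
A hedged sketch of the pattern this commit describes: Giant brackets the
fdrop() call because the file's close routine may run from inside it. The
two-argument fdrop(fp, td) form and the surrounding context are assumed:

    /* Giant must cover fdrop() until the fo_close paths are MP safe. */
    mtx_lock(&Giant);
    /* ... finish using fp, whose refcount we bumped earlier ... */
    fdrop(fp, td);          /* may call the file's close routine */
    mtx_unlock(&Giant);
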
Alfred Perlstein
b5c93a560d Fix Giant handling in pwrite(2); I forgot to release it when finishing
the syscall.
2002-01-16 21:33:41 +00:00
Alfred Perlstein
a4db49537b Replace ffind_* with fget calls.
Make fget MPsafe.

Make fgetvp and fgetsock use the fget subsystem to reduce code bloat.

Push Giant down in fpathconf().
2002-01-14 00:13:45 +00:00
Alfred Perlstein
426da3bcfb SMP Lock struct file, filedesc and the global file list.
Seigo Tanimura (tanimura) posted the initial delta.

I've polished it quite a bit reducing the need for locking and
adapting it for KSE.

Locks:

1 mutex in each filedesc
   protects all the fields.
   protects "struct file" initialization, while a struct file
     is being changed from &badfileops -> &pipeops or something
     the filedesc should be locked.

1 mutex in each struct file
   protects the refcount fields.
   doesn't protect anything else.
   the flags used for garbage collection have been moved to
     f_gcflag, which was the FILLER short; this doesn't need
     locking because the garbage collection is a single-threaded
     container.
  could likely be made to use a pool mutex.

1 sx lock for the global filelist.

struct file *	fhold(struct file *fp);
        /* increments reference count on a file */

struct file *	fhold_locked(struct file *fp);
        /* like fhold but expects file to be locked */

struct file *	ffind_hold(struct thread *, int fd);
        /* finds the struct file in thread, adds one reference and
                returns it unlocked */

struct file *	ffind_lock(struct thread *, int fd);
        /* ffind_hold, but returns file locked */

I still have to smp-safe the fget cruft; I'll get to that asap.
2002-01-13 11:58:06 +00:00
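
A short usage sketch of the API listed above, under the locking rules it
describes; the error handling and the two-argument fdrop() are illustrative:

    struct file *fp;

    fp = ffind_hold(td, fd);        /* +1 reference, returned unlocked */
    if (fp == NULL)
            return (EBADF);
    /* ... use fp; our reference keeps it from being freed ... */
    fdrop(fp, td);                  /* drop our reference */
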
Matthew Dillon
b064d43d8f remove holdfp()
Replace uses of holdfp() with fget*() or fgetvp*() calls as appropriate

introduce fget(), fget_read(), fget_write() - these functions will take
a thread and file descriptor and return a file pointer with its ref
count bumped.

introduce fgetvp(), fgetvp_read(), fgetvp_write() - these functions will
take a thread and file descriptor and return a vref()'d vnode.

*_read() requires that the file pointer be FREAD, *_write that it be
FWRITE.

This continues the cleanup of struct filedesc and struct file access
routines which, when we are all through with it, will allow us to then
make the API calls MP safe and be able to move Giant down into the fo_*
functions.
2001-11-14 06:30:36 +00:00
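
A minimal sketch of a pread-style path using the new helpers. The
error-returning three-argument fget_read() and the dofileread() argument
list are assumptions based on the commit text:

    static int
    pread_sketch(struct thread *td, int fd, void *buf, size_t nbyte,
        off_t offset)
    {
            struct file *fp;
            int error;

            /* fget_read() fails unless the descriptor is open for reading. */
            error = fget_read(td, fd, &fp);
            if (error != 0)
                    return (error);
            error = dofileread(td, fp, buf, nbyte, offset, FOF_OFFSET);
            fdrop(fp, td);          /* release the reference fget_read() took */
            return (error);
    }
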
John Baldwin
fea2ab833e The P_SELECT flag was moved from p->p_flag to td->td_flags, but p_flag
was locked by the proc lock and td_flags is locked by the sched_lock.
The places that read, set, and cleared TDF_SELECT weren't updated, so they
read and modified td_flags w/o holding the sched_lock, meaning that they
could corrupt the per-thread flags field.  As an immediate band-aid,
grab sched_lock while reading and manipulating td_flags in relation to
TDF_SELECT.  This will probably be cleaned up some later on.
2001-09-21 22:06:22 +00:00
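
The band-aid described above amounts to bracketing every TDF_SELECT
manipulation of td_flags with the scheduler spin lock, roughly:

    mtx_lock_spin(&sched_lock);
    td->td_flags |= TDF_SELECT;     /* set, test, or clear under sched_lock */
    mtx_unlock_spin(&sched_lock);
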
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Matthew Dillon
ad2edad94e Giant Pushdown:
read() pread() readv() write() pwrite() writev() ioctl() select()
    poll() openbsd_poll()
2001-09-01 19:34:23 +00:00
Seigo Tanimura
1b36970495 Back out scanning file descriptors while holding a process lock.
selrecord() requires allproc sx in pfind(), resulting in lock order
reversal between allproc and a process lock.
2001-05-15 10:19:57 +00:00
Seigo Tanimura
265fc98f36 - Convert msleep(9) in select(2) and poll(2) to cv_*wait*(9).
- Since polling should not involve sleeping, keep holding a
  process lock upon scanning file descriptors.

- Hold a reference to every file descriptor prior to entering
  polling loop in order to avoid lock order reversal between
  lockmgr and p_mtx upon calling fdrop() in fo_poll().
  (NOTE: this work has not been done for netncp and netsmb
  yet because a socket itself has no reference counts.)

Reviewed by:	jhb
2001-05-14 05:26:48 +00:00
John Baldwin
33a9ed9d0e Change the pfind() and zpfind() functions to lock the process that they
find before releasing the allproc lock and returning.

Reviewed by:	-smp, dfr, jake
2001-04-24 00:51:53 +00:00
John Baldwin
19eb87d22a Grab the process lock while calling psignal and before calling psignal. 2001-03-07 03:37:06 +00:00
Jonathan Lemon
ea0237ed11 Correctly declare variables as u_int rather than doing typecasts.
Kill some register declarations while I'm here.

Submitted by:  bde (1)
2001-02-27 15:11:31 +00:00
Jonathan Lemon
0b7088c4d0 Cast nfds to u_int before range checking it in order to catch negative
values.

PR:	25393
2001-02-27 00:50:20 +00:00
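
The cast works because a negative int becomes a huge unsigned value, so one
comparison rejects both oversized and negative counts; a sketch with an
illustrative limit name:

    /* A negative nfds wraps to a large u_int and fails this test too. */
    if ((u_int)nfds > fd_limit)     /* fd_limit is illustrative */
            return (EINVAL);
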
Peter Wemm
2bd5ac330f poll(2) array limits (take 2) - after some input from bde. 2001-02-09 08:10:22 +00:00
Bosko Milekic
9ed346bab0 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
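
In practice the interface change above reads like this (the non-sched_lock
mutex name is illustrative):

    /* Before: one enter/exit pair, with the type passed each time. */
    mtx_enter(&foo_mtx, MTX_DEF);
    mtx_exit(&foo_mtx, MTX_DEF);
    mtx_enter(&sched_lock, MTX_SPIN);
    mtx_exit(&sched_lock, MTX_SPIN);

    /* After: the lock type is encoded in the call itself. */
    mtx_lock(&foo_mtx);
    mtx_unlock(&foo_mtx);
    mtx_lock_spin(&sched_lock);
    mtx_unlock_spin(&sched_lock);

    /* Flags, when still needed, go through the _flags wrappers. */
    mtx_lock_flags(&foo_mtx, MTX_QUIET);
    mtx_unlock_flags(&foo_mtx, MTX_QUIET);
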
Peter Wemm
89b716473e The code I picked up from NetBSD in '97 had a nasty bug. It limited
the index of the pollfd array to the number of fd's currently open, not
the maximum number of fd's.  ie: if you had 0,1,2 open, you could not
use pollfd slots higher than 20.  The specs say we only have to support
OPEN_MAX [64] entries but we allow way more than that.
2001-02-07 23:28:01 +00:00
John Baldwin
e04ac2fe6b - Catch up to proc flag changes.
- Add proc locking for selwakeup() and selrecord().
2001-01-24 11:12:37 +00:00
Garrett Wollman
0a2c3d48c6 select() DKI is now in <sys/selinfo.h>. 2001-01-09 04:33:49 +00:00
Matthew Dillon
a41ce5d30b Only call bwillwrite() for vnodes. Do not penalize devices or pipes. 2000-12-07 23:45:57 +00:00
Matthew Dillon
9440653d07 Add necessary bwillwrite() in writev() entry point.
Deal with excessive dirty buffers when msync() syncs non-contiguous
dirty buffers by checking for the case in UFS *before* checking for
clusterability.
2000-12-06 20:55:09 +00:00
Alfred Perlstein
c6ab5768aa only call bwillwrite() to stall on IO when dealing with VNODEs; otherwise
we will stall on non-disk IO for things like fifos and sockets
2000-11-30 20:23:14 +00:00
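
A sketch of the guard the two bwillwrite() commits above describe:

    /*
     * Only vnode-backed descriptors should throttle on dirty buffers;
     * pipes, fifos and sockets must not stall here.
     */
    if (fp->f_type == DTYPE_VNODE)
            bwillwrite();
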
Jonathan Lemon
4a476efa51 Protect p_wchan with sched_lock in selwakeup(). 2000-11-21 20:22:34 +00:00
Matthew Dillon
279d722604 This patchset fixes a large number of file descriptor race conditions.
Pre-rfork code assumed inherent locking of a process's file descriptor
    array.  However, with the advent of rfork() the file descriptor table
    could be shared between processes.  This patch closes over a dozen
    serious race conditions related to one thread manipulating the table
    (e.g. closing or dup()ing a descriptor) while another is blocked in
    an open(), close(), fcntl(), read(), write(), etc...

PR: kern/11629
Discussed with: Alexander Viro <viro@math.psu.edu>
2000-11-18 21:01:04 +00:00
Peter Wemm
b31ae1adc5 Fix a warning that has been annoying me for some time:
"kern/sys_generic.c:358: warning: cast discards qualifiers from pointer
   target type"
The idea for using the uintptr_t intermediate cast for de-constifying
a pointer was hinted at by bde some time ago.
2000-07-28 22:17:42 +00:00
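
The cast idiom mentioned above, shown on an illustrative iovec assignment:

    const void *buf;        /* user-supplied, const-qualified */
    struct iovec aiov;

    /*
     * Going through uintptr_t strips the qualifier without tripping
     * -Wcast-qual the way a direct pointer-to-pointer cast would.
     */
    aiov.iov_base = (void *)(uintptr_t)buf;
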
Brian Feldman
3c89e357f0 Distinguish between whether ktracing was enabled before an IO
operation or after it.  If the ktrace operation was enabled while the
process was blocked doing IO, the race would allow it to pass down
invalid (uninitialized) data and panic later down the call stack.
2000-07-27 03:45:18 +00:00
John Baldwin
9c386f6b7d For infinite timeouts, set both the tv_sec and tv_usec fields to zero in
poll() and select().

Noticed by:	Wesley Morgan <morganw@chemicals.tacorp.com>
2000-07-13 02:12:25 +00:00
John Baldwin
4da144c091 Fix a very obscure bug in select() and poll() where the timeout would
never expire if poll() or select() was called before the system had been
in multiuser for 1 second.  This was caused by only checking to see if
tv_sec was zero rather than checking both tv_sec and tv_usec.
2000-07-12 22:46:40 +00:00
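
The fix boils down to treating a timeout as armed whenever either field of
the timeval is non-zero; a sketch with an illustrative variable name:

    struct timeval atv;     /* the computed timeout for the call */
    int timo_set;

    /* Wrong: a real timeout can have atv.tv_sec == 0 while the system
     * clock is still in its first second, and then never expires. */
    timo_set = (atv.tv_sec != 0);

    /* Right: only an all-zero timeval means "wait forever". */
    timo_set = (atv.tv_sec != 0 || atv.tv_usec != 0);
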
Brian Feldman
7ceba2d755 Remove two micro-pessimizations I made. Bruce is teaching me well :)
KTRPOINT(p, KTR_GENIO) is more uncommon than error == 0, so it should
be first in the && statement.
2000-07-07 22:11:37 +00:00
Brian Feldman
42ebfbf227 Modify ktrace's general I/O tracing, ktrgenio(), to use a struct uio *
instead of a struct iovec * array and int len.  Get rid of stupidly trying
to allocate all of the memory and copyin()ing the entire iovec[], and
instead just do the proper VOP_WRITE() in ktrwrite() using a copy of
the struct uio that the syscall originally used.

This solves the DoS which could easily be performed; to work around the
DoS, one could also remove "options KTRACE" from the kernel.  This is
a very strong MFC candidate for 4.1.

Found by:	art@OpenBSD.org
2000-07-02 08:08:09 +00:00
Alfred Perlstein
8757e5bbc5 unstatic getfp() so that other subsystems can use it.
make sendfile() use it.

Approved by: dg
2000-06-12 18:06:12 +00:00
Matthew Dillon
d2ba455c2c Some ioctl routines assume that the ioctl buffer is aligned, but a
char[] declaration makes no such guarantee.  A union is used to force
    alignment of the char buffer.
2000-05-09 17:43:21 +00:00
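
The alignment trick described above, sketched with an assumed buffer size;
the real declaration in sys_generic.c may differ in detail:

    #define STK_PARAMS 128          /* assumed on-stack ioctl buffer size */

    union {
            char    stkbuf[STK_PARAMS];
            long    align;          /* forces worst-case scalar alignment */
    } ubuf;
    caddr_t data = ubuf.stkbuf;     /* now safe to use for ioctl arguments */
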
Peter Wemm
f082218c18 Fix select(2) for the Alpha. (!!) It was never returning true for
fd's in the range of 32-63, 96-127 etc.  The first problem was the
FD_*() macros were shifting a 32 bit integer "1" left by more than
32 bits.  The same problem happened in selscan().  ffs() also takes
an int argument and causes failure.  For cases where int == long
(ie: the usual case for x86, but not always as gcc can have long
being a 64 bit quantity) ffs() could be used.

Reported by:	Marian Stagarescu <marian@bile.skycache.com>
Reviewed by:	dfr, gallatin (sys/types.h only)
Approved by:	jkh
2000-02-20 13:36:26 +00:00
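
The heart of the bug is shifting a 32-bit constant by up to NFDBITS-1 bits;
a sketch of the broken and corrected bit test, using the traditional fd_set
field and macro names:

    fd_set *fds;            /* one of the in/out/ex descriptor sets */
    fd_mask set;
    int fd;

    /* Wrong on the Alpha: "1" is a 32-bit int, so the shift is undefined
     * once fd % NFDBITS reaches 32, and fds 32-63, 96-127, ... in each
     * mask word never test true. */
    set = fds->fds_bits[fd / NFDBITS] & (1 << (fd % NFDBITS));

    /* Right: widen the constant to the mask type before shifting. */
    set = fds->fds_bits[fd / NFDBITS] & ((fd_mask)1 << (fd % NFDBITS));
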
Jason Evans
bfbbc4aa44 Add aio_waitcomplete(). Make aio work correctly for socket descriptors.
Make gratuitous style(9) fixes (me, not the submitter) to make the aio
code more readable.

PR:		kern/12053
Submitted by:	Chris Sedore <cmsedore@maxwell.syr.edu>
2000-01-14 02:53:29 +00:00
Peter Wemm
8cb96f20f8 Export the nselcoll counter via the kern.nselcoll sysctl so we can see
just how bad it gets in various situations.

Reminded by:  adrian
2000-01-05 19:40:17 +00:00
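
Exporting a counter that already exists is a one-liner with the sysctl
macros; a sketch of what the export presumably looks like (the description
string is made up):

    #include <sys/sysctl.h>

    extern int nselcoll;            /* select collision counter */

    SYSCTL_INT(_kern, OID_AUTO, nselcoll, CTLFLAG_RD, &nselcoll, 0,
        "Number of select collisions");
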
Brian Feldman
d817743797 Missed the second argument of fdrop().
Submitted by:	jhay
1999-10-14 10:50:06 +00:00
Brian Feldman
1aa3e7ddd0 Fix a race condition with shared fd tables and writev(). It's
still not safe to consider file table sharing secure.
Submitted by:	Ville-Pertti Keinonen <will@iki.fi>
1999-10-14 05:37:52 +00:00
Peter Wemm
d1f088dab5 Trim unused options (or #ifdef for undoc options).
Submitted by:	phk
1999-10-11 15:19:12 +00:00
Brian Feldman
13ccadd4b0 This is what was "fdfix2.patch," a fix for fd sharing. It's pretty
far-reaching in fd-land, so you'll want to consult the code for
changes.  The biggest change is that now, you don't use
	fp->f_ops->fo_foo(fp, bar)
but instead
	fo_foo(fp, bar),
which increments and decrements the fp refcount upon entry and exit.
Two new calls, fhold() and fdrop(), are provided.  Each does what it
seems like it should, and if fdrop() brings the refcount to zero, the
fd is freed as well.

Thanks to peter ("to hell with it, it looks ok to me.") for his review.
Thanks to msmith for keeping me from putting locks everywhere :)

Reviewed by:	peter
1999-09-19 17:00:25 +00:00
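
A sketch of the wrapper pattern described above, written as an inline for
one operation; the exact fo_read argument list of that era and the use of
macros in the real header are assumptions:

    /*
     * fo_read()-style wrapper: hold a reference for the duration of the
     * call so another process sharing the fd table cannot free fp out
     * from under us.
     */
    static __inline int
    fo_read_sketch(struct file *fp, struct uio *uio, struct ucred *cred,
        int flags, struct proc *p)
    {
            int error;

            fhold(fp);                      /* +1 reference for the call */
            error = (*fp->f_ops->fo_read)(fp, uio, cred, flags, p);
            fdrop(fp, p);                   /* may free fp on the last ref */
            return (error);
    }
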
Peter Wemm
c3aac50f28 $Id$ -> $FreeBSD$ 1999-08-28 01:08:13 +00:00
Dmitrij Tejblum
8fe387ab84 Add a standard padding argument to the pread and pwrite syscalls. That should
make them NetBSD compatible.

Add a flags parameter to fo_read and fo_write. (The only flag, FOF_OFFSET, means
that the offset is set in the struct uio).

Factor out some common code from read/pread/write/pwrite syscalls.
1999-04-04 21:41:28 +00:00
Alan Cox
4160ccd978 Added pread and pwrite. These functions are defined by the X/Open
Threads Extension.  (Note: We use the same syscall numbers as NetBSD.)

Submitted by:	John Plevyak <jplevyak@inktomi.com>
1999-03-27 21:16:58 +00:00
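
For reference, the userland prototypes defined by the X/Open Threads
Extension are:

    #include <unistd.h>

    ssize_t pread(int fd, void *buf, size_t nbyte, off_t offset);
    ssize_t pwrite(int fd, const void *buf, size_t nbyte, off_t offset);

    /* Like read()/write() at an explicit offset, without moving the
     * descriptor's file position. */
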
Bruce Evans
425c50cf51 Removed a bogus cast to c_caddr_t. This is part of terminating
c_caddr_t with extreme prejudice.  Here the point of the original
cast to caddr_t was to break the warning about the const mismatch
between write(2)'s `const void *buf' and `struct uio's `char
*iov_base' (previous bitrot gave a gratuitous dependency on caddr_t
being char *).  Compiling with -Wcast-qual made the cast a full
no-op.

This change has no effect on the warning for discarding `const'
on assignment to iov_base.  The warning should not be fixed by
splitting `struct iovec' into a non-const version for read()
and a const version for write(), since correct const poisoning
would affect all pointers to i/o addresses.  Const'ness should
probably be forgotten by not declaring it in syscalls.master.
1999-01-29 08:10:35 +00:00
Matthew Dillon
d254af07a1 Fix warnings in preparation for adding -Wall -Wcast-qual to the
kernel compile
1999-01-27 21:50:00 +00:00
Jordan K. Hubbard
337c96916f poll(2) sets POLLNVAL for descriptors passed in that are less than
0.  This makes it difficult to do efficient manipulation of the
struct pollfd since you can't leave a slot empty.

PR:		8599
Submitted-by:	Marc Slemko <marcs@znep.com>
1998-12-10 01:53:26 +00:00
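
Assuming the change adopts the usual convention that a negative descriptor
marks an ignored slot (revents comes back 0 instead of POLLNVAL), callers
can park unused entries like this:

    #include <poll.h>

    int sock;                       /* some already-open descriptor (assumed) */
    int nready;
    struct pollfd pfd[2];

    pfd[0].fd = sock;
    pfd[0].events = POLLIN;
    pfd[1].fd = -1;                 /* empty slot: ignored, revents == 0 */
    pfd[1].events = 0;

    nready = poll(pfd, 2, -1);      /* the empty slot no longer trips POLLNVAL */
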
Don Lewis
831d27a9f5 Installed the second patch attached to kern/7899 with some changes suggested
by bde, a few other tweaks to get the patch to apply cleanly again and
some improvements to the comments.

This change closes some fairly minor security holes associated with
F_SETOWN, fixes a few bugs, and removes some limitations that F_SETOWN
had on tty devices.  For more details, see the description on the PR.

Because this patch increases the size of the proc and pgrp structures,
it is necessary to re-install the includes and recompile libkvm,
the vinum lkm, fstat, gcore, gdb, ipfilter, ps, top, and w.

PR:		kern/7899
Reviewed by:	bde, elvind
1998-11-11 10:04:13 +00:00
Bruce Evans
134e06fe71 Fixed bogotification of pseudocode for syscall args by rev.1.53 of
syscalls.master.
1998-09-05 14:30:11 +00:00
Doug Rabson
069e9bc1b4 Change various syscalls to use size_t arguments instead of u_int.
Add some overflow checks to read/write (from bde).

Change all modifications to vm_page::flags, vm_page::busy, vm_object::flags
and vm_object::paging_in_progress to use operations which are not
interruptible.

Reviewed by: Bruce Evans <bde@zeta.org.au>
1998-08-24 08:39:39 +00:00