Commit Graph

113 Commits

Author SHA1 Message Date
Warner Losh
d167cf6f3a /* -> /*- for copyright notices, minor format tweaks as necessary 2005-01-06 18:10:42 +00:00
Poul-Henning Kamp
5cb471d04d Don't forget to bypass vnodes in corner cases.
Found by:	kkenn and ports/shell/zsh
Thanks to:	jeffr
2004-12-13 10:07:57 +00:00
Poul-Henning Kamp
dce357b112 Explicitly panic vop_read/vop_write on fifos. 2004-12-13 07:07:50 +00:00
Poul-Henning Kamp
aec0fb7b40 Back when VOP_* was introduced, we did not have new-style struct
initializations but we did have lofty goals and big ideals.

Adjust to more contemporary circumstances and gain type checking.

	Replace the entire vop_t frobbing thing with properly typed
	structures.  The only casualty is that we cannot add a new
	VOP_ method with a loadable module.  History has not given
	us reason to believe this would ever be feasible in the
	first place.

	Eliminate in toto VOCALL(), vop_t, VNODEOP_SET() etc.

	Give coda correct prototypes and function definitions for
	all vop_()s.

	Generate a bit more data from the vnode_if.src file:  a
	struct vop_vector and prototype typedefs for all vop methods.

	Add a new vop_bypass() and make vop_default be a pointer
	to another struct vop_vector.

	Remove a lot of vfs_init since vop_vector is ready to use
	from the compiler.

	Cast various vop_mumble() to void * with uppercase name,
	for instance VOP_PANIC, VOP_NULL etc.

	Implement VCALL() by making vdesc_offset the offsetof() of the
	relevant function pointer in vop_vector.  This is disgusting,
	but since the code is generated by a script it is comparatively
	safe.  The alternative for nullfs etc. would be much worse.

	Fix up all vnode method vectors to remove casts so they
	become typesafe.  (The bulk of this is generated by scripts)
2004-12-01 23:16:38 +00:00
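
A minimal standalone sketch of the idea described above, assuming illustrative layouts (not the actual FreeBSD definitions): a typed vop_vector whose unfilled slots fall through to vop_bypass or to another vector via vop_default, and a VCALL()-style dispatcher that finds the method through the offsetof() recorded in its vnodeop_desc.

    #include <stddef.h>

    struct vop_read_args  { void *a_vp; };
    struct vop_write_args { void *a_vp; };

    typedef int vop_read_t(struct vop_read_args *);
    typedef int vop_write_t(struct vop_write_args *);
    typedef int vop_bypass_t(void *);

    struct vop_vector {
        struct vop_vector *vop_default; /* chain to another vector */
        vop_bypass_t      *vop_bypass;  /* catch-all, if non-NULL */
        vop_read_t        *vop_read;
        vop_write_t       *vop_write;
    };

    /* vdesc_offset records where each method pointer lives in vop_vector. */
    struct vnodeop_desc {
        const char *vdesc_name;
        size_t      vdesc_offset;
    };

    static struct vnodeop_desc vop_read_desc = {
        "vop_read", offsetof(struct vop_vector, vop_read)
    };

    /*
     * VCALL()-style dispatch: locate the slot by its recorded offset and
     * walk vop_default vectors until a method or a bypass turns up.
     */
    static int
    vcall(struct vop_vector *vec, struct vnodeop_desc *desc, void *args)
    {
        vop_bypass_t **slot;

        for (; vec != NULL; vec = vec->vop_default) {
            slot = (vop_bypass_t **)((char *)vec + desc->vdesc_offset);
            if (*slot != NULL)
                return ((*slot)(args));
            if (vec->vop_bypass != NULL)
                return (vec->vop_bypass(args));
        }
        return (-1);    /* no handler anywhere in the chain */
    }
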
Poul-Henning Kamp
6fde64c778 Mechanically change prototypes for vnode operations to use the new typedefs. 2004-12-01 12:24:41 +00:00
Poul-Henning Kamp
75ad04b4f6 Add dropped implementation of ioctl for fifos. 2004-11-18 17:18:11 +00:00
Poul-Henning Kamp
003e18aef4 Make vnode bypass for fifos (read, write, poll) mandatory. 2004-11-17 07:30:02 +00:00
Poul-Henning Kamp
d6d64f0f2c Add file ops to fifofs so that we can bypass vnodes (and Giant) for the
heavy-duty operations (read, write, poll/select, kqueue).

Disabled for now, enable with "vfs.fifofs.fops=1" in loader.conf.
2004-11-15 14:51:44 +00:00
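
The bypass works by installing a dedicated struct fileops on fifo descriptors so that the heavy operations reach the backing sockets without going through the vnode layer (and hence without Giant). A rough sketch of the shape of such a table; the fifo_*_f handlers here are hypothetical placeholders:

    static fo_rdwr_t     fifo_read_f;
    static fo_rdwr_t     fifo_write_f;
    static fo_ioctl_t    fifo_ioctl_f;
    static fo_poll_t     fifo_poll_f;
    static fo_kqfilter_t fifo_kqfilter_f;
    static fo_stat_t     fifo_stat_f;
    static fo_close_t    fifo_close_f;

    static struct fileops fifo_ops_f = {
        .fo_read     = fifo_read_f,
        .fo_write    = fifo_write_f,
        .fo_ioctl    = fifo_ioctl_f,
        .fo_poll     = fifo_poll_f,
        .fo_kqfilter = fifo_kqfilter_f,
        .fo_stat     = fifo_stat_f,
        .fo_close    = fifo_close_f,
        .fo_flags    = DFLAG_PASSABLE
    };
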
Poul-Henning Kamp
1ecf144493 Fifos don't need a vop_lookup; the default will do fine. 2004-11-13 18:51:13 +00:00
Poul-Henning Kamp
5349c79d75 Properly implement a default version of VOP_GETWRITEMOUNT.
Remove improper access to vop_stdgetwritemount() which should and
will instead rely on the VOP default path.
2004-11-06 11:41:22 +00:00
John-Mark Gurney
ad3b9257c2 Add locking to the kqueue subsystem.  This also makes kqueue a more
complete subsystem, and removes the knowledge of how things are
implemented from the drivers.  Include locking around filter ops, so a
module like aio will know when not to be unloaded if there are outstanding
knotes using its filter ops.

Currently, it uses MTX_DUPOK even though it is not always safe to
acquire duplicate locks.  Witness currently doesn't support the ability
to discover whether a duplicate lock is OK (in some cases).

Reviewed by:	green, rwatson (both earlier versions)
2004-08-15 06:24:42 +00:00
Robert Watson
7d84f9d293 Remove unlocked read annotation for sbspace(); the read is locked. 2004-06-23 00:35:50 +00:00
Robert Watson
c012260726 Merge some additional leaf node socket buffer locking from
rwatson_netperf:

Introduce conditional locking of the socket buffer in fifofs kqueue
filters; KNOTE() will be called holding the socket buffer locks in
fifofs, but sometimes the kqueue() system call will poll using the
same entry point without holding the socket buffer lock.

Introduce conditional locking of the socket buffer in the socket
kqueue filters; KNOTE() will be called holding the socket buffer
locks in the socket code, but sometimes the kqueue() system call
will poll using the same entry points without holding the socket
buffer lock.

Simplify the logic in sodisconnect() since we no longer need spls.

NOTE: To remove conditional locking in the kqueue filters, it would
make sense to use a separate kqueue API entry into the socket/fifo
code when calling from the kqueue() system call.
2004-06-18 02:57:55 +00:00
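
The conditional locking amounts to the filter checking whether its caller already holds the socket buffer lock before taking it, since KNOTE() arrives with the lock held while kqueue() polling does not. A sketch of the pattern, with illustrative names rather than the exact FreeBSD filter code:

    static int
    filt_fiforead(struct knote *kn, long hint)
    {
        struct socket *so = (struct socket *)kn->kn_hook;
        int need_lock, result;

        /* KNOTE() callers hold so_rcv's lock; kqueue() polling does not. */
        need_lock = !SOCKBUF_OWNED(&so->so_rcv);
        if (need_lock)
            SOCKBUF_LOCK(&so->so_rcv);
        kn->kn_data = so->so_rcv.sb_cc;
        if (so->so_rcv.sb_state & SBS_CANTRCVMORE) {
            kn->kn_flags |= EV_EOF;
            result = 1;
        } else
            result = (kn->kn_data > 0);
        if (need_lock)
            SOCKBUF_UNLOCK(&so->so_rcv);
        return (result);
    }
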
Robert Watson
9535efc00d Merge additional socket buffer locking from rwatson_netperf:
- Lock down low-hanging fruit use of sb_flags with socket buffer
  lock.

- Lock down low-hanging fruit use of so_state with socket lock.

- Lock down low-hanging fruit use of so_options.

- Lock down low-hanging fruit use of sb_lowwat and sb_hiwat with
  socket buffer lock.

- Annotate situations in which we unlock the socket lock and then
  grab the receive socket buffer lock, which are currently actually
  the same lock.  Depending on how we want to play our cards, we
  may want to coalesce these lock uses to reduce overhead.

- Convert an if()->panic() into a KASSERT relating to so_state in
  soaccept().

- Remove a number of splnet()/splx() references.

More complex merging of socket and socket buffer locking to
follow.
2004-06-17 22:48:11 +00:00
Robert Watson
7721f5d760 Grab the socket buffer send or receive mutex when performing a
read-modify-write on the sb_state field.  This commit catches only
the "easy" ones where it doesn't interact with as yet unmerged
locking.
2004-06-15 03:51:44 +00:00
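
The converted sites follow one pattern: take the owning socket buffer mutex around the read-modify-write of sb_state. A generic illustration, not any specific converted site:

    SOCKBUF_LOCK(&so->so_rcv);
    so->so_rcv.sb_state |= SBS_CANTRCVMORE;   /* read-modify-write under the sb lock */
    SOCKBUF_UNLOCK(&so->so_rcv);
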
Robert Watson
c0b99ffa02 The socket field so_state is used to hold a variety of socket related
flags relating to several aspects of socket functionality.  This change
breaks out several bits relating to send and receive operation into a
new per-socket buffer field, sb_state, in order to facilitate locking.
This is required because, in order to provide more granular locking of
sockets, different state fields have different locking properties.  The
following fields are moved to sb_state:

  SS_CANTRCVMORE            (so_state)
  SS_CANTSENDMORE           (so_state)
  SS_RCVATMARK              (so_state)

Rename respectively to:

  SBS_CANTRCVMORE           (so_rcv.sb_state)
  SBS_CANTSENDMORE          (so_snd.sb_state)
  SBS_RCVATMARK             (so_rcv.sb_state)

This facilitates locking by isolating fields to be located with other
identically locked fields, and permits greater granularity in socket
locking by avoiding storing fields with different locking semantics in
the same short (avoiding locking conflicts).  In the future, we may
wish to coalesce sb_state and sb_flags; for the time being I leave
them separate and there is no additional memory overhead due to the
packing/alignment of shorts in the socket buffer structure.
2004-06-14 18:16:22 +00:00
Don Lewis
866046f5a6 Add MSG_NBIO flag option to soreceive() and sosend() that causes
them to behave the same as if the SS_NBIO socket flag had been set
for this call.  The SS_NBIO flag for ordinary sockets is set by
fcntl(fd, F_SETFL, O_NONBLOCK).

Pass the MSG_NBIO flag to the soreceive() and sosend() calls in
fifo_read() and fifo_write() instead of frobbing the SS_NBIO flag
on the underlying socket for each I/O operation.  The O_NONBLOCK
flag is a property of the descriptor, and unlike ordinary sockets,
fifos may be referenced by multiple descriptors.
2004-06-01 01:18:51 +00:00
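
A simplified sketch of the fifo_read() pattern described above, translating the descriptor's non-blocking request into a per-call MSG_NBIO flag; error paths and bookkeeping are trimmed, so treat it as illustrative:

    static int
    fifo_read(struct vop_read_args *ap)
    {
        struct socket *rso = ap->a_vp->v_fifoinfo->fi_readsock;
        int error, flags;

        if (ap->a_uio->uio_resid == 0)
            return (0);
        /* Non-blocking is a property of this call, not of the socket. */
        flags = (ap->a_ioflag & IO_NDELAY) ? MSG_NBIO : 0;
        error = soreceive(rso, NULL, ap->a_uio, NULL, NULL, &flags);
        return (error);
    }
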
Don Lewis
2526dc2b61 Switch from using the vnode interlock to a private mutex in fifo_open()
to avoid lock order problems when manipulating the sockets associated
with the fifo.

Minor optimization of a couple of calls to fifo_cleanup() from
fifo_open().
2004-05-17 20:16:40 +00:00
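
The private mutex is simply a statically initialized kernel mutex used where fifo_open() previously used the vnode interlock; roughly (name and description illustrative):

    static struct mtx fifo_mtx;
    MTX_SYSINIT(fifo, &fifo_mtx, "fifo mutex", MTX_DEF);
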
Warner Losh
f36cfd49ad Remove advertising clause from University of California Regent's
license, per letter dated July 22, 1999 and email from Peter Wemm,
Alan Cox and Robert Watson.

Approved by: core, peter, alc, rwatson
2004-04-07 20:46:16 +00:00
Robert Watson
db48c0d254 Export uipc_connect2() from uipc_usrreq.c instead of unp_connect2(),
and consume that interface in portalfs and fifofs instead.  In the
new world order, unp_connect2() assumes that the unpcb mutex is
held, whereas uipc_connect2() validates that the passed sockets are
UNIX domain sockets, then grabs the mutex.

NB: the portalfs and fifofs code gets down and dirty with UNIX domain
sockets.  Maybe this is a bad thing.
2004-03-31 01:41:30 +00:00
Don Lewis
95c6cd2f4b Use "fip->fi_readers == 0 && fip->fi_writers == 0" as the condition for
disposing of fifo resources in fifo_cleanup() instead of using
"vp->v_usecount == 1".  There may be other references to the vnode, for
instance by nullfs, at the time fifo_open() or fifo_close() is called,
which could cause a resource leak.

Don't bother grabbing the vnode interlock in fifo_cleanup() since it no
longer accesses v_usecount.
2003-11-16 01:11:11 +00:00
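
In other words, teardown keys off the fifo's own reader and writer counts rather than the vnode use count. A sketch of what the resulting helper looks like; the body is simplified and assumed, not the exact FreeBSD code:

    static void
    fifo_cleanup(struct vnode *vp)
    {
        struct fifoinfo *fip = vp->v_fifoinfo;

        ASSERT_VOP_LOCKED(vp, "fifo_cleanup");
        if (fip->fi_readers == 0 && fip->fi_writers == 0) {
            vp->v_fifoinfo = NULL;
            (void)soclose(fip->fi_readsock);
            (void)soclose(fip->fi_writesock);
            free(fip, M_VNODE);
        }
    }
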
Don Lewis
8d0c247562 If fifo_open() is interrupted, fifo_close() may not get called, causing
a resource leak.  Move the resource deallocation code from fifo_close()
to a new function, fifo_cleanup(), and call fifo_cleanup() from
fifo_close() and the appropriate places in fifo_open().

Tested by: 	Lukas Ertl
Pointy hat to:	truckman
2003-11-10 22:21:00 +00:00
Don Lewis
bbddbed9f3 Partially back out rev 1.87 by nuking fifo_inactive() and moving the
resource deallocation back to fifo_close().  This eliminates any
stale data that might be stuck in the socket buffers after all the
readers and writers have closed the fifo.

Tested by: Thorsten Schroeder <ths@katjusha.de>
2003-06-16 17:17:09 +00:00
Don Lewis
b156281658 Clean up the fifo_open() implementation:
Restructure the error handling portion of the resource allocation
        code to eliminate duplicated code.

        Test for the O_NONBLOCK && fi_readers == 0 case before incrementing
        fi_writers and modifying the socket flag to avoid having to
        undo these operations in this error case.

        Restructure and simplify the code that handles blocking opens.

There should be no change to functionality.
2003-06-13 06:58:11 +00:00
Don Lewis
3a140162b4 Fix up locking problems in fifo_open() and fifo_close():
Sleep on the vnode interlock while waiting for another
	caller to increment fi_readers or fi_writers.  Hold the
	vnode interlock while incrementing fi_readers or fi_writers
	to prevent a wakeup from being missed.

	Only access fi_readers and fi_writers while holding the vnode
	lock.  Previously fifo_close() decremented their values without
	holding a lock.

	Move resource deallocation from fifo_close() to fifo_inactive(),
	which allows the VOP_CLOSE() call in the error return path in
	fifo_open() to be removed.  Fifo_open() was calling VOP_CLOSE()
	with the vnode lock held, in violation of the current vnode locking
	API.  Also the way fifo_close() used vrefcnt() to decide whether
	to deallocate resources was bogus according to comments in the
	vrefcnt() implementation.

Reviewed by:	bde
2003-06-01 06:24:32 +00:00
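
The reader side of that handshake then looks roughly like this: bump the count and wake the other side while holding the interlock, and msleep() on the same interlock until a writer shows up. Sketch only; wait channels are illustrative, and error handling and vnode-lock juggling are trimmed:

    struct fifoinfo *fip = vp->v_fifoinfo;
    int error;

    VI_LOCK(vp);
    fip->fi_readers++;
    if (fip->fi_readers == 1)
        wakeup(&fip->fi_writers);   /* a writer may be blocked in open */
    while (fip->fi_writers == 0) {
        /* msleep() drops and re-acquires the interlock around the sleep. */
        error = msleep(&fip->fi_readers, VI_MTX(vp),
            PCATCH | PSOCK, "fifoor", 0);
        if (error != 0)
            break;
    }
    VI_UNLOCK(vp);
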
Poul-Henning Kamp
c7b24d7dcd Remove unused variable.
Found by:       FlexeLint
2003-05-31 18:45:32 +00:00
Bruce Evans
520cab0a32 Better fix for the problem addressed by rev.1.79: don't loop in
fifo_open() waiting for another reader or writer if one arrived and
departed while we were waiting (or a little earlier).

Rev.1.79 broke blocking opens of fifos by making them time out after 1
second.  This was bad for at least apsfilter.

Tested by:	"Simon 'corecode' Schubert" <corecode@corecode.ath.cx>,
		Alexander Leidinger <Alexander@leidinger.net>,
		phk
MFC after:	4 weeks
2003-03-24 11:03:42 +00:00
Dag-Erling Smørgrav
521f364b80 More low-hanging fruit: kill caddr_t in calls to wakeup(9) / [mt]sleep(9). 2003-03-02 16:54:40 +00:00
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Alfred Perlstein
44956c9863 Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
2003-01-21 08:56:16 +00:00
Matthew Dillon
48e3128b34 Bow to the whining masses and change a union back into void *. Retain
removal of unnecessary casts and throw in some minor cleanups to see if
anyone complains, just for the hell of it.
2003-01-13 00:33:17 +00:00
Matthew Dillon
cd72f2180b Change struct file f_data to un_data, a union of the correct struct
pointer types, and remove a huge number of casts from code using it.

Change struct xfile xf_data to xun_data (ABI is still compatible).

If we need to add a #define for f_data and xf_data we can, but I don't
think it will be necessary.  There are no operational changes in this
commit.
2003-01-12 01:37:13 +00:00
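
A sketch of the typed-union idea these two entries describe; the member names are illustrative, and as the entry above notes, the change was later reverted to a plain void *:

    union file_data {
        struct socket *fd_socket;   /* DTYPE_SOCKET */
        struct vnode  *fd_vnode;    /* DTYPE_VNODE, DTYPE_FIFO */
        struct pipe   *fd_pipe;     /* DTYPE_PIPE */
        struct kqueue *fd_kqueue;   /* DTYPE_KQUEUE */
    };
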
Poul-Henning Kamp
402746a2d7 There is some sort of race/deadlock which I have not identified
here.  It manifests itself by sendmail hanging in "fifoow" during
boot on a diskless machine with sendmail disabled.

Giving the sleep a 1-second timeout breaks the deadlock, but does not solve
the underlying problem.

XXX comment applied.
2002-12-29 10:32:16 +00:00
Poul-Henning Kamp
bc9d8a9a37 Fix comments and one resulting code confusion about the type of the
"command" argument to VOP_IOCTL.

Spotted by:	FlexeLint.
2002-10-16 08:04:11 +00:00
Poul-Henning Kamp
37c841831f Be consistent about "static" functions: if the function is marked
static in its prototype, mark it static at the definition too.

Inspired by:    FlexeLint warning #512
2002-09-28 17:15:38 +00:00
Jeff Roberson
4d93c0be1f - Use vrefcnt() where it is safe to do so instead of doing direct and
unlocked accesses to v_usecount.
 - Lock access to the buf lists in the various sync routines.  Interlock
   locking could be avoided almost entirely in leaf filesystems if the
   fsync function had a generic helper.
2002-09-25 02:32:42 +00:00
Nate Lawson
86ed6d45ac Remove any VOP_PRINT that redundantly prints the tag.
Move lockmgr_printinfo() into vprint() for everyone's benefit.

Suggested by: bde
2002-09-18 20:42:04 +00:00
Nate Lawson
06be2aaa83 Remove all use of vnode->v_tag, replacing with appropriate substitutes.
v_tag is now const char * and should only be used for debugging.

Additionally:
1. All users of VT_NTS now check vfsconf->vf_type VFCF_NETWORK
2. The user of VT_PROCFS now checks for the new flag VV_PROCDEP, which
is propagated by pseudofs to all child vnodes if the fs sets PFS_PROCDEP.

Suggested by:   phk
Reviewed by:    bde, rwatson (earlier version)
2002-09-14 09:02:28 +00:00
Robert Watson
b4edfededd Handle one more case of a fifofs filetmp: set filetmp.f_cred to
ap->a_cred, and pass in ap->a_td->td_ucred as the active_cred to
soo_poll().

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, NAI Labs
2002-08-20 02:17:59 +00:00
Robert Watson
d49fa1ca6e In continuation of early fileop credential changes, modify fo_ioctl() to
accept an 'active_cred' argument reflecting the credential of the thread
initiating the ioctl operation.

- Change fo_ioctl() to accept active_cred; change consumers of the
  fo_ioctl() interface to generally pass active_cred from td->td_ucred.
- In fifofs, initialize filetmp.f_cred to ap->a_cred so that the
  invocations of soo_ioctl() are provided access to the calling f_cred.
  Pass ap->a_td->td_ucred as the active_cred, but note that this is
  required because we don't yet distinguish file_cred and active_cred
  in invoking VOP's.
- Update kqueue_ioctl() for its new argument.
- Update pipe_ioctl() for its new argument, pass active_cred rather
  than td_ucred to MAC for authorization.
- Update soo_ioctl() for its new argument.
- Update vn_ioctl() for its new argument, use active_cred rather than
  td->td_ucred to authorize VOP_IOCTL() and the associated VOP_GETATTR().

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, NAI Labs
2002-08-17 02:36:16 +00:00
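
For fifofs, the practical effect is that the temporary struct file handed to soo_ioctl() carries the file credential in f_cred while the thread credential travels separately as the active credential. A sketch of that call, with the argument order assumed from the fo_ioctl() convention described above:

    struct file filetmp;
    int error;

    filetmp.f_data = (void *)ap->a_vp->v_fifoinfo->fi_readsock;
    filetmp.f_cred = ap->a_cred;                    /* file credential */
    error = soo_ioctl(&filetmp, ap->a_command, ap->a_data,
        ap->a_td->td_ucred, ap->a_td);              /* active credential */
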
Robert Watson
ea6027a8e1 Make similar changes to fo_stat() and fo_poll() as made earlier to
fo_read() and fo_write(): explicitly use the cred argument to fo_poll()
as "active_cred" using the passed file descriptor's f_cred reference
to provide access to the file credential.  Add an active_cred
argument to fo_stat() so that implementers have access to the active
credential as well as the file credential.  Generally modify callers
of fo_stat() to pass in td->td_ucred rather than fp->f_cred, which
was redundantly provided via the fp argument.  This set of modifications
also permits threads to perform these operations on behalf of another
thread without modifying their credential.

Trickle this change down into fo_stat/poll() implementations:

- badfo_poll(), badfo_stat(): modify/add arguments.
- kqueue_poll(), kqueue_stat(): modify arguments.
- pipe_poll(), pipe_stat(): modify/add arguments, pass active_cred to
  MAC checks rather than td->td_ucred.
- soo_poll(), soo_stat(): modify/add arguments, pass fp->f_cred rather
  than cred to pru_sopoll() to maintain current semantics.
- sopoll(): modify arguments.
- vn_poll(), vn_statfile(): modify/add arguments, pass new arguments
  to vn_stat().  Pass active_cred to MAC and fp->f_cred to VOP_POLL()
  to maintain current semantics.
- vn_close(): rename cred to file_cred to reflect reality while I'm here.
- vn_stat(): Add active_cred and file_cred arguments to vn_stat()
  and consumers so that this distinction is maintained at the VFS
  as well as 'struct file' layer.  Pass active_cred instead of
  td->td_ucred to MAC and to VOP_GETATTR() to maintain current semantics.

- fifofs: modify the creation of a "filetemp" so that the file
  credential is properly initialized and can be used in the socket
  code if desired.  Pass ap->a_td->td_ucred as the active
  credential to soo_poll().  If we teach the vnop interface about
  the distinction between file and active credentials, we would use
  the active credential here.

Note that current inconsistent passing of active_cred vs. file_cred to
VOP's is maintained.  It's not clear why GETATTR would be authorized
using active_cred while POLL would be authorized using file_cred at
the file system level.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, NAI Labs
2002-08-16 12:52:03 +00:00
Seigo Tanimura
4cc20ab1f0 Back out my last commit locking down a socket; it conflicts with hsu's work.
Requested by:	hsu
2002-05-31 11:52:35 +00:00
Seigo Tanimura
102638407c Lock the writer socket across sorwakeup(fip->fi_writesock).
Spotted by:	peter
2002-05-21 02:37:56 +00:00
Seigo Tanimura
243917fe3b Lock down a socket, milestone 1.
o Add a mutex (sb_mtx) to struct sockbuf. This protects the data in a
  socket buffer. The mutex in the receive buffer also protects the data
  in struct socket.

o Determine the lock strategy for each member in struct socket.

o Lock down the following members:

  - so_count
  - so_options
  - so_linger
  - so_state

o Remove the *_locked() socket APIs.  Make the following socket APIs,
  which touch the members above, require a locked socket:

  - sodisconnect()
  - soisconnected()
  - soisconnecting()
  - soisdisconnected()
  - soisdisconnecting()
  - sofree()
  - soref()
  - sorele()
  - sorwakeup()
  - sotryfree()
  - sowakeup()
  - sowwakeup()

Reviewed by:	alfred
2002-05-20 05:41:09 +00:00
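
A sketch of the sb_mtx idea; field placement and the accessor macros are illustrative, and note that this particular milestone was backed out (see the 2002-05-31 entry above):

    struct sockbuf {
        struct mtx sb_mtx;      /* protects the fields below */
        u_int      sb_cc;       /* actual chars in buffer */
        u_int      sb_hiwat;    /* max actual char count */
        u_int      sb_mbcnt;    /* chars of mbufs used */
        short      sb_flags;    /* flags */
    };

    #define SOCKBUF_LOCK(sb)        mtx_lock(&(sb)->sb_mtx)
    #define SOCKBUF_UNLOCK(sb)      mtx_unlock(&(sb)->sb_mtx)
    #define SOCKBUF_LOCK_ASSERT(sb) mtx_assert(&(sb)->sb_mtx, MA_OWNED)
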
Poul-Henning Kamp
ef41ad17bd Use vop_panic() instead of rolling our own. 2002-05-02 19:13:44 +00:00
Seigo Tanimura
960ed29c4b Revert the change of #includes in sys/filedesc.h and sys/socketvar.h.
Requested by:	bde

Since locking sigio_lock is usually followed by calling pgsigio(),
move the declaration of sigio_lock and the definitions of SIGIO_*() to
sys/signalvar.h.

While I am here, sort include files alphabetically, where possible.
2002-04-30 01:54:54 +00:00
Alfred Perlstein
7858dcd629 Cleanup of logic, flow and comments.
Submitted by: bde
2002-04-18 14:47:34 +00:00
Alfred Perlstein
11caded34f Remove __P. 2002-03-19 22:20:14 +00:00
John Baldwin
a854ed9893 Simple p_ucred -> td_ucred changes to start using the per-thread ucred
reference.
2002-02-27 18:32:23 +00:00
Alfred Perlstein
468485b8d2 Fix select on fifos.
Back out revisions 1.56 and 1.57 of fifo_vnops.c.

Introduce a new poll op "POLLINIGNEOF" that can be used to ignore
EOF on a fifo; POLLIN/POLLRDNORM is converted to POLLINIGNEOF within
the FIFO implementation to effect the correct behavior.

This should allow one to view a fifo pretty much as a data source
rather than worry about connections coming and going.

Reviewed by: bde
2002-01-14 22:03:48 +00:00
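
A sketch of the conversion described above inside the fifo's poll entry point; simplified, with the write-side handling and the translation of POLLINIGNEOF back to POLLIN in the returned events omitted:

    static int
    fifo_poll(struct vop_poll_args *ap)
    {
        struct file filetmp;
        int events, revents = 0;

        events = ap->a_events &
            (POLLIN | POLLINIGNEOF | POLLPRI | POLLRDNORM | POLLRDBAND);
        if (events != 0) {
            /* Within the fifo, plain "readable" means "ignore EOF". */
            if (events & (POLLIN | POLLRDNORM)) {
                events &= ~(POLLIN | POLLRDNORM);
                events |= POLLINIGNEOF;
            }
            filetmp.f_data = (void *)ap->a_vp->v_fifoinfo->fi_readsock;
            if (filetmp.f_data != NULL)
                revents |= soo_poll(&filetmp, events, ap->a_cred,
                    ap->a_td);
        }
        return (revents);
    }
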