Commit Graph

195 Commits

Author SHA1 Message Date
Jeff Roberson
767b9a529d - Clean up unlocked accesses to buf flags by introducing a new b_vflag member
that is protected by the vnode lock.
 - Move B_SCANNED into b_vflags and call it BV_SCANNED.
 - Create a vop_stdfsync() modeled after spec's sync.
 - Replace spec_fsync, msdos_fsync, and hpfs_fsync with the stdfsync and some
   fs-specific processing.  This gives all of these filesystems proper
   behavior wrt MNT_WAIT/NOWAIT and the use of the B_SCANNED flag.
 - Annotate the locking in buf.h
2003-02-09 11:28:35 +00:00
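
A minimal sketch of the pattern the commit above describes, not the committed code: one pass of a vop_stdfsync()-style routine that marks each dirty buffer BV_SCANNED (in b_vflags, under the vnode lock) so it is visited only once per pass, and that honors MNT_WAIT vs. MNT_NOWAIT. List locking and error handling are deliberately simplified, and the helper name is made up.

    /*
     * Illustrative only: one pass over a vnode's dirty buffer list.
     * b_vflags/BV_SCANNED are per the commit message; the rest is the
     * classic BSD buffer interface, heavily simplified.
     */
    static int
    example_stdfsync_pass(struct vnode *vp, int waitfor)
    {
            struct buf *bp, *nbp;
            int error = 0;

            for (bp = TAILQ_FIRST(&vp->v_dirtyblkhd); bp != NULL; bp = nbp) {
                    nbp = TAILQ_NEXT(bp, b_vnbufs);
                    if (bp->b_vflags & BV_SCANNED)  /* seen this pass */
                            continue;
                    bp->b_vflags |= BV_SCANNED;     /* vnode lock held */
                    if (waitfor == MNT_WAIT)
                            error = bwrite(bp);     /* synchronous */
                    else
                            bawrite(bp);            /* start it async */
            }
            return (error);
    }
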
Poul-Henning Kamp
7a976ae838 Don't override the vop_lock, vop_unlock and vop_isunlocked methods.
Previously all filesystems which relied on specfs to do devices
would have private overrides for vop_std*, so the vop_no* overrides
here had no effect.  I overlooked the transitive nature of the vop
vectors when I removed the vop_std* in those filesystems.

Removing the override here restores device node locking to its
previous modus operandi.

Spotted by:	bde
2003-01-05 19:14:44 +00:00
Poul-Henning Kamp
f576a06e22 Don't take the detour over VOP_STRATEGY from spec_getpages, call our
own strategy directly.
2003-01-05 10:03:57 +00:00
Poul-Henning Kamp
973418dbf5 Split out the vnode and buf arguments to the internal strategy worker
routine instead of doing evil casts.
2003-01-05 09:55:26 +00:00
Poul-Henning Kamp
f5b11b6e2d Temporarily introduce a new VOP_SPECSTRATEGY operation while I try
to sort out disk-io from file-io in the vm/buffer/filesystem space.

The intent is to sort VOP_STRATEGY calls into those which operate
on "real" vnodes and those which operate on VCHR vnodes.  For
the latter kind, the call will be changed to VOP_SPECSTRATEGY,
possibly conditionally for those places where dual-use happens.

Add a default VOP_SPECSTRATEGY method which will call the normal
VOP_STRATEGY.  First time it is called it will print debugging
information.  This will only happen if a normal vnode is passed
to VOP_SPECSTRATEGY by mistake.

Add a real VOP_SPECSTRATEGY in specfs, which does what VOP_STRATEGY
does on a VCHR vnode today.

Add a new VOP_STRATEGY method in specfs to catch instances where
the conversion to VOP_SPECSTRATEGY has not yet happened.  Handle
the request just like we always did, but first time called print
debugging information.

Apart from up to two console messages per boot, this amounts
to a glorified no-op commit.

If you get any of the messages on your console I would very much
like a copy of them mailed to phk@freebsd.org
2003-01-04 22:10:36 +00:00
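
A sketch of the fall-through arrangement described above, assuming the usual vop argument-structure field names; it is illustrative, not the actual default method. The idea: VOP_SPECSTRATEGY on anything that is not a special vnode simply forwards to VOP_STRATEGY, complaining once so the misdirected caller can be found.

    /*
     * Illustrative default VOP_SPECSTRATEGY: print a one-time diagnostic
     * and forward to the normal VOP_STRATEGY.
     */
    static int
    example_specstrategy_default(struct vop_strategy_args *ap)
    {
            static int warned;

            if (!warned) {
                    warned = 1;
                    printf("VOP_SPECSTRATEGY on non-VCHR vnode %p\n", ap->a_vp);
            }
            return (VOP_STRATEGY(ap->a_vp, ap->a_bp));
    }
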
Poul-Henning Kamp
646e95fe69 resort vnode ops list 2003-01-04 20:32:03 +00:00
Poul-Henning Kamp
2ff9e749be Replace spec_bmap() with vop_panic: We should never BMAP a device-backed
vnode, only filesystem-backed vnodes.
2003-01-04 11:29:44 +00:00
Poul-Henning Kamp
862702306b Convert calls to BUF_STRATEGY to VOP_STRATEGY calls. This is a no-op since
all BUF_STRATEGY did in the first place was call VOP_STRATEGY.
2003-01-03 06:32:15 +00:00
Poul-Henning Kamp
e2a3ea1c45 Remove unused second argument from DEV_STRATEGY(). 2003-01-03 05:57:35 +00:00
Kirk McKusick
5878393060 Add debug.doslowdown to enable/disable niced slowdown on I/O. Default
to off until locking interference issues get sorted out.

Sponsored by:   DARPA & NAI Labs.
2002-11-04 07:29:20 +00:00
Poul-Henning Kamp
45c5890054 Put a KASSERT in specfs::strategy() to check that the incoming buffer
has a valid b_iocmd.  Valid means any one of BIO_{READ,WRITE,DELETE}.

I have seen at least one case where the bio_cmd field was zero once the
request made it into GEOM.  Putting the KASSERT here allows us to spot
the culprit in the backtrace.
2002-11-01 15:32:12 +00:00
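
A sketch of the assertion described above; the message text and exact placement are illustrative.

    /* Reject any buffer with a bogus command before it heads toward GEOM. */
    KASSERT(bp->b_iocmd == BIO_READ || bp->b_iocmd == BIO_WRITE ||
        bp->b_iocmd == BIO_DELETE,
        ("spec_strategy: bogus b_iocmd %d", bp->b_iocmd));
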
Kirk McKusick
9ab73fd11a Within ufs, the ffs_sync and ffs_fsync functions did not always
check for and/or report I/O errors. The result is that a VFS_SYNC
or VOP_FSYNC called with MNT_WAIT could loop infinitely on ufs in
the presence of a hard error writing a disk sector or in a filesystem
full condition. This patch ensures that I/O errors will always be
checked and returned.  This patch also ensures that every call to
VFS_SYNC or VOP_FSYNC with MNT_WAIT set checks for and takes
appropriate action when an error is returned.

Sponsored by:   DARPA & NAI Labs.
2002-10-25 00:20:37 +00:00
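
A sketch, with a hypothetical wrapper name and the era's VOP_FSYNC signature, of the calling convention this change enforces: an MNT_WAIT sync checks the error returned by VOP_FSYNC and propagates it, instead of retrying forever on a sector that can never be written or a filesystem that is full.

    static int
    example_flush(struct vnode *vp, struct ucred *cred, struct thread *td)
    {
            int error;

            /* VOP_FSYNC with MNT_WAIT: check and return any I/O error. */
            error = VOP_FSYNC(vp, cred, MNT_WAIT, td);
            if (error != 0)
                    return (error);         /* report, don't loop */
            return (0);
    }
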
Kirk McKusick
e03486d198 This checkin reimplements the io-request priority hack in a way
that works in the new threaded kernel. It was commented out of
the disksort routine earlier this year for the reasons given in
kern/subr_disklabel.c (which is where this code used to reside
before it moved to kern/subr_disk.c):

----------------------------
revision 1.65
date: 2002/04/22 06:53:20;  author: phk;  state: Exp;  lines: +5 -0
Comment out Kirks io-request priority hack until we can do this in a
civilized way which doesn't cause grief.

The problem is that it is not generally safe to cast a "struct bio
*" to a "struct buf *".  Things like ccd, vinum, ata-raid and GEOM
construct bios which are not entrails of a struct buf.

Also, curthread may or may not have anything to do with the I/O request
at hand.

The correct solution can either be to tag struct bios with a
priority derived from the requesting thread's nice value and have disksort
act on this field, but this wouldn't address the "silly-seek syndrome"
where two equal processes bang the disk heads from one edge of the
disk to the other repeatedly.

Alternatively, and probably better: a sleep should be introduced
either at the time the I/O is requested or at the time it is completed
where we can be sure to sleep in the right thread.

The sleep also needs to be in constant time units; 1/hz can be practically
any sub-second size, and at high HZ the current code practically doesn't
do anything.
----------------------------

As suggested in this comment, it is no longer located in the disk sort
routine, but rather now resides in spec_strategy where the disk operations
are being queued by the thread that is associated with the process that
is really requesting the I/O. At that point, the disk queues are not
visible, so the I/O for positively niced processes is always slowed
down whether or not there is other activity on the disk.

On the issue of scaling HZ, I believe that the current scheme is
better than using a fixed quantum of time. As machines and I/O
subsystems get faster, the resolution on the clock also rises.
So, ten years from now we will be slowing things down for shorter
periods of time, but the proportional effect on the system will
be about the same as it is today. So, I view this as a feature
rather than a drawback. Hence this patch sticks with using HZ.

Sponsored by:	DARPA & NAI Labs.
Reviewed by:	Poul-Henning Kamp <phk@critter.freebsd.dk>
2002-10-22 00:59:49 +00:00
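
A heavily simplified sketch of the mechanism described above: in the strategy path, while still running in the thread of the process that requested the I/O, sleep briefly if that process is positively niced. The debug.doslowdown knob is from the commit a few entries up; the field names and the sleep length here are illustrative, not the committed code.

    static int doslowdown = 0;              /* debug.doslowdown */

    static void
    example_io_slowdown(struct thread *td)
    {
            struct proc *p = td->td_proc;

            /* p_nice stands in for wherever the nice value lives. */
            if (doslowdown && p != NULL && p->p_nice > 0)
                    tsleep(&doslowdown, PPAUSE, "ioslow", hz / 10);
    }
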
Poul-Henning Kamp
bc9d8a9a37 Fix comments and one resulting code confusion about the type of the
"command" argument to VOP_IOCTL.

Spotted by:	FlexeLint.
2002-10-16 08:04:11 +00:00
Poul-Henning Kamp
37c841831f Be consistent about "static" functions: if the function is marked
static in its prototype, mark it static at the definition too.

Inspired by:    FlexeLint warning #512
2002-09-28 17:15:38 +00:00
Poul-Henning Kamp
d241c36453 I misplaced a local variable yesterday. 2002-09-28 13:42:04 +00:00
Poul-Henning Kamp
2e27173fb5 Add a D_NOGIANT flag which can be set in a struct cdevsw to indicate
that a particular device driver is not Giant-challenged.

SPECFS will DROP_GIANT() ... PICKUP_GIANT() around calls to the
driver in question.

Notice that the interrupt path is not affected by this!

This does _NOT_ work for drivers accessed through cdevsw->d_strategy()
i.e. drivers for disk(-like) devices, some tapes, and maybe others.
2002-09-27 19:47:59 +00:00
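
A sketch, with a hypothetical helper, of the wrapping described above: specfs consults the driver's cdevsw flags and drops Giant around the driver call when the driver has set D_NOGIANT.

    static int
    example_call_ioctl(dev_t dev, u_long cmd, caddr_t data, int fflag,
        struct thread *td)
    {
            struct cdevsw *dsw = devsw(dev);
            int error;

            if (dsw->d_flags & D_NOGIANT) {
                    DROP_GIANT();
                    error = dsw->d_ioctl(dev, cmd, data, fflag, td);
                    PICKUP_GIANT();
            } else
                    error = dsw->d_ioctl(dev, cmd, data, fflag, td);
            return (error);
    }
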
Poul-Henning Kamp
48a7c35e51 I hate it when patch gives me .rej files.
Can't we make the pre-commit check refuse if there are .rej files in
the directory?
2002-09-26 17:25:22 +00:00
Poul-Henning Kamp
c2aee6b42d Return ENOTTY on unhandled ioctls. 2002-09-26 14:11:49 +00:00
Jeff Roberson
2a96b2d69f - Fix a botch in previous commit; oldvp should not be unconditionally
assigned.
2002-09-26 02:54:30 +00:00
Jeff Roberson
c944ebed73 - Lock access to the buf lists in spec_sync()
- Fixup interlock locking in spec_close()
2002-09-25 02:29:49 +00:00
Nate Lawson
06be2aaa83 Remove all use of vnode->v_tag, replacing with appropriate substitutes.
v_tag is now const char * and should only be used for debugging.

Additionally:
1. All users of VT_NTS now check vfsconf->vf_type VFCF_NETWORK
2. The user of VT_PROCFS now checks for the new flag VV_PROCDEP, which
is propagated by pseudofs to all child vnodes if the fs sets PFS_PROCDEP.

Suggested by:   phk
Reviewed by:    bde, rwatson (earlier version)
2002-09-14 09:02:28 +00:00
Jeff Roberson
e6e370a7fe - Replace v_flag with v_iflag and v_vflag
- v_vflag is protected by the vnode lock and is used when synchronization
   with VOP calls is needed.
 - v_iflag is protected by interlock and is used for dealing with vnode
   management issues.  These flags include X/O LOCK, FREE, DOOMED, etc.
 - All accesses to v_iflag and v_vflag have either been locked or marked with
   mp_fixme's.
 - Many ASSERT_VOP_LOCKED calls have been added where the locking was not
   clear.
 - Many functions in vfs_subr.c were restructured to provide for stronger
   locking.

Idea stolen from:	BSD/OS
2002-08-04 10:29:36 +00:00
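
A sketch of the resulting locking discipline, with the specific flag bits chosen purely as examples: v_vflag is modified with the vnode lock held, v_iflag with the vnode interlock held.

    /* v_vflag: synchronized with VOP calls, so hold the vnode lock. */
    ASSERT_VOP_LOCKED(vp, "setting a v_vflag bit");
    vp->v_vflag |= VV_TEXT;                 /* example bit only */

    /* v_iflag: vnode-management state, so hold the interlock. */
    VI_LOCK(vp);
    vp->v_iflag |= VI_DOOMED;               /* example bit only */
    VI_UNLOCK(vp);
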
Jeff Roberson
17f888f15f - Explicitly state that specfs does not support locking by using
vop_no{lock,unlock,islocked}.  This should be the only vnode opv that does
   so.
2002-07-27 05:14:59 +00:00
Alan Cox
eb13174a6b o Lock page queue accesses by vm_page_activate() and vm_page_deactivate(). 2002-07-27 05:08:49 +00:00
Poul-Henning Kamp
17b5825d7e Remove a check of blocknumbers/offsets which will be pointless with
64 bit daddr_t.

Sponsored by: DARPA & NAI Labs.
2002-05-18 09:32:56 +00:00
John Baldwin
ba626c1db2 Lock proctree_lock instead of pgrpsess_lock. 2002-04-16 17:11:34 +00:00
Alfred Perlstein
11caded34f Remove __P. 2002-03-19 22:20:14 +00:00
Poul-Henning Kamp
26facaeb4d If in strategy we find that we have no devsw on the device anymore we
are probably talking about some disk device which went away, so
return ENXIO instead of panicking.
2002-03-05 13:25:57 +00:00
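
A sketch of the check described, as a fragment from a strategy-style routine; the error handling is simplified and illustrative.

    /* Device went away (no devsw any more): fail with ENXIO, don't panic. */
    if (devsw(bp->b_dev) == NULL) {
            bp->b_error = ENXIO;
            bp->b_ioflags |= BIO_ERROR;
            bufdone(bp);
            return (0);
    }
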
John Baldwin
a854ed9893 Simple p_ucred -> td_ucred changes to start using the per-thread ucred
reference.
2002-02-27 18:32:23 +00:00
Seigo Tanimura
f591779bb5 Lock struct pgrp, session and sigio.
New locks are:

- pgrpsess_lock, which locks the whole set of pgrps and sessions,
- pg_mtx which protects the pgrp members, and
- s_mtx which protects the session members.

Please refer to sys/proc.h for the coverage of these locks.

Changes on the pgrp/session interface:

- pgfind() needs the pgrpsess_lock held.

- The caller of enterpgrp() is responsible for allocating a new pgrp and
  session.

- Call enterthispgrp() in order to enter an existing pgrp.

- pgsignal() requires a pgrp lock held.

Reviewed by:	jhb, alfred
Tested on:	cvsup.jp.FreeBSD.org
		(which is a quad-CPU machine running -current)
2002-02-23 11:12:57 +00:00
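
A sketch of using the revised interface under the rules listed above. Treating pgrpsess_lock as an sx lock, and assuming pgfind() hands back the pgrp unlocked, are simplifications made for illustration.

    static void
    example_signal_pgrp(pid_t pgid, int sig)
    {
            struct pgrp *pgrp;

            sx_slock(&pgrpsess_lock);       /* pgfind() needs this held */
            pgrp = pgfind(pgid);
            if (pgrp != NULL) {
                    PGRP_LOCK(pgrp);        /* pg_mtx covers the members */
                    pgsignal(pgrp, sig, 1); /* requires the pgrp lock */
                    PGRP_UNLOCK(pgrp);
            }
            sx_sunlock(&pgrpsess_lock);
    }
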
Poul-Henning Kamp
40f7b5a9cc Various nit-picking, mostly of style(9) character.
Obtained from:	~bde/sys.dif.gz
2002-02-10 22:00:20 +00:00
John Baldwin
bd78cece5d Change the kernel's ucred API as follows:
- crhold() returns a reference to the ucred whose refcount it bumps.
- crcopy() now simply copies the credentials from one credential to
  another and has no return value.
- a new crshared() primitive is added which returns true if a ucred's
  refcount is > 1 and false (0) otherwise.
2001-10-11 23:38:17 +00:00
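
A sketch of the copy-on-write pattern the new primitives enable; crhold() handing back the ucred it referenced also means it can be used inline in assignments. The wrapper itself is hypothetical; crget()/crfree() are the usual allocation and release routines.

    static struct ucred *
    example_cow_cred(struct ucred *cr)
    {
            struct ucred *newcr;

            if (!crshared(cr))              /* sole reference: modify in place */
                    return (cr);
            newcr = crget();                /* fresh credential */
            crcopy(newcr, cr);              /* copies contents; no return value */
            crfree(cr);                     /* drop the shared one */
            return (newcr);
    }
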
Robert Watson
f86cf763ef o Modify generic specfs device open access control checks to use
securelevel_ge() instead of direct securelevel variable checks.

Obtained from:	TrustedBSD Project
2001-09-26 20:18:26 +00:00
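
A sketch of the substitution described: the open-time check is phrased through securelevel_ge() rather than comparing the securelevel variable directly. The surrounding condition and variable names are illustrative.

    /* Example: refuse write opens of disks at securelevel >= 1. */
    if ((fflag & FWRITE) != 0 && vn_isdisk(vp, NULL)) {
            error = securelevel_ge(cred, 1);   /* 0, or EPERM at that level */
            if (error != 0)
                    return (error);
    }
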
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Matthew Dillon
0cddd8f023 With Alfred's permission, remove vm_mtx in favor of a fine-grained approach
(this commit is just the first stage).  Also add various GIANT_ macros to
formalize the removal of Giant, making it easy to test in a more piecemeal
fashion. These macros will allow us to test fine-grained locks to a degree
before removing Giant, and also after, and to remove Giant in a piecemeal
fashion via sysctl's on those subsystems which the authors believe can
operate without Giant.
2001-07-04 16:20:28 +00:00
John Baldwin
c7f52620e0 Don't acquire/release Giant around some of the places that need it in
spec_getpages().  Instead, assert that Giant is held by the caller.
2001-05-23 22:20:29 +00:00
Alfred Perlstein
2395531439 Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level
vm operations.

faults can not be taken without holding Giant.

Memory subsystems can now call the base page allocators safely.

Almost all atomic ops were removed as they are covered under the
vm mutex.

Alpha and ia64 now need to catch up to i386's trap handlers.

FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).

Reviewed (partially) by: jake, jhb
2001-05-19 01:28:09 +00:00
Bruce Evans
438abdb9c6 Backed out previous commit. It caused massive filesystem corruption,
not to mention a compile-time warning about the critical function
becoming unused, by replacing spec_bmap() with vop_stdbmap().

ntfs seems to have the same bug.

The factor for converting specfs block numbers to physical block
numbers is 1, but vop_stdbmap() uses the bogus factor
btodb(ap->a_vp->v_mount->mnt_stat.f_iosize), which is 16 for ffs with
the default block size of 8K.  This factor is bogus even for vop_stdbmap()
-- the correct factor is related to the filesystem blocksize which is not
necessarily the same as the optimal i/o size.  vop_stdbmap() was apparently
cloned from nfs where these sizes happen to be the same.

There may also be a problem with a_vp->v_mount being null.  spec_bmap()
still checks for this, but I think the checks in specfs are dead code
which used to support block devices.
2001-04-30 14:35:35 +00:00
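
The arithmetic behind the bogus factor, using the numbers given in the message:

    /*
     * With DEV_BSIZE == 512 and an ffs f_iosize of 8K,
     * btodb(8192) == 8192 / 512 == 16, so vop_stdbmap() scales specfs
     * block numbers by 16.  For a device vnode the correct factor is 1:
     * the block numbers are already in device blocks.
     */
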
Poul-Henning Kamp
b7ebffbc08 Add a vop_stdbmap(), and make it part of the default vop vector.
Make 7 filesystems which don't really know about VOP_BMAP rely
on the default vector, rather than more or less complete local
vop_nopbmap() implementations.
2001-04-29 11:48:41 +00:00
Greg Lehey
60fb0ce365 Revert consequences of changes to mount.h, part 2.
Requested by:	bde
2001-04-29 02:45:39 +00:00
Greg Lehey
d98dc34f52 Correct #includes to work with fixed sys/mount.h. 2001-04-23 09:05:15 +00:00
Kirk McKusick
589c7af992 Fixes to track snapshot copy-on-write checking in the specinfo
structure rather than assuming that the device vnode would reside
in the FFS filesystem (which is obviously a broken assumption with
the device filesystem).
2001-03-07 07:09:55 +00:00
Jonathan Lemon
608a3ce62a Extend kqueue down to the device layer.
Backwards compatible approach suggested by: peter
2001-02-15 16:34:11 +00:00
Poul-Henning Kamp
37d4006626 Another round of the <sys/queue.h> FOREACH transmogriffer.
Created with:   sed(1)
Reviewed by:    md5(1)
2001-02-04 16:08:18 +00:00
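
An example of the mechanical rewrite that sed pass performs; the list head and field names below are made up:

    /* Before: open-coded traversal of a <sys/queue.h> tail queue. */
    for (bp = TAILQ_FIRST(&vp->v_dirtyblkhd); bp != NULL;
        bp = TAILQ_NEXT(bp, b_vnbufs))
            examine(bp);

    /* After: the equivalent FOREACH form. */
    TAILQ_FOREACH(bp, &vp->v_dirtyblkhd, b_vnbufs)
            examine(bp);
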
Poul-Henning Kamp
4997ad7c1f Add a BUF_KERNPROC() in the BIO_DELETE path.
This seems to fix the problem which md(4)-backed filesystems exposed.
2001-01-30 10:06:08 +00:00
Matthew Dillon
2a9737202a This patch reestablishes the spec_fsync() guarantee that synchronous
fsyncs, which typically occur during unmounting, will drain all dirty
buffers even if it takes multiple passes to do so.  The guarantee was
mangled by the last patch which solved a problem due to -current disabling
interrupts while holding giant (which caused an infinite spin loop waiting for
I/O to complete).  -stable does not have either patch, but has a similar
bug in the original spec_fsync() code which is triggered by a bug in the
softupdates umount code, a fix for which will be committed to -current
as soon as Kirk stamps it.  Then both solutions will be MFC'd to -stable.

-stable currently suffers from a combination of the softupdates bug and
a small window of opportunity in the original spec_fsync() code, and -stable
also suffers from the spin-loop bug but since interrupts are enabled the
spin resolves itself in a few milliseconds.
2001-01-29 08:19:28 +00:00
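
A compressed sketch of the restored guarantee, as a fragment from inside a hypothetical fsync routine: for MNT_WAIT, keep making passes until no dirty buffers remain and no writes are in flight, instead of giving up after one pass.

    do {
            example_start_writes(vp);       /* one pass; hypothetical helper */
            while (vp->v_numoutput > 0)     /* wait out in-flight writes */
                    tsleep(&vp->v_numoutput, PRIBIO + 1, "spfsyn", 0);
    } while (waitfor == MNT_WAIT && !TAILQ_EMPTY(&vp->v_dirtyblkhd));
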
Matthew Dillon
08c0a67b2e Fix a lockup problem that occurs with 'cvs update'. specfs's fsync can
get into the same sort of infinite loop that ffs's fsync used to get
into, probably due to background bitmap writes.  The solution is
the same.
2000-12-30 23:32:24 +00:00
Matthew Dillon
2b6b0df712 This implements a better launder limiting solution. There was a solution
in 4.2-REL which I ripped out in -stable and -current when implementing the
low-memory handling solution.  However, maxlaunder turns out to be the saving
grace in certain very heavily loaded systems (e.g. newsreader box).  The new
algorithm limits the number of pages laundered in the first pageout daemon
pass.  If that is not sufficient then successive passes will be run without any
limit.

Write I/O is now pipelined using two sysctls, vfs.lorunningspace and
vfs.hirunningspace.  This prevents excessive buffered writes in the
disk queues which cause long (multi-second) delays for reads.  It leads
to more stable (less jerky) and generally faster I/O streaming to disk
by allowing required read ops (e.g. for indirect blocks and such) to occur
without interrupting the write stream, among other things.

NOTE: eventually, filesystem write I/O pipelining needs to be done on a
per-device basis.  At the moment it is globalized.
2000-12-26 19:41:38 +00:00
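
A sketch of the pipelining idea, with illustrative names and no locking: the two sysctls bound the bytes of write I/O allowed in flight, writers block at the high watermark, and completions wake them once the total falls back under the low watermark.

    static long runningbufspace;            /* write bytes in flight */
    static long lorunningspace;             /* vfs.lorunningspace */
    static long hirunningspace;             /* vfs.hirunningspace */

    static void
    example_wait_runningspace(void)
    {
            while (runningbufspace > hirunningspace)
                    tsleep(&runningbufspace, PVM, "wdrain", 0);
    }

    static void
    example_write_done(struct buf *bp)
    {
            runningbufspace -= bp->b_bufsize;
            if (runningbufspace < lorunningspace)
                    wakeup(&runningbufspace);
    }
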
Poul-Henning Kamp
1d7e3e42e7 Take VBLK devices further out of their misery.
This should fix the panic I introduced in my previous commit on this topic.
2000-11-02 21:14:13 +00:00