Poul-Henning Kamp 4392001125 Move UFS from DEVFS backing to GEOM backing.
This eliminates a bunch of vnode overhead (approx 1-2 % speed
improvement) and gives us more control over the access to the storage
device.

Access counts on the underlying device are not correctly tracked and
therefore it is possible to read-only mount the same disk device multiple
times:
	syv# mount -p
	/dev/md0        /var    ufs rw  2 2
	/dev/ad0        /mnt    ufs ro  1 1
	/dev/ad0        /mnt2   ufs ro  1 1
	/dev/ad0        /mnt3   ufs ro  1 1

Since UFS/FFS is not a synchronously consistent filesystem (i.e. it
caches things in RAM) this is not possible with read-write mounts, and
the system will correctly reject this.

Details:

	Add a geom consumer and a bufobj pointer to ufsmount.
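	Roughly, the two new fields look like this (the um_cp/um_bo
	names and comments are my sketch of what sys/ufs/ufsmount.h
	gains; treat the exact declarations as approximate):

		struct ufsmount {
			/* ... existing fields ... */
			struct g_consumer *um_cp;	/* GEOM consumer for the device */
			struct bufobj	  *um_bo;	/* bufobj of the device vnode */
			/* ... */
		};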

	Eliminate the vnode argument from softdep_disk_prewrite().
	Pick the vnode out of bp->b_vp for now.  Eventually we
	should find it through bp->b_bufobj->bo_private.
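	As a sketch (the prototype shown here is an approximation;
	the point is only the dropped vnode argument):

		int
		softdep_disk_prewrite(struct buf *bp)
		{
			struct vnode *vp;

			vp = bp->b_vp;	/* interim; eventually via
					   bp->b_bufobj->bo_private */
			/* ... */
		}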

	In the mountcode, use g_vfs_open() once we have used
	VOP_ACCESS() to check permissions.
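	Approximately, as an illustration of the ffs_mountfs() flow
	(variable names, locking and error handling are abbreviated,
	not the verbatim code):

		/* cp is a struct g_consumer *, td is curthread */
		error = VOP_ACCESS(devvp, ronly ? VREAD : (VREAD | VWRITE),
		    td->td_ucred, td);
		if (error)
			return (error);
		g_topology_lock();
		error = g_vfs_open(devvp, &cp, "ffs", ronly ? 0 : 1);
		g_topology_unlock();
		if (error)
			return (error);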

	When upgrading and downgrading between r/o and r/w do the
	right thing with GEOM access counts.  Remove all the
	workarounds for not being able to do this with VOP_OPEN().
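	In terms of GEOM calls this amounts to roughly the following
	(the surrounding remount logic is omitted; the access-count
	deltas are the point):

		/* r/o -> r/w: ask for write and exclusive access */
		g_topology_lock();
		error = g_access(cp, 0, 1, 1);
		g_topology_unlock();

		/* r/w -> r/o: give the counts back */
		g_topology_lock();
		g_access(cp, 0, -1, -1);
		g_topology_unlock();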

	If we are the root mount, drop the exclusive access count
	until we upgrade to r/w.  This allows fsck of the root
	filesystem and the MNT_RELOAD to work correctly.
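	The idea, sketched (the MNT_ROOTFS test and its exact
	placement are my approximation):

		if ((mp->mnt_flag & MNT_ROOTFS) != 0) {
			/* give back the exclusive count so fsck and
			   MNT_RELOAD can get at the device while we
			   are still mounted r/o */
			g_topology_lock();
			g_access(cp, 0, 0, -1);
			g_topology_unlock();
		}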

	Set bo_private to the GEOM consumer on the device bufobj.
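	In code this is just the following, with devvp being the
	device vnode and cp the consumer returned by g_vfs_open():

		devvp->v_bufobj.bo_private = cp;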

	Change the ffs_ops->strategy function to call g_vfs_strategy().
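	A minimal version of such a strategy method could look like
	this; the ffs_geom_strategy name is illustrative, and the
	real function also runs the write-path hooks first:

		static void
		ffs_geom_strategy(struct bufobj *bo, struct buf *bp)
		{
			/* softdep prewrite / snapshot copy-on-write
			   handling for writes would go here */
			g_vfs_strategy(bo, bp);
		}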

	In ufs_strategy() directly call the strategy on the disk
	bufobj.  Same in rawread.
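	Sketch of the tail of ufs_strategy() after the change (from
	memory, so treat the details as approximate; vp is the UFS
	vnode from the VOP arguments):

		bp->b_iooffset = dbtob(bp->b_blkno);
		BO_STRATEGY(VFSTOUFS(vp->v_mount)->um_bo, bp);
		return (0);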

	In ffs_fsync() we will no longer see VCHR device nodes, so
	remove code which synced the filesystem mounted on it, in
	case we came there.  I'm not sure this code made sense in
	the first place since we would have taken the specfs route
	on such a vnode.

	Redo the highly bogus readblock() function in the snapshot
	code to something slightly less bogus: Constructing an uio
	and using physio was really quite a detour.  Instead just
	fill in a bio and ship it down.
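	A sketch of the reworked readblock() (reconstructed; the
	offset arithmetic and the wait message are approximations):

		static int
		readblock(struct vnode *vp, struct buf *bp, ufs2_daddr_t lbn)
		{
			struct inode *ip;
			struct bio *bip;

			ip = VTOI(vp);
			bip = g_alloc_bio();
			bip->bio_cmd = BIO_READ;
			bip->bio_offset = dbtob(fsbtodb(ip->i_fs,
			    blkstofrags(ip->i_fs, lbn)));
			bip->bio_data = bp->b_data;
			bip->bio_length = bp->b_bcount;
			bip->bio_done = NULL;		/* we wait synchronously */

			g_io_request(bip, ip->i_devvp->v_bufobj.bo_private);
			bp->b_error = biowait(bip, "snaprdb");
			g_destroy_bio(bip);
			return (bp->b_error);
		}
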
2004-10-29 10:15:56 +00:00

$FreeBSD$

Using Soft Updates

To enable the soft updates feature in your kernel, add options
SOFTUPDATES to your kernel configuration file.
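For example, the line to add looks like this:

	options 	SOFTUPDATES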

Once you are running a kernel with soft update support, you need to enable
it for whichever filesystems you wish to run with the soft update policy.
This is done with the -n option to tunefs(8) on the UNMOUNTED filesystems,
e.g. from single-user mode you'd do something like:

	tunefs -n enable /usr

This permanently enables soft updates on the /usr filesystem (or at
least until a corresponding ``tunefs -n disable'' is done).


Soft Updates Copyright Restrictions

As of June 2000 the restrictive copyright has been removed and 
replaced with a `Berkeley-style' copyright. The files implementing
soft updates now reside in the sys/ufs/ffs directory and are
compiled into the generic kernel by default.


Soft Updates Status

The soft updates code has been running in production on many
systems for the past two years, generally quite successfully.
The two currently known shortcomings are:

1) On filesystems that are chronically full, the two minute lag
   from the time a file is deleted until its free space shows up
   will result in premature filesystem full failures. This
   failure mode is most evident in small filesystems such as
   the root. For this reason, use of soft updates is not
   recommended on the root filesystem.

2) If your system routinely runs parallel processes, each of which
   removes many files, the kernel memory rate-limiting code may not
   be able to slow removal operations to a level sustainable by the
   disk subsystem. The result is that the kernel runs out of memory
   and hangs.

Both of these problems are being addressed, but have not yet
been resolved. There are no other known problems at this time.


How Soft Updates Work

For more general information on soft updates, please see:
	http://www.mckusick.com/softdep/
	http://www.ece.cmu.edu/~ganger/papers/CSE-TR-254-95/

--
Marshall Kirk McKusick <mckusick@mckusick.com>
July 2000