/*-
 * SPDX-License-Identifier: BSD-3-Clause
 *
 * Copyright (c) 1989, 1991, 1993, 1994
 *	The Regents of the University of California.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)ffs_vfsops.c	8.31 (Berkeley) 5/20/95
 */
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_quota.h"
#include "opt_ufs.h"
#include "opt_ffs.h"
#include "opt_ddb.h"

#include <sys/param.h>
#include <sys/gsb_crc32.h>
#include <sys/systm.h>
#include <sys/namei.h>
#include <sys/priv.h>
#include <sys/proc.h>
#include <sys/taskqueue.h>
#include <sys/kernel.h>
#include <sys/ktr.h>
#include <sys/vnode.h>
#include <sys/mount.h>
#include <sys/bio.h>
#include <sys/buf.h>
#include <sys/conf.h>
#include <sys/fcntl.h>
#include <sys/ioccom.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/rwlock.h>
#include <sys/sysctl.h>
#include <sys/vmmeter.h>

#include <security/mac/mac_framework.h>

#include <ufs/ufs/dir.h>
#include <ufs/ufs/extattr.h>
#include <ufs/ufs/gjournal.h>
#include <ufs/ufs/quota.h>
#include <ufs/ufs/ufsmount.h>
#include <ufs/ufs/inode.h>
#include <ufs/ufs/ufs_extern.h>

#include <ufs/ffs/fs.h>
#include <ufs/ffs/ffs_extern.h>

#include <vm/vm.h>
#include <vm/uma.h>
#include <vm/vm_page.h>

#include <geom/geom.h>
#include <geom/geom_vfs.h>

#include <ddb/ddb.h>
static uma_zone_t uma_inode, uma_ufs1, uma_ufs2;
VFS_SMR_DECLARE;

static int	ffs_mountfs(struct vnode *, struct mount *, struct thread *);
static void	ffs_oldfscompat_read(struct fs *, struct ufsmount *,
		    ufs2_daddr_t);
static void	ffs_ifree(struct ufsmount *ump, struct inode *ip);
static int	ffs_sync_lazy(struct mount *mp);
static int	ffs_use_bread(void *devfd, off_t loc, void **bufp, int size);
static int	ffs_use_bwrite(void *devfd, off_t loc, void *buf, int size);

static vfs_init_t ffs_init;
static vfs_uninit_t ffs_uninit;
static vfs_extattrctl_t ffs_extattrctl;
static vfs_cmount_t ffs_cmount;
static vfs_unmount_t ffs_unmount;
static vfs_mount_t ffs_mount;
static vfs_statfs_t ffs_statfs;
static vfs_fhtovp_t ffs_fhtovp;
static vfs_sync_t ffs_sync;

static struct vfsops ufs_vfsops = {
	.vfs_extattrctl =	ffs_extattrctl,
	.vfs_fhtovp =		ffs_fhtovp,
	.vfs_init =		ffs_init,
	.vfs_mount =		ffs_mount,
	.vfs_cmount =		ffs_cmount,
	.vfs_quotactl =		ufs_quotactl,
	.vfs_root =		vfs_cache_root,
	.vfs_cachedroot =	ufs_root,
	.vfs_statfs =		ffs_statfs,
	.vfs_sync =		ffs_sync,
	.vfs_uninit =		ffs_uninit,
	.vfs_unmount =		ffs_unmount,
	.vfs_vget =		ffs_vget,
	.vfs_susp_clean =	process_deferred_inactive,
};

VFS_SET(ufs_vfsops, ufs, 0);
MODULE_VERSION(ufs, 1);

static b_strategy_t ffs_geom_strategy;
static b_write_t ffs_bufwrite;

static struct buf_ops ffs_ops = {
	.bop_name =	"FFS",
	.bop_write =	ffs_bufwrite,
	.bop_strategy =	ffs_geom_strategy,
	.bop_sync =	bufsync,
#ifdef NO_FFS_SNAPSHOT
	.bop_bdflush =	bufbdflush,
#else
	.bop_bdflush =	ffs_bdflush,
#endif
};

/*
 * Note that userquota and groupquota options are not currently used
 * by UFS/FFS code and generally mount(8) does not pass those options
 * from userland, but they can be passed by loader(8) via
 * vfs.root.mountfrom.options.
 */
static const char *ffs_opts[] = { "acls", "async", "noatime", "noclusterr",
    "noclusterw", "noexec", "export", "force", "from", "groupquota",
    "multilabel", "nfsv4acls", "fsckpid", "snapshot", "nosuid", "suiddir",
    "nosymfollow", "sync", "union", "userquota", "untrusted", NULL };

static int ffs_enxio_enable = 1;
SYSCTL_DECL(_vfs_ffs);
SYSCTL_INT(_vfs_ffs, OID_AUTO, enxio_enable, CTLFLAG_RWTUN,
    &ffs_enxio_enable, 0,
    "enable mapping of other disk I/O errors to ENXIO");
/*
 * Return buffer with the contents of block "offset" from the beginning of
 * directory "ip".  If "res" is non-zero, fill it in with a pointer to the
 * remaining space in the directory.
 */
static int
ffs_blkatoff(struct vnode *vp, off_t offset, char **res, struct buf **bpp)
{
	struct inode *ip;
	struct fs *fs;
	struct buf *bp;
	ufs_lbn_t lbn;
	int bsize, error;

	ip = VTOI(vp);
	fs = ITOFS(ip);
	lbn = lblkno(fs, offset);
	bsize = blksize(fs, ip, lbn);

	*bpp = NULL;
	error = bread(vp, lbn, bsize, NOCRED, &bp);
	if (error) {
		return (error);
	}
	if (res)
		*res = (char *)bp->b_data + blkoff(fs, offset);
	*bpp = bp;
	return (0);
}
/*
 * Load up the contents of an inode and copy the appropriate pieces
 * to the incore copy.
 */
static int
ffs_load_inode(struct buf *bp, struct inode *ip, struct fs *fs, ino_t ino)
{
	struct ufs1_dinode *dip1;
	struct ufs2_dinode *dip2;
	int error;

	if (I_IS_UFS1(ip)) {
		dip1 = ip->i_din1;
		*dip1 =
		    *((struct ufs1_dinode *)bp->b_data + ino_to_fsbo(fs, ino));
		ip->i_mode = dip1->di_mode;
		ip->i_nlink = dip1->di_nlink;
		ip->i_effnlink = dip1->di_nlink;
		ip->i_size = dip1->di_size;
		ip->i_flags = dip1->di_flags;
		ip->i_gen = dip1->di_gen;
		ip->i_uid = dip1->di_uid;
		ip->i_gid = dip1->di_gid;
		return (0);
	}
	dip2 = ((struct ufs2_dinode *)bp->b_data + ino_to_fsbo(fs, ino));
	if ((error = ffs_verify_dinode_ckhash(fs, dip2)) != 0 &&
	    !ffs_fsfail_cleanup(ITOUMP(ip), error)) {
		printf("%s: inode %jd: check-hash failed\n", fs->fs_fsmnt,
		    (intmax_t)ino);
		return (error);
	}
	*ip->i_din2 = *dip2;
	dip2 = ip->i_din2;
	ip->i_mode = dip2->di_mode;
	ip->i_nlink = dip2->di_nlink;
	ip->i_effnlink = dip2->di_nlink;
	ip->i_size = dip2->di_size;
	ip->i_flags = dip2->di_flags;
	ip->i_gen = dip2->di_gen;
	ip->i_uid = dip2->di_uid;
	ip->i_gid = dip2->di_gid;
	return (0);
}
/*
 * Verify that a filesystem block number is a valid data block.
 * This routine is only called on untrusted filesystems.
 */
static int
ffs_check_blkno(struct mount *mp, ino_t inum, ufs2_daddr_t daddr, int blksize)
{
	struct fs *fs;
	struct ufsmount *ump;
	ufs2_daddr_t end_daddr;
	int cg, havemtx;

	KASSERT((mp->mnt_flag & MNT_UNTRUSTED) != 0,
	    ("ffs_check_blkno called on a trusted file system"));
	ump = VFSTOUFS(mp);
	fs = ump->um_fs;
	cg = dtog(fs, daddr);
	end_daddr = daddr + numfrags(fs, blksize);
	/*
	 * Verify that the block number is a valid data block.  Also check
	 * that it does not point to an inode block or a superblock.  Accept
	 * blocks that are unallocated (0) or part of snapshot metadata
	 * (BLK_NOCOPY or BLK_SNAP).
	 *
	 * Thus, the block must be in a valid range for the filesystem and
	 * either in the space before a backup superblock (except the first
	 * cylinder group where that space is used by the bootstrap code) or
	 * after the inode blocks and before the end of the cylinder group.
	 */
	if ((uint64_t)daddr <= BLK_SNAP ||
	    ((uint64_t)end_daddr <= fs->fs_size &&
	    ((cg > 0 && end_daddr <= cgsblock(fs, cg)) ||
	    (daddr >= cgdmin(fs, cg) &&
	    end_daddr <= cgbase(fs, cg) + fs->fs_fpg))))
		return (0);
	if ((havemtx = mtx_owned(UFS_MTX(ump))) == 0)
		UFS_LOCK(ump);
	if (ppsratecheck(&ump->um_last_integritymsg,
	    &ump->um_secs_integritymsg, 1)) {
		UFS_UNLOCK(ump);
		uprintf("\n%s: inode %jd, out-of-range indirect block "
		    "number %jd\n", mp->mnt_stat.f_mntonname, inum, daddr);
		if (havemtx)
			UFS_LOCK(ump);
	} else if (!havemtx)
		UFS_UNLOCK(ump);
	return (EINTEGRITY);
}
/*
 * Initiate a forcible unmount.
 * Used to unmount filesystems whose underlying media has gone away.
 */
static void
ffs_fsfail_unmount(void *v, int pending)
{
	struct fsfail_task *etp;
	struct mount *mp;

	etp = v;

	/*
	 * Find our mount and get a ref on it, then try to unmount.
	 */
	mp = vfs_getvfs(&etp->fsid);
	if (mp != NULL)
		dounmount(mp, MNT_FORCE, curthread);
	free(etp, M_UFSMNT);
}
/*
 * On first ENXIO error, start a task that forcibly unmounts the filesystem.
 *
 * Return true if a cleanup is in progress.
 */
int
ffs_fsfail_cleanup(struct ufsmount *ump, int error)
{
	int retval;

	UFS_LOCK(ump);
	retval = ffs_fsfail_cleanup_locked(ump, error);
	UFS_UNLOCK(ump);
	return (retval);
}
int
ffs_fsfail_cleanup_locked(struct ufsmount *ump, int error)
{
	struct fsfail_task *etp;
	struct task *tp;

	mtx_assert(UFS_MTX(ump), MA_OWNED);
	if (error == ENXIO && (ump->um_flags & UM_FSFAIL_CLEANUP) == 0) {
		ump->um_flags |= UM_FSFAIL_CLEANUP;
		/*
		 * Queue an async forced unmount.
		 */
		etp = ump->um_fsfail_task;
		ump->um_fsfail_task = NULL;
		if (etp != NULL) {
			tp = &etp->task;
			TASK_INIT(tp, 0, ffs_fsfail_unmount, etp);
			taskqueue_enqueue(taskqueue_thread, tp);
			printf("UFS: forcibly unmounting %s from %s\n",
			    ump->um_mountp->mnt_stat.f_mntfromname,
			    ump->um_mountp->mnt_stat.f_mntonname);
		}
	}
	return ((ump->um_flags & UM_FSFAIL_CLEANUP) != 0);
}
/*
 * Wrapper used during ENXIO cleanup to allocate empty buffers when
 * the kernel is unable to read the real one.  They are needed so that
 * the soft updates code can use them to unwind its dependencies.
 */
int
ffs_breadz(struct ufsmount *ump, struct vnode *vp, daddr_t lblkno,
    daddr_t dblkno, int size, daddr_t *rablkno, int *rabsize, int cnt,
    struct ucred *cred, int flags, void (*ckhashfunc)(struct buf *),
    struct buf **bpp)
{
	int error;

	flags |= GB_CVTENXIO;
	error = breadn_flags(vp, lblkno, dblkno, size, rablkno, rabsize, cnt,
	    cred, flags, ckhashfunc, bpp);
	if (error != 0 && ffs_fsfail_cleanup(ump, error)) {
		error = getblkx(vp, lblkno, dblkno, size, 0, 0, flags, bpp);
		KASSERT(error == 0, ("getblkx failed"));
		vfs_bio_bzero_buf(*bpp, 0, size);
	}
	return (error);
}
2004-07-30 22:08:52 +00:00
|
|
|
static int
|
2009-05-11 15:33:26 +00:00
|
|
|
ffs_mount(struct mount *mp)
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
2020-03-06 18:41:37 +00:00
|
|
|
struct vnode *devvp, *odevvp;
|
2009-05-11 15:33:26 +00:00
|
|
|
struct thread *td;
|
2012-07-22 15:40:31 +00:00
|
|
|
struct ufsmount *ump = NULL;
|
2002-05-13 09:22:31 +00:00
|
|
|
struct fs *fs;
|
2011-07-15 16:20:33 +00:00
|
|
|
pid_t fsckpid = 0;
|
2016-11-13 21:49:51 +00:00
|
|
|
int error, error1, flags;
|
2019-04-08 15:20:05 +00:00
|
|
|
uint64_t mntorflags, saved_mnt_flag;
|
2008-10-28 13:44:11 +00:00
|
|
|
accmode_t accmode;
|
2004-07-30 22:08:52 +00:00
|
|
|
struct nameidata ndp;
|
2004-12-07 08:15:41 +00:00
|
|
|
char *fspec;
|
2021-02-28 18:55:35 +00:00
|
|
|
bool mounted_softdep;
|
1995-05-30 08:16:23 +00:00
|
|
|
|
2009-05-11 15:33:26 +00:00
|
|
|
td = curthread;
|
2004-12-07 08:15:41 +00:00
|
|
|
if (vfs_filteropt(mp->mnt_optnew, ffs_opts))
|
|
|
|
return (EINVAL);
|
2002-12-27 11:05:05 +00:00
|
|
|
if (uma_inode == NULL) {
|
|
|
|
uma_inode = uma_zcreate("FFS inode",
|
|
|
|
sizeof(struct inode), NULL, NULL, NULL, NULL,
|
|
|
|
UMA_ALIGN_PTR, 0);
|
|
|
|
uma_ufs1 = uma_zcreate("FFS1 dinode",
|
|
|
|
sizeof(struct ufs1_dinode), NULL, NULL, NULL, NULL,
|
|
|
|
UMA_ALIGN_PTR, 0);
|
|
|
|
uma_ufs2 = uma_zcreate("FFS2 dinode",
|
|
|
|
sizeof(struct ufs2_dinode), NULL, NULL, NULL, NULL,
|
|
|
|
UMA_ALIGN_PTR, 0);
|
2020-07-25 10:38:05 +00:00
|
|
|
VFS_SMR_ZONE_SET(uma_inode);
|
2002-12-27 11:05:05 +00:00
|
|
|
}
|
2004-10-05 11:26:43 +00:00
|
|
|
|
2010-05-19 09:32:11 +00:00
|
|
|
vfs_deleteopt(mp->mnt_optnew, "groupquota");
|
|
|
|
vfs_deleteopt(mp->mnt_optnew, "userquota");
|
|
|
|
|
2004-12-07 08:15:41 +00:00
|
|
|
fspec = vfs_getopts(mp->mnt_optnew, "from", &error);
|
|
|
|
if (error)
|
|
|
|
return (error);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2006-09-26 04:12:49 +00:00
|
|
|
mntorflags = 0;
|
2019-07-01 23:22:26 +00:00
|
|
|
if (vfs_getopt(mp->mnt_optnew, "untrusted", NULL, NULL) == 0)
|
|
|
|
mntorflags |= MNT_UNTRUSTED;
|
|
|
|
|
- Add parsing for the following existing UFS/FFS mount options in the nmount()
callpath via vfs_getopt(), and set the appropriate MNT_* flag:
-> acls, async, force, multilabel, noasync, noatime,
-> noclusterr, noclusterw, snapshot, update
- Allow errmsg as a valid mount option via vfs_getopt(),
so we can later add a hook to propagate mount errors back
to userspace via vfs_mount_error().
2005-11-18 06:06:10 +00:00
|
|
|
if (vfs_getopt(mp->mnt_optnew, "acls", NULL, NULL) == 0)
|
2006-09-26 04:12:49 +00:00
|
|
|
mntorflags |= MNT_ACLS;
|
- Add parsing for the following existing UFS/FFS mount options in the nmount()
callpath via vfs_getopt(), and set the appropriate MNT_* flag:
-> acls, async, force, multilabel, noasync, noatime,
-> noclusterr, noclusterw, snapshot, update
- Allow errmsg as a valid mount option via vfs_getopt(),
so we can later add a hook to propagate mount errors back
to userspace via vfs_mount_error().
2005-11-18 06:06:10 +00:00
|
|
|
|
2008-05-24 00:41:32 +00:00
|
|
|
if (vfs_getopt(mp->mnt_optnew, "snapshot", NULL, NULL) == 0) {
|
2006-09-26 04:12:49 +00:00
|
|
|
mntorflags |= MNT_SNAPSHOT;
|
2008-05-24 00:41:32 +00:00
|
|
|
/*
|
|
|
|
* Once we have set the MNT_SNAPSHOT flag, do not
|
|
|
|
* persist "snapshot" in the options list.
|
|
|
|
*/
|
|
|
|
vfs_deleteopt(mp->mnt_optnew, "snapshot");
|
2008-08-10 12:15:36 +00:00
|
|
|
vfs_deleteopt(mp->mnt_opt, "snapshot");
|
2008-05-24 00:41:32 +00:00
|
|
|
}
|
- Add parsing for the following existing UFS/FFS mount options in the nmount()
callpath via vfs_getopt(), and set the appropriate MNT_* flag:
-> acls, async, force, multilabel, noasync, noatime,
-> noclusterr, noclusterw, snapshot, update
- Allow errmsg as a valid mount option via vfs_getopt(),
so we can later add a hook to propagate mount errors back
to userspace via vfs_mount_error().
2005-11-18 06:06:10 +00:00
|
|
|
|
2011-07-15 16:20:33 +00:00
|
|
|
if (vfs_getopt(mp->mnt_optnew, "fsckpid", NULL, NULL) == 0 &&
|
|
|
|
vfs_scanopt(mp->mnt_optnew, "fsckpid", "%d", &fsckpid) == 1) {
|
|
|
|
/*
|
|
|
|
* Once we have set the restricted PID, do not
|
|
|
|
* persist "fsckpid" in the options list.
|
|
|
|
*/
|
|
|
|
vfs_deleteopt(mp->mnt_optnew, "fsckpid");
|
|
|
|
vfs_deleteopt(mp->mnt_opt, "fsckpid");
|
|
|
|
if (mp->mnt_flag & MNT_UPDATE) {
|
|
|
|
if (VFSTOUFS(mp)->um_fs->fs_ronly == 0 &&
|
|
|
|
vfs_flagopt(mp->mnt_optnew, "ro", NULL, 0) == 0) {
|
2012-01-14 07:26:16 +00:00
|
|
|
vfs_mount_error(mp,
|
|
|
|
"Checker enable: Must be read-only");
|
2011-07-15 16:20:33 +00:00
|
|
|
return (EINVAL);
|
|
|
|
}
|
|
|
|
} else if (vfs_flagopt(mp->mnt_optnew, "ro", NULL, 0) == 0) {
|
2012-01-14 07:26:16 +00:00
|
|
|
vfs_mount_error(mp,
|
|
|
|
"Checker enable: Must be read-only");
|
2011-07-15 16:20:33 +00:00
|
|
|
return (EINVAL);
|
|
|
|
}
|
|
|
|
/* Set to -1 if we are done */
|
|
|
|
if (fsckpid == 0)
|
|
|
|
fsckpid = -1;
|
|
|
|
}
|
|
|
|
|
2009-12-21 19:39:10 +00:00
|
|
|
if (vfs_getopt(mp->mnt_optnew, "nfsv4acls", NULL, NULL) == 0) {
|
|
|
|
if (mntorflags & MNT_ACLS) {
|
2012-01-14 07:26:16 +00:00
|
|
|
vfs_mount_error(mp,
|
|
|
|
"\"acls\" and \"nfsv4acls\" options "
|
|
|
|
"are mutually exclusive");
|
2009-12-21 19:39:10 +00:00
|
|
|
return (EINVAL);
|
|
|
|
}
|
|
|
|
mntorflags |= MNT_NFS4ACLS;
|
|
|
|
}
|
|
|
|
|
2006-09-26 04:12:49 +00:00
|
|
|
MNT_ILOCK(mp);
|
2020-07-25 10:38:05 +00:00
|
|
|
mp->mnt_kern_flag &= ~MNTK_FPLOOKUP;
|
2010-02-10 18:56:49 +00:00
|
|
|
mp->mnt_flag |= mntorflags;
|
2006-09-26 04:12:49 +00:00
|
|
|
MNT_IUNLOCK(mp);
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* If updating, check whether changing from read-only to
|
|
|
|
* read/write; if there is no device name, that's all we do.
|
|
|
|
*/
|
|
|
|
if (mp->mnt_flag & MNT_UPDATE) {
|
|
|
|
ump = VFSTOUFS(mp);
|
|
|
|
fs = ump->um_fs;
|
2020-03-06 18:41:37 +00:00
|
|
|
odevvp = ump->um_odevvp;
|
1998-03-27 14:20:57 +00:00
|
|
|
devvp = ump->um_devvp;
|
		if (fsckpid == -1 && ump->um_fsckpid > 0) {
			if ((error = ffs_flushfiles(mp, WRITECLOSE, td)) != 0 ||
			    (error = ffs_sbupdate(ump, MNT_WAIT, 0)) != 0)
				return (error);
			g_topology_lock();
			/*
			 * Return to normal read-only mode.
			 */
			error = g_access(ump->um_cp, 0, -1, 0);
			g_topology_unlock();
			ump->um_fsckpid = 0;
		}
		if (fs->fs_ronly == 0 &&
		    vfs_flagopt(mp->mnt_optnew, "ro", NULL, 0)) {
			/*
			 * Flush any dirty data and suspend filesystem.
			 */
			if ((error = vn_start_write(NULL, &mp, V_WAIT)) != 0)
				return (error);
			error = vfs_write_suspend_umnt(mp);
			if (error != 0)
				return (error);

			fs->fs_ronly = 1;
			if (MOUNTEDSOFTDEP(mp)) {
				MNT_ILOCK(mp);
				mp->mnt_flag &= ~MNT_SOFTDEP;
				MNT_IUNLOCK(mp);
				mounted_softdep = true;
			} else
				mounted_softdep = false;

			/*
			 * Check for and optionally get rid of files open
			 * for writing.
			 */
			flags = WRITECLOSE;
			if (mp->mnt_flag & MNT_FORCE)
				flags |= FORCECLOSE;
			if (mounted_softdep) {
				error = softdep_flushfiles(mp, flags, td);
			} else {
				error = ffs_flushfiles(mp, flags, td);
			}
			if (error) {
				fs->fs_ronly = 0;
				if (mounted_softdep) {
					MNT_ILOCK(mp);
					mp->mnt_flag |= MNT_SOFTDEP;
					MNT_IUNLOCK(mp);
				}
				vfs_write_resume(mp, 0);
				return (error);
			}

			if (fs->fs_pendingblocks != 0 ||
			    fs->fs_pendinginodes != 0) {
				printf("WARNING: %s Update error: blocks %jd "
				    "files %d\n", fs->fs_fsmnt,
				    (intmax_t)fs->fs_pendingblocks,
				    fs->fs_pendinginodes);
				fs->fs_pendingblocks = 0;
				fs->fs_pendinginodes = 0;
			}
			if ((fs->fs_flags & (FS_UNCLEAN | FS_NEEDSFSCK)) == 0)
				fs->fs_clean = 1;
			if ((error = ffs_sbupdate(ump, MNT_WAIT, 0)) != 0) {
				fs->fs_ronly = 0;
				fs->fs_clean = 0;
				if (mounted_softdep) {
					MNT_ILOCK(mp);
					mp->mnt_flag |= MNT_SOFTDEP;
					MNT_IUNLOCK(mp);
				}
				vfs_write_resume(mp, 0);
				return (error);
			}
			if (mounted_softdep)
				softdep_unmount(mp);
			g_topology_lock();
			/*
			 * Drop our write and exclusive access.
			 */
			g_access(ump->um_cp, 0, -1, -1);
			g_topology_unlock();
			MNT_ILOCK(mp);
			mp->mnt_flag |= MNT_RDONLY;
			MNT_IUNLOCK(mp);
			/*
			 * Allow the writers to note that filesystem
			 * is ro now.
			 */
			vfs_write_resume(mp, 0);
		}
		if ((mp->mnt_flag & MNT_RELOAD) &&
		    (error = ffs_reload(mp, td, 0)) != 0)
			return (error);
		if (fs->fs_ronly &&
		    !vfs_flagopt(mp->mnt_optnew, "ro", NULL, 0)) {
			/*
			 * If we are running a checker, do not allow upgrade.
			 */
			if (ump->um_fsckpid > 0) {
				vfs_mount_error(mp,
				    "Active checker, cannot upgrade to write");
				return (EINVAL);
			}
			/*
			 * If upgrade to read-write by non-root, then verify
			 * that user has necessary permissions on the device.
			 */
			vn_lock(odevvp, LK_EXCLUSIVE | LK_RETRY);
			error = VOP_ACCESS(odevvp, VREAD | VWRITE,
			    td->td_ucred, td);
			if (error)
				error = priv_check(td, PRIV_VFS_MOUNT_PERM);
			VOP_UNLOCK(odevvp);
			if (error) {
				return (error);
			}
			fs->fs_flags &= ~FS_UNCLEAN;
			if (fs->fs_clean == 0) {
				fs->fs_flags |= FS_UNCLEAN;
				if ((mp->mnt_flag & MNT_FORCE) ||
				    ((fs->fs_flags &
				    (FS_SUJ | FS_NEEDSFSCK)) == 0 &&
				    (fs->fs_flags & FS_DOSOFTDEP))) {
					printf("WARNING: %s was not properly "
					    "dismounted\n", fs->fs_fsmnt);
				} else {
					vfs_mount_error(mp,
					    "R/W mount of %s denied. %s.%s",
					    fs->fs_fsmnt,
					    "Filesystem is not clean - run fsck",
					    (fs->fs_flags & FS_SUJ) == 0 ? "" :
					    " Forced mount will invalidate"
					    " journal contents");
					return (EPERM);
				}
			}
			g_topology_lock();
			/*
			 * Request exclusive write access.
			 */
			error = g_access(ump->um_cp, 0, 1, 1);
			g_topology_unlock();
			if (error)
				return (error);
			if ((error = vn_start_write(NULL, &mp, V_WAIT)) != 0)
				return (error);
			error = vfs_write_suspend_umnt(mp);
			if (error != 0)
				return (error);
			fs->fs_ronly = 0;
			MNT_ILOCK(mp);
			saved_mnt_flag = MNT_RDONLY;
			if (MOUNTEDSOFTDEP(mp) && (mp->mnt_flag &
			    MNT_ASYNC) != 0)
				saved_mnt_flag |= MNT_ASYNC;
			mp->mnt_flag &= ~saved_mnt_flag;
			MNT_IUNLOCK(mp);
			fs->fs_mtime = time_second;
			/* check to see if we need to start softdep */
			if ((fs->fs_flags & FS_DOSOFTDEP) &&
			    (error = softdep_mount(devvp, mp, fs, td->td_ucred))){
				fs->fs_ronly = 1;
				MNT_ILOCK(mp);
				mp->mnt_flag |= saved_mnt_flag;
				MNT_IUNLOCK(mp);
				vfs_write_resume(mp, 0);
				return (error);
			}
			fs->fs_clean = 0;
			if ((error = ffs_sbupdate(ump, MNT_WAIT, 0)) != 0) {
				fs->fs_ronly = 1;
				if ((fs->fs_flags & FS_DOSOFTDEP) != 0)
					softdep_unmount(mp);
				MNT_ILOCK(mp);
				mp->mnt_flag |= saved_mnt_flag;
				MNT_IUNLOCK(mp);
				vfs_write_resume(mp, 0);
				return (error);
			}
			if (fs->fs_snapinum[0] != 0)
				ffs_snapshot_mount(mp);
			vfs_write_resume(mp, 0);
		}
		/*
		 * Soft updates is incompatible with "async",
		 * so if we are doing softupdates stop the user
		 * from setting the async flag in an update.
		 * Softdep_mount() clears it in an initial mount
		 * or ro->rw remount.
		 */
		if (MOUNTEDSOFTDEP(mp)) {
			/* XXX: Reset too late ? */
			MNT_ILOCK(mp);
			mp->mnt_flag &= ~MNT_ASYNC;
			MNT_IUNLOCK(mp);
		}
		/*
		 * Keep MNT_ACLS flag if it is stored in superblock.
		 */
		if ((fs->fs_flags & FS_ACLS) != 0) {
			/* XXX: Set too late ? */
			MNT_ILOCK(mp);
			mp->mnt_flag |= MNT_ACLS;
			MNT_IUNLOCK(mp);
		}

		if ((fs->fs_flags & FS_NFS4ACLS) != 0) {
			/* XXX: Set too late ? */
			MNT_ILOCK(mp);
			mp->mnt_flag |= MNT_NFS4ACLS;
			MNT_IUNLOCK(mp);
		}
		/*
		 * If this is a request from fsck to clean up the filesystem,
		 * then allow the specified pid to proceed.
		 */
		if (fsckpid > 0) {
			if (ump->um_fsckpid != 0) {
				vfs_mount_error(mp,
				    "Active checker already running on %s",
				    fs->fs_fsmnt);
				return (EINVAL);
			}
			KASSERT(MOUNTEDSOFTDEP(mp) == 0,
			    ("soft updates enabled on read-only file system"));
			g_topology_lock();
			/*
			 * Request write access.
			 */
			error = g_access(ump->um_cp, 0, 1, 0);
			g_topology_unlock();
			if (error) {
				vfs_mount_error(mp,
				    "Checker activation failed on %s",
				    fs->fs_fsmnt);
				return (error);
			}
			ump->um_fsckpid = fsckpid;
			if (fs->fs_snapinum[0] != 0)
				ffs_snapshot_mount(mp);
			fs->fs_mtime = time_second;
			fs->fs_fmod = 1;
			fs->fs_clean = 0;
			(void) ffs_sbupdate(ump, MNT_WAIT, 0);
		}

		/*
		 * If this is a snapshot request, take the snapshot.
		 */
		if (mp->mnt_flag & MNT_SNAPSHOT)
			return (ffs_snapshot(mp, fspec));

		/*
		 * Must not call namei() while owning busy ref.
		 */
		vfs_unbusy(mp);
	}
	/*
	 * Not an update, or updating the name: look up the name
	 * and verify that it refers to a sensible disk device.
	 */
	NDINIT(&ndp, LOOKUP, FOLLOW | LOCKLEAF, UIO_SYSSPACE, fspec, td);
	error = namei(&ndp);
	if ((mp->mnt_flag & MNT_UPDATE) != 0) {
		/*
		 * Unmount does not start if MNT_UPDATE is set. Mount
		 * update busies mp before setting MNT_UPDATE. We
		 * must be able to retain our busy ref successfully,
		 * without sleep.
		 */
		error1 = vfs_busy(mp, MBF_NOWAIT);
		MPASS(error1 == 0);
	}
	if (error != 0)
		return (error);
	NDFREE(&ndp, NDF_ONLY_PNBUF);
	devvp = ndp.ni_vp;
	if (!vn_isdisk_error(devvp, &error)) {
		vput(devvp);
		return (error);
	}

	/*
	 * If mount by non-root, then verify that user has necessary
	 * permissions on the device.
	 */
	accmode = VREAD;
	if ((mp->mnt_flag & MNT_RDONLY) == 0)
		accmode |= VWRITE;
	error = VOP_ACCESS(devvp, accmode, td->td_ucred, td);
	if (error)
		error = priv_check(td, PRIV_VFS_MOUNT_PERM);
	if (error) {
		vput(devvp);
		return (error);
	}

	if (mp->mnt_flag & MNT_UPDATE) {
		/*
		 * Update only
		 *
		 * If it's not the same vnode, or at least the same device
		 * then it's not correct.
		 */

		if (devvp->v_rdev != ump->um_devvp->v_rdev)
			error = EINVAL;	/* needs translation */
		vput(devvp);
		if (error)
			return (error);
	} else {
		/*
		 * New mount
		 *
		 * We need the name for the mount point (also used for
		 * "last mounted on") copied in. If an error occurs,
		 * the mount point is discarded by the upper level code.
		 * Note that vfs_mount_alloc() populates f_mntonname for us.
		 */
		if ((error = ffs_mountfs(devvp, mp, td)) != 0) {
			vrele(devvp);
			return (error);
		}
		if (fsckpid > 0) {
			KASSERT(MOUNTEDSOFTDEP(mp) == 0,
			    ("soft updates enabled on read-only file system"));
			ump = VFSTOUFS(mp);
			fs = ump->um_fs;
			g_topology_lock();
			/*
			 * Request write access.
			 */
			error = g_access(ump->um_cp, 0, 1, 0);
			g_topology_unlock();
			if (error) {
				printf("WARNING: %s: Checker activation "
				    "failed\n", fs->fs_fsmnt);
			} else {
				ump->um_fsckpid = fsckpid;
				if (fs->fs_snapinum[0] != 0)
					ffs_snapshot_mount(mp);
				fs->fs_mtime = time_second;
				fs->fs_clean = 0;
				(void) ffs_sbupdate(ump, MNT_WAIT, 0);
			}
		}
	}

	MNT_ILOCK(mp);
	/*
	 * This is racy versus lookup, see ufs_fplookup_vexec for details.
	 */
	if ((mp->mnt_kern_flag & MNTK_FPLOOKUP) != 0)
		panic("MNTK_FPLOOKUP set on mount %p when it should not be", mp);
	if ((mp->mnt_flag & (MNT_ACLS | MNT_NFS4ACLS | MNT_UNION)) == 0)
		mp->mnt_kern_flag |= MNTK_FPLOOKUP;
	MNT_IUNLOCK(mp);

	vfs_mountedfrom(mp, fspec);
	return (0);
}
/*
 * Compatibility with old mount system call.
 */

static int
ffs_cmount(struct mntarg *ma, void *data, uint64_t flags)
{
	struct ufs_args args;
	int error;

	if (data == NULL)
		return (EINVAL);
	error = copyin(data, &args, sizeof args);
	if (error)
		return (error);

	ma = mount_argsu(ma, "from", args.fspec, MAXPATHLEN);
	ma = mount_arg(ma, "export", &args.export, sizeof(args.export));
	error = kernel_mount(ma, flags);

	return (error);
}
/*
 * Reload all incore data for a filesystem (used after running fsck on
 * the root filesystem and finding things to fix). If the 'force' flag
 * is 0, the filesystem must be mounted read-only.
 *
 * Things to do to update the mount:
 *	1) invalidate all cached meta-data.
 *	2) re-read superblock from disk.
 *	3) re-read summary information from disk.
 *	4) invalidate all inactive vnodes.
 *	5) clear MNTK_SUSPEND2 and MNTK_SUSPENDED flags, allowing secondary
 *	   writers, if requested.
 *	6) invalidate all cached file data.
 *	7) re-read inode data for all active vnodes.
 */
int
ffs_reload(struct mount *mp, struct thread *td, int flags)
{
	struct vnode *vp, *mvp, *devvp;
	struct inode *ip;
	void *space;
	struct buf *bp;
	struct fs *fs, *newfs;
	struct ufsmount *ump;
	ufs2_daddr_t sblockloc;
	int i, blks, error;
	u_long size;
	int32_t *lp;

	ump = VFSTOUFS(mp);

	MNT_ILOCK(mp);
	if ((mp->mnt_flag & MNT_RDONLY) == 0 && (flags & FFSR_FORCE) == 0) {
		MNT_IUNLOCK(mp);
		return (EINVAL);
	}
	MNT_IUNLOCK(mp);

	/*
	 * Step 1: invalidate all cached meta-data.
	 */
	devvp = VFSTOUFS(mp)->um_devvp;
	vn_lock(devvp, LK_EXCLUSIVE | LK_RETRY);
	if (vinvalbuf(devvp, 0, 0, 0) != 0)
		panic("ffs_reload: dirty1");
	VOP_UNLOCK(devvp);

	/*
	 * Step 2: re-read superblock from disk.
	 */
	fs = VFSTOUFS(mp)->um_fs;
	if ((error = bread(devvp, btodb(fs->fs_sblockloc), fs->fs_sbsize,
	    NOCRED, &bp)) != 0)
		return (error);
	newfs = (struct fs *)bp->b_data;
	if ((newfs->fs_magic != FS_UFS1_MAGIC &&
	     newfs->fs_magic != FS_UFS2_MAGIC) ||
	    newfs->fs_bsize > MAXBSIZE ||
	    newfs->fs_bsize < sizeof(struct fs)) {
		brelse(bp);
		return (EIO);		/* XXX needs translation */
	}
	/*
	 * Preserve the summary information, read-only status, and
	 * superblock location by copying these fields into our new
	 * superblock before using it to update the existing superblock.
	 */
	newfs->fs_si = fs->fs_si;
	newfs->fs_ronly = fs->fs_ronly;
	sblockloc = fs->fs_sblockloc;
	bcopy(newfs, fs, (u_int)fs->fs_sbsize);
	brelse(bp);
	mp->mnt_maxsymlinklen = fs->fs_maxsymlinklen;
	ffs_oldfscompat_read(fs, VFSTOUFS(mp), sblockloc);
	UFS_LOCK(ump);
	if (fs->fs_pendingblocks != 0 || fs->fs_pendinginodes != 0) {
		printf("WARNING: %s: reload pending error: blocks %jd "
		    "files %d\n", fs->fs_fsmnt, (intmax_t)fs->fs_pendingblocks,
		    fs->fs_pendinginodes);
		fs->fs_pendingblocks = 0;
		fs->fs_pendinginodes = 0;
	}
	UFS_UNLOCK(ump);

	/*
	 * Step 3: re-read summary information from disk.
	 */
	size = fs->fs_cssize;
	blks = howmany(size, fs->fs_fsize);
	if (fs->fs_contigsumsize > 0)
		size += fs->fs_ncg * sizeof(int32_t);
	size += fs->fs_ncg * sizeof(u_int8_t);
	free(fs->fs_csp, M_UFSMNT);
	space = malloc(size, M_UFSMNT, M_WAITOK);
	fs->fs_csp = space;
	for (i = 0; i < blks; i += fs->fs_frag) {
		size = fs->fs_bsize;
		if (i + fs->fs_frag > blks)
			size = (blks - i) * fs->fs_fsize;
		error = bread(devvp, fsbtodb(fs, fs->fs_csaddr + i), size,
		    NOCRED, &bp);
		if (error)
			return (error);
		bcopy(bp->b_data, space, (u_int)size);
		space = (char *)space + size;
		brelse(bp);
	}
	/*
	 * We no longer know anything about clusters per cylinder group.
	 */
	if (fs->fs_contigsumsize > 0) {
		fs->fs_maxcluster = lp = space;
		for (i = 0; i < fs->fs_ncg; i++)
			*lp++ = fs->fs_contigsumsize;
		space = lp;
	}
	size = fs->fs_ncg * sizeof(u_int8_t);
	fs->fs_contigdirs = (u_int8_t *)space;
	bzero(fs->fs_contigdirs, size);
	if ((flags & FFSR_UNSUSPEND) != 0) {
		MNT_ILOCK(mp);
		mp->mnt_kern_flag &= ~(MNTK_SUSPENDED | MNTK_SUSPEND2);
		wakeup(&mp->mnt_flag);
		MNT_IUNLOCK(mp);
	}

loop:
	MNT_VNODE_FOREACH_ALL(vp, mp, mvp) {
		/*
		 * Skip syncer vnode.
		 */
		if (vp->v_type == VNON) {
			VI_UNLOCK(vp);
			continue;
		}
		/*
		 * Step 4: invalidate all cached file data.
		 */
		if (vget(vp, LK_EXCLUSIVE | LK_INTERLOCK)) {
			MNT_VNODE_FOREACH_ALL_ABORT(mp, mvp);
			goto loop;
		}
		if (vinvalbuf(vp, 0, 0, 0))
			panic("ffs_reload: dirty2");
		/*
		 * Step 5: re-read inode data for all active vnodes.
		 */
		ip = VTOI(vp);
		error =
		    bread(devvp, fsbtodb(fs, ino_to_fsba(fs, ip->i_number)),
		    (int)fs->fs_bsize, NOCRED, &bp);
		if (error) {
			vput(vp);
			MNT_VNODE_FOREACH_ALL_ABORT(mp, mvp);
			return (error);
		}
		if ((error = ffs_load_inode(bp, ip, fs, ip->i_number)) != 0) {
			brelse(bp);
			vput(vp);
			MNT_VNODE_FOREACH_ALL_ABORT(mp, mvp);
			return (error);
		}
		ip->i_effnlink = ip->i_nlink;
		brelse(bp);
		vput(vp);
	}
	return (0);
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Common code for mount and mountroot
|
|
|
|
*/
|
2002-12-27 10:06:37 +00:00
|
|
|
static int
|
2020-03-06 18:41:37 +00:00
|
|
|
ffs_mountfs(odevvp, mp, td)
|
|
|
|
struct vnode *odevvp;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct mount *mp;
|
2001-09-12 08:38:13 +00:00
|
|
|
struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
2002-05-13 09:22:31 +00:00
|
|
|
struct ufsmount *ump;
|
|
|
|
struct fs *fs;
|
2004-06-16 09:47:26 +00:00
|
|
|
struct cdev *dev;
|
2018-01-26 00:58:32 +00:00
|
|
|
int error, i, len, ronly;
|
1997-02-10 02:22:35 +00:00
|
|
|
struct ucred *cred;
|
2004-10-29 10:15:56 +00:00
|
|
|
struct g_consumer *cp;
|
2006-03-31 03:54:20 +00:00
|
|
|
struct mount *nmp;
|
2020-03-06 18:41:37 +00:00
|
|
|
struct vnode *devvp;
|
This commit enables a UFS filesystem to do a forcible unmount when
the underlying media fails or becomes inaccessible. For example
when a USB flash memory card hosting a UFS filesystem is unplugged.
The strategy for handling disk I/O errors when soft updates are
enabled is to stop writing to the disk of the affected file system
but continue to accept I/O requests and report that all future
writes by the file system to that disk actually succeed. Then
initiate an asynchronous forced unmount of the affected file system.
There are two cases for disk I/O errors:
- ENXIO, which means that this disk is gone and the lower layers
of the storage stack already guarantee that no future I/O to
this disk will succeed.
- EIO (or most other errors), which means that this particular
I/O request has failed but subsequent I/O requests to this
disk might still succeed.
For ENXIO, we can just clear the error and continue, because we
know that the file system cannot affect the on-disk state after we
see this error. For EIO or other errors, we arrange for the geom_vfs
layer to reject all future I/O requests with ENXIO just like is
done when the geom_vfs is orphaned. In both cases, the file system
code can just clear the error and proceed with the forcible unmount.
This new treatment of I/O errors is needed for writes of any buffer
that is involved in a dependency. Most dependencies are described
by a structure attached to the buffer's b_dep field. But some are
created and processed as a result of the completion of the dependencies
attached to the buffer.
Clearing of some dependencies require a read. For example if there
is a dependency that requires an inode to be written, the disk block
containing that inode must be read, the updated inode copied into
place in that buffer, and the buffer then written back to disk.
Often the needed buffer is already in memory and can be used. But
if it needs to be read from the disk, the read will fail, so we
fabricate a buffer full of zeroes and pretend that the read succeeded.
This zero'ed buffer can be updated and written back to disk.
The only case where a buffer full of zeros causes the code to do
the wrong thing is when reading an inode buffer containing an inode
that still has an inode dependency in memory that will reinitialize
the effective link count (i_effnlink) based on the actual link count
(i_nlink) that we read. To handle this case we now store the i_nlink
value that we wrote in the inode dependency so that it can be
restored into the zero'ed buffer thus keeping the tracking of the
inode link count consistent.
Because applications depend on knowing when an attempt to write
their data to stable storage has failed, the fsync(2) and msync(2)
system calls need to return errors if data fails to be written to
stable storage. So these operations return ENXIO for every call
made on files in a file system where we have otherwise been ignoring
I/O errors.
Coauthered by: mckusick
Reviewed by: kib
Tested by: Peter Holm
Approved by: mckusick (mentor)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D24088
2020-05-25 23:47:31 +00:00
|
|
|
struct fsfail_task *etp;
|
2020-02-16 23:10:59 +00:00
|
|
|
int candelete, canspeedup;
|
Normally when an attempt is made to mount a UFS/FFS filesystem whose
superblock has a check-hash error, an error message noting the
superblock check-hash failure is printed and the mount fails. The
administrator then runs fsck to repair the filesystem and when
successful, the filesystem can once again be mounted.
This approach fails if the filesystem in question is a root filesystem
from which you are trying to boot. Here, the loader fails when trying
to access the filesystem to get the kernel to boot. So it is necessary
to allow the loader to ignore the superblock check-hash error and make
a best effort to read the kernel. The filesystem may be suffiently
corrupted that the read attempt fails, but there is no harm in trying
since the loader makes no attempt to write to the filesystem.
Once the kernel is loaded and starts to run, it attempts to mount its
root filesystem. Once again, failure means that it breaks to its prompt
to ask where to get its root filesystem. Unless you have an alternate
root filesystem, you are stuck.
Since the root filesystem is initially mounted read-only, it is
safe to make an attempt to mount the root filesystem with the failed
superblock check-hash. Thus, when asked to mount a root filesystem
with a failed superblock check-hash, the kernel prints a warning
message that the root filesystem superblock check-hash needs repair,
but notes that it is ignoring the error and proceeding. It does
mark the filesystem as needing an fsck which prevents it from being
enabled for writing until fsck has been run on it. The net effect
is that the reboot fails to single user, but at least at that point
the administrator has the tools at hand to fix the problem.
Reported by: Rick Macklem (rmacklem@)
Discussed with: Warner Losh (imp@)
Sponsored by: Netflix
2018-12-06 00:09:39 +00:00
|
|
|
off_t loc;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2018-01-26 00:58:32 +00:00
|
|
|
fs = NULL;
|
2009-02-11 22:19:54 +00:00
|
|
|
ump = NULL;
|
2002-02-27 18:32:23 +00:00
|
|
|
cred = td ? td->td_ucred : NOCRED;
|
2004-10-29 10:15:56 +00:00
|
|
|
ronly = (mp->mnt_flag & MNT_RDONLY) != 0;
|
2009-02-11 22:19:54 +00:00
|
|
|
|
2020-03-06 18:41:37 +00:00
|
|
|
devvp = mntfs_allocvp(mp, odevvp);
|
|
|
|
VOP_UNLOCK(odevvp);
|
2016-05-21 09:49:35 +00:00
|
|
|
KASSERT(devvp->v_type == VCHR, ("reclaimed devvp"));
|
2009-02-11 22:19:54 +00:00
|
|
|
dev = devvp->v_rdev;
|
2021-01-16 00:33:00 +00:00
|
|
|
KASSERT(dev->si_snapdata == NULL, ("non-NULL snapshot data"));
|
2016-05-21 09:49:35 +00:00
|
|
|
if (atomic_cmpset_acq_ptr((uintptr_t *)&dev->si_mountpt, 0,
|
|
|
|
(uintptr_t)mp) == 0) {
|
2020-03-06 18:41:37 +00:00
|
|
|
mntfs_freevp(devvp);
|
2016-05-21 09:49:35 +00:00
|
|
|
return (EBUSY);
|
|
|
|
}
|
2004-10-29 10:15:56 +00:00
|
|
|
g_topology_lock();
|
|
|
|
error = g_vfs_open(devvp, &cp, "ffs", ronly ? 0 : 1);
|
|
|
|
g_topology_unlock();
|
2016-05-21 09:49:35 +00:00
|
|
|
if (error != 0) {
|
|
|
|
atomic_store_rel_ptr((uintptr_t *)&dev->si_mountpt, 0);
|
2020-03-06 18:41:37 +00:00
|
|
|
mntfs_freevp(devvp);
|
2016-05-21 09:49:35 +00:00
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
dev_ref(dev);
|
|
|
|
devvp->v_bufobj.bo_ops = &ffs_ops;
|
2020-03-06 18:41:37 +00:00
|
|
|
BO_LOCK(&odevvp->v_bufobj);
|
|
|
|
odevvp->v_bufobj.bo_flag |= BO_NOBUFS;
|
|
|
|
BO_UNLOCK(&odevvp->v_bufobj);
|
2016-05-21 09:49:35 +00:00
|
|
|
if (dev->si_iosize_max != 0)
|
|
|
|
mp->mnt_iosize_max = dev->si_iosize_max;
|
Make MAXPHYS tunable. Bump MAXPHYS to 1M.
Replace MAXPHYS by runtime variable maxphys. It is initialized from
MAXPHYS by default, but can be also adjusted with the tunable kern.maxphys.
Make b_pages[] array in struct buf flexible. Size b_pages[] for buffer
cache buffers exactly to atop(maxbcachebuf) (currently it is sized to
atop(MAXPHYS)), and b_pages[] for pbufs is sized to atop(maxphys) + 1.
The +1 for pbufs allows several pbuf consumers, among them vmapbuf(),
to use unaligned buffers still sized to maxphys, esp. when such
buffers come from userspace (*). Overall, we save a significant amount
of otherwise wasted memory in b_pages[] for buffer cache buffers,
while bumping MAXPHYS to desired high value.
Eliminate all direct uses of the MAXPHYS constant in kernel and driver
sources, except the one place that initializes maxphys. Some random (and
arguably weird) uses of MAXPHYS, e.g. in the linuxulator, are converted
straight. Some drivers, which use MAXPHYS to size embedded structures,
get a private MAXPHYS-like constant; their conversion is out of scope
for this work.
Changes to cam/, dev/ahci, dev/ata, dev/mpr, dev/mpt, dev/mvs,
dev/siis, were either submitted by, or based on changes by, mav.
Suggested by: mav (*)
Reviewed by: imp, mav, mckusick, scottl (intermediate versions)
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D27225
2020-11-28 12:12:51 +00:00
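The page-array sizing argument above (atop(maxphys) + 1 slots for pbufs) can be checked with a small userspace sketch; PAGE_SHIFT, the atop() macro, and pages_spanned() below are illustrative stand-ins, not the kernel definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12			/* assume 4 KiB pages */
#define atop(x)    ((x) >> PAGE_SHIFT)	/* bytes -> pages, aligned case */

/*
 * How many physical pages does a buffer of 'len' bytes starting at
 * address 'addr' touch?  A page-aligned buffer of maxphys bytes needs
 * atop(maxphys) slots, but an unaligned one (as vmapbuf() may get
 * from userspace) can straddle one extra page boundary.
 */
static unsigned
pages_spanned(uintptr_t addr, size_t len)
{
	uintptr_t first = addr >> PAGE_SHIFT;
	uintptr_t last = (addr + len - 1) >> PAGE_SHIFT;

	return ((unsigned)(last - first + 1));
}
```

With maxphys = 1 MiB, an aligned transfer spans 256 pages while an unaligned one can span 257, which is why the pbuf b_pages[] array gets the extra slot.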
|
|
|
if (mp->mnt_iosize_max > maxphys)
|
|
|
|
mp->mnt_iosize_max = maxphys;
|
2018-01-26 00:58:32 +00:00
|
|
|
if ((SBLOCKSIZE % cp->provider->sectorsize) != 0) {
|
|
|
|
error = EINVAL;
|
|
|
|
vfs_mount_error(mp,
|
|
|
|
"Invalid sectorsize %d for superblock size %d",
|
|
|
|
cp->provider->sectorsize, SBLOCKSIZE);
|
1994-05-24 10:09:53 +00:00
|
|
|
goto out;
|
|
|
|
}
|
2018-01-26 00:58:32 +00:00
|
|
|
/* fetch the superblock and summary information */
|
Normally when an attempt is made to mount a UFS/FFS filesystem whose
superblock has a check-hash error, an error message noting the
superblock check-hash failure is printed and the mount fails. The
administrator then runs fsck to repair the filesystem and when
successful, the filesystem can once again be mounted.
This approach fails if the filesystem in question is a root filesystem
from which you are trying to boot. Here, the loader fails when trying
to access the filesystem to get the kernel to boot. So it is necessary
to allow the loader to ignore the superblock check-hash error and make
a best effort to read the kernel. The filesystem may be suffiently
corrupted that the read attempt fails, but there is no harm in trying
since the loader makes no attempt to write to the filesystem.
Once the kernel is loaded and starts to run, it attempts to mount its
root filesystem. Once again, failure means that it breaks to its prompt
to ask where to get its root filesystem. Unless you have an alternate
root filesystem, you are stuck.
Since the root filesystem is initially mounted read-only, it is
safe to make an attempt to mount the root filesystem with the failed
superblock check-hash. Thus, when asked to mount a root filesystem
with a failed superblock check-hash, the kernel prints a warning
message that the root filesystem superblock check-hash needs repair,
but notes that it is ignoring the error and proceeding. It does
mark the filesystem as needing an fsck which prevents it from being
enabled for writing until fsck has been run on it. The net effect
is that the reboot drops to single-user mode, but at least at that point
the administrator has the tools at hand to fix the problem.
Reported by: Rick Macklem (rmacklem@)
Discussed with: Warner Losh (imp@)
Sponsored by: Netflix
2018-12-06 00:09:39 +00:00
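The policy described above reduces to a small decision table; the sketch below is an illustrative userspace rendering (the enum names are invented, not the kernel's STDSB/STDSB_NOHASHFAIL flags):

```c
#include <assert.h>

/* Possible outcomes when mounting with a failed superblock check-hash. */
enum mount_result {
	MOUNT_OK,		/* check-hash good: mount normally */
	MOUNT_FAIL,		/* bad hash, not root: refuse; run fsck first */
	MOUNT_WARN_RDONLY	/* bad hash on root: warn, proceed read-only,
				   and require fsck before any rw upgrade */
};

static enum mount_result
check_hash_policy(int hash_ok, int is_rootfs)
{
	if (hash_ok)
		return (MOUNT_OK);
	return (is_rootfs ? MOUNT_WARN_RDONLY : MOUNT_FAIL);
}
```

This mirrors the code that follows: only MNT_ROOTFS mounts ask ffs_sbget() to ignore a check-hash failure, because the root starts out read-only anyway.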
|
|
|
loc = STDSB;
|
|
|
|
if ((mp->mnt_flag & MNT_ROOTFS) != 0)
|
|
|
|
loc = STDSB_NOHASHFAIL;
|
|
|
|
if ((error = ffs_sbget(devvp, &fs, loc, M_UFSMNT, ffs_use_bread)) != 0)
|
2018-01-26 00:58:32 +00:00
|
|
|
goto out;
|
1998-09-26 04:59:42 +00:00
|
|
|
fs->fs_flags &= ~FS_UNCLEAN;
|
|
|
|
if (fs->fs_clean == 0) {
|
|
|
|
fs->fs_flags |= FS_UNCLEAN;
|
2001-03-21 04:09:01 +00:00
|
|
|
if (ronly || (mp->mnt_flag & MNT_FORCE) ||
|
2010-04-24 07:05:35 +00:00
|
|
|
((fs->fs_flags & (FS_SUJ | FS_NEEDSFSCK)) == 0 &&
|
2001-04-14 05:26:28 +00:00
|
|
|
(fs->fs_flags & FS_DOSOFTDEP))) {
|
2010-09-17 09:14:40 +00:00
|
|
|
printf("WARNING: %s was not properly dismounted\n",
|
1998-09-26 04:59:42 +00:00
|
|
|
fs->fs_fsmnt);
|
1995-05-15 08:39:37 +00:00
|
|
|
} else {
|
2012-01-14 07:26:16 +00:00
|
|
|
vfs_mount_error(mp, "R/W mount of %s denied. %s%s",
|
|
|
|
fs->fs_fsmnt, "Filesystem is not clean - run fsck.",
|
|
|
|
(fs->fs_flags & FS_SUJ) == 0 ? "" :
|
|
|
|
" Forced mount will invalidate journal contents");
|
1995-05-15 08:39:37 +00:00
|
|
|
error = EPERM;
|
|
|
|
goto out;
|
|
|
|
}
|
2002-06-21 06:18:05 +00:00
|
|
|
if ((fs->fs_pendingblocks != 0 || fs->fs_pendinginodes != 0) &&
|
|
|
|
(mp->mnt_flag & MNT_FORCE)) {
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: lost blocks %jd files %d\n",
|
|
|
|
fs->fs_fsmnt, (intmax_t)fs->fs_pendingblocks,
|
2002-06-21 06:18:05 +00:00
|
|
|
fs->fs_pendinginodes);
|
2001-05-08 07:42:20 +00:00
|
|
|
fs->fs_pendingblocks = 0;
|
|
|
|
fs->fs_pendinginodes = 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (fs->fs_pendingblocks != 0 || fs->fs_pendinginodes != 0) {
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: mount pending error: blocks %jd "
|
|
|
|
"files %d\n", fs->fs_fsmnt, (intmax_t)fs->fs_pendingblocks,
|
2002-06-21 06:18:05 +00:00
|
|
|
fs->fs_pendinginodes);
|
2001-05-08 07:42:20 +00:00
|
|
|
fs->fs_pendingblocks = 0;
|
|
|
|
fs->fs_pendinginodes = 0;
|
1995-05-15 08:39:37 +00:00
|
|
|
}
|
2006-10-31 21:48:54 +00:00
|
|
|
if ((fs->fs_flags & FS_GJOURNAL) != 0) {
|
|
|
|
#ifdef UFS_GJOURNAL
|
|
|
|
/*
|
|
|
|
* Get journal provider name.
|
|
|
|
*/
|
2016-10-28 20:15:19 +00:00
|
|
|
len = 1024;
|
|
|
|
mp->mnt_gjprovider = malloc((u_long)len, M_UFSMNT, M_WAITOK);
|
|
|
|
if (g_io_getattr("GJOURNAL::provider", cp, &len,
|
2006-10-31 21:48:54 +00:00
|
|
|
mp->mnt_gjprovider) == 0) {
|
2016-10-28 20:15:19 +00:00
|
|
|
mp->mnt_gjprovider = realloc(mp->mnt_gjprovider, len,
|
2006-10-31 21:48:54 +00:00
|
|
|
M_UFSMNT, M_WAITOK);
|
|
|
|
MNT_ILOCK(mp);
|
|
|
|
mp->mnt_flag |= MNT_GJOURNAL;
|
|
|
|
MNT_IUNLOCK(mp);
|
|
|
|
} else {
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: GJOURNAL flag on fs "
|
|
|
|
"but no gjournal provider below\n",
|
2006-10-31 21:48:54 +00:00
|
|
|
mp->mnt_stat.f_mntonname);
|
|
|
|
free(mp->mnt_gjprovider, M_UFSMNT);
|
|
|
|
mp->mnt_gjprovider = NULL;
|
|
|
|
}
|
|
|
|
#else
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: GJOURNAL flag on fs but no "
|
|
|
|
"UFS_GJOURNAL support\n", mp->mnt_stat.f_mntonname);
|
2006-10-31 21:48:54 +00:00
|
|
|
#endif
|
|
|
|
} else {
|
|
|
|
mp->mnt_gjprovider = NULL;
|
|
|
|
}
|
2003-02-19 05:47:46 +00:00
|
|
|
ump = malloc(sizeof *ump, M_UFSMNT, M_WAITOK | M_ZERO);
|
2004-10-29 10:15:56 +00:00
|
|
|
ump->um_cp = cp;
|
|
|
|
ump->um_bo = &devvp->v_bufobj;
|
2018-01-26 00:58:32 +00:00
|
|
|
ump->um_fs = fs;
|
2002-06-21 06:18:05 +00:00
|
|
|
if (fs->fs_magic == FS_UFS1_MAGIC) {
|
|
|
|
ump->um_fstype = UFS1;
|
|
|
|
ump->um_balloc = ffs_balloc_ufs1;
|
|
|
|
} else {
|
|
|
|
ump->um_fstype = UFS2;
|
|
|
|
ump->um_balloc = ffs_balloc_ufs2;
|
|
|
|
}
|
VFS mega cleanup commit (x/N)
1. Add new file "sys/kern/vfs_default.c" where default actions for
VOPs go. Implement proper defaults for ABORTOP, BWRITE, LEASE,
POLL, REVOKE and STRATEGY. Various stuff spread over the entire
tree belongs here.
2. Change VOP_BLKATOFF to a normal function in cd9660.
3. Kill VOP_BLKATOFF, VOP_TRUNCATE, VOP_VFREE, VOP_VALLOC. These
are private interface functions between UFS and the underlying
storage manager layer (FFS/LFS/MFS/EXT2FS). The functions now
live in struct ufsmount instead.
4. Remove a kludge of VOP_ functions in all filesystems, that did
nothing but obscure the simplicity and break the expandability.
If a filesystem doesn't implement VOP_FOO, it shouldn't have an
entry for it in its vnops table. The system will try to DTRT
if it is not implemented. There is still some cruft left, but
the bulk of it is done.
5. Fix another VCALL in vfs_cache.c (thanks Bruce!)
1997-10-16 10:50:27 +00:00
|
|
|
ump->um_blkatoff = ffs_blkatoff;
|
|
|
|
ump->um_truncate = ffs_truncate;
|
1997-10-16 20:32:40 +00:00
|
|
|
ump->um_update = ffs_update;
|
VFS mega cleanup commit (x/N)
1. Add new file "sys/kern/vfs_default.c" where default actions for
VOPs go. Implement proper defaults for ABORTOP, BWRITE, LEASE,
POLL, REVOKE and STRATEGY. Various stuff spread over the entire
tree belongs here.
2. Change VOP_BLKATOFF to a normal function in cd9660.
3. Kill VOP_BLKATOFF, VOP_TRUNCATE, VOP_VFREE, VOP_VALLOC. These
are private interface functions between UFS and the underlying
storage manager layer (FFS/LFS/MFS/EXT2FS). The functions now
live in struct ufsmount instead.
4. Remove a kludge of VOP_ functions in all filesystems, that did
nothing but obscure the simplicity and break the expandability.
If a filesystem doesn't implement VOP_FOO, it shouldn't have an
entry for it in its vnops table. The system will try to DTRT
if it is not implemented. There is still some cruft left, but
the bulk of it is done.
5. Fix another VCALL in vfs_cache.c (thanks Bruce!)
1997-10-16 10:50:27 +00:00
|
|
|
ump->um_valloc = ffs_valloc;
|
|
|
|
ump->um_vfree = ffs_vfree;
|
2002-12-27 10:06:37 +00:00
|
|
|
ump->um_ifree = ffs_ifree;
|
2008-09-16 10:59:35 +00:00
|
|
|
ump->um_rdonly = ffs_rdonly;
|
2011-03-20 21:05:09 +00:00
|
|
|
ump->um_snapgone = ffs_snapgone;
|
2019-07-17 22:07:43 +00:00
|
|
|
if ((mp->mnt_flag & MNT_UNTRUSTED) != 0)
|
|
|
|
ump->um_check_blkno = ffs_check_blkno;
|
|
|
|
else
|
|
|
|
ump->um_check_blkno = NULL;
|
2005-01-24 10:12:28 +00:00
|
|
|
mtx_init(UFS_MTX(ump), "FFS", "FFS Lock", MTX_DEF);
|
2018-01-26 00:58:32 +00:00
|
|
|
ffs_oldfscompat_read(fs, ump, fs->fs_sblockloc);
|
1994-05-24 10:09:53 +00:00
|
|
|
fs->fs_ronly = ronly;
|
2001-12-16 18:54:09 +00:00
|
|
|
fs->fs_active = NULL;
|
2007-10-16 10:54:55 +00:00
|
|
|
mp->mnt_data = ump;
|
1999-07-11 19:16:50 +00:00
|
|
|
mp->mnt_stat.f_fsid.val[0] = fs->fs_id[0];
|
|
|
|
mp->mnt_stat.f_fsid.val[1] = fs->fs_id[1];
|
2006-03-31 03:54:20 +00:00
|
|
|
nmp = NULL;
|
2010-09-17 09:14:40 +00:00
|
|
|
if (fs->fs_id[0] == 0 || fs->fs_id[1] == 0 ||
|
2006-03-31 03:54:20 +00:00
|
|
|
(nmp = vfs_getvfs(&mp->mnt_stat.f_fsid))) {
|
|
|
|
if (nmp)
|
|
|
|
vfs_rel(nmp);
|
1999-07-11 19:16:50 +00:00
|
|
|
vfs_getnewfsid(mp);
|
2006-03-31 03:54:20 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
mp->mnt_maxsymlinklen = fs->fs_maxsymlinklen;
|
2006-09-26 04:12:49 +00:00
|
|
|
MNT_ILOCK(mp);
|
1997-03-18 19:50:12 +00:00
|
|
|
mp->mnt_flag |= MNT_LOCAL;
|
2006-09-26 04:12:49 +00:00
|
|
|
MNT_IUNLOCK(mp);
|
|
|
|
if ((fs->fs_flags & FS_MULTILABEL) != 0) {
|
2006-04-22 04:22:15 +00:00
|
|
|
#ifdef MAC
|
2006-09-26 04:12:49 +00:00
|
|
|
MNT_ILOCK(mp);
|
2002-10-15 20:00:06 +00:00
|
|
|
mp->mnt_flag |= MNT_MULTILABEL;
|
2006-09-26 04:12:49 +00:00
|
|
|
MNT_IUNLOCK(mp);
|
2006-04-22 04:22:15 +00:00
|
|
|
#else
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: multilabel flag on fs but "
|
|
|
|
"no MAC support\n", mp->mnt_stat.f_mntonname);
|
2006-04-22 04:22:15 +00:00
|
|
|
#endif
|
2006-09-26 04:12:49 +00:00
|
|
|
}
|
|
|
|
if ((fs->fs_flags & FS_ACLS) != 0) {
|
2006-04-22 04:22:15 +00:00
|
|
|
#ifdef UFS_ACL
|
2006-09-26 04:12:49 +00:00
|
|
|
MNT_ILOCK(mp);
|
2009-12-21 19:39:10 +00:00
|
|
|
|
|
|
|
if (mp->mnt_flag & MNT_NFS4ACLS)
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: ACLs flag on fs conflicts with "
|
|
|
|
"\"nfsv4acls\" mount option; option ignored\n",
|
|
|
|
mp->mnt_stat.f_mntonname);
|
2009-12-21 19:39:10 +00:00
|
|
|
mp->mnt_flag &= ~MNT_NFS4ACLS;
|
2002-10-15 20:00:06 +00:00
|
|
|
mp->mnt_flag |= MNT_ACLS;
|
2009-12-21 19:39:10 +00:00
|
|
|
|
2006-09-26 04:12:49 +00:00
|
|
|
MNT_IUNLOCK(mp);
|
2006-04-22 04:22:15 +00:00
|
|
|
#else
|
2010-09-17 09:14:40 +00:00
|
|
|
printf("WARNING: %s: ACLs flag on fs but no ACLs support\n",
|
2006-07-09 14:10:35 +00:00
|
|
|
mp->mnt_stat.f_mntonname);
|
2006-04-22 04:22:15 +00:00
|
|
|
#endif
|
2006-09-26 04:12:49 +00:00
|
|
|
}
|
2009-12-21 19:39:10 +00:00
|
|
|
if ((fs->fs_flags & FS_NFS4ACLS) != 0) {
|
|
|
|
#ifdef UFS_ACL
|
|
|
|
MNT_ILOCK(mp);
|
|
|
|
|
|
|
|
if (mp->mnt_flag & MNT_ACLS)
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: NFSv4 ACLs flag on fs conflicts "
|
|
|
|
"with \"acls\" mount option; option ignored\n",
|
|
|
|
mp->mnt_stat.f_mntonname);
|
2009-12-21 19:39:10 +00:00
|
|
|
mp->mnt_flag &= ~MNT_ACLS;
|
|
|
|
mp->mnt_flag |= MNT_NFS4ACLS;
|
|
|
|
|
|
|
|
MNT_IUNLOCK(mp);
|
|
|
|
#else
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: NFSv4 ACLs flag on fs but no "
|
|
|
|
"ACLs support\n", mp->mnt_stat.f_mntonname);
|
2009-12-21 19:39:10 +00:00
|
|
|
#endif
|
|
|
|
}
|
2010-12-29 12:25:28 +00:00
|
|
|
if ((fs->fs_flags & FS_TRIM) != 0) {
|
2016-10-28 20:15:19 +00:00
|
|
|
len = sizeof(int);
|
|
|
|
if (g_io_getattr("GEOM::candelete", cp, &len,
|
2018-06-29 22:24:41 +00:00
|
|
|
&candelete) == 0) {
|
|
|
|
if (candelete)
|
|
|
|
ump->um_flags |= UM_CANDELETE;
|
|
|
|
else
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: TRIM flag on fs but disk "
|
|
|
|
"does not support TRIM\n",
|
2010-12-29 12:25:28 +00:00
|
|
|
mp->mnt_stat.f_mntonname);
|
|
|
|
} else {
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: TRIM flag on fs but disk does "
|
|
|
|
"not confirm that it supports TRIM\n",
|
2010-12-29 12:25:28 +00:00
|
|
|
mp->mnt_stat.f_mntonname);
|
|
|
|
}
|
2018-06-29 22:24:41 +00:00
|
|
|
if (((ump->um_flags) & UM_CANDELETE) != 0) {
|
2016-03-27 08:21:17 +00:00
|
|
|
ump->um_trim_tq = taskqueue_create("trim", M_WAITOK,
|
|
|
|
taskqueue_thread_enqueue, &ump->um_trim_tq);
|
|
|
|
taskqueue_start_threads(&ump->um_trim_tq, 1, PVFS,
|
|
|
|
"%s trim", mp->mnt_stat.f_mntonname);
|
2018-08-18 22:21:59 +00:00
|
|
|
ump->um_trimhash = hashinit(MAXTRIMIO, M_TRIM,
|
|
|
|
&ump->um_trimlisthashsize);
|
2016-03-27 08:21:17 +00:00
|
|
|
}
|
2010-12-29 12:25:28 +00:00
|
|
|
}
|
2009-12-21 19:39:10 +00:00
|
|
|
|
2020-02-16 23:10:59 +00:00
|
|
|
len = sizeof(int);
|
|
|
|
if (g_io_getattr("GEOM::canspeedup", cp, &len, &canspeedup) == 0) {
|
|
|
|
if (canspeedup)
|
|
|
|
ump->um_flags |= UM_CANSPEEDUP;
|
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
ump->um_mountp = mp;
|
|
|
|
ump->um_dev = dev;
|
|
|
|
ump->um_devvp = devvp;
|
2020-03-06 18:41:37 +00:00
|
|
|
ump->um_odevvp = odevvp;
|
1994-05-24 10:09:53 +00:00
|
|
|
ump->um_nindir = fs->fs_nindir;
|
|
|
|
ump->um_bptrtodb = fs->fs_fsbtodb;
|
|
|
|
ump->um_seqinc = fs->fs_frag;
|
|
|
|
for (i = 0; i < MAXQUOTAS; i++)
|
|
|
|
ump->um_quotas[i] = NULLVP;
|
2001-03-19 04:35:40 +00:00
|
|
|
#ifdef UFS_EXTATTR
|
Introduce extended attribute support for FFS, allowing arbitrary
(name, value) pairs to be associated with inodes. This support is
used for ACLs, MAC labels, and Capabilities in the TrustedBSD
security extensions, which are currently under development.
In this implementation, attributes are backed to data vnodes in the
style of the quota support in FFS. Support for FFS extended
attributes may be enabled using the FFS_EXTATTR kernel option
(disabled by default). Userland utilities and man pages will be
committed in the next batch. VFS interfaces and man pages have
been in the repo since 4.0-RELEASE and are unchanged.
o ufs/ufs/extattr.h: UFS-specific extattr defines
o ufs/ufs/ufs_extattr.c: bulk of support routines
o ufs/{ufs,ffs,mfs}/*.[ch]: hooks and extattr.h includes
o contrib/softupdates/ffs_softdep.c: extattr.h includes
o conf/options, conf/files, i386/conf/LINT: added FFS_EXTATTR
o coda/coda_vfsops.c: XXX required extattr.h due to ufsmount.h
(This should not be the case, and will be fixed in a future commit)
Currently attributes are not supported in MFS. This will be fixed.
Reviewed by: adrian, bp, freebsd-fs, other unthanked souls
Obtained from: TrustedBSD Project
2000-04-15 03:34:27 +00:00
|
|
|
ufs_extattr_uepm_init(&ump->um_extattr);
|
|
|
|
#endif
|
1995-08-28 09:19:25 +00:00
|
|
|
/*
|
|
|
|
* Set FS local "last mounted on" information (NULL pad)
|
|
|
|
*/
|
2005-08-21 22:06:41 +00:00
|
|
|
bzero(fs->fs_fsmnt, MAXMNTLEN);
|
|
|
|
strlcpy(fs->fs_fsmnt, mp->mnt_stat.f_mntonname, MAXMNTLEN);
|
2010-04-24 07:05:35 +00:00
|
|
|
mp->mnt_stat.f_iosize = fs->fs_bsize;
|
1995-08-28 09:19:25 +00:00
|
|
|
|
2011-07-10 00:41:31 +00:00
|
|
|
if (mp->mnt_flag & MNT_ROOTFS) {
|
1995-08-28 09:19:25 +00:00
|
|
|
/*
|
|
|
|
* Root mount; update timestamp in mount structure.
|
|
|
|
* This will be used by the common root mount code
|
|
|
|
* to update the system clock.
|
|
|
|
*/
|
|
|
|
mp->mnt_time = fs->fs_time;
|
|
|
|
}
|
1997-02-10 02:22:35 +00:00
|
|
|
|
|
|
|
if (ronly == 0) {
|
2010-04-24 07:05:35 +00:00
|
|
|
fs->fs_mtime = time_second;
|
1998-03-08 09:59:44 +00:00
|
|
|
if ((fs->fs_flags & FS_DOSOFTDEP) &&
|
|
|
|
(error = softdep_mount(devvp, mp, fs, cred)) != 0) {
|
2010-12-01 21:19:11 +00:00
|
|
|
ffs_flushfiles(mp, FORCECLOSE, td);
|
1998-03-08 09:59:44 +00:00
|
|
|
goto out;
|
|
|
|
}
|
2000-07-11 22:07:57 +00:00
|
|
|
if (fs->fs_snapinum[0] != 0)
|
|
|
|
ffs_snapshot_mount(mp);
|
2000-01-10 00:24:24 +00:00
|
|
|
fs->fs_fmod = 1;
|
1997-02-10 02:22:35 +00:00
|
|
|
fs->fs_clean = 0;
|
2006-03-08 23:43:39 +00:00
|
|
|
(void) ffs_sbupdate(ump, MNT_WAIT, 0);
|
1997-02-10 02:22:35 +00:00
|
|
|
}
|
2004-07-14 14:19:32 +00:00
|
|
|
/*
|
2016-05-17 08:24:27 +00:00
|
|
|
* Initialize filesystem state information in mount struct.
|
2004-07-14 14:19:32 +00:00
|
|
|
*/
|
2008-03-04 12:10:03 +00:00
|
|
|
MNT_ILOCK(mp);
|
2012-11-09 18:02:25 +00:00
|
|
|
mp->mnt_kern_flag |= MNTK_LOOKUP_SHARED | MNTK_EXTENDED_SHARED |
|
2015-07-05 22:37:33 +00:00
|
|
|
MNTK_NO_IOPF | MNTK_UNMAPPED_BUFS | MNTK_USES_BCACHE;
|
2008-03-04 12:10:03 +00:00
|
|
|
MNT_IUNLOCK(mp);
|
2001-03-19 04:35:40 +00:00
|
|
|
#ifdef UFS_EXTATTR
|
|
|
|
#ifdef UFS_EXTATTR_AUTOSTART
|
2000-10-04 04:44:51 +00:00
|
|
|
/*
|
|
|
|
*
|
o Implement "options FFS_EXTATTR_AUTOSTART", which depends on
"options FFS_EXTATTR". When extended attribute auto-starting
is enabled, FFS will scan the .attribute directory off of the
root of each file system, as it is mounted. If .attribute
exists, EA support will be started for the file system. If
there are files in the directory, FFS will attempt to start
them as attribute backing files for attributes bearing the same
name. All attributes are started before access to the file
system is permitted, so this permits race-free enabling of
attributes. For attributes backing support for security
features, such as ACLs, MAC, Capabilities, this is vital, as
it prevents the file system attributes from getting out of
sync as a result of file system operations between mount-time
and the enabling of the extended attribute. The userland
extattrctl tool will still function exactly as previously.
Files must be placed directly in .attribute, which must be
directly off of the file system root: symbolic links are
not permitted. FFS_EXTATTR will continue to be able
to function without FFS_EXTATTR_AUTOSTART for sites that do not
want/require auto-starting. If you're using the UFS_ACL code
available from www.TrustedBSD.org, using FFS_EXTATTR_AUTOSTART
is recommended.
o This support is implemented by adding an invocation of
ufs_extattr_autostart() to ffs_mountfs(). In addition,
several new supporting calls are introduced in
ufs_extattr.c:
ufs_extattr_autostart(): start EAs on the specified mount
ufs_extattr_lookup(): given a directory and filename,
return the vnode for the file.
ufs_extattr_enable_with_open(): invoke ufs_extattr_enable()
after doing the equivalent of vn_open()
on the passed file.
ufs_extattr_iterate_directory(): iterate over a directory,
invoking ufs_extattr_lookup() and
ufs_extattr_enable_with_open() on each
entry.
o This feature is not widely tested, and therefore may contain
bugs; caution is advised. Several changes are in the pipeline
for this feature, including breaking out of EA namespaces into
subdirectories of .attribute (this is waiting on the updated
EA API), as well as a per-filesystem flag indicating whether
or not EAs should be auto-started. This is required because
administrators may not want .attribute auto-started on all
file systems, especially if non-administrators have write access
to the root of a file system.
Obtained from: TrustedBSD Project
2001-03-14 05:32:31 +00:00
|
|
|
* Auto-starting does the following:
|
2000-10-04 04:44:51 +00:00
|
|
|
* - check for /.attribute in the fs, and extattr_start if so
|
|
|
|
* - for each file in .attribute, enable that file with
|
|
|
|
* an attribute of the same name.
|
|
|
|
* Not clear how to report errors -- probably eat them.
|
2002-05-16 21:28:32 +00:00
|
|
|
* This would all happen while the filesystem was busy/not
|
2000-10-04 04:44:51 +00:00
|
|
|
* available, so would effectively be "atomic".
|
|
|
|
*/
|
2001-09-12 08:38:13 +00:00
|
|
|
(void) ufs_extattr_autostart(mp, td);
|
2001-03-19 04:35:40 +00:00
|
|
|
#endif /* !UFS_EXTATTR_AUTOSTART */
|
|
|
|
#endif /* !UFS_EXTATTR */
|
This commit enables a UFS filesystem to do a forcible unmount when
the underlying media fails or becomes inaccessible. For example
when a USB flash memory card hosting a UFS filesystem is unplugged.
The strategy for handling disk I/O errors when soft updates are
enabled is to stop writing to the disk of the affected file system
but continue to accept I/O requests and report that all future
writes by the file system to that disk actually succeed. Then
initiate an asynchronous forced unmount of the affected file system.
There are two cases for disk I/O errors:
- ENXIO, which means that this disk is gone and the lower layers
of the storage stack already guarantee that no future I/O to
this disk will succeed.
- EIO (or most other errors), which means that this particular
I/O request has failed but subsequent I/O requests to this
disk might still succeed.
For ENXIO, we can just clear the error and continue, because we
know that the file system cannot affect the on-disk state after we
see this error. For EIO or other errors, we arrange for the geom_vfs
layer to reject all future I/O requests with ENXIO just like is
done when the geom_vfs is orphaned. In both cases, the file system
code can just clear the error and proceed with the forcible unmount.
This new treatment of I/O errors is needed for writes of any buffer
that is involved in a dependency. Most dependencies are described
by a structure attached to the buffer's b_dep field. But some are
created and processed as a result of the completion of the dependencies
attached to the buffer.
Clearing some dependencies requires a read. For example, if there
is a dependency that requires an inode to be written, the disk block
containing that inode must be read, the updated inode copied into
place in that buffer, and the buffer then written back to disk.
Often the needed buffer is already in memory and can be used. But
if it needs to be read from the disk, the read will fail, so we
fabricate a buffer full of zeroes and pretend that the read succeeded.
This zero'ed buffer can be updated and written back to disk.
The only case where a buffer full of zeros causes the code to do
the wrong thing is when reading an inode buffer containing an inode
that still has an inode dependency in memory that will reinitialize
the effective link count (i_effnlink) based on the actual link count
(i_nlink) that we read. To handle this case we now store the i_nlink
value that we wrote in the inode dependency so that it can be
restored into the zero'ed buffer thus keeping the tracking of the
inode link count consistent.
Because applications depend on knowing when an attempt to write
their data to stable storage has failed, the fsync(2) and msync(2)
system calls need to return errors if data fails to be written to
stable storage. So these operations return ENXIO for every call
made on files in a file system where we have otherwise been ignoring
I/O errors.
Coauthored by: mckusick
Reviewed by: kib
Tested by: Peter Holm
Approved by: mckusick (mentor)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D24088
2020-05-25 23:47:31 +00:00
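The two disk-error cases above differ only in whether future I/O must be fenced off by the filesystem itself; a hedged sketch of that classification (the helper name is invented for illustration):

```c
#include <assert.h>
#include <errno.h>

/*
 * After a disk I/O error, decide whether the filesystem must arrange
 * for the geom_vfs layer to reject all future I/O with ENXIO.  On
 * ENXIO the disk is gone and the lower layers of the storage stack
 * already guarantee that no future I/O succeeds; on EIO (or most
 * other errors) later requests might still reach the disk, so they
 * must be fenced off before the forced unmount proceeds.
 */
static int
must_reject_future_io(int ioerror)
{
	return (ioerror != ENXIO);
}
```

In both cases the error itself is then cleared and the forcible unmount goes ahead, as the message describes.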
|
|
|
etp = malloc(sizeof *ump->um_fsfail_task, M_UFSMNT, M_WAITOK | M_ZERO);
|
|
|
|
etp->fsid = mp->mnt_stat.f_fsid;
|
|
|
|
ump->um_fsfail_task = etp;
|
1994-05-24 10:09:53 +00:00
|
|
|
return (0);
|
|
|
|
out:
|
2018-01-26 00:58:32 +00:00
|
|
|
if (fs != NULL) {
|
|
|
|
free(fs->fs_csp, M_UFSMNT);
|
2020-06-19 01:02:53 +00:00
|
|
|
free(fs->fs_si, M_UFSMNT);
|
2018-01-26 00:58:32 +00:00
|
|
|
free(fs, M_UFSMNT);
|
|
|
|
}
|
2004-10-29 10:15:56 +00:00
|
|
|
if (cp != NULL) {
|
|
|
|
g_topology_lock();
|
2008-10-10 21:23:50 +00:00
|
|
|
g_vfs_close(cp);
|
2004-10-29 10:15:56 +00:00
|
|
|
g_topology_unlock();
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
if (ump) {
|
2005-01-24 10:12:28 +00:00
|
|
|
mtx_destroy(UFS_MTX(ump));
|
2006-10-31 21:48:54 +00:00
|
|
|
if (mp->mnt_gjprovider != NULL) {
|
|
|
|
free(mp->mnt_gjprovider, M_UFSMNT);
|
|
|
|
mp->mnt_gjprovider = NULL;
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
free(ump, M_UFSMNT);
|
2007-10-16 10:54:55 +00:00
|
|
|
mp->mnt_data = NULL;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2020-03-06 18:41:37 +00:00
|
|
|
BO_LOCK(&odevvp->v_bufobj);
|
|
|
|
odevvp->v_bufobj.bo_flag &= ~BO_NOBUFS;
|
|
|
|
BO_UNLOCK(&odevvp->v_bufobj);
|
2016-05-21 09:49:35 +00:00
|
|
|
atomic_store_rel_ptr((uintptr_t *)&dev->si_mountpt, 0);
|
2020-03-06 18:41:37 +00:00
|
|
|
mntfs_freevp(devvp);
|
2009-01-29 16:47:15 +00:00
|
|
|
dev_rel(dev);
|
1994-05-24 10:09:53 +00:00
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
|
2018-01-26 00:58:32 +00:00
|
|
|
/*
|
|
|
|
* A read function for use by filesystem-layer routines.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ffs_use_bread(void *devfd, off_t loc, void **bufp, int size)
|
|
|
|
{
|
|
|
|
struct buf *bp;
|
|
|
|
int error;
|
|
|
|
|
This change is some refactoring of Mark Johnston's changes in r329375
to fix the memory leak that I introduced in r328426. Instead of
trying to clear up the possible memory leak in all the clients, I
ensure that it gets cleaned up in the source (e.g., ffs_sbget ensures
that memory is always freed if it returns an error).
The original change in r328426 was a bit sparse in its description.
So I am expanding on its description here (thanks cem@ and rgrimes@
for your encouragement for my longer commit messages).
In preparation for adding check hashing to superblocks, r328426 is
a refactoring of the code to get the reading/writing of the superblock
into one place. Unlike the cylinder group reading/writing which
ends up in two places (ffs_getcg/ffs_geom_strategy in the kernel
and cgget/cgput in libufs), I have the core superblock functions
just in the kernel (ffs_sbget/ffs_sbput in ffs_subr.c which is
already imported into utilities like fsck_ffs as well as libufs to
implement sbget/sbput). The ffs_sbget and ffs_sbput functions
take a function pointer to do the actual I/O for which there are
four variants:
ffs_use_bread / ffs_use_bwrite for the in-kernel filesystem
g_use_g_read_data / g_use_g_write_data for kernel geom clients
ufs_use_sa_read for the standalone code (stand/libsa/ufs.c
but not stand/libsa/ufsread.c which is size constrained)
use_pread / use_pwrite for libufs
Uses of these interfaces are in the UFS filesystem, geoms journal &
label, libsa changes, and libufs. They also permeate out into the
filesystem utilities fsck_ffs, newfs, growfs, clri, dump, quotacheck,
fsirand, fstyp, and quot. Some of these utilities should probably be
converted to directly use libufs (like dumpfs was for example), but
there does not seem to be much win in doing so.
Tested by: Peter Holm (pho@)
2018-03-02 04:34:53 +00:00
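The function-pointer design the message describes (one core routine, four interchangeable I/O backends) can be sketched in userspace; readfunc_t, use_mem_read(), and fetch_block() below are illustrative, not the actual ffs_sbget() signatures:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* I/O callback in the style of the readfunc passed to ffs_sbget():
 * allocate *bufp and fill it with 'size' bytes from offset 'loc'. */
typedef int (*readfunc_t)(void *devfd, long loc, void **bufp, int size);

/* One backend: read from an in-memory "device".  Stand-in for the
 * four real variants (ffs_use_bread, g_use_g_read_data,
 * ufs_use_sa_read, use_pread). */
static int
use_mem_read(void *devfd, long loc, void **bufp, int size)
{
	*bufp = malloc(size);
	if (*bufp == NULL)
		return (ENOMEM);
	memcpy(*bufp, (char *)devfd + loc, size);
	return (0);
}

/* Core routine: the fetch logic lives here once; callers only choose
 * which I/O backend to hand in. */
static int
fetch_block(void *devfd, long loc, int size, readfunc_t readfunc,
    void **bufp)
{
	*bufp = NULL;
	return (readfunc(devfd, loc, bufp, size));
}
```

The payoff is the one noted in the message: kernel, geom, standalone loader, and libufs consumers all share the same superblock logic and differ only in the callback they supply.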
|
|
|
KASSERT(*bufp == NULL, ("ffs_use_bread: non-NULL *bufp %p\n", *bufp));
|
2018-01-26 00:58:32 +00:00
|
|
|
*bufp = malloc(size, M_UFSMNT, M_WAITOK);
|
|
|
|
if ((error = bread((struct vnode *)devfd, btodb(loc), size, NOCRED,
|
This change is some refactoring of Mark Johnston's changes in r329375
to fix the memory leak that I introduced in r328426. Instead of
trying to clear up the possible memory leak in all the clients, I
ensure that it gets cleaned up in the source (e.g., ffs_sbget ensures
that memory is always freed if it returns an error).
The original change in r328426 was a bit sparse in its description.
So I am expanding on its description here (thanks cem@ and rgrimes@
for your encouragement for my longer commit messages).
In preparation for adding check hashing to superblocks, r328426 is
a refactoring of the code to get the reading/writing of the superblock
into one place. Unlike the cylinder group reading/writing which
ends up in two places (ffs_getcg/ffs_geom_strategy in the kernel
and cgget/cgput in libufs), I have the core superblock functions
just in the kernel (ffs_sbget/ffs_sbput in ffs_subr.c which is
already imported into utilities like fsck_ffs as well as libufs to
implement sbget/sbput). The ffs_sbget and ffs_sbput functions
take a function pointer to do the actual I/O for which there are
four variants:
ffs_use_bread / ffs_use_bwrite for the in-kernel filesystem
g_use_g_read_data / g_use_g_write_data for kernel geom clients
ufs_use_sa_read for the standalone code (stand/libsa/ufs.c
but not stand/libsa/ufsread.c which is size constrained)
use_pread / use_pwrite for libufs
Uses of these interfaces are in the UFS filesystem, geoms journal &
label, libsa changes, and libufs. They also permeate out into the
filesystem utilities fsck_ffs, newfs, growfs, clri, dump, quotacheck,
fsirand, fstyp, and quot. Some of these utilities should probably be
converted to directly use libufs (like dumpfs was for example), but
there does not seem to be much win in doing so.
Tested by: Peter Holm (pho@)
2018-03-02 04:34:53 +00:00
|
|
|
&bp)) != 0)
|
2018-01-26 00:58:32 +00:00
|
|
|
return (error);
|
|
|
|
bcopy(bp->b_data, *bufp, size);
|
|
|
|
bp->b_flags |= B_INVAL | B_NOCACHE;
|
|
|
|
brelse(bp);
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|

static int bigcgs = 0;
SYSCTL_INT(_debug, OID_AUTO, bigcgs, CTLFLAG_RW, &bigcgs, 0, "");

/*
 * Sanity checks for loading old filesystem superblocks.
 * See ffs_oldfscompat_write below for unwound actions.
 *
 * XXX - Parts get retired eventually.
 * Unfortunately new bits get added.
 */
static void
ffs_oldfscompat_read(fs, ump, sblockloc)
	struct fs *fs;
	struct ufsmount *ump;
	ufs2_daddr_t sblockloc;
{
	off_t maxfilesize;

	/*
	 * If not yet done, update fs_flags location and value of fs_sblockloc.
	 */
	if ((fs->fs_old_flags & FS_FLAGS_UPDATED) == 0) {
		fs->fs_flags = fs->fs_old_flags;
		fs->fs_old_flags |= FS_FLAGS_UPDATED;
		fs->fs_sblockloc = sblockloc;
	}
	/*
	 * If not yet done, update UFS1 superblock with new wider fields.
	 */
	if (fs->fs_magic == FS_UFS1_MAGIC && fs->fs_maxbsize != fs->fs_bsize) {
		fs->fs_maxbsize = fs->fs_bsize;
		fs->fs_time = fs->fs_old_time;
		fs->fs_size = fs->fs_old_size;
		fs->fs_dsize = fs->fs_old_dsize;
		fs->fs_csaddr = fs->fs_old_csaddr;
		fs->fs_cstotal.cs_ndir = fs->fs_old_cstotal.cs_ndir;
		fs->fs_cstotal.cs_nbfree = fs->fs_old_cstotal.cs_nbfree;
		fs->fs_cstotal.cs_nifree = fs->fs_old_cstotal.cs_nifree;
		fs->fs_cstotal.cs_nffree = fs->fs_old_cstotal.cs_nffree;
	}
	if (fs->fs_magic == FS_UFS1_MAGIC &&
	    fs->fs_old_inodefmt < FS_44INODEFMT) {
		fs->fs_maxfilesize = ((uint64_t)1 << 31) - 1;
		fs->fs_qbmask = ~fs->fs_bmask;
		fs->fs_qfmask = ~fs->fs_fmask;
	}
	if (fs->fs_magic == FS_UFS1_MAGIC) {
		ump->um_savedmaxfilesize = fs->fs_maxfilesize;
		maxfilesize = (uint64_t)0x80000000 * fs->fs_bsize - 1;
		if (fs->fs_maxfilesize > maxfilesize)
			fs->fs_maxfilesize = maxfilesize;
	}
	/* Compatibility for old filesystems */
	if (fs->fs_avgfilesize <= 0)
		fs->fs_avgfilesize = AVFILESIZ;
	if (fs->fs_avgfpdir <= 0)
		fs->fs_avgfpdir = AFPDIR;
	if (bigcgs) {
		fs->fs_save_cgsize = fs->fs_cgsize;
		fs->fs_cgsize = fs->fs_bsize;
	}
}

/*
 * Unwinding superblock updates for old filesystems.
 * See ffs_oldfscompat_read above for details.
 *
 * XXX - Parts get retired eventually.
 * Unfortunately new bits get added.
 */
void
ffs_oldfscompat_write(fs, ump)
	struct fs *fs;
	struct ufsmount *ump;
{

	/*
	 * Copy back UFS2 updated fields that UFS1 inspects.
	 */
	if (fs->fs_magic == FS_UFS1_MAGIC) {
		fs->fs_old_time = fs->fs_time;
		fs->fs_old_cstotal.cs_ndir = fs->fs_cstotal.cs_ndir;
		fs->fs_old_cstotal.cs_nbfree = fs->fs_cstotal.cs_nbfree;
		fs->fs_old_cstotal.cs_nifree = fs->fs_cstotal.cs_nifree;
		fs->fs_old_cstotal.cs_nffree = fs->fs_cstotal.cs_nffree;
		fs->fs_maxfilesize = ump->um_savedmaxfilesize;
	}
	if (bigcgs) {
		fs->fs_cgsize = fs->fs_save_cgsize;
		fs->fs_save_cgsize = 0;
	}
}

/*
 * unmount system call
 */
static int
ffs_unmount(mp, mntflags)
	struct mount *mp;
	int mntflags;
{
	struct thread *td;
	struct ufsmount *ump = VFSTOUFS(mp);
	struct fs *fs;
	int error, flags, susp;
#ifdef UFS_EXTATTR
	int e_restart;
#endif

	flags = 0;
	td = curthread;
	fs = ump->um_fs;
	if (mntflags & MNT_FORCE)
		flags |= FORCECLOSE;
	susp = fs->fs_ronly == 0;
#ifdef UFS_EXTATTR
	if ((error = ufs_extattr_stop(mp, td))) {
		if (error != EOPNOTSUPP)
			printf("WARNING: unmount %s: ufs_extattr_stop "
			    "returned errno %d\n", mp->mnt_stat.f_mntonname,
			    error);
		e_restart = 0;
	} else {
		ufs_extattr_uepm_destroy(&ump->um_extattr);
		e_restart = 1;
	}
#endif
	if (susp) {
		error = vfs_write_suspend_umnt(mp);
		if (error != 0)
			goto fail1;
	}
	if (MOUNTEDSOFTDEP(mp))
		error = softdep_flushfiles(mp, flags, td);
	else
		error = ffs_flushfiles(mp, flags, td);
	if (error != 0 && !ffs_fsfail_cleanup(ump, error))
		goto fail;

	UFS_LOCK(ump);
	if (fs->fs_pendingblocks != 0 || fs->fs_pendinginodes != 0) {
		printf("WARNING: unmount %s: pending error: blocks %jd "
		    "files %d\n", fs->fs_fsmnt, (intmax_t)fs->fs_pendingblocks,
		    fs->fs_pendinginodes);
		fs->fs_pendingblocks = 0;
		fs->fs_pendinginodes = 0;
	}
	UFS_UNLOCK(ump);
	if (MOUNTEDSOFTDEP(mp))
		softdep_unmount(mp);
	if (fs->fs_ronly == 0 || ump->um_fsckpid > 0) {
		fs->fs_clean = fs->fs_flags & (FS_UNCLEAN|FS_NEEDSFSCK) ? 0 : 1;
		error = ffs_sbupdate(ump, MNT_WAIT, 0);
		if (ffs_fsfail_cleanup(ump, error))
			error = 0;
		if (error != 0 && !ffs_fsfail_cleanup(ump, error)) {
			fs->fs_clean = 0;
			goto fail;
		}
	}
	if (susp)
		vfs_write_resume(mp, VR_START_WRITE);
	if (ump->um_trim_tq != NULL) {
		while (ump->um_trim_inflight != 0)
			pause("ufsutr", hz);
		taskqueue_drain_all(ump->um_trim_tq);
		taskqueue_free(ump->um_trim_tq);
		free(ump->um_trimhash, M_TRIM);
	}
	g_topology_lock();
	if (ump->um_fsckpid > 0) {
		/*
		 * Return to normal read-only mode.
		 */
		error = g_access(ump->um_cp, 0, -1, 0);
		ump->um_fsckpid = 0;
	}
	g_vfs_close(ump->um_cp);
	g_topology_unlock();
	BO_LOCK(&ump->um_odevvp->v_bufobj);
	ump->um_odevvp->v_bufobj.bo_flag &= ~BO_NOBUFS;
	BO_UNLOCK(&ump->um_odevvp->v_bufobj);
	atomic_store_rel_ptr((uintptr_t *)&ump->um_dev->si_mountpt, 0);
	mntfs_freevp(ump->um_devvp);
	vrele(ump->um_odevvp);
	dev_rel(ump->um_dev);
	mtx_destroy(UFS_MTX(ump));
	if (mp->mnt_gjprovider != NULL) {
		free(mp->mnt_gjprovider, M_UFSMNT);
		mp->mnt_gjprovider = NULL;
	}
	free(fs->fs_csp, M_UFSMNT);
	free(fs->fs_si, M_UFSMNT);
	free(fs, M_UFSMNT);
	if (ump->um_fsfail_task != NULL)
		free(ump->um_fsfail_task, M_UFSMNT);
	free(ump, M_UFSMNT);
	mp->mnt_data = NULL;
	MNT_ILOCK(mp);
	mp->mnt_flag &= ~MNT_LOCAL;
	MNT_IUNLOCK(mp);
	if (td->td_su == mp) {
		td->td_su = NULL;
		vfs_rel(mp);
	}
	return (error);

fail:
	if (susp)
		vfs_write_resume(mp, VR_START_WRITE);
fail1:
#ifdef UFS_EXTATTR
	if (e_restart) {
		ufs_extattr_uepm_init(&ump->um_extattr);
#ifdef UFS_EXTATTR_AUTOSTART
		(void) ufs_extattr_autostart(mp, td);
#endif
	}
#endif

	return (error);
}

/*
 * Flush out all the files in a filesystem.
 */
int
ffs_flushfiles(mp, flags, td)
	struct mount *mp;
	int flags;
	struct thread *td;
{
	struct ufsmount *ump;
	int qerror, error;

	ump = VFSTOUFS(mp);
	qerror = 0;
#ifdef QUOTA
	if (mp->mnt_flag & MNT_QUOTA) {
		int i;
		error = vflush(mp, 0, SKIPSYSTEM|flags, td);
		if (error)
			return (error);
		for (i = 0; i < MAXQUOTAS; i++) {
			error = quotaoff(td, mp, i);
			if (error != 0) {
				if ((flags & EARLYFLUSH) == 0)
					return (error);
				else
					qerror = error;
			}
		}

		/*
		 * Here we fall through to vflush again to ensure that
		 * we have gotten rid of all the system vnodes, unless
		 * quotas must not be closed.
		 */
	}
#endif
	ASSERT_VOP_LOCKED(ump->um_devvp, "ffs_flushfiles");
	if (ump->um_devvp->v_vflag & VV_COPYONWRITE) {
		if ((error = vflush(mp, 0, SKIPSYSTEM | flags, td)) != 0)
			return (error);
		ffs_snapshot_unmount(mp);
		flags |= FORCECLOSE;
		/*
		 * Here we fall through to vflush again to ensure
		 * that we have gotten rid of all the system vnodes.
		 */
	}

	/*
	 * Do not close system files if quotas were not closed, to be
	 * able to sync the remaining dquots.  The freeblks softupdate
	 * workitems might hold a reference on a dquot, preventing
	 * quotaoff() from completing.  Next round of
	 * softdep_flushworklist() iteration should process the
	 * blockers, allowing the next run of quotaoff() to finally
	 * flush held dquots.
	 *
	 * Otherwise, flush all the files.
	 */
	if (qerror == 0 && (error = vflush(mp, 0, flags, td)) != 0)
		return (error);

	/*
	 * Flush filesystem metadata.
	 */
	vn_lock(ump->um_devvp, LK_EXCLUSIVE | LK_RETRY);
	error = VOP_FSYNC(ump->um_devvp, MNT_WAIT, td);
	VOP_UNLOCK(ump->um_devvp);
	return (error);
}

/*
 * Get filesystem statistics.
 */
static int
ffs_statfs(mp, sbp)
	struct mount *mp;
	struct statfs *sbp;
{
	struct ufsmount *ump;
	struct fs *fs;

	ump = VFSTOUFS(mp);
	fs = ump->um_fs;
	if (fs->fs_magic != FS_UFS1_MAGIC && fs->fs_magic != FS_UFS2_MAGIC)
		panic("ffs_statfs");
	sbp->f_version = STATFS_VERSION;
	sbp->f_bsize = fs->fs_fsize;
	sbp->f_iosize = fs->fs_bsize;
	sbp->f_blocks = fs->fs_dsize;
	UFS_LOCK(ump);
	sbp->f_bfree = fs->fs_cstotal.cs_nbfree * fs->fs_frag +
	    fs->fs_cstotal.cs_nffree + dbtofsb(fs, fs->fs_pendingblocks);
	sbp->f_bavail = freespace(fs, fs->fs_minfree) +
	    dbtofsb(fs, fs->fs_pendingblocks);
	sbp->f_files = fs->fs_ncg * fs->fs_ipg - UFS_ROOTINO;
	sbp->f_ffree = fs->fs_cstotal.cs_nifree + fs->fs_pendinginodes;
	UFS_UNLOCK(ump);
	sbp->f_namemax = UFS_MAXNAMLEN;
	return (0);
}

static bool
sync_doupdate(struct inode *ip)
{

	return ((ip->i_flag & (IN_ACCESS | IN_CHANGE | IN_MODIFIED |
	    IN_UPDATE)) != 0);
}

static int
ffs_sync_lazy_filter(struct vnode *vp, void *arg __unused)
{
	struct inode *ip;

	/*
	 * Flags are safe to access because ->v_data invalidation
	 * is held off by listmtx.
	 */
	if (vp->v_type == VNON)
		return (false);
	ip = VTOI(vp);
	if (!sync_doupdate(ip) && (vp->v_iflag & VI_OWEINACT) == 0)
		return (false);
	return (true);
}

/*
 * For a lazy sync, we only care about access times, quotas and the
 * superblock.  Other filesystem changes are already converted to
 * cylinder group blocks or inode blocks updates and are written to
 * disk by syncer.
 */
static int
ffs_sync_lazy(mp)
	struct mount *mp;
{
	struct vnode *mvp, *vp;
	struct inode *ip;
	struct thread *td;
	int allerror, error;

	allerror = 0;
	td = curthread;
	if ((mp->mnt_flag & MNT_NOATIME) != 0) {
#ifdef QUOTA
		qsync(mp);
#endif
		goto sbupdate;
	}
	MNT_VNODE_FOREACH_LAZY(vp, mp, mvp, ffs_sync_lazy_filter, NULL) {
		if (vp->v_type == VNON) {
			VI_UNLOCK(vp);
			continue;
		}
		ip = VTOI(vp);

		/*
		 * The IN_ACCESS flag is converted to IN_MODIFIED by
		 * ufs_close() and ufs_getattr() by the calls to
		 * ufs_itimes_locked(), without subsequent UFS_UPDATE().
		 * Test also all the other timestamp flags too, to pick up
		 * any other cases that could be missed.
		 */
		if (!sync_doupdate(ip) && (vp->v_iflag & VI_OWEINACT) == 0) {
			VI_UNLOCK(vp);
			continue;
		}
		if ((error = vget(vp, LK_EXCLUSIVE | LK_NOWAIT | LK_INTERLOCK)) != 0)
			continue;
#ifdef QUOTA
		qsyncvp(vp);
#endif
		if (sync_doupdate(ip))
			error = ffs_update(vp, 0);
		if (error != 0)
			allerror = error;
		vput(vp);
	}
sbupdate:
	if (VFSTOUFS(mp)->um_fs->fs_fmod != 0 &&
	    (error = ffs_sbupdate(VFSTOUFS(mp), MNT_LAZY, 0)) != 0)
		allerror = error;
	return (allerror);
}

/*
 * Go through the disk queues to initiate sandbagged IO;
 * go through the inodes to write those that have been modified;
 * initiate the writing of the super block if it has been modified.
 *
 * Note: we are always called with the filesystem marked busy using
 * vfs_busy().
 */
static int
ffs_sync(mp, waitfor)
	struct mount *mp;
	int waitfor;
{
	struct vnode *mvp, *vp, *devvp;
	struct thread *td;
	struct inode *ip;
	struct ufsmount *ump = VFSTOUFS(mp);
	struct fs *fs;
	int error, count, lockreq, allerror = 0;
	int suspend;
	int suspended;
	int secondary_writes;
	int secondary_accwrites;
	int softdep_deps;
	int softdep_accdeps;
	struct bufobj *bo;

	suspend = 0;
	suspended = 0;
	td = curthread;
	fs = ump->um_fs;
	if (fs->fs_fmod != 0 && fs->fs_ronly != 0 && ump->um_fsckpid == 0)
		panic("%s: ffs_sync: modification on read-only filesystem",
		    fs->fs_fsmnt);
Fix the hand after the immediate reboot when the following command
sequence is performed on UFS SU+J rootfs:
cp -Rp /sbin/init /sbin/init.old
mv -f /sbin/init.old /sbin/init
Hang occurs on the rootfs unmount. There are two issues:
1. Removed init binary, which is still mapped, creates a reference to
the removed vnode. The inodeblock for such vnode must have active
inodedep, which is (eventually) linked through the unlinked list. This
means that ffs_sync(MNT_SUSPEND) cannot succeed, because number of
softdep workitems for the mp is always > 0. FFS is suspended during
unmount, so unmount just hangs.
2. As noted above, the inodedep is linked eventually. It is not
linked until the superblock is written. But at the vfs_unmountall()
time, when the rootfs is unmounted, the call is made to
ffs_unmount()->ffs_sync() before vflush(), and ffs_sync() only calls
ffs_sbupdate() after all workitems are flushed. It is masked for
normal system operations, because syncer works in parallel and
eventually flushes superblock. Syncer is stopped when rootfs
unmounted, so ffs_sync() must do sb update on its own.
Correct the issues listed above. For MNT_SUSPEND, count the number of
linked unlinked inodedeps (this is not a typo) and substract the count
of such workitems from the total. For the second issue, the
ffs_sbupdate() is called right after device sync in ffs_sync() loop.
There is third problem, occuring with both SU and SU+J. The
softdep_waitidle() loop, which waits for softdep_flush() thread to
clear the worklist, only waits 20ms max. It seems that the 1 tick,
specified for msleep(9), was a typo.
Add fsync(devvp, MNT_WAIT) call to softdep_waitidle(), which seems to
significantly help the softdep thread, and change the MNT_LAZY update
at the reboot time to MNT_WAIT for similar reasons. Note that
userspace cannot create more work while devvp is flushed, since the
mount point is always suspended before the call to softdep_waitidle()
in unmount or remount path.
PR: 195458
In collaboration with: gjb, pho
Reviewed by: mckusick
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
2015-03-27 13:55:56 +00:00
|
|
|
if (waitfor == MNT_LAZY) {
|
|
|
|
if (!rebooting)
|
|
|
|
return (ffs_sync_lazy(mp));
|
|
|
|
waitfor = MNT_NOWAIT;
|
|
|
|
}

	/*
	 * Write back each (modified) inode.
	 */
	lockreq = LK_EXCLUSIVE | LK_NOWAIT;
	if (waitfor == MNT_SUSPEND) {
		suspend = 1;
		waitfor = MNT_WAIT;
	}
	if (waitfor == MNT_WAIT)
		lockreq = LK_EXCLUSIVE;
	lockreq |= LK_INTERLOCK | LK_SLEEPFAIL;
loop:
	/* Grab snapshot of secondary write counts */
	MNT_ILOCK(mp);
	secondary_writes = mp->mnt_secondary_writes;
	secondary_accwrites = mp->mnt_secondary_accwrites;
	MNT_IUNLOCK(mp);

	/* Grab snapshot of softdep dependency counts */
	softdep_get_depcounts(mp, &softdep_deps, &softdep_accdeps);

	MNT_VNODE_FOREACH_ALL(vp, mp, mvp) {
		/*
		 * Depend on the vnode interlock to keep things stable enough
		 * for a quick test.  Since there might be hundreds of
		 * thousands of vnodes, we cannot afford even a subroutine
		 * call unless there's a good chance that we have work to do.
		 */
		if (vp->v_type == VNON) {
			VI_UNLOCK(vp);
			continue;
		}
		ip = VTOI(vp);
		if ((ip->i_flag &
		    (IN_ACCESS | IN_CHANGE | IN_MODIFIED | IN_UPDATE)) == 0 &&
		    vp->v_bufobj.bo_dirty.bv_cnt == 0) {
			VI_UNLOCK(vp);
			continue;
		}
		if ((error = vget(vp, lockreq)) != 0) {
			if (error == ENOENT || error == ENOLCK) {
				MNT_VNODE_FOREACH_ALL_ABORT(mp, mvp);
				goto loop;
			}
			continue;
		}
#ifdef QUOTA
		qsyncvp(vp);
#endif
		for (;;) {
			error = ffs_syncvnode(vp, waitfor, 0);
			if (error == ERELOOKUP)
				continue;
			if (error != 0)
				allerror = error;
			break;
		}
		vput(vp);
	}
	/*
	 * Force stale filesystem control information to be flushed.
	 */
	if (waitfor == MNT_WAIT || rebooting) {
		if ((error = softdep_flushworklist(ump->um_mountp, &count, td)))
			allerror = error;
		if (ffs_fsfail_cleanup(ump, allerror))
			allerror = 0;
		/* Flushed work items may create new vnodes to clean */
		if (allerror == 0 && count)
			goto loop;
	}

	devvp = ump->um_devvp;
	bo = &devvp->v_bufobj;
	BO_LOCK(bo);
	if (bo->bo_numoutput > 0 || bo->bo_dirty.bv_cnt > 0) {
		BO_UNLOCK(bo);
		vn_lock(devvp, LK_EXCLUSIVE | LK_RETRY);
		error = VOP_FSYNC(devvp, waitfor, td);
		VOP_UNLOCK(devvp);
		if (MOUNTEDSOFTDEP(mp) && (error == 0 || error == EAGAIN))
			error = ffs_sbupdate(ump, waitfor, 0);
		if (error != 0)
			allerror = error;
		if (ffs_fsfail_cleanup(ump, allerror))
			allerror = 0;
		if (allerror == 0 && waitfor == MNT_WAIT)
			goto loop;
	} else if (suspend != 0) {
		if (softdep_check_suspend(mp,
		    devvp,
		    softdep_deps,
		    softdep_accdeps,
		    secondary_writes,
		    secondary_accwrites) != 0) {
			MNT_IUNLOCK(mp);
			goto loop;	/* More work needed */
		}
		mtx_assert(MNT_MTX(mp), MA_OWNED);
		mp->mnt_kern_flag |= MNTK_SUSPEND2 | MNTK_SUSPENDED;
		MNT_IUNLOCK(mp);
		suspended = 1;
	} else
		BO_UNLOCK(bo);
	/*
	 * Write back modified superblock.
	 */
	if (fs->fs_fmod != 0 &&
	    (error = ffs_sbupdate(ump, waitfor, suspended)) != 0)
		allerror = error;
	if (ffs_fsfail_cleanup(ump, allerror))
		allerror = 0;
	return (allerror);
}

int
ffs_vget(mp, ino, flags, vpp)
	struct mount *mp;
	ino_t ino;
	int flags;
	struct vnode **vpp;
{
	return (ffs_vgetf(mp, ino, flags, vpp, 0));
}
|
|
|
|
|
|
|
|
int
|
|
|
|
ffs_vgetf(mp, ino, flags, vpp, ffs_flags)
|
|
|
|
struct mount *mp;
|
|
|
|
ino_t ino;
|
|
|
|
int flags;
|
|
|
|
struct vnode **vpp;
|
|
|
|
int ffs_flags;
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
1997-02-10 02:22:35 +00:00
|
|
|
struct fs *fs;
|
|
|
|
struct inode *ip;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct ufsmount *ump;
|
|
|
|
struct buf *bp;
|
|
|
|
struct vnode *vp;
|
This commit enables a UFS filesystem to do a forcible unmount when
the underlying media fails or becomes inaccessible, for example
when a USB flash memory card hosting a UFS filesystem is unplugged.
The strategy for handling disk I/O errors when soft updates are
enabled is to stop writing to the disk of the affected file system
but continue to accept I/O requests and report that all future
writes by the file system to that disk actually succeed. Then
initiate an asynchronous forced unmount of the affected file system.
There are two cases for disk I/O errors:
- ENXIO, which means that this disk is gone and the lower layers
of the storage stack already guarantee that no future I/O to
this disk will succeed.
- EIO (or most other errors), which means that this particular
I/O request has failed but subsequent I/O requests to this
disk might still succeed.
For ENXIO, we can just clear the error and continue, because we
know that the file system cannot affect the on-disk state after we
see this error. For EIO or other errors, we arrange for the geom_vfs
layer to reject all future I/O requests with ENXIO, just as is
done when the geom_vfs is orphaned. In both cases, the file system
code can just clear the error and proceed with the forcible unmount.
This new treatment of I/O errors is needed for writes of any buffer
that is involved in a dependency. Most dependencies are described
by a structure attached to the buffer's b_dep field. But some are
created and processed as a result of the completion of the dependencies
attached to the buffer.
Clearing some dependencies requires a read. For example, if there
is a dependency that requires an inode to be written, the disk block
containing that inode must be read, the updated inode copied into
place in that buffer, and the buffer then written back to disk.
Often the needed buffer is already in memory and can be used. But
if it needs to be read from the disk, the read will fail, so we
fabricate a buffer full of zeros and pretend that the read succeeded.
This zeroed buffer can be updated and written back to disk.
The only case where a buffer full of zeros causes the code to do
the wrong thing is when reading an inode buffer containing an inode
that still has an inode dependency in memory that will reinitialize
the effective link count (i_effnlink) based on the actual link count
(i_nlink) that we read. To handle this case we now store the i_nlink
value that we wrote in the inode dependency so that it can be
restored into the zeroed buffer, thus keeping the tracking of the
inode link count consistent.
Because applications depend on knowing when an attempt to write
their data to stable storage has failed, the fsync(2) and msync(2)
system calls need to return errors if data fails to be written to
stable storage. So these operations return ENXIO for every call
made on files in a file system where we have otherwise been ignoring
I/O errors.
Co-authored by: mckusick
Reviewed by: kib
Tested by: Peter Holm
Approved by: mckusick (mentor)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D24088
2020-05-25 23:47:31 +00:00
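The ENXIO/EIO policy described in the commit message above can be sketched in a few lines. This is a hypothetical illustration, not the kernel's actual ffs_fsfail_cleanup(); the struct and function names are invented for the sketch. The idea: any fatal I/O error marks the file system as failed, and for non-ENXIO errors we additionally arrange for all future I/O to be rejected (standing in for orphaning the geom_vfs consumer) before clearing the error.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the disk I/O error policy: ENXIO means the
 * device is gone and lower layers already reject future I/O, so the
 * error can simply be cleared.  Any other I/O error must first cause
 * future requests to be rejected with ENXIO (as if the geom_vfs were
 * orphaned) before the error is cleared and the asynchronous forced
 * unmount proceeds.
 */
struct fakemount {
	bool um_fsfail;		/* file system has seen a fatal I/O error */
	bool um_rejecting;	/* all future I/O now fails with ENXIO */
};

static int
fsfail_cleanup_sketch(struct fakemount *ump, int error)
{
	if (error == 0)
		return (0);
	ump->um_fsfail = true;
	if (error != ENXIO)
		ump->um_rejecting = true;  /* stand-in for geom_vfs orphaning */
	return (0);	/* error cleared; forced unmount runs asynchronously */
}
```

Callers treat a zero return the way the real code treats a true return from ffs_fsfail_cleanup(): the original error is dropped and the operation reports success while the unmount is queued.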
|
|
|
daddr_t dbn;
|
2002-05-30 22:04:17 +00:00
|
|
|
int error;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2021-01-26 11:35:21 +00:00
|
|
|
MPASS((ffs_flags & (FFSV_REPLACE | FFSV_REPLACE_DOOMED)) == 0 ||
|
|
|
|
(flags & LK_EXCLUSIVE) != 0);
|
2019-08-29 07:45:23 +00:00
|
|
|
|
2005-03-16 11:20:51 +00:00
|
|
|
error = vfs_hash_get(mp, ino, flags, curthread, vpp, NULL, NULL);
|
2019-08-29 07:45:23 +00:00
|
|
|
if (error != 0)
|
2005-03-14 10:21:16 +00:00
|
|
|
return (error);
|
2019-08-29 07:45:23 +00:00
|
|
|
if (*vpp != NULL) {
|
2021-01-26 11:35:21 +00:00
|
|
|
if ((ffs_flags & FFSV_REPLACE) == 0 ||
|
|
|
|
((ffs_flags & FFSV_REPLACE_DOOMED) == 0 ||
|
|
|
|
!VN_IS_DOOMED(*vpp)))
|
2019-08-29 07:45:23 +00:00
|
|
|
return (0);
|
|
|
|
vgone(*vpp);
|
|
|
|
vput(*vpp);
|
|
|
|
}
|
2002-05-30 22:04:17 +00:00
|
|
|
|
2005-03-29 10:10:51 +00:00
|
|
|
/*
|
|
|
|
* We must promote to an exclusive lock for vnode creation. This
|
|
|
|
* can happen if lookup is passed LOCKSHARED.
|
2014-03-02 02:52:34 +00:00
|
|
|
*/
|
2005-03-29 10:10:51 +00:00
|
|
|
if ((flags & LK_TYPE_MASK) == LK_SHARED) {
|
|
|
|
flags &= ~LK_TYPE_MASK;
|
|
|
|
flags |= LK_EXCLUSIVE;
|
|
|
|
}
|
|
|
|
|
2002-05-30 22:04:17 +00:00
|
|
|
/*
|
2002-06-06 20:43:03 +00:00
|
|
|
* We do not lock vnode creation as it is believed to be too
|
2002-05-30 22:04:17 +00:00
|
|
|
* expensive for such a rare case as simultaneous creation of a vnode
|
|
|
|
* for the same ino by different processes. We just allow them to race
|
|
|
|
* and check later to decide who wins. Let the race begin!
|
|
|
|
*/
|
2005-03-14 10:21:16 +00:00
|
|
|
|
|
|
|
ump = VFSTOUFS(mp);
|
|
|
|
fs = ump->um_fs;
|
2020-07-25 10:38:05 +00:00
|
|
|
ip = uma_zalloc_smr(uma_inode, M_WAITOK | M_ZERO);
|
1996-06-12 03:37:57 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/* Allocate a new vnode/inode. */
|
2015-11-29 21:01:02 +00:00
|
|
|
error = getnewvnode("ufs", mp, fs->fs_magic == FS_UFS1_MAGIC ?
|
|
|
|
&ffs_vnodeops1 : &ffs_vnodeops2, &vp);
|
1994-10-08 06:20:06 +00:00
|
|
|
if (error) {
|
1994-05-24 10:09:53 +00:00
|
|
|
*vpp = NULL;
|
2020-07-25 10:38:05 +00:00
|
|
|
uma_zfree_smr(uma_inode, ip);
|
1994-05-24 10:09:53 +00:00
|
|
|
return (error);
|
|
|
|
}
|
2000-09-25 15:24:04 +00:00
|
|
|
/*
|
2009-03-11 14:13:47 +00:00
|
|
|
* FFS supports recursive locking.
|
2000-09-25 15:24:04 +00:00
|
|
|
*/
|
2010-08-20 19:46:50 +00:00
|
|
|
lockmgr(vp->v_vnlock, LK_EXCLUSIVE, NULL);
|
2008-02-24 16:38:58 +00:00
|
|
|
VN_LOCK_AREC(vp);
|
1994-05-24 10:09:53 +00:00
|
|
|
vp->v_data = ip;
|
2004-10-26 07:39:12 +00:00
|
|
|
vp->v_bufobj.bo_bsize = fs->fs_bsize;
|
1994-05-24 10:09:53 +00:00
|
|
|
ip->i_vnode = vp;
|
2002-06-21 06:18:05 +00:00
|
|
|
ip->i_ump = ump;
|
1994-05-24 10:09:53 +00:00
|
|
|
ip->i_number = ino;
|
2009-03-12 12:43:56 +00:00
|
|
|
ip->i_ea_refs = 0;
|
2015-04-24 23:27:50 +00:00
|
|
|
ip->i_nextclustercg = -1;
|
2016-09-17 16:47:34 +00:00
|
|
|
ip->i_flag = fs->fs_magic == FS_UFS1_MAGIC ? 0 : IN_UFS2;
|
2018-12-15 18:35:46 +00:00
|
|
|
ip->i_mode = 0; /* ensure error cases below throw away vnode */
|
2021-02-15 04:35:59 +00:00
|
|
|
cluster_init_vn(&ip->i_clusterw);
|
Add a framework that tracks the exclusive vnode lock generation count for UFS.
This count is memoized together with the lookup metadata in the directory
inode, and we assert that accesses to lookup metadata are done under
the same lock generation as they were stored. Enabled under DIAGNOSTIC.
UFS saves additional data for the parent dirent when doing a lookup
(i_offset, i_count, i_endoff), and this data is used later by VOPs
operating on dirents. If the parent vnode's exclusive lock is dropped and
re-acquired between the lookup and the VOP call, we corrupt directories.
The framework asserts that corruption cannot occur that way, by tracking
a vnode lock generation counter. Updates to inode dirent members also
save the counter, while users compare the current and saved counter
values.
Also, fix a case in ufs_lookup_ino() where i_offset and i_count could
be updated under a shared lock. It is not a bug on its own, since dvp
i_offset results from such a lookup cannot be used, but it causes a false
positive in the checker.
In collaboration with: pho
Reviewed by: mckusick (previous version), markj
Tested by: markj (syzkaller), pho
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D26136
2020-11-14 05:10:39 +00:00
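The generation-count check described above amounts to a simple stamp-and-compare protocol. The sketch below is illustrative only (the names are invented, not the kernel's): each exclusive acquisition bumps a per-vnode generation, lookup metadata records the generation it was computed under, and consumers assert the two still match before trusting saved dirent offsets.

```c
#include <assert.h>

/*
 * Minimal sketch of a DIAGNOSTIC-style lock-generation tracker.
 * If the lock is dropped and re-acquired between saving the lookup
 * hint and using it, the generations no longer match and the stale
 * hint is detected instead of silently corrupting the directory.
 */
struct vlock_sketch {
	unsigned long gen;	/* bumped on each exclusive acquisition */
};

struct dirent_hint {
	long i_offset;		/* saved lookup metadata */
	unsigned long saved_gen;	/* generation it was saved under */
};

static void
lock_excl(struct vlock_sketch *vl)
{
	vl->gen++;		/* new exclusive lock generation */
}

static void
save_hint(struct dirent_hint *h, const struct vlock_sketch *vl, long off)
{
	h->i_offset = off;
	h->saved_gen = vl->gen;	/* stamp the hint with the current generation */
}

static int
hint_valid(const struct dirent_hint *h, const struct vlock_sketch *vl)
{
	return (h->saved_gen == vl->gen);
}
```

In the real framework the comparison is an assertion that fires under DIAGNOSTIC kernels; here it is expressed as a predicate so the protocol is easy to exercise.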
|
|
|
#ifdef DIAGNOSTIC
|
|
|
|
ufs_init_trackers(ip);
|
|
|
|
#endif
|
1994-05-24 10:09:53 +00:00
|
|
|
#ifdef QUOTA
|
1994-10-10 01:04:55 +00:00
|
|
|
{
|
1995-07-21 03:52:40 +00:00
|
|
|
int i;
|
|
|
|
for (i = 0; i < MAXQUOTAS; i++)
|
|
|
|
ip->i_dquot[i] = NODQUOT;
|
1994-10-10 01:04:55 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
#endif
|
|
|
|
|
2008-08-28 09:18:20 +00:00
|
|
|
if (ffs_flags & FFSV_FORCEINSMQ)
|
|
|
|
vp->v_vflag |= VV_FORCEINSMQ;
|
2007-03-13 01:50:27 +00:00
|
|
|
error = insmntque(vp, mp);
|
|
|
|
if (error != 0) {
|
2020-07-25 10:38:05 +00:00
|
|
|
uma_zfree_smr(uma_inode, ip);
|
2007-03-13 01:50:27 +00:00
|
|
|
*vpp = NULL;
|
|
|
|
return (error);
|
|
|
|
}
|
2008-08-28 09:18:20 +00:00
|
|
|
vp->v_vflag &= ~VV_FORCEINSMQ;
|
2008-07-19 22:29:44 +00:00
|
|
|
error = vfs_hash_insert(vp, ino, flags, curthread, vpp, NULL, NULL);
|
2019-08-29 07:45:23 +00:00
|
|
|
if (error != 0)
|
2002-05-30 22:04:17 +00:00
|
|
|
return (error);
|
2019-08-29 07:45:23 +00:00
|
|
|
if (*vpp != NULL) {
|
|
|
|
/*
|
|
|
|
* Calls from ffs_valloc() (i.e. FFSV_REPLACE set)
|
|
|
|
* operate on an empty inode, which must not be found by
|
|
|
|
* other threads until fully filled. Vnode for empty
|
|
|
|
* inode must not be re-inserted on the hash by another
|
|
|
|
* thread, after removal by us at the beginning.
|
|
|
|
*/
|
|
|
|
MPASS((ffs_flags & FFSV_REPLACE) == 0);
|
|
|
|
return (0);
|
|
|
|
}
|
2002-05-30 22:04:17 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/* Read in the disk contents for the inode, copy into the inode. */
|
2020-05-25 23:47:31 +00:00
|
|
|
dbn = fsbtodb(fs, ino_to_fsba(fs, ino));
|
|
|
|
error = ffs_breadz(ump, ump->um_devvp, dbn, dbn, (int)fs->fs_bsize,
|
|
|
|
NULL, NULL, 0, NOCRED, 0, NULL, &bp);
|
|
|
|
if (error != 0) {
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* The inode does not contain anything useful, so it would
|
|
|
|
* be misleading to leave it on its hash chain. With mode
|
|
|
|
* still zero, it will be unlinked and returned to the free
|
|
|
|
* list by vput().
|
|
|
|
*/
|
2020-01-26 00:38:06 +00:00
|
|
|
vgone(vp);
|
1996-01-19 04:00:31 +00:00
|
|
|
vput(vp);
|
1994-05-24 10:09:53 +00:00
|
|
|
*vpp = NULL;
|
|
|
|
return (error);
|
|
|
|
}
|
2016-09-17 16:47:34 +00:00
|
|
|
if (I_IS_UFS1(ip))
|
2003-02-19 05:47:46 +00:00
|
|
|
ip->i_din1 = uma_zalloc(uma_ufs1, M_WAITOK);
|
2002-12-27 10:23:03 +00:00
|
|
|
else
|
2003-02-19 05:47:46 +00:00
|
|
|
ip->i_din2 = uma_zalloc(uma_ufs2, M_WAITOK);
|
2018-11-13 21:40:56 +00:00
|
|
|
if ((error = ffs_load_inode(bp, ip, fs, ino)) != 0) {
|
|
|
|
bqrelse(bp);
|
2020-01-26 00:38:06 +00:00
|
|
|
vgone(vp);
|
2018-11-13 21:40:56 +00:00
|
|
|
vput(vp);
|
|
|
|
*vpp = NULL;
|
|
|
|
return (error);
|
|
|
|
}
|
2021-03-03 17:40:56 +00:00
|
|
|
if (DOINGSOFTDEP(vp) && (!fs->fs_ronly ||
|
|
|
|
(ffs_flags & FFSV_FORCEINODEDEP) != 0))
|
1998-03-08 09:59:44 +00:00
|
|
|
softdep_load_inodeblock(ip);
|
|
|
|
else
|
|
|
|
ip->i_effnlink = ip->i_nlink;
|
1996-01-19 04:00:31 +00:00
|
|
|
bqrelse(bp);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialize the vnode from the inode, check for aliases.
|
|
|
|
* Note that the underlying vnode may have changed.
|
|
|
|
*/
|
2016-09-17 16:47:34 +00:00
|
|
|
error = ufs_vinit(mp, I_IS_UFS1(ip) ? &ffs_fifoops1 : &ffs_fifoops2,
|
|
|
|
&vp);
|
1994-10-08 06:20:06 +00:00
|
|
|
if (error) {
|
2020-01-26 00:38:06 +00:00
|
|
|
vgone(vp);
|
1994-05-24 10:09:53 +00:00
|
|
|
vput(vp);
|
|
|
|
*vpp = NULL;
|
|
|
|
return (error);
|
|
|
|
}
|
2005-03-15 20:50:58 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
2003-08-15 20:03:19 +00:00
|
|
|
* Finish inode initialization.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2009-03-11 14:13:47 +00:00
|
|
|
if (vp->v_type != VFIFO) {
|
|
|
|
/* FFS supports shared locking for all files except fifos. */
|
|
|
|
VN_LOCK_ASHARE(vp);
|
|
|
|
}
|
2005-03-15 20:50:58 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Set up a generation number for this inode if it does not
|
|
|
|
* already have one. This should only happen on old filesystems.
|
|
|
|
*/
|
|
|
|
if (ip->i_gen == 0) {
|
2016-05-22 14:31:20 +00:00
|
|
|
while (ip->i_gen == 0)
|
|
|
|
ip->i_gen = arc4random();
|
2002-06-21 06:18:05 +00:00
|
|
|
if ((vp->v_mount->mnt_flag & MNT_RDONLY) == 0) {
|
2020-01-13 02:31:51 +00:00
|
|
|
UFS_INODE_SET_FLAG(ip, IN_MODIFIED);
|
2004-07-28 06:41:27 +00:00
|
|
|
DIP_SET(ip, i_gen, ip->i_gen);
|
2002-06-21 06:18:05 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
Slightly change the semantics of vnode labels for MAC: rather than
"refreshing" the label on the vnode before use, just get the label
right from inception. For single-label file systems, set the label
in the generic VFS getnewvnode() code; for multi-label file systems,
leave the labeling up to the file system. With UFS1/2, this means
reading the extended attribute during vfs_vget() as the inode is
pulled off disk, rather than hitting the extended attributes
frequently during operations later, improving performance. This
also corrects semantics for shared vnode locks, which were not
previously present in the system. This changes the cache
coherency properties WRT out-of-band access to label data, but in
an acceptable form. With UFS1, there is a small race condition
during automatic extended attribute start -- this is not present
with UFS2, and occurs because EAs aren't available at vnode
inception. We'll introduce a workaround for this shortly.
Approved by: re
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
2002-10-26 14:38:24 +00:00
|
|
|
#ifdef MAC
|
|
|
|
if ((mp->mnt_flag & MNT_MULTILABEL) && ip->i_mode) {
|
|
|
|
/*
|
|
|
|
* If this vnode is already allocated, and we're running
|
|
|
|
* multi-label, attempt to perform a label association
|
|
|
|
* from the extended attributes on the inode.
|
|
|
|
*/
|
2007-10-24 19:04:04 +00:00
|
|
|
error = mac_vnode_associate_extattr(mp, vp);
|
2002-10-26 14:38:24 +00:00
|
|
|
if (error) {
|
|
|
|
/* ufs_inactive will release ip->i_devvp ref. */
|
2020-01-26 00:38:06 +00:00
|
|
|
vgone(vp);
|
2002-10-26 14:38:24 +00:00
|
|
|
vput(vp);
|
|
|
|
*vpp = NULL;
|
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
*vpp = vp;
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* File handle to vnode
|
|
|
|
*
|
|
|
|
* Have to be really careful about stale file handles:
|
|
|
|
* - check that the inode number is valid
|
2016-01-27 21:27:05 +00:00
|
|
|
* - for UFS2 check that the inode number is initialized
|
1994-05-24 10:09:53 +00:00
|
|
|
* - call ffs_vget() to get the locked inode
|
|
|
|
* - check for an unallocated inode (i_mode == 0)
|
|
|
|
* - check that the given client host has export rights and return
|
|
|
|
* those rights via exflagsp and credanonp
|
|
|
|
*/
|
2005-02-10 12:20:08 +00:00
|
|
|
static int
|
2011-05-22 01:07:54 +00:00
|
|
|
ffs_fhtovp(mp, fhp, flags, vpp)
|
2002-05-13 09:22:31 +00:00
|
|
|
struct mount *mp;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct fid *fhp;
|
2011-05-22 01:07:54 +00:00
|
|
|
int flags;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct vnode **vpp;
|
|
|
|
{
|
2002-05-13 09:22:31 +00:00
|
|
|
struct ufid *ufhp;
|
2021-01-26 11:52:59 +00:00
|
|
|
|
|
|
|
ufhp = (struct ufid *)fhp;
|
|
|
|
return (ffs_inotovp(mp, ufhp->ufid_ino, ufhp->ufid_gen, flags,
|
|
|
|
vpp, 0));
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
ffs_inotovp(mp, ino, gen, lflags, vpp, ffs_flags)
|
|
|
|
struct mount *mp;
|
|
|
|
ino_t ino;
|
|
|
|
u_int64_t gen;
|
|
|
|
int lflags;
|
|
|
|
struct vnode **vpp;
|
|
|
|
int ffs_flags;
|
|
|
|
{
|
2016-01-27 21:27:05 +00:00
|
|
|
struct ufsmount *ump;
|
2021-01-26 11:52:59 +00:00
|
|
|
struct vnode *nvp;
|
2021-01-28 12:20:48 +00:00
|
|
|
struct inode *ip;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct fs *fs;
|
2016-01-27 21:27:05 +00:00
|
|
|
struct cg *cgp;
|
|
|
|
struct buf *bp;
|
|
|
|
u_int cg;
|
|
|
|
int error;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2016-01-27 21:27:05 +00:00
|
|
|
ump = VFSTOUFS(mp);
|
|
|
|
fs = ump->um_fs;
|
2021-01-28 12:20:48 +00:00
|
|
|
*vpp = NULL;
|
|
|
|
|
2017-02-15 19:50:26 +00:00
|
|
|
if (ino < UFS_ROOTINO || ino >= fs->fs_ncg * fs->fs_ipg)
|
1994-05-24 10:09:53 +00:00
|
|
|
return (ESTALE);
|
2021-01-26 11:52:59 +00:00
|
|
|
|
2016-01-27 21:27:05 +00:00
|
|
|
/*
|
|
|
|
* Need to check if inode is initialized because UFS2 does lazy
|
|
|
|
* initialization and nfs_fhtovp can offer arbitrary inode numbers.
|
|
|
|
*/
|
2021-01-26 11:52:59 +00:00
|
|
|
if (fs->fs_magic == FS_UFS2_MAGIC) {
|
|
|
|
cg = ino_to_cg(fs, ino);
|
|
|
|
error = ffs_getcg(fs, ump->um_devvp, cg, 0, &bp, &cgp);
|
|
|
|
if (error != 0)
|
|
|
|
return (error);
|
|
|
|
if (ino >= cg * fs->fs_ipg + cgp->cg_initediblk) {
|
|
|
|
brelse(bp);
|
|
|
|
return (ESTALE);
|
|
|
|
}
|
2016-01-27 21:27:05 +00:00
|
|
|
brelse(bp);
|
|
|
|
}
|
2021-01-26 11:52:59 +00:00
|
|
|
|
|
|
|
error = ffs_vgetf(mp, ino, lflags, &nvp, ffs_flags);
|
2021-01-28 12:20:48 +00:00
|
|
|
if (error != 0)
|
|
|
|
return (error);
|
|
|
|
|
|
|
|
ip = VTOI(nvp);
|
|
|
|
if (ip->i_mode == 0 || ip->i_gen != gen || ip->i_effnlink <= 0) {
|
|
|
|
if (ip->i_mode == 0)
|
|
|
|
vgone(nvp);
|
|
|
|
vput(nvp);
|
|
|
|
return (ESTALE);
|
|
|
|
}
|
|
|
|
|
|
|
|
vnode_create_vobject(nvp, DIP(ip, i_size), curthread);
|
|
|
|
*vpp = nvp;
|
|
|
|
return (0);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
1997-02-10 02:22:35 +00:00
|
|
|
/*
|
2002-07-01 11:00:47 +00:00
|
|
|
* Initialize the filesystem.
|
1997-02-10 02:22:35 +00:00
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ffs_init(vfsp)
|
|
|
|
struct vfsconf *vfsp;
|
|
|
|
{
|
|
|
|
|
2012-11-18 18:57:19 +00:00
|
|
|
ffs_susp_initialize();
|
1998-03-08 09:59:44 +00:00
|
|
|
softdep_initialize();
|
1997-02-10 02:22:35 +00:00
|
|
|
return (ufs_init(vfsp));
|
|
|
|
}
|
|
|
|
|
2002-07-01 11:00:47 +00:00
|
|
|
/*
|
|
|
|
* Undo the work of ffs_init().
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ffs_uninit(vfsp)
|
|
|
|
struct vfsconf *vfsp;
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = ufs_uninit(vfsp);
|
|
|
|
softdep_uninitialize();
|
2012-11-18 18:57:19 +00:00
|
|
|
ffs_susp_uninitialize();
|
2020-05-25 23:47:31 +00:00
|
|
|
taskqueue_drain_all(taskqueue_thread);
|
2002-07-01 11:00:47 +00:00
|
|
|
return (ret);
|
|
|
|
}
|
|
|
|
|
2018-01-26 00:58:32 +00:00
|
|
|
/*
|
|
|
|
* Structure used to pass information from ffs_sbupdate to its
|
|
|
|
* helper routine ffs_use_bwrite.
|
|
|
|
*/
|
|
|
|
struct devfd {
|
|
|
|
struct ufsmount *ump;
|
|
|
|
struct buf *sbbp;
|
|
|
|
int waitfor;
|
|
|
|
int suspended;
|
|
|
|
int error;
|
|
|
|
};
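The devfd structure above is an instance of the callback-with-context pattern: ffs_sbupdate() packs its state into a struct and passes ffs_sbput() a write function plus that struct as an opaque cookie. The sketch below illustrates the shape of that interface with invented names; it is not the actual ffs_sbput()/ffs_use_bwrite() code.

```c
#include <assert.h>

/* Context struct handed through the opaque cookie, like struct devfd. */
struct devfd_sketch {
	int waitfor;	/* MNT_WAIT-style flag (unused in this sketch) */
	int error;	/* first error seen by the write function */
};

typedef int (*sbwrite_fn)(void *cookie, long loc, const void *buf, int size);

/*
 * Stand-in for ffs_sbput(): writes the summary information first, then
 * the superblock itself, through the caller-supplied write function.
 */
static int
sbput_sketch(void *cookie, sbwrite_fn writefn)
{
	static const char sb[16] = "superblock";

	(void)writefn(cookie, 1024, sb, sizeof(sb)); /* summary info */
	return (writefn(cookie, 0, sb, sizeof(sb)));  /* superblock */
}

/*
 * Stand-in for ffs_use_bwrite(): recovers its state from the cookie
 * and reports any previously recorded error for the final write.
 */
static int
use_bwrite_sketch(void *cookie, long loc, const void *buf, int size)
{
	struct devfd_sketch *d = cookie;

	(void)loc; (void)buf; (void)size;
	return (d->error);
}
```

This indirection lets the same ffs_sbput() logic serve both the kernel (buffer-cache writes) and userland utilities that supply their own write function.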
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Write a superblock and associated information back to disk.
|
|
|
|
*/
|
2006-10-31 21:48:54 +00:00
|
|
|
int
|
2011-07-15 16:20:33 +00:00
|
|
|
ffs_sbupdate(ump, waitfor, suspended)
|
|
|
|
struct ufsmount *ump;
|
1994-05-24 10:09:53 +00:00
|
|
|
int waitfor;
|
2006-03-08 23:43:39 +00:00
|
|
|
int suspended;
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
2018-01-26 00:58:32 +00:00
|
|
|
struct fs *fs;
|
2005-01-24 10:12:28 +00:00
|
|
|
struct buf *sbbp;
|
2018-01-26 00:58:32 +00:00
|
|
|
struct devfd devfd;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2018-01-26 00:58:32 +00:00
|
|
|
fs = ump->um_fs;
|
2003-02-25 23:21:08 +00:00
|
|
|
if (fs->fs_ronly == 1 &&
|
2011-07-15 16:20:33 +00:00
|
|
|
(ump->um_mountp->mnt_flag & (MNT_RDONLY | MNT_UPDATE)) !=
|
|
|
|
(MNT_RDONLY | MNT_UPDATE) && ump->um_fsckpid == 0)
|
2003-02-25 23:21:08 +00:00
|
|
|
panic("ffs_sbupdate: write read-only filesystem");
|
2005-01-24 10:12:28 +00:00
|
|
|
/*
|
|
|
|
* We use the superblock's buf to serialize calls to ffs_sbupdate().
|
|
|
|
*/
|
2011-07-15 16:20:33 +00:00
|
|
|
sbbp = getblk(ump->um_devvp, btodb(fs->fs_sblockloc),
|
|
|
|
(int)fs->fs_sbsize, 0, 0, 0);
|
1997-02-10 02:22:35 +00:00
|
|
|
/*
|
2018-01-26 00:58:32 +00:00
|
|
|
* Initialize info needed for write function.
|
1997-02-10 02:22:35 +00:00
|
|
|
*/
|
2018-01-26 00:58:32 +00:00
|
|
|
devfd.ump = ump;
|
|
|
|
devfd.sbbp = sbbp;
|
|
|
|
devfd.waitfor = waitfor;
|
|
|
|
devfd.suspended = suspended;
|
|
|
|
devfd.error = 0;
|
|
|
|
return (ffs_sbput(&devfd, fs, fs->fs_sblockloc, ffs_use_bwrite));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Write function for use by filesystem-layer routines.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ffs_use_bwrite(void *devfd, off_t loc, void *buf, int size)
|
|
|
|
{
|
|
|
|
struct devfd *devfdp;
|
|
|
|
struct ufsmount *ump;
|
|
|
|
struct buf *bp;
|
|
|
|
struct fs *fs;
|
|
|
|
int error;
|
|
|
|
|
|
|
|
devfdp = devfd;
|
|
|
|
ump = devfdp->ump;
|
|
|
|
fs = ump->um_fs;
|
|
|
|
/*
|
|
|
|
* Writing the superblock summary information.
|
|
|
|
*/
|
|
|
|
if (loc != fs->fs_sblockloc) {
|
|
|
|
bp = getblk(ump->um_devvp, btodb(loc), size, 0, 0, 0);
|
|
|
|
bcopy(buf, bp->b_data, (u_int)size);
|
|
|
|
if (devfdp->suspended)
|
2006-03-08 23:43:39 +00:00
|
|
|
bp->b_flags |= B_VALIDSUSPWRT;
|
2018-01-26 00:58:32 +00:00
|
|
|
if (devfdp->waitfor != MNT_WAIT)
|
1994-05-24 10:09:53 +00:00
|
|
|
bawrite(bp);
|
1999-01-28 00:57:57 +00:00
|
|
|
else if ((error = bwrite(bp)) != 0)
|
2018-01-26 00:58:32 +00:00
|
|
|
devfdp->error = error;
|
|
|
|
return (0);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1997-02-10 02:22:35 +00:00
|
|
|
/*
|
2018-01-26 00:58:32 +00:00
|
|
|
* Writing the superblock itself. We need to do special checks for it.
|
1997-02-10 02:22:35 +00:00
|
|
|
*/
|
2018-01-26 00:58:32 +00:00
|
|
|
bp = devfdp->sbbp;
|
This commit enables a UFS filesystem to do a forcible unmount when
the underlying media fails or becomes inaccessible. For example
when a USB flash memory card hosting a UFS filesystem is unplugged.
The strategy for handling disk I/O errors when soft updates are
enabled is to stop writing to the disk of the affected file system
but continue to accept I/O requests and report that all future
writes by the file system to that disk actually succeed. Then
initiate an asynchronous forced unmount of the affected file system.
There are two cases for disk I/O errors:
- ENXIO, which means that this disk is gone and the lower layers
of the storage stack already guarantee that no future I/O to
this disk will succeed.
- EIO (or most other errors), which means that this particular
I/O request has failed but subsequent I/O requests to this
disk might still succeed.
For ENXIO, we can just clear the error and continue, because we
know that the file system cannot affect the on-disk state after we
see this error. For EIO or other errors, we arrange for the geom_vfs
layer to reject all future I/O requests with ENXIO just like is
done when the geom_vfs is orphaned. In both cases, the file system
code can just clear the error and proceed with the forcible unmount.
This new treatment of I/O errors is needed for writes of any buffer
that is involved in a dependency. Most dependencies are described
by a structure attached to the buffer's b_dep field. But some are
created and processed as a result of the completion of the dependencies
attached to the buffer.
Clearing of some dependencies require a read. For example if there
is a dependency that requires an inode to be written, the disk block
containing that inode must be read, the updated inode copied into
place in that buffer, and the buffer then written back to disk.
Often the needed buffer is already in memory and can be used. But
if it needs to be read from the disk, the read will fail, so we
fabricate a buffer full of zeroes and pretend that the read succeeded.
This zero'ed buffer can be updated and written back to disk.
The only case where a buffer full of zeros causes the code to do
the wrong thing is when reading an inode buffer containing an inode
that still has an inode dependency in memory that will reinitialize
the effective link count (i_effnlink) based on the actual link count
(i_nlink) that we read. To handle this case we now store the i_nlink
value that we wrote in the inode dependency so that it can be
restored into the zeroed buffer, thus keeping the tracking of the
inode link count consistent.
Because applications depend on knowing when an attempt to write
their data to stable storage has failed, the fsync(2) and msync(2)
system calls need to return errors if data fails to be written to
stable storage. So these operations return ENXIO for every call
made on files in a file system where we have otherwise been ignoring
I/O errors.
Co-authored by: mckusick
Reviewed by: kib
Tested by: Peter Holm
Approved by: mckusick (mentor)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D24088
2020-05-25 23:47:31 +00:00
|
|
|
if (ffs_fsfail_cleanup(ump, devfdp->error))
|
|
|
|
devfdp->error = 0;
|
2018-01-26 00:58:32 +00:00
|
|
|
if (devfdp->error != 0) {
|
|
|
|
brelse(bp);
|
|
|
|
return (devfdp->error);
|
2005-01-24 10:12:28 +00:00
|
|
|
}
|
2002-11-30 19:04:57 +00:00
|
|
|
if (fs->fs_magic == FS_UFS1_MAGIC && fs->fs_sblockloc != SBLOCK_UFS1 &&
|
2014-06-03 21:46:13 +00:00
|
|
|
(fs->fs_old_flags & FS_FLAGS_UPDATED) == 0) {
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: correcting fs_sblockloc from %jd to %d\n",
|
2002-11-29 19:20:15 +00:00
|
|
|
fs->fs_fsmnt, fs->fs_sblockloc, SBLOCK_UFS1);
|
|
|
|
fs->fs_sblockloc = SBLOCK_UFS1;
|
|
|
|
}
|
2002-11-30 19:04:57 +00:00
|
|
|
if (fs->fs_magic == FS_UFS2_MAGIC && fs->fs_sblockloc != SBLOCK_UFS2 &&
|
2014-06-03 21:46:13 +00:00
|
|
|
(fs->fs_old_flags & FS_FLAGS_UPDATED) == 0) {
|
2012-01-14 07:26:16 +00:00
|
|
|
printf("WARNING: %s: correcting fs_sblockloc from %jd to %d\n",
|
2002-11-29 19:20:15 +00:00
|
|
|
fs->fs_fsmnt, fs->fs_sblockloc, SBLOCK_UFS2);
|
|
|
|
fs->fs_sblockloc = SBLOCK_UFS2;
|
|
|
|
}
|
2013-10-20 21:11:40 +00:00
|
|
|
if (MOUNTEDSOFTDEP(ump->um_mountp))
|
2011-07-15 16:20:33 +00:00
|
|
|
softdep_setup_sbupdate(ump, (struct fs *)bp->b_data, bp);
|
1997-02-10 02:22:35 +00:00
|
|
|
bcopy((caddr_t)fs, bp->b_data, (u_int)fs->fs_sbsize);
|
2019-08-06 18:10:34 +00:00
|
|
|
fs = (struct fs *)bp->b_data;
|
|
|
|
ffs_oldfscompat_write(fs, ump);
|
2020-06-19 01:04:25 +00:00
|
|
|
fs->fs_si = NULL;
|
2020-06-19 01:02:53 +00:00
|
|
|
/* Recalculate the superblock hash */
|
2019-08-06 18:10:34 +00:00
|
|
|
fs->fs_ckhash = ffs_calc_sbhash(fs);
|
2018-01-26 00:58:32 +00:00
|
|
|
if (devfdp->suspended)
|
2006-03-08 23:43:39 +00:00
|
|
|
bp->b_flags |= B_VALIDSUSPWRT;
|
2018-01-26 00:58:32 +00:00
|
|
|
if (devfdp->waitfor != MNT_WAIT)
|
1997-02-10 02:22:35 +00:00
|
|
|
bawrite(bp);
|
1999-01-28 00:57:57 +00:00
|
|
|
else if ((error = bwrite(bp)) != 0)
|
2018-01-26 00:58:32 +00:00
|
|
|
devfdp->error = error;
|
|
|
|
return (devfdp->error);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2002-08-13 10:33:57 +00:00
|
|
|
|
|
|
|
static int
|
|
|
|
ffs_extattrctl(struct mount *mp, int cmd, struct vnode *filename_vp,
|
2009-05-11 15:33:26 +00:00
|
|
|
int attrnamespace, const char *attrname)
|
2002-08-13 10:33:57 +00:00
|
|
|
{
|
|
|
|
|
|
|
|
#ifdef UFS_EXTATTR
|
|
|
|
return (ufs_extattrctl(mp, cmd, filename_vp, attrnamespace,
|
2009-05-11 15:33:26 +00:00
|
|
|
attrname));
|
2002-08-13 10:33:57 +00:00
|
|
|
#else
|
|
|
|
return (vfs_stdextattrctl(mp, cmd, filename_vp, attrnamespace,
|
2009-05-11 15:33:26 +00:00
|
|
|
attrname));
|
2002-08-13 10:33:57 +00:00
|
|
|
#endif
|
|
|
|
}
|
2002-12-27 10:06:37 +00:00
|
|
|
|
|
|
|
static void
|
|
|
|
ffs_ifree(struct ufsmount *ump, struct inode *ip)
|
|
|
|
{
|
|
|
|
|
2003-05-01 06:41:59 +00:00
|
|
|
if (ump->um_fstype == UFS1 && ip->i_din1 != NULL)
|
2002-12-27 11:05:05 +00:00
|
|
|
uma_zfree(uma_ufs1, ip->i_din1);
|
2003-05-01 06:41:59 +00:00
|
|
|
else if (ip->i_din2 != NULL)
|
2003-05-01 06:38:27 +00:00
|
|
|
uma_zfree(uma_ufs2, ip->i_din2);
|
2020-07-25 10:38:05 +00:00
|
|
|
uma_zfree_smr(uma_inode, ip);
|
2002-12-27 10:06:37 +00:00
|
|
|
}
|
2004-10-26 10:44:10 +00:00
|
|
|
|
2005-02-08 20:29:10 +00:00
|
|
|
static int dobkgrdwrite = 1;
|
|
|
|
SYSCTL_INT(_debug, OID_AUTO, dobkgrdwrite, CTLFLAG_RW, &dobkgrdwrite, 0,
|
|
|
|
"Do background writes (honoring the BV_BKGRDWRITE flag)?");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Complete a background write started from bwrite.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
ffs_backgroundwritedone(struct buf *bp)
|
|
|
|
{
|
2005-05-30 07:04:15 +00:00
|
|
|
struct bufobj *bufobj;
|
2005-02-08 20:29:10 +00:00
|
|
|
struct buf *origbp;
|
|
|
|
|
2020-05-25 23:47:31 +00:00
|
|
|
#ifdef SOFTUPDATES
|
|
|
|
if (!LIST_EMPTY(&bp->b_dep) && (bp->b_ioflags & BIO_ERROR) != 0)
|
|
|
|
softdep_handle_error(bp);
|
|
|
|
#endif
|
|
|
|
|
2005-02-08 20:29:10 +00:00
|
|
|
/*
|
|
|
|
* Find the original buffer that we are writing.
|
|
|
|
*/
|
2005-05-30 07:04:15 +00:00
|
|
|
bufobj = bp->b_bufobj;
|
|
|
|
BO_LOCK(bufobj);
|
2005-02-08 20:29:10 +00:00
|
|
|
if ((origbp = gbincore(bp->b_bufobj, bp->b_lblkno)) == NULL)
|
|
|
|
panic("backgroundwritedone: lost buffer");
|
Handle errors from background write of the cylinder group blocks.
First, on the write error, bufdone() call from ffs_backgroundwrite()
panics because pbrelvp() cleared bp->b_bufobj, while brelse() would
try to re-dirty the copy of the cg buffer. Handle this by setting
B_INVAL for the case of BIO_ERROR.
Second, we must re-dirty the real buffer containing the cylinder group
block data when background write failed. Real cg buffer was already
marked clean in ffs_bufwrite(). After the BV_BKGRDINPROG flag is
cleared on the real cg buffer in ffs_backgroundwrite(), buffer scan
may reuse the buffer at any moment. The result is lost write, and if
the write error was only transient, we get corrupted bitmaps.
We cannot re-dirty the original cg buffer in the
ffs_backgroundwritedone(), since the context is not sleepable,
preventing us from sleeping for origbp' lock. Add BV_BKGDERR flag
(protected by the buffer object lock), which is converted into delayed
write by brelse(), bqrelse() and buffer scan.
In collaboration with: Conrad Meyer <cse.cem@gmail.com>
Reviewed by: mckusick
Sponsored by: The FreeBSD Foundation (kib),
EMC/Isilon storage division (Conrad)
MFC after: 2 weeks
2015-06-27 09:44:14 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We should mark the cylinder group buffer origbp as
|
2020-05-25 23:47:31 +00:00
|
|
|
* dirty, to not lose the failed write.
|
2015-06-27 09:44:14 +00:00
|
|
|
*/
|
|
|
|
if ((bp->b_ioflags & BIO_ERROR) != 0)
|
|
|
|
origbp->b_vflags |= BV_BKGRDERR;
|
2005-05-30 07:04:15 +00:00
|
|
|
BO_UNLOCK(bufobj);
|
2005-02-08 20:29:10 +00:00
|
|
|
/*
|
|
|
|
* Process dependencies then return any unfinished ones.
|
|
|
|
*/
|
2015-06-27 09:44:14 +00:00
|
|
|
if (!LIST_EMPTY(&bp->b_dep) && (bp->b_ioflags & BIO_ERROR) == 0)
|
2005-02-08 20:29:10 +00:00
|
|
|
buf_complete(bp);
|
|
|
|
#ifdef SOFTUPDATES
|
2007-04-04 07:29:53 +00:00
|
|
|
if (!LIST_EMPTY(&bp->b_dep))
|
2005-02-08 20:29:10 +00:00
|
|
|
softdep_move_dependencies(bp, origbp);
|
|
|
|
#endif
|
|
|
|
/*
|
2005-05-30 07:04:15 +00:00
|
|
|
* This buffer is marked B_NOCACHE so when it is released
|
2021-01-30 02:10:34 +00:00
|
|
|
* by biodone it will be tossed. Clear B_IOSTARTED in case of error.
|
2005-02-08 20:29:10 +00:00
|
|
|
*/
|
|
|
|
bp->b_flags |= B_NOCACHE;
|
2021-01-30 02:10:34 +00:00
|
|
|
bp->b_flags &= ~(B_CACHE | B_IOSTARTED);
|
2018-01-09 10:33:11 +00:00
|
|
|
pbrelvp(bp);
|
2015-06-27 09:44:14 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Prevent brelse() from trying to keep and re-dirtying bp on
|
|
|
|
* errors. It causes b_bufobj dereference in
|
|
|
|
* bdirty()/reassignbuf(), and b_bufobj was cleared in
|
|
|
|
* pbrelvp() above.
|
|
|
|
*/
|
|
|
|
if ((bp->b_ioflags & BIO_ERROR) != 0)
|
|
|
|
bp->b_flags |= B_INVAL;
|
2005-02-08 20:29:10 +00:00
|
|
|
bufdone(bp);
|
2005-05-30 07:04:15 +00:00
|
|
|
BO_LOCK(bufobj);
|
2005-02-08 20:29:10 +00:00
|
|
|
/*
|
|
|
|
* Clear the BV_BKGRDINPROG flag in the original buffer
|
|
|
|
* and awaken it if it is waiting for the write to complete.
|
|
|
|
* If BV_BKGRDINPROG is not set in the original buffer it must
|
|
|
|
* have been released and re-instantiated - which is not legal.
|
|
|
|
*/
|
|
|
|
KASSERT((origbp->b_vflags & BV_BKGRDINPROG),
|
|
|
|
("backgroundwritedone: lost buffer2"));
|
|
|
|
origbp->b_vflags &= ~BV_BKGRDINPROG;
|
|
|
|
if (origbp->b_vflags & BV_BKGRDWAIT) {
|
|
|
|
origbp->b_vflags &= ~BV_BKGRDWAIT;
|
|
|
|
wakeup(&origbp->b_xflags);
|
|
|
|
}
|
2005-05-30 07:04:15 +00:00
|
|
|
BO_UNLOCK(bufobj);
|
2005-02-08 20:29:10 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Write, release buffer on completion. (Done by iodone
|
|
|
|
* if async). Do not bother writing anything if the buffer
|
|
|
|
* is invalid.
|
|
|
|
*
|
|
|
|
* Note that we set B_CACHE here, indicating that buffer is
|
|
|
|
* fully valid and thus cacheable. This is true even of NFS
|
2010-09-17 09:14:40 +00:00
|
|
|
* now so we set it generally. This could be set either here
|
2005-02-08 20:29:10 +00:00
|
|
|
* or in biodone() since the I/O is synchronous. We put it
|
|
|
|
* here.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ffs_bufwrite(struct buf *bp)
|
|
|
|
{
|
|
|
|
struct buf *newbp;
|
Occasional cylinder-group check-hash errors were being reported on
systems running with a heavy filesystem load. Tracking down this
bug was elusive because there were actually two problems. Sometimes
the in-memory check hash was wrong and sometimes the check hash
computed when doing the read was wrong. The occurrence of either
error caused a check-hash mismatch to be reported.
The first error was that the check hash in the in-memory cylinder
group was incorrect. This error was caused by the following
sequence of events:
- We read a cylinder-group buffer and the check hash is valid.
- We update its cg_time and cg_old_time which makes the in-memory
check-hash value invalid but we do not mark the cylinder group dirty.
- We do not make any other changes to the cylinder group, so we
never mark it dirty, thus do not write it out, and hence never
update the incorrect check hash for the in-memory buffer.
- Later, the buffer gets freed, but the page with the old incorrect
check hash is still in the VM cache.
- Later, we read the cylinder group again, and the first page with
the old check hash is still in the VM cache, but some other pages
are not, so we have to do a read.
- The read does not actually get the first page from disk, but rather
from the VM cache, resulting in the old check hash in the buffer.
- The value computed after doing the read does not match causing the
error to be printed.
The fix for this problem is to only set cg_time and cg_old_time as
the cylinder group is being written to disk. This keeps the in-memory
check-hash valid unless the cylinder group has had other modifications
which will require it to be written with a new check hash calculated.
It also requires that the check hash be recalculated in the in-memory
cylinder group when it is marked clean after doing a background write.
The second problem was that the check hash computed at the end of the
read was incorrect because the calculation of the check hash on
completion of the read was being done too soon.
- When a read completes we had the following sequence:
- bufdone()
-- b_ckhashcalc (calculates check hash)
-- bufdone_finish()
--- vfs_vmio_iodone() (replaces bogus pages with the cached ones)
- When we are reading a buffer where one or more pages are already
in memory (but not all pages, or we wouldn't be doing the read),
the I/O is done with bogus_page mapped in for the pages that exist
in the VM cache. This mapping is done to avoid corrupting the
cached pages if there is any I/O overrun. The vfs_vmio_iodone()
function is responsible for replacing the bogus_page(s) with the
cached ones. But we were calculating the check hash before the
bogus_page(s) were replaced. Hence, when we were calculating the
check hash, we were partly reading from bogus_page, which means
we calculated a bad check hash (e.g., because multiple pages have
been mapped to bogus_page, so its contents are indeterminate).
The second fix is to move the check-hash calculation from bufdone()
to bufdone_finish() after the call to vfs_vmio_iodone() so that it
computes the check hash over the correct set of pages.
With these two changes, the occasional cylinder-group check-hash
errors are gone.
Submitted by: David Pfitzner <dpfitzner@netflix.com>
Reviewed by: kib
Tested by: David Pfitzner
2018-02-06 00:19:46 +00:00
|
|
|
struct cg *cgp;
|
2005-02-08 20:29:10 +00:00
|
|
|
|
|
|
|
CTR3(KTR_BUF, "bufwrite(%p) vp %p flags %X", bp, bp->b_vp, bp->b_flags);
|
|
|
|
if (bp->b_flags & B_INVAL) {
|
|
|
|
brelse(bp);
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
2008-01-19 17:36:23 +00:00
|
|
|
if (!BUF_ISLOCKED(bp))
|
2005-02-08 20:29:10 +00:00
|
|
|
panic("bufwrite: buffer is not busy???");
|
|
|
|
/*
|
|
|
|
* If a background write is already in progress, delay
|
|
|
|
* writing this block if it is asynchronous. Otherwise
|
|
|
|
* wait for the background write to complete.
|
|
|
|
*/
|
|
|
|
BO_LOCK(bp->b_bufobj);
|
|
|
|
if (bp->b_vflags & BV_BKGRDINPROG) {
|
|
|
|
if (bp->b_flags & B_ASYNC) {
|
|
|
|
BO_UNLOCK(bp->b_bufobj);
|
|
|
|
bdwrite(bp);
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
bp->b_vflags |= BV_BKGRDWAIT;
|
2013-05-31 00:43:41 +00:00
|
|
|
msleep(&bp->b_xflags, BO_LOCKPTR(bp->b_bufobj), PRIBIO,
|
|
|
|
"bwrbg", 0);
|
2005-02-08 20:29:10 +00:00
|
|
|
if (bp->b_vflags & BV_BKGRDINPROG)
|
|
|
|
panic("bufwrite: still writing");
|
|
|
|
}
|
2015-06-29 13:06:24 +00:00
|
|
|
bp->b_vflags &= ~BV_BKGRDERR;
|
2005-02-08 20:29:10 +00:00
|
|
|
BO_UNLOCK(bp->b_bufobj);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If this buffer is marked for background writing and we
|
|
|
|
* do not have to wait for it, make a copy and write the
|
|
|
|
* copy so as to leave this buffer ready for further use.
|
|
|
|
*
|
|
|
|
* This optimization eats a lot of memory. If we have a page
|
|
|
|
* or buffer shortfall we can't do it.
|
|
|
|
*/
|
2010-09-17 09:14:40 +00:00
|
|
|
if (dobkgrdwrite && (bp->b_xflags & BX_BKGRDWRITE) &&
|
2005-02-08 20:29:10 +00:00
|
|
|
(bp->b_flags & B_ASYNC) &&
|
|
|
|
!vm_page_count_severe() &&
|
|
|
|
!buf_dirty_count_severe()) {
|
|
|
|
KASSERT(bp->b_iodone == NULL,
|
|
|
|
("bufwrite: needs chained iodone (%p)", bp->b_iodone));
|
|
|
|
|
|
|
|
/* get a new block */
|
Fix two issues with bufdaemon, often causing the processes to hang in
the "nbufkv" sleep.
First, ffs background cg group block write requests a new buffer for
the shadow copy. When ffs_bufwrite() is called from the bufdaemon due
to a buffer shortage, requesting the buffer deadlocks the bufdaemon.
Introduce a new flag for getnewbuf(), GB_NOWAIT_BD, to request getblk
to not block while allocating the buffer, and return failure
instead. Add a flag argument to the geteblk to allow to pass the flags
to getblk(). Do not repeat the getnewbuf() call from geteblk if buffer
allocation failed and either GB_NOWAIT_BD is specified, or geteblk()
is called from bufdaemon (or its helper, see below). In
ffs_bufwrite(), fall back to synchronous cg block write if shadow
block allocation failed.
Since r107847, buffer write assumes that the vnode owning the buffer is
locked. The second problem is that the buffer cache may accumulate many
buffers belonging to limited number of vnodes. With such workload,
quite often threads that own the mentioned vnodes locks are trying to
read another block from the vnodes, and, due to buffer cache
exhaustion, are asking bufdaemon for help. Bufdaemon is unable to make
any substantial progress because the vnodes are locked.
Allow the threads owning vnode locks to help the bufdaemon by doing
the flush pass over the buffer cache before getnewbuf() is going to
uninterruptible sleep. Move the flushing code from buf_daemon() to new
helper function buf_do_flush(), that is called from getnewbuf(). The
number of buffers flushed by single call to buf_do_flush() from
getnewbuf() is limited by new sysctl vfs.flushbufqtarget. Prevent
recursive calls to buf_do_flush() by marking the bufdaemon and threads
that temporarily help bufdaemon by TDP_BUFNEED flag.
In collaboration with: pho
Reviewed by: tegge (previous version)
Tested by: glebius, yandex ...
MFC after: 3 weeks
2009-03-16 15:39:46 +00:00
|
|
|
newbp = geteblk(bp->b_bufsize, GB_NOWAIT_BD);
|
|
|
|
if (newbp == NULL)
|
|
|
|
goto normal_write;
|
2005-02-08 20:29:10 +00:00
|
|
|
|
2015-07-23 19:13:41 +00:00
|
|
|
KASSERT(buf_mapped(bp), ("Unmapped cg"));
|
2005-02-08 20:29:10 +00:00
|
|
|
memcpy(newbp->b_data, bp->b_data, bp->b_bufsize);
|
|
|
|
BO_LOCK(bp->b_bufobj);
|
|
|
|
bp->b_vflags |= BV_BKGRDINPROG;
|
|
|
|
BO_UNLOCK(bp->b_bufobj);
|
2017-09-22 12:45:15 +00:00
|
|
|
newbp->b_xflags |=
|
|
|
|
(bp->b_xflags & BX_FSPRIV) | BX_BKGRDMARKER;
|
2013-04-06 22:21:23 +00:00
|
|
|
newbp->b_lblkno = bp->b_lblkno;
|
2005-02-08 20:29:10 +00:00
|
|
|
newbp->b_blkno = bp->b_blkno;
|
|
|
|
newbp->b_offset = bp->b_offset;
|
|
|
|
newbp->b_iodone = ffs_backgroundwritedone;
|
|
|
|
newbp->b_flags |= B_ASYNC;
|
|
|
|
newbp->b_flags &= ~B_INVAL;
|
2013-04-06 22:21:23 +00:00
|
|
|
pbgetvp(bp->b_vp, newbp);
|
2005-02-08 20:29:10 +00:00
|
|
|
|
|
|
|
#ifdef SOFTUPDATES
|
2010-04-24 07:05:35 +00:00
|
|
|
/*
|
|
|
|
* Move over the dependencies. If there are rollbacks,
|
|
|
|
* leave the parent buffer dirtied as it will need to
|
|
|
|
* be written again.
|
|
|
|
*/
|
|
|
|
if (LIST_EMPTY(&bp->b_dep) ||
|
|
|
|
softdep_move_dependencies(bp, newbp) == 0)
|
|
|
|
bundirty(bp);
|
|
|
|
#else
|
|
|
|
bundirty(bp);
|
2010-09-17 09:14:40 +00:00
|
|
|
#endif
|
2005-02-08 20:29:10 +00:00
|
|
|
|
|
|
|
/*
|
2013-04-06 22:21:23 +00:00
|
|
|
* Initiate write on the copy, release the original. The
|
|
|
|
* BKGRDINPROG flag prevents it from going away until
|
2018-02-06 00:19:46 +00:00
|
|
|
* the background write completes. We have to recalculate
|
|
|
|
* its check hash in case the buffer gets freed and then
|
|
|
|
* reconstituted from the buffer cache during a later read.
|
2005-02-08 20:29:10 +00:00
|
|
|
*/
|
		if ((bp->b_xflags & BX_CYLGRP) != 0) {
			cgp = (struct cg *)bp->b_data;
			cgp->cg_ckhash = 0;
			cgp->cg_ckhash =
			    calculate_crc32c(~0L, bp->b_data, bp->b_bcount);
		}
		bqrelse(bp);
		bp = newbp;
	} else
		/* Mark the buffer clean */
		bundirty(bp);

	/* Let the normal bufwrite do the rest for us */
Fix two issues with bufdaemon, often causing the processes to hang in
the "nbufkv" sleep.
First, the ffs background cg block write requests a new buffer for
the shadow copy. When ffs_bufwrite() is called from the bufdaemon due
to a buffer shortage, requesting the buffer deadlocks the bufdaemon.
Introduce a new flag for getnewbuf(), GB_NOWAIT_BD, to request that
getblk() not block while allocating the buffer and return failure
instead. Add a flags argument to geteblk() to allow the flags to be
passed to getblk(). Do not repeat the getnewbuf() call from geteblk()
if buffer allocation failed and either GB_NOWAIT_BD is specified, or
geteblk() is called from bufdaemon (or its helper, see below). In
ffs_bufwrite(), fall back to a synchronous cg block write if shadow
block allocation failed.
Since r107847, buffer write assumes that the vnode owning the buffer is
locked. The second problem is that the buffer cache may accumulate many
buffers belonging to a limited number of vnodes. With such a workload,
quite often the threads that own the locks on those vnodes are trying to
read another block from the vnodes and, due to buffer cache
exhaustion, are asking bufdaemon for help. Bufdaemon is unable to make
any substantial progress because the vnodes are locked.
Allow the threads owning vnode locks to help the bufdaemon by doing
the flush pass over the buffer cache before getnewbuf() goes to
uninterruptible sleep. Move the flushing code from buf_daemon() to the new
helper function buf_do_flush(), which is called from getnewbuf(). The
number of buffers flushed by a single call to buf_do_flush() from
getnewbuf() is limited by the new sysctl vfs.flushbufqtarget. Prevent
recursive calls to buf_do_flush() by marking the bufdaemon and the threads
that temporarily help the bufdaemon with the TDP_BUFNEED flag.
In collaboration with: pho
Reviewed by: tegge (previous version)
Tested by: glebius, yandex ...
MFC after: 3 weeks
2009-03-16 15:39:46 +00:00
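The first fix's control flow, asking for the shadow buffer without sleeping and degrading to a synchronous write when the allocation fails, can be sketched as follows. All names here (try_alloc_shadow(), write_cg_block(), the counters) are hypothetical stand-ins, not kernel interfaces; the real code uses geteblk() with GB_NOWAIT_BD inside ffs_bufwrite().

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

static bool shadow_available;		/* simulates buffer-cache pressure */
static int background_writes, sync_writes;

/* Non-blocking allocation: return NULL instead of sleeping, the way
 * getblk() behaves when GB_NOWAIT_BD is passed. */
static void *
try_alloc_shadow(size_t size)
{
	return (shadow_available ? malloc(size) : NULL);
}

/* Write a cylinder-group block, preferring a background write through a
 * shadow copy but never sleeping for the shadow allocation. */
static void
write_cg_block(const void *data, size_t size)
{
	void *shadow;

	(void)data;
	if ((shadow = try_alloc_shadow(size)) == NULL) {
		/* No shadow under pressure: fall back to a sync write. */
		sync_writes++;
		return;
	}
	background_writes++;	/* shadow copy goes to the background path */
	free(shadow);
}
```

The key design point is that the caller holding scarce resources (the bufdaemon) never blocks on a second allocation; it takes the slower but deadlock-free synchronous path instead.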
normal_write:
	/*
	 * If we are writing a cylinder group, update its time.
	 */
	if ((bp->b_xflags & BX_CYLGRP) != 0) {
		cgp = (struct cg *)bp->b_data;
		cgp->cg_old_time = cgp->cg_time = time_second;
	}
	return (bufwrite(bp));
}

static void
ffs_geom_strategy(struct bufobj *bo, struct buf *bp)
{
	struct vnode *vp;
	struct buf *tbp;
	int error, nocopy;

	/*
	 * This is the bufobj strategy for the private VCHR vnodes
	 * used by FFS to access the underlying storage device.
	 * We override the default bufobj strategy and thus bypass
	 * VOP_STRATEGY() for these vnodes.
	 */
	vp = bo2vnode(bo);
	KASSERT(bp->b_vp == NULL || bp->b_vp->v_type != VCHR ||
	    bp->b_vp->v_rdev == NULL ||
	    bp->b_vp->v_rdev->si_mountpt == NULL ||
	    VFSTOUFS(bp->b_vp->v_rdev->si_mountpt) == NULL ||
	    vp == VFSTOUFS(bp->b_vp->v_rdev->si_mountpt)->um_devvp,
	    ("ffs_geom_strategy() with wrong vp"));
	if (bp->b_iocmd == BIO_WRITE) {
		if ((bp->b_flags & B_VALIDSUSPWRT) == 0 &&
		    bp->b_vp != NULL && bp->b_vp->v_mount != NULL &&
		    (bp->b_vp->v_mount->mnt_kern_flag & MNTK_SUSPENDED) != 0)
			panic("ffs_geom_strategy: bad I/O");
		nocopy = bp->b_flags & B_NOCOPY;
		bp->b_flags &= ~(B_VALIDSUSPWRT | B_NOCOPY);
		if ((vp->v_vflag & VV_COPYONWRITE) && nocopy == 0 &&
		    vp->v_rdev->si_snapdata != NULL) {
			if ((bp->b_flags & B_CLUSTER) != 0) {
				runningbufwakeup(bp);
				TAILQ_FOREACH(tbp, &bp->b_cluster.cluster_head,
				    b_cluster.cluster_entry) {
					error = ffs_copyonwrite(vp, tbp);
					if (error != 0 &&
					    error != EOPNOTSUPP) {
						bp->b_error = error;
						bp->b_ioflags |= BIO_ERROR;
						bp->b_flags &= ~B_BARRIER;
						bufdone(bp);
						return;
					}
				}
				bp->b_runningbufspace = bp->b_bufsize;
Adjust some variables (mostly related to the buffer cache) that hold
address-space sizes to be longs instead of ints. Specifically, the following
values are now longs: runningbufspace, bufspace, maxbufspace,
bufmallocspace, maxbufmallocspace, lobufspace, hibufspace, lorunningspace,
hirunningspace, maxswzone, maxbcache, and maxpipekva. Previously, a
relatively small number (~44000) of buffers set in kern.nbuf would result
in integer overflows, leading either to hangs or to bogus values of
hidirtybuffers and lodirtybuffers. Now one has to overflow a long to see
such problems. There was a check for an nbuf setting that would cause
overflows in the auto-tuning of nbuf. I've changed it to always check and
cap nbuf, but warn if a user-supplied tunable would cause overflow.
Note that this changes the ABI of several sysctls that are used by things
like top(1), etc., so any MFC would probably require some gross shims
to allow for that.
MFC after: 1 month
2009-03-09 19:35:20 +00:00
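The class of overflow being fixed is easy to demonstrate with fixed-width arithmetic. The 64 KiB-per-buffer figure below is purely an assumed example size, not a kernel constant; the point is that once nbuf times a per-buffer size exceeds INT32_MAX, an int-typed total goes bogus while a 64-bit type (long on LP64) stays correct.

```c
#include <assert.h>
#include <stdint.h>

/* Total buffer space computed two ways: wide like the post-fix longs,
 * narrow like the old int arithmetic.  The multiply is done unsigned to
 * sidestep signed-overflow UB; the cast back truncates the value. */
static int64_t
total_bufspace_wide(int32_t nbuf, int32_t bufsize)
{
	return ((int64_t)nbuf * bufsize);
}

static int32_t
total_bufspace_narrow(int32_t nbuf, int32_t bufsize)
{
	return ((int32_t)((uint32_t)nbuf * (uint32_t)bufsize));
}
```

This is why the fix also caps nbuf: even long arithmetic only moves the overflow point, so a user-supplied tunable still has to be range-checked.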
				atomic_add_long(&runningbufspace,
				    bp->b_runningbufspace);
			} else {
				error = ffs_copyonwrite(vp, bp);
				if (error != 0 && error != EOPNOTSUPP) {
					bp->b_error = error;
					bp->b_ioflags |= BIO_ERROR;
					bp->b_flags &= ~B_BARRIER;
					bufdone(bp);
					return;
				}
			}
		}
#ifdef SOFTUPDATES
		if ((bp->b_flags & B_CLUSTER) != 0) {
			TAILQ_FOREACH(tbp, &bp->b_cluster.cluster_head,
			    b_cluster.cluster_entry) {
				if (!LIST_EMPTY(&tbp->b_dep))
					buf_start(tbp);
			}
		} else {
			if (!LIST_EMPTY(&bp->b_dep))
				buf_start(bp);
		}
#endif
		/*
		 * Check for metadata that needs check-hashes and update them.
		 */
		switch (bp->b_xflags & BX_FSPRIV) {
		case BX_CYLGRP:
			((struct cg *)bp->b_data)->cg_ckhash = 0;
			((struct cg *)bp->b_data)->cg_ckhash =
			    calculate_crc32c(~0L, bp->b_data, bp->b_bcount);
			break;

		case BX_SUPERBLOCK:
		case BX_INODE:
		case BX_INDIR:
		case BX_DIR:
			printf("Check-hash write is unimplemented!!!\n");
			break;

		case 0:
			break;

		default:
			printf("multiple buffer types 0x%b\n",
			    (u_int)(bp->b_xflags & BX_FSPRIV),
			    PRINT_UFS_BUF_XFLAGS);
			break;
		}
	}
This commit enables a UFS filesystem to do a forcible unmount when
the underlying media fails or becomes inaccessible, for example
when a USB flash memory card hosting a UFS filesystem is unplugged.
The strategy for handling disk I/O errors when soft updates are
enabled is to stop writing to the disk of the affected file system
but continue to accept I/O requests and report that all future
writes by the file system to that disk actually succeed. Then
initiate an asynchronous forced unmount of the affected file system.
There are two cases for disk I/O errors:
- ENXIO, which means that this disk is gone and the lower layers
of the storage stack already guarantee that no future I/O to
this disk will succeed.
- EIO (or most other errors), which means that this particular
I/O request has failed but subsequent I/O requests to this
disk might still succeed.
For ENXIO, we can just clear the error and continue, because we
know that the file system cannot affect the on-disk state after we
see this error. For EIO or other errors, we arrange for the geom_vfs
layer to reject all future I/O requests with ENXIO just like is
done when the geom_vfs is orphaned. In both cases, the file system
code can just clear the error and proceed with the forcible unmount.
This new treatment of I/O errors is needed for writes of any buffer
that is involved in a dependency. Most dependencies are described
by a structure attached to the buffer's b_dep field. But some are
created and processed as a result of the completion of the dependencies
attached to the buffer.
Clearing of some dependencies requires a read. For example, if there
is a dependency that requires an inode to be written, the disk block
containing that inode must be read, the updated inode copied into
place in that buffer, and the buffer then written back to disk.
Often the needed buffer is already in memory and can be used. But
if it needs to be read from the disk, the read will fail, so we
fabricate a buffer full of zeroes and pretend that the read succeeded.
This zeroed buffer can be updated and written back to disk.
The only case where a buffer full of zeros causes the code to do
the wrong thing is when reading an inode buffer containing an inode
that still has an inode dependency in memory that will reinitialize
the effective link count (i_effnlink) based on the actual link count
(i_nlink) that we read. To handle this case we now store the i_nlink
value that we wrote in the inode dependency so that it can be
restored into the zeroed buffer, thus keeping the tracking of the
inode link count consistent.
Because applications depend on knowing when an attempt to write
their data to stable storage has failed, the fsync(2) and msync(2)
system calls need to return errors if data fails to be written to
stable storage. So these operations return ENXIO for every call
made on files in a file system where we have otherwise been ignoring
I/O errors.
Coauthored by: mckusick
Reviewed by: kib
Tested by: Peter Holm
Approved by: mckusick (mentor)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D24088
2020-05-25 23:47:31 +00:00
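The two-case error policy above can be captured in a small classifier. The enum and function names are illustrative only (the kernel implements this across the soft-updates and geom_vfs layers, not as one function); the errno semantics follow the log: ENXIO means the disk is gone and the error can simply be cleared, while anything else means future I/O must be made to fail with ENXIO before proceeding with the forced unmount.

```c
#include <assert.h>
#include <errno.h>

/* Illustrative disposition of a disk-write error under the policy above. */
enum io_disposition {
	IO_OK,			/* write succeeded */
	IO_DISK_GONE,		/* ENXIO: lower layers guarantee no future I/O */
	IO_POISON_GEOM		/* EIO etc.: have geom_vfs reject future I/O with ENXIO */
};

static enum io_disposition
classify_write_error(int error)
{
	if (error == 0)
		return (IO_OK);
	if (error == ENXIO)
		return (IO_DISK_GONE);
	return (IO_POISON_GEOM);
}
```

In either error case the caller then clears the error, reports success for future writes, and kicks off the asynchronous forced unmount.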
	if (bp->b_iocmd != BIO_READ && ffs_enxio_enable)
		bp->b_xflags |= BX_CVTENXIO;
	g_vfs_strategy(bo, bp);
}

int
ffs_own_mount(const struct mount *mp)
{

	if (mp->mnt_op == &ufs_vfsops)
		return (1);
	return (0);
}

#ifdef DDB
#ifdef SOFTUPDATES

/* defined in ffs_softdep.c */
extern void db_print_ffs(struct ufsmount *ump);

DB_SHOW_COMMAND(ffs, db_show_ffs)
{
	struct mount *mp;
	struct ufsmount *ump;

	if (have_addr) {
		ump = VFSTOUFS((struct mount *)addr);
		db_print_ffs(ump);
		return;
	}

	TAILQ_FOREACH(mp, &mountlist, mnt_list) {
		if (!strcmp(mp->mnt_stat.f_fstypename, ufs_vfsconf.vfc_name))
			db_print_ffs(VFSTOUFS(mp));
	}
}

#endif /* SOFTUPDATES */
#endif /* DDB */