1271 Commits

Author SHA1 Message Date
ngie
3913916991 Conditionalize code which defines sysctls per _KERNEL #ifdef guard
This resolves several issues when compiling libzpool (the userspace library),
i.e. -Wimplicit-function-declaration and -Wmissing-declarations warnings.

MFC after:	2 weeks
Reported by:	clang
Tested with:	clang 3.8.1, gcc 4.2.1, gcc 5.3.0
Sponsored by:	EMC / Isilon Storage Division
2016-07-31 06:34:49 +00:00
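
A minimal sketch of the pattern described above, assuming a hypothetical knob (the name and backing variable are illustrative, not taken from the actual change): the sysctl machinery is wrapped in the _KERNEL guard so userland libzpool builds never see it, while the backing variable remains visible to both.

```c
/* Illustrative pattern only: hypothetical knob, not the actual change. */
static int zfs_example_knob = 0;	/* visible to kernel and libzpool */

#ifdef _KERNEL
#include <sys/sysctl.h>

SYSCTL_DECL(_vfs_zfs);
SYSCTL_INT(_vfs_zfs, OID_AUTO, example_knob, CTLFLAG_RWTUN,
    &zfs_example_knob, 0, "Hypothetical example tunable");
#endif	/* _KERNEL */
```
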
markj
216e4ec567 Restore an ifdef that should not have been removed in r303535.
X-MFC-With:	r303535
2016-07-30 07:05:32 +00:00
markj
beccef16b6 Include fasttrap handling for DATAMODEL_ILP32 when compiling for amd64.
MFC after:	1 month
2016-07-30 03:11:53 +00:00
avg
c7af2e4d9d MFV r302645: 6878 Add scrub completion info to "zpool history"
illumos/illumos-gate@1825bc56e5

https://www.illumos.org/issues/6878
  Summary of changes:
      * Replace generic "scan done" message with "scan aborted, restarting",
        "scan cancelled", or "scan done"
      * Log number of errors using spa_get_errlog_size
      * Refactor scan restarting check into static function

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Nav Ravindranath <nav@delphix.com>
MFC after:	2 weeks
2016-07-14 11:53:39 +00:00
avg
2448cff2e4 MFV r302650: 6940 Cannot unlink directories when over quota
illumos/illumos-gate@99189164df

https://www.illumos.org/issues/6940
  Similar to #6334, but this time with empty directories:
  $ zfs create tank/quota
  $ zfs set quota=10M tank/quota
  $ zfs snapshot tank/quota@snap1
  $ zfs set mountpoint=/mnt/tank/quota tank/quota
  $ mkdir /mnt/tank/quota/dir # create an empty directory
  $ mkfile 11M /mnt/tank/quota/11M
  /mnt/tank/quota/11M: initialized 9830400 of 11534336 bytes: Disc quota exceeded
  $ rmdir /mnt/tank/quota/dir # now unlink the empty directory
  rmdir: directory "/mnt/tank/quota/dir": Disc quota exceeded
  From a user perspective, I would expect that ZFS is always able to remove files
  and directories even when the quota is exceeded.

Reviewed by: Dan McDonald <danmcd@omniti.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Simon Klinkert <simon.klinkert@gmail.com>
MFC after:	2 weeks
2016-07-14 11:51:01 +00:00
avg
2aecf10d2a MFV r302644: 6513 partially filled holes lose birth time
illumos/illumos-gate@8df0bcf0df

https://www.illumos.org/issues/6513
  If a ZFS object contains a hole at level one, and then a data block is created
  at level 0 underneath that l1 block, l0 holes will be created. However, these
  l0 holes do not have the birth time property set; as a result, incremental
  sends will not send those holes.
  The fix is to modify the dbuf_read code to fill in the birth time data.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Boris Protopopov <bprotopopov@hotmail.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Paul Dagnelie <pcd@delphix.com>
MFC after:	3 weeks
2016-07-14 11:48:42 +00:00
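
A plain-C toy (hypothetical names, not ZFS code) of why a hole block pointer with an unset birth time is invisible to an incremental send: the stream only includes blocks whose birth txg is newer than the snapshot being sent from.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy block pointer: only the fields relevant to the bug. */
struct toy_bp {
	uint64_t birth_txg;	/* 0 means the birth time was never filled in */
	bool	 is_hole;
};

/* An incremental send only emits blocks born after the "from" snapshot. */
static bool
send_includes(const struct toy_bp *bp, uint64_t from_txg)
{
	return (bp->birth_txg > from_txg);
}

int
main(void)
{
	uint64_t from_txg = 100;
	struct toy_bp hole_no_birth = { .birth_txg = 0, .is_hole = true };
	struct toy_bp hole_with_birth = { .birth_txg = 150, .is_hole = true };

	printf("hole, birth unset: %s\n",
	    send_includes(&hole_no_birth, from_txg) ? "sent" : "skipped");
	printf("hole, birth 150:   %s\n",
	    send_includes(&hole_with_birth, from_txg) ? "sent" : "skipped");
	return (0);
}
```
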
avg
f0c76721da MFV r302641: 6844 dnode_next_offset can detect fictional holes
illumos/illumos-gate@11ceac77ea

https://www.illumos.org/issues/6844
  dnode_next_offset is used in a variety of places to iterate over the holes or
  allocated blocks in a dnode. It operates under the premise that it can iterate
  over the blockpointers of a dnode in open context while holding only the
  dn_struct_rwlock as reader. Unfortunately, this premise does not hold.
  When we create the zio for a dbuf, we pass in the actual block pointer in the
  indirect block above that dbuf. When we later zero the bp in
  zio_write_compress, we are directly modifying the bp. The state of the bp is
  now inconsistent from the perspective of dnode_next_offset: the bp will appear
  to be a hole until zio_dva_allocate finally finishes filling it in. In the
  meantime, dnode_next_offset can detect a hole in the dnode when none exists.
  I was able to experimentally demonstrate this behavior with the following
  setup:
  1. Create a file with 1 million dbufs.
  2. Create a thread that randomly dirties L2 blocks by writing to the first L0
  block under them.
  3. Observe dnode_next_offset, waiting for it to skip over a hole in the middle
  of a file.
  4. Do dnode_next_offset in a loop until we skip over such a non-existent hole.
  The fix is to ensure that it is valid to iterate over the indirect blocks in a
  dnode while holding the dn_struct_rwlock by passing the zio a copy of the BP
  and updating the actual BP in dbuf_write_ready while holding the lock.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Boris Protopopov <bprotopopov@hotmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Alex Reece <alex@delphix.com>
MFC after:	3 weeks
2016-07-14 11:42:53 +00:00
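
A simplified, single-threaded C sketch (hypothetical names) of the difference the fix makes: if the write path zeroes the caller's block pointer in place, a concurrent dnode_next_offset-style reader can see the intermediate state as a hole; giving the write path a private copy keeps the shared block pointer consistent until it is updated under the lock.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy block pointer: a zeroed DVA is how a hole is recognized here. */
struct toy_bp {
	uint64_t dva;
	uint64_t birth;
};

static bool
looks_like_hole(const struct toy_bp *bp)
{
	return (bp->dva == 0);
}

int
main(void)
{
	/* The shared bp in the indirect block, as a reader would see it. */
	struct toy_bp shared = { .dva = 0xdeadbeef, .birth = 42 };
	struct toy_bp private;

	/* Old behavior: the write path zeroes the shared bp in place. */
	memset(&shared, 0, sizeof (shared));
	printf("in-place zeroing, reader sees a hole: %s\n",
	    looks_like_hole(&shared) ? "yes (fictional hole)" : "no");

	/* New behavior: the write path scribbles on a private copy; the
	 * shared bp is only updated once the new contents are final. */
	shared = (struct toy_bp){ .dva = 0xdeadbeef, .birth = 42 };
	private = shared;
	memset(&private, 0, sizeof (private));
	printf("copy zeroed instead, reader sees a hole: %s\n",
	    looks_like_hole(&shared) ? "yes" : "no (still consistent)");
	return (0);
}
```
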
avg
d6124d3695 MFV r302640: 6874 rollback and receive need to reset ZPL state to what's on disk
illumos/illumos-gate@1fdcbd00c9

https://www.illumos.org/issues/6874
  When we do a clone swap (caused by "zfs rollback" or "zfs receive"), the ZPL
  doesn't completely reload the state from the DMU; some values remain cached in
  the zfsvfs_t.
  steps to reproduce:
  ```
  #!/bin/bash -x
  zfs destroy -R test/fs
  zfs destroy -R test/recvd
  zfs create test/fs
  zfs snapshot test/fs@a
  zfs set userquota@$USER=1m test/fs
  zfs snapshot test/fs@b
  zfs send test/fs@a | zfs recv test/recvd
  zfs send -i @a test/fs@b | zfs recv test/recvd
  zfs userspace test/recvd
  # 1. should show 1m quota
  dd if=/dev/urandom of=/test/recvd/file bs=1k count=1024
  sync
  dd if=/dev/urandom of=/test/recvd/file2 bs=1k count=1024
  # 2. should fail with ENOSPC
  sync
  zfs unmount test/recvd
  zfs mount test/recvd
  zfs userspace test/recvd
  # 3. if bug above, now shows 1m quota
  dd if=/dev/urandom of=/test/recvd/file3 bs=1k count=1024
  # 4. if bug above, now fails with ENOSPC
  ```

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Garrett D'Amore <garrett@damore.org>
Author: Matthew Ahrens <mahrens@delphix.com>
MFC after:	3 weeks
2016-07-14 11:39:36 +00:00
avg
41b594a6e0 re-apply r299908: zfsctl_snapdir_lookup: clear VV_ROOT of snapshot's root
The change has been undone in r301275 on the assumption that it was no
longer required.  But that was incorrect, because in this case (and only
in this case) the snapshot root vnode is looked up before z_parent is
fixed up.

MFC after:	5 days
2016-07-13 15:16:51 +00:00
markj
a70869aa34 Avoid truncating the return value of DTrace predicates.
Predicates are DIF objects whose return value is compared with zero to
determine whether the corresponding probe body is to be executed. The return
value itself is the contents of a 64-bit DIF register, but it was being
truncated to an int before the comparison. This meant that a predicate such
as /0x100000000/ would evaluate to false.

Reported by:	rwatson
MFC after:	3 days
2016-07-09 22:41:21 +00:00
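
A self-contained C illustration of the truncation (not the DTrace code itself): narrowing the 64-bit register value to an int discards the upper 32 bits, so a value of 0x100000000 compares equal to zero.

```c
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t rval = 0x100000000ULL;	/* predicate's 64-bit return value */
	int truncated = (int)rval;	/* the old, buggy comparison path */

	printf("64-bit compare: %s\n", rval != 0 ? "true" : "false");
	printf("int compare:    %s\n", truncated != 0 ? "true" : "false");
	return (0);
}
```
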
smh
f10b37d27b Fix ZFS ARC min / max tunable
Because the initial ARC configuration has not been done yet and kmem
information is not available, we need to blindly set zfs_arc_max and
zfs_arc_min when they are configured via the tunables.

This fixes vfs.zfs.arc_(min|max) configuration via loader.conf broken by
r302265.

Approved by:	re(gjb)
MFC after:	1 week
2016-07-06 23:49:19 +00:00
mav
b281d573a9 Revert r299454 and r299448.
Those changes were found to confuse the FreeBSD libc ACL code, which
doesn't differentiate between ACLs for directories and files, and which
reports the ACLs of all directories created after those patches as
non-trivial.  On the other hand, these changes were considered wrong
from the POSIX and NFSv4 points of view.  Until further investigation
is done upstream, revert those changes locally in preparation for the
FreeBSD 11.0 release.

Approved by:	re (hrs)
2016-06-30 14:55:49 +00:00
smh
b24634094a Allow ZFS ARC min / max to be tuned at runtime
Prior to this change, ZFS ARC min / max could only be changed using
boot-time tunables; this change allows the values to be tuned at
runtime using the sysctls:
* vfs.zfs.arc_max
* vfs.zfs.arc_min

When adjusting the ZFS ARC minimum, the memory used will only reduce
to the new minimum under memory pressure.

Reviewed by:	allanjude
Approved by:	re (gjb)
MFC after:	2 weeks
Relnotes:	yes
Sponsored by:	Multiplay
Differential Revision:	https://reviews.freebsd.org/D5907
2016-06-29 07:55:45 +00:00
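
A hedged sketch of what a runtime-tunable sysctl handler with basic validation can look like (the handler name and checks are illustrative; this is not the actual vfs.zfs.arc_max handler):

```c
/*
 * Illustrative only: a sysctl(9)-style handler that accepts a new value
 * at runtime and rejects obviously bad input.  The real arc_max handler
 * performs its own ARC-specific validation.
 */
static int
sysctl_example_arc_max(SYSCTL_HANDLER_ARGS)
{
	uint64_t val;
	int err;

	val = zfs_arc_max;
	err = sysctl_handle_64(oidp, &val, 0, req);
	if (err != 0 || req->newptr == NULL)
		return (err);		/* read-only access, or copyin failed */
	if (val != 0 && val < zfs_arc_min)
		return (EINVAL);	/* don't allow max below min */
	zfs_arc_max = val;
	return (0);
}
```

With such a handler in place the value would be adjusted at runtime with something like `sysctl vfs.zfs.arc_max=...` rather than only via loader.conf.
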
avg
6efa34305b fix deadlock-prone code in getzfsvfs()
getzfsvfs() called vfs_busy() in the waiting mode while having a hold on
a pool (via a call to dmu_objset_hold).  In other words,
dp_config_rwlock was held in the shared mode while a thread could be
sleeping in vfs_busy().
The pool's txg sync thread needs to take dp_config_rwlock in the
exclusive mode for some actions, e.g., for executing sync tasks.  If the
sync thread gets blocked, then any thread waiting for its sync task to
get executed is also blocked, which, in turn, could mean that
vfs_busy() will keep waiting indefinitely.

The solution is to use vfs_ref() in the locked section and to call
vfs_busy() only after dropping other locks.
Note that a reference on a struct mount object does not prevent an
associated zfsvfs_t object from being destroyed.  So, we have to be
careful to operate only on the struct mount object until we successfully
vfs_busy it.

Approved by:	re (gjb)
MFC after:	2 weeks
2016-06-23 07:01:54 +00:00
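
A hedged sketch of the ordering described above (error handling and the zfsvfs lookup are elided; this is not the literal diff): only take a non-sleeping reference while the pool is held, drop the pool, and call the potentially sleeping vfs_busy() afterwards.

```c
/* Sketch only: error handling and the zfsvfs/vfsp lookup are elided. */
error = dmu_objset_hold(dsname, FTAG, &os);	/* holds dp_config_rwlock */
vfsp = zfsvfs->z_vfs;		/* zfsvfs derived from 'os' (elided) */
vfs_ref(vfsp);			/* cheap, never sleeps */
dmu_objset_rele(os, FTAG);	/* drop pool locks before we may sleep */

error = vfs_busy(vfsp, 0);	/* may wait; no pool locks held here */
vfs_rel(vfsp);			/* drop the temporary reference; on success
				 * the busy count keeps the mount around */
```
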
asomers
8abfab9cb0 Fix uninitialized variable from r300881
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c
	Initialize needs_update in vdev_geom_set_physpath

PR:		210409
Reported by:	kp
Reviewed by:	kp
Approved by:	re (hrs)
MFC after:	4 weeks
X-MFC-With:	300881
Sponsored by:	Spectra Logic Corp
2016-06-21 15:27:16 +00:00
kib
895e3e7c19 Fix gcc build.
Reported and tested by:	swills
Sponsored by:	The FreeBSD Foundation
Approved by:	re (gjb)
2016-06-18 20:20:00 +00:00
kib
f2d255a0a1 Use vnlru_free(9) to implement dnlc_reduce_cache().
This apparently puts ARC back under the limits after the vnode pressure
rework in r291244, in particular due to the kmem exhaustion.

Based on patch by:	mckusick
Reviewed by:	avg, mckusick
Tested by:	allanjude, madpilot
Sponsored by:	The FreeBSD Foundation
Approved by:	re (gjb)
2016-06-17 17:34:28 +00:00
avg
542bc13e58 l2arc: reset b_tmp_cdata to NULL in the case of unset b_daddr
The change is in arc_buf_l2_cdata_free().
Without this we can trip the assertion in arc_hdr_realloc()
if INVARIANTS option is enabled.

Approved by:	re (kib)
MFC after:	1 week
2016-06-13 18:39:13 +00:00
avg
c862120336 zfs_vptocnp: check for an invalid znode
... which can arise after a receive or rollback followed by
a failed zfs_rezget().

Approved by:	re (kib)
MFC after:	1 week
2016-06-13 10:53:34 +00:00
avg
3184a6cb73 zfs: set VROOT / VV_ROOT consistently and in a single place
This is a followup to r300131.

A filesystem's root vnode can be reached not only through VFS_ROOT(), but
by other means as well.  For example, via a dot-dot lookup.
Also, a root vnode can get reclaimed and then re-created.  For these
reasons it was insufficient to clear VV_ROOT flag from a root vnode of a
snapshot mounted under .zfs in zfsctl_snapdir_lookup().

So, now we set the flag in zfs_znode_sa_init() only if a vnode
represents the root of a filesystem or a standalone snapshot.
That is, the flag is not set for snapshots mounted under .zfs.

MFC after:	2 weeks
2016-06-03 14:37:18 +00:00
avg
3ad48fe4cf zfs_root: fix a potential root vnode reference leak
It could happen in an unlikely case that we fail to lock the root vnode
with requested flags (which appear to never include LK_NOWAIT).

MFC after:	1 week
2016-06-03 14:22:12 +00:00
asomers
10478a3feb Improve the English in a comment
sys/cddl/contrib/opensolaris/uts/common/sys/acl.h:
	Improve the English in a comment.  No functional changes.

Submitted by:	gibbs
MFC after:	4 weeks
Sponsored by:	Spectra Logic Corp
2016-06-01 22:21:42 +00:00
allanjude
8cebbf47be Connect the SHA-512t256 and Skein hashing algorithms to ZFS
Support for the new hashing algorithms in ZFS was introduced in r289422.
However, it was disconnected because FreeBSD lacked implementations of
SHA-512 (truncated to 256 bits) and Skein.

These implementations were introduced in r300921 and r300966 respectively.

This commit connects them to ZFS and enables these new checksum algorithms.

These new algorithms are not supported by the boot blocks, so do not use them
on your root dataset if you boot from ZFS.

Relnotes:	yes
Sponsored by:	ScaleEngine Inc.
2016-05-31 04:12:14 +00:00
bdrewery
996c4295db Avoid more literal-suffix errors with C++11 2016-05-29 00:40:29 +00:00
asomers
442baa5184 zfsd(8), the ZFS fault management daemon
Add zfsd, which deals with hard drive faults in ZFS pools.  It manages
hot spares and replacements in drive slots that publish physical paths.

cddl/usr.sbin/zfsd
	Add zfsd(8) and its unit tests

cddl/usr.sbin/Makefile
	Add zfsd to the build

lib/libdevdctl
	A C++ library that helps devd clients process events

lib/Makefile
share/mk/bsd.libnames.mk
share/mk/src.libnames.mk
	Add libdevdctl to the build. It's a private library, unusable by
	out-of-tree software.

etc/defaults/rc.conf
	By default, set zfsd_enable to NO

etc/mtree/BSD.include.dist
	Add a directory for libdevdctl's include files

etc/mtree/BSD.tests.dist
	Add a directory for zfsd's unit tests

etc/mtree/BSD.var.dist
	Add /var/db/zfsd/cases, where zfsd stores case files while it's shut
	down.

etc/rc.d/Makefile
etc/rc.d/zfsd
	Add zfsd's rc script

sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c
	Fix the resource.fs.zfs.statechange message. It had a number of
	problems:

	It was only being emitted on a transition to the HEALTHY state.
	That made it impossible for zfsd to take actions based on drives
	getting sicker.

	It compared the new state to vdev_prevstate, which is the state that
	the vdev had the last time it was opened.  That doesn't make sense,
	because a vdev can change state multiple times without being
	reopened.

	vdev_set_state contains logic that will change the device's new
	state based on various conditions.  However, the statechange event
	was being posted _before_ that logic took effect.  Now it's being
	posted after.

Submitted by:	gibbs, asomers, mav, allanjude
Reviewed by:	mav, delphij
Relnotes:	yes
Sponsored by:	Spectra Logic Corp, iX Systems
Differential Revision:	https://reviews.freebsd.org/D6564
2016-05-28 17:43:40 +00:00
ngie
318ccd21e7 Fix up r300870
The sys/types.h fix I proposed was only tested with zfs(4), not with
libzpool, which is where the build failure actually existed

Remove vm/vm_pageout.h from arc.c and zfs_vnops.c because they're both
unneeded

MFC after: 1 week
X-MFC with: r300865, r300870
In collaboration with: kib
Submitted by: alc
Sponsored by: EMC / Isilon Storage Division
2016-05-27 22:56:00 +00:00
asomers
6836baf662 Avoid issuing spa config updates for physical path when not necessary
ZFS's configuration needs to be updated whenever the physical path for a
device changes, but not when a new device is introduced. This is because new
devices necessarily cause config updates, but only if they are actually
accepted into the pool.

sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c
	Split vdev_geom_set_physpath out of vdev_geom_attrchanged.  When
	setting the vdev's physical path, only request a config update if
	the physical path has changed.  Don't request it when opening a
	device for the first time, because the config sync will happen
	anyway upstack.

sys/geom/geom_dev.c
	Split g_dev_set_physpath and g_dev_set_media out of
	g_dev_attrchanged

Submitted by:	will, asomers
MFC after:	4 weeks
Sponsored by:	Spectra Logic Corp
Differential Revision:	https://reviews.freebsd.org/D6428
2016-05-27 22:32:44 +00:00
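
A minimal plain-C sketch of the compare-before-updating idea (names and paths hypothetical): the configuration update is requested only when a previously recorded physical path actually changes, not on the first open.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy vdev: just the field this commit is about. */
struct toy_vdev {
	char *physpath;
};

/* Returns true if the caller should request a config update. */
static bool
set_physpath(struct toy_vdev *vd, const char *newpath)
{
	bool had_path = (vd->physpath != NULL);
	bool changed = (!had_path || strcmp(vd->physpath, newpath) != 0);

	if (changed) {
		free(vd->physpath);
		vd->physpath = strdup(newpath);
	}
	/* On the first open the config sync happens upstack anyway. */
	return (changed && had_path);
}

int
main(void)
{
	struct toy_vdev vd = { .physpath = NULL };

	printf("first open: update=%d\n", set_physpath(&vd, "enc@0/slot1"));
	printf("same path:  update=%d\n", set_physpath(&vd, "enc@0/slot1"));
	printf("moved:      update=%d\n", set_physpath(&vd, "enc@0/slot7"));
	free(vd.physpath);
	return (0);
}
```
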
ngie
0dd69e35ff Unbreak the zfs(4) build
vm/vm_pageout.h grew a dependency on the bool typedef in r300865

arc.c didn't include sys/types.h, which provides the definition of the typedef.

Other items (ofed, drm2) might need to be chased for this commit.

X-MFC with: r300865
MFC after: 1 week
Pointyhat to: alc
Sponsored by: EMC / Isilon Storage Division
2016-05-27 20:33:38 +00:00
br
501a9d9525 Add initial DTrace support for RISC-V.
Sponsored by:	DARPA, AFRL
Sponsored by:	HEIF5
2016-05-24 16:41:37 +00:00
avg
109635ff68 add vop_print methods to vnode operations of various zfsctl node types
This should help with diagnostics of zfsctl problems.

MFC after:	2 weeks
2016-05-18 13:21:29 +00:00
avg
3f1a1cd674 move zfsctl_freebsd_root_lookup right next to zfsctl_root_lookup
That makes it easier to reason about the code.

MFC after:	5 weeks
2016-05-18 08:29:39 +00:00
avg
d03d8087fa zfsctl_common_fid: remove redundant assignment
"Reinterpret cast" to zfid_short_t and assignment of zf_len
do the job already.

MFC after:	1 week
2016-05-18 08:26:09 +00:00
avg
196a4ff1d2 zfsctl: tighten an assertion and remove an unused definition
There are only two entries under .zfs and 'shares' has an ID of a
special persistent object in its filesystem.

MFC after:	1 week
2016-05-18 08:23:39 +00:00
avg
7553ece82a zfs_root: no need to set the root flag here
That was both redundant, as zfs_znode_sa_init() already does the job, and
insufficient, as the root vnode can be reached via other means.

MFC after:	1 week
2016-05-18 08:19:41 +00:00
avg
a98987454e zfsctl_freebsd_root_lookup: gfs_vop_lookup may return a doomed vnode
gfs code is (almost) completely agnostic of FreeBSD VFS locking, so it
does not handle doomed but not yet dead vnodes and may return them.
Check for those vnodes here and retry a lookup.
Note that ZFS and gfs have additional protections that ensure that a
parent vnode of the current vnode is never doomed.

The fixed problem is an occasional failure to look up the 'snapshot' or
'shares' directories under .zfs.

Note that for the above reason all uses of zfsctl_root_lookup() would be
better replaced with VOP_LOOKUP().

MFC after:	5 weeks
2016-05-18 08:02:49 +00:00
asomers
1ca5fbab55 Speed up vdev_geom_open_by_guids
Speedup is hard to measure because the only time vdev_geom_open_by_guids
gets called on many drives at the same time is during boot. But with
vdev_geom_open hacked to always call vdev_geom_open_by_guids, operations
like "zpool create" speed up by 65%.

sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c

	* Read all of a vdev's labels in parallel instead of sequentially.
	* In vdev_geom_read_config, don't read the entire label, including
	  the uberblock.  That's a waste of RAM.  Just read the vdev config
	  nvlist.  Reduces the IO and RAM involved with tasting from 1MB to
	  448KB.

Reviewed by:	avg
MFC after:	4 weeks
Sponsored by:	Spectra Logic Corp
Differential Revision:	https://reviews.freebsd.org/D6153
2016-05-17 15:17:23 +00:00
avg
a288444e8c zfs_ioc_rename: fix a reversed condition
FreeBSD zfs_ioc_rename() has an option, not present upstream, that
allows renaming snapshots without unmounting them first.  I am not sure
what the rationale for that option is, but its actual behavior was the
opposite of the intended one.  That is, by default the snapshots were
not unmounted.
The option was introduced as part of a large update from upstream in
r248498.

One of the consequences was a havoc under .zfs/snapshot after the rename.
The snapshots got new names but were mounted on top of directories with
old names, so readdir would list the new names, but lookup would still
find the old mounts.

PR:		209093
Reported by:	Frédéric VANNIÈRE <f.vanniere@planet-work.com>
MFC after:	5 days
2016-05-17 07:56:05 +00:00
avg
d48f40647d do not destroy 'snapdir' when it becomes inactive
That was just wrong.  In fact, we can safely keep this static entry when
it's inactive.
Now the destructive action is moved to the reclaim method and the
function is renamed from zfsctl_snapdir_inactive() to
zfsctl_snapdir_reclaim().

Also, we can use gfs_vop_reclaim() instead of gfs_dir_inactive() +
kmem_free().

Lastly, we can just assert that the node does not have any children when
it is reclaimed, even on a forced unmount.  That's because zfs_umount()
does an extra vflush() pass which should destroy all snapshot-mountpoint
vnodes that are the snapdir's children.

MFC after:	5 weeks
2016-05-16 15:48:56 +00:00
avg
1b6b612801 try to recycle "snap" vnodes as soon as possible
Those vnodes should not linger.  "Stale" nodes may get out of
synchronization with the actual snapshots, for example if we destroy a
snapshot and create a new one with the same name, or when we rename a
snapshot.

While there fix the argument type for zfsctl_snapshot_reclaim().
Also, its original argument can be passed to gfs_vop_reclaim() directly.

Bug 209093 could be related although I have not specifically verified
that.  Referencing just in case.

PR:		209093
MFC after:	5 weeks
2016-05-16 15:37:41 +00:00
avg
77ee692358 fix locking in zfsctl_root_lookup
Dropping the root vnode's lock after VFS_ROOT() didn't really change
the fact that we acquired that lock while holding the lock of its
child, .zfs, while performing the operation.
So, directly use zfs_zget() to get the root vnode.

While there simplify the code in zfsctl_freebsd_root_lookup.
We know that .zfs is always exclusively locked.
We know that there is already a reference on *vpp, so no need for an
extra one.
Account for the fact that .. lookup may ask for a different lock type,
not necessarily LK_EXCLUSIVE.  And handle a possible failure to acquire
the lock given the lock flags.

MFC after:	5 weeks
2016-05-16 15:28:39 +00:00
avg
fd1f834a75 gfs_lookup_dot() does not have to acquire any locks
In fact, that was dangerous.  For example, zfsctl_snapshot_reclaim()
calls gfs_dir_lookup() on the ".." path and that ends up calling
gfs_lookup_dot(), which violated the locking order by acquiring the
parent directory's vnode lock after the child's vnode lock.

Also, the previous behavior was inconsistent as gfs_dir_lookup()
returned a locked vnode for . and .. lookups, but not for any other.

Now gfs_lookup_dot() just references a resulting vnode and the locking
is done in its consumers, where necessary.
Note that we do not enable shared locking support for any gfs / zfsctl
vnodes.

This commit partially reverts r273641.

MFC after:	5 weeks
2016-05-16 15:13:16 +00:00
avg
a87f62a9c2 avoid deadlock between zfsctl_snapdir_lookup and zfsctl_snapshot_reclaim
The former acquired a snapshot vnode lock while holding sd_lock, while
the latter does the opposite.

The solution is to drop sd_lock before acquiring the vnode lock.  That
should be okay as we are still holding a lock on the 'snapshot'
directory in the exclusive mode.  That lock ensures that there are no
concurrent lookups in the directory and thus no concurrent mount attempts.

But now we have to account for the possibility that the snap vnode
might get reclaimed after we drop sd_lock and before we can get
the vnode lock.  So, check for that case and retry.

MFC after:	5 weeks
2016-05-16 15:03:52 +00:00
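
A hedged pseudo-C sketch of the drop-and-retry approach (the helper name and details are hypothetical, not the actual zfsctl code): release sd_lock before sleeping on the snapshot vnode lock, then detect a reclaim in the window and start over.

```c
/* Sketch only: error handling and the real lookup logic are elided. */
again:
	mutex_enter(&sdp->sd_lock);
	vp = lookup_mounted_snapshot(sdp, name);	/* hypothetical helper */
	VN_HOLD(vp);				/* keep the vnode from going away */
	mutex_exit(&sdp->sd_lock);		/* avoid the lock-order reversal */

	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
	if ((vp->v_iflag & VI_DOOMED) != 0) {	/* reclaimed while unlocked? */
		VOP_UNLOCK(vp, 0);
		vrele(vp);
		goto again;			/* retry under sd_lock */
	}
```
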
avg
88211e07b7 fix a vnode reference leak caused by illumos compat traverse()
This commit partially reverts r273641 which introduced the leak.
It did so to accommodate some consumers of traverse() that expected
the starting vnode to stay as-is.  But that introduced the leak in the
case when a mounted filesystem was found and its root vnode was
returned.

r299914 removed the troublesome consumers and now there is no reason to
keep the starting vnode.  So, now the new rules are:
- if there is no mounted filesystem, then nothing is changed
- otherwise the starting vnode is always released
- the root vnode of the mounted filesystem is returned locked and
  referenced in the case of success

MFC after:	5 weeks
X-MFC after:	r299914
2016-05-16 12:15:19 +00:00
avg
1f6aa41ece fix up r299902: mount_snapshot requires that the covered vnode is locked
Previously that was not strictly enforced.

MFC after:	4 weeks
X-MFC with:	r299902
2016-05-16 11:48:43 +00:00
avg
bd8e66c093 zfsctl_ops_snapshot: remove methods that should never be called
We pretend that snapshots mounted under .zfs are part of the original
filesystem and we try very hard to hide vnodes on top of which the snapshots
are mounted.  Given that, I believe that the removed operations should
never be called.  They might have been called previously because
of issues fixed in r299906, r299908 and r299913.

MFC after:	5 weeks
2016-05-16 07:24:30 +00:00
avg
1c1d616948 zfsctl_snapdir_lookup: always clear VV_ROOT flag of snapshot's root
Previously we did that only if the snapshot was mounted earlier, its
root vnode got recycled and then we accessed it again.
We never cleared the flag for a freshly mounted snapshot.

That was very inconsistent and probably a source of some bugs.
Or maybe that painted over some bugs which might get revealed now.

We should consistently clear the flag because we try very hard to
pretend that snapshots auto-mounted under .zfs are part of their
original filesystem.  In other words, we try to hide the fact that they
are different filesystems / mountpoints.

MFC after:	5 weeks
2016-05-16 06:49:09 +00:00
avg
689e078ae4 add zfs_vptocnp with special handling for snapshots under .zfs
The logic is similar to that already present in zfs_dirlook() to handle
a dot-dot lookup on a root vnode of a snapshot mounted under
.zfs/snapshot/.
illumos does not have an equivalent of vop_vptocnp, so there only the
lookup had to be patched up.

MFC after:	4 weeks
2016-05-16 06:40:51 +00:00
avg
c6df178a61 zfsctl: fix several problems with reference counts
* Remove excessive references on a snapshot mountpoint vnode.
  zfsctl_snapdir_lookup() called VN_HOLD() on a vnode returned from
  zfsctl_snapshot_mknode() and the latter also had a call to VN_HOLD()
  on the same vnode.
  On top of that gfs_dir_create() already returns the vnode with the
  use count of 1 (set in getnewvnode).
  So there were 3 references on the vnode.

* mount_snapshot() should keep a reference to a covered vnode.
  That reference is owned by the mountpoint (mounted snapshot filesystem).

* Remove cryptic manipulations of a covered vnode in zfs_umount().
  FreeBSD dounmount() already does the right thing and releases the covered
  vnode.

PR:		207464
Reported by:	dustinwenz@ebureau.com
Tested by:	Howard Powell <hpowell@lighthouseinstruments.com>
MFC after:	3 weeks
2016-05-16 06:24:04 +00:00
mav
5be2c009e5 MFV r299453: 6765 zfs_zaccess_delete() comments do not accurately reflect
delete permissions for ACLs

Reviewed by: Gordon Ross <gwr@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Author: Kevin Crowe <kevin.crowe@nexenta.com>

openzfs/openzfs@a40149b935
2016-05-11 13:53:29 +00:00
mav
5406e47e28 MFV r299451: 6764 zfs issues with inheritance flags during chmod(2) with
aclmode=passthrough

Reviewed by: Gordon Ross <gwr@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Author: Albert Lee <trisk@nexenta.com>

openzfs/openzfs@1bcf0d240b
2016-05-11 13:50:34 +00:00