illumos/illumos-gate@ad2760acbd
https://www.illumos.org/issues/7571
ZFS displays the default value for non-present readonly numeric (and index)
properties. However, these properties' default values are not meaningful.
Instead, we should display a "-", indicating that they are not present. For
example, on a version-12 pool, the usedby* properties are not available, but
they show up as the incorrect value "0":
# zfs get all test12
...
test12 usedbysnapshots 0 -
test12 usedbydataset 0 -
test12 usedbychildren 0 -
test12 usedbyrefreservation 0 -
We will be introducing more sometimes-present numeric readonly properties, so
it would be nice to fix this.
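A minimal sketch of the desired display rule; the function and field names are
illustrative only, not the actual libzfs property-printing code:

#include <stdio.h>
#include <stdbool.h>
#include <inttypes.h>

/* A readonly numeric property that is not actually present on the dataset
 * prints "-" rather than its (meaningless) default value. */
static void
print_numeric_prop(const char *name, bool present, uint64_t value)
{
	if (!present)
		(void) printf("%-24s -\n", name);
	else
		(void) printf("%-24s %" PRIu64 "\n", name, value);
}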
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
illumos/illumos-gate@09c9e6dc9b
https://www.illumos.org/issues/7542
libshare keeps a cached copy of the sharetab listing in memory, which can
become out of date if shares are destroyed or created while leaving a libzfs
handle open. This results in a spurious unmounting failure when an NFS share
exists but isn't in the stale libshare cache.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Amdur <matt.amdur@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Chris Williamson <chris.williamson@delphix.com>
illumos/illumos-gate@873c4903a5
https://www.illumos.org/issues/7336
We can run into a problem where we call into zfs_mount, which in turn calls
is_dir_empty, which opens the directory to try and make sure it's empty. The
issue with the current approach is that it holds the directory open while it
traverses it with readdir, which, due to subtle interaction with the Java JVM,
vfork, and exec can cause a tricky race condition resulting in zfs_mount
failures.
The approach to resolving the issue in this patch is to drop the usage of
readdir altogether, and instead rely on the fact that ZFS stores the number of
entries contained in a directory using the st_size field of the stat structure.
Thus, if the directory in question is a ZFS directory, we can check whether
it's empty by calling stat() and inspecting the st_size field of the structure
returned.
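A minimal userland sketch of that check; the helper name is illustrative
rather than the exact libzfs function, and it assumes (per the above) that an
empty ZFS directory reports an st_size of 2, one entry each for "." and "..":

#include <sys/types.h>
#include <sys/stat.h>
#include <stdbool.h>

static bool
dir_is_empty_by_stat(const char *dirname)
{
	struct stat st;

	/* Anything that is not a directory is handled by other checks. */
	if (lstat(dirname, &st) < 0 || !S_ISDIR(st.st_mode))
		return (true);

	/* An empty ZFS directory still contains the "." and ".." entries. */
	return (st.st_size <= 2);
}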
===============================================================================
The root cause appears to be an interesting race between vfork, exec, and
zfs_mount's usage of O_CLOEXEC when calling openat. Here's what is going on:
1. We call zfs_mount, and this in turn calls openat to check if the directory
is empty, which results in opening the directory we're trying to mount onto and
incrementing its v_count.
2. As we're in the middle of reading the directory, vfork is called by the JVM
and proceeds to exec the jspawnhelper utility. As a result of the vfork, we
take an additional hold on the directory, which increments v_count a second
time. The semantics of vfork mean the parent process will wait for the child
process to exit or exec before the parent can continue; at this point the
parent is in the middle of zfs_mount, reading the directory to determine if
it's empty or not.
3. The child process exec-ing jspawnhelper gets to the relvm call within
exec_args (which is called by exec_common). relvm is the function that releases
the parent process, allowing the parent to proceed. The problem is, at this
point of calling relvm, the child hasn't yet called close_exec which is
responsible for closing the file descriptors inherited from the parent process.
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Prakash Surya <prakash.surya@delphix.com>
illumos/illumos-gate@d420209d9c
https://www.illumos.org/issues/7233
This fixes a race where one thread is executing zfs_mount() while another
thread forks and execs. If the fork occurs while the directory is open, the
child process will inherit (but not necessarily close immediately) the open fd
for the directory, preventing the mount.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Alex Reece <alex@delphix.com>
illumos/illumos-gate@0e60744c98
https://www.illumos.org/issues/7280
zdb is very handy for diagnosing problems with a pool in a safe and
quick way. When a pool is in bad shape, we often want to disable some
fail-safes or adjust some tunables in order to open it. In the
kernel, this is done by changing public variables in mdb. The goal of
this feature is to add the same capability to zdb and ztest, so that
they can change libzpool tunables from the command line.
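A rough sketch of one way such a command-line tunable could be applied, by
looking the symbol up in the running binary with dlsym(); the parsing, typing,
and symbol handling in the actual zdb/ztest change may differ:

#include <dlfcn.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Set a 64-bit exported global variable named in "name=value" form. */
static int
set_global_var(char *arg)
{
	char *eq = strchr(arg, '=');

	if (eq == NULL)
		return (-1);
	*eq = '\0';

	/* Look up the symbol in the running program itself. */
	void *self = dlopen(NULL, RTLD_LAZY);
	uint64_t *var = (self == NULL) ? NULL : (uint64_t *)dlsym(self, arg);

	if (var == NULL) {
		(void) fprintf(stderr, "no such tunable: %s\n", arg);
		return (-1);
	}
	*var = strtoull(eq + 1, NULL, 0);
	return (0);
}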
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Pavel Zakharov <pavel.zakharov@delphix.com>
5697e03e6e
https://www.illumos.org/issues/8046
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Pedro Giffuni <pfg@freebsd.org>
illumos/illumos-gate@8363e80ae7
https://github.com/illumos/illumos-gate/commit/8363e80ae72609660f6090766ca8c2c18aa53f0
https://www.illumos.org/issues/7303
This change introduces a new weighting algorithm to improve metaslab selection.
The new weighting algorithm relies on the SPACEMAP_HISTOGRAM feature. As a result,
the metaslab weight now encodes the type of weighting algorithm used
(size-based vs segment-based).
This also introduces a new allocation tracing facility and two new dcmds to help
debug allocation problems. Each zio now contains a zio_alloc_list_t structure
that is populated as the zio goes through the allocation stage. Here's an
example of how to use the tracing facility:
> c5ec000::print zio_t io_alloc_list | ::walk list | ::metaslab_trace
MSID DVA ASIZE WEIGHT RESULT VDEV
- 0 400 0 NOT_ALLOCATABLE ztest.0a
- 0 400 0 NOT_ALLOCATABLE ztest.0a
- 0 400 0 ENOSPC ztest.0a
- 0 200 0 NOT_ALLOCATABLE ztest.0a
- 0 200 0 NOT_ALLOCATABLE ztest.0a
- 0 200 0 ENOSPC ztest.0a
1 0 400 1 x 8M 17b1a00 ztest.0a
> 1ff2400::print zio_t io_alloc_list | ::walk list | ::metaslab_trace
MSID DVA ASIZE WEIGHT RESULT VDEV
- 0 200 0 NOT_ALLOCATABLE mirror-2
- 0 200 0 NOT_ALLOCATABLE mirror-0
1 0 200 1 x 4M 112ae00 mirror-1
- 1 200 0 NOT_ALLOCATABLE mirror-2
- 1 200 0 NOT_ALLOCATABLE mirror-0
1 1 200 1 x 4M 112b000 mirror-1
- 2 200 0 NOT_ALLOCATABLE mirror-2
If the metaslab is using segment-based weighting then the WEIGHT column will
display the number of segments available in the bucket where the allocation
attempt was made.
Author: George Wilson <george.wilson@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Reviewed by: Chris Siden <christopher.siden@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <paul.dagnelie@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
illumos/illumos-gate@c079fa4d20
https://www.illumos.org/issues/6428
Scenario:
$ zfs create rpool/p
$ zfs set canmount=noauto rpool/p
$ zfs umount rpool/p
$ zfs create rpool/p/c
$ zfs get -r mounted,canmount rpool/p
NAME PROPERTY VALUE SOURCE
rpool/p mounted no -
rpool/p canmount noauto local
rpool/p/c mounted yes -
rpool/p/c canmount on default
In another shell ensure that rpool/p/c is in use, for example:
$ cd /rpool/p/c
Then:
$ zfs set canmount=off rpool/p
cannot unmount '/rpool/p/c': Device busy
But there is no reason to try to unmount rpool/p/c in this scenario.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
Author: Andriy Gapon <andriy.gapon@clusterhq.com>
illumos/illumos-gate@9185393f29
https://www.illumos.org/issues/6412
It seems that zfs receive -F -u would mount a received filesystem after
receiving a full stream if a destination filesystem already existed (and, thus,
got destroyed and re-created) and was mounted.
How to reproduce:
$ zfs create rpool/sandbox
$ zfs create rpool/sandbox/from
$ zfs create rpool/sandbox/to
$ zfs snap rpool/sandbox/from@snap
$ zfs send rpool/sandbox/from@snap | zfs recv -v -F -u rpool/sandbox/to
receiving full stream of rpool/sandbox/from@snap into rpool/sandbox/to@snap
received 41.7KB stream in 1 seconds (41.7KB/sec)
$ zfs get mounted rpool/sandbox/to
NAME PROPERTY VALUE SOURCE
rpool/tmp/sandbox/to mounted yes -
This behavior can be problematic if the mountpoint property changes, either
because it had a non-inherited value or because the stream contains properties
(having been generated with either -R or -p).
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Andriy Gapon <andriy.gapon@clusterhq.com>
illumos/illumos-gate@ee3c499ad1
https://www.illumos.org/issues/5752
dump_nvlist() is not aware of the boolean array value type:
bad config type 24 for "foobar"
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Will Andrews <will@freebsd.org>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Andriy Gapon <andriy.gapon@clusterhq.com>
illumos/illumos-gate@620f322510
https://www.illumos.org/issues/6051
Currently lzc_receive() requires that its snapname argument is a snapshot name
(contains '@').
zfs receive allows specifying just a dataset name and will try to deduce the
snapshot name from the stream.
I propose to allow lzc_receive() to do the same.
That seems to be quite easy to implement: it requires only a small amount of
logic and does not need any additional system calls or any additional data
from the stream.
The benefit is that the new behavior allows keeping the snapshot names the
same between the sender and receiver at zero cost, without needing to pass
the names out of band.
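A hedged usage sketch of the proposed behavior; the five-argument
lzc_receive() prototype shown here matches the libzfs_core of that era but
should be treated as an assumption, and lzc_init() is presumed to have been
called already:

#include <sys/types.h>
#include <libzfs_core.h>

/* Receive a full stream from stream_fd into "pool/dst"; with this change the
 * snapshot component of the name is deduced from the stream itself. */
static int
receive_into(const char *dataset, int stream_fd)
{
	return (lzc_receive(dataset, NULL, NULL, B_FALSE, stream_fd));
}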
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Andriy Gapon <avg@icyb.net.ua>
7298 printa() of multiple aggregations can fail for llquantize()
illumos/illumos-gate@0ddc0ebb74
Reviewed by: Patrick Mooney <patrick.mooney@joyent.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Reviewed by: Adam Leventhal <adam.leventhal@gmail.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Bryan Cantrill <bryan@joyent.com>
illumos/illumos-gate@c3bd3abd88
Add syntactic sugar to dtrace: "if" and "else" statements. The sugar is
baked down to standard dtrace features by adding additional clauses with
the appropriate predicates.
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Sebastien Roy <sebastien.roy@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Bryan Cantrill <bryan@joyent.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Matthew Ahrens <mahrens@delphix.com>
illumos/illumos-gate@759e89be35
https://www.illumos.org/issues/6447
I got a patch from someone who uses nvpair code outside of illumos. It fixes a
couple of gcc warnings/bugs for him.
1. silence uninitialized use warnings
2. add parentheses around assignment used as truth value
3. fix printf format specifier (ll is for integers only)
4. strstr, strspn, strcspn, and strcmp are declared in string.h, not
strings.h.
5. avoid scanning integer into boolean variable
Reviewed by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
Reviewed by: Andy Stormont <astormont@racktopsystems.com>
Reviewed by: Garrett D'Amore <garrett@damore.org>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Steve Dougherty <sdougherty@barracuda.com>
illumos/illumos-gate@9adfa60d48
https://www.illumos.org/issues/6314
Callers of dsl_dataset_name pass a buffer of size ZFS_MAXNAMELEN, but
dsl_dataset_name copies the dataset's name PLUS the snapshot name to it,
resulting in a maximum of 2 * ZFS_MAXNAMELEN + '@'.
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Matthew Ahrens <mahrens@delphix.com>
illumos/illumos-gate@f83b46baf9
https://www.illumos.org/issues/6872
We compile the zfs libraries with -Wno-uninitialized. We should remove
this. Change makefiles, fix new warnings, fix pbchk errors.
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Paul Dagnelie <pcd@delphix.com>
illumos/illumos-gate@8808ac5dae
https://www.illumos.org/issues/4521
zfstest is trying to execute evil "zfs unmount -a", which fails (fortunately,
as it would otherwise leave me with my ~ missing):
03:44:11.86 cannot unmount '/export/home/yuri': Device busy
cannot unmount '/export/home': Device busy
03:44:11.86 ERROR: /usr/sbin/zfs unmount -a exited 1
This affects, at least, zfs_mount_009_neg and zfs_mount_all_001_pos, both
failing on that step. The pool containing the /export/home hierarchy is
included in the KEEP variable, but it doesn't seem to affect anything here.
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Yuri Pankov <yuri.pankov@nexenta.com>
illumos/illumos-gate@6fdcb3d1c2
https://www.illumos.org/issues/5813
Let's pull in this patch from FreeBSD:
http://svnweb.freebsd.org/base?view=revision&revision=271764
zfs_setprop_error(): Handle errno value E2BIG.
This errno value is emitted by dsl_props_set_check() in
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_prop.c, and
is used to mean that the property value is too long. For the record,
the maximum length is ZAP_MAXVALUELEN, which is 8*1024 bytes.
Instead of claiming an unknown error (and abort()ing), provide
something more specific to the scenario involved. As far as I
can tell, E2BIG is not emitted for any other scenario.
MFC after: 1 week
Sponsored by: Spectra Logic
Affects: All ZFS versions starting 27 Feb 2009 (illumos ccba0801)
This change modified the value returned by
dsl_props_set_check(), so that it can distinguish between
a name that's too long and a value that's too long, but
libzfs was not updated accordingly.
MFSpectraBSD: r1051499 on 2014/03/28 11:07:59
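A small sketch of the error-mapping idea described above; the message text and
function name are illustrative, not the actual libzfs zfs_setprop_error()
wording:

#include <errno.h>
#include <stdio.h>

static void
report_setprop_error(const char *prop, int err)
{
	switch (err) {
	case E2BIG:
		/* dsl_props_set_check() uses E2BIG to mean the value exceeds
		 * ZAP_MAXVALUELEN (8 * 1024 bytes). */
		(void) fprintf(stderr, "property '%s': value is too long\n",
		    prop);
		break;
	default:
		(void) fprintf(stderr, "property '%s': unexpected error %d\n",
		    prop, err);
		break;
	}
}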
Reviewed by: Paul Dagnelie <paul.dagnelie@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Richard Elling <richard.elling@richardelling.com>
Approved by: Garrett D'Amore <garrett@damore.org>
Author: Will Andrews <will@freebsd.org>
illumos/illumos-gate@4cde22c299
https://www.illumos.org/issues/6873
lzc_destroy_snaps() returns an nvlist in errlist.
zfs_destroy_snaps_nvl() should nvlist_free() it before returning.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Chris Williamson <chris.williamson@delphix.com>
illumos/illumos-gate@20fea7a474
https://www.illumos.org/issues/6879
In libzfs_sendrecv, there's a typo:
case DRR_SPILL:
	if (byteswap) {
		drr->drr_u.drr_write.drr_length =
		    BSWAP_64(drr->drr_u.drr_spill.drr_length);
	}
Instead of drr_write.drr_length, we should be assigning the result of the
byteswap to drr_spill.drr_length.
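For reference, the corrected assignment described above would read:

case DRR_SPILL:
	if (byteswap) {
		drr->drr_u.drr_spill.drr_length =
		    BSWAP_64(drr->drr_u.drr_spill.drr_length);
	}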
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Dan Kimmel <dan.kimmel@delphix.com>
illumos/illumos-gate@4a20c933b1
https://www.illumos.org/issues/6111
If you create a zfs child folder, zfs send returns an error when a recursive
incremental send is done between two snapshots made prior to the folder
creation.
The problem can be reproduced with the following steps.
root@zfs:/# zfs create pool/test
root@zfs:/# zfs snapshot pool/test@snap1
root@zfs:/# zfs snapshot pool/test@snap2
root@zfs:/# zfs create pool/test/child
root@zfs:/# zfs send -R -I pool/test@snap1 pool/test@snap2 > /dev/null
WARNING: could not send pool/test/child@snap2: does not exist
WARNING: could not send pool/test/child@snap2: does not exist
root@zfs:/# echo $?
1
root@zfs:/# zfs snapshot -r pool/test@snap3
root@zfs:/# zfs send -R -I pool/test@snap1 pool/test@snap3 > /dev/null
root@zfs:/# echo $?
0
root@zfs:/# zfs send -R -I pool/test@snap2 pool/test@snap3 > /dev/null
root@zfs:/# echo $?
0
Since pool/test/child was created after snap2, zfs send should not expect snap2
to be in pool/test/child when doing a recursive send. It should compare the
creation time of the snapshot and each child folder to decide whether the
folder will be sent. The next incremental send between snap2 and snap3
would properly create the child folder and snap3, which first appears in the
child folder.
The problem is identical if '-i' is used instead of '-I'.
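A hedged sketch of that check; the comparison below captures only the idea,
with illustrative names, not the actual libzfs_sendrecv logic:

#include <stdbool.h>
#include <stdint.h>

/* A child dataset created after the "from" snapshot was taken cannot contain
 * that snapshot, so a recursive incremental send should skip it silently. */
static bool
child_can_have_fromsnap(uint64_t child_createtxg, uint64_t fromsnap_createtxg)
{
	return (child_createtxg <= fromsnap_createtxg);
}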
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Garrett D'Amore <garrett@damore.org>
Author: Alex Deiter <alex.deiter@nexenta.com>
illumos/illumos-gate@0d8fa8f8eb
https://www.illumos.org/issues/6902
pjd has authored and committed a patch on Jan 21, 2012 that substantially
speeds up zfs snapshot listing if requesting only the name property and
sorting by name.
In this special case, the snapshot properties do not need to be loaded. This
code has been adopted by zfsonlinux on May 29, 2012.
Commit message from pjd:
Dramatically optimize listing snapshots when user requests only snapshot
names and wants to sort them by name, ie. when executes:
# zfs list -t snapshot -o name -s name
Because only name is needed we don't have to read all snapshot properties.
Below you can find how long does it take to list 34509 snapshots from
a single disk pool before and after this change with cold and warm cache:
before:
# time zfs list -t snapshot -o name -s name > /dev/null
cold cache: 525s
warm cache: 218s
after:
# time zfs list -t snapshot -o name -s name > /dev/null
cold cache: 1.7s
warm cache: 1.1s
References:
http://svnweb.freebsd.org/base?view=revision&revision=230438
https://github.com/freebsd/freebsd/commit/8e3e9863
https://github.com/zfsonlinux/zfs/commit/0cee2406
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Pawel Dawidek <pjd@freebsd.org>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Garrett D'Amore <garrett@damore.org>
Author: Martin Matuska <martin@matuska.org>
illumos/illumos-gate@c971037baa
https://www.illumos.org/issues/6876
Calling dsl_dataset_name on a dataset with a 256 byte buffer is asking for
trouble. We should check every dataset on import, using a 1024 byte buffer and
checking each time to see if the dataset's new name is longer than 256 bytes.
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Paul Dagnelie <pcd@delphix.com>
illumos/illumos-gate@26455f9efc
https://www.illumos.org/issues/6052
At the moment, the type parameter of lzc_create() is of dmu_objset_type_t type.
That exposes an implementation detail and requires sys/fs/zfs.h to be included
in libzfs_core.h, creating unnecessary coupling between the libzfs_core
interface and ZFS internals.
I think that dmu_objset_type_t should be replaced with a libzfs_core
enumeration of supported dataset types.
For ABI reasons the new enumeration could be bit-compatible with
dmu_objset_type_t.
For example:
typedef enum {
	LZC_DST_ZFS = 2,
	LZC_DST_ZVOL
} lzc_dataset_type_t;
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Andriy Gapon <andriy.gapon@clusterhq.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Author: Will Andrews <will@firepipe.net>
Closes #83
Closes #32
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: George Wilson <george.wilson@delphix.com>
illumos/illumos-gate@41c6413cb5
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Reviewed by: Toomas Soome <tsoome@me.com>
Approved by: Matthew Ahrens <mahrens@delphix.com>
Author: Joshua M. Clulow <jmc@joyent.com>
illumos/illumos-gate@b211eb9181
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Gerhard Roethlin <git@the-color-black.net>
illumos/illumos-gate@cb605c4d8a
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Gordon Ross <gordon.ross@nexenta.com>
Author: Dan McDonald <danmcd@omniti.com>
illumos/illumos-gate@5878fad70d
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Albert Lee <trisk@omniti.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Alexander Eremin <a.eremin@nexenta.com>
illumos/illumos-gate@8430278980
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Joe Stein <joe.stein@delphix.com>
illumos/illumos-gate@e9316f7696
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Bayard Bell <buffer.g.overflow@gmail.com>
Reviewed by: Richard PALO <richard@NetBSD.org>
Reviewed by: Steven Hartland <killing@multiplay.co.uk>
Approved by: Rich Lowe <richlowe@richlowe.net>
Author: Chris Williamson <chris.williamson@delphix.com>
illumos/illumos-gate@30925561c2
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed by: Xin Li <delphij@freebsd.org>
Reviewed by: Arne Jansen <sensille@gmx.net>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Matthew Ahrens <mahrens@delphix.com>
illumos/illumos-gate@9c3fd1216f
illumos/illumos-gate@2bcf0248e9
https://www.illumos.org/issues/5692
We would like to expose the number of hole (sparse) blocks in a file. This
can be useful, for example, if you want to fill in the holes with some data;
knowing the number of holes in advance allows you to report progress on hole
filling. We could use SEEK_HOLE to do that, but it would be O(n), where n is
the number of holes present in the file.
Author: Max Grossman <max.grossman@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Boris Protopopov <bprotopopov@hotmail.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
5925 zfs receive -o origin=
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Author: Paul Dagnelie <pcd@delphix.com>
While running 'zfs recv' we noticed that every 128th 8K block required a
read. We were seeing that restore_write() was calling dmu_tx_hold_write()
and the indirect block was not cached. We should prefetch upcoming indirect
blocks to avoid having to go to disk and blocking the restore_write().
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Bayard Bell <buffer.g.overflow@gmail.com>
Approved by: Albert Lee <trisk@omniti.com>
Author: Matthew Ahrens <mahrens@delphix.com>
illumos/illumos-gate@98110f08fa
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Richard Elling <richard.elling@richardelling.com>
Approved by: Gordon Ross <gordon.ross@nexenta.com>
Author: Andrew Stormont <andyjstormont@gmail.com>
illumos/illumos-gate@bd0f709169
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Richard Elling <richard.elling@richardelling.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Prakash Surya <prakash.surya@delphix.com>
illumos/illumos-gate@244781f10d
Reviewed by: Michael Tsymbalyuk <mtzaurus@gmail.com>
Reviewed by: Steven Hartland <killing@multiplay.co.uk>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Jan Kryl <jan.kryl@nexenta.com>
illumos/illumos-gate@b7070b7dbc