Merge new ZFS features from illumos:

1644 add ZFS "clones" property
https://www.illumos.org/issues/1644

1645 add ZFS "written" and "written@..." properties
https://www.illumos.org/issues/1645

1646 "zfs send" should estimate size of stream
https://www.illumos.org/issues/1646

1647 "zfs destroy" should determine space reclaimed by destroying multiple
snapshots
https://www.illumos.org/issues/1647

1693 persistent 'comment' field for a zpool
https://www.illumos.org/issues/1693

1708 adjust size of zpool history data
https://www.illumos.org/issues/1708

1748 desire support for reguid in zfs
https://www.illumos.org/issues/1748

Obtained from:	illumos (changesets 13514, 13524, 13525)
MFC after:	1 month
Committed by:	Martin Matuska
Date:	2011-11-28 21:40:00 +00:00
Commit:	2f7f0f4112 (parent: a56569ba55)
Notes:	svn2git 2020-12-20 02:59:44 +00:00
	svn path=/head/; revision=228103
39 changed files with 2570 additions and 1414 deletions


@ -48,12 +48,16 @@
.Ar size volume
.Nm
.Cm destroy
.Op Fl rRf
.Op Fl fnpRrv
.Ar filesystem Ns | Ns Ar volume
.Nm
.Cm destroy
.Op Fl rRd
.Op Fl dnpRrv
.Sm off
.Ar snapshot
.Ns Op % Ns Ar snapname
.Ns Op , Ns Ar ...
.Sm on
.Nm
.Cm snapshot
.Op Fl r
@ -160,7 +164,7 @@
.Fl a | Ar filesystem Ns | Ns Ar mountpoint
.Nm
.Cm send
.Op Fl DvRp
.Op Fl DnPpRrv
.Op Fl i Ar snapshot | Fl I Ar snapshot
.Ar snapshot
.Nm
@ -487,6 +491,17 @@ The default value is
.Cm off .
.It Sy creation
The time this dataset was created.
.It Sy clones
For snapshots, this property is a comma-separated list of filesystems or
volumes which are clones of this snapshot. The clones'
.Sy origin
property is this snapshot. If the
.Sy clones
property is not empty, then this snapshot can not be destroyed (even with the
.Fl r
or
.Fl f
options).
.It Sy defer_destroy
This property is
.Cm on
@ -644,6 +659,28 @@ power of 2 from 512 bytes to 128 Kbytes is valid.
.Pp
This property can also be referred to by its shortened column name,
.Sy volblock .
.It Sy written
The amount of
.Sy referenced
space written to this dataset since the previous snapshot.
.It Sy written@ Ns Ar snapshot
The amount of
.Sy referenced
space written to this dataset since the specified snapshot. This is the space
that is referenced by this dataset but was not referenced by the specified
snapshot.
.Pp
The
.Ar snapshot
may be specified as a short snapshot name (just the part after the
.Sy @ Ns ),
in which case it will be interpreted as a snapshot in the same filesystem as
this dataset. The
.Ar snapshot
may be a full snapshot name
.Pq Em filesystem@snapshot ,
which for clones may be a snapshot in the origin's filesystem (or the origin of
the origin's filesystem, etc).
.El
.Pp
The following native properties can be used to change the behavior of a
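
To make the written@ semantics documented above concrete, here is a minimal standalone sketch (not part of this commit) that reads the property through the zfs_prop_get_written() interface this change adds to libzfs. The pool, dataset, and snapshot names are hypothetical; the short form "written@monday" resolves against the dataset's own filesystem, as the man page text describes.

#include <libzfs.h>
#include <stdio.h>

int
main(void)
{
	libzfs_handle_t *hdl = libzfs_init();
	zfs_handle_t *zhp;
	char buf[64];

	if (hdl == NULL)
		return (1);
	/* hypothetical dataset; short name resolves to tank/home@monday */
	zhp = zfs_open(hdl, "tank/home", ZFS_TYPE_FILESYSTEM);
	if (zhp != NULL) {
		if (zfs_prop_get_written(zhp, "written@monday", buf,
		    sizeof (buf), B_FALSE) == 0)
			(void) printf("written@monday = %s\n", buf);
		zfs_close(zhp);
	}
	libzfs_fini(hdl);
	return (0);
}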
@ -1403,7 +1440,7 @@ options.
.It Xo
.Nm
.Cm destroy
.Op Fl rRf
.Op Fl fnpRrv
.Ar filesystem Ns | Ns Ar volume
.Xc
.Pp
@ -1422,6 +1459,17 @@ Force an unmount of any file systems using the
.Qq Nm Cm unmount Fl f
command. This option has no effect on non-file systems or unmounted file
systems.
.It Fl n
Do a dry-run ("No-op") deletion. No data will be deleted. This is useful in
conjunction with the
.Fl v
or
.Fl p
flags to determine what data would be deleted.
.It Fl p
Print machine-parsable verbose information about the deleted data.
.It Fl v
Print verbose information about the deleted data.
.El
.Pp
Extreme care should be taken when applying either the
@ -1433,11 +1481,15 @@ behavior for mounted file systems in use.
.It Xo
.Nm
.Cm destroy
.Op Fl rRd
.Op Fl dnpRrv
.Sm off
.Ar snapshot
.Ns Op % Ns Ar snapname
.Ns Op , Ns Ar ...
.Sm on
.Xc
.Pp
The given snapshot is destroyed immediately if and only if the
The given snapshots are destroyed immediately if and only if the
.Qq Nm Cm destroy
command without the
.Fl d
@ -1445,15 +1497,41 @@ option would have destroyed it. Such immediate destruction would occur, for
example, if the snapshot had no clones and the user-initiated reference count
were zero.
.Pp
If the snapshot does not qualify for immediate destruction, it is marked for
If a snapshot does not qualify for immediate destruction, it is marked for
deferred deletion. In this state, it exists as a usable, visible snapshot until
both of the preconditions listed above are met, at which point it is destroyed.
.Pp
An inclusive range of snapshots may be specified by separating the
first and last snapshots with a percent sign
.Pq Sy % .
The first and/or last snapshots may be left blank, in which case the
filesystem's oldest or newest snapshot will be implied.
.Pp
Multiple snapshots
(or ranges of snapshots) of the same filesystem or volume may be specified
in a comma-separated list of snapshots.
Only the snapshot's short name (the
part after the
.Sy @ )
should be specified when using a range or comma-separated list to identify
multiple snapshots.
.Bl -tag -width indent
.It Fl r
Destroy (or mark for deferred deletion) all snapshots with this name in
descendent file systems.
.It Fl R
Recursively destroy all dependents.
.It Fl n
Do a dry-run ("No-op") deletion. No data will be deleted. This is useful in
conjunction with the
.Fl v
or
.Fl p
flags to determine what data would be deleted.
.It Fl p
Print machine-parsable verbose information about the deleted data.
.It Fl v
Print verbose information about the deleted data.
.It Fl d
Defer snapshot deletion.
.El
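
The dry-run accounting behind the -n/-v/-p options above is implemented with the new zfs_get_snapused_int() library call, which sums the space an inclusive snapshot range would reclaim. A standalone sketch with hypothetical names (hdl is a libzfs_init() handle), reporting the same figure as "zfs destroy -nv tank/fs@a%c":

#include <libzfs.h>
#include <stdio.h>

static int
print_range_reclaim(libzfs_handle_t *hdl)
{
	zfs_handle_t *first, *last;
	uint64_t used = 0;
	int err = -1;

	first = zfs_open(hdl, "tank/fs@a", ZFS_TYPE_SNAPSHOT);
	last = zfs_open(hdl, "tank/fs@c", ZFS_TYPE_SNAPSHOT);
	if (first != NULL && last != NULL &&
	    (err = zfs_get_snapused_int(first, last, &used)) == 0)
		(void) printf("would reclaim %llu bytes\n",
		    (u_longlong_t)used);
	if (first != NULL)
		zfs_close(first);
	if (last != NULL)
		zfs_close(last);
	return (err);
}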
@ -2080,7 +2158,7 @@ file system shared on the system.
.It Xo
.Nm
.Cm send
.Op Fl DvRp
.Op Fl DnPpRrv
.Op Fl i Ar snapshot | Fl I Ar snapshot
.Ar snapshot
.Xc
@ -2151,10 +2229,26 @@ be used regardless of the dataset's
property, but performance will be much better if the filesystem uses a
dedup-capable checksum (eg.
.Sy sha256 Ns ).
.It Fl r
Recursively send all descendant snapshots. This is similar to the
.Fl R
flag, but information about deleted and renamed datasets is not included, and
property information is only included if the
.Fl p
flag is specified.
.It Fl p
Include the dataset's properties in the stream. This flag is implicit when
.Fl R
is specified. The receiving system must also support this feature.
.It Fl n
Do a dry-run ("No-op") send. Do not generate any actual send data. This is
useful in conjunction with the
.Fl v
or
.Fl P
flags to determine what data will be sent.
.It Fl P
Print machine-parsable verbose information about the stream package generated.
.It Fl v
Print verbose information about the stream package generated.
.El
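
The new -n and -P send options above map onto the dryrun and parsable fields added to sendflags_t in libzfs.h later in this commit. A minimal sketch (hypothetical names) of requesting a stream size estimate from C, equivalent in spirit to "zfs send -nv tank/fs@today":

#include <libzfs.h>
#include <unistd.h>

static int
estimate_stream(libzfs_handle_t *hdl)
{
	sendflags_t flags = { 0 };
	zfs_handle_t *zhp;
	int err;

	flags.dryrun = B_TRUE;	/* -n: generate no stream data */
	flags.verbose = B_TRUE;	/* -v: request the size report */

	if ((zhp = zfs_open(hdl, "tank/fs", ZFS_TYPE_FILESYSTEM)) == NULL)
		return (1);
	/* full (non-incremental) send of tank/fs@today */
	err = zfs_send(zhp, NULL, "today", &flags, STDOUT_FILENO,
	    NULL, NULL, NULL);
	zfs_close(zhp);
	return (err);
}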


@ -22,8 +22,9 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
* Copyright (c) 2011 Pawel Jakub Dawidek <pawel@dawidek.net>.
* All rights reserved.
* Copyright (c) 2011 Martin Matuska <mm@FreeBSD.org>. All rights reserved.
*/
#include <assert.h>
@ -145,7 +146,7 @@ typedef enum {
HELP_HOLD,
HELP_HOLDS,
HELP_RELEASE,
HELP_DIFF
HELP_DIFF,
} zfs_help_t;
typedef struct zfs_command {
@ -220,8 +221,9 @@ get_usage(zfs_help_t idx)
"\tcreate [-ps] [-b blocksize] [-o property=value] ... "
"-V <size> <volume>\n"));
case HELP_DESTROY:
return (gettext("\tdestroy [-rRf] <filesystem|volume>\n"
"\tdestroy [-rRd] <snapshot>\n"));
return (gettext("\tdestroy [-fnpRrv] <filesystem|volume>\n"
"\tdestroy [-dnpRrv] "
"<snapshot>[%<snapname>][,...]\n"));
case HELP_GET:
return (gettext("\tget [-rHp] [-d max] "
"[-o \"all\" | field[,...]] [-s source[,...]]\n"
@ -260,7 +262,7 @@ get_usage(zfs_help_t idx)
case HELP_ROLLBACK:
return (gettext("\trollback [-rRf] <snapshot>\n"));
case HELP_SEND:
return (gettext("\tsend [-DvRp] "
return (gettext("\tsend [-DnPpRrv] "
"[-i snapshot | -I snapshot] <snapshot>\n"));
case HELP_SET:
return (gettext("\tset <property=value> "
@ -440,6 +442,8 @@ usage(boolean_t requested)
(void) fprintf(fp, "YES NO <size> | none\n");
(void) fprintf(fp, "\t%-15s ", "groupquota@...");
(void) fprintf(fp, "YES NO <size> | none\n");
(void) fprintf(fp, "\t%-15s ", "written@<snap>");
(void) fprintf(fp, " NO NO <size>\n");
(void) fprintf(fp, gettext("\nSizes are specified in bytes "
"with standard units such as K, M, G, etc.\n"));
@ -885,15 +889,23 @@ zfs_do_create(int argc, char **argv)
*/
typedef struct destroy_cbdata {
boolean_t cb_first;
int cb_force;
int cb_recurse;
int cb_error;
int cb_needforce;
int cb_doclones;
boolean_t cb_closezhp;
boolean_t cb_force;
boolean_t cb_recurse;
boolean_t cb_error;
boolean_t cb_doclones;
zfs_handle_t *cb_target;
char *cb_snapname;
boolean_t cb_defer_destroy;
boolean_t cb_verbose;
boolean_t cb_parsable;
boolean_t cb_dryrun;
nvlist_t *cb_nvl;
/* first snap in contiguous run */
zfs_handle_t *cb_firstsnap;
/* previous snap in contiguous run */
zfs_handle_t *cb_prevsnap;
int64_t cb_snapused;
char *cb_snapspec;
} destroy_cbdata_t;
/*
@ -923,7 +935,7 @@ destroy_check_dependent(zfs_handle_t *zhp, void *data)
(void) fprintf(stderr, gettext("use '-r' to destroy "
"the following datasets:\n"));
cbp->cb_first = B_FALSE;
cbp->cb_error = 1;
cbp->cb_error = B_TRUE;
}
(void) fprintf(stderr, "%s\n", zfs_get_name(zhp));
@ -944,7 +956,8 @@ destroy_check_dependent(zfs_handle_t *zhp, void *data)
(void) fprintf(stderr, gettext("use '-R' to destroy "
"the following datasets:\n"));
cbp->cb_first = B_FALSE;
cbp->cb_error = 1;
cbp->cb_error = B_TRUE;
cbp->cb_dryrun = B_TRUE;
}
(void) fprintf(stderr, "%s\n", zfs_get_name(zhp));
@ -958,7 +971,20 @@ destroy_check_dependent(zfs_handle_t *zhp, void *data)
static int
destroy_callback(zfs_handle_t *zhp, void *data)
{
destroy_cbdata_t *cbp = data;
destroy_cbdata_t *cb = data;
const char *name = zfs_get_name(zhp);
if (cb->cb_verbose) {
if (cb->cb_parsable) {
(void) printf("destroy\t%s\n", name);
} else if (cb->cb_dryrun) {
(void) printf(gettext("would destroy %s\n"),
name);
} else {
(void) printf(gettext("will destroy %s\n"),
name);
}
}
/*
* Ignore pools (which we've already flagged as an error before getting
@ -970,13 +996,12 @@ destroy_callback(zfs_handle_t *zhp, void *data)
return (0);
}
/*
* Bail out on the first error.
*/
if (zfs_unmount(zhp, NULL, cbp->cb_force ? MS_FORCE : 0) != 0 ||
zfs_destroy(zhp, cbp->cb_defer_destroy) != 0) {
zfs_close(zhp);
return (-1);
if (!cb->cb_dryrun) {
if (zfs_unmount(zhp, NULL, cb->cb_force ? MS_FORCE : 0) != 0 ||
zfs_destroy(zhp, cb->cb_defer_destroy) != 0) {
zfs_close(zhp);
return (-1);
}
}
zfs_close(zhp);
@ -984,39 +1009,142 @@ destroy_callback(zfs_handle_t *zhp, void *data)
}
static int
destroy_snap_clones(zfs_handle_t *zhp, void *arg)
destroy_print_cb(zfs_handle_t *zhp, void *arg)
{
destroy_cbdata_t *cbp = arg;
char thissnap[MAXPATHLEN];
zfs_handle_t *szhp;
boolean_t closezhp = cbp->cb_closezhp;
int rv;
destroy_cbdata_t *cb = arg;
const char *name = zfs_get_name(zhp);
int err = 0;
(void) snprintf(thissnap, sizeof (thissnap),
"%s@%s", zfs_get_name(zhp), cbp->cb_snapname);
libzfs_print_on_error(g_zfs, B_FALSE);
szhp = zfs_open(g_zfs, thissnap, ZFS_TYPE_SNAPSHOT);
libzfs_print_on_error(g_zfs, B_TRUE);
if (szhp) {
/*
* Destroy any clones of this snapshot
*/
if (zfs_iter_dependents(szhp, B_FALSE, destroy_callback,
cbp) != 0) {
zfs_close(szhp);
if (closezhp)
zfs_close(zhp);
return (-1);
if (nvlist_exists(cb->cb_nvl, name)) {
if (cb->cb_firstsnap == NULL)
cb->cb_firstsnap = zfs_handle_dup(zhp);
if (cb->cb_prevsnap != NULL)
zfs_close(cb->cb_prevsnap);
/* this snap continues the current range */
cb->cb_prevsnap = zfs_handle_dup(zhp);
if (cb->cb_verbose) {
if (cb->cb_parsable) {
(void) printf("destroy\t%s\n", name);
} else if (cb->cb_dryrun) {
(void) printf(gettext("would destroy %s\n"),
name);
} else {
(void) printf(gettext("will destroy %s\n"),
name);
}
}
zfs_close(szhp);
} else if (cb->cb_firstsnap != NULL) {
/* end of this range */
uint64_t used = 0;
err = zfs_get_snapused_int(cb->cb_firstsnap,
cb->cb_prevsnap, &used);
cb->cb_snapused += used;
zfs_close(cb->cb_firstsnap);
cb->cb_firstsnap = NULL;
zfs_close(cb->cb_prevsnap);
cb->cb_prevsnap = NULL;
}
zfs_close(zhp);
return (err);
}
static int
destroy_print_snapshots(zfs_handle_t *fs_zhp, destroy_cbdata_t *cb)
{
int err;
assert(cb->cb_firstsnap == NULL);
assert(cb->cb_prevsnap == NULL);
err = zfs_iter_snapshots_sorted(fs_zhp, destroy_print_cb, cb);
if (cb->cb_firstsnap != NULL) {
uint64_t used = 0;
if (err == 0) {
err = zfs_get_snapused_int(cb->cb_firstsnap,
cb->cb_prevsnap, &used);
}
cb->cb_snapused += used;
zfs_close(cb->cb_firstsnap);
cb->cb_firstsnap = NULL;
zfs_close(cb->cb_prevsnap);
cb->cb_prevsnap = NULL;
}
return (err);
}
static int
snapshot_to_nvl_cb(zfs_handle_t *zhp, void *arg)
{
destroy_cbdata_t *cb = arg;
int err = 0;
/* Check for clones. */
if (!cb->cb_doclones) {
cb->cb_target = zhp;
cb->cb_first = B_TRUE;
err = zfs_iter_dependents(zhp, B_TRUE,
destroy_check_dependent, cb);
}
cbp->cb_closezhp = B_TRUE;
rv = zfs_iter_filesystems(zhp, destroy_snap_clones, arg);
if (closezhp)
zfs_close(zhp);
return (rv);
if (err == 0) {
if (nvlist_add_boolean(cb->cb_nvl, zfs_get_name(zhp)))
nomem();
}
zfs_close(zhp);
return (err);
}
static int
gather_snapshots(zfs_handle_t *zhp, void *arg)
{
destroy_cbdata_t *cb = arg;
int err = 0;
err = zfs_iter_snapspec(zhp, cb->cb_snapspec, snapshot_to_nvl_cb, cb);
if (err == ENOENT)
err = 0;
if (err != 0)
goto out;
if (cb->cb_verbose) {
err = destroy_print_snapshots(zhp, cb);
if (err != 0)
goto out;
}
if (cb->cb_recurse)
err = zfs_iter_filesystems(zhp, gather_snapshots, cb);
out:
zfs_close(zhp);
return (err);
}
static int
destroy_clones(destroy_cbdata_t *cb)
{
nvpair_t *pair;
for (pair = nvlist_next_nvpair(cb->cb_nvl, NULL);
pair != NULL;
pair = nvlist_next_nvpair(cb->cb_nvl, pair)) {
zfs_handle_t *zhp = zfs_open(g_zfs, nvpair_name(pair),
ZFS_TYPE_SNAPSHOT);
if (zhp != NULL) {
boolean_t defer = cb->cb_defer_destroy;
int err;
/*
* We can't defer destroy non-snapshots, so set it to
* false while destroying the clones.
*/
cb->cb_defer_destroy = B_FALSE;
err = zfs_iter_dependents(zhp, B_FALSE,
destroy_callback, cb);
cb->cb_defer_destroy = defer;
zfs_close(zhp);
if (err != 0)
return (err);
}
}
return (0);
}
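
The pattern above, collecting snapshot names into an nvlist and handing the whole batch to the kernel at once, is exposed as the new zfs_destroy_snaps_nvl() library function. A standalone sketch with hypothetical names; the snapshots must belong to the filesystem handle (or its descendants):

#include <libzfs.h>
#include <libnvpair.h>

static int
destroy_two_snaps(zfs_handle_t *fs_zhp)
{
	nvlist_t *nvl;
	int err;

	if (nvlist_alloc(&nvl, NV_UNIQUE_NAME, 0) != 0)
		return (-1);
	/* boolean nvpairs: only the names are significant */
	if (nvlist_add_boolean(nvl, "tank/fs@a") != 0 ||
	    nvlist_add_boolean(nvl, "tank/fs@b") != 0) {
		nvlist_free(nvl);
		return (-1);
	}
	err = zfs_destroy_snaps_nvl(fs_zhp, nvl, B_FALSE);	/* no defer */
	nvlist_free(nvl);
	return (err);
}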
static int
@ -1025,25 +1153,35 @@ zfs_do_destroy(int argc, char **argv)
destroy_cbdata_t cb = { 0 };
int c;
zfs_handle_t *zhp;
char *cp;
char *at;
zfs_type_t type = ZFS_TYPE_DATASET;
/* check options */
while ((c = getopt(argc, argv, "dfrR")) != -1) {
while ((c = getopt(argc, argv, "vpndfrR")) != -1) {
switch (c) {
case 'v':
cb.cb_verbose = B_TRUE;
break;
case 'p':
cb.cb_verbose = B_TRUE;
cb.cb_parsable = B_TRUE;
break;
case 'n':
cb.cb_dryrun = B_TRUE;
break;
case 'd':
cb.cb_defer_destroy = B_TRUE;
type = ZFS_TYPE_SNAPSHOT;
break;
case 'f':
cb.cb_force = 1;
cb.cb_force = B_TRUE;
break;
case 'r':
cb.cb_recurse = 1;
cb.cb_recurse = B_TRUE;
break;
case 'R':
cb.cb_recurse = 1;
cb.cb_doclones = 1;
cb.cb_recurse = B_TRUE;
cb.cb_doclones = B_TRUE;
break;
case '?':
default:
@ -1058,7 +1196,7 @@ zfs_do_destroy(int argc, char **argv)
/* check number of arguments */
if (argc == 0) {
(void) fprintf(stderr, gettext("missing path argument\n"));
(void) fprintf(stderr, gettext("missing dataset argument\n"));
usage(B_FALSE);
}
if (argc > 1) {
@ -1066,92 +1204,118 @@ zfs_do_destroy(int argc, char **argv)
usage(B_FALSE);
}
/*
* If we are doing recursive destroy of a snapshot, then the
* named snapshot may not exist. Go straight to libzfs.
*/
if (cb.cb_recurse && (cp = strchr(argv[0], '@'))) {
int ret;
at = strchr(argv[0], '@');
if (at != NULL) {
int err;
*cp = '\0';
if ((zhp = zfs_open(g_zfs, argv[0], ZFS_TYPE_DATASET)) == NULL)
/* Build the list of snaps to destroy in cb_nvl. */
if (nvlist_alloc(&cb.cb_nvl, NV_UNIQUE_NAME, 0) != 0)
nomem();
*at = '\0';
zhp = zfs_open(g_zfs, argv[0],
ZFS_TYPE_FILESYSTEM | ZFS_TYPE_VOLUME);
if (zhp == NULL)
return (1);
*cp = '@';
cp++;
if (cb.cb_doclones) {
boolean_t defer = cb.cb_defer_destroy;
cb.cb_snapspec = at + 1;
if (gather_snapshots(zfs_handle_dup(zhp), &cb) != 0 ||
cb.cb_error) {
zfs_close(zhp);
nvlist_free(cb.cb_nvl);
return (1);
}
/*
* Temporarily ignore the defer_destroy setting since
* it's not supported for clones.
*/
cb.cb_defer_destroy = B_FALSE;
cb.cb_snapname = cp;
if (destroy_snap_clones(zhp, &cb) != 0) {
zfs_close(zhp);
return (1);
if (nvlist_empty(cb.cb_nvl)) {
(void) fprintf(stderr, gettext("could not find any "
"snapshots to destroy; check snapshot names.\n"));
zfs_close(zhp);
nvlist_free(cb.cb_nvl);
return (1);
}
if (cb.cb_verbose) {
char buf[16];
zfs_nicenum(cb.cb_snapused, buf, sizeof (buf));
if (cb.cb_parsable) {
(void) printf("reclaim\t%llu\n",
cb.cb_snapused);
} else if (cb.cb_dryrun) {
(void) printf(gettext("would reclaim %s\n"),
buf);
} else {
(void) printf(gettext("will reclaim %s\n"),
buf);
}
cb.cb_defer_destroy = defer;
}
ret = zfs_destroy_snaps(zhp, cp, cb.cb_defer_destroy);
zfs_close(zhp);
if (ret) {
(void) fprintf(stderr,
gettext("no snapshots destroyed\n"));
if (!cb.cb_dryrun) {
if (cb.cb_doclones)
err = destroy_clones(&cb);
if (err == 0) {
err = zfs_destroy_snaps_nvl(zhp, cb.cb_nvl,
cb.cb_defer_destroy);
}
}
return (ret != 0);
}
/* Open the given dataset */
if ((zhp = zfs_open(g_zfs, argv[0], type)) == NULL)
return (1);
cb.cb_target = zhp;
/*
* Perform an explicit check for pools before going any further.
*/
if (!cb.cb_recurse && strchr(zfs_get_name(zhp), '/') == NULL &&
zfs_get_type(zhp) == ZFS_TYPE_FILESYSTEM) {
(void) fprintf(stderr, gettext("cannot destroy '%s': "
"operation does not apply to pools\n"),
zfs_get_name(zhp));
(void) fprintf(stderr, gettext("use 'zfs destroy -r "
"%s' to destroy all datasets in the pool\n"),
zfs_get_name(zhp));
(void) fprintf(stderr, gettext("use 'zpool destroy %s' "
"to destroy the pool itself\n"), zfs_get_name(zhp));
zfs_close(zhp);
return (1);
nvlist_free(cb.cb_nvl);
if (err != 0)
return (1);
} else {
/* Open the given dataset */
if ((zhp = zfs_open(g_zfs, argv[0], type)) == NULL)
return (1);
cb.cb_target = zhp;
/*
* Perform an explicit check for pools before going any further.
*/
if (!cb.cb_recurse && strchr(zfs_get_name(zhp), '/') == NULL &&
zfs_get_type(zhp) == ZFS_TYPE_FILESYSTEM) {
(void) fprintf(stderr, gettext("cannot destroy '%s': "
"operation does not apply to pools\n"),
zfs_get_name(zhp));
(void) fprintf(stderr, gettext("use 'zfs destroy -r "
"%s' to destroy all datasets in the pool\n"),
zfs_get_name(zhp));
(void) fprintf(stderr, gettext("use 'zpool destroy %s' "
"to destroy the pool itself\n"), zfs_get_name(zhp));
zfs_close(zhp);
return (1);
}
/*
* Check for any dependents and/or clones.
*/
cb.cb_first = B_TRUE;
if (!cb.cb_doclones &&
zfs_iter_dependents(zhp, B_TRUE, destroy_check_dependent,
&cb) != 0) {
zfs_close(zhp);
return (1);
}
if (cb.cb_error) {
zfs_close(zhp);
return (1);
}
if (zfs_iter_dependents(zhp, B_FALSE, destroy_callback,
&cb) != 0) {
zfs_close(zhp);
return (1);
}
/*
* Do the real thing. The callback will close the
* handle regardless of whether it succeeds or not.
*/
if (destroy_callback(zhp, &cb) != 0)
return (1);
}
/*
* Check for any dependents and/or clones.
*/
cb.cb_first = B_TRUE;
if (!cb.cb_doclones && !cb.cb_defer_destroy &&
zfs_iter_dependents(zhp, B_TRUE, destroy_check_dependent,
&cb) != 0) {
zfs_close(zhp);
return (1);
}
if (cb.cb_error || (!cb.cb_defer_destroy &&
(zfs_iter_dependents(zhp, B_FALSE, destroy_callback, &cb) != 0))) {
zfs_close(zhp);
return (1);
}
/*
* Do the real thing. The callback will close the handle regardless of
* whether it succeeds or not.
*/
if (destroy_callback(zhp, &cb) != 0)
return (1);
return (0);
}
@ -1250,6 +1414,17 @@ get_callback(zfs_handle_t *zhp, void *data)
(void) strlcpy(buf, "-", sizeof (buf));
}
zprop_print_one_property(zfs_get_name(zhp), cbp,
pl->pl_user_prop, buf, sourcetype, source, NULL);
} else if (zfs_prop_written(pl->pl_user_prop)) {
sourcetype = ZPROP_SRC_LOCAL;
if (zfs_prop_get_written(zhp, pl->pl_user_prop,
buf, sizeof (buf), cbp->cb_literal) != 0) {
sourcetype = ZPROP_SRC_NONE;
(void) strlcpy(buf, "-", sizeof (buf));
}
zprop_print_one_property(zfs_get_name(zhp), cbp,
pl->pl_user_prop, buf, sourcetype, source, NULL);
} else {
@ -1796,8 +1971,8 @@ zfs_do_upgrade(int argc, char **argv)
"---------------\n");
(void) printf(gettext(" 1 Initial ZFS filesystem version\n"));
(void) printf(gettext(" 2 Enhanced directory entries\n"));
(void) printf(gettext(" 3 Case insensitive and File system "
"unique identifier (FUID)\n"));
(void) printf(gettext(" 3 Case insensitive and filesystem "
"user identifier (FUID)\n"));
(void) printf(gettext(" 4 userquota, groupquota "
"properties\n"));
(void) printf(gettext(" 5 System attributes\n"));
@ -2677,6 +2852,13 @@ print_dataset(zfs_handle_t *zhp, zprop_list_t *pl, boolean_t scripted)
else
propstr = property;
right_justify = B_TRUE;
} else if (zfs_prop_written(pl->pl_user_prop)) {
if (zfs_prop_get_written(zhp, pl->pl_user_prop,
property, sizeof (property), B_FALSE) != 0)
propstr = "-";
else
propstr = property;
right_justify = B_TRUE;
} else {
if (nvlist_lookup_nvlist(userprops,
pl->pl_user_prop, &propval) != 0)
@ -3303,9 +3485,6 @@ zfs_do_snapshot(int argc, char **argv)
}
/*
* zfs send [-vDp] -R [-i|-I <@snap>] <fs@snap>
* zfs send [-vDp] [-i|-I <@snap>] <fs@snap>
*
* Send a backup stream to stdout.
*/
static int
@ -3317,11 +3496,11 @@ zfs_do_send(int argc, char **argv)
zfs_handle_t *zhp;
sendflags_t flags = { 0 };
int c, err;
nvlist_t *dbgnv;
nvlist_t *dbgnv = NULL;
boolean_t extraverbose = B_FALSE;
/* check options */
while ((c = getopt(argc, argv, ":i:I:RDpv")) != -1) {
while ((c = getopt(argc, argv, ":i:I:RDpvnP")) != -1) {
switch (c) {
case 'i':
if (fromname)
@ -3340,6 +3519,10 @@ zfs_do_send(int argc, char **argv)
case 'p':
flags.props = B_TRUE;
break;
case 'P':
flags.parsable = B_TRUE;
flags.verbose = B_TRUE;
break;
case 'v':
if (flags.verbose)
extraverbose = B_TRUE;
@ -3348,6 +3531,9 @@ zfs_do_send(int argc, char **argv)
case 'D':
flags.dedup = B_TRUE;
break;
case 'n':
flags.dryrun = B_TRUE;
break;
case ':':
(void) fprintf(stderr, gettext("missing argument for "
"'%c' option\n"), optopt);
@ -3373,7 +3559,7 @@ zfs_do_send(int argc, char **argv)
usage(B_FALSE);
}
if (isatty(STDOUT_FILENO)) {
if (!flags.dryrun && isatty(STDOUT_FILENO)) {
(void) fprintf(stderr,
gettext("Error: Stream can not be written to a terminal.\n"
"You must redirect standard output.\n"));
@ -3427,10 +3613,10 @@ zfs_do_send(int argc, char **argv)
if (flags.replicate && fromname == NULL)
flags.doall = B_TRUE;
err = zfs_send(zhp, fromname, toname, flags, STDOUT_FILENO, NULL, 0,
err = zfs_send(zhp, fromname, toname, &flags, STDOUT_FILENO, NULL, 0,
extraverbose ? &dbgnv : NULL);
if (extraverbose) {
if (extraverbose && dbgnv != NULL) {
/*
* dump_nvlist prints to stdout, but that's been
* redirected to a file. Make it print to stderr
@ -3511,7 +3697,7 @@ zfs_do_receive(int argc, char **argv)
return (1);
}
err = zfs_receive(g_zfs, argv[0], flags, STDIN_FILENO, NULL);
err = zfs_receive(g_zfs, argv[0], &flags, STDIN_FILENO, NULL);
return (err != 0);
}


@ -23,7 +23,7 @@
.\"
.\" $FreeBSD$
.\"
.Dd November 26, 2011
.Dd November 28, 2011
.Dt ZPOOL 8
.Os
.Sh NAME
@ -133,6 +133,9 @@
.Op Fl e
.Ar pool device ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm remove
.Ar pool device ...
.Nm
@ -1346,6 +1349,14 @@ available to the pool.
.El
.It Xo
.Nm
.Cm reguid
.Ar pool
.Xc
.Pp
Generates a new unique identifier for the pool. You must ensure that all
devices in this pool are online and healthy before performing this action.
.It Xo
.Nm
.Cm remove
.Ar pool device ...
.Xc
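
For reference, the library-level equivalent of the "zpool reguid" command described above is a single call to the new zpool_reguid() entry point. This sketch mirrors the zpool_do_reguid() implementation added in this commit; the pool name is hypothetical:

#include <libzfs.h>

static int
reguid_pool(libzfs_handle_t *hdl)
{
	zpool_handle_t *zhp;
	int ret;

	if ((zhp = zpool_open(hdl, "tank")) == NULL)
		return (1);
	ret = zpool_reguid(zhp);	/* assign a fresh pool GUID */
	zpool_close(zhp);
	return (ret);
}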


@ -22,6 +22,8 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
* Copyright (c) 2011 Martin Matuska <mm@FreeBSD.org>. All rights reserved.
*/
#include <solaris.h>
@ -68,6 +70,8 @@ static int zpool_do_online(int, char **);
static int zpool_do_offline(int, char **);
static int zpool_do_clear(int, char **);
static int zpool_do_reguid(int, char **);
static int zpool_do_attach(int, char **);
static int zpool_do_detach(int, char **);
static int zpool_do_replace(int, char **);
@ -126,7 +130,8 @@ typedef enum {
HELP_UPGRADE,
HELP_GET,
HELP_SET,
HELP_SPLIT
HELP_SPLIT,
HELP_REGUID
} zpool_help_t;
@ -172,6 +177,7 @@ static zpool_command_t command_table[] = {
{ "import", zpool_do_import, HELP_IMPORT },
{ "export", zpool_do_export, HELP_EXPORT },
{ "upgrade", zpool_do_upgrade, HELP_UPGRADE },
{ "reguid", zpool_do_reguid, HELP_REGUID },
{ NULL },
{ "history", zpool_do_history, HELP_HISTORY },
{ "get", zpool_do_get, HELP_GET },
@ -251,6 +257,8 @@ get_usage(zpool_help_t idx) {
return (gettext("\tsplit [-n] [-R altroot] [-o mntopts]\n"
"\t [-o property=value] <pool> <newpool> "
"[<device> ...]\n"));
case HELP_REGUID:
return (gettext("\treguid <pool>\n"));
}
abort();
@ -1454,6 +1462,7 @@ show_import(nvlist_t *config)
const char *health;
uint_t vsc;
int namewidth;
char *comment;
verify(nvlist_lookup_string(config, ZPOOL_CONFIG_POOL_NAME,
&name) == 0);
@ -1470,9 +1479,9 @@ show_import(nvlist_t *config)
reason = zpool_import_status(config, &msgid);
(void) printf(gettext(" pool: %s\n"), name);
(void) printf(gettext(" id: %llu\n"), (u_longlong_t)guid);
(void) printf(gettext(" state: %s"), health);
(void) printf(gettext(" pool: %s\n"), name);
(void) printf(gettext(" id: %llu\n"), (u_longlong_t)guid);
(void) printf(gettext(" state: %s"), health);
if (pool_state == POOL_STATE_DESTROYED)
(void) printf(gettext(" (DESTROYED)"));
(void) printf("\n");
@ -1481,58 +1490,59 @@ show_import(nvlist_t *config)
case ZPOOL_STATUS_MISSING_DEV_R:
case ZPOOL_STATUS_MISSING_DEV_NR:
case ZPOOL_STATUS_BAD_GUID_SUM:
(void) printf(gettext("status: One or more devices are missing "
"from the system.\n"));
(void) printf(gettext(" status: One or more devices are "
"missing from the system.\n"));
break;
case ZPOOL_STATUS_CORRUPT_LABEL_R:
case ZPOOL_STATUS_CORRUPT_LABEL_NR:
(void) printf(gettext("status: One or more devices contains "
(void) printf(gettext(" status: One or more devices contains "
"corrupted data.\n"));
break;
case ZPOOL_STATUS_CORRUPT_DATA:
(void) printf(gettext("status: The pool data is corrupted.\n"));
(void) printf(
gettext(" status: The pool data is corrupted.\n"));
break;
case ZPOOL_STATUS_OFFLINE_DEV:
(void) printf(gettext("status: One or more devices "
(void) printf(gettext(" status: One or more devices "
"are offlined.\n"));
break;
case ZPOOL_STATUS_CORRUPT_POOL:
(void) printf(gettext("status: The pool metadata is "
(void) printf(gettext(" status: The pool metadata is "
"corrupted.\n"));
break;
case ZPOOL_STATUS_VERSION_OLDER:
(void) printf(gettext("status: The pool is formatted using an "
(void) printf(gettext(" status: The pool is formatted using an "
"older on-disk version.\n"));
break;
case ZPOOL_STATUS_VERSION_NEWER:
(void) printf(gettext("status: The pool is formatted using an "
(void) printf(gettext(" status: The pool is formatted using an "
"incompatible version.\n"));
break;
case ZPOOL_STATUS_HOSTID_MISMATCH:
(void) printf(gettext("status: The pool was last accessed by "
(void) printf(gettext(" status: The pool was last accessed by "
"another system.\n"));
break;
case ZPOOL_STATUS_FAULTED_DEV_R:
case ZPOOL_STATUS_FAULTED_DEV_NR:
(void) printf(gettext("status: One or more devices are "
(void) printf(gettext(" status: One or more devices are "
"faulted.\n"));
break;
case ZPOOL_STATUS_BAD_LOG:
(void) printf(gettext("status: An intent log record cannot be "
(void) printf(gettext(" status: An intent log record cannot be "
"read.\n"));
break;
case ZPOOL_STATUS_RESILVERING:
(void) printf(gettext("status: One or more devices were being "
(void) printf(gettext(" status: One or more devices were being "
"resilvered.\n"));
break;
@ -1548,26 +1558,26 @@ show_import(nvlist_t *config)
*/
if (vs->vs_state == VDEV_STATE_HEALTHY) {
if (reason == ZPOOL_STATUS_VERSION_OLDER)
(void) printf(gettext("action: The pool can be "
(void) printf(gettext(" action: The pool can be "
"imported using its name or numeric identifier, "
"though\n\tsome features will not be available "
"without an explicit 'zpool upgrade'.\n"));
else if (reason == ZPOOL_STATUS_HOSTID_MISMATCH)
(void) printf(gettext("action: The pool can be "
(void) printf(gettext(" action: The pool can be "
"imported using its name or numeric "
"identifier and\n\tthe '-f' flag.\n"));
else
(void) printf(gettext("action: The pool can be "
(void) printf(gettext(" action: The pool can be "
"imported using its name or numeric "
"identifier.\n"));
} else if (vs->vs_state == VDEV_STATE_DEGRADED) {
(void) printf(gettext("action: The pool can be imported "
(void) printf(gettext(" action: The pool can be imported "
"despite missing or damaged devices. The\n\tfault "
"tolerance of the pool may be compromised if imported.\n"));
} else {
switch (reason) {
case ZPOOL_STATUS_VERSION_NEWER:
(void) printf(gettext("action: The pool cannot be "
(void) printf(gettext(" action: The pool cannot be "
"imported. Access the pool on a system running "
"newer\n\tsoftware, or recreate the pool from "
"backup.\n"));
@ -1575,16 +1585,20 @@ show_import(nvlist_t *config)
case ZPOOL_STATUS_MISSING_DEV_R:
case ZPOOL_STATUS_MISSING_DEV_NR:
case ZPOOL_STATUS_BAD_GUID_SUM:
(void) printf(gettext("action: The pool cannot be "
(void) printf(gettext(" action: The pool cannot be "
"imported. Attach the missing\n\tdevices and try "
"again.\n"));
break;
default:
(void) printf(gettext("action: The pool cannot be "
(void) printf(gettext(" action: The pool cannot be "
"imported due to damaged devices or data.\n"));
}
}
/* Print the comment attached to the pool. */
if (nvlist_lookup_string(config, ZPOOL_CONFIG_COMMENT, &comment) == 0)
(void) printf(gettext("comment: %s\n"), comment);
/*
* If the state is "closed" or "can't open", and the aux state
* is "corrupt data":
@ -1605,7 +1619,7 @@ show_import(nvlist_t *config)
(void) printf(gettext(" see: http://www.sun.com/msg/%s\n"),
msgid);
(void) printf(gettext("config:\n\n"));
(void) printf(gettext(" config:\n\n"));
namewidth = max_width(NULL, nvroot, 0, 0);
if (namewidth < 10)
@ -3320,6 +3334,52 @@ zpool_do_clear(int argc, char **argv)
return (ret);
}
/*
* zpool reguid <pool>
*/
int
zpool_do_reguid(int argc, char **argv)
{
int c;
char *poolname;
zpool_handle_t *zhp;
int ret = 0;
/* check options */
while ((c = getopt(argc, argv, "")) != -1) {
switch (c) {
case '?':
(void) fprintf(stderr, gettext("invalid option '%c'\n"),
optopt);
usage(B_FALSE);
}
}
argc -= optind;
argv += optind;
/* get pool name and check number of arguments */
if (argc < 1) {
(void) fprintf(stderr, gettext("missing pool name\n"));
usage(B_FALSE);
}
if (argc > 1) {
(void) fprintf(stderr, gettext("too many arguments\n"));
usage(B_FALSE);
}
poolname = argv[0];
if ((zhp = zpool_open(g_zfs, poolname)) == NULL)
return (1);
ret = zpool_reguid(zhp);
zpool_close(zhp);
return (ret);
}
typedef struct scrub_cbdata {
int cb_type;
int cb_argc;


@ -21,6 +21,7 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
*/
/*
@ -259,6 +260,7 @@ ztest_func_t ztest_vdev_LUN_growth;
ztest_func_t ztest_vdev_add_remove;
ztest_func_t ztest_vdev_aux_add_remove;
ztest_func_t ztest_split_pool;
ztest_func_t ztest_reguid;
uint64_t zopt_always = 0ULL * NANOSEC; /* all the time */
uint64_t zopt_incessant = 1ULL * NANOSEC / 10; /* every 1/10 second */
@ -289,6 +291,7 @@ ztest_info_t ztest_info[] = {
{ ztest_fault_inject, 1, &zopt_sometimes },
{ ztest_ddt_repair, 1, &zopt_sometimes },
{ ztest_dmu_snapshot_hold, 1, &zopt_sometimes },
{ ztest_reguid, 1, &zopt_sometimes },
{ ztest_spa_rename, 1, &zopt_rarely },
{ ztest_scrub, 1, &zopt_rarely },
{ ztest_dsl_dataset_promote_busy, 1, &zopt_rarely },
@ -325,6 +328,7 @@ typedef struct ztest_shared {
uint64_t zs_vdev_aux;
uint64_t zs_alloc;
uint64_t zs_space;
uint64_t zs_guid;
mutex_t zs_vdev_lock;
rwlock_t zs_name_lock;
ztest_info_t zs_info[ZTEST_FUNCS];
@ -4646,7 +4650,7 @@ ztest_ddt_repair(ztest_ds_t *zd, uint64_t id)
object = od[0].od_object;
blocksize = od[0].od_blocksize;
pattern = spa_guid(spa) ^ dmu_objset_fsid_guid(os);
pattern = zs->zs_guid ^ dmu_objset_fsid_guid(os);
ASSERT(object != 0);
@ -4716,6 +4720,31 @@ ztest_scrub(ztest_ds_t *zd, uint64_t id)
(void) spa_scan(spa, POOL_SCAN_SCRUB);
}
/*
* Change the guid for the pool.
*/
/* ARGSUSED */
void
ztest_reguid(ztest_ds_t *zd, uint64_t id)
{
ztest_shared_t *zs = ztest_shared;
spa_t *spa = zs->zs_spa;
uint64_t orig, load;
orig = spa_guid(spa);
load = spa_load_guid(spa);
if (spa_change_guid(spa) != 0)
return;
if (zopt_verbose >= 3) {
(void) printf("Changed guid old %llu -> %llu\n",
(u_longlong_t)orig, (u_longlong_t)spa_guid(spa));
}
VERIFY3U(orig, !=, spa_guid(spa));
VERIFY3U(load, ==, spa_load_guid(spa));
}
/*
* Rename the pool to a different name and then rename it back.
*/
@ -5145,6 +5174,7 @@ ztest_run(ztest_shared_t *zs)
{
thread_t *tid;
spa_t *spa;
objset_t *os;
thread_t resume_tid;
int error;
@ -5176,6 +5206,10 @@ ztest_run(ztest_shared_t *zs)
spa->spa_debug = B_TRUE;
zs->zs_spa = spa;
VERIFY3U(0, ==, dmu_objset_hold(zs->zs_pool, FTAG, &os));
zs->zs_guid = dmu_objset_fsid_guid(os);
dmu_objset_rele(os, FTAG);
spa->spa_dedup_ditto = 2 * ZIO_DEDUPDITTO_MIN;
/*


@ -21,8 +21,8 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2010 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 Pawel Jakub Dawidek <pawel@dawidek.net>.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
* All rights reserved.
*/
@ -233,6 +233,7 @@ typedef struct splitflags {
*/
extern int zpool_scan(zpool_handle_t *, pool_scan_func_t);
extern int zpool_clear(zpool_handle_t *, const char *, nvlist_t *);
extern int zpool_reguid(zpool_handle_t *);
extern int zpool_vdev_online(zpool_handle_t *, const char *, int,
vdev_state_t *);
@ -384,6 +385,7 @@ extern void zpool_explain_recover(libzfs_handle_t *, const char *, int,
* underlying datasets, only the references to them.
*/
extern zfs_handle_t *zfs_open(libzfs_handle_t *, const char *, int);
extern zfs_handle_t *zfs_handle_dup(zfs_handle_t *);
extern void zfs_close(zfs_handle_t *);
extern zfs_type_t zfs_get_type(const zfs_handle_t *);
extern const char *zfs_get_name(const zfs_handle_t *);
@ -417,12 +419,20 @@ extern int zfs_prop_get_userquota_int(zfs_handle_t *zhp, const char *propname,
uint64_t *propvalue);
extern int zfs_prop_get_userquota(zfs_handle_t *zhp, const char *propname,
char *propbuf, int proplen, boolean_t literal);
extern int zfs_prop_get_written_int(zfs_handle_t *zhp, const char *propname,
uint64_t *propvalue);
extern int zfs_prop_get_written(zfs_handle_t *zhp, const char *propname,
char *propbuf, int proplen, boolean_t literal);
extern int zfs_get_snapused_int(zfs_handle_t *firstsnap, zfs_handle_t *lastsnap,
uint64_t *usedp);
extern uint64_t zfs_prop_get_int(zfs_handle_t *, zfs_prop_t);
extern int zfs_prop_inherit(zfs_handle_t *, const char *, boolean_t);
extern const char *zfs_prop_values(zfs_prop_t);
extern int zfs_prop_is_string(zfs_prop_t prop);
extern nvlist_t *zfs_get_user_props(zfs_handle_t *);
extern nvlist_t *zfs_get_recvd_props(zfs_handle_t *);
extern nvlist_t *zfs_get_clones_nvl(zfs_handle_t *);
typedef struct zprop_list {
int pl_prop;
@ -497,6 +507,7 @@ extern int zfs_iter_dependents(zfs_handle_t *, boolean_t, zfs_iter_f, void *);
extern int zfs_iter_filesystems(zfs_handle_t *, zfs_iter_f, void *);
extern int zfs_iter_snapshots(zfs_handle_t *, zfs_iter_f, void *);
extern int zfs_iter_snapshots_sorted(zfs_handle_t *, zfs_iter_f, void *);
extern int zfs_iter_snapspec(zfs_handle_t *, const char *, zfs_iter_f, void *);
typedef struct get_all_cb {
zfs_handle_t **cb_handles;
@ -517,6 +528,7 @@ extern int zfs_create(libzfs_handle_t *, const char *, zfs_type_t,
extern int zfs_create_ancestors(libzfs_handle_t *, const char *);
extern int zfs_destroy(zfs_handle_t *, boolean_t);
extern int zfs_destroy_snaps(zfs_handle_t *, char *, boolean_t);
extern int zfs_destroy_snaps_nvl(zfs_handle_t *, nvlist_t *, boolean_t);
extern int zfs_clone(zfs_handle_t *, const char *, nvlist_t *);
extern int zfs_snapshot(libzfs_handle_t *, const char *, boolean_t, nvlist_t *);
extern int zfs_rollback(zfs_handle_t *, zfs_handle_t *, boolean_t);
@ -533,29 +545,34 @@ extern int zfs_rename(zfs_handle_t *, const char *, renameflags_t flags);
typedef struct sendflags {
/* print informational messages (ie, -v was specified) */
int verbose : 1;
boolean_t verbose;
/* recursive send (ie, -R) */
int replicate : 1;
boolean_t replicate;
/* for incrementals, do all intermediate snapshots */
int doall : 1; /* (ie, -I) */
boolean_t doall;
/* if dataset is a clone, do incremental from its origin */
int fromorigin : 1;
boolean_t fromorigin;
/* do deduplication */
int dedup : 1;
boolean_t dedup;
/* send properties (ie, -p) */
int props : 1;
boolean_t props;
/* do not send (no-op, ie. -n) */
boolean_t dryrun;
/* parsable verbose output (ie. -P) */
boolean_t parsable;
} sendflags_t;
typedef boolean_t (snapfilter_cb_t)(zfs_handle_t *, void *);
extern int zfs_send(zfs_handle_t *zhp, const char *fromsnap, const char *tosnap,
sendflags_t flags, int outfd, snapfilter_cb_t filter_func,
void *cb_arg, nvlist_t **debugnvp);
extern int zfs_send(zfs_handle_t *, const char *, const char *,
sendflags_t *, int, snapfilter_cb_t, void *, nvlist_t **);
extern int zfs_promote(zfs_handle_t *);
extern int zfs_hold(zfs_handle_t *, const char *, const char *, boolean_t,
@ -575,34 +592,34 @@ extern int zfs_set_fsacl(zfs_handle_t *, boolean_t, nvlist_t *);
typedef struct recvflags {
/* print informational messages (ie, -v was specified) */
int verbose : 1;
boolean_t verbose;
/* the destination is a prefix, not the exact fs (ie, -d) */
int isprefix : 1;
boolean_t isprefix;
/*
* Only the tail of the sent snapshot path is appended to the
* destination to determine the received snapshot name (ie, -e).
*/
int istail : 1;
boolean_t istail;
/* do not actually do the recv, just check if it would work (ie, -n) */
int dryrun : 1;
boolean_t dryrun;
/* rollback/destroy filesystems as necessary (eg, -F) */
int force : 1;
boolean_t force;
/* set "canmount=off" on all modified filesystems */
int canmountoff : 1;
boolean_t canmountoff;
/* byteswap flag is used internally; callers need not specify */
int byteswap : 1;
boolean_t byteswap;
/* do not mount file systems as they are extracted (private) */
int nomount : 1;
boolean_t nomount;
} recvflags_t;
extern int zfs_receive(libzfs_handle_t *, const char *, recvflags_t,
extern int zfs_receive(libzfs_handle_t *, const char *, recvflags_t *,
int, avl_tree_t *);
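
Both sendflags_t and recvflags_t are now passed by pointer rather than by value. A sketch of the receive side under the new signature, mirroring zfs_do_receive() in zfs_main.c; the target dataset name is hypothetical:

#include <libzfs.h>
#include <unistd.h>

static int
recv_stream(libzfs_handle_t *hdl)
{
	recvflags_t flags = { 0 };

	flags.isprefix = B_TRUE;	/* -d: graft sent path under target */
	return (zfs_receive(hdl, "tank/backup", &flags,
	    STDIN_FILENO, NULL));
}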
typedef enum diff_flags {


@ -496,7 +496,7 @@ make_dataset_handle(libzfs_handle_t *hdl, const char *path)
return (zhp);
}
static zfs_handle_t *
zfs_handle_t *
make_dataset_handle_zc(libzfs_handle_t *hdl, zfs_cmd_t *zc)
{
zfs_handle_t *zhp = calloc(sizeof (zfs_handle_t), 1);
@ -513,6 +513,53 @@ make_dataset_handle_zc(libzfs_handle_t *hdl, zfs_cmd_t *zc)
return (zhp);
}
zfs_handle_t *
zfs_handle_dup(zfs_handle_t *zhp_orig)
{
zfs_handle_t *zhp = calloc(sizeof (zfs_handle_t), 1);
if (zhp == NULL)
return (NULL);
zhp->zfs_hdl = zhp_orig->zfs_hdl;
zhp->zpool_hdl = zhp_orig->zpool_hdl;
(void) strlcpy(zhp->zfs_name, zhp_orig->zfs_name,
sizeof (zhp->zfs_name));
zhp->zfs_type = zhp_orig->zfs_type;
zhp->zfs_head_type = zhp_orig->zfs_head_type;
zhp->zfs_dmustats = zhp_orig->zfs_dmustats;
if (zhp_orig->zfs_props != NULL) {
if (nvlist_dup(zhp_orig->zfs_props, &zhp->zfs_props, 0) != 0) {
(void) no_memory(zhp->zfs_hdl);
zfs_close(zhp);
return (NULL);
}
}
if (zhp_orig->zfs_user_props != NULL) {
if (nvlist_dup(zhp_orig->zfs_user_props,
&zhp->zfs_user_props, 0) != 0) {
(void) no_memory(zhp->zfs_hdl);
zfs_close(zhp);
return (NULL);
}
}
if (zhp_orig->zfs_recvd_props != NULL) {
if (nvlist_dup(zhp_orig->zfs_recvd_props,
&zhp->zfs_recvd_props, 0)) {
(void) no_memory(zhp->zfs_hdl);
zfs_close(zhp);
return (NULL);
}
}
zhp->zfs_mntcheck = zhp_orig->zfs_mntcheck;
if (zhp_orig->zfs_mntopts != NULL) {
zhp->zfs_mntopts = zfs_strdup(zhp_orig->zfs_hdl,
zhp_orig->zfs_mntopts);
}
zhp->zfs_props_table = zhp_orig->zfs_props_table;
return (zhp);
}
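
A note on why this duplication primitive is needed: the zfs_iter_*() callbacks close the handle they are handed, so a caller that wants to keep using its own handle iterates on a copy. This is the pattern zfs_do_destroy() adopts, sketched here with its surrounding error handling elided:

	/* The callback chain consumes the duplicate; zhp stays valid. */
	if (gather_snapshots(zfs_handle_dup(zhp), &cb) != 0)
		return (1);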
/*
* Opens the given snapshot, filesystem, or volume. The 'types'
* argument is a mask of acceptable types. The function will print an
@ -876,6 +923,12 @@ zfs_valid_proplist(libzfs_handle_t *hdl, zfs_type_t type, nvlist_t *nvl,
goto error;
}
continue;
} else if (prop == ZPROP_INVAL && zfs_prop_written(propname)) {
zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
"'%s' is readonly"),
propname);
(void) zfs_error(hdl, EZFS_PROPREADONLY, errbuf);
goto error;
}
if (prop == ZPROP_INVAL) {
@ -1869,8 +1922,6 @@ zfs_prop_get_recvd(zfs_handle_t *zhp, const char *propname, char *propbuf,
err = zfs_prop_get(zhp, prop, propbuf, proplen,
NULL, NULL, 0, literal);
zfs_unset_recvd_props_mode(zhp, &cookie);
} else if (zfs_prop_userquota(propname)) {
return (-1);
} else {
nvlist_t *propval;
char *recvdval;
@ -1885,6 +1936,120 @@ zfs_prop_get_recvd(zfs_handle_t *zhp, const char *propname, char *propbuf,
return (err == 0 ? 0 : -1);
}
static int
get_clones_string(zfs_handle_t *zhp, char *propbuf, size_t proplen)
{
nvlist_t *value;
nvpair_t *pair;
value = zfs_get_clones_nvl(zhp);
if (value == NULL)
return (-1);
propbuf[0] = '\0';
for (pair = nvlist_next_nvpair(value, NULL); pair != NULL;
pair = nvlist_next_nvpair(value, pair)) {
if (propbuf[0] != '\0')
(void) strlcat(propbuf, ",", proplen);
(void) strlcat(propbuf, nvpair_name(pair), proplen);
}
return (0);
}
struct get_clones_arg {
uint64_t numclones;
nvlist_t *value;
const char *origin;
char buf[ZFS_MAXNAMELEN];
};
int
get_clones_cb(zfs_handle_t *zhp, void *arg)
{
struct get_clones_arg *gca = arg;
if (gca->numclones == 0) {
zfs_close(zhp);
return (0);
}
if (zfs_prop_get(zhp, ZFS_PROP_ORIGIN, gca->buf, sizeof (gca->buf),
NULL, NULL, 0, B_TRUE) != 0)
goto out;
if (strcmp(gca->buf, gca->origin) == 0) {
if (nvlist_add_boolean(gca->value, zfs_get_name(zhp)) != 0) {
zfs_close(zhp);
return (no_memory(zhp->zfs_hdl));
}
gca->numclones--;
}
out:
(void) zfs_iter_children(zhp, get_clones_cb, gca);
zfs_close(zhp);
return (0);
}
nvlist_t *
zfs_get_clones_nvl(zfs_handle_t *zhp)
{
nvlist_t *nv, *value;
if (nvlist_lookup_nvlist(zhp->zfs_props,
zfs_prop_to_name(ZFS_PROP_CLONES), &nv) != 0) {
struct get_clones_arg gca;
/*
* if this is a snapshot, then the kernel wasn't able
* to get the clones. Do it by slowly iterating.
*/
if (zhp->zfs_type != ZFS_TYPE_SNAPSHOT)
return (NULL);
if (nvlist_alloc(&nv, NV_UNIQUE_NAME, 0) != 0)
return (NULL);
if (nvlist_alloc(&value, NV_UNIQUE_NAME, 0) != 0) {
nvlist_free(nv);
return (NULL);
}
gca.numclones = zfs_prop_get_int(zhp, ZFS_PROP_NUMCLONES);
gca.value = value;
gca.origin = zhp->zfs_name;
if (gca.numclones != 0) {
zfs_handle_t *root;
char pool[ZFS_MAXNAMELEN];
char *cp = pool;
/* get the pool name */
(void) strlcpy(pool, zhp->zfs_name, sizeof (pool));
(void) strsep(&cp, "/@");
root = zfs_open(zhp->zfs_hdl, pool,
ZFS_TYPE_FILESYSTEM);
(void) get_clones_cb(root, &gca);
}
if (gca.numclones != 0 ||
nvlist_add_nvlist(nv, ZPROP_VALUE, value) != 0 ||
nvlist_add_nvlist(zhp->zfs_props,
zfs_prop_to_name(ZFS_PROP_CLONES), nv) != 0) {
nvlist_free(nv);
nvlist_free(value);
return (NULL);
}
nvlist_free(nv);
nvlist_free(value);
verify(0 == nvlist_lookup_nvlist(zhp->zfs_props,
zfs_prop_to_name(ZFS_PROP_CLONES), &nv));
}
verify(nvlist_lookup_nvlist(nv, ZPROP_VALUE, &value) == 0);
return (value);
}
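
A standalone sketch (not part of this commit) of enumerating a snapshot's clones through the new zfs_get_clones_nvl() interface; the snapshot name is hypothetical. As the function body above shows, the returned list lives in the handle's cached property nvlist, so the caller does not free it:

#include <libzfs.h>
#include <libnvpair.h>
#include <stdio.h>

static void
print_clones(libzfs_handle_t *hdl)
{
	zfs_handle_t *snap;
	nvlist_t *clones;
	nvpair_t *pair;

	if ((snap = zfs_open(hdl, "tank/fs@base",
	    ZFS_TYPE_SNAPSHOT)) == NULL)
		return;
	if ((clones = zfs_get_clones_nvl(snap)) != NULL) {
		for (pair = nvlist_next_nvpair(clones, NULL); pair != NULL;
		    pair = nvlist_next_nvpair(clones, pair))
			(void) printf("%s\n", nvpair_name(pair));
	}
	zfs_close(snap);
}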
/*
* Retrieve a property from the given object. If 'literal' is specified, then
* numbers are left as exact values. Otherwise, numbers are converted to a
@ -2013,6 +2178,11 @@ zfs_prop_get(zfs_handle_t *zhp, zfs_prop_t prop, char *propbuf, size_t proplen,
return (-1);
break;
case ZFS_PROP_CLONES:
if (get_clones_string(zhp, propbuf, proplen) != 0)
return (-1);
break;
case ZFS_PROP_QUOTA:
case ZFS_PROP_REFQUOTA:
case ZFS_PROP_RESERVATION:
@ -2385,7 +2555,7 @@ zfs_prop_get_userquota_common(zfs_handle_t *zhp, const char *propname,
int err;
zfs_cmd_t zc = { 0 };
(void) strncpy(zc.zc_name, zhp->zfs_name, sizeof (zc.zc_name));
(void) strlcpy(zc.zc_name, zhp->zfs_name, sizeof (zc.zc_name));
err = userquota_propname_decode(propname,
zfs_prop_get_int(zhp, ZFS_PROP_ZONED),
@ -2437,6 +2607,79 @@ zfs_prop_get_userquota(zfs_handle_t *zhp, const char *propname,
return (0);
}
int
zfs_prop_get_written_int(zfs_handle_t *zhp, const char *propname,
uint64_t *propvalue)
{
int err;
zfs_cmd_t zc = { 0 };
const char *snapname;
(void) strlcpy(zc.zc_name, zhp->zfs_name, sizeof (zc.zc_name));
snapname = strchr(propname, '@') + 1;
if (strchr(snapname, '@')) {
(void) strlcpy(zc.zc_value, snapname, sizeof (zc.zc_value));
} else {
/* snapname is the short name, append it to zhp's fsname */
char *cp;
(void) strlcpy(zc.zc_value, zhp->zfs_name,
sizeof (zc.zc_value));
cp = strchr(zc.zc_value, '@');
if (cp != NULL)
*cp = '\0';
(void) strlcat(zc.zc_value, "@", sizeof (zc.zc_value));
(void) strlcat(zc.zc_value, snapname, sizeof (zc.zc_value));
}
err = ioctl(zhp->zfs_hdl->libzfs_fd, ZFS_IOC_SPACE_WRITTEN, &zc);
if (err)
return (err);
*propvalue = zc.zc_cookie;
return (0);
}
int
zfs_prop_get_written(zfs_handle_t *zhp, const char *propname,
char *propbuf, int proplen, boolean_t literal)
{
int err;
uint64_t propvalue;
err = zfs_prop_get_written_int(zhp, propname, &propvalue);
if (err)
return (err);
if (literal) {
(void) snprintf(propbuf, proplen, "%llu", propvalue);
} else {
zfs_nicenum(propvalue, propbuf, proplen);
}
return (0);
}
int
zfs_get_snapused_int(zfs_handle_t *firstsnap, zfs_handle_t *lastsnap,
uint64_t *usedp)
{
int err;
zfs_cmd_t zc = { 0 };
(void) strlcpy(zc.zc_name, lastsnap->zfs_name, sizeof (zc.zc_name));
(void) strlcpy(zc.zc_value, firstsnap->zfs_name, sizeof (zc.zc_value));
err = ioctl(lastsnap->zfs_hdl->libzfs_fd, ZFS_IOC_SPACE_SNAPS, &zc);
if (err)
return (err);
*usedp = zc.zc_cookie;
return (0);
}
/*
* Returns the name of the given zfs handle.
*/
@ -2455,128 +2698,6 @@ zfs_get_type(const zfs_handle_t *zhp)
return (zhp->zfs_type);
}
static int
zfs_do_list_ioctl(zfs_handle_t *zhp, unsigned long arg, zfs_cmd_t *zc)
{
int rc;
uint64_t orig_cookie;
orig_cookie = zc->zc_cookie;
top:
(void) strlcpy(zc->zc_name, zhp->zfs_name, sizeof (zc->zc_name));
rc = ioctl(zhp->zfs_hdl->libzfs_fd, arg, zc);
if (rc == -1) {
switch (errno) {
case ENOMEM:
/* expand nvlist memory and try again */
if (zcmd_expand_dst_nvlist(zhp->zfs_hdl, zc) != 0) {
zcmd_free_nvlists(zc);
return (-1);
}
zc->zc_cookie = orig_cookie;
goto top;
/*
* An errno value of ESRCH indicates normal completion.
* If ENOENT is returned, then the underlying dataset
* has been removed since we obtained the handle.
*/
case ESRCH:
case ENOENT:
rc = 1;
break;
default:
rc = zfs_standard_error(zhp->zfs_hdl, errno,
dgettext(TEXT_DOMAIN,
"cannot iterate filesystems"));
break;
}
}
return (rc);
}
/*
* Iterate over all child filesystems
*/
int
zfs_iter_filesystems(zfs_handle_t *zhp, zfs_iter_f func, void *data)
{
zfs_cmd_t zc = { 0 };
zfs_handle_t *nzhp;
int ret;
if (zhp->zfs_type != ZFS_TYPE_FILESYSTEM)
return (0);
if (zcmd_alloc_dst_nvlist(zhp->zfs_hdl, &zc, 0) != 0)
return (-1);
while ((ret = zfs_do_list_ioctl(zhp, ZFS_IOC_DATASET_LIST_NEXT,
&zc)) == 0) {
/*
* Silently ignore errors, as the only plausible explanation is
* that the pool has since been removed.
*/
if ((nzhp = make_dataset_handle_zc(zhp->zfs_hdl,
&zc)) == NULL) {
continue;
}
if ((ret = func(nzhp, data)) != 0) {
zcmd_free_nvlists(&zc);
return (ret);
}
}
zcmd_free_nvlists(&zc);
return ((ret < 0) ? ret : 0);
}
/*
* Iterate over all snapshots
*/
int
zfs_iter_snapshots(zfs_handle_t *zhp, zfs_iter_f func, void *data)
{
zfs_cmd_t zc = { 0 };
zfs_handle_t *nzhp;
int ret;
if (zhp->zfs_type == ZFS_TYPE_SNAPSHOT)
return (0);
if (zcmd_alloc_dst_nvlist(zhp->zfs_hdl, &zc, 0) != 0)
return (-1);
while ((ret = zfs_do_list_ioctl(zhp, ZFS_IOC_SNAPSHOT_LIST_NEXT,
&zc)) == 0) {
if ((nzhp = make_dataset_handle_zc(zhp->zfs_hdl,
&zc)) == NULL) {
continue;
}
if ((ret = func(nzhp, data)) != 0) {
zcmd_free_nvlists(&zc);
return (ret);
}
}
zcmd_free_nvlists(&zc);
return ((ret < 0) ? ret : 0);
}
/*
* Iterate over all children, snapshots and filesystems
*/
int
zfs_iter_children(zfs_handle_t *zhp, zfs_iter_f func, void *data)
{
int ret;
if ((ret = zfs_iter_filesystems(zhp, func, data)) != 0)
return (ret);
return (zfs_iter_snapshots(zhp, func, data));
}
/*
* Is one dataset name a child dataset of another?
*
@ -2600,18 +2721,19 @@ is_descendant(const char *ds1, const char *ds2)
/*
* Given a complete name, return just the portion that refers to the parent.
* Can return NULL if this is a pool.
* Will return -1 if there is no parent (path is just the name of the
* pool).
*/
static int
parent_name(const char *path, char *buf, size_t buflen)
{
char *loc;
char *slashp;
if ((loc = strrchr(path, '/')) == NULL)
(void) strlcpy(buf, path, buflen);
if ((slashp = strrchr(buf, '/')) == NULL)
return (-1);
(void) strncpy(buf, path, MIN(buflen, loc - path));
buf[loc - path] = '\0';
*slashp = '\0';
return (0);
}
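
For clarity, the rewritten helper behaves as follows (illustrative values, not from the source):

/*
 * parent_name("tank/a/b", buf, len) leaves "tank/a" in buf and
 * returns 0; parent_name("tank", buf, len) returns -1, since a bare
 * pool name has no parent.
 */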
@ -3010,9 +3132,8 @@ zfs_destroy(zfs_handle_t *zhp, boolean_t defer)
}
struct destroydata {
char *snapname;
boolean_t gotone;
boolean_t closezhp;
nvlist_t *nvl;
const char *snapname;
};
static int
@ -3021,24 +3142,19 @@ zfs_check_snap_cb(zfs_handle_t *zhp, void *arg)
struct destroydata *dd = arg;
zfs_handle_t *szhp;
char name[ZFS_MAXNAMELEN];
boolean_t closezhp = dd->closezhp;
int rv = 0;
(void) strlcpy(name, zhp->zfs_name, sizeof (name));
(void) strlcat(name, "@", sizeof (name));
(void) strlcat(name, dd->snapname, sizeof (name));
(void) snprintf(name, sizeof (name),
"%s@%s", zhp->zfs_name, dd->snapname);
szhp = make_dataset_handle(zhp->zfs_hdl, name);
if (szhp) {
dd->gotone = B_TRUE;
verify(nvlist_add_boolean(dd->nvl, name) == 0);
zfs_close(szhp);
}
dd->closezhp = B_TRUE;
if (!dd->gotone)
rv = zfs_iter_filesystems(zhp, zfs_check_snap_cb, arg);
if (closezhp)
zfs_close(zhp);
rv = zfs_iter_filesystems(zhp, zfs_check_snap_cb, dd);
zfs_close(zhp);
return (rv);
}
@ -3048,29 +3164,45 @@ zfs_check_snap_cb(zfs_handle_t *zhp, void *arg)
int
zfs_destroy_snaps(zfs_handle_t *zhp, char *snapname, boolean_t defer)
{
zfs_cmd_t zc = { 0 };
int ret;
struct destroydata dd = { 0 };
dd.snapname = snapname;
(void) zfs_check_snap_cb(zhp, &dd);
verify(nvlist_alloc(&dd.nvl, NV_UNIQUE_NAME, 0) == 0);
(void) zfs_check_snap_cb(zfs_handle_dup(zhp), &dd);
if (!dd.gotone) {
return (zfs_standard_error_fmt(zhp->zfs_hdl, ENOENT,
if (nvlist_next_nvpair(dd.nvl, NULL) == NULL) {
ret = zfs_standard_error_fmt(zhp->zfs_hdl, ENOENT,
dgettext(TEXT_DOMAIN, "cannot destroy '%s@%s'"),
zhp->zfs_name, snapname));
zhp->zfs_name, snapname);
} else {
ret = zfs_destroy_snaps_nvl(zhp, dd.nvl, defer);
}
nvlist_free(dd.nvl);
return (ret);
}
/*
* Destroys all the snapshots named in the nvlist. They must be underneath
* the zhp (either snapshots of it, or snapshots of its descendants).
*/
int
zfs_destroy_snaps_nvl(zfs_handle_t *zhp, nvlist_t *snaps, boolean_t defer)
{
int ret;
zfs_cmd_t zc = { 0 };
(void) strlcpy(zc.zc_name, zhp->zfs_name, sizeof (zc.zc_name));
(void) strlcpy(zc.zc_value, snapname, sizeof (zc.zc_value));
if (zcmd_write_src_nvlist(zhp->zfs_hdl, &zc, snaps) != 0)
return (-1);
zc.zc_defer_destroy = defer;
ret = zfs_ioctl(zhp->zfs_hdl, ZFS_IOC_DESTROY_SNAPS, &zc);
ret = zfs_ioctl(zhp->zfs_hdl, ZFS_IOC_DESTROY_SNAPS_NVL, &zc);
if (ret != 0) {
char errbuf[1024];
(void) snprintf(errbuf, sizeof (errbuf), dgettext(TEXT_DOMAIN,
"cannot destroy '%s@%s'"), zc.zc_name, snapname);
"cannot destroy snapshots in %s"), zc.zc_name);
switch (errno) {
case EEXIST:
@ -3106,7 +3238,7 @@ zfs_clone(zfs_handle_t *zhp, const char *target, nvlist_t *props)
(void) snprintf(errbuf, sizeof (errbuf), dgettext(TEXT_DOMAIN,
"cannot create '%s'"), target);
/* validate the target name */
/* validate the target/clone name */
if (!zfs_validate_name(hdl, target, ZFS_TYPE_FILESYSTEM, B_TRUE))
return (zfs_error(hdl, EZFS_INVALIDNAME, errbuf));
@ -3442,42 +3574,6 @@ zfs_rollback(zfs_handle_t *zhp, zfs_handle_t *snap, boolean_t force)
return (err);
}
/*
* Iterate over all dependents for a given dataset. This includes both
* hierarchical dependents (children) and data dependents (snapshots and
* clones). The bulk of the processing occurs in get_dependents() in
* libzfs_graph.c.
*/
int
zfs_iter_dependents(zfs_handle_t *zhp, boolean_t allowrecursion,
zfs_iter_f func, void *data)
{
char **dependents;
size_t count;
int i;
zfs_handle_t *child;
int ret = 0;
if (get_dependents(zhp->zfs_hdl, allowrecursion, zhp->zfs_name,
&dependents, &count) != 0)
return (-1);
for (i = 0; i < count; i++) {
if ((child = make_dataset_handle(zhp->zfs_hdl,
dependents[i])) == NULL)
continue;
if ((ret = func(child, data)) != 0)
break;
}
for (i = 0; i < count; i++)
free(dependents[i]);
free(dependents);
return (ret);
}
/*
* Renames the given dataset.
*/
@ -3947,7 +4043,7 @@ zfs_userspace(zfs_handle_t *zhp, zfs_userquota_prop_t type,
int error;
zfs_useracct_t buf[100];
(void) strncpy(zc.zc_name, zhp->zfs_name, sizeof (zc.zc_name));
(void) strlcpy(zc.zc_name, zhp->zfs_name, sizeof (zc.zc_name));
zc.zc_objset_type = type;
zc.zc_nvlist_dst = (uintptr_t)buf;


@ -1,653 +0,0 @@
/*
* CDDL HEADER START
*
* The contents of this file are subject to the terms of the
* Common Development and Distribution License (the "License").
* You may not use this file except in compliance with the License.
*
* You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
* or http://www.opensolaris.org/os/licensing.
* See the License for the specific language governing permissions
* and limitations under the License.
*
* When distributing Covered Code, include this CDDL HEADER in each
* file and include the License file at usr/src/OPENSOLARIS.LICENSE.
* If applicable, add the following below this CDDL HEADER, with the
* fields enclosed by brackets "[]" replaced with your own identifying
* information: Portions Copyright [yyyy] [name of copyright owner]
*
* CDDL HEADER END
*/
/*
* Copyright 2009 Sun Microsystems, Inc. All rights reserved.
* Use is subject to license terms.
*/
/*
* Iterate over all children of the current object. This includes the normal
* dataset hierarchy, but also arbitrary hierarchies due to clones. We want to
* walk all datasets in the pool, and construct a directed graph of the form:
*
* home
* |
* +----+----+
* | |
* v v ws
* bar baz |
* | |
* v v
* @yesterday ----> foo
*
* In order to construct this graph, we have to walk every dataset in the pool,
* because the clone parent is stored as a property of the child, not the
* parent. The parent only keeps track of the number of clones.
*
* In the normal case (without clones) this would be rather expensive. To avoid
* unnecessary computation, we first try a walk of the subtree hierarchy
* starting from the initial node. At each dataset, we construct a node in the
* graph and an edge leading from its parent. If we don't see any snapshots
* with a non-zero clone count, then we are finished.
*
* If we do find a cloned snapshot, then we finish the walk of the current
* subtree, but indicate that we need to do a complete walk. We then perform a
* global walk of all datasets, avoiding the subtree we already processed.
*
* At the end of this, we'll end up with a directed graph of all relevant (and
* possible some irrelevant) datasets in the system. We need to both find our
* limiting subgraph and determine a safe ordering in which to destroy the
* datasets. We do a topological ordering of our graph starting at our target
* dataset, and then walk the results in reverse.
*
* It's possible for the graph to have cycles if, for example, the user renames
* a clone to be the parent of its origin snapshot. The user can request to
* generate an error in this case, or ignore the cycle and continue.
*
* When removing datasets, we want to destroy the snapshots in chronological
* order (because this is the most efficient method). In order to accomplish
* this, we store the creation transaction group with each vertex and keep each
* vertex's edges sorted according to this value. The topological sort will
* automatically walk the snapshots in the correct order.
*/
#include <assert.h>
#include <libintl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <libzfs.h>
#include "libzfs_impl.h"
#include "zfs_namecheck.h"
#define MIN_EDGECOUNT 4
/*
* Vertex structure. Indexed by dataset name, this structure maintains a list
* of edges to other vertices.
*/
struct zfs_edge;
typedef struct zfs_vertex {
char zv_dataset[ZFS_MAXNAMELEN];
struct zfs_vertex *zv_next;
int zv_visited;
uint64_t zv_txg;
struct zfs_edge **zv_edges;
int zv_edgecount;
int zv_edgealloc;
} zfs_vertex_t;
enum {
VISIT_SEEN = 1,
VISIT_SORT_PRE,
VISIT_SORT_POST
};
/*
* Edge structure. Simply maintains a pointer to the destination vertex. There
* is no need to store the source vertex, since we only use edges in the context
* of the source vertex.
*/
typedef struct zfs_edge {
zfs_vertex_t *ze_dest;
struct zfs_edge *ze_next;
} zfs_edge_t;
#define ZFS_GRAPH_SIZE 1027 /* this could be dynamic some day */
/*
* Graph structure. Vertices are maintained in a hash indexed by dataset name.
*/
typedef struct zfs_graph {
zfs_vertex_t **zg_hash;
size_t zg_size;
size_t zg_nvertex;
const char *zg_root;
int zg_clone_count;
} zfs_graph_t;
/*
* Allocate a new edge pointing to the target vertex.
*/
static zfs_edge_t *
zfs_edge_create(libzfs_handle_t *hdl, zfs_vertex_t *dest)
{
zfs_edge_t *zep = zfs_alloc(hdl, sizeof (zfs_edge_t));
if (zep == NULL)
return (NULL);
zep->ze_dest = dest;
return (zep);
}
/*
* Destroy an edge.
*/
static void
zfs_edge_destroy(zfs_edge_t *zep)
{
free(zep);
}
/*
* Allocate a new vertex with the given name.
*/
static zfs_vertex_t *
zfs_vertex_create(libzfs_handle_t *hdl, const char *dataset)
{
zfs_vertex_t *zvp = zfs_alloc(hdl, sizeof (zfs_vertex_t));
if (zvp == NULL)
return (NULL);
assert(strlen(dataset) < ZFS_MAXNAMELEN);
(void) strlcpy(zvp->zv_dataset, dataset, sizeof (zvp->zv_dataset));
if ((zvp->zv_edges = zfs_alloc(hdl,
MIN_EDGECOUNT * sizeof (void *))) == NULL) {
free(zvp);
return (NULL);
}
zvp->zv_edgealloc = MIN_EDGECOUNT;
return (zvp);
}
/*
* Destroy a vertex. Frees up any associated edges.
*/
static void
zfs_vertex_destroy(zfs_vertex_t *zvp)
{
int i;
for (i = 0; i < zvp->zv_edgecount; i++)
zfs_edge_destroy(zvp->zv_edges[i]);
free(zvp->zv_edges);
free(zvp);
}
/*
* Given a vertex, add an edge to the destination vertex.
*/
static int
zfs_vertex_add_edge(libzfs_handle_t *hdl, zfs_vertex_t *zvp,
zfs_vertex_t *dest)
{
zfs_edge_t *zep = zfs_edge_create(hdl, dest);
if (zep == NULL)
return (-1);
if (zvp->zv_edgecount == zvp->zv_edgealloc) {
void *ptr;
if ((ptr = zfs_realloc(hdl, zvp->zv_edges,
zvp->zv_edgealloc * sizeof (void *),
zvp->zv_edgealloc * 2 * sizeof (void *))) == NULL)
return (-1);
zvp->zv_edges = ptr;
zvp->zv_edgealloc *= 2;
}
zvp->zv_edges[zvp->zv_edgecount++] = zep;
return (0);
}
static int
zfs_edge_compare(const void *a, const void *b)
{
const zfs_edge_t *ea = *((zfs_edge_t **)a);
const zfs_edge_t *eb = *((zfs_edge_t **)b);
if (ea->ze_dest->zv_txg < eb->ze_dest->zv_txg)
return (-1);
if (ea->ze_dest->zv_txg > eb->ze_dest->zv_txg)
return (1);
return (0);
}
/*
* Sort the given vertex edges according to the creation txg of each vertex.
*/
static void
zfs_vertex_sort_edges(zfs_vertex_t *zvp)
{
if (zvp->zv_edgecount == 0)
return;
qsort(zvp->zv_edges, zvp->zv_edgecount, sizeof (void *),
zfs_edge_compare);
}
/*
* Construct a new graph object. We allow the size to be specified as a
* parameter so in the future we can size the hash according to the number of
* datasets in the pool.
*/
static zfs_graph_t *
zfs_graph_create(libzfs_handle_t *hdl, const char *dataset, size_t size)
{
zfs_graph_t *zgp = zfs_alloc(hdl, sizeof (zfs_graph_t));
if (zgp == NULL)
return (NULL);
zgp->zg_size = size;
if ((zgp->zg_hash = zfs_alloc(hdl,
size * sizeof (zfs_vertex_t *))) == NULL) {
free(zgp);
return (NULL);
}
zgp->zg_root = dataset;
zgp->zg_clone_count = 0;
return (zgp);
}
/*
* Destroy a graph object. We have to iterate over all the hash chains,
* destroying each vertex in the process.
*/
static void
zfs_graph_destroy(zfs_graph_t *zgp)
{
int i;
zfs_vertex_t *current, *next;
for (i = 0; i < zgp->zg_size; i++) {
current = zgp->zg_hash[i];
while (current != NULL) {
next = current->zv_next;
zfs_vertex_destroy(current);
current = next;
}
}
free(zgp->zg_hash);
free(zgp);
}
/*
* Graph hash function. Classic bernstein k=33 hash function, taken from
* usr/src/cmd/sgs/tools/common/strhash.c
*/
static size_t
zfs_graph_hash(zfs_graph_t *zgp, const char *str)
{
size_t hash = 5381;
int c;
while ((c = *str++) != 0)
hash = ((hash << 5) + hash) + c; /* hash * 33 + c */
return (hash % zgp->zg_size);
}
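/*
 * Illustrative sketch (not part of this change): the Bernstein hash above,
 * reduced to a standalone helper plus a sample lookup. The bucket count and
 * dataset name are arbitrary; 1027 matches ZFS_GRAPH_SIZE above.
 */
static size_t
djb2_mod(const char *str, size_t nbuckets)
{
	size_t hash = 5381;
	int c;

	while ((c = *str++) != 0)
		hash = ((hash << 5) + hash) + c;	/* hash * 33 + c */
	return (hash % nbuckets);
}

static size_t
djb2_example(void)
{
	/* bucket index a vertex for "tank/home@yesterday" would land in */
	return (djb2_mod("tank/home@yesterday", 1027));
}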
/*
* Given a dataset name, finds the associated vertex, creating it if necessary.
*/
static zfs_vertex_t *
zfs_graph_lookup(libzfs_handle_t *hdl, zfs_graph_t *zgp, const char *dataset,
uint64_t txg)
{
size_t idx = zfs_graph_hash(zgp, dataset);
zfs_vertex_t *zvp;
for (zvp = zgp->zg_hash[idx]; zvp != NULL; zvp = zvp->zv_next) {
if (strcmp(zvp->zv_dataset, dataset) == 0) {
if (zvp->zv_txg == 0)
zvp->zv_txg = txg;
return (zvp);
}
}
if ((zvp = zfs_vertex_create(hdl, dataset)) == NULL)
return (NULL);
zvp->zv_next = zgp->zg_hash[idx];
zvp->zv_txg = txg;
zgp->zg_hash[idx] = zvp;
zgp->zg_nvertex++;
return (zvp);
}
/*
* Given two dataset names, create an edge between them. For the source vertex,
* mark 'zv_visited' to indicate that we have seen this vertex, and not simply
* created it as a destination of another edge. If 'dest' is NULL, then this
* is an individual vertex (i.e. the starting vertex), so don't add an edge.
*/
static int
zfs_graph_add(libzfs_handle_t *hdl, zfs_graph_t *zgp, const char *source,
const char *dest, uint64_t txg)
{
zfs_vertex_t *svp, *dvp;
if ((svp = zfs_graph_lookup(hdl, zgp, source, 0)) == NULL)
return (-1);
svp->zv_visited = VISIT_SEEN;
if (dest != NULL) {
dvp = zfs_graph_lookup(hdl, zgp, dest, txg);
if (dvp == NULL)
return (-1);
if (zfs_vertex_add_edge(hdl, svp, dvp) != 0)
return (-1);
}
return (0);
}
/*
* Iterate over all children of the given dataset, adding any vertices
* as necessary. Returns -1 if there was an error, or 0 otherwise.
* This is a simple recursive algorithm - the ZFS namespace typically
* is very flat. We manually invoke the necessary ioctl() calls to
* avoid the overhead and additional semantics of zfs_open().
*/
static int
iterate_children(libzfs_handle_t *hdl, zfs_graph_t *zgp, const char *dataset)
{
zfs_cmd_t zc = { 0 };
zfs_vertex_t *zvp;
/*
* Look up the source vertex, and avoid it if we've seen it before.
*/
zvp = zfs_graph_lookup(hdl, zgp, dataset, 0);
if (zvp == NULL)
return (-1);
if (zvp->zv_visited == VISIT_SEEN)
return (0);
/*
* Iterate over all children
*/
for ((void) strlcpy(zc.zc_name, dataset, sizeof (zc.zc_name));
ioctl(hdl->libzfs_fd, ZFS_IOC_DATASET_LIST_NEXT, &zc) == 0;
(void) strlcpy(zc.zc_name, dataset, sizeof (zc.zc_name))) {
/*
* Get statistics for this dataset, to determine the type of the
* dataset and clone statistics. If this fails, the dataset has
* since been removed, and we're pretty much screwed anyway.
*/
zc.zc_objset_stats.dds_origin[0] = '\0';
if (ioctl(hdl->libzfs_fd, ZFS_IOC_OBJSET_STATS, &zc) != 0)
continue;
if (zc.zc_objset_stats.dds_origin[0] != '\0') {
if (zfs_graph_add(hdl, zgp,
zc.zc_objset_stats.dds_origin, zc.zc_name,
zc.zc_objset_stats.dds_creation_txg) != 0)
return (-1);
/*
* Count origins only if they are contained in the graph
*/
if (isa_child_of(zc.zc_objset_stats.dds_origin,
zgp->zg_root))
zgp->zg_clone_count--;
}
/*
* Add an edge between the parent and the child.
*/
if (zfs_graph_add(hdl, zgp, dataset, zc.zc_name,
zc.zc_objset_stats.dds_creation_txg) != 0)
return (-1);
/*
* Recursively visit child
*/
if (iterate_children(hdl, zgp, zc.zc_name))
return (-1);
}
/*
* Now iterate over all snapshots.
*/
bzero(&zc, sizeof (zc));
for ((void) strlcpy(zc.zc_name, dataset, sizeof (zc.zc_name));
ioctl(hdl->libzfs_fd, ZFS_IOC_SNAPSHOT_LIST_NEXT, &zc) == 0;
(void) strlcpy(zc.zc_name, dataset, sizeof (zc.zc_name))) {
/*
* Get statistics for this dataset, to determine the type of the
* dataset and clone statistics. If this fails, the dataset has
* since been removed, and we're pretty much screwed anyway.
*/
if (ioctl(hdl->libzfs_fd, ZFS_IOC_OBJSET_STATS, &zc) != 0)
continue;
/*
* Add an edge between the parent and the child.
*/
if (zfs_graph_add(hdl, zgp, dataset, zc.zc_name,
zc.zc_objset_stats.dds_creation_txg) != 0)
return (-1);
zgp->zg_clone_count += zc.zc_objset_stats.dds_num_clones;
}
zvp->zv_visited = VISIT_SEEN;
return (0);
}
/*
* Returns false if there are no snapshots with dependent clones in this
* subtree or if all of those clones are also in this subtree. Returns
* true if there is an error or there are external dependents.
*/
static boolean_t
external_dependents(libzfs_handle_t *hdl, zfs_graph_t *zgp, const char *dataset)
{
zfs_cmd_t zc = { 0 };
/*
* Check whether this dataset is a clone or has clones since
* iterate_children() only checks the children.
*/
(void) strlcpy(zc.zc_name, dataset, sizeof (zc.zc_name));
if (ioctl(hdl->libzfs_fd, ZFS_IOC_OBJSET_STATS, &zc) != 0)
return (B_TRUE);
if (zc.zc_objset_stats.dds_origin[0] != '\0') {
if (zfs_graph_add(hdl, zgp,
zc.zc_objset_stats.dds_origin, zc.zc_name,
zc.zc_objset_stats.dds_creation_txg) != 0)
return (B_TRUE);
if (isa_child_of(zc.zc_objset_stats.dds_origin, dataset))
zgp->zg_clone_count--;
}
if ((zc.zc_objset_stats.dds_num_clones) ||
iterate_children(hdl, zgp, dataset))
return (B_TRUE);
return (zgp->zg_clone_count != 0);
}
/*
* Construct a complete graph of all necessary vertices. First, iterate over
* only our object's children. If no cloned snapshots are found, or all of
* the cloned snapshots are in this subtree then return a graph of the subtree.
* Otherwise, start at the root of the pool and iterate over all datasets.
*/
static zfs_graph_t *
construct_graph(libzfs_handle_t *hdl, const char *dataset)
{
zfs_graph_t *zgp = zfs_graph_create(hdl, dataset, ZFS_GRAPH_SIZE);
int ret = 0;
if (zgp == NULL)
return (zgp);
if ((strchr(dataset, '/') == NULL) ||
(external_dependents(hdl, zgp, dataset))) {
/*
* Determine pool name and try again.
*/
int len = strcspn(dataset, "/@") + 1;
char *pool = zfs_alloc(hdl, len);
if (pool == NULL) {
zfs_graph_destroy(zgp);
return (NULL);
}
(void) strlcpy(pool, dataset, len);
if (iterate_children(hdl, zgp, pool) == -1 ||
zfs_graph_add(hdl, zgp, pool, NULL, 0) != 0) {
free(pool);
zfs_graph_destroy(zgp);
return (NULL);
}
free(pool);
}
if (ret == -1 || zfs_graph_add(hdl, zgp, dataset, NULL, 0) != 0) {
zfs_graph_destroy(zgp);
return (NULL);
}
return (zgp);
}
/*
* Given a graph, do a recursive topological sort into the given array. This is
* really just a depth first search, so that the deepest nodes appear first.
 * We hijack the 'zv_visited' marker to avoid visiting the same vertex twice.
*/
static int
topo_sort(libzfs_handle_t *hdl, boolean_t allowrecursion, char **result,
size_t *idx, zfs_vertex_t *zgv)
{
int i;
if (zgv->zv_visited == VISIT_SORT_PRE && !allowrecursion) {
/*
* If we've already seen this vertex as part of our depth-first
* search, then we have a cyclic dependency, and we must return
* an error.
*/
zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
"recursive dependency at '%s'"),
zgv->zv_dataset);
return (zfs_error(hdl, EZFS_RECURSIVE,
dgettext(TEXT_DOMAIN,
"cannot determine dependent datasets")));
} else if (zgv->zv_visited >= VISIT_SORT_PRE) {
/*
* If we've already processed this as part of the topological
* sort, then don't bother doing so again.
*/
return (0);
}
zgv->zv_visited = VISIT_SORT_PRE;
/* avoid doing a search if we don't have to */
zfs_vertex_sort_edges(zgv);
for (i = 0; i < zgv->zv_edgecount; i++) {
if (topo_sort(hdl, allowrecursion, result, idx,
zgv->zv_edges[i]->ze_dest) != 0)
return (-1);
}
/* we may have visited this in the course of the above */
if (zgv->zv_visited == VISIT_SORT_POST)
return (0);
if ((result[*idx] = zfs_alloc(hdl,
strlen(zgv->zv_dataset) + 1)) == NULL)
return (-1);
(void) strcpy(result[*idx], zgv->zv_dataset);
*idx += 1;
zgv->zv_visited = VISIT_SORT_POST;
return (0);
}
/*
* The only public interface for this file. Do the dirty work of constructing a
 * child list for the given object. Construct the graph, do the topological
* sort, and then return the array of strings to the caller.
*
* The 'allowrecursion' parameter controls behavior when cycles are found. If
 * it is set, then the cycle is ignored and the results returned as if the cycle
* did not exist. If it is not set, then the routine will generate an error if
* a cycle is found.
*/
int
get_dependents(libzfs_handle_t *hdl, boolean_t allowrecursion,
const char *dataset, char ***result, size_t *count)
{
zfs_graph_t *zgp;
zfs_vertex_t *zvp;
if ((zgp = construct_graph(hdl, dataset)) == NULL)
return (-1);
if ((*result = zfs_alloc(hdl,
zgp->zg_nvertex * sizeof (char *))) == NULL) {
zfs_graph_destroy(zgp);
return (-1);
}
if ((zvp = zfs_graph_lookup(hdl, zgp, dataset, 0)) == NULL) {
free(*result);
zfs_graph_destroy(zgp);
return (-1);
}
*count = 0;
if (topo_sort(hdl, allowrecursion, *result, count, zvp) != 0) {
free(*result);
zfs_graph_destroy(zgp);
return (-1);
}
/*
* Get rid of the last entry, which is our starting vertex and not
* strictly a dependent.
*/
assert(*count > 0);
free((*result)[*count - 1]);
(*count)--;
zfs_graph_destroy(zgp);
return (0);
}


@ -23,6 +23,7 @@
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 Pawel Jakub Dawidek <pawel@dawidek.net>.
* All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#ifndef _LIBFS_IMPL_H
@ -116,7 +117,7 @@ struct zpool_handle {
diskaddr_t zpool_start_block;
};
typedef enum {
PROTO_NFS = 0,
PROTO_SMB = 1,
PROTO_END = 2
@ -148,6 +149,7 @@ int zpool_standard_error_fmt(libzfs_handle_t *, int, const char *, ...);
int get_dependents(libzfs_handle_t *, boolean_t, const char *, char ***,
size_t *);
zfs_handle_t *make_dataset_handle_zc(libzfs_handle_t *, zfs_cmd_t *);
int zprop_parse_value(libzfs_handle_t *, nvpair_t *, int, zfs_type_t,


@ -20,6 +20,8 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
/*
@ -435,7 +437,7 @@ get_configs(libzfs_handle_t *hdl, pool_list_t *pl, boolean_t active_ok)
uint_t i, nspares, nl2cache;
boolean_t config_seen;
uint64_t best_txg;
char *name, *hostname, *comment;
uint64_t version, guid;
uint_t children = 0;
nvlist_t **child = NULL;
@ -524,6 +526,7 @@ get_configs(libzfs_handle_t *hdl, pool_list_t *pl, boolean_t active_ok)
* version
* pool guid
* name
* comment (if available)
* pool state
* hostid (if available)
* hostname (if available)
@ -545,11 +548,24 @@ get_configs(libzfs_handle_t *hdl, pool_list_t *pl, boolean_t active_ok)
if (nvlist_add_string(config,
ZPOOL_CONFIG_POOL_NAME, name) != 0)
goto nomem;
/*
 * COMMENT is optional; don't bail if it's not
 * there. Instead, set it to NULL.
*/
if (nvlist_lookup_string(tmp,
ZPOOL_CONFIG_COMMENT, &comment) != 0)
comment = NULL;
else if (nvlist_add_string(config,
ZPOOL_CONFIG_COMMENT, comment) != 0)
goto nomem;
verify(nvlist_lookup_uint64(tmp,
ZPOOL_CONFIG_POOL_STATE, &state) == 0);
if (nvlist_add_uint64(config,
ZPOOL_CONFIG_POOL_STATE, state) != 0)
goto nomem;
hostid = 0;
if (nvlist_lookup_uint64(tmp,
ZPOOL_CONFIG_HOSTID, &hostid) == 0) {


@ -0,0 +1,462 @@
/*
* CDDL HEADER START
*
* The contents of this file are subject to the terms of the
* Common Development and Distribution License (the "License").
* You may not use this file except in compliance with the License.
*
* You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
* or http://www.opensolaris.org/os/licensing.
* See the License for the specific language governing permissions
* and limitations under the License.
*
* When distributing Covered Code, include this CDDL HEADER in each
* file and include the License file at usr/src/OPENSOLARIS.LICENSE.
* If applicable, add the following below this CDDL HEADER, with the
* fields enclosed by brackets "[]" replaced with your own identifying
* information: Portions Copyright [yyyy] [name of copyright owner]
*
* CDDL HEADER END
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2010 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <unistd.h>
#include <stddef.h>
#include <libintl.h>
#include <libzfs.h>
#include "libzfs_impl.h"
int
zfs_iter_clones(zfs_handle_t *zhp, zfs_iter_f func, void *data)
{
nvlist_t *nvl = zfs_get_clones_nvl(zhp);
nvpair_t *pair;
if (nvl == NULL)
return (0);
for (pair = nvlist_next_nvpair(nvl, NULL); pair != NULL;
pair = nvlist_next_nvpair(nvl, pair)) {
zfs_handle_t *clone = zfs_open(zhp->zfs_hdl, nvpair_name(pair),
ZFS_TYPE_FILESYSTEM | ZFS_TYPE_VOLUME);
if (clone != NULL) {
int err = func(clone, data);
if (err != 0)
return (err);
}
}
return (0);
}
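/*
 * Usage sketch (illustrative, not part of this change): print every clone of
 * a snapshot. Assumes a libzfs environment; the names are hypothetical.
 * zfs_iter_clones() does not close the handles it passes to the callback,
 * so the callback closes them.
 */
static int
print_clone_cb(zfs_handle_t *clone, void *data)
{
	(void) printf("clone: %s\n", zfs_get_name(clone));
	zfs_close(clone);
	return (0);
}

static void
print_clones_example(libzfs_handle_t *hdl)
{
	zfs_handle_t *snap = zfs_open(hdl, "tank/fs@snap",
	    ZFS_TYPE_SNAPSHOT);

	if (snap != NULL) {
		(void) zfs_iter_clones(snap, print_clone_cb, NULL);
		zfs_close(snap);
	}
}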
static int
zfs_do_list_ioctl(zfs_handle_t *zhp, unsigned long arg, zfs_cmd_t *zc)
{
int rc;
uint64_t orig_cookie;
orig_cookie = zc->zc_cookie;
top:
(void) strlcpy(zc->zc_name, zhp->zfs_name, sizeof (zc->zc_name));
rc = ioctl(zhp->zfs_hdl->libzfs_fd, arg, zc);
if (rc == -1) {
switch (errno) {
case ENOMEM:
/* expand nvlist memory and try again */
if (zcmd_expand_dst_nvlist(zhp->zfs_hdl, zc) != 0) {
zcmd_free_nvlists(zc);
return (-1);
}
zc->zc_cookie = orig_cookie;
goto top;
/*
* An errno value of ESRCH indicates normal completion.
* If ENOENT is returned, then the underlying dataset
* has been removed since we obtained the handle.
*/
case ESRCH:
case ENOENT:
rc = 1;
break;
default:
rc = zfs_standard_error(zhp->zfs_hdl, errno,
dgettext(TEXT_DOMAIN,
"cannot iterate filesystems"));
break;
}
}
return (rc);
}
/*
* Iterate over all child filesystems
*/
int
zfs_iter_filesystems(zfs_handle_t *zhp, zfs_iter_f func, void *data)
{
zfs_cmd_t zc = { 0 };
zfs_handle_t *nzhp;
int ret;
if (zhp->zfs_type != ZFS_TYPE_FILESYSTEM)
return (0);
if (zcmd_alloc_dst_nvlist(zhp->zfs_hdl, &zc, 0) != 0)
return (-1);
while ((ret = zfs_do_list_ioctl(zhp, ZFS_IOC_DATASET_LIST_NEXT,
&zc)) == 0) {
/*
* Silently ignore errors, as the only plausible explanation is
* that the pool has since been removed.
*/
if ((nzhp = make_dataset_handle_zc(zhp->zfs_hdl,
&zc)) == NULL) {
continue;
}
if ((ret = func(nzhp, data)) != 0) {
zcmd_free_nvlists(&zc);
return (ret);
}
}
zcmd_free_nvlists(&zc);
return ((ret < 0) ? ret : 0);
}
/*
* Iterate over all snapshots
*/
int
zfs_iter_snapshots(zfs_handle_t *zhp, zfs_iter_f func, void *data)
{
zfs_cmd_t zc = { 0 };
zfs_handle_t *nzhp;
int ret;
if (zhp->zfs_type == ZFS_TYPE_SNAPSHOT)
return (0);
if (zcmd_alloc_dst_nvlist(zhp->zfs_hdl, &zc, 0) != 0)
return (-1);
while ((ret = zfs_do_list_ioctl(zhp, ZFS_IOC_SNAPSHOT_LIST_NEXT,
&zc)) == 0) {
if ((nzhp = make_dataset_handle_zc(zhp->zfs_hdl,
&zc)) == NULL) {
continue;
}
if ((ret = func(nzhp, data)) != 0) {
zcmd_free_nvlists(&zc);
return (ret);
}
}
zcmd_free_nvlists(&zc);
return ((ret < 0) ? ret : 0);
}
/*
* Routines for dealing with the sorted snapshot functionality
*/
typedef struct zfs_node {
zfs_handle_t *zn_handle;
avl_node_t zn_avlnode;
} zfs_node_t;
static int
zfs_sort_snaps(zfs_handle_t *zhp, void *data)
{
avl_tree_t *avl = data;
zfs_node_t *node;
zfs_node_t search;
search.zn_handle = zhp;
node = avl_find(avl, &search, NULL);
if (node) {
/*
* If this snapshot was renamed while we were creating the
* AVL tree, it's possible that we already inserted it under
* its old name. Remove the old handle before adding the new
* one.
*/
zfs_close(node->zn_handle);
avl_remove(avl, node);
free(node);
}
node = zfs_alloc(zhp->zfs_hdl, sizeof (zfs_node_t));
node->zn_handle = zhp;
avl_add(avl, node);
return (0);
}
static int
zfs_snapshot_compare(const void *larg, const void *rarg)
{
zfs_handle_t *l = ((zfs_node_t *)larg)->zn_handle;
zfs_handle_t *r = ((zfs_node_t *)rarg)->zn_handle;
uint64_t lcreate, rcreate;
/*
* Sort them according to creation time. We use the hidden
* CREATETXG property to get an absolute ordering of snapshots.
*/
lcreate = zfs_prop_get_int(l, ZFS_PROP_CREATETXG);
rcreate = zfs_prop_get_int(r, ZFS_PROP_CREATETXG);
if (lcreate < rcreate)
return (-1);
else if (lcreate > rcreate)
return (+1);
else
return (0);
}
int
zfs_iter_snapshots_sorted(zfs_handle_t *zhp, zfs_iter_f callback, void *data)
{
int ret = 0;
zfs_node_t *node;
avl_tree_t avl;
void *cookie = NULL;
avl_create(&avl, zfs_snapshot_compare,
sizeof (zfs_node_t), offsetof(zfs_node_t, zn_avlnode));
ret = zfs_iter_snapshots(zhp, zfs_sort_snaps, &avl);
for (node = avl_first(&avl); node != NULL; node = AVL_NEXT(&avl, node))
ret |= callback(node->zn_handle, data);
while ((node = avl_destroy_nodes(&avl, &cookie)) != NULL)
free(node);
avl_destroy(&avl);
return (ret);
}
typedef struct {
char *ssa_first;
char *ssa_last;
boolean_t ssa_seenfirst;
boolean_t ssa_seenlast;
zfs_iter_f ssa_func;
void *ssa_arg;
} snapspec_arg_t;
static int
snapspec_cb(zfs_handle_t *zhp, void *arg)
{
snapspec_arg_t *ssa = arg;
char *shortsnapname;
int err = 0;
if (ssa->ssa_seenlast)
return (0);
shortsnapname = zfs_strdup(zhp->zfs_hdl,
strchr(zfs_get_name(zhp), '@') + 1);
if (!ssa->ssa_seenfirst && strcmp(shortsnapname, ssa->ssa_first) == 0)
ssa->ssa_seenfirst = B_TRUE;
if (ssa->ssa_seenfirst) {
err = ssa->ssa_func(zhp, ssa->ssa_arg);
} else {
zfs_close(zhp);
}
if (strcmp(shortsnapname, ssa->ssa_last) == 0)
ssa->ssa_seenlast = B_TRUE;
free(shortsnapname);
return (err);
}
/*
* spec is a string like "A,B%C,D"
*
* <snaps>, where <snaps> can be:
* <snap> (single snapshot)
* <snap>%<snap> (range of snapshots, inclusive)
* %<snap> (range of snapshots, starting with earliest)
* <snap>% (range of snapshots, ending with last)
* % (all snapshots)
* <snaps>[,...] (comma separated list of the above)
*
 * If a snapshot cannot be opened, continue trying to open the others, but
 * return ENOENT at the end. (A usage sketch follows this function.)
*/
int
zfs_iter_snapspec(zfs_handle_t *fs_zhp, const char *spec_orig,
zfs_iter_f func, void *arg)
{
char buf[ZFS_MAXNAMELEN];
char *comma_separated, *cp;
int err = 0;
int ret = 0;
(void) strlcpy(buf, spec_orig, sizeof (buf));
cp = buf;
while ((comma_separated = strsep(&cp, ",")) != NULL) {
char *pct = strchr(comma_separated, '%');
if (pct != NULL) {
snapspec_arg_t ssa = { 0 };
ssa.ssa_func = func;
ssa.ssa_arg = arg;
if (pct == comma_separated)
ssa.ssa_seenfirst = B_TRUE;
else
ssa.ssa_first = comma_separated;
*pct = '\0';
ssa.ssa_last = pct + 1;
/*
* If there is a lastname specified, make sure it
* exists.
*/
if (ssa.ssa_last[0] != '\0') {
char snapname[ZFS_MAXNAMELEN];
(void) snprintf(snapname, sizeof (snapname),
"%s@%s", zfs_get_name(fs_zhp),
ssa.ssa_last);
if (!zfs_dataset_exists(fs_zhp->zfs_hdl,
snapname, ZFS_TYPE_SNAPSHOT)) {
ret = ENOENT;
continue;
}
}
err = zfs_iter_snapshots_sorted(fs_zhp,
snapspec_cb, &ssa);
if (ret == 0)
ret = err;
if (ret == 0 && (!ssa.ssa_seenfirst ||
(ssa.ssa_last[0] != '\0' && !ssa.ssa_seenlast))) {
ret = ENOENT;
}
} else {
char snapname[ZFS_MAXNAMELEN];
zfs_handle_t *snap_zhp;
(void) snprintf(snapname, sizeof (snapname), "%s@%s",
zfs_get_name(fs_zhp), comma_separated);
snap_zhp = make_dataset_handle(fs_zhp->zfs_hdl,
snapname);
if (snap_zhp == NULL) {
ret = ENOENT;
continue;
}
err = func(snap_zhp, arg);
if (ret == 0)
ret = err;
}
}
return (ret);
}
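/*
 * Usage sketch (illustrative, not part of this change): count the snapshots
 * matched by a range spec. With snapshots @a, @b, @c, @d on a hypothetical
 * "tank/fs", the spec "a,b%d" visits @a plus the inclusive range @b..@d in
 * creation order, so the count below would be 4.
 */
static int
count_snap_cb(zfs_handle_t *zhp, void *arg)
{
	int *countp = arg;

	(*countp)++;
	zfs_close(zhp);
	return (0);
}

static int
snapspec_example(libzfs_handle_t *hdl)
{
	zfs_handle_t *fs = zfs_open(hdl, "tank/fs", ZFS_TYPE_FILESYSTEM);
	int count = 0;

	if (fs == NULL)
		return (-1);
	(void) zfs_iter_snapspec(fs, "a,b%d", count_snap_cb, &count);
	zfs_close(fs);
	return (count);
}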
/*
* Iterate over all children, snapshots and filesystems
*/
int
zfs_iter_children(zfs_handle_t *zhp, zfs_iter_f func, void *data)
{
int ret;
if ((ret = zfs_iter_filesystems(zhp, func, data)) != 0)
return (ret);
return (zfs_iter_snapshots(zhp, func, data));
}
typedef struct iter_stack_frame {
struct iter_stack_frame *next;
zfs_handle_t *zhp;
} iter_stack_frame_t;
typedef struct iter_dependents_arg {
boolean_t first;
boolean_t allowrecursion;
iter_stack_frame_t *stack;
zfs_iter_f func;
void *data;
} iter_dependents_arg_t;
static int
iter_dependents_cb(zfs_handle_t *zhp, void *arg)
{
iter_dependents_arg_t *ida = arg;
int err;
boolean_t first = ida->first;
ida->first = B_FALSE;
if (zhp->zfs_type == ZFS_TYPE_SNAPSHOT) {
err = zfs_iter_clones(zhp, iter_dependents_cb, ida);
} else {
iter_stack_frame_t isf;
iter_stack_frame_t *f;
/*
* check if there is a cycle by seeing if this fs is already
* on the stack.
*/
for (f = ida->stack; f != NULL; f = f->next) {
if (f->zhp->zfs_dmustats.dds_guid ==
zhp->zfs_dmustats.dds_guid) {
if (ida->allowrecursion) {
zfs_close(zhp);
return (0);
} else {
zfs_error_aux(zhp->zfs_hdl,
dgettext(TEXT_DOMAIN,
"recursive dependency at '%s'"),
zfs_get_name(zhp));
err = zfs_error(zhp->zfs_hdl,
EZFS_RECURSIVE,
dgettext(TEXT_DOMAIN,
"cannot determine dependent "
"datasets"));
zfs_close(zhp);
return (err);
}
}
}
isf.zhp = zhp;
isf.next = ida->stack;
ida->stack = &isf;
err = zfs_iter_filesystems(zhp, iter_dependents_cb, ida);
if (err == 0)
err = zfs_iter_snapshots(zhp, iter_dependents_cb, ida);
ida->stack = isf.next;
}
if (!first && err == 0)
err = ida->func(zhp, ida->data);
return (err);
}
int
zfs_iter_dependents(zfs_handle_t *zhp, boolean_t allowrecursion,
zfs_iter_f func, void *data)
{
iter_dependents_arg_t ida;
ida.allowrecursion = allowrecursion;
ida.stack = NULL;
ida.func = func;
ida.data = data;
ida.first = B_TRUE;
return (iter_dependents_cb(zfs_handle_dup(zhp), &ida));
}
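/*
 * Usage sketch (illustrative, not part of this change): walk everything that
 * depends on a dataset -- descendent filesystems and volumes, snapshots, and
 * clones reachable through snapshots -- the same traversal "zfs destroy -R"
 * relies on. Names are hypothetical; the callback closes each handle.
 */
static int
print_dependent_cb(zfs_handle_t *zhp, void *data)
{
	(void) printf("dependent: %s\n", zfs_get_name(zhp));
	zfs_close(zhp);
	return (0);
}

static int
print_dependents_example(libzfs_handle_t *hdl)
{
	zfs_handle_t *zhp = zfs_open(hdl, "tank/fs",
	    ZFS_TYPE_FILESYSTEM | ZFS_TYPE_VOLUME);
	int err;

	if (zhp == NULL)
		return (-1);
	err = zfs_iter_dependents(zhp, B_FALSE, print_dependent_cb, NULL);
	zfs_close(zhp);
	return (err);
}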


@ -21,6 +21,8 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/types.h>
@ -261,6 +263,7 @@ zpool_get_prop(zpool_handle_t *zhp, zpool_prop_t prop, char *buf, size_t len,
case ZPOOL_PROP_ALTROOT:
case ZPOOL_PROP_CACHEFILE:
case ZPOOL_PROP_COMMENT:
if (zhp->zpool_props != NULL ||
zpool_get_all_props(zhp) == 0) {
(void) strlcpy(buf,
@ -412,7 +415,7 @@ zpool_valid_proplist(libzfs_handle_t *hdl, const char *poolname,
zpool_prop_t prop;
char *strval;
uint64_t intval;
char *slash, *check;
struct stat64 statbuf;
zpool_handle_t *zhp;
nvlist_t *nvroot;
@ -573,6 +576,26 @@ zpool_valid_proplist(libzfs_handle_t *hdl, const char *poolname,
*slash = '/';
break;
case ZPOOL_PROP_COMMENT:
for (check = strval; *check != '\0'; check++) {
if (!isprint(*check)) {
zfs_error_aux(hdl,
dgettext(TEXT_DOMAIN,
"comment may only have printable "
"characters"));
(void) zfs_error(hdl, EZFS_BADPROP,
errbuf);
goto error;
}
}
if (strlen(strval) > ZPROP_MAX_COMMENT) {
zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
"comment must not exceed %d characters"),
ZPROP_MAX_COMMENT);
(void) zfs_error(hdl, EZFS_BADPROP, errbuf);
goto error;
}
break;
case ZPOOL_PROP_READONLY:
if (!flags.import) {
zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
@ -3010,6 +3033,26 @@ zpool_vdev_clear(zpool_handle_t *zhp, uint64_t guid)
return (zpool_standard_error(hdl, errno, msg));
}
/*
* Change the GUID for a pool.
*/
int
zpool_reguid(zpool_handle_t *zhp)
{
char msg[1024];
libzfs_handle_t *hdl = zhp->zpool_hdl;
zfs_cmd_t zc = { 0 };
(void) snprintf(msg, sizeof (msg),
dgettext(TEXT_DOMAIN, "cannot reguid '%s'"), zhp->zpool_name);
(void) strlcpy(zc.zc_name, zhp->zpool_name, sizeof (zc.zc_name));
if (zfs_ioctl(hdl, ZFS_IOC_POOL_REGUID, &zc) == 0)
return (0);
return (zpool_standard_error(hdl, errno, msg));
}
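/*
 * Usage sketch (illustrative, not part of this change): assign a new random
 * GUID to an imported pool, which is what "zpool reguid" does. The pool name
 * is hypothetical.
 */
static int
reguid_example(libzfs_handle_t *hdl)
{
	zpool_handle_t *zhp = zpool_open(hdl, "tank");
	int err;

	if (zhp == NULL)
		return (-1);
	err = zpool_reguid(zhp);
	zpool_close(zhp);
	return (err);
}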
/*
* Convert from a devid string to a path.
*/

File diff suppressed because it is too large


@ -20,6 +20,7 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
/*
@ -351,6 +352,7 @@ zfs_standard_error_fmt(libzfs_handle_t *hdl, int error, const char *fmt, ...)
switch (error) {
case ENXIO:
case ENODEV:
case EPIPE:
zfs_verror(hdl, EZFS_IO, fmt, ap);
break;
@ -1324,7 +1326,8 @@ addlist(libzfs_handle_t *hdl, char *propname, zprop_list_t **listp,
* dataset property,
*/
if (prop == ZPROP_INVAL && (type == ZFS_TYPE_POOL ||
(!zfs_prop_user(propname) && !zfs_prop_userquota(propname) &&
!zfs_prop_written(propname)))) {
zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
"invalid property '%s'"), propname);
return (zfs_error(hdl, EZFS_BADPROP,


@ -20,8 +20,8 @@ SRCS+= libzfs_changelist.c \
libzfs_config.c \
libzfs_dataset.c \
libzfs_diff.c \
libzfs_import.c \
libzfs_iter.c \
libzfs_mount.c \
libzfs_pool.c \
libzfs_sendrecv.c \


@ -267,7 +267,7 @@ zfs_prop_init(void)
/* default index properties */
zprop_register_index(ZFS_PROP_VERSION, "version", 0, PROP_DEFAULT,
ZFS_TYPE_FILESYSTEM | ZFS_TYPE_SNAPSHOT,
"1 | 2 | 3 | 4 | current", "VERSION", version_table);
"1 | 2 | 3 | 4 | 5 | current", "VERSION", version_table);
zprop_register_index(ZFS_PROP_CANMOUNT, "canmount", ZFS_CANMOUNT_ON,
PROP_DEFAULT, ZFS_TYPE_FILESYSTEM, "on | off | noauto",
"CANMOUNT", canmount_table);
@ -297,6 +297,8 @@ zfs_prop_init(void)
/* string properties */
zprop_register_string(ZFS_PROP_ORIGIN, "origin", NULL, PROP_READONLY,
ZFS_TYPE_FILESYSTEM | ZFS_TYPE_VOLUME, "<snapshot>", "ORIGIN");
zprop_register_string(ZFS_PROP_CLONES, "clones", NULL, PROP_READONLY,
ZFS_TYPE_SNAPSHOT, "<dataset>[,...]", "CLONES");
zprop_register_string(ZFS_PROP_MOUNTPOINT, "mountpoint", "/",
PROP_INHERIT, ZFS_TYPE_FILESYSTEM, "<path> | legacy | none",
"MOUNTPOINT");
@ -342,6 +344,8 @@ zfs_prop_init(void)
ZFS_TYPE_FILESYSTEM | ZFS_TYPE_VOLUME, "<size>", "USEDREFRESERV");
zprop_register_number(ZFS_PROP_USERREFS, "userrefs", 0, PROP_READONLY,
ZFS_TYPE_SNAPSHOT, "<count>", "USERREFS");
zprop_register_number(ZFS_PROP_WRITTEN, "written", 0, PROP_READONLY,
ZFS_TYPE_DATASET, "<size>", "WRITTEN");
/* default number properties */
zprop_register_number(ZFS_PROP_QUOTA, "quota", 0, PROP_DEFAULT,
@ -467,6 +471,18 @@ zfs_prop_userquota(const char *name)
return (B_FALSE);
}
/*
* Returns true if this is a valid written@ property.
* Note that after the @, any character is valid (eg, another @, for
* written@pool/fs@origin).
*/
boolean_t
zfs_prop_written(const char *name)
{
static const char *prefix = "written@";
return (strncmp(name, prefix, strlen(prefix)) == 0);
}
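/*
 * Illustrative check (not part of this change): names accepted and rejected
 * by zfs_prop_written() above.
 */
static boolean_t
written_prop_examples(void)
{
	return (zfs_prop_written("written@yesterday") &&	/* short snap name */
	    zfs_prop_written("written@pool/fs@origin") &&	/* full snap name */
	    !zfs_prop_written("written") &&			/* plain property */
	    !zfs_prop_written("userused@alice"));		/* different prefix */
}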
/*
* Tables of index types, plus functions to convert between the user view
* (strings) and internal representation (uint64_t).


@ -121,6 +121,8 @@ uint64_t zprop_random_value(int, uint64_t, zfs_type_t);
const char *zprop_values(int, zfs_type_t);
size_t zprop_width(int, boolean_t *, zfs_type_t);
boolean_t zprop_valid_for_type(int, zfs_type_t);
boolean_t zfs_prop_written(const char *name);
#ifdef __cplusplus
}


@ -20,6 +20,8 @@
*/
/*
* Copyright (c) 2007, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/zio.h>
@ -69,6 +71,8 @@ zpool_prop_init(void)
ZFS_TYPE_POOL, "<filesystem>", "BOOTFS");
zprop_register_string(ZPOOL_PROP_CACHEFILE, "cachefile", NULL,
PROP_DEFAULT, ZFS_TYPE_POOL, "<file> | none", "CACHEFILE");
zprop_register_string(ZPOOL_PROP_COMMENT, "comment", NULL,
PROP_DEFAULT, ZFS_TYPE_POOL, "<comment-string>", "COMMENT");
/* readonly number properties */
zprop_register_number(ZPOOL_PROP_SIZE, "size", 0, PROP_READONLY,


@ -20,6 +20,8 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
/*
@ -1365,7 +1367,7 @@ arc_buf_alloc(spa_t *spa, int size, void *tag, arc_buf_contents_t type)
ASSERT(BUF_EMPTY(hdr));
hdr->b_size = size;
hdr->b_type = type;
hdr->b_spa = spa_load_guid(spa);
hdr->b_state = arc_anon;
hdr->b_arc_access = 0;
buf = kmem_cache_alloc(buf_cache, KM_PUSHPAGE);
@ -2146,7 +2148,7 @@ arc_flush(spa_t *spa)
uint64_t guid = 0;
if (spa)
guid = spa_load_guid(spa);
while (arc_mru->arcs_lsize[ARC_BUFC_DATA]) {
(void) arc_evict(arc_mru, guid, -1, FALSE, ARC_BUFC_DATA);
@ -2936,7 +2938,7 @@ arc_read_nolock(zio_t *pio, spa_t *spa, const blkptr_t *bp,
arc_buf_t *buf;
kmutex_t *hash_lock;
zio_t *rzio;
uint64_t guid = spa_load_guid(spa);
top:
hdr = buf_hash_find(guid, BP_IDENTITY(bp), BP_PHYSICAL_BIRTH(bp),
@ -4593,7 +4595,7 @@ l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz)
boolean_t have_lock, full;
l2arc_write_callback_t *cb;
zio_t *pio, *wzio;
uint64_t guid = spa_load_guid(spa);
int try;
ASSERT(dev->l2ad_vdev != NULL);


@ -20,11 +20,13 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/bpobj.h>
#include <sys/zfs_context.h>
#include <sys/refcount.h>
#include <sys/dsl_pool.h>
uint64_t
bpobj_alloc(objset_t *os, int blocksize, dmu_tx_t *tx)
@ -440,7 +442,10 @@ space_range_cb(void *arg, const blkptr_t *bp, dmu_tx_t *tx)
struct space_range_arg *sra = arg;
if (bp->blk_birth > sra->mintxg && bp->blk_birth <= sra->maxtxg) {
if (dsl_pool_sync_context(spa_get_dsl(sra->spa)))
sra->used += bp_get_dsize_sync(sra->spa, bp);
else
sra->used += bp_get_dsize(sra->spa, bp);
sra->comp += BP_GET_PSIZE(bp);
sra->uncomp += BP_GET_UCSIZE(bp);
}


@ -20,9 +20,11 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
/*
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/dmu.h>
@ -47,6 +49,9 @@
#include <sys/ddt.h>
#include <sys/zfs_onexit.h>
/* Set this tunable to TRUE to replace corrupt data with 0x2f5baddb10c */
int zfs_send_corrupt_data = B_FALSE;
static char *dmu_recv_tag = "dmu_recv_tag";
/*
@ -384,8 +389,20 @@ backup_cb(spa_t *spa, zilog_t *zilog, const blkptr_t *bp, arc_buf_t *pbuf,
if (dsl_read(NULL, spa, bp, pbuf,
arc_getbuf_func, &abuf, ZIO_PRIORITY_ASYNC_READ,
ZIO_FLAG_CANFAIL, &aflags, zb) != 0) {
if (zfs_send_corrupt_data) {
/* Send a block filled with 0x"zfs badd bloc" */
abuf = arc_buf_alloc(spa, blksz, &abuf,
ARC_BUFC_DATA);
uint64_t *ptr;
for (ptr = abuf->b_data;
(char *)ptr < (char *)abuf->b_data + blksz;
ptr++)
*ptr = 0x2f5baddb10c;
} else {
return (EIO);
}
}
err = dump_data(ba, type, zb->zb_object, zb->zb_blkid * blksz,
blksz, bp, abuf->b_data);
@ -515,6 +532,86 @@ dmu_sendbackup(objset_t *tosnap, objset_t *fromsnap, boolean_t fromorigin,
return (0);
}
int
dmu_send_estimate(objset_t *tosnap, objset_t *fromsnap, boolean_t fromorigin,
uint64_t *sizep)
{
dsl_dataset_t *ds = tosnap->os_dsl_dataset;
dsl_dataset_t *fromds = fromsnap ? fromsnap->os_dsl_dataset : NULL;
dsl_pool_t *dp = ds->ds_dir->dd_pool;
int err;
uint64_t size;
/* tosnap must be a snapshot */
if (ds->ds_phys->ds_next_snap_obj == 0)
return (EINVAL);
/* fromsnap must be an earlier snapshot from the same fs as tosnap */
if (fromds && (ds->ds_dir != fromds->ds_dir ||
fromds->ds_phys->ds_creation_txg >= ds->ds_phys->ds_creation_txg))
return (EXDEV);
if (fromorigin) {
if (fromsnap)
return (EINVAL);
if (dsl_dir_is_clone(ds->ds_dir)) {
rw_enter(&dp->dp_config_rwlock, RW_READER);
err = dsl_dataset_hold_obj(dp,
ds->ds_dir->dd_phys->dd_origin_obj, FTAG, &fromds);
rw_exit(&dp->dp_config_rwlock);
if (err)
return (err);
} else {
fromorigin = B_FALSE;
}
}
/* Get uncompressed size estimate of changed data. */
if (fromds == NULL) {
size = ds->ds_phys->ds_uncompressed_bytes;
} else {
uint64_t used, comp;
err = dsl_dataset_space_written(fromds, ds,
&used, &comp, &size);
if (fromorigin)
dsl_dataset_rele(fromds, FTAG);
if (err)
return (err);
}
/*
* Assume that space (both on-disk and in-stream) is dominated by
* data. We will adjust for indirect blocks and the copies property,
* but ignore per-object space used (eg, dnodes and DRR_OBJECT records).
*/
/*
* Subtract out approximate space used by indirect blocks.
* Assume most space is used by data blocks (non-indirect, non-dnode).
* Assume all blocks are recordsize. Assume ditto blocks and
* internal fragmentation counter out compression.
*
* Therefore, space used by indirect blocks is sizeof(blkptr_t) per
* block, which we observe in practice.
*/
uint64_t recordsize;
rw_enter(&dp->dp_config_rwlock, RW_READER);
err = dsl_prop_get_ds(ds, "recordsize",
sizeof (recordsize), 1, &recordsize, NULL);
rw_exit(&dp->dp_config_rwlock);
if (err)
return (err);
size -= size / recordsize * sizeof (blkptr_t);
/* Add in the space for the record associated with each block. */
size += size / recordsize * sizeof (dmu_replay_record_t);
*sizep = size;
return (0);
}
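/*
 * Worked example (illustrative numbers, not part of this change): for 10 GB
 * of changed data with recordsize=128K, sizeof (blkptr_t) == 128, and an
 * assumed ~312-byte dmu_replay_record_t, the two adjustments above nearly
 * cancel: about 10 MB of indirect-block space is subtracted and about 25 MB
 * of per-block stream records is added back, for a net estimate of ~10 GB.
 */
static uint64_t
send_estimate_example(void)
{
	uint64_t size = 10ULL << 30;		/* changed data: 10 GB */
	uint64_t recordsize = 128ULL << 10;	/* 128K records */
	uint64_t bp_size = 128;			/* sizeof (blkptr_t) */
	uint64_t drr_size = 312;		/* assumed record overhead */

	size -= size / recordsize * bp_size;	/* drop indirect-block space */
	size += size / recordsize * drr_size;	/* add stream record space */
	return (size);
}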
struct recvbeginsyncarg {
const char *tofs;
const char *tosnap;
@ -1540,7 +1637,7 @@ dmu_recv_existing_end(dmu_recv_cookie_t *drc)
{
struct recvendsyncarg resa;
dsl_dataset_t *ds = drc->drc_logical_ds;
int err, myerr;
/*
* XXX hack; seems the ds is still dirty and dsl_pool_zil_clean()
@ -1578,7 +1675,8 @@ dmu_recv_existing_end(dmu_recv_cookie_t *drc)
if (err == 0 && drc->drc_guid_to_ds_map != NULL)
(void) add_ds_to_guidmap(drc->drc_guid_to_ds_map, ds);
dsl_dataset_disown(ds, dmu_recv_tag);
myerr = dsl_dataset_destroy(drc->drc_real_ds, dmu_recv_tag, B_FALSE);
ASSERT3U(myerr, ==, 0);
return (err);
}


@ -23,6 +23,7 @@
* Copyright (c) 2011 by Delphix. All rights reserved.
* Copyright (c) 2011 Pawel Jakub Dawidek <pawel@dawidek.net>.
* All rights reserved.
 * Portions Copyright 2011 Martin Matuska <mm@FreeBSD.org>
*/
#include <sys/dmu_objset.h>
@ -908,69 +909,95 @@ dsl_dataset_create_sync(dsl_dir_t *pdd, const char *lastname,
return (dsobj);
}
#ifdef __FreeBSD__
/* FreeBSD ioctl compat begin */
struct destroyarg {
nvlist_t *nvl;
const char *snapname;
};
static int
dsl_check_snap_cb(const char *name, void *arg)
{
struct destroyarg *da = arg;
char *dsname;
dsname = kmem_asprintf("%s@%s", name, da->snapname);
VERIFY(nvlist_add_boolean(da->nvl, dsname) == 0);
return (0);
}
int
dmu_get_recursive_snaps_nvl(const char *fsname, const char *snapname,
nvlist_t *snaps)
{
struct destroyarg *da;
int err;
da = kmem_zalloc(sizeof (struct destroyarg), KM_SLEEP);
da->nvl = snaps;
da->snapname = snapname;
err = dmu_objset_find(fsname, dsl_check_snap_cb, da,
DS_FIND_CHILDREN);
kmem_free(da, sizeof (struct destroyarg));
return (err);
}
/* FreeBSD ioctl compat end */
#endif /* __FreeBSD__ */
/*
* The snapshots must all be in the same pool.
*/
int
dmu_snapshots_destroy_nvl(nvlist_t *snaps, boolean_t defer, char *failed)
{
int err;
dsl_sync_task_t *dst;
spa_t *spa;
nvpair_t *pair;
dsl_sync_task_group_t *dstg;
pair = nvlist_next_nvpair(snaps, NULL);
if (pair == NULL)
return (0);
err = spa_open(nvpair_name(pair), &spa, FTAG);
if (err)
return (err);
dstg = dsl_sync_task_group_create(spa_get_dsl(spa));
for (pair = nvlist_next_nvpair(snaps, NULL); pair != NULL;
pair = nvlist_next_nvpair(snaps, pair)) {
dsl_dataset_t *ds;
int err;
err = dsl_dataset_own(nvpair_name(pair), B_TRUE, dstg, &ds);
if (err == 0) {
struct dsl_ds_destroyarg *dsda;
dsl_dataset_make_exclusive(ds, dstg);
dsda = kmem_zalloc(sizeof (struct dsl_ds_destroyarg),
KM_SLEEP);
dsda->ds = ds;
dsda->defer = defer;
dsl_sync_task_create(dstg, dsl_dataset_destroy_check,
dsl_dataset_destroy_sync, dsda, dstg, 0);
} else if (err == ENOENT) {
err = 0;
} else {
(void) strcpy(failed, nvpair_name(pair));
break;
}
}
if (err == 0)
err = dsl_sync_task_group_wait(dstg);
for (dst = list_head(&dstg->dstg_tasks); dst;
dst = list_next(&dstg->dstg_tasks, dst)) {
struct dsl_ds_destroyarg *dsda = dst->dst_arg1;
dsl_dataset_t *ds = dsda->ds;
@ -978,17 +1005,17 @@ dsl_snapshots_destroy(char *fsname, char *snapname, boolean_t defer)
* Return the file system name that triggered the error
*/
if (dst->dst_err) {
dsl_dataset_name(ds, failed);
}
ASSERT3P(dsda->rm_origin, ==, NULL);
dsl_dataset_disown(ds, dstg);
kmem_free(dsda, sizeof (struct dsl_ds_destroyarg));
}
dsl_sync_task_group_destroy(dstg);
spa_close(spa, FTAG);
return (err);
}
static boolean_t
@ -2150,6 +2177,55 @@ dsl_dataset_sync(dsl_dataset_t *ds, zio_t *zio, dmu_tx_t *tx)
dmu_objset_sync(ds->ds_objset, zio, tx);
}
static void
get_clones_stat(dsl_dataset_t *ds, nvlist_t *nv)
{
uint64_t count = 0;
objset_t *mos = ds->ds_dir->dd_pool->dp_meta_objset;
zap_cursor_t zc;
zap_attribute_t za;
nvlist_t *propval;
nvlist_t *val;
rw_enter(&ds->ds_dir->dd_pool->dp_config_rwlock, RW_READER);
VERIFY(nvlist_alloc(&propval, NV_UNIQUE_NAME, KM_SLEEP) == 0);
VERIFY(nvlist_alloc(&val, NV_UNIQUE_NAME, KM_SLEEP) == 0);
/*
 * There may be missing entries in ds_next_clones_obj
* due to a bug in a previous version of the code.
* Only trust it if it has the right number of entries.
*/
if (ds->ds_phys->ds_next_clones_obj != 0) {
ASSERT3U(0, ==, zap_count(mos, ds->ds_phys->ds_next_clones_obj,
&count));
}
if (count != ds->ds_phys->ds_num_children - 1) {
goto fail;
}
for (zap_cursor_init(&zc, mos, ds->ds_phys->ds_next_clones_obj);
zap_cursor_retrieve(&zc, &za) == 0;
zap_cursor_advance(&zc)) {
dsl_dataset_t *clone;
char buf[ZFS_MAXNAMELEN];
if (dsl_dataset_hold_obj(ds->ds_dir->dd_pool,
za.za_first_integer, FTAG, &clone) != 0) {
goto fail;
}
dsl_dir_name(clone->ds_dir, buf);
VERIFY(nvlist_add_boolean(val, buf) == 0);
dsl_dataset_rele(clone, FTAG);
}
zap_cursor_fini(&zc);
VERIFY(nvlist_add_nvlist(propval, ZPROP_VALUE, val) == 0);
VERIFY(nvlist_add_nvlist(nv, zfs_prop_to_name(ZFS_PROP_CLONES),
propval) == 0);
fail:
nvlist_free(val);
nvlist_free(propval);
rw_exit(&ds->ds_dir->dd_pool->dp_config_rwlock);
}
void
dsl_dataset_stats(dsl_dataset_t *ds, nvlist_t *nv)
{
@ -2180,6 +2256,26 @@ dsl_dataset_stats(dsl_dataset_t *ds, nvlist_t *nv)
dsl_prop_nvlist_add_uint64(nv, ZFS_PROP_DEFER_DESTROY,
DS_IS_DEFER_DESTROY(ds) ? 1 : 0);
if (ds->ds_phys->ds_prev_snap_obj != 0) {
uint64_t written, comp, uncomp;
dsl_pool_t *dp = ds->ds_dir->dd_pool;
dsl_dataset_t *prev;
rw_enter(&dp->dp_config_rwlock, RW_READER);
int err = dsl_dataset_hold_obj(dp,
ds->ds_phys->ds_prev_snap_obj, FTAG, &prev);
rw_exit(&dp->dp_config_rwlock);
if (err == 0) {
err = dsl_dataset_space_written(prev, ds, &written,
&comp, &uncomp);
dsl_dataset_rele(prev, FTAG);
if (err == 0) {
dsl_prop_nvlist_add_uint64(nv, ZFS_PROP_WRITTEN,
written);
}
}
}
ratio = ds->ds_phys->ds_compressed_bytes == 0 ? 100 :
(ds->ds_phys->ds_uncompressed_bytes * 100 /
ds->ds_phys->ds_compressed_bytes);
@ -2193,6 +2289,8 @@ dsl_dataset_stats(dsl_dataset_t *ds, nvlist_t *nv)
dsl_prop_nvlist_add_uint64(nv, ZFS_PROP_USED,
ds->ds_phys->ds_unique_bytes);
dsl_prop_nvlist_add_uint64(nv, ZFS_PROP_COMPRESSRATIO, ratio);
get_clones_stat(ds, nv);
}
}
@ -4025,7 +4123,7 @@ dsl_dataset_get_holds(const char *dsname, nvlist_t **nvp)
}
/*
 * Note, this function is used as the callback for dmu_objset_find(). We
* always return 0 so that we will continue to find and process
* inconsistent datasets, even if we encounter an error trying to
* process one of them.
@ -4044,3 +4142,151 @@ dsl_destroy_inconsistent(const char *dsname, void *arg)
}
return (0);
}
/*
* Return (in *usedp) the amount of space written in new that is not
* present in oldsnap. New may be a snapshot or the head. Old must be
* a snapshot before new, in new's filesystem (or its origin). If not then
* fail and return EINVAL.
*
* The written space is calculated by considering two components: First, we
 * ignore any freed space, and calculate the written space as new's used space
* minus old's used space. Next, we add in the amount of space that was freed
* between the two snapshots, thus reducing new's used space relative to old's.
* Specifically, this is the space that was born before old->ds_creation_txg,
* and freed before new (ie. on new's deadlist or a previous deadlist).
*
 * space freed                     [---------------------]
 * snapshots       ---O-------O--------O-------O------
 *                          oldsnap            new
*/
int
dsl_dataset_space_written(dsl_dataset_t *oldsnap, dsl_dataset_t *new,
uint64_t *usedp, uint64_t *compp, uint64_t *uncompp)
{
int err = 0;
uint64_t snapobj;
dsl_pool_t *dp = new->ds_dir->dd_pool;
*usedp = 0;
*usedp += new->ds_phys->ds_used_bytes;
*usedp -= oldsnap->ds_phys->ds_used_bytes;
*compp = 0;
*compp += new->ds_phys->ds_compressed_bytes;
*compp -= oldsnap->ds_phys->ds_compressed_bytes;
*uncompp = 0;
*uncompp += new->ds_phys->ds_uncompressed_bytes;
*uncompp -= oldsnap->ds_phys->ds_uncompressed_bytes;
rw_enter(&dp->dp_config_rwlock, RW_READER);
snapobj = new->ds_object;
while (snapobj != oldsnap->ds_object) {
dsl_dataset_t *snap;
uint64_t used, comp, uncomp;
err = dsl_dataset_hold_obj(dp, snapobj, FTAG, &snap);
if (err != 0)
break;
if (snap->ds_phys->ds_prev_snap_txg ==
oldsnap->ds_phys->ds_creation_txg) {
/*
 * The blocks in the deadlist cannot be born after
* ds_prev_snap_txg, so get the whole deadlist space,
* which is more efficient (especially for old-format
* deadlists). Unfortunately the deadlist code
* doesn't have enough information to make this
* optimization itself.
*/
dsl_deadlist_space(&snap->ds_deadlist,
&used, &comp, &uncomp);
} else {
dsl_deadlist_space_range(&snap->ds_deadlist,
0, oldsnap->ds_phys->ds_creation_txg,
&used, &comp, &uncomp);
}
*usedp += used;
*compp += comp;
*uncompp += uncomp;
/*
* If we get to the beginning of the chain of snapshots
* (ds_prev_snap_obj == 0) before oldsnap, then oldsnap
* was not a snapshot of/before new.
*/
snapobj = snap->ds_phys->ds_prev_snap_obj;
dsl_dataset_rele(snap, FTAG);
if (snapobj == 0) {
err = EINVAL;
break;
}
}
rw_exit(&dp->dp_config_rwlock);
return (err);
}
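/*
 * Worked example (illustrative numbers, not part of this change): if oldsnap
 * used 8 GB, new uses 10 GB, and 3 GB that existed at oldsnap were freed
 * before new (found on the intervening deadlists), then
 * written = (10 - 8) + 3 = 5 GB: the dataset wrote 5 GB of new data even
 * though net usage only grew by 2 GB.
 */
static uint64_t
space_written_example(void)
{
	uint64_t new_used = 10ULL << 30;	/* new's used bytes */
	uint64_t old_used = 8ULL << 30;		/* oldsnap's used bytes */
	uint64_t freed_since_old = 3ULL << 30;	/* deadlist range sums */

	return (new_used - old_used + freed_since_old);
}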
/*
* Return (in *usedp) the amount of space that will be reclaimed if firstsnap,
* lastsnap, and all snapshots in between are deleted.
*
 * blocks that would be freed      [---------------------------]
 * snapshots       ---O-------O--------O-------O--------O
 *                       firstsnap         lastsnap
*
* This is the set of blocks that were born after the snap before firstsnap,
* (birth > firstsnap->prev_snap_txg) and died before the snap after the
* last snap (ie, is on lastsnap->ds_next->ds_deadlist or an earlier deadlist).
* We calculate this by iterating over the relevant deadlists (from the snap
* after lastsnap, backward to the snap after firstsnap), summing up the
* space on the deadlist that was born after the snap before firstsnap.
*/
int
dsl_dataset_space_wouldfree(dsl_dataset_t *firstsnap,
dsl_dataset_t *lastsnap,
uint64_t *usedp, uint64_t *compp, uint64_t *uncompp)
{
int err = 0;
uint64_t snapobj;
dsl_pool_t *dp = firstsnap->ds_dir->dd_pool;
ASSERT(dsl_dataset_is_snapshot(firstsnap));
ASSERT(dsl_dataset_is_snapshot(lastsnap));
/*
* Check that the snapshots are in the same dsl_dir, and firstsnap
* is before lastsnap.
*/
if (firstsnap->ds_dir != lastsnap->ds_dir ||
firstsnap->ds_phys->ds_creation_txg >
lastsnap->ds_phys->ds_creation_txg)
return (EINVAL);
*usedp = *compp = *uncompp = 0;
rw_enter(&dp->dp_config_rwlock, RW_READER);
snapobj = lastsnap->ds_phys->ds_next_snap_obj;
while (snapobj != firstsnap->ds_object) {
dsl_dataset_t *ds;
uint64_t used, comp, uncomp;
err = dsl_dataset_hold_obj(dp, snapobj, FTAG, &ds);
if (err != 0)
break;
dsl_deadlist_space_range(&ds->ds_deadlist,
firstsnap->ds_phys->ds_prev_snap_txg, UINT64_MAX,
&used, &comp, &uncomp);
*usedp += used;
*compp += comp;
*uncompp += uncomp;
snapobj = ds->ds_phys->ds_prev_snap_obj;
ASSERT3U(snapobj, !=, 0);
dsl_dataset_rele(ds, FTAG);
}
rw_exit(&dp->dp_config_rwlock);
return (err);
}


@ -20,6 +20,7 @@
*/
/*
* Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/dsl_dataset.h>
@ -29,6 +30,26 @@
#include <sys/zfs_context.h>
#include <sys/dsl_pool.h>
/*
* Deadlist concurrency:
*
* Deadlists can only be modified from the syncing thread.
*
* Except for dsl_deadlist_insert(), it can only be modified with the
* dp_config_rwlock held with RW_WRITER.
*
* The accessors (dsl_deadlist_space() and dsl_deadlist_space_range()) can
 * be called concurrently, from open context, with the dp_config_rwlock held
* with RW_READER.
*
* Therefore, we only need to provide locking between dsl_deadlist_insert() and
* the accessors, protecting:
* dl_phys->dl_used,comp,uncomp
* and protecting the dl_tree from being loaded.
* The locking is provided by dl_lock. Note that locking on the bpobj_t
* provides its own locking, and dl_oldfmt is immutable.
*/
static int
dsl_deadlist_compare(const void *arg1, const void *arg2)
{
@ -309,14 +330,14 @@ dsl_deadlist_space(dsl_deadlist_t *dl,
* return space used in the range (mintxg, maxtxg].
* Includes maxtxg, does not include mintxg.
* mintxg and maxtxg must both be keys in the deadlist (unless maxtxg is
* larger than any bp in the deadlist (eg. UINT64_MAX)).
*/
void
dsl_deadlist_space_range(dsl_deadlist_t *dl, uint64_t mintxg, uint64_t maxtxg,
uint64_t *usedp, uint64_t *compp, uint64_t *uncompp)
{
dsl_deadlist_entry_t *dle;
dsl_deadlist_entry_t dle_tofind;
avl_index_t where;
if (dl->dl_oldfmt) {
@ -325,9 +346,10 @@ dsl_deadlist_space_range(dsl_deadlist_t *dl, uint64_t mintxg, uint64_t maxtxg,
return;
}
*usedp = *compp = *uncompp = 0;
mutex_enter(&dl->dl_lock);
dsl_deadlist_load_tree(dl);
dle_tofind.dle_mintxg = mintxg;
dle = avl_find(&dl->dl_tree, &dle_tofind, &where);
/*
@ -336,6 +358,7 @@ dsl_deadlist_space_range(dsl_deadlist_t *dl, uint64_t mintxg, uint64_t maxtxg,
*/
ASSERT(dle != NULL ||
avl_nearest(&dl->dl_tree, where, AVL_AFTER) == NULL);
for (; dle && dle->dle_mintxg < maxtxg;
dle = AVL_NEXT(&dl->dl_tree, dle)) {
uint64_t used, comp, uncomp;
@ -347,6 +370,7 @@ dsl_deadlist_space_range(dsl_deadlist_t *dl, uint64_t mintxg, uint64_t maxtxg,
*compp += comp;
*uncompp += uncomp;
}
mutex_exit(&dl->dl_lock);
}
static void


@ -20,6 +20,7 @@
*/
/*
* Copyright (c) 2007, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
/*
@ -525,10 +526,12 @@ dsl_load_user_sets(objset_t *mos, uint64_t zapobj, avl_tree_t *avl,
}
/*
* Check if user has requested permission. If descendent is set, must have
* descendent perms.
*/
int
dsl_deleg_access_impl(dsl_dataset_t *ds, boolean_t descendent, const char *perm,
cred_t *cr)
{
dsl_dir_t *dd;
dsl_pool_t *dp;
@ -549,7 +552,7 @@ dsl_deleg_access_impl(dsl_dataset_t *ds, const char *perm, cred_t *cr)
SPA_VERSION_DELEGATED_PERMS)
return (EPERM);
if (dsl_dataset_is_snapshot(ds) || descendent) {
/*
* Snapshots are treated as descendents only,
* local permissions do not apply.
@ -642,7 +645,7 @@ dsl_deleg_access(const char *dsname, const char *perm, cred_t *cr)
if (error)
return (error);
error = dsl_deleg_access_impl(ds, B_FALSE, perm, cr);
dsl_dataset_rele(ds, FTAG);
return (error);


@ -20,6 +20,7 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/dsl_pool.h>
@ -316,7 +317,10 @@ static int
deadlist_enqueue_cb(void *arg, const blkptr_t *bp, dmu_tx_t *tx)
{
dsl_deadlist_t *dl = arg;
dsl_pool_t *dp = dmu_objset_pool(dl->dl_os);
rw_enter(&dp->dp_config_rwlock, RW_READER);
dsl_deadlist_insert(dl, bp, tx);
rw_exit(&dp->dp_config_rwlock);
return (0);
}


@ -21,6 +21,7 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
/*
@ -212,6 +213,11 @@ spa_prop_get_config(spa_t *spa, nvlist_t **nvp)
spa_prop_add_list(*nvp, ZPOOL_PROP_GUID, NULL, spa_guid(spa), src);
if (spa->spa_comment != NULL) {
spa_prop_add_list(*nvp, ZPOOL_PROP_COMMENT, spa->spa_comment,
0, ZPROP_SRC_LOCAL);
}
if (spa->spa_root != NULL)
spa_prop_add_list(*nvp, ZPOOL_PROP_ALTROOT, spa->spa_root,
0, ZPROP_SRC_LOCAL);
@ -351,7 +357,7 @@ spa_prop_validate(spa_t *spa, nvlist_t *props)
char *propname, *strval;
uint64_t intval;
objset_t *os;
char *slash, *check;
propname = nvpair_name(elem);
@ -471,6 +477,26 @@ spa_prop_validate(spa_t *spa, nvlist_t *props)
error = EINVAL;
break;
case ZPOOL_PROP_COMMENT:
if ((error = nvpair_value_string(elem, &strval)) != 0)
break;
for (check = strval; *check != '\0'; check++) {
/*
* The kernel doesn't have an easy isprint()
* check. For this kernel check, we merely
* check ASCII apart from DEL. Fix this if
* there is an easy-to-use kernel isprint().
*/
if (*check >= 0x7f) {
error = EINVAL;
break;
}
}
if (strlen(strval) > ZPROP_MAX_COMMENT)
error = E2BIG;
break;
case ZPOOL_PROP_DEDUPDITTO:
if (spa_version(spa) < SPA_VERSION_DEDUP)
error = ENOTSUP;
@ -571,6 +597,43 @@ spa_prop_clear_bootfs(spa_t *spa, uint64_t dsobj, dmu_tx_t *tx)
}
}
/*
* Change the GUID for the pool. This is done so that we can later
* re-import a pool built from a clone of our own vdevs. We will modify
* the root vdev's guid, our own pool guid, and then mark all of our
* vdevs dirty. Note that we must make sure that all our vdevs are
* online when we do this, or else any vdevs that weren't present
* would be orphaned from our pool. We are also going to issue a
* sysevent to update any watchers.
*/
int
spa_change_guid(spa_t *spa)
{
uint64_t oldguid, newguid;
uint64_t txg;
if (!(spa_mode_global & FWRITE))
return (EROFS);
txg = spa_vdev_enter(spa);
if (spa->spa_root_vdev->vdev_state != VDEV_STATE_HEALTHY)
return (spa_vdev_exit(spa, NULL, txg, ENXIO));
oldguid = spa_guid(spa);
newguid = spa_generate_guid(NULL);
ASSERT3U(oldguid, !=, newguid);
spa->spa_root_vdev->vdev_guid = newguid;
spa->spa_root_vdev->vdev_guid_sum += (newguid - oldguid);
vdev_config_dirty(spa->spa_root_vdev);
spa_event_notify(spa, NULL, ESC_ZFS_POOL_REGUID);
return (spa_vdev_exit(spa, NULL, txg, 0));
}
/*
* ==========================================================================
* SPA state manipulation (open/create/destroy/import/export)
@ -1025,6 +1088,11 @@ spa_unload(spa_t *spa)
spa->spa_async_suspended = 0;
if (spa->spa_comment != NULL) {
spa_strfree(spa->spa_comment);
spa->spa_comment = NULL;
}
spa_config_exit(spa, SCL_ALL, FTAG);
}
@ -1742,6 +1810,7 @@ spa_load(spa_t *spa, spa_load_state_t state, spa_import_type_t type,
{
nvlist_t *config = spa->spa_config;
char *ereport = FM_EREPORT_ZFS_POOL;
char *comment;
int error;
uint64_t pool_guid;
nvlist_t *nvl;
@ -1749,6 +1818,10 @@ spa_load(spa_t *spa, spa_load_state_t state, spa_import_type_t type,
if (nvlist_lookup_uint64(config, ZPOOL_CONFIG_POOL_GUID, &pool_guid))
return (EINVAL);
ASSERT(spa->spa_comment == NULL);
if (nvlist_lookup_string(config, ZPOOL_CONFIG_COMMENT, &comment) == 0)
spa->spa_comment = spa_strdup(comment);
/*
* Versioning wasn't explicitly added to the label until later, so if
* it's not present treat it as the initial version.
@ -1764,7 +1837,7 @@ spa_load(spa_t *spa, spa_load_state_t state, spa_import_type_t type,
spa_guid_exists(pool_guid, 0)) {
error = EEXIST;
} else {
spa->spa_config_guid = pool_guid;
if (nvlist_lookup_nvlist(config, ZPOOL_CONFIG_SPLIT,
&nvl) == 0) {
@ -5380,6 +5453,20 @@ spa_sync_props(void *arg1, void *arg2, dmu_tx_t *tx)
* properties.
*/
break;
case ZPOOL_PROP_COMMENT:
VERIFY(nvpair_value_string(elem, &strval) == 0);
if (spa->spa_comment != NULL)
spa_strfree(spa->spa_comment);
spa->spa_comment = spa_strdup(strval);
/*
* We need to dirty the configuration on all the vdevs
* so that their labels get updated. It's unnecessary
* to do this for pool creation since the vdev's
 * configuration has already been dirtied.
*/
if (tx->tx_txg != TXG_INITIAL)
vdev_config_dirty(spa->spa_root_vdev);
break;
default:
/*
* Set pool property values in the poolprops mos object.


@ -21,6 +21,8 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/zfs_context.h>
@ -343,6 +345,10 @@ spa_config_generate(spa_t *spa, vdev_t *vd, uint64_t txg, int getstats)
txg) == 0);
VERIFY(nvlist_add_uint64(config, ZPOOL_CONFIG_POOL_GUID,
spa_guid(spa)) == 0);
VERIFY(spa->spa_comment == NULL || nvlist_add_string(config,
ZPOOL_CONFIG_COMMENT, spa->spa_comment) == 0);
#ifdef _KERNEL
hostid = zone_get_hostid(NULL);
#else /* _KERNEL */


@ -21,6 +21,7 @@
/*
* Copyright (c) 2006, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/spa.h>
@ -101,11 +102,11 @@ spa_history_create_obj(spa_t *spa, dmu_tx_t *tx)
/*
* Figure out maximum size of history log. We set it at
* 1% of pool size, with a max of 32MB and min of 128KB.
* 0.1% of pool size, with a max of 1G and min of 128KB.
*/
shpp->sh_phys_max_off =
metaslab_class_get_dspace(spa_normal_class(spa)) / 100;
shpp->sh_phys_max_off = MIN(shpp->sh_phys_max_off, 32<<20);
metaslab_class_get_dspace(spa_normal_class(spa)) / 1000;
shpp->sh_phys_max_off = MIN(shpp->sh_phys_max_off, 1<<30);
shpp->sh_phys_max_off = MAX(shpp->sh_phys_max_off, 128<<10);
dmu_buf_rele(dbp, FTAG);
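
A worked restatement of the new clamp may help; the helper below is hypothetical, with MIN/MAX as used in the surrounding code.

/* Hypothetical restatement of the sizing policy above. */
static uint64_t
history_log_cap(uint64_t pool_dspace)
{
	uint64_t off = pool_dspace / 1000;	/* 0.1% of pool space */

	off = MIN(off, (uint64_t)1 << 30);	/* never more than 1GB */
	return (MAX(off, (uint64_t)128 << 10)); /* never less than 128KB */
}

/*
 * Examples: a 64GB pool gets a ~64MB log; a 10TB pool hits the new 1GB
 * cap (the old 1%-of-pool/32MB policy would have stopped at 32MB); a
 * 100MB pool is raised to the unchanged 128KB floor.
 */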

View File

@ -21,6 +21,7 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
*/
#include <sys/zfs_context.h>
@ -1308,13 +1309,24 @@ spa_guid(spa_t *spa)
/*
* If we fail to parse the config during spa_load(), we can go through
* the error path (which posts an ereport) and end up here with no root
* vdev. We stash the original pool guid in 'spa_load_guid' to handle
* vdev. We stash the original pool guid in 'spa_config_guid' to handle
* this case.
*/
if (spa->spa_root_vdev != NULL)
return (spa->spa_root_vdev->vdev_guid);
else
return (spa->spa_load_guid);
return (spa->spa_config_guid);
}
uint64_t
spa_load_guid(spa_t *spa)
{
/*
* This is a GUID that exists solely as a reference for the
* purposes of the arc. It is generated at load time, and
* is never written to persistent storage.
*/
return (spa->spa_load_guid);
}
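
The split between the two GUIDs is worth spelling out; the consumer named here follows the comment above (the ARC), and the snippet is a hypothetical fragment.

/*
 * Illustrative (hypothetical consumer): key in-core structures on the
 * load guid, not the persistent guid.  spa_guid() follows the on-disk
 * pool guid and changes when spa_change_guid() runs; spa_load_guid()
 * is generated once per load and never persisted, so state keyed on
 * it (e.g. ARC buffers) stays valid across a "zpool reguid".
 */
uint64_t buf_spa_key = spa_load_guid(spa);	/* survives spa_change_guid() */
uint64_t label_guid = spa_guid(spa);		/* what the vdev labels carry */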
uint64_t

View File

@ -20,6 +20,7 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
/*
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
@ -194,7 +195,9 @@ int dmu_objset_create(const char *name, dmu_objset_type_t type, uint64_t flags,
int dmu_objset_clone(const char *name, struct dsl_dataset *clone_origin,
uint64_t flags);
int dmu_objset_destroy(const char *name, boolean_t defer);
int dmu_snapshots_destroy(char *fsname, char *snapname, boolean_t defer);
int dmu_get_recursive_snaps_nvl(const char *fsname, const char *snapname,
struct nvlist *snaps);
int dmu_snapshots_destroy_nvl(struct nvlist *snaps, boolean_t defer, char *);
int dmu_objset_snapshot(char *fsname, char *snapname, char *tag,
struct nvlist *props, boolean_t recursive, boolean_t temporary, int fd);
int dmu_objset_rename(const char *name, const char *newname,
@ -705,6 +708,8 @@ void dmu_traverse_objset(objset_t *os, uint64_t txg_start,
int dmu_sendbackup(objset_t *tosnap, objset_t *fromsnap, boolean_t fromorigin,
struct file *fp, offset_t *off);
int dmu_send_estimate(objset_t *tosnap, objset_t *fromsnap,
boolean_t fromorigin, uint64_t *sizep);
typedef struct dmu_recv_cookie {
/*

View File

@ -22,6 +22,7 @@
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 Pawel Jakub Dawidek <pawel@dawidek.net>.
* All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#ifndef _SYS_DSL_DATASET_H
@ -257,6 +258,10 @@ void dsl_dataset_space(dsl_dataset_t *ds,
uint64_t *refdbytesp, uint64_t *availbytesp,
uint64_t *usedobjsp, uint64_t *availobjsp);
uint64_t dsl_dataset_fsid_guid(dsl_dataset_t *ds);
int dsl_dataset_space_written(dsl_dataset_t *oldsnap, dsl_dataset_t *new,
uint64_t *usedp, uint64_t *compp, uint64_t *uncompp);
int dsl_dataset_space_wouldfree(dsl_dataset_t *firstsnap, dsl_dataset_t *last,
uint64_t *usedp, uint64_t *compp, uint64_t *uncompp);
int dsl_dsobj_to_dsname(char *pname, uint64_t obj, char *buf);

View File

@ -20,6 +20,7 @@
*/
/*
* Copyright (c) 2007, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#ifndef _SYS_DSL_DELEG_H
@ -64,7 +65,8 @@ extern "C" {
int dsl_deleg_get(const char *ddname, nvlist_t **nvp);
int dsl_deleg_set(const char *ddname, nvlist_t *nvp, boolean_t unset);
int dsl_deleg_access(const char *ddname, const char *perm, cred_t *cr);
int dsl_deleg_access_impl(struct dsl_dataset *ds, const char *perm, cred_t *cr);
int dsl_deleg_access_impl(struct dsl_dataset *ds, boolean_t descendent,
const char *perm, cred_t *cr);
void dsl_deleg_set_create_perms(dsl_dir_t *dd, dmu_tx_t *tx, cred_t *cr);
int dsl_deleg_can_allow(char *ddname, nvlist_t *nvp, cred_t *cr);
int dsl_deleg_can_unallow(char *ddname, nvlist_t *nvp, cred_t *cr);

View File

@ -21,6 +21,7 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
*/
#ifndef _SYS_SPA_H
@ -578,6 +579,7 @@ extern void spa_altroot(spa_t *, char *, size_t);
extern int spa_sync_pass(spa_t *spa);
extern char *spa_name(spa_t *spa);
extern uint64_t spa_guid(spa_t *spa);
extern uint64_t spa_load_guid(spa_t *spa);
extern uint64_t spa_last_synced_txg(spa_t *spa);
extern uint64_t spa_first_txg(spa_t *spa);
extern uint64_t spa_syncing_txg(spa_t *spa);
@ -611,6 +613,7 @@ extern uint64_t spa_get_random(uint64_t range);
extern uint64_t spa_generate_guid(spa_t *spa);
extern void sprintf_blkptr(char *buf, const blkptr_t *bp);
extern void spa_freeze(spa_t *spa);
extern int spa_change_guid(spa_t *spa);
extern void spa_upgrade(spa_t *spa, uint64_t version);
extern void spa_evict_all(void);
extern vdev_t *spa_lookup_by_guid(spa_t *spa, uint64_t guid,

View File

@ -21,6 +21,7 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
*/
#ifndef _SYS_SPA_IMPL_H
@ -111,6 +112,7 @@ struct spa {
* Fields protected by spa_namespace_lock.
*/
char spa_name[MAXNAMELEN]; /* pool name */
char *spa_comment; /* comment */
avl_node_t spa_avl; /* node in spa_namespace_avl */
nvlist_t *spa_config; /* last synced config */
nvlist_t *spa_config_syncing; /* currently syncing config */
@ -136,7 +138,8 @@ struct spa {
objset_t *spa_meta_objset; /* copy of dp->dp_meta_objset */
txg_list_t spa_vdev_txg_list; /* per-txg dirty vdev list */
vdev_t *spa_root_vdev; /* top-level vdev container */
uint64_t spa_load_guid; /* initial guid for spa_load */
uint64_t spa_config_guid; /* config pool guid */
uint64_t spa_load_guid; /* spa_load initialized guid */
list_t spa_config_dirty_list; /* vdevs with dirty config */
list_t spa_state_dirty_list; /* vdevs with dirty state */
spa_aux_vdev_t spa_spares; /* hot spares */

View File

@ -21,6 +21,8 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/zfs_context.h>
@ -297,6 +299,7 @@ vdev_alloc_common(spa_t *spa, uint_t id, uint64_t guid, vdev_ops_t *ops)
if (spa->spa_root_vdev == NULL) {
ASSERT(ops == &vdev_root_ops);
spa->spa_root_vdev = vd;
spa->spa_load_guid = spa_generate_guid(NULL);
}
if (guid == 0 && ops != &vdev_hole_ops) {

View File

@ -20,6 +20,7 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/zio.h>
@ -1414,7 +1415,7 @@ zap_count_write(objset_t *os, uint64_t zapobj, const char *name, int add,
}
/*
* We lock the zap with adding == FALSE, because if we passed the
* actual value of add it could trigger a mzap_upgrade().
* At present we are just evaluating the possibility of this operation
* and hence we do not want to trigger an upgrade.

View File

@ -23,6 +23,8 @@
* Copyright (c) 2011 Pawel Jakub Dawidek <pawel@dawidek.net>.
* All rights reserved.
* Portions Copyright 2011 Martin Matuska <mm@FreeBSD.org>
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
*/
#include <sys/types.h>
@ -348,17 +350,37 @@ zfs_dozonecheck_ds(const char *dataset, dsl_dataset_t *ds, cred_t *cr)
return (zfs_dozonecheck_impl(dataset, zoned, cr));
}
/*
* If name ends in a '@', then require recursive permissions.
*/
int
zfs_secpolicy_write_perms(const char *name, const char *perm, cred_t *cr)
{
int error;
boolean_t descendent = B_FALSE;
dsl_dataset_t *ds;
char *at;
error = zfs_dozonecheck(name, cr);
at = strchr(name, '@');
if (at != NULL && at[1] == '\0') {
*at = '\0';
descendent = B_TRUE;
}
error = dsl_dataset_hold(name, FTAG, &ds);
if (at != NULL)
*at = '@';
if (error != 0)
return (error);
error = zfs_dozonecheck_ds(name, ds, cr);
if (error == 0) {
error = secpolicy_zfs(cr);
if (error)
error = dsl_deleg_access(name, perm, cr);
error = dsl_deleg_access_impl(ds, descendent, perm, cr);
}
dsl_dataset_rele(ds, FTAG);
return (error);
}
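
Two example calls may make the trailing-'@' convention concrete; the dataset names are made up and the wrapper is hypothetical, while the permission constant is the standard delegation name.

/*
 * Illustrative: a full snapshot name checks just that snapshot, while
 * a bare trailing '@' asks for a check that also covers snapshots of
 * all descendent datasets (as used by the recursive destroy path below).
 */
static int
can_destroy_snaps(cred_t *cr)
{
	int err;

	/* single snapshot */
	err = zfs_secpolicy_write_perms("tank/home@snap1",
	    ZFS_DELEG_PERM_DESTROY, cr);
	if (err != 0)
		return (err);
	/* tank/home and all descendent datasets */
	return (zfs_secpolicy_write_perms("tank/home@",
	    ZFS_DELEG_PERM_DESTROY, cr));
}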
@ -372,7 +394,7 @@ zfs_secpolicy_write_perms_ds(const char *name, dsl_dataset_t *ds,
if (error == 0) {
error = secpolicy_zfs(cr);
if (error)
error = dsl_deleg_access_impl(ds, perm, cr);
error = dsl_deleg_access_impl(ds, B_FALSE, perm, cr);
}
return (error);
}
@ -685,24 +707,14 @@ zfs_secpolicy_destroy(zfs_cmd_t *zc, cred_t *cr)
/*
* Destroying snapshots with delegated permissions requires
* descendent mount and destroy permissions.
* Reassemble the full filesystem@snap name so dsl_deleg_access()
* can do the correct permission check.
*
* Since this routine is used when doing a recursive destroy of snapshots
* and destroying snapshots requires descendent permissions, a successful
* check of the top level snapshot applies to snapshots of all descendent
* datasets as well.
*
* The top level snapshot may not exist when doing a recursive destroy.
* In this case fall back to the permissions of the parent dataset.
*/
static int
zfs_secpolicy_destroy_snaps(zfs_cmd_t *zc, cred_t *cr)
zfs_secpolicy_destroy_recursive(zfs_cmd_t *zc, cred_t *cr)
{
int error;
char *dsname;
dsname = kmem_asprintf("%s@%s", zc->zc_name, zc->zc_value);
dsname = kmem_asprintf("%s@", zc->zc_name);
error = zfs_secpolicy_destroy_perms(dsname, cr);
@ -1468,6 +1480,20 @@ zfs_ioc_pool_get_history(zfs_cmd_t *zc)
return (error);
}
static int
zfs_ioc_pool_reguid(zfs_cmd_t *zc)
{
spa_t *spa;
int error;
error = spa_open(zc->zc_name, &spa, FTAG);
if (error == 0) {
error = spa_change_guid(spa);
spa_close(spa, FTAG);
}
return (error);
}
static int
zfs_ioc_dsobj_to_dsname(zfs_cmd_t *zc)
{
@ -1765,9 +1791,12 @@ zfs_ioc_objset_stats_impl(zfs_cmd_t *zc, objset_t *os)
* inconsistent. So this is a bit of a workaround...
* XXX reading without owning
*/
if (!zc->zc_objset_stats.dds_inconsistent) {
if (dmu_objset_type(os) == DMU_OST_ZVOL)
VERIFY(zvol_get_stats(os, nv) == 0);
if (!zc->zc_objset_stats.dds_inconsistent &&
dmu_objset_type(os) == DMU_OST_ZVOL) {
error = zvol_get_stats(os, nv);
if (error == EIO)
return (error);
VERIFY3S(error, ==, 0);
}
error = put_nvlist(zc, nv);
nvlist_free(nv);
@ -1978,8 +2007,7 @@ zfs_ioc_dataset_list_next(zfs_cmd_t *zc)
NULL, &zc->zc_cookie);
if (error == ENOENT)
error = ESRCH;
} while (error == 0 && dataset_name_hidden(zc->zc_name) &&
!(zc->zc_iflags & FKIOCTL));
} while (error == 0 && dataset_name_hidden(zc->zc_name));
dmu_objset_rele(os, FTAG);
/*
@ -2257,6 +2285,8 @@ zfs_set_prop_nvlist(const char *dsname, zprop_source_t source, nvlist_t *nvl,
if (nvpair_type(propval) !=
DATA_TYPE_UINT64_ARRAY)
err = EINVAL;
} else {
err = EINVAL;
}
} else if (err == 0) {
if (nvpair_type(propval) == DATA_TYPE_STRING) {
@ -3124,25 +3154,62 @@ zfs_unmount_snap(const char *name, void *arg)
/*
* inputs:
* zc_name name of filesystem
* zc_value short name of snapshot
* zc_name name of filesystem; the snapshots must be under it
* zc_nvlist_src[_size] full names of snapshots to destroy
* zc_defer_destroy mark for deferred destroy
*
* outputs: none
* outputs:
* zc_name on failure, name of failed snapshot
*/
static int
zfs_ioc_destroy_snaps(zfs_cmd_t *zc)
zfs_ioc_destroy_snaps_nvl(zfs_cmd_t *zc)
{
int err;
int err, len;
nvlist_t *nvl;
nvpair_t *pair;
if (snapshot_namecheck(zc->zc_value, NULL, NULL) != 0)
return (EINVAL);
err = dmu_objset_find(zc->zc_name,
zfs_unmount_snap, zc->zc_value, DS_FIND_CHILDREN);
if (err)
if ((err = get_nvlist(zc->zc_nvlist_src, zc->zc_nvlist_src_size,
zc->zc_iflags, &nvl)) != 0) {
#ifndef __FreeBSD__
return (err);
return (dmu_snapshots_destroy(zc->zc_name, zc->zc_value,
zc->zc_defer_destroy));
#else
/*
* We were probably called by an older binary; allocate and
* populate an nvlist with the recursive snapshots.
*/
if (snapshot_namecheck(zc->zc_value, NULL, NULL) != 0)
return (EINVAL);
VERIFY(nvlist_alloc(&nvl, NV_UNIQUE_NAME, KM_SLEEP) == 0);
err = dmu_get_recursive_snaps_nvl(zc->zc_name,
zc->zc_value, nvl);
if (err) {
nvlist_free(nvl);
return (err);
}
#endif /* __FreeBSD__ */
}
len = strlen(zc->zc_name);
for (pair = nvlist_next_nvpair(nvl, NULL); pair != NULL;
pair = nvlist_next_nvpair(nvl, pair)) {
const char *name = nvpair_name(pair);
/*
* The snap name must be underneath zc_name. This ensures
* that our permission checks were legitimate.
*/
if (strncmp(zc->zc_name, name, len) != 0 ||
(name[len] != '@' && name[len] != '/')) {
nvlist_free(nvl);
return (EINVAL);
}
(void) zfs_unmount_snap(name, NULL);
}
err = dmu_snapshots_destroy_nvl(nvl, zc->zc_defer_destroy,
zc->zc_name);
nvlist_free(nvl);
return (err);
}
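
On the userland side the snapshot list travels as name-only (boolean) nvlist pairs; a hypothetical sketch for a new-style binary follows, with the zc_nvlist_src packing elided.

/*
 * Illustrative: batch-destroy two snapshots with one ioctl.  Each name
 * is added as a valueless boolean pair and must sit under zc_name,
 * per the prefix check above.
 */
static nvlist_t *
build_snap_list(void)
{
	nvlist_t *snaps = NULL;

	if (nvlist_alloc(&snaps, NV_UNIQUE_NAME, 0) != 0)
		return (NULL);
	(void) nvlist_add_boolean(snaps, "tank/fs@monday");
	(void) nvlist_add_boolean(snaps, "tank/fs@tuesday");
	/*
	 * Caller packs the list into zc_nvlist_src/zc_nvlist_src_size,
	 * sets zc_name = "tank/fs", issues ZFS_IOC_DESTROY_SNAPS_NVL,
	 * then nvlist_free()s the list.
	 */
	return (snaps);
}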
/*
@ -3789,6 +3856,8 @@ zfs_ioc_recv(zfs_cmd_t *zc)
* zc_obj fromorigin flag (mutually exclusive with zc_fromobj)
* zc_sendobj objsetid of snapshot to send
* zc_fromobj objsetid of incremental fromsnap (may be zero)
* zc_guid if set, estimate the size of the stream only; zc_cookie is ignored.
* The estimated size is returned in zc_objset_type.
*
* outputs: none
*/
@ -3797,13 +3866,13 @@ zfs_ioc_send(zfs_cmd_t *zc)
{
objset_t *fromsnap = NULL;
objset_t *tosnap;
file_t *fp;
int error;
offset_t off;
dsl_dataset_t *ds;
dsl_dataset_t *dsfrom = NULL;
spa_t *spa;
dsl_pool_t *dp;
boolean_t estimate = (zc->zc_guid != 0);
error = spa_open(zc->zc_name, &spa, FTAG);
if (error)
@ -3844,20 +3913,25 @@ zfs_ioc_send(zfs_cmd_t *zc)
spa_close(spa, FTAG);
}
fp = getf(zc->zc_cookie);
if (fp == NULL) {
dsl_dataset_rele(ds, FTAG);
if (dsfrom)
dsl_dataset_rele(dsfrom, FTAG);
return (EBADF);
if (estimate) {
error = dmu_send_estimate(tosnap, fromsnap, zc->zc_obj,
&zc->zc_objset_type);
} else {
file_t *fp = getf(zc->zc_cookie);
if (fp == NULL) {
dsl_dataset_rele(ds, FTAG);
if (dsfrom)
dsl_dataset_rele(dsfrom, FTAG);
return (EBADF);
}
off = fp->f_offset;
error = dmu_sendbackup(tosnap, fromsnap, zc->zc_obj, fp, &off);
if (off >= 0 && off <= MAXOFFSET_T)
fp->f_offset = off;
releasef(zc->zc_cookie);
}
off = fp->f_offset;
error = dmu_sendbackup(tosnap, fromsnap, zc->zc_obj, fp, &off);
if (off >= 0 && off <= MAXOFFSET_T)
fp->f_offset = off;
releasef(zc->zc_cookie);
if (dsfrom)
dsl_dataset_rele(dsfrom, FTAG);
dsl_dataset_rele(ds, FTAG);
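
A hypothetical dry-run invocation ties the pieces together; zfs_fd and snap_objsetid are assumed to have been set up elsewhere, the snapshot name is made up, and the field usage follows the input/output comment above.

/*
 * Illustrative: setting zc_guid nonzero requests an estimate, so the
 * kernel ignores the fd in zc_cookie and returns the predicted stream
 * size in zc_objset_type.
 */
static int
send_estimate(int zfs_fd, uint64_t snap_objsetid, uint64_t *sizep)
{
	zfs_cmd_t zc = { 0 };

	(void) strlcpy(zc.zc_name, "tank/fs@snap", sizeof (zc.zc_name));
	zc.zc_sendobj = snap_objsetid;	/* objsetid of snapshot to send */
	zc.zc_guid = 1;			/* estimate only; zc_cookie ignored */
	if (ioctl(zfs_fd, ZFS_IOC_SEND, &zc) != 0)
		return (errno);
	*sizep = zc.zc_objset_type;	/* estimated stream size in bytes */
	return (0);
}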
@ -4665,6 +4739,70 @@ zfs_ioc_get_holds(zfs_cmd_t *zc)
return (error);
}
/*
* inputs:
* zc_name name of new filesystem or snapshot
* zc_value full name of old snapshot
*
* outputs:
* zc_cookie space in bytes
* zc_objset_type compressed space in bytes
* zc_perm_action uncompressed space in bytes
*/
static int
zfs_ioc_space_written(zfs_cmd_t *zc)
{
int error;
dsl_dataset_t *new, *old;
error = dsl_dataset_hold(zc->zc_name, FTAG, &new);
if (error != 0)
return (error);
error = dsl_dataset_hold(zc->zc_value, FTAG, &old);
if (error != 0) {
dsl_dataset_rele(new, FTAG);
return (error);
}
error = dsl_dataset_space_written(old, new, &zc->zc_cookie,
&zc->zc_objset_type, &zc->zc_perm_action);
dsl_dataset_rele(old, FTAG);
dsl_dataset_rele(new, FTAG);
return (error);
}
/*
* inputs:
* zc_name full name of last snapshot
* zc_value full name of first snapshot
*
* outputs:
* zc_cookie space in bytes
* zc_objset_type compressed space in bytes
* zc_perm_action uncompressed space in bytes
*/
static int
zfs_ioc_space_snaps(zfs_cmd_t *zc)
{
int error;
dsl_dataset_t *new, *old;
error = dsl_dataset_hold(zc->zc_name, FTAG, &new);
if (error != 0)
return (error);
error = dsl_dataset_hold(zc->zc_value, FTAG, &old);
if (error != 0) {
dsl_dataset_rele(new, FTAG);
return (error);
}
error = dsl_dataset_space_wouldfree(old, new, &zc->zc_cookie,
&zc->zc_objset_type, &zc->zc_perm_action);
dsl_dataset_rele(old, FTAG);
dsl_dataset_rele(new, FTAG);
return (error);
}
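
Both new ioctls return the same used/compressed/uncompressed triple; a hypothetical sketch of the query behind "zfs get written@old tank/fs" (names made up, zfs_fd assumed open):

/*
 * Illustrative: ask how much referenced space tank/fs has written
 * since the snapshot tank/fs@old.
 */
static int
written_since(int zfs_fd, uint64_t *usedp)
{
	zfs_cmd_t zc = { 0 };

	(void) strlcpy(zc.zc_name, "tank/fs", sizeof (zc.zc_name));
	(void) strlcpy(zc.zc_value, "tank/fs@old", sizeof (zc.zc_value));
	if (ioctl(zfs_fd, ZFS_IOC_SPACE_WRITTEN, &zc) != 0)
		return (errno);
	*usedp = zc.zc_cookie;	/* bytes written since @old */
	/*
	 * zc.zc_objset_type and zc.zc_perm_action carry the compressed
	 * and uncompressed figures, per the comment above.
	 */
	return (0);
}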
/*
* pool create, destroy, and export don't log the history as part of
* zfsdev_ioctl, but rather zfs_ioc_pool_create, and zfs_ioc_pool_export
@ -4739,7 +4877,7 @@ static zfs_ioc_vec_t zfs_ioc_vec[] = {
B_TRUE },
{ zfs_ioc_rename, zfs_secpolicy_rename, DATASET_NAME, B_TRUE, B_TRUE },
{ zfs_ioc_recv, zfs_secpolicy_receive, DATASET_NAME, B_TRUE, B_TRUE },
{ zfs_ioc_send, zfs_secpolicy_send, DATASET_NAME, B_TRUE, B_FALSE },
{ zfs_ioc_send, zfs_secpolicy_send, DATASET_NAME, B_FALSE, B_FALSE },
{ zfs_ioc_inject_fault, zfs_secpolicy_inject, NO_NAME, B_FALSE,
B_FALSE },
{ zfs_ioc_clear_fault, zfs_secpolicy_inject, NO_NAME, B_FALSE,
@ -4751,7 +4889,7 @@ static zfs_ioc_vec_t zfs_ioc_vec[] = {
{ zfs_ioc_clear, zfs_secpolicy_config, POOL_NAME, B_TRUE, B_FALSE },
{ zfs_ioc_promote, zfs_secpolicy_promote, DATASET_NAME, B_TRUE,
B_TRUE },
{ zfs_ioc_destroy_snaps, zfs_secpolicy_destroy_snaps, DATASET_NAME,
{ zfs_ioc_destroy_snaps_nvl, zfs_secpolicy_destroy_recursive, DATASET_NAME,
B_TRUE, B_TRUE },
{ zfs_ioc_snapshot, zfs_secpolicy_snapshot, DATASET_NAME, B_TRUE,
B_TRUE },
@ -4795,7 +4933,13 @@ static zfs_ioc_vec_t zfs_ioc_vec[] = {
{ zfs_ioc_obj_to_stats, zfs_secpolicy_diff, DATASET_NAME, B_FALSE,
B_TRUE },
{ zfs_ioc_jail, zfs_secpolicy_config, DATASET_NAME, B_TRUE, B_FALSE },
{ zfs_ioc_unjail, zfs_secpolicy_config, DATASET_NAME, B_TRUE, B_FALSE }
{ zfs_ioc_unjail, zfs_secpolicy_config, DATASET_NAME, B_TRUE, B_FALSE },
{ zfs_ioc_pool_reguid, zfs_secpolicy_config, POOL_NAME, B_TRUE,
B_TRUE },
{ zfs_ioc_space_written, zfs_secpolicy_read, DATASET_NAME, B_FALSE,
B_TRUE },
{ zfs_ioc_space_snaps, zfs_secpolicy_read, DATASET_NAME, B_FALSE,
B_TRUE }
};
int

View File

@ -22,6 +22,7 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011 by Delphix. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
*/
/* Portions Copyright 2010 Robert Milkowski */
@ -126,6 +127,8 @@ typedef enum {
ZFS_PROP_MLSLABEL,
ZFS_PROP_SYNC,
ZFS_PROP_REFRATIO,
ZFS_PROP_WRITTEN,
ZFS_PROP_CLONES,
ZFS_NUM_PROPS
} zfs_prop_t;
@ -165,9 +168,13 @@ typedef enum {
ZPOOL_PROP_FREE,
ZPOOL_PROP_ALLOCATED,
ZPOOL_PROP_READONLY,
ZPOOL_PROP_COMMENT,
ZPOOL_NUM_PROPS
} zpool_prop_t;
/* Small enough to not hog a whole line of printout in zpool(1M). */
#define ZPROP_MAX_COMMENT 32
#define ZPROP_CONT -2
#define ZPROP_INVAL -1
@ -492,6 +499,7 @@ typedef struct zpool_rewind_policy {
#define ZPOOL_CONFIG_SPLIT_LIST "guid_list"
#define ZPOOL_CONFIG_REMOVING "removing"
#define ZPOOL_CONFIG_RESILVERING "resilvering"
#define ZPOOL_CONFIG_COMMENT "comment"
#define ZPOOL_CONFIG_SUSPENDED "suspended" /* not stored on disk */
#define ZPOOL_CONFIG_TIMESTAMP "timestamp" /* not stored on disk */
#define ZPOOL_CONFIG_BOOTFS "bootfs" /* not stored on disk */
@ -758,7 +766,7 @@ typedef unsigned long zfs_ioc_t;
#define ZFS_IOC_ERROR_LOG _IOWR('Z', 32, struct zfs_cmd)
#define ZFS_IOC_CLEAR _IOWR('Z', 33, struct zfs_cmd)
#define ZFS_IOC_PROMOTE _IOWR('Z', 34, struct zfs_cmd)
#define ZFS_IOC_DESTROY_SNAPS _IOWR('Z', 35, struct zfs_cmd)
#define ZFS_IOC_DESTROY_SNAPS_NVL _IOWR('Z', 35, struct zfs_cmd)
#define ZFS_IOC_SNAPSHOT _IOWR('Z', 36, struct zfs_cmd)
#define ZFS_IOC_DSOBJ_TO_DSNAME _IOWR('Z', 37, struct zfs_cmd)
#define ZFS_IOC_OBJ_TO_PATH _IOWR('Z', 38, struct zfs_cmd)
@ -783,6 +791,9 @@ typedef unsigned long zfs_ioc_t;
#define ZFS_IOC_OBJ_TO_STATS _IOWR('Z', 57, struct zfs_cmd)
#define ZFS_IOC_JAIL _IOWR('Z', 58, struct zfs_cmd)
#define ZFS_IOC_UNJAIL _IOWR('Z', 59, struct zfs_cmd)
#define ZFS_IOC_POOL_REGUID _IOWR('Z', 60, struct zfs_cmd)
#define ZFS_IOC_SPACE_WRITTEN _IOWR('Z', 61, struct zfs_cmd)
#define ZFS_IOC_SPACE_SNAPS _IOWR('Z', 62, struct zfs_cmd)
/*
* Internal SPA load state. Used by FMA diagnosis engine.
@ -844,6 +855,7 @@ typedef enum {
* ESC_ZFS_RESILVER_START
* ESC_ZFS_RESILVER_END
* ESC_ZFS_POOL_DESTROY
* ESC_ZFS_POOL_REGUID
*
* ZFS_EV_POOL_NAME DATA_TYPE_STRING
* ZFS_EV_POOL_GUID DATA_TYPE_UINT64

View File

@ -20,6 +20,7 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2011 Nexenta Systems, Inc. All rights reserved.
*/
#ifndef _SYS_SYSEVENT_EVENTDEFS_H
@ -256,6 +257,7 @@ extern "C" {
#define ESC_ZFS_SCRUB_FINISH "ESC_ZFS_scrub_finish"
#define ESC_ZFS_VDEV_SPARE "ESC_ZFS_vdev_spare"
#define ESC_ZFS_BOOTFS_VDEV_ATTACH "ESC_ZFS_bootfs_vdev_attach"
#define ESC_ZFS_POOL_REGUID "ESC_ZFS_pool_reguid"
#define ESC_ZFS_VDEV_AUTOEXPAND "ESC_ZFS_vdev_autoexpand"
/*