Commit Graph

526 Commits

Andriy Gapon
2bb93dd84d 7659 Missing thread_exit() in dmu_send.c
illumos/illumos-gate@f2c1e9bc48

https://www.illumos.org/issues/7659
  Two threads send_traverse_thread() and receive_writer_thread() should
  end with thread_exit();
  Mostly a cosmetic issue under illumos.
  https://github.com/openzfs/openzfs/pull/252
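
For illustration, a minimal sketch (not the actual illumos diff) of the idiom the fix adds: a kernel thread routine that terminates itself with thread_exit() instead of simply returning.

  static void
  send_traverse_thread(void *arg)
  {
          /* ... walk the dataset and queue records for the sender ... */

          /*
           * Explicitly terminate the kernel thread.  Falling off the end
           * of the routine is what the issue calls "mostly cosmetic" on
           * illumos, but thread_exit() is the correct idiom.
           */
          thread_exit();
  }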

Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Jorgen Lundman <lundman@lundman.net>
2017-04-14 18:11:53 +00:00
Andriy Gapon
c5c6a287f1 7613 ms_freetree[4] is only used in syncing context
illumos/illumos-gate@5f14577801

https://www.illumos.org/issues/7613
  metaslab_t:ms_freetree[TXG_SIZE] is only used in syncing context. We should
  replace it with two trees: the freeing tree (ranges that we are freeing this
  syncing txg) and the freed tree (ranges which have been freed this txg).
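
A trimmed-down sketch of the shape of this change (field names assumed from the description above; only the relevant members are shown): the per-txg array is replaced by two range trees that are only touched in syncing context.

  #include <sys/range_tree.h>

  typedef struct metaslab {
          /* ... */
          /* Before: range_tree_t *ms_freetree[TXG_SIZE]; */

          /* Ranges being freed in the current syncing txg. */
          range_tree_t    *ms_freeingtree;
          /* Ranges already freed in this syncing txg. */
          range_tree_t    *ms_freedtree;
          /* ... */
  } metaslab_t;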

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-04-14 18:11:16 +00:00
Andriy Gapon
26af00ef5b 7586 remove #ifdef __lint hack from dmu.h
illumos/illumos-gate@4ba5b96163

https://www.illumos.org/issues/7586
  The #ifdef __lint in dmu.h is ugly, and it would be nice not to duplicate it if
  we add other inline functions into header files in ZFS, especially since it is
  difficult to make any other solution work across all compilation targets. We
  should switch to disabling the lint flags that are failing instead.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Dan Kimmel <dan.kimmel@delphix.com>
2017-04-14 18:10:23 +00:00
Andriy Gapon
2bfa921051 7580 ztest failure in dbuf_read_impl
illumos/illumos-gate@1a01181fdc

https://www.illumos.org/issues/7580
  We need to prevent any reader whenever we're about to zero out all the
  blkptrs. To do this we need to grab the dn_struct_rwlock as writer in
  dbuf_write_children_ready and free_children just prior to calling bzero.
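
A minimal sketch of the locking pattern the fix describes (the helper name below is invented for illustration; the real change lives inside dbuf_write_children_ready() and free_children()):

  /*
   * Hold the dnode's structure lock as writer while zeroing the block
   * pointers, so no reader can walk the blkptrs mid-zero.
   */
  static void
  zero_blkptrs_locked(dnode_t *dn, dmu_buf_impl_t *db)
  {
          rw_enter(&dn->dn_struct_rwlock, RW_WRITER);
          bzero(db->db.db_data, db->db.db_size);
          rw_exit(&dn->dn_struct_rwlock);
  }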

Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: George Wilson <george.wilson@delphix.com>
2017-04-14 18:09:32 +00:00
Andriy Gapon
cdeb4e5f08 7606 dmu_objset_find_dp() takes a long time while importing pool
illumos/illumos-gate@7588687e6b

https://www.illumos.org/issues/7606
  When importing a pool with a large number of filesystems within the same
  parent filesystem, we see that dmu_objset_find_dp() takes a long time.
  It is called from 3 places: spa_check_logs(), spa_ld_claim_log_blocks(),
  and spa_load_verify().
  There are several ways to improve performance here:
  1. We don't really need to do spa_check_logs() or
         spa_ld_claim_log_blocks() if the pool was closed cleanly.
  2. spa_load_verify() uses dmu_objset_find_dp() to check that no
         datasets have too long of names.
  3. dmu_objset_find_dp() is slow because it's doing
         zap_value_search() (which is O(N sibling datasets)) to determine
         the name of each dsl_dir when it's opened. In this case we
         actually know the name when we are opening it, so we can provide
         it and avoid the lookup.
  This change implements fix #3 from the above list; i.e. make
  dmu_objset_find_dp() provide the name of the dataset so that we don't
  have to search for it.
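
Conceptually, fix #3 replaces a reverse lookup with a name the caller already has. A hedged sketch (the surrounding variable names are hypothetical; zap_value_search() is the O(N) lookup named above):

  char name[MAXNAMELEN];

  /* Before: O(N sibling datasets) reverse search of the parent's child ZAP. */
  VERIFY0(zap_value_search(dp->dp_meta_objset, parent_child_zapobj,
      dd_object, 0, name));

  /* After: the traversal already knows the child's name and passes it down. */
  (void) strlcpy(name, known_child_name, sizeof (name));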

Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prashanth Sreenivasa <prashksp@gmail.com>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-04-14 18:08:50 +00:00
Andriy Gapon
b3264caf7b 7252 7628 compressed zfs send / receive
illumos/illumos-gate@5602294fda

https://www.illumos.org/issues/7252
  This feature includes code to allow a system with compressed ARC enabled to
  send data in its compressed form straight out of the ARC, and receive data in
  its compressed form directly into the ARC.

https://www.illumos.org/issues/7628
  We should have longer, more readable versions of the ZFS send / recv options.

7628 create long versions of ZFS send / receive options

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Sebastien Roy <sebastien.roy@delphix.com>
Reviewed by: David Quigley <dpquigl@davequigley.com>
Reviewed by: Thomas Caputi <tcaputi@datto.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Dan Kimmel <dan.kimmel@delphix.com>
2017-04-14 18:07:43 +00:00
Andriy Gapon
9c03f5f793 7604 if volblocksize property is the default, it displays as "-" rather than 8K
illumos/illumos-gate@4d86c0eab2

https://www.illumos.org/issues/7604
  If a zvol has the default setting for the "volblocksize" property, it is
  8KB. However, it is displayed as "-" (not present), rather than "8K".
  The problem was introduced by:
  commit 25228e830e86924a41243343b1de9daf2d7dd43a
      Author: Matthew Ahrens <mahrens@delphix.com>
      Date:   Thu Nov 17 14:37:24 2016 -0800
  7571 non-present readonly numeric ZFS props do not have default value
  which changed get_numeric_property() to indicate that readonly
  default properties are not present. However, zfs_prop_readonly() returns
  TRUE for both readonly and set-once properties (e.g. volblocksize).
  Amusingly, that commit essentially reverted:
  6900484 default volblocksize is no longer being reported correctly
  from November 2009. However, that change was not correct either; the
  correct solution is to only do this check for "truly readonly" (i.e. not
  setonce) properties.
  $ zfs list -t volume -o name,volblocksize
      NAME                                                                          VOLBLOCK
      domain0/group-100/appdata_container-101/appdata_windows_timeflow-102/archive         -
      domain0/group-100/appdata_container-101/appdata_windows_timeflow-102/datafile        -
      domain0/group-100/appdata_container-101/appdata_windows_timeflow-102/external        -
      rpool/dump                                                                         128K
      rpool/swap                                                                           4K
      rpool/swap1
  ===============================================================================

Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-04-14 18:05:20 +00:00
Andriy Gapon
a3c68fd244 7602 minor issues with zfs manpage
illumos/illumos-gate@5c262fd009

https://www.illumos.org/issues/7602
  The line volblocksize=blocksize should just read volblocksize in the same
  rendering as the other properties in the same section.
  The zfs.1m man page renders one variant of unallow as
  zfs unallow [-r] -s -@setname [perm|@setname[,perm|@setname]...]
               filesystem|volume
  There is an extra dash preceding @setname that should not be there.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Daniel Hoffman <dj.hoffman@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Sara Hartse <sara.hartse@delphix.com>
2017-04-14 18:02:25 +00:00
Andriy Gapon
cfaa3c3d8f 7386 zfs get does not work properly with bookmarks
illumos/illumos-gate@edb901aab9

https://www.illumos.org/issues/7386
  The zfs get command does not work with the bookmark parameter while it works
  properly with both filesystem and snapshot:
  # zfs get -t all -r creation rpool/test
  NAME               PROPERTY  VALUE                  SOURCE
  rpool/test         creation  Fri Sep 16 15:00 2016  -
  rpool/test@snap    creation  Fri Sep 16 15:00 2016  -
  rpool/test#bkmark  creation  Fri Sep 16 15:00 2016  -
  # zfs get -t all -r creation rpool/test@snap
  NAME             PROPERTY  VALUE                  SOURCE
  rpool/test@snap  creation  Fri Sep 16 15:00 2016  -
  # zfs get -t all -r creation rpool/test#bkmark
  cannot open 'rpool/test#bkmark': invalid dataset name
  #
  The zfs get command should be modified to work properly with bookmarks too.

Reviewed by: Simon Klinkert <simon.klinkert@gmail.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Matthew Ahrens <mahrens@delphix.com>
Author: Marcel Telka <marcel@telka.sk>
2017-04-14 18:01:43 +00:00
Andriy Gapon
4d9a21a808 7276 zfs(1m) manpage could better describe space properties (remove extra line)
illumos/illumos-gate@079d299664

https://www.illumos.org/issues/7276
  The "used" and "written" properties could be described better by the zfs.1m
  manpage.
  "written" could be better described as "The amount of space referenced by this
  dataset, that was written since the previous snapshot (i.e. that is not
  referenced by the previous snapshot)."
  The "used" section needs more work, but at a minimum it could say that the
  "used" space of a snapshot is the space unique to the snapshot (i.e. the space
  referenced only by this snapshot). The "used" space of a snapshot is a subset
  of the "written" space of the snapshot.

Author: Matthew Ahrens <mahrens@delphix.com>
2017-04-14 18:00:42 +00:00
Andriy Gapon
3af072ced1 7345 zfs(1m): Description for -d depth appeared in -r
illumos/illumos-gate@cb3c687bb9

https://www.illumos.org/issues/7345
  The following part of the zfs(1m) man page:
         -d depth
             Recursively display any children of the dataset, limiting the
             recursion to

  ...

         -r  Recursively display any children of the dataset on the command
             line.  depth.  A depth of 1 will display only the dataset and its
             direct children.
  should be changed to:
         -d depth
             Recursively display any children of the dataset, limiting the
             recursion to depth.  A depth of 1 will display only the dataset
             and its direct children.

  ...

         -r  Recursively display any children of the dataset on the command
             line.

Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Marcel Telka <marcel@telka.sk>
2017-04-14 17:59:56 +00:00
Andriy Gapon
cce6c1c241 7276 zfs(1m) manpage could better describe space properties
illumos/illumos-gate@5749c35234

https://www.illumos.org/issues/7276
  The "used" and "written" properties could be described better by the zfs.1m
  manpage.
  "written" could be better described as "The amount of space referenced by this
  dataset, that was written since the previous snapshot (i.e. that is not
  referenced by the previous snapshot)."
  The "used" section needs more work, but at a minimum it could say that the
  "used" space of a snapshot is the space unique to the snapshot (i.e. the space
  referenced only by this snapshot). The "used" space of a snapshot is a subset
  of the "written" space of the snapshot.

Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Pavel Zakharov <pavel.zakharov@delphix.com>
2017-04-14 17:59:10 +00:00
Andriy Gapon
b689d45202 7257 zfs manpage user property length needs to be updated
illumos/illumos-gate@3bc7169503

https://www.illumos.org/issues/7257
  The zfs.1m manpage says:
  User Properties
      ...
         Property values are limited to 1024 characters.
  Since zpool version 16, this limit is actually 8192 characters; more precisely,
  it is 8192 bytes, since property values may contain multi-byte UTF-8.

Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Eli Rosenthal <eli.rosenthal@delphix.com>
2017-04-14 17:58:31 +00:00
Andriy Gapon
fc7f6c1fe6 7041 Fix spelling mistakes in sections 1 and 1M
illumos/illumos-gate@df23d905b9

https://www.illumos.org/issues/7041
  With the changes made in #7040, I found and fixed misspellings in sections 1
  and 1M of the manual pages.

Reviewed by: Marcel Telka <marcel@telka.sk>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Cody Peter Mello <cody.mello@joyent.com>
2017-04-14 17:58:00 +00:00
Andriy Gapon
3619e6d9ab missing zfs.1m changes from r299449 2017-04-14 17:57:10 +00:00
Andriy Gapon
685bc6fed7 missing zfs.1m changes from r296518 2017-04-14 17:55:06 +00:00
Andriy Gapon
0c465e198f missing zfs.1m changes from r294814 2017-04-14 17:52:47 +00:00
Andriy Gapon
1383302d2d 6346 zfs(1M) has spurious comma
illumos/illumos-gate@1058dba45e

https://www.illumos.org/issues/6346
  The xref to gzip(1) in the SEE ALSO puts the comma inside the parens, because a
  space is missing in the source
  .Xr gzip 1,
  should be
  .Xr gzip 1 ,
  It'd be cool if the manual page checks in pbchk could catch this, too, but I'm
  not sure how easy that'd be.

Reviewed by: Garrett D'Amore <garrett@damore.org>
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Gordon Ross <gwr@nexenta.com>
Author: Yuri Pankov <yuri.pankov@nexenta.com>
2017-04-14 17:49:37 +00:00
Andriy Gapon
99d91c7d5e missing zfs.1m changes from r289312 2017-04-14 17:48:45 +00:00
Andriy Gapon
cc685b610a missing zfs.1m changes from r289310 2017-04-14 17:46:40 +00:00
Andriy Gapon
7ecf4767d4 missing zfs.1m changes from r286704 2017-04-14 17:43:51 +00:00
Andriy Gapon
aad99e99a3 7571 non-present readonly numeric ZFS props do not have default value
illumos/illumos-gate@ad2760acbd

https://www.illumos.org/issues/7571
  ZFS displays the default value for non-present readonly numeric (and index)
  properties. However, these properties' default values are not meaningful.
  Instead, we should display a "-", indicating that they are not present. For
  example, on a version-12 pool, the usedby* properties are not available, but
  they show up as the incorrect value "0":
     # zfs get all test12
        ...
        test12 usedbysnapshots 0 -
        test12 usedbydataset 0 -
        test12 usedbychildren 0 -
        test12 usedbyrefreservation 0 -
  We will be introducing more sometimes-present numeric readonly properties, so
  it would be nice to fix this.

Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-04-14 17:27:09 +00:00
Andriy Gapon
910a9fff3b 7542 zfs_unmount failed with EZFS_UNSHARENFSFAILED
illumos/illumos-gate@09c9e6dc9b

https://www.illumos.org/issues/7542
  libshare keeps a cached copy of the sharetab listing in memory, which can
  become out of date if shares are destroyed or created while leaving a libzfs
  handle open. This results in a spurious unmounting failure when an NFS share
  exists but isn't in the stale libshare cache.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Amdur <matt.amdur@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Chris Williamson <chris.williamson@delphix.com>
2017-04-14 17:24:22 +00:00
Andriy Gapon
6a34841d8a 7336 vfork and O_CLOEXEC causes zfs_mount EBUSY
illumos/illumos-gate@873c4903a5

https://www.illumos.org/issues/7336
  We can run into a problem where we call into zfs_mount, which in turn calls
  is_dir_empty, which opens the directory to try and make sure it's empty. The
  issue with the current approach is that it holds the directory open while it
  traverses it with readdir, which, due to subtle interaction with the Java JVM,
  vfork, and exec can cause a tricky race condition resulting in zfs_mount
  failures.
  The approach to resolving the issue in this patch is to drop the usage of
  readdir altogether, and instead rely on the fact that ZFS stores the number of
  entries contained in a directory using the st_size field of the stat structure.
  Thus, if the directory in question is a ZFS directory, we can check to see if
  it's empty by calling stat() and inspecting the st_size field of the structure
  returned.
  ===============================================================================
  The root cause appears to be an interesting race between vfork, exec, and
  zfs_mount's usage of O_CLOEXEC when calling openat. Here's what is going on:
  1. We call zfs_mount, and this in turn calls openat to check if the directory
  is empty, which results in opening the directory we're trying to mount onto,
  and increment v_count.
  2. As we're in the middle of reading the directory, vfork is called by the JVM
  and proceeds to exec the jspawnhelper utility. As a result of the vfork, we
  take an additional hold on the directory, which increments v_count a second
  time. The semantics of vfork mean the parent process will wait for the child
  process to exit or exec before the parent can continue; at this point the
  parent is in the middle of zfs_mount, reading the directory to determine if
  it's empty or not.
  3. The child process exec-ing jspawnhelper gets to the relvm call within
  exec_args (which is called by exec_common). relvm is the function that releases
  the parent process, allowing the parent to proceed. The problem is, at this
  point of calling relvm, the child hasn't yet called close_exec which is
  responsible for closing the file descriptors inherited from the parent process
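
A minimal user-land sketch of the stat()-based check described above (assumes the directory is already known to be on ZFS, where st_size reflects the number of directory entries; the real libzfs helper verifies that first):

  #include <sys/stat.h>
  #include <stdbool.h>

  static bool
  dir_is_empty_stat(const char *dirname)
  {
          struct stat st;

          if (stat(dirname, &st) != 0)
                  return (false);
          /* An empty directory still contains the "." and ".." entries. */
          return (st.st_size <= 2);
  }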

Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Prakash Surya <prakash.surya@delphix.com>
2017-04-14 17:23:52 +00:00
Andriy Gapon
68d57fec79 7233 dir_is_empty should open directory with CLOEXEC
illumos/illumos-gate@d420209d9c

https://www.illumos.org/issues/7233
  This fixes a race where one thread is executing zfs_mount() while another
  thread forks and execs. If the fork occurs while the directory is open, the
  child process will inherit (but not necessarily close immediately) the open fd
  for the directory, preventing the mount.
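
A short sketch of the approach (simplified, error handling trimmed): open the directory with O_CLOEXEC so a child created by a concurrent fork/exec cannot keep the inherited descriptor open and block the mount.

  #include <fcntl.h>
  #include <dirent.h>
  #include <stdbool.h>
  #include <string.h>
  #include <unistd.h>

  static bool
  dir_is_empty_cloexec(const char *dirname)
  {
          int dirfd;
          DIR *dirp;
          struct dirent *dp;
          bool empty = true;

          /* O_CLOEXEC: the fd is closed automatically across exec. */
          if ((dirfd = open(dirname, O_RDONLY | O_CLOEXEC)) < 0)
                  return (false);
          if ((dirp = fdopendir(dirfd)) == NULL) {
                  (void) close(dirfd);
                  return (false);
          }
          while ((dp = readdir(dirp)) != NULL) {
                  if (strcmp(dp->d_name, ".") != 0 &&
                      strcmp(dp->d_name, "..") != 0) {
                          empty = false;
                          break;
                  }
          }
          (void) closedir(dirp);
          return (empty);
  }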

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Alex Reece <alex@delphix.com>
2017-04-14 17:22:54 +00:00
Andriy Gapon
8ba67dd37f 7502 ztest should run zdb with -G (debug mode)
illumos/illumos-gate@c3c65d17f7

https://www.illumos.org/issues/7502
  Right now ztest executes zdb without -G, so when it has errors, the messages
  are often not very helpful:
  Executing zdb -bccsv -d -U /rpool/tmp/zpool.cache ztest
  zdb: can't open 'ztest': Operation not supported
  ztest: '/usr/sbin/amd64/zdb -bccsv -d -U /rpool/tmp/zpool.cache ztest' exit
  code 1
  With -G, we'd have:
  /usr/sbin/amd64/zdb -bccsv -d -U /rpool/tmp/zpool.cache -G ztest
  zdb: can't open 'ztest': Operation not supported

  ZFS_DBGMSG(zdb):
  spa_open_common: opening ztest
  spa_load(ztest): LOADING
  spa_load(ztest): FAILED: unable to parse config [error=48]
  spa_load(ztest): UNLOADING
  This indicates where the error came from.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
Author: Pavel Zakharov <pavel.zakharov@delphix.com>
2017-04-14 17:20:26 +00:00
Andriy Gapon
8f77e16c27 7490 real checksum errors are silenced when zinject is on
illumos/illumos-gate@6cedfc397d

https://www.illumos.org/issues/7490
  When zinject is on, error codes from zfs_checksum_error() can be overwritten
  due to an incorrect and overly-complex if condition.

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Pavel Zakharov <pavel.zakharov@delphix.com>
2017-04-14 17:15:49 +00:00
Andriy Gapon
aa06caf86c 7448 ZFS doesn't notice when disk vdevs have no write cache
illumos/illumos-gate@295438ba32

https://www.illumos.org/issues/7448
       I built a SmartOS image with all the NVMe commits including 7372
       (support NVMe volatile write cache) and repeated my dd testing:
       > #!/bin/bash
       > for i in `seq 1 1000`; do
       > dd if=/dev/zero of=file00 bs=1M count=102400 oflag=sync &
       > dd if=/dev/zero of=file01 bs=1M count=102400 oflag=sync &
       > wait
       > rm file00 file01
       > done
       >
       Previously each dd command took ~145 seconds to finish, now it takes
       ~400 seconds.
       Eventually I figured out it is 7372 that causes unnecessary
       nvme_bd_sync() executions which wasted CPU cycles.
  If an NVMe device doesn't support a write cache, the nvme_bd_sync function will
  return ENOTSUP to indicate this to upper layers.
  It seems this returned value is ignored by ZFS, and as such this bug is not
  really specific to NVMe. In vdev_disk_io_start() ZFS sends the flush to the
  disk driver (blkdev) with a callback to vdev_disk_ioctl_done(). As nvme filled
  in the bd_sync_cache function pointer, blkdev will not return ENOTSUP, as the
  nvme driver in general does support cache flush. Instead it will issue an
  asynchronous flush to nvme and immediately return 0, and hence ZFS will not set
  vdev_nowritecache here. The nvme driver will at some point process the cache
  flush command, and if there is no write cache on the device it will return
  ENOTSUP, which will be delivered to the vdev_disk_ioctl_done() callback. This
  function will not check the error code and not set nowritecache.
  The right place to check the error code from the cache flush is in
  zio_vdev_io_assess(). This would catch both cases, synchronous and asynchronous
  cache flushes. This would also be independent of the implementation detail that
  some drivers can return ENOTSUP immediately.
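
A hedged sketch of the check the description places in zio_vdev_io_assess(): if a cache-flush ioctl comes back ENOTSUP, whether synchronously or from the driver's completion path, remember that the vdev has no write cache (the exact guards in the real change may differ).

  /* In zio_vdev_io_assess(); vd is zio->io_vd. */
  if (zio->io_type == ZIO_TYPE_IOCTL &&
      zio->io_cmd == DKIOCFLUSHWRITECACHE &&
      zio->io_error == ENOTSUP && vd != NULL) {
          /* Stop issuing pointless flushes to this device. */
          vd->vdev_nowritecache = B_TRUE;
  }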

Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
2017-04-14 17:15:11 +00:00
Andriy Gapon
40e6e931e0 7430 Backfill metadnode more intelligently
illumos/illumos-gate@af346df588

https://www.illumos.org/issues/7430
  Description and patch brought over from the following ZoL commit:
  https://github.com/zfsonlinux/zfs/commit/68cbd56e182ab949f58d004778d463aeb3f595c6
  Only attempt to backfill lower metadnode object numbers if at least
  4096 objects have been freed since the last rescan, and at most once
  per transaction group. This avoids a pathology in dmu_object_alloc()
  that caused O(N^2) behavior for create-heavy workloads and
  substantially improves object creation rates. As summarized by
  @mahrens in #4636:
  "Normally, the object allocator simply checks to see if the next
  object is available. The slow calls happened when dmu_object_alloc()
  checks to see if it can backfill lower object numbers. This happens
  every time we move on to a new L1 indirect block (i.e. every 32 *
  128 = 4096 objects). When re-checking lower object numbers, we use
  the on-disk fill count (blkptr_t:blk_fill) to quickly skip over
  indirect blocks that don’t have enough free dnodes (defined as an L2
  with at least 393,216 of 524,288 dnodes free). Therefore, we may
  find that a block of dnodes has a low (or zero) fill count, and yet
  we can’t allocate any of its dnodes, because they've been allocated
  in memory but not yet written to disk. In this case we have to hold
  each of the dnodes and then notice that it has been allocated in
  memory.
  The end result is that allocating N objects in the same TXG can
  require CPU usage proportional to N^2."
  Add a tunable dmu_rescan_dnode_threshold to define the number of
  objects that must be freed before a rescan is performed. Don't bother
  to export this as a module option because testing doesn't show a
  compelling reason to change it. The vast majority of the performance
  gain comes from limiting the rescan to at most once per TXG.
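
A rough sketch of the throttle (dmu_rescan_dnode_threshold comes from the text above; the object-set fields and helper name here are invented for illustration):

  uint64_t dmu_rescan_dnode_threshold = 4096;

  static boolean_t
  dmu_should_rescan_metadnode(objset_t *os, uint64_t txg)
  {
          /* Not enough frees since the last rescan: skip the backfill scan. */
          if (os->os_freed_dnodes < dmu_rescan_dnode_threshold)
                  return (B_FALSE);
          /* Already rescanned in this txg: at most once per txg. */
          if (os->os_last_rescan_txg == txg)
                  return (B_FALSE);
          os->os_last_rescan_txg = txg;
          os->os_freed_dnodes = 0;
          return (B_TRUE);
  }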

Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
Author: Ned Bass <bass6@llnl.gov>
2017-04-14 17:13:49 +00:00
Andriy Gapon
9bad09e9a8 6392 zdb: introduce -V for verbatim import
illumos/illumos-gate@dfd5965f7e

https://www.illumos.org/issues/6392
  When given a pool name via -e, zdb would attempt an import. If it
  failed, then it would attempt a verbatim import. This behavior is
  not always desirable so a -V switch is added to zdb to control the
  behavior. When specified, a verbatim import is done. Otherwise,
  the behavior is as it was previously, except no verbatim import
  is done on failure.
  a5778ea242
  Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
  Reviewed by: Matthew Ahrens <mahrens@delphix.com>

Reviewed by: Yuri Pankov <yuri.pankov@gmail.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Richard Yao <ryao@gentoo.org>
2017-04-14 17:01:00 +00:00
Andriy Gapon
e8fe3151e2 3871 fix issues introduced by 3604
illumos/illumos-gate@de05b58863

https://www.illumos.org/issues/3871
  GCC 4.5.3 on Gentoo Linux did not like a few of the changes made in the issue
  3604 patch. It printed an error and a couple of warnings:
  ../../cmd/zdb/zdb.c: In function 'dump_bpobj':
  ../../cmd/zdb/zdb.c:1257:3: error: 'for' loop initial declarations are only
  allowed in C99 mode
  ../../cmd/zdb/zdb.c:1257:3: note: use option -std=c99 or -std=gnu99 to compile
  your code
  ../../cmd/zdb/zdb.c: In function 'dump_deadlist':
  ../../cmd/zdb/zdb.c:1323:8: warning: too many arguments for format
  ../../cmd/zdb/zdb.c:1323:8: warning: too many arguments for format

Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Serapheim Dimitropoulos <serapheim@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Richard Yao <ryao@gentoo.org>
2017-04-14 16:59:53 +00:00
Andriy Gapon
e5cf3d39f9 6410 teach zdb to perform object lookups by path
illumos/illumos-gate@ed61ec1da9

https://www.illumos.org/issues/6410
  This is primarily intended to ease debugging & testing ZFS when one is only
  interested in things like the on-disk location of a specific object's blocks,
  but doesn't know their object id. This allows doing things like the following
  (FreeBSD-based example):
          # zpool create -f foo da0
          # dd if=/dev/zero of=/foo/1 bs=1M count=4 >/dev/null 2>&1
          # zpool export foo
          # zdb -vvvvv -o "ZFS plain file" foo /1
          Object  lvl   iblk   dblk  dsize  lsize   %full  type
               8    2    16K   128K  3.99M     4M  100.00  ZFS plain file (K=inherit) (Z=inherit)
                                              168   bonus  System attributes
          dnode flags: USED_BYTES USERUSED_ACCOUNTED
          dnode maxblkid: 31
          path    /1
          uid     0
          gid     0
          atime   Thu Apr 23 22:45:32 2015
          mtime   Thu Apr 23 22:45:32 2015
          ctime   Thu Apr 23 22:45:32 2015
          crtime  Thu Apr 23 22:45:32 2015
          gen     7
          mode    100644
          size    4194304
          parent  4
          links   1
          pflags  40800000004
          Indirect blocks:
                0 L1  DVA[0]=<0:c19200:600> DVA[1]=<0:10800019200:600> [L1 ZFS
  plain file] fletcher4 lz4 LE contiguous unique double size=4000L/200P birth=7L/

Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Will Andrews <will@freebsd.org>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Yuri Pankov <yuri.pankov@nexenta.com>
2017-04-14 16:58:41 +00:00
Andriy Gapon
b91038ef7a 6866 zdb -l should return non-zero if it fails to find any label
illumos/illumos-gate@64723e3611

https://www.illumos.org/issues/6866
  Need this for #6865.
  To be generally more scripting-friendly, this issue is also overloaded with a
  '-q' option that skips printing any label information.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Yuri Pankov <yuri.pankov@nexenta.com>
2017-04-14 16:57:06 +00:00
Andriy Gapon
60d1053c22 7545 zdb should disable reference tracking
illumos/illumos-gate@4dd77f9e38

https://www.illumos.org/issues/7545
  When evicting from the ARC, we manipulate some refcount_t's, e.g. arcs_size.
  When using zdb to examine a large amount of data (e.g. zdb -bb on a large pool
  with small blocks), the ARC may have a large number of entries. If reference
  tracking is enabled, there will be ~1 reference for each block in the ARC. When
  evicting, we decrement the refcount and have to search all the references to
  find the one that we are removing, which is very slow.
  Since zdb is typically used to find problems with the on-disk format, and not
  with the code it is running, we should disable reference tracking in zdb.

Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-04-14 16:55:15 +00:00
Andriy Gapon
be0769e638 7280 Allow changing global libzpool variables in zdb and ztest through command line
illumos/illumos-gate@0e60744c98

https://www.illumos.org/issues/7280
  zdb is very handy for diagnosing problems with a pool in a safe and
  quick way. When a pool is in a bad shape, we often want to disable some
  fail-safes, or adjust some tunables in order to open them. In the
  kernel, this is done by changing public variables in mdb. The goal of
  this feature is to add the same capability to zdb and ztest, so that
  they can change libzpool tuneables from the command line.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Pavel Zakharov <pavel.zakharov@delphix.com>
2017-04-14 16:53:34 +00:00
Alan Somers
1d381d5c2c 7900 zdb shouldn't print the path of a znode at verbosity < 5
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Alan Somers <asomers@freebsd.org>

illumos/illumos-gate@e548d2fa41
https://www.illumos.org/issues/7900
2017-04-14 16:18:53 +00:00
Josh Paetzel
854fb3447d 5409 Remove shareiscsi description and example from zfs(1M)
illumos/illumos-gate@b3cff10cdd

https://www.illumos.org/issues/5409

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Gordon Ross <gwr@nexenta.com>
2017-04-13 05:55:36 +00:00
Josh Paetzel
be64493050 5661 ZFS: "compression = on" should use lz4 if feature is enabled
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
Reviewed by: Xin LI <delphij@freebsd.org>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Justin T. Gibbs <justing@spectralogic.com>

illumos/illumos-gate@db1741f555
2017-04-13 05:47:52 +00:00
Pedro F. Giffuni
68c0499c8c 8046 Let calloc() do the multiplication in libzfs_fru_refresh
5697e03e6e

https://www.illumos.org/issues/8046
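
For illustration, the general pattern behind the change (not the exact libzfs_fru_refresh diff): pass the element count and size to calloc() separately so the library does the multiplication and can fail cleanly on overflow.

  #include <stdlib.h>

  static void *
  alloc_array(size_t nelem, size_t elsize)
  {
          /* was: return (calloc(1, nelem * elsize));  -- caller multiplies */
          return (calloc(nelem, elsize));
  }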

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author:	Pedro Giffuni <pfg@freebsd.org>
2017-04-10 22:46:34 +00:00
Josh Paetzel
fedc1382bd 7603 xuio_stat_wbuf_* should be declared (void)
illumos/illumos-gate@99aa8b5505

https://www.illumos.org/issues/7603

  The functions are declared K&R style, where the arguments are not specified:

  void xuio_stat_wbuf_copied();
  They should be declared to take no arguments:

  void xuio_stat_wbuf_copied(void);
  Need to change both .c and .h.

Author: Prashanth Sreenivasa <pks@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
2017-03-26 16:49:20 +00:00
Josh Paetzel
7e45ac57cb 7303 dynamic metaslab selection
illumos/illumos-gate@8363e80ae7
https://github.com/illumos/illumos-gate/commit/8363e80ae72609660f6090766ca8c2c18aa53f0

https://www.illumos.org/issues/7303

  This change introduces a new weighting algorithm to improve metaslab selection.
  The new weighting algorithm relies on the SPACEMAP_HISTOGRAM feature. As a result,
  the metaslab weight now encodes the type of weighting algorithm used
  (size-based vs segment-based).

  This also introduces a new allocation tracing facility and two new dcmds to help
  debug allocation problems. Each zio now contains a zio_alloc_list_t structure
  that is populated as the zio goes through the allocations stage. Here's an
  example of how to use the tracing facility:

> c5ec000::print zio_t io_alloc_list | ::walk list | ::metaslab_trace
  MSID    DVA    ASIZE      WEIGHT             RESULT               VDEV
     -      0      400           0    NOT_ALLOCATABLE           ztest.0a
     -      0      400           0    NOT_ALLOCATABLE           ztest.0a
     -      0      400           0             ENOSPC           ztest.0a
     -      0      200           0    NOT_ALLOCATABLE           ztest.0a
     -      0      200           0    NOT_ALLOCATABLE           ztest.0a
     -      0      200           0             ENOSPC           ztest.0a
     1      0      400      1 x 8M            17b1a00           ztest.0a

> 1ff2400::print zio_t io_alloc_list | ::walk list | ::metaslab_trace
  MSID    DVA    ASIZE      WEIGHT             RESULT               VDEV
     -      0      200           0    NOT_ALLOCATABLE           mirror-2
     -      0      200           0    NOT_ALLOCATABLE           mirror-0
     1      0      200      1 x 4M            112ae00           mirror-1
     -      1      200           0    NOT_ALLOCATABLE           mirror-2
     -      1      200           0    NOT_ALLOCATABLE           mirror-0
     1      1      200      1 x 4M            112b000           mirror-1
     -      2      200           0    NOT_ALLOCATABLE           mirror-2

  If the metaslab is using segment-based weighting then the WEIGHT column will
  display the number of segments available in the bucket where the allocation
  attempt was made.

Author: George Wilson <george.wilson@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Reviewed by: Chris Siden <christopher.siden@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <paul.dagnelie@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
2017-03-15 04:18:40 +00:00
Josh Paetzel
137146f48c 7303 dynamic metaslab selection
illumos/illumos-gate@8363e80ae7
https://github.com/illumos/illumos-gate/commit/8363e80ae72609660f6090766ca8c2c18aa53f0

https://www.illumos.org/issues/7303

  This change introduces a new weighting algorithm to improve metaslab selection.
  The new weighting algorithm relies on the SPACEMAP_HISTOGRAM feature. As a result,
  the metaslab weight now encodes the type of weighting algorithm used
  (size-based vs segment-based).

  This also introduces a new allocation tracing facility and two new dcmds to help
  debug allocation problems. Each zio now contains a zio_alloc_list_t structure
  that is populated as the zio goes through the allocations stage. Here's an
  example of how to use the tracing facility:

> c5ec000::print zio_t io_alloc_list | ::walk list | ::metaslab_trace
  MSID    DVA    ASIZE      WEIGHT             RESULT               VDEV
     -      0      400           0    NOT_ALLOCATABLE           ztest.0a
     -      0      400           0    NOT_ALLOCATABLE           ztest.0a
     -      0      400           0             ENOSPC           ztest.0a
     -      0      200           0    NOT_ALLOCATABLE           ztest.0a
     -      0      200           0    NOT_ALLOCATABLE           ztest.0a
     -      0      200           0             ENOSPC           ztest.0a
     1      0      400      1 x 8M            17b1a00           ztest.0a

> 1ff2400::print zio_t io_alloc_list | ::walk list | ::metaslab_trace
  MSID    DVA    ASIZE      WEIGHT             RESULT               VDEV
     -      0      200           0    NOT_ALLOCATABLE           mirror-2
     -      0      200           0    NOT_ALLOCATABLE           mirror-0
     1      0      200      1 x 4M            112ae00           mirror-1
     -      1      200           0    NOT_ALLOCATABLE           mirror-2
     -      1      200           0    NOT_ALLOCATABLE           mirror-0
     1      1      200      1 x 4M            112b000           mirror-1
     -      2      200           0    NOT_ALLOCATABLE           mirror-2

  If the metaslab is using segment-based weighting then the WEIGHT column will
  display the number of segments available in the bucket where the allocation
  attempt was made.

Author: George Wilson <george.wilson@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Reviewed by: Chris Siden <christopher.siden@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <paul.dagnelie@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
2017-03-15 04:16:08 +00:00
Andriy Gapon
b060bbc16a 7867 ARC space accounting leak
illumos/illumos-gate@6de76ce2a9

https://www.illumos.org/issues/7867
  It seems that in the case where arc_hdr_free_pdata() sees HDR_L2_WRITING() we
  would fail to update the ARC space statistics.
  In the normal case those statistics are updated in arc_free_data_buf(). But in
  the arc_hdr_free_on_write() path we don't do that.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Andriy Gapon <avg@FreeBSD.org>
2017-03-08 13:40:24 +00:00
Andriy Gapon
f1a99e4268 7843 get_clones_stat() is suboptimal for lots of clones
illumos/illumos-gate@c5bde7273e

https://www.illumos.org/issues/7843
  get_clones_stat() could be very slow if a snapshot has many (thousands) clones.
  Clone names are added to an nvlist that's created with NV_UNIQUE_NAME.
  So, each time a new name is appended to the list, the whole list is searched
  linearly to see if that name is not already in the list. That results in the
  quadratic complexity.
  That should be easy to fix as we know in advance that we should not get any
  duplicate names, so we can drop NV_UNIQUE_NAME when creating the list.
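
A small sketch of the fix as described (user-land libnvpair shown; the in-kernel call takes a KM_SLEEP flag instead of 0): allocate the clones nvlist without NV_UNIQUE_NAME so each append no longer does a linear duplicate-name scan.

  #include <libnvpair.h>

  nvlist_t *clones = NULL;

  /* Before: O(n^2) overall, every add searches the list for a duplicate. */
  /* (void) nvlist_alloc(&clones, NV_UNIQUE_NAME, 0); */

  /* After: clone names are already unique, so skip the uniqueness checks. */
  (void) nvlist_alloc(&clones, 0, 0);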

Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Andriy Gapon <avg@FreeBSD.org>
2017-03-08 13:39:22 +00:00
Josh Paetzel
dbac7ab9c0 7570 tunable to allow zvol SCSI unmap to return on commit of txn to ZIL
illumos/illumos-gate@1c9272b861

https://www.illumos.org/issues/7570

  Based on the discovery that every unmap waits for the commit of the txn to the ZIL,
  introducing a very high latency to unmap commands, this behavior was made into a
  tunable zvol_unmap_sync_enabled and set to false. The net impact of this change is
  that by default SCSI unmap commands will result in space being freed within the zvol
  (today they are ignored and returned with good status). However, unlike the code
  today, instead of 18+ms per unmap, they take about 30us.

  With the testing done on NTFS against a Win2k12 target, the new behavior should work
  seamlessly. Files on the zvol that have already been set with the zfree application
  will continue to write 0's when deleted, and any new files created since zvol
  creation will send unmap commands when deleted. This behavior exists today, but with
  this change the unmap commands will be processed and result in reclaim of space.

Author: Stephen Blinick <stephen.blinick@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Approved by: Robert Mustacchi <rm@joyent.com>
2017-02-25 19:16:20 +00:00
Josh Paetzel
acf1cb602b 6676 Race between unique_insert() and unique_remove() causes ZFS fsid change
illumos/illumos-gate@40510e8eba

https://www.illumos.org/issues/6676

  The fsid of zfs filesystems might change after reboot or remount. The problem seems to
  be caused by a race between unique_insert() and unique_remove(). The unique_remove()
  is called from dsl_dataset_evict(), which is now an asynchronous thread. If the
  dsl_dataset_evict() thread is very slow and calls unique_remove() too late, we end
  up with a changed fsid on zfs mount.

  This problem is very likely caused by #5056.

  Steps to Reproduce
  Note: I'm able to reproduce this always on a single core (virtual) machine. On multicore
  machines it is not so easy to reproduce.

# uname -a
SunOS openindiana 5.11 illumos-633aa80 i86pc i386 i86pc Solaris
# zfs create rpool/TEST
# FS=$(echo ::fsinfo | mdb -k | grep TEST | awk '{print $1}')
# echo $FS::print vfs_t vfs_fsid | mdb -k
vfs_fsid = {
    vfs_fsid.val = [ 0x54d7028a, 0x70311508 ]
}
# zfs umount rpool/TEST
# zfs mount rpool/TEST
# FS=$(echo ::fsinfo | mdb -k | grep TEST | awk '{print $1}')
# echo $FS::print vfs_t vfs_fsid | mdb -k
vfs_fsid = {
    vfs_fsid.val = [ 0xd9454e49, 0x6b36d08 ]
}
#

  Impact
  The persistent fsid (filesystem id) is essential for proper NFS functionality.
  If the fsid of a filesystem changes on remount (or after reboot) the NFS
  clients might not be able to automatically recover from such event and the
  manual remount of the NFS filesystems on every NFS client might be needed.

Author: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Dan Vatca <dan.vatca@gmail.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Sebastien Roy <sebastien.roy@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
2017-02-25 03:34:22 +00:00
Josh Paetzel
91c038e77a 7504 kmem_reap hangs spa_sync and administrative tasks
illumos/illumos-gate@405a5a0f5c
https://github.com/illumos/illumos-gate/commit/405a5a0f5c3ab36cb76559467d1a62ba648bd80

https://www.illumos.org/issues/7504

  We see long spa_sync(). We are waiting to hold dp_config_rwlock for writer. Some
  other thread holds dp_config_rwlock for reader, then calls arc_get_data_buf(),
  which finds that arc_is_overflowing()==B_TRUE. So it waits (while holding
  dp_config_rwlock for reader) for arc_reclaim_thread to signal arc_reclaim_waiters_cv.
  Before signaling, arc_reclaim_thread does arc_kmem_reap_now(), which takes ~seconds.

Author: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
2017-02-17 15:00:13 +00:00
Josh Paetzel
281825cdf1 7500 Simplify dbuf_free_range by removing dn_unlisted_l0_blkid
illumos/illumos-gate@653af1b809

https://www.illumos.org/issues/7500
  With the integration of:

    commit 0f6d88aded0d165f5954688a9b13bac76c38da84
    Author: Alex Reece <alex@delphix.com>
    Date:   Sat Jul 26 13:40:04 2014 -0800
    4873 zvol unmap calls can take a very long time for larger datasets

  the dnode's dn_bufs field was changed from a list to a tree. As a result,
  the dn_unlisted_l0_blkid field is no longer necessary.

Author: Stephen Blinick <stephen.blinick@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
2017-02-16 01:44:56 +00:00
Josh Paetzel
1e37d7e555 6569 large file delete can starve out write ops
illumos/illumos-gate@ff5177ee8b

https://www.illumos.org/issues/6569
  The core issue I've found is that there is no throttle for how many
  deletes get assigned to one TXG. As a result, when deleting large files
  we end up filling consecutive TXGs with deletes/frees, then write
  throttling other (more important) ops.

  There is an easy test case for this problem. Try deleting several
  large files (at least 1/2 TB) while you do write ops on the same
  pool. What we've seen is performance of these write ops (let's
  call it sideload I/O) would drop to zero.

  More specifically the problem is that dmu_free_long_range_impl()
  can/will fill up all of the dirty data in the pool "instantly",
  before many of the sideload ops can get in. So sideload
  performance will be impacted until all the files are freed.

  The solution we have tested at Nexenta (with positive results)
  creates a relatively simple throttle for how many "free" ops we let
  into one TXG.

  However this solution exposes other problems that should also be
  addressed. If we are to slow down freeing of data that means one
  has to wait even longer (assuming vnode ref count of 1) to get shell
  back after an rm or for NFS thread to finish the free-ing op.
  To avoid this the proposed solution is to call zfs_inactive() async
  for "large" files. Async freeing then begs for the reclaimed space
  to be accounted for in the zpool's "freeing" prop.

  The other issue with having a longer delete is the inability to
  export/unmount for a longer period of time. The proposed solution
  is to interrupt freeing of blocks when a fs is unmounted.

Author: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
2017-01-19 20:44:29 +00:00
Andriy Gapon
aed3e94d2f 3821 Race in rollback, zil close, and zil flush
illumos/illumos-gate@43297f973a

https://www.illumos.org/issues/3821
  We recently had nodes with some of the latest zfs bits panic on us in a
  rollback-heavy environment. The following is from my preliminary analysis:
  Let's look at where we died:
  > $C
  ffffff01ea6b9a10 taskq_dispatch+0x3a(0, fffffffff7d20450, ffffff5551dea920, 1)
  ffffff01ea6b9a60 zil_clean+0xce(ffffff4b7106c080, 7e0f1)
  ffffff01ea6b9aa0 dsl_pool_sync_done+0x47(ffffff4313065680, 7e0f1)
  ffffff01ea6b9b70 spa_sync+0x55f(ffffff4310c1d040, 7e0f1)
  ffffff01ea6b9c20 txg_sync_thread+0x20f(ffffff4313065680)
  ffffff01ea6b9c30 thread_start+8()
  If we dig in we can find that this dataset corresponds to a zone:
  > ffffff4b7106c080::print zilog_t zl_os->os_dsl_dataset->ds_dir->dd_myname
  zl_os->os_dsl_dataset->ds_dir->dd_myname = [ "8ffce16a-13c2-4efa-a233-
  9e378e89877b" ]
  Okay so we have a null taskq pointer. That only happens during the calls to
  zil_open and zil_close. If we poke around we can see that we're actually in
  midst of a rollback:
  > ::pgrep zfs | ::printf "0x%x %s\\n" proc_t . p_user.u_psargs
  0xffffff43262800a0 zfs rollback zones/15714eb6-f5ea-469f-ac6d-
  4b8ab06213c2@marlin_init
  0xffffff54e22a1028 zfs rollback zones/8ffce16a-13c2-4efa-a233-
  9e378e89877b@marlin_init
  0xffffff4362f3a058 zfs rollback zones/0ddb8e49-ca7e-42e1-8fdc-
  4ac4ba8fe9f8@marlin_init
  0xffffff5748e8d020 zfs rollback zones/426357b5-832d-4430-953e-
  10cd45ff8e9f@marlin_init
  0xffffff436b867008 zfs rollback zones/8f36bf37-8a9c-4a44-995c-
  6d1b2751e6f5@marlin_init
  0xffffff4381ad4090 zfs rollback zones/6c8eca18-fbd6-46dd-ac24-
  2ed45cd0da70@marlin_init

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: George Wilson <george.wilson@delphix.com>
2016-11-28 15:09:58 +00:00