Commit Graph

1483 Commits

Author SHA1 Message Date
Andriy Gapon
e70097b50f MFV r318947: 7578 Fix/improve some aspects of ZIL writing.
FreeBSD note: this commit removes small differences between what mav
committed to FreeBSD in r308782 and what ended up committed to illumos
after addressing all review comments.

illumos/illumos-gate@c5ee46810f
c5ee46810f

https://www.illumos.org/issues/7578
  After some ZIL changes 6 years ago, zil_slog_limit got partially broken
  because zl_itx_list_sz was not updated when async itx'es were upgraded to
  sync.  Actually, because of other changes made around that time,
  zl_itx_list_sz is not really required to implement the functionality, so
  this patch removes some unneeded broken code and variables.
  The original idea of zil_slog_limit was to reduce the chance of SLOG abuse
  by a single heavy logger, which increased latency for other (more
  latency-critical) loggers, by pushing the heavy log out into the main pool
  instead of the SLOG.  Besides a huge latency increase for heavy writers,
  this implementation caused a double write of all data, since the log
  records were explicitly prepared for the SLOG.  Since we now have an I/O
  scheduler, I've found it can be much more efficient to reduce the priority
  of a heavy logger's SLOG writes from ZIO_PRIORITY_SYNC_WRITE to
  ZIO_PRIORITY_ASYNC_WRITE, while still leaving them on the SLOG.
  The existing ZIL implementation had a space-efficiency problem when it had
  to write large chunks of data into log blocks of limited size.  In some
  cases efficiency dropped to as low as about 50%.  For a ZIL stored on
  spinning rust, that also cut the log write speed in half, since the head
  had to uselessly fly over allocated but unwritten areas.  This change
  improves the situation by offloading the problematic operations from
  z*_log_write() to zil_lwb_commit(), which knows the real state of log
  block allocation and can split large requests into pieces much more
  efficiently.  As a side effect, it also removes one of the two data copy
  operations done by the ZIL code in the WR_COPIED case.
  While there, untangle and unify the code of the z*_log_write() functions.
  Also, zfs_log_write(), like zvol_log_write(), can now handle writes that
  cross a block boundary, which may also improve efficiency if the ZPL is
  made to issue such writes.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Steven Hartland <steven.hartland@multiplay.co.uk>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Alexander Motin <mav@FreeBSD.org>

MFC after:	3 weeks
2017-06-22 16:52:22 +00:00
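
To make the space-efficiency argument in the entry above concrete, here is a small stand-alone C sketch (hypothetical record and block sizes, not the actual ZIL code): it packs records into fixed-size log blocks first without splitting them across blocks, then with splitting, and prints the resulting utilization.

  /*
   * Toy model of ZIL log-block packing (hypothetical sizes, not real ZIL code).
   * Without splitting, a record that does not fit into the space left in the
   * current block forces a new block and the remainder is wasted; with
   * splitting, records may span block boundaries and blocks are filled fully.
   */
  #include <stdio.h>

  #define LOG_BLOCK_SIZE  131072UL    /* 128 KB log blocks (example) */
  #define RECORD_SIZE     70000UL     /* records a bit over half a block */
  #define NRECORDS        1000UL

  int
  main(void)
  {
      unsigned long payload = RECORD_SIZE * NRECORDS;
      unsigned long blocks_nosplit = 0, space_left = 0;

      /* No splitting: open a new block whenever the record does not fit. */
      for (unsigned long i = 0; i < NRECORDS; i++) {
          if (space_left < RECORD_SIZE) {
              blocks_nosplit++;
              space_left = LOG_BLOCK_SIZE;
          }
          space_left -= RECORD_SIZE;
      }

      /* Splitting: records can cross block boundaries, so blocks are full. */
      unsigned long blocks_split =
          (payload + LOG_BLOCK_SIZE - 1) / LOG_BLOCK_SIZE;

      printf("no split: %lu blocks, %.1f%% utilization\n", blocks_nosplit,
          100.0 * payload / (blocks_nosplit * LOG_BLOCK_SIZE));
      printf("split:    %lu blocks, %.1f%% utilization\n", blocks_split,
          100.0 * payload / (blocks_split * LOG_BLOCK_SIZE));
      return (0);
  }

With these example numbers the non-splitting scheme ends up with one record per block and roughly 53% utilization, close to the ~50% worst case described in the commit message.
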
Andriy Gapon
a4a2976d8a fix several fallouts from r320156, ZFS ABD import
All of the problems were related to the FreeBSD-only features.
One was caused by a mismerge in the zfsbootcfg support code.
All others were in the TRIM support code.

MFC after:	1 week
X-MFC with:	r320156
2017-06-21 08:12:07 +00:00
Andriy Gapon
ebf3b53dac fix several fallouts from r320156, ZFS ABD import
All of the problems were related to the FreeBSD-only features.
One was caused by a mismerge in the zfsbootcfg support code.
All others were in the TRIM support code.

Reported by:	ken,
		O. Hartmann <ohartmann@walstatt.org>,
		Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
MFC after:	1 week
X-MFC with:	r320156
2017-06-21 08:10:45 +00:00
Andriy Gapon
f9cdbaba8d MFV r318946: 8021 ARC buf data scatter-ization
illumos/illumos-gate@770499e185
770499e185

https://www.illumos.org/issues/8021
  The ARC buf data project (known simply as "ABD" since its genesis in the ZoL
  community) changes the way the ARC allocates `b_pdata` memory from using linear
  `void *` buffers to using scatter/gather lists of fixed-size 1KB chunks. This
  improves ZFS's performance by helping to defragment the address space occupied
  by the ARC, in particular for cases where compressed ARC is enabled. It could
  also ease future work to allocate pages directly from `segkpm` for minimal-
  overhead memory allocations, bypassing the `kmem` subsystem.
  This is essentially the same change as the one which recently landed in ZFS on
  Linux, although they made some platform-specific changes while adapting this
  work to their codebase:
  1. Implemented the equivalent of the `segkpm` suggestion for future work
  mentioned above to bypass issues that they've had with the Linux kernel memory
  allocator.
  2. Changed the internal representation of the ABD's scatter/gather list so it
  could be used to pass I/O directly into Linux block device drivers. (This
  feature is not available in the illumos block device interface yet.)

FreeBSD notes:
- the actual (default) chunk size is 4KB (despite the text above saying 1KB)
- we can try to reimplement ABDs, so that they are not permanently
  mapped into the KVA unless explicitly requested, especially on
  platforms with scarce KVA
- we can try to use unmapped I/O and avoid intermediate allocation of a
  linear, virtual memory mapped buffer
- we can try to avoid extra data copying by referring to chunks / pages
  in the original ABD

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Prashanth Sreenivasa <pks@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Chris Williamson <chris.williamson@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Dan Kimmel <dan.kimmel@delphix.com>

MFC after:	3 weeks
2017-06-20 17:39:24 +00:00
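
A minimal user-space sketch of the scatter/gather idea from the entry above (hypothetical toy_abd types, not the real abd_t API): data lives in an array of fixed-size chunks instead of one linear buffer, and copy-in/copy-out walk the chunk array.

  /* Toy scatter/gather buffer: fixed-size chunks instead of one linear buffer.
   * Cleanup and error handling omitted for brevity. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define CHUNK_SIZE 4096    /* FreeBSD note above: default chunk is 4KB */

  typedef struct toy_abd {
      size_t size;           /* logical length in bytes */
      size_t nchunks;
      void **chunks;         /* nchunks pointers to CHUNK_SIZE buffers */
  } toy_abd_t;

  static toy_abd_t *
  toy_abd_alloc(size_t size)
  {
      toy_abd_t *abd = malloc(sizeof (*abd));
      abd->size = size;
      abd->nchunks = (size + CHUNK_SIZE - 1) / CHUNK_SIZE;
      abd->chunks = malloc(abd->nchunks * sizeof (void *));
      for (size_t i = 0; i < abd->nchunks; i++)
          abd->chunks[i] = malloc(CHUNK_SIZE);
      return (abd);
  }

  /* Copy a linear buffer into the chunked representation. */
  static void
  toy_abd_copy_from(toy_abd_t *abd, const void *buf, size_t len)
  {
      const char *src = buf;
      for (size_t off = 0; off < len; off += CHUNK_SIZE) {
          size_t n = len - off < CHUNK_SIZE ? len - off : CHUNK_SIZE;
          memcpy(abd->chunks[off / CHUNK_SIZE], src + off, n);
      }
  }

  /* Copy the chunked representation back out to a linear buffer. */
  static void
  toy_abd_copy_to(void *buf, const toy_abd_t *abd, size_t len)
  {
      char *dst = buf;
      for (size_t off = 0; off < len; off += CHUNK_SIZE) {
          size_t n = len - off < CHUNK_SIZE ? len - off : CHUNK_SIZE;
          memcpy(dst + off, abd->chunks[off / CHUNK_SIZE], n);
      }
  }

  int
  main(void)
  {
      char in[10000], out[10000];
      memset(in, 'x', sizeof (in));
      toy_abd_t *abd = toy_abd_alloc(sizeof (in));
      toy_abd_copy_from(abd, in, sizeof (in));
      toy_abd_copy_to(out, abd, sizeof (out));
      printf("round trip %s\n",
          memcmp(in, out, sizeof (in)) == 0 ? "ok" : "mismatch");
      return (0);
  }

Because every chunk has the same small size, a large buffer never needs a large contiguous region, which is what relieves the address-space fragmentation described above.
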
Andriy Gapon
42ce346fcc revert r315852 which introduced zio_buf_alloc_nowait for use in vdev_queue_aggregate
I think that the change is still good, but reconciling it with a planned
merge of the ARC buf data scatter-ization is a bit more tedious
than I can handle.

MFC after:	17 days
2017-06-20 16:55:30 +00:00
Andriy Gapon
602cf4e4a7 MFV r319951: 8311 ZFS_READONLY is a little too strict
illumos/illumos-gate@2889ec41c0
2889ec41c0

https://www.illumos.org/issues/8311
  Description:
  There was a misunderstanding about the enforcement details of the "Read-only"
  flag introduced for SMB/CIFS compatibility, way back in 2007 in the Sun PSARC
  2007/315 case.
  The original authors thought enforcement of the READONLY flag should work
  similarly to the IMMUTABLE flag. Unfortunately, that enforcement is
  incompatible with the expectations of Windows applications using this feature
  through the SMB service. Applications assume (and the MS File System Algorithms
  MS-FSA confirms they should) that an SMB client can:
  (a) Open an SMB handle on a file with read/write access,
  (b) Set the DOS attributes to include the READONLY flag,
  (c) continue to have write access via that handle.
  This access model is essentially the same as a Unix/POSIX application that
  creates a file (with read/write access), uses fchmod() to change the file mode
  to something not granting write access (i.e. 0444), and then continues to write
  that file using the open handle it got before the mode change.
  Currently, the SMB server works around this problem in a way that will become
  difficult to maintain as we implement support for SMB3 persistent handles, so
  SMB depends on this fix.
  I've written a test program that can be used to demonstrate this problem, and
  added it to zfs-tests (tests/functional/acl/cifs/cifs_attr_004_pos).
  It currently fails, but will pass once this problem is fixed.
  Steps to Reproduce:
    Run the test program on a ZFS file system.
  Expected Results:
    Pass
  Actual Results:
    Fail.

Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Approved by: Prakash Surya <prakash.surya@delphix.com>
Author: Gordon Ross <gwr@nexenta.com>
MFC after:	2 weeks
2017-06-14 16:55:47 +00:00
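
The POSIX analogy in the description above can be demonstrated with a few lines of stand-alone C (an illustration only, not part of the fix): a descriptor opened for writing keeps its write access after fchmod() removes write permission from the file mode, while a fresh open is denied.

  /* An fd opened for writing keeps its write access even after fchmod()
   * removes write permission from the file mode. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int
  main(void)
  {
      int fd = open("testfile", O_CREAT | O_RDWR, 0644);
      if (fd < 0) {
          perror("open");
          return (1);
      }
      /* Drop write permission from the file mode, like setting READONLY. */
      if (fchmod(fd, 0444) != 0)
          perror("fchmod");

      /* The existing handle still has write access. */
      ssize_t n = write(fd, "still writable\n", 15);
      printf("write after fchmod(0444): %zd bytes\n", n);

      /* A fresh open for writing is now denied (when not running as root). */
      int fd2 = open("testfile", O_WRONLY);
      printf("new O_WRONLY open: %s\n", fd2 < 0 ? "denied" : "allowed");

      close(fd);
      if (fd2 >= 0)
          close(fd2);
      unlink("testfile");
      return (0);
  }
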
Andriy Gapon
7d506d0d57 MFV r319948: 5428 provide fts(), reallocarray(), and strtonum()
illumos/illumos-gate@4585130b25
4585130b25

https://www.illumos.org/issues/5428

Most of the upstream change is not applicable to FreeBSD.
Only the renaming of strtonum to zfs_strtonum is relevant to us.
And we already had it partially done.

Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Joshua M. Clulow <josh@sysmgr.org>
Author: Yuri Pankov <yuri.pankov@nexenta.com>
MFC after:	1 week
2017-06-14 16:42:38 +00:00
Andriy Gapon
b8d341fe26 MFV r319945,r319946: 8264 want support for promoting datasets in libzfs_core
illumos/illumos-gate@a4b8c9aa65
a4b8c9aa65

https://www.illumos.org/issues/8264
  Oddly there is a lzc_clone function, but no lzc_promote function.

Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan McDonald <danmcd@kebe.com>
Approved by: Dan McDonald <danmcd@kebe.com>
Author: Andrew Stormont <astormont@racktopsystems.com>
MFC after:	1 week
2017-06-14 16:31:36 +00:00
Andriy Gapon
667002fa27 MFV r319741: 8156 dbuf_evict_notify() does not need dbuf_evict_lock
illumos/illumos-gate@dbfd9f9300
dbfd9f9300

https://www.illumos.org/issues/8156
  dbuf_evict_notify() holds the dbuf_evict_lock while checking if it should do
  the eviction itself (because the evict thread is not able to keep up).
  This can result in massive lock contention.
  It isn't necessary to hold the lock, because if we make the wrong choice
  occasionally, nothing bad will happen.

Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
MFC after:	1 week
2017-06-09 15:28:57 +00:00
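
A sketch of the lockless check described above (hypothetical names, with C11 atomics standing in for the kernel primitives): the cache size is read without taking any lock, and a stale value merely means we occasionally help evict when we did not strictly need to, or skip one round, both of which are harmless.

  /* Lockless threshold check: a relaxed atomic load replaces taking a lock
   * just to read the current size.  Hypothetical names, not the dbuf code. */
  #include <stdatomic.h>
  #include <stdio.h>

  static _Atomic unsigned long cache_size;
  static const unsigned long cache_max = 1024;

  static void
  evict_one(void)
  {
      atomic_fetch_sub_explicit(&cache_size, 1, memory_order_relaxed);
  }

  /* Called on every insert; note: no lock around the threshold check. */
  static void
  cache_insert_notify(void)
  {
      atomic_fetch_add_explicit(&cache_size, 1, memory_order_relaxed);
      if (atomic_load_explicit(&cache_size, memory_order_relaxed) > cache_max)
          evict_one();    /* help the evict thread keep up */
  }

  int
  main(void)
  {
      for (int i = 0; i < 2000; i++)
          cache_insert_notify();
      printf("cache size now %lu (max %lu)\n",
          (unsigned long)atomic_load(&cache_size), cache_max);
      return (0);
  }
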
Andriy Gapon
5f9cf93878 MFV r319739: 8005 poor performance of 1MB writes on certain RAID-Z configurations
illumos/illumos-gate@5b06278253
5b06278253

https://www.illumos.org/issues/8005
  RAID-Z requires that space be allocated in multiples of P+1 sectors,
  because this is the minimum size block that can have the required amount
  of parity. Thus blocks on RAIDZ1 must be allocated in a multiple of 2
  sectors; on RAIDZ2 multiple of 3; and on RAIDZ3 multiple of 4. A sector
  is a unit of 2^ashift bytes, typically 512B or 4KB.
  To satisfy this constraint, the allocation size is rounded up to the
  proper multiple, resulting in up to 3 "pad sectors" at the end of some
  blocks. The contents of these pad sectors are not used, so we do not
  need to read or write these sectors. However, some storage hardware
  performs much worse (around 1/2 as fast) on mostly-contiguous writes
  when there are small gaps of non-overwritten data between the writes.
  Therefore, ZFS creates "optional" zio's when writing RAID-Z blocks that
  include pad sectors. If writing a pad sector will fill the gap between
  two (required) writes, we will issue the optional zio, thus doubling
  performance. The gap-filling performance improvement was introduced in
  July 2009.
  Writing the optional zio is done by the io aggregation code in
  vdev_queue.c. The problem is that it is also subject to the limit on
  the size of aggregate writes, zfs_vdev_aggregation_limit, which is by
  default 128KB. For a given block, if the amount of data plus padding
  written to a leaf device exceeds zfs_vdev_aggregation_limit, the
  optional zio will not be written, resulting in a ~2x performance
  degradation.
  The problem occurs only for certain values of ashift, compressed block
  size, and RAID-Z configuration (number of parity and data disks). It
  cannot occur with the default recordsize=128KB. If compression is
  enabled, all configurations with recordsize=1MB or larger will be
  impacted to some degree.
  The problem notably occurs with recordsize=1MB, compression=off, with 10
  disks in a RAIDZ2 or RAIDZ3 group (with 512B or 4KB sectors). Therefore

Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
MFC after:	10 days
2017-06-09 15:27:22 +00:00
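
The allocation rounding described above is plain arithmetic; the sketch below (a hypothetical helper mirroring the rule, not the real vdev_raidz code) computes the allocated size for one block and shows where the parity and pad sectors come from.

  /* Round a RAID-Z allocation up to a multiple of (nparity + 1) sectors and
   * report how much parity and padding that adds.  Illustrative arithmetic. */
  #include <stdint.h>
  #include <stdio.h>

  static uint64_t
  raidz_asize(uint64_t psize, int nparity, int ndisks, int ashift)
  {
      uint64_t sectors = (psize + (1ULL << ashift) - 1) >> ashift;
      /* one parity sector per stripe of (ndisks - nparity) data sectors */
      uint64_t stripes =
          (sectors + (ndisks - nparity) - 1) / (ndisks - nparity);
      uint64_t total = sectors + stripes * nparity;
      /* the allocation must be a multiple of nparity + 1 sectors */
      uint64_t rounded = ((total + nparity) / (nparity + 1)) * (nparity + 1);
      return (rounded << ashift);
  }

  int
  main(void)
  {
      /* 1MB block, RAIDZ2, 10 disks, 4KB sectors (a case from the issue). */
      uint64_t psize = 1ULL << 20;
      int nparity = 2, ndisks = 10, ashift = 12;

      uint64_t asize = raidz_asize(psize, nparity, ndisks, ashift);
      printf("psize=%llu asize=%llu parity+pad=%llu bytes\n",
          (unsigned long long)psize, (unsigned long long)asize,
          (unsigned long long)(asize - psize));
      return (0);
  }

For this case the 256 data sectors need 64 parity sectors plus one pad sector to reach a multiple of three, and it is that pad sector's optional zio that the aggregation limit could prevent from being written.
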
Andriy Gapon
9f141f8d71 MFV r319738: 8155 simplify dmu_write_policy handling of pre-compressed buffers
illumos/illumos-gate@adaec86ad2
adaec86ad2

https://www.illumos.org/issues/8155
  When writing pre-compressed buffers, arc_write() requires that the compression
  algorithm used to compress the buffer matches the compression algorithm
  requested by the zio_prop_t, which is set by dmu_write_policy().
  This makes dmu_write_policy() and its callers a bit more complicated.
  We can simplify this by making arc_write() trust the caller to supply the type
  of pre-compressed buffer that it wants to write, and override the compression
  setting in the zio_prop_t.

Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
MFC after:	10 days
2017-06-09 15:26:03 +00:00
Andriy Gapon
1628f75af1 zfs_lookup: fix bogus arguments to lookup of "snapshot" directory
When a parent directory lookup is done at the root of a snapshot mounted
under the .zfs/snapshot directory, we need to look up that directory in
the parent filesystem.  We achieve that by doing a VOP_LOOKUP operation
on the .zfs vnode with "snapshot" as the target name.  But previously we
also passed the ISDOTDOT flag to the lookup and, because of that, the
lookup actually returned the parent of the .zfs vnode, that is, the root
vnode of the parent filesystem.

Reported by:	lev
Tested by:	lev
MFC after:	3 days
2017-05-29 06:30:34 +00:00
Konstantin Belousov
03311f117b Use whole mnt_stat.f_fsid bits for st_dev.
Since ino64 expanded dev_t to 64bit, make VOP_GETATTR(9) provide all
bits of mnt_stat.f_fsid as va_fsid for vnodes on filesystems which use
f_fsid.  In particular, NFSv3 and sometimes NFSv4, and ZFS use this
method of reporting st_dev via stat(2).

Provide a new helper vn_fsid() to avoid duplicating code to copy
f_fsid to va_fsid.

Note that the change is mostly cosmetic.  Its motivation is to avoid
sign-extension of f_fsid[0] into the 64-bit dev_t value, which happens now
that dev_t has become 64-bit.

Reviewed by:	avg(zfs), rmacklem (nfs) (both for previous version)
Sponsored by:	The FreeBSD Foundation
2017-05-27 17:00:30 +00:00
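
The sign-extension issue mentioned in the last paragraph is easy to show in stand-alone C (illustration only): assigning a negative 32-bit fsid word straight to a 64-bit dev_t smears the sign bit across the upper half, whereas combining both f_fsid words keeps every bit.

  /* Why a signed 32-bit fsid word must not be assigned directly to a 64-bit
   * dev_t: the sign bit gets extended into the upper 32 bits. */
  #include <stdint.h>
  #include <stdio.h>

  int
  main(void)
  {
      int32_t fsid[2] = { (int32_t)0xdeadbeef, 0x00000042 };

      /* Naive assignment: sign extension fills the upper half with ones. */
      int64_t dev_bad = fsid[0];

      /* Use the whole fsid: both words, no sign extension. */
      uint64_t dev_good = ((uint64_t)(uint32_t)fsid[1] << 32) |
          (uint32_t)fsid[0];

      printf("sign-extended: 0x%016llx\n", (unsigned long long)dev_bad);
      printf("whole fsid:    0x%016llx\n", (unsigned long long)dev_good);
      return (0);
  }
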
Andriy Gapon
32ecf81aff MFV r318944: 8265 Reserve send stream flag for large dnode feature
illumos/illumos-gate@bc83969fdb
bc83969fdb

https://www.illumos.org/issues/8265
  Reserve bit 23 in the zfs send stream flags for the large
  dnode feature which has been implemented for Linux.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Brian Behlendorf <behlendorf1@llnl.gov>

MFC after:	1 week
2017-05-26 12:08:38 +00:00
Andriy Gapon
a51eb0a964 MFV r318942: 8166 zpool scrub thinks it repaired offline device
illumos/illumos-gate@2d2f193a21
2d2f193a21

https://www.illumos.org/issues/8166
  If we do a scrub while a leaf device is offline (via "zpool offline"),
  we will inadvertently clear the DTL (dirty time log) of the offline
  device, even though it is still damaged. When the device comes back
  online, we will incompletely resilver it, thinking that the scrub
  repaired blocks written before the scrub was started. The incomplete
  resilver can lead to data loss if there is a subsequent failure of a
  different leaf device.
  The fix is to never clear the DTL of offline devices. Note that if a
  device is onlined while a scrub is in progress, the scrub will be
  restarted.
  The problem can be worked around by running "zpool scrub" after
  "zpool online".
  See also https://github.com/zfsonlinux/zfs/issues/5806

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-05-26 12:04:21 +00:00
Andriy Gapon
2cd05c2473 MFV r318934: 8070 Add some ZFS comments
illumos/illumos-gate@40713f2b24
40713f2b24

https://www.illumos.org/issues/8070
  Add some ZFS comments left by various developers at different times

Reviewed by: Yuri Pankov <yuri.pankov@gmail.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Alan Somers <asomers@gmail.com>

MFC after:	1 week
2017-05-26 11:49:42 +00:00
Andriy Gapon
0a07ea0e2f MFV r318931: 8063 verify that we do not attempt to access inactive txg
illumos/illumos-gate@b7b2590dd9
b7b2590dd9

https://www.illumos.org/issues/8063
  A standard practice in ZFS is to keep track of "per-txg" state. Any of
  the 3 active TXG's (open, quiescing, syncing) can have different values
  for this state. We should assert that we do not attempt to modify other
  (inactive) TXG's.

Reviewed by: Serapheim Dimitropoulos <serapheim@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>

MFC after:	2 weeks
2017-05-26 11:37:11 +00:00
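
A toy model of the per-txg state and the new assertion (hypothetical code, not the actual ZFS macros): state is kept in TXG_SIZE slots indexed by txg & TXG_MASK, and the assertion rejects any attempt to touch a txg that is not one of the currently active ones.

  /* Toy per-txg state: TXG_SIZE slots indexed by txg & TXG_MASK, with an
   * assertion that only an active (open/quiescing/syncing) txg is touched. */
  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  #define TXG_SIZE 4
  #define TXG_MASK (TXG_SIZE - 1)

  static uint64_t per_txg_count[TXG_SIZE];
  static uint64_t last_synced_txg = 100;  /* example: txgs 101..103 are active */

  static void
  per_txg_add(uint64_t txg, uint64_t delta)
  {
      /* Only the open, quiescing, or syncing txg may be modified. */
      assert(txg > last_synced_txg && txg <= last_synced_txg + TXG_SIZE - 1);
      per_txg_count[txg & TXG_MASK] += delta;
  }

  int
  main(void)
  {
      per_txg_add(101, 1);    /* syncing txg: fine */
      per_txg_add(103, 1);    /* open txg: fine */
      printf("ok so far; the next call trips the assertion:\n");
      per_txg_add(99, 1);     /* already-synced txg: assertion fires */
      return (0);
  }
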
Andriy Gapon
28c5e43e36 MFV r318929: 7786 zfs`vdev_online() needs better notification about state changes
illumos/illumos-gate@5f368aef86
5f368aef86

https://www.illumos.org/issues/7786
  Currently, vdev_online() will only post a sysevent if the previous state was
  "offline". It should also post the event when the state changes from "removed"
  or "faulted" to "healthy" or "degraded".
  This will fix the following scenario:
  - pull disk from slot A
  - check that hotspare has taken its place (if available)
  - insert disk into slot B
  - check that hotspare moved back to "avail" state (if spare was used)
  The problem here is that we don't get any ESC_ZFS_VDEV_* notification and fail
  to update the vdev FRU.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Albert Lee <trisk@forkgnu.org>
Author: Yuri Pankov <yuri.pankov@nexenta.com>

MFC after:	1 week
2017-05-26 11:33:34 +00:00
Andriy Gapon
9c2a3c861f MFV r318927: 8025 dbuf_read() creates unnecessary zio_root() for bonus buf
illumos/illumos-gate@def4fac588
def4fac588

https://www.illumos.org/issues/8025
  dbuf_read() creates a zio_root() to track and wait for all the zio's
  that may happen as part of this call. However, if the blkptr_t for
  this buffer is NULL or a hole, we will not create any more zio's, so
  this zio_root() is unnecessary. This is always the case when calling
  dbuf_read() on a bonus buffer, because it has no blkptr (it's part of
  the containing dnode). For workloads that read a lot of bonus buffers
  (e.g. file creation and removal), creating and destroying these
  unnecessary zio's can decrease performance by around 3%.

Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Prashanth Sreenivasa <pks@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-05-26 11:30:55 +00:00
Andriy Gapon
ebaf416f95 MFV r316929: 6914 kernel virtual memory fragmentation leads to hang
illumos/illumos-gate@af868f46a5
af868f46a5

https://www.illumos.org/issues/6914

FreeBSD note: only a ZFS part of the change is merged, changes to the VM
subsystem are not ported (obviously).  Also, now that FreeBSD has
vmem(9) we don't have to ifdef-out the code that uses it.

MFC after:	2 weeks
2017-05-26 11:23:16 +00:00
Andriy Gapon
8629ec8394 arc_init: make code closer to upstream by introducing 'allmem' variable
All the differences in calculations are kept.
A comment about arc_max being 1/2 of all memory is fixed to reflect the
actual code that uses 5/8 as a factor.

MFC after:	1 week
2017-05-26 11:05:56 +00:00
Andriy Gapon
cf781c9b60 zfs_putpages: assert that sa_bulk_update() must succeed
Same as the upstream does in r316927.

MFC after:	1 week
2017-05-26 10:37:55 +00:00
Andriy Gapon
04b7c6b337 MFV r316928: 7256 low probability race in zfs_get_data
illumos/illumos-gate@0c94e1af67
0c94e1af67

https://www.illumos.org/issues/7256
                         error = dmu_sync(zio, lr->lr_common.lrc_txg,
                              zfs_get_done, zgd);
                         ASSERT(error || lr->lr_length <= zp->z_blksz);
  It's possible, although extremely rare, that the zfs_get_done() callback is
  executed before dmu_sync() returns.
  In that case the znode's range lock is dropped and the znode is unreferenced.
  Thus, the assertion can access some invalid or wrong data via the zp pointer.
  The size variable caches the correct value of z_blksz and can be safely used here.

Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Andriy Gapon <andriy.gapon@clusterhq.com>

MFC after:	1 week
2017-05-26 10:31:05 +00:00
Andriy Gapon
7a94dd7aee MFC r316924: 8061 sa_find_idx_tab can be declared more type-safely
illumos/illumos-gate@7f0bdb4257
7f0bdb4257

https://www.illumos.org/issues/8061
  sa_find_idx_tab() is declared as taking and returning "void *" parameters.
  These can be declared to be the specific types.

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Chris Williamson <chris.williamson@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Matthew Ahrens <mahrens@delphix.com>

MFC after:	1 week
2017-05-26 10:27:35 +00:00
Andriy Gapon
8816c0bb48 MFV r316925: 6101 attempt to lzc_create() a filesystem under a volume results in a panic
illumos/illumos-gate@b127fe3c05
b127fe3c05

https://www.illumos.org/issues/6101
  lzc_create(), or more correctly, zfs_ioc_create(), does not reject an attempt
  to create a filesystem as a child of a volume; instead, it proceeds to crash.
  A crash stack obtained on FreeBSD:
  page fault while in kernel mode

  zap_leaf_lookup()
  fzap_lookup()
  zap_lookup_norm()
  zap_lookup()
  zfs_get_zplprop()
  zfs_fill_zplprops_impl()
  zfs_ioc_create()
  zfsdev_ioctl()
  devfs_ioctl_f()
  kern_ioctl()
  sys_ioctl()
  This crash happened with a kernel without debugging assertions.
  The immediate cause of the crash appears to be an attempt to interpret a zvol object
  as a zap object.
  For filesystems:
  #define MASTER_NODE_OBJ 1
  For zvols:
  #define ZVOL_OBJ                1ULL
  #define ZVOL_ZAP_OBJ            2ULL
  So, I see two problems here:
     1. an attempt to create a filesystem under a zvol should be rejected as
        early as possible, maybe in zfs_fill_zplprops()
     2. maybe zap_lookup / zap_lockdir should reject objects that are not of one
        of the zap object types

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Andriy Gapon <avg@FreeBSD.org>
MFC after:	2 weeks
2017-05-24 22:34:54 +00:00
Andriy Gapon
e73f9f8a49 MFV r316923: 8026 retire zfs_throttle_delay and zfs_throttle_resolution
illumos/illumos-gate@6b03625981
6b03625981

https://www.illumos.org/issues/8026
  zfs_throttle_delay and zfs_throttle_resolution have been unused since the new
  write throttling mechanism was introduced.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Serapheim Dimitropoulos <serapheim@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
Author: Andriy Gapon <avg@FreeBSD.org>
MFC after:	1 week
2017-05-24 22:32:56 +00:00
Andriy Gapon
9fe5e04dfc MFC r316921: 8027 tighten up dsl_pool_dirty_delta
illumos/illumos-gate@313ae1e182
313ae1e182

https://www.illumos.org/issues/8027
  dsl_pool_dirty_delta() should not wake up waiters when dp->dp_dirty_total ==
  zfs_dirty_data_max, because they wait for dp_dirty_total to fall strictly below
  the threshold.
  It's probably very rare for that condition to occur, but it's better to have
  more accurate code.

Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Serapheim Dimitropoulos <serapheim@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Andriy Gapon <avg@FreeBSD.org>
MFC after:	1 week
2017-05-24 22:27:48 +00:00
Andriy Gapon
e1b8f10a5e MFV r316920: 8023 Panic destroying a metaslab deferred range tree
illumos/illumos-gate@3991b535a8
3991b535a8

https://www.illumos.org/issues/8023
       $C
  ffffff0011bc0970 vpanic()
  ffffff0011bc0a00 strlog()
  ffffff0011bc0a30 range_tree_destroy+0x72(ffffff043769ad00)
  ffffff0011bc0a70 metaslab_fini+0xd5(ffffff0449acf380)
  ffffff0011bc0ab0 vdev_metaslab_fini+0x56(ffffff0462bae800)
  ffffff0011bc0af0 spa_unload+0x9b(ffffff03e3dac000)
  ffffff0011bc0b70 spa_export_common+0x115(ffffff047f4b4000, 2, 0, 0, 0)
  ffffff0011bc0b90 spa_destroy+0x1d(ffffff047f4b4000)
  ffffff0011bc0bd0 zfs_ioc_pool_destroy+0x20(ffffff047f4b4000)
  ffffff0011bc0c80 zfsdev_ioctl+0x4d7(11400000000, 5a01, 8040190, 100003,
  ffffff03e1956b10, ffffff0011bc0e68)
  ffffff0011bc0cc0 cdev_ioctl+0x39(11400000000, 5a01, 8040190, 100003,
  ffffff03e1956b10, ffffff0011bc0e68)
  ffffff0011bc0d10 spec_ioctl+0x60(ffffff03d9153b00, 5a01, 8040190, 100003,
  ffffff03e1956b10, ffffff0011bc0e68, 0)
  ffffff0011bc0da0 fop_ioctl+0x55(ffffff03d9153b00, 5a01, 8040190, 100003,
  ffffff03e1956b10, ffffff0011bc0e68, 0)
  ffffff0011bc0ec0 ioctl+0x9b(3, 5a01, 8040190)
  ffffff0011bc0f10 _sys_sysenter_post_swapgs+0x149()

Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: George Wilson <george.wilson@delphix.com>
MFC after:	2 weeks
2017-05-24 22:25:26 +00:00
Andriy Gapon
5386d7295a MFV r316917: 7968 multi-threaded spa_sync()
illumos/illumos-gate@94c2d0eb22
94c2d0eb22

https://www.illumos.org/issues/7968
  spa_sync() iterates over all the dirty dnodes and processes each of them by
  calling dnode_sync(). If there are many dirty dnodes (e.g. because we created
  or removed a lot of files), the single thread of spa_sync() calling
  dnode_sync() can become a bottleneck. Additionally, if many dnodes are dirtied
  concurrently in open context (e.g. due to concurrent file creation), the
  os_lock will experience lock contention via dnode_setdirty().
  The solution is to track dirty dnodes on a multilist_t, and for spa_sync() to
  use separate threads to process each of the sublists in the multilist.
  On the concurrent file creation microbenchmark, the performance improvement
  from dnode_setdirty() is up to 7%. Additionally, the wall clock time spent in
  spa_sync() is reduced to 15%-40% of the single-threaded case. In terms of cost/
  reward, once the other bottlenecks are addressed, fixing this bug will provide
  a medium-large performance gain and require a medium amount of effort to
  implement.

Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Matthew Ahrens <mahrens@delphix.com>
MFC after:	3 weeks
2017-05-24 22:21:24 +00:00
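
A much-simplified pthread sketch of the multilist idea from the entry above (hypothetical code; link with -lpthread): dirty items are hashed into independent sublists so producers contend only on per-sublist locks, and the sync phase runs one worker per sublist.

  /* Sketch of the multilist approach: per-sublist locks on insert, and one
   * worker thread per sublist during "sync".  Not the real ZFS code. */
  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define NSUBLISTS 4

  struct node { int id; struct node *next; };

  static struct node *sublist[NSUBLISTS];
  static pthread_mutex_t sublist_lock[NSUBLISTS] = {
      PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
      PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
  };

  /* "dnode_setdirty": pick a sublist by hash, lock only that sublist. */
  static void
  mark_dirty(int id)
  {
      int idx = id % NSUBLISTS;
      struct node *n = malloc(sizeof (*n));
      n->id = id;
      pthread_mutex_lock(&sublist_lock[idx]);
      n->next = sublist[idx];
      sublist[idx] = n;
      pthread_mutex_unlock(&sublist_lock[idx]);
  }

  /* "sync" worker: process one sublist, no cross-sublist contention. */
  static void *
  sync_sublist(void *arg)
  {
      int idx = (int)(long)arg;
      long count = 0;
      for (struct node *n = sublist[idx]; n != NULL; n = n->next)
          count++;                /* stand-in for dnode_sync() */
      printf("sublist %d: synced %ld items\n", idx, count);
      return (NULL);
  }

  int
  main(void)
  {
      pthread_t tid[NSUBLISTS];

      for (int i = 0; i < 10000; i++)
          mark_dirty(i);
      for (long i = 0; i < NSUBLISTS; i++)
          pthread_create(&tid[i], NULL, sync_sublist, (void *)i);
      for (int i = 0; i < NSUBLISTS; i++)
          pthread_join(tid[i], NULL);
      return (0);
  }
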
Andriy Gapon
2ba631553c MFV r316916: 7970 zfs_arc_num_sublists_per_state should be common to all multilists
illumos/illumos-gate@10fbdecb05
10fbdecb05

https://www.illumos.org/issues/7970
  The global tunable zfs_arc_num_sublists_per_state is used by the ARC and
  the dbuf cache, and other users are planned. We should change this
  tunable to be common to all multilists.

Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Matthew Ahrens <mahrens@delphix.com>
MFC after:	3 weeks
2017-05-24 22:15:16 +00:00
Andriy Gapon
1d7634429c MFC r316915: 7801 add more by-dnode routines (lint)
illumos/illumos-gate@411be58a6e
411be58a6e
MFC after:	24 days
X-MFC with:	r318823
2017-05-24 21:52:20 +00:00
Andriy Gapon
31fd119cc2 MFC r316914: 7801 add more by-dnode routines
illumos/illumos-gate@b0c42cd470
b0c42cd470

https://www.illumos.org/issues/7801
  Add *_by_dnode() routines for accessing objects given their
  dnode_t *, this is more efficient than accessing the object by
  (objset_t *, uint64_t object). This change converts some but
  not all of the existing consumers. As performance-sensitive
  code paths are discovered they should be converted to use
  these routines.
  Ported from: 0eef1bde31

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: bzzz77 <bzzz.tomas@gmail.com>
MFC after:	24 days
2017-05-24 21:49:21 +00:00
Andriy Gapon
5aab788866 MFC r316913: 7869 panic in bpobj_space(): null pointer dereference
illumos/illumos-gate@a3905a4592
a3905a4592

https://www.illumos.org/issues/7869
  The issue fixed by this patch is a race condition in the deadlist code.
  A thread executing an administrative command that uses
  `dsl_deadlist_space_range()` holds the lock of the whole `deadlist_t` to
  protect the access of all its entries that the deadlist contains in an
  avl tree.
  Sync threads trying to insert a new entry in the deadlist
  (through `dsl_deadlist_insert()` -> `dle_enqueue()`) do not hold the
  deadlist lock at that moment. If the `dle_bpobj` is the empty bpobj (our
  sentinel value), we close and reopen it. Between these two operations,
  it is possible for the `dsl_deadlist_space_range()` thread to dereference
  that bpobj which is `NULL` during that window.
  Threads should hold a deadlist's `dl_lock` when they manipulate its
  internal data so scenarios like the one above are avoided. In addition,
  threads should also hold the bpobj lock whenever they are allocating the
  subobj list of a bpobj, and not just when they actually insert the subobj
  to the list. This way we can avoid potential memory leaks.

Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Reviewed by: George Melikov <mail@gmelikov.ru>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Serapheim Dimitropoulos <serapheim@delphix.com>
MFC after:	2 weeks
2017-05-24 21:45:52 +00:00
Andriy Gapon
930f1af491 MFC r316912: 7793 ztest fails assertion in dmu_tx_willuse_space
illumos/illumos-gate@61e255ce72
61e255ce72

https://www.illumos.org/issues/7793
  Background information: This assertion about tx_space_* verifies that we
  are not dirtying more stuff than we thought we would. We “need” to know
  how much we will dirty so that we can check if we should fail this
  transaction with ENOSPC/EDQUOT, in dmu_tx_assign(). While the
  transaction is open (i.e. between dmu_tx_assign() and dmu_tx_commit() —
  typically less than a millisecond), we call dbuf_dirty() on the exact
  blocks that will be modified. Once this happens, the temporary
  accounting in tx_space_* is unnecessary, because we know exactly what
  blocks are newly dirtied; we call dnode_willuse_space() to track this
  more exact accounting.
  The fundamental problem causing this bug is that dmu_tx_hold_*() relies
  on the current state in the DMU (e.g. dn_nlevels) to predict how much
  will be dirtied by this transaction, but this state can change before we
  actually perform the transaction (i.e. call dbuf_dirty()).
  This bug will be fixed by removing the assertion that the tx_space_*
  accounting is perfectly accurate (i.e. we never dirty more than was
  predicted by dmu_tx_hold_*()). By removing the requirement that this
  accounting be perfectly accurate, we can also vastly simplify it, e.g.
  removing most of the logic in dmu_tx_count_*().
  The new tx space accounting will be very approximate, and may be more or
  less than what is actually dirtied. It will still be used to determine
  if this transaction will put us over quota. Transactions that are marked
  by dmu_tx_mark_netfree() will be excepted from this check. We won’t make
  an attempt to determine how much space will be freed by the transaction
  — this was rarely accurate enough to determine if a transaction should
  be permitted when we are over quota, which is why dmu_tx_mark_netfree()
  was introduced in 2014.
  We also won’t attempt to give “credit” when overwriting existing blocks,
  if those blocks may be freed. This allows us to remove the
  do_free_accounting logic in dbuf_dirty(), and associated routines. This

Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Matthew Ahrens <mahrens@delphix.com>
MFC after:	3 weeks
2017-05-24 21:43:34 +00:00
Andriy Gapon
3a9c923927 MFC r316907: 1300 filename normalization doesn't work for removes
illumos/illumos-gate@1c17160ac5
1c17160ac5

https://www.illumos.org/issues/1300

FreeBSD note: recent FreeBSD was not affected by the issue being fixed, as the
name cache is completely bypassed when normalization is enabled.
The change is imported for the sake of ZAP infrastructure modifications.

Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Kevin Crowe <kevin.crowe@nexenta.com>

MFC after:	3 weeks
2017-05-24 21:29:31 +00:00
Alan Somers
7ac72c256f vdev_geom may associate multiple vdevs per g_consumer
vdev_geom.c currently uses the g_consumer's private field to point to a
vdev_t. That way, a GEOM event can cause a change to a ZFS vdev. For
example, when you remove a disk, the vdev's status will change to REMOVED.
However, vdev_geom will sometimes attach multiple vdevs to the same GEOM
consumer. If this happens, then geom events will only be propagated to one
of the vdevs.

Fix this by storing a linked list of vdevs in g_consumer's private field.

sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c

* g_consumer.private now stores a linked list of vdev pointers associated
  with the consumer instead of just a single vdev pointer.

* Change vdev_geom_set_physpath's signature to more closely match
  vdev_geom_set_rotation_rate

* Don't bother calling g_access in vdev_geom_set_physpath. It's guaranteed
  that we've already accessed the consumer by the time we get here.

* Don't call vdev_geom_set_physpath in vdev_geom_attach. Instead, call it
  in vdev_geom_open, after we know that the open has succeeded.

PR:		218634
Reviewed by:	gibbs
MFC after:	1 week
Sponsored by:	Spectra Logic Corp
Differential Revision:	https://reviews.freebsd.org/D10391
2017-05-11 16:26:56 +00:00
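
A minimal sketch of the data-structure change described above (toy types, not the actual vdev_geom code): the consumer's private field holds a list of attached vdevs, and an event handler walks the whole list instead of updating a single vdev.

  /* A consumer holds a list of attached "vdevs" so a GEOM event can be
   * propagated to all of them, not just one. */
  #include <stdio.h>
  #include <sys/queue.h>

  struct toy_vdev {
      const char *name;
      int removed;
      SLIST_ENTRY(toy_vdev) link;
  };

  SLIST_HEAD(vdev_list, toy_vdev);

  struct toy_consumer {
      struct vdev_list vdevs;     /* was: a single vdev pointer */
  };

  static void
  consumer_attach(struct toy_consumer *cp, struct toy_vdev *vd)
  {
      SLIST_INSERT_HEAD(&cp->vdevs, vd, link);
  }

  /* "orphan" event: mark every attached vdev as REMOVED. */
  static void
  consumer_orphan(struct toy_consumer *cp)
  {
      struct toy_vdev *vd;

      SLIST_FOREACH(vd, &cp->vdevs, link) {
          vd->removed = 1;
          printf("vdev %s marked REMOVED\n", vd->name);
      }
  }

  int
  main(void)
  {
      struct toy_consumer cons;
      struct toy_vdev a = { "mirror-0/disk0", 0, { NULL } };
      struct toy_vdev b = { "spare-0/disk0", 0, { NULL } };

      SLIST_INIT(&cons.vdevs);
      consumer_attach(&cons, &a);
      consumer_attach(&cons, &b);
      consumer_orphan(&cons);   /* both vdevs now see the event */
      return (0);
  }
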
Josh Paetzel
ba13ab83f2 Fix misport of compressed ZFS send/recv from 317414
Reported by:	Michael Jung <mikej@mikej.com>
Reviewed by:	avg
2017-05-01 12:56:12 +00:00
Mark Johnston
babf030fd6 Get rid of some ifdef soup in the fasttrap ioctl handler.
No functional change intended.

MFC after:	1 week
2017-04-28 22:25:22 +00:00
Josh Paetzel
285d85ab04 MFV 316905
7740 fix for 6513 only works in hole punching case, not truncation

illumos/illumos-gate@7de35a3ed0
7de35a3ed0

https://www.illumos.org/issues/7740
  The problem is that dbuf_findbp will return ENOENT if the block it's
  trying to find is beyond the end of the file. If that happens, we assume
  there is no birth time, and so we lose that information when we write
  out new blkptrs. We should teach dbuf_findbp to look for things that are
  beyond the current end, but not beyond the absolute end of the file.
  To verify, create a large file, truncate it to a short length, and then
  write beyond the end. Check with zdb to make sure that there are no
  holes with birth time zero (will appear as gaps).

Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Paul Dagnelie <pcd@delphix.com>
2017-04-28 02:11:29 +00:00
Josh Paetzel
358f157522 MFV 316900
7743 per-vdev-zaps have no initialize path on upgrade

illumos/illumos-gate@555da5111b
555da5111b

https://www.illumos.org/issues/7743
  When loading a pool that had been created before the existence of
  per-vdev zaps, on a system that knows about per-vdev zaps, the
  per-vdev zaps will not be allocated and initialized.
  This appears to be because the logic that would have done so, in
  spa_sync_config_object(), is not reached under normal operation. It is
  only reached if spa_config_dirty_list is non-empty.
  The fix is to add another `AVZ_ACTION_` enum that will allow this code
  to be reached when we detect that we're loading an old pool, even when
  there are no dirty configs.

Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Paul Dagnelie <pcd@delphix.com>
2017-04-27 23:31:38 +00:00
Josh Paetzel
8ad5797208 MFV 316898
7613 ms_freetree[4] is only used in syncing context

illumos/illumos-gate@5f14577801
5f14577801

https://www.illumos.org/issues/7613
  metaslab_t:ms_freetree[TXG_SIZE] is only used in syncing context. We should
  replace it with two trees: the freeing tree (ranges that we are freeing this
  syncing txg) and the freed tree (ranges which have been freed this txg).

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-04-27 22:00:03 +00:00
Josh Paetzel
e3cb0e99f8 MFV 316897
7586 remove #ifdef __lint hack from dmu.h

illumos/illumos-gate@4ba5b96163
4ba5b96163

https://www.illumos.org/issues/7586
  The #ifdef __lint in dmu.h is ugly, and it would be nice not to duplicate it if
  we add other inline functions into header files in ZFS, especially since it is
  difficult to make any other solution work across all compilation targets. We
  should switch to disabling the lint flags that are failing instead.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Dan Kimmel <dan.kimmel@delphix.com>
2017-04-27 21:11:57 +00:00
Josh Paetzel
011275233c MFV 316896
7580 ztest failure in dbuf_read_impl

illumos/illumos-gate@1a01181fdc
1a01181fdc

https://www.illumos.org/issues/7580
  We need to prevent any readers whenever we're about to zero out all the
  blkptrs. To do this we need to grab the dn_struct_rwlock as writer in
  dbuf_write_children_ready and free_children just prior to calling bzero.

Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: George Wilson <george.wilson@delphix.com>
2017-04-27 16:38:28 +00:00
Josh Paetzel
fa88c78914 MFV 316895
7606 dmu_objset_find_dp() takes a long time while importing pool

illumos/illumos-gate@7588687e6b
7588687e6b

https://www.illumos.org/issues/7606
  When importing a pool with a large number of filesystems within the same
  parent filesystem, we see that dmu_objset_find_dp() takes a long time.
  It is called from 3 places: spa_check_logs(), spa_ld_claim_log_blocks(),
  and spa_load_verify().
  There are several ways to improve performance here:
  1. We don't really need to do spa_check_logs() or
         spa_ld_claim_log_blocks() if the pool was closed cleanly.
  2. spa_load_verify() uses dmu_objset_find_dp() to check that no
         datasets have names that are too long.
  3. dmu_objset_find_dp() is slow because it's doing
         zap_value_search() (which is O(N sibling datasets)) to determine
         the name of each dsl_dir when it's opened. In this case we
         actually know the name when we are opening it, so we can provide
         it and avoid the lookup.
  This change implements fix #3 from the above list; i.e. make
  dmu_objset_find_dp() provide the name of the dataset so that we don't
  have to search for it.

Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prashanth Sreenivasa <prashksp@gmail.com>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
Author: Matthew Ahrens <mahrens@delphix.com>
2017-04-27 15:10:45 +00:00
Josh Paetzel
c78abb8b50 MFV 316894
7252 7628 compressed zfs send / receive

illumos/illumos-gate@5602294fda
5602294fda

https://www.illumos.org/issues/7252
  This feature includes code to allow a system with compressed ARC enabled to
  send data in its compressed form straight out of the ARC, and receive data in
  its compressed form directly into the ARC.

https://www.illumos.org/issues/7628
  We should have longer, more readable versions of the ZFS send / recv options.

7628 create long versions of ZFS send / receive options

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Sebastien Roy <sebastien.roy@delphix.com>
Reviewed by: David Quigley <dpquigl@davequigley.com>
Reviewed by: Thomas Caputi <tcaputi@datto.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Dan Kimmel <dan.kimmel@delphix.com>
2017-04-25 17:57:43 +00:00
Josh Paetzel
ef18459108 MFV 316891
7386 zfs get does not work properly with bookmarks

illumos/illumos-gate@edb901aab9
edb901aab9

https://www.illumos.org/issues/7386
  The zfs get command does not work with the bookmark parameter while it works
  properly with both filesystem and snapshot:
  # zfs get -t all -r creation rpool/test
  NAME               PROPERTY  VALUE                  SOURCE
  rpool/test         creation  Fri Sep 16 15:00 2016  -
  rpool/test@snap    creation  Fri Sep 16 15:00 2016  -
  rpool/test#bkmark  creation  Fri Sep 16 15:00 2016  -
  # zfs get -t all -r creation rpool/test@snap
  NAME             PROPERTY  VALUE                  SOURCE
  rpool/test@snap  creation  Fri Sep 16 15:00 2016  -
  # zfs get -t all -r creation rpool/test#bkmark
  cannot open 'rpool/test#bkmark': invalid dataset name
  #
  The zfs get command should be modified to work properly with bookmarks too.

Reviewed by: Simon Klinkert <simon.klinkert@gmail.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Matthew Ahrens <mahrens@delphix.com>
Author: Marcel Telka <marcel@telka.sk>
2017-04-21 19:53:52 +00:00
Josh Paetzel
36064ac2d5 MFV 316871
7490 real checksum errors are silenced when zinject is on

illumos/illumos-gate@6cedfc397d
6cedfc397d

https://www.illumos.org/issues/7490
  When zinject is on, error codes from zfs_checksum_error() can be overwritten
  due to an incorrect and overly-complex if condition.

Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Pavel Zakharov <pavel.zakharov@delphix.com>
2017-04-21 00:24:59 +00:00
Josh Paetzel
9a625bd31c MFV 316870
7448 ZFS doesn't notice when disk vdevs have no write cache

illumos/illumos-gate@295438ba32
295438ba32

https://www.illumos.org/issues/7448
       I built a SmartOS image with all the NVMe commits including 7372
       (support NVMe volatile write cache) and repeated my dd testing:
       > #!/bin/bash
       > for i in `seq 1 1000`; do
       > dd if=/dev/zero of=file00 bs=1M count=102400 oflag=sync &
       > dd if=/dev/zero of=file01 bs=1M count=102400 oflag=sync &
       > wait
       > rm file00 file01
       > done
       >
       Previously each dd command took ~145 seconds to finish, now it takes
       ~400 seconds.
       Eventually I figured out it is 7372 that causes unnecessary
       nvme_bd_sync() executions which wasted CPU cycles.
  If a NVMe device doesn't support a write cache, the nvme_bd_sync function will
  return ENOTSUP to indicate this to upper layers.
  It seems this returned value is ignored by ZFS, and as such this bug is not
  really specific to NVMe. In vdev_disk_io_start() ZFS sends the flush to the
  disk driver (blkdev) with a callback to vdev_disk_ioctl_done(). As nvme filled
  in the bd_sync_cache function pointer, blkdev will not return ENOTSUP, as the
  nvme driver in general does support cache flush. Instead it will issue an
  asynchronous flush to nvme and immediately return 0, and hence ZFS will not set
  vdev_nowritecache here. The nvme driver will at some point process the cache
  flush command, and if there is no write cache on the device it will return
  ENOTSUP, which will be delivered to the vdev_disk_ioctl_done() callback. This
  function will not check the error code and not set nowritecache.
  The right place to check the error code from the cache flush is in
  zio_vdev_io_assess(). This would catch both cases, synchronous and asynchronous
  cache flushes. This would also be independent of the implementation detail that
  some drivers can return ENOTSUP immediately.

Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
Obtained from:	Illumos
2017-04-21 00:17:54 +00:00
Josh Paetzel
47e222432b MFV 316868
7430 Backfill metadnode more intelligently

illumos/illumos-gate@af346df588
af346df588

https://www.illumos.org/issues/7430
  Description and patch brought over from the following ZoL commit:
  https://github.com/zfsonlinux/zfs/commit/68cbd56e182ab949f58d004778d463aeb3f595c6
  Only attempt to backfill lower metadnode object numbers if at least
  4096 objects have been freed since the last rescan, and at most once
  per transaction group. This avoids a pathology in dmu_object_alloc()
  that caused O(N^2) behavior for create-heavy workloads and
  substantially improves object creation rates. As summarized by
  @mahrens in #4636:
  "Normally, the object allocator simply checks to see if the next
  object is available. The slow calls happened when dmu_object_alloc()
  checks to see if it can backfill lower object numbers. This happens
  every time we move on to a new L1 indirect block (i.e. every 32 *
  128 = 4096 objects). When re-checking lower object numbers, we use
  the on-disk fill count (blkptr_t:blk_fill) to quickly skip over
  indirect blocks that don't have enough free dnodes (defined as an L2
  with at least 393,216 of 524,288 dnodes free). Therefore, we may
  find that a block of dnodes has a low (or zero) fill count, and yet
  we can't allocate any of its dnodes, because they've been allocated
  in memory but not yet written to disk. In this case we have to hold
  each of the dnodes and then notice that it has been allocated in
  memory.
  The end result is that allocating N objects in the same TXG can
  require CPU usage proportional to N^2."
  Add a tunable dmu_rescan_dnode_threshold to define the number of
  objects that must be freed before a rescan is performed. Don't bother
  to export this as a module option because testing doesn't show a
  compelling reason to change it. The vast majority of the performance
  gain comes from limiting the rescan to at most once per TXG.

Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
Author: Ned Bass <bass6@llnl.gov>

Obtained from:	Illumos
2017-04-21 00:12:47 +00:00
Gleb Smirnoff
83c9dea1ba - Remove 'struct vmmeter' from 'struct pcpu', leaving only global vmmeter
in place.  To do per-cpu stats, convert all fields that previously were
  maintained in the vmmeters that sit in pcpus to counter(9).
- Since some vmmeter stats may be touched at very early stages of boot,
  before we have set up UMA and we can do counter_u64_alloc(), provide an
  early counter mechanism:
  o Leave one spare uint64_t in struct pcpu, named pc_early_dummy_counter.
  o Point counter(9) fields of vmmeter to pcpu[0].pc_early_dummy_counter,
    so that at early stages of boot, before counters are allocated we already
    point to a counter that can be safely written to.
  o For sparc64 that required a whole dummy pcpu[MAXCPU] array.

Further related changes:
- Don't include vmmeter.h into pcpu.h.
- vm.stats.vm.v_swappgsout and vm.stats.vm.v_swappgsin changed to 64-bit,
  to match kernel representation.
- struct vmmeter hidden under _KERNEL, and only vmstat(1) is an exclusion.

This is based on benno@'s 4-year old patch:
https://lists.freebsd.org/pipermail/freebsd-arch/2013-July/014471.html

Reviewed by:	kib, gallatin, marius, lidl
Differential Revision:	https://reviews.freebsd.org/D10156
2017-04-17 17:34:47 +00:00
Gleb Smirnoff
9ed01c32e0 All these files need sys/vmmeter.h, but now they got it implicitly
included via sys/pcpu.h.
2017-04-17 17:07:00 +00:00
Andriy Gapon
656074ea60 rename vfs.zfs.debug_flags to vfs.zfs.debugflags
While the former name is easier to read, the "_flags" suffix has a special
meaning for loader(8) and, thus, it was impossible to set the knob via
loader.conf(5).  The loader interpreted the setting as flags that should
be passed to a kernel module named "vfs.zfs.debug".

Discussed with:	smh
MFC after:	2 weeks
2017-04-14 15:35:07 +00:00
Alan Somers
d255847d9e Fix vdev_geom_attach_by_guids for partitioned disks
When opening a vdev whose path is unknown, vdev_geom must find a geom
provider with a label whose guids match the desired vdev. However, due to
partitioning, it is possible that two non-synonymous providers will share
some labels. For example, if the first partition starts at the beginning of
the drive, then ada0 and ada0p1 will share the first label. More troubling,
if the last partition runs to the end of the drive, then ada0p3 and ada0
will share the last label. If vdev_geom opens ada0 when it should've opened
ada0p3, then the pool won't be readable. If it opens ada0 when it should've
opened ada0p1, then it will corrupt some other partition when it writes the
3rd and 4th labels.

The easiest way to reproduce this problem is to install a mirrored root pool
with the default partition layout, then swap the positions of the two boot
drives and reboot.  Whether the bug manifests depends on the order in which
geom lists its providers, which is arbitrary.

Fix this situation by modifying the search algorithm to prefer geom
providers that have all four labels intact. If no such provider exists, then
open whichever provider has the most.

Reviewed by:	mav
MFC after:	3 weeks
Sponsored by:	Spectra Logic Corp
Differential Revision:	https://reviews.freebsd.org/D10365
2017-04-13 14:51:34 +00:00
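
A small sketch of the selection rule described above (made-up candidate data, not the real vdev_geom code): among providers whose labels match the wanted guids, prefer one with all four labels intact, otherwise take the one with the most.

  /* Prefer the candidate provider with all four vdev labels intact,
   * otherwise the one with the most matching labels. */
  #include <stdio.h>

  #define VDEV_LABELS 4

  struct candidate {
      const char *name;
      int nlabels;            /* how many of the 4 labels matched */
  };

  static const struct candidate *
  pick_provider(const struct candidate *c, int n)
  {
      const struct candidate *best = NULL;

      for (int i = 0; i < n; i++) {
          if (best == NULL || c[i].nlabels > best->nlabels)
              best = &c[i];
          if (best->nlabels == VDEV_LABELS)
              break;  /* cannot do better than all four */
      }
      return (best);
  }

  int
  main(void)
  {
      /* ada0 shares only the last label with the real vdev on ada0p3. */
      struct candidate cands[] = {
          { "ada0",   1 },
          { "ada0p3", 4 },
      };

      printf("selected %s\n", pick_provider(cands, 2)->name);
      return (0);
  }
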
Steven Hartland
3e856909b7 Fix expandsz 16.0E vals and vdev_min_asize of RAIDZ children
When a member of a RAIDZ has been replaced with a device smaller than the
original, then the top level vdev can report its expand size as 16.0E.

The reduced child asize causes the RAIDZ to have a vdev_asize lower than its
vdev_max_asize which then results in an underflow during the calculation of
the parent's expand size.

Fix this by updating the vdev_asize if it shrinks, which is already
protected by a check against vdev_min_asize so should always be safe.

Also for RAIDZ vdevs, ensure that the sum of their child vdev_min_asize is
always greater than the parent's vdev_min_asize.

Fixes: https://www.illumos.org/issues/7885

MFC after:	2 weeks
Sponsored by:	Multiplay
2017-04-03 13:11:28 +00:00
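
The 16.0E value is the signature of an unsigned 64-bit underflow; the stand-alone sketch below (illustrative names only, not the actual vdev fields) shows how subtracting a larger size from a smaller one wraps to nearly 2^64 bytes, and how a simple guard avoids it.

  /* Unsigned 64-bit underflow wraps to nearly 2^64 bytes, which zpool
   * renders as 16.0E. */
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int
  main(void)
  {
      uint64_t smaller = 998ULL << 30;   /* e.g. size after the replacement */
      uint64_t larger  = 1000ULL << 30;  /* e.g. size expected by the parent */

      uint64_t expandsz_bad = smaller - larger;          /* underflows */
      uint64_t expandsz_good =
          smaller > larger ? smaller - larger : 0;       /* guarded */

      printf("unguarded: %" PRIu64 " bytes (~%.1f EiB)\n", expandsz_bad,
          expandsz_bad / 1152921504606846976.0);
      printf("guarded:   %" PRIu64 " bytes\n", expandsz_good);
      return (0);
  }
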
Josh Paetzel
e106234416 MFV: 315989
7603 xuio_stat_wbuf_* should be declared (void)

illumos/illumos-gate@99aa8b5505
99aa8b5505

https://www.illumos.org/issues/7603

  The functions are declared K&R style, where the arguments are not specified:

  void xuio_stat_wbuf_copied();
  They should be declared to take no arguments:

  void xuio_stat_wbuf_copied(void);
  Need to change both .c and .h.

Author: Prashanth Sreenivasa <pks@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
2017-03-27 17:27:46 +00:00
Alexander Motin
3aef5b286a MFV r315290, r315291: 7303 dynamic metaslab selection
illumos/illumos-gate@8363e80ae7
https://github.com/illumos/illumos-gate/commit/8363e80ae72609660f6090766ca8c2c18

https://www.illumos.org/issues/7303

  This change introduces a new weighting algorithm to improve metaslab selection.
  The new weighting algorithm relies on the SPACEMAP_HISTOGRAM feature. As a result,
  the metaslab weight now encodes the type of weighting algorithm used
  (size-based vs segment-based).

  This also introduces a new allocation tracing facility and two new dcmds to help
  debug allocation problems. Each zio now contains a zio_alloc_list_t structure
  that is populated as the zio goes through the allocations stage. Here's an
  example of how to use the tracing facility:

> c5ec000::print zio_t io_alloc_list | ::walk list | ::metaslab_trace
  MSID    DVA    ASIZE      WEIGHT             RESULT               VDEV
     -      0      400           0    NOT_ALLOCATABLE           ztest.0a
     -      0      400           0    NOT_ALLOCATABLE           ztest.0a
     -      0      400           0             ENOSPC           ztest.0a
     -      0      200           0    NOT_ALLOCATABLE           ztest.0a
     -      0      200           0    NOT_ALLOCATABLE           ztest.0a
     -      0      200           0             ENOSPC           ztest.0a
     1      0      400      1 x 8M            17b1a00           ztest.0a

> 1ff2400::print zio_t io_alloc_list | ::walk list | ::metaslab_trace
  MSID    DVA    ASIZE      WEIGHT             RESULT               VDEV
     -      0      200           0    NOT_ALLOCATABLE           mirror-2
     -      0      200           0    NOT_ALLOCATABLE           mirror-0
     1      0      200      1 x 4M            112ae00           mirror-1
     -      1      200           0    NOT_ALLOCATABLE           mirror-2
     -      1      200           0    NOT_ALLOCATABLE           mirror-0
     1      1      200      1 x 4M            112b000           mirror-1
     -      2      200           0    NOT_ALLOCATABLE           mirror-2

  If the metaslab is using segment-based weighting then the WEIGHT column will
  display the number of segments available in the bucket where the allocation
  attempt was made.

Author: George Wilson <george.wilson@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Reviewed by: Chris Siden <christopher.siden@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <paul.dagnelie@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
2017-03-24 09:37:00 +00:00
Andriy Gapon
16b46572fa zfs_putpages: use TXG_WAIT
Explicit looping using TXG_NOWAIT is more verbose and may harm performance
under heavy load because of multiple waits.

MFC after:	1 week
2017-03-23 09:13:21 +00:00
Andriy Gapon
3d775e193e zfs: add zio_buf_alloc_nowait and use it in vdev_queue_aggregate
This way we can avoid blocking the whole queue in the low memory
situations.  It's better to sacrifice some I/O performance by not doing
the aggregation than to add an indefinite wait for more memory.

Reviewed by:	smh
MFC after:	2 weeks
Sponsored by:	Panzura
Differential Revision: https://reviews.freebsd.org/D9999
2017-03-23 08:59:17 +00:00
Steven Hartland
c76da62acf Reduce ARC fragmentation threshold
As ZFS can request a memory block of up to SPA_MAXBLOCKSIZE, e.g. during
zfs recv, update the threshold at which we start aggressive reclamation to
use SPA_MAXBLOCKSIZE (16M) instead of the lower zfs_max_recordsize, which
defaults to 1M.

PR:		194513
Reviewed by:	avg, mav
MFC after:	1 month
Sponsored by:	Multiplay
Differential Revision:	https://reviews.freebsd.org/D10012
2017-03-17 12:34:57 +00:00
Andriy Gapon
520758a51d zfs: provide a special vptocnp method for the .zfs vnode
vop_stdvptocnp() doesn't work properly if the .zfs directory is hidden.

Reported by:	swills, des
Tested by:	des
MFC after:	1 week
MFC with:	r314048
2017-03-11 16:00:49 +00:00
Andriy Gapon
1a3c849840 MFV r314911: 7867 ARC space accounting leak
illumos/illumos-gate@6de76ce2a9
6de76ce2a9

https://www.illumos.org/issues/7867
  It seems that in the case where arc_hdr_free_pdata() sees HDR_L2_WRITING() we
  would fail to update the ARC space statistics.
  In the normal case those statistics are updated in arc_free_data_buf(). But in
  the arc_hdr_free_on_write() path we don't do that.

Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Andriy Gapon <avg@FreeBSD.org>

MFC after:	10 days
2017-03-08 13:52:45 +00:00
Andriy Gapon
7e4b3a6fa2 MFV r314910: 7843 get_clones_stat() is suboptimal for lots of clones
illumos/illumos-gate@c5bde7273e
c5bde7273e

https://www.illumos.org/issues/7843
  get_clones_stat() could be very slow if a snapshot has many (thousands of) clones.
  Clone names are added to an nvlist that's created with NV_UNIQUE_NAME.
  So, each time a new name is appended to the list, the whole list is searched
  linearly to see if that name is not already in the list. That results in
  quadratic complexity.
  That should be easy to fix: as we know in advance that we should not get any
  duplicate names, we can drop NV_UNIQUE_NAME when creating the list.
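
A sketch of the difference using the kernel nvlist API (clone_name is a
placeholder; the surrounding get_clones_stat() code is omitted).  With
NV_UNIQUE_NAME every add scans the existing pairs, so n appends cost
O(n^2); without it each append is O(1):

nvlist_t *clones;

/* Before: each nvlist_add_boolean() walks the list to enforce uniqueness. */
VERIFY0(nvlist_alloc(&clones, NV_UNIQUE_NAME, KM_SLEEP));

/* After: duplicates are impossible by construction, so skip the check. */
VERIFY0(nvlist_alloc(&clones, 0, KM_SLEEP));

/* Appending a clone name looks the same in both cases. */
VERIFY0(nvlist_add_boolean(clones, clone_name));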

Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
Author: Andriy Gapon <avg@FreeBSD.org>

MFC after:	1 week
Sponsored by:	ClusterHQ
2017-03-08 13:48:26 +00:00
Martin Matuska
0132c9cd4a Fix null pointer dereference in zfs_freebsd_setacl().
Prevents unprivileged users from panicking the kernel by calling
__acl_delete_*() on files or directories inside a ZFS mount.

MFC after:	3 days
2017-03-02 23:23:28 +00:00
Alexander Motin
6d1ccf40cc Execute last ZIO of log commit synchronously.
For short transactions the overhead of a context switch can be too large.
Skipping it gives a significant latency reduction.  For large ones,
including multiple ZIOs, latency is less critical, while throughput
may become limited by the checksumming speed of a single CPU core.
To get the best of both cases, execute the last ZIO directly from the
calling thread context to save latency, while all others (if there are
any) are enqueued to taskqueues in the traditional way.
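
A conceptual sketch with hypothetical helpers (zil_issue_taskq() and
zil_issue_inline() do not exist under these names); the point is only the
split between the last ZIO and the rest:

for (i = 0; i < nzio; i++) {
        if (i == nzio - 1)
                zil_issue_inline(zios[i]);      /* caller's context, no switch */
        else
                zil_issue_taskq(zios[i]);       /* via taskqueue, as before */
}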

MFC after:	2 weeks
Sponsored by:	iXsystems, Inc.
2017-03-02 07:55:47 +00:00
Alexander Motin
e93f9c7708 Completely skip cache flushing for log devices that do not support it.
MFC after:	2 weeks
Sponsored by:	iXsystems, Inc.
2017-03-02 07:50:06 +00:00
Andrey V. Elsukov
19b60f70c0 Do not invoke the resize event when previous provider's size was zero.
This is similar to r303637 fix for geom_disk.

Reported by:	avg
Tested by:	avg
MFC after:	1 week
2017-03-01 18:03:32 +00:00
Josh Paetzel
b98d22744f MFV 314276
7570 tunable to allow zvol SCSI unmap to return on commit of txn to ZIL

illumos/illumos-gate@1c9272b861
1c9272b861

https://www.illumos.org/issues/7570

  Based on the discovery that every unmap waits for the commit of the txn to the ZIL,
  introducing very high latency for unmap commands, this behavior was made into a
  tunable, zvol_unmap_sync_enabled, and set to false. The net impact of this change is
  that by default SCSI unmap commands will result in space being freed within the zvol
  (today they are ignored and returned with good status). However, unlike the code
  today, they take about 30us per unmap instead of 18+ms.

  With the testing done on NTFS against a Win2k12 target, the new behavior should work
  seamlessly. Files on the zvol that have already been set with the zfree application
  will continue to write 0's when deleted, and any new files created since zvol
  creation will send unmap commands when deleted. This behavior exists today, but with
  this change the unmap commands will be processed and result in reclaim of space.
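
A hedged sketch of the tunable's effect in the zvol free/unmap path (the
exact call placement is illustrative; zv, tx, offset, length and sync come
from the surrounding code):

boolean_t zvol_unmap_sync_enabled = B_FALSE;    /* the new tunable */

zvol_log_truncate(zv, tx, offset, length, sync);
dmu_tx_commit(tx);

/* Only pay the ZIL commit latency when explicitly requested. */
if (sync || zvol_unmap_sync_enabled)
        zil_commit(zv->zv_zilog, ZVOL_OBJ);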

Author: Stephen Blinick <stephen.blinick@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Approved by: Robert Mustacchi <rm@joyent.com>
2017-02-25 20:01:17 +00:00
Andriy Gapon
9211bb327f l2arc: try to fix write size calculation broken by Compressed ARC commit
While there, make a change to not evict the first buffer outside the
requested eviction range.

To do:
- give more consistent names to the size variables
- upstream to OpenZFS

PR:		216178
Reported by:	lev
Tested by:	lev
MFC after:	2 weeks
2017-02-25 17:03:48 +00:00
Andriy Gapon
1e1065b60f zfs: call spa_deadman on a taskqueue thread
callout(9) prohibits callout functions from sleeping.
illumos mutexes are emulated using sx(9).
spa_deadman() calls vdev_deadman() and the latter acquires vq_lock.

As a result we can get a more confusing panic instead of a specific
panic or no panic:
sleepq_add: td 0xfffff80019669960 to sleep on wchan 0xfffff8001cff4d88 with sleeping prohibited

This change adds another level of indirection where the deadman
callout schedules spa_deadman() to be executed on taskqueue_thread.
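
A minimal sketch of the indirection using the stock callout(9) and
taskqueue(9) interfaces (the task and function names are illustrative):

static struct task spa_deadman_task;

/* Runs on taskqueue_thread, where sleeping on sx(9) locks is allowed. */
static void
spa_deadman_task_fn(void *arg, int pending __unused)
{
        spa_deadman(arg);
}

/* Callout handler: must not sleep, so it only hands the work off. */
static void
spa_deadman_callout(void *arg)
{
        TASK_INIT(&spa_deadman_task, 0, spa_deadman_task_fn, arg);
        taskqueue_enqueue(taskqueue_thread, &spa_deadman_task);
}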

While there, use callout_schedule() instead of callout_reset()
in spa_sync().

Discussed with:	mav
MFC after:	1 week
Differential Revision: https://reviews.freebsd.org/D9762
2017-02-25 16:45:53 +00:00
Josh Paetzel
029c0bfdbd MFV 314243
6676 Race between unique_insert() and unique_remove() causes ZFS fsid change

illumos/illumos-gate@40510e8eba
40510e8eba

https://www.illumos.org/issues/6676

  The fsid of zfs filesystems might change after reboot or remount. The problem seems to
  be caused by a race between unique_insert() and unique_remove(). The unique_remove()
  is called from dsl_dataset_evict(), which is now an asynchronous thread. If the
  dsl_dataset_evict() thread is very slow and calls unique_remove() too late, we end
  up with a changed fsid on zfs mount.

  This problem is very likely caused by #5056.

  Steps to Reproduce
  Note: I'm always able to reproduce this on a single-core (virtual) machine. On multicore
  machines it is not so easy to reproduce.

# uname -a
SunOS openindiana 5.11 illumos-633aa80 i86pc i386 i86pc Solaris
# zfs create rpool/TEST
# FS=$(echo ::fsinfo | mdb -k | grep TEST | awk '{print $1}')
# echo $FS::print vfs_t vfs_fsid | mdb -k
vfs_fsid = {
    vfs_fsid.val = [ 0x54d7028a, 0x70311508 ]
}
# zfs umount rpool/TEST
# zfs mount rpool/TEST
# FS=$(echo ::fsinfo | mdb -k | grep TEST | awk '{print $1}')
# echo $FS::print vfs_t vfs_fsid | mdb -k
vfs_fsid = {
    vfs_fsid.val = [ 0xd9454e49, 0x6b36d08 ]
}
#

  Impact
  The persistent fsid (filesystem id) is essential for proper NFS functionality.
  If the fsid of a filesystem changes on remount (or after reboot), the NFS
  clients might not be able to automatically recover from such an event, and a
  manual remount of the NFS filesystems on every NFS client might be needed.

Author: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Dan Vatca <dan.vatca@gmail.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Sebastien Roy <sebastien.roy@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
2017-02-25 14:45:54 +00:00
Andriy Gapon
7fa27112f3 zfs: clean up unused files and definitions
MFC after:	1 month
X-MFC after:	r314048
2017-02-24 07:53:56 +00:00
Andriy Gapon
b93763e55d zfs: move zio_taskq_basedc under SYSDC
That knob is useless without support for the SDC (or a similar) scheduling class.
That is, it's unused on FreeBSD.

MFC after:	4 days
2017-02-21 21:11:58 +00:00
Andriy Gapon
2b1bedaf06 zfs: lower priority of zio_write_issue threads by four
The difference of one was insignificant because zio_write_issue threads
ended up on the same run queues as other zio threads.
See sys/priority.h and sys/runq.h for more details.

Add a comment describing FreeBSD priority considerations and restore
the illumos variant of the code for comparison.

Obtained from:	Panzura
MFC after:	2 weeks
Sponsored by:	Panzura
2017-02-21 21:09:21 +00:00
Andriy Gapon
47c8e3d912 reimplement zfsctl (.zfs) support
The current code is written on top of GFS, a library with generic
support for writing filesystems, which was ported from illumos.
Because of significant differences between illumos VFS and FreeBSD
VFS models, both the GFS and zfsctl code were heavily modified to
work on FreeBSD.  Nonetheless, they still contain quite a few ugly
hacks and bugs.

This is a reimplementation of the zfsctl code where the VFS-specific
bits are written from scratch and only the code that interacts with
the rest of ZFS is reused.

Some highlights.

We use two types of nodes, static and on-demand. The static nodes
are used for permanent directories like .zfs, .zfs/snapshot, etc. The
on-demand nodes are used for ephemeral directories that act as snapshot
mount points.
Initially only static nodes are created. Their vnodes are instantiated
when they are looked up. The on-demand nodes and vnodes are instantiated
as needed and the nodes are destroyed as soon as the corresponding
vnodes are reclaimed.
We also try very hard to ensure that uncovered snapshot vnodes do not
linger.  They are supposed to become inactive as soon as they are
uncovered and we try to recycle them immediately.
When a filesystem is unmounted all snapshots under .zfs are unmounted
first, then all vnodes are flushed and finally the static .zfs nodes
are destroyed.

There are some changes outside of zfsctl code too.
z_ctldir is never used directly (as it is an opaque pointer),
zfsctl_root() has to be used instead.  The function returns a locked
vnode now, so it accepts a lock flags parameter.  The function can
also fail now, e.g. during force unmounting, whereas previously it
was infallible.
zfsctl_root_lookup() is retired, instead of it VOP_LOOKUP() on the .zfs
vnode (obtained with zfsctl_root) is used.
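
A sketch of how a caller is expected to use the new interface, assuming the
signature is roughly int zfsctl_root(zfsvfs_t *, int lkflags, vnode_t **):

vnode_t *zvp;
int error;

error = zfsctl_root(zfsvfs, LK_EXCLUSIVE, &zvp);
if (error != 0)
        return (error);         /* e.g. a forced unmount is in progress */

/* zvp comes back locked; do VOP_LOOKUP() etc., then release it. */
vput(zvp);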

Some ideas are picked from an independent work by will.

Reviewed by:	asomers, smh
MFC after:	1 month
Relnotes:	maybe
Differential Revision: https://reviews.freebsd.org/D7421
2017-02-21 17:47:08 +00:00
Josh Paetzel
aedc925301 MFV 313876
7504 kmem_reap hangs spa_sync and administrative tasks

illumos/illumos-gate@405a5a0f5c
https://github.com/illumos/illumos-gate/commit/405a5a0f5c3ab36cb76559467d1a62ba648bd80

https://www.illumos.org/issues/7504

  We see long spa_sync(). We are waiting to hold dp_config_rwlock for writer. Some
  other thread holds dp_config_rwlock for reader, then calls arc_get_data_buf(),
  which finds that arc_is_overflowing()==B_TRUE. So it waits (while holding
  dp_config_rwlock for reader) for arc_reclaim_thread to signal arc_reclaim_waiters_cv.
  Before signaling, arc_reclaim_thread does arc_kmem_reap_now(), which takes ~seconds.

Author: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
2017-02-17 17:52:12 +00:00
Josh Paetzel
c53cc7187c MFV 313786
7500 Simplify dbuf_free_range by removing dn_unlisted_l0_blkid

illumos/illumos-gate@653af1b809
653af1b809

https://www.illumos.org/issues/7500
  With the integration of:

    commit 0f6d88aded0d165f5954688a9b13bac76c38da84
    Author: Alex Reece <alex@delphix.com>
    Date:   Sat Jul 26 13:40:04 2014 -0800
    4873 zvol unmap calls can take a very long time for larger datasets

  the dnode's dn_bufs field was changed from a list to a tree. As a result,
  the dn_unlisted_l0_blkid field is no longer necessary.

Author: Stephen Blinick <stephen.blinick@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
2017-02-16 19:00:09 +00:00
Mark Johnston
28180eff9e Use pget() instead of pfind() in fasttrap_pid_{enable,disable}().
Suggested by:	mjg
MFC after:	1 week
2017-02-15 06:07:01 +00:00
Mark Johnston
2fce30fa8f Check for an exiting process when enabling PID provider probes.
MFC after:	1 week
2017-02-15 01:35:26 +00:00
Andriy Gapon
909bacfc59 remove l2_padding_needed statistic from zfs arc
It became obsolete when the Compressed ARC support was committed.

MFC after:	1 week
2017-02-12 19:45:30 +00:00
Andriy Gapon
e776c4054f check remaining space in zfs implementations of vptocnp
PR:		216939
Submitted by:	Iouri V. Ivliev <fbsd@any.com.ru>
MFC after:	1 week
2017-02-12 19:40:59 +00:00
Alan Somers
6159fb2f9c Fix setting birthtime in ZFS
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
	* In zfs_freebsd_setattr, if the caller wants to set the birthtime,
	  set the bits that zfs_setattr expects

	* In zfs_setattr, if XAT_CREATETIME is set, set xoa_createtime,
	  expected by zfs_xvattr_set.  The two levels of indirection seem
	  excessive, but it minimizes diffs vs OpenZFS.

	* In zfs_setattr, check for overflow of va_birthtime (from delphij)

	* Remove red herring in zfs_getattr

sys/cddl/contrib/opensolaris/uts/common/sys/vnode.h
	* Un-booby-trap some macros

New tests are under review at https://github.com/pjd/pjdfstest/pull/6

Reviewed by:	avg
MFC after:	3 weeks
Sponsored by:	Spectra Logic Corp
Differential Revision:	https://reviews.freebsd.org/D9353
2017-02-09 21:30:53 +00:00
George V. Neville-Neil
c499408f8b Fix the ifdef protection and remove superfluous extern statements
Reported by:	Konstantin Belousov
MFC after:	2 weeks
Sponsored by:	DARPA, AFRL
2017-02-07 01:21:18 +00:00
Mark Johnston
9613442e83 Ensure that the DOF string length is divisible by 2.
It is an ASCII encoding of a hexadecimal representation of the DOF file
used to enable anonymous tracing, so its length should always be even.

MFC after:	1 week
2017-02-05 02:47:34 +00:00
Mark Johnston
e801af6fba Use PC-relative relocations for USDT probe sites on i386 and amd64.
When recording probe site addresses in the output DOF file, dtrace -G
needs to emit relocations for the .SUNW_dof section in order to obtain
the addresses of functions containing probe sites. DTrace expects the
addresses to be relative to the base address of the final ELF file,
and the amd64 USDT implementation was relying on some unspecified and
incorrect behaviour in the base system GNU ld to achieve this.

This change reimplements the probe site relocation handling to allow
USDT to be used with lld and newer GNU binutils. Specifically, it
makes use of R_X86_64_PC64/R_386_PC32 relocations to obtain the
probe site address relative to the DOF file address, and adds and uses a
new DOF relocation type which computes the final probe site address using
these relative offsets.
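
Conceptually, the address arithmetic looks like the sketch below (this is
not the committed code): the linker leaves (probe_site - reloc_site) at the
relocation site, so the probe's offset within the final file can be
recovered without any absolute addresses:

/*
 * stored = probe_site - reloc_site, as produced by an R_X86_64_PC64 or
 * R_386_PC32 style relocation; both offsets are relative to the load base.
 */
static uintptr_t
probe_offset(uintptr_t reloc_site_off, int64_t stored)
{
        return (reloc_site_off + (intptr_t)stored);
}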

Reported by and discussed with:	Rafael Espíndola
MFC after:	1 month
Differential Revision:	https://reviews.freebsd.org/D9374
2017-02-05 02:39:12 +00:00
George V. Neville-Neil
c613d0c2ba Add the files which implement the new random number generator code for DTrace
Submitted by:	Graeme Jenkinson
MFC after:	2 weeks
Sponsored by:	DARPA, AFRL
2017-02-03 22:40:13 +00:00
George V. Neville-Neil
00bb01a40c Replace the implementation of DTrace's RAND subroutine for generating
low-quality random numbers with a modern implementation (xoroshiro128+)
that is capable of generating better quality randomness without compromising performance.
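
For reference, a self-contained xoroshiro128+ step looks like this (the
DTrace wiring and seeding of the state are omitted; the constants are the
standard 55/14/36 set):

#include <stdint.h>

static uint64_t rng_state[2];   /* must be seeded to a non-zero value */

static inline uint64_t
rotl64(uint64_t x, int k)
{
        return ((x << k) | (x >> (64 - k)));
}

static uint64_t
xoroshiro128plus_next(void)
{
        uint64_t s0 = rng_state[0];
        uint64_t s1 = rng_state[1];
        uint64_t result = s0 + s1;

        s1 ^= s0;
        rng_state[0] = rotl64(s0, 55) ^ s1 ^ (s1 << 14);
        rng_state[1] = rotl64(s1, 36);
        return (result);
}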

Submitted by:	Graeme Jenkinson
Reviewed by:	markj
MFC after:	2 weeks
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D9051
2017-02-03 22:26:19 +00:00
Mark Johnston
da5320b9d0 Fix an off-by-one in an assertion on fasttrap tracepoint sizes.
FASTTRAP_MAX_INSTR_SIZE is the largest valid size of a tracepoint, so
correct the assertion accordingly. This limit was hit with a 15-byte NOP.
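
The fix amounts to allowing the boundary value in the assertion; a sketch
(tp is the tracepoint being validated):

/* A 15-byte instruction is valid, so the size may equal the maximum. */
ASSERT(tp->ftt_size <= FASTTRAP_MAX_INSTR_SIZE);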

Reported by:	bdrewery
MFC after:	1 week
Sponsored by:	Dell EMC Isilon
2017-01-27 17:58:41 +00:00
Mark Johnston
792e2f09ee Remove the DTRACEHIOC_ADD ioctl.
This ioctl has been considered legacy by upstream since the DTrace code
was first imported, and is unused. The removal also allows some
simplification of dtrace_helper_slurp().

Also remove a bogus copyout in the DTRACEHIOC_ADDDOF handler. Due to a
bug, it would overwrite an in-memory copy of the DOF header rather than
the passed-in DOF helper. Moreover, DTRACEHIOC_ADDDOF already copies the
helper back out automatically since its argument has the IOC_OUT attribute.
2017-01-23 02:21:06 +00:00
Josh Paetzel
f2be81e92c MFV 312436
6569 large file delete can starve out write ops

  illumos/illumos-gate@ff5177ee8b
  ff5177ee8b

  https://www.illumos.org/issues/6569
    The core issue I've found is that there is no throttle for how many
    deletes get assigned to one TXG. As a result, when deleting large files
    we end up filling consecutive TXGs with deletes/frees, then write
    throttling other (more important) ops.

    There is an easy test case for this problem. Try deleting several
    large files (at least 1/2 TB) while you do write ops on the same
    pool. What we've seen is performance of these write ops (let's
    call it sideload I/O) would drop to zero.

    More specifically the problem is that dmu_free_long_range_impl()
    can/will fill up all of the dirty data in the pool "instantly",
    before many of the sideload ops can get in. So sideload
    performance will be impacted until all the files are freed.

    The solution we have tested at Nexenta (with positive results)
    creates a relatively simple throttle for how many "free" ops we let
    into one TXG.

    However this solution exposes other problems that should also be
    addressed. If we are to slow down freeing of data, that means one
    has to wait even longer (assuming a vnode ref count of 1) to get the
    shell back after an rm, or for an NFS thread to finish the freeing op.
    To avoid this, the proposed solution is to call zfs_inactive() async
    for "large" files. Async freeing then begs for the reclaimed space
    to be accounted for in the zpool's "freeing" prop.

    The other issue with having a longer delete is the inability to
    export/unmount for a longer period of time. The proposed solution
    is to interrupt freeing of blocks when a fs is unmounted.
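
A conceptual sketch of such a per-TXG throttle (the limit, chunk size and
free_blocks() helper are illustrative, not the actual tunable or code):

uint64_t freed_this_txg = 0;

while (length > 0) {
        uint64_t chunk = MIN(length, max_free_chunk);

        if (freed_this_txg + chunk > per_txg_free_limit) {
                /* Let the current TXG go ahead before queueing more frees. */
                txg_wait_open(dmu_objset_pool(os), 0);
                freed_this_txg = 0;
        }

        free_blocks(dn, offset, chunk);
        freed_this_txg += chunk;
        offset += chunk;
        length -= chunk;
}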

  Author: Alek Pinchuk <alek@nexenta.com>
  Reviewed by: Matt Ahrens <mahrens@delphix.com>
  Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
  Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
  Approved by: Dan McDonald <danmcd@omniti.com>

Reviewed by:	avg
Differential Revision:	D9008
2017-01-20 15:01:04 +00:00
Mark Johnston
d01e6ad41b Have DTrace handle faults when dereferencing a lock object pointer.
MFC after:	1 week
2017-01-11 01:18:06 +00:00
Mark Johnston
4153c9b932 Ignore LC_SLEEPABLE when testing whether a mutex is adaptive.
MFC after:	1 week
2017-01-11 01:15:55 +00:00
Mark Johnston
91371de1fa Remove the "unused" DIF subroutine index left after r308582.
These indices are input to a build-time script that generates code to
validate subroutine names.
2017-01-03 00:24:12 +00:00
Mark Johnston
c71c814a97 Remove an obsolete pragma from dtrace.h.
It triggers a compiler warning and has been removed upstream.

MFC after:	1 week
2016-12-27 23:31:32 +00:00
George V. Neville-Neil
805e1842c8 Remove extra DOF_SEC_XLIMPORT from the DOF_SEC_ISLOADABLE macro
MFC after:	2 weeks
Sponsored by:	DARPA, AFRL
2016-12-16 20:44:14 +00:00
Alexander Motin
c5f74c4873 Revert r310023 for now.
After another look, my new variable mapping was not exactly right.
2016-12-15 08:03:16 +00:00
Alexander Motin
d686b07132 Reduce diff from Illumos by better variables mapping. 2016-12-13 16:20:10 +00:00
Alexander Motin
2823b6467a Postpone ZVOL media/block size caching till first open.
At least on FreeBSD there is no legal way to access the media or get its
size without opening the device/provider first.  Postponing this caching
allows skipping several disk seeks per ZVOL/snapshot during import.

For an HDD pool with 1 ZVOL in dev mode and 1000 snapshots, this reduces
pool import time from 40 to 10 seconds.
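
A rough sketch of the idea (zvol_read_volsize() is a hypothetical helper):
cache the media size lazily on first open instead of during import:

static int
zvol_first_open(zvol_state_t *zv)
{
        if (zv->zv_volsize == 0)
                zv->zv_volsize = zvol_read_volsize(zv);
        return (0);
}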

MFC after:	2 weeks
Sponsored by:	iXsystems, Inc.
2016-12-11 19:50:39 +00:00
Alexander Motin
2fb5d72d58 Add missed vfs.zfs.zfetch.max_idistance sysctl. 2016-12-10 21:19:27 +00:00
Alexander Motin
9373759d13 Fix spa_alloc_tree sorting by offset in r305331.
The original commit "7090 zfs should improve allocation order" declares the alloc
queue sorted by time and offset.  But in practice io_offset is always zero,
so sorting happened only by time, while the order of writes with equal time was
completely random.  On Illumos this did not matter much thanks to the use of
high resolution timestamps.  On FreeBSD, due to using much faster but low
resolution timestamps, it caused bad data placement on disks, affecting
further read performance.

This change switches zio_timestamp_compare() from comparing the uninitialized
io_offset to the really populated io_bookmark values.  I haven't decided yet
what to do with the timestamps, but on simple tests this change gives the
same performance results by just making the code work as declared.
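
A sketch of a bookmark-based comparator of the kind described (the
committed zio_timestamp_compare() may differ in details):

static int
zio_bookmark_compare(const zio_t *z1, const zio_t *z2)
{
        const zbookmark_phys_t *b1 = &z1->io_bookmark;
        const zbookmark_phys_t *b2 = &z2->io_bookmark;

        if (b1->zb_objset != b2->zb_objset)
                return (b1->zb_objset < b2->zb_objset ? -1 : 1);
        if (b1->zb_object != b2->zb_object)
                return (b1->zb_object < b2->zb_object ? -1 : 1);
        if (b1->zb_level != b2->zb_level)
                return (b1->zb_level < b2->zb_level ? -1 : 1);
        if (b1->zb_blkid != b2->zb_blkid)
                return (b1->zb_blkid < b2->zb_blkid ? -1 : 1);
        return (0);
}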

MFC after:	1 week
2016-12-08 15:58:03 +00:00
George V. Neville-Neil
af463464cf Fix a kernel panic in DTrace's rw_iswriter subroutine.
On FreeBSD the senses of rw_write_held() and rw_iswriter() were reversed,
probably due to a cut-and-paste error. Using rw_iswriter() would cause
the kernel to panic.

Reviewed by:	markj
MFC after:	2 weeks
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D8718
2016-12-07 07:27:47 +00:00