/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2011, 2017 by Delphix. All rights reserved.
 * Copyright (c) 2014 Integros [integros.com]
 */

/* Portions Copyright 2010 Robert Milkowski */

#include <sys/zfs_context.h>
#include <sys/spa.h>
#include <sys/dmu.h>
#include <sys/zap.h>
#include <sys/arc.h>
#include <sys/stat.h>
#include <sys/resource.h>
#include <sys/zil.h>
#include <sys/zil_impl.h>
#include <sys/dsl_dataset.h>
#include <sys/vdev_impl.h>
#include <sys/dmu_tx.h>
#include <sys/dsl_pool.h>
#include <sys/metaslab.h>
#include <sys/trace_zil.h>
#include <sys/abd.h>

/*
 * The zfs intent log (ZIL) saves transaction records of system calls
 * that change the file system in memory with enough information
 * to be able to replay them. These are stored in memory until
 * either the DMU transaction group (txg) commits them to the stable pool
 * and they can be discarded, or they are flushed to the stable log
 * (also in the pool) due to a fsync, O_DSYNC, or other synchronous
 * requirement. In the event of a panic or power failure, those log
 * records (transactions) are replayed.
 *
 * There is one ZIL per file system. Its on-disk (pool) format consists
 * of 3 parts:
 *
 *	- ZIL header
 *	- ZIL blocks
 *	- ZIL records
 *
 * A log record holds a system call transaction. Log blocks can
 * hold many log records and the blocks are chained together.
 * Each ZIL block contains a block pointer (blkptr_t) to the next
 * ZIL block in the chain. The ZIL header points to the first
 * block in the chain. Note there is not a fixed place in the pool
 * to hold blocks. They are dynamically allocated and freed as
 * needed from the blocks available. The sketch below shows the
 * ZIL structure:
 */
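
/*
 * Editorial sketch (added here; the original comment referred to a
 * "Figure X" that is not present in the text). The on-disk chain looks
 * roughly like:
 *
 *	ZIL header          ZIL block           ZIL block
 *	+-----------+       +-----------+       +-----------+
 *	| zh_log ---+-----> | records   |   +-> | records   |
 *	+-----------+       | ...       |   |   | ...       |
 *	                    | next bp --+---+   | next bp --+--> ...
 *	                    +-----------+       +-----------+
 */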

/*
 * See zil.h for more information about these fields.
 */
zil_stats_t zil_stats = {
	{ "zil_commit_count", KSTAT_DATA_UINT64 },
	{ "zil_commit_writer_count", KSTAT_DATA_UINT64 },
	{ "zil_itx_count", KSTAT_DATA_UINT64 },
	{ "zil_itx_indirect_count", KSTAT_DATA_UINT64 },
	{ "zil_itx_indirect_bytes", KSTAT_DATA_UINT64 },
	{ "zil_itx_copied_count", KSTAT_DATA_UINT64 },
	{ "zil_itx_copied_bytes", KSTAT_DATA_UINT64 },
	{ "zil_itx_needcopy_count", KSTAT_DATA_UINT64 },
	{ "zil_itx_needcopy_bytes", KSTAT_DATA_UINT64 },
	{ "zil_itx_metaslab_normal_count", KSTAT_DATA_UINT64 },
	{ "zil_itx_metaslab_normal_bytes", KSTAT_DATA_UINT64 },
	{ "zil_itx_metaslab_slog_count", KSTAT_DATA_UINT64 },
	{ "zil_itx_metaslab_slog_bytes", KSTAT_DATA_UINT64 },
};

static kstat_t *zil_ksp;

/*
 * Disable intent logging replay. This global ZIL switch affects all pools.
 */
int zil_replay_disable = 0;

/*
 * Tunable parameter for debugging or performance analysis. Setting
 * zfs_nocacheflush will cause corruption on power loss if a volatile
 * out-of-order write cache is enabled.
 */
int zfs_nocacheflush = 0;

/*
 * Limit SLOG write size per commit executed with synchronous priority.
 * Writes above this threshold are issued with lower (asynchronous)
 * priority to limit potential SLOG device abuse by a single active
 * ZIL writer.
 */
unsigned long zil_slog_bulk = 768 * 1024;
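
/*
 * Illustrative sketch (editorial, not part of this section): the lwb
 * write path is expected to consult zil_slog_bulk roughly as follows,
 * where zl_cur_used is assumed to be the running byte count for the
 * open commit:
 *
 *	zio_priority_t prio;
 *	if (lwb->lwb_slog && zilog->zl_cur_used <= zil_slog_bulk)
 *		prio = ZIO_PRIORITY_SYNC_WRITE;
 *	else
 *		prio = ZIO_PRIORITY_ASYNC_WRITE;
 */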

static kmem_cache_t *zil_lwb_cache;

static void zil_async_to_sync(zilog_t *zilog, uint64_t foid);

/*
 * An lwb is "empty" when no log records have been written into it yet,
 * i.e. only the zil_chain_t header (if any) occupies the buffer.
 */
#define	LWB_EMPTY(lwb) ((BP_GET_LSIZE(&lwb->lwb_blk) - \
    sizeof (zil_chain_t)) == (lwb->lwb_sz - lwb->lwb_nused))

/*
 * AVL comparator for the zl_bp_tree: order block pointers by DVA
 * (vdev first, then offset).
 */
static int
zil_bp_compare(const void *x1, const void *x2)
{
	const dva_t *dva1 = &((zil_bp_node_t *)x1)->zn_dva;
	const dva_t *dva2 = &((zil_bp_node_t *)x2)->zn_dva;

	int cmp = AVL_CMP(DVA_GET_VDEV(dva1), DVA_GET_VDEV(dva2));
	if (likely(cmp))
		return (cmp);

	return (AVL_CMP(DVA_GET_OFFSET(dva1), DVA_GET_OFFSET(dva2)));
}

static void
zil_bp_tree_init(zilog_t *zilog)
{
	avl_create(&zilog->zl_bp_tree, zil_bp_compare,
	    sizeof (zil_bp_node_t), offsetof(zil_bp_node_t, zn_node));
}

static void
zil_bp_tree_fini(zilog_t *zilog)
{
	avl_tree_t *t = &zilog->zl_bp_tree;
	zil_bp_node_t *zn;
	void *cookie = NULL;

	while ((zn = avl_destroy_nodes(t, &cookie)) != NULL)
		kmem_free(zn, sizeof (zil_bp_node_t));

	avl_destroy(t);
}

int
zil_bp_tree_add(zilog_t *zilog, const blkptr_t *bp)
{
	avl_tree_t *t = &zilog->zl_bp_tree;
	const dva_t *dva;
	zil_bp_node_t *zn;
	avl_index_t where;

	if (BP_IS_EMBEDDED(bp))
		return (0);

	dva = BP_IDENTITY(bp);

	if (avl_find(t, dva, &where) != NULL)
		return (SET_ERROR(EEXIST));

	zn = kmem_alloc(sizeof (zil_bp_node_t), KM_SLEEP);
	zn->zn_dva = *dva;
	avl_insert(t, zn, where);

	return (0);
}

static zil_header_t *
zil_header_in_syncing_context(zilog_t *zilog)
{
	return ((zil_header_t *)zilog->zl_header);
}
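
/*
 * Editorial note: each ZIL block embeds a self-describing checksum in
 * its blkptr; zil_init_log_chain() below seeds it. Per the ZIL_ZC_*
 * indices (defined in zil.h), the words are laid out as:
 *
 *	zc_word[ZIL_ZC_GUID_0]	random guid (low word) identifying the chain
 *	zc_word[ZIL_ZC_GUID_1]	random guid (high word)
 *	zc_word[ZIL_ZC_OBJSET]	objset id that owns this log
 *	zc_word[ZIL_ZC_SEQ]	block sequence number, starting at 1
 */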

static void
zil_init_log_chain(zilog_t *zilog, blkptr_t *bp)
{
	zio_cksum_t *zc = &bp->blk_cksum;

	zc->zc_word[ZIL_ZC_GUID_0] = spa_get_random(-1ULL);
	zc->zc_word[ZIL_ZC_GUID_1] = spa_get_random(-1ULL);
	zc->zc_word[ZIL_ZC_OBJSET] = dmu_objset_id(zilog->zl_os);
	zc->zc_word[ZIL_ZC_SEQ] = 1ULL;
}

/*
 * Read a log block and make sure it's valid. Note there are two on-disk
 * layouts: with ZIO_CHECKSUM_ZILOG2 the zil_chain_t header sits at the
 * start of the block; otherwise it trails at the end.
 */
static int
zil_read_log_block(zilog_t *zilog, boolean_t decrypt, const blkptr_t *bp,
    blkptr_t *nbp, void *dst, char **end)
{
	enum zio_flag zio_flags = ZIO_FLAG_CANFAIL;
	arc_flags_t aflags = ARC_FLAG_WAIT;
	arc_buf_t *abuf = NULL;
	zbookmark_phys_t zb;
	int error;

	if (zilog->zl_header->zh_claim_txg == 0)
		zio_flags |= ZIO_FLAG_SPECULATIVE | ZIO_FLAG_SCRUB;

	if (!(zilog->zl_header->zh_flags & ZIL_CLAIM_LR_SEQ_VALID))
		zio_flags |= ZIO_FLAG_SPECULATIVE;

	if (!decrypt)
		zio_flags |= ZIO_FLAG_RAW;

	SET_BOOKMARK(&zb, bp->blk_cksum.zc_word[ZIL_ZC_OBJSET],
	    ZB_ZIL_OBJECT, ZB_ZIL_LEVEL, bp->blk_cksum.zc_word[ZIL_ZC_SEQ]);

	error = arc_read(NULL, zilog->zl_spa, bp, arc_getbuf_func,
	    &abuf, ZIO_PRIORITY_SYNC_READ, zio_flags, &aflags, &zb);

	if (error == 0) {
		zio_cksum_t cksum = bp->blk_cksum;

		/*
		 * Validate the checksummed log block.
		 *
		 * Sequence numbers should be... sequential. The checksum
		 * verifier for the next block should be bp's checksum plus 1.
		 *
		 * Also check the log chain linkage and size used.
		 */
		cksum.zc_word[ZIL_ZC_SEQ]++;

		if (BP_GET_CHECKSUM(bp) == ZIO_CHECKSUM_ZILOG2) {
			zil_chain_t *zilc = abuf->b_data;
			char *lr = (char *)(zilc + 1);
			uint64_t len = zilc->zc_nused - sizeof (zil_chain_t);

			if (bcmp(&cksum, &zilc->zc_next_blk.blk_cksum,
			    sizeof (cksum)) || BP_IS_HOLE(&zilc->zc_next_blk)) {
				error = SET_ERROR(ECKSUM);
			} else {
				ASSERT3U(len, <=, SPA_OLD_MAXBLOCKSIZE);
				bcopy(lr, dst, len);
				*end = (char *)dst + len;
				*nbp = zilc->zc_next_blk;
			}
		} else {
			char *lr = abuf->b_data;
			uint64_t size = BP_GET_LSIZE(bp);
			zil_chain_t *zilc = (zil_chain_t *)(lr + size) - 1;

			if (bcmp(&cksum, &zilc->zc_next_blk.blk_cksum,
			    sizeof (cksum)) || BP_IS_HOLE(&zilc->zc_next_blk) ||
			    (zilc->zc_nused > (size - sizeof (*zilc)))) {
				error = SET_ERROR(ECKSUM);
			} else {
				ASSERT3U(zilc->zc_nused, <=,
				    SPA_OLD_MAXBLOCKSIZE);
				bcopy(lr, dst, zilc->zc_nused);
				*end = (char *)dst + zilc->zc_nused;
				*nbp = zilc->zc_next_blk;
			}
		}

		arc_buf_destroy(abuf, &abuf);
	}

	return (error);
}

/*
 * Read a TX_WRITE log data block.
 */
static int
zil_read_log_data(zilog_t *zilog, const lr_write_t *lr, void *wbuf)
{
	enum zio_flag zio_flags = ZIO_FLAG_CANFAIL;
	const blkptr_t *bp = &lr->lr_blkptr;
	arc_flags_t aflags = ARC_FLAG_WAIT;
	arc_buf_t *abuf = NULL;
	zbookmark_phys_t zb;
	int error;

	if (BP_IS_HOLE(bp)) {
		if (wbuf != NULL)
			bzero(wbuf, MAX(BP_GET_LSIZE(bp), lr->lr_length));
		return (0);
	}

	if (zilog->zl_header->zh_claim_txg == 0)
		zio_flags |= ZIO_FLAG_SPECULATIVE | ZIO_FLAG_SCRUB;

	/*
	 * If we are not using the resulting data, we are just checking that
	 * it hasn't been corrupted so we don't need to waste CPU time
	 * decompressing and decrypting it.
	 */
	if (wbuf == NULL)
		zio_flags |= ZIO_FLAG_RAW;

	SET_BOOKMARK(&zb, dmu_objset_id(zilog->zl_os), lr->lr_foid,
	    ZB_ZIL_LEVEL, lr->lr_offset / BP_GET_LSIZE(bp));

	error = arc_read(NULL, zilog->zl_spa, bp, arc_getbuf_func, &abuf,
	    ZIO_PRIORITY_SYNC_READ, zio_flags, &aflags, &zb);

	if (error == 0) {
		if (wbuf != NULL)
			bcopy(abuf->b_data, wbuf, arc_buf_size(abuf));
		arc_buf_destroy(abuf, &abuf);
	}

	return (error);
}

/*
 * Parse the intent log, and call parse_func for each valid record within.
 */
int
zil_parse(zilog_t *zilog, zil_parse_blk_func_t *parse_blk_func,
    zil_parse_lr_func_t *parse_lr_func, void *arg, uint64_t txg,
    boolean_t decrypt)
{
	const zil_header_t *zh = zilog->zl_header;
	boolean_t claimed = !!zh->zh_claim_txg;
	uint64_t claim_blk_seq = claimed ? zh->zh_claim_blk_seq : UINT64_MAX;
	uint64_t claim_lr_seq = claimed ? zh->zh_claim_lr_seq : UINT64_MAX;
	uint64_t max_blk_seq = 0;
	uint64_t max_lr_seq = 0;
	uint64_t blk_count = 0;
	uint64_t lr_count = 0;
	blkptr_t blk, next_blk;
	char *lrbuf, *lrp;
	int error = 0;

	bzero(&next_blk, sizeof (blkptr_t));

	/*
	 * Old logs didn't record the maximum zh_claim_lr_seq.
	 */
	if (!(zh->zh_flags & ZIL_CLAIM_LR_SEQ_VALID))
		claim_lr_seq = UINT64_MAX;

	/*
	 * Starting at the block pointed to by zh_log we read the log chain.
	 * For each block in the chain we strongly check that block to
	 * ensure its validity. We stop when an invalid block is found.
	 * For each block pointer in the chain we call parse_blk_func().
	 * For each record in each valid block we call parse_lr_func().
	 * If the log has been claimed, stop if we encounter a sequence
	 * number greater than the highest claimed sequence number.
	 */
	lrbuf = zio_buf_alloc(SPA_OLD_MAXBLOCKSIZE);
	zil_bp_tree_init(zilog);

	for (blk = zh->zh_log; !BP_IS_HOLE(&blk); blk = next_blk) {
		uint64_t blk_seq = blk.blk_cksum.zc_word[ZIL_ZC_SEQ];
		int reclen;
		char *end = NULL;

		if (blk_seq > claim_blk_seq)
			break;

		error = parse_blk_func(zilog, &blk, arg, txg);
		if (error != 0)
			break;
		ASSERT3U(max_blk_seq, <, blk_seq);
		max_blk_seq = blk_seq;
		blk_count++;

		if (max_lr_seq == claim_lr_seq && max_blk_seq == claim_blk_seq)
			break;

		error = zil_read_log_block(zilog, decrypt, &blk, &next_blk,
		    lrbuf, &end);
		if (error != 0)
			break;

		for (lrp = lrbuf; lrp < end; lrp += reclen) {
			lr_t *lr = (lr_t *)lrp;
			reclen = lr->lrc_reclen;
			ASSERT3U(reclen, >=, sizeof (lr_t));
			if (lr->lrc_seq > claim_lr_seq)
				goto done;

			error = parse_lr_func(zilog, lr, arg, txg);
			if (error != 0)
				goto done;
			ASSERT3U(max_lr_seq, <, lr->lrc_seq);
			max_lr_seq = lr->lrc_seq;
			lr_count++;
		}
	}
done:
	zilog->zl_parse_error = error;
	zilog->zl_parse_blk_seq = max_blk_seq;
	zilog->zl_parse_lr_seq = max_lr_seq;
	zilog->zl_parse_blk_count = blk_count;
	zilog->zl_parse_lr_count = lr_count;

	ASSERT(!claimed || !(zh->zh_flags & ZIL_CLAIM_LR_SEQ_VALID) ||
	    (max_blk_seq == claim_blk_seq && max_lr_seq == claim_lr_seq) ||
	    (decrypt && error == EIO));

	zil_bp_tree_fini(zilog);
	zio_buf_free(lrbuf, SPA_OLD_MAXBLOCKSIZE);

	return (error);
}
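
/*
 * Usage sketch (editorial): zil_parse() is the generic walker used by
 * the claim, free, and replay paths. For example, claiming a log during
 * pool import is expected to look roughly like:
 *
 *	(void) zil_parse(zilog, zil_claim_log_block, zil_claim_log_record,
 *	    tx, first_txg, B_TRUE);
 */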

static int
zil_claim_log_block(zilog_t *zilog, blkptr_t *bp, void *tx, uint64_t first_txg)
{
	/*
	 * Claim log block if not already committed and not already claimed.
	 * If tx == NULL, just verify that the block is claimable.
	 */
	if (BP_IS_HOLE(bp) || bp->blk_birth < first_txg ||
	    zil_bp_tree_add(zilog, bp) != 0)
		return (0);

	return (zio_wait(zio_claim(NULL, zilog->zl_spa,
	    tx == NULL ? 0 : first_txg, bp, spa_claim_notify, NULL,
	    ZIO_FLAG_CANFAIL | ZIO_FLAG_SPECULATIVE | ZIO_FLAG_SCRUB)));
}

static int
zil_claim_log_record(zilog_t *zilog, lr_t *lrc, void *tx, uint64_t first_txg)
{
	lr_write_t *lr = (lr_write_t *)lrc;
	int error;

	if (lrc->lrc_txtype != TX_WRITE)
		return (0);

	/*
	 * If the block is not readable, don't claim it. This can happen
	 * in normal operation when a log block is written to disk before
	 * some of the dmu_sync() blocks it points to. In this case, the
	 * transaction cannot have been committed to anyone (we would have
	 * waited for all writes to be stable first), so it is semantically
	 * correct to declare this the end of the log.
	 */
	if (lr->lr_blkptr.blk_birth >= first_txg) {
		error = zil_read_log_data(zilog, lr, NULL);
		if (error != 0)
			return (error);
	}

	return (zil_claim_log_block(zilog, &lr->lr_blkptr, tx, first_txg));
}

/* ARGSUSED */
static int
zil_free_log_block(zilog_t *zilog, blkptr_t *bp, void *tx, uint64_t claim_txg)
{
	zio_free_zil(zilog->zl_spa, dmu_tx_get_txg(tx), bp);

	return (0);
}

static int
zil_free_log_record(zilog_t *zilog, lr_t *lrc, void *tx, uint64_t claim_txg)
{
	lr_write_t *lr = (lr_write_t *)lrc;
	blkptr_t *bp = &lr->lr_blkptr;

	/*
	 * If we previously claimed it, we need to free it.
	 */
	if (claim_txg != 0 && lrc->lrc_txtype == TX_WRITE &&
	    bp->blk_birth >= claim_txg && zil_bp_tree_add(zilog, bp) == 0 &&
	    !BP_IS_HOLE(bp))
		zio_free(zilog->zl_spa, dmu_tx_get_txg(tx), bp);

	return (0);
}
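
/*
 * Background on the lwb_fastwrite flag set below (condensed from the
 * change that introduced FASTWRITE): each top-level vdev keeps a count
 * of synchronous writes (from dmu_sync() and the ZIL) that have been
 * allocated but not yet completed. Allocations made with the FASTWRITE
 * flag choose the vdev with the fewest pending synchronous writes, so
 * concurrent ZIL commits spread across vdevs instead of piling onto one.
 * The flag is recorded in the lwb so the per-vdev counter can be dropped
 * once the block is written (or pruned in zil_sync() if it never is).
 */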
|
|
|
|
|
|
|
|
static lwb_t *
|
OpenZFS 7578 - Fix/improve some aspects of ZIL writing
- After some ZIL changes 6 years ago zil_slog_limit got partially broken
due to zl_itx_list_sz not updated when async itx'es upgraded to sync.
Actually because of other changes about that time zl_itx_list_sz is not
really required to implement the functionality, so this patch removes
some unneeded broken code and variables.
- Original idea of zil_slog_limit was to reduce chance of SLOG abuse by
single heavy logger, that increased latency for other (more latency critical)
loggers, by pushing heavy log out into the main pool instead of SLOG. Beside
huge latency increase for heavy writers, this implementation caused double
write of all data, since the log records were explicitly prepared for SLOG.
Since we now have I/O scheduler, I've found it can be much more efficient
to reduce priority of heavy logger SLOG writes from ZIO_PRIORITY_SYNC_WRITE
to ZIO_PRIORITY_ASYNC_WRITE, while still leave them on SLOG.
- Existing ZIL implementation had problem with space efficiency when it
has to write large chunks of data into log blocks of limited size. In some
cases efficiency stopped to almost as low as 50%. In case of ZIL stored on
spinning rust, that also reduced log write speed in half, since head had to
uselessly fly over allocated but not written areas. This change improves
the situation by offloading problematic operations from z*_log_write() to
zil_lwb_commit(), which knows real situation of log blocks allocation and
can split large requests into pieces much more efficiently. Also as side
effect it removes one of two data copy operations done by ZIL code WR_COPIED
case.
- While there, untangle and unify code of z*_log_write() functions.
Also zfs_log_write() alike to zvol_log_write() can now handle writes crossing
block boundary, that may also improve efficiency if ZPL is made to do that.
Sponsored by: iXsystems, Inc.
Authored by: Alexander Motin <mav@FreeBSD.org>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Steven Hartland <steven.hartland@multiplay.co.uk>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Richard Yao <ryao@gentoo.org>
Ported-by: Giuseppe Di Natale <dinatale2@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/7578
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/aeb13ac
Closes #6191
2017-06-09 16:15:37 +00:00
|
|
|
zil_alloc_lwb(zilog_t *zilog, blkptr_t *bp, boolean_t slog, uint64_t txg,
|
|
|
|
boolean_t fastwrite)
|
2010-05-28 20:45:14 +00:00
|
|
|
{
|
|
|
|
lwb_t *lwb;
|
|
|
|
|
2014-11-21 00:09:39 +00:00
|
|
|
lwb = kmem_cache_alloc(zil_lwb_cache, KM_SLEEP);
|
2010-05-28 20:45:14 +00:00
|
|
|
lwb->lwb_zilog = zilog;
|
|
|
|
lwb->lwb_blk = *bp;
|
Add FASTWRITE algorithm for synchronous writes.
Currently, ZIL blocks are spread over vdevs using hint block pointers
managed by the ZIL commit code and passed to metaslab_alloc(). Spreading
log blocks accross vdevs is important for performance: indeed, using
mutliple disks in parallel decreases the ZIL commit latency, which is
the main performance metric for synchronous writes. However, the current
implementation suffers from the following issues:
1) It would be best if the ZIL module was not aware of such low-level
details. They should be handled by the ZIO and metaslab modules;
2) Because the hint block pointer is managed per log, simultaneous
commits from multiple logs might use the same vdevs at the same time,
which is inefficient;
3) Because dmu_write() does not honor the block pointer hint, indirect
writes are not spread.
The naive solution of rotating the metaslab rotor each time a block is
allocated for the ZIL or dmu_sync() doesn't work in practice because the
first ZIL block to be written is actually allocated during the previous
commit. Consequently, when metaslab_alloc() decides the vdev for this
block, it will do so while a bunch of other allocations are happening at
the same time (from dmu_sync() and other ZILs). This means the vdev for
this block is chosen more or less at random. When the next commit
happens, there is a high chance (especially when the number of blocks
per commit is slightly less than the number of the disks) that one disk
will have to write two blocks (with a potential seek) while other disks
are sitting idle, which defeats spreading and increases the commit
latency.
This commit introduces a new concept in the metaslab allocator:
fastwrites. Basically, each top-level vdev maintains a counter
indicating the number of synchronous writes (from dmu_sync() and the
ZIL) which have been allocated but not yet completed. When the metaslab
is called with the FASTWRITE flag, it will choose the vdev with the
least amount of pending synchronous writes. If there are multiple vdevs
with the same value, the first matching vdev (starting from the rotor)
is used. Once metaslab_alloc() has decided which vdev the block is
allocated to, it updates the fastwrite counter for this vdev.
The rationale goes like this: when an allocation is done with
FASTWRITE, it "reserves" the vdev until the data is written. Until then,
all future allocations will naturally avoid this vdev, even after a full
rotation of the rotor. As a result, pending synchronous writes at a
given point in time will be nicely spread over all vdevs. This contrasts
with the previous algorithm, which is based on the implicit assumption
that blocks are written instantaneously after they're allocated.
metaslab_fastwrite_mark() and metaslab_fastwrite_unmark() are used to
manually increase or decrease fastwrite counters, respectively. They
should be used with caution, as there is no per-BP tracking of fastwrite
information, so leaks and "double-unmarks" are possible. There is,
however, an assert in the vdev teardown code which will fire if the
fastwrite counters are not zero when the pool is exported or the vdev
removed. Note that as stated above, marking is also done implictly by
metaslab_alloc().
ZIO also got a new FASTWRITE flag; when it is used, ZIO will pass it to
the metaslab when allocating (assuming ZIO does the allocation, which is
only true in the case of dmu_sync). This flag will also trigger an
unmark when zio_done() fires.
A side-effect of the new algorithm is that when a ZIL stops being used,
its last block can stay in the pending state (allocated but not yet
written) for a long time, polluting the fastwrite counters. To avoid
that, I've implemented a somewhat crude but working solution which
unmarks these pending blocks in zil_sync(), thus guaranteeing that
linguering fastwrites will get pruned at each sync event.
The best performance improvements are observed with pools using a large
number of top-level vdevs and heavy synchronous write workflows
(especially indirect writes and concurrent writes from multiple ZILs).
Real-life testing shows a 200% to 300% performance increase with
indirect writes and various commit sizes.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1013
2012-06-27 13:20:20 +00:00
|
|
|
lwb->lwb_fastwrite = fastwrite;
|
OpenZFS 7578 - Fix/improve some aspects of ZIL writing
- After some ZIL changes 6 years ago zil_slog_limit got partially broken
due to zl_itx_list_sz not updated when async itx'es upgraded to sync.
Actually because of other changes about that time zl_itx_list_sz is not
really required to implement the functionality, so this patch removes
some unneeded broken code and variables.
- Original idea of zil_slog_limit was to reduce chance of SLOG abuse by
single heavy logger, that increased latency for other (more latency critical)
loggers, by pushing heavy log out into the main pool instead of SLOG. Beside
huge latency increase for heavy writers, this implementation caused double
write of all data, since the log records were explicitly prepared for SLOG.
Since we now have I/O scheduler, I've found it can be much more efficient
to reduce priority of heavy logger SLOG writes from ZIO_PRIORITY_SYNC_WRITE
to ZIO_PRIORITY_ASYNC_WRITE, while still leave them on SLOG.
- Existing ZIL implementation had problem with space efficiency when it
has to write large chunks of data into log blocks of limited size. In some
cases efficiency stopped to almost as low as 50%. In case of ZIL stored on
spinning rust, that also reduced log write speed in half, since head had to
uselessly fly over allocated but not written areas. This change improves
the situation by offloading problematic operations from z*_log_write() to
zil_lwb_commit(), which knows real situation of log blocks allocation and
can split large requests into pieces much more efficiently. Also as side
effect it removes one of two data copy operations done by ZIL code WR_COPIED
case.
- While there, untangle and unify code of z*_log_write() functions.
Also zfs_log_write() alike to zvol_log_write() can now handle writes crossing
block boundary, that may also improve efficiency if ZPL is made to do that.
Sponsored by: iXsystems, Inc.
Authored by: Alexander Motin <mav@FreeBSD.org>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Steven Hartland <steven.hartland@multiplay.co.uk>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Richard Yao <ryao@gentoo.org>
Ported-by: Giuseppe Di Natale <dinatale2@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/7578
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/aeb13ac
Closes #6191
	lwb->lwb_slog = slog;
	lwb->lwb_buf = zio_buf_alloc(BP_GET_LSIZE(bp));
	lwb->lwb_max_txg = txg;
	lwb->lwb_zio = NULL;
	lwb->lwb_tx = NULL;
	if (BP_GET_CHECKSUM(bp) == ZIO_CHECKSUM_ZILOG2) {
		lwb->lwb_nused = sizeof (zil_chain_t);
		lwb->lwb_sz = BP_GET_LSIZE(bp);
	} else {
		lwb->lwb_nused = 0;
		lwb->lwb_sz = BP_GET_LSIZE(bp) - sizeof (zil_chain_t);
	}

	mutex_enter(&zilog->zl_lock);
	list_insert_tail(&zilog->zl_lwb_list, lwb);
	mutex_exit(&zilog->zl_lock);

	return (lwb);
}
/*
 * Called when we create in-memory log transactions so that we know
 * to cleanup the itxs at the end of spa_sync().
 */
void
zilog_dirty(zilog_t *zilog, uint64_t txg)
{
	dsl_pool_t *dp = zilog->zl_dmu_pool;
	dsl_dataset_t *ds = dmu_objset_ds(zilog->zl_os);

	if (ds->ds_is_snapshot)
		panic("dirtying snapshot!");

	if (txg_list_add(&dp->dp_dirty_zilogs, zilog, txg)) {
		/* up the hold count until we can be written out */
		dmu_buf_add_ref(ds->ds_dbuf, zilog);
	}
}
/*
 * Determine if the zil is dirty in the specified txg. Callers wanting to
 * ensure that the dirty state does not change must hold the itxg_lock for
 * the specified txg. Holding the lock will ensure that the zil cannot be
 * dirtied (zil_itx_assign) or cleaned (zil_clean) while we check its current
 * state.
 */
boolean_t
zilog_is_dirty_in_txg(zilog_t *zilog, uint64_t txg)
{
	dsl_pool_t *dp = zilog->zl_dmu_pool;

	if (txg_list_member(&dp->dp_dirty_zilogs, zilog, txg & TXG_MASK))
		return (B_TRUE);
	return (B_FALSE);
}
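The locking contract above can be modeled in isolation. A standalone sketch, not ZFS code: each of the TXG_SIZE open-txg slots gets a lock and a dirty flag, and the caller holds the slot's lock across the check so the answer cannot change underneath it:

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

#define	TXG_SIZE	4		/* open txgs, as in ZFS */
#define	TXG_MASK	(TXG_SIZE - 1)

struct itxg_model {
	pthread_mutex_t lock;		/* models itxg_lock */
	bool dirty;			/* models "has pending itxs" */
};

static struct itxg_model itxgs[TXG_SIZE];

static void
itxg_model_init(void)
{
	for (int i = 0; i < TXG_SIZE; i++)
		pthread_mutex_init(&itxgs[i].lock, NULL);
}

/* Models zilog_is_dirty_in_txg(): caller must hold the slot lock. */
static bool
dirty_in_txg_locked(uint64_t txg)
{
	return (itxgs[txg & TXG_MASK].dirty);
}

static void
example_check(uint64_t txg)
{
	struct itxg_model *it = &itxgs[txg & TXG_MASK];

	pthread_mutex_lock(&it->lock);
	if (dirty_in_txg_locked(txg)) {
		/* slot can be neither dirtied nor cleaned here */
	}
	pthread_mutex_unlock(&it->lock);
}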
/*
 * Determine if the zil is dirty. The zil is considered dirty if it has
 * any pending itx records that have not been cleaned by zil_clean().
 */
boolean_t
zilog_is_dirty(zilog_t *zilog)
{
	dsl_pool_t *dp = zilog->zl_dmu_pool;
	int t;

	for (t = 0; t < TXG_SIZE; t++) {
		if (txg_list_member(&dp->dp_dirty_zilogs, zilog, t))
			return (B_TRUE);
	}
	return (B_FALSE);
}
/*
 * Create an on-disk intent log.
 */
static lwb_t *
zil_create(zilog_t *zilog)
{
	const zil_header_t *zh = zilog->zl_header;
	lwb_t *lwb = NULL;
	uint64_t txg = 0;
	dmu_tx_t *tx = NULL;
	blkptr_t blk;
	int error = 0;
	boolean_t fastwrite = FALSE;
	boolean_t slog = FALSE;

	/*
	 * Wait for any previous destroy to complete.
	 */
	txg_wait_synced(zilog->zl_dmu_pool, zilog->zl_destroy_txg);

	ASSERT(zh->zh_claim_txg == 0);
	ASSERT(zh->zh_replay_seq == 0);

	blk = zh->zh_log;

	/*
	 * Allocate an initial log block if:
	 * - there isn't one already
	 * - the existing block is the wrong endianness
	 */
	if (BP_IS_HOLE(&blk) || BP_SHOULD_BYTESWAP(&blk)) {
		tx = dmu_tx_create(zilog->zl_os);
		VERIFY(dmu_tx_assign(tx, TXG_WAIT) == 0);
		dsl_dataset_dirty(dmu_objset_ds(zilog->zl_os), tx);
		txg = dmu_tx_get_txg(tx);

		if (!BP_IS_HOLE(&blk)) {
			zio_free_zil(zilog->zl_spa, txg, &blk);
			BP_ZERO(&blk);
		}

Native Encryption for ZFS on Linux
This change incorporates three major pieces:
The first change is a keystore that manages wrapping
and encryption keys for encrypted datasets. These
commands mostly involve manipulating the new
DSL Crypto Key ZAP Objects that live in the MOS. Each
encrypted dataset has its own DSL Crypto Key that is
protected with a user's key. This level of indirection
allows users to change their keys without re-encrypting
their entire datasets. The change implements the new
subcommands "zfs load-key", "zfs unload-key" and
"zfs change-key" which allow the user to manage their
encryption keys and settings. In addition, several new
flags and properties have been added to allow dataset
creation and to make mounting and unmounting more
convenient.
The second piece of this patch provides the ability to
encrypt, decrypt, and authenticate protected datasets.
Each object set maintains a Merkle tree of Message
Authentication Codes that protect the lower layers,
similarly to how checksums are maintained. This part
impacts the zio layer, which handles the actual
encryption and generation of MACs, as well as the ARC
and DMU, which need to be able to handle encrypted
buffers and protected data.
The last addition is the ability to do raw, encrypted
sends and receives. The idea here is to send raw
encrypted and compressed data and receive it exactly
as is on a backup system. This means that the dataset
on the receiving system is protected using the same
user key that is in use on the sending side. By doing
so, datasets can be efficiently backed up to an
untrusted system without fear of data being
compromised.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #494
Closes #5769
		error = zio_alloc_zil(zilog->zl_spa, zilog->zl_os, txg, &blk,
		    ZIL_MIN_BLKSZ, &slog);
		fastwrite = TRUE;

		if (error == 0)
			zil_init_log_chain(zilog, &blk);
	}

	/*
	 * Allocate a log write buffer (lwb) for the first log block.
	 */
	if (error == 0)
		lwb = zil_alloc_lwb(zilog, &blk, slog, txg, fastwrite);

	/*
	 * If we just allocated the first log block, commit our transaction
	 * and wait for zil_sync() to stuff the block pointer into zh_log.
	 * (zh is part of the MOS, so we cannot modify it in open context.)
	 */
	if (tx != NULL) {
		dmu_tx_commit(tx);
		txg_wait_synced(zilog->zl_dmu_pool, txg);
	}

	ASSERT(bcmp(&blk, &zh->zh_log, sizeof (blk)) == 0);

	return (lwb);
}
/*
 * In one tx, free all log blocks and clear the log header.
 * If keep_first is set, then we're replaying a log with no content.
 * We want to keep the first block, however, so that the first
 * synchronous transaction doesn't require a txg_wait_synced()
 * in zil_create(). We don't need to txg_wait_synced() here either
 * when keep_first is set, because both zil_create() and zil_destroy()
 * will wait for any in-progress destroys to complete.
 */
void
zil_destroy(zilog_t *zilog, boolean_t keep_first)
{
	const zil_header_t *zh = zilog->zl_header;
	lwb_t *lwb;
	dmu_tx_t *tx;
	uint64_t txg;

	/*
	 * Wait for any previous destroy to complete.
	 */
	txg_wait_synced(zilog->zl_dmu_pool, zilog->zl_destroy_txg);

	zilog->zl_old_header = *zh;	/* debugging aid */

	if (BP_IS_HOLE(&zh->zh_log))
		return;

	tx = dmu_tx_create(zilog->zl_os);
	VERIFY(dmu_tx_assign(tx, TXG_WAIT) == 0);
	dsl_dataset_dirty(dmu_objset_ds(zilog->zl_os), tx);
	txg = dmu_tx_get_txg(tx);

	mutex_enter(&zilog->zl_lock);

	ASSERT3U(zilog->zl_destroy_txg, <, txg);
	zilog->zl_destroy_txg = txg;
	zilog->zl_keep_first = keep_first;

	if (!list_is_empty(&zilog->zl_lwb_list)) {
		ASSERT(zh->zh_claim_txg == 0);
		VERIFY(!keep_first);
		while ((lwb = list_head(&zilog->zl_lwb_list)) != NULL) {
			ASSERT(lwb->lwb_zio == NULL);
			if (lwb->lwb_fastwrite)
				metaslab_fastwrite_unmark(zilog->zl_spa,
				    &lwb->lwb_blk);
			list_remove(&zilog->zl_lwb_list, lwb);
			if (lwb->lwb_buf != NULL)
				zio_buf_free(lwb->lwb_buf, lwb->lwb_sz);
			zio_free_zil(zilog->zl_spa, txg, &lwb->lwb_blk);
			kmem_cache_free(zil_lwb_cache, lwb);
		}
	} else if (!keep_first) {
		zil_destroy_sync(zilog, tx);
	}
	mutex_exit(&zilog->zl_lock);

	dmu_tx_commit(tx);
}
void
zil_destroy_sync(zilog_t *zilog, dmu_tx_t *tx)
{
	ASSERT(list_is_empty(&zilog->zl_lwb_list));
	(void) zil_parse(zilog, zil_free_log_block,
	    zil_free_log_record, tx, zilog->zl_header->zh_claim_txg, B_FALSE);
}

int
zil_claim(dsl_pool_t *dp, dsl_dataset_t *ds, void *txarg)
{
	dmu_tx_t *tx = txarg;
	uint64_t first_txg = dmu_tx_get_txg(tx);
	zilog_t *zilog;
	zil_header_t *zh;
	objset_t *os;
	int error;

	error = dmu_objset_own_obj(dp, ds->ds_object,
	    DMU_OST_ANY, B_FALSE, B_FALSE, FTAG, &os);
	if (error != 0) {
		/*
		 * EBUSY indicates that the objset is inconsistent, in which
		 * case it cannot have a ZIL.
		 */
		if (error != EBUSY) {
			cmn_err(CE_WARN, "can't open objset for %llu, error %u",
			    (unsigned long long)ds->ds_object, error);
		}
		return (0);
	}

	zilog = dmu_objset_zil(os);
	zh = zil_header_in_syncing_context(zilog);

	if (spa_get_log_state(zilog->zl_spa) == SPA_LOG_CLEAR) {
		if (!BP_IS_HOLE(&zh->zh_log))
			zio_free_zil(zilog->zl_spa, first_txg, &zh->zh_log);
		BP_ZERO(&zh->zh_log);
		if (os->os_encrypted)
			os->os_next_write_raw = B_TRUE;
		dsl_dataset_dirty(dmu_objset_ds(os), tx);
		dmu_objset_disown(os, B_FALSE, FTAG);
		return (0);
	}

	/*
	 * Claim all log blocks if we haven't already done so, and remember
	 * the highest claimed sequence number. This ensures that if we can
	 * read only part of the log now (e.g. due to a missing device),
	 * but we can read the entire log later, we will not try to replay
	 * or destroy beyond the last block we successfully claimed.
	 */
	ASSERT3U(zh->zh_claim_txg, <=, first_txg);
	if (zh->zh_claim_txg == 0 && !BP_IS_HOLE(&zh->zh_log)) {
		(void) zil_parse(zilog, zil_claim_log_block,
		    zil_claim_log_record, tx, first_txg, B_FALSE);
		zh->zh_claim_txg = first_txg;
		zh->zh_claim_blk_seq = zilog->zl_parse_blk_seq;
		zh->zh_claim_lr_seq = zilog->zl_parse_lr_seq;
		if (zilog->zl_parse_lr_count || zilog->zl_parse_blk_count > 1)
			zh->zh_flags |= ZIL_REPLAY_NEEDED;
		zh->zh_flags |= ZIL_CLAIM_LR_SEQ_VALID;
		dsl_dataset_dirty(dmu_objset_ds(os), tx);
	}

	ASSERT3U(first_txg, ==, (spa_last_synced_txg(zilog->zl_spa) + 1));
	dmu_objset_disown(os, B_FALSE, FTAG);
	return (0);
}
/*
 * Check the log by walking the log chain.
 * Checksum errors are ok as they indicate the end of the chain.
 * Any other error (no device or read failure) returns an error.
 */
/* ARGSUSED */
int
zil_check_log_chain(dsl_pool_t *dp, dsl_dataset_t *ds, void *tx)
{
	zilog_t *zilog;
	objset_t *os;
	blkptr_t *bp;
	int error;

	ASSERT(tx == NULL);

	error = dmu_objset_from_ds(ds, &os);
	if (error != 0) {
		cmn_err(CE_WARN, "can't open objset %llu, error %d",
		    (unsigned long long)ds->ds_object, error);
		return (0);
	}

	zilog = dmu_objset_zil(os);
	bp = (blkptr_t *)&zilog->zl_header->zh_log;

	/*
	 * Check the first block and determine if it's on a log device
	 * which may have been removed or faulted prior to loading this
	 * pool. If so, there's no point in checking the rest of the log
	 * as its content should have already been synced to the pool.
	 */
	if (!BP_IS_HOLE(bp)) {
		vdev_t *vd;
		boolean_t valid = B_TRUE;

		spa_config_enter(os->os_spa, SCL_STATE, FTAG, RW_READER);
		vd = vdev_lookup_top(os->os_spa, DVA_GET_VDEV(&bp->blk_dva[0]));
		if (vd->vdev_islog && vdev_is_dead(vd))
			valid = vdev_log_state_valid(vd);
		spa_config_exit(os->os_spa, SCL_STATE, FTAG);

		if (!valid)
			return (0);
	}

	/*
	 * Because tx == NULL, zil_claim_log_block() will not actually claim
	 * any blocks, but just determine whether it is possible to do so.
	 * In addition to checking the log chain, zil_claim_log_block()
	 * will invoke zio_claim() with a done func of spa_claim_notify(),
	 * which will update spa_max_claim_txg. See spa_load() for details.
	 */
	error = zil_parse(zilog, zil_claim_log_block, zil_claim_log_record, tx,
	    zilog->zl_header->zh_claim_txg ? -1ULL : spa_first_txg(os->os_spa),
	    B_FALSE);

	return ((error == ECKSUM || error == ENOENT) ? 0 : error);
}
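The convention that a checksum error simply marks the end of the chain can be modeled in isolation. A standalone sketch, not the actual zil_parse() code; the block layout and checksum are stand-ins:

#include <stdint.h>

struct log_blk {
	struct log_blk *next;	/* embedded pointer to the next block */
	uint64_t cksum;		/* checksum expected for this block */
};

/* Placeholder checksum; the real ZIL uses its ZILOG/ZILOG2 checksums. */
static uint64_t
blk_checksum(const struct log_blk *b)
{
	return ((uintptr_t)b->next * 2654435761u);
}

/*
 * Walk the chain until a checksum mismatch, which marks the end of
 * the valid log rather than being treated as a failure; only I/O
 * errors (not modeled here) would be real errors.
 */
static int
walk_chain(struct log_blk *blk)
{
	int nblocks = 0;

	while (blk != NULL && blk_checksum(blk) == blk->cksum) {
		nblocks++;
		blk = blk->next;
	}
	return (nblocks);	/* number of blocks successfully validated */
}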
static int
zil_vdev_compare(const void *x1, const void *x2)
{
	const uint64_t v1 = ((zil_vdev_node_t *)x1)->zv_vdev;
	const uint64_t v2 = ((zil_vdev_node_t *)x2)->zv_vdev;

	return (AVL_CMP(v1, v2));
}
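AVL_CMP collapses the usual three-way comparison into branch-free arithmetic returning -1, 0, or +1. A sketch of the idiom, assuming the conventional definition (the exact macro lives in the AVL headers):

/* Conventional three-way compare: returns -1, 0, or +1, branch-free. */
#define	CMP3(a, b)	(((a) > (b)) - ((a) < (b)))

Compared with the naive (a) - (b), this cannot overflow and works for unsigned 64-bit keys such as vdev ids.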
void
zil_add_block(zilog_t *zilog, const blkptr_t *bp)
{
	avl_tree_t *t = &zilog->zl_vdev_tree;
	avl_index_t where;
	zil_vdev_node_t *zv, zvsearch;
	int ndvas = BP_GET_NDVAS(bp);
	int i;

	if (zfs_nocacheflush)
		return;

	ASSERT(zilog->zl_writer);

	/*
	 * Even though we're zl_writer, we still need a lock because the
	 * zl_get_data() callbacks may have dmu_sync() done callbacks
	 * that will run concurrently.
	 */
	mutex_enter(&zilog->zl_vdev_lock);
	for (i = 0; i < ndvas; i++) {
		zvsearch.zv_vdev = DVA_GET_VDEV(&bp->blk_dva[i]);
		if (avl_find(t, &zvsearch, &where) == NULL) {
			zv = kmem_alloc(sizeof (*zv), KM_SLEEP);
			zv->zv_vdev = zvsearch.zv_vdev;
			avl_insert(t, zv, where);
		}
	}
	mutex_exit(&zilog->zl_vdev_lock);
}
static void
zil_flush_vdevs(zilog_t *zilog)
{
	spa_t *spa = zilog->zl_spa;
	avl_tree_t *t = &zilog->zl_vdev_tree;
	void *cookie = NULL;
	zil_vdev_node_t *zv;
	zio_t *zio;

	ASSERT(zilog->zl_writer);

	/*
	 * We don't need zl_vdev_lock here because we're the zl_writer,
	 * and all zl_get_data() callbacks are done.
	 */
	if (avl_numnodes(t) == 0)
		return;

	spa_config_enter(spa, SCL_STATE, FTAG, RW_READER);

	zio = zio_root(spa, NULL, NULL, ZIO_FLAG_CANFAIL);

	while ((zv = avl_destroy_nodes(t, &cookie)) != NULL) {
		vdev_t *vd = vdev_lookup_top(spa, zv->zv_vdev);
		if (vd != NULL)
			zio_flush(zio, vd);
		kmem_free(zv, sizeof (*zv));
	}

	/*
	 * Wait for all the flushes to complete. Not all devices actually
	 * support the DKIOCFLUSHWRITECACHE ioctl, so it's OK if it fails.
	 */
	(void) zio_wait(zio);

	spa_config_exit(spa, SCL_STATE, FTAG);
}
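zil_add_block() and zil_flush_vdevs() together implement a simple pattern: deduplicate the vdevs touched by log writes into a set, then issue one write-cache flush per vdev and wait for all of them. A standalone model of the pattern, with the AVL tree replaced by a small array and the flush reduced to a message:

#include <stdint.h>
#include <stdio.h>

#define	MAX_VDEVS	64

/* Deduplicated set of vdev ids touched since the last flush. */
static uint64_t pending_vdevs[MAX_VDEVS];
static int npending;

/* Record a vdev id, ignoring duplicates (models zil_add_block()). */
static void
track_vdev(uint64_t vdev_id)
{
	for (int i = 0; i < npending; i++) {
		if (pending_vdevs[i] == vdev_id)
			return;
	}
	if (npending < MAX_VDEVS)
		pending_vdevs[npending++] = vdev_id;
}

/*
 * Flush each tracked vdev once, then forget them all
 * (models zil_flush_vdevs(), minus the async zio plumbing).
 */
static void
flush_tracked_vdevs(void)
{
	for (int i = 0; i < npending; i++)
		printf("flush write cache on vdev %llu\n",
		    (unsigned long long)pending_vdevs[i]);
	npending = 0;
}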
/*
 * Function called when a log block write completes
 */
static void
zil_lwb_write_done(zio_t *zio)
{
	lwb_t *lwb = zio->io_private;
	zilog_t *zilog = lwb->lwb_zilog;
	dmu_tx_t *tx = lwb->lwb_tx;

	ASSERT(BP_GET_COMPRESS(zio->io_bp) == ZIO_COMPRESS_OFF);
	ASSERT(BP_GET_TYPE(zio->io_bp) == DMU_OT_INTENT_LOG);
	ASSERT(BP_GET_LEVEL(zio->io_bp) == 0);
	ASSERT(BP_GET_BYTEORDER(zio->io_bp) == ZFS_HOST_BYTEORDER);
	ASSERT(!BP_IS_GANG(zio->io_bp));
	ASSERT(!BP_IS_HOLE(zio->io_bp));
	ASSERT(BP_GET_FILL(zio->io_bp) == 0);

	/*
	 * Ensure the lwb buffer pointer is cleared before releasing
	 * the txg. If we have had an allocation failure and
	 * the txg is waiting to sync then we want zil_sync()
	 * to remove the lwb so that it's not picked up as the next new
	 * one in zil_commit_writer(). zil_sync() will only remove
	 * the lwb if lwb_buf is null.
	 */
	abd_put(zio->io_abd);
	zio_buf_free(lwb->lwb_buf, lwb->lwb_sz);
	mutex_enter(&zilog->zl_lock);
	lwb->lwb_zio = NULL;
	lwb->lwb_fastwrite = FALSE;
	lwb->lwb_buf = NULL;
	lwb->lwb_tx = NULL;
	mutex_exit(&zilog->zl_lock);

	/*
	 * Now that we've written this log block, we have a stable pointer
	 * to the next block in the chain, so it's OK to let the txg in
	 * which we allocated the next block sync.
	 */
	dmu_tx_commit(tx);
}

/*
 * Initialize the io for a log block.
 */
static void
zil_lwb_write_init(zilog_t *zilog, lwb_t *lwb)
{
	zbookmark_phys_t zb;
	zio_priority_t prio;

	SET_BOOKMARK(&zb, lwb->lwb_blk.blk_cksum.zc_word[ZIL_ZC_OBJSET],
	    ZB_ZIL_OBJECT, ZB_ZIL_LEVEL,
	    lwb->lwb_blk.blk_cksum.zc_word[ZIL_ZC_SEQ]);

	if (zilog->zl_root_zio == NULL) {
		zilog->zl_root_zio = zio_root(zilog->zl_spa, NULL, NULL,
		    ZIO_FLAG_CANFAIL);
	}

	/* Lock so zil_sync() doesn't fastwrite_unmark after zio is created */
	mutex_enter(&zilog->zl_lock);
	if (lwb->lwb_zio == NULL) {
		abd_t *lwb_abd = abd_get_from_buf(lwb->lwb_buf,
		    BP_GET_LSIZE(&lwb->lwb_blk));
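		/*
		 * Illustrative note (not from the original source):
		 * abd_get_from_buf() only wraps the existing linear lwb_buf
		 * in an ABD header; no data is copied. The matching
		 * abd_put() in zil_lwb_write_done() releases the wrapper
		 * before the raw buffer is freed with zio_buf_free().
		 */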
		if (!lwb->lwb_fastwrite) {
			metaslab_fastwrite_mark(zilog->zl_spa, &lwb->lwb_blk);
			lwb->lwb_fastwrite = 1;
		}
		if (!lwb->lwb_slog || zilog->zl_cur_used <= zil_slog_bulk)
			prio = ZIO_PRIORITY_SYNC_WRITE;
		else
			prio = ZIO_PRIORITY_ASYNC_WRITE;
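		/*
		 * Illustrative note (not from the original source): a single
		 * heavy logger that has pushed more than zil_slog_bulk bytes
		 * into the current burst keeps its blocks on the SLOG but has
		 * them issued at async priority, so small latency-critical
		 * loggers are not queued behind it in the I/O scheduler. The
		 * default of zil_slog_bulk is a tunable module parameter and
		 * may differ between releases.
		 */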
		lwb->lwb_zio = zio_rewrite(zilog->zl_root_zio, zilog->zl_spa,
		    0, &lwb->lwb_blk, lwb_abd, BP_GET_LSIZE(&lwb->lwb_blk),
		    zil_lwb_write_done, lwb, prio,
		    ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE |
		    ZIO_FLAG_FASTWRITE, &zb);
	}
	mutex_exit(&zilog->zl_lock);
}

/*
 * Define a limited set of intent log block sizes.
 *
 * These must be a multiple of 4KB. Note only the amount used (again
 * aligned to 4KB) actually gets written. However, we can't always just
 * allocate SPA_OLD_MAXBLOCKSIZE as the slog space could be exhausted.
 */
uint64_t zil_block_buckets[] = {
    4096,		/* non TX_WRITE */
    8192 + 4096,	/* data base */
    32 * 1024 + 4096,	/* NFS writes */
    UINT64_MAX
};
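
/*
 * Illustrative sketch (not part of the original source): the bucket walk in
 * zil_lwb_write_start() picks the first entry that can hold zl_cur_used plus
 * the chain header. For example, with roughly 20KB outstanding the loop skips
 * 4096 and 12288 and selects 36864; a request larger than the last finite
 * bucket falls through to UINT64_MAX and is clamped to SPA_OLD_MAXBLOCKSIZE.
 */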

/*
 * Start a log block write and advance to the next log block.
 * Calls are serialized.
 */
static lwb_t *
zil_lwb_write_start(zilog_t *zilog, lwb_t *lwb)
{
	lwb_t *nlwb = NULL;
	zil_chain_t *zilc;
	spa_t *spa = zilog->zl_spa;
	blkptr_t *bp;
	dmu_tx_t *tx;
	uint64_t txg;
	uint64_t zil_blksz, wsz;
	int i, error;
	boolean_t slog;

	if (BP_GET_CHECKSUM(&lwb->lwb_blk) == ZIO_CHECKSUM_ZILOG2) {
		zilc = (zil_chain_t *)lwb->lwb_buf;
		bp = &zilc->zc_next_blk;
	} else {
		zilc = (zil_chain_t *)(lwb->lwb_buf + lwb->lwb_sz);
		bp = &zilc->zc_next_blk;
	}

	ASSERT(lwb->lwb_nused <= lwb->lwb_sz);

	/*
	 * Allocate the next block and save its address in this block
	 * before writing it in order to establish the log chain.
	 * Note that if the allocation of nlwb synced before we wrote
	 * the block that points at it (lwb), we'd leak it if we crashed.
	 * Therefore, we don't do dmu_tx_commit() until zil_lwb_write_done().
	 * We dirty the dataset to ensure that zil_sync() will be called
	 * to clean up in the event of allocation failure or I/O failure.
	 */
	tx = dmu_tx_create(zilog->zl_os);

	/*
	 * Since we are not going to create any new dirty data and we can even
	 * help with clearing the existing dirty data, we should not be subject
	 * to the dirty data based delays.
	 * We (ab)use TXG_WAITED to bypass the delay mechanism.
	 * One side effect of using TXG_WAITED is that dmu_tx_assign() can
	 * fail if the pool is suspended. Those are dramatic circumstances,
	 * so we return NULL to signal that normal ZIL processing is not
	 * possible and txg_wait_synced() should be used to ensure that the
	 * data is on disk.
	 */
	error = dmu_tx_assign(tx, TXG_WAITED);
	if (error != 0) {
		ASSERT3S(error, ==, EIO);
		dmu_tx_abort(tx);
		return (NULL);
	}
	dsl_dataset_dirty(dmu_objset_ds(zilog->zl_os), tx);
	txg = dmu_tx_get_txg(tx);

	lwb->lwb_tx = tx;

	/*
	 * Log blocks are pre-allocated. Here we select the size of the next
	 * block, based on size used in the last block.
	 * - first find the smallest bucket that will fit the block from a
	 *   limited set of block sizes. This is because it's faster to write
	 *   blocks allocated from the same metaslab as they are adjacent or
	 *   close.
	 * - next find the maximum from the new suggested size and an array of
	 *   previous sizes. This lessens a picket fence effect of wrongly
	 *   guessing the size if we have a stream of say 2k, 64k, 2k, 64k
	 *   requests.
	 *
	 * Note we only write what is used, but we can't just allocate
	 * the maximum block size because we can exhaust the available
	 * pool log space.
	 */
	zil_blksz = zilog->zl_cur_used + sizeof (zil_chain_t);
	for (i = 0; zil_blksz > zil_block_buckets[i]; i++)
		continue;
	zil_blksz = zil_block_buckets[i];
	if (zil_blksz == UINT64_MAX)
		zil_blksz = SPA_OLD_MAXBLOCKSIZE;
	zilog->zl_prev_blks[zilog->zl_prev_rotor] = zil_blksz;
	for (i = 0; i < ZIL_PREV_BLKS; i++)
		zil_blksz = MAX(zil_blksz, zilog->zl_prev_blks[i]);
	zilog->zl_prev_rotor = (zilog->zl_prev_rotor + 1) & (ZIL_PREV_BLKS - 1);
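
	/*
	 * Worked example (illustrative, not from the original source): with
	 * alternating 2k and 64k commits, the bucket step alone would flip
	 * between a small and a large block size on every write. Taking the
	 * MAX() over the zl_prev_blks history keeps the suggestion pinned at
	 * the largest recently seen size until ZIL_PREV_BLKS commits have
	 * passed, so the stream settles on one size instead of oscillating.
	 */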

	BP_ZERO(bp);
	error = zio_alloc_zil(spa, zilog->zl_os, txg, bp, zil_blksz, &slog);
	if (slog) {
		ZIL_STAT_BUMP(zil_itx_metaslab_slog_count);
		ZIL_STAT_INCR(zil_itx_metaslab_slog_bytes, lwb->lwb_nused);
	} else {
		ZIL_STAT_BUMP(zil_itx_metaslab_normal_count);
		ZIL_STAT_INCR(zil_itx_metaslab_normal_bytes, lwb->lwb_nused);
	}
	if (error == 0) {
		ASSERT3U(bp->blk_birth, ==, txg);
		bp->blk_cksum = lwb->lwb_blk.blk_cksum;
		bp->blk_cksum.zc_word[ZIL_ZC_SEQ]++;

		/*
		 * Allocate a new log write buffer (lwb).
		 */
		nlwb = zil_alloc_lwb(zilog, bp, slog, txg, TRUE);

		/* Record the block for later vdev flushing */
		zil_add_block(zilog, &lwb->lwb_blk);
	}
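
	/*
	 * Illustrative note (not from the original source): the bumped
	 * ZIL_ZC_SEQ word in the copied checksum is what links the chain.
	 * At replay time a block is accepted only if its self-describing
	 * checksum matches the value its predecessor recorded for it, so
	 * the first mismatch marks the end of the valid log.
	 */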

	if (BP_GET_CHECKSUM(&lwb->lwb_blk) == ZIO_CHECKSUM_ZILOG2) {
		/* For Slim ZIL only write what is used. */
		wsz = P2ROUNDUP_TYPED(lwb->lwb_nused, ZIL_MIN_BLKSZ, uint64_t);
		ASSERT3U(wsz, <=, lwb->lwb_sz);
		zio_shrink(lwb->lwb_zio, wsz);
	} else {
		wsz = lwb->lwb_sz;
	}
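	/*
	 * Worked example (illustrative, assuming the 4KB minimum block size
	 * noted above): a 128KB lwb with only 9000 bytes used rounds wsz up
	 * to 12288, and zio_shrink() trims the write to that length; the
	 * older non-ZILOG2 format always writes the full allocation.
	 */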

	zilc->zc_pad = 0;
	zilc->zc_nused = lwb->lwb_nused;
	zilc->zc_eck.zec_cksum = lwb->lwb_blk.blk_cksum;
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
/*
|
2010-05-28 20:45:14 +00:00
|
|
|
* clear unused data for security
|
2008-11-20 20:01:55 +00:00
|
|
|
*/
|
2010-05-28 20:45:14 +00:00
|
|
|
bzero(lwb->lwb_buf + lwb->lwb_nused, wsz - lwb->lwb_nused);
|
2008-11-20 20:01:55 +00:00
|
|
|
|
2010-05-28 20:45:14 +00:00
|
|
|
zio_nowait(lwb->lwb_zio); /* Kick off the write for the old log block */
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
/*
|
2010-05-28 20:45:14 +00:00
|
|
|
* If there was an allocation failure then nlwb will be null which
|
|
|
|
* forces a txg_wait_synced().
|
2008-11-20 20:01:55 +00:00
|
|
|
*/
|
|
|
|
return (nlwb);
|
|
|
|
}

static lwb_t *
zil_lwb_commit(zilog_t *zilog, itx_t *itx, lwb_t *lwb)
{
	lr_t *lrcb, *lrc;
	lr_write_t *lrwb, *lrw;
	char *lr_buf;
	uint64_t dlen, dnow, lwb_sp, reclen, txg;

	if (lwb == NULL)
		return (NULL);

	ASSERT(lwb->lwb_buf != NULL);

	lrc = &itx->itx_lr;		/* Common log record inside itx. */
	lrw = (lr_write_t *)lrc;	/* Write log record inside itx. */
	if (lrc->lrc_txtype == TX_WRITE && itx->itx_wr_state == WR_NEED_COPY) {
		dlen = P2ROUNDUP_TYPED(
		    lrw->lr_length, sizeof (uint64_t), uint64_t);
	} else {
		dlen = 0;
	}
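	/*
	 * Worked example (illustrative, not from the original source): a
	 * WR_NEED_COPY write with lr_length = 13 reserves
	 * dlen = P2ROUNDUP(13, 8) = 16 bytes of copy space, keeping the
	 * following log record 8-byte aligned.
	 */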
	reclen = lrc->lrc_reclen;
	zilog->zl_cur_used += (reclen + dlen);
|
|
|
txg = lrc->lrc_txg;
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
zil_lwb_write_init(zilog, lwb);
|
|
|
|
|
OpenZFS 7578 - Fix/improve some aspects of ZIL writing
- After some ZIL changes 6 years ago zil_slog_limit got partially broken
due to zl_itx_list_sz not updated when async itx'es upgraded to sync.
Actually because of other changes about that time zl_itx_list_sz is not
really required to implement the functionality, so this patch removes
some unneeded broken code and variables.
- Original idea of zil_slog_limit was to reduce chance of SLOG abuse by
single heavy logger, that increased latency for other (more latency critical)
loggers, by pushing heavy log out into the main pool instead of SLOG. Beside
huge latency increase for heavy writers, this implementation caused double
write of all data, since the log records were explicitly prepared for SLOG.
Since we now have I/O scheduler, I've found it can be much more efficient
to reduce priority of heavy logger SLOG writes from ZIO_PRIORITY_SYNC_WRITE
to ZIO_PRIORITY_ASYNC_WRITE, while still leave them on SLOG.
- Existing ZIL implementation had problem with space efficiency when it
has to write large chunks of data into log blocks of limited size. In some
cases efficiency stopped to almost as low as 50%. In case of ZIL stored on
spinning rust, that also reduced log write speed in half, since head had to
uselessly fly over allocated but not written areas. This change improves
the situation by offloading problematic operations from z*_log_write() to
zil_lwb_commit(), which knows real situation of log blocks allocation and
can split large requests into pieces much more efficiently. Also as side
effect it removes one of two data copy operations done by ZIL code WR_COPIED
case.
- While there, untangle and unify code of z*_log_write() functions.
Also zfs_log_write() alike to zvol_log_write() can now handle writes crossing
block boundary, that may also improve efficiency if ZPL is made to do that.
Sponsored by: iXsystems, Inc.
Authored by: Alexander Motin <mav@FreeBSD.org>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Steven Hartland <steven.hartland@multiplay.co.uk>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Richard Yao <ryao@gentoo.org>
Ported-by: Giuseppe Di Natale <dinatale2@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/7578
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/aeb13ac
Closes #6191
2017-06-09 16:15:37 +00:00
|
|
|
cont:
	/*
	 * If this record won't fit in the current log block, start a new one.
	 * For WR_NEED_COPY optimize layout for minimal number of chunks.
	 */
	lwb_sp = lwb->lwb_sz - lwb->lwb_nused;
	if (reclen > lwb_sp || (reclen + dlen > lwb_sp &&
	    lwb_sp < ZIL_MAX_WASTE_SPACE && (dlen % ZIL_MAX_LOG_DATA == 0 ||
	    lwb_sp < reclen + dlen % ZIL_MAX_LOG_DATA))) {
		lwb = zil_lwb_write_start(zilog, lwb);
		if (lwb == NULL)
			return (NULL);
		zil_lwb_write_init(zilog, lwb);
		ASSERT(LWB_EMPTY(lwb));
		lwb_sp = lwb->lwb_sz - lwb->lwb_nused;
		ASSERT3U(reclen + MIN(dlen, sizeof (uint64_t)), <=, lwb_sp);
	}

	dnow = MIN(dlen, lwb_sp - reclen);
	lr_buf = lwb->lwb_buf + lwb->lwb_nused;
	bcopy(lrc, lr_buf, reclen);
	lrcb = (lr_t *)lr_buf;		/* Like lrc, but inside lwb. */
	lrwb = (lr_write_t *)lrcb;	/* Like lrw, but inside lwb. */

	ZIL_STAT_BUMP(zil_itx_count);

	/*
	 * If it's a write, fetch the data or get its blkptr as appropriate.
	 */
	if (lrc->lrc_txtype == TX_WRITE) {
		if (txg > spa_freeze_txg(zilog->zl_spa))
			txg_wait_synced(zilog->zl_dmu_pool, txg);
		if (itx->itx_wr_state == WR_COPIED) {
			ZIL_STAT_BUMP(zil_itx_copied_count);
			ZIL_STAT_INCR(zil_itx_copied_bytes, lrw->lr_length);
		} else {
			char *dbuf;
			int error;

			if (itx->itx_wr_state == WR_NEED_COPY) {
				dbuf = lr_buf + reclen;
				lrcb->lrc_reclen += dnow;
				if (lrwb->lr_length > dnow)
					lrwb->lr_length = dnow;
				lrw->lr_offset += dnow;
				lrw->lr_length -= dnow;
				ZIL_STAT_BUMP(zil_itx_needcopy_count);
				ZIL_STAT_INCR(zil_itx_needcopy_bytes,
				    lrw->lr_length);
			} else {
				ASSERT(itx->itx_wr_state == WR_INDIRECT);
				dbuf = NULL;
				ZIL_STAT_BUMP(zil_itx_indirect_count);
				ZIL_STAT_INCR(zil_itx_indirect_bytes,
				    lrw->lr_length);
			}
			error = zilog->zl_get_data(
			    itx->itx_private, lrwb, dbuf, lwb->lwb_zio);
			if (error == EIO) {
				txg_wait_synced(zilog->zl_dmu_pool, txg);
				return (lwb);
			}
			if (error != 0) {
				ASSERT(error == ENOENT || error == EEXIST ||
				    error == EALREADY);
				return (lwb);
			}
		}
	}

	/*
	 * We're actually making an entry, so update lrc_seq to be the
	 * log record sequence number. Note that this is generally not
	 * equal to the itx sequence number because not all transactions
	 * are synchronous, and sometimes spa_sync() gets there first.
	 */
	lrcb->lrc_seq = ++zilog->zl_lr_seq;	/* we are single threaded */
	lwb->lwb_nused += reclen + dnow;
	lwb->lwb_max_txg = MAX(lwb->lwb_max_txg, txg);
	ASSERT3U(lwb->lwb_nused, <=, lwb->lwb_sz);
	ASSERT0(P2PHASE(lwb->lwb_nused, sizeof (uint64_t)));

	dlen -= dnow;
	if (dlen > 0) {
		zilog->zl_cur_used += reclen;
		goto cont;
	}

	return (lwb);
}

Only commit the ZIL once in zpl_writepages() (msync() case).
Currently, using msync() results in the following code path:
sys_msync -> zpl_fsync -> filemap_write_and_wait_range -> zpl_writepages -> write_cache_pages -> zpl_putpage
In this code path, zil_commit() is called as part of zpl_putpage().
This means that for each page, the write is handed to the DMU, the ZIL
is committed, and only then do we move on to the next page. As one might
imagine, this results in atrocious performance when there is a large
number of pages to write: instead of committing a batch of N writes,
we do N commits containing one page each. In some extreme cases this
can make msync() ~700 times slower than it should be, as well as
causing very inefficient use of ZIL resources.
This patch fixes the issue by making sure that the requested writes
are batched and then committed only once. Unfortunately, the
implementation is somewhat non-trivial because there is no way to run
write_cache_pages in SYNC mode (so that we get all pages) without
making it wait on the writeback tag for each page.
The solution implemented here is composed of two parts:
- A new callback system was added to the ZIL, which allows the caller to
be notified when its ITX gets written to stable storage. One nice
thing is that the callback is called not only in zil_commit() but
in zil_sync() as well, which means that the caller doesn't have to
care whether the write ended up in the ZIL or the DMU: it will get
notified as soon as it's safe, period. This is an improvement over
dmu_tx_callback_register(), which was used previously and only
supports DMU writes. The rationale for this change is to allow
zpl_putpage() to be notified when a ZIL commit is completed without
having to block on zil_commit() itself.
- zpl_writepages() now calls write_cache_pages in non-SYNC mode, which
prevents (1) write_cache_pages from blocking, and (2) zpl_putpage
from issuing ZIL commits. zpl_writepages() issues the commit
itself instead of relying on zpl_putpage() to do it, thus nicely
batching the writes. Note, however, that we still have to call
write_cache_pages() again in SYNC mode because there is an edge case
documented in the implementation of write_cache_pages() whereby it
will not give us all dirty pages when running in non-SYNC mode. Thus
we need to run it at least once in SYNC mode to make sure we honor
persistency guarantees. This only happens when the pages are
modified at the same time msync() is running, which should be rare.
In most cases there won't be any additional pages and this second
call will do nothing.
Note that this change also fixes a bug related to #907 whereby calling
msync() on pages that were already handed over to the DMU in a previous
writepages() call would make msync() block until the next TXG sync
instead of returning as soon as the ZIL commit is complete. The new
callback system fixes that problem.
Signed-off-by: Richard Yao <ryao@gentoo.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1849
Closes #907

itx_t *
zil_itx_create(uint64_t txtype, size_t lrsize)
{
	itx_t *itx;

	lrsize = P2ROUNDUP_TYPED(lrsize, sizeof (uint64_t), size_t);

	itx = zio_data_buf_alloc(offsetof(itx_t, itx_lr) + lrsize);
	itx->itx_lr.lrc_txtype = txtype;
	itx->itx_lr.lrc_reclen = lrsize;
	itx->itx_lr.lrc_seq = 0;	/* defensive */
	itx->itx_sync = B_TRUE;		/* default is synchronous */
	itx->itx_callback = NULL;
	itx->itx_callback_data = NULL;

	return (itx);
}
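
The itx_callback/itx_callback_data hooks initialized above can be modeled in
isolation. The following standalone userspace sketch shows the notification
scheme from the zpl_writepages() commit message; the struct here is a
deliberately simplified stand-in for itx_t, and on_stable() and fake_commit()
are invented names for the example, not the kernel API.

#include <stdio.h>

/* Simplified stand-in for itx_t: just the callback plumbing. */
typedef struct itx {
    void (*itx_callback)(void *);
    void *itx_callback_data;
} itx_t;

/* Invented consumer: runs once the "write" reaches stable storage. */
static void
on_stable(void *arg)
{
    printf("page %d is now stable\n", *(int *)arg);
}

/* Model of the commit (or sync) path firing the callback exactly once. */
static void
fake_commit(itx_t *itx)
{
    if (itx->itx_callback != NULL)
        itx->itx_callback(itx->itx_callback_data);
}

int
main(void)
{
    int page = 42;
    itx_t itx = { .itx_callback = on_stable, .itx_callback_data = &page };

    fake_commit(&itx);  /* the caller never blocks on the commit itself */
    return (0);
}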

void
zil_itx_destroy(itx_t *itx)
{
	zio_data_buf_free(itx, offsetof(itx_t, itx_lr) + itx->itx_lr.lrc_reclen);
}
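
Both zil_itx_create() and the dlen computation in zil_lwb_commit() round
sizes up to a multiple of sizeof (uint64_t), which is what lets
zil_itx_destroy() free the exact allocation size and keeps lwb_nused 8-byte
aligned. A minimal model of that power-of-two round-up, written with the
usual bit trick rather than the real P2ROUNDUP_TYPED macro:

#include <stdio.h>
#include <stdint.h>

/* Round x up to the next multiple of align; align must be a power of two. */
static uint64_t
roundup_p2(uint64_t x, uint64_t align)
{
    return ((x + align - 1) & ~(align - 1));
}

int
main(void)
{
    /* 8-byte alignment keeps record sizes multiples of sizeof (uint64_t). */
    printf("%llu\n", (unsigned long long)roundup_p2(13, 8));    /* 16 */
    printf("%llu\n", (unsigned long long)roundup_p2(16, 8));    /* 16 */
    return (0);
}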

/*
 * Free up the sync and async itxs. The itxs_t has already been detached
 * so no locks are needed.
 */
static void
zil_itxg_clean(itxs_t *itxs)
{
	itx_t *itx;
	list_t *list;
	avl_tree_t *t;
	void *cookie;
	itx_async_node_t *ian;

	list = &itxs->i_sync_list;
	while ((itx = list_head(list)) != NULL) {
		if (itx->itx_callback != NULL)
			itx->itx_callback(itx->itx_callback_data);
		list_remove(list, itx);
		zil_itx_destroy(itx);
	}

	cookie = NULL;
	t = &itxs->i_async_tree;
	while ((ian = avl_destroy_nodes(t, &cookie)) != NULL) {
		list = &ian->ia_list;
		while ((itx = list_head(list)) != NULL) {
			if (itx->itx_callback != NULL)
				itx->itx_callback(itx->itx_callback_data);
			list_remove(list, itx);
			zil_itx_destroy(itx);
		}
		list_destroy(list);
		kmem_free(ian, sizeof (itx_async_node_t));
	}
	avl_destroy(t);

	kmem_free(itxs, sizeof (itxs_t));
}

static int
zil_aitx_compare(const void *x1, const void *x2)
{
	const uint64_t o1 = ((itx_async_node_t *)x1)->ia_foid;
	const uint64_t o2 = ((itx_async_node_t *)x2)->ia_foid;

	return (AVL_CMP(o1, o2));
}
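
The async tree keyed by object id relies on this comparator returning a
strict three-way result (-1, 0, or 1), which is what avl_find() in
zil_remove_async() below depends on. A standalone sketch of the branch-free
idiom commonly used for that contract (the real AVL_CMP macro may differ in
detail):

#include <stdio.h>
#include <stdint.h>

/* Three-way compare: -1 if a < b, 0 if equal, 1 if a > b. */
static int
cmp_u64(uint64_t a, uint64_t b)
{
    return ((a > b) - (a < b));
}

int
main(void)
{
    printf("%d %d %d\n", cmp_u64(1, 2), cmp_u64(2, 2), cmp_u64(3, 2));
    return (0);     /* prints: -1 0 1 */
}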

/*
 * Remove all async itx with the given oid.
 */
static void
zil_remove_async(zilog_t *zilog, uint64_t oid)
{
	uint64_t otxg, txg;
	itx_async_node_t *ian;
	avl_tree_t *t;
	avl_index_t where;
	list_t clean_list;
	itx_t *itx;

	ASSERT(oid != 0);
	list_create(&clean_list, sizeof (itx_t), offsetof(itx_t, itx_node));

	if (spa_freeze_txg(zilog->zl_spa) != UINT64_MAX) /* ziltest support */
		otxg = ZILTEST_TXG;
	else
		otxg = spa_last_synced_txg(zilog->zl_spa) + 1;

	for (txg = otxg; txg < (otxg + TXG_CONCURRENT_STATES); txg++) {
		itxg_t *itxg = &zilog->zl_itxg[txg & TXG_MASK];

		mutex_enter(&itxg->itxg_lock);
		if (itxg->itxg_txg != txg) {
			mutex_exit(&itxg->itxg_lock);
			continue;
		}

		/*
		 * Locate the object node and append its list.
		 */
		t = &itxg->itxg_itxs->i_async_tree;
		ian = avl_find(t, &oid, &where);
		if (ian != NULL)
			list_move_tail(&clean_list, &ian->ia_list);
		mutex_exit(&itxg->itxg_lock);
	}

	while ((itx = list_head(&clean_list)) != NULL) {
		if (itx->itx_callback != NULL)
			itx->itx_callback(itx->itx_callback_data);
		list_remove(&clean_list, itx);
		zil_itx_destroy(itx);
	}
	list_destroy(&clean_list);
}
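
Both zil_remove_async() above and zil_itx_assign() below locate the per-txg
itx bucket with zl_itxg[txg & TXG_MASK], a power-of-two ring indexed by
transaction group number. A tiny model of that ring indexing, assuming a
ring of four slots for illustration (the real TXG constants live in the txg
headers and may differ):

#include <stdio.h>
#include <stdint.h>

#define TXG_SIZE    4               /* must be a power of two (assumed) */
#define TXG_MASK    (TXG_SIZE - 1)

int
main(void)
{
    uint64_t txg;

    /* Consecutive txgs cycle through the same small array of buckets. */
    for (txg = 100; txg < 108; txg++)
        printf("txg %llu -> slot %llu\n",
            (unsigned long long)txg,
            (unsigned long long)(txg & TXG_MASK));
    return (0);
}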

void
zil_itx_assign(zilog_t *zilog, itx_t *itx, dmu_tx_t *tx)
{
	uint64_t txg;
	itxg_t *itxg;
	itxs_t *itxs, *clean = NULL;

	/*
	 * Object ids can be re-instantiated in the next txg so
	 * remove any async transactions to avoid future leaks.
	 * This can happen if a fsync occurs on the re-instantiated
	 * object for a WR_INDIRECT or WR_NEED_COPY write, which gets
	 * the new file data and flushes a write record for the old object.
	 */
	if ((itx->itx_lr.lrc_txtype & ~TX_CI) == TX_REMOVE)
		zil_remove_async(zilog, itx->itx_oid);

	/*
	 * Ensure the data of a renamed file is committed before the rename.
	 */
	if ((itx->itx_lr.lrc_txtype & ~TX_CI) == TX_RENAME)
		zil_async_to_sync(zilog, itx->itx_oid);

	if (spa_freeze_txg(zilog->zl_spa) != UINT64_MAX)
		txg = ZILTEST_TXG;
	else
		txg = dmu_tx_get_txg(tx);

	itxg = &zilog->zl_itxg[txg & TXG_MASK];
	mutex_enter(&itxg->itxg_lock);
	itxs = itxg->itxg_itxs;
	if (itxg->itxg_txg != txg) {
		if (itxs != NULL) {
			/*
			 * The zil_clean callback hasn't got around to cleaning
			 * this itxg. Save the itxs for release below.
			 * This should be rare.
			 */
			zfs_dbgmsg("zil_itx_assign: missed itx cleanup for "
			    "txg %llu", itxg->itxg_txg);
			clean = itxg->itxg_itxs;
		}
		itxg->itxg_txg = txg;
		itxs = itxg->itxg_itxs = kmem_zalloc(sizeof (itxs_t),
		    KM_SLEEP);

		list_create(&itxs->i_sync_list, sizeof (itx_t),
		    offsetof(itx_t, itx_node));
		avl_create(&itxs->i_async_tree, zil_aitx_compare,
		    sizeof (itx_async_node_t),
		    offsetof(itx_async_node_t, ia_node));
	}
	if (itx->itx_sync) {
		list_insert_tail(&itxs->i_sync_list, itx);
	} else {
		avl_tree_t *t = &itxs->i_async_tree;

Implement large_dnode pool feature
Justification
-------------
This feature adds support for variable length dnodes. Our motivation is
to eliminate the overhead associated with using spill blocks. Spill
blocks are used to store system attribute data (i.e. file metadata) that
does not fit in the dnode's bonus buffer. By allowing a larger bonus
buffer area the use of a spill block can be avoided. Spill blocks
potentially incur an additional read I/O for every dnode in a dnode
block. As a worst case example, reading 32 dnodes from a 16k dnode block
and all of the spill blocks could issue 33 separate reads. Now suppose
those dnodes have size 1024 and therefore don't need spill blocks. Then
the worst case number of blocks read is reduced from 33 to two: one
per dnode block. In practice spill blocks may tend to be co-located on
disk with the dnode blocks so the reduction in I/O would not be this
drastic. In a badly fragmented pool, however, the improvement could be
significant.
ZFS-on-Linux systems that make heavy use of extended attributes would
benefit from this feature. In particular, ZFS-on-Linux supports the
xattr=sa dataset property which allows file extended attribute data
to be stored in the dnode bonus buffer as an alternative to the
traditional directory-based format. Workloads such as SELinux and the
Lustre distributed filesystem often store enough xattr data to force
spill blocks when xattr=sa is in effect. Large dnodes may therefore
provide a performance benefit to such systems.
Other use cases that may benefit from this feature include files with
large ACLs and symbolic links with long target names. Furthermore,
this feature may be desirable on other platforms in case future
applications or features are developed that could make use of a
larger bonus buffer area.
Implementation
--------------
The size of a dnode may be a multiple of 512 bytes up to the size of
a dnode block (currently 16384 bytes). A dn_extra_slots field was
added to the current on-disk dnode_phys_t structure to describe the
size of the physical dnode on disk. The 8 bits for this field were
taken from the zero filled dn_pad2 field. The field represents how
many "extra" dnode_phys_t slots a dnode consumes in its dnode block.
This convention results in a value of 0 for 512 byte dnodes which
preserves on-disk format compatibility with older software.
Similarly, the in-memory dnode_t structure has a new dn_num_slots field
to represent the total number of dnode_phys_t slots consumed on disk.
Thus dn->dn_num_slots is 1 greater than the corresponding
dnp->dn_extra_slots. This difference in convention was adopted
because, unlike on-disk structures, backward compatibility is not a
concern for in-memory objects, so we used a more natural way to
represent size for a dnode_t.
The default size for newly created dnodes is determined by the value of
a new "dnodesize" dataset property. By default the property is set to
"legacy" which is compatible with older software. Setting the property
to "auto" will allow the filesystem to choose the most suitable dnode
size. Currently this just sets the default dnode size to 1k, but future
code improvements could dynamically choose a size based on observed
workload patterns. Dnodes of varying sizes can coexist within the same
dataset and even within the same dnode block. For example, to enable
automatically-sized dnodes, run
# zfs set dnodesize=auto tank/fish
The user can also specify literal values for the dnodesize property.
These are currently limited to powers of two from 1k to 16k. The
power-of-2 limitation is only for simplicity of the user interface.
Internally the implementation can handle any multiple of 512 up to 16k,
and consumers of the DMU API can specify any legal dnode value.
The size of a new dnode is determined at object allocation time and
stored as a new field in the znode in-memory structure. New DMU
interfaces are added to allow the consumer to specify the dnode size
that a newly allocated object should use. Existing interfaces are
unchanged to avoid having to update every call site and to preserve
compatibility with external consumers such as Lustre. The new
interfaces names are given below. The versions of these functions that
don't take a dnodesize parameter now just call the _dnsize() versions
with a dnodesize of 0, which means use the legacy dnode size.
New DMU interfaces:
dmu_object_alloc_dnsize()
dmu_object_claim_dnsize()
dmu_object_reclaim_dnsize()
New ZAP interfaces:
zap_create_dnsize()
zap_create_norm_dnsize()
zap_create_flags_dnsize()
zap_create_claim_norm_dnsize()
zap_create_link_dnsize()
The constant DN_MAX_BONUSLEN is renamed to DN_OLD_MAX_BONUSLEN. The
spa_maxdnodesize() function should be used to determine the maximum
bonus length for a pool.
These are a few noteworthy changes to key functions:
* The prototype for dnode_hold_impl() now takes a "slots" parameter.
When the DNODE_MUST_BE_FREE flag is set, this parameter is used to
ensure the hole at the specified object offset is large enough to
hold the dnode being created. The slots parameter is also used
to ensure a dnode does not span multiple dnode blocks. In both of
these cases, if a failure occurs, ENOSPC is returned. Keep in mind,
these failure cases are only possible when using DNODE_MUST_BE_FREE.
If the DNODE_MUST_BE_ALLOCATED flag is set, "slots" must be 0.
dnode_hold_impl() will check if the requested dnode is already
consumed as an extra dnode slot by an large dnode, in which case
it returns ENOENT.
* The function dmu_object_alloc() advances to the next dnode block
if dnode_hold_impl() returns an error for a requested object.
This is because the beginning of the next dnode block is the only
location it can safely assume to either be a hole or a valid
starting point for a dnode.
* dnode_next_offset_level() and other functions that iterate
through dnode blocks may no longer use a simple array indexing
scheme. These now use the current dnode's dn_num_slots field to
advance to the next dnode in the block. This is to ensure we
properly skip the current dnode's bonus area and don't interpret it
as a valid dnode.
zdb
---
The zdb command was updated to display a dnode's size under the
"dnsize" column when the object is dumped.
For ZIL create log records, zdb will now display the slot count for
the object.
ztest
-----
Ztest chooses a random dnodesize for every newly created object. The
random distribution is more heavily weighted toward small dnodes to
better simulate real-world datasets.
Unused bonus buffer space is filled with non-zero values computed from
the object number, dataset id, offset, and generation number. This
helps ensure that the dnode traversal code properly skips the interior
regions of large dnodes, and that these interior regions are not
overwritten by data belonging to other dnodes. A new test visits each
object in a dataset. It verifies that the actual dnode size matches what
was stored in the ztest block tag when it was created. It also verifies
that the unused bonus buffer space is filled with the expected data
patterns.
ZFS Test Suite
--------------
Added six new large dnode-specific tests, and integrated the dnodesize
property into existing tests for zfs allow and send/recv.
Send/Receive
------------
ZFS send streams for datasets containing large dnodes cannot be received
on pools that don't support the large_dnode feature. A send stream with
large dnodes sets a DMU_BACKUP_FEATURE_LARGE_DNODE flag which will be
unrecognized by an incompatible receiving pool so that the zfs receive
will fail gracefully.
While not implemented here, it may be possible to generate a
backward-compatible send stream from a dataset containing large
dnodes. The implementation may be tricky, however, because the send
object record for a large dnode would need to be resized to a 512
byte dnode, possibly kicking in a spill block in the process. This
means we would need to construct a new SA layout and possibly
register it in the SA layout object. The SA layout is normally just
sent as an ordinary object record. But if we are constructing new
layouts while generating the send stream we'd have to build the SA
layout object dynamically and send it at the end of the stream.
For sending and receiving between pools that do support large dnodes,
the drr_object send record type is extended with a new field to store
the dnode slot count. This field was repurposed from unused padding
in the structure.
ZIL Replay
----------
The dnode slot count is stored in the uppermost 8 bits of the lr_foid
field. The bits were unused as the object id is currently capped at
48 bits.
Resizing Dnodes
---------------
It should be possible to resize a dnode when it is dirtied if the
current dnodesize dataset property differs from the dnode's size, but
this functionality is not currently implemented. Clearly a dnode can
only grow if there are sufficient contiguous unused slots in the
dnode block, but it should always be possible to shrink a dnode.
Growing dnodes may be useful to reduce fragmentation in a pool with
many spill blocks in use. Shrinking dnodes may be useful to allow
sending a dataset to a pool that doesn't support the large_dnode
feature.
Feature Reference Counting
--------------------------
The reference count for the large_dnode pool feature tracks the
number of datasets that have ever contained a dnode of size larger
than 512 bytes. The first time a large dnode is created in a dataset
the dataset is converted to an extensible dataset. This is a one-way
operation and the only way to decrement the feature count is to
destroy the dataset, even if the dataset no longer contains any large
dnodes. The complexity of reference counting on a per-dnode basis was
too high, so we chose to track it on a per-dataset basis similarly to
the large_block feature.
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3542
2016-03-17 01:25:34 +00:00
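		/*
		 * Async itxs are bucketed per object id. On pools with the
		 * large_dnode feature the dnode slot count rides in the
		 * upper bits of lr_foid, so LR_FOID_GET_OBJ() masks it off
		 * to recover the bare object id used as the AVL key.
		 */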
		uint64_t foid =
		    LR_FOID_GET_OBJ(((lr_ooo_t *)&itx->itx_lr)->lr_foid);
		itx_async_node_t *ian;
		avl_index_t where;

		ian = avl_find(t, &foid, &where);
		if (ian == NULL) {
			ian = kmem_alloc(sizeof (itx_async_node_t),
			    KM_SLEEP);
			list_create(&ian->ia_list, sizeof (itx_t),
			    offsetof(itx_t, itx_node));
			ian->ia_foid = foid;
			avl_insert(t, ian, where);
		}
		list_insert_tail(&ian->ia_list, itx);
	}

	itx->itx_lr.lrc_txg = dmu_tx_get_txg(tx);
	zilog_dirty(zilog, txg);
	mutex_exit(&itxg->itxg_lock);

	/* Release the old itxs now we've dropped the lock */
	if (clean != NULL)
		zil_itxg_clean(clean);
}

/*
 * If there are any in-memory intent log transactions which have now been
 * synced then start up a taskq to free them. We should only do this after we
 * have written out the uberblocks (i.e. txg has been committed) so that we
 * don't inadvertently clean out in-memory log records that would be required
 * by zil_commit().
 */
void
zil_clean(zilog_t *zilog, uint64_t synced_txg)
{
	itxg_t *itxg = &zilog->zl_itxg[synced_txg & TXG_MASK];
	itxs_t *clean_me;

	mutex_enter(&itxg->itxg_lock);
	if (itxg->itxg_itxs == NULL || itxg->itxg_txg == ZILTEST_TXG) {
		mutex_exit(&itxg->itxg_lock);
		return;
	}
	ASSERT3U(itxg->itxg_txg, <=, synced_txg);
	ASSERT3U(itxg->itxg_txg, !=, 0);
	clean_me = itxg->itxg_itxs;
	itxg->itxg_itxs = NULL;
	itxg->itxg_txg = 0;
	mutex_exit(&itxg->itxg_lock);
	/*
	 * Preferably start a task queue to free up the old itxs but
	 * if taskq_dispatch can't allocate resources to do that then
	 * free it in-line. This should be rare. Note, using TQ_SLEEP
	 * created a bad performance problem.
	 */
	ASSERT3P(zilog->zl_dmu_pool, !=, NULL);
	ASSERT3P(zilog->zl_dmu_pool->dp_zil_clean_taskq, !=, NULL);
	taskqid_t id = taskq_dispatch(zilog->zl_dmu_pool->dp_zil_clean_taskq,
	    (void (*)(void *))zil_itxg_clean, clean_me, TQ_NOSLEEP);
	if (id == TASKQID_INVALID)
		zil_itxg_clean(clean_me);
}

/*
 * Get the list of itxs to commit into zl_itx_commit_list.
 */
static void
zil_get_commit_list(zilog_t *zilog)
{
	uint64_t otxg, txg;
	list_t *commit_list = &zilog->zl_itx_commit_list;

	if (spa_freeze_txg(zilog->zl_spa) != UINT64_MAX) /* ziltest support */
		otxg = ZILTEST_TXG;
	else
		otxg = spa_last_synced_txg(zilog->zl_spa) + 1;

	/*
	 * This is inherently racy, since there is nothing to prevent
	 * the last synced txg from changing. That's okay since we'll
	 * only commit things in the future.
	 */
	for (txg = otxg; txg < (otxg + TXG_CONCURRENT_STATES); txg++) {
		itxg_t *itxg = &zilog->zl_itxg[txg & TXG_MASK];

		mutex_enter(&itxg->itxg_lock);
		if (itxg->itxg_txg != txg) {
			mutex_exit(&itxg->itxg_lock);
			continue;
		}

		/*
		 * If we're adding itx records to the zl_itx_commit_list,
		 * then the zil better be dirty in this "txg". We can assert
		 * that here since we're holding the itxg_lock which will
		 * prevent spa_sync from cleaning it. Once we add the itxs
		 * to the zl_itx_commit_list we must commit it to disk even
		 * if it's unnecessary (i.e. the txg was synced).
		 */
		ASSERT(zilog_is_dirty_in_txg(zilog, txg) ||
		    spa_freeze_txg(zilog->zl_spa) != UINT64_MAX);
		list_move_tail(commit_list, &itxg->itxg_itxs->i_sync_list);

		mutex_exit(&itxg->itxg_lock);
	}
}
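
/*
 * Note on the scan pattern above and below: at most TXG_CONCURRENT_STATES
 * txgs, starting at the first unsynced txg, can hold outstanding itxs,
 * and each maps to a distinct zl_itxg[] slot via txg & TXG_MASK, so the
 * loops visit each slot at most once per pass.
 */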

/*
 * Move the async itxs for a specified object to commit into sync lists.
 */
static void
zil_async_to_sync(zilog_t *zilog, uint64_t foid)
{
	uint64_t otxg, txg;
	itx_async_node_t *ian;
	avl_tree_t *t;
	avl_index_t where;

	if (spa_freeze_txg(zilog->zl_spa) != UINT64_MAX) /* ziltest support */
		otxg = ZILTEST_TXG;
	else
		otxg = spa_last_synced_txg(zilog->zl_spa) + 1;

	/*
	 * This is inherently racy, since there is nothing to prevent
	 * the last synced txg from changing.
	 */
	for (txg = otxg; txg < (otxg + TXG_CONCURRENT_STATES); txg++) {
		itxg_t *itxg = &zilog->zl_itxg[txg & TXG_MASK];

		mutex_enter(&itxg->itxg_lock);
		if (itxg->itxg_txg != txg) {
			mutex_exit(&itxg->itxg_lock);
			continue;
		}

		/*
		 * If a foid is specified then find that node and append its
		 * list. Otherwise walk the tree appending all the lists
		 * to the sync list. We add to the end rather than the
		 * beginning to ensure the create has happened.
		 */
		t = &itxg->itxg_itxs->i_async_tree;
		if (foid != 0) {
			ian = avl_find(t, &foid, &where);
			if (ian != NULL) {
				list_move_tail(&itxg->itxg_itxs->i_sync_list,
				    &ian->ia_list);
			}
		} else {
			void *cookie = NULL;

			while ((ian = avl_destroy_nodes(t, &cookie)) != NULL) {
				list_move_tail(&itxg->itxg_itxs->i_sync_list,
				    &ian->ia_list);
				list_destroy(&ian->ia_list);
				kmem_free(ian, sizeof (itx_async_node_t));
			}
		}
		mutex_exit(&itxg->itxg_lock);
	}
}

static void
zil_commit_writer(zilog_t *zilog)
{
	uint64_t txg;
	itx_t *itx;
	lwb_t *lwb;
	spa_t *spa = zilog->zl_spa;
	int error = 0;

	ASSERT(zilog->zl_root_zio == NULL);

	mutex_exit(&zilog->zl_lock);

	zil_get_commit_list(zilog);

	/*
	 * Return if there's nothing to commit before we dirty the fs by
	 * calling zil_create().
	 */
	if (list_head(&zilog->zl_itx_commit_list) == NULL) {
		mutex_enter(&zilog->zl_lock);
		return;
	}

	if (zilog->zl_suspend) {
		lwb = NULL;
	} else {
		lwb = list_tail(&zilog->zl_lwb_list);
		if (lwb == NULL)
			lwb = zil_create(zilog);
	}

	DTRACE_PROBE1(zil__cw1, zilog_t *, zilog);
	for (itx = list_head(&zilog->zl_itx_commit_list); itx != NULL;
	    itx = list_next(&zilog->zl_itx_commit_list, itx)) {
		txg = itx->itx_lr.lrc_txg;
		ASSERT3U(txg, !=, 0);

		/*
		 * This is inherently racy and may result in us writing
		 * out a log block for a txg that was just synced. This is
		 * ok since we'll end up cleaning that log block the next
		 * time we call zil_sync().
		 */
		if (txg > spa_last_synced_txg(spa) || txg > spa_freeze_txg(spa))
			lwb = zil_lwb_commit(zilog, itx, lwb);
	}
	DTRACE_PROBE1(zil__cw2, zilog_t *, zilog);

	/* write the last block out */
	if (lwb != NULL && lwb->lwb_zio != NULL)
		lwb = zil_lwb_write_start(zilog, lwb);

	zilog->zl_cur_used = 0;

	/*
	 * Wait if necessary for the log blocks to be on stable storage.
	 */
	if (zilog->zl_root_zio) {
		error = zio_wait(zilog->zl_root_zio);
		zilog->zl_root_zio = NULL;
		zil_flush_vdevs(zilog);
	}

	if (error || lwb == NULL)
		txg_wait_synced(zilog->zl_dmu_pool, 0);

	while ((itx = list_head(&zilog->zl_itx_commit_list))) {
		txg = itx->itx_lr.lrc_txg;
		ASSERT(txg);

		if (itx->itx_callback != NULL)
			itx->itx_callback(itx->itx_callback_data);
		list_remove(&zilog->zl_itx_commit_list, itx);
		zil_itx_destroy(itx);
	}

	mutex_enter(&zilog->zl_lock);

	/*
	 * Remember the highest committed log sequence number for ztest.
	 * We only update this value when all the log writes succeeded,
	 * because ztest wants to ASSERT that it got the whole log chain.
	 */
	if (error == 0 && lwb != NULL)
		zilog->zl_commit_lr_seq = zilog->zl_lr_seq;
}

/*
 * Commit zfs transactions to stable storage.
 * If foid is 0 push out all transactions, otherwise push only those
 * for that object or that might reference that object.
 *
 * itxs are committed in batches. In a heavily stressed zil there will be
 * a commit writer thread who is writing out a bunch of itxs to the log
 * for a set of committing threads (cthreads) in the same batch as the writer.
 * Those cthreads are all waiting on the same cv for that batch.
 *
 * There will also be a different and growing batch of threads that are
 * waiting to commit (qthreads). When the committing batch completes
 * a transition occurs such that the cthreads exit and the qthreads become
 * cthreads. One of the new cthreads becomes the writer thread for the
 * batch. Any new threads arriving become new qthreads.
 *
 * Only 2 condition variables are needed and there's no transition
 * between the two cvs needed. They just flip-flop between qthreads
 * and cthreads.
 *
 * Using this scheme we can efficiently wake up only those threads
 * whose batch has been committed.
 */
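/*
 * Illustratively: a thread joining batch N sleeps on zl_cv_batch[N & 1].
 * When the writer finishes batch N it broadcasts on zl_cv_batch[N & 1]
 * to release that batch's waiters, and signals zl_cv_batch[(N + 1) & 1]
 * so that one waiter from batch N + 1 takes over as the next writer.
 */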
void
zil_commit(zilog_t *zilog, uint64_t foid)
{
	uint64_t mybatch;

	if (zilog->zl_sync == ZFS_SYNC_DISABLED)
		return;

	ZIL_STAT_BUMP(zil_commit_count);

	/* move the async itxs for the foid to the sync queues */
	zil_async_to_sync(zilog, foid);

	mutex_enter(&zilog->zl_lock);
	mybatch = zilog->zl_next_batch;
	while (zilog->zl_writer) {
		cv_wait(&zilog->zl_cv_batch[mybatch & 1], &zilog->zl_lock);
		if (mybatch <= zilog->zl_com_batch) {
			mutex_exit(&zilog->zl_lock);
			return;
		}
	}

	zilog->zl_next_batch++;
	zilog->zl_writer = B_TRUE;
	ZIL_STAT_BUMP(zil_commit_writer_count);
	zil_commit_writer(zilog);
	zilog->zl_com_batch = mybatch;
	zilog->zl_writer = B_FALSE;

	/* wake up one thread to become the next writer */
	cv_signal(&zilog->zl_cv_batch[(mybatch+1) & 1]);

	/* wake up all threads waiting for this batch to be committed */
	cv_broadcast(&zilog->zl_cv_batch[mybatch & 1]);

	mutex_exit(&zilog->zl_lock);
}
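
/*
 * A sketch of typical use (caller names are illustrative, not from this
 * file): an fsync-style caller that has already logged its itxs would
 * issue zil_commit(zilog, object_id) to push only records for that
 * object, while zil_commit(zilog, 0) pushes every pending transaction.
 */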

/*
 * Called in syncing context to free committed log blocks and update log header.
 */
void
zil_sync(zilog_t *zilog, dmu_tx_t *tx)
{
	zil_header_t *zh = zil_header_in_syncing_context(zilog);
	uint64_t txg = dmu_tx_get_txg(tx);
	spa_t *spa = zilog->zl_spa;
	uint64_t *replayed_seq = &zilog->zl_replayed_seq[txg & TXG_MASK];
	lwb_t *lwb;

	/*
	 * We don't zero out zl_destroy_txg, so make sure we don't try
	 * to destroy it twice.
	 */
	if (spa_sync_pass(spa) != 1)
		return;

	mutex_enter(&zilog->zl_lock);

	ASSERT(zilog->zl_stop_sync == 0);

	if (*replayed_seq != 0) {
		ASSERT(zh->zh_replay_seq < *replayed_seq);
		zh->zh_replay_seq = *replayed_seq;
		*replayed_seq = 0;
	}

	if (zilog->zl_destroy_txg == txg) {
		blkptr_t blk = zh->zh_log;

		ASSERT(list_head(&zilog->zl_lwb_list) == NULL);

		bzero(zh, sizeof (zil_header_t));
		bzero(zilog->zl_replayed_seq, sizeof (zilog->zl_replayed_seq));

		if (zilog->zl_keep_first) {
			/*
			 * If this block was part of a log chain that couldn't
			 * be claimed because a device was missing during
			 * zil_claim(), but that device later returns,
			 * then this block could erroneously appear valid.
			 * To guard against this, assign a new GUID to the new
			 * log chain so it doesn't matter what blk points to.
			 */
			zil_init_log_chain(zilog, &blk);
			zh->zh_log = blk;
		}
	}

	while ((lwb = list_head(&zilog->zl_lwb_list)) != NULL) {
		zh->zh_log = lwb->lwb_blk;
		if (lwb->lwb_buf != NULL || lwb->lwb_max_txg > txg)
			break;

		ASSERT(lwb->lwb_zio == NULL);

		list_remove(&zilog->zl_lwb_list, lwb);
		zio_free_zil(spa, txg, &lwb->lwb_blk);
		kmem_cache_free(zil_lwb_cache, lwb);

		/*
		 * If we don't have anything left in the lwb list then
		 * we've had an allocation failure and we need to zero
		 * out the zil_header blkptr so that we don't end
		 * up freeing the same block twice.
		 */
		if (list_head(&zilog->zl_lwb_list) == NULL)
			BP_ZERO(&zh->zh_log);
	}
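
	/*
	 * At this point lwb (if non-NULL) is the first lwb the loop above
	 * did not free: its buffer is still outstanding or its max txg is
	 * in the future, so the scan below only touches pending blocks.
	 */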
	/*
	 * Remove fastwrite on any blocks that have been pre-allocated for
	 * the next commit. This prevents fastwrite counter pollution by
	 * unused, long-lived LWBs.
	 */
	for (; lwb != NULL; lwb = list_next(&zilog->zl_lwb_list, lwb)) {
		if (lwb->lwb_fastwrite && !lwb->lwb_zio) {
			metaslab_fastwrite_unmark(zilog->zl_spa, &lwb->lwb_blk);
			lwb->lwb_fastwrite = 0;
		}
	}

	mutex_exit(&zilog->zl_lock);
}

void
zil_init(void)
{
	zil_lwb_cache = kmem_cache_create("zil_lwb_cache",
	    sizeof (struct lwb), 0, NULL, NULL, NULL, NULL, NULL, 0);

	zil_ksp = kstat_create("zfs", 0, "zil", "misc",
	    KSTAT_TYPE_NAMED, sizeof (zil_stats) / sizeof (kstat_named_t),
	    KSTAT_FLAG_VIRTUAL);

	if (zil_ksp != NULL) {
		zil_ksp->ks_data = &zil_stats;
		kstat_install(zil_ksp);
	}
}

void
zil_fini(void)
{
	kmem_cache_destroy(zil_lwb_cache);

	if (zil_ksp != NULL) {
		kstat_delete(zil_ksp);
		zil_ksp = NULL;
	}
}

void
zil_set_sync(zilog_t *zilog, uint64_t sync)
{
	zilog->zl_sync = sync;
}

void
zil_set_logbias(zilog_t *zilog, uint64_t logbias)
{
	zilog->zl_logbias = logbias;
}

zilog_t *
zil_alloc(objset_t *os, zil_header_t *zh_phys)
{
	zilog_t *zilog;
	int i;

	zilog = kmem_zalloc(sizeof (zilog_t), KM_SLEEP);

	zilog->zl_header = zh_phys;
	zilog->zl_os = os;
	zilog->zl_spa = dmu_objset_spa(os);
	zilog->zl_dmu_pool = dmu_objset_pool(os);
	zilog->zl_destroy_txg = TXG_INITIAL - 1;
	zilog->zl_logbias = dmu_objset_logbias(os);
	zilog->zl_sync = dmu_objset_syncprop(os);
	zilog->zl_next_batch = 1;

	mutex_init(&zilog->zl_lock, NULL, MUTEX_DEFAULT, NULL);

	for (i = 0; i < TXG_SIZE; i++) {
		mutex_init(&zilog->zl_itxg[i].itxg_lock, NULL,
		    MUTEX_DEFAULT, NULL);
	}

	list_create(&zilog->zl_lwb_list, sizeof (lwb_t),
	    offsetof(lwb_t, lwb_node));

	list_create(&zilog->zl_itx_commit_list, sizeof (itx_t),
	    offsetof(itx_t, itx_node));

	mutex_init(&zilog->zl_vdev_lock, NULL, MUTEX_DEFAULT, NULL);

	avl_create(&zilog->zl_vdev_tree, zil_vdev_compare,
	    sizeof (zil_vdev_node_t), offsetof(zil_vdev_node_t, zv_node));

	cv_init(&zilog->zl_cv_writer, NULL, CV_DEFAULT, NULL);
	cv_init(&zilog->zl_cv_suspend, NULL, CV_DEFAULT, NULL);
	cv_init(&zilog->zl_cv_batch[0], NULL, CV_DEFAULT, NULL);
	cv_init(&zilog->zl_cv_batch[1], NULL, CV_DEFAULT, NULL);

	return (zilog);
}

void
zil_free(zilog_t *zilog)
{
	int i;

	zilog->zl_stop_sync = 1;

	ASSERT0(zilog->zl_suspend);
	ASSERT0(zilog->zl_suspending);

	ASSERT(list_is_empty(&zilog->zl_lwb_list));
	list_destroy(&zilog->zl_lwb_list);

	avl_destroy(&zilog->zl_vdev_tree);
	mutex_destroy(&zilog->zl_vdev_lock);

	ASSERT(list_is_empty(&zilog->zl_itx_commit_list));
	list_destroy(&zilog->zl_itx_commit_list);

	for (i = 0; i < TXG_SIZE; i++) {
		/*
		 * It's possible for an itx to be generated that doesn't dirty
		 * a txg (e.g. ztest TX_TRUNCATE). So there's no zil_clean()
		 * callback to remove the entry. We remove those here.
		 *
		 * Also free up the ziltest itxs.
		 */
		if (zilog->zl_itxg[i].itxg_itxs)
			zil_itxg_clean(zilog->zl_itxg[i].itxg_itxs);
		mutex_destroy(&zilog->zl_itxg[i].itxg_lock);
	}

	mutex_destroy(&zilog->zl_lock);

	cv_destroy(&zilog->zl_cv_writer);
	cv_destroy(&zilog->zl_cv_suspend);
	cv_destroy(&zilog->zl_cv_batch[0]);
	cv_destroy(&zilog->zl_cv_batch[1]);

	kmem_free(zilog, sizeof (zilog_t));
}

/*
 * Open an intent log.
 */
zilog_t *
zil_open(objset_t *os, zil_get_data_t *get_data)
{
	zilog_t *zilog = dmu_objset_zil(os);

	ASSERT(zilog->zl_get_data == NULL);
	ASSERT(list_is_empty(&zilog->zl_lwb_list));

	zilog->zl_get_data = get_data;

	return (zilog);
}
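
/*
 * Note: zil_open() allocates nothing itself; the zilog was created by
 * zil_alloc() when the objset was set up, so opening just installs the
 * get_data callback used to fetch record data at commit time.
 */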

/*
 * Close an intent log.
 */
void
zil_close(zilog_t *zilog)
{
	lwb_t *lwb;
	uint64_t txg = 0;

	zil_commit(zilog, 0); /* commit all itx */

	/*
	 * The lwb_max_txg for the stubby lwb will reflect the last activity
	 * for the zil. After a txg_wait_synced() on the txg we know all the
	 * callbacks have occurred that may clean the zil. Only then can we
	 * destroy the zl_clean_taskq.
	 */
	mutex_enter(&zilog->zl_lock);
	lwb = list_tail(&zilog->zl_lwb_list);
	if (lwb != NULL)
		txg = lwb->lwb_max_txg;
	mutex_exit(&zilog->zl_lock);
	if (txg)
		txg_wait_synced(zilog->zl_dmu_pool, txg);

	if (zilog_is_dirty(zilog))
		zfs_dbgmsg("zil (%p) is dirty, txg %llu", zilog, txg);
|
|
|
if (txg < spa_freeze_txg(zilog->zl_spa))
|
2016-11-06 03:43:56 +00:00
|
|
|
VERIFY(!zilog_is_dirty(zilog));
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
zilog->zl_get_data = NULL;
|
2011-07-26 19:41:53 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We should have only one LWB left on the list; remove it now.
|
|
|
|
*/
|
|
|
|
mutex_enter(&zilog->zl_lock);
|
|
|
|
lwb = list_head(&zilog->zl_lwb_list);
|
|
|
|
if (lwb != NULL) {
|
|
|
|
ASSERT(lwb == list_tail(&zilog->zl_lwb_list));
|
Add FASTWRITE algorithm for synchronous writes.
Currently, ZIL blocks are spread over vdevs using hint block pointers
managed by the ZIL commit code and passed to metaslab_alloc(). Spreading
log blocks across vdevs is important for performance: indeed, using
multiple disks in parallel decreases the ZIL commit latency, which is
the main performance metric for synchronous writes. However, the current
implementation suffers from the following issues:
1) It would be best if the ZIL module was not aware of such low-level
details. They should be handled by the ZIO and metaslab modules;
2) Because the hint block pointer is managed per log, simultaneous
commits from multiple logs might use the same vdevs at the same time,
which is inefficient;
3) Because dmu_write() does not honor the block pointer hint, indirect
writes are not spread.
The naive solution of rotating the metaslab rotor each time a block is
allocated for the ZIL or dmu_sync() doesn't work in practice because the
first ZIL block to be written is actually allocated during the previous
commit. Consequently, when metaslab_alloc() decides the vdev for this
block, it will do so while a bunch of other allocations are happening at
the same time (from dmu_sync() and other ZILs). This means the vdev for
this block is chosen more or less at random. When the next commit
happens, there is a high chance (especially when the number of blocks
per commit is slightly less than the number of the disks) that one disk
will have to write two blocks (with a potential seek) while other disks
are sitting idle, which defeats spreading and increases the commit
latency.
This commit introduces a new concept in the metaslab allocator:
fastwrites. Basically, each top-level vdev maintains a counter
indicating the number of synchronous writes (from dmu_sync() and the
ZIL) which have been allocated but not yet completed. When the metaslab
is called with the FASTWRITE flag, it will choose the vdev with the
least amount of pending synchronous writes. If there are multiple vdevs
with the same value, the first matching vdev (starting from the rotor)
is used. Once metaslab_alloc() has decided which vdev the block is
allocated to, it updates the fastwrite counter for this vdev.
The rationale goes like this: when an allocation is done with
FASTWRITE, it "reserves" the vdev until the data is written. Until then,
all future allocations will naturally avoid this vdev, even after a full
rotation of the rotor. As a result, pending synchronous writes at a
given point in time will be nicely spread over all vdevs. This contrasts
with the previous algorithm, which is based on the implicit assumption
that blocks are written instantaneously after they're allocated.
metaslab_fastwrite_mark() and metaslab_fastwrite_unmark() are used to
manually increase or decrease fastwrite counters, respectively. They
should be used with caution: there is no per-BP tracking of fastwrite
information, so leaks and "double-unmarks" are possible. There is,
however, an assert in the vdev teardown code which will fire if the
fastwrite counters are not zero when the pool is exported or the vdev
removed. Note that as stated above, marking is also done implicitly by
metaslab_alloc().
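A minimal sketch of that selection pass (the function name is
hypothetical, and the rotor walk and per-vdev counter field are taken
on trust here; only the counter comparison is the point):
	static metaslab_group_t *
	metaslab_pick_fastwrite_group(metaslab_class_t *mc, uint64_t psize)
	{
		metaslab_group_t *mg = mc->mc_rotor;
		metaslab_group_t *best = mg;

		/*
		 * Starting at the rotor, pick the top-level vdev with
		 * the fewest pending fastwrites; ties go to the first
		 * match from the rotor.
		 */
		do {
			if (mg->mg_vd->vdev_pending_fastwrite <
			    best->mg_vd->vdev_pending_fastwrite)
				best = mg;
			mg = mg->mg_next;
		} while (mg != mc->mc_rotor);

		/* Reserve the winner until its data is written. */
		best->mg_vd->vdev_pending_fastwrite += psize;
		return (best);
	}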
ZIO also got a new FASTWRITE flag; when it is used, ZIO will pass it to
the metaslab when allocating (assuming ZIO does the allocation, which is
only true in the case of dmu_sync). This flag will also trigger an
unmark when zio_done() fires.
A side-effect of the new algorithm is that when a ZIL stops being used,
its last block can stay in the pending state (allocated but not yet
written) for a long time, polluting the fastwrite counters. To avoid
that, I've implemented a somewhat crude but working solution which
unmarks these pending blocks in zil_sync(), thus guaranteeing that
lingering fastwrites will get pruned at each sync event.
The best performance improvements are observed with pools using a large
number of top-level vdevs and heavy synchronous write workloads
(especially indirect writes and concurrent writes from multiple ZILs).
Real-life testing shows a 200% to 300% performance increase with
indirect writes and various commit sizes.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1013
2012-06-27 13:20:20 +00:00
|
|
|
ASSERT(lwb->lwb_zio == NULL);
|
|
|
|
if (lwb->lwb_fastwrite)
|
|
|
|
metaslab_fastwrite_unmark(zilog->zl_spa, &lwb->lwb_blk);
|
2011-07-26 19:41:53 +00:00
|
|
|
list_remove(&zilog->zl_lwb_list, lwb);
|
|
|
|
zio_buf_free(lwb->lwb_buf, lwb->lwb_sz);
|
|
|
|
kmem_cache_free(zil_lwb_cache, lwb);
|
|
|
|
}
|
|
|
|
mutex_exit(&zilog->zl_lock);
|
2008-11-20 20:01:55 +00:00
|
|
|
}
|
|
|
|
|
2013-09-04 12:00:57 +00:00
|
|
|
static char *suspend_tag = "zil suspending";
|
|
|
|
|
2008-11-20 20:01:55 +00:00
|
|
|
/*
|
|
|
|
* Suspend an intent log. While in suspended mode, we still honor
|
|
|
|
* synchronous semantics, but we rely on txg_wait_synced() to do it.
|
2013-09-04 12:00:57 +00:00
|
|
|
* On old version pools, we suspend the log briefly when taking a
|
|
|
|
* snapshot so that it will have an empty intent log.
|
|
|
|
*
|
|
|
|
* Long holds are not really intended to be used the way we do here --
|
|
|
|
* held for such a short time. A concurrent caller of dsl_dataset_long_held()
|
|
|
|
* could fail. Therefore we take pains to only put a long hold if it is
|
|
|
|
* actually necessary. Fortunately, it will only be necessary if the
|
|
|
|
* objset is currently mounted (or the ZVOL equivalent). In that case it
|
|
|
|
* will already have a long hold, so we are not really making things any worse.
|
|
|
|
*
|
|
|
|
* Ideally, we would locate the existing long-holder (i.e. the zfsvfs_t or
|
|
|
|
* zvol_state_t), and use their mechanism to prevent their hold from being
|
|
|
|
* dropped (e.g. VFS_HOLD()). However, that would be even more pain for
|
|
|
|
* very little gain.
|
|
|
|
*
|
|
|
|
* if cookiep == NULL, this does both the suspend & resume.
|
|
|
|
* Otherwise, it returns with the dataset "long held", and the cookie
|
|
|
|
* should be passed into zil_resume().
|
2008-11-20 20:01:55 +00:00
|
|
|
*/
|
|
|
|
int
|
2013-09-04 12:00:57 +00:00
|
|
|
zil_suspend(const char *osname, void **cookiep)
|
2008-11-20 20:01:55 +00:00
|
|
|
{
|
2013-09-04 12:00:57 +00:00
|
|
|
objset_t *os;
|
|
|
|
zilog_t *zilog;
|
|
|
|
const zil_header_t *zh;
|
|
|
|
int error;
|
|
|
|
|
|
|
|
error = dmu_objset_hold(osname, suspend_tag, &os);
|
|
|
|
if (error != 0)
|
|
|
|
return (error);
|
|
|
|
zilog = dmu_objset_zil(os);
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
mutex_enter(&zilog->zl_lock);
|
2013-09-04 12:00:57 +00:00
|
|
|
zh = zilog->zl_header;
|
|
|
|
|
2009-07-02 22:44:48 +00:00
|
|
|
if (zh->zh_flags & ZIL_REPLAY_NEEDED) { /* unplayed log */
|
2008-11-20 20:01:55 +00:00
|
|
|
mutex_exit(&zilog->zl_lock);
|
2013-09-04 12:00:57 +00:00
|
|
|
dmu_objset_rele(os, suspend_tag);
|
2013-03-08 18:41:28 +00:00
|
|
|
return (SET_ERROR(EBUSY));
|
2008-11-20 20:01:55 +00:00
|
|
|
}
|
2013-09-04 12:00:57 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't put a long hold in the cases where we can avoid it. This
|
|
|
|
* is when there is no cookie so we are doing a suspend & resume
|
|
|
|
* (i.e. called from zil_vdev_offline()), and there's nothing to do
|
|
|
|
* for the suspend because it's already suspended, or there's no ZIL.
|
|
|
|
*/
|
|
|
|
if (cookiep == NULL && !zilog->zl_suspending &&
|
|
|
|
(zilog->zl_suspend > 0 || BP_IS_HOLE(&zh->zh_log))) {
|
|
|
|
mutex_exit(&zilog->zl_lock);
|
|
|
|
dmu_objset_rele(os, suspend_tag);
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
dsl_dataset_long_hold(dmu_objset_ds(os), suspend_tag);
|
|
|
|
dsl_pool_rele(dmu_objset_pool(os), suspend_tag);
|
|
|
|
|
|
|
|
zilog->zl_suspend++;
|
|
|
|
|
|
|
|
if (zilog->zl_suspend > 1) {
|
2008-11-20 20:01:55 +00:00
|
|
|
/*
|
2013-09-04 12:00:57 +00:00
|
|
|
* Someone else is already suspending it.
|
2008-11-20 20:01:55 +00:00
|
|
|
* Just wait for them to finish.
|
|
|
|
*/
|
2013-09-04 12:00:57 +00:00
|
|
|
|
2008-11-20 20:01:55 +00:00
|
|
|
while (zilog->zl_suspending)
|
|
|
|
cv_wait(&zilog->zl_cv_suspend, &zilog->zl_lock);
|
|
|
|
mutex_exit(&zilog->zl_lock);
|
2013-09-04 12:00:57 +00:00
|
|
|
|
|
|
|
if (cookiep == NULL)
|
|
|
|
zil_resume(os);
|
|
|
|
else
|
|
|
|
*cookiep = os;
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If there is no pointer to an on-disk block, this ZIL must not
|
|
|
|
* be active (e.g. filesystem not mounted), so there's nothing
|
|
|
|
* to clean up.
|
|
|
|
*/
|
|
|
|
if (BP_IS_HOLE(&zh->zh_log)) {
|
|
|
|
ASSERT(cookiep != NULL); /* fast path already handled */
|
|
|
|
|
|
|
|
*cookiep = os;
|
|
|
|
mutex_exit(&zilog->zl_lock);
|
2008-11-20 20:01:55 +00:00
|
|
|
return (0);
|
|
|
|
}
|
2013-09-04 12:00:57 +00:00
|
|
|
|
2017-09-12 20:15:11 +00:00
|
|
|
/*
|
|
|
|
* The ZIL has work to do. Ensure that the associated encryption
|
|
|
|
* key will remain mapped while we are committing the log by
|
|
|
|
* grabbing a reference to it. If the key isn't loaded we have no
|
|
|
|
* choice but to return an error until the wrapping key is loaded.
|
|
|
|
*/
|
|
|
|
if (os->os_encrypted && spa_keystore_create_mapping(os->os_spa,
|
|
|
|
dmu_objset_ds(os), FTAG) != 0) {
|
|
|
|
zilog->zl_suspend--;
|
|
|
|
mutex_exit(&zilog->zl_lock);
|
|
|
|
dsl_dataset_long_rele(dmu_objset_ds(os), suspend_tag);
|
|
|
|
dsl_dataset_rele(dmu_objset_ds(os), suspend_tag);
|
|
|
|
return (SET_ERROR(EBUSY));
|
|
|
|
}
|
|
|
|
|
2008-11-20 20:01:55 +00:00
|
|
|
zilog->zl_suspending = B_TRUE;
|
|
|
|
mutex_exit(&zilog->zl_lock);
|
|
|
|
|
2010-08-26 21:24:34 +00:00
|
|
|
zil_commit(zilog, 0);
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
zil_destroy(zilog, B_FALSE);
|
|
|
|
|
|
|
|
mutex_enter(&zilog->zl_lock);
|
|
|
|
zilog->zl_suspending = B_FALSE;
|
|
|
|
cv_broadcast(&zilog->zl_cv_suspend);
|
|
|
|
mutex_exit(&zilog->zl_lock);
|
|
|
|
|
2017-09-12 20:15:11 +00:00
|
|
|
if (os->os_encrypted) {
|
|
|
|
/*
|
|
|
|
* Encrypted datasets need to wait for all data to be
|
|
|
|
* synced out before removing the mapping.
|
|
|
|
*
|
|
|
|
* XXX: Depending on the number of datasets with
|
|
|
|
* outstanding ZIL data on a given log device, this
|
|
|
|
* might cause spa_offline_log() to take a long time.
|
|
|
|
*/
|
|
|
|
txg_wait_synced(zilog->zl_dmu_pool, zilog->zl_destroy_txg);
|
|
|
|
VERIFY0(spa_keystore_remove_mapping(os->os_spa,
|
|
|
|
dmu_objset_id(os), FTAG));
|
|
|
|
}
|
|
|
|
|
2013-09-04 12:00:57 +00:00
|
|
|
if (cookiep == NULL)
|
|
|
|
zil_resume(os);
|
|
|
|
else
|
|
|
|
*cookiep = os;
|
2008-11-20 20:01:55 +00:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2013-09-04 12:00:57 +00:00
|
|
|
zil_resume(void *cookie)
|
2008-11-20 20:01:55 +00:00
|
|
|
{
|
2013-09-04 12:00:57 +00:00
|
|
|
objset_t *os = cookie;
|
|
|
|
zilog_t *zilog = dmu_objset_zil(os);
|
|
|
|
|
2008-11-20 20:01:55 +00:00
|
|
|
mutex_enter(&zilog->zl_lock);
|
|
|
|
ASSERT(zilog->zl_suspend != 0);
|
|
|
|
zilog->zl_suspend--;
|
|
|
|
mutex_exit(&zilog->zl_lock);
|
2013-09-04 12:00:57 +00:00
|
|
|
dsl_dataset_long_rele(dmu_objset_ds(os), suspend_tag);
|
|
|
|
dsl_dataset_rele(dmu_objset_ds(os), suspend_tag);
|
2008-11-20 20:01:55 +00:00
|
|
|
}
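For reference, the cookie protocol described in the comment above
zil_suspend() can be exercised like this (a usage sketch; "tank/fs" and
the error handling are illustrative):
	void *cookie;
	int error;

	error = zil_suspend("tank/fs", &cookie);
	if (error == 0) {
		/* The log is empty and the dataset is long held here. */
		zil_resume(cookie);
	}

	/* Or, with no cookie, suspend and resume in one call: */
	error = zil_suspend("tank/fs", NULL);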
|
|
|
|
|
|
|
|
typedef struct zil_replay_arg {
|
2017-10-27 19:46:35 +00:00
|
|
|
zil_replay_func_t **zr_replay;
|
2008-11-20 20:01:55 +00:00
|
|
|
void *zr_arg;
|
|
|
|
boolean_t zr_byteswap;
|
2010-05-28 20:45:14 +00:00
|
|
|
char *zr_lr;
|
2008-11-20 20:01:55 +00:00
|
|
|
} zil_replay_arg_t;
|
|
|
|
|
2010-05-28 20:45:14 +00:00
|
|
|
static int
|
|
|
|
zil_replay_error(zilog_t *zilog, lr_t *lr, int error)
|
|
|
|
{
|
2016-06-15 21:28:36 +00:00
|
|
|
char name[ZFS_MAX_DATASET_NAME_LEN];
|
2010-05-28 20:45:14 +00:00
|
|
|
|
|
|
|
zilog->zl_replaying_seq--; /* didn't actually replay this one */
|
|
|
|
|
|
|
|
dmu_objset_name(zilog->zl_os, name);
|
|
|
|
|
|
|
|
cmn_err(CE_WARN, "ZFS replay transaction error %d, "
|
|
|
|
"dataset %s, seq 0x%llx, txtype %llu %s\n", error, name,
|
|
|
|
(u_longlong_t)lr->lrc_seq,
|
|
|
|
(u_longlong_t)(lr->lrc_txtype & ~TX_CI),
|
|
|
|
(lr->lrc_txtype & TX_CI) ? "CI" : "");
|
|
|
|
|
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
2008-11-20 20:01:55 +00:00
|
|
|
zil_replay_log_record(zilog_t *zilog, lr_t *lr, void *zra, uint64_t claim_txg)
|
|
|
|
{
|
|
|
|
zil_replay_arg_t *zr = zra;
|
|
|
|
const zil_header_t *zh = zilog->zl_header;
|
|
|
|
uint64_t reclen = lr->lrc_reclen;
|
|
|
|
uint64_t txtype = lr->lrc_txtype;
|
2010-05-28 20:45:14 +00:00
|
|
|
int error = 0;
|
2008-11-20 20:01:55 +00:00
|
|
|
|
2010-05-28 20:45:14 +00:00
|
|
|
zilog->zl_replaying_seq = lr->lrc_seq;
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
if (lr->lrc_seq <= zh->zh_replay_seq) /* already replayed */
|
2010-05-28 20:45:14 +00:00
|
|
|
return (0);
|
|
|
|
|
|
|
|
if (lr->lrc_txg < claim_txg) /* already committed */
|
|
|
|
return (0);
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
/* Strip case-insensitive bit, still present in log record */
|
|
|
|
txtype &= ~TX_CI;
|
|
|
|
|
2010-05-28 20:45:14 +00:00
|
|
|
if (txtype == 0 || txtype >= TX_MAX_TYPE)
|
|
|
|
return (zil_replay_error(zilog, lr, EINVAL));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If this record type can be logged out of order, the object
|
|
|
|
* (lr_foid) may no longer exist. That's legitimate, not an error.
|
|
|
|
*/
|
|
|
|
if (TX_OOO(txtype)) {
|
|
|
|
error = dmu_object_info(zilog->zl_os,
|
Implement large_dnode pool feature
Justification
-------------
This feature adds support for variable length dnodes. Our motivation is
to eliminate the overhead associated with using spill blocks. Spill
blocks are used to store system attribute data (i.e. file metadata) that
does not fit in the dnode's bonus buffer. By allowing a larger bonus
buffer area the use of a spill block can be avoided. Spill blocks
potentially incur an additional read I/O for every dnode in a dnode
block. As a worst case example, reading 32 dnodes from a 16k dnode block
and all of the spill blocks could issue 33 separate reads. Now suppose
those dnodes have size 1024 and therefore don't need spill blocks. Then
the worst case number of blocks read is reduced from 33 to two, one
per dnode block. In practice spill blocks may tend to be co-located on
disk with the dnode blocks so the reduction in I/O would not be this
drastic. In a badly fragmented pool, however, the improvement could be
significant.
ZFS-on-Linux systems that make heavy use of extended attributes would
benefit from this feature. In particular, ZFS-on-Linux supports the
xattr=sa dataset property which allows file extended attribute data
to be stored in the dnode bonus buffer as an alternative to the
traditional directory-based format. Workloads such as SELinux and the
Lustre distributed filesystem often store enough xattr data to force
spill blocks when xattr=sa is in effect. Large dnodes may therefore
provide a performance benefit to such systems.
Other use cases that may benefit from this feature include files with
large ACLs and symbolic links with long target names. Furthermore,
this feature may be desirable on other platforms in case future
applications or features are developed that could make use of a
larger bonus buffer area.
Implementation
--------------
The size of a dnode may be a multiple of 512 bytes up to the size of
a dnode block (currently 16384 bytes). A dn_extra_slots field was
added to the current on-disk dnode_phys_t structure to describe the
size of the physical dnode on disk. The 8 bits for this field were
taken from the zero filled dn_pad2 field. The field represents how
many "extra" dnode_phys_t slots a dnode consumes in its dnode block.
This convention results in a value of 0 for 512 byte dnodes which
preserves on-disk format compatibility with older software.
Similarly, the in-memory dnode_t structure has a new dn_num_slots field
to represent the total number of dnode_phys_t slots consumed on disk.
Thus dn->dn_num_slots is 1 greater than the corresponding
dnp->dn_extra_slots. This difference in convention was adopted
because, unlike on-disk structures, backward compatibility is not a
concern for in-memory objects, so we used a more natural way to
represent size for a dnode_t.
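To make the two conventions concrete (dnp and dn stand for the on-disk
and in-memory forms of the same dnode; DNODE_SHIFT is the 512-byte
slot shift):
	dn->dn_num_slots = dnp->dn_extra_slots + 1;
	uint64_t dnode_size = (uint64_t)dn->dn_num_slots << DNODE_SHIFT;
	/* legacy dnode: dn_extra_slots == 0, dn_num_slots == 1, 512 bytes */
	/* 1k dnode:     dn_extra_slots == 1, dn_num_slots == 2 */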
The default size for newly created dnodes is determined by the value of
a new "dnodesize" dataset property. By default the property is set to
"legacy" which is compatible with older software. Setting the property
to "auto" will allow the filesystem to choose the most suitable dnode
size. Currently this just sets the default dnode size to 1k, but future
code improvements could dynamically choose a size based on observed
workload patterns. Dnodes of varying sizes can coexist within the same
dataset and even within the same dnode block. For example, to enable
automatically-sized dnodes, run
# zfs set dnodesize=auto tank/fish
The user can also specify literal values for the dnodesize property.
These are currently limited to powers of two from 1k to 16k. The
power-of-2 limitation is only for simplicity of the user interface.
Internally the implementation can handle any multiple of 512 up to 16k,
and consumers of the DMU API can specify any legal dnode value.
The size of a new dnode is determined at object allocation time and
stored as a new field in the znode in-memory structure. New DMU
interfaces are added to allow the consumer to specify the dnode size
that a newly allocated object should use. Existing interfaces are
unchanged to avoid having to update every call site and to preserve
compatibility with external consumers such as Lustre. The new
interfaces names are given below. The versions of these functions that
don't take a dnodesize parameter now just call the _dnsize() versions
with a dnodesize of 0, which means use the legacy dnode size.
New DMU interfaces:
dmu_object_alloc_dnsize()
dmu_object_claim_dnsize()
dmu_object_reclaim_dnsize()
New ZAP interfaces:
zap_create_dnsize()
zap_create_norm_dnsize()
zap_create_flags_dnsize()
zap_create_claim_norm_dnsize()
zap_create_link_dnsize()
The constant DN_MAX_BONUSLEN is renamed to DN_OLD_MAX_BONUSLEN. The
spa_maxdnodesize() function should be used to determine the maximum
bonus length for a pool.
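As a usage sketch of the new allocation path (argument order as
described above; the DN_BONUS_SIZE() bonus-sizing helper and the choice
of a 1k dnode are assumptions for illustration):
	/* Allocate an object backed by a 1k dnode. */
	int dnodesize = 1024;
	uint64_t obj = dmu_object_alloc_dnsize(os, DMU_OT_PLAIN_FILE_CONTENTS,
	    0, DMU_OT_SA, DN_BONUS_SIZE(dnodesize), dnodesize, tx);

	/* Passing dnodesize == 0 falls back to the legacy 512-byte size. */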
These are a few noteworthy changes to key functions:
* The prototype for dnode_hold_impl() now takes a "slots" parameter.
When the DNODE_MUST_BE_FREE flag is set, this parameter is used to
ensure the hole at the specified object offset is large enough to
hold the dnode being created. The slots parameter is also used
to ensure a dnode does not span multiple dnode blocks. In both of
these cases, if a failure occurs, ENOSPC is returned. Keep in mind,
these failure cases are only possible when using DNODE_MUST_BE_FREE.
If the DNODE_MUST_BE_ALLOCATED flag is set, "slots" must be 0.
dnode_hold_impl() will check if the requested dnode is already
consumed as an extra dnode slot by a large dnode, in which case
it returns ENOENT.
* The function dmu_object_alloc() advances to the next dnode block
if dnode_hold_impl() returns an error for a requested object.
This is because the beginning of the next dnode block is the only
location it can safely assume to either be a hole or a valid
starting point for a dnode.
* dnode_next_offset_level() and other functions that iterate
through dnode blocks may no longer use a simple array indexing
scheme. These now use the current dnode's dn_num_slots field to
advance to the next dnode in the block. This is to ensure we
properly skip the current dnode's bonus area and don't interpret it
as a valid dnode.
zdb
---
The zdb command was updated to display a dnode's size under the
"dnsize" column when the object is dumped.
For ZIL create log records, zdb will now display the slot count for
the object.
ztest
-----
Ztest chooses a random dnodesize for every newly created object. The
random distribution is more heavily weighted toward small dnodes to
better simulate real-world datasets.
Unused bonus buffer space is filled with non-zero values computed from
the object number, dataset id, offset, and generation number. This
helps ensure that the dnode traversal code properly skips the interior
regions of large dnodes, and that these interior regions are not
overwritten by data belonging to other dnodes. A new test visits each
object in a dataset. It verifies that the actual dnode size matches what
was stored in the ztest block tag when it was created. It also verifies
that the unused bonus buffer space is filled with the expected data
patterns.
ZFS Test Suite
--------------
Added six new large dnode-specific tests, and integrated the dnodesize
property into existing tests for zfs allow and send/recv.
Send/Receive
------------
ZFS send streams for datasets containing large dnodes cannot be received
on pools that don't support the large_dnode feature. A send stream with
large dnodes sets a DMU_BACKUP_FEATURE_LARGE_DNODE flag which will be
unrecognized by an incompatible receiving pool so that the zfs receive
will fail gracefully.
While not implemented here, it may be possible to generate a
backward-compatible send stream from a dataset containing large
dnodes. The implementation may be tricky, however, because the send
object record for a large dnode would need to be resized to a 512
byte dnode, possibly kicking in a spill block in the process. This
means we would need to construct a new SA layout and possibly
register it in the SA layout object. The SA layout is normally just
sent as an ordinary object record. But if we are constructing new
layouts while generating the send stream we'd have to build the SA
layout object dynamically and send it at the end of the stream.
For sending and receiving between pools that do support large dnodes,
the drr_object send record type is extended with a new field to store
the dnode slot count. This field was repurposed from unused padding
in the structure.
ZIL Replay
----------
The dnode slot count is stored in the uppermost 8 bits of the lr_foid
field. The bits were unused as the object id is currently capped at
48 bits.
Resizing Dnodes
---------------
It should be possible to resize a dnode when it is dirtied if the
current dnodesize dataset property differs from the dnode's size, but
this functionality is not currently implemented. Clearly a dnode can
only grow if there are sufficient contiguous unused slots in the
dnode block, but it should always be possible to shrink a dnode.
Growing dnodes may be useful to reduce fragmentation in a pool with
many spill blocks in use. Shrinking dnodes may be useful to allow
sending a dataset to a pool that doesn't support the large_dnode
feature.
Feature Reference Counting
--------------------------
The reference count for the large_dnode pool feature tracks the
number of datasets that have ever contained a dnode of size larger
than 512 bytes. The first time a large dnode is created in a dataset
the dataset is converted to an extensible dataset. This is a one-way
operation and the only way to decrement the feature count is to
destroy the dataset, even if the dataset no longer contains any large
dnodes. The complexity of reference counting on a per-dnode basis was
too high, so we chose to track it on a per-dataset basis similarly to
the large_block feature.
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3542
2016-03-17 01:25:34 +00:00
|
|
|
LR_FOID_GET_OBJ(((lr_ooo_t *)lr)->lr_foid), NULL);
|
2010-05-28 20:45:14 +00:00
|
|
|
if (error == ENOENT || error == EEXIST)
|
|
|
|
return (0);
|
2009-01-15 21:59:39 +00:00
|
|
|
}
|
|
|
|
|
2008-11-20 20:01:55 +00:00
|
|
|
/*
|
|
|
|
* Make a copy of the data so we can revise and extend it.
|
|
|
|
*/
|
2010-05-28 20:45:14 +00:00
|
|
|
bcopy(lr, zr->zr_lr, reclen);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If this is a TX_WRITE with a blkptr, suck in the data.
|
|
|
|
*/
|
|
|
|
if (txtype == TX_WRITE && reclen == sizeof (lr_write_t)) {
|
|
|
|
error = zil_read_log_data(zilog, (lr_write_t *)lr,
|
|
|
|
zr->zr_lr + reclen);
|
2013-09-04 12:00:57 +00:00
|
|
|
if (error != 0)
|
2010-05-28 20:45:14 +00:00
|
|
|
return (zil_replay_error(zilog, lr, error));
|
|
|
|
}
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The log block containing this lr may have been byteswapped
|
|
|
|
* so that we can easily examine common fields like lrc_txtype.
|
2010-05-28 20:45:14 +00:00
|
|
|
* However, the log is a mix of different record types, and only the
|
2008-11-20 20:01:55 +00:00
|
|
|
* replay vectors know how to byteswap their records. Therefore, if
|
|
|
|
* the lr was byteswapped, undo it before invoking the replay vector.
|
|
|
|
*/
|
|
|
|
if (zr->zr_byteswap)
|
2010-05-28 20:45:14 +00:00
|
|
|
byteswap_uint64_array(zr->zr_lr, reclen);
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We must now do two things atomically: replay this log record,
|
2009-01-15 21:59:39 +00:00
|
|
|
* and update the log header sequence number to reflect the fact that
|
|
|
|
* we did so. At the end of each replay function the sequence number
|
|
|
|
* is updated if we are in replay mode.
|
2008-11-20 20:01:55 +00:00
|
|
|
*/
|
2010-05-28 20:45:14 +00:00
|
|
|
error = zr->zr_replay[txtype](zr->zr_arg, zr->zr_lr, zr->zr_byteswap);
|
2013-09-04 12:00:57 +00:00
|
|
|
if (error != 0) {
|
2008-11-20 20:01:55 +00:00
|
|
|
/*
|
|
|
|
* The DMU's dnode layer doesn't see removes until the txg
|
|
|
|
* commits, so a subsequent claim can spuriously fail with
|
2009-01-15 21:59:39 +00:00
|
|
|
* EEXIST. So if we receive any error we try syncing out
|
2010-05-28 20:45:14 +00:00
|
|
|
* any removes then retry the transaction. Note that we
|
|
|
|
* specify B_FALSE for byteswap now, so we don't do it twice.
|
2008-11-20 20:01:55 +00:00
|
|
|
*/
|
2010-05-28 20:45:14 +00:00
|
|
|
txg_wait_synced(spa_get_dsl(zilog->zl_spa), 0);
|
|
|
|
error = zr->zr_replay[txtype](zr->zr_arg, zr->zr_lr, B_FALSE);
|
2013-09-04 12:00:57 +00:00
|
|
|
if (error != 0)
|
2010-05-28 20:45:14 +00:00
|
|
|
return (zil_replay_error(zilog, lr, error));
|
2008-11-20 20:01:55 +00:00
|
|
|
}
|
2010-05-28 20:45:14 +00:00
|
|
|
return (0);
|
2008-11-20 20:01:55 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* ARGSUSED */
|
2010-05-28 20:45:14 +00:00
|
|
|
static int
|
2008-11-20 20:01:55 +00:00
|
|
|
zil_incr_blks(zilog_t *zilog, blkptr_t *bp, void *arg, uint64_t claim_txg)
|
|
|
|
{
|
|
|
|
zilog->zl_replay_blks++;
|
2010-05-28 20:45:14 +00:00
|
|
|
|
|
|
|
return (0);
|
2008-11-20 20:01:55 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If this dataset has a non-empty intent log, replay it and destroy it.
|
|
|
|
*/
|
|
|
|
void
|
2017-10-27 19:46:35 +00:00
|
|
|
zil_replay(objset_t *os, void *arg, zil_replay_func_t *replay_func[TX_MAX_TYPE])
|
2008-11-20 20:01:55 +00:00
|
|
|
{
|
|
|
|
zilog_t *zilog = dmu_objset_zil(os);
|
|
|
|
const zil_header_t *zh = zilog->zl_header;
|
|
|
|
zil_replay_arg_t zr;
|
|
|
|
|
2009-07-02 22:44:48 +00:00
|
|
|
if ((zh->zh_flags & ZIL_REPLAY_NEEDED) == 0) {
|
2008-11-20 20:01:55 +00:00
|
|
|
zil_destroy(zilog, B_TRUE);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
zr.zr_replay = replay_func;
|
|
|
|
zr.zr_arg = arg;
|
|
|
|
zr.zr_byteswap = BP_SHOULD_BYTESWAP(&zh->zh_log);
|
2014-11-21 00:09:39 +00:00
|
|
|
zr.zr_lr = vmem_alloc(2 * SPA_MAXBLOCKSIZE, KM_SLEEP);
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Wait for in-progress removes to sync before starting replay.
|
|
|
|
*/
|
|
|
|
txg_wait_synced(zilog->zl_dmu_pool, 0);
|
|
|
|
|
2009-01-15 21:59:39 +00:00
|
|
|
zilog->zl_replay = B_TRUE;
|
2010-05-28 20:45:14 +00:00
|
|
|
zilog->zl_replay_time = ddi_get_lbolt();
|
2008-11-20 20:01:55 +00:00
|
|
|
ASSERT(zilog->zl_replay_blks == 0);
|
|
|
|
(void) zil_parse(zilog, zil_incr_blks, zil_replay_log_record, &zr,
|
Native Encryption for ZFS on Linux
This change incorporates three major pieces:
The first change is a keystore that manages wrapping
and encryption keys for encrypted datasets. These
commands mostly involve manipulating the new
DSL Crypto Key ZAP Objects that live in the MOS. Each
encrypted dataset has its own DSL Crypto Key that is
protected with a user's key. This level of indirection
allows users to change their keys without re-encrypting
their entire datasets. The change implements the new
subcommands "zfs load-key", "zfs unload-key" and
"zfs change-key" which allow the user to manage their
encryption keys and settings. In addition, several new
flags and properties have been added to allow dataset
creation and to make mounting and unmounting more
convenient.
The second piece of this patch provides the ability to
encrypt, decrypt, and authenticate protected datasets.
Each object set maintains a Merkle tree of Message
Authentication Codes that protect the lower layers,
similarly to how checksums are maintained. This part
impacts the zio layer, which handles the actual
encryption and generation of MACs, as well as the ARC
and DMU, which need to be able to handle encrypted
buffers and protected data.
The last addition is the ability to do raw, encrypted
sends and receives. The idea here is to send raw
encrypted and compressed data and receive it exactly
as is on a backup system. This means that the dataset
on the receiving system is protected using the same
user key that is in use on the sending side. By doing
so, datasets can be efficiently backed up to an
untrusted system without fear of data being
compromised.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #494
Closes #5769
2017-08-14 17:36:48 +00:00
|
|
|
zh->zh_claim_txg, B_TRUE);
|
2010-08-26 18:46:09 +00:00
|
|
|
vmem_free(zr.zr_lr, 2 * SPA_MAXBLOCKSIZE);
|
2008-11-20 20:01:55 +00:00
|
|
|
|
|
|
|
zil_destroy(zilog, B_FALSE);
|
|
|
|
txg_wait_synced(zilog->zl_dmu_pool, zilog->zl_destroy_txg);
|
2009-01-15 21:59:39 +00:00
|
|
|
zilog->zl_replay = B_FALSE;
|
2008-11-20 20:01:55 +00:00
|
|
|
}
|
|
|
|
|
2010-05-28 20:45:14 +00:00
|
|
|
boolean_t
|
|
|
|
zil_replaying(zilog_t *zilog, dmu_tx_t *tx)
|
2008-11-20 20:01:55 +00:00
|
|
|
{
|
2010-05-28 20:45:14 +00:00
|
|
|
if (zilog->zl_sync == ZFS_SYNC_DISABLED)
|
|
|
|
return (B_TRUE);
|
2008-11-20 20:01:55 +00:00
|
|
|
|
2010-05-28 20:45:14 +00:00
|
|
|
if (zilog->zl_replay) {
|
|
|
|
dsl_dataset_dirty(dmu_objset_ds(zilog->zl_os), tx);
|
|
|
|
zilog->zl_replayed_seq[dmu_tx_get_txg(tx) & TXG_MASK] =
|
|
|
|
zilog->zl_replaying_seq;
|
|
|
|
return (B_TRUE);
|
2008-11-20 20:01:55 +00:00
|
|
|
}
|
|
|
|
|
2010-05-28 20:45:14 +00:00
|
|
|
return (B_FALSE);
|
2008-11-20 20:01:55 +00:00
|
|
|
}
|
2009-07-02 22:44:48 +00:00
|
|
|
|
|
|
|
/* ARGSUSED */
|
|
|
|
int
|
2010-05-28 20:45:14 +00:00
|
|
|
zil_vdev_offline(const char *osname, void *arg)
|
2009-07-02 22:44:48 +00:00
|
|
|
{
|
|
|
|
int error;
|
|
|
|
|
2013-09-04 12:00:57 +00:00
|
|
|
error = zil_suspend(osname, NULL);
|
|
|
|
if (error != 0)
|
2013-03-08 18:41:28 +00:00
|
|
|
return (SET_ERROR(EEXIST));
|
2013-09-04 12:00:57 +00:00
|
|
|
return (0);
|
2009-07-02 22:44:48 +00:00
|
|
|
}
|
2011-05-03 22:09:28 +00:00
|
|
|
|
|
|
|
#if defined(_KERNEL) && defined(HAVE_SPL)
|
2014-11-13 18:09:05 +00:00
|
|
|
EXPORT_SYMBOL(zil_alloc);
|
|
|
|
EXPORT_SYMBOL(zil_free);
|
|
|
|
EXPORT_SYMBOL(zil_open);
|
|
|
|
EXPORT_SYMBOL(zil_close);
|
|
|
|
EXPORT_SYMBOL(zil_replay);
|
|
|
|
EXPORT_SYMBOL(zil_replaying);
|
|
|
|
EXPORT_SYMBOL(zil_destroy);
|
|
|
|
EXPORT_SYMBOL(zil_destroy_sync);
|
|
|
|
EXPORT_SYMBOL(zil_itx_create);
|
|
|
|
EXPORT_SYMBOL(zil_itx_destroy);
|
|
|
|
EXPORT_SYMBOL(zil_itx_assign);
|
|
|
|
EXPORT_SYMBOL(zil_commit);
|
|
|
|
EXPORT_SYMBOL(zil_vdev_offline);
|
|
|
|
EXPORT_SYMBOL(zil_claim);
|
|
|
|
EXPORT_SYMBOL(zil_check_log_chain);
|
|
|
|
EXPORT_SYMBOL(zil_sync);
|
|
|
|
EXPORT_SYMBOL(zil_clean);
|
|
|
|
EXPORT_SYMBOL(zil_suspend);
|
|
|
|
EXPORT_SYMBOL(zil_resume);
|
|
|
|
EXPORT_SYMBOL(zil_add_block);
|
|
|
|
EXPORT_SYMBOL(zil_bp_tree_add);
|
|
|
|
EXPORT_SYMBOL(zil_set_sync);
|
|
|
|
EXPORT_SYMBOL(zil_set_logbias);
|
|
|
|
|
OpenZFS 7578 - Fix/improve some aspects of ZIL writing
- After some ZIL changes 6 years ago, zil_slog_limit got partially broken
because zl_itx_list_sz was not updated when async itx'es were upgraded to
sync. Because of other changes made around that time, zl_itx_list_sz is not
really required to implement the functionality, so this patch removes some
unneeded broken code and variables.
- The original idea of zil_slog_limit was to reduce the chance of SLOG abuse
by a single heavy logger, which increased latency for other (more
latency-critical) loggers, by pushing the heavy log out into the main pool
instead of the SLOG. Besides a huge latency increase for heavy writers, this
implementation caused a double write of all data, since the log records were
explicitly prepared for the SLOG. Since we now have an I/O scheduler, I've
found it can be much more efficient to reduce the priority of heavy-logger
SLOG writes from ZIO_PRIORITY_SYNC_WRITE to ZIO_PRIORITY_ASYNC_WRITE, while
still leaving them on the SLOG.
- The existing ZIL implementation had a problem with space efficiency when
it had to write large chunks of data into log blocks of limited size. In
some cases efficiency dropped to almost as low as 50%. For a ZIL stored on
spinning rust, that also cut the log write speed in half, since the head had
to uselessly fly over allocated but unwritten areas. This change improves
the situation by offloading the problematic operations from z*_log_write()
to zil_lwb_commit(), which knows the real state of log block allocation and
can split large requests into pieces much more efficiently. As a side
effect, it also removes one of the two data copy operations done by the ZIL
code in the WR_COPIED case.
- While there, untangle and unify the code of the z*_log_write() functions.
zfs_log_write(), like zvol_log_write(), can now handle writes crossing a
block boundary, which may also improve efficiency if the ZPL is made to do
that.
Sponsored by: iXsystems, Inc.
Authored by: Alexander Motin <mav@FreeBSD.org>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Steven Hartland <steven.hartland@multiplay.co.uk>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Richard Yao <ryao@gentoo.org>
Ported-by: Giuseppe Di Natale <dinatale2@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/7578
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/aeb13ac
Closes #6191
2017-06-09 16:15:37 +00:00
|
|
|
/* BEGIN CSTYLED */
|
2011-05-03 22:09:28 +00:00
|
|
|
module_param(zil_replay_disable, int, 0644);
|
|
|
|
MODULE_PARM_DESC(zil_replay_disable, "Disable intent logging replay");
|
|
|
|
|
|
|
|
module_param(zfs_nocacheflush, int, 0644);
|
|
|
|
MODULE_PARM_DESC(zfs_nocacheflush, "Disable cache flushes");
|
2012-06-12 09:40:36 +00:00
|
|
|
|
OpenZFS 7578 - Fix/improve some aspects of ZIL writing
- After some ZIL changes 6 years ago, zil_slog_limit got partially broken
because zl_itx_list_sz was not updated when async itx'es were upgraded to
sync. Because of other changes made around that time, zl_itx_list_sz is not
really required to implement the functionality, so this patch removes some
unneeded broken code and variables.
- The original idea of zil_slog_limit was to reduce the chance of SLOG abuse
by a single heavy logger, which increased latency for other (more
latency-critical) loggers, by pushing the heavy log out into the main pool
instead of the SLOG. Besides a huge latency increase for heavy writers, this
implementation caused a double write of all data, since the log records were
explicitly prepared for the SLOG. Since we now have an I/O scheduler, I've
found it can be much more efficient to reduce the priority of heavy-logger
SLOG writes from ZIO_PRIORITY_SYNC_WRITE to ZIO_PRIORITY_ASYNC_WRITE, while
still leaving them on the SLOG.
- The existing ZIL implementation had a problem with space efficiency when
it had to write large chunks of data into log blocks of limited size. In
some cases efficiency dropped to almost as low as 50%. For a ZIL stored on
spinning rust, that also cut the log write speed in half, since the head had
to uselessly fly over allocated but unwritten areas. This change improves
the situation by offloading the problematic operations from z*_log_write()
to zil_lwb_commit(), which knows the real state of log block allocation and
can split large requests into pieces much more efficiently. As a side
effect, it also removes one of the two data copy operations done by the ZIL
code in the WR_COPIED case.
- While there, untangle and unify the code of the z*_log_write() functions.
zfs_log_write(), like zvol_log_write(), can now handle writes crossing a
block boundary, which may also improve efficiency if the ZPL is made to do
that.
Sponsored by: iXsystems, Inc.
Authored by: Alexander Motin <mav@FreeBSD.org>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Andriy Gapon <avg@FreeBSD.org>
Reviewed by: Steven Hartland <steven.hartland@multiplay.co.uk>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Richard Yao <ryao@gentoo.org>
Ported-by: Giuseppe Di Natale <dinatale2@llnl.gov>
OpenZFS-issue: https://www.illumos.org/issues/7578
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/aeb13ac
Closes #6191
2017-06-09 16:15:37 +00:00
|
|
|
module_param(zil_slog_bulk, ulong, 0644);
|
|
|
|
MODULE_PARM_DESC(zil_slog_bulk, "Limit in bytes slog sync writes per commit");
|
|
|
|
/* END CSTYLED */
|
2011-05-03 22:09:28 +00:00
|
|
|
#endif
|