freebsd-nq/module/zfs/vdev_cache.c

/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */
/*
 * Copyright (c) 2013 by Delphix. All rights reserved.
 */

#include <sys/zfs_context.h>
#include <sys/spa.h>
#include <sys/vdev_impl.h>
#include <sys/zio.h>
#include <sys/kstat.h>

/*
 * Virtual device read-ahead caching.
 *
 * This file implements a simple LRU read-ahead cache. When the DMU reads
 * a given block, it will often want other, nearby blocks soon thereafter.
 * We take advantage of this by reading a larger disk region and caching
 * the result. In the best case, this can turn 128 back-to-back 512-byte
 * reads into a single 64k read followed by 127 cache hits; this reduces
 * latency dramatically. In the worst case, it can turn an isolated 512-byte
 * read into a 64k read, which doesn't affect latency all that much but is
 * terribly wasteful of bandwidth. A more intelligent version of the cache
 * could keep track of access patterns and not do read-ahead unless it sees
 * at least two temporally close I/Os to the same region. Currently, only
 * metadata I/O is inflated. A further enhancement could take advantage of
 * more semantic information about the I/O. And it could use something
 * faster than an AVL tree; that was chosen solely for convenience.
 *
 * There are five cache operations: allocate, fill, read, write, evict.
 *
 * (1) Allocate. This reserves a cache entry for the specified region.
 *     We separate the allocate and fill operations so that multiple threads
 *     don't generate I/O for the same cache miss.
 *
 * (2) Fill. When the I/O for a cache miss completes, the fill routine
 *     places the data in the previously allocated cache entry.
 *
 * (3) Read. Read data from the cache.
 *
 * (4) Write. Update cache contents after write completion.
 *
 * (5) Evict. When allocating a new entry, we evict the oldest (LRU) entry
 *     if the total cache size exceeds zfs_vdev_cache_size.
 */
/*
 * These tunables are for performance analysis.
 */
/*
 * All i/os smaller than zfs_vdev_cache_max will be turned into
 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
 * vdev's vdev_cache.
 *
 * TODO: Note that with the current ZFS code, it turns out that the
 * vdev cache is not helpful, and in some cases actually harmful. It
 * is better if we disable this. Once some time has passed, we should
 * actually remove this to simplify the code. For now we just disable
 * it by setting the zfs_vdev_cache_size to zero. Note that Solaris 11
 * has made these same changes.
 */
int zfs_vdev_cache_max = 1<<14;	/* 16KB */
int zfs_vdev_cache_size = 0;
int zfs_vdev_cache_bshift = 16;

#define	VCBS (1 << zfs_vdev_cache_bshift)	/* 64KB */

kstat_t	*vdc_ksp = NULL;

typedef struct vdc_stats {
	kstat_named_t	vdc_stat_delegations;
	kstat_named_t	vdc_stat_hits;
	kstat_named_t	vdc_stat_misses;
} vdc_stats_t;

static vdc_stats_t vdc_stats = {
	{ "delegations",	KSTAT_DATA_UINT64 },
	{ "hits",		KSTAT_DATA_UINT64 },
	{ "misses",		KSTAT_DATA_UINT64 }
};

#define	VDCSTAT_BUMP(stat)	atomic_inc_64(&vdc_stats.stat.value.ui64);
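
/*
 * Order cache entries by their VCBS-aligned device offset.
 */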
static inline int
vdev_cache_offset_compare(const void *a1, const void *a2)
{
	const vdev_cache_entry_t *ve1 = (const vdev_cache_entry_t *)a1;
	const vdev_cache_entry_t *ve2 = (const vdev_cache_entry_t *)a2;

	return (AVL_CMP(ve1->ve_offset, ve2->ve_offset));
}
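
/*
 * Order LRU-tree entries oldest first by last-used time, breaking
 * ties by offset so that keys remain unique.
 */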
static int
vdev_cache_lastused_compare(const void *a1, const void *a2)
{
	const vdev_cache_entry_t *ve1 = (const vdev_cache_entry_t *)a1;
	const vdev_cache_entry_t *ve2 = (const vdev_cache_entry_t *)a2;

	int cmp = AVL_CMP(ve1->ve_lastused, ve2->ve_lastused);
	if (likely(cmp))
		return (cmp);

	/*
	 * Among equally old entries, sort by offset to ensure uniqueness.
	 */
	return (vdev_cache_offset_compare(a1, a2));
}

/*
 * Evict the specified entry from the cache.
 */
static void
vdev_cache_evict(vdev_cache_t *vc, vdev_cache_entry_t *ve)
{
	ASSERT(MUTEX_HELD(&vc->vc_lock));
	ASSERT(ve->ve_fill_io == NULL);
	ASSERT(ve->ve_data != NULL);

	avl_remove(&vc->vc_lastused_tree, ve);
	avl_remove(&vc->vc_offset_tree, ve);
	zio_buf_free(ve->ve_data, VCBS);
	kmem_free(ve, sizeof (vdev_cache_entry_t));
}

/*
 * Allocate an entry in the cache. At this point we don't have the data,
 * we're just creating a placeholder so that multiple threads don't all
 * go off and read the same blocks.
 */
static vdev_cache_entry_t *
vdev_cache_allocate(zio_t *zio)
{
	vdev_cache_t *vc = &zio->io_vd->vdev_cache;
	uint64_t offset = P2ALIGN(zio->io_offset, VCBS);
	vdev_cache_entry_t *ve;

	ASSERT(MUTEX_HELD(&vc->vc_lock));

	if (zfs_vdev_cache_size == 0)
		return (NULL);

	/*
	 * If adding a new entry would exceed the cache size,
	 * evict the oldest entry (LRU).
	 */
	if ((avl_numnodes(&vc->vc_lastused_tree) << zfs_vdev_cache_bshift) >
	    zfs_vdev_cache_size) {
		ve = avl_first(&vc->vc_lastused_tree);
		if (ve->ve_fill_io != NULL)
			return (NULL);
		ASSERT(ve->ve_hits != 0);
		vdev_cache_evict(vc, ve);
	}

	ve = kmem_zalloc(sizeof (vdev_cache_entry_t), KM_SLEEP);
	ve->ve_offset = offset;
	ve->ve_lastused = ddi_get_lbolt();
	ve->ve_data = zio_buf_alloc(VCBS);

	avl_add(&vc->vc_offset_tree, ve);
	avl_add(&vc->vc_lastused_tree, ve);

	return (ve);
}
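
/*
 * Satisfy a read from an already-filled cache entry and refresh the
 * entry's position in the LRU tree.
 */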
static void
vdev_cache_hit(vdev_cache_t *vc, vdev_cache_entry_t *ve, zio_t *zio)
{
	uint64_t cache_phase = P2PHASE(zio->io_offset, VCBS);

	ASSERT(MUTEX_HELD(&vc->vc_lock));
	ASSERT(ve->ve_fill_io == NULL);

	if (ve->ve_lastused != ddi_get_lbolt()) {
		avl_remove(&vc->vc_lastused_tree, ve);
		ve->ve_lastused = ddi_get_lbolt();
		avl_add(&vc->vc_lastused_tree, ve);
	}

	ve->ve_hits++;
	bcopy(ve->ve_data + cache_phase, zio->io_data, zio->io_size);
}

/*
 * Fill a previously allocated cache entry with data.
 */
static void
vdev_cache_fill(zio_t *fio)
{
	vdev_t *vd = fio->io_vd;
	vdev_cache_t *vc = &vd->vdev_cache;
	vdev_cache_entry_t *ve = fio->io_private;
	zio_t *pio;

	ASSERT(fio->io_size == VCBS);

	/*
	 * Add data to the cache.
	 */
	mutex_enter(&vc->vc_lock);

	ASSERT(ve->ve_fill_io == fio);
	ASSERT(ve->ve_offset == fio->io_offset);
	ASSERT(ve->ve_data == fio->io_data);

	ve->ve_fill_io = NULL;

	/*
	 * Even if this cache line was invalidated by a missed write update,
	 * any reads that were queued up before the missed update are still
	 * valid, so we can satisfy them from this line before we evict it.
	 */
	while ((pio = zio_walk_parents(fio)) != NULL)
		vdev_cache_hit(vc, ve, pio);

	if (fio->io_error || ve->ve_missed_update)
		vdev_cache_evict(vc, ve);

	mutex_exit(&vc->vc_lock);
}

/*
 * Read data from the cache. Returns B_TRUE on a cache hit, B_FALSE on miss.
 */
boolean_t
vdev_cache_read(zio_t *zio)
{
	vdev_cache_t *vc = &zio->io_vd->vdev_cache;
	vdev_cache_entry_t *ve, *ve_search;
	uint64_t cache_offset = P2ALIGN(zio->io_offset, VCBS);
	zio_t *fio;
	ASSERTV(uint64_t cache_phase = P2PHASE(zio->io_offset, VCBS));

	ASSERT(zio->io_type == ZIO_TYPE_READ);

	if (zio->io_flags & ZIO_FLAG_DONT_CACHE)
		return (B_FALSE);

	if (zio->io_size > zfs_vdev_cache_max)
		return (B_FALSE);

	/*
	 * If the I/O straddles two or more cache blocks, don't cache it.
	 */
	if (P2BOUNDARY(zio->io_offset, zio->io_size, VCBS))
		return (B_FALSE);

	ASSERT(cache_phase + zio->io_size <= VCBS);

	mutex_enter(&vc->vc_lock);

	ve_search = kmem_alloc(sizeof (vdev_cache_entry_t), KM_SLEEP);
	ve_search->ve_offset = cache_offset;
	ve = avl_find(&vc->vc_offset_tree, ve_search, NULL);
	kmem_free(ve_search, sizeof (vdev_cache_entry_t));

	if (ve != NULL) {
		if (ve->ve_missed_update) {
			mutex_exit(&vc->vc_lock);
			return (B_FALSE);
		}
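
		/*
		 * A fill I/O for this cache line is already in flight;
		 * rather than issuing a duplicate read, make this zio a
		 * child of the fill I/O and let the fill completion copy
		 * the data into our buffer.
		 */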
		if ((fio = ve->ve_fill_io) != NULL) {
			zio_vdev_io_bypass(zio);
			zio_add_child(zio, fio);
			mutex_exit(&vc->vc_lock);
			VDCSTAT_BUMP(vdc_stat_delegations);
			return (B_TRUE);
		}

		vdev_cache_hit(vc, ve, zio);
		zio_vdev_io_bypass(zio);

		mutex_exit(&vc->vc_lock);
		VDCSTAT_BUMP(vdc_stat_hits);
		return (B_TRUE);
	}

	ve = vdev_cache_allocate(zio);
	if (ve == NULL) {
		mutex_exit(&vc->vc_lock);
		return (B_FALSE);
	}

	fio = zio_vdev_delegated_io(zio->io_vd, cache_offset,
	    ve->ve_data, VCBS, ZIO_TYPE_READ, ZIO_PRIORITY_NOW,
	    ZIO_FLAG_DONT_CACHE, vdev_cache_fill, ve);

	ve->ve_fill_io = fio;
	zio_vdev_io_bypass(zio);
	zio_add_child(zio, fio);

	mutex_exit(&vc->vc_lock);
	zio_nowait(fio);
	VDCSTAT_BUMP(vdc_stat_misses);

	return (B_TRUE);
}

/*
 * Update cache contents upon write completion.
 */
void
vdev_cache_write(zio_t *zio)
{
	vdev_cache_t *vc = &zio->io_vd->vdev_cache;
	vdev_cache_entry_t *ve, ve_search;
	uint64_t io_start = zio->io_offset;
	uint64_t io_end = io_start + zio->io_size;
	uint64_t min_offset = P2ALIGN(io_start, VCBS);
	uint64_t max_offset = P2ROUNDUP(io_end, VCBS);
	avl_index_t where;

	ASSERT(zio->io_type == ZIO_TYPE_WRITE);

	mutex_enter(&vc->vc_lock);

	ve_search.ve_offset = min_offset;
	ve = avl_find(&vc->vc_offset_tree, &ve_search, &where);

	if (ve == NULL)
		ve = avl_nearest(&vc->vc_offset_tree, where, AVL_AFTER);
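
	/*
	 * Walk every cache line that overlaps the written range. Lines
	 * whose fill I/O is still in flight are flagged as having missed
	 * the update (and will be evicted when the fill completes); lines
	 * already filled are updated in place with the new data.
	 */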
	while (ve != NULL && ve->ve_offset < max_offset) {
		uint64_t start = MAX(ve->ve_offset, io_start);
		uint64_t end = MIN(ve->ve_offset + VCBS, io_end);

		if (ve->ve_fill_io != NULL) {
			ve->ve_missed_update = 1;
		} else {
			bcopy((char *)zio->io_data + start - io_start,
			    ve->ve_data + start - ve->ve_offset, end - start);
		}
		ve = AVL_NEXT(&vc->vc_offset_tree, ve);
	}
	mutex_exit(&vc->vc_lock);
}
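
/*
 * Evict every entry from this vdev's cache.
 */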
void
vdev_cache_purge(vdev_t *vd)
{
	vdev_cache_t *vc = &vd->vdev_cache;
	vdev_cache_entry_t *ve;

	mutex_enter(&vc->vc_lock);
	while ((ve = avl_first(&vc->vc_offset_tree)) != NULL)
		vdev_cache_evict(vc, ve);
	mutex_exit(&vc->vc_lock);
}
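
/*
 * Initialize a vdev's cache: its lock and the two AVL trees indexed
 * by offset and by last-used time.
 */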
void
vdev_cache_init(vdev_t *vd)
{
	vdev_cache_t *vc = &vd->vdev_cache;

	mutex_init(&vc->vc_lock, NULL, MUTEX_DEFAULT, NULL);

	avl_create(&vc->vc_offset_tree, vdev_cache_offset_compare,
	    sizeof (vdev_cache_entry_t),
	    offsetof(struct vdev_cache_entry, ve_offset_node));

	avl_create(&vc->vc_lastused_tree, vdev_cache_lastused_compare,
	    sizeof (vdev_cache_entry_t),
	    offsetof(struct vdev_cache_entry, ve_lastused_node));
}
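
/*
 * Purge the cache, then destroy its AVL trees and lock.
 */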
void
vdev_cache_fini(vdev_t *vd)
{
	vdev_cache_t *vc = &vd->vdev_cache;

	vdev_cache_purge(vd);

	avl_destroy(&vc->vc_offset_tree);
	avl_destroy(&vc->vc_lastused_tree);

	mutex_destroy(&vc->vc_lock);
}
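
/*
 * Register the "vdev_cache_stats" kstat (delegations, hits, misses).
 */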
void
vdev_cache_stat_init(void)
{
	vdc_ksp = kstat_create("zfs", 0, "vdev_cache_stats", "misc",
	    KSTAT_TYPE_NAMED, sizeof (vdc_stats) / sizeof (kstat_named_t),
	    KSTAT_FLAG_VIRTUAL);
	if (vdc_ksp != NULL) {
		vdc_ksp->ks_data = &vdc_stats;
		kstat_install(vdc_ksp);
	}
}
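
/*
 * Unregister the "vdev_cache_stats" kstat, if present.
 */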
void
vdev_cache_stat_fini(void)
{
	if (vdc_ksp != NULL) {
		kstat_delete(vdc_ksp);
		vdc_ksp = NULL;
	}
}

#if defined(_KERNEL) && defined(HAVE_SPL)
module_param(zfs_vdev_cache_max, int, 0644);
MODULE_PARM_DESC(zfs_vdev_cache_max, "Inflate reads smaller than max");

module_param(zfs_vdev_cache_size, int, 0444);
MODULE_PARM_DESC(zfs_vdev_cache_size, "Total size of the per-disk cache");

module_param(zfs_vdev_cache_bshift, int, 0644);
MODULE_PARM_DESC(zfs_vdev_cache_bshift, "Shift size to inflate reads to");
#endif