Add visibility into arc_read
This change is an attempt to add visibility into the arc_read calls
occurring on a system, in real time. To do this, a list was added to the
in-memory SPA data structure for a pool, with each element on the list
corresponding to a call to arc_read. These entries are then exported
through the kstat interface, which can then be interpreted in userspace.
For each arc_read call, the following information is exported:
* A unique identifier (uint64_t)
* The time the entry was added to the list (hrtime_t)
  (*not* wall clock time; relative to the other entries on the list)
* The objset ID (uint64_t)
* The object number (uint64_t)
* The indirection level (uint64_t)
* The block ID (uint64_t)
* The name of the function originating the arc_read call (char[24])
* The arc_flags from the arc_read call (uint32_t)
* The PID of the reading thread (pid_t)
* The command or name of the thread originating the read (char[16])
From this exported information one can see, in real time, exactly what
is being read, what function is generating the read, and whether or not
the read was found to be already cached.
There is still some work to be done, but this should serve as a good
starting point.
Specifically, dbuf_reads are not accounted for in the currently
exported information. Thus, a follow-up patch should probably be added
to export these calls that never call into arc_read (they only hit the
dbuf hash table). In addition, it might be nice to create a utility
similar to "arcstat.py" to digest the exported information and display
it in a more readable format. Or perhaps log the information and allow
it to be "replayed" at a later time.
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-09-06 23:09:05 +00:00
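The commit message above suggests an arcstat.py-style utility to digest the exported read history. As a rough sketch only: the column names below follow the header emitted by spa_read_history_headers later in this file, but the sample input text (including the origin name and PID) is fabricated purely for illustration.

```python
def parse_reads_kstat(text):
    """Parse a whitespace-delimited kstat text dump whose first line is
    a header row, returning one dict per data row keyed by column name."""
    lines = [l for l in text.strip().splitlines() if l.strip()]
    header = lines[0].split()
    # 'origin' and 'process' are single tokens (a function name and a
    # thread comm), so a plain split() lines up with the header columns.
    return [dict(zip(header, line.split())) for line in lines[1:]]

# Hypothetical sample output; real rows come from
# /proc/spl/kstat/zfs/<pool>/reads when zfs_read_history is non-zero.
sample = """\
UID      start            objset   object   level    blkid    aflags   origin                   pid      process
1        1234567          0x36     7        0        12       0x20     dbuf_read_impl           4123     zpool
"""

for row in parse_reads_kstat(sample):
    print(row["origin"], row["pid"])
```

A consumer like this could poll the kstat file periodically and diff the UID column to report only new reads.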
/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

#include <sys/zfs_context.h>
#include <sys/spa_impl.h>
Multi-modifier protection (MMP)
Add a multihost=on|off pool property to control MMP. When enabled,
a new thread writes uberblocks to the last slot in each label, at a
set frequency, to indicate to other hosts that the pool is actively
imported. These uberblocks are the last synced uberblock with an
updated timestamp. The property defaults to off.
During tryimport, find the "best" uberblock (newest txg and timestamp)
repeatedly, checking for change in the found uberblock. Include the
results of the activity test in the config returned by tryimport.
These results are reported to the user by "zpool import".
Allow the user to control the period between MMP writes, and the
duration of the activity test on import, via a new module parameter,
zfs_multihost_interval. The period is specified in milliseconds. The
activity test duration is calculated from this value and from the
mmp_delay in the "best" uberblock found initially.
Add a kstat interface to export statistics about Multi-Modifier
Protection (MMP) updates. Include the last synced txg number, the
timestamp, the delay since the last MMP update, the VDEV GUID, the VDEV
label that received the last MMP update, and the VDEV path. Abbreviated
output below.
$ cat /proc/spl/kstat/zfs/mypool/multihost
31 0 0x01 10 880 105092382393521 105144180101111
txg timestamp mmp_delay vdev_guid vdev_label vdev_path
20468 261337 250274925 68396651780 3 /dev/sda
20468 261339 252023374 6267402363293 1 /dev/sdc
20468 261340 252000858 6698080955233 1 /dev/sdx
20468 261341 251980635 783892869810 2 /dev/sdy
20468 261342 253385953 8923255792467 3 /dev/sdd
20468 261344 253336622 042125143176 0 /dev/sdab
20468 261345 253310522 1200778101278 2 /dev/sde
20468 261346 253286429 0950576198362 2 /dev/sdt
20468 261347 253261545 96209817917 3 /dev/sds
20468 261349 253238188 8555725937673 3 /dev/sdb
Add a new tunable, zfs_multihost_history, to specify the number of MMP
updates to keep history for. By default it is set to zero, meaning that
no MMP statistics are stored.
When using ztest to generate activity for automated tests of the MMP
function, some test functions interfere with the test. For example, the
pool is exported to run zdb and then imported again. Add a new ztest
option, "-M", to alter ztest behavior to prevent this.
Add new tests to verify the new functionality. Tests provided by
Giuseppe Di Natale.
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: Ned Bass <bass6@llnl.gov>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Olaf Faaland <faaland1@llnl.gov>
Closes #745
Closes #6279
2017-07-08 03:20:35 +00:00
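The abbreviated multihost kstat output above can be digested with a short script. This is a sketch, assuming only what the sample shows: the first line is the generic kstat header, the second line names the columns, and the rest are data rows.

```python
def parse_multihost_kstat(text):
    """Parse a multihost kstat dump: skip the generic kstat header line,
    then zip the column names with each whitespace-delimited data row."""
    lines = [l for l in text.strip().splitlines() if l.strip()]
    # lines[0] is the kstat header (e.g. "31 0 0x01 10 880 ..."); the
    # column names (txg, timestamp, mmp_delay, ...) are on lines[1].
    names = lines[1].split()
    return [dict(zip(names, line.split())) for line in lines[2:]]

# First rows of the sample output from the commit message above.
sample = """\
31 0 0x01 10 880 105092382393521 105144180101111
txg timestamp mmp_delay vdev_guid vdev_label vdev_path
20468 261337 250274925 68396651780 3 /dev/sda
20468 261339 252023374 6267402363293 1 /dev/sdc
"""

for row in parse_multihost_kstat(sample):
    print(row["vdev_path"], row["mmp_delay"])
```

Since mmp_delay is reported in nanoseconds of delay since the previous update, a monitoring script could alert when it grows well beyond zfs_multihost_interval.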
#include <sys/vdev_impl.h>
/*
 * Keeps stats on last N reads per spa_t, disabled by default.
 */
int zfs_read_history = 0;

/*
 * Include cache hits in history, disabled by default.
 */
int zfs_read_history_hits = 0;

/*
 * Keeps stats on the last N txgs, disabled by default.
 */
int zfs_txg_history = 0;
/*
 * Keeps stats on the last N MMP updates, disabled by default.
 */
int zfs_multihost_history = 0;
/*
 * ==========================================================================
 * SPA Read History Routines
 * ==========================================================================
 */

/*
 * Read statistics - Information exported regarding each arc_read call
 */
typedef struct spa_read_history {
	uint64_t	uid;		/* unique identifier */
	hrtime_t	start;		/* time read completed */
	uint64_t	objset;		/* read from this objset */
	uint64_t	object;		/* read of this object number */
	uint64_t	level;		/* block's indirection level */
	uint64_t	blkid;		/* read of this block id */
	char		origin[24];	/* read originated from here */
	uint32_t	aflags;		/* ARC flags (cached, prefetch, etc.) */
	pid_t		pid;		/* PID of task doing read */
	char		comm[16];	/* process name of task doing read */
	list_node_t	srh_link;
} spa_read_history_t;

static int
spa_read_history_headers(char *buf, size_t size)
{
	(void) snprintf(buf, size, "%-8s %-16s %-8s %-8s %-8s %-8s %-8s "
	    "%-24s %-8s %-16s\n", "UID", "start", "objset", "object",
	    "level", "blkid", "aflags", "origin", "pid", "process");

	return (0);
}
static int
spa_read_history_data(char *buf, size_t size, void *data)
{
	spa_read_history_t *srh = (spa_read_history_t *)data;

	(void) snprintf(buf, size, "%-8llu %-16llu 0x%-6llx "
	    "%-8lli %-8lli %-8lli 0x%-6x %-24s %-8i %-16s\n",
	    (u_longlong_t)srh->uid, srh->start,
	    (longlong_t)srh->objset, (longlong_t)srh->object,
	    (longlong_t)srh->level, (longlong_t)srh->blkid,
	    srh->aflags, srh->origin, srh->pid, srh->comm);

	return (0);
}
/*
 * Calculate the address for the next spa_stats_history_t entry.  The
 * ssh->lock will be held until ksp->ks_ndata entries are processed.
 */
static void *
spa_read_history_addr(kstat_t *ksp, loff_t n)
{
	spa_t *spa = ksp->ks_private;
	spa_stats_history_t *ssh = &spa->spa_stats.read_history;

	ASSERT(MUTEX_HELD(&ssh->lock));

	if (n == 0)
		ssh->private = list_tail(&ssh->list);
	else if (ssh->private)
		ssh->private = list_prev(&ssh->list, ssh->private);

	return (ssh->private);
}
/*
 * When the kstat is written discard all spa_read_history_t entries.  The
 * ssh->lock will be held until ksp->ks_ndata entries are processed.
 */
static int
spa_read_history_update(kstat_t *ksp, int rw)
{
	spa_t *spa = ksp->ks_private;
	spa_stats_history_t *ssh = &spa->spa_stats.read_history;

	if (rw == KSTAT_WRITE) {
		spa_read_history_t *srh;

		while ((srh = list_remove_head(&ssh->list))) {
			ssh->size--;
			kmem_free(srh, sizeof (spa_read_history_t));
		}

		ASSERT3U(ssh->size, ==, 0);
	}

	ksp->ks_ndata = ssh->size;
	ksp->ks_data_size = ssh->size * sizeof (spa_read_history_t);

	return (0);
}
static void
spa_read_history_init(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.read_history;
	char *name;
	kstat_t *ksp;

	mutex_init(&ssh->lock, NULL, MUTEX_DEFAULT, NULL);
	list_create(&ssh->list, sizeof (spa_read_history_t),
	    offsetof(spa_read_history_t, srh_link));

	ssh->count = 0;
	ssh->size = 0;
	ssh->private = NULL;

	name = kmem_asprintf("zfs/%s", spa_name(spa));

	ksp = kstat_create(name, 0, "reads", "misc",
	    KSTAT_TYPE_RAW, 0, KSTAT_FLAG_VIRTUAL);
	ssh->kstat = ksp;

	if (ksp) {
		ksp->ks_lock = &ssh->lock;
		ksp->ks_data = NULL;
		ksp->ks_private = spa;
		ksp->ks_update = spa_read_history_update;
		kstat_set_raw_ops(ksp, spa_read_history_headers,
		    spa_read_history_data, spa_read_history_addr);
		kstat_install(ksp);
	}
	strfree(name);
}
static void
spa_read_history_destroy(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.read_history;
	spa_read_history_t *srh;
	kstat_t *ksp;

	ksp = ssh->kstat;
	if (ksp)
		kstat_delete(ksp);

	mutex_enter(&ssh->lock);
	while ((srh = list_remove_head(&ssh->list))) {
		ssh->size--;
		kmem_free(srh, sizeof (spa_read_history_t));
	}

	ASSERT3U(ssh->size, ==, 0);
	list_destroy(&ssh->list);
	mutex_exit(&ssh->lock);

	mutex_destroy(&ssh->lock);
}
|
|
|
|
|
|
|
|
void
spa_read_history_add(spa_t *spa, const zbookmark_phys_t *zb, uint32_t aflags)
{
	spa_stats_history_t *ssh = &spa->spa_stats.read_history;
	spa_read_history_t *srh, *rm;

	ASSERT3P(spa, !=, NULL);
	ASSERT3P(zb, !=, NULL);

	if (zfs_read_history == 0 && ssh->size == 0)
		return;

	if (zfs_read_history_hits == 0 && (aflags & ARC_FLAG_CACHED))
		return;

	srh = kmem_zalloc(sizeof (spa_read_history_t), KM_SLEEP);
	strlcpy(srh->comm, getcomm(), sizeof (srh->comm));
	srh->start = gethrtime();
	srh->objset = zb->zb_objset;
	srh->object = zb->zb_object;
	srh->level = zb->zb_level;
	srh->blkid = zb->zb_blkid;
	srh->aflags = aflags;
	srh->pid = getpid();

	mutex_enter(&ssh->lock);

	srh->uid = ssh->count++;
	list_insert_head(&ssh->list, srh);
	ssh->size++;

	while (ssh->size > zfs_read_history) {
		ssh->size--;
		rm = list_remove_tail(&ssh->list);
		kmem_free(rm, sizeof (spa_read_history_t));
	}

	mutex_exit(&ssh->lock);
}
/*
 * ==========================================================================
 * SPA TXG History Routines
 * ==========================================================================
 */

/*
 * Txg statistics - Information exported regarding each txg sync
 */

typedef struct spa_txg_history {
	uint64_t	txg;		/* txg id */
	txg_state_t	state;		/* active txg state */
	uint64_t	nread;		/* number of bytes read */
	uint64_t	nwritten;	/* number of bytes written */
	uint64_t	reads;		/* number of read operations */
	uint64_t	writes;		/* number of write operations */
	uint64_t	ndirty;		/* number of dirty bytes */
	hrtime_t	times[TXG_STATE_COMMITTED]; /* completion times */
	list_node_t	sth_link;
} spa_txg_history_t;
static int
spa_txg_history_headers(char *buf, size_t size)
{
	(void) snprintf(buf, size, "%-8s %-16s %-5s %-12s %-12s %-12s "
	    "%-8s %-8s %-12s %-12s %-12s %-12s\n", "txg", "birth", "state",
	    "ndirty", "nread", "nwritten", "reads", "writes",
	    "otime", "qtime", "wtime", "stime");

	return (0);
}
static int
spa_txg_history_data(char *buf, size_t size, void *data)
{
	spa_txg_history_t *sth = (spa_txg_history_t *)data;
	uint64_t open = 0, quiesce = 0, wait = 0, sync = 0;
	char state;

	switch (sth->state) {
		case TXG_STATE_BIRTH:		state = 'B'; break;
		case TXG_STATE_OPEN:		state = 'O'; break;
		case TXG_STATE_QUIESCED:	state = 'Q'; break;
		case TXG_STATE_WAIT_FOR_SYNC:	state = 'W'; break;
		case TXG_STATE_SYNCED:		state = 'S'; break;
		case TXG_STATE_COMMITTED:	state = 'C'; break;
		default:			state = '?'; break;
	}

	if (sth->times[TXG_STATE_OPEN])
		open = sth->times[TXG_STATE_OPEN] -
		    sth->times[TXG_STATE_BIRTH];

	if (sth->times[TXG_STATE_QUIESCED])
		quiesce = sth->times[TXG_STATE_QUIESCED] -
		    sth->times[TXG_STATE_OPEN];

	if (sth->times[TXG_STATE_WAIT_FOR_SYNC])
		wait = sth->times[TXG_STATE_WAIT_FOR_SYNC] -
		    sth->times[TXG_STATE_QUIESCED];

	if (sth->times[TXG_STATE_SYNCED])
		sync = sth->times[TXG_STATE_SYNCED] -
		    sth->times[TXG_STATE_WAIT_FOR_SYNC];

	(void) snprintf(buf, size, "%-8llu %-16llu %-5c %-12llu "
	    "%-12llu %-12llu %-8llu %-8llu %-12llu %-12llu %-12llu %-12llu\n",
	    (longlong_t)sth->txg, sth->times[TXG_STATE_BIRTH], state,
	    (u_longlong_t)sth->ndirty,
	    (u_longlong_t)sth->nread, (u_longlong_t)sth->nwritten,
	    (u_longlong_t)sth->reads, (u_longlong_t)sth->writes,
	    (u_longlong_t)open, (u_longlong_t)quiesce, (u_longlong_t)wait,
	    (u_longlong_t)sync);

	return (0);
}
/*
 * Calculate the address for the next spa_stats_history_t entry.  The
 * ssh->lock will be held until ksp->ks_ndata entries are processed.
 */
static void *
spa_txg_history_addr(kstat_t *ksp, loff_t n)
{
	spa_t *spa = ksp->ks_private;
	spa_stats_history_t *ssh = &spa->spa_stats.txg_history;

	ASSERT(MUTEX_HELD(&ssh->lock));

	if (n == 0)
		ssh->private = list_tail(&ssh->list);
	else if (ssh->private)
		ssh->private = list_prev(&ssh->list, ssh->private);

	return (ssh->private);
}
/*
 * When the kstat is written discard all spa_txg_history_t entries.  The
 * ssh->lock will be held until ksp->ks_ndata entries are processed.
 */
static int
spa_txg_history_update(kstat_t *ksp, int rw)
{
	spa_t *spa = ksp->ks_private;
	spa_stats_history_t *ssh = &spa->spa_stats.txg_history;

	ASSERT(MUTEX_HELD(&ssh->lock));

	if (rw == KSTAT_WRITE) {
		spa_txg_history_t *sth;

		while ((sth = list_remove_head(&ssh->list))) {
			ssh->size--;
			kmem_free(sth, sizeof (spa_txg_history_t));
		}

		ASSERT3U(ssh->size, ==, 0);
	}

	ksp->ks_ndata = ssh->size;
	ksp->ks_data_size = ssh->size * sizeof (spa_txg_history_t);

	return (0);
}
static void
spa_txg_history_init(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.txg_history;
	char *name;
	kstat_t *ksp;

	mutex_init(&ssh->lock, NULL, MUTEX_DEFAULT, NULL);
	list_create(&ssh->list, sizeof (spa_txg_history_t),
	    offsetof(spa_txg_history_t, sth_link));

	ssh->count = 0;
	ssh->size = 0;
	ssh->private = NULL;

	name = kmem_asprintf("zfs/%s", spa_name(spa));

	ksp = kstat_create(name, 0, "txgs", "misc",
	    KSTAT_TYPE_RAW, 0, KSTAT_FLAG_VIRTUAL);
	ssh->kstat = ksp;

	if (ksp) {
		ksp->ks_lock = &ssh->lock;
		ksp->ks_data = NULL;
		ksp->ks_private = spa;
		ksp->ks_update = spa_txg_history_update;
		kstat_set_raw_ops(ksp, spa_txg_history_headers,
		    spa_txg_history_data, spa_txg_history_addr);
		kstat_install(ksp);
	}
	strfree(name);
}
static void
spa_txg_history_destroy(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.txg_history;
	spa_txg_history_t *sth;
	kstat_t *ksp;

	ksp = ssh->kstat;
	if (ksp)
		kstat_delete(ksp);

	mutex_enter(&ssh->lock);
	while ((sth = list_remove_head(&ssh->list))) {
		ssh->size--;
		kmem_free(sth, sizeof (spa_txg_history_t));
	}

	ASSERT3U(ssh->size, ==, 0);
	list_destroy(&ssh->list);
	mutex_exit(&ssh->lock);

	mutex_destroy(&ssh->lock);
}
/*
 * Add a new txg to historical record.
 */
void
spa_txg_history_add(spa_t *spa, uint64_t txg, hrtime_t birth_time)
{
	spa_stats_history_t *ssh = &spa->spa_stats.txg_history;
	spa_txg_history_t *sth, *rm;

	if (zfs_txg_history == 0 && ssh->size == 0)
		return;

	sth = kmem_zalloc(sizeof (spa_txg_history_t), KM_SLEEP);
	sth->txg = txg;
	sth->state = TXG_STATE_OPEN;
	sth->times[TXG_STATE_BIRTH] = birth_time;

	mutex_enter(&ssh->lock);

	list_insert_head(&ssh->list, sth);
	ssh->size++;

	while (ssh->size > zfs_txg_history) {
		ssh->size--;
		rm = list_remove_tail(&ssh->list);
		kmem_free(rm, sizeof (spa_txg_history_t));
	}

	mutex_exit(&ssh->lock);
}
/*
 * Set txg state completion time and increment current state.
 */
int
spa_txg_history_set(spa_t *spa, uint64_t txg, txg_state_t completed_state,
    hrtime_t completed_time)
{
	spa_stats_history_t *ssh = &spa->spa_stats.txg_history;
	spa_txg_history_t *sth;
	int error = ENOENT;

	if (zfs_txg_history == 0)
		return (0);

	mutex_enter(&ssh->lock);
	for (sth = list_head(&ssh->list); sth != NULL;
	    sth = list_next(&ssh->list, sth)) {
		if (sth->txg == txg) {
			sth->times[completed_state] = completed_time;
			sth->state++;
			error = 0;
			break;
		}
	}
	mutex_exit(&ssh->lock);

	return (error);
}
/*
 * Set txg IO stats.
 */
static int
spa_txg_history_set_io(spa_t *spa, uint64_t txg, uint64_t nread,
    uint64_t nwritten, uint64_t reads, uint64_t writes, uint64_t ndirty)
{
	spa_stats_history_t *ssh = &spa->spa_stats.txg_history;
	spa_txg_history_t *sth;
	int error = ENOENT;

	if (zfs_txg_history == 0)
		return (0);

	mutex_enter(&ssh->lock);
	for (sth = list_head(&ssh->list); sth != NULL;
	    sth = list_next(&ssh->list, sth)) {
		if (sth->txg == txg) {
			sth->nread = nread;
			sth->nwritten = nwritten;
			sth->reads = reads;
			sth->writes = writes;
			sth->ndirty = ndirty;
			error = 0;
			break;
		}
	}
	mutex_exit(&ssh->lock);

	return (error);
}
txg_stat_t *
spa_txg_history_init_io(spa_t *spa, uint64_t txg, dsl_pool_t *dp)
{
	txg_stat_t *ts;

	if (zfs_txg_history == 0)
		return (NULL);

	ts = kmem_alloc(sizeof (txg_stat_t), KM_SLEEP);

	spa_config_enter(spa, SCL_ALL, FTAG, RW_READER);
	vdev_get_stats(spa->spa_root_vdev, &ts->vs1);
	spa_config_exit(spa, SCL_ALL, FTAG);

	ts->txg = txg;
	ts->ndirty = dp->dp_dirty_pertxg[txg & TXG_MASK];

	spa_txg_history_set(spa, txg, TXG_STATE_WAIT_FOR_SYNC, gethrtime());

	return (ts);
}
void
spa_txg_history_fini_io(spa_t *spa, txg_stat_t *ts)
{
	if (ts == NULL)
		return;

	if (zfs_txg_history == 0) {
		kmem_free(ts, sizeof (txg_stat_t));
		return;
	}

	spa_config_enter(spa, SCL_ALL, FTAG, RW_READER);
	vdev_get_stats(spa->spa_root_vdev, &ts->vs2);
	spa_config_exit(spa, SCL_ALL, FTAG);

	spa_txg_history_set(spa, ts->txg, TXG_STATE_SYNCED, gethrtime());
	spa_txg_history_set_io(spa, ts->txg,
	    ts->vs2.vs_bytes[ZIO_TYPE_READ] - ts->vs1.vs_bytes[ZIO_TYPE_READ],
	    ts->vs2.vs_bytes[ZIO_TYPE_WRITE] - ts->vs1.vs_bytes[ZIO_TYPE_WRITE],
	    ts->vs2.vs_ops[ZIO_TYPE_READ] - ts->vs1.vs_ops[ZIO_TYPE_READ],
	    ts->vs2.vs_ops[ZIO_TYPE_WRITE] - ts->vs1.vs_ops[ZIO_TYPE_WRITE],
	    ts->ndirty);

	kmem_free(ts, sizeof (txg_stat_t));
}
/*
 * ==========================================================================
 * SPA TX Assign Histogram Routines
 * ==========================================================================
 */

/*
 * Tx statistics - Information exported regarding dmu_tx_assign time.
 */

/*
 * When the kstat is written zero all buckets.  When the kstat is read
 * count the number of trailing buckets set to zero and update ks_ndata
 * such that they are not output.
 */
static int
spa_tx_assign_update(kstat_t *ksp, int rw)
{
	spa_t *spa = ksp->ks_private;
	spa_stats_history_t *ssh = &spa->spa_stats.tx_assign_histogram;
	int i;

	if (rw == KSTAT_WRITE) {
		for (i = 0; i < ssh->count; i++)
			((kstat_named_t *)ssh->private)[i].value.ui64 = 0;
	}

	for (i = ssh->count; i > 0; i--)
		if (((kstat_named_t *)ssh->private)[i-1].value.ui64 != 0)
			break;

	ksp->ks_ndata = i;
	ksp->ks_data_size = i * sizeof (kstat_named_t);

	return (0);
}
static void
spa_tx_assign_init(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.tx_assign_histogram;
	char *name;
	kstat_named_t *ks;
	kstat_t *ksp;
	int i;

	mutex_init(&ssh->lock, NULL, MUTEX_DEFAULT, NULL);

	ssh->count = 42; /* power of two buckets for 1ns to 2,199s */
	ssh->size = ssh->count * sizeof (kstat_named_t);
	ssh->private = kmem_alloc(ssh->size, KM_SLEEP);

	name = kmem_asprintf("zfs/%s", spa_name(spa));

	for (i = 0; i < ssh->count; i++) {
		ks = &((kstat_named_t *)ssh->private)[i];
		ks->data_type = KSTAT_DATA_UINT64;
		ks->value.ui64 = 0;
		(void) snprintf(ks->name, KSTAT_STRLEN, "%llu ns",
		    (u_longlong_t)1 << i);
	}

	ksp = kstat_create(name, 0, "dmu_tx_assign", "misc",
	    KSTAT_TYPE_NAMED, 0, KSTAT_FLAG_VIRTUAL);
	ssh->kstat = ksp;

	if (ksp) {
		ksp->ks_lock = &ssh->lock;
		ksp->ks_data = ssh->private;
		ksp->ks_ndata = ssh->count;
		ksp->ks_data_size = ssh->size;
		ksp->ks_private = spa;
		ksp->ks_update = spa_tx_assign_update;
		kstat_install(ksp);
	}
	strfree(name);
}
static void
spa_tx_assign_destroy(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.tx_assign_histogram;
	kstat_t *ksp;

	ksp = ssh->kstat;
	if (ksp)
		kstat_delete(ksp);

	kmem_free(ssh->private, ssh->size);
	mutex_destroy(&ssh->lock);
}
void
spa_tx_assign_add_nsecs(spa_t *spa, uint64_t nsecs)
{
	spa_stats_history_t *ssh = &spa->spa_stats.tx_assign_histogram;
	uint64_t idx = 0;

	while (((1ULL << idx) < nsecs) && (idx < ssh->size - 1))
		idx++;

	atomic_inc_64(&((kstat_named_t *)ssh->private)[idx].value.ui64);
}
/*
 * ==========================================================================
 * SPA IO History Routines
 * ==========================================================================
 */
static int
spa_io_history_update(kstat_t *ksp, int rw)
{
	if (rw == KSTAT_WRITE)
		memset(ksp->ks_data, 0, ksp->ks_data_size);

	return (0);
}
static void
spa_io_history_init(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.io_history;
	char *name;
	kstat_t *ksp;

	mutex_init(&ssh->lock, NULL, MUTEX_DEFAULT, NULL);

	name = kmem_asprintf("zfs/%s", spa_name(spa));

	ksp = kstat_create(name, 0, "io", "disk", KSTAT_TYPE_IO, 1, 0);
	ssh->kstat = ksp;

	if (ksp) {
		ksp->ks_lock = &ssh->lock;
		ksp->ks_private = spa;
		ksp->ks_update = spa_io_history_update;
		kstat_install(ksp);
	}
	strfree(name);
}
static void
spa_io_history_destroy(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.io_history;

	if (ssh->kstat)
		kstat_delete(ssh->kstat);

	mutex_destroy(&ssh->lock);
}
Multi-modifier protection (MMP)
Add multihost=on|off pool property to control MMP. When enabled
a new thread writes uberblocks to the last slot in each label, at a
set frequency, to indicate to other hosts the pool is actively imported.
These uberblocks are the last synced uberblock with an updated
timestamp. Property defaults to off.
During tryimport, find the "best" uberblock (newest txg and timestamp)
repeatedly, checking for change in the found uberblock. Include the
results of the activity test in the config returned by tryimport.
These results are reported to user in "zpool import".
Allow the user to control the period between MMP writes, and the
duration of the activity test on import, via a new module parameter
zfs_multihost_interval. The period is specified in milliseconds. The
activity test duration is calculated from this value, and from the
mmp_delay in the "best" uberblock found initially.
Add a kstat interface to export statistics about Multiple Modifier
Protection (MMP) updates. Include the last synced txg number, the
timestamp, the delay since the last MMP update, the VDEV GUID, the VDEV
label that received the last MMP update, and the VDEV path. Abbreviated
output below.
$ cat /proc/spl/kstat/zfs/mypool/multihost
31 0 0x01 10 880 105092382393521 105144180101111
txg timestamp mmp_delay vdev_guid vdev_label vdev_path
20468 261337 250274925 68396651780 3 /dev/sda
20468 261339 252023374 6267402363293 1 /dev/sdc
20468 261340 252000858 6698080955233 1 /dev/sdx
20468 261341 251980635 783892869810 2 /dev/sdy
20468 261342 253385953 8923255792467 3 /dev/sdd
20468 261344 253336622 042125143176 0 /dev/sdab
20468 261345 253310522 1200778101278 2 /dev/sde
20468 261346 253286429 0950576198362 2 /dev/sdt
20468 261347 253261545 96209817917 3 /dev/sds
20468 261349 253238188 8555725937673 3 /dev/sdb
Add a new tunable zfs_multihost_history to specify the number of MMP
updates to store history for. By default it is set to zero meaning that
no MMP statistics are stored.
When using ztest to generate activity, for automated tests of the MMP
function, some test functions interfere with the test. For example, the
pool is exported to run zdb and then imported again. Add a new ztest
function, "-M", to alter ztest behavior to prevent this.
Add new tests to verify the new functionality. Tests provided by
Giuseppe Di Natale.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: Ned Bass <bass6@llnl.gov>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Olaf Faaland <faaland1@llnl.gov>
Closes #745
Closes #6279
2017-07-08 03:20:35 +00:00
/*
 * ==========================================================================
 * SPA MMP History Routines
 * ==========================================================================
 */

/*
 * MMP statistics - Information exported regarding each MMP update
 */

typedef struct spa_mmp_history {
	uint64_t	txg;		/* txg of last sync */
	uint64_t	timestamp;	/* UTC time of last sync */
	uint64_t	mmp_delay;	/* nanosec since last MMP write */
	uint64_t	vdev_guid;	/* unique ID of leaf vdev */
	char		*vdev_path;
	uint64_t	vdev_label;	/* vdev label */
	list_node_t	smh_link;
} spa_mmp_history_t;
static int
spa_mmp_history_headers(char *buf, size_t size)
{
	(void) snprintf(buf, size, "%-10s %-10s %-12s %-24s %-10s %s\n",
	    "txg", "timestamp", "mmp_delay", "vdev_guid", "vdev_label",
	    "vdev_path");
	return (0);
}
static int
spa_mmp_history_data(char *buf, size_t size, void *data)
{
	spa_mmp_history_t *smh = (spa_mmp_history_t *)data;

	(void) snprintf(buf, size, "%-10llu %-10llu %-12llu %-24llu %-10llu "
	    "%s\n",
	    (u_longlong_t)smh->txg, (u_longlong_t)smh->timestamp,
	    (u_longlong_t)smh->mmp_delay, (u_longlong_t)smh->vdev_guid,
	    (u_longlong_t)smh->vdev_label,
	    (smh->vdev_path ? smh->vdev_path : "-"));

	return (0);
}
/*
 * Calculate the address for the next spa_stats_history_t entry.  The
 * ssh->lock will be held until ksp->ks_ndata entries are processed.
 */
static void *
spa_mmp_history_addr(kstat_t *ksp, loff_t n)
{
	spa_t *spa = ksp->ks_private;
	spa_stats_history_t *ssh = &spa->spa_stats.mmp_history;

	ASSERT(MUTEX_HELD(&ssh->lock));

	if (n == 0)
		ssh->private = list_tail(&ssh->list);
	else if (ssh->private)
		ssh->private = list_prev(&ssh->list, ssh->private);

	return (ssh->private);
}
/*
|
|
|
|
* When the kstat is written discard all spa_mmp_history_t entries. The
|
|
|
|
* ssh->lock will be held until ksp->ks_ndata entries are processed.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
spa_mmp_history_update(kstat_t *ksp, int rw)
|
|
|
|
{
|
|
|
|
spa_t *spa = ksp->ks_private;
|
|
|
|
spa_stats_history_t *ssh = &spa->spa_stats.mmp_history;
|
|
|
|
|
|
|
|
ASSERT(MUTEX_HELD(&ssh->lock));
|
|
|
|
|
|
|
|
if (rw == KSTAT_WRITE) {
|
|
|
|
spa_mmp_history_t *smh;
|
|
|
|
|
|
|
|
while ((smh = list_remove_head(&ssh->list))) {
|
|
|
|
ssh->size--;
|
|
|
|
if (smh->vdev_path)
|
|
|
|
strfree(smh->vdev_path);
|
|
|
|
kmem_free(smh, sizeof (spa_mmp_history_t));
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT3U(ssh->size, ==, 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
ksp->ks_ndata = ssh->size;
|
|
|
|
ksp->ks_data_size = ssh->size * sizeof (spa_mmp_history_t);
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}

static void
spa_mmp_history_init(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.mmp_history;
	char *name;
	kstat_t *ksp;

	mutex_init(&ssh->lock, NULL, MUTEX_DEFAULT, NULL);
	list_create(&ssh->list, sizeof (spa_mmp_history_t),
	    offsetof(spa_mmp_history_t, smh_link));

	ssh->count = 0;
	ssh->size = 0;
	ssh->private = NULL;

	name = kmem_asprintf("zfs/%s", spa_name(spa));

	ksp = kstat_create(name, 0, "multihost", "misc",
	    KSTAT_TYPE_RAW, 0, KSTAT_FLAG_VIRTUAL);
	ssh->kstat = ksp;

	if (ksp) {
		ksp->ks_lock = &ssh->lock;
		ksp->ks_data = NULL;
		ksp->ks_private = spa;
		ksp->ks_update = spa_mmp_history_update;
		kstat_set_raw_ops(ksp, spa_mmp_history_headers,
		    spa_mmp_history_data, spa_mmp_history_addr);
		kstat_install(ksp);
	}
	strfree(name);
}

static void
spa_mmp_history_destroy(spa_t *spa)
{
	spa_stats_history_t *ssh = &spa->spa_stats.mmp_history;
	spa_mmp_history_t *smh;
	kstat_t *ksp;

	ksp = ssh->kstat;
	if (ksp)
		kstat_delete(ksp);

	mutex_enter(&ssh->lock);
	while ((smh = list_remove_head(&ssh->list))) {
		ssh->size--;
		if (smh->vdev_path)
			strfree(smh->vdev_path);
		kmem_free(smh, sizeof (spa_mmp_history_t));
	}

	ASSERT3U(ssh->size, ==, 0);
	list_destroy(&ssh->list);
	mutex_exit(&ssh->lock);

	mutex_destroy(&ssh->lock);
}

/*
 * Add a new MMP update to the historical record.
 */
void
spa_mmp_history_add(uint64_t txg, uint64_t timestamp, uint64_t mmp_delay,
    vdev_t *vd, int label)
{
	spa_t *spa = vd->vdev_spa;
	spa_stats_history_t *ssh = &spa->spa_stats.mmp_history;
	spa_mmp_history_t *smh, *rm;

	if (zfs_multihost_history == 0 && ssh->size == 0)
		return;

	smh = kmem_zalloc(sizeof (spa_mmp_history_t), KM_SLEEP);
	smh->txg = txg;
	smh->timestamp = timestamp;
	smh->mmp_delay = mmp_delay;
	smh->vdev_guid = vd->vdev_guid;
	if (vd->vdev_path)
		smh->vdev_path = strdup(vd->vdev_path);
	smh->vdev_label = label;

	mutex_enter(&ssh->lock);

	list_insert_head(&ssh->list, smh);
	ssh->size++;

	while (ssh->size > zfs_multihost_history) {
		ssh->size--;
		rm = list_remove_tail(&ssh->list);
		if (rm->vdev_path)
			strfree(rm->vdev_path);
		kmem_free(rm, sizeof (spa_mmp_history_t));
	}

	mutex_exit(&ssh->lock);
}

void
spa_stats_init(spa_t *spa)
{
	spa_read_history_init(spa);
	spa_txg_history_init(spa);
	spa_tx_assign_init(spa);
	spa_io_history_init(spa);
	spa_mmp_history_init(spa);
}

void
spa_stats_destroy(spa_t *spa)
{
	spa_tx_assign_destroy(spa);
	spa_txg_history_destroy(spa);
	spa_read_history_destroy(spa);
	spa_io_history_destroy(spa);
	spa_mmp_history_destroy(spa);
}
|
|
|
|
|
|
|
|
#if defined(_KERNEL) && defined(HAVE_SPL)
|
Multi-modifier protection (MMP)
Add multihost=on|off pool property to control MMP. When enabled
a new thread writes uberblocks to the last slot in each label, at a
set frequency, to indicate to other hosts the pool is actively imported.
These uberblocks are the last synced uberblock with an updated
timestamp. Property defaults to off.
During tryimport, find the "best" uberblock (newest txg and timestamp)
repeatedly, checking for change in the found uberblock. Include the
results of the activity test in the config returned by tryimport.
These results are reported to user in "zpool import".
Allow the user to control the period between MMP writes, and the
duration of the activity test on import, via a new module parameter
zfs_multihost_interval. The period is specified in milliseconds. The
activity test duration is calculated from this value, and from the
mmp_delay in the "best" uberblock found initially.
Add a kstat interface to export statistics about Multiple Modifier
Protection (MMP) updates. Include the last synced txg number, the
timestamp, the delay since the last MMP update, the VDEV GUID, the VDEV
label that received the last MMP update, and the VDEV path. Abbreviated
output below.
$ cat /proc/spl/kstat/zfs/mypool/multihost
31 0 0x01 10 880 105092382393521 105144180101111
txg timestamp mmp_delay vdev_guid vdev_label vdev_path
20468 261337 250274925 68396651780 3 /dev/sda
20468 261339 252023374 6267402363293 1 /dev/sdc
20468 261340 252000858 6698080955233 1 /dev/sdx
20468 261341 251980635 783892869810 2 /dev/sdy
20468 261342 253385953 8923255792467 3 /dev/sdd
20468 261344 253336622 042125143176 0 /dev/sdab
20468 261345 253310522 1200778101278 2 /dev/sde
20468 261346 253286429 0950576198362 2 /dev/sdt
20468 261347 253261545 96209817917 3 /dev/sds
20468 261349 253238188 8555725937673 3 /dev/sdb
Add a new tunable zfs_multihost_history to specify the number of MMP
updates to store history for. By default it is set to zero meaning that
no MMP statistics are stored.
When using ztest to generate activity, for automated tests of the MMP
function, some test functions interfere with the test. For example, the
pool is exported to run zdb and then imported again. Add a new ztest
function, "-M", to alter ztest behavior to prevent this.
Add new tests to verify the new functionality. Tests provided by
Giuseppe Di Natale.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: Ned Bass <bass6@llnl.gov>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Olaf Faaland <faaland1@llnl.gov>
Closes #745
Closes #6279
2017-07-08 03:20:35 +00:00
|
|
|
/* CSTYLED */
|
Add visibility in to arc_read
This change is an attempt to add visibility into the arc_read calls
occurring on a system, in real time. To do this, a list was added to the
in memory SPA data structure for a pool, with each element on the list
corresponding to a call to arc_read. These entries are then exported
through the kstat interface, which can then be interpreted in userspace.
For each arc_read call, the following information is exported:
* A unique identifier (uint64_t)
* The time the entry was added to the list (hrtime_t)
(*not* wall clock time; relative to the other entries on the list)
* The objset ID (uint64_t)
* The object number (uint64_t)
* The indirection level (uint64_t)
* The block ID (uint64_t)
* The name of the function originating the arc_read call (char[24])
* The arc_flags from the arc_read call (uint32_t)
* The PID of the reading thread (pid_t)
* The command or name of thread originating read (char[16])
From this exported information one can see, in real time, exactly what
is being read, what function is generating the read, and whether or not
the read was found to be already cached.
There is still some work to be done, but this should serve as a good
starting point.
Specifically, dbuf_read's are not accounted for in the currently
exported information. Thus, a follow up patch should probably be added
to export these calls that never call into arc_read (they only hit the
dbuf hash table). In addition, it might be nice to create a utility
similar to "arcstat.py" to digest the exported information and display
it in a more readable format. Or perhaps, log the information and allow
for it to be "replayed" at a later time.
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-09-06 23:09:05 +00:00
|
|
|
module_param(zfs_read_history, int, 0644);
|
Multi-modifier protection (MMP)
Add multihost=on|off pool property to control MMP. When enabled
a new thread writes uberblocks to the last slot in each label, at a
set frequency, to indicate to other hosts the pool is actively imported.
These uberblocks are the last synced uberblock with an updated
timestamp. Property defaults to off.
MODULE_PARM_DESC(zfs_read_history,
	"Historical statistics for the last N reads");
Add visibility into arc_read
This change is an attempt to add visibility into the arc_read calls
occurring on a system, in real time. To do this, a list was added to the
in-memory SPA data structure for a pool, with each element on the list
corresponding to a call to arc_read. These entries are then exported
through the kstat interface, which can then be interpreted in userspace.
For each arc_read call, the following information is exported:
* A unique identifier (uint64_t)
* The time the entry was added to the list (hrtime_t)
(*not* wall clock time; relative to the other entries on the list)
* The objset ID (uint64_t)
* The object number (uint64_t)
* The indirection level (uint64_t)
* The block ID (uint64_t)
* The name of the function originating the arc_read call (char[24])
* The arc_flags from the arc_read call (uint32_t)
* The PID of the reading thread (pid_t)
* The command or name of the thread originating the read (char[16])
From this exported information one can see, in real time, exactly what
is being read, what function is generating the read, and whether or not
the read was found to be already cached.
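The exported record above maps onto a fixed set of typed fields. A hypothetical userspace mirror of one entry is sketched below; the field names are assumptions for illustration, and only the types and sizes come from the list above.

```python
# Hypothetical mirror of one exported arc_read record.
# Field names are illustrative; the types/sizes follow the list above.
from dataclasses import dataclass

@dataclass
class ReadHistoryRecord:
    uid: int      # unique identifier (uint64_t)
    start: int    # hrtime_t; relative to other entries, not wall clock
    objset: int   # objset ID (uint64_t)
    object: int   # object number (uint64_t)
    level: int    # indirection level (uint64_t)
    blkid: int    # block ID (uint64_t)
    origin: str   # name of the function originating the call (char[24])
    aflags: int   # arc_flags from the arc_read call (uint32_t)
    pid: int      # PID of the reading thread (pid_t)
    comm: str     # command/name of the originating thread (char[16])
```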
There is still some work to be done, but this should serve as a good
starting point.
Specifically, dbuf_reads are not accounted for in the currently
exported information. Thus, a follow-up patch should probably be added
to export these calls that never call into arc_read (they only hit the
dbuf hash table). In addition, it might be nice to create a utility
similar to "arcstat.py" to digest the exported information and display
it in a more readable format. Or perhaps, log the information and allow
for it to be "replayed" at a later time.
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-09-06 23:09:05 +00:00
module_param(zfs_read_history_hits, int, 0644);
Multi-modifier protection (MMP)
Add multihost=on|off pool property to control MMP. When enabled
a new thread writes uberblocks to the last slot in each label, at a
set frequency, to indicate to other hosts the pool is actively imported.
These uberblocks are the last synced uberblock with an updated
timestamp. Property defaults to off.
During tryimport, find the "best" uberblock (newest txg and timestamp)
repeatedly, checking for change in the found uberblock. Include the
results of the activity test in the config returned by tryimport.
These results are reported to the user in "zpool import".
Allow the user to control the period between MMP writes, and the
duration of the activity test on import, via a new module parameter
zfs_multihost_interval. The period is specified in milliseconds. The
activity test duration is calculated from this value, and from the
mmp_delay in the "best" uberblock found initially.
Add a kstat interface to export statistics about Multi-Modifier
Protection (MMP) updates. Include the last synced txg number, the
timestamp, the delay since the last MMP update, the VDEV GUID, the VDEV
label that received the last MMP update, and the VDEV path. Abbreviated
output below.
$ cat /proc/spl/kstat/zfs/mypool/multihost
31 0 0x01 10 880 105092382393521 105144180101111
txg timestamp mmp_delay vdev_guid vdev_label vdev_path
20468 261337 250274925 68396651780 3 /dev/sda
20468 261339 252023374 6267402363293 1 /dev/sdc
20468 261340 252000858 6698080955233 1 /dev/sdx
20468 261341 251980635 783892869810 2 /dev/sdy
20468 261342 253385953 8923255792467 3 /dev/sdd
20468 261344 253336622 042125143176 0 /dev/sdab
20468 261345 253310522 1200778101278 2 /dev/sde
20468 261346 253286429 0950576198362 2 /dev/sdt
20468 261347 253261545 96209817917 3 /dev/sds
20468 261349 253238188 8555725937673 3 /dev/sdb
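A minimal userspace reader for output like the sample above can be sketched as follows. It assumes the layout shown (a numeric kstat header line, a column-name line, then one row per MMP update); the helper name is illustrative, not part of ZFS.

```python
# Sketch: parse the multihost kstat text shown above into dicts.
# Assumes line 1 is the kstat header, line 2 names the columns,
# and each remaining line is one MMP update row.

def parse_multihost_kstat(text):
    """Return a list of dicts, one per MMP update row."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    columns = lines[1].split()          # txg timestamp mmp_delay ...
    rows = []
    for ln in lines[2:]:
        row = dict(zip(columns, ln.split()))
        # Convert the purely numeric counters; vdev_guid is kept as a
        # string so values with leading zeros survive unchanged.
        row["txg"] = int(row["txg"])
        row["mmp_delay"] = int(row["mmp_delay"])
        rows.append(row)
    return rows
```

Pointing this at /proc/spl/kstat/zfs/&lt;pool&gt;/multihost and diffing successive reads is essentially what an arcstat.py-style tool for MMP would do.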
Add a new tunable zfs_multihost_history to specify the number of MMP
updates to store history for. By default it is set to zero, meaning that
no MMP statistics are stored.
When using ztest to generate activity, for automated tests of the MMP
function, some test functions interfere with the test. For example, the
pool is exported to run zdb and then imported again. Add a new ztest
function, "-M", to alter ztest behavior to prevent this.
Add new tests to verify the new functionality. Tests provided by
Giuseppe Di Natale.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: Ned Bass <bass6@llnl.gov>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Olaf Faaland <faaland1@llnl.gov>
Closes #745
Closes #6279
2017-07-08 03:20:35 +00:00
MODULE_PARM_DESC(zfs_read_history_hits,
	"Include cache hits in read history");
2013-10-01 16:50:50 +00:00
module_param(zfs_txg_history, int, 0644);
MODULE_PARM_DESC(zfs_txg_history,
	"Historical statistics for the last N txgs");
module_param(zfs_multihost_history, int, 0644);
MODULE_PARM_DESC(zfs_multihost_history,
	"Historical statistics for last N multihost writes");
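Because these parameters are registered with mode 0644, they are world-readable (and root-writable) through sysfs. A small sketch of reading them from userspace, assuming the standard /sys/module/zfs/parameters layout (the helper names are illustrative, not part of ZFS):

```python
# Read zfs module parameters through sysfs. The path follows the
# standard Linux module-parameter convention; these helpers are
# illustrative, not part of ZFS itself.
from pathlib import Path

PARAM_DIR = Path("/sys/module/zfs/parameters")

def module_param_path(name):
    """Sysfs path for a zfs module parameter, e.g. zfs_txg_history."""
    return PARAM_DIR / name

def read_param(name):
    """Current value of an integer module parameter (requires the
    zfs module to be loaded)."""
    return int(module_param_path(name).read_text())
```

Writing a value (as root) to the same file adjusts the tunable at runtime, e.g. enabling txg history by writing a nonzero count.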
/* END CSTYLED */
#endif