.\"
.\" Copyright (c) 2003 Poul-Henning Kamp
.\" Copyright (c) 2017 Alexander Motin <mav@FreeBSD.org>
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\" 3. The names of the authors may not be used to endorse or promote
.\" products derived from this software without specific prior written
.\" permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd July 4, 2017
.Dt DISKINFO 8
.Os
.Sh NAME
.Nm diskinfo
.Nd get information about a disk device
.Sh SYNOPSIS
.Nm
.Op Fl citSvw
.Ar disk ...
.Nm
.Op Fl p
.Ar disk ...
.Nm
.Op Fl s
.Ar disk ...
.Sh DESCRIPTION
The
.Nm
utility prints out information about a disk device,
and optionally runs a naive performance test on the device.
.Pp
The following options are available:
.Bl -tag -width ".Fl v"
.It Fl v
Print fields one per line with a descriptive comment.
.It Fl c
Perform a simple measurement of the I/O read command overhead.
.It Fl i
Perform a simple IOPS benchmark.
.It Fl p
Return the physical path of the disk.
This is a string that identifies the physical path to the disk in the
storage enclosure.
.It Fl s
Return the disk ident, usually the serial number.
.It Fl S
Perform a synchronous random write test (ZFS SLOG test),
measuring the time required to write data blocks of different sizes and
to flush the disk cache.
Blocks of more than 128KB are written with multiple parallel operations.
See the sketch following this option list.
.It Fl t
Perform a simple and rather naive benchmark of the disk's seek
and transfer performance.
.It Fl w
Allow disruptive write tests.
.El
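.Pp
The following fragment is a rough sketch, not the actual
.Nm
source, of the kind of loop the
.Fl S
test runs for each block size: write a block at a random,
sector-aligned offset, force the device write cache to stable storage
with the
.Dv DIOCGFLUSH
ioctl from
.In sys/disk.h ,
and average the elapsed time per operation.
The helper name is arbitrary, error handling is omitted, and the
parallel writes used for blocks larger than 128KB are not shown.
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/disk.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Average number of microseconds per write-and-flush operation. */
static double
sync_write_usec(int fd, off_t mediasize, u_int sectorsize,
    size_t bufsize, int iterations)
{
	struct timeval t0, t1;
	char *buf;
	off_t blocks, offset;
	int i;

	buf = aligned_alloc(sectorsize, bufsize);
	memset(buf, 0xa5, bufsize);
	blocks = (mediasize - (off_t)bufsize) / sectorsize;
	gettimeofday(&t0, NULL);
	for (i = 0; i < iterations; i++) {
		/* Random, sector-aligned offset within the media. */
		offset = (random() % blocks) * sectorsize;
		pwrite(fd, buf, bufsize, offset);
		/* Force the drive cache to stable storage. */
		ioctl(fd, DIOCGFLUSH);
	}
	gettimeofday(&t1, NULL);
	free(buf);
	return (((t1.tv_sec - t0.tv_sec) * 1000000.0 +
	    (t1.tv_usec - t0.tv_usec)) / iterations);
}
.Ed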
.Pp
If given no arguments, the output will be a single line per specified device
with the following fields: device name, sectorsize, media size in bytes,
media size in sectors, stripe size, stripe offset, firmware cylinders,
firmware heads, and firmware sectors.
The last three fields are only present if the information is available.
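.Pp
These values are typically obtained through the disk ioctl interface
declared in
.In sys/disk.h ;
requests a device does not implement simply fail, which is why some
fields may be absent.
The following condensed sketch, provided for illustration only and not
taken from the
.Nm
source, shows how a few of the fields can be queried directly; the
program name in the usage message is arbitrary.
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/disk.h>
#include <sys/ioctl.h>
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	off_t mediasize, stripesize, stripeoffset;
	u_int sectorsize;
	int fd;

	if (argc != 2)
		errx(1, "usage: getdiskinfo device");
	fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		err(1, "%s", argv[1]);
	ioctl(fd, DIOCGSECTORSIZE, &sectorsize);
	ioctl(fd, DIOCGMEDIASIZE, &mediasize);
	ioctl(fd, DIOCGSTRIPESIZE, &stripesize);
	ioctl(fd, DIOCGSTRIPEOFFSET, &stripeoffset);
	/* Device name, sector size, media size in bytes and in
	   sectors, stripe size and stripe offset, one line per device. */
	printf("%s %u %jd %jd %jd %jd", argv[1], sectorsize,
	    (intmax_t)mediasize, (intmax_t)(mediasize / sectorsize),
	    (intmax_t)stripesize, (intmax_t)stripeoffset);
	puts("");	/* puts(3) supplies the trailing newline */
	close(fd);
	return (0);
}
.Ed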
.Sh HISTORY
The
.Nm
command appeared in
.Fx 5.1 .
.Sh AUTHORS
The
.Nm
utility was written by
.An Poul-Henning Kamp Aq Mt phk@FreeBSD.org .
.Sh BUGS
There are, in order of increasing severity: lies,
damn lies, statistics, and computer benchmarks.