/*-
 * SPDX-License-Identifier: BSD-3-Clause
 *
 * Copyright (c) 2003 Poul-Henning Kamp
 * Copyright (c) 2015 Spectra Logic Corporation
 * Copyright (c) 2017 Alexander Motin <mav@FreeBSD.org>
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. The names of the authors may not be used to endorse or promote
 *    products derived from this software without specific prior written
 *    permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * $FreeBSD$
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <errno.h>
#include <fcntl.h>
#include <libutil.h>
#include <paths.h>
#include <err.h>
#include <geom/geom_disk.h>
#include <sysexits.h>
#include <sys/aio.h>
#include <sys/disk.h>
#include <sys/param.h>
#include <sys/stat.h>
#include <sys/time.h>
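/*
 * Benchmark tuning constants: NAIO bounds the number of outstanding
 * asynchronous I/Os issued by the IOPS test, while MAXTX (8 MB) and
 * MEGATX (1 MB) bound the transfer sizes used by the timing tests.
 */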
#define NAIO 128
#define MAXTX (8*1024*1024)
#define MEGATX (1024*1024)
static void
usage(void)
{
	fprintf(stderr, "usage: diskinfo [-cipsStvw] disk ...\n");
	exit (1);
}
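/*
 * Option flags, one per letter in the usage string above: -c (command
 * overhead test), -i (IOPS test), -p (physical path), -s (disk ident),
 * -S (synchronous write test), -t (transfer rate test), -v (verbose)
 * and -w (allow tests that write to the disk).
 */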
static int opt_c, opt_i, opt_p, opt_s, opt_S, opt_t, opt_v, opt_w;
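
/*
 * Forward declarations for the query and benchmark helpers implemented
 * later in the file; speeddisk(), commandtime(), iopsbench() and
 * slogbench() back the -t, -c, -i and -S tests, while candelete(),
 * rotationrate() and zonecheck() feed the verbose report.
 */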
static bool candelete(int fd);
static void speeddisk(int fd, off_t mediasize, u_int sectorsize);
static void commandtime(int fd, off_t mediasize, u_int sectorsize);
static void iopsbench(int fd, off_t mediasize, u_int sectorsize);
static void rotationrate(int fd, char *buf, size_t buflen);
static void slogbench(int fd, int isreg, off_t mediasize, u_int sectorsize);
static int zonecheck(int fd, uint32_t *zone_mode, char *zone_str,
    size_t zone_str_len);

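/* Shared I/O buffer for the tests above; presumably sized to MAXTX. */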
static uint8_t *buf;
int
main(int argc, char **argv)
{
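	/*
	 * For each argument, main() collects the disk properties declared
	 * below (via DIOC* ioctls for devices, or stat(2) for regular
	 * files) and then runs whichever optional tests were requested.
	 */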
	struct stat sb;
	int i, ch, fd, error, exitval = 0;
	char tstr[BUFSIZ], ident[DISK_IDENT_SIZE], physpath[MAXPATHLEN];
	char zone_desc[64];
	char rrate[64];
	struct diocgattr_arg arg;
	off_t mediasize, stripesize, stripeoffset;
	u_int sectorsize, fwsectors, fwheads, zoned = 0, isreg;
|
Add support for managing Shingled Magnetic Recording (SMR) drives.
This change includes support for SCSI SMR drives (which conform to the
Zoned Block Commands or ZBC spec) and ATA SMR drives (which conform to
the Zoned ATA Command Set or ZAC spec) behind SAS expanders.
This includes full management support through the GEOM BIO interface, and
through a new userland utility, zonectl(8), and through camcontrol(8).
This is now ready for filesystems to use to detect and manage zoned drives.
(There is no work in progress that I know of to use this for ZFS or UFS, if
anyone is interested, let me know and I may have some suggestions.)
Also, improve ATA command passthrough and dispatch support, both via ATA
and ATA passthrough over SCSI.
Also, add support to camcontrol(8) for the ATA Extended Power Conditions
feature set. You can now manage ATA device power states, and set various
idle time thresholds for a drive to enter lower power states.
Note that this change cannot be MFCed in full, because it depends on
changes to the struct bio API that break compatilibity. In order to
avoid breaking the stable API, only changes that don't touch or depend on
the struct bio changes can be merged. For example, the camcontrol(8)
changes don't depend on the new bio API, but zonectl(8) and the probe
changes to the da(4) and ada(4) drivers do depend on it.
Also note that the SMR changes have not yet been tested with an actual
SCSI ZBC device, or a SCSI to ATA translation layer (SAT) that supports
ZBC to ZAC translation. I have not yet gotten a suitable drive or SAT
layer, so any testing help would be appreciated. These changes have been
tested with Seagate Host Aware SATA drives attached to both SAS and SATA
controllers. Also, I do not have any SATA Host Managed devices, and I
suspect that it may take additional (hopefully minor) changes to support
them.
Thanks to Seagate for supplying the test hardware and answering questions.
sbin/camcontrol/Makefile:
Add epc.c and zone.c.
sbin/camcontrol/camcontrol.8:
Document the zone and epc subcommands.
sbin/camcontrol/camcontrol.c:
Add the zone and epc subcommands.
Add auxiliary register support to build_ata_cmd(). Make sure to
set the CAM_ATAIO_NEEDRESULT, CAM_ATAIO_DMA, and CAM_ATAIO_FPDMA
flags as appropriate for ATA commands.
Add a new get_ata_status() function to parse ATA result from SCSI
sense descriptors (for ATA passthrough over SCSI) and ATA I/O
requests.
sbin/camcontrol/camcontrol.h:
Update the build_ata_cmd() prototype
Add get_ata_status(), zone(), and epc().
sbin/camcontrol/epc.c:
Support for ATA Extended Power Conditions features. This includes
support for all features documented in the ACS-4 Revision 12
specification from t13.org (dated February 18, 2016).
The EPC feature set allows putting a drive into a power power mode
immediately, or setting timeouts so that the drive will
automatically enter progressively lower power states after various
idle times.
sbin/camcontrol/fwdownload.c:
Update the firmware download code for the new build_ata_cmd()
arguments.
sbin/camcontrol/zone.c:
Implement support for Shingled Magnetic Recording (SMR) drives
via SCSI Zoned Block Commands (ZBC) and ATA Zoned Device ATA
Command Set (ZAC).
These specs were developed in concert, and are functionally
identical. The primary differences are due to SCSI and ATA
differences. (SCSI is big endian, ATA is little endian, for
example.)
This includes support for all commands defined in the ZBC and
ZAC specs.
sys/cam/ata/ata_all.c:
Decode a number of additional ATA command names in ata_op_string().
Add a new CCB building function, ata_read_log().
Add ata_zac_mgmt_in() and ata_zac_mgmt_out() CCB building
functions. These support both DMA and NCQ encapsulation.
sys/cam/ata/ata_all.h:
Add prototypes for ata_read_log(), ata_zac_mgmt_out(), and
ata_zac_mgmt_in().
sys/cam/ata/ata_da.c:
Revamp the ada(4) driver to support zoned devices.
Add four new probe states to gather information needed for zone
support.
Add a new adasetflags() function to avoid duplication of large
blocks of flag setting between the async handler and register
functions.
Add new sysctl variables that describe zone support and paramters.
Add support for the new BIO_ZONE bio, and all of its subcommands:
DISK_ZONE_OPEN, DISK_ZONE_CLOSE, DISK_ZONE_FINISH, DISK_ZONE_RWP,
DISK_ZONE_REPORT_ZONES, and DISK_ZONE_GET_PARAMS.
sys/cam/scsi/scsi_all.c:
Add command descriptions for the ZBC IN/OUT commands.
Add descriptions for ZBC Host Managed devices.
Add a new function, scsi_ata_pass() to do ATA passthrough over
SCSI. This will eventually replace scsi_ata_pass_16() -- it
can create the 12, 16, and 32-byte variants of the ATA
PASS-THROUGH command, and supports setting all of the
registers defined as of SAT-4, Revision 5 (March 11, 2016).
Change scsi_ata_identify() to use scsi_ata_pass() instead of
scsi_ata_pass_16().
Add a new scsi_ata_read_log() function to facilitate reading
ATA logs via SCSI.
sys/cam/scsi/scsi_all.h:
Add the new ATA PASS-THROUGH(32) command CDB. Add extended and
variable CDB opcodes.
Add Zoned Block Device Characteristics VPD page.
Add ATA Return SCSI sense descriptor.
Add prototypes for scsi_ata_read_log() and scsi_ata_pass().
sys/cam/scsi/scsi_da.c:
Revamp the da(4) driver to support zoned devices.
Add five new probe states, four of which are needed for ATA
devices.
Add five new sysctl variables that describe zone support and
parameters.
The da(4) driver supports SCSI ZBC devices, as well as ATA ZAC
devices when they are attached via a SCSI to ATA Translation (SAT)
layer. Since ZBC -> ZAC translation is a new feature in the T10
SAT-4 spec, most SATA drives will be supported via ATA commands
sent through the SCSI ATA PASS-THROUGH command. The da(4) driver will
prefer the ZBC interface, if it is available, for performance
reasons, but will use the ATA PASS-THROUGH interface to the ZAC
command set if the SAT layer doesn't support translation yet.
As I mentioned above, ZBC command support is untested.
Add support for the new BIO_ZONE bio, and all of its subcommands:
DISK_ZONE_OPEN, DISK_ZONE_CLOSE, DISK_ZONE_FINISH, DISK_ZONE_RWP,
DISK_ZONE_REPORT_ZONES, and DISK_ZONE_GET_PARAMS.
Add scsi_zbc_in() and scsi_zbc_out() CCB building functions.
Add scsi_ata_zac_mgmt_out() and scsi_ata_zac_mgmt_in() CCB/CDB
building functions. Note that these have return values, unlike
almost all other CCB building functions in CAM. The reason is
that they can fail, depending upon the particular combination
of input parameters. The primary failure case is if the user
wants NCQ, but fails to specify additional CDB storage. NCQ
requires using the 32-byte version of the SCSI ATA PASS-THROUGH
command, and the current CAM CDB size is 16 bytes.
sys/cam/scsi/scsi_da.h:
Add ZBC IN and ZBC OUT CDBs and opcodes.
Add SCSI Report Zones data structures.
Add scsi_zbc_in(), scsi_zbc_out(), scsi_ata_zac_mgmt_out(), and
scsi_ata_zac_mgmt_in() prototypes.
sys/dev/ahci/ahci.c:
Fix SEND / RECEIVE FPDMA QUEUED in the ahci(4) driver.
ahci_setup_fis() previously set the top bits of the sector count
register in the FIS to 0 for FPDMA commands. This is okay for
read and write, because the PRIO field is the only thing in
those bits, and we don't implement that further up the stack.
But, for SEND and RECEIVE FPDMA QUEUED, the subcommand is in that
byte, so it needs to be transmitted to the drive.
In ahci_setup_fis(), always set the top 8 bits of the
sector count register. We need it in both the standard
and NCQ / FPDMA cases.
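For readers without the FIS format at hand, here is a minimal, illustrative
sketch (names are mine, not the ahci(4) code) of why zeroing the upper count
byte dropped the subcommand: in a Host-to-Device Register FIS the 16-bit
sector count value occupies bytes 12 and 13, and for SEND / RECEIVE FPDMA
QUEUED the subcommand rides in the upper byte.

#include <stdint.h>

/*
 * Illustrative sketch only: store the full 16-bit count value into the
 * two count bytes of an H2D Register FIS.
 */
static void
fis_set_count(uint8_t *fis, uint16_t count)
{
        fis[12] = count & 0xff; /* count 7:0 (the tag for NCQ commands) */
        fis[13] = count >> 8;   /* count 15:8: PRIO for READ/WRITE FPDMA,
                                   subcommand for SEND/RECEIVE FPDMA QUEUED */
}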
sys/geom/eli/g_eli.c:
Pass BIO_ZONE commands through the GELI class.
sys/geom/geom.h:
Add g_io_zonecmd() prototype.
sys/geom/geom_dev.c:
Add new DIOCZONECMD ioctl, which allows sending zone commands to
disks.
sys/geom/geom_disk.c:
Add support for BIO_ZONE commands.
sys/geom/geom_disk.h:
Add a new flag, DISKFLAG_CANZONE, that indicates that a given
GEOM disk client can handle BIO_ZONE commands.
sys/geom/geom_io.c:
Add a new function, g_io_zonecmd(), that handles execution of
BIO_ZONE commands.
Add permissions check for BIO_ZONE commands.
Add command decoding for BIO_ZONE commands.
sys/geom/geom_subr.c:
Add DDB command decoding for BIO_ZONE commands.
sys/kern/subr_devstat.c:
Record statistics for REPORT ZONES commands. Note that the
number of bytes transferred for REPORT ZONES won't quite match
what is received from the hardware. This is because we're
necessarily counting bytes coming from the da(4) / ada(4) drivers,
which are using the disk_zone.h interface to communicate up
the stack. The structure sizes it uses are slightly different
from the SCSI and ATA structure sizes.
sys/sys/ata.h:
Add many bit and structure definitions for ZAC, NCQ, and EPC
command support.
sys/sys/bio.h:
Convert the bio_cmd field to a straight enumeration. This will
yield more space for additional commands in the future. After
change r297955 and other related changes, this is now possible.
Converting to an enumeration will also prevent use as a bitmask
in the future.
sys/sys/disk.h:
Define the DIOCZONECMD ioctl.
sys/sys/disk_zone.h:
Add a new API for managing zoned disks. This is very close to
the SCSI ZBC and ATA ZAC standards, but uses integers in native
byte order instead of big endian (SCSI) or little endian (ATA)
byte arrays.
This is intended to offer the complete feature set of ZBC
and ZAC disk management without requiring the application developer
to include SCSI or ATA headers. We also use one set of headers
for ioctl consumers and kernel bio-level consumers.
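To make the byte-order point concrete, here is a minimal, generic sketch of
the kind of conversion disk_zone.h spares its consumers: a zone starting LBA
arrives from a SCSI REPORT ZONES reply as an 8-byte big-endian array (the ATA
equivalent is little endian), while the API exposes a native-order integer.
The helper name is illustrative, not part of the header.

#include <stddef.h>
#include <stdint.h>

/*
 * Convert an 8-byte big-endian field (e.g. a zone starting LBA from a
 * SCSI REPORT ZONES reply) to a host-order integer.
 */
static uint64_t
be64_field(const uint8_t *p)
{
        uint64_t v = 0;
        size_t i;

        for (i = 0; i < 8; i++)
                v = (v << 8) | p[i];
        return (v);
}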
sys/sys/param.h:
Bump __FreeBSD_version for sys/bio.h command changes, and inclusion
of SMR support.
usr.sbin/Makefile:
Add the zonectl utility.
usr.sbin/diskinfo/diskinfo.c
Add disk zoning capability to the 'diskinfo -v' output.
usr.sbin/zonectl/Makefile:
Add zonectl makefile.
usr.sbin/zonectl/zonectl.8
zonectl(8) man page.
usr.sbin/zonectl/zonectl.c
The zonectl(8) utility. This allows managing SCSI or ATA zoned
disks via the disk_zone.h API. You can report zones, reset write
pointers, get parameters, etc.
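As a rough illustration of the ioctl path that zonectl(8) and diskinfo(8)
use, the sketch below asks a disk for its zoning parameters via DIOCZONECMD.
It is written from memory of sys/sys/disk_zone.h, so the structure and field
names (struct disk_zone_args, zone_cmd, zone_params.disk_params.zone_mode)
are assumptions that should be checked against the header.

#include <sys/types.h>
#include <sys/disk.h>
#include <sys/disk_zone.h>
#include <sys/ioctl.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        struct disk_zone_args zargs;
        int fd;

        if (argc != 2)
                errx(1, "usage: zoneprobe <device>");
        if ((fd = open(argv[1], O_RDONLY)) < 0)
                err(1, "open %s", argv[1]);

        memset(&zargs, 0, sizeof(zargs));
        zargs.zone_cmd = DISK_ZONE_GET_PARAMS;  /* assumed field name */
        if (ioctl(fd, DIOCZONECMD, &zargs) < 0)
                err(1, "DIOCZONECMD");
        printf("zone mode: %u\n",
            zargs.zone_params.disk_params.zone_mode);   /* assumed layout */
        close(fd);
        return (0);
}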
Sponsored by: Spectra Logic
Differential Revision: https://reviews.freebsd.org/D6147
Reviewed by: wblock (documentation)
2016-05-19 14:08:36 +00:00
|
|
|
uint32_t zone_mode;
|
2003-04-09 10:52:10 +00:00
|
|
|
|
Add naive benchmark for SSDs in ZFS SLOG role.
ZFS SLOGs have a very specific access pattern with many cache flushes,
which none of the benchmarks I know of can simulate. Since SSD vendors
rarely specify cache flush time, this measurement can be useful to explain
why some ZFS pools are slower than expected. This test writes data chunks
of different sizes, each followed by a cache flush, similar to what a ZFS
SLOG does, and measures the average time (a stripped-down sketch of the
loop appears after this commit message).
To illustrate, here is the result for a six-year-old SATA Intel 710 Series SSD:
Synchronous random writes:
0.5 kbytes: 138.3 usec/IO = 3.5 Mbytes/s
1 kbytes: 137.7 usec/IO = 7.1 Mbytes/s
2 kbytes: 151.1 usec/IO = 12.9 Mbytes/s
4 kbytes: 158.2 usec/IO = 24.7 Mbytes/s
8 kbytes: 175.6 usec/IO = 44.5 Mbytes/s
16 kbytes: 210.1 usec/IO = 74.4 Mbytes/s
32 kbytes: 274.2 usec/IO = 114.0 Mbytes/s
64 kbytes: 416.5 usec/IO = 150.1 Mbytes/s
128 kbytes: 776.6 usec/IO = 161.0 Mbytes/s
256 kbytes: 1503.1 usec/IO = 166.3 Mbytes/s
512 kbytes: 2968.7 usec/IO = 168.4 Mbytes/s
1024 kbytes: 5866.8 usec/IO = 170.5 Mbytes/s
2048 kbytes: 11696.6 usec/IO = 171.0 Mbytes/s
4096 kbytes: 23329.6 usec/IO = 171.5 Mbytes/s
8192 kbytes: 46779.5 usec/IO = 171.0 Mbytes/s
and here is a much newer and supposedly much faster NVMe Samsung 950 PRO SSD:
Synchronous random writes:
0.5 kbytes: 2092.9 usec/IO = 0.2 Mbytes/s
1 kbytes: 2013.1 usec/IO = 0.5 Mbytes/s
2 kbytes: 2014.8 usec/IO = 1.0 Mbytes/s
4 kbytes: 2090.7 usec/IO = 1.9 Mbytes/s
8 kbytes: 2044.5 usec/IO = 3.8 Mbytes/s
16 kbytes: 2084.8 usec/IO = 7.5 Mbytes/s
32 kbytes: 2137.1 usec/IO = 14.6 Mbytes/s
64 kbytes: 2173.4 usec/IO = 28.8 Mbytes/s
128 kbytes: 2923.9 usec/IO = 42.8 Mbytes/s
256 kbytes: 3085.3 usec/IO = 81.0 Mbytes/s
512 kbytes: 3112.2 usec/IO = 160.7 Mbytes/s
1024 kbytes: 2430.6 usec/IO = 411.4 Mbytes/s
2048 kbytes: 3788.9 usec/IO = 527.9 Mbytes/s
4096 kbytes: 6198.0 usec/IO = 645.4 Mbytes/s
8192 kbytes: 10764.9 usec/IO = 743.2 Mbytes/s
While the first one obviously has maximum throughput limitations, the
second one has such high cache flush latency (about 2 milliseconds) that
it is almost useless in the SLOG role, despite its good throughput
numbers. Power loss protection is out of scope for this test, but I
suspect it may be related.
MFC after: 2 weeks
Sponsored by: iXsystems, Inc.
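The measurement loop referenced above is simple enough to sketch. The
stripped-down example below times write-plus-flush pairs; the file name,
chunk size, iteration count, and fixed write offset are arbitrary, and it
uses fsync() on a regular file where the in-tree slogbench() flushes a
disk's write cache instead.

#include <sys/time.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
        const size_t chunk = 4096;
        const int iters = 256;
        struct timeval t0, t1;
        double usec;
        char *buf;
        int fd, i;

        if ((buf = calloc(1, chunk)) == NULL)
                err(1, "calloc");
        if ((fd = open("slog-test.dat", O_RDWR | O_CREAT, 0600)) < 0)
                err(1, "open");

        gettimeofday(&t0, NULL);
        for (i = 0; i < iters; i++) {
                if (pwrite(fd, buf, chunk, 0) != (ssize_t)chunk)
                        err(1, "pwrite");
                if (fsync(fd) != 0)     /* the cache flush being measured */
                        err(1, "fsync");
        }
        gettimeofday(&t1, NULL);

        usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%8.1f usec/IO\n", usec / iters);
        free(buf);
        close(fd);
        return (0);
}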
2017-07-05 16:20:22 +00:00
|
|
|
while ((ch = getopt(argc, argv, "cipsStvw")) != -1) {
|
2003-04-09 10:52:10 +00:00
|
|
|
switch (ch) {
|
2004-11-09 12:28:41 +00:00
|
|
|
case 'c':
|
|
|
|
opt_c = 1;
|
|
|
|
opt_v = 1;
|
|
|
|
break;
|
2016-09-22 07:33:43 +00:00
|
|
|
case 'i':
|
|
|
|
opt_i = 1;
|
|
|
|
opt_v = 1;
|
|
|
|
break;
|
2017-07-01 21:34:57 +00:00
|
|
|
case 'p':
|
|
|
|
opt_p = 1;
|
|
|
|
break;
|
|
|
|
case 's':
|
|
|
|
opt_s = 1;
|
|
|
|
break;
|
2017-07-05 16:20:22 +00:00
|
|
|
case 'S':
|
|
|
|
opt_S = 1;
|
|
|
|
opt_v = 1;
|
|
|
|
break;
|
2003-04-09 10:52:10 +00:00
|
|
|
case 't':
|
|
|
|
opt_t = 1;
|
|
|
|
opt_v = 1;
|
|
|
|
break;
|
|
|
|
case 'v':
|
|
|
|
opt_v = 1;
|
|
|
|
break;
|
2017-07-05 16:20:22 +00:00
|
|
|
case 'w':
|
|
|
|
opt_w = 1;
|
|
|
|
break;
|
2003-04-09 10:52:10 +00:00
|
|
|
default:
|
|
|
|
usage();
|
|
|
|
}
|
|
|
|
}
|
|
|
|
argc -= optind;
|
|
|
|
argv += optind;
|
|
|
|
|
2004-03-30 07:37:04 +00:00
|
|
|
if (argc < 1)
|
|
|
|
usage();
|
|
|
|
|
2017-07-01 21:34:57 +00:00
|
|
|
if ((opt_p && opt_s) || ((opt_p || opt_s) && (opt_c || opt_i || opt_t || opt_v))) {
|
|
|
|
warnx("-p or -s cannot be used with other options");
|
|
|
|
usage();
|
|
|
|
}
|
|
|
|
|
2017-07-05 16:20:22 +00:00
|
|
|
if (opt_S && !opt_w) {
|
|
|
|
warnx("-S require also -w");
|
|
|
|
usage();
|
|
|
|
}
|
|
|
|
|
2017-10-01 16:59:02 +00:00
|
|
|
if (posix_memalign((void **)&buf, PAGE_SIZE, MAXTX))
|
|
|
|
errx(1, "Can't allocate memory buffer");
|
2003-04-09 10:52:10 +00:00
|
|
|
for (i = 0; i < argc; i++) {
|
2017-07-05 16:20:22 +00:00
|
|
|
fd = open(argv[i], (opt_w ? O_RDWR : O_RDONLY) | O_DIRECT);
|
2003-04-09 10:52:10 +00:00
|
|
|
if (fd < 0 && errno == ENOENT && *argv[i] != '/') {
|
2017-10-01 16:59:02 +00:00
|
|
|
snprintf(tstr, sizeof(tstr), "%s%s", _PATH_DEV, argv[i]);
|
|
|
|
fd = open(tstr, O_RDONLY);
|
2003-04-09 10:52:10 +00:00
|
|
|
}
|
2011-02-08 11:32:22 +00:00
|
|
|
if (fd < 0) {
|
|
|
|
warn("%s", argv[i]);
|
2017-01-04 00:39:06 +00:00
|
|
|
exit(1);
|
2011-02-08 11:32:22 +00:00
|
|
|
}
|
2016-09-21 11:17:58 +00:00
|
|
|
error = fstat(fd, &sb);
|
|
|
|
if (error != 0) {
|
|
|
|
warn("cannot stat %s", argv[i]);
|
2011-02-08 11:32:22 +00:00
|
|
|
exitval = 1;
|
|
|
|
goto out;
|
|
|
|
}
|
2017-07-05 16:20:22 +00:00
|
|
|
isreg = S_ISREG(sb.st_mode);
|
|
|
|
if (isreg) {
|
2016-09-21 11:17:58 +00:00
|
|
|
mediasize = sb.st_size;
|
|
|
|
sectorsize = S_BLKSIZE;
|
2003-04-09 10:52:10 +00:00
|
|
|
fwsectors = 0;
|
|
|
|
fwheads = 0;
|
2016-09-21 11:17:58 +00:00
|
|
|
stripesize = sb.st_blksize;
|
2009-12-24 21:39:30 +00:00
|
|
|
stripeoffset = 0;
|
2017-07-01 21:34:57 +00:00
|
|
|
if (opt_p || opt_s) {
|
|
|
|
warnx("-p and -s only operate on physical devices: %s", argv[i]);
|
|
|
|
goto out;
|
|
|
|
}
|
2016-09-21 11:17:58 +00:00
|
|
|
} else {
|
2017-07-01 21:34:57 +00:00
|
|
|
if (opt_p) {
|
|
|
|
if (ioctl(fd, DIOCGPHYSPATH, physpath) == 0) {
|
|
|
|
printf("%s\n", physpath);
|
|
|
|
} else {
|
|
|
|
warnx("Failed to determine physpath for: %s", argv[i]);
|
|
|
|
}
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
if (opt_s) {
|
|
|
|
if (ioctl(fd, DIOCGIDENT, ident) == 0) {
|
|
|
|
printf("%s\n", ident);
|
|
|
|
} else {
|
|
|
|
warnx("Failed to determine serial number for: %s", argv[i]);
|
|
|
|
}
|
|
|
|
goto out;
|
|
|
|
}
|
2016-09-21 11:17:58 +00:00
|
|
|
error = ioctl(fd, DIOCGMEDIASIZE, &mediasize);
|
|
|
|
if (error) {
|
|
|
|
warnx("%s: ioctl(DIOCGMEDIASIZE) failed, probably not a disk.", argv[i]);
|
|
|
|
exitval = 1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
error = ioctl(fd, DIOCGSECTORSIZE, &sectorsize);
|
|
|
|
if (error) {
|
|
|
|
warnx("%s: ioctl(DIOCGSECTORSIZE) failed, probably not a disk.", argv[i]);
|
|
|
|
exitval = 1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
error = ioctl(fd, DIOCGFWSECTORS, &fwsectors);
|
|
|
|
if (error)
|
|
|
|
fwsectors = 0;
|
|
|
|
error = ioctl(fd, DIOCGFWHEADS, &fwheads);
|
|
|
|
if (error)
|
|
|
|
fwheads = 0;
|
|
|
|
error = ioctl(fd, DIOCGSTRIPESIZE, &stripesize);
|
|
|
|
if (error)
|
|
|
|
stripesize = 0;
|
|
|
|
error = ioctl(fd, DIOCGSTRIPEOFFSET, &stripeoffset);
|
|
|
|
if (error)
|
|
|
|
stripeoffset = 0;
|
|
|
|
error = zonecheck(fd, &zone_mode, zone_desc, sizeof(zone_desc));
|
|
|
|
if (error == 0)
|
|
|
|
zoned = 1;
|
|
|
|
}
|
2003-04-09 10:52:10 +00:00
|
|
|
if (!opt_v) {
|
|
|
|
printf("%s", argv[i]);
|
|
|
|
printf("\t%u", sectorsize);
|
|
|
|
printf("\t%jd", (intmax_t)mediasize);
|
|
|
|
printf("\t%jd", (intmax_t)mediasize/sectorsize);
|
2009-12-24 21:39:30 +00:00
|
|
|
printf("\t%jd", (intmax_t)stripesize);
|
|
|
|
printf("\t%jd", (intmax_t)stripeoffset);
|
2003-04-09 10:52:10 +00:00
|
|
|
if (fwsectors != 0 && fwheads != 0) {
|
|
|
|
printf("\t%jd", (intmax_t)mediasize /
|
|
|
|
(fwsectors * fwheads * sectorsize));
|
|
|
|
printf("\t%u", fwheads);
|
|
|
|
printf("\t%u", fwsectors);
|
|
|
|
}
|
|
|
|
} else {
|
2017-10-01 16:59:02 +00:00
|
|
|
humanize_number(tstr, 5, (int64_t)mediasize, "",
|
2004-05-24 22:52:32 +00:00
|
|
|
HN_AUTOSCALE, HN_B | HN_NOSPACE | HN_DECIMAL);
|
2003-04-09 10:52:10 +00:00
|
|
|
printf("%s\n", argv[i]);
|
|
|
|
printf("\t%-12u\t# sectorsize\n", sectorsize);
|
2004-05-24 22:52:32 +00:00
|
|
|
printf("\t%-12jd\t# mediasize in bytes (%s)\n",
|
2017-10-01 16:59:02 +00:00
|
|
|
(intmax_t)mediasize, tstr);
|
2003-04-09 10:52:10 +00:00
|
|
|
printf("\t%-12jd\t# mediasize in sectors\n",
|
|
|
|
(intmax_t)mediasize/sectorsize);
|
2009-12-24 21:39:30 +00:00
|
|
|
printf("\t%-12jd\t# stripesize\n", stripesize);
|
|
|
|
printf("\t%-12jd\t# stripeoffset\n", stripeoffset);
|
2003-04-09 10:52:10 +00:00
|
|
|
if (fwsectors != 0 && fwheads != 0) {
|
|
|
|
printf("\t%-12jd\t# Cylinders according to firmware.\n", (intmax_t)mediasize /
|
|
|
|
(fwsectors * fwheads * sectorsize));
|
|
|
|
printf("\t%-12u\t# Heads according to firmware.\n", fwheads);
|
|
|
|
printf("\t%-12u\t# Sectors according to firmware.\n", fwsectors);
|
|
|
|
}
|
2017-07-06 09:05:38 +00:00
|
|
|
strlcpy(arg.name, "GEOM::descr", sizeof(arg.name));
|
|
|
|
arg.len = sizeof(arg.value.str);
|
|
|
|
if (ioctl(fd, DIOCGATTR, &arg) == 0)
|
|
|
|
printf("\t%-12s\t# Disk descr.\n", arg.value.str);
|
2009-09-03 22:19:09 +00:00
|
|
|
if (ioctl(fd, DIOCGIDENT, ident) == 0)
|
2007-05-06 00:25:21 +00:00
|
|
|
printf("\t%-12s\t# Disk ident.\n", ident);
|
Plumb device physical path reporting from CAM devices, through GEOM and
DEVFS, and make it accessible via the diskinfo utility.
Extend GEOM's generic attribute query mechanism into generic disk consumers.
sys/geom/geom_disk.c:
sys/geom/geom_disk.h:
sys/cam/scsi/scsi_da.c:
sys/cam/ata/ata_da.c:
- Allow disk providers to implement a new method which can override
the default BIO_GETATTR response, d_getattr(struct bio *). This
function returns -1 if not handled, otherwise it returns 0 or an
errno to be passed to g_io_deliver().
sys/cam/scsi/scsi_da.c:
sys/cam/ata/ata_da.c:
- Don't copy the serial number to dp->d_ident anymore, as the CAM XPT
is now responsible for returning this information via
d_getattr()->(a)dagetattr()->xpt_getatr().
sys/geom/geom_dev.c:
- Implement a new ioctl, DIOCGPHYSPATH, which returns the GEOM
attribute "GEOM::physpath", if possible. If the attribute request
returns a zero-length string, ENOENT is returned.
usr.sbin/diskinfo/diskinfo.c:
- If the DIOCGPHYSPATH ioctl is successful, report physical path
data when diskinfo is executed with the '-v' option.
Submitted by: will
Reviewed by: gibbs
Sponsored by: Spectra Logic Corporation
Add generic attribute change notification support to GEOM.
sys/sys/geom/geom.h:
Add a new attrchanged method field to both g_class
and g_geom.
sys/sys/geom/geom.h:
sys/geom/geom_event.c:
- Provide the g_attr_changed() function that providers
can use to advertise attribute changes.
- Perform delivery of attribute change notifications
from a thread context via the standard GEOM event
mechanism.
sys/geom/geom_subr.c:
Inherit the attrchanged method from class to geom (class instance).
sys/geom/geom_disk.c:
Provide disk_attr_changed() to provide g_attr_changed() access
to consumers of the disk API.
sys/cam/scsi/scsi_pass.c:
sys/cam/scsi/scsi_da.c:
sys/geom/geom_dev.c:
sys/geom/geom_disk.c:
Use attribute changed events to track updates to physical path
information.
sys/cam/scsi/scsi_da.c:
Add AC_ADVINFO_CHANGED to the registered asynchronous CAM
events for this driver. When this event occurs, and
the updated buffer type references our physical path
attribute, emit a GEOM attribute changed event via the
disk_attr_changed() API.
sys/cam/scsi/scsi_pass.c:
Add AC_ADVINFO_CHANGED to the registered asynchronous CAM
events for this driver. When this event occurs, update
the physical path devfs alias for this pass instance.
Submitted by: gibbs
Sponsored by: Spectra Logic Corporation
2011-06-14 17:10:32 +00:00
|
|
|
if (ioctl(fd, DIOCGPHYSPATH, physpath) == 0)
|
|
|
|
printf("\t%-12s\t# Physical path\n", physpath);
|
2017-10-04 15:09:49 +00:00
|
|
|
printf("\t%-12s\t# TRIM/UNMAP support\n",
|
|
|
|
candelete(fd) ? "Yes" : "No");
|
|
|
|
rotationrate(fd, rrate, sizeof(rrate));
|
|
|
|
printf("\t%-12s\t# Rotation rate in RPM\n", rrate);
|
2016-05-19 14:08:36 +00:00
|
|
|
if (zoned != 0)
|
|
|
|
printf("\t%-12s\t# Zone Mode\n", zone_desc);
|
2003-04-09 10:52:10 +00:00
|
|
|
}
|
|
|
|
printf("\n");
|
2004-11-09 12:28:41 +00:00
|
|
|
if (opt_c)
|
|
|
|
commandtime(fd, mediasize, sectorsize);
|
2003-04-09 10:52:10 +00:00
|
|
|
if (opt_t)
|
2004-04-05 08:15:04 +00:00
|
|
|
speeddisk(fd, mediasize, sectorsize);
|
2016-09-22 07:33:43 +00:00
|
|
|
if (opt_i)
|
|
|
|
iopsbench(fd, mediasize, sectorsize);
|
2017-07-05 16:20:22 +00:00
|
|
|
if (opt_S)
|
|
|
|
slogbench(fd, isreg, mediasize, sectorsize);
|
2011-02-08 11:32:22 +00:00
|
|
|
out:
|
2003-04-09 10:52:10 +00:00
|
|
|
close(fd);
|
|
|
|
}
|
2017-10-01 16:59:02 +00:00
|
|
|
free(buf);
|
2011-02-08 11:32:22 +00:00
|
|
|
exit (exitval);
|
2003-04-09 10:52:10 +00:00
|
|
|
}
|
|
|
|
|
2017-10-04 15:09:49 +00:00
|
|
|
static bool
|
|
|
|
candelete(int fd)
|
|
|
|
{
|
|
|
|
struct diocgattr_arg arg;
|
|
|
|
|
|
|
|
strlcpy(arg.name, "GEOM::candelete", sizeof(arg.name));
|
|
|
|
arg.len = sizeof(arg.value.i);
|
|
|
|
if (ioctl(fd, DIOCGATTR, &arg) == 0)
|
|
|
|
return (arg.value.i != 0);
|
|
|
|
else
|
|
|
|
return (false);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rotationrate(int fd, char *rate, size_t buflen)
|
|
|
|
{
|
|
|
|
struct diocgattr_arg arg;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
strlcpy(arg.name, "GEOM::rotation_rate", sizeof(arg.name));
|
|
|
|
arg.len = sizeof(arg.value.u16);
|
|
|
|
|
|
|
|
ret = ioctl(fd, DIOCGATTR, &arg);
|
|
|
|
if (ret < 0 || arg.value.u16 == DISK_RR_UNKNOWN)
|
|
|
|
snprintf(rate, buflen, "Unknown");
|
|
|
|
else if (arg.value.u16 == DISK_RR_NON_ROTATING)
|
|
|
|
snprintf(rate, buflen, "%d", 0);
|
|
|
|
else if (arg.value.u16 >= DISK_RR_MIN && arg.value.u16 <= DISK_RR_MAX)
|
|
|
|
snprintf(rate, buflen, "%d", arg.value.u16);
|
|
|
|
else
|
|
|
|
snprintf(rate, buflen, "Invalid");
|
|
|
|
}
|
|
|
|
|
2003-04-09 10:52:10 +00:00
|
|
|
static void
|
2011-07-21 19:39:40 +00:00
|
|
|
rdsect(int fd, off_t blockno, u_int sectorsize)
|
2003-04-09 10:52:10 +00:00
|
|
|
{
|
|
|
|
int error;
|
|
|
|
|
2017-01-04 00:39:06 +00:00
|
|
|
if (lseek(fd, (off_t)blockno * sectorsize, SEEK_SET) == -1)
|
|
|
|
err(1, "lseek");
|
2017-07-05 16:20:22 +00:00
|
|
|
error = read(fd, buf, sectorsize);
|
2012-03-09 18:34:14 +00:00
|
|
|
if (error == -1)
|
|
|
|
err(1, "read");
|
2004-04-05 08:15:04 +00:00
|
|
|
if (error != (int)sectorsize)
|
2012-03-09 18:34:14 +00:00
|
|
|
errx(1, "disk too small for test.");
|
2003-04-09 10:52:10 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
rdmega(int fd)
|
|
|
|
{
|
|
|
|
int error;
|
|
|
|
|
2017-07-05 16:20:22 +00:00
|
|
|
error = read(fd, buf, MEGATX);
|
2012-03-09 18:34:14 +00:00
|
|
|
if (error == -1)
|
|
|
|
err(1, "read");
|
2017-07-05 16:20:22 +00:00
|
|
|
if (error != MEGATX)
|
2012-03-09 18:34:14 +00:00
|
|
|
errx(1, "disk too small for test.");
|
2003-04-09 10:52:10 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static struct timeval tv1, tv2;
|
|
|
|
|
|
|
|
static void
|
|
|
|
T0(void)
|
|
|
|
{
|
|
|
|
|
|
|
|
fflush(stdout);
|
|
|
|
sync();
|
|
|
|
sleep(1);
|
|
|
|
sync();
|
|
|
|
sync();
|
|
|
|
gettimeofday(&tv1, NULL);
|
|
|
|
}
|
|
|
|
|
2016-09-21 18:07:25 +00:00
|
|
|
static double
|
|
|
|
delta_t(void)
|
2003-04-09 10:52:10 +00:00
|
|
|
{
|
|
|
|
double dt;
|
|
|
|
|
|
|
|
gettimeofday(&tv2, NULL);
|
|
|
|
dt = (tv2.tv_usec - tv1.tv_usec) / 1e6;
|
|
|
|
dt += (tv2.tv_sec - tv1.tv_sec);
|
2016-09-21 18:07:25 +00:00
|
|
|
|
|
|
|
return (dt);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
TN(int count)
|
|
|
|
{
|
|
|
|
double dt;
|
|
|
|
|
|
|
|
dt = delta_t();
|
2003-04-09 10:52:10 +00:00
|
|
|
printf("%5d iter in %10.6f sec = %8.3f msec\n",
|
|
|
|
count, dt, dt * 1000.0 / count);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
TR(double count)
|
|
|
|
{
|
|
|
|
double dt;
|
|
|
|
|
2016-09-21 18:07:25 +00:00
|
|
|
dt = delta_t();
|
2003-04-09 10:52:10 +00:00
|
|
|
printf("%8.0f kbytes in %10.6f sec = %8.0f kbytes/sec\n",
|
|
|
|
count, dt, count / dt);
|
|
|
|
}
|
|
|
|
|
2016-09-22 07:33:43 +00:00
|
|
|
static void
|
|
|
|
TI(double count)
|
|
|
|
{
|
|
|
|
double dt;
|
|
|
|
|
|
|
|
dt = delta_t();
|
|
|
|
printf("%8.0f ops in %10.6f sec = %8.0f IOPS\n",
|
|
|
|
count, dt, count / dt);
|
|
|
|
}
|
|
|
|
|
2017-07-05 16:20:22 +00:00
|
|
|
static void
|
|
|
|
TS(u_int size, int count)
|
|
|
|
{
|
|
|
|
double dt;
|
|
|
|
|
|
|
|
dt = delta_t();
|
|
|
|
printf("%8.1f usec/IO = %8.1f Mbytes/s\n",
|
2017-11-27 20:01:43 +00:00
|
|
|
dt * 1000000.0 / count, (double)size * count / dt / (1024 * 1024));
|
2017-07-05 16:20:22 +00:00
|
|
|
}
|
|
|
|
|
2003-04-09 10:52:10 +00:00
|
|
|
static void
|
2004-04-05 08:15:04 +00:00
|
|
|
speeddisk(int fd, off_t mediasize, u_int sectorsize)
|
2003-04-09 10:52:10 +00:00
|
|
|
{
|
2011-07-21 19:39:40 +00:00
|
|
|
int bulk, i;
|
|
|
|
off_t b0, b1, sectorcount, step;
|
2003-04-09 10:52:10 +00:00
|
|
|
|
2018-01-06 12:34:03 +00:00
|
|
|
/*
|
|
|
|
* Drives smaller than 1MB produce negative sector numbers,
|
|
|
|
* as do drives with 2048 or fewer sectors.
|
|
|
|
*/
|
2003-04-09 10:52:10 +00:00
|
|
|
sectorcount = mediasize / sectorsize;
|
2018-01-06 12:34:03 +00:00
|
|
|
if (mediasize < 1024 * 1024 || sectorcount < 2048)
|
|
|
|
return;
|
|
|
|
|
2017-01-04 00:39:06 +00:00
|
|
|
|
2011-07-21 19:39:40 +00:00
|
|
|
step = 1ULL << (flsll(sectorcount / (4 * 200)) - 1);
|
|
|
|
if (step > 16384)
|
|
|
|
step = 16384;
|
|
|
|
bulk = mediasize / (1024 * 1024);
|
|
|
|
if (bulk > 100)
|
|
|
|
bulk = 100;
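/*
 * Worked example (illustrative numbers, not from the source): a 100 GB
 * disk with 512-byte sectors has sectorcount ~= 195e6, so
 * sectorcount / (4 * 200) ~= 244000, flsll() of that is 18, and step
 * starts at 1 << 17 = 131072 before being clamped to 16384 sectors.
 * bulk starts at mediasize / 1MB ~= 95000 and is clamped to 100, i.e.
 * 100 MB is read for each transfer-rate measurement below.
 */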

	printf("Seek times:\n");
	printf("\tFull stroke:\t");
	b0 = 0;
	b1 = sectorcount - step;
	T0();
	for (i = 0; i < 125; i++) {
		rdsect(fd, b0, sectorsize);
		b0 += step;
		rdsect(fd, b1, sectorsize);
		b1 -= step;
	}
	TN(250);

	printf("\tHalf stroke:\t");
	b0 = sectorcount / 4;
	b1 = b0 + sectorcount / 2;
	T0();
	for (i = 0; i < 125; i++) {
		rdsect(fd, b0, sectorsize);
		b0 += step;
		rdsect(fd, b1, sectorsize);
		b1 += step;
	}
	TN(250);

	printf("\tQuarter stroke:\t");
	b0 = sectorcount / 4;
	b1 = b0 + sectorcount / 4;
	T0();
	for (i = 0; i < 250; i++) {
		rdsect(fd, b0, sectorsize);
		b0 += step;
		rdsect(fd, b1, sectorsize);
		b1 += step;
	}
	TN(500);

	printf("\tShort forward:\t");
	b0 = sectorcount / 2;
	T0();
	for (i = 0; i < 400; i++) {
		rdsect(fd, b0, sectorsize);
		b0 += step;
	}
	TN(400);

	printf("\tShort backward:\t");
	b0 = sectorcount / 2;
	T0();
	for (i = 0; i < 400; i++) {
		rdsect(fd, b0, sectorsize);
		b0 -= step;
	}
	TN(400);

	printf("\tSeq outer:\t");
	b0 = 0;
	T0();
	for (i = 0; i < 2048; i++) {
		rdsect(fd, b0, sectorsize);
		b0++;
	}
	TN(2048);

	printf("\tSeq inner:\t");
	b0 = sectorcount - 2048;
	T0();
	for (i = 0; i < 2048; i++) {
		rdsect(fd, b0, sectorsize);
		b0++;
	}
	TN(2048);

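	/*
	 * Descriptive note (added): the three passes below read bulk MB
	 * sequentially at the outer edge, the middle, and the inner edge of
	 * the device; TR(bulk * 1024) turns the elapsed time for each pass
	 * into a throughput figure.
	 */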
	printf("\nTransfer rates:\n");
	printf("\toutside: ");
	rdsect(fd, 0, sectorsize);
	T0();
	for (i = 0; i < bulk; i++) {
		rdmega(fd);
	}
	TR(bulk * 1024);

	printf("\tmiddle: ");
	b0 = sectorcount / 2 - bulk * (1024*1024 / sectorsize) / 2 - 1;
	rdsect(fd, b0, sectorsize);
	T0();
	for (i = 0; i < bulk; i++) {
		rdmega(fd);
	}
	TR(bulk * 1024);

	printf("\tinside: ");
	b0 = sectorcount - bulk * (1024*1024 / sectorsize) - 1;
	rdsect(fd, b0, sectorsize);
	T0();
	for (i = 0; i < bulk; i++) {
		rdmega(fd);
	}
	TR(bulk * 1024);

	printf("\n");
	return;
}

static void
commandtime(int fd, off_t mediasize, u_int sectorsize)
{
	double dtmega, dtsector;
	int i;
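
	/*
	 * Descriptive note (added): compare the per-sector cost of one 10 MB
	 * transfer against 20480 single-sector reads; the difference is the
	 * per-command overhead.  The dt * 100 / 2048 scaling equals
	 * dt * 1000 / 20480, i.e. msec per sector assuming 512-byte sectors
	 * (10 MB == 20480 such sectors).
	 */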
	printf("I/O command overhead:\n");
	i = mediasize;
	rdsect(fd, 0, sectorsize);
	T0();
	for (i = 0; i < 10; i++)
		rdmega(fd);
	dtmega = delta_t();

	printf("\ttime to read 10MB block %10.6f sec\t= %8.3f msec/sector\n",
	    dtmega, dtmega*100/2048);

	rdsect(fd, 0, sectorsize);
	T0();
	for (i = 0; i < 20480; i++)
		rdsect(fd, 0, sectorsize);
	dtsector = delta_t();

	printf("\ttime to read 20480 sectors %10.6f sec\t= %8.3f msec/sector\n",
	    dtsector, dtsector*100/2048);
	printf("\tcalculated command overhead\t\t\t= %8.3f msec/sector\n",
	    (dtsector - dtmega)*100/2048);

	printf("\n");
	return;
}

static void
iops(int fd, off_t mediasize, u_int sectorsize)
{
	struct aiocb aios[NAIO], *aiop;
	ssize_t ret;
	off_t sectorcount;
	int error, i, queued, completed;

	sectorcount = mediasize / sectorsize;
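
	/*
	 * Descriptive note (added): keep NAIO reads of sectorsize bytes at
	 * random offsets in flight.  Each completion is immediately replaced
	 * with a new request for about three seconds, then the queue is
	 * drained and TI() reports the number of completed requests as an
	 * IOPS figure.
	 */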
	for (i = 0; i < NAIO; i++) {
		aiop = &(aios[i]);
		bzero(aiop, sizeof(*aiop));
		aiop->aio_buf = malloc(sectorsize);
		if (aiop->aio_buf == NULL)
			err(1, "malloc");
	}

	T0();
	for (i = 0; i < NAIO; i++) {
		aiop = &(aios[i]);

		aiop->aio_fildes = fd;
		aiop->aio_offset = (random() % (sectorcount)) * sectorsize;
		aiop->aio_nbytes = sectorsize;

		error = aio_read(aiop);
		if (error != 0)
			err(1, "aio_read");
	}

	queued = i;
	completed = 0;

	for (;;) {
		ret = aio_waitcomplete(&aiop, NULL);
		if (ret < 0)
			err(1, "aio_waitcomplete");
		if (ret != (ssize_t)sectorsize)
			errx(1, "short read");

		completed++;

		if (delta_t() < 3.0) {
			aiop->aio_fildes = fd;
			aiop->aio_offset = (random() % (sectorcount)) * sectorsize;
			aiop->aio_nbytes = sectorsize;

			error = aio_read(aiop);
			if (error != 0)
				err(1, "aio_read");

			queued++;
		} else if (completed == queued) {
			break;
		}
	}

	TI(completed);

	return;
}

static void
iopsbench(int fd, off_t mediasize, u_int sectorsize)
{
	printf("Asynchronous random reads:\n");

	printf("\tsectorsize: ");
	iops(fd, mediasize, sectorsize);

	if (sectorsize != 4096) {
		printf("\t4 kbytes: ");
		iops(fd, mediasize, 4096);
	}

	printf("\t32 kbytes: ");
	iops(fd, mediasize, 32 * 1024);

	printf("\t128 kbytes: ");
	iops(fd, mediasize, 128 * 1024);

	printf("\n");
}

Add naive benchmark for SSDs in ZFS SLOG role.
ZFS SLOGs have a very specific access pattern with many cache flushes,
which none of the benchmarks I know of can simulate. Since SSD vendors
rarely specify cache flush time, this measurement can be useful to explain
why some ZFS pools are slower than expected. This test writes data chunks
of different sizes, each followed by a cache flush, similar to what a ZFS
SLOG does, and measures the average time.
To illustrate, here is the result for a six-year-old SATA Intel 710 Series SSD:
Synchronous random writes:
0.5 kbytes: 138.3 usec/IO = 3.5 Mbytes/s
1 kbytes: 137.7 usec/IO = 7.1 Mbytes/s
2 kbytes: 151.1 usec/IO = 12.9 Mbytes/s
4 kbytes: 158.2 usec/IO = 24.7 Mbytes/s
8 kbytes: 175.6 usec/IO = 44.5 Mbytes/s
16 kbytes: 210.1 usec/IO = 74.4 Mbytes/s
32 kbytes: 274.2 usec/IO = 114.0 Mbytes/s
64 kbytes: 416.5 usec/IO = 150.1 Mbytes/s
128 kbytes: 776.6 usec/IO = 161.0 Mbytes/s
256 kbytes: 1503.1 usec/IO = 166.3 Mbytes/s
512 kbytes: 2968.7 usec/IO = 168.4 Mbytes/s
1024 kbytes: 5866.8 usec/IO = 170.5 Mbytes/s
2048 kbytes: 11696.6 usec/IO = 171.0 Mbytes/s
4096 kbytes: 23329.6 usec/IO = 171.5 Mbytes/s
8192 kbytes: 46779.5 usec/IO = 171.0 Mbytes/s
and here is a much newer and supposedly much faster NVMe Samsung 950 PRO SSD:
Synchronous random writes:
0.5 kbytes: 2092.9 usec/IO = 0.2 Mbytes/s
1 kbytes: 2013.1 usec/IO = 0.5 Mbytes/s
2 kbytes: 2014.8 usec/IO = 1.0 Mbytes/s
4 kbytes: 2090.7 usec/IO = 1.9 Mbytes/s
8 kbytes: 2044.5 usec/IO = 3.8 Mbytes/s
16 kbytes: 2084.8 usec/IO = 7.5 Mbytes/s
32 kbytes: 2137.1 usec/IO = 14.6 Mbytes/s
64 kbytes: 2173.4 usec/IO = 28.8 Mbytes/s
128 kbytes: 2923.9 usec/IO = 42.8 Mbytes/s
256 kbytes: 3085.3 usec/IO = 81.0 Mbytes/s
512 kbytes: 3112.2 usec/IO = 160.7 Mbytes/s
1024 kbytes: 2430.6 usec/IO = 411.4 Mbytes/s
2048 kbytes: 3788.9 usec/IO = 527.9 Mbytes/s
4096 kbytes: 6198.0 usec/IO = 645.4 Mbytes/s
8192 kbytes: 10764.9 usec/IO = 743.2 Mbytes/s
While the first one clearly has limited maximum throughput, the second one
has such high cache flush latency (about 2 milliseconds) that it is almost
useless in the SLOG role, despite its good throughput numbers. Power loss
protection is out of scope for this test, but I suspect it may be related.
MFC after: 2 weeks
Sponsored by: iXsystems, Inc.
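
/*
 * Usage sketch (added): the commit message above describes the
 * synchronous-write benchmark implemented by parwrite() and slogbench()
 * below.  Flag handling lives elsewhere in this file; see diskinfo(8) for
 * the authoritative syntax, but the test is typically reached with
 * something like:
 *
 *	diskinfo -wS /dev/ada0
 *
 * where -S selects the synchronous write test and -w permits writing to
 * the device.
 */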

#define MAXIO (128*1024)
#define MAXIOS (MAXTX / MAXIO)
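
/*
 * Descriptive note (added): MAXTX (defined elsewhere in this file) is the
 * largest transfer size the synchronous write test uses, so MAXIOS is an
 * upper bound on the number of MAXIO-sized asynchronous pieces a single
 * parwrite() call may have in flight at once.
 */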

static void
parwrite(int fd, size_t size, off_t off)
{
	struct aiocb aios[MAXIOS];
	off_t o;
	int n, error;
	struct aiocb *aiop;

	// if size > MAXIO, use AIO to write n - 1 pieces in parallel
	for (n = 0, o = 0; size > MAXIO; n++, size -= MAXIO, o += MAXIO) {
		aiop = &aios[n];
		bzero(aiop, sizeof(*aiop));
		aiop->aio_buf = &buf[o];
		aiop->aio_fildes = fd;
		aiop->aio_offset = off + o;
		aiop->aio_nbytes = MAXIO;
		error = aio_write(aiop);
		if (error != 0)
			err(EX_IOERR, "AIO write submit error");
	}
	// Use synchronous writes for the runt of size <= MAXIO
	error = pwrite(fd, &buf[o], size, off + o);
	if (error < 0)
		err(EX_IOERR, "Sync write error");
	for (; n > 0; n--) {
		error = aio_waitcomplete(&aiop, NULL);
		if (error < 0)
			err(EX_IOERR, "AIO write wait error");
	}
}

static void
slogbench(int fd, int isreg, off_t mediasize, u_int sectorsize)
{
	off_t off;
	u_int size;
	int error, n, N, nowritecache = 0;
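
	/*
	 * Descriptive note (added): for each power-of-two size from
	 * sectorsize up to MAXTX, issue batches of 250 random writes, each
	 * followed by a cache flush (fsync() for a regular file, DIOCGFLUSH
	 * for a device), for at least one second, then report the average
	 * time per write.  If the flush is unsupported (ENOTSUP), flushing
	 * is skipped from then on.
	 */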
	printf("Synchronous random writes:\n");
	for (size = sectorsize; size <= MAXTX; size *= 2) {
		printf("\t%4.4g kbytes: ", (double)size / 1024);
		N = 0;
		T0();
		do {
			for (n = 0; n < 250; n++) {
				off = random() % (mediasize / size);
				parwrite(fd, size, off * size);
				if (nowritecache)
					continue;
				if (isreg)
					error = fsync(fd);
				else
					error = ioctl(fd, DIOCGFLUSH);
				if (error < 0) {
					if (errno == ENOTSUP)
						nowritecache = 1;
					else
						err(EX_IOERR, "Flush error");
				}
			}
			N += 250;
		} while (delta_t() < 1.0);
		TS(size, N);
	}
}

Add support for managing Shingled Magnetic Recording (SMR) drives.
This change includes support for SCSI SMR drives (which conform to the
Zoned Block Commands or ZBC spec) and ATA SMR drives (which conform to
the Zoned ATA Command Set or ZAC spec) behind SAS expanders.
This includes full management support through the GEOM BIO interface, and
through a new userland utility, zonectl(8), and through camcontrol(8).
This is now ready for filesystems to use to detect and manage zoned drives.
(There is no work in progress that I know of to use this for ZFS or UFS, if
anyone is interested, let me know and I may have some suggestions.)
Also, improve ATA command passthrough and dispatch support, both via ATA
and ATA passthrough over SCSI.
Also, add support to camcontrol(8) for the ATA Extended Power Conditions
feature set. You can now manage ATA device power states, and set various
idle time thresholds for a drive to enter lower power states.
Note that this change cannot be MFCed in full, because it depends on
changes to the struct bio API that break compatibility. In order to
avoid breaking the stable API, only changes that don't touch or depend on
the struct bio changes can be merged. For example, the camcontrol(8)
changes don't depend on the new bio API, but zonectl(8) and the probe
changes to the da(4) and ada(4) drivers do depend on it.
Also note that the SMR changes have not yet been tested with an actual
SCSI ZBC device, or a SCSI to ATA translation layer (SAT) that supports
ZBC to ZAC translation. I have not yet gotten a suitable drive or SAT
layer, so any testing help would be appreciated. These changes have been
tested with Seagate Host Aware SATA drives attached to both SAS and SATA
controllers. Also, I do not have any SATA Host Managed devices, and I
suspect that it may take additional (hopefully minor) changes to support
them.
Thanks to Seagate for supplying the test hardware and answering questions.
sbin/camcontrol/Makefile:
Add epc.c and zone.c.
sbin/camcontrol/camcontrol.8:
Document the zone and epc subcommands.
sbin/camcontrol/camcontrol.c:
Add the zone and epc subcommands.
Add auxiliary register support to build_ata_cmd(). Make sure to
set the CAM_ATAIO_NEEDRESULT, CAM_ATAIO_DMA, and CAM_ATAIO_FPDMA
flags as appropriate for ATA commands.
Add a new get_ata_status() function to parse ATA result from SCSI
sense descriptors (for ATA passthrough over SCSI) and ATA I/O
requests.
sbin/camcontrol/camcontrol.h:
Update the build_ata_cmd() prototype
Add get_ata_status(), zone(), and epc().
sbin/camcontrol/epc.c:
Support for ATA Extended Power Conditions features. This includes
support for all features documented in the ACS-4 Revision 12
specification from t13.org (dated February 18, 2016).
The EPC feature set allows putting a drive into a lower power mode
immediately, or setting timeouts so that the drive will
automatically enter progressively lower power states after various
idle times.
sbin/camcontrol/fwdownload.c:
Update the firmware download code for the new build_ata_cmd()
arguments.
sbin/camcontrol/zone.c:
Implement support for Shingled Magnetic Recording (SMR) drives
via SCSI Zoned Block Commands (ZBC) and ATA Zoned Device ATA
Command Set (ZAC).
These specs were developed in concert, and are functionally
identical. The primary differences are due to SCSI and ATA
differences. (SCSI is big endian, ATA is little endian, for
example.)
This includes support for all commands defined in the ZBC and
ZAC specs.
sys/cam/ata/ata_all.c:
Decode a number of additional ATA command names in ata_op_string().
Add a new CCB building function, ata_read_log().
Add ata_zac_mgmt_in() and ata_zac_mgmt_out() CCB building
functions. These support both DMA and NCQ encapsulation.
sys/cam/ata/ata_all.h:
Add prototypes for ata_read_log(), ata_zac_mgmt_out(), and
ata_zac_mgmt_in().
sys/cam/ata/ata_da.c:
Revamp the ada(4) driver to support zoned devices.
Add four new probe states to gather information needed for zone
support.
Add a new adasetflags() function to avoid duplication of large
blocks of flag setting between the async handler and register
functions.
Add new sysctl variables that describe zone support and parameters.
Add support for the new BIO_ZONE bio, and all of its subcommands:
DISK_ZONE_OPEN, DISK_ZONE_CLOSE, DISK_ZONE_FINISH, DISK_ZONE_RWP,
DISK_ZONE_REPORT_ZONES, and DISK_ZONE_GET_PARAMS.
sys/cam/scsi/scsi_all.c:
Add command descriptions for the ZBC IN/OUT commands.
Add descriptions for ZBC Host Managed devices.
Add a new function, scsi_ata_pass() to do ATA passthrough over
SCSI. This will eventually replace scsi_ata_pass_16() -- it
can create the 12, 16, and 32-byte variants of the ATA
PASS-THROUGH command, and supports setting all of the
registers defined as of SAT-4, Revision 5 (March 11, 2016).
Change scsi_ata_identify() to use scsi_ata_pass() instead of
scsi_ata_pass_16().
Add a new scsi_ata_read_log() function to facilitate reading
ATA logs via SCSI.
sys/cam/scsi/scsi_all.h:
Add the new ATA PASS-THROUGH(32) command CDB. Add extended and
variable CDB opcodes.
Add Zoned Block Device Characteristics VPD page.
Add ATA Return SCSI sense descriptor.
Add prototypes for scsi_ata_read_log() and scsi_ata_pass().
sys/cam/scsi/scsi_da.c:
Revamp the da(4) driver to support zoned devices.
Add five new probe states, four of which are needed for ATA
devices.
Add five new sysctl variables that describe zone support and
parameters.
The da(4) driver supports SCSI ZBC devices, as well as ATA ZAC
devices when they are attached via a SCSI to ATA Translation (SAT)
layer. Since ZBC -> ZAC translation is a new feature in the T10
SAT-4 spec, most SATA drives will be supported via ATA commands
sent via the SCSI ATA PASS-THROUGH command. The da(4) driver will
prefer the ZBC interface, if it is available, for performance
reasons, but will use the ATA PASS-THROUGH interface to the ZAC
command set if the SAT layer doesn't support translation yet.
As I mentioned above, ZBC command support is untested.
Add support for the new BIO_ZONE bio, and all of its subcommands:
DISK_ZONE_OPEN, DISK_ZONE_CLOSE, DISK_ZONE_FINISH, DISK_ZONE_RWP,
DISK_ZONE_REPORT_ZONES, and DISK_ZONE_GET_PARAMS.
Add scsi_zbc_in() and scsi_zbc_out() CCB building functions.
Add scsi_ata_zac_mgmt_out() and scsi_ata_zac_mgmt_in() CCB/CDB
building functions. Note that these have return values, unlike
almost all other CCB building functions in CAM. The reason is
that they can fail, depending upon the particular combination
of input parameters. The primary failure case is if the user
wants NCQ, but fails to specify additional CDB storage. NCQ
requires using the 32-byte version of the SCSI ATA PASS-THROUGH
command, and the current CAM CDB size is 16 bytes.
sys/cam/scsi/scsi_da.h:
Add ZBC IN and ZBC OUT CDBs and opcodes.
Add SCSI Report Zones data structures.
Add scsi_zbc_in(), scsi_zbc_out(), scsi_ata_zac_mgmt_out(), and
scsi_ata_zac_mgmt_in() prototypes.
sys/dev/ahci/ahci.c:
Fix SEND / RECEIVE FPDMA QUEUED in the ahci(4) driver.
ahci_setup_fis() previously set the top bits of the sector count
register in the FIS to 0 for FPDMA commands. This is okay for
read and write, because the PRIO field is the only thing in
those bits, and we don't implement that further up the stack.
But, for SEND and RECEIVE FPDMA QUEUED, the subcommand is in that
byte, so it needs to be transmitted to the drive.
In ahci_setup_fis(), always set the top 8 bits of the
sector count register. We need it in both the standard
and NCQ / FPDMA cases.
sys/geom/eli/g_eli.c:
Pass BIO_ZONE commands through the GELI class.
sys/geom/geom.h:
Add g_io_zonecmd() prototype.
sys/geom/geom_dev.c:
Add new DIOCZONECMD ioctl, which allows sending zone commands to
disks.
sys/geom/geom_disk.c:
Add support for BIO_ZONE commands.
sys/geom/geom_disk.h:
Add a new flag, DISKFLAG_CANZONE, that indicates that a given
GEOM disk client can handle BIO_ZONE commands.
sys/geom/geom_io.c:
Add a new function, g_io_zonecmd(), that handles execution of
BIO_ZONE commands.
Add permissions check for BIO_ZONE commands.
Add command decoding for BIO_ZONE commands.
sys/geom/geom_subr.c:
Add DDB command decoding for BIO_ZONE commands.
sys/kern/subr_devstat.c:
Record statistics for REPORT ZONES commands. Note that the
number of bytes transferred for REPORT ZONES won't quite match
what is received from the hardware. This is because we're
necessarily counting bytes coming from the da(4) / ada(4) drivers,
which are using the disk_zone.h interface to communicate up
the stack. The structure sizes it uses are slightly different
than the SCSI and ATA structure sizes.
sys/sys/ata.h:
Add many bit and structure definitions for ZAC, NCQ, and EPC
command support.
sys/sys/bio.h:
Convert the bio_cmd field to a straight enumeration. This will
yield more space for additional commands in the future. After
change r297955 and other related changes, this is now possible.
Converting to an enumeration will also prevent use as a bitmask
in the future.
sys/sys/disk.h:
Define the DIOCZONECMD ioctl.
sys/sys/disk_zone.h:
Add a new API for managing zoned disks. This is very close to
the SCSI ZBC and ATA ZAC standards, but uses integers in native
byte order instead of big endian (SCSI) or little endian (ATA)
byte arrays.
This is intended to offer the complete feature set of the ZBC
and ZAC disk management without requiring the application developer
to include SCSI or ATA headers. We also use one set of headers
for ioctl consumers and kernel bio-level consumers.
sys/sys/param.h:
Bump __FreeBSD_version for sys/bio.h command changes, and inclusion
of SMR support.
usr.sbin/Makefile:
Add the zonectl utility.
usr.sbin/diskinfo/diskinfo.c
Add disk zoning capability to the 'diskinfo -v' output.
usr.sbin/zonectl/Makefile:
Add zonectl makefile.
usr.sbin/zonectl/zonectl.8
zonectl(8) man page.
usr.sbin/zonectl/zonectl.c
The zonectl(8) utility. This allows managing SCSI or ATA zoned
disks via the disk_zone.h API. You can report zones, reset write
pointers, get parameters, etc.
Sponsored by: Spectra Logic
Differential Revision: https://reviews.freebsd.org/D6147
Reviewed by: wblock (documentation)
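
/*
 * Descriptive note (added): zonecheck() below uses the DIOCZONECMD ioctl
 * described above with the DISK_ZONE_GET_PARAMS subcommand to fetch a
 * device's zoning mode and turn it into the short string reported in the
 * 'diskinfo -v' output.
 */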
static int
zonecheck(int fd, uint32_t *zone_mode, char *zone_str, size_t zone_str_len)
{
	struct disk_zone_args zone_args;
	int error;

	bzero(&zone_args, sizeof(zone_args));

	zone_args.zone_cmd = DISK_ZONE_GET_PARAMS;
	error = ioctl(fd, DIOCZONECMD, &zone_args);

	if (error == 0) {
		*zone_mode = zone_args.zone_params.disk_params.zone_mode;

		switch (*zone_mode) {
		case DISK_ZONE_MODE_NONE:
			snprintf(zone_str, zone_str_len, "Not_Zoned");
			break;
		case DISK_ZONE_MODE_HOST_AWARE:
			snprintf(zone_str, zone_str_len, "Host_Aware");
			break;
		case DISK_ZONE_MODE_DRIVE_MANAGED:
			snprintf(zone_str, zone_str_len, "Drive_Managed");
			break;
		case DISK_ZONE_MODE_HOST_MANAGED:
			snprintf(zone_str, zone_str_len, "Host_Managed");
			break;
		default:
			snprintf(zone_str, zone_str_len, "Unknown_zone_mode_%u",
			    *zone_mode);
			break;
		}
	}
	return (error);
}