Make the backends' data_submit method support not only read and write requests,
but also two new ones: verify and compare. Verify just checks readability
of the data at the specified location without transferring it anywhere.
Compare reads the specified data and compares it to the received data,
returning an error if they differ.
The VERIFY(10/12/16) commands request either verify or compare from the
backend, depending on the BYTCHK CDB field. The COMPARE AND WRITE command is
executed in two stages: first it requests a compare, and then, if that
succeeds, requests a write. Atomicity of the operation is guaranteed by the
CTL request ordering code.
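For illustration, a rough sketch of how the BYTCHK field could be mapped to
the new backend requests (the enum and helper names below are hypothetical,
not the actual CTL symbols):

    #include <stdint.h>

    /* Hypothetical request kinds; the real CTL constants differ. */
    enum backend_req { BACKEND_READ, BACKEND_WRITE, BACKEND_VERIFY, BACKEND_COMPARE };

    /*
     * Sketch: map a VERIFY(10) CDB to a backend request.  BYTCHK is taken
     * from CDB byte 1 (a two-bit field in recent SBC-3 drafts); BYTCHK == 0
     * asks only for a media readability check (verify), while a non-zero
     * value asks for a comparison against the data received from the
     * initiator (compare).
     */
    static enum backend_req
    verify10_backend_request(const uint8_t cdb[10])
    {
            int bytchk = (cdb[1] >> 1) & 0x03;

            return (bytchk == 0 ? BACKEND_VERIFY : BACKEND_COMPARE);
    }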
MFC after: 2 weeks
Sponsored by: iXsystems, Inc.
trims to the device assumes the list is sorted. Don't apply the
optimization of skipping the sort for SSDs to the delete_queue, since doing
so causes more discard traffic to the drive. While
one could argue that the higher levels should coalesce the trims,
that's not done today, so some optimization at this level is needed.
CR: https://phabric.freebsd.org/D142
Make it really work for native FreeBSD programs. Before this it was broken
for years due to a different number of pointer dereferences in the Linux and
FreeBSD IOCTL paths, so it always returned errors to FreeBSD programs.
This change breaks the driver's FreeBSD IOCTL ABI, making it stricter,
but since it did not work anyway, nothing is lost.
Add shims for 32-bit programs on a 64-bit host, translating the argument
of the SG_IO IOCTL for both the FreeBSD and Linux ABIs.
With this change I was able to run 32-bit Linux sg3_utils tools and simple
32- and 64-bit FreeBSD test tools on both 32- and 64-bit FreeBSD systems.
MFC after: 1 month
Nobody has yet reported a disk supporting I/Os smaller than our MAXPHYS value,
but since we have code to read the Block Limits VPD page anyway, this is easy.
MFC after: 2 weeks
Instead of rereading VPD pages on every device open, do it only on initial
device probe and in cases when the device reported via a UNIT ATTENTION that
something has changed. Capacity is still reread on every open because
it is more critical for operation and more likely to change at run time.
In my tests with Intel 530 SSDs on an mps(4) HBA this change reduces the time
GEOM needs to retaste the device (which includes a few open/close cycles)
from ~150ms to ~30ms.
MFC after: 2 weeks
Sponsored by: iXsystems, Inc.
This code was heavily broken a few months ago during the CAM locking changes.
Fixing it would require an almost complete rewrite. Since there are no
known devices on the market using this interface newer than ~15 years, and
they are CD, not even DVD, I don't see much reason to rewrite it.
This change does not mean those devices won't work. They will just work
more slowly due to an inefficient disc load/unload schedule if several LUNs
are accessed at the same time.
Discussed with: ken@
Silence on: scsi@, hardware@
MFC after: 1 week
This patch adds support for three new SCSI commands: UNMAP, WRITE SAME(10)
and WRITE SAME(16). The WRITE SAME commands support both normal write mode
and the UNMAP flag. To properly report UNMAP capabilities this patch also adds
support for reporting two new VPD pages: Block Limits and Logical Block
Provisioning.
UNMAP support can be enabled per LUN by adding "-o unmap=on" to the `ctladm
create` command line or "option unmap on" to LUN sections of /etc/ctl.conf.
At this moment UNMAP is supported for ramdisks and device-backed block LUNs.
It was tested to work well with ZFS ZVOLs. For file-backed LUNs UNMAP
support is unfortunately missing due to the absence of a respective VFS KPI.
Reviewed by: ken
MFC after: 1 month
Sponsored by: iXsystems, Inc.
The latest draft of SBC-3 says: "A MAXIMUM UNMAP LBA COUNT field set to
a non-zero value indicates the maximum number of LBAs that may be unmapped
by an UNMAP command." To me that does not sound like the limit applies per
single descriptor, but rather to the whole command. And I have at least one
device that behaves exactly that way. This patch fixes the problem there.
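A minimal sketch of that interpretation (the structure and helper below are
illustrative, not the actual CTL code): the block counts of all descriptors in
a single UNMAP command are summed and checked against the reported limit,
rather than checking each descriptor separately.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct unmap_desc {             /* illustrative UNMAP block descriptor */
            uint64_t lba;
            uint32_t nblocks;
    };

    /* Reject the command if the total over all descriptors exceeds the limit. */
    static bool
    unmap_within_limit(const struct unmap_desc *d, size_t ndesc,
        uint64_t max_unmap_lba_count)
    {
            uint64_t total = 0;

            for (size_t i = 0; i < ndesc; i++)
                    total += d[i].nblocks;
            return (max_unmap_lba_count == 0 || total <= max_unmap_lba_count);
    }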
MFC after: 1 week
support all valid SAM-5 LUN IDs. CAM_VERSION is bumped, as the CAM ABI
(though not API) is changed. No behavior is changed relative to r257345
except that LUNs with non-zero high 32 bits will no longer be ignored
during device enumeration for SIMs that have set PIM_EXTLUNS.
Reviewed by: scottl
(like NAA assigned) and identify the same entity (like device or port).
Otherwise there can be false positives, since at least some models of
Seagate disks use the same IDs for the whole device and one of its ports.
MFC after: 2 weeks
the upper 32-bits of the LUN, if possible, into the target_lun field as
passed directly from the REPORT LUNs response. This allows extended LUN
support to work for all LUNs with zeros in the lower 32-bits, which covers
most addressing modes without breaking KBI. Behavior for drivers not
setting PIM_EXTLUNS is unchanged. No user-facing interfaces are modified.
Extended LUNs are stored with swizzled 16-bit word order so that, for
devices implementing LUN addressing (like SCSI-2), the numerical
representation of the LUN is identical with and without PIM_EXTLUNS. Thus
setting PIM_EXTLUNS keeps most behavior, and user-facing LUN IDs, unchanged.
This follows the strategy used in Solaris. A macro (CAM_EXTLUN_BYTE_SWIZZLE)
is provided to transform a lun_id_t into a uint64_t ordered for the wire.
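For reference, a sketch of what such a 16-bit word swizzle looks like (the
actual CAM_EXTLUN_BYTE_SWIZZLE macro lives in sys/cam/cam.h and may differ in
detail):

    #include <stdint.h>

    /*
     * Reverse the four 16-bit words of a 64-bit LUN so that a small,
     * SCSI-2 style LUN keeps the same numeric value whether or not
     * PIM_EXTLUNS is set, while the full 64-bit LUN can still be
     * recovered in wire order.
     */
    static uint64_t
    extlun_word_swizzle(uint64_t lun)
    {
            return (((lun & 0x000000000000ffffULL) << 48) |
                    ((lun & 0x00000000ffff0000ULL) << 16) |
                    ((lun & 0x0000ffff00000000ULL) >> 16) |
                    ((lun & 0xffff000000000000ULL) >> 48));
    }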
This is the second part of the work for full 64-bit extended LUN support and is
designed to be a bridge for stable/10 to the final 64-bit LUN code. The
third and final part will involve widening lun_id_t to 64 bits and will
not be MFCed. This third part will break the KBI but will keep the KPI
unchanged so that all drivers that will care about this can be updated now
and not require code changes between HEAD and stable/10.
Reviewed by: scottl
MFC after: 2 weeks
- Replace the ordered_tag_count counter with a single flag;
- From da remove the outstanding_cmds counter, which duplicated the
pending_ccbs list;
- From da_softc remove the unused links field.
information.
The existing algorithm selects a preferred leaf vdev based on the offset of the
zio request modulo the number of members in the mirror. It assumes the devices
are of equal performance and that spreading the requests randomly over both
drives will be sufficient to saturate them. In practice this results in the
leaf vdevs being under-utilized.
The new algorithm takes into account the following additional factors:
* Load of the vdevs (number of outstanding I/O requests)
* The locality of the last queued I/O vs. the new I/O request.
Within the locality calculation, additional knowledge about the underlying vdev
is considered, such as whether the device backing the vdev is rotating media.
This results in performance increases across the board as well as significant
increases for predominantly streaming loads and for configurations which don't
have evenly performing devices.
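A simplified sketch of the selection logic described above (structure and
names are illustrative, not the actual vdev_mirror.c code): each child gets a
score derived from its outstanding I/O count plus a seek penalty for rotating
media, and the child with the lowest score is preferred.

    #include <limits.h>
    #include <stdint.h>

    /* Illustrative per-child state; the real vdev_mirror.c structures differ. */
    struct mirror_child {
            int      load;          /* outstanding I/O requests */
            int      rotating;      /* backed by rotating media? */
            uint64_t last_offset;   /* end offset of the last queued I/O */
    };

    /*
     * Pick the child with the lowest score: each outstanding request adds a
     * per-device-type increment, and rotating devices pay an extra seek
     * penalty when the new request is not adjacent to the previous one.
     */
    static int
    mirror_pick_child(const struct mirror_child *c, int nchildren,
        uint64_t offset, int rot_inc, int rot_seek_inc, int nonrot_inc)
    {
            int best = 0, best_score = INT_MAX;

            for (int i = 0; i < nchildren; i++) {
                    int score = c[i].load *
                        (c[i].rotating ? rot_inc : nonrot_inc);

                    if (c[i].rotating && offset != c[i].last_offset)
                            score += rot_seek_inc;
                    if (score < best_score) {
                            best_score = score;
                            best = i;
                    }
            }
            return (best);
    }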
The following are results from a setup with a 3-way mirror of 2 x HDDs and
1 x SSD, from a basic test running multiple parallel dd's.
With pre-fetch disabled (vfs.zfs.prefetch_disable=1):
== Stripe Balanced (default) ==
Read 15360MB using bs: 1048576, readers: 3, took 161 seconds @ 95 MB/s
== Load Balanced (zfslinux) ==
Read 15360MB using bs: 1048576, readers: 3, took 297 seconds @ 51 MB/s
== Load Balanced (locality freebsd) ==
Read 15360MB using bs: 1048576, readers: 3, took 54 seconds @ 284 MB/s
With pre-fetch enabled (vfs.zfs.prefetch_disable=0):
== Stripe Balanced (default) ==
Read 15360MB using bs: 1048576, readers: 3, took 91 seconds @ 168 MB/s
== Load Balanced (zfslinux) ==
Read 15360MB using bs: 1048576, readers: 3, took 108 seconds @ 142 MB/s
== Load Balanced (locality freebsd) ==
Read 15360MB using bs: 1048576, readers: 3, took 48 seconds @ 320 MB/s
In addition to the performance changes the code was also restructured, with
the help of Justin Gibbs, to provide a more logical flow which also ensures
vdev loads are only calculated from the set of valid candidates.
The following additional sysctls were added to allow the administrator
to tune the behaviour of the load algorithm:
* vfs.zfs.vdev.mirror.rotating_inc
* vfs.zfs.vdev.mirror.rotating_seek_inc
* vfs.zfs.vdev.mirror.rotating_seek_offset
* vfs.zfs.vdev.mirror.non_rotating_inc
* vfs.zfs.vdev.mirror.non_rotating_seek_inc
These changes were based on work started by the zfsonlinux developers:
https://github.com/zfsonlinux/zfs/pull/1487
Reviewed by: gibbs, mav, will
MFC after: 2 weeks
Sponsored by: Multiplay
When the safety requirements are met, this allows I/O requests to be executed
directly in the caller's context instead of being passed to the GEOM
g_up/g_down threads. That avoids CPU bottlenecks in the g_up/g_down threads,
plus several context switches per I/O.
The safety requirements currently defined are:
- the caller should not hold any locks and should be reentrant;
- the callee should not depend on GEOM's dual-threaded concurrency semantics;
- on the way down, if the request is unmapped while the callee doesn't support
unmapped I/O, the context should be sleepable;
- kernel thread stack usage should be below 50%.
To keep compatibility with GEOM classes that do not meet the above
requirements, new provider and consumer flags were added:
- G_CF_DIRECT_SEND -- consumer code meets caller requirements (request);
- G_CF_DIRECT_RECEIVE -- consumer code meets callee requirements (done);
- G_PF_DIRECT_SEND -- provider code meets caller requirements (done);
- G_PF_DIRECT_RECEIVE -- provider code meets callee requirements (request).
A capable GEOM class can set them, allowing direct dispatch in cases where
it is safe. If any of the requirements are not met, the request is queued to
the g_up or g_down thread the same as before.
The following GEOM classes were reviewed and updated to support direct dispatch:
CONCAT, DEV, DISK, GATE, MD, MIRROR, MULTIPATH, NOP, PART, RAID, STRIPE,
VFS, ZERO, ZFS::VDEV, ZFS::ZVOL, all classes based on g_slice KPI (LABEL,
MAP, FLASHMAP, etc).
To declare direct completion capability, the disk(9) KPI got a new flag
equivalent to G_PF_DIRECT_SEND -- DISKFLAG_DIRECT_COMPLETION. The da(4) and
ada(4) disk drivers now set it thanks to the earlier CAM locking work.
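As a rough illustration (not a complete class or driver), a GEOM class sets
the provider/consumer flags on its topology elements, and a disk(9) driver
sets the equivalent flag on its struct disk; field names follow the public
geom(4) and disk(9) structures, everything else around them is omitted:

    #include <sys/param.h>
    #include <geom/geom.h>
    #include <geom/geom_disk.h>

    static void
    example_advertise_direct(struct g_provider *pp, struct g_consumer *cp,
        struct disk *dp)
    {
            /* GEOM provider/consumer side: safe for direct send and receive. */
            pp->flags |= G_PF_DIRECT_SEND | G_PF_DIRECT_RECEIVE;
            cp->flags |= G_CF_DIRECT_SEND | G_CF_DIRECT_RECEIVE;

            /* disk(9) side: the completion path is safe for direct dispatch. */
            dp->d_flags |= DISKFLAG_DIRECT_COMPLETION;
    }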
This change more than doubles peak block storage performance on systems with
many CPUs; together with the earlier CAM locking changes it reaches more than
1 million IOPS (512-byte raw reads from 16 SATA SSDs on 4 HBAs to
256 user-level threads).
Sponsored by: iXsystems, Inc.
MFC after: 2 months
reduce lock congestion and improve SMP scalability of the SCSI/ATA stack,
preparing the ground for the upcoming GEOM direct dispatch support.
Replace the big per-SIM locks with a bunch of smaller ones:
- per-LUN locks to protect device and peripheral drivers state;
- per-target locks to protect list of LUNs on target;
- per-bus locks to protect reference counting;
- per-send queue locks to protect queue of CCBs to be sent;
- per-done queue locks to protect queue of completed CCBs;
- remaining per-SIM locks now protect only HBA driver internals.
While holding the LUN lock it is allowed (though not recommended for
performance reasons) to take the SIM lock. The opposite acquisition order is
forbidden. All the other locks are leaf locks that can be taken anywhere, but
should not be cascaded. Many functions, such as xpt_action(), xpt_done(),
xpt_async(), xpt_create_path(), etc., no longer require (but still allow) the
SIM lock to be held.
To keep compatibility and to handle cases where the SIM lock can't be dropped,
all xpt_async() calls, in addition to xpt_done() calls, are queued to
completion threads for asynchronous processing in a clean environment without
the SIM lock held.
Instead of the single CAM SWI thread used for command completion processing
before, use multiple threads (depending on the number of CPUs), with load
balanced between them using a "hash" of the device B:T:L address.
HBA drivers that can drop the SIM lock during completion processing and have
a sufficient number of completion threads to efficiently scale to multiple
CPUs can use the new function xpt_done_direct() to avoid an extra context
switch. Make the ahci(4) driver use this mechanism depending on hardware setup.
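As a hedged sketch of the driver side (the structure and the "direct_ok" flag
here are made up; only xpt_done() and xpt_done_direct() are the real CAM
calls):

    #include <cam/cam.h>
    #include <cam/cam_ccb.h>
    #include <cam/cam_xpt_sim.h>

    /*
     * Sketch: completing a CCB from an HBA driver.  When the driver's
     * completion path runs without the SIM lock held, it may finish the
     * CCB in place with xpt_done_direct(); otherwise xpt_done() queues it
     * to one of the CAM completion threads.
     */
    static void
    example_complete_ccb(union ccb *ccb, int direct_ok)
    {
            ccb->ccb_h.status = CAM_REQ_CMP;
            if (direct_ok)
                    xpt_done_direct(ccb);   /* complete in caller context */
            else
                    xpt_done(ccb);          /* queue to a completion thread */
    }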
Sponsored by: iXsystems, Inc.
MFC after: 2 months
original, this hides the contents of cam_compat.h from ktrace/kdump/truss,
avoiding problems there. There are no user-serviceable parts in there, so
there is no need for those tools to be groping around in there.
Approved by: re
- Remove the timeout_ch field. It's been deprecated since FreeBSD 7.0;
MPSAFE drivers should be managing their own timeout storage. The
remaining non-MPSAFE drivers have been modified to also manage their own
storage, and should be considered for updating to MPSAFE (or removal)
during the FreeBSD 10.x lifecycle.
- Add fields related to soft timeouts and quality of service, to be used
in upcoming work.
- Add room for more flags in the CCB header and path_inq structures.
- Begin support for extended 64-bit LUNs.
- Bump the CAM version number to 0x18, but add compat shims. Tested with
camcontrol and smartctl.
Reviewed by: nathanw, ken, kib
Approved by: re
Obtained from: Netflix
functional state. While CTL is a much superior target in every respect,
there is no reason why this code should not work.
Tested with ahc(4) as the target-side HBA.
MFC after: 2 weeks
to 15 minutes, and 5 minutes for things like READ ELEMENT STATUS.
This is needed to account for the worst case scenarios on at least
some Spectra Logic tape libraries.
Sponsored by: Spectra Logic
MFC after: 3 days
notify (enable spinup) required", instead of doing the normal
retries, poll for a change in status.
We will poll every half second for a minute for the status to
change.
Hitachi drives (and likely other SAS drives) return that ASC/ASCQ
when they are waiting to spin up. What it means is that they are
waiting for the SAS expander to send them the SAS
NOTIFY (ENABLE SPINUP) primitive.
That primitive is the mechanism expanders/enclosures use to
sequence drive spinup to avoid overloading power supplies.
Sponsored by: Spectra Logic
MFC after: 3 days
configure sa(4) to request no I/O splitting by default.
For tape devices, the user needs to be able to clearly understand
what blocksize is actually being used when writing to a tape
device. The previous behavior of physio(9) was that it would split
up any I/O that was too large for the device, or too large to fit
into MAXPHYS. This means that if, for instance, the user wrote a
1MB block to a tape device, and MAXPHYS was 128KB, the 1MB write
would be split into 8 128K chunks. This would be done without
informing the user.
This has suboptimal effects, especially when trying to communicate
status to the user. In the event of an error writing to a tape
(e.g. physical end of tape) in the middle of a 1MB block that has
been split into 8 pieces, the user could have the first two 128K
pieces written successfully, the third returned with an error, and
the last 5 returned with 0 bytes written. If the user is using
a standard write(2) system call, all he will see is the ENOSPC
error. He won't have a clue how much actually got written. (With
a writev(2) system call, he should be able to determine how much
got written in addition to the error.)
The solution is to prevent physio(9) from splitting the I/O. The
new cdev flag, SI_NOSPLIT, tells physio that the driver does not
want I/O to be split before it is passed down.
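For illustration, setting the flag from a driver looks roughly like this (the
cdevsw and device name are placeholders; only SI_NOSPLIT itself is the real
interface):

    #include <sys/param.h>
    #include <sys/conf.h>

    static struct cdevsw example_cdevsw;    /* placeholder cdevsw */

    static void
    example_create_dev(void)
    {
            struct cdev *dev;

            dev = make_dev(&example_cdevsw, 0, UID_ROOT, GID_OPERATOR, 0600,
                "example0");
            dev->si_flags |= SI_NOSPLIT;    /* physio must not split our I/O */
    }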
Although the sa(4) driver now enables SI_NOSPLIT by default,
that can be disabled by two loader tunables for now. It will not
be configurable starting in FreeBSD 11.0. kern.cam.sa.allow_io_split
allows the user to configure I/O splitting for all sa(4) driver
instances. kern.cam.sa.%d.allow_io_split allows the user to
configure I/O splitting for a specific sa(4) instance.
There are also now three sa(4) driver sysctl variables that let the
users see some sa(4) driver values. kern.cam.sa.%d.allow_io_split
shows whether I/O splitting is turned on. kern.cam.sa.%d.maxio shows
the maximum I/O size allowed by kernel configuration parameters
(e.g. MAXPHYS, DFLTPHYS) and the capabilities of the controller.
kern.cam.sa.%d.cpi_maxio shows the maximum I/O size supported by
the controller.
Note that a better long-term solution would be to implement support
for chaining buffers, so that MAXPHYS is no longer a limiting
factor for I/O size to tape and disk devices. At that point, the
controller and the tape drive would become the limiting factors.
sys/conf.h: Add a new cdev flag, SI_NOSPLIT, that allows a
driver to tell physio not to split up I/O.
sys/param.h: Bump __FreeBSD_version to 1000049 for the addition
of the SI_NOSPLIT cdev flag.
kern_physio.c: If the SI_NOSPLIT flag is set on the cdev, return
EFBIG (File too large) for any I/O that is larger than
si_iosize_max or MAXPHYS, has more than one segment,
or would have to be split because of misalignment.
In the event of an error, print a console message to
give the user a clue about what happened.
scsi_sa.c: Set the SI_NOSPLIT cdev flag on the devices created
for the sa(4) driver by default.
Add tunables to control whether we allow I/O splitting
in physio(9).
Explain in the comments that allowing I/O splitting
will be deprecated for the sa(4) driver in FreeBSD
11.0.
Add sysctl variables to display the maximum I/O
size we can do (which could be further limited by
read block limits) and the maximum I/O size that
the controller can do.
Limit our maximum I/O size (recorded in the cdev's
si_iosize_max) by MAXPHYS. This isn't strictly
necessary, because physio(9) will limit it to
MAXPHYS, but it will provide some clarity for the
application.
Record the controller's maximum I/O size reported
in the Path Inquiry CCB.
sa.4: Document the block size behavior, and explain that
the option of allowing physio(9) to split the I/O
will disappear in FreeBSD 11.0.
Sponsored by: Spectra Logic
We now pay attention to the maxio field in the XPT_PATH_INQ CCB,
and if it is set, propagate it up to physio via the si_iosize_max
field in the cdev structure.
We also now pay attention to the PIM_UNMAPPED capability bit in the
XPT_PATH_INQ CCB, and set the new SI_UNMAPPED cdev flag when the
underlying SIM supports unmapped I/O.
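Roughly, this amounts to something like the following in the driver (a sketch
only; "cpi" is a filled-in XPT_PATH_INQ CCB, "dev" the driver's character
device, and the surrounding driver code is omitted):

    #include <sys/param.h>
    #include <sys/conf.h>
    #include <cam/cam.h>
    #include <cam/cam_ccb.h>

    static void
    example_apply_path_inq(struct cdev *dev, const struct ccb_pathinq *cpi)
    {
            /* Let physio(9) know the largest I/O the controller can take. */
            if (cpi->maxio != 0)
                    dev->si_iosize_max = cpi->maxio;

            /* Advertise unmapped I/O support if the SIM can handle it. */
            if (cpi->hba_misc & PIM_UNMAPPED)
                    dev->si_flags |= SI_UNMAPPED;
    }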
scsi_sa.c: Add unmapped I/O support and propagate the SIM's
maximum I/O size up.
Adjust scsi_tape_read_write() in the same way that
scsi_read_write() was changed to support unmapped
I/O. We overload the readop parameter with bits
that tell us whether it's an unmapped I/O, and we
need to set the CAM_DATA_BIO CCB flag. This change
should be backwards compatible in source and
binary forms.
MFC after: 1 week
Sponsored by: Spectra Logic
While these operations are not really needed otherwise, at least for SCSI
they may cause extra errors if some other initiator holds a write exclusive
reservation on the LUN (SYNCHRONIZE CACHE is handled as a "write" operation).