properly.
If there is garbage in the flags field, it may include a set
CDAI_FLAG_STORE flag, which can cause an error or even overwrite
the field that was intended to be read.
sys/cam/cam_ccb.h:
Add a new flag to the XPT_DEV_ADVINFO CCB, CDAI_FLAG_NONE,
that callers can use to set the flags field when no store
is desired (see the sketch after the file notes below).
sys/cam/scsi/scsi_enc_ses.c:
In ses_setphyspath_callback(), explicitly set the
XPT_DEV_ADVINFO flags to CDAI_FLAG_NONE when fetching the
physical path information. Instead of ORing in the
CDAI_FLAG_STORE flag when storing the physical path, set
the flags field to CDAI_FLAG_STORE.
sys/cam/scsi/scsi_sa.c:
Set the XPT_DEV_ADVINFO flags field to CDAI_FLAG_NONE when
fetching extended inquiry information.
sys/cam/scsi/scsi_da.c:
When storing extended READ CAPACITY information, set the
XPT_DEV_ADVINFO flags field to CDAI_FLAG_STORE instead of
ORing it into a field that isn't initialized.
sys/dev/mpr/mpr_sas.c,
sys/dev/mps/mps_sas.c:
When fetching extended READ CAPACITY information, set the
XPT_DEV_ADVINFO flags field to CDAI_FLAG_NONE instead of
setting it to 0.
sbin/camcontrol/camcontrol.c:
When fetching a device ID, set the XPT_DEV_ADVINFO flags
field to CDAI_FLAG_NONE instead of 0.
sys/sys/param.h:
Bump __FreeBSD_version to 1100061 for the new XPT_DEV_ADVINFO
CCB flag, CDAI_FLAG_NONE.
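A minimal sketch of the intended initialization, assuming an already
constructed CAM path and a caller-supplied buffer (buf/len); error
handling and CCB release are omitted:

    struct ccb_dev_advinfo cdai;

    /* Fetching: no store is wanted, so say so explicitly. */
    memset(&cdai, 0, sizeof(cdai));
    xpt_setup_ccb(&cdai.ccb_h, path, CAM_PRIORITY_NORMAL);
    cdai.ccb_h.func_code = XPT_DEV_ADVINFO;
    cdai.flags = CDAI_FLAG_NONE;
    cdai.buftype = CDAI_TYPE_PHYS_PATH;
    cdai.bufsiz = len;
    cdai.buf = buf;
    xpt_action((union ccb *)&cdai);

    /* Storing: assign CDAI_FLAG_STORE instead of ORing it into a
     * possibly uninitialized flags field. */
    cdai.flags = CDAI_FLAG_STORE;
    xpt_action((union ccb *)&cdai);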
Sponsored by: Spectra Logic
MFC after: 1 week
VMware returns BUSY status when storage has transient connectivity issues.
It is often better to wait and let the VM admin fix the problem than to crash.
Discussed with: ken
MFC after: 1 week
This includes a new summary mode (-s) for camcontrol defects that
quickly tells the user the most important thing: how many defects
are in the requested list. The actual location of the defects is
less important.
Modern drives frequently have more than the 8191 defects that can
be reported by the READ DEFECT DATA (10) command. If they don't
have that many grown defects, they certainly have more than 8191
defects in the primary (i.e. factory) defect list.
The READ DEFECT DATA (12) command allows for longer parameter
lists, as well as indexing into the list of defects, and so allows
reporting many more defects.
This has been tested with HGST and Seagate drives, but does not yet
fully work with Seagate drives. Once I have a Seagate spec, I may be
able to determine whether it can be made to work with them.
scsi_da.h: Add a definition for the new long block defect
format.
Add bit and mask definitions for the new extended
physical sector and bytes from index defect
formats.
Add a prototype for the new scsi_read_defects() CDB
building function.
scsi_da.c: Add a new scsi_read_defects() CDB building function.
camcontrol(8) was previously composing CDBs manually.
This is long overdue.
camcontrol.c: Revamp the camcontrol defects subcommand. We now
go through multiple stages in trying to get defect
data off the drive while avoiding various drive
firmware quirks.
We start off by requesting the defect header with
the 10 byte command. If we're in summary mode (-s)
and the drive reports fewer defects than can be
represented in the 10 byte header, we're done.
Otherwise, if the drive reports the maximum number
of defects, we know that we need to issue the
12 byte command.
If we're in summary mode, we're done if we get a
good response back when asking for the 12 byte header.
If the user has asked for the full list, then we
use the address descriptor index field in the 12
byte CDB to step through the list in 64K chunks.
64K is small enough to work with most any ancient
or modern SCSI controller. (A rough sketch of this
loop follows the file notes below.)
Add support for printing the new long block defect
format, as well as the extended physical sector and
bytes from index formats. I don't have any drives
that support the new formats.
Add a hexadecimal output format that can be turned
on with -X.
Add a quiet mode (-q) that can be turned on with
the summary mode (-s) to just print out a number.
Revamp the error detection and recovery code for
the defects command to work with HGST drives.
Call the new scsi_read_defects() CDB building
function instead of rolling the CDB ourselves.
Pay attention to the residual from the defect list
request when printing it out, so we don't run off
the end of the list.
Use the new scsi_nv library routines to convert
from strings to numbers and back.
camcontrol.8: Document the new defect formats (longblock, extbfi,
extphys) and command line options (-q, -s, -S and
-X) for the defects subcommand.
Explain a little more about what drives generally
do and don't support.
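A rough sketch of the chunked retrieval described above for
camcontrol.c. fetch_defect_chunk() and print_defect_descriptors() are
hypothetical helpers standing in for the cam_getccb(),
scsi_read_defects() and cam_send_ccb() sequence and the format-specific
printing that camcontrol actually performs; the descriptor size shown
assumes the bytes from index format:

    #define DEFECT_CHUNK    65536   /* 64K works with any SCSI controller */

    uint8_t buf[DEFECT_CHUNK];
    uint32_t index = 0;             /* address descriptor index, 12 byte CDB */
    uint32_t desc_size = 8;         /* bytes from index descriptor size */
    uint32_t valid;

    for (;;) {
            /* Hypothetical wrapper; returns bytes of valid defect data. */
            if (fetch_defect_chunk(device, index, buf, DEFECT_CHUNK,
                &valid) != 0)
                    errx(1, "error reading defect list");
            print_defect_descriptors(buf, valid);  /* honor the residual */
            if (valid < DEFECT_CHUNK)
                    break;          /* short chunk means end of list */
            index += valid / desc_size;
    }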
Sponsored by: Spectra Logic
MFC after: 1 week
None of the existing STEC devices need UNMAP or even support it well;
they have many limitations and sometimes even hang while executing
those commands. New devices that may use UNMAP are going to be
released under the HGST name.
MFC after: 3 days
These changes prevent sysctl(8) from returning proper output.
Observed failures include:
1) no output from sysctl(8)
2) ENOMEM erroneously returned with tools like truss(1)
or uname(1):
truss: can not get etype: Cannot allocate memory
there is an environment variable which shall initialize the SYSCTL
during early boot. This works for all SYSCTL types, both statically and
dynamically created ones, except for the SYSCTL NODE type and SYSCTLs
which belong to VNETs. A new flag, CTLFLAG_NOFETCH, has been added for
the case where a tunable sysctl has a custom initialization function,
allowing the sysctl to still be marked as a tunable. The kernel SYSCTL
API is mostly the same, with a few exceptions for some special
operations like iterating the children of a static/extern SYSCTL node.
This operation should probably be factored out into a common macro,
since some device drivers use it. The reason for changing the SYSCTL
API was the need for a SYSCTL parent OID pointer, and not only the
SYSCTL parent OID list pointer, in order to quickly generate the sysctl
path. The motivation behind this patch is to avoid parameter loading
kludges inside the OFED driver subsystem. Instead of adding special
code to the OFED driver subsystem to post-load tunables into
dynamically created sysctls, we generalize this in the kernel.
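For example, a driver knob that previously needed a separate TUNABLE
statement can now be declared once; hw.foo.enable and its variable are
made-up names used only for illustration:

    /* Before: the loader tunable and the sysctl were declared separately. */
    static int foo_enable = 1;
    TUNABLE_INT("hw.foo.enable", &foo_enable);
    SYSCTL_INT(_hw_foo, OID_AUTO, enable, CTLFLAG_RW,
        &foo_enable, 0, "Enable foo");

    /* After: one declaration; the hw.foo.enable loader environment
     * variable is fetched automatically during early boot.  Add
     * CTLFLAG_NOFETCH when a custom initialization function handles
     * the tunable itself but the OID should still be marked tunable. */
    SYSCTL_INT(_hw_foo, OID_AUTO, enable, CTLFLAG_RWTUN,
        &foo_enable, 0, "Enable foo");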
Other changes:
- Corrected a possibly incorrect sysctl name from "hw.cbb.intr_mask"
to "hw.pcic.intr_mask".
- Removed redundant TUNABLE statements throughout the kernel.
- Some minor code rewrites in connection with removing unneeded
TUNABLE statements.
- Added a missing SYSCTL_DECL().
- Wrapped two very long lines.
- Avoid malloc()/free() inside sysctl string handling, in case it is
called to initialize a sysctl from a tunable, because malloc()/free()
are not ready when sysctls from the sysctl dataset are registered.
- Bumped FreeBSD version to indicate SYSCTL API change.
MFC after: 2 weeks
Sponsored by: Mellanox Technologies
trims to the device assumes the list is sorted. Don't apply the SSD
optimization of leaving the queue unsorted to the delete_queue, since
doing so causes more discard traffic to the drive. While one could
argue that the higher levels should coalesce the trims, that's not
done today, so some optimization at this level is needed.
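Roughly, in dastrategy() (a sketch only; the unsorted-I/O condition
shown stands in for the actual SSD/sort_io_queue test):

    if (bp->bio_cmd == BIO_DELETE) {
            /* Keep deletes sorted so contiguous ranges can be
             * coalesced into a single TRIM/UNMAP. */
            bioq_disksort(&softc->delete_queue, bp);
    } else if (unsorted_io_ok) {            /* illustrative condition */
            bioq_insert_tail(&softc->bio_queue, bp);
    } else {
            bioq_disksort(&softc->bio_queue, bp);
    }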
CR: https://phabric.freebsd.org/D142
Nobody has yet reported a disk supporting I/Os smaller than our MAXPHYS
value, but since we already have code to read the Block Limits VPD
page, honoring the reported limit is easy.
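A minimal sketch of the clamp, assuming the Block Limits VPD data has
already been fetched (field names per sys/cam/scsi/scsi_all.h;
surrounding probe logic omitted):

    uint32_t max_xfer_blocks, max_xfer_bytes;

    max_xfer_blocks = scsi_4btoul(block_limits->max_txfer_len);
    if (max_xfer_blocks != 0) {
            max_xfer_bytes = max_xfer_blocks * softc->params.secsize;
            softc->disk->d_maxsize = MIN(max_xfer_bytes, MAXPHYS);
    } else {
            softc->disk->d_maxsize = MAXPHYS;  /* no limit reported */
    }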
MFC after: 2 weeks
Instead of rereading VPD pages on every device open, do it only on the
initial device probe, and when the device has reported via UNIT
ATTENTION that something has changed. Capacity is still reread on
every open because it is more critical for operation and more likely
to change at run time.
In my tests with Intel 530 SSDs on an mps(4) HBA, this change reduces
the time GEOM needs to retaste the device (which includes a few
open/close cycles) from ~150ms to ~30ms.
MFC after: 2 weeks
Sponsored by: iXsystems, Inc.
The latest draft of SBC-3 says: "A MAXIMUM UNMAP LBA COUNT field set to
a non-zero value indicates the maximum number of LBAs that may be unmapped
by an UNMAP command." To me that does not sound like the limit applies
per descriptor, but rather to the command as a whole. I have at least
one device that behaves exactly that way, and this patch fixes the
problem there.
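A simplified sketch of applying the limit to the command as a whole
rather than to each descriptor (descriptor layout per scsi_da.h;
variable names are illustrative):

    struct scsi_unmap_desc *d = (struct scsi_unmap_desc *)&buf[8];
    uint64_t total = 0, c;
    int ranges = 0;

    while (count > 0 && ranges < max_ranges &&
        total < max_unmap_lba_cnt) {
            c = MIN(count, max_unmap_lba_cnt - total);
            scsi_u64to8b(lba, d[ranges].lba);
            scsi_ulto4b(c, d[ranges].length);
            lba += c;
            count -= c;
            total += c;
            ranges++;
    }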
MFC after: 1 week
- Replace the ordered_tag_count counter with a single flag;
- Remove the outstanding_cmds counter from da; it duplicates the
pending_ccbs list;
- Remove the unused links field from da_softc.
information.
The existing algorithm selects a preferred leaf vdev based on the
offset of the zio request modulo the number of members in the mirror.
It assumes the devices are of equal performance and that spreading the
requests randomly over the drives will be sufficient to saturate them.
In practice this results in the leaf vdevs being underutilized.
The new algorithm takes the following additional factors into account:
* Load of the vdevs (number of outstanding I/O requests)
* The locality of the last queued I/O vs the new I/O request.
Within the locality calculation, additional knowledge about the
underlying vdev is considered, such as whether the device backing the
vdev is a rotating media device.
This results in performance increases across the board as well as
significant increases for predominantly streaming loads and for
configurations which don't have evenly performing devices.
The following are results from a setup with a 3-way mirror of 2 x HDDs
and 1 x SSD, from a basic test running multiple parallel dd's.
With pre-fetch disabled (vfs.zfs.prefetch_disable=1):
== Stripe Balanced (default) ==
Read 15360MB using bs: 1048576, readers: 3, took 161 seconds @ 95 MB/s
== Load Balanced (zfslinux) ==
Read 15360MB using bs: 1048576, readers: 3, took 297 seconds @ 51 MB/s
== Load Balanced (locality freebsd) ==
Read 15360MB using bs: 1048576, readers: 3, took 54 seconds @ 284 MB/s
With pre-fetch enabled (vfs.zfs.prefetch_disable=0):
== Stripe Balanced (default) ==
Read 15360MB using bs: 1048576, readers: 3, took 91 seconds @ 168 MB/s
== Load Balanced (zfslinux) ==
Read 15360MB using bs: 1048576, readers: 3, took 108 seconds @ 142 MB/s
== Load Balanced (locality freebsd) ==
Read 15360MB using bs: 1048576, readers: 3, took 48 seconds @ 320 MB/s
In addition to the performance changes, the code was also restructured,
with the help of Justin Gibbs, to provide a more logical flow which also
ensures vdev loads are only calculated from the set of valid candidates.
The following additional sysctls were added to allow the administrator
to tune the behaviour of the load algorithm (a rough sketch of how they
enter the calculation follows the list):
* vfs.zfs.vdev.mirror.rotating_inc
* vfs.zfs.vdev.mirror.rotating_seek_inc
* vfs.zfs.vdev.mirror.rotating_seek_offset
* vfs.zfs.vdev.mirror.non_rotating_inc
* vfs.zfs.vdev.mirror.non_rotating_seek_inc
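Roughly, the per-child load used to pick the read leaf has the
following shape (an illustrative sketch, not the exact vdev_mirror.c
code; helper and field names are made up, and the tunables correspond
to the sysctls above without their prefix):

    /* Lower load wins; start from the child's pending I/O count. */
    load = pending_io_count(mc);

    if (!vdev_is_rotating(mc)) {
            load += non_rotating_inc;
            if (mc->last_queued_offset != zio_offset)
                    load += non_rotating_seek_inc;
    } else if (mc->last_queued_offset == zio_offset) {
            /* Sequential with the last I/O queued to this child. */
            load += rotating_inc;
    } else if (ABS((int64_t)mc->last_queued_offset -
        (int64_t)zio_offset) < rotating_seek_offset) {
            /* Nearby on rotating media: reduced seek penalty. */
            load += rotating_inc + rotating_seek_inc / 2;
    } else {
            /* Far away on rotating media: full seek penalty. */
            load += rotating_inc + rotating_seek_inc;
    }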
These changes were based on work started by the zfsonlinux developers:
https://github.com/zfsonlinux/zfs/pull/1487
Reviewed by: gibbs, mav, will
MFC after: 2 weeks
Sponsored by: Multiplay
When the safety requirements are met, this allows I/O requests to
bypass the GEOM g_up/g_down threads and be executed directly in the
caller's context. That avoids CPU bottlenecks in the g_up/g_down
threads and saves several context switches per I/O.
The safety requirements now defined are:
- the caller should not hold any locks and should be reenterable;
- the callee should not depend on GEOM's dual-threaded concurrency
semantics;
- on the way down, if the request is unmapped but the callee doesn't
support unmapped I/O, the context should be sleepable;
- kernel thread stack usage should be below 50%.
To keep compatibility with GEOM classes not meeting the above
requirements, new provider and consumer flags have been added:
- G_CF_DIRECT_SEND -- consumer code meets caller requirements (request);
- G_CF_DIRECT_RECEIVE -- consumer code meets callee requirements (done);
- G_PF_DIRECT_SEND -- provider code meets caller requirements (done);
- G_PF_DIRECT_RECEIVE -- provider code meets callee requirements (request).
A capable GEOM class can set them, allowing direct dispatch in cases
where it is safe. If any of the requirements is not met, the request
is queued to the g_up or g_down thread as before.
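For instance, a class that meets the requirements marks its provider
and consumer like this (class-specific setup omitted):

    pp = g_new_providerf(gp, "%s", name);
    pp->flags |= G_PF_DIRECT_SEND | G_PF_DIRECT_RECEIVE;

    cp = g_new_consumer(gp);
    cp->flags |= G_CF_DIRECT_SEND | G_CF_DIRECT_RECEIVE;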
Such GEOM classes were reviewed and updated to support direct dispatch:
CONCAT, DEV, DISK, GATE, MD, MIRROR, MULTIPATH, NOP, PART, RAID, STRIPE,
VFS, ZERO, ZFS::VDEV, ZFS::ZVOL, all classes based on g_slice KPI (LABEL,
MAP, FLASHMAP, etc).
To declare direct completion capability, the disk(9) KPI got a new flag
equivalent to G_PF_DIRECT_SEND: DISKFLAG_DIRECT_COMPLETION. The da(4)
and ada(4) disk drivers now set it, thanks to the earlier CAM locking
work.
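In a disk driver this amounts to one line during attach (sketch):

    softc->disk->d_flags |= DISKFLAG_DIRECT_COMPLETION;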
This change more than doubles peak block storage performance on
systems with many CPUs; together with the earlier CAM locking changes
it reaches more than 1 million IOPS (512-byte raw reads from 16 SATA
SSDs on 4 HBAs to 256 user-level threads).
Sponsored by: iXsystems, Inc.
MFC after: 2 months
reduce lock congestion and improve SMP scalability of the SCSI/ATA
stack, preparing the ground for the upcoming GEOM direct dispatch
support.
Replace the big per-SIM locks with a bunch of smaller ones:
- per-LUN locks to protect device and peripheral drivers state;
- per-target locks to protect list of LUNs on target;
- per-bus locks to protect reference counting;
- per-send queue locks to protect queue of CCBs to be sent;
- per-done queue locks to protect queue of completed CCBs;
- remaining per-SIM locks now protect only HBA driver internals.
While holding a LUN lock it is allowed (though not recommended for
performance reasons) to take the SIM lock. The opposite acquisition
order is forbidden. All the other locks are leaf locks that can be
taken anywhere but should not be cascaded. Many functions, such as
xpt_action(), xpt_done(), xpt_async(), xpt_create_path(), etc., no
longer require (but still allow) the SIM lock to be held.
To keep compatibility and handle cases where the SIM lock can't be
dropped, all xpt_async() calls, in addition to xpt_done() calls, are
queued to completion threads for asynchronous processing in a clean
environment without the SIM lock held.
Instead of the single CAM SWI thread previously used for command
completion processing, multiple threads are now used (depending on the
number of CPUs). Load is balanced between them using a "hash" of the
device's B:T:L address.
HBA drivers that can drop the SIM lock during completion processing and
have a sufficient number of completion threads to scale efficiently to
multiple CPUs can use the new function xpt_done_direct() to avoid the
extra context switch.
Make the ahci(4) driver use this mechanism depending on the hardware
setup.
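A sketch of the choice in an HBA completion path; the condition shown
is illustrative, each driver decides based on its own locking:

    if (can_complete_directly)              /* illustrative condition */
            xpt_done_direct(ccb);   /* complete in the caller's context */
    else
            xpt_done(ccb);          /* queue to a CAM completion thread */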
Sponsored by: iXsystems, Inc.
MFC after: 2 months
While these operations are not really needed otherwise, at least for
SCSI they may cause extra errors if some other initiator holds a write
exclusive reservation on the LUN (SYNCHRONIZE CACHE is handled as a
"write" operation).
Ensure that d_delmaxsize is always set, removing the initialization to
0 which could cause future issues if use cases change.
Allow kern.cam.da.X.delete_max (which maps to d_delmaxsize) to be
increased back up to the calculated maximum after being reduced.
MFC after: 1 day
X-MFC-With: r249940
While GEOM in general has the provider open while sending BIO_GETATTR,
GEOM DISK does not really need to open the disk to read
medium-unrelated attributes for its own use.
Proposed by: ken
Re-ordered SSD quirks alphabetically so they are easier to maintain.
Removed my email and PR reference from comments on each quirk.
Added quirks for more SSDs (an example entry is sketched after the
list):
* Crucial M4
* Corsair Force GT
* Intel 520 Series
* Kingston E100 Series
* Samsung 830 Series
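Each of these is a quirk table entry of the usual form; the match
pattern below is only an example:

    {
            /* Crucial M4 SSDs: 4K optimised, TRIM works best when
             * requests are 4K sized and aligned. */
            { T_DIRECT, SIP_MEDIA_FIXED, "*", "M4-CT???M4SSD2*", "*" },
            /*quirks*/ ADA_Q_4K
    },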
Reviewed by: pjd (mentor)
Approved by: pjd (mentor)
MFC after: 1 week
This prevents users from selecting a delete method which may cause
corruption, e.g. WRITE SAME(16) via mps(4) on pre-P14 firmware.
Reviewed by: pjd (mentor)
Approved by: pjd (mentor)
MFC after: 2 days