This is preparation for the possibility of opening/closing media several times
per LUN life cycle. While here, rename variables to reduce confusion.
As an additional bonus, this allows opening read-only media, such as ZFS
snapshots.
It has nothing to share with the already-too-huge ctl.c other than the device
descriptor, but even that may be counted as a design error to be fixed later.
At some point we may even want to have several ioctl ports.
Its idea was to provide a simple initiator able to execute several commands
from the kernel level, but FreeBSD never had a consumer for that functionality,
while its implementation polluted many unrelated places.
Switch initiator IDs in target mode to the same address space as target
IDs in initiator mode -- an index into the port database instead of firmware
handles. This makes initiator IDs persist across role changes and firmware
resets, when the handles previously assigned by the firmware are lost and
reused.
Make the first step toward supporting the target and initiator roles at the
same time.
To avoid conflicts between target and initiator devices in CAM, make
CTL use the target ID reported by the HBA as its initiator_id in XPT_PATH_INQ.
That target ID is known to never be used for the initiator role, so it won't
conflict. For Fibre Channel and FireWire HBAs this specific ID choice
is irrelevant since all target IDs there are virtual. At the same time, for
SPI HBAs it may even be a requirement to use the same target ID for both the
initiator and target roles.
While there are some more things to polish in the isp(4) driver, first tests
of using both roles at the same time on the same port appeared successful:
# camcontrol devlist -v
scbus0 on isp0 bus 0:
<FREEBSD CTLDISK 0001> at scbus0 target 1 lun 0 (da20,pass21)
<> at scbus0 target 256 lun 0 (ctl0)
<> at scbus0 target -1 lun ffffffff (ctl1)
FreeBSD has never had a limitation on the number of target IDs, and there is
no other requirement to allocate them densely. Since the slots of the port
database are already populated sequentially, there is not much need for
another indirection just to allocate target IDs sequentially as well.
Add a new sesutil(8) utility
This is a utility for managing SCSI Enclosure Services (SES) devices.
For now only one command is supported, "locate", which changes the state of the
external LED associated with a given disk.
Usage is the following:
sesutil locate disk [on|off]
Disk can be a device name, such as "da12", or the special keyword "all".
Reviewed by: mav
Relnotes: yes
Sponsored by: gandi.net
Use the C99 flexible array construct to denote a variable amount of
data rather than the old-school [1] construct. We have required C99
compilers for some time.
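For illustration, a minimal before/after sketch of the idiom (the structure
and field names here are hypothetical, not the ones touched by this change):

#include <stdint.h>
#include <stdlib.h>

struct report {
	uint32_t count;
	/* Old school: uint8_t data[1]; sized as sizeof(struct report) - 1 + n. */
	uint8_t  data[];		/* C99 flexible array member */
};

static struct report *
report_alloc(uint32_t n)
{
	/* No "- 1" fudging needed to cancel out a dummy trailing element. */
	struct report *r = malloc(sizeof(*r) + n * sizeof(r->data[0]));

	if (r != NULL)
		r->count = n;
	return (r);
}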
In the current implementation, the isp_kthread() threads never exit.
The target-mode threads started from isp_attach() do have an exit path, but
it is not invoked from isp_detach().
Ensure isp_detach() notifies threads started for each channel, such
that they exit before their parent device softc detaches, and thus
before the module does. Otherwise, a page fault panic occurs later in:
sysctl_kern_proc
sysctl_out_proc
kern_proc_out
fill_kinfo_proc
fill_kinfo_thread
strlcpy(kp->ki_wmesg, td->td_wmesg, sizeof(kp->ki_wmesg));
For isp_kthread() (and isp(4) target threads), td->td_wmesg references
now-unmapped memory after the module has been unloaded. These threads
are typically msleep()ing at the time of unload, but they could also
attempt to execute now-unmapped code segments.
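A rough sketch of the shutdown handshake this implies (hypothetical driver and
field names, not the actual isp(4) code): detach sets a flag, wakes the thread,
and waits for an acknowledgement before the softc can go away.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/lock.h>
#include <sys/mutex.h>

struct ex_softc {
	struct mtx	mtx;
	int		flags;
#define	EX_SHUTDOWN	0x01
#define	EX_THREAD_GONE	0x02
};

static void
ex_kthread(void *arg)
{
	struct ex_softc *sc = arg;

	mtx_lock(&sc->mtx);
	while ((sc->flags & EX_SHUTDOWN) == 0) {
		/* ... per-channel work ... */
		msleep(sc, &sc->mtx, PRIBIO, "exwait", hz);
	}
	sc->flags |= EX_THREAD_GONE;
	wakeup(sc);			/* tell detach we are on our way out */
	mtx_unlock(&sc->mtx);
	kproc_exit(0);			/* never touches sc or module text again */
}

static void
ex_stop_threads(struct ex_softc *sc)
{
	mtx_lock(&sc->mtx);
	sc->flags |= EX_SHUTDOWN;
	wakeup(sc);
	while ((sc->flags & EX_THREAD_GONE) == 0)
		msleep(sc, &sc->mtx, PRIBIO, "exjoin", 0);
	mtx_unlock(&sc->mtx);
	/* Only now is it safe to free the softc and unload the module. */
}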
Initialize async_arg_ptr in xpt_async when called with async_code
AC_ADVINFO_CHANGED.
Without this change, newly inserted hard disks won't always have their
physical path device nodes created. The problem reproduces most readily
when attaching a large number of disks at once.
There are four places, all in cam_xpt.c, where ccbs are malloc'ed. Two of
these use M_ZERO, two don't. The two that didn't meant that the allocated ccbs
had trash in them, making it hard to debug errors when they showed up. Due
to this, use M_ZERO all the time when allocating ccbs.
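A minimal sketch of the resulting allocation pattern (the malloc type below is
a stand-in for illustration, not necessarily the one cam_xpt.c uses):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <cam/cam.h>
#include <cam/cam_ccb.h>

static MALLOC_DEFINE(M_EXAMPLECCB, "exampleccb", "example CCB allocations");

/* Zero the ccb at allocation time so stale heap contents never show up as
 * plausible-looking CCB fields while debugging. */
static union ccb *
example_alloc_ccb(void)
{
	return (malloc(sizeof(union ccb), M_EXAMPLECCB, M_ZERO | M_WAITOK));
}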
Restore the CAM XPT peripheral generation counter, and export it via sysctl.
Define it as an atomic uint32_t. These increments happen infrequently
enough that the atomic overhead is not a problem, and since they're now
independent atomics, they won't contend with xpt_lock_buses().
This counter is useful as a means of cheaply identifying whether any changes
have been made to the CAM peripheral list. Userland programs have no guarantee
that the counter won't change while it is being returned or while they process
the information, so they must be written accordingly.
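A stand-alone sketch of the pattern (the names and sysctl location below are
hypothetical, not the OIDs this change adds):

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <machine/atomic.h>

/* Generation counter: bumped whenever the peripheral list changes. */
static u_int example_periph_generation;

SYSCTL_UINT(_kern, OID_AUTO, example_periph_generation, CTLFLAG_RD,
    &example_periph_generation, 0,
    "Incremented on every (example) peripheral list insertion or removal");

static void
example_periph_list_changed(void)
{
	/*
	 * Independent atomic increment: no shared lock is taken, so this
	 * cannot contend with xpt_lock_buses()-style list locking.
	 */
	atomic_add_int(&example_periph_generation, 1);
}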
Explain a bit of tricky code dealing with trims and how it prevents
starvation. These side effects aren't obvious without extremely careful
study, and it is important that they be done just so.
cddl/contrib/opensolaris/uts/common/fs/zfs/bqueue.c wasn't added to
conf/files in r286705 and was fixed by r286718. The broken
version was MFCed by r288571.
Forgotten by: mav (again)
illumos/illumos-gate/commit/c546f36aa898d913ff77674fb5ff97f15b2e08b4
https://www.illumos.org/issues/6220
5408 introduced a memleak in l2arc, namely the member b_thawed gets leaked
when an arc_hdr is realloced from full to l2only.
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Simon Klinkert <simon.klinkert@gmail.com>
Reviewed by: George Wilson <george@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Arne Jansen <sensille@gmx.net>
(userland portion that was not merged in r286677)
Update zdb to also print ltime, type, and level information
for these new style holes. Previously, only the logical birth
time would be printed.
6214 zpools going south
In r286570 (MFV of r277426) an unprotected write to b_flags to
set the compression mode was introduced. This would open a race
window where data is partially decompressed, modified, checksummed
and written to the pool, resulting in pool corruption due to the
partial decompression.
Prevent this by reintroducing b_compress.
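To illustrate the shape of the race (hypothetical structures, not the real ARC
header layout):

#include <stdint.h>

#define	HDR_COMP_SHIFT	24
#define	HDR_COMP_MASK	(0xffu << HDR_COMP_SHIFT)

struct hdr {
	uint32_t flags;		/* shared word that also holds unrelated state */
	uint8_t	 compress;	/* dedicated field, changed only under the lock */
};

/*
 * Racy: an unlocked read-modify-write of the shared flags word.  A thread
 * decompressing concurrently can see the new compression mode paired with
 * old data (or vice versa), checksum that mix, and write it to the pool.
 */
static void
set_compress_racy(struct hdr *h, uint32_t comp)
{
	h->flags = (h->flags & ~HDR_COMP_MASK) | (comp << HDR_COMP_SHIFT);
}

/*
 * In the spirit of reintroducing b_compress: keep the mode in its own field
 * and only update it while the header lock serializes readers.
 */
static void
set_compress_locked(struct hdr *h, uint8_t comp)
{
	/* caller holds the header lock */
	h->compress = comp;
}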
illumos/illumos-gate@d4cd038c92
Illumos issues:
6214 zpools going south
https://www.illumos.org/issues/6214
Rewrite the ZFS prefetch code to detect only forward, sequential
streams.
The following kstats have been added:
kstat.zfs.misc.arcstats.sync_wait_for_async
	How many sync reads have waited for an async read
	to complete. (less is better)
kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch
	How many demand reads didn't have to wait for I/O
	because of predictive prefetch. (more is better)
The zfetch kstats have been simplified to hits, misses, and max_streams,
with max_streams counting the times when we were not able to create a
new stream because we already had the maximum number of sequences
for a file.
The sysctl variable/loader tunable vfs.zfs.zfetch.block_cap has been
replaced by vfs.zfs.zfetch.max_distance, which controls the maximum number
of bytes to prefetch per stream.
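For example, a userland program can watch the new counters through sysctl(3);
this sketch assumes the usual FreeBSD convention of exporting the ZFS kstats
as 64-bit sysctl OIDs:

#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdint.h>
#include <stdio.h>

static uint64_t
read_kstat(const char *name)
{
	uint64_t val = 0;
	size_t len = sizeof(val);

	if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
		perror(name);
	return (val);
}

int
main(void)
{
	printf("sync reads that waited for an async read: %ju\n", (uintmax_t)
	    read_kstat("kstat.zfs.misc.arcstats.sync_wait_for_async"));
	printf("demand reads satisfied by prefetch: %ju\n", (uintmax_t)
	    read_kstat("kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch"));
	return (0);
}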
illumos/illumos-gate@cf6106c8a0
Illumos ZFS issues:
5987 zfs prefetch code needs work
https://www.illumos.org/issues/5987