Commit Graph

179899 Commits

Author SHA1 Message Date
Jim Harris
3d7eb41c1b Just disable the controller instead of deleting IO queues during detach.
This is just as effective, and removes the need for a bunch of admin commands
to a controller that's going to be disabled shortly anyway.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:48:41 +00:00
Jim Harris
ec84ecbba0 Have nvd(4) register for controller notifications.
Also have nvd maintain controller/namespace relationships internally.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:45:37 +00:00
Jim Harris
74019d4b67 Set Pre-boot Software Load Count to 0 at the end of the controller
start process.

The spec indicates the OS driver should use Set Features (Software
Progress Marker) to set the pre-boot software load count to 0
after the OS driver has successfully been initialized.  This allows
pre-boot software to determine if there have been any issues with the
OS loading.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:42:53 +00:00
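
For illustration, a minimal sketch of building the Set Features (Software Progress Marker) command described above. The structure is abbreviated and the feature identifier is assumed from the NVMe specification; this is not a copy of the driver's code.

    #include <stdint.h>

    #define NVME_OPC_SET_FEATURES              0x09
    #define NVME_FEAT_SOFTWARE_PROGRESS_MARKER 0x80  /* assumed feature identifier */

    struct nvme_sp_command {          /* abbreviated admin command sketch */
            uint8_t  opc;             /* opcode */
            uint32_t cdw10;           /* feature identifier */
            uint32_t cdw11;           /* feature-specific: bits 7:0 = PBSLC */
    };

    /* Clear the pre-boot software load count once controller start succeeds. */
    static void
    build_clear_progress_marker(struct nvme_sp_command *cmd)
    {
            cmd->opc   = NVME_OPC_SET_FEATURES;
            cmd->cdw10 = NVME_FEAT_SOFTWARE_PROGRESS_MARKER;
            cmd->cdw11 = 0;           /* pre-boot software load count = 0 */
    }
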
Jim Harris
be34f21609 Remove the is_started flag from struct nvme_controller.
This flag was originally added to communicate to the sysctl code
which oids should be built, but there are easier ways to do this.  This
needs to be cleaned up prior to adding new controller states - for example,
controller failure.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:19:26 +00:00
Jim Harris
02e3348484 Ensure the controller's MDTS is accounted for in max_xfer_size.
The controller's IDENTIFY data contains MDTS (Max Data Transfer Size) to
allow the controller to specify the maximum I/O data transfer size.  nvme(4)
already provides a default maximum, but make sure it does not exceed what
MDTS reports.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:16:53 +00:00
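
For reference, MDTS in the IDENTIFY data is reported as a power of two in units of the minimum memory page size (CAP.MPSMIN), with 0 meaning no limit. A sketch of the clamping described above, with illustrative names:

    #include <stdint.h>

    /* Clamp the driver's default maximum transfer size to what MDTS allows.
     * mps_min is the minimum memory page size derived from CAP.MPSMIN. */
    static uint32_t
    nvme_max_xfer_size(uint32_t driver_default, uint8_t mdts, uint32_t mps_min)
    {
            uint32_t ctrlr_max;

            if (mdts == 0)
                    return (driver_default);        /* controller reports no limit */
            ctrlr_max = mps_min * (1u << mdts);
            return (ctrlr_max < driver_default ? ctrlr_max : driver_default);
    }
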
Jim Harris
cb5b7c1304 Cap the number of retry attempts to a configurable number. This ensures
that if a specific I/O repeatedly times out, we don't retry it indefinitely.

The default number of retries is 4, but it can be adjusted using the hw.nvme.retry_count tunable.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:14:51 +00:00
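
A minimal sketch of the retry cap, assuming a global retry count seeded from the hw.nvme.retry_count tunable; names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t nvme_retry_count = 4;   /* default; overridable at boot */

    /* Retry a failed or timed-out request only while it is both retriable
     * and still under the configured cap. */
    static bool
    nvme_should_retry(uint32_t retries_so_far, bool retriable)
    {
            return (retriable && retries_so_far < nvme_retry_count);
    }
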
Jim Harris
0d7e13ecb2 Pass associated log page data to async event consumers, if requested.
Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:08:32 +00:00
Jim Harris
2868353a57 When an asynchronous event request is completed, automatically fetch the
specified log page.

This satisfies the spec condition that future async events of the same type
will not be sent until the associated log page is fetched.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:05:15 +00:00
Jim Harris
0692579bf3 Add structure definitions and controller command function for firmware
log pages.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:03:03 +00:00
Jim Harris
0892778256 Add structure definitions and a controller command function for
error log pages.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:01:53 +00:00
Jim Harris
cf81529ce3 Create struct nvme_status.
NVMe error log entries include status, so breaking this out into
its own data structure allows it to be included in both the
nvme_completion data structure as well as error log entry data
structures.

While here, expose nvme_completion_is_error(), and change all of
the places that were explicitly looking at sc/sct bits to use this
macro instead.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:00:18 +00:00
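
A sketch of the split described above: the status word follows the NVMe completion-entry layout (phase tag, status code, status code type, more, do-not-retry), so it can be embedded in both completions and error log entries. Field and macro names below are illustrative.

    #include <stdint.h>

    struct nvme_status {
            uint16_t p    : 1;      /* phase tag */
            uint16_t sc   : 8;      /* status code */
            uint16_t sct  : 3;      /* status code type */
            uint16_t rsvd : 2;
            uint16_t m    : 1;      /* more */
            uint16_t dnr  : 1;      /* do not retry */
    };

    struct nvme_completion {
            /* ... command-specific result, sqid, cid, etc. ... */
            struct nvme_status status;
    };

    /* A completion is an error if either the status code or its type is non-zero. */
    #define nvme_completion_is_error(cpl) \
            ((cpl)->status.sc != 0 || (cpl)->status.sct != 0)
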
Jim Harris
f37c22a3bd Make nvme_ctrlr_reset a nop if a reset is already in progress.
This protects against cases where a controller crashes with multiple
I/O outstanding, each timing out and requesting controller resets
simultaneously.

While here, remove a debugging printf from a previous commit, and add
more logging around I/O that need to be resubmitted after a controller
reset.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 20:56:58 +00:00
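
A sketch of making the reset entry point a nop while another reset is pending, using a test-and-set flag; the kernel code would use the atomic(9) API on a per-controller field, so the names and primitive here are stand-ins.

    #include <stdatomic.h>

    static atomic_flag is_resetting = ATOMIC_FLAG_INIT;   /* per-controller in practice */

    static void
    nvme_ctrlr_reset(void)
    {
            /* Only the first caller proceeds; simultaneous timeout handlers
             * requesting a reset simply return. */
            if (atomic_flag_test_and_set(&is_resetting))
                    return;

            /* ... disable the controller, re-enable it, restart the queues ... */

            atomic_flag_clear(&is_resetting);
    }
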
Jim Harris
48ce317898 By default, always escalate to controller reset when an I/O times out.
While aborts are typically cleaner than a full controller reset, many times
an I/O timeout indicates other controller-level issues where aborts may not
work.  NVMe drivers for other operating systems are also defaulting to
controller reset rather than aborts for timed out I/O.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 20:32:57 +00:00
Ed Maste
a0dc59709c Always define and use PROGNAME
This avoids having separate cases in the install rule for PROGNAME set and
not set.  This is a minor cleanup in advance of further support for
standalone debug files.
2013-03-26 20:32:46 +00:00
Pedro F. Giffuni
f5678b698a Dtrace: dtrace.c erroneously checks for memory alignment on amd64.
Merge change from illumos:

3511 dtrace.c erroneously checks for memory alignment on amd64

Illumos Revision:	c93cc65

Reference:
https://www.illumos.org/issues/3511

Obtained from:	Illumos
MFC after:	3 weeks
2013-03-26 20:17:08 +00:00
Ed Maste
89ba026a9d Unconditionally include ${SRCCONF} if overridden
This avoids silently failing to include ${SRCCONF} specified by a make(1)
invocation.
2013-03-26 20:11:09 +00:00
Adrian Chadd
92e84e43a6 Implement the replacement EDMA FIFO code.
(Yes, the previous code temporarily broke EDMA TX. I'm sorry; I should've
actually set up ATH_BUF_FIFOEND on frames so txq->axq_fifo_depth was
cleared!)

This code implements a whole bunch of sorely needed EDMA TX improvements
along with CABQ TX support.

The specifics:

* When filling/refilling the FIFO, use the new TXQ staging queue
  for FIFO frames

* Tag frames with ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND correctly.
  For now the non-CABQ transmit path pushes one frame into the TXQ
  staging queue without setting up the intermediary link pointers
  to chain them together, so draining frames from the txq staging
  queue to the FIFO queue occurs one AMPDU / MPDU at a time.

* In the CABQ case, manually tag the list with ATH_BUF_FIFOPTR and
  ATH_BUF_FIFOEND so a chain of frames is pushed into the FIFO
  at once.

* Now that frames are in a FIFO pending queue, we can top up the
  FIFO after completing a single frame.  This means we can keep
  it filled rather than waiting for it to drain and _then_ adding
  more frames.

* The EDMA restart routine now walks the FIFO queue in the TXQ
  rather than the pending queue and re-initialises the FIFO with
  that.

* When restarting EDMA, we may have partially completed sending
  a list.  So stamp the first frame that we see in a list with
  ATH_BUF_FIFOPTR and push _that_ into the hardware.

* When completing frames, only check those on the FIFO queue.
  We should never ever queue frames from the pending queue
  direct to the hardware, so there's no point in checking.

* Until I figure out what's going on, make sure if the TXSTATUS
  for an empty queue pops up, complain loudly and continue.
  This will stop the panics that people are seeing.  I'll add
  some code later which will assist in ensuring I'm populating
  each descriptor with the correct queue ID.

* When considering whether to queue frames to the hardware queue
  directly or software queue frames, make sure the depth of
  the FIFO is taken into account now.

* When completing frames, tag them with ATH_BUF_BUSY if they're
  not the final frame in a FIFO list.  The same holding descriptor
  behaviour is required when handling descriptors linked together
  with a link pointer as the hardware will re-read the previous
  descriptor to refresh the link pointer before continuing.

* .. and if we complete the FIFO list (ie, the buffer has
  ATH_BUF_FIFOEND set), then we don't need the holding buffer
  any longer.  Thus, free it.

Tested:

* AR9380/AR9580, STA and hostap
* AR9280, STA/hostap

TODO:

* I don't yet trust that the EDMA restart routine is totally correct
  in all circumstances.  I'll continue to thrash this out under heavy
  multiple-TXQ traffic load and fix whatever pops up.
2013-03-26 20:04:45 +00:00
Jim Harris
941433323c Add a tunable for the I/O timeout interval. Default is still 30 seconds,
but can be adjusted between a min/max of 5 and 120 seconds.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 20:02:35 +00:00
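
A sketch of sanitizing the tunable to the documented 5-120 second range; the constant and tunable names are illustrative.

    #include <stdint.h>

    #define NVME_MIN_TIMEOUT_PERIOD     5u      /* seconds */
    #define NVME_MAX_TIMEOUT_PERIOD     120u    /* seconds */
    #define NVME_DEFAULT_TIMEOUT_PERIOD 30u     /* seconds, when no tunable is set */

    /* Clamp a user-supplied timeout (e.g. from a hw.nvme tunable). */
    static uint32_t
    nvme_clamp_timeout(uint32_t requested)
    {
            if (requested < NVME_MIN_TIMEOUT_PERIOD)
                    return (NVME_MIN_TIMEOUT_PERIOD);
            if (requested > NVME_MAX_TIMEOUT_PERIOD)
                    return (NVME_MAX_TIMEOUT_PERIOD);
            return (requested);
    }
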
Jim Harris
12d191ec12 Add handling for controller fatal status (csts.cfs).
On any I/O timeout, check for csts.cfs==1.  If set, the controller
is reporting fatal status and we reset the controller immediately,
rather than trying to abort the timed out command.

This changeset also includes deferring the controller start portion
of the reset to a separate task.  This ensures we are always performing
a controller start operation from a consistent context.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 19:58:17 +00:00
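
A sketch of the timeout-path check described above: read CSTS and, if the Controller Fatal Status bit is set, go straight to a controller reset instead of trying an ABORT. The register layout is abbreviated and the names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* Abbreviated sketch of the NVMe CSTS (Controller Status) register. */
    union nvme_csts {
            uint32_t raw;
            struct {
                    uint32_t rdy  : 1;      /* ready */
                    uint32_t cfs  : 1;      /* controller fatal status */
                    uint32_t rsvd : 30;     /* remaining fields omitted */
            } bits;
    };

    /* On an I/O timeout: reset immediately on fatal status, otherwise the
     * caller can fall back to aborting the timed-out command. */
    static bool
    nvme_timeout_requires_reset(uint32_t csts_raw)
    {
            union nvme_csts csts = { .raw = csts_raw };

            return (csts.bits.cfs != 0);
    }
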
Jim Harris
dbba74428b Add API for nvme consumers to access controller and namespace identify data.
Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 19:52:57 +00:00
Jim Harris
b846efd7ec Add controller reset capability to nvme(4) and the ability to explicitly
invoke it from nvmecontrol(8).

Controller reset will be performed in cases where I/O are repeatedly
timing out, the controller reports an unrecoverable condition, or
when explicitly requested via IOCTL or an nvme consumer.  Since the
controller may be in such a state where it cannot even process queue
deletion requests, we will perform a controller reset without trying
to clean up anything on the controller first.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 19:50:46 +00:00
Adrian Chadd
3feffbd796 Add per-TXQ EDMA FIFO staging queue support.
Each set of frames pushed into a FIFO is represented by a list of
ath_bufs - the first ath_buf in the FIFO list is marked with
ATH_BUF_FIFOPTR; the last ath_buf in the FIFO list is marked with
ATH_BUF_FIFOEND.

Multiple lists of frames are just glued together in the TAILQ as per
normal - except that at the end of a FIFO list, the descriptor link
pointer will be NULL and it'll be tagged with ATH_BUF_FIFOEND.

For non-EDMA chipsets this is a no-op - the ath_txq frame list (axq_q)
stays the same and is treated the same.

For EDMA chipsets the frames are pushed into axq_q and then when
the FIFO is to be (re) filled, frames will be moved onto the FIFO
queue and then pushed into the FIFO.

So:

* Add a new queue in each hardware TXQ (ath_txq) for staging FIFO frame
  lists.  It's a TAILQ (like the normal hardware frame queue) rather than
  the ath9k list-of-lists to represent FIFO entries.

* Add new ath_buf flags - ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND.

* When allocating ath_buf entries, clear out the flag value before
  returning it or it'll end up having stale flags.

* When cloning ath_buf entries, only clone ATH_BUF_MGMT.  Don't clone
  the FIFO related flags.

* Extend ath_tx_draintxq() to first drain the FIFO staging queue, _then_
  drain the normal hardware queue.

Tested:

* AR9280, hostap
* AR9280, STA
* AR9380/AR9580 - hostap

TODO:

* Test on other chipsets, just to be thorough.
2013-03-26 19:46:51 +00:00
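
A userland-style sketch of the tagging scheme described above: when a staged list is pushed into the FIFO, the first buffer is marked ATH_BUF_FIFOPTR and the last ATH_BUF_FIFOEND. The types are simplified stand-ins for ath_buf/ath_txq, not the driver's definitions.

    #include <sys/queue.h>
    #include <stddef.h>

    #define ATH_BUF_FIFOPTR 0x01    /* first buffer of a FIFO list */
    #define ATH_BUF_FIFOEND 0x02    /* last buffer of a FIFO list */

    struct ath_buf {
            TAILQ_ENTRY(ath_buf) bf_list;
            int bf_flags;
    };
    TAILQ_HEAD(ath_bufhead, ath_buf);

    /* Move a staged list onto the FIFO queue, tagging its first and last entries. */
    static void
    fifo_push_list(struct ath_bufhead *staging, struct ath_bufhead *fifo)
    {
            struct ath_buf *bf, *first, *last;

            first = TAILQ_FIRST(staging);
            if (first == NULL)
                    return;
            last = TAILQ_LAST(staging, ath_bufhead);
            first->bf_flags |= ATH_BUF_FIFOPTR;
            last->bf_flags |= ATH_BUF_FIFOEND;
            while ((bf = TAILQ_FIRST(staging)) != NULL) {
                    TAILQ_REMOVE(staging, bf, bf_list);
                    TAILQ_INSERT_TAIL(fifo, bf, bf_list);
            }
            /* ... then point the hardware FIFO at the first descriptor ... */
    }
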
Mark Johnston
8d7ad01f94 Invert the meaning of -S (added in r247405) and document its meaning. Also,
don't carp about the watchdog command taking too long until after the
watchdog has been patted, and don't carp via warnx(3) unless -S is set
since syslog(3) already logs to standard error otherwise.

Discussed with:	alfred
Reviewed by:	alfred
Approved by:	emaste (co-mentor)
2013-03-26 19:43:18 +00:00
Mark Johnston
7f7fc25b5a Make sure to set OBJS consistently in the cases where SRCS is and isn't
already defined. Setting it with "+=" makes it possible for other make
scripts (e.g. bsd.dtrace.mk) to include additional object files in the
linker arguments.

Approved by:	emaste (co-mentor)
2013-03-26 18:46:40 +00:00
Jim Harris
65c2474e6d Keep a doubly-linked list of outstanding trackers.
This enables in-order re-submission of I/O after a controller reset.

Sponsored by:	Intel
2013-03-26 18:45:16 +00:00
Jim Harris
5f1e251de6 Create a generic nvme_ctrlr_cmd_get_log_page function, and change the
health information log page function to use it.

Sponsored by:	Intel
2013-03-26 18:43:53 +00:00
Jim Harris
99d99f7408 Expose the get/set features API to nvme consumers.
Sponsored by:	Intel
2013-03-26 18:42:05 +00:00
Jim Harris
038a5ee403 Add an interface for nvme shim drivers (i.e. nvd) to register for
notifications when new nvme controllers are added to the system.

Sponsored by:	Intel
2013-03-26 18:39:54 +00:00
Jim Harris
0a0b08cc30 Enable asynchronous event requests on non-Chatham devices.
Also add logic to clean up all outstanding asynchronous event requests
when resetting or shutting down the controller, since these requests
will not be explicitly completed by the controller itself.

Sponsored by:	Intel
2013-03-26 18:37:36 +00:00
Jim Harris
990e741c18 Move controller destruction code from nvme_detach() to new nvme_ctrlr_destruct()
function.

Sponsored by:	Intel
2013-03-26 18:34:19 +00:00
Jim Harris
274b3a88fa Specify command timeout interval on a per-command type basis.
This is primarily driven by the need to disable timeouts for asynchronous
event requests, which by nature should not be timed out.

Sponsored by:	Intel
2013-03-26 18:31:46 +00:00
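
A minimal sketch of choosing a timeout per command type, where asynchronous event requests are the case that must never be timed out; names are illustrative.

    #include <stdint.h>

    enum nvme_req_type { NVME_REQ_IO, NVME_REQ_ADMIN, NVME_REQ_ASYNC_EVENT };

    #define NVME_TIMEOUT_DISABLED 0u

    /* Return the timeout (seconds) to arm for a request, or
     * NVME_TIMEOUT_DISABLED for requests that may stay outstanding forever. */
    static uint32_t
    nvme_request_timeout(enum nvme_req_type type, uint32_t configured_timeout)
    {
            if (type == NVME_REQ_ASYNC_EVENT)
                    return (NVME_TIMEOUT_DISABLED);
            return (configured_timeout);
    }
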
Jim Harris
879de69910 Explicitly abort a timed out command, if the ABORT command sent to the
controller indicates the command was not found.

Sponsored by:	Intel
2013-03-26 18:29:04 +00:00
Jim Harris
6cb0607039 Break out the code for completing an nvme_tracker object into a separate
function.

This allows for completions outside the normal completion path, for example
when an ABORT command fails due to the controller reporting the targeted
command does not exist.  This is mainly for protection against a faulty
controller, but we need to clean up our internal request nonetheless.

Sponsored by:	Intel
2013-03-26 18:27:22 +00:00
Jim Harris
448195e764 Add support for ABORT commands, including issuing these commands when
an I/O times out.

Also ensure that we retry commands that are aborted due to a timeout.

Sponsored by:	Intel
2013-03-26 18:23:35 +00:00
Jim Harris
d6f54866ea Add an internal _nvme_qpair_submit_request function, which performs
the submit action assuming the qpair lock has already been acquired.

Also change nvme_qpair_submit_request to just lock/unlock the mutex
around a call to this new function.

This fixes a recursive mutex acquisition in the retry path.

Sponsored by:	Intel
2013-03-26 18:20:11 +00:00
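
The locked/unlocked split described above is a standard way to avoid recursive mutex acquisition: an internal variant assumes the lock is held, and the public entry point only adds the locking. A sketch with a pthread mutex standing in for the kernel mtx(9) lock:

    #include <pthread.h>

    struct nvme_request;                    /* opaque for this sketch */

    struct nvme_qpair {
            pthread_mutex_t lock;           /* stand-in for the kernel mutex */
            /* ... submission queue state ... */
    };

    /* Internal variant: caller already holds qpair->lock.  The retry path,
     * which runs with the lock held, calls this directly. */
    static void
    _nvme_qpair_submit_request(struct nvme_qpair *qpair, struct nvme_request *req)
    {
            (void)qpair;
            (void)req;
            /* ... allocate a tracker, copy the command, ring the doorbell ... */
    }

    /* Public entry point: just wraps the internal variant with the lock. */
    static void
    nvme_qpair_submit_request(struct nvme_qpair *qpair, struct nvme_request *req)
    {
            pthread_mutex_lock(&qpair->lock);
            _nvme_qpair_submit_request(qpair, req);
            pthread_mutex_unlock(&qpair->lock);
    }
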
Jim Harris
aaf6b84a4e Make the DSM range count 0-based. Previously we were deallocating one more
LBA than we should have been.

Sponsored by:	Intel
2013-03-26 18:16:30 +00:00
Jim Harris
51a9feb9b0 Do not look at the namespace's thin provisioning field to determine if the DSM
command is supported.  The two are not related.

Sponsored by:	Intel
2013-03-26 18:01:24 +00:00
Alan Cox
3fc10b7363 Introduce vm_radix_isleaf() and use it in a couple places. As compared to
using vm_radix_node_page() == NULL, the compiler is able to generate one
less conditional branch when vm_radix_isleaf() is used.  More use cases
involving the inner loops of vm_radix_insert(), vm_radix_lookup{,_ge,_le}(),
and vm_radix_remove() will follow.

Reviewed by:	attilio
Sponsored by:	EMC / Isilon Storage Division
2013-03-26 17:30:40 +00:00
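
The saving described above comes from testing a tag bit in the node pointer itself rather than decoding the pointer first. A minimal sketch of that kind of leaf test; the tag value and names are illustrative, not the vm_radix encoding verbatim.

    #include <stdbool.h>
    #include <stdint.h>

    /* Leaf pointers are tagged by setting their low bit, which is always
     * clear in properly aligned internal-node pointers. */
    #define RADIX_LEAF_TAG 0x1u

    static inline bool
    radix_isleaf(const void *rnode)
    {
            return (((uintptr_t)rnode & RADIX_LEAF_TAG) != 0);
    }
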
Gleb Smirnoff
fa75f402ae Return ENOMEM if malloc() fails. 2013-03-26 14:08:14 +00:00
Gleb Smirnoff
a23a2dd138 Cleanup: wrap long lines, cleanup comments, etc. 2013-03-26 14:05:37 +00:00
Alexander Motin
7868ec506b geom_slice.c and its consumers like GEOM_LABEL are not touching the data
unless hotspots are used.  Pass the G_PF_ACCEPT_UNMAPPED flag through except in
such rare cases (obsolete GEOM_SUNLABEL and GEOM_BSD).
2013-03-26 07:55:24 +00:00
Alexander Motin
6c6e13b6e1 GEOM NOP does not touch the data, so pass G_PF_ACCEPT_UNMAPPED flag through. 2013-03-26 05:58:49 +00:00
Alexander Motin
a93c0ed463 Remove extra bio_data and bio_length copying to the child request after calling
g_clone_bio(), which already copied them.
2013-03-26 05:42:12 +00:00
Adrian Chadd
35bec3655e Remove the mcast path calls to ath_hal_gettxdesclinkptr() for axq_link -
they're no longer needed for the legacy path and they're not wanted
for the EDMA path.

Tested:

* AR9280, hostap + CABQ
* AR9380/AR9580, hostap + CABQ
2013-03-26 04:56:54 +00:00
Adrian Chadd
b708ea2941 Remove this dead code - it's no longer relevant (as yes, we do actually
support TX on EDMA chips.)
2013-03-26 04:53:40 +00:00
Adrian Chadd
b6ef0f8ac8 Convert the CABQ queue code over to use the HAL link pointer method
instead of axq_link.

This (among a bunch of uncommitted work) is required for EDMA chips
to correctly transmit frames on the CABQ.

Tested:

* AR9280, hostap mode
* AR9380/AR9580, hostap mode (staggered beacons)

TODO:

* This code only really gets called when burst beacons are used;
  it glues multiple CABQ queues together when sending to the hardware.
* More thorough bursted beacon testing! (first requires some work with
  the beacon queue code for bursted beacons, as that currently uses the
  link pointer and will fail on EDMA chips.)
2013-03-26 04:52:16 +00:00
Adrian Chadd
9e7259a2a3 Convert the EDMA multicast queue code over to use the HAL method to set
the descriptor link pointer, rather than directly.

This is needed on AR9380 and later (ie, EDMA) NICs so the multicast queue
has a chance in hell of being put together right.

Tested:

* AR9380, AR9580 in hostap mode, CABQ traffic (but with other patches..)
2013-03-26 04:48:58 +00:00
Adrian Chadd
0891354cd2 Migrate the multicast queue assembly code to not use the axq_link pointer
and instead use the HAL method to set the link pointer.

Tested:

* AR9280, hostap mode, CABQ frames being queued and transmitted
2013-03-26 04:47:40 +00:00
Alexander Kabaev
31932fae1e Do not pass unmapped buffers to drivers that cannot handle them
In physio, check whether the device can handle unmapped IO and pass an
appropriately mapped buffer to the driver strategy routine. The
only driver in the tree that can handle unmapped buffers is one
exposed by GEOM, so mark it as such with the new flag in the
driver cdevsw structure.

This fixes insta-panics on hosts running dconschat, as /dev/fwmem
is an example of a driver that makes use of the physio routine but
bypasses the g_down thread, where the buffer gets mapped normally.

Discussed with: kib (earlier version)
2013-03-26 01:17:06 +00:00
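
A sketch of the decision physio now makes, as described above: hand an unmapped buffer straight to the driver only if its cdevsw advertises support, and map it first otherwise. Flag and structure names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    #define D_UNMAPPED_IO 0x01u     /* sketch: driver accepts unmapped buffers */

    struct cdevsw_sketch {
            uint32_t d_flags;
            /* ... d_strategy and the other entry points ... */
    };

    /* True if the buffer must be mapped into the kernel address space before
     * the driver's strategy routine may see it. */
    static bool
    physio_needs_mapping(const struct cdevsw_sketch *csw, bool request_is_unmapped)
    {
            return (request_is_unmapped && (csw->d_flags & D_UNMAPPED_IO) == 0);
    }
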
Pedro F. Giffuni
5472787377 Dtrace: Add SUN MDB-like type-aware print() action.
Merge change from illumos:

1694 Add type-aware print() action

This is a very nice feature implemented in upstream Dtrace.
A complete description is available here:
http://dtrace.org/blogs/eschrock/2011/10/26/your-mdb-fell-into-my-dtrace/

This change bumps the DT_VERS_* number to 1.9.0 in
accordance with what is done in illumos.

While here, also include some minor cleanups to ease further merging
and appease clang with a fix by Fabian Keil.

Illumos Revisions:	13501:c3a7090dbc16
			13483:f413e6c5d297

Reference:
https://www.illumos.org/issues/1560
https://www.illumos.org/issues/1694

Tested by:	Fabian Keil
Obtained from:	Illumos
MFC after:	1 month
2013-03-25 20:38:09 +00:00