Commit Graph

25652 Commits

Jim Harris
3d7eb41c1b Just disable the controller instead of deleting IO queues during detach.
This is just as effective, and removes the need for a bunch of admin commands
to a controller that's going to be disabled shortly anyway.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:48:41 +00:00
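
A minimal sketch of the resulting teardown, assuming the NVMe 1.0 register
layout (CC at offset 0x14, CSTS at 0x1c; EN and RDY are bit 0 of each).
The helper below is illustrative, not the driver's actual API:

    #include <stdint.h>

    #define NVME_CC_REG     (0x14 / 4)      /* Controller Configuration */
    #define NVME_CSTS_REG   (0x1c / 4)      /* Controller Status */
    #define NVME_CC_EN      0x1u
    #define NVME_CSTS_RDY   0x1u

    /*
     * Disable the controller (CC.EN = 0) and wait for CSTS.RDY to clear,
     * instead of issuing DELETE I/O SQ/CQ admin commands first.  A real
     * driver would bound the wait using the CAP.TO timeout field.
     */
    static void
    nvme_ctrlr_disable_sketch(volatile uint32_t *regs)
    {
            regs[NVME_CC_REG] &= ~NVME_CC_EN;
            while (regs[NVME_CSTS_REG] & NVME_CSTS_RDY)
                    continue;
    }
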
Jim Harris
ec84ecbba0 Have nvd(4) register for controller notifications.
Also have nvd maintain controller/namespace relationships internally.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:45:37 +00:00
Jim Harris
74019d4b67 Set Pre-boot Software Load Count to 0 at the end of the controller
start process.

The spec indicates the OS driver should use Set Features (Software
Progress Marker) to set the pre-boot software load count to 0
after the OS driver has successfully been initialized.  This allows
pre-boot software to determine if there have been any issues with the
OS loading.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:42:53 +00:00
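
A minimal sketch of the Set Features command this implies (per the NVMe
spec, Set Features is admin opcode 0x09, Software Progress Marker is
feature ID 0x80, and the Pre-boot Software Load Count sits in cdw11 bits
7:0; the command struct here is simplified):

    #include <stdint.h>

    #define NVME_OPC_SET_FEATURES           0x09
    #define NVME_FEAT_SOFTWARE_PROGRESS     0x80

    struct nvme_sf_cmd_sketch {
            uint8_t         opc;
            uint32_t        cdw10;
            uint32_t        cdw11;
    };

    /* Clear the Pre-boot Software Load Count once the driver is up. */
    static void
    nvme_clear_pbslc_sketch(struct nvme_sf_cmd_sketch *cmd)
    {
            cmd->opc = NVME_OPC_SET_FEATURES;
            cmd->cdw10 = NVME_FEAT_SOFTWARE_PROGRESS;
            cmd->cdw11 = 0;         /* PBSLC = 0 */
    }
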
Jim Harris
be34f21609 Remove the is_started flag from struct nvme_controller.
This flag was originally added to communicate to the sysctl code
which oids should be built, but there are easier ways to do this.  This
needs to be cleaned up prior to adding new controller states - for example,
controller failure.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:19:26 +00:00
Jim Harris
02e3348484 Ensure the controller's MDTS is accounted for in max_xfer_size.
The controller's IDENTIFY data contains MDTS (Max Data Transfer Size) to
allow the controller to specify the maximum I/O data transfer size.  nvme(4)
already provides a default maximum, but make sure it does not exceed what
MDTS reports.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:16:53 +00:00
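
The capping logic amounts to very little code; a minimal sketch (per the
spec, MDTS is a power of two in units of the minimum memory page size,
2^(12 + CAP.MPSMIN), and a value of 0 means the controller has no limit):

    #include <stdint.h>

    static uint32_t
    nvme_cap_max_xfer_sketch(uint32_t driver_max, uint32_t min_page_size,
        uint8_t mdts)
    {
            uint32_t ctrlr_max;

            if (mdts == 0)                  /* 0 == no controller limit */
                    return (driver_max);
            ctrlr_max = min_page_size << mdts;
            return (ctrlr_max < driver_max ? ctrlr_max : driver_max);
    }
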
Jim Harris
cb5b7c1304 Cap the number of retry attempts to a configurable number. This ensures
that if a specific I/O repeatedly times out, we don't retry it indefinitely.

The default number of retries is 4, but it can be adjusted via the hw.nvme.retry_count tunable.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:14:51 +00:00
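
A minimal sketch of the tunable hook plus the retry test, using the stock
TUNABLE_INT(9) mechanism of that era; the helper name is illustrative:

    #include <sys/param.h>
    #include <sys/kernel.h>         /* TUNABLE_INT() */

    static int nvme_retry_count = 4;        /* default number of retries */
    TUNABLE_INT("hw.nvme.retry_count", &nvme_retry_count);

    /*
     * Sketch of the retry decision in the failure path; "retries" would
     * live in the driver's per-request tracking structure.
     */
    static int
    nvme_should_retry_sketch(int retries)
    {
            return (retries < nvme_retry_count);
    }
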
Jim Harris
0d7e13ecb2 Pass associated log page data to async event consumers, if requested.
Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:08:32 +00:00
Jim Harris
2868353a57 When an asynchronous event request is completed, automatically fetch the
specified log page.

This satisfies the spec condition that future async events of the same type
will not be sent until the associated log page is fetched.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:05:15 +00:00
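
A minimal sketch of pulling the log page ID out of the AER completion
(per the spec, cdw0 carries the event type in bits 2:0, event info in
bits 15:8, and the associated log page in bits 23:16; the caller then
issues Get Log Page for it, which re-arms events of that type):

    #include <stdint.h>

    static uint8_t
    nvme_aer_log_page_sketch(uint32_t cdw0)
    {
            return ((cdw0 >> 16) & 0xff);   /* associated log page ID */
    }
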
Jim Harris
0692579bf3 Add structure definitions and a controller command function for firmware
log pages.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:03:03 +00:00
Jim Harris
0892778256 Add structure definitions and a controller command function for
error log pages.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:01:53 +00:00
Jim Harris
cf81529ce3 Create struct nvme_status.
NVMe error log entries include status, so breaking this out into
its own data structure allows it to be included in both the
nvme_completion data structure and error log entry data
structures.

While here, expose nvme_completion_is_error(), and change all of
the places that were explicitly looking at sc/sct bits to use this
macro instead.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 21:00:18 +00:00
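
The split looks roughly like this (bit widths follow the NVMe completion
queue entry's status field; the exact layout and naming in nvme.h may
differ):

    #include <stdint.h>

    struct nvme_status {
            uint16_t p      : 1;    /* phase tag */
            uint16_t sc     : 8;    /* status code */
            uint16_t sct    : 3;    /* status code type */
            uint16_t rsvd   : 2;
            uint16_t m      : 1;    /* more */
            uint16_t dnr    : 1;    /* do not retry */
    };

    /* Any non-zero status code or status code type means an error. */
    #define nvme_completion_is_error(cpl)   \
            ((cpl)->status.sc != 0 || (cpl)->status.sct != 0)
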
Jim Harris
f37c22a3bd Make nvme_ctrlr_reset a nop if a reset is already in progress.
This protects against cases where a controller crashes with multiple
I/O requests outstanding, each timing out and requesting a controller
reset simultaneously.

While here, remove a debugging printf from a previous commit, and add
more logging around I/O requests that need to be resubmitted after a
controller reset.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 20:56:58 +00:00
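
A minimal sketch of the guard, built on FreeBSD's atomic_cmpset_32();
the controller fields and the deferred reset task are illustrative:

    #include <sys/param.h>
    #include <sys/taskqueue.h>
    #include <machine/atomic.h>

    struct nvme_controller_sketch {
            volatile uint32_t       is_resetting;
            struct taskqueue        *taskqueue;
            struct task             reset_task;
    };

    /*
     * Only the first caller flips is_resetting 0 -> 1 and queues the
     * deferred reset; every concurrent timeout after that is a nop.
     */
    static void
    nvme_ctrlr_reset_sketch(struct nvme_controller_sketch *ctrlr)
    {
            if (atomic_cmpset_32(&ctrlr->is_resetting, 0, 1) == 0)
                    return;         /* a reset is already in progress */

            taskqueue_enqueue(ctrlr->taskqueue, &ctrlr->reset_task);
    }
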
Jim Harris
48ce317898 By default, always escalate to controller reset when an I/O times out.
While aborts are typically cleaner than a full controller reset, an I/O
timeout often indicates other controller-level issues where aborts may
not work.  NVMe drivers for other operating systems also default to
controller reset rather than aborts for timed-out I/O.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 20:32:57 +00:00
Adrian Chadd
92e84e43a6 Implement the replacement EDMA FIFO code.
(Yes, the previous code temporarily broke EDMA TX. I'm sorry; I should've
actually set up ATH_BUF_FIFOEND on frames so txq->axq_fifo_depth was
cleared!)

This code implements a whole bunch of sorely needed EDMA TX improvements
along with CABQ TX support.

The specifics:

* When filling/refilling the FIFO, use the new TXQ staging queue
  for FIFO frames

* Tag frames with ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND correctly.
  For now the non-CABQ transmit path pushes one frame into the TXQ
  staging queue without setting up the intermediary link pointers
  to chain them together, so draining frames from the txq staging
  queue to the FIFO queue occurs one A-MPDU / MPDU at a time.

* In the CABQ case, manually tag the list with ATH_BUF_FIFOPTR and
  ATH_BUF_FIFOEND so a chain of frames is pushed into the FIFO
  at once.

* Now that frames are in a FIFO pending queue, we can top up the
  FIFO after completing a single frame.  This means we can keep
  it filled rather than waiting for it to drain and _then_ adding
  more frames.

* The EDMA restart routine now walks the FIFO queue in the TXQ
  rather than the pending queue and re-initialises the FIFO with
  that.

* When restarting EDMA, we may have partially completed sending
  a list.  So stamp the first frame that we see in a list with
  ATH_BUF_FIFOPTR and push _that_ into the hardware.

* When completing frames, only check those on the FIFO queue.
  We should never ever queue frames from the pending queue
  directly to the hardware, so there's no point in checking.

* Until I figure out what's going on, make sure that if the TXSTATUS
  for an empty queue pops up, we complain loudly and continue.
  This will stop the panics that people are seeing.  I'll add
  some code later which will assist in ensuring I'm populating
  each descriptor with the correct queue ID.

* When considering whether to queue frames directly to the hardware
  queue or to software-queue them, make sure the depth of the FIFO
  is now taken into account.

* When completing frames, tag them with ATH_BUF_BUSY if they're
  not the final frame in a FIFO list.  The same holding-descriptor
  behaviour is required when handling descriptors linked together
  with a link pointer, as the hardware will re-read the previous
  descriptor to refresh the link pointer before continuing.

* .. and if we complete the FIFO list (ie, the buffer has
  ATH_BUF_FIFOEND set), then we don't need the holding buffer
  any longer.  Thus, free it.

Tested:

* AR9380/AR9580, STA and hostap
* AR9280, STA/hostap

TODO:

* I don't yet trust that the EDMA restart routine is totally correct
  in all circumstances.  I'll continue to thrash this out under heavy
  multiple-TXQ traffic load and fix whatever pops up.
2013-03-26 20:04:45 +00:00
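
A minimal sketch of the FIFOPTR/FIFOEND tagging described above (type and
field names follow the commit text; this is an illustrative fragment, not
the actual diff):

    static void
    ath_tx_fifo_tag_sketch(struct ath_txq *txq, struct ath_buf *first,
        struct ath_buf *last)
    {
            first->bf_flags |= ATH_BUF_FIFOPTR;     /* head of this FIFO slot */
            last->bf_flags |= ATH_BUF_FIFOEND;      /* tail of this FIFO slot */
            txq->axq_fifo_depth++;                  /* one more slot in use */
    }
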
Jim Harris
941433323c Add a tunable for the I/O timeout interval. Default is still 30 seconds,
but can be adjusted between a min/max of 5 and 120 seconds.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 20:02:35 +00:00
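
A minimal sketch of the clamp; the tunable name hw.nvme.timeout_period is
an assumption here, while the 5/120 second bounds are the ones documented
above:

    #include <sys/param.h>
    #include <sys/kernel.h>         /* TUNABLE_INT() */

    #define NVME_DEFAULT_TIMEOUT_PERIOD     30      /* seconds */
    #define NVME_MIN_TIMEOUT_PERIOD         5
    #define NVME_MAX_TIMEOUT_PERIOD         120

    static int nvme_timeout_period = NVME_DEFAULT_TIMEOUT_PERIOD;
    TUNABLE_INT("hw.nvme.timeout_period", &nvme_timeout_period);

    static int
    nvme_clamped_timeout_sketch(void)
    {
            int t = nvme_timeout_period;

            if (t < NVME_MIN_TIMEOUT_PERIOD)
                    t = NVME_MIN_TIMEOUT_PERIOD;
            if (t > NVME_MAX_TIMEOUT_PERIOD)
                    t = NVME_MAX_TIMEOUT_PERIOD;
            return (t);
    }
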
Jim Harris
12d191ec12 Add handling for controller fatal status (csts.cfs).
On any I/O timeout, check for csts.cfs==1.  If set, the controller
is reporting fatal status and we reset the controller immediately,
rather than trying to abort the timed-out command.

This changeset also includes deferring the controller start portion
of the reset to a separate task.  This ensures we are always performing
a controller start operation from a consistent context.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 19:58:17 +00:00
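
The check itself is tiny; a minimal sketch (CSTS.CFS is bit 1 of the
Controller Status register per the NVMe spec; the caller passes a freshly
read CSTS value and escalates to reset on a non-zero result):

    #include <stdint.h>

    #define NVME_CSTS_CFS   (1u << 1)       /* Controller Fatal Status */

    /* Returns non-zero if the timeout path should reset, not abort. */
    static int
    nvme_timeout_wants_reset_sketch(uint32_t csts)
    {
            return ((csts & NVME_CSTS_CFS) != 0);
    }
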
Jim Harris
dbba74428b Add API for nvme consumers to access controller and namespace identify data.
Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 19:52:57 +00:00
Jim Harris
b846efd7ec Add controller reset capability to nvme(4) and ability to explicitly
invoke it from nvmecontrol(8).

Controller reset will be performed in cases where I/O requests are
repeatedly timing out, the controller reports an unrecoverable condition,
or when explicitly requested via IOCTL or an nvme consumer.  Since the
controller may be in such a state where it cannot even process queue
deletion requests, we will perform a controller reset without trying
to clean up anything on the controller first.

Sponsored by:	Intel
Reviewed by:	carl
2013-03-26 19:50:46 +00:00
Adrian Chadd
3feffbd796 Add per-TXQ EDMA FIFO staging queue support.
Each set of frames pushed into a FIFO is represented by a list of
ath_bufs - the first ath_buf in the FIFO list is marked with
ATH_BUF_FIFOPTR; the last ath_buf in the FIFO list is marked with
ATH_BUF_FIFOEND.

Multiple lists of frames are just glued together in the TAILQ as per
normal - except that at the end of a FIFO list, the descriptor link
pointer will be NULL and it'll be tagged with ATH_BUF_FIFOEND.

For non-EDMA chipsets this is a no-op - the ath_txq frame list (axq_q)
stays the same and is treated the same.

For EDMA chipsets the frames are pushed into axq_q and then when
the FIFO is to be (re) filled, frames will be moved onto the FIFO
queue and then pushed into the FIFO.

So:

* Add a new queue in each hardware TXQ (ath_txq) for staging FIFO frame
  lists.  It's a TAILQ (like the normal hardware frame queue) rather than
  the ath9k list-of-lists to represent FIFO entries.

* Add new ath_buf flags - ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND.

* When allocating ath_buf entries, clear out the flag value before
  returning it or it'll end up having stale flags.

* When cloning ath_buf entries, only clone ATH_BUF_MGMT.  Don't clone
  the FIFO related flags.

* Extend ath_tx_draintxq() to first drain the FIFO staging queue, _then_
  drain the normal hardware queue.

Tested:

* AR9280, hostap
* AR9280, STA
* AR9380/AR9580 - hostap

TODO:

* Test on other chipsets, just to be thorough.
2013-03-26 19:46:51 +00:00
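
A minimal sketch of the staging-queue shape described above (the real
ath_txq layout differs in detail; this is illustrative):

    #include <sys/queue.h>

    struct ath_buf;                         /* opaque in this sketch */

    /*
     * Each hardware TXQ grows a second TAILQ staging frames bound for
     * the FIFO, plus a counter of FIFO slots currently in use.
     */
    struct ath_txq_sketch {
            TAILQ_HEAD(, ath_buf)   axq_q;          /* normal frame list */
            struct {
                    TAILQ_HEAD(, ath_buf)   axq_q;  /* FIFO staging list */
                    int                     axq_depth;
            } fifo;
            int                     axq_fifo_depth; /* FIFO slots in use */
    };
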
Jim Harris
65c2474e6d Keep a doubly-linked list of outstanding trackers.
This enables in-order re-submission of I/O after a controller reset.

Sponsored by:	Intel
2013-03-26 18:45:16 +00:00
Jim Harris
5f1e251de6 Create a generic nvme_ctrlr_cmd_get_log_page function, and change the
health information log page function to use it.

Sponsored by:	Intel
2013-03-26 18:43:53 +00:00
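
A minimal sketch of such a generic helper (per NVMe 1.0: Get Log Page is
admin opcode 0x02, and cdw10 holds the log page ID in bits 7:0 plus the
0's-based dword count in bits 27:16; the command struct is simplified):

    #include <stdint.h>

    #define NVME_OPC_GET_LOG_PAGE   0x02

    struct nvme_glp_cmd_sketch {
            uint8_t         opc;
            uint32_t        nsid;
            uint32_t        cdw10;
    };

    static void
    nvme_get_log_page_sketch(struct nvme_glp_cmd_sketch *cmd,
        uint8_t log_page, uint32_t nsid, uint32_t payload_size)
    {
            cmd->opc = NVME_OPC_GET_LOG_PAGE;
            cmd->nsid = nsid;
            cmd->cdw10 = ((payload_size / 4) - 1) << 16;    /* NUMD, 0-based */
            cmd->cdw10 |= log_page;
    }

The health, error, and firmware log helpers above can all funnel through
a routine of this shape.
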
Jim Harris
99d99f7408 Expose the get/set features API to nvme consumers.
Sponsored by:	Intel
2013-03-26 18:42:05 +00:00
Jim Harris
038a5ee403 Add an interface for nvme shim drivers (i.e. nvd) to register for
notifications when new nvme controllers are added to the system.

Sponsored by:	Intel
2013-03-26 18:39:54 +00:00
Jim Harris
0a0b08cc30 Enable asynchronous event requests on non-Chatham devices.
Also add logic to clean up all outstanding asynchronous event requests
when resetting or shutting down the controller, since these requests
will not be explicitly completed by the controller itself.

Sponsored by:	Intel
2013-03-26 18:37:36 +00:00
Jim Harris
990e741c18 Move controller destruction code from nvme_detach() to a new nvme_ctrlr_destruct()
function.

Sponsored by:	Intel
2013-03-26 18:34:19 +00:00
Jim Harris
274b3a88fa Specify command timeout interval on a per-command type basis.
This is primarily driven by the need to disable timeouts for asynchronous
event requests, which by nature should not be timed out.

Sponsored by:	Intel
2013-03-26 18:31:46 +00:00
Jim Harris
879de69910 Explicitly abort a timed-out command if the ABORT command sent to the
controller indicates the command was not found.

Sponsored by:	Intel
2013-03-26 18:29:04 +00:00
Jim Harris
6cb0607039 Break out the code for completing an nvme_tracker object into a separate
function.

This allows for completions outside the normal completion path, for example
when an ABORT command fails because the controller reports that the targeted
command does not exist.  This is mainly for protection against a faulty
controller, but we need to clean up our internal request nonetheless.

Sponsored by:	Intel
2013-03-26 18:27:22 +00:00
Jim Harris
448195e764 Add support for ABORT commands, including issuing these commands when
an I/O times out.

Also ensure that we retry commands that are aborted due to a timeout.

Sponsored by:	Intel
2013-03-26 18:23:35 +00:00
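
A minimal sketch of building the ABORT itself (admin opcode 0x08; per the
spec, cdw10 packs the submission queue ID in bits 15:0 and the command ID
in bits 31:16; the struct is simplified):

    #include <stdint.h>

    #define NVME_OPC_ABORT  0x08

    struct nvme_abort_cmd_sketch {
            uint8_t         opc;
            uint32_t        cdw10;
    };

    static void
    nvme_build_abort_sketch(struct nvme_abort_cmd_sketch *cmd,
        uint16_t cid, uint16_t sqid)
    {
            cmd->opc = NVME_OPC_ABORT;
            cmd->cdw10 = ((uint32_t)cid << 16) | sqid;
    }
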
Jim Harris
d6f54866ea Add an internal _nvme_qpair_submit_request function, which performs
the submit action assuming the qpair lock has already been acquired.

Also change nvme_qpair_submit_request to just lock/unlock the mutex
around a call to this new function.

This fixes a recursive mutex acquisition in the retry path.

Sponsored by:	Intel
2013-03-26 18:20:11 +00:00
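
The pattern in sketch form (standard mtx_assert()/MA_OWNED usage; the
qpair and request internals are elided):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    struct nvme_request;                    /* opaque in this sketch */
    struct nvme_qpair_sketch {
            struct mtx      lock;
    };

    /* Locked variant: caller must already hold qpair->lock. */
    static void
    _nvme_qpair_submit_request(struct nvme_qpair_sketch *qpair,
        struct nvme_request *req)
    {
            mtx_assert(&qpair->lock, MA_OWNED);
            /* ... actual submission work, lock already held ... */
    }

    /* Public entry point: take the lock, then call the locked variant. */
    static void
    nvme_qpair_submit_request(struct nvme_qpair_sketch *qpair,
        struct nvme_request *req)
    {
            mtx_lock(&qpair->lock);
            _nvme_qpair_submit_request(qpair, req);
            mtx_unlock(&qpair->lock);
    }

Retry paths that already hold qpair->lock call the underscore variant,
avoiding the recursive mtx_lock().
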
Jim Harris
aaf6b84a4e Make the DSM range count 0-based. Previously we were deallocating one more
LBA than we should have been.

Sponsored by:	Intel
2013-03-26 18:16:30 +00:00
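
The 0-based field in question is NR, the range count in cdw10, which the
spec defines as a 0's-based value; a sketch of the resulting encoding
(command struct simplified, NVM opcode 0x09, AD bit in cdw11):

    #include <stdint.h>

    #define NVME_OPC_DATASET_MANAGEMENT     0x09
    #define NVME_DSM_ATTR_DEALLOCATE        (1u << 2)       /* AD bit */

    struct nvme_dsm_cmd_sketch {
            uint8_t         opc;
            uint32_t        cdw10;
            uint32_t        cdw11;
    };

    static void
    nvme_build_deallocate_sketch(struct nvme_dsm_cmd_sketch *cmd,
        uint32_t num_ranges)
    {
            cmd->opc = NVME_OPC_DATASET_MANAGEMENT;
            cmd->cdw10 = num_ranges - 1;    /* NR is 0-based per the spec */
            cmd->cdw11 = NVME_DSM_ATTR_DEALLOCATE;
    }
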
Jim Harris
51a9feb9b0 Do not look at the namespace's thin provisioning field to determine whether
the DSM command is supported.  The two are not related.

Sponsored by:	Intel
2013-03-26 18:01:24 +00:00
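
A minimal sketch of the check that is actually related (per the spec, DSM
support is advertised by ONCS bit 2 in Identify Controller, while thin
provisioning is NSFEAT bit 0 in Identify Namespace):

    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_ONCS_DSM   (1u << 2)

    static bool
    nvme_supports_dsm_sketch(uint16_t oncs)
    {
            return ((oncs & NVME_ONCS_DSM) != 0);
    }
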
Adrian Chadd
35bec3655e Remove the mcast path calls to ath_hal_gettxdesclinkptr() for axq_link -
they're no longer needed for the legacy path and they're not wanted
for the EDMA path.

Tested:

* AR9280, hostap + CABQ
* AR9380/AR9580, hostap + CABQ
2013-03-26 04:56:54 +00:00
Adrian Chadd
b708ea2941 Remove this dead code - it's no longer relevant (as yes, we do actually
support TX on EDMA chips.)
2013-03-26 04:53:40 +00:00
Adrian Chadd
b6ef0f8ac8 Convert the CABQ queue code over to use the HAL link pointer method
instead of axq_link.

This (among a bunch of uncommitted work) is required for EDMA chips
to correctly transmit frames on the CABQ.

Tested:

* AR9280, hostap mode
* AR9380/AR9580, hostap mode (staggered beacons)

TODO:

* This code only really gets called when bursted beacons are used;
  it glues multiple CABQ queues together when sending to the hardware.
* More thorough bursted beacon testing! (first requires some work with
  the beacon queue code for bursted beacons, as that currently uses the
  link pointer and will fail on EDMA chips.)
2013-03-26 04:52:16 +00:00
Adrian Chadd
9e7259a2a3 Convert the EDMA multicast queue code over to use the HAL method to set
the descriptor link pointer, rather than setting it directly.

This is needed on AR9380 and later (ie, EDMA) NICs so the multicast queue
has a chance in hell of being put together right.

Tested:

* AR9380, AR9580 in hostap mode, CABQ traffic (but with other patches..)
2013-03-26 04:48:58 +00:00
Adrian Chadd
0891354cd2 Migrate the multicast queue assembly code to not use the axq_link pointer
and instead use the HAL method to set the link pointer.

Tested:

* AR9280, hostap mode, CABQ frames being queued and transmitted
2013-03-26 04:47:40 +00:00
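
In code terms the conversion looks roughly like this, with
ath_hal_settxdesclink() being the setter counterpart of the
ath_hal_gettxdesclinkptr() mentioned above (a sketch fragment, not the
literal diff):

    /* Before: poke the link word through the cached axq_link pointer. */
    *txq->axq_link = bf->bf_daddr;

    /*
     * After: let the HAL write the link field in a chip-specific way,
     * which is what EDMA (AR9380+) descriptor layouts require.
     */
    ath_hal_settxdesclink(sc->sc_ah, bf_last->bf_lastds, bf->bf_daddr);
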
Alexander V. Chernikov
58e8e6e6bb Unlock the IPMI sc while performing requests via the KCS and SMIC interfaces.
The same is already done in the SSIF interface code.
This reduces the contention/spinning reported by many users.

PR:		kern/172166
Submitted by:	Eric van Gyzen <eric at vangyzen.net>
MFC after:	2 weeks
2013-03-25 14:30:34 +00:00
Alexander Motin
6a740c4a4f Read Asynchronous Notification statuses only if a Port Multiplier or ATAPI
device is connected.  ATA disks do not use ANs, and the extra register
read operation is quite expensive.
2013-03-25 13:58:17 +00:00
Alexander Motin
3d44989055 Depending on the combination of running commands (NCQ/non-NCQ), try to avoid
an extra read from the PxCI/PxSACT registers.  If only NCQ commands are running, we
don't really need PxCI.  If only non-NCQ commands are running, we don't need
PxSACT.  A mixed set may happen only on controllers with FIS-based switching
when a port multiplier is attached, and then we have to read both registers.

MFC after:	1 month
2013-03-25 08:50:51 +00:00
Ian Lepore
a350e54067 Set the backlink in mmc commands to the mmc request that contains them. 2013-03-24 17:23:10 +00:00
Alexander Motin
db12db318d No need to erase all 64 bytes of the CFIS area if we never use more than 16. 2013-03-24 16:51:21 +00:00
Adrian Chadd
1f6b3ed63c Add new regulatory domain.
Obtained from:	Qualcomm Atheros
2013-03-24 04:42:56 +00:00
Adrian Chadd
56a859789f Move the TXQ lock earlier in this routine, so as to correctly protect the
link pointer check.
2013-03-24 04:09:54 +00:00
Adrian Chadd
0acf45ed86 Fix locking issues introduced in passing by the TXQ changes.
Tested:

* AR9580, STA mode
2013-03-24 04:09:29 +00:00
Adrian Chadd
b837332d0a Overhaul the TXQ locking (again!) as part of addressing some beacon/cabq
timing-related issues.

Moving the TX locking under one lock made things easier to progress on,
but it had one important side-effect - it increased the latency of
CABQ setup when sending beacons.

This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.

The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.

The second half of this commit is the CABQ-related setup changes needed
for sane-looking EDMA CABQ support.  Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot.  For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.

The aim here is to setup the CABQ list and then push _that list_ to
the hardware for transmission.  I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.

Major:

* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
  a single lock.

* Leave the software queue side of things under the ATH_TX_LOCK lock,
  continuing to serialise things as they are.

* Add a new function which is called whenever there's a beacon miss,
  to print out some debugging.  This is primarily designed to help
  me figure out if the beacon miss events are due to a noisy environment,
  issues with the PHY/MAC, or something else.

* Move the CABQ setup/enable to occur _after_ all the VAPs have been
  looked at.  This means that for multiple VAPS in bursted mode, the
  CABQ gets primed once all VAPs are checked, rather than being primed
  on the first VAP and then having frames appended after this.

Minor:

* Add a (disabled) twiddle to let me enable/disable cabq traffic.
  It's primarily there to let me easily debug what's going on with beacon
  and CABQ setup/traffic; there are some DMA engine hangs which I'm finally
  trying to trace down.

* Clear bf_next when flushing frames; it should quieten some warnings
  that show up when a node goes away.

Tested:

* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)

TODO:

* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
  ensure the CABQ stuff is all sane (eg, the MORE bits that aren't being
  correctly set when chaining descriptors.)
2013-03-24 00:03:12 +00:00
Adrian Chadd
49ddabc4bd CABQ calculation changes to try and fix some weird corner cases leading
to stuck beacons.

* Set the cabq readytime (ie, how long to burst for) to 50% of the total
  beacon interval time
* Fix the cabq adjustment calculation based on how the beacon offset is
  calculated (the SWBA/DBA time offset.)

This is all still a bit of magic voodoo, but it does seem to have further
quietened issues with missed/stuck beacons in my local testing.
In any case, it better matches what the reference HAL implements.

Obtained from:	Qualcomm Atheros
2013-03-23 23:51:11 +00:00
Konstantin Belousov
b11d58b63f Do not call malloc(M_WAITOK) while bodev->fence_lock mutex is
held. The ttm_buffer_object_transfer() does not need the mutex locked
at all, except for the call to the driver sync_obj_ref() method.

Reported and tested by:	dumbbell
MFC after:	2 weeks
2013-03-23 22:23:15 +00:00
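
The resulting shape, as a sketch fragment (the malloc type and field
names are illustrative; the point is that the sleepable M_WAITOK
allocation happens before the mutex is taken, and the lock covers only
the sync_obj_ref() call):

    /* M_WAITOK may sleep, so allocate before taking the mutex. */
    fbo = malloc(sizeof(*fbo), M_TTM, M_WAITOK | M_ZERO);
    *fbo = *bo;             /* plain field copies need no locking */

    /* Only the driver sync_obj_ref() call needs fence_lock held. */
    mtx_lock(&bdev->fence_lock);
    fbo->sync_obj = bdev->driver->sync_obj_ref(bo->sync_obj);
    mtx_unlock(&bdev->fence_lock);
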
Jean-Sébastien Pédron
accadf8de2 drm/ttm: Fix a typo: s/pTTM]/[TTM]/ 2013-03-23 20:46:47 +00:00
Jean-Sébastien Pédron
76c40c6986 drm/ttm: Explain why we don't need to acquire a ref in ttm_bo_vm_ctor() 2013-03-23 20:43:26 +00:00