Commit Graph

1460 Commits

Author SHA1 Message Date
Adrian Chadd
5d4dedadb6 Use a per-RX-queue deferred list, rather than a single deferred list for
both queues.

Since ath_rx_pkt() does multi-mbuf frame recombining based on the RX queue,
this per-queue separation is needed.

Tested:

* AR9380 (XB112), hostap mode
2013-04-16 20:21:02 +00:00
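
A minimal sketch of the per-RX-queue deferred list idea in the commit above; the structure and field names here are illustrative, not the driver's actual layout:

    #include <sys/queue.h>

    struct ath_buf;                             /* driver RX/TX buffer */
    TAILQ_HEAD(ath_bufhead, ath_buf);

    /*
     * One deferred list per RX queue (e.g. high- and low-priority),
     * so ath_rx_pkt() only ever recombines mbufs from a single queue.
     */
    struct ath_rx_edma_state {
        struct ath_bufhead deferred;            /* completed, not yet processed */
    };

    struct ath_rx_edma_state sc_rxq_state[2];   /* HP + LP RX queues */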
Adrian Chadd
978c5ce568 Now that the register definitions are in -HEAD, enable this. 2013-04-15 17:59:06 +00:00
Adrian Chadd
a04110a3b6 Bring over some AR9271 register definitions from the QCA HAL.
Obtained from:	Qualcomm Atheros
2013-04-15 17:58:11 +00:00
Adrian Chadd
6961e9eda4 Always enable TXOK interrupts when setting up TX queues for EDMA NICs. 2013-04-11 22:02:35 +00:00
Adrian Chadd
69cbcb210d Fix this to compile when ATH_DEBUG_ALQ is defined but ATH_DEBUG isn't. 2013-04-08 21:15:43 +00:00
Adrian Chadd
7598a108ff Add a new TX power field - it's intended to be used where low TX power
is configured for the higher rates (lower than the maximum) but higher TX power
is configured for the lower rates, above the configured cap, to improve
long-distance behaviour.
2013-04-05 09:06:39 +00:00
Adrian Chadd
9580780191 HAL additions to enable MCI Bluetooth coexistence in the AR9300 HAL.
* Add the rest of the missing GPIO output mux types;
* Add in a new debug category;
* Add a new MCI btcoex configuration option in ath_hal.ah_config

Obtained from:	Qualcomm Atheros
2013-04-05 07:41:47 +00:00
Adrian Chadd
28f4a39c95 Update comments! 2013-04-04 08:57:29 +00:00
Adrian Chadd
8cc724d9be Fix the busdma logic to work with EDMA chipsets when using bounce
buffers (ie, >4GB on amd64.)

The underlying problem was that PREREAD doesn't sync the mbuf
with the DMA memory (ie, the bounce buffer), so the bounce buffer may
have had stale information.  Thus the driver always considered the
buffer completed and things just went off the rails.

This change does the following:

* Make ath_rx_pkt() always consume the mbuf somehow; it no longer
  passes error mbufs (eg CRC errors, crypt errors, etc) back up
  to the RX path to recycle.  This means that a new mbuf is always
  allocated each time, but it's cleaner.

* Push the RX buffer map/unmap to occur in the RX path, not
  ath_rx_pkt().  Thus, ath_rx_pkt() now assumes (a) it has to consume
  the mbuf somehow, and (b) that it's already been unmapped and
  synced.

* For the legacy path, the descriptor isn't mapped; it comes out of
  coherent DMA memory anyway.  So leave it there.

* For the EDMA path, the RX descriptor has to be cleared before
  it's passed to the hardware, so that when we check with
  a POSTREAD sync, we actually get either a blank (not finished)
  or a filled-out descriptor (finished.)  Otherwise we get stale
  data in the DMA memory.

* .. so, for EDMA RX path, we need PREREAD|PREWRITE to sync the
  data -> DMA memory, then POSTREAD|POSTWRITE to finish syncing
  the DMA memory -> data.

* Whilst we're here, make sure that the EDMA buffer setup (ie,
  bzero'ing the descriptor part) is done before the mbuf is
  mapped/synced.

NOTE: there's been a lot of commits besides this one with regards to
tidying up the busdma handling in ath(4).  Please check the recent
commit history.

Discussed with and thanks to:	scottl

Tested:

* AR5416 (non-EDMA) on i386, with the DMA tag for the driver
  set to 2^30, not 2^32, STA

* AR9580 (EDMA) on i386, as above, STA

* A user tested an AR9380 on amd64 with 32GB RAM.

PR:		kern/177530
2013-04-04 08:21:56 +00:00
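
A minimal sketch of the EDMA RX descriptor sync ordering described in the commit above, assuming ath(4)-style names (sc->sc_dmat, bf->bf_dmamap, bf->bf_desc, sc->sc_rx_statuslen); illustrative, not the committed code:

    /*
     * Before handing the descriptor to the hardware: clear it, then
     * push the CPU's view out to the DMA memory / bounce buffer.
     */
    memset(bf->bf_desc, 0, sc->sc_rx_statuslen);
    bus_dmamap_sync(sc->sc_dmat, bf->bf_dmamap,
        BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);

    /*
     * Before checking whether the hardware has filled it in: pull the
     * DMA memory / bounce buffer contents back into the CPU's view.
     * A still-blank descriptor means "not finished"; a filled-out one
     * means the frame completed and can be processed.
     */
    bus_dmamap_sync(sc->sc_dmat, bf->bf_dmamap,
        BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);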
Adrian Chadd
c23a9d98bf Mark a couple of places where I think the dmamap isn't being unmapped
before the TX path is being aborted.

Right now it's in the TDMA code and I can live with that; but it really
should get fixed.

I'll do a more thorough audit of this code soon.
2013-04-02 06:25:10 +00:00
Adrian Chadd
a91ab3c099 Some TX dmamap cleanups.
* Don't use BUS_DMA_ALLOCNOW for descriptor DMA maps; we never use
  bounce buffers for the descriptors themselves.

* Add some XXX's to mark where the ath_buf has its mbuf ripped from
  underneath it without actually cleaning up the dmamap.  I haven't
  audited those particular code paths to see if the DMA map is guaranteed
  to be setup there; I'll do that later.

* Print out a warning if the descdma tidyup code is given some descriptors
  w/ maps to free.  Ideally the owner will free the mbufs and unmap
  the descriptors before freeing the descriptor/ath_buf pairs, but
  right now that's not guaranteed to be done.

Reviewed by:	scottl (BUS_DMA_ALLOCNOW tag)
2013-04-02 06:24:22 +00:00
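
For reference, a hedged sketch of a descriptor DMA tag created without BUS_DMA_ALLOCNOW; the alignment, sizes and variable names below are placeholders, not the driver's actual values:

    /*
     * Descriptors come from bus_dmamem_alloc()'d coherent memory and
     * never bounce, so there's no need to preallocate bounce pages.
     */
    error = bus_dma_tag_create(bus_get_dma_tag(sc->sc_dev),
        PAGE_SIZE, 0,                   /* alignment, boundary */
        BUS_SPACE_MAXADDR_32BIT,        /* lowaddr */
        BUS_SPACE_MAXADDR,              /* highaddr */
        NULL, NULL,                     /* filter, filterarg */
        desc_len, 1, desc_len,          /* maxsize, nsegments, maxsegsz */
        0,                              /* flags: no BUS_DMA_ALLOCNOW */
        NULL, NULL,                     /* lockfunc, lockfuncarg */
        &dd->dd_dmat);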
Adrian Chadd
18303fd833 Add a missing unmap; if we're freeing this mbuf then we must
both sync and unmap the dmamap before freeing it.
2013-04-02 06:21:37 +00:00
Adrian Chadd
3f3a5dbd2c Ensure that we only call the busdma unmap/flush routines once, when
the buffer is being freed.

* When buffers are cloned, the original mapping isn't copied, but it
  wasn't being freed until later.  To be safe, free the
  mapping when the buffer is cloned.

* ath_freebuf() now no longer calls the busdma sync/unmap routines.

* ath_tx_freebuf() now calls sync/unmap.

* Call sync first, before calling unmap.

Tested:

* AR5416, STA mode
2013-04-01 20:57:13 +00:00
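
A minimal sketch of the resulting free-path ordering (sync first, then unmap, then free), again using ath(4)-style names and meant as an outline only:

    if (bf->bf_m != NULL) {
        /* Sync first, so any outstanding DMA/bounce activity is finished .. */
        bus_dmamap_sync(sc->sc_dmat, bf->bf_dmamap, BUS_DMASYNC_POSTWRITE);
        /* .. then tear the mapping down .. */
        bus_dmamap_unload(sc->sc_dmat, bf->bf_dmamap);
        /* .. and only then free the mbuf. */
        m_freem(bf->bf_m);
        bf->bf_m = NULL;
    }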
Adrian Chadd
587feafb5a Remove an un-needed comment. 2013-04-01 20:44:21 +00:00
Adrian Chadd
09067b6e9a Use ATH_MAX_SCATTER rather than ATH_TXDESC.
ATH_MAX_SCATTER is used to size the ath_buf DMA segment array.
We thus should use it when checking sizes of things.
2013-04-01 20:12:21 +00:00
Adrian Chadd
80b87f1814 Only unmap the RX mbuf DMA map if there's a buffer here.
The normal RX path (ath_rx_pkt()) will sync and unmap the
buffer before passing it up the stack.  We only need to do this
if we're flushing the FIFO during reset/shutdown.
2013-04-01 20:11:19 +00:00
Adrian Chadd
b92b5f6e3a * Stop processing after HAL_EIO; this is what the reference driver does.
* If we hit an empty queue condition (which I haven't yet root caused, grr.)
  .. make sure we release the lock before continuing.
2013-03-27 00:35:45 +00:00
Adrian Chadd
92e84e43a6 Implement the replacement EDMA FIFO code.
(Yes, the previous code temporarily broke EDMA TX. I'm sorry; I should've
actually set ATH_BUF_FIFOEND on frames so txq->axq_fifo_depth was
cleared!)

This code implements a whole bunch of sorely needed EDMA TX improvements
along with CABQ TX support.

The specifics:

* When filling/refilling the FIFO, use the new TXQ staging queue
  for FIFO frames

* Tag frames with ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND correctly.
  For now the non-CABQ transmit path pushes one frame into the TXQ
  staging queue without setting up the intermediary link pointers
  to chain them together, so draining frames from the txq staging
  queue to the FIFO queue occurs AMPDU / MPDU at a time.

* In the CABQ case, manually tag the list with ATH_BUF_FIFOPTR and
  ATH_BUF_FIFOEND so a chain of frames is pushed into the FIFO
  at once.

* Now that frames are in a FIFO pending queue, we can top up the
  FIFO after completing a single frame.  This means we can keep
  it filled rather than waiting for it to drain and _then_ adding
  more frames.

* The EDMA restart routine now walks the FIFO queue in the TXQ
  rather than the pending queue and re-initialises the FIFO with
  that.

* When restarting EDMA, we may have partially completed sending
  a list.  So stamp the first frame that we see in a list with
  ATH_BUF_FIFOPTR and push _that_ into the hardware.

* When completing frames, only check those on the FIFO queue.
  We should never ever queue frames from the pending queue
  direct to the hardware, so there's no point in checking.

* Until I figure out what's going on, make sure that if the TXSTATUS
  for an empty queue pops up, we complain loudly and continue.
  This will stop the panics that people are seeing.  I'll add
  some code later which will assist in ensuring I'm populating
  each descriptor with the correct queue ID.

* When considering whether to queue frames directly to the hardware
  queue or to the software queue, make sure the depth of
  the FIFO is now taken into account.

* When completing frames, tag them with ATH_BUF_BUSY if they're
  not the final frame in a FIFO list.  The same holding-descriptor
  behaviour is required when handling descriptors linked together
  with a link pointer, as the hardware will re-read the previous
  descriptor to refresh the link pointer before continuing.

* .. and if we complete the FIFO list (ie, the buffer has
  ATH_BUF_FIFOEND set), then we don't need the holding buffer
  any longer.  Thus, free it.

Tested:

* AR9380/AR9580, STA and hostap
* AR9280, STA/hostap

TODO:

* I don't yet trust that the EDMA restart routine is totally correct
  in all circumstances.  I'll continue to thrash this out under heavy
  multiple-TXQ traffic load and fix whatever pops up.
2013-03-26 20:04:45 +00:00
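
A rough sketch of the FIFO (re)fill step described in the commit above: pull one frame (or tagged chain) off the TXQ staging queue, mark it, and push it into the hardware FIFO.  The HAL calls are the real ath(4) macros, but the function name and queue/field names are approximations:

    static void
    ath_edma_fifo_fill_one(struct ath_softc *sc, struct ath_txq *txq)
    {
        struct ath_buf *bf;

        bf = TAILQ_FIRST(&txq->axq_q);      /* head of the staging queue */
        if (bf == NULL || txq->axq_fifo_depth >= HAL_TXFIFO_DEPTH)
            return;

        /*
         * The non-CABQ path currently stages one (A-)MPDU at a time, so
         * this "list" is a single entry; CABQ chains get FIFOPTR on the
         * first buffer and FIFOEND on the last one instead.
         */
        bf->bf_flags |= ATH_BUF_FIFOPTR | ATH_BUF_FIFOEND;

        TAILQ_REMOVE(&txq->axq_q, bf, bf_list);
        TAILQ_INSERT_TAIL(&txq->fifo.axq_q, bf, bf_list);  /* FIFO-side queue */
        txq->axq_fifo_depth++;

        ath_hal_puttxbuf(sc->sc_ah, txq->axq_qnum, bf->bf_daddr);
        ath_hal_txstart(sc->sc_ah, txq->axq_qnum);
    }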
Adrian Chadd
3feffbd796 Add per-TXQ EDMA FIFO staging queue support.
Each set of frames pushed into a FIFO is represented by a list of
ath_bufs - the first ath_buf in the FIFO list is marked with
ATH_BUF_FIFOPTR; the last ath_buf in the FIFO list is marked with
ATH_BUF_FIFOEND.

Multiple lists of frames are just glued together in the TAILQ as per
normal - except that at the end of a FIFO list, the descriptor link
pointer will be NULL and it'll be tagged with ATH_BUF_FIFOEND.

For non-EDMA chipsets this is a no-op - the ath_txq frame list (axq_q)
stays the same and is treated the same.

For EDMA chipsets the frames are pushed into axq_q and then when
the FIFO is to be (re) filled, frames will be moved onto the FIFO
queue and then pushed into the FIFO.

So:

* Add a new queue in each hardware TXQ (ath_txq) for staging FIFO frame
  lists.  It's a TAILQ (like the normal hardware frame queue) rather than
  the ath9k list-of-lists to represent FIFO entries.

* Add new ath_buf flags - ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND.

* When allocating ath_buf entries, clear out the flag value before
  returning it or it'll end up having stale flags.

* When cloning ath_buf entries, only clone ATH_BUF_MGMT.  Don't clone
  the FIFO related flags.

* Extend ath_tx_draintxq() to first drain the FIFO staging queue, _then_
  drain the normal hardware queue.

Tested:

* AR9280, hostap
* AR9280, STA
* AR9380/AR9580 - hostap

TODO:

* Test on other chipsets, just to be thorough.
2013-03-26 19:46:51 +00:00
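
A short sketch of the buffer-flag hygiene described above; the assumed names (bf_flags, ATH_BUF_MGMT) follow ath(4) conventions, but this is illustrative rather than the committed diff:

    /* On allocation: never hand back stale FIFO flags. */
    bf->bf_flags = 0;

    /*
     * On clone: carry over only the "management frame" marker.  The
     * FIFOPTR/FIFOEND tags describe the original buffer's position in
     * its FIFO list and must not be copied.
     */
    nbf->bf_flags = bf->bf_flags & ATH_BUF_MGMT;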
Adrian Chadd
35bec3655e Remove the mcast path calls to ath_hal_gettxdesclinkptr() for axq_link -
they're no longer needed for the legacy path and they're not wanted
for the EDMA path.

Tested:

* AR9280, hostap + CABQ
* AR9380/AR9580, hostap + CABQ
2013-03-26 04:56:54 +00:00
Adrian Chadd
b708ea2941 Remove this dead code - it's no longer relevant (as yes, we do actually
support TX on EDMA chips.)
2013-03-26 04:53:40 +00:00
Adrian Chadd
b6ef0f8ac8 Convert the CABQ queue code over to use the HAL link pointer method
instead of axq_link.

This (among a bunch of uncommitted work) is required for EDMA chips
to correctly transmit frames on the CABQ.

Tested:

* AR9280, hostap mode
* AR9380/AR9580, hostap mode (staggered beacons)

TODO:

* This code only really gets called when burst beacons are used;
  it glues multiple CABQ queues together when sending to the hardware.
* More thorough bursted beacon testing! (first requires some work with
  the beacon queue code for bursted beacons, as that currently uses the
  link pointer and will fail on EDMA chips.)
2013-03-26 04:52:16 +00:00
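
A minimal sketch of chaining CABQ frames via the HAL link-pointer method instead of poking axq_link directly; the list and variable names are illustrative:

    struct ath_buf *bf, *bf_prev = NULL;

    TAILQ_FOREACH(bf, &cabq_list, bf_list) {
        if (bf_prev != NULL)
            ath_hal_settxdesclink(sc->sc_ah, bf_prev->bf_lastds,
                bf->bf_daddr);
        bf_prev = bf;
    }
    /*
     * The final descriptor's link pointer is left at zero so the list
     * terminates cleanly on both legacy and EDMA chips.
     */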
Adrian Chadd
9e7259a2a3 Convert the EDMA multicast queue code over to use the HAL method to set
the descriptor link pointer, rather than setting it directly.

This is needed on AR9380 and later (ie, EDMA) NICs so the multicast queue
has a chance in hell of being put together right.

Tested:

* AR9380, AR9580 in hostap mode, CABQ traffic (but with other patches..)
2013-03-26 04:48:58 +00:00
Adrian Chadd
0891354cd2 Migrate the multicast queue assembly code to not use the axq_link pointer
and instead use the HAL method to set the link pointer.

Tested:

* AR9280, hostap mode, CABQ frames being queued and transmitted
2013-03-26 04:47:40 +00:00
Adrian Chadd
1f6b3ed63c Add new regulatory domain.
Obtained from:	Qualcomm Atheros
2013-03-24 04:42:56 +00:00
Adrian Chadd
56a859789f Move the TXQ lock earlier in this routine, so as to correctly protect the
link pointer check.
2013-03-24 04:09:54 +00:00
Adrian Chadd
0acf45ed86 Fix the locking issues introduced by the drive-by TXQ change.
Tested:

* AR9580, STA mode
2013-03-24 04:09:29 +00:00
Adrian Chadd
b837332d0a Overhaul the TXQ locking (again!) as part of some beacon/cabq timing
related issues.

Moving the TX locking under one lock made things easier to progress on,
but it had one important side-effect - it increased the latency of
handling CABQ setup when sending beacons.

This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.

The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.

The second half of this commit is the CABQ related setup changes needed
for sane looking EDMA CABQ support.  Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot.  For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.

The aim here is to setup the CABQ list and then push _that list_ to
the hardware for transmission.  I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.

Major:

* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
  a single lock.

* Leave the software queue side of things under the ATH_TX_LOCK lock,
  (continuing) to serialise things as they are.

* Add a new function which is called whenever there's a beacon miss,
  to print out some debugging information.  This is primarily designed to help
  me figure out whether the beacon miss events are due to a noisy environment,
  issues with the PHY/MAC, or something else.

* Move the CABQ setup/enable to occur _after_ all the VAPs have been
  looked at.  This means that for multiple VAPS in bursted mode, the
  CABQ gets primed once all VAPs are checked, rather than being primed
  on the first VAP and then having frames appended after this.

Minor:

* Add a (disabled) twiddle to let me enable/disable cabq traffic.
  It's primarily there to let me easily debug what's going on with beacon
  and CABQ setup/traffic; there are some DMA engine hangs which I'm finally
  trying to trace down.

* Clear bf_next when flushing frames; it should quieten some warnings
  that show up when a node goes away.

Tested:

* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)

TODO:

* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
  ensure the CABQ stuff is all sane (eg, the MORE bits that aren't being
  correctly set when chaining descriptors.)
2013-03-24 00:03:12 +00:00
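
A sketch of the locking split described above: short per-TXQ critical sections for hardware queue add/remove, with ATH_TX_LOCK kept for the software (per-TID) queues.  The macro names follow ath(4), but treat this as an outline rather than the committed code:

    /* Hardware TXQ manipulation: short, per-queue critical section. */
    ATH_TXQ_LOCK(txq);
    ATH_TXQ_INSERT_TAIL(txq, bf, bf_list);
    ath_hal_puttxbuf(sc->sc_ah, txq->axq_qnum, bf->bf_daddr);
    ath_hal_txstart(sc->sc_ah, txq->axq_qnum);
    ATH_TXQ_UNLOCK(txq);

    /*
     * Software queue work stays serialised by the TX lock, so the
     * beacon/CABQ path no longer has to contend for it.
     */
    ATH_TX_LOCK(sc);
    /* ... per-node / per-TID software queue handling ... */
    ATH_TX_UNLOCK(sc);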
Adrian Chadd
49ddabc4bd CABQ calculation changes to try and fix some weird corner cases leading
to stuck beacons.

* Set the cabq readytime (ie, how long to burst for) to 50% of the total
  beacon interval time
* Fix the cabq adjustment calculation based on how the beacon offset is
  calculated (the SWBA/DBA time offset.)

This is all still a bit of magic voodoo, but it does seem to have further
quietened issues with missed/stuck beacons under my local testing.
In any case, it better matches what the reference HAL implements.

Obtained from:	Qualcomm Atheros
2013-03-23 23:51:11 +00:00
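
A worked sketch of the readytime calculation: burst the CABQ for roughly half of the beacon interval.  The explicit TU-to-microsecond conversion and variable names below are illustrative:

    HAL_TXQ_INFO qi;

    ath_hal_gettxqueueprops(sc->sc_ah, sc->sc_cabq->axq_qnum, &qi);

    /* 1 TU = 1024us; burst for 50% of the beacon interval (in TU). */
    qi.tqi_readyTime = (ni->ni_intval * 1024) * 50 / 100;

    ath_hal_settxqueueprops(sc->sc_ah, sc->sc_cabq->axq_qnum, &qi);
    ath_hal_resettxqueue(sc->sc_ah, sc->sc_cabq->axq_qnum);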
Adrian Chadd
9cda8c8082 Fix the EDMA CABQ handling - for now, the CABQ takes a descriptor chain
like the legacy chips expect.
2013-03-20 05:44:03 +00:00
Adrian Chadd
f0db652cf6 Break out the RX completion path into "FIFO check / refill" and
"complete RX frames."

The 128 entry RX FIFO is really easy to fill up and miss refilling
when it's done in the ath taskq - as that gets blocked up doing
RX completion, TX completion and other random things.

So the 128 entry RX FIFO now gets emptied and refilled in the ath_intr()
path (and it grabs / releases locks, so ath_intr() can't just be
a FAST handler yet!) but the locks aren't held for very long.  The
completion part is done in the ath taskqueue context.

Details:

* Create a new completed frame list - sc->sc_rx_rxlist;
* Split the EDMA RX process queue into two halves - one that
  processes the RX FIFO and refills it with new frames; another
  that completes the completed frame list;
* When tearing down the driver, flush whatever is in the deferred
  queue as well as what's in the FIFO;
* Create two new RX methods - one that processes all RX queues,
  one that processes the given RX queue.  When MSI is implemented,
  we get told which RX queue the interrupt came in on so we can
  specifically schedule that.  (And I can do that with the non-MSI
  path too; I'll figure that out later.)
* Convert the legacy code over to use these new RX methods;
* Replace all the instances of the RX taskqueue enqueue with a call
  to a relevant RX method to enqueue one or all RX queues.

Tested:

* AR9380, STA
* AR9580, STA
* AR5413, STA
2013-03-19 19:32:28 +00:00
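
A rough outline of the split: a short locked section in the interrupt path drains and refills the FIFO, while frame completion is deferred to the taskqueue.  The lock, task and field names are illustrative rather than the exact driver entry points:

    /* Interrupt path: keep the RX FIFO fed, nothing more. */
    ATH_RX_LOCK(sc);
    /*
     * 1. Move completed buffers off the 128-entry FIFO onto the
     *    per-queue completed list (sc->sc_rx_rxlist[qtype]).
     * 2. Allocate + map fresh buffers and push them into the FIFO.
     */
    ATH_RX_UNLOCK(sc);

    /*
     * Defer the expensive part (ath_rx_pkt(), net80211 input, etc.)
     * to the ath taskqueue.
     */
    taskqueue_enqueue(sc->sc_tq, &sc->sc_rxtask);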
Adrian Chadd
74ea88c379 Add more TODO items. 2013-03-19 17:55:36 +00:00
Adrian Chadd
378a752f59 Now that the TX map field is correctly populated for both EDMA and
legacy chips, just use that.
2013-03-19 17:54:37 +00:00
Adrian Chadd
1ab002f461 Print out the current fifo queue depth correctly - not just the max
queue depth.

Silly hat to me.
2013-03-18 02:29:57 +00:00
Adrian Chadd
eefc93a947 Dump out information about the RX descriptor free list and FIFO information. 2013-03-18 01:12:36 +00:00
Adrian Chadd
d50e882ab9 Log some more information when the RX buffer allocation failed. 2013-03-18 01:11:52 +00:00
Adrian Chadd
cd4f1ba89f Why'd I keep this here? Remove it entirely now. 2013-03-15 20:22:20 +00:00
Adrian Chadd
302868d914 Fix two bugs:
* when pulling frames off of the TID queue, the ATH_TID_REMOVE()
  macro decrements the axq_depth field.  So don't do it twice.

* in ath_tx_comp_cleanup_aggr(), bf wasn't being reset to bf_first
  before walking the buffer list to complete buffers; so those buffers
  will leak.
2013-03-15 20:00:08 +00:00
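
A sketch of the second fix: restart from the head of the aggregate before the completion walk so every buffer gets completed and nothing leaks.  Names follow ath(4) conventions, illustratively:

    struct ath_buf *bf, *bf_next;

    bf = bf_first;                          /* the missing reset */
    while (bf != NULL) {
        bf_next = bf->bf_next;
        ath_tx_default_comp(sc, bf, 0);     /* completes and frees the buffer */
        bf = bf_next;
    }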
Adrian Chadd
8454d32107 Remove a now incorrect comment.
This comment dates back to my initial stab at TX aggregation completion,
where I didn't even bother trying to do software retries.
2013-03-15 04:43:27 +00:00
Adrian Chadd
5f2f0e616b Add locking around the new holdingbf code.
Since this is being done during buffer free, it's a crap shoot whether
the TX path lock is held or not.  I tried putting the ath_freebuf() code
inside the TX lock and I got all kinds of locking issues - it turns out
that the buffer free path sometimes is called with the lock held and
sometimes isn't. So I'll go and fix that soon.

Hence for now the holdingbf buffers are protected by the TXBUF lock.
2013-03-15 02:52:37 +00:00
Adrian Chadd
629ce2188a Implement "holding buffers" per TX queue rather than globally.
When working on TDMA, Sam Leffler found that the MAC DMA hardware
would re-read the last TX descriptor when getting ready to transmit
the next one.  Thus the whole ATH_BUF_BUSY flag came into existence -
the descriptor must be left alone (very specifically, the link pointer
must be maintained) until the hardware has moved onto the next frame.

He saw this in TDMA because the MAC would be frequently stopping during
active transmit (ie, when it wasn't its turn to transmit.)

Fast-forward to today.  It turns out that this is a problem not with
a single MAC DMA instance, but with each QCU (from 0->9).  They each
maintain separate descriptor pointers and will re-read the last
descriptor when starting to transmit the next.

So when your AP is busy transmitting from multiple TX queues, you'll
(more) frequently see one QCU stopped, waiting for a higher-priority QCU
to finish transmitting, before it'll go ahead and continue.  If you mess
up the descriptor (ie by freeing it) then you're out of luck.

Thanks to rpaulo for sticking with me whilst I diagnosed this issue
that he was quite reliably triggering in his environment.

This is a reimplementation; it doesn't have anything in common with
the ath9k or the Qualcomm Atheros reference driver.

Now - in theory it doesn't apply on the EDMA chips, as long as you
push one complete frame into the FIFO at a time.  But the MAC can DMA
from a list of frames pushed into the hardware queue (ie, you concat
'n' frames together with link pointers, and then push the head pointer
into the TXQ FIFO.)  Since that's likely how I'm going to implement
CABQ handling in hostap mode, it's likely that I will end up teaching
the EDMA TX completion code about busy buffers, just to be "sure"
this doesn't creep up.

Tested - iperf ap->sta and sta->ap (with both sides running this code):

* AR5416 STA
* AR9160/AR9220 hostap

To validate that it doesn't break the EDMA (FIFO) chips:

* AR9380, AR9485, AR9462 STA

Using iperf with the -S <tos byte decimal value> option to set the TCP
client-side DSCP bits, mapping to different TIDs and thus different TX queues.

TODO:

* Make this work on the EDMA chips, if we end up pushing lists of frames
  to the hardware (eg how we eventually will handle cabq in hostap/ibss
  mode.)
2013-03-14 06:20:02 +00:00
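
A minimal sketch of the per-TXQ holding buffer: the last completed buffer is parked on the queue (so its QCU can still re-read the link pointer) and only returned to the free list once a newer one replaces it.  Field names are approximate and the free-list accounting is omitted:

    struct ath_buf *old;

    ATH_TXBUF_LOCK(sc);
    old = txq->axq_holdingbf;           /* previously parked buffer */
    txq->axq_holdingbf = bf;            /* hardware may still re-read this one */
    if (old != NULL) {
        old->bf_flags &= ~ATH_BUF_BUSY;
        TAILQ_INSERT_TAIL(&sc->sc_txbuf, old, bf_list);  /* back to the free list */
    }
    ATH_TXBUF_UNLOCK(sc);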
Adrian Chadd
0639c54a67 Use the correct antenna configuration variable here. "diversity" just
controls whether it's on or off.

Found by:	clang
2013-03-12 03:03:24 +00:00
Adrian Chadd
0e168bb8e3 Add a few new fields to the RX vendor radiotap header:
* a flags field that lets me know what's going on;
* the hardware ratecode, unmolested by conversion to a bitrate;
* the HAL rs_flags field, useful for debugging;
* specifically mark aggregate sub-frames.

This stuff sorely needs tidying up - it's missing some important
fields (eg numdelims) and it would be nice to put the flags at the
beginning rather than at the end.

Tested:

* AR9380, STA mode, 2x2 HT40, monitoring RSSI and EVM values
2013-03-11 06:54:58 +00:00
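
For illustration, a sketch of what the vendor extension payload carries; this is not the actual header layout, just the fields named above:

    struct ath_rx_vendor_ext_sketch {
        uint32_t    flags;          /* "what's going on", incl. aggregate sub-frame */
        uint8_t     hw_ratecode;    /* raw hardware ratecode, unconverted */
        uint8_t     pad[3];
        uint32_t    rs_flags;       /* HAL rs_flags, kept for debugging */
        /* EVM values etc. follow; numdelims is still missing. */
    };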
Adrian Chadd
6b3ba411d3 Bump the EVM array size up to fit the AR9380 EVM entries. 2013-03-11 06:01:00 +00:00
Adrian Chadd
1896b0880a Add three-stream EVM values. 2013-03-11 04:19:10 +00:00
Adrian Chadd
ba8d066231 Add another register definition bit - whether to populate EVM or PLCP
data in the RX status descriptor.

Obtained from:	Qualcomm Atheros
2013-03-10 09:43:01 +00:00
Adrian Chadd
b3420862a7 Disable the hw TID != buffer TID check.
I can 100% reliably trigger this on TID 1 traffic by using iperf -S 32
<client fields> to create traffic that maps to TID 1.

The reference driver doesn't do this check.
2013-03-09 08:50:17 +00:00
Adrian Chadd
9d2a962bf3 Print out the queue flags during a TX DMA shutdown. 2013-03-09 06:11:58 +00:00
Adrian Chadd
bdb9fa5c87 Add a method to set/clear the VMF field in the TX descriptor.
Obtained from:	Qualcomm Atheros
2013-03-04 07:40:49 +00:00
Adrian Chadd
87c176d272 Add missing flags. 2013-02-28 23:39:38 +00:00