Commit Graph

186 Commits

Author SHA1 Message Date
Adrian Chadd
dff5bdf48c There are some races (likely in the BAR handling, sigh) that are causing
the pause/resume code to not be called completely symmetrically.

I'll chase down the root cause of that soon; this at least works around
the bug and tells me when it happens.
2013-04-20 22:46:31 +00:00
Adrian Chadd
12087a0769 Use the new net80211 method to fetch the node TX power, rather than
directly referencing ni->ni_txpower.

This provides the hardware with a slightly more accurate idea of
the maximum TX power to use.

This is part of a series to get per-packet TPC to work (better).

Tested:

* AR5416, hostap mode
2013-04-16 21:26:44 +00:00
Adrian Chadd
c23a9d98bf Mark a couple of places where I think the dmamap isn't being unmapped
before the TX path is being aborted.

Right now it's in the TDMA code and I can live with that; but it really
should get fixed.

I'll do a more thorough audit of this code soon.
2013-04-02 06:25:10 +00:00
Adrian Chadd
3f3a5dbd2c Ensure that we only call the busdma unmap/flush routines once, when
the buffer is being freed.

* When buffers are cloned, the original mapping isn't copied, but it
  also wasn't being freed until later.  To be safe, free the
  mapping when the buffer is cloned.

* ath_freebuf() now no longer calls the busdma sync/unmap routines.

* ath_tx_freebuf() now calls sync/unmap.

* Call sync first, before calling unmap.
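
For reference, the ordering boils down to something like this (a kernel-style
sketch; the helper name is made up, only the busdma calls and the
sync-before-unload order are the point):

    #include <sys/param.h>
    #include <machine/bus.h>

    /*
     * Illustrative only: tear down a TX buffer's DMA mapping exactly once,
     * at buffer-free time - sync for the CPU first, then unload the map.
     */
    static void
    example_tx_dmamap_teardown(bus_dma_tag_t tag, bus_dmamap_t map)
    {
            /* The device has finished reading the buffer; sync it back first. */
            bus_dmamap_sync(tag, map, BUS_DMASYNC_POSTWRITE);
            /* Only now tear down the mapping. */
            bus_dmamap_unload(tag, map);
    }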

Tested:

* AR5416, STA mode
2013-04-01 20:57:13 +00:00
Adrian Chadd
09067b6e9a Use ATH_MAX_SCATTER rather than ATH_TXDESC.
ATH_MAX_SCATTER is used to size the ath_buf DMA segment array.
We thus should use it when checking sizes of things.
2013-04-01 20:12:21 +00:00
Adrian Chadd
92e84e43a6 Implement the replacement EDMA FIFO code.
(Yes, the previous code temporarily broke EDMA TX. I'm sorry; I should've
actually set ATH_BUF_FIFOEND on frames so txq->axq_fifo_depth was
cleared!)

This code implements a whole bunch of sorely needed EDMA TX improvements
along with CABQ TX support.

The specifics:

* When filling/refilling the FIFO, use the new TXQ staging queue
  for FIFO frames

* Tag frames with ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND correctly.
  For now the non-CABQ transmit path pushes one frame into the TXQ
  staging queue without setting up the intermediary link pointers
  to chain them together, so draining frames from the txq staging
  queue to the FIFO queue occurs one A-MPDU / MPDU at a time.

* In the CABQ case, manually tag the list with ATH_BUF_FIFOPTR and
  ATH_BUF_FIFOEND so a chain of frames is pushed into the FIFO
  at once.

* Now that frames are in a FIFO pending queue, we can top up the
  FIFO after completing a single frame.  This means we can keep
  it filled rather than waiting for it to drain and _then_ adding
  more frames.

* The EDMA restart routine now walks the FIFO queue in the TXQ
  rather than the pending queue and re-initialises the FIFO with
  that.

* When restarting EDMA, we may have partially completed sending
  a list.  So stamp the first frame that we see in a list with
  ATH_BUF_FIFOPTR and push _that_ into the hardware.

* When completing frames, only check those on the FIFO queue.
  We should never ever queue frames from the pending queue
  direct to the hardware, so there's no point in checking.

* Until I figure out what's going on, make sure if the TXSTATUS
  for an empty queue pops up, complain loudly and continue.
  This will stop the panics that people are seeing.  I'll add
  some code later which will assist in ensuring I'm populating
  each descriptor with the correct queue ID.

* When considering whether to queue frames to the hardware queue
  directly or software queue frames, make sure the depth of
  the FIFO is taken into account now.

* When completing frames, tag them with ATH_BUF_BUSY if they're
  not the final frame in a FIFO list.  The same holding descriptor
  behaviour is required when handling descriptors linked together
  with a link pointer as the hardware will re-read the previous
  descriptor to refresh the link pointer before continuing.

* .. and if we complete the FIFO list (ie, the buffer has
  ATH_BUF_FIFOEND set), then we don't need the holding buffer
  any longer.  Thus, free it.
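
To make the FIFOPTR/FIFOEND tagging concrete, a tiny stand-alone model
(the flag values and the struct are simplified stand-ins, not the real
ath_buf):

    #include <stddef.h>

    #define MODEL_BUF_FIFOPTR  0x0001   /* first buffer of a FIFO slot */
    #define MODEL_BUF_FIFOEND  0x0002   /* last buffer of a FIFO slot */

    struct model_buf {
            struct model_buf *next;
            unsigned int flags;
    };

    /* Tag a chain of buffers so the whole list occupies a single FIFO slot. */
    static void
    tag_fifo_slot(struct model_buf *head)
    {
            struct model_buf *bf;

            if (head == NULL)
                    return;
            head->flags |= MODEL_BUF_FIFOPTR;   /* this pointer goes into the FIFO */
            for (bf = head; bf->next != NULL; bf = bf->next)
                    ;
            bf->flags |= MODEL_BUF_FIFOEND;     /* completion frees the holding buffer here */
    }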

Tested:

* AR9380/AR9580, STA and hostap
* AR9280, STA/hostap

TODO:

* I don't yet trust that the EDMA restart routine is totally correct
  in all circumstances.  I'll continue to thrash this out under heavy
  multiple-TXQ traffic load and fix whatever pops up.
2013-03-26 20:04:45 +00:00
Adrian Chadd
35bec3655e Remove the mcast path calls to ath_hal_gettxdesclinkptr() for axq_link -
they're no longer needed for the legacy path and they're not wanted
for the EDMA path.

Tested:

* AR9280, hostap + CABQ
* AR9380/AR9580, hostap + CABQ
2013-03-26 04:56:54 +00:00
Adrian Chadd
0891354cd2 Migrate the multicast queue assembly code to not use the axq_link pointer
and instead use the HAL method to set the link pointer.

Tested:

* AR9280, hostap mode, CABQ frames being queued and transmitted
2013-03-26 04:47:40 +00:00
Adrian Chadd
56a859789f Move the TXQ lock earlier in this routine - so as to correctly protect the
link pointer check.
2013-03-24 04:09:54 +00:00
Adrian Chadd
b837332d0a Overhaul the TXQ locking (again!) as part of some beacon/cabq timing
related issues.

Moving the TX locking under one lock made things easier to progress on
but it had one important side-effect - it increased the latency when
handling CABQ setup when sending beacons.

This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.

The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.

The second half of this commit is the CABQ related setup changes needed
for sane looking EDMA CABQ support.  Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot.  For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.

The aim here is to setup the CABQ list and then push _that list_ to
the hardware for transmission.  I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.

Major:

* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
  a single lock.

* Leave the software queue side of things under the ATH_TX_LOCK lock,
  (continuing) to serialise things as they are.

* Add a new function which is called whenever there's a beacon miss,
  to print out some debugging.  This is primarily designed to help
  me figure out if the beacon miss events are due to a noisy environment,
  issues with the PHY/MAC, or other.

* Move the CABQ setup/enable to occur _after_ all the VAPs have been
  looked at.  This means that for multiple VAPs in bursted mode, the
  CABQ gets primed once all VAPs are checked, rather than being primed
  on the first VAP and then having frames appended after this.

Minor:

* Add a (disabled) twiddle to let me enable/disable cabq traffic.
  It's primarily there to let me easily debug what's going on with beacon
  and CABQ setup/traffic; there's some DMA engine hangs which I'm finally
  trying to trace down.

* Clear bf_next when flushing frames; it should quieten some warnings
  that show up when a node goes away.

Tested:

* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)

TODO:

* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
  ensure the CABQ stuff is all sane (eg, the MORE bits that aren't being
  correctly set when chaining descriptors.)
2013-03-24 00:03:12 +00:00
Adrian Chadd
378a752f59 Now that the tx map field is correctly populated for both edma and
legacy chips, just use that.
2013-03-19 17:54:37 +00:00
Adrian Chadd
cd4f1ba89f Why'd I keep this here? remove it entirely now. 2013-03-15 20:22:20 +00:00
Adrian Chadd
302868d914 Fix two bugs:
* when pulling frames off of the TID queue, the ATH_TID_REMOVE()
  macro decrements the axq_depth field.  So don't do it twice.

* in ath_tx_comp_cleanup_aggr(), bf wasn't being reset to bf_first
  before walking the buffer list to complete buffers; so those buffers
  will leak.
2013-03-15 20:00:08 +00:00
Adrian Chadd
8454d32107 Remove a now incorrect comment.
This comment dates back to my initial stab at TX aggregation completion,
where I didn't even bother trying to do software retries.
2013-03-15 04:43:27 +00:00
Adrian Chadd
b3420862a7 Disable the hw TID != buffer TID check.
I can 100% reliably trigger this on TID 1 traffic by using iperf -S 32
<client fields> to create traffic that maps to TID 1.

The reference driver doesn't do this check.
2013-03-09 08:50:17 +00:00
Adrian Chadd
ce597531f2 Disable debugging entries about BAW issues. I haven't seen any issues
to do with BAW tracking in the last 9 months or so.
2013-02-21 21:47:35 +00:00
Adrian Chadd
f274e91f67 A couple of quick tidyups:
* Delete this debugging print - I used it when debugging the initial
  TX descriptor chaining code.  It now works, so let's toss it.
  It just confuses people if they enable TX descriptor debugging as they
  get two slightly different versions of the same descriptor.

* Indenting.
2013-02-20 11:22:44 +00:00
Adrian Chadd
1a85141ad4 Pull out the if_transmit() work and revert back to ath_start().
My change had some rather significant behavioural effects on throughput.
The two issues I noticed:

* With if_start and the ifnet mbuf queue, any temporary latency
  would get eaten up by some mbufs being queued.  With ath_transmit()
  queuing things to ath_buf's, I'd only get 512 TX buffers before I
  couldn't queue any further frames.

* There's also some non-zero latency involved with TX being pushed
  into a taskqueue via direct dispatch.  Any time the scheduler didn't
  immediately schedule the ath TX task would cause extra latency.
  Various 1ge/10ge drivers implement both direct dispatch (if the TX
  lock can be acquired) and deferred task transmission (if the TX lock
  can't be acquired), with frames being pushed into a drbr queue.
  I'll have to do this at some point, but until I figure out how to
  deal with 802.11 fragments, I'll have to wait a while longer.

So what I saw:

* lots of extra latency, especially under load - if the taskqueue
  wasn't immediately scheduled, things went pear shaped;

* any extra latency would result in TX ath_buf's taking their sweet time
  being replenished, so any further calls to ath_transmit() would drop
  mbufs.

* .. yes, there's no explicit backpressure here - things are just dropped.
  Eek.

With this, the general performance has gone up, but those subtle if_start()
related race conditions are back.  For some reason, this is doubly-obvious
with the AR5416 NIC and I don't quite understand why yet.

There's an unrelated issue with AR5416 performance in STA mode (it's
fine in AP mode when bridging frames, weirdly..) that requires a little
further investigation.  Specifically - it works fine on a Lenovo T40
(single core CPU) running a March 2012 9-STABLE kernel, but a Lenovo T60
(dual core) running an early November 2012 kernel behaves very poorly.
The same hardware with an AR9160 or AR9280 behaves perfectly.
2013-02-13 05:32:19 +00:00
Adrian Chadd
21bca442b9 Methodize the process of adding the software TX queue to the taskqueue.
Move it (for now) to the TX taskqueue.
2013-02-07 02:15:25 +00:00
Adrian Chadd
f28a552089 Migrate the TX sending code out from under the ath0 taskq and into
the separate ath0 TX taskq.

Whilst here, make sure that the TX software scheduler is also
running out of the TX task, rather than the ath0 taskqueue.

Make sure that the tx taskqueue is blocked/unblocked as necessary.

This allows for a little more parallelism on multi-core machines,
as well as (eventually) supporting a higher task priority for TX
tasks, allowing said TX task to preempt an already running RX or
TX completion task.

Tested:

* AR5416, AR9280 hostap and STA modes
2013-01-26 00:14:34 +00:00
Adrian Chadd
f74d878fda Fix this routine to actually break out and not set clrdmask if any
of the TIDs are currently marked as "filtered."
2013-01-21 07:50:38 +00:00
Adrian Chadd
4f25ddbbe6 Migrate CLRDMASK to be a per-node flag, rather than a per-TID flag.
This is easily possible now that the TX is protected by a single
lock, rather than a per-TXQ (and thus per-TID) lock.

Only set CLRDMASK if none of the destinations are filtered.
This likely will need some tuning when it comes time to do U-APSD/PS-POLL
TX, however at that point it should be manually set anyway.

Tested:

* AR9280, STA mode

TODO:

* More thorough testing in AP mode
* test other chipsets, just to be safe/sure.
2013-01-21 04:06:04 +00:00
Adrian Chadd
c5239edb98 Implement frame (data) transmission using if_transmit(), rather than
if_start().

This removes the overlapping data path TX from occurring, which
solves quite a number of the potential TX queue races in ath(4).
It doesn't fix the net80211 layer TX queue races and it doesn't
fix the raw TX path yet, but it's an important step towards this.

This hasn't dropped the TX performance in my testing; primarily
because now the TX path can quickly queue frames and continue
along processing.

This involves a few rather deep changes:

* Use the ath_buf as a queue placeholder for now, as we need to be
  able to support queuing a list of mbufs (ie, when transmitting
  fragments) and m_nextpkt can't be used here (because it's what is
  joining the fragments together)

* if_transmit() now simply allocates the ath_buf and queues it to
  a driver TX staging queue.

* TX is now moved into a taskqueue function.

* The TX taskqueue function now dequeues and transmits frames.

* Fragments are handled correctly here - as the current API passes
  the fragment list as one mbuf list (joined with m_nextpkt) through
  to the driver if_transmit().

* For the couple of places where ath_start() may be called (mostly
  from net80211 when starting the VAP up again), just reimplement
  it using the new enqueue and taskqueue methods.
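
The shape of that path, as a sketch (the softc layout and the example_*
helpers are made up for illustration, not the actual driver code):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <sys/mbuf.h>
    #include <sys/socket.h>
    #include <sys/taskqueue.h>
    #include <net/if.h>
    #include <net/if_var.h>

    struct example_buf;

    struct example_softc {
            struct taskqueue        *sc_tq;         /* per-device TX taskqueue */
            struct task             sc_txtask;      /* dequeues the staging queue and TXes */
            /* ... staging queue of example_buf entries ... */
    };

    struct example_buf *example_getbuf(struct example_softc *);
    void example_stageq_enqueue(struct example_softc *, struct example_buf *,
        struct mbuf *);

    static int
    example_transmit(struct ifnet *ifp, struct mbuf *m)
    {
            struct example_softc *sc = ifp->if_softc;
            struct example_buf *bf;

            /*
             * Grab an ath_buf-style placeholder; fragments stay chained via
             * m_nextpkt, so the whole list rides on the one buffer.
             */
            bf = example_getbuf(sc);
            if (bf == NULL) {
                    m_freem(m);             /* no explicit backpressure yet */
                    return (ENOBUFS);
            }
            example_stageq_enqueue(sc, bf, m);

            /* Defer the actual hardware work to the TX taskqueue. */
            taskqueue_enqueue(sc->sc_tq, &sc->sc_txtask);
            return (0);
    }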

What I don't like (about this work and the TX code in general):

* I'm using the same lock for the staging TX queue management and the
  actual TX.  This isn't required; I'm just being slack.

* I haven't yet moved TX to a separate taskqueue (but the taskqueue is
  created); it's easy enough to do this later if necessary.  I just need
  to make sure it's a higher priority queue, so TX has the same
  behaviour as it used to (where it would preempt existing RX..)

* I need to re-review the TX path a little more and make sure that
  ieee80211_node_*() functions aren't called within the TX lock.
  When queueing, I should just push failed frames into a queue and
  when I'm wrapping up the TX code, unlock the TX lock and
  call ieee80211_node_free() on each.

* It would be nice if I could hold the TX lock for the entire
  TX and TX completion, rather than this release/re-acquire behaviour.
  But that requires that I shuffle around the TX completion code
  to handle actual ath_buf free and net80211 callback/free outside
  of the TX lock.  That's one of my next projects.

* the ic_raw_xmit() path doesn't use this yet - so it still has
  sequencing problems with parallel, overlapping calls to the
  data path.  I'll fix this later.

Tested:

* Hostap - AR9280, AR9220
* STA - AR5212, AR9280, AR5416
2013-01-15 18:01:23 +00:00
Adrian Chadd
fc56c9c5e2 There's no need to use a TXQ pointer here; we specifically need the
hardware queue ID when queuing to EDMA descriptors.

This is a small part of trying to reduce the size of ath_buf entries.
2012-12-11 04:19:51 +00:00
Gleb Smirnoff
c6499eccad Mechanically substitute flags from historic mbuf allocator with
malloc(9) flags in sys/dev.
2012-12-04 09:32:43 +00:00
Adrian Chadd
974185bb13 Don't grab the PCU lock inside the TX lock. 2012-12-02 06:50:27 +00:00
Adrian Chadd
375307d411 Delete the per-TXQ locks and replace them with a single TX lock.
I couldn't think of a way to maintain the hardware TXQ locks _and_ layer
on top of that per-TXQ software queuing and any other kind of fine-grained
locks (eg per-TID, or per-node locks.)

So for now, to facilitate some further code refactoring and development
as part of the final push to get software queue ps-poll and u-apsd handling
into this driver, just do away with them entirely.

I may eventually bring them back at some point, when it looks slightly more
architecturally cleaner to do so.  But as it stands at present, it's
not really buying us much:

* in order to properly serialise things and not get bitten by scheduling
  and locking interactions with things higher up in the stack, we need to
  wrap the whole TX path in a long held lock.  Otherwise we can end up
  being pre-empted during frame handling, resulting in some out of order
  frame handling between sequence number allocation and encryption handling
  (ie, the seqno and the CCMP IV get out of sequence);

* .. so whilst that's the case, holding the lock for that long means that
  we're acquiring and releasing the TXQ lock _inside_ that context;

* And we also acquire it per-frame during frame completion, but we currently
  can't hold the lock for the duration of the TX completion as we need
  to call net80211 layer things with the locks _unheld_ to avoid LOR.

* .. the other places where we grab that lock are reset/flush, which don't happen
  often.

My eventual aim is to change the TX path so all rejected frame transmissions
and all frame completions result in any ieee80211_free_node() calls to occur
outside of the TX lock; then I can cut back on the amount of locking that
goes on here.

There may be some LORs that occur when ieee80211_free_node() is called when
the TX queue path fails; I'll begin to address these in follow-up commits.
2012-12-02 06:24:08 +00:00
Adrian Chadd
491e124856 Until I figure out what to do here, remind myself that this needs some
rate control 'adjustment' when NOACK is set.
2012-11-28 06:55:34 +00:00
Adrian Chadd
bb327d284b ALQ logging enhancements:
* upon setup, tell the alq code what the chip information is.
* add TX/RX path logging for legacy chips.
* populate the tx/rx descriptor length fields with a best-estimate.
  It's overly big (96 bytes when AH_SUPPORT_AR5416 is enabled)
  but it'll do for now.

Whilst I'm here, add CURVNET_RESTORE() here during probe/attach as a
partial solution to fixing crashes during attach when the attach fails.
There are other attach failures that I have to deal with; those'll come
later.
2012-11-16 19:57:16 +00:00
Adrian Chadd
bbdf3df1c4 Make sure the final descriptor in an aggregate has rate control information.
This was broken by me when merging the 802.11n aggregate descriptor chain
setup with the default descriptor chain setup, in preparation for supporting
AR9380 NICs.

The corner case here is quite specific - if you queue an aggregate frame
with >1 frames in it, and the last subframe has only one descriptor making
it up, then that descriptor won't have the rate control information
copied into it. Look at what happens inside ar5416FillTxDesc() if
both firstSeg and lastSeg are set to 1.

Then when ar5416ProcTxDesc() goes to fill out ts_rate based on the
transmit index, it looks at the rate control fields in that descriptor
and dutifully sets it to be 0.

It doesn't happen for non-aggregate frames - if they have one descriptor,
the first descriptor already has rate control info.

I removed the call to ath_hal_setuplasttxdesc() when I migrated the
code to use the "new" style aggregate chain routines from the HAL.
But I missed this particular corner case.

This is a bit inefficient with MIPS boards as it involves a few redundant
writes into non-cachable memory.  I'll chase that up when it matters.

Tested:

 * AR9280 STA mode, TCP iperf traffic
 * Rui Paulo <rpaulo@> first reported this and has verified it on
   his AR9160 based AP.

PR:		kern/173636
2012-11-15 03:00:49 +00:00
Adrian Chadd
7d9dd2ac96 Add some debugging to try and catch an invalid TX rate (0x0) that is
being reported.
2012-11-13 06:28:57 +00:00
Adrian Chadd
58c82ec453 Remove this; I incorrectly committed the wrong (debug) changes in my
previous commit.
2012-11-11 21:57:18 +00:00
Adrian Chadd
04cdca73d9 Don't call av_set_tim() if it's NULL.
This happens during a scan in STA mode; any queued data frames will
be power save queued but as there's no TIM in STA mode, it panics.

This was introduced by me when I disabled my driver-aware power save
handling support.
2012-11-11 00:34:10 +00:00
Adrian Chadd
b69b0dcc24 Add some hooks into the driver to attach, detach and record EDMA descriptor
events.

This is primarily for the TX EDMA and TX EDMA completion. I haven't yet
tied it into the EDMA RX path or the legacy TX/RX path.

Things that I don't quite like:

* Make the pointer type 'void' in ath_softc and have if_ath_alq*()
  return a malloc'ed buffer.  That would remove the need to include
  if_ath_alq.h in if_athvar.h.
* The sysctl setup needs to be cleaned up.
2012-11-08 18:11:31 +00:00
Adrian Chadd
6e84772f4d Convert the aggregate descriptor path over to use the same API as
the non-aggregate path.

I "cheated" by using some TX setup code in our HAL that isn't present
in the atheros HAL (or Linux ath9k.)

The old path for forming aggregates was:

* setup the rate control in the first descriptor;
* call chaintxdesc() on all the frames;
* call setupfirsttxdesc() on the first descriptor in the first
  frame;
* call setuplasttxdesc() on the last descriptor in the last frame.

The new path for forming aggregates looks like the non-aggregate path:

* call setuptxdesc() on the first descriptor in the first frame;
* setup the rate control in the first descriptor;
* call filltxdesc() on each descriptor in the frame;
* if it's an aggregate - call set11n_aggr_{first, middle, last} as
  appropriate (see the code for a description of what is "appropriate".)
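
As a rough model of that ordering (the function names below are simplified
stand-ins for the HAL wrappers named above, not their real signatures):

    struct agg_frame;       /* stand-in for one subframe's descriptor chain */

    void setup_first_txdesc(struct agg_frame *);    /* ~ setuptxdesc() */
    void setup_rate_control(struct agg_frame *);
    void fill_txdesc(struct agg_frame *);           /* ~ filltxdesc() */
    void set_aggr_first(struct agg_frame *);
    void set_aggr_middle(struct agg_frame *);
    void set_aggr_last(struct agg_frame *);

    static void
    form_aggregate(struct agg_frame **frames, int nframes)
    {
            int i;

            for (i = 0; i < nframes; i++) {
                    if (i == 0) {
                            setup_first_txdesc(frames[i]);
                            setup_rate_control(frames[i]); /* rate info lives in frame 0 */
                    }
                    fill_txdesc(frames[i]);        /* chain this subframe's descriptors */
                    if (nframes > 1) {             /* aggregate: mark position in the burst */
                            if (i == 0)
                                    set_aggr_first(frames[i]);
                            else if (i == nframes - 1)
                                    set_aggr_last(frames[i]);
                            else
                                    set_aggr_middle(frames[i]);
                    }
            }
    }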

Now, this is done primarily for the AR9300 HAL - it doesn't implement
the first set of aggregate functions.  It just has the older methods
and the "first/middle/last" aggregate methods.  So, let's convert the
code to use these.

Note: the AR5416 HAL in FreeBSD had that code (from me, a while ago)
and a previous commit brought it up to behave the same as the AR9300
HAL routines.

There's some further tidyups to be done - specifically, avoid doing
multiple calls to the 11n descriptor functions. I shouldn't call
clr11n_aggr(), then set11n_aggr_middle(), then also set11n_aggr_first().
On (at least MIPS) the TX descriptors are in non-cachable memory and
this will cause multiple slow writes.

I'll debug/tidy that up in a future commit.

Tested:

* AR9280, STA
* AR9280/AR9160, AP
* AR9380, STA (using a local, closed source HAL, sorry!)
2012-11-06 06:19:11 +00:00
Adrian Chadd
1b5c5f5ad0 I give up - introduce a TX lock to serialise TX operations.
I've tried serialising TX using queues and such but unfortunately
due to how this interacts with the locking going on elsewhere in the
networking stack, the TX task gets delayed, resulting in quite a
noticable throughput loss:

* baseline TCP for 2x2 11n HT40 is ~ 170mbit/sec;
* TCP for TX task in the ath taskq, with the RX also going on - 80mbit/sec;
* TCP for TX task in a separate, second taskq - 100mbit/sec.

So for now I'm going with the Linux wireless stack approach - lock TX
early.  The Linux code does it in the wireless stack, before the 802.11
state stuff happens and before it's punted to the driver.
But TX locking needs to also occur at the driver layer as the TX
completion code _also_ begins to drain the ifnet TX queue.

Whilst I'm here, add some KTR traces for the TX path.

Note:

* This really should be done at the net80211 layer (as well, at least.)
  But that'll have to wait for a little more thought to happen.
2012-10-31 06:27:58 +00:00
Adrian Chadd
548a605d0d Begin fleshing out some software queue awareness for TIM handling with
the power save queue.

* introduce some new ATH_NODE lock protected fields, tracking the
  net80211 psq and TIM state;
* when doing buffer transitions - ie, when sending and completing
  buffers - check the state of the SWQ and update the TIM appropriately.
* when clearing the TIM bit, if the SWQ is not empty then delay clearing
  it.

This is racy, but it's no less racy than the current net80211 power
save queue management code.  Specifically, with multiple TX threads,
it's quite plausible that parallel state updates will race and the
TIM will be left in an inconsistent state.  I'll address that in
a follow-up commit.
2012-10-28 21:13:12 +00:00
Adrian Chadd
9572684af7 Since it's not immediately obvious whether the current TX path handles
fragment rate lookups correctly, add a comment describing exactly that.

The assumption in the fragment duration code is the duration of the next
fragment will match the rate used by the current fragment.  But I think
a rate lookup is being done for _each_ fragment.  For older pre-sample
rate control this would almost always be the case, but for sample
it may be incorrect more often than correct.
2012-10-26 16:31:12 +00:00
Adrian Chadd
13aa9ee5c2 Stop abusing the ATH_TID_*() queue macros for filtered frames and give
them their own macro set.
2012-10-14 23:52:30 +00:00
Adrian Chadd
b1dddc280f Fix the non-TDMA build. 2012-10-13 06:27:34 +00:00
Adrian Chadd
3e6cc97fd6 Migrate the TID TXQ accesses to a new set of macros, rather than reusing
the ATH_TXQ_* macros.

* Introduce the new macros;
* rename the TID queue and TID filtered frame queue so the compiler
  tells me I'm using the wrong macro.

These should correspond 1:1 to the existing code.
2012-10-07 23:45:19 +00:00
Adrian Chadd
0eb8162623 Pause and unpause the software queues for a given node based on the
net80211 node power save state.

* Add an ATH_NODE_UNLOCK_ASSERT() check
* Add a new node field - an_is_powersave
* Pause/unpause the queue based on the node state
* Attempt to handle net80211 concurrency issues so the queue
  doesn't get paused/unpaused more than once at a time from
  the net80211 power save code.
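
The "only transition once" guard looks roughly like this (a model;
an_is_powersave is the new field, the pause/unpause helpers are stand-ins):

    #include <stdbool.h>

    struct node_model {
            bool an_is_powersave;
    };

    void node_pause_tids(struct node_model *);      /* stand-in helpers */
    void node_unpause_tids(struct node_model *);

    static void
    node_powersave(struct node_model *an, bool enable)
    {
            if (an->an_is_powersave == enable)
                    return;         /* net80211 told us twice; don't double pause/unpause */
            an->an_is_powersave = enable;
            if (enable)
                    node_pause_tids(an);
            else
                    node_unpause_tids(an);  /* also sets CLRDMASK for the next frame */
    }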

Whilst here (and breaking my usual rule), set CLRDMASK when a queue
is unpaused, regardless of whether the queue has some pending traffic.
This means the first frame from that TID (now or later) will have
CLRDMASK set.

Also whilst here, bump the swretrymax counters whenever the
filtered frames code expires a frame.  Again, breaking my rule, but
this is just a statistics thing rather than a functional change.

This doesn't fix ps-poll (but it doesn't break it too much worse
than it is at the present) or correcting the TID updates.
That's next on the list.

Tested:
	* AR9220 AP (Atheros AP96 reference design)
	* Macbook Pro and LG Optimus 1 Android phone, both setting
	  and clearing power save state (but not using PS-POLL.)
2012-10-03 23:23:45 +00:00
Adrian Chadd
7403d1b9b2 Map the non-QoS TID to the voice queue, in order to ensure important
things like EAPOL frames make it out.

After a whole bunch of hacking/testing, I discovered that they weren't
being early-dropped by the stack (but I should look at ensuring that
later..) but were even making it to the hardware transmit queue.
They were mostly even being received by the remote end.  However, the
remote end was completely ignoring them.

This didn't happen under 150-170MBit TCP tests as I'm guessing the TX
queue stayed very busy and the STA didn't do any scanning. However, when
doing 100Mbit/s of TCP traffic, the STA would do background scanning -
which involves it coming in and out of powersave mode with the AP.

Now, this is a total and utter hack around the real problems, which are:

* I need to implement proper power save handling and integrate it into
  the filtered frames support, so the driver/stack doesn't send frames
  whilst the station is actually in sleep;

* .. but frames were actually making it to the STA (macbook pro) and
  the AP did receive an ACK; but a tcpdump on the receiving side showed
  the EAPOL frame never made it. So the stack was dropping it for
  some reason;

* Importantly - the EAPOL frames are currently going into the non-QoS
  TID, which maps to the BE queue and is susceptible to that queue being
  busy doing other things, but;

* There's other traffic going on in the non-QoS TID from other contexts
  when scanning is going on and it's possible there's some races causing
  sequence number/IV issues, but;

* Importantly importantly, I think the interaction with TID 16 multicast
  traffic in power save mode is causing issues - since I -believe- the
  sequence number space being used by the EAPOL frames on TID 16 overlaps
  with the multicast frames that have sequence numbers allocated and
  are then stuffed on the cabq.  Since with EAPOL frames being in TID 16
  and queued to the BE queue, it's going to be waiting to be serviced
  with all of the aggregate traffic going on - and if the CABQ gets
  emptied beforehand, those TID 16 multicast frames with sequence numbers
  will go out beforehand.

Now, there's quite likely a bunch of "stuff happening slightly out of
sequence" going on due to the nature of the TX path (read: lots of
overlapping and concurrent ath_start() and ath_raw_xmit() calls going
on, sigh) but I thought I had caught them all and stuffed each TID TX
behind a lock (that lasted as long as it needed to in order to get
the frame onto the relevant destination queue - thus keeping things
in order.)

Unfortunately the last problem is the big one and I'm going to stare at
it some more.  If it _is_

So this is a work around for now to ensure that EAPOL frames actually
make it out before any other stuff in the non-QoS TID and HOPEFULLY
before the CABQ gets active.
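
The work-around itself is tiny; roughly (a sketch, with the numbers written
out rather than using the net80211 constant names):

    /*
     * Steer non-QoS traffic (which includes EAPOL) onto the voice queue
     * instead of best-effort.  16 is the non-QoS TID and 3 is the VO
     * access category in the usual WME numbering.
     */
    static int
    pick_tx_ac(int tid, int wme_ac)
    {
            if (tid == 16)
                    return (3);
            return (wme_ac);
    }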

I'm now going to spend a little time in the TX path figuring out exactly
why the receiver is rejecting things. There are two (well, three if you count
EAPOL contents invalid) possibilities:

* The sequence number is out of order (ie, something else like the multicast
  traffic on CABQ) is going out first on TID 16;
* The CCMP IV is out of order (similar to above - but less likely,  as the
  TX key for multicast traffic is different to unicast traffic);
* EAPOL contents strangely invalid.

AP: Ubiquiti RSPRO, AR9160/AR9220 NICs
STA: Macbook Pro, Broadcom 11n NIC
2012-09-26 03:45:42 +00:00
Adrian Chadd
0a54471901 Oops - don't do the clrdmask check in ath_tx_xmit_normal() - the wrong
lock may be held.

Kim reported that the TID lock wasn't held when ath_tx_update_clrdmask()
was called. Well, the underlying hardware TXQ for that TID.

I'm betting it's the cabq stuff. ath_tx_xmit_normal() can be called
for both real and software cabq.  For software cabq, the real destination
txq is different to the txq. So, the lock check will fail.

Reported by:	Kim Culhan <w8hdkim@gmail.com>
2012-09-25 20:41:43 +00:00
Adrian Chadd
23f44d2b30 Call ath_tx_tid_unsched() after the node has been flushed, so the
state can be printed correctly.
2012-09-25 05:56:59 +00:00
Adrian Chadd
0368251456 Migrate the ath(4) KTR logging to use an ATH_KTR() macro.
This should eventually be unified with ATH_DEBUG() so I can get both
from one macro; that may take some time.

Add some new probes for TX and TX completion.
2012-09-24 20:35:56 +00:00
Adrian Chadd
0c54de88e6 Prepare for software retransmission of non-aggregate frames but ensure
it's disabled.

The previous commit to enable CLRDMASK setting didn't do it at all
correctly for non-aggregate sessions - so the CLRDMASK bit would be
cleared and never re-set.

* move ath_tx_update_clrdmask() to be called by functions that setup
  descriptors and queue frames to the hardware, rather than scattered
  everywhere.

* Force CLRDMASK to be set on all non-aggregate session frames being
  transmitted.

* Use ath_tx_normal_comp() now on non-aggregate session frames
  that are queued via ath_tx_xmit_normal().  That way the TID hwq is
  updated and they can trigger (eventual) filter frame queue resets
  and software retransmits.

There's still a bit more work to do in this area to reverse the silly
short-sightedness on my part, however it's likely going to be better
to fix this now than just reverting the patch.

Thanks to people on the freebsd-wireless@ mailing list for promptly
pointing this out.
2012-09-24 06:42:20 +00:00
Adrian Chadd
94eefcf1dc In (eventual) preparation for supporting disabling the whole 11n/software
retry path - add some code to make it obvious (to me!) how to disable
the software tx path.
2012-09-24 06:00:51 +00:00
Adrian Chadd
4e81f27c59 Introduce the CLRDMASK gating based on tid->clrdmask, enabling filtered
frames to occur.

* Create a new function which will set the bf_flags CLRDMASK bit
  if required.
* For raw frames, always set CLRDMASK.
* For BAR, ADDBA frames, always set CLRDMASK.
* For everything else, check if CLRDMASK needs to be set before
  calling tx_setds() or tx_setds11n().
* When unpausing a queue or drain/resetting it, set tid->clrdmask=1
  just to ensure traffic starts flowing.
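
A compact model of that gating (the argument names follow the list above;
this isn't the driver function itself):

    #include <stdbool.h>

    static bool
    want_clrdmask(bool is_raw, bool is_bar_or_addba, bool tid_clrdmask)
    {
            /* Raw, BAR and ADDBA frames always punch through the filter. */
            if (is_raw || is_bar_or_addba)
                    return (true);
            /*
             * Everything else only when the TID asks for it (e.g. after an
             * unpause, drain or reset.)
             */
            return (tid_clrdmask);
    }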

What I need to do:

* Modify that function to _clear_ the CLRDMASK if it's not required,
  or retried frames may have CLRDMASK set when they don't need to.
  (Which isn't a huge deal, but..)

Whilst I'm here:

* ath_tx_normal_xmit() should really act like the AMPDU session TX
  functions - any incomplete frames will end up being assigned
  ath_tx_normal_comp() which will decrement tid->hwq_depth - but that
  won't have been incremented.

  So whilst I'm here, add a comment to do that.

* Fix the debug print function to be slightly clearer about things;
  it's not a good sign when I can't interpret my own debugging output.

I've done some testing on AR9280/AR5416/AR9160 STA and AP modes.
2012-09-20 03:13:20 +00:00
Adrian Chadd
d05b576d61 Place the comment where it should be. 2012-09-20 03:04:19 +00:00
Adrian Chadd
088d8b81f3 Add a work-around for some strange net80211 BAR races in the wireless
stack.

There are unfortunately quite a few odd cases in BAR TX and BAR TX
retransmission that I haven't yet fully diagnosed.  So for now, add
this work-around so the resume() function isn't called too often,
decrementing pause to -1 (and causing things to stay paused.)
2012-09-20 03:03:01 +00:00
Adrian Chadd
0aa5c1bbf5 Oops - take a copy of ath_tx_status from the buffer before the TX processing
is done.

The aggregate path was definitely accessing 'ts' before it was actually
being assigned.

This had the side effect of over-filtering frames, since occasionally that
bit would be '1'.

Whilst here, do the same thing in the non-aggregate completion function -
as calling the filter function may also invalidate bf.

Pointy hat to: adrian, for not noticing this over many, many code reviews.
2012-09-18 20:33:04 +00:00
Adrian Chadd
f1bc738ece Implement my first cut at filtered frames in aggregation sessions.
The hardware can optionally "filter" frames if successive transmissions
to a given node (ie, "entry in the keycache") fail.  That way the hardware
can implement a kind of early abort of all the other frames queued to
that destination, rather than simply trying to TX each frame to that
destination (and failing.)

The background:

* If a frame comes back as being filtered, the hardware didn't try to
  TX it (or it was outside the TX burst opportunity.) So, take it as a hint
  that some (but not all, see below) frames to the destination may be
  filtered.

* If the CLRDMASK bit is set in a TX descriptor, the "filter to this
  destination" bit in the keycache entry is cleared and TX to that host
  will be unconditionally retried.

* Right now everything has the CLRDMASK bit set, so filtered frames
  tend to be aggregates and frames that fall outside of the WME burst
  window. It was a bit worse in the past as I had messed up the TX
  flags and CLRDMASK wasn't being set on aggregate frames.

The annoying bits:

* It's easy (ish) to do for aggregate session frames - firstly, they
  can be retried in any order as long as they're within the BAW, and
  there's already a bunch of infrastructure tracking how many frames
  the TID has queued to the hardware (tid->hwq_depth.) However, for
  frames that bypassed the software queue, hwq_depth doesn't get
  incremented. I'll fix that in a subsequent commit.

* For non-aggregate session frames, the only retries that can occur
  are ones for sequence numbers that haven't successfully been TXed yet.
  Since there's no re-ordering going on in non-aggregate sessions, if any
  subsequent seqno frames make it out, any filtered frames before that
  seqno need to be dropped.

  Hence why this initially is just for aggregate session frames.

* Since there may be intermediary frames to the destination that
  have CLRDMASK set - for example, any directly dispatched management
  frames to that destination - it's possible that there will be some
  filtered frames followed up by some non filtered frames.  Thus,
  it can't be assumed that once you see a filtered frame for the given
  destination node, all subsequent frames for all TIDs will be filtered.

Ok, with that in mind:

* Create a per-TID filtered frame queue for frames that the hardware
  returns as filtered.

* Track filtered frames per-tid, rather than per-node.  It just makes
  the locking much easier.

* When a filtered frame appears in the completion function, the node
  transitions to "filtered", and all subsequent completed error frames
  (filtered or otherwise) are put on the filtered frame queue.  The TID
  is paused once (during the transition from non-filtered to filtered).

* If a filtered frame retry count exceeds SWMAX_RETRIES, a BAR should be
  sent.

* Once all the frames queued to the hardware for the given filtered-frame
  TID have completed, transition back from filtered to non-filtered, which
  means prepending all the filtered frames onto the head of the software
  queue, clearing the filtered frame state and unpausing the TID.
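
A minimal model of those transitions (the fields are simplified stand-ins
for the real per-TID state):

    #include <stdbool.h>

    struct tid_model {
            int hwq_depth;          /* frames still outstanding in hardware */
            bool isfiltered;        /* TID is currently in the filtered state */
            /* ... filtered-frame queue, software queue, pause count ... */
    };

    /* A completed frame has come back filtered (or errored whilst filtered). */
    static void
    tid_comp_filtered(struct tid_model *tid)
    {
            if (!tid->isfiltered) {
                    tid->isfiltered = true;
                    /* pause the TID exactly once, on this transition */
            }
            /* append the frame to the per-TID filtered frame queue */
            tid->hwq_depth--;
            if (tid->hwq_depth == 0) {
                    /*
                     * prepend the filtered queue to the head of the software
                     * queue, clear the filtered state, unpause the TID
                     */
                    tid->isfiltered = false;
            }
    }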

Things get quite hairy around handling completion (aggr, non-aggr, norm,
direct-dispatched frames to a hardware queue); whether it's an "error",
"cleanup" or "BAR" state as well as filtered, which order to do things
in (eg do filtered BEFORE checking for BAR, as the filter completion
may be needed to actually transmit a BAR frame.)

This work has definitely reminded me that I have to tidy up all the locking
and remove some of the ridiculous lock/unlock/lock/unlock going on in the
completion functions.

It's also reminded me that I should really split out TID versus hardware TXQ
locking, even if the underlying locking is still the destination hardware TXQ.

Finally, this is all pre-requisite for working on AP mode power save support
(PS-POLL, uAPSD) as well as improving performance to misbehaving nodes (as
they can transition into filter mode, stopping any TX until everything has
caught up.)

Finally (ish) - this should also be done for non-aggregate sessions as
there are still plenty of laptops and mobile devices that don't speak
802.11n but do wish for stable, useful power save AP support where packets
aren't simply dropped.  This requires software retransmission for
non-aggregate sessions to be implemented, which includes the caveats I've
mentioned above.

Finally finally - this doesn't yet do anything about the CLRDMASK bit in the
TX descriptor.  That's still unconditionally set to 1.  I'll debug the
current work (mostly ensuring I haven't busted up the hairy transitions
between BAR, filtered, error (all frames in an aggregate failing) and
cleanup (when transitioning from aggregation -> non-aggregation.))

Finally finally finally - this is all original work by yours truly, rather
than ported from the Atheros internal driver codebase or Linux ath9k.

Tested:
 * AR9280, AR5416 in STA mode
 * AR9280, AR9130 in hostap mode
 * Lots and lots of iperf testing in very marginal and non-marginal conditions,
   complete with inducing filtered frames + BAR TX conditions.
2012-09-18 10:14:17 +00:00
Adrian Chadd
c6e9cee205 Take credit for the work I've done in this source file. 2012-09-17 03:17:42 +00:00
Adrian Chadd
5d9b19f731 Clear the correct descriptor when going through the chained together
gather DMA descriptor list.

Pointy hat to: adrian@, for even USING bf->bf_desc here instead of 'ds'.
2012-09-11 04:11:42 +00:00
Adrian Chadd
21840808c8 Make sure the aggregate fields are properly cleared - both in the
ath_buf and when forming a non-aggregate frame.

The non-11n setds function is called when TXing aggregate frames (and
yes, I should fix this!) and the non-11n TX aggregation code doesn't clear
the delimiter field.  I figure it's nicer to do that.
2012-09-09 05:06:16 +00:00
Adrian Chadd
2a9f83af64 Ensure that single-frame aggregate session frames are retransmitted
with the correct configuration.

Occasionally an aggregate TX would fail and the first frame would be
retransmitted as a non-AMPDU frame.  Since bfs_aggr=1 and bfs_nframes > 1
(from the previous AMPDU attempt), the aggr completion function would be
called and be very confused about what's going on.

Noticed by:	Kim <w8hdkim@gmail.com>
PR:		kern/171394
2012-09-07 00:24:27 +00:00
Adrian Chadd
79b5235666 Fix a build issue when ATH_DEBUG isn't defined - just initialise and use
qnum.
2012-08-20 18:57:41 +00:00
Adrian Chadd
0f8423a27a Wrap debugging in #ifdef ATH_DEBUG 2012-08-20 15:30:26 +00:00
Adrian Chadd
42083b3d66 Advance the descriptor pointer by sc->sc_tx_desclen bytes, rather than
sizeof(struct ath_desc).  This isn't correct for EDMA TX descriptors.

This popped up during iperf tests. Ping tests never created frames that
had enough segments to overflow into a second descriptor.  However,
an iperf TCP test would do that after a few seconds; the second descriptor
would almost certainly have garbage.
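
The fix boils down to pointer arithmetic along these lines (a sketch; only
the idea of a per-chip sc_tx_desclen length comes from this commit, the
rest is a stand-in):

    #include <stddef.h>

    struct ath_desc;        /* treated as opaque here */

    static struct ath_desc *
    next_tx_desc(struct ath_desc *ds, size_t tx_desclen)
    {
            /* Wrong for EDMA: "ds + 1" only advances by sizeof(struct ath_desc). */
            /* Right: advance by the chip's actual TX descriptor length. */
            return (struct ath_desc *)((char *)ds + tx_desclen);
    }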

Tested:

* AR9380, STA mode
* AR9280, STA mode (802.11n TX, legacy TX)
2012-08-20 06:02:09 +00:00
Adrian Chadd
e2137b86d6 When assembling the descriptor list, make sure that the "first" descriptor
is marked correctly.

The existing logic assumed that the first descriptor is i == 0, which
doesn't hold for EDMA TX.  In this instance, the first time filltxdesc()
is called can be up to i == 3.

So for a two-buffer descriptor:

* firstSeg is set to 0;
* lastSeg is set to 1;
* the ath_hal_filltxdesc() code will treat it as the last segment in
  a descriptor chain and blank some of the descriptor fields, causing
  the TX to stop.

When firstSeg is set to 1 (regardless of lastSeg), it overrides the
lastSeg setting.  Thus, ath_hal_filltxdesc() won't blank out these
fields.
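
In other words, the flags have to come from the segment's position within
the frame rather than from the ring index; roughly (a stand-alone sketch):

    #include <stdbool.h>

    /*
     * 'seg' counts from 0 within this frame, even if the first descriptor
     * index in the EDMA ring is non-zero (e.g. i == 3 above).
     */
    static void
    seg_flags(int seg, int nsegs, bool *firstseg, bool *lastseg)
    {
            *firstseg = (seg == 0);
            *lastseg = (seg == nsegs - 1);
    }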

Tested: AR9380, STA mode.  With this, association is successful.
2012-08-19 02:16:22 +00:00
Adrian Chadd
2b200bb4ce Extend the non-aggregate TX descriptor chain routine to be aware of:
* the descriptor ID, and
* the multi-buffer support that the EDMA chips support.

This is required for successful MAC transmission of multi-descriptor
frames.  The MAC simply hangs if there are NULL buffers + 0 length pointers,
but the descriptor did have TxMore set.

This won't be done for the 11n aggregate path, as that will be modified
to use the newer API (ie, ath_hal_filltxdesc() and then set first|middle|
last_aggr), which will deprecate some of the current code.

TODO:

* Populate the numTxMaps field in the HAL, then make sure that's fetched
  by the driver.  Then I can undo that hack.

Tested:

* AR9380, AP mode, TX'ing non-aggregate 802.11n frames;
* AR9280, STA/AP mode, doing aggregate and non-aggregate traffic.
2012-08-15 08:14:16 +00:00
Adrian Chadd
1762ec944a Revert the ath_tx_draintxq() method, and instead teach it the minimum
necessary to "do" EDMA.

It was just using the TX completion status for logging information about
the descriptor completion.  Since with EDMA we don't know this without
checking the TX completion FIFO, we can't provide this information.
So don't.
2012-08-12 00:46:15 +00:00
Adrian Chadd
788e6aa99c Break out ath_draintxq() into a method and un-methodize ath_tx_processq().
Now that I understand what's going on with this, I've realised that
it's going to be quite difficult to implement a processq method in
the EDMA case.  Because there's a separate TX status FIFO, I can't
just run processq() on each EDMA TXQ to see what's finished.
I have to actually run the TX status queue and handle individual
TXQs.

So:

* unmethodize ath_tx_processq();
* leave ath_tx_draintxq() as a method, as it only uses the completion status
  for debugging rather than actively completing the frames (ie, all frames
  here are failed);
* Methodize ath_draintxq().

The EDMA ath_draintxq() will have to take care of running the TX
completion FIFO before (potentially) freeing frames in the queue.

The only two places where ath_tx_draintxq() (on a single TXQ) are used:

* ath_draintxq(); and
* the CABQ handling in the beacon setup code - it drains the CABQ before
  populating the CABQ with frames for a new beacon (when doing multi-VAP
  operation.)

So it's quite possible that once I methodize the CABQ and beacon handling,
I can just drop ath_tx_draintxq() in its entirety.

Finally, it's also quite possible that I can remove ath_tx_draintxq()
in the future and just "teach" it to not check the status when doing
EDMA.
2012-08-12 00:37:29 +00:00
Adrian Chadd
4ddf2cc38c Add the AR9300 HAL ID in to the 11n check routine.
I was having TX hang issues, which I root caused to having the
legacy ath_hal_setupxtxdesc() called, rather than the 11n rate scenario
setup code.  This meant that rate control information wasn't being
put into frames, causing the MAC to stall/hang.
2012-08-11 22:25:28 +00:00
Adrian Chadd
d2da554492 Correctly re-initialise the link pointer to be the final descriptor in
the last buffer.

This fixes traffic stalls that were occurring with stuck beacon events.

PR:		kern/170433
2012-08-07 00:42:46 +00:00
Adrian Chadd
fffbec8618 Migrate the 802.11n ath_hal_chaintxdesc() API to use a buffer/segment
array, similar to what filltxdesc() uses.

This removes the last reference to ds_data in the TX path outside of
debugging statements.  These need to be adjusted/fixed.

Tested:

* AR9280 STA/AP with iperf TCP traffic
2012-08-05 11:24:21 +00:00
Adrian Chadd
46634305f4 Migrate the ath_hal_filltxdesc() API to take a list of buffer/seglen values.
The existing API only exposes 'seglen' (the current buffer (segment) length)
with the data buffer pointer set in 'ds_data'.  This is fine for the legacy
DMA engine but it won't work for the EDMA engines.

The EDMA engine has a significantly different TX descriptor layout.

* The legacy DMA engine had a ds_data pointer at the same offset in the
  descriptor for both TX and RX buffers;
* The EDMA engine has no ds_data for RX - the data is DMAed after the
  descriptor;
* The EDMA engine has support for 4 TX buffer/segment pairs in the TX
  DMA descriptor;
* The EDMA TX completion is in a different FIFO, and the driver will
  'link' the status completion entry to a QCU by a "QCU ID".
  I don't know why it's just not filled in by the hardware, alas.

So given that, here are the changes:

* Instead of directly fondling 'ds_data' in ath_desc, change the
  ath_hal_filltxdesc() to take an array of buffer pointers as well
  as segment len pointers;
* The EDMA TX completion status wants a descriptor and queue id.
  This (for now) uses bf_state.bfs_txq and will extract the hardware QCU
  ID from that.
* .. and this is ugly and wasteful; it should change to just store
  the QCU in the bf_state and save 3/7 bytes in the process.

Now, the weird crap:

* The aggregate TX path was using bf_state->bfs_txq for the TXQ, rather than
  taking a function argument.  I've tidied that up.
* The multicast queue frames get put on a software TXQ and then that is
  appended to the hardware CABQ when appropriate.  So for now, make sure
  that bf_state->bfs_txq points at the CABQ when adding frames to the
  multicast queue.
* .. but the multicast queue TX path for now doesn't use the software
  queue and instead
  (a) directly sets up the descriptor contents at that point;
  (b) the frames on the vap->avp_mcastq are then just appended wholesale
      to the CABQ.
  So for now, I don't have to worry about making the multicast path
  work with aggregation or the per-TID software queue. Phew.

What's left to do:

* I need to modify the 11n ath_hal_chaintxdesc() API to do the same.
  I'll do that in a subsequent commit.
* Remove bf_state.bfs_txq entirely and store the QCU as appropriate.
* .. then do the runtime "is this going on the right HWQ?" checks using
  that, rather than comparing pointer values.

Tested on:

* AR9280 STA/AP
* AR5416 STA/AP
2012-08-05 10:12:27 +00:00
Adrian Chadd
a6e829596d Fix an issue that crept in with the previous descriptor tidyup.
When forming aggregates, the last descriptor was now not being
correctly setup - instead, the "setuplasttxdesc" call was being
handed the first descriptor in the last subframe, rather than the
last descriptor in the last subframe.

This showed up as "bad series0 hwrate" messages, as the final
descriptor just didn't have any of the rate control information
squirreled away.

Tested:
	* AR9280 STA -> 11n AP, iperf TCP
2012-08-02 20:14:45 +00:00
Adrian Chadd
af01710118 Allow 802.11n hardware to support multi-rate retry when RTS/CTS is
enabled.

The legacy (pre-802.11n) hardware doesn't support this - although
the AR5212 era hardware supports MRR, it doesn't have all the bits
needed to support MRR + RTS/CTS.  The AR5416 and later support
a packet duration and RTS/CTS flags per rate scenario, so we should
support it.

Tested:

* AR9280, STA

PR:		kern/170302
2012-07-31 23:54:15 +00:00
Adrian Chadd
8c08c07ac4 Shuffle the call to ath_hal_setuplasttxdesc() to _after_ the rate control
code is called and remove it from ath_buf_set_rate().

For the legacy (non-11n API) TX routines, ath_hal_filltxdesc() takes care
of setting up the intermediary and final descriptors right, complete
with copying the rate control info into the final descriptor so the
rate modules can grab it.

The 11n version doesn't do this - ath_hal_chaintxdesc() doesn't
copy the rate control bits over, nor does it clear isaggr/moreaggr/
pad delimiters.  So the call to setuplasttxdesc() is needed here.

So:

* legacy NICs - never call the 11n rate control stuff, so filltxdesc
  copies the rate control info right;
* 11n NICs transmitting legacy or 11n non-aggregate frames -
  ath_hal_set11nratescenario() is called to setup rate control and
  then ath_hal_filltxdesc() chains them together - so the rate control
  info is right;
* 11n aggregate frames - set11nratescenario() is called, then
  ath_hal_chaintxdesc() is called to chain a list of aggregate and subframes
  together. This requires a call to ath_hal_setuplasttxdesc() to complete
  things.

Tested:

* AR9280 in station mode

TODO:

* I really should make sure that the descriptor contents get blanked
  out correctly or garbage left over from aggregate frames may show
  up in non-aggregate frames, leading to badness.
2012-07-31 17:08:29 +00:00
Adrian Chadd
d34a73472a Push the rate control and descriptor chaining into the descriptor "set"
functions, for both legacy and 802.11n.

This will simplify supporting the EDMA chipsets as these two descriptor
setup functions can just be overridden in their entirety, hiding all of
the subtle differences in setting things up.

It's not a permanent solution, as eventually the AR5416 HAL should grow
similar versions of the 11n descriptor functions and then those can be
used.

TODO:

* Push the "clr11naggr" call into the legacy setds, just to ensure
  that retried frames don't end up with the aggregate bits set
  inappropriately;
* Remove the "setlasttxdesc" call from the 11n TX path and push it
  into setds_11n.
* Ensure that setds_11n will work correctly for non-aggregate frames;
* .. and then when it does, just unconditionally call "setds_11n" for
  11n NICs and "setds" for non-11n NICs.
2012-07-31 16:41:09 +00:00
Adrian Chadd
f8418db57e Migrate some more TX side setup routines to be methods. 2012-07-31 03:09:48 +00:00
Adrian Chadd
746bab5b7f Break out the hardware handoff and TX DMA restart code into methods.
These (and a few others) will differ based on the underlying DMA
implementation.

For the EDMA NICs, simply stub them out in a fashion which will let
me focus on implementing the necessary descriptor API changes.
2012-07-31 02:28:32 +00:00
Adrian Chadd
0f4a46b376 Shuffle the rate control call to be consistent with non-aggregate TX.
The correct ordering for non-aggregate TX is:

* call ath_hal_setuptxdesc() to setup the first TX descriptor complete
  with the first TX rate/try count;
* call ath_hal_setupxtxdesc() to setup the multi-rate retry;
* .. or for 802.11n NICs, call ath_hal_set11nratescenario() for MRR and
  802.11n flags;
* then call ath_hal_filltxdesc() to setup intermediary descriptors
  in a multi-descriptor single frame.

The call to ath_hal_filltxdesc() routines seem to correctly (consistently?)
handle the intermediary descriptor flags, including copying the rate
control information to the final descriptor in the frame.  That's used
by the rate control module rather than the hardware.

Tested:

* Only on AR9280 STA mode, however it should work on other chips in
  both STA and AP mode.
2012-07-29 09:23:32 +00:00
Adrian Chadd
1006fc0c3b Modify ath_descdma_setup() to take a descriptor size parameter.
The AR9300 and later descriptors are 128 bytes, however I'd like to make
sure that isn't used for earlier chips.

* Populate the TX descriptor length field in the softc with
  sizeof(ath_desc)

* Use this field when allocating the TX descriptors

* Pre-AR93xx TX/RX descriptors will use the ath_desc size; newer ones will
  query the HAL for these sizes.
2012-07-23 23:40:13 +00:00
Adrian Chadd
3fdfc33024 Begin separating out the TX DMA setup in preparation for TX EDMA support.
* Introduce TX DMA setup/teardown methods, mirroring what's done in
  the RX path.

  Although the TX DMA descriptors are set up via ath_desc_alloc() /
  ath_desc_free(), the TX status descriptor ring will be allocated
  in this path.

* Remove some of the TX EDMA capability probing from the RX path and
  push it into the new TX EDMA path.
2012-07-23 03:52:18 +00:00
Adrian Chadd
3d9b15965e Begin modifying the descriptor allocation functions to support a variable
sized TX descriptor.

This is required for the AR93xx EDMA support which requires 128 byte
TX descriptors (which is significantly larger than the earlier
hardware.)
2012-07-23 02:26:33 +00:00
Adrian Chadd
bb06995571 Convert the TX path to use the new HAL methods for accessing the
TX descriptor link pointers.

This is required for the AR93xx and later chipsets.

The RX path is slightly different - the legacy RX path directly
accesses ath_desc->ds_link for now, however this isn't at all done
for EDMA (FIFO) RX.

Now, for those performing a little software archeology here:

This is all a bit sub-optimal. "struct ath_desc" is only really relevant
for the pre-AR93xx NICs - where ds_link and ds_data is always in the
same location.

The AR93xx and later NICs have different descriptor layouts altogether.

Now, for AR93xx and later NICs, you should never directly reference
ds_link and ds_data, as:

* the RX descriptors don't have either - the data is _after_ the RX
  descriptor.  They're just one large buffer.  There's also no need for
  a per-descriptor RX buffer size as they're all fixed sizes.

* the TX descriptors have 4 buffer and 4 length fields _and_ a link
  pointer.  Each frame takes up one TX FIFO pointer, but it can contain
  multiple subframes (either multiple frames in a buffer, and/or
  multiple frames in an aggregate/RIFS burst.)

* .. so, when TX frames are queued to a hardware queue, the link
  pointer is ONLY for buffers in that frame/aggregate.  The next frame
  starts in a new FIFO pointer.

* Finally, descriptor completion status is in a different ring.
  I'll write something up about that when its time to do so.

This was inspired by Linux ath9k and the reference driver but is a
reimplementation.

Obtained from:	Linux ath9k, Qualcomm Atheros
2012-07-19 03:51:16 +00:00
John Baldwin
0f078d635e Fix build when ATH_DEBUG is not defined. 2012-07-10 18:57:05 +00:00
Adrian Chadd
6abbbae5d3 Print the TX buffer if this error condition is asserted.
I need to figure out why this is occurring.  Hopefully I can get enough
descriptor dumps to figure it out.
2012-07-10 06:10:49 +00:00
Adrian Chadd
8405fe8662 Make sure the BAR TX session pause is correctly unpaused when a node
is reassociating.

PR:		kern/169432
2012-06-26 07:56:15 +00:00
Adrian Chadd
447fd44a6f Disable this warning debug for now, as I'm now aware of the particular
situation where it's occurring.

Whilst I'm here, flesh out the description to make it more useful.
2012-06-14 04:01:25 +00:00
Adrian Chadd
af33d486ab Implement a separate, smaller pool of ath_buf entries for use by management
traffic.

* Create sc_mgmt_txbuf and sc_mgmt_txdesc, initialise/free them appropriately.
* Create an enum to represent buffer types in the API.
* Extend ath_getbuf() and _ath_getbuf_locked() to take the above enum.
* Right now anything sent via ic_raw_xmit() allocates via ATH_BUFTYPE_MGMT.
  This may not be very useful.
* Add ATH_BUF_MGMT flag (ath_buf.bf_flags) which indicates the current buffer
  is a mgmt buffer and should go back onto the mgmt free list.
* Extend 'txagg' to include debugging output for both normal and mgmt txbufs.
* When checking/clearing ATH_BUF_BUSY, do it on both TX pools.

Tested:

* STA mode, with heavy UDP injection via iperf.  This filled the TX queue;
  however, BARs were still going out successfully.

TODO:

* Initialise the mgmt buffers with ATH_BUF_MGMT and then ensure the right
  type is being allocated and freed on the appropriate list.  That'd save
  a write operation (to bf->bf_flags) on each buffer alloc/free.

* Test in AP mode; ensure that BAR TX and probe responses go out nicely
  when the main TX queue is filled (e.g. with paused traffic to a TID,
  awaiting a BAR to complete.)
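
A small sketch of the buffer-pool split (illustrative names only, loosely
mirroring the ATH_BUFTYPE_* enum and ATH_BUF_MGMT flag described above):
the allocator picks the free list based on the caller-supplied type, so
management/BAR frames can still get a buffer when the data pool is exhausted.

    #include <sys/queue.h>
    #include <stddef.h>

    typedef enum { BUFTYPE_NORMAL, BUFTYPE_MGMT } buftype_t;

    struct buf {
        TAILQ_ENTRY(buf) bf_list;
        int bf_mgmt;                    /* analogous to an ATH_BUF_MGMT flag */
    };
    TAILQ_HEAD(buf_head, buf);

    struct softc {
        struct buf_head sc_txbuf;       /* normal data pool */
        struct buf_head sc_txbuf_mgmt;  /* small reserved pool */
    };

    static struct buf *
    getbuf_locked(struct softc *sc, buftype_t btype)
    {
        struct buf_head *pool =
            (btype == BUFTYPE_MGMT) ? &sc->sc_txbuf_mgmt : &sc->sc_txbuf;
        struct buf *bf = TAILQ_FIRST(pool);

        if (bf != NULL) {
            TAILQ_REMOVE(pool, bf, bf_list);
            bf->bf_mgmt = (btype == BUFTYPE_MGMT);
        }
        return (bf);
    }

The free path would then check the flag and return the buffer to the
matching list, which is exactly the write the TODO above wants to avoid.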

PR:		kern/168170
2012-06-13 06:57:55 +00:00
Adrian Chadd
32c387f76a Oops, return the newly allocated buffer to the queue, not the completed
buffer.

PR:	kern/168170
2012-06-13 05:41:00 +00:00
Adrian Chadd
e1a50456b6 Replace the direct sc_txbuf manipulation with a pair of functions.
This is preparation work for having a separate ath_buf queue for
management traffic.

PR:		kern/168170
2012-06-13 05:39:16 +00:00
Adrian Chadd
c7c073413b Fix uninitialised reference.
Noticed by:	John Hay <jhay@meraka.org.za>
2012-06-11 12:26:23 +00:00
Adrian Chadd
7561cb5c8b Wrap the whole (software) TX path from ifnet dequeue to software queue
(or direct dispatch) behind the TXQ lock (which, remember, is doubling
as the TID lock too for now.)

This ensures that:

 (a) the sequence number and the CCMP PN allocation is done together;
 (b) overlapping transmit paths don't interleave frames, so we don't
     end up with the original issue that triggered kern/166190.

     I.e., that we don't end up with seqnos A, B in thread 1 and C, D in
     thread 2, which are then queued to the software queue as "A C D B"
     or similar, leading to BAW stalls.

This has been tested:

* both STA and AP modes with INVARIANTS and WITNESS;
* TCP and UDP TX;
* both STA->AP and AP->STA.

STA is a Routerstation Pro (single CPU MIPS) and the AP is a dual-core
Centrino.
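
A toy illustration of why one lock matters here (a userland sketch with
pthreads; the real code holds the driver's TXQ mutex around the seqno
assignment and ieee80211_crypto_encap()): because the sequence number and
the PN are handed out under the same lock, two concurrent senders can never
produce a seqno order that disagrees with the PN order.

    #include <pthread.h>
    #include <stdint.h>

    struct frame {
        uint16_t f_seqno;       /* 802.11 sequence number */
        uint64_t f_pn;          /* stands in for the CCMP PN */
    };

    static pthread_mutex_t txq_lock = PTHREAD_MUTEX_INITIALIZER;
    static uint16_t next_seqno;
    static uint64_t next_pn;

    static void
    tx_prepare(struct frame *f)
    {
        pthread_mutex_lock(&txq_lock);
        f->f_seqno = next_seqno++ & 0x0fff;     /* seqno is a 12-bit field */
        f->f_pn = ++next_pn;                    /* crypto encap happens here too */
        /* ... software queue append or direct dispatch ... */
        pthread_mutex_unlock(&txq_lock);
    }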

PR:		kern/166190
2012-06-11 07:44:16 +00:00
Adrian Chadd
4b6db4043f Add another TID lock. 2012-06-11 07:35:24 +00:00
Adrian Chadd
ba0e58f4fa Make sure the frames are queued to the head of the list, not the tail.
See previous commit.

PR:		kern/166190
2012-06-11 07:31:50 +00:00
Adrian Chadd
39f24578fb When scheduling frames in an aggregate session, the frames should be
scheduled from the head of the software queue rather than trying to
queue the newly given frame.

Queueing the newly given frame directly leads to some rather unfortunate
out-of-order (but still valid, as it's inside the BAW) frame TX.

This now:

* Always queues the frame at the end of the software queue;
* Tries to direct dispatch the frame at the head of the software queue,
  to try and fill up the hardware queue.

TODO:

* I should likely try to queue as many frames to the hardware as I can
  at this point, rather than doing one at a time;
* ath_tx_xmit_aggr() may fail and this code assumes that it'll schedule
  the TID.  Otherwise TX may stall.

PR:		kern/166190
2012-06-11 07:29:25 +00:00
Adrian Chadd
4547f047ba Retried frames need to be inserted at the head of the list, not the tail.
This is an unfortunate byproduct of how the routine is used - it's called
with the head frame on the queue, but if the frame fails, it's inserted
into the tail of the queue.

Because of this, the sequence numbers would get all shuffled around and
the BAW would be bumped past this sequence number, that's now at the
end of the software queue.  Then, whenever it's time for that frame
to be transmitted, it'll be immediately outside of the BAW and TX will
stall until the BAW catches up.

It can also result in all kinds of weird duplicate BAW frames, leading
to hilarious panics.
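
A sketch of the queue discipline the last few commits converge on
(illustrative, using sys/queue.h lists rather than the real ath_buf queues):
new frames go on the tail, dispatch pulls from the head, and a failed/retried
frame goes back on the head so its sequence number can't fall behind the BAW.

    #include <sys/queue.h>
    #include <stddef.h>

    struct frame {
        TAILQ_ENTRY(frame) f_list;
        int f_seqno;
    };
    TAILQ_HEAD(swq, frame);

    /* New traffic is appended... */
    static void
    swq_enqueue(struct swq *q, struct frame *f)
    {
        TAILQ_INSERT_TAIL(q, f, f_list);
    }

    /* ...dispatch always takes the oldest (head) frame... */
    static struct frame *
    swq_dequeue(struct swq *q)
    {
        struct frame *f = TAILQ_FIRST(q);

        if (f != NULL)
            TAILQ_REMOVE(q, f, f_list);
        return (f);
    }

    /* ...and a retried frame goes back at the head, not the tail, so its
     * seqno stays in front of everything newer. */
    static void
    swq_requeue(struct swq *q, struct frame *f)
    {
        TAILQ_INSERT_HEAD(q, f, f_list);
    }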

PR:		kern/166190
2012-06-11 07:15:48 +00:00
Adrian Chadd
42f4d0618a Finish undoing the previous commit - this part of the code is no longer
required.

PR:		kern/166190
2012-06-11 07:08:40 +00:00
Adrian Chadd
c2ac9655c3 Introduce a new lock debug check which is specifically for making sure the
_TID_ lock is held.

For now the TID lock is also the TXQ lock. This is just to make sure
that the right TXQ lock is held for the given TID.
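
A sketch of what such a debug check can look like (the struct and macro
names here are hypothetical; mtx_assert(9) with MA_OWNED is the primitive it
would build on): with INVARIANTS enabled, the assert panics if the caller
doesn't hold the lock.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    /* Hypothetical TXQ with an embedded mutex, doubling as the TID lock. */
    struct txq {
        struct mtx axq_lock;
    };

    /* Panics under INVARIANTS if the current thread doesn't own the lock. */
    #define TID_LOCK_ASSERT(txq)    mtx_assert(&(txq)->axq_lock, MA_OWNED)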
2012-06-11 07:06:49 +00:00
Adrian Chadd
a108d2d6c6 Revert r233227 and follow-up commits, as they break CCMP PN replay detection.
This showed up when doing heavy UDP throughput on SMP machines.

The problem with this is that the 802.11 sequence number is being
allocated separately to the CCMP PN replay number (which is assigned
during ieee80211_crypto_encap()).

Under significant throughput (200+ Mbps) the TX path would be stressed
enough that frame TX/retry would force sequence number and PN allocation
to be out of order.  So once the frames were reordered via 802.11 seqnos,
the CCMP PN would be far out of order, causing most frames to be discarded
by the receiver.

I've fixed this in some local work by being forced to:

  (a) deal with the issues that lead to the parallel TX causing out of
      order sequence numbers in the first place;
  (b) fix all the packet queuing issues which lead to strange (but mostly
      valid) TX.

I'll begin fixing these in a subsequent commit or five.

PR:		kern/166190
2012-06-11 06:59:28 +00:00
Adrian Chadd
b815538dec Avoid using hard-coded numbers here. 2012-05-26 01:35:11 +00:00
Adrian Chadd
d3a6425b7c Fix up some corner cases with aggregation handling.
I've come across a weird scenario in net80211 where two TX streams will
happily attempt to set up an aggregation session together.
If we're very lucky, it happens concurrently on separate CPUs and the
total lack of locking in the net80211 aggregation code causes this stuff
to race. Badly.

So more than one call would be made to the ath(4) addba start, but only one
call would complete via addba complete or timeout.  The TID would thus stay
paused.

The real fix is to implement some proper per-node (or maybe per-TID)
locking in net80211, which then could be leveraged by the ath(4) TX
aggregation code.

Whilst I'm at it, shuffle around the debugging messages a bit.
I like to keep people on their toes.
2012-05-22 06:31:03 +00:00
Adrian Chadd
6deb7f3271 For now, add a quick debugging patch to log when the hw TXQ != the TID/AC. 2012-05-21 22:43:38 +00:00
Adrian Chadd
4dfd450768 Rename ath_tx_cleanup() -> ath_tx_tid_cleanup() in order to not clash
with a symbol in if_ath.c
2012-05-21 22:39:13 +00:00
Adrian Chadd
e60c4fc2c9 Migrate the bulk of the RX routines out from if_ath.c to if_ath_rx.[ch].
* migrate the rx processing out into if_ath_rx.c
* migrate the TSF functions into if_ath_tsf.h, as inlines (see the sketch
  at the end of this entry)

This is in preparation for supporting the EDMA RX routines, required to
support the AR93xx series NICs.

TODO:

* ath_start() shouldn't have to be non-static, but it's called as part of
  the RX path.  I should likely migrate ath_rx_tasklet() back into
  if_ath.c and then return ath_start() to being 'static'.  The RX code really
  shouldn't need to see TX routines (and vice versa.)

* ath_beacon_* should be in if_ath_beacon.[ch].

* ath_tdma_* should be in if_ath_tdma.[ch] ...
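
As a flavour of the kind of helper that suits a header of static inlines
(illustrative only; the name and exact form are assumptions, not the actual
if_ath_tsf.h contents): splicing a 32-bit hardware RX timestamp back into
the full 64-bit TSF.

    #include <stdint.h>

    /*
     * Extend a 32-bit RX timestamp to 64 bits using the current TSF.
     * If the low 32 bits of the TSF have already wrapped past the
     * timestamp, step back one 2^32 epoch before splicing.
     */
    static inline uint64_t
    tsf_extend32(uint32_t rstamp, uint64_t tsf)
    {
        if ((tsf & 0xffffffffULL) < rstamp)
            tsf -= (1ULL << 32);
        return ((tsf & ~0xffffffffULL) | rstamp);
    }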
2012-05-20 02:05:10 +00:00