Commit Graph

78 Commits

Adrian Chadd
0f8423a27a Wrap debugging in #ifdef ATH_DEBUG 2012-08-20 15:30:26 +00:00
Adrian Chadd
42083b3d66 Advance the descriptor pointer by sc->sc_tx_desclen bytes, rather than
sizeof(struct ath_desc).  The latter isn't correct for EDMA TX descriptors.

This popped up during iperf tests. Ping tests never created frames that
had enough segments to overflow into a second descriptor.  However,
an iperf TCP test would do that after a few seconds; the second descriptor
would almost certainly have garbage.

Tested:

* AR9380, STA mode
* AR9280, STA mode (802.11n TX, legacy TX)
2012-08-20 06:02:09 +00:00
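[Editor's note] As a rough illustration of the pointer-arithmetic bug described above, here is a standalone C sketch; the type and field names (softc, sc_tx_desclen, the trimmed ath_desc) are simplified stand-ins, not the driver's real definitions.

    #include <stdint.h>
    #include <stddef.h>

    /* Simplified stand-ins; the real descriptors are much larger. */
    struct ath_desc { uint32_t ds_ctl0, ds_ctl1; };

    struct softc {
        size_t sc_tx_desclen;   /* per-descriptor length in bytes (e.g. 128 for EDMA TX) */
    };

    /*
     * Walking the ring with 'ds++' advances by sizeof(struct ath_desc),
     * which is smaller than an EDMA TX descriptor, so the second
     * descriptor of a multi-segment frame lands inside the first one and
     * is effectively garbage.  Advance by the recorded byte length instead.
     */
    static struct ath_desc *
    next_txdesc(struct softc *sc, struct ath_desc *ds)
    {
        return (struct ath_desc *)((char *)ds + sc->sc_tx_desclen);
    }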
Adrian Chadd
e2137b86d6 When assembling the descriptor list, make sure that the "first" descriptor
is marked correctly.

The existing logic assumed that the first descriptor is i == 0, which
doesn't hold for EDMA TX.  In this instance, the first time filltxdesc()
is called can be up to i == 3.

So for a two-buffer descriptor:

* firstSeg is set to 0;
* lastSeg is set to 1;
* the ath_hal_filltxdesc() code will treat it as the last segment in
  a descriptor chain and blank some of the descriptor fields, causing
  the TX to stop.

When firstSeg is set to 1 (regardless of lastSeg), it overrides the
lastSeg setting.  Thus, ath_hal_filltxdesc() won't blank out these
fields.

Tested: AR9380, STA mode.  With this, association is successful.
2012-08-19 02:16:22 +00:00
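[Editor's note] A minimal sketch of the first/last marking described above (illustrative names, not the driver's API): the "first" flag has to be tracked separately from the segment index, because an EDMA descriptor consumes up to four segments per fill.

    #include <stdbool.h>

    #define SEGS_PER_DESC 4   /* EDMA TX descriptors hold up to 4 buffer/length pairs */

    static void
    fill_chain(int nsegs, void (*filldesc)(bool first_desc, bool last_desc))
    {
        bool first = true;
        int i = 0;

        while (i < nsegs) {
            int take = (nsegs - i > SEGS_PER_DESC) ? SEGS_PER_DESC : nsegs - i;

            i += take;
            /*
             * Testing "i == 0" here would mis-mark the first descriptor,
             * since the first fill may already have consumed segments
             * 0..3; the HAL would then treat it as a final descriptor
             * and blank fields it should have kept.
             */
            filldesc(first, i == nsegs);
            first = false;
        }
    }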
Adrian Chadd
2b200bb4ce Extend the non-aggregate TX descriptor chain routine to be aware of:
* the descriptor ID, and
* the multi-buffer support that the EDMA chips support.

This is required for successful MAC transmission of multi-descriptor
frames.  The MAC simply hangs if there are NULL buffers + 0 length pointers,
but the descriptor has TxMore set.

This won't be done for the 11n aggregate path, as that will be modified
to use the newer API (ie, ath_hal_filltxdesc() and then set first|middle|
last_aggr), which will deprecate some of the current code.

TODO:

* Populate the numTxMaps field in the HAL, then make sure that's fetched
  by the driver.  Then I can undo that hack.

Tested:

* AR9380, AP mode, TX'ing non-aggregate 802.11n frames;
* AR9280, STA/AP mode, doing aggregate and non-aggregate traffic.
2012-08-15 08:14:16 +00:00
Adrian Chadd
1762ec944a Revert the ath_tx_draintxq() method, and instead teach it the minimum
necessary to "do" EDMA.

It was just using the TX completion status for logging information about
the descriptor completion.  Since with EDMA we don't know this without
checking the TX completion FIFO, we can't provide this information.
So don't.
2012-08-12 00:46:15 +00:00
Adrian Chadd
788e6aa99c Break out ath_draintxq() into a method and un-methodize ath_tx_processq().
Now that I understand what's going on with this, I've realised that
it's going to be quite difficult to implement a processq method in
the EDMA case.  Because there's a separate TX status FIFO, I can't
just run processq() on each EDMA TXQ to see what's finished.
I have to actually run the TX status queue and handle individual
TXQs.

So:

* unmethodize ath_tx_processq();
* leave ath_tx_draintxq() as a method, as it only uses the completion status
  for debugging rather than actively completing the frames (ie, all frames
  here are failed);
* Methodize ath_draintxq().

The EDMA ath_draintxq() will have to take care of running the TX
completion FIFO before (potentially) freeing frames in the queue.

The only two places where ath_tx_draintxq() (on a single TXQ) are used:

* ath_draintxq(); and
* the CABQ handling in the beacon setup code - it drains the CABQ before
  populating the CABQ with frames for a new beacon (when doing multi-VAP
  operation.)

So it's quite possible that once I methodize the CABQ and beacon handling,
I can just drop ath_tx_draintxq() in its entirety.

Finally, it's also quite possible that I can remove ath_tx_draintxq()
in the future and just "teach" it to not check the status when doing
EDMA.
2012-08-12 00:37:29 +00:00
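[Editor's note] A sketch of the structural difference being described (types and helper names are illustrative, not the driver's): legacy completion can poll each TXQ's own descriptors, whereas EDMA completion has to drain one shared status FIFO and fan out to the queue each entry names.

    #include <stdbool.h>
    #include <stdio.h>

    struct txq { int axq_qnum; /* ... frames, lock ... */ };
    struct tx_status { int qid; bool ok; };

    static void
    edma_tx_proc(struct txq *txqs, int ntxq,
        bool (*pop_status)(struct tx_status *))   /* stand-in for the status FIFO read */
    {
        struct tx_status ts;

        /* There is nothing per-TXQ to poll; completion starts here. */
        while (pop_status(&ts)) {
            if (ts.qid < 0 || ts.qid >= ntxq)
                continue;
            printf("frame completed on TXQ %d (%s)\n",
                txqs[ts.qid].axq_qnum, ts.ok ? "ok" : "failed");
        }
    }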
Adrian Chadd
4ddf2cc38c Add the AR9300 HAL ID into the 11n check routine.
I was having TX hang issues, which I root-caused to the legacy
ath_hal_setupxtxdesc() being called rather than the 11n rate scenario
setup code.  This meant that rate control information wasn't being
put into frames, causing the MAC to stall/hang.
2012-08-11 22:25:28 +00:00
Adrian Chadd
d2da554492 Correctly re-initialise the link pointer to be the final descriptor in
the last buffer.

This fixes traffic stalls that were occurring with stuck beacon events.

PR:		kern/170433
2012-08-07 00:42:46 +00:00
Adrian Chadd
fffbec8618 Migrate the 802.11n ath_hal_chaintxdesc() API to use a buffer/segment
array, similar to what filltxdesc() uses.

This removes the last reference to ds_data in the TX path outside of
debugging statements.  These need to be adjusted/fixed.

Tested:

* AR9280 STA/AP with iperf TCP traffic
2012-08-05 11:24:21 +00:00
Adrian Chadd
46634305f4 Migrate the ath_hal_filltxdesc() API to take a list of buffer/seglen values.
The existing API only exposes 'seglen' (the current buffer (segment) length)
with the data buffer pointer set in 'ds_data'.  This is fine for the legacy
DMA engine but it won't work for the EDMA engines.

The EDMA engine has a significantly different TX descriptor layout.

* The legacy DMA engine had a ds_data pointer at the same offset in the
  descriptor for both TX and RX buffers;
* The EDMA engine has no ds_data for RX - the data is DMAed after the
  descriptor;
* The EDMA engine has support for 4 TX buffer/segment pairs in the TX
  DMA descriptor;
* The EDMA TX completion is in a different FIFO, and the driver will
  'link' the status completion entry to a QCU by a "QCU ID".
  I don't know why it's just not filled in by the hardware, alas.

So given that, here are the changes:

* Instead of directly fondling 'ds_data' in ath_desc, change the
  ath_hal_filltxdesc() to take an array of buffer pointers as well
  as segment len pointers;
* The EDMA TX completion status wants a descriptor and queue id.
  This (for now) uses bf_state.bfs_txq and will extract the hardware QCU
  ID from that.
* .. and this is ugly and wasteful; it should change to just store
  the QCU in the bf_state and save 3/7 bytes in the process.

Now, the weird crap:

* The aggregate TX path was using bf_state->bfs_txq for the TXQ, rather than
  taking a function argument.  I've tidied that up.
* The multicast queue frames get put on a software TXQ and then that is
  appended to the hardware CABQ when appropriate.  So for now, make sure
  that bf_state->bfs_txq points at the CABQ when adding frames to the
  multicast queue.
* .. but the multicast queue TX path for now doesn't use the software
  queue and instead
  (a) directly sets up the descriptor contents at that point;
  (b) the frames on the vap->avp_mcastq are then just appended wholesale
      to the CABQ.
  So for now, I don't have to worry about making the multicast path
  work with aggregation or the per-TID software queue. Phew.

What's left to do:

* I need to modify the 11n ath_hal_chaintxdesc() API to do the same.
  I'll do that in a subsequent commit.
* Remove bf_state.bfs_txq entirely and store the QCU as appropriate.
* .. then do the runtime "is this going on the right HWQ?" checks using
  that, rather than comparing pointer values.

Tested on:

* AR9280 STA/AP
* AR5416 STA/AP
2012-08-05 10:12:27 +00:00
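[Editor's note] A sketch of the buffer/length-array idea above (the struct layout and function here are illustrative, not the HAL's real descriptor format): one call populates all the pairs an EDMA TX descriptor carries, and legacy chips simply use the first entry.

    #include <stdint.h>

    #define EDMA_TX_BUFS 4   /* up to 4 buffer/length pairs per EDMA TX descriptor */

    struct edma_tx_desc {
        uint32_t buf_addr[EDMA_TX_BUFS];
        uint32_t buf_len[EDMA_TX_BUFS];
        /* ... control/status words ... */
    };

    static void
    fill_tx_desc(struct edma_tx_desc *ds, const uint32_t *addrs,
        const uint32_t *lens, int nbuf)
    {
        int i;

        for (i = 0; i < EDMA_TX_BUFS; i++) {
            /* Unused slots are zeroed; stale pointers make the MAC hang. */
            ds->buf_addr[i] = (i < nbuf) ? addrs[i] : 0;
            ds->buf_len[i]  = (i < nbuf) ? lens[i] : 0;
        }
    }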
Adrian Chadd
a6e829596d Fix an issue that crept in with the previous descriptor tidyup.
When forming aggregates, the last descriptor was now not being
correctly set up - instead, the "setuplasttxdesc" call was being
handed the first descriptor in the last subframe, rather than the
last descriptor in the last subframe.

This showed up as "bad series0 hwrate" messages, as the final
descriptor just didn't have any of the rate control information
squirreled away.

Tested:
	* AR9280 STA -> 11n AP, iperf TCP
2012-08-02 20:14:45 +00:00
Adrian Chadd
af01710118 Allow 802.11n hardware to support multi-rate retry when RTS/CTS is
enabled.

The legacy (pre-802.11n) hardware doesn't support this - although
the AR5212 era hardware supports MRR, it doesn't have all the bits
needed to support MRR + RTS/CTS.  The AR5416 and later support
a packet duration and RTS/CTS flags per rate scenario, so we should
support it.

Tested:

* AR9280, STA

PR:		kern/170302
2012-07-31 23:54:15 +00:00
Adrian Chadd
8c08c07ac4 Shuffle the call to ath_hal_setuplasttxdesc() to _after_ the rate control
code is called and remove it from ath_buf_set_rate().

For the legacy (non-11n API) TX routines, ath_hal_filltxdesc() takes care
of setting up the intermediary and final descriptors right, complete
with copying the rate control info into the final descriptor so the
rate modules can grab it.

The 11n version doesn't do this - ath_hal_chaintxdesc() doesn't
copy the rate control bits over, nor does it clear isaggr/moreaggr/
pad delimiters.  So the call to setuplasttxdesc() is needed here.

So:

* legacy NICs - never call the 11n rate control stuff, so filltxdesc
  copies the rate control info right;
* 11n NICs transmitting legacy or 11n non-aggregate frames -
  ath_hal_set11nratescenario() is called to set up rate control and
  then ath_hal_filltxdesc() chains them together - so the rate control
  info is right;
* 11n aggregate frames - set11nratescenario() is called, then
  ath_hal_chaintxdesc() is called to chain a list of aggregate and subframes
  together. This requires a call to ath_hal_setuplasttxdesc() to complete
  things.

Tested:

* AR9280 in station mode

TODO:

* I really should make sure that the descriptor contents get blanked
  out correctly or garbage left over from aggregate frames may show
  up in non-aggregate frames, leading to badness.
2012-07-31 17:08:29 +00:00
Adrian Chadd
d34a73472a Push the rate control and descriptor chaining into the descriptor "set"
functions, for both legacy and 802.11n.

This will simplify supporting the EDMA chipsets as these two descriptor
setup functions can just be overridden in their entirety, hiding all of
the subtle differences in setting things up.

It's not a permanent solution, as eventually the AR5416 HAL should grow
similar versions of the 11n descriptor functions and then those can be
used.

TODO:

* Push the "clr11naggr" call into the legacy setds, just to ensure
  that retried frames don't end up with the aggregate bits set
  inappropriately;
* Remove the "setlasttxdesc" call from the 11n TX path and push it
  into setds_11n.
* Ensure that setds_11n will work correctly for non-aggregate frames;
* .. and then when it does, just unconditionally call "setds_11n" for
  11n NICs and "setds" for non-11n NICs.
2012-07-31 16:41:09 +00:00
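[Editor's note] A sketch of the "override the whole setds" idea (names are illustrative, not the driver's actual method table): the common TX path calls through a function pointer in the softc, and attach picks the legacy or 11n/EDMA variant once.

    struct ath_buf;          /* opaque here */

    struct softc {
        /* Descriptor-setup method; EDMA chips install a different one. */
        void (*sc_setds)(struct softc *, struct ath_buf *);
    };

    static void
    setds_legacy(struct softc *sc, struct ath_buf *bf)
    {
        (void)sc; (void)bf;  /* pre-11n chaining + rate control would go here */
    }

    static void
    setds_11n(struct softc *sc, struct ath_buf *bf)
    {
        (void)sc; (void)bf;  /* 11n/EDMA chaining + rate scenario would go here */
    }

    static void
    tx_attach_methods(struct softc *sc, int is_11n)
    {
        sc->sc_setds = is_11n ? setds_11n : setds_legacy;
    }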
Adrian Chadd
f8418db57e Migrate some more TX side setup routines to be methods. 2012-07-31 03:09:48 +00:00
Adrian Chadd
746bab5b7f Break out the hardware handoff and TX DMA restart code into methods.
These (and a few others) will differ based on the underlying DMA
implementation.

For the EDMA NICs, simply stub them out in a fashion which will let
me focus on implementing the necessary descriptor API changes.
2012-07-31 02:28:32 +00:00
Adrian Chadd
0f4a46b376 Shuffle the rate control call to be consistent with non-aggregate TX.
The correct ordering for non-aggregate TX is:

* call ath_hal_setuptxdesc() to set up the first TX descriptor, complete
  with the first TX rate/try count;
* call ath_hal_setupxtxdesc() to set up the multi-rate retry;
* .. or for 802.11n NICs, call ath_hal_set11nratescenario() for MRR and
  802.11n flags;
* then call ath_hal_filltxdesc() to set up intermediary descriptors
  in a multi-descriptor single frame.

The ath_hal_filltxdesc() routines seem to correctly (consistently?)
handle the intermediary descriptor flags, including copying the rate
control information to the final descriptor in the frame.  That's used
by the rate control module rather than the hardware.

Tested:

* Only on AR9280 STA mode, however it should work on other chips in
  both STA and AP mode.
2012-07-29 09:23:32 +00:00
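[Editor's note] The ordering above, as a commented skeleton; the HAL call names follow the commit text, but their argument lists are deliberately elided here rather than guessed at.

    static void
    tx_one_frame(int is_11n)
    {
        /* 1. First descriptor: primary rate and try count. */
        /*      ath_hal_setuptxdesc(...); */

        if (!is_11n) {
            /* 2a. Legacy NICs: multi-rate retry. */
            /*      ath_hal_setupxtxdesc(...); */
        } else {
            /* 2b. 11n NICs: full rate scenario (MRR + 802.11n flags). */
            /*      ath_hal_set11nratescenario(...); */
        }

        /*
         * 3. Chain the remaining descriptors of a multi-descriptor frame;
         *    this also copies the rate-control info into the final
         *    descriptor for the rate module to read back on completion.
         */
        /*      ath_hal_filltxdesc(...); */
    }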
Adrian Chadd
1006fc0c3b Modify ath_descdma_setup() to take a descriptor size parameter.
The AR9300 and later descriptors are 128 bytes; however, I'd like to make
sure that size isn't used for earlier chips.

* Populate the TX descriptor length field in the softc with
  sizeof(ath_desc)

* Use this field when allocating the TX descriptors

* Pre-AR93xx TX/RX descriptors will use the ath_desc size; newer ones will
  query the HAL for these sizes.
2012-07-23 23:40:13 +00:00
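[Editor's note] A standalone sketch of the size-parameter change (calloc stands in for the busdma allocation; names are illustrative): the caller decides whether the per-descriptor size is sizeof() of the legacy descriptor or the larger value the HAL reports for EDMA parts.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct legacy_desc { uint32_t ctl0, ctl1, status0, status1; };

    struct descdma {
        void   *dd_desc;      /* descriptor ring */
        size_t  dd_desclen;   /* per-descriptor size in bytes */
        int     dd_ndesc;
    };

    static int
    descdma_setup(struct descdma *dd, int ndesc, size_t desclen)
    {
        dd->dd_desclen = desclen;
        dd->dd_ndesc = ndesc;
        dd->dd_desc = calloc(ndesc, desclen);
        return (dd->dd_desc == NULL) ? -1 : 0;
    }

    /* descdma_setup(&dd, 512, sizeof(struct legacy_desc));  pre-AR93xx   */
    /* descdma_setup(&dd, 512, 128);                          AR93xx EDMA */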
Adrian Chadd
3fdfc33024 Begin separating out the TX DMA setup in preparation for TX EDMA support.
* Introduce TX DMA setup/teardown methods, mirroring what's done in
  the RX path.

  Although the TX DMA descriptors are set up via ath_desc_alloc() /
  ath_desc_free(), the TX status descriptor ring will be allocated
  in this path.

* Remove some of the TX EDMA capability probing from the RX path and
  push it into the new TX EDMA path.
2012-07-23 03:52:18 +00:00
Adrian Chadd
3d9b15965e Begin modifying the descriptor allocation functions to support a variable
sized TX descriptor.

This is required for the AR93xx EDMA support which requires 128 byte
TX descriptors (which is significantly larger than the earlier
hardware.)
2012-07-23 02:26:33 +00:00
Adrian Chadd
bb06995571 Convert the TX path to use the new HAL methods for accessing the
TX descriptor link pointers.

This is required for the AR93xx and later chipsets.

The RX path is slightly different - the legacy RX path directly
accesses ath_desc->ds_link for now, however this isn't at all done
for EDMA (FIFO) RX.

Now, for those performing a little software archeology here:

This is all a bit sub-optimal. "struct ath_desc" is only really relevant
for the pre-AR93xx NICs - where ds_link and ds_data is always in the
same location.

The AR93xx and later NICs have different descriptor layouts altogether.

Now, for AR93xx and later NICs, you should never directly reference
ds_link and ds_data, as:

* the RX descriptors don't have either - the data is _after_ the RX
  descriptor.  They're just one large buffer.  There's also no need for
  a per-descriptor RX buffer size as they're all fixed sizes.

* the TX descriptors have 4 buffer and 4 length fields _and_ a link
  pointer.  Each frame takes up one TX FIFO pointer, but it can contain
  multiple subframes (either multiple frames in a buffer, and/or
  multiple frames in an aggregate/RIFS burst.)

* .. so, when TX frames are queued to a hardware queue, the link
  pointer is ONLY for buffers in that frame/aggregate.  The next frame
  starts in a new FIFO pointer.

* Finally, descriptor completion status is in a different ring.
  I'll write something up about that when it's time to do so.

This was inspired by Linux ath9k and the reference driver but is a
reimplementation.

Obtained from:	Linux ath9k, Qualcomm Atheros
2012-07-19 03:51:16 +00:00
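[Editor's note] A sketch of the link-pointer accessor idea (names illustrative): the driver calls per-chip methods instead of dereferencing ds_link, since the link pointer does not sit at the same offset in pre-AR93xx and AR93xx-era TX descriptors.

    #include <stdint.h>

    struct legacy_desc { uint32_t ds_link, ds_data, ds_ctl0, ds_ctl1; };

    struct hal_link_ops {
        void (*set_tx_link)(void *ds, uint32_t link);
        void (*get_tx_link)(void *ds, uint32_t *link);
    };

    static void
    legacy_set_tx_link(void *ds, uint32_t link)
    {
        ((struct legacy_desc *)ds)->ds_link = link;
    }

    static void
    legacy_get_tx_link(void *ds, uint32_t *link)
    {
        *link = ((struct legacy_desc *)ds)->ds_link;
    }

    /* An EDMA chip installs its own pair, pointing at its own layout. */
    static const struct hal_link_ops legacy_link_ops = {
        .set_tx_link = legacy_set_tx_link,
        .get_tx_link = legacy_get_tx_link,
    };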
John Baldwin
0f078d635e Fix build when ATH_DEBUG is not defined. 2012-07-10 18:57:05 +00:00
Adrian Chadd
6abbbae5d3 Print the TX buffer if this error condition is asserted.
I need to figure out why this is occurring.  Hopefully I can get enough
descriptor dumps to figure it out.
2012-07-10 06:10:49 +00:00
Adrian Chadd
8405fe8662 Make sure the BAR TX session pause is correctly unpaused when a node
is reassociating.

PR:		kern/169432
2012-06-26 07:56:15 +00:00
Adrian Chadd
447fd44a6f Disable this warning debug for now, as I'm now aware of the particular
situation where it's occurring.

Whilst I'm here, flesh out a more descriptive description.
2012-06-14 04:01:25 +00:00
Adrian Chadd
af33d486ab Implement a separate, smaller pool of ath_buf entries for use by management
traffic.

* Create sc_mgmt_txbuf and sc_mgmt_txdesc, initialise/free them appropriately.
* Create an enum to represent buffer types in the API.
* Extend ath_getbuf() and _ath_getbuf_locked() to take the above enum.
* Right now anything sent via ic_raw_xmit() allocates via ATH_BUFTYPE_MGMT.
  This may not be very useful.
* Add ATH_BUF_MGMT flag (ath_buf.bf_flags) which indicates the current buffer
  is a mgmt buffer and should go back onto the mgmt free list.
* Extend 'txagg' to include debugging output for both normal and mgmt txbufs.
* When checking/clearing ATH_BUF_BUSY, do it on both TX pools.

Tested:

* STA mode, with heavy UDP injection via iperf.  This filled the TX queue;
  however, BARs were still going out successfully.

TODO:

* Initialise the mgmt buffers with ATH_BUF_MGMT and then ensure the right
  type is being allocated and freed on the appropriate list.  That'd save
  a write operation (to bf->bf_flags) on each buffer alloc/free.

* Test on AP mode, ensure that BAR TX and probe responses go out nicely
  when the main TX queue is filled (eg with paused traffic to a TID,
  awaiting a BAR to complete.)

PR:		kern/168170
2012-06-13 06:57:55 +00:00
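[Editor's note] A sketch of the split buffer pools (types and names are illustrative, not the driver's): data frames draw from the main pool, management traffic from a small reserved pool, and each buffer is tagged so the free path returns it to the right list.

    #include <stddef.h>

    typedef enum { BUFTYPE_NORMAL, BUFTYPE_MGMT } buftype_t;

    struct buf { struct buf *next; int flags; };
    #define BUF_FLAG_MGMT 0x1            /* must be freed back to the mgmt pool */

    struct bufpool { struct buf *head; };

    static struct buf *
    getbuf(struct bufpool *norm, struct bufpool *mgmt, buftype_t type)
    {
        struct bufpool *pool = (type == BUFTYPE_MGMT) ? mgmt : norm;
        struct buf *bf = pool->head;

        if (bf != NULL) {
            pool->head = bf->next;
            /* Tag the buffer so the free path returns it to the right pool. */
            bf->flags = (type == BUFTYPE_MGMT) ? BUF_FLAG_MGMT : 0;
        }
        return bf;
    }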
Adrian Chadd
32c387f76a Oops, return the newly allocated buffer to the queue, not the completed
buffer.

PR:	kern/168170
2012-06-13 05:41:00 +00:00
Adrian Chadd
e1a50456b6 Replace the direct sc_txbuf manipulation with a pair of functions.
This is preparation work for having a separate ath_buf queue for
management traffic.

PR:		kern/168170
2012-06-13 05:39:16 +00:00
Adrian Chadd
c7c073413b Fix uninitialised reference.
Noticed by:	John Hay <jhay@meraka.org.za>
2012-06-11 12:26:23 +00:00
Adrian Chadd
7561cb5c8b Wrap the whole (software) TX path from ifnet dequeue to software queue
(or direct dispatch) behind the TXQ lock (which, remember, is doubling
as the TID lock too for now.)

This ensures that:

 (a) the sequence number and the CCMP PN allocation is done together;
 (b) overlapping transmit paths don't interleave frames, so we don't
     end up with the original issue that triggered kern/166190.

     Ie, that we don't end up with seqnos A, B in thread 1 and C, D in
     thread 2 being queued to the software queue as "A C D B"
     or similar, leading to BAW stalls.

This has been tested:

* both STA and AP modes with INVARIANTS and WITNESS;
* TCP and UDP TX;
* both STA->AP and AP->STA.

STA is a Routerstation Pro (single CPU MIPS) and the AP is a dual-core
Centrino.

PR:		kern/166190
2012-06-11 07:44:16 +00:00
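[Editor's note] A sketch of the invariant this commit enforces (a pthread mutex stands in for the TXQ/TID lock; names illustrative): the sequence number, the CCMP PN, and the software-queue insertion are all assigned under one lock, so concurrent senders cannot interleave them.

    #include <pthread.h>
    #include <stdint.h>

    struct tid {
        pthread_mutex_t lock;      /* doubles as the TXQ lock for now */
        uint16_t        seqno;
        uint64_t        ccmp_pn;
    };

    static void
    tx_one(struct tid *t /* , frame ... */)
    {
        pthread_mutex_lock(&t->lock);
        uint16_t seq = t->seqno++;          /* 802.11 sequence number */
        uint64_t pn  = ++t->ccmp_pn;        /* CCMP packet number (crypto encap) */
        (void)seq; (void)pn;
        /* ... stamp the frame and append it to the software queue ... */
        pthread_mutex_unlock(&t->lock);
    }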
Adrian Chadd
4b6db4043f Add another TID lock. 2012-06-11 07:35:24 +00:00
Adrian Chadd
ba0e58f4fa Make sure the frames are queued to the head of the list, not the tail.
See previous commit.

PR:		kern/166190
2012-06-11 07:31:50 +00:00
Adrian Chadd
39f24578fb When scheduling frames in an aggregate session, the frames should be
scheduled from the head of the software queue rather than trying to
queue the newly given frame.

The latter leads to some rather unfortunate out-of-order (but still valid
as it's inside the BAW) frame TX.

This now:

* Always queues the frame at the end of the software queue;
* Tries to direct dispatch the frame at the head of the software queue,
  to try and fill up the hardware queue.

TODO:

* I should likely try to queue as many frames to the hardware as I can
  at this point, rather than doing one at a time;
* ath_tx_xmit_aggr() may fail and this code assumes that it'll schedule
  the TID.  Otherwise TX may stall.

PR:		kern/166190
2012-06-11 07:29:25 +00:00
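[Editor's note] A sketch of the queueing discipline described above (a hand-rolled list stands in for the driver's TAILQ; names illustrative): new frames always go on the tail, dispatch always pulls from the head, and a frame that can't be sent goes back on the head so ordering inside the BAW is preserved - the same head-insertion rule the commit below applies to retried frames.

    #include <stddef.h>

    struct frame { struct frame *next; };
    struct swq { struct frame *head, *tail; };

    static void
    swq_tail(struct swq *q, struct frame *f)    /* new frames go here */
    {
        f->next = NULL;
        if (q->tail != NULL)
            q->tail->next = f;
        else
            q->head = f;
        q->tail = f;
    }

    static void
    swq_head(struct swq *q, struct frame *f)    /* unsent/retried frames go here */
    {
        f->next = q->head;
        q->head = f;
        if (q->tail == NULL)
            q->tail = f;
    }

    static struct frame *
    swq_pop(struct swq *q)                      /* dispatch always from the head */
    {
        struct frame *f = q->head;

        if (f != NULL) {
            q->head = f->next;
            if (q->head == NULL)
                q->tail = NULL;
        }
        return f;
    }

    static void
    sched_frame(struct swq *q, struct frame *f, int (*hw_xmit)(struct frame *))
    {
        swq_tail(q, f);                 /* never let a new frame jump the queue */
        struct frame *head = swq_pop(q);
        if (head != NULL && hw_xmit(head) != 0)
            swq_head(q, head);          /* couldn't send: back onto the head */
    }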
Adrian Chadd
4547f047ba Retried frames need to be inserted at the head of the list, not the tail.
This is an unfortunate byproduct of how the routine is used - it's called
with the head frame on the queue, but if the frame fails, it's inserted
into the tail of the queue.

Because of this, the sequence numbers would get all shuffled around and
the BAW would be bumped past this sequence number, that's now at the
end of the software queue.  Then, whenever it's time for that frame
to be transmitted, it'll be immediately outside of the BAW and TX will
stall until the BAW catches up.

It can also result in all kinds of weird duplicate BAW frames, leading
to hilarious panics.

PR:		kern/166190
2012-06-11 07:15:48 +00:00
Adrian Chadd
42f4d0618a Finish undoing the previous commit - this part of the code is no longer
required.

PR:		kern/166190
2012-06-11 07:08:40 +00:00
Adrian Chadd
c2ac9655c3 Introduce a new lock debug which is specifically for making sure the
_TID_ lock is held.

For now the TID lock is also the TXQ lock. This is just to make sure
that the right TXQ lock is held for the given TID.
2012-06-11 07:06:49 +00:00
Adrian Chadd
a108d2d6c6 Revert r233227 and follow-up commits as they break CCMP PN replay detection.
This showed up when doing heavy UDP throughput on SMP machines.

The problem is that the 802.11 sequence number is being
allocated separately from the CCMP PN replay number (which is assigned
during ieee80211_crypto_encap()).

Under significant throughput (200+ MBps) the TX path would be stressed
enough that frame TX/retry would force sequence number and PN allocation
to be out of order.  So once the frames were reordered via 802.11 seqnos,
the CCMP PN would be far out of order, causing most frames to be discarded
by the receiver.

I've fixed this in some local work by being forced to:

  (a) deal with the issues that lead to the parallel TX causing out of
      order sequence numbers in the first place;
  (b) fix all the packet queuing issues which lead to strange (but mostly
      valid) TX.

I'll begin fixing these in a subsequent commit or five.

PR:		kern/166190
2012-06-11 06:59:28 +00:00
Adrian Chadd
b815538dec Avoid using hard-coded numbers here. 2012-05-26 01:35:11 +00:00
Adrian Chadd
d3a6425b7c Fix up some corner cases with aggregation handling.
I've come across a weird scenario in net80211 where two TX streams will
happily attempt to set up an aggregation session together.
If we're very lucky, it happens concurrently on separate CPUs and the
total lack of locking in the net80211 aggregation code causes this stuff
to race. Badly.

So >1 call would occur to the ath(4) addba start, but only one call would
complete to addba complete or timeout.  The TID would thus stay paused.

The real fix is to implement some proper per-node (or maybe per-TID)
locking in net80211, which then could be leveraged by the ath(4) TX
aggregation code.

Whilst I'm at it, shuffle around the debugging messages a bit.
I like to keep people on their toes.
2012-05-22 06:31:03 +00:00
Adrian Chadd
6deb7f3271 For now, add a quick debugging patch to log when the hw TXQ != the TID/AC. 2012-05-21 22:43:38 +00:00
Adrian Chadd
4dfd450768 Rename ath_tx_cleanup() -> ath_tx_tid_cleanup() in order to not clash
with a symbol in if_ath.c
2012-05-21 22:39:13 +00:00
Adrian Chadd
e60c4fc2c9 Migrate the bulk of the RX routines out from if_ath.c to if_ath_rx.[ch].
* migrate the rx processing out into if_ath_rx.c
* migrate the TSF functions into if_ath_tsf.h, as inlines

This is in preparation for supporting the EDMA RX routines, required to
support the AR93xx series NICs.

TODO:

* ath_start() should really be private, but it's called as part of
  the RX path. I should likely migrate ath_rx_tasklet() back into
  if_ath.c and then return this to be 'static'.  The RX code really
  shouldn't need to see TX routines (and vice versa.)

* ath_beacon_* should be in if_ath_beacon.[ch].

* ath_tdma_* should be in if_ath_tdma.[ch] ...
2012-05-20 02:05:10 +00:00
Adrian Chadd
0e22ed0eb2 Migrate ath_debug and sc_debug from an int to a uint64_t / QUAD;
add some more BAR debugging logic.

* Change the definition of ath_debug and ath_softc.sc_debug  from
  int to uint64_t;
* Change the relevant sysctls;
* Add a new BAR TX debugging field;
* Use this in if_ath_tx.

This has been tested by using the sysctl program, which happily allows
for fields > 32 bits to be configured.
2012-05-15 23:39:37 +00:00
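[Editor's note] A small standalone sketch of why the width matters (the macro and bit names here are made up, not the driver's): once the mask is a uint64_t, debug bits can live above bit 31, and the constants need the ULL suffix so they aren't truncated.

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    static uint64_t sc_debug;                     /* was an int */

    #define DBG_BAR_TX   0x0000000100000000ULL    /* a bit that can't fit in 32 bits */

    #define DPRINTF(bit, ...) do {                \
        if (sc_debug & (bit))                     \
            printf(__VA_ARGS__);                  \
    } while (0)

    int
    main(void)
    {
        sc_debug = DBG_BAR_TX;
        DPRINTF(DBG_BAR_TX, "BAR TX debug enabled (mask 0x%" PRIx64 ")\n", sc_debug);
        return 0;
    }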
Adrian Chadd
e9a6408e07 Handle non-xretry errors the same as xretry errors for now.
Although I _should_ handle the other errors in various ways (specifically
errors like FILT), treating them as having transmitted successfully
is completely wrong.  Here, they'd be counted as successful and the BAW
would be advanced.. but the RX side wouldn't have received them.

The specific errors I've been seeing here are HAL_TXERR_FILT.

This patch does fix the issue - I've tested it using -i 0.001 pings
(enough to start aggregation) and now the behaviour is correct:

* The RX side never sees a "moved window" error, and
* The TX side sends BARs as needed, with the RX side correctly handling
  them.

PR:		kern/167902
2012-05-15 04:55:15 +00:00
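[Editor's note] A sketch of the completion decision (the enum here is illustrative; the real codes are the HAL's HAL_TXERR_* values): only a clean completion advances the BAW, and filtered frames take the same retry/BAR path as excessive-retry failures.

    enum tx_status { TX_OK, TX_ERR_XRETRY, TX_ERR_FILT, TX_ERR_OTHER };

    static void
    complete_subframe(enum tx_status st,
        void (*advance_baw)(void), void (*retry_or_bar)(void))
    {
        if (st == TX_OK) {
            advance_baw();          /* the receiver saw it; move the window */
            return;
        }
        /*
         * XRETRY, FILT and the rest all mean the peer never received the
         * frame: retry it (or send a BAR) and leave the BAW alone.
         * Counting FILT as success advanced the window past frames the
         * receiver never saw, stalling the session.
         */
        retry_or_bar();
    }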
Adrian Chadd
96ff26ff5d Fix a couple of sc_ac2q[] mappings that were using the TID, not the AC.
PR:		kern/167588
2012-05-04 20:31:27 +00:00
Adrian Chadd
39da9d42bd Remove some of the redundant locking done in the TX completion path,
when checking whether BAR frames need to be checked.
2012-04-26 23:57:24 +00:00
Adrian Chadd
2aa563dfeb Migrate the net80211 TX aggregation state to be from per-AC to per-TID.
TODO:

* Test mwl(4) more thoroughly!

Reviewed by:	bschmidt (for iwn)
2012-04-15 20:29:39 +00:00
Adrian Chadd
b43facbff3 After reviewing the mcast/sleep station code a little, undo some brain
damage which I committed when I had less clue about such things.

Don't ever put normal data frames on the mcast software queue.
Just put mcast frames there if needed.

Pass the txq decision into ath_tx_normal_setup(), as we've already made
the decision.  Don't re-do it.

Whilst i'm here, add another random debugging statement.
2012-04-08 00:40:16 +00:00
Adrian Chadd
4d7f883711 Do a dma sync before the descriptors are chained together.
I need to find a better place to do this..
2012-04-07 05:51:43 +00:00
Adrian Chadd
e2e4a2c2a1 Break out the legacy duration and protection code into routines,
and call these after rate control selection is done.

The duration/protection code wasn't working - it expected the rix to
be valid.  Unfortunately, after I moved the rate control selection to
late in the process, the rix value isn't valid and thus the protection/
duration code would get things wrong.

HT frames are now correctly protected with an RTS and for the AR5416,
this involves having the aggregate frames be limited to 8K.

TODO:

* Fix up the DMA sync to occur just before the frame is queued to the
  hardware.  I'm adjusting the duration here but not doing the DMA
  flush.

* Doubly/triply ensure that the aggregate frames are being limited to
  the correct size, or the AR5416 will get unhappy when TXing RTS-protected
  aggregates.
2012-04-07 05:48:26 +00:00