* Move the node sleep/wake state under the TX lock rather than the
node lock. Let's leave the node lock protecting rate control only
for now.
* When reassociating, various state needs to be cleared. For example,
the aggregate session needs to be torn down, including any pending
aggregation negotiation and BAR TX waiting.
* .. and we need to do a "cleanup" pass since frames in the hardware
TX queue need to be transmitted.
Modify ath_tx_tid_cleanup() to be called with the TX lock held and push
frames into a completion list. This allows for the cleanup to be
done atomically for all TIDs in a node rather than grabbing and
releasing the TX lock each time.
a non-loss reset.
When the drain functions are called, the holding descriptor and link pointers
are NULLed out.
But when the processq function is called during a non-loss reset, this
doesn't occur. So the next time a DMA occurs, it's chained to a descriptor
that no longer exists and the hardware gets angry.
Tested:
* AR5416, STA mode; use sysctl dev.ath.X.forcebstuck=1 to force a non-loss
reset.
TODO:
* Further AR9380 testing just to check that the behaviour for the EDMA
chips is sane.
PR: kern/178477
of "right".)
Flip back on the "always continue TX DMA using the holding descriptor"
code - by always setting ATH_BUF_BUSY and never setting axq_link to NULL.
Since the holding descriptor is accessed via txq->axq_link and _that_
is done behind the TXQ lock rather than the TX path lock, the holding
descriptor stuff itself needs to be behind the TXQ lock.
So, do the mental gymnastics needed to do this.
I've not seen any of the hardware failures that I was seeing when
I last tried to do this.
Tested:
* AR5416, STA mode
I'm not sure why this is failing. The holding descriptor should be being
re-read when starting DMA of the next frame. Obviously something here
isn't totally correct.
I'll review the TX queue handling and see if I can figure out why this
is failing. I'll then re-revert this patch out and use the holding
descriptor again.
but partly to just tidy up things.
The problem here - there are too many TX buffers in the queue! By the
time one needs to transmit an EAPOL frame (for this PR, it's the response
to the group rekey notification from the AP) there are no ath_buf entries
free and the EAPOL frame doesn't go out.
Now, the problem!
* Enforcing the TX buffer limitation _before_ we dequeue the frame?
Bad idea. Because..
* .. it means I can't check whether the mbuf has M_EAPOL set.
The solution(s):
* De-queue the frame first
* Don't bother doing the TX buffer minimum free check until after
we know whether it's an EAPOL frame or not.
* If it's an EAPOL frame, allocate the buffer from the mgmt pool
rather than the default pool.
Whilst I'm here:
* Add a tweak to limit how many buffers a single node can acquire.
* Don't enforce that for EAPOL frames.
* .. set that to default to 1/4 of the available buffers, or 32,
whichever is more sane.
This doesn't fix issues due to a sleeping node or a very poor performing
node; but this doesn't make it worse.
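As a rough illustration of that ordering (the pool allocators and the
watermark check below are made-up stand-ins, not the driver's actual
symbols; M_EAPOL itself comes from net80211):
#include <sys/param.h>
#include <sys/mbuf.h>
/* M_EAPOL is defined by net80211 (ieee80211_freebsd.h). */

struct ath_buf;
struct ath_buf *getbuf_from_mgmt_pool(void);     /* hypothetical */
struct ath_buf *getbuf_from_default_pool(void);  /* hypothetical */
int tx_below_free_watermark(void);               /* hypothetical */

static struct ath_buf *
pick_tx_buf(struct mbuf *m)
{
    /* The frame has already been dequeued, so its flags are visible. */
    if (m->m_flags & M_EAPOL)
        return (getbuf_from_mgmt_pool());
    /* Only ordinary data frames are subject to the minimum-free limit. */
    if (tx_below_free_watermark())
        return (NULL);
    return (getbuf_from_default_pool());
}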
Tested:
* AR5416 STA, TX'ing 100+ mbit UDP to an AP, but only 50mbit being received
(thus the TX queue fills up.)
* .. with CCMP / WPA2 encryption configured
* .. and the group rekey time set to 10 seconds, just to elicit the
behaviour very quickly.
PR: kern/138379
just "when the queue is busy."
After talking with the MAC team, it turns out that the linked list
implementation sometimes will not accept a TxDP update and will
instead re-read the link pointer. So even if the hardware has
finished transmitting a chain and has hit EOL/VEOL, it may still
re-read the link pointer to begin transmitting again.
So, always set ATH_BUF_BUSY on the last buffer in the chain (to
mark the last descriptor as the holding descriptor) and never
blank the axq_link pointer.
Tested:
* AR5416, STA mode
TODO:
* much more thorough testing with the pre-11n NICs, just to verify
that they behave the same way.
* test TDMA on the 11n and non-11n hardware.
The QCA9565 is a 1x1 2.4GHz 11n chip with integrated on-chip bluetooth.
The AR9300 HAL already has support for this chip; it just wasn't
included in the probe/attach path.
Tested:
* This commit brought to you over a QCA9565 wifi connection from
FreeBSD.
* .. ie, basic STA, pings, no iperf or antenna diversity checking just yet.
* That lock isn't actually held during reset - just the whole TX/RX path
is paused. So, remove the assertion.
* Log the TX queue status - how many hardware frames are active in the
MAC and whether the queue is active.
the pause/resume code to not be called completely symmetrically.
I'll chase down the root cause of that soon; this at least works around
the bug and tells me when it happens.
is compiled in or not.
This fixes issues with people running -HEAD but who build modules
without doing a "make buildkernel KERNCONF=XXX", thus picking up a default
opt_*.h. The resulting module wouldn't have 11n enabled and the
chainmask configuration would just be plain wrong.
* Add ah_ratesArray[] to the ar5416 HAL state - this stores the maximum
values permissible per rate.
* Since different chip EEPROM formats store this value in a different place,
store the HT40 power detector increment value in the ar5416 HAL state.
* Modify the target power setup code to store the maximum values in the
ar5416 HAL state rather than using a local variable.
* Add ar5416RateToRateTable() - to convert a hardware rate code to the
ratesArray enum / index.
* Add ar5416GetTxRatePower() - which goes through the gymnastics required
to correctly calculate the target TX power:
+ Add the power detector increment for ht40;
+ Take the power offset into account for AR9280 and later;
+ Offset the TX power correctly when doing open-loop TX power control;
+ Enforce the per-rate maximum value allowable.
Note - setting a TPC value of 0x0 in the TX descriptor on (at least)
the AR9160 resulted in the TX power being very high indeed. This didn't
happen on the AR9220. I'm guessing it's a chip bug that was fixed at
some point. So for now, just assume the AR5416/AR5418 and AR9130 are
also suspect and clamp the minimum value here at 1.
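A rough sketch of the clamping order described above (function and
parameter names are illustrative; the power values are assumed to be in
the hardware's half-dBm steps):
#include <sys/types.h>

static uint8_t
clamp_tx_power(uint8_t target, uint8_t rate_max, uint8_t ht40_inc,
    int is_ht40, int needs_min_clamp)
{
    int power = target;

    if (is_ht40)
        power += ht40_inc;      /* HT40 power detector increment */
    if (power > rate_max)
        power = rate_max;       /* enforce the per-rate maximum */
    if (needs_min_clamp && power < 1)
        power = 1;              /* a TPC value of 0 misbehaves on some chips */
    return ((uint8_t)power);
}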
Tested:
* AR5416, AR9160, AR9220 hostap, verified using (2GHz) spectrum analyser
* Looked at target TX power in TX descriptor (using athalq) as well as TX
power on the spectrum analyser.
TODO:
* The TX descriptor code sets the target TX power to 0 for AR9285 chips.
I'm not yet sure why. Disable this for TPC and ensure that the TPC
TX power is set.
* AR9280, AR9285, AR9227, AR9287 testing!
* 5GHz testing!
Quirks:
* The per-packet TPC code is only exercised when the tpc sysctl is set
to 1. (dev.ath.X.tpc=1.) This needs to be done before you bring the
interface up.
* When TPC is enabled, setting the TX power doesn't end up with a call
through to the HAL to update the maximum TX power. So ensure that
you set the TPC sysctl before you bring the interface up and configure
a lower TX power or the hardware will be clamped by the lower TX
power (at least until the next channel change.)
Thanks to Qualcomm Atheros for all the hardware, and Sam Leffler for use
of his spectrum analyser to verify the TX channel power.
ath_tx_rate_fill_rcflags(). Include setting up the TX power cap in the
rate scenario setup code being passed to the HAL.
Other things:
* Add a TX power cap field in ath_rc.
* Add a three-stream flag in ath_rc.
* Delete the LDPC flag from ath_rc - it's not a per-rate flag, it's a
global flag for the transmission.
directly referencing ni->ni_txpower.
This provides the hardware with a slightly more accurate idea of
the maximum TX power to be using.
This is part of a series to get per-packet TPC to work (better).
Tested:
* AR5416, hostap mode
is configured for higher rates (lower than max) but higher TX power
is configured for the lower rates, above the configured cap, to improve
long distance behaviour.
* Add the rest of the missing GPIO output mux types;
* Add in a new debug category;
* And a new MCI btcoex configuration option in ath_hal.ah_config
Obtained from: Qualcomm Atheros
buffers (ie, >4GB on amd64.)
The underlying problem was that PREREAD doesn't sync the mbuf
with the DMA memory (ie, bounce buffer), so the bounce buffer may
have had stale information. Thus it was always considering the
buffer completed and things just went off the rails.
This change does the following:
* Make ath_rx_pkt() always consume the mbuf somehow; it no longer
passes error mbufs (eg CRC errors, crypt errors, etc) back up
to the RX path to recycle. This means that a new mbuf is always
allocated each time, but it's cleaner.
* Push the RX buffer map/unmap to occur in the RX path, not
ath_rx_pkt(). Thus, ath_rx_pkt() now assumes (a) it has to consume
the mbuf somehow, and (b) that it's already been unmapped and
synced.
* For the legacy path, the descriptor isn't mapped, it comes out of
coherent DMA memory anyway. So leave it there.
* For the EDMA path, the RX descriptor has to be cleared before
it's passed to the hardware, so that when we check with
a POSTREAD sync, we actually get either a blank (not finished)
or a filled out descriptor (finished.) Otherwise we get stale
data in the DMA memory.
* .. so, for EDMA RX path, we need PREREAD|PREWRITE to sync the
data -> DMA memory, then POSTREAD|POSTWRITE to finish syncing
the DMA memory -> data.
* Whilst we're here, make sure that the EDMA buffer setup (ie,
bzero'ing the descriptor part) is done before the mbuf is
mapped/synced. (A sketch of the sync ordering follows this list.)
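The sync ordering boils down to something like this (tag/map names are
placeholders):
#include <sys/param.h>
#include <machine/bus.h>

static void
edma_rx_buf_prepare(bus_dma_tag_t tag, bus_dmamap_t map)
{
    /* Descriptor was bzero'ed before the mbuf was loaded; push it out. */
    bus_dmamap_sync(tag, map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
}

static void
edma_rx_buf_complete(bus_dma_tag_t tag, bus_dmamap_t map)
{
    /* Make the hardware-written status visible before the CPU reads it. */
    bus_dmamap_sync(tag, map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
}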
NOTE: there's been a lot of commits besides this one with regards to
tidying up the busdma handling in ath(4). Please check the recent
commit history.
Discussed with and thanks to: scottl
Tested:
* AR5416 (non-EDMA) on i386, with the DMA tag for the driver
set to 2^30, not 2^32, STA
* AR9580 (EDMA) on i386, as above, STA
* User - tested AR9380 on amd64 with 32GB RAM.
PR: kern/177530
before the TX path is being aborted.
Right now it's in the TDMA code and I can live with that; but it really
should get fixed.
I'll do a more thorough audit of this code soon.
* Don't use BUS_DMA_ALLOCNOW for descriptor DMA maps; we never use
bounce buffers for the descriptors themselves.
* Add some XXX's to mark where the ath_buf has its mbuf ripped from
underneath it without actually cleaning up the dmamap. I haven't
audited those particular code paths to see if the DMA map is guaranteed
to be setup there; I'll do that later.
* Print out a warning if the descdma tidyup code is given some descriptors
w/ maps to free. Ideally the owner will free the mbufs and unmap
the descriptors before freeing the descriptor/ath_buf pairs, but
right now that's not guaranteed to be done.
Reviewed by: scottl (BUS_DMA_ALLOCNOW tag)
the buffer is being freed.
* When buffers are cloned, the original mapping isn't copied, but it
also wasn't being freed until later. To be safe, free the
mapping when the buffer is cloned.
* ath_freebuf() now no longer calls the busdma sync/unmap routines.
* ath_tx_freebuf() now calls sync/unmap.
* Call sync first, before calling unmap.
Tested:
* AR5416, STA mode
The normal RX path (ath_rx_pkt()) will sync and unmap the
buffer before passing it up the stack. We only need to do this
if we're flushing the FIFO during reset/shutdown.
(Yes, the previous code temporarily broke EDMA TX. I'm sorry; I should've
actually set up ATH_BUF_FIFOEND on frames so txq->axq_fifo_depth was
cleared!)
This code implements a whole bunch of sorely needed EDMA TX improvements
along with CABQ TX support.
The specifics:
* When filling/refilling the FIFO, use the new TXQ staging queue
for FIFO frames
* Tag frames with ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND correctly.
For now the non-CABQ transmit path pushes one frame into the TXQ
staging queue without setting up the intermediary link pointers
to chain them together, so draining frames from the txq staging
queue to the FIFO queue occurs AMPDU / MPDU at a time.
* In the CABQ case, manually tag the list with ATH_BUF_FIFOPTR and
ATH_BUF_FIFOEND so a chain of frames is pushed into the FIFO
at once.
* Now that frames are in a FIFO pending queue, we can top up the
FIFO after completing a single frame. This means we can keep
it filled rather than waiting for it to drain and _then_ adding
more frames.
* The EDMA restart routine now walks the FIFO queue in the TXQ
rather than the pending queue and re-initialises the FIFO with
that.
* When restarting EDMA, we may have partially completed sending
a list. So stamp the first frame that we see in a list with
ATH_BUF_FIFOPTR and push _that_ into the hardware.
* When completing frames, only check those on the FIFO queue.
We should never ever queue frames from the pending queue
direct to the hardware, so there's no point in checking.
* Until I figure out what's going on, make sure if the TXSTATUS
for an empty queue pops up, complain loudly and continue.
This will stop the panics that people are seeing. I'll add
some code later which will assist in ensuring I'm populating
each descriptor with the correct queue ID.
* When considering whether to queue frames to the hardware queue
directly or software queue frames, make sure the depth of
the FIFO is taken into account now.
* When completing frames, tag them with ATH_BUF_BUSY if they're
not the final frame in a FIFO list. The same holding descriptor
behaviour is required when handling descriptors linked together
with a link pointer as the hardware will re-read the previous
descriptor to refresh the link pointer before continuing.
* .. and if we complete the FIFO list (ie, the buffer has
ATH_BUF_FIFOEND set), then we don't need the holding buffer
any longer. Thus, free it.
Tested:
* AR9380/AR9580, STA and hostap
* AR9280, STA/hostap
TODO:
* I don't yet trust that the EDMA restart routine is totally correct
in all circumstances. I'll continue to thrash this out under heavy
multiple-TXQ traffic load and fix whatever pops up.
Each set of frames pushed into a FIFO is represented by a list of
ath_bufs - the first ath_buf in the FIFO list is marked with
ATH_BUF_FIFOPTR; the last ath_buf in the FIFO list is marked with
ATH_BUF_FIFOEND.
Multiple lists of frames are just glued together in the TAILQ as per
normal - except that at the end of a FIFO list, the descriptor link
pointer will be NULL and it'll be tagged with ATH_BUF_FIFOEND.
For non-EDMA chipsets this is a no-op - the ath_txq frame list (axq_q)
stays the same and is treated the same.
For EDMA chipsets the frames are pushed into axq_q and then when
the FIFO is to be (re) filled, frames will be moved onto the FIFO
queue and then pushed into the FIFO.
So:
* Add a new queue in each hardware TXQ (ath_txq) for staging FIFO frame
lists. It's a TAILQ (like the normal hardware frame queue) rather than
the ath9k list-of-lists to represent FIFO entries.
* Add new ath_buf flags - ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND.
* When allocating ath_buf entries, clear out the flag value before
returning it or it'll end up having stale flags.
* When cloning ath_buf entries, only clone ATH_BUF_MGMT. Don't clone
the FIFO related flags.
* Extend ath_tx_draintxq() to first drain the FIFO staging queue, _then_
drain the normal hardware queue.
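To illustrate how a FIFO entry is delimited on the staging TAILQ (the
structure, field names and flag values below are simplified stand-ins,
not the driver's definitions):
#include <sys/param.h>
#include <sys/queue.h>

struct ath_buf_sketch {
    TAILQ_ENTRY(ath_buf_sketch) bf_list;
    uint32_t bf_flags;
#define ATH_BUF_FIFOPTR_SKETCH 0x0010   /* first frame of a FIFO list */
#define ATH_BUF_FIFOEND_SKETCH 0x0020   /* last frame of a FIFO list */
};
TAILQ_HEAD(ath_bufhead_sketch, ath_buf_sketch);

static struct ath_buf_sketch *
fifo_list_last(struct ath_bufhead_sketch *stageq)
{
    struct ath_buf_sketch *bf;

    /* Everything up to and including the FIFOEND buffer is one FIFO slot. */
    TAILQ_FOREACH(bf, stageq, bf_list) {
        if (bf->bf_flags & ATH_BUF_FIFOEND_SKETCH)
            return (bf);
    }
    return (NULL);      /* incomplete list; don't push it to the FIFO yet */
}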
Tested:
* AR9280, hostap
* AR9280, STA
* AR9380/AR9580 - hostap
TODO:
* Test on other chipsets, just to be thorough.
instead of axq_link.
This (among a bunch of uncommitted work) is required for EDMA chips
to correctly transmit frames on the CABQ.
Tested:
* AR9280, hostap mode
* AR9380/AR9580, hostap mode (staggered beacons)
TODO:
* This code only really gets called when burst beacons are used;
it glues multiple CABQ queues together when sending to the hardware.
* More thorough bursted beacon testing! (first requires some work with
the beacon queue code for bursted beacons, as that currently uses the
link pointer and will fail on EDMA chips.)
the descriptor link pointer, rather than directly.
This is needed on AR9380 and later (ie, EDMA) NICs so the multicast queue
has a chance in hell of being put together right.
Tested:
* AR9380, AR9580 in hostap mode, CABQ traffic (but with other patches..)
related issues.
Moving the TX locking under one lock made things easier to progress on
but it had one important side-effect - it increased the latency when
handling CABQ setup when sending beacons.
This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.
The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.
The second half of this commit is the CABQ related setup changes needed
for sane looking EDMA CABQ support. Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot. For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.
The aim here is to setup the CABQ list and then push _that list_ to
the hardware for transmission. I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.
Major:
* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
a single lock.
* Leave the software queue side of things under the ATH_TX_LOCK lock,
(continuing) to serialise things as they are.
* Add a new function which is called whenever there's a beacon miss,
to print out some debugging. This is primarily designed to help
me figure out if the beacon miss events are due to a noisy environment,
issues with the PHY/MAC, or other.
* Move the CABQ setup/enable to occur _after_ all the VAPs have been
looked at. This means that for multiple VAPS in bursted mode, the
CABQ gets primed once all VAPs are checked, rather than being primed
on the first VAP and then having frames appended after this.
Minor:
* Add a (disabled) twiddle to let me enable/disable cabq traffic.
It's primarily there to let me easily debug what's going on with beacon
and CABQ setup/traffic; there are some DMA engine hangs which I'm finally
trying to trace down.
* Clear bf_next when flushing frames; it should quieten some warnings
that show up when a node goes away.
Tested:
* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)
TODO:
* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
ensure the CABQ stuff is all sane (eg, the MORE bits that aren't being
correctly set when chaining descriptors.)
to stuck beacons.
* Set the cabq readytime (ie, how long to burst for) to 50% of the total
beacon interval time
* fix the cabq adjustment calculation based on how the beacon offset is
calculated (the SWBA/DBA time offset.)
This is all still a bit magic voodoo but it does seem to have further
quietened issues with missed/stuck beacons under my local testing.
In any case, it better matches what the reference HAL implements.
Obtained from: Qualcomm Atheros
"complete RX frames."
The 128 entry RX FIFO is really easy to fill up and miss refilling
when it's done in the ath taskq - as that gets blocked up doing
RX completion, TX completion and other random things.
So the 128 entry RX FIFO now gets emptied and refilled in the ath_intr()
task (and it grabs / releases locks, so now ath_intr() can't just be
a FAST handler yet!) but the locks aren't held for very long. The
completion part is done in the ath taskqueue context.
Details:
* Create a new completed frame list - sc->sc_rx_rxlist;
* Split the EDMA RX process queue into two halves - one that
processes the RX FIFO and refills it with new frames; another
that completes the completed frame list;
* When tearing down the driver, flush whatever is in the deferred
queue as well as what's in the FIFO;
* Create two new RX methods - one that processes all RX queues,
one that processes the given RX queue. When MSI is implemented,
we get told which RX queue the interrupt came in on so we can
specifically schedule that. (And I can do that with the non-MSI
path too; I'll figure that out later.)
* Convert the legacy code over to use these new RX methods;
* Replace all the instances of the RX taskqueue enqueue with a call
to a relevant RX method to enqueue one or all RX queues.
Tested:
* AR9380, STA
* AR9580, STA
* AR5413, STA
* when pulling frames off of the TID queue, the ATH_TID_REMOVE()
macro decrements the axq_depth field. So don't do it twice.
* in ath_tx_comp_cleanup_aggr(), bf wasn't being reset to bf_first
before walking the buffer list to complete buffers; so those buffers
will leak.
Since this is being done during buffer free, it's a crap shoot whether
the TX path lock is held or not. I tried putting the ath_freebuf() code
inside the TX lock and I got all kinds of locking issues - it turns out
that the buffer free path sometimes is called with the lock held and
sometimes isn't. So I'll go and fix that soon.
Hence for now the holdingbf buffers are protected by the TXBUF lock.
When working on TDMA, Sam Leffler found that the MAC DMA hardware
would re-read the last TX descriptor when getting ready to transmit
the next one. Thus the whole ATH_BUF_BUSY came into existence -
the descriptor must be left alone (very specifically the link pointer
must be maintained) until the hardware has moved onto the next frame.
He saw this in TDMA because the MAC would be frequently stopping during
active transmit (ie, when it wasn't its turn to transmit.)
Fast-forward to today. It turns out that this is a problem not with
a single MAC DMA instance, but with each QCU (from 0->9). They each
maintain separate descriptor pointers and will re-read the last
descriptor when starting to transmit the next.
So when your AP is busy transmitting from multiple TX queues, you'll
(more) frequently see one QCU stopped, waiting for a higher-priority QCU
to finish transmitting, before it'll go ahead and continue. If you mess
up the descriptor (ie by freeing it) then you're out of luck.
Thanks to rpaulo for sticking with me whilst I diagnosed this issue
that he was quite reliably triggering in his environment.
This is a reimplementation; it doesn't have anything in common with
the ath9k or the Qualcomm Atheros reference driver.
Now - it in theory doesn't apply on the EDMA chips, as long as you
push one complete frame into the FIFO at a time. But the MAC can DMA
from a list of frames pushed into the hardware queue (ie, you concat
'n' frames together with link pointers, and then push the head pointer
into the TXQ FIFO.) Since that's likely how I'm going to implement
CABQ handling in hostap mode, it's likely that I will end up teaching
the EDMA TX completion code about busy buffers, just to be "sure"
this doesn't creep up.
Tested - iperf ap->sta and sta->ap (with both sides running this code):
* AR5416 STA
* AR9160/AR9220 hostap
To validate that it doesn't break the EDMA (FIFO) chips:
* AR9380, AR9485, AR9462 STA
Using iperf with the -S <tos byte decimal value> to set the TCP client
side DSCP bits, mapping to different TIDs and thus different TX queues.
TODO:
* Make this work on the EDMA chips, if we end up pushing lists of frames
to the hardware (eg how we eventually will handle cabq in hostap/ibss
mode.)
* a flags field that lets me know what's going on;
* the hardware ratecode, unmolested by conversion to a bitrate;
* the HAL rs_flags field, useful for debugging;
* specifically mark aggregate sub-frames.
This stuff sorely needs tidying up - it's missing some important
stuff (eg numdelims) and it would be nice to put the flags at the
beginning rather than at the end.
Tested:
* AR9380, STA mode, 2x2 HT40, monitoring RSSI and EVM values
I can 100% reliably trigger this on TID 1 traffic by using iperf -S 32
<client fields> to create traffic that maps to TID 1.
The reference driver doesn't do this check.
I stumbled across this whilst trying to debug another weird hang reported
on the freebsd-wireless list.
Whilst here, add in the STBC check to ath_rateseries_setup().
Whilst here, fix the short preamble flag to be set only for legacy rates.
Whilst here, comment that we should be using the full set of decisions
made by ath_rateseries_setup() rather than recalculating them!
routine.
There were still corner cases where the EWMA update stats are being
called on a rix which didn't have an intermediary stats update; thus
no packets were counted against it. Sigh.
This should fix the crashes I've been seeing on recent -HEAD.
* If both ends have negotiated (at least) one stream;
* Only if it's a single stream rate (MCS0-7);
* Only if there's more than one TX chain enabled.
Tested:
* AR9280 STA mode -> Atheros AP; tested both MCS2 (STBC) and MCS12 (no STBC.)
Verified using athalq to inspect the TX descriptors.
TODO:
* Test AR5416 - no STBC should be enabled;
* Test AR9280 with one TX chain enabled - no STBC should be enabled.
The HAL already included the STBC fields; it just needed to be exposed
to the driver and net80211 stack.
This should allow single-stream STBC TX and RX to be negotiated; however
the driver and rate control code currently don't do anything with it.
rate.
This fixes two things:
* The intermediary rates now also have their EWMA values changed;
* The existing code was using the wrong value for longtries - so the
EWMA stats were only adjusted for the first rate and not subsequent
rates in a MRR setup.
TODO:
* Merge the EWMA updates into update_stats() now..
* Remove ar5416UpdateChainmasks();
* Remove the TX chainmask override code from the ar5416 TX descriptor
setup routines;
* Write a driver method to calculate the current chainmask based on the
operating mode and update the driver state;
* Call the HAL chainmask method before calling ath_hal_reset();
* Use the currently configured chainmask in the TX descriptors rather than
the hardware TX chainmasks.
Tested:
* AR5416, STA/AP mode - legacy and 11n modes
Right now the only way to set the chainmask is to set the hardware
configured chainmask through capabilities. This is fine for forcing
the chainmask to be something other than what the hardware is capable
of (eg to reduce TX/RX to one connected antenna) but it does change what
the HAL hardware chainmask configuration is.
For operational mode changes, it (may?) make sense to separately control
the TX/RX chainmask.
Right now it's done as part of ar5416_reset.c - ar5416UpdateChainMasks()
calculates which TX/RX chainmasks to enable based on the operating mode.
(1 for legacy and whatever is supported for 11n operation.) But doing
this in the HAL is suboptimal - the driver needs to know the currently
configured chainmask in order to correctly enable things for each
TX descriptor. This is currently done by overriding the chainmask
config in the ar5416 TX routines but this has to disappear - the AR9300
HAL support requires the driver to dynamically set the TX chainmask based
on the TX power and TX rate in order to meet mini-PCIe slot power
requirements.
So:
* Introduce a new HAL method to set the operational chainmask variables;
* Introduce null methods for the previous generation chipsets;
* Add new driver state to record the current chainmask separate from
the hardware configured chainmask.
Part #2 of this will involve disabling ar5416UpdateChainMasks() and moving
it into the driver; as well as properly programming the TX chainmask
based on the currently configured HAL chainmask.
Tested:
* AR5416, STA mode - both legacy (11a/11bg) and 11n rates - verified
that AR_SELFGEN_MASK (the chainmask used for self-generated frames like
ACKs and RTSes) is correct, as well as the TX descriptor contents is
correct.
an incorrectly calculated RTS duration value when transmitting aggregates.
These earlier 802.11n NICs incorrectly used the ACK duration time when
calculating what to put in the RTS of an aggregate frame. Instead it
should have used the block-ack time. The result is that other stations
may not reserve enough time and start transmitting _over_ the top of
the in-progress blockack field. Tsk.
This workaround is to populate the burst duration field with the delta
between the ACK duration the hardware is using and the required duration
for the block-ack. The result is that the RTS field should now contain
the correct duration for the subsequent block-ack.
This doesn't apply for AR9280 and later NICs.
Obtained from: Qualcomm Atheros
Specifically - never jack the TX FIFO threshold up to the absolute
maximum; always leave enough space for two DMA transactions to
appear.
This is paranoia borrowed from the Linux ath9k driver. It can't hurt.
Obtained from: Linux ath9k
The default is to limit them to what the hardware is capable of.
Add sysctl twiddles for both the non-RTS and RTS protected aggregate
generation.
Whilst here, add some comments about stuff that I've discovered during
my exploration of the TX aggregate / delimiter setup path from the
reference driver.
This has reduced the number of TX delimiter and data underruns when
doing large UDP transfers (>100mbit).
This stops any HAL_INT_TXURN interrupts from occurring, which is a good
sign!
Obtained from: Qualcomm Atheros
* Delete this debugging print - I used it when debugging the initial
TX descriptor chaining code. It now works, so let's toss it.
It just confuses people if they enable TX descriptor debugging as they
get two slightly different versions of the same descriptor.
* Indenting.
part of ts_status. Thus:
* make sure we decode them from ts_flags, rather than ts_status;
* make sure we decode them regardless of whether there's an error or not.
This correctly exposes descriptor configuration errors, TX delimiter
underruns and TX data underruns.
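Roughly, the decode looks like this (assuming the HAL_TX_* flag names
from the HAL's descriptor status definitions; the check runs regardless
of whether ts_status indicates an error):
#include <sys/param.h>
#include <sys/systm.h>
/* struct ath_tx_status and the HAL_TX_* flags come from the HAL's ah_desc.h. */

static void
decode_tx_flags(const struct ath_tx_status *ts)
{
    if (ts->ts_flags & HAL_TX_DESC_CFG_ERR)
        printf("TX descriptor configuration error\n");
    if (ts->ts_flags & HAL_TX_DATA_UNDERRUN)
        printf("TX data underrun\n");
    if (ts->ts_flags & HAL_TX_DELIM_UNDERRUN)
        printf("TX delimiter underrun\n");
}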
actually do have to reinitialise the RX side of things after an RX
descriptor EOL error.
* Revert a change of mine from quite a while ago - don't shortcut the
RX initialisation path. There's a RX FIFO bug in the earlier chips
(I'm not sure when it was fixed in this series, but it's fixed
with the AR9380 and later) which causes the same RX descriptor to
be written to over and over. This causes the descriptor to be
marked as "done", and this ends up causing the whole RX path to
go very strange. This should fix the "kickpcu; handled X packets"
message spam where "X" is consistently small.
My change had some rather significant behavioural effects on throughput.
The two issues I noticed:
* With if_start and the ifnet mbuf queue, any temporary latency
would get eaten up by some mbufs being queued. With ath_transmit()
queuing things to ath_buf's, I'd only get 512 TX buffers before I
couldn't queue any further frames.
* There's also some non-zero latency involved with TX being pushed
into a taskqueue via direct dispatch. Any time the scheduler didn't
immediately schedule the ath TX task would cause extra latency.
Various 1ge/10ge drivers implement both direct dispatch (if the TX
lock can be acquired) and deferred task transmission (if the TX lock
can't be acquired), with frames being pushed into a drbr queue.
I'll have to do this at some point, but until I figure out how to
deal with 802.11 fragments, I'll have to wait a while longer.
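For reference, the usual drbr + direct-dispatch pattern in those drivers
looks roughly like this ("xx" names and the softc layout are
placeholders; ath(4) doesn't do this yet):
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>
#include <sys/taskqueue.h>
#include <sys/buf_ring.h>
#include <net/if.h>
#include <net/if_var.h>

struct xx_softc {
    struct buf_ring  *sc_br;
    struct mtx        sc_tx_mtx;
    struct taskqueue *sc_tq;
    struct task       sc_tx_task;
};

static void
xx_start_locked(struct xx_softc *sc)
{
    /* Drain sc->sc_br with drbr_dequeue() and hand frames to the hardware. */
    (void)sc;
}

static int
xx_transmit(struct ifnet *ifp, struct mbuf *m)
{
    struct xx_softc *sc = ifp->if_softc;
    int error;

    /* Always enqueue first so frame ordering is preserved. */
    error = drbr_enqueue(ifp, sc->sc_br, m);
    if (error != 0)
        return (error);

    /* Direct dispatch if the TX lock is free; otherwise defer to a task. */
    if (mtx_trylock(&sc->sc_tx_mtx)) {
        xx_start_locked(sc);
        mtx_unlock(&sc->sc_tx_mtx);
    } else
        taskqueue_enqueue(sc->sc_tq, &sc->sc_tx_task);
    return (0);
}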
So what I saw:
* lots of extra latency, especially under load - if the taskqueue
wasn't immediately scheduled, things went pear shaped;
* any extra latency would result in TX ath_buf's taking their sweet time
being replenished, so any further calls to ath_transmit() would drop
mbufs.
* .. yes, there's no explicit backpressure here - things are just dropped.
Eek.
With this, the general performance has gone up, but those subtle if_start()
related race conditions are back. For some reason, this is doubly-obvious
with the AR5416 NIC and I don't quite understand why yet.
There's an unrelated issue with AR5416 performance in STA mode (it's
fine in AP mode when bridging frames, weirdly..) that requires a little
further investigation. Specifically - it works fine on a Lenovo T40
(single core CPU) running a March 2012 9-STABLE kernel, but a Lenovo T60
(dual core) running an early November 2012 kernel behaves very poorly.
The same hardware with an AR9160 or AR9280 behaves perfectly.
when they're being called from the TX completion handler.
Going (back) through the taskqueue is just adding extra locking and
latency to packet operations. This improves performance a little bit
on most NICs.
It still hasn't restored the original performance of the AR5416 NIC
but the AR9160, AR9280 and later NICs behave very well with this.
Tested:
* AR5416 STA (still tops out at ~ 70mbit TCP, rather than 150mbit TCP..)
* AR9160 hostap (good for both TX and RX)
* AR9280 hostap (good for both TX and RX)
crappy 802.11n performance, sigh.)
With the AR5416, aggregates need to be limited to 8KiB if RTS/CTS is
enabled. However, larger aggregates were going out with RTS/CTS enabled.
The following was going on:
* The first buffer in the list would have RTS/CTS enabled in
bf->bf_state.txflags;
* The aggregate would be formed;
* The "copy over the txflags from the first buffer" logic that I added
blanked the RTS/CTS TX flags fields, and then copied the bf_first
RTS/CTS flags over;
* .. but that'd cause bf_first to be blanked out! And thus the flag
was cleared;
* So the rest of the aggregate formation would run with those flags
cleared, and thus > 8KiB aggregates were formed.
The driver is now (again) correctly limiting aggregate formation for
the AR5416 but there are still other pending issues to resolve.
Tested:
* AR5416, STA mode
Right now, ic_curchan seems to be updated rather quickly (ie, during
the ioctl) and before the driver gets notified of what's going on.
So what I was seeing was:
* NIC was in channel X;
* It generates PHY errors for channel X;
* an ioctl comes along from userland and changes things to channel Y;
* .. this updates ic_curchan, but hasn't yet reset the hardware;
* in parallel, RX is occurring and it looks at ic_curchan;
* .. which is channel Y, so events get stamped with that now.
Sigh.
the separate ath0 TX taskq.
Whilst here, make sure that the TX software scheduler is also
running out of the TX task, rather than the ath0 taskqueue.
Make sure that the tx taskqueue is blocked/unblocked as necessary.
This allows for a little more parallelism on multi-core machines,
as well as (eventually) supporting a higher task priority for TX
tasks, allowing said TX task to preempt an already running RX or
TX completion task.
Tested:
* AR5416, AR9280 hostap and STA modes
This is easily possible now that the TX is protected by a single
lock, rather than a per-TXQ (and thus per-TID) lock.
Only set CLRDMASK if none of the destinations are filtered.
This likely will need some tuning when it comes time to do U-APSD/PS-POLL
TX, however at that point it should be manually set anyway.
Tested:
* AR9280, STA mode
TODO:
* More thorough testing in AP mode
* test other chipsets, just to be safe/sure.
chip hangs.
* Always do a reset in ath_bmiss_proc(), regardless of whether the
hardware is "hung" or not. Specifically, for spectral scan, there's
likely a whole bunch of potential hangs that we don't (yet) recognise
in the HAL. So to avoid staying RX deaf until the station
disassociates, just do a no-loss reset.
* Set sc_beacons=1 in STA mode. During a reset, the beacon programming
isn't done. (It's likely I need to set sc_syncbeacons during a hang
reset, but I digress.) Thus after a reset, there's no beacon timer
programming to send a BMISS interrupt if beacons aren't heard ..
thus if the AP disappears, you won't get notified and you'll have to
reset your interface.
This hasn't yet fixed all of the hangs that I've seen when debugging
spectral scan, but it's certainly reduced the hang frequency and it
should improve general STA stability in very noisy environments.
Tested:
* AR9280, STA mode, spectral scan off/on
PR: kern/175227
when an interface is going down.
Right now it's quite possible (but very unlikely!) that ath_reset()
or similar is called, leading to a beacon config call, in parallel with
the last VAP being destroyed.
This likely should be fixed by making sure the bmiss/bstuck/watchdog
taskqueues are canceled whenever the last VAP is destroyed.
if_start().
This removes the overlapping data path TX from occurring, which
solves quite a number of the potential TX queue races in ath(4).
It doesn't fix the net80211 layer TX queue races and it doesn't
fix the raw TX path yet, but it's an important step towards this.
This hasn't dropped the TX performance in my testing; primarily
because now the TX path can quickly queue frames and continue
along processing.
This involves a few rather deep changes:
* Use the ath_buf as a queue placeholder for now, as we need to be
able to support queuing a list of mbufs (ie, when transmitting
fragments) and m_nextpkt can't be used here (because it's what is
joining the fragments together)
* if_transmit() now simply allocates the ath_buf and queues it to
a driver TX staging queue.
* TX is now moved into a taskqueue function.
* The TX taskqueue function now dequeues and transmits frames.
* Fragments are handled correctly here - as the current API passes
the fragment list as one mbuf list (joined with m_nextpkt) through
to the driver if_transmit().
* For the couple of places where ath_start() may be called (mostly
from net80211 when starting the VAP up again), just reimplement
it using the new enqueue and taskqueue methods.
What I don't like (about this work and the TX code in general):
* I'm using the same lock for the staging TX queue management and the
actual TX. This isn't required; I'm just being slack.
* I haven't yet moved TX to a separate taskqueue (but the taskqueue is
created); it's easy enough to do this later if necessary. I just need
to make sure it's a higher priority queue, so TX has the same
behaviour as it used to (where it would preempt existing RX..)
* I need to re-review the TX path a little more and make sure that
ieee80211_node_*() functions aren't called within the TX lock.
When queueing, I should just push failed frames into a queue and
when I'm wrapping up the TX code, unlock the TX lock and
call ieee80211_node_free() on each.
* It would be nice if I could hold the TX lock for the entire
TX and TX completion, rather than this release/re-acquire behaviour.
But that requires that I shuffle around the TX completion code
to handle actual ath_buf free and net80211 callback/free outside
of the TX lock. That's one of my next projects.
* the ic_raw_xmit() path doesn't use this yet - so it still has
sequencing problems with parallel, overlapping calls to the
data path. I'll fix this later.
Tested:
* Hostap - AR9280, AR9220
* STA - AR5212, AR9280, AR5416
This is intended to support reporting FFT results during active channel
scans, for users who would like to fiddle around with writing applications
that do both FFT visualisation _and_ AP scanning.
* add a new ioctl to enable/trigger spectral scan at channel change/reset;
* set do_spectral consistently if it's enabled, so a channel set/reset
will carry forth the correct PHY error configuration so frames
are actually received;
* for NICs that don't do spectral scan, don't bother checking the
spectral scan state on channel change/reset.
Tested:
* AR9280 - STA and scanning;
* AR5416 - STA, ensured that the SS code doesn't panic
This includes the HAL routines to setup, enable/activate/disable spectral
scan and configure the relevant registers.
This still requires driver interaction to enable spectral scan reporting.
Specifically:
* call ah_spectralConfigure() to configure and enable spectral scan;
* .. there's currently no way to disable spectral scan... that will have
to follow.
* call ah_spectralStart() to force start a spectral report;
* call ah_spectralStop() to force stop an active spectral report.
The spectral scan results appear as PHY errors (type 0x5 on the AR9280,
same as radar) but with the spectral scan bit set (0x10 in the last byte
of the frame) identifying it as a spectral report rather than a radar
FFT report.
Caveats:
* It's likely quite difficult to run spectral _and_ radar at the same
time. Enabling spectral scan disables the radar thresholds but
leaves radar enabled. Thus, the driver (for now) needs to ensure
that only one or the other is enabled.
* .. it needs testing on HT40 mode.
Tested:
* AR9280 in STA mode, HT/20 only
TODO:
* Test on AR9285, AR9287;
* Test in both HT20 and HT40 modes;
* .. all the driver glue.
Obtained from: Qualcomm Atheros
* Finish adding the HAL capability to announce whether a NIC supports
spectral scan or not;
* Add spectral scan methods to the HAL structure;
* Add HAL_SPECTRAL_PARAM for configuration of the spectral scan logic.
The capability ID and HAL_SPECTRAL_PARAM struct are from Qualcomm
Atheros.
I couldn't think of a way to maintain the hardware TXQ locks _and_ layer
on top of that per-TXQ software queuing and any other kind of fine-grained
locks (eg per-TID, or per-node locks.)
So for now, to facilitate some further code refactoring and development
as part of the final push to get software queue ps-poll and u-apsd handling
into this driver, just do away with them entirely.
I may eventually bring them back at some point, when it looks slightly more
architecturally cleaner to do so. But as it stands at the present, it's
not really buying us much:
* in order to properly serialise things and not get bitten by scheduling
and locking interactions with things higher up in the stack, we need to
wrap the whole TX path in a long held lock. Otherwise we can end up
being pre-empted during frame handling, resulting in some out of order
frame handling between sequence number allocation and encryption handling
(ie, the seqno and the CCMP IV get out of sequence);
* .. so whilst that's the case, holding the lock for that long means that
we're acquiring and releasing the TXQ lock _inside_ that context;
* And we also acquire it per-frame during frame completion, but we currently
can't hold the lock for the duration of the TX completion as we need
to call net80211 layer things with the locks _unheld_ to avoid LOR.
* .. the other places where we grab that lock are reset/flush, which don't happen
often.
My eventual aim is to change the TX path so all rejected frame transmissions
and all frame completions result in any ieee80211_free_node() calls to occur
outside of the TX lock; then I can cut back on the amount of locking that
goes on here.
There may be some LORs that occur when ieee80211_free_node() is called when
the TX queue path fails; I'll begin to address these in follow-up commits.
enforcing the TXOP and TBTT limits:
* Frames which will overlap with TBTT will not TX;
* Frames which will exceed TXOP will be filtered.
This is not enabled by default; it's intended to be enabled by the
TDMA code on 802.11n capable chipsets.
now this works for non-debug and debug builds.
* Add a comment reminding me (or someone) to audit all of the relevant
math to ensure there's no weird wrapping issues still lurking about.
But yes, this does seem to be mostly working.
Pointy-hat-to: adrian, yet again
* add some further debugging prints, which are quite nice to have
* add in ALQ hooks (optional!) to allow for the TDMA information to be
logged in-line with the TX and RX descriptor information.
The existing logic wrapped programming nexttbtt at 65535 TU.
This is not good enough for the 11n chips, whose nexttbtt register
(GENERIC_TIMER_0) has an initial value from 0..2^31-1 TSF.
So converting the TU to TSF had the counter wrap at (65535 << 10) TSF.
Once this wrap occurred, the nexttbtt value was very very low, much
lower than the current TSF value. At this point, the nexttbtt timer
would constantly fire, leading to the TX queue being constantly gated
open.. and when this occurred, the sender was not correctly transmitting
in its slot but just able to continuously transmit. The master would
then delay transmitting its beacon until after the air became free
(which I guess would be after the burst interval, before the next burst
interval would quickly follow) and that big delta in master beacon TX
would start causing big swings in the slot timing adjustment.
With this change, the nexttbtt value is allowed to go all the way up
to the maximum value permissible by the 32 bit representation.
I haven't yet tested it to that point; I really should. The AR5212
HAL now filters out values above 65535 TU for the beacon configuration
(and the relevant legal values for SWBA, DBA and NEXTATIM) and the
AR5416 HAL just dutifully programs in what it should.
With this, TDMA is now useful on the 802.11n chips.
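For reference, the conversion and the old truncation behaviour boil down
to this (1 TU == 1024us):
#include <sys/types.h>

static uint32_t
tu_to_tsf(uint32_t tu)
{
    return (tu << 10);              /* keep the full 32 bit result */
}

static uint32_t
tu_to_tsf_truncated(uint32_t tu)
{
    return ((tu & 0xffff) << 10);   /* the old early wrap at 65535 TU */
}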
Tested:
* AR5416, AR9280 TDMA slave
* AR5413 TDMA slave
what the maximum legal values are.
The current beacon timer configuration from TDMA wraps things at
HAL_BEACON_PERIOD-1 TU. For the 11a chips this is fine, but for
the 11n chips it's not enough resolution. Since the 11a chips have a
limit on what's "valid", just enforce this so when I do write larger
values in, they get suitably wrapped before programming.
Tested:
* AR5413, TDMA slave
Todo:
* Run it for a (lot) longer on a clear channel, ensure that no strange
slippages occur.
* Re-validate this on STA configurations, just to be sure.
After chatting with the MAC team, the TSF writes (at least on the 11n
MACs, I don't know about pre-11n MACs) are done as 64 bit writes that
can take some time. So, doing a 32 bit TSF write is definitely not
supported. Leave a comment here which explains that.
Whilst here, add a comment which outlines that after a reset or TSF
write, the TSF write may take a while (up to 50uS) to update.
A write or reset shouldn't be done whilst the previous one is in
flight. Also (and this isn't currently done) a read shouldn't
occur until the SLEEP32_TSF_WRITE_STAT is clear. Right now we're
not doing that, mostly because we haven't been doing lots of TSF
resets/writes until recently.
TSF write.
The TSF_L32 update is fine for the AR5413 (and later, I guess) 11abg NICs
however on the 11n NICs this didn't work. The TSF writes were causing
a much larger time to be skipped, leading to the timing to never
converge.
I've tested this 64 bit TSF read, adjust and write on both the
11n NICs and the AR5413 NIC I've been using for testing. It works
fine on each.
This patch allows the AR5416/AR9280 to be used as a TDMA member.
I don't yet know why the AR9280 is ~7uS accurate rather than ~3uS;
I'll look into it soon.
Tested:
* AR5413, TDMA slave (~ 3us accuracy)
* AR5416, TDMA slave (~ 3us accuracy)
* AR9280, TDMA slave (~ 7us accuracy)
on the 802.11n NICs.
The 802.11n NICs return a TBTT value that continues far past the 16 bit
HAL_BEACON_PERIOD time (in TU.) The code would constrain nextslot to
HAL_BEACON_PERIOD, but it wasn't constraining nexttbtt - the pre-11n
NICs would only return TU values from 0 -> HAL_BEACON_PERIOD. Thus,
when nexttbtt exceeded 64 milliseconds, it would not wrap (but nextslot
did) which led to a huge tsfdelta.
So until the slot calculation is converted to work in TSF rather than
a mix of TSF and TU, "make" the nexttbtt values match the TU assumptions
for pre-11n NICs.
This fixes the crazy deltatsf calculations but it doesn't fix the
non-convergent tsfdelta issue. That'll be fixed in a subsequent commit.
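The interim fix amounts to folding the wider TU value back into the
range the pre-11n slot maths assumes, roughly:
#include <sys/types.h>

static uint32_t
constrain_nexttbtt(uint32_t nexttbtt_tu, uint32_t beacon_period_tu)
{
    /* beacon_period_tu is the HAL's 16 bit TU limit (HAL_BEACON_PERIOD). */
    return (nexttbtt_tu % beacon_period_tu);
}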
encryption types.
The AR5210 only has four WEP key slots, in contrast to what the
later MACs have (ie, the keycache.) So there's no way to store a "clear"
key.
Even if the driver is taught to not allocate CLR key entries for
the AR5210, the hardware will actually attempt to decode the encrypted
frames with the (likely all 0!) WEP keys.
So for now, disable the hardware encryption entirely and just do it
all in software. That allows both WEP -and- WPA to actually work.
If someone wishes to try and make hardware WEP _but_ software WPA work,
they'll have to create a HAL capability to enable/disable hardware
encryption based on the current STA/Hostap mode. However, making
multi-vap work with one WEP and one WPA VAP will require hardware
encryption to be disabled anyway.
* For CABQ traffic, I -can- chain them together using the next pointer
and just push that particular chain head to the CABQ. However, this
doesn't magically make EDMA TX CABQ work - I have to do some further
hoop jumping.
* upon setup, tell the alq code what the chip information is.
* add TX/RX path logging for legacy chips.
* populate the tx/rx descriptor length fields with a best-estimate.
It's overly big (96 bytes when AH_SUPPORT_AR5416 is enabled)
but it'll do for now.
Whilst I'm here, add CURVNET_RESTORE() here during probe/attach as a
partial solution to fixing crashes during attach when the attach fails.
There are other attach failures that I have to deal with; those'll come
later.
* Add a new method which allows the driver to push the MAC/phy/hal info
into the logging stream.
* Add a new ALQ logging entry which logs the mac/phy/hal information.
* Modify the ALQ startup path to log the MAC/phy/hal information
so the decoder knows which HAL/chip is generating this information.
* Convert the header and mac/phy/hal information to use be32, rather than
host order. I'd like to make this stuff endian-agnostic so I can
decode MIPS generated logs on a PC.
This requires some further driver modifications to correctly log the
right initial chip information.
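The byte-order handling is just htobe32() on each field before it goes
into the ALQ stream; the structure below is illustrative, not the exact
if_ath_alq layout:
#include <sys/param.h>
#include <sys/endian.h>

struct alq_initinfo_sketch {
    uint32_t macver;
    uint32_t macrev;
    uint32_t phyrev;
    uint32_t halmagic;
};

static void
alq_fill_initinfo(struct alq_initinfo_sketch *hdr, uint32_t macver,
    uint32_t macrev, uint32_t phyrev, uint32_t halmagic)
{
    /* Everything is logged big-endian so MIPS logs decode fine on a PC. */
    hdr->macver = htobe32(macver);
    hdr->macrev = htobe32(macrev);
    hdr->phyrev = htobe32(phyrev);
    hdr->halmagic = htobe32(halmagic);
}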
Also - although no-one bar me is currently using this, I've shifted the
debug bitmask around a bit. Consider yourself warned!
This was broken by me when merging the 802.11n aggregate descriptor chain
setup with the default descriptor chain setup, in preparation for supporting
AR9380 NICs.
The corner case here is quite specific - if you queue an aggregate frame
with >1 frames in it, and the last subframe has only one descriptor making
it up, then that descriptor won't have the rate control information
copied into it. Look at what happens inside ar5416FillTxDesc() if
both firstSeg and lastSeg are set to 1.
Then when ar5416ProcTxDesc() goes to fill out ts_rate based on the
transmit index, it looks at the rate control fields in that descriptor
and dutifully sets it to be 0.
It doesn't happen for non-aggregate frames - if they have one descriptor,
the first descriptor already has rate control info.
I removed the call to ath_hal_setuplasttxdesc() when I migrated the
code to use the "new" style aggregate chain routines from the HAL.
But I missed this particular corner case.
This is a bit inefficient with MIPS boards as it involves a few redundant
writes into non-cacheable memory. I'll chase that up when it matters.
Tested:
* AR9280 STA mode, TCP iperf traffic
* Rui Paulo <rpaulo@> first reported this and has verified it on
his AR9160 based AP.
PR: kern/173636
This happens during a scan in STA mode; any queued data frames will
be power save queued but as there's no TIM in STA mode, it panics.
This was introduced by me when I disabled my driver-aware power save
handling support.
actual traffic with an AR9380/AR9382/AR9485.
The sample rate control stats would show impossibly large numbers for
"successful packets transmitted." The number was a tad under 2^64 - 1.
So after a bit of digging, I found that the sample rate control code
was making 'tries' turn into a negative number.. and this was because
ts_longretry was too small.
The hardware returns "ts_longretry" at the current rate selection,
not overall for that TX descriptor. So if you setup four TX rate
scenarios and the second one works, ts_longretry is only set for
the number of attempts at that second rate scenario. The FreeBSD HAL
code does the correction in ath_hal_proctxdesc() - however, this isn't
possible with EDMA.
EDMA TX completion is done separate from the original TX descriptor.
So the real solution is to split out "find ts_rate and ts_longretry"
from "complete TX descriptor". Until that's done, put a hack in
the EDMA TX path that uses the rate scenario information in the ath_buf.
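The hack is roughly of this shape (field names follow my reading of the
rate scenario layout and aren't re-verified here; it assumes the driver's
ath_tx_status and ath_rc_series definitions are in scope):
static void
edma_fix_longretry(struct ath_tx_status *ts, const struct ath_rc_series *rc)
{
    int i;

    /* Add the attempts burned on the earlier, failed rate series. */
    for (i = 0; i < ts->ts_finaltsi; i++)
        ts->ts_longretry += rc[i].tries;
}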
Tested: AR9380, AR9382, AR9485 STA mode
events.
This is primarily for the TX EDMA and TX EDMA completion. I haven't yet
tied it into the EDMA RX path or the legacy TX/RX path.
Things that I don't quite like:
* Make the pointer type 'void' in ath_softc and have if_ath_alq*()
return a malloc'ed buffer. That would remove the need to include
if_ath_alq.h in if_athvar.h.
* The sysctl setup needs to be cleaned up.
I'm using this to debug EDMA TX and RX descriptors and it's really helpful
to have a non-printf() way to decode frames.
I won't link this into the build until I've tidied it up a little more.
This will eventually be behind ATH_DEBUG_ALQ.
ps-poll is totally broken in its current form.
This should unbreak things enough to let people use PS-POLL devices,
but leave it in place for me to finish PS-POLL handling.
the non-aggregate path.
I "cheated" by using some TX setup code in our HAL that isn't present
in the atheros HAL (or Linux ath9k.)
The old path for forming aggregates was:
* setup the rate control in the first descriptor;
* call chaintxdesc() on all the frames;
* call setupfirsttxdesc() on the first descriptor in the first
frame;
* call setuplasttxdesc() on the last descriptor in the last frame.
The new path for forming aggregates looks like the non-aggregate path:
* call setuptxdesc() on the first descriptor in the first frame;
* setup the rate control in the first descriptor;
* call filltxdesc() on each descriptor in the frame;
* if it's an aggregate - call set11n_aggr_{first, middle, last} as
appropriate (see the code for a description of what is "appropriate".)
Now, this is done primarily for the AR9300 HAL - it doesn't implement
the first set of aggregate functions. It just has the older methods
and the "first/middle/last" aggregate methods. So, let's convert the
code to use these.
Note: the AR5416 HAL in FreeBSD had that code (from me, a while ago)
and a previous commit brought it up to behave the same as the AR9300
HAL routines.
There's some further tidyups to be done - specifically, avoid doing
multiple calls to the 11n descriptor functions. I shouldn't call
clr11n_aggr(), then set11n_aggr_middle(), then also set11n_aggr_first().
On (at least MIPS) the TX descriptors are in non-cacheable memory and
this will cause multiple slow writes.
I'll debug/tidy that up in a future commit.
Tested:
* AR9280, STA
* AR9280/AR9160, AP
* AR9380, STA (using a local, closed source HAL, sorry!)
them, please let me know if not). Most of these are of the form:
static const struct bzzt_type {
[...list of members...]
} const bzzt_devs[] = {
[...list of initializers...]
};
The second const is unnecessary, as arrays cannot be modified anyway,
and if the elements are const, the whole thing is const automatically
(e.g. it is placed in .rodata).
I have verified this does not change the binary output of a full kernel
build (except for build timestamps embedded in the object files).
Reviewed by: yongari, marius
MFC after: 1 week
* don't poke ath_hal_txstart() if nothing was pushed into the FIFO during
the refill process;
* shuffle around the TX debugging output a little so it's logged at
TX hardware enqueue;
* Add logging of the TX status processing.
of small (< 256 byte) aggregate frames.
This needs to be done or 11n aggregation TX just simply doesn't work
on these NICs.
Whilst here, extend some debug printing; I was using this whilst
debugging the TX power setup in the TX descriptor(s) on the AR9380.
* introduce a new HAL API method to pull out the TX status descriptor
contents.
* Add num_delims to the 11n first aggr method. This isn't used by the
driver at the moment so it won't affect anything.
* Add some more ANI spur immunity levels.
* For AR5111 radios attached to an AR5212, limit the 5GHz channels
that are available. A later revision of the AR5111 supports the 4.9GHz
PSB channels but right now there's no check in place for the radio
revision.
If someone wants PSB support on AR5212+AR5111 radios then please let
me know and I'll add the relevant version check.
Obtained from: Qualcomm Atheros
the internet as "AR9380 and later which didn't get its PCI ID written
in at power-on", so it's hardly an unknown constant.
Obtained from: Qualcomm Atheros
in some very degenerate conditions.
However, until ath_rate_form_aggr() is taught to not form aggregates
if ANY selected rate is non-MCS, this can't yet be enabled.
So, just add a comment.
I've tried serialising TX using queues and such but unfortunately
due to how this interacts with the locking going on elsewhere in the
networking stack, the TX task gets delayed, resulting in quite a
noticeable throughput loss:
* baseline TCP for 2x2 11n HT40 is ~ 170mbit/sec;
* TCP for TX task in the ath taskq, with the RX also going on - 80mbit/sec;
* TCP for TX task in a separate, second taskq - 100mbit/sec.
So for now I'm going with the Linux wireless stack approach - lock TX
early. The Linux code does this in the wireless stack, before the 802.11
state stuff happens and before it's punted to the driver.
But TX locking needs to also occur at the driver layer as the TX
completion code _also_ begins to drain the ifnet TX queue.
Whilst I'm here, add some KTR traces for the TX path.
Note:
* This really should be done at the net80211 layer (as well, at least.)
But that'll have to wait for a little more thought to happen.
the power save queue.
* introduce some new ATH_NODE lock protected fields, tracking the
net80211 psq and TIM state;
* when doing buffer transitions - ie, when sending and completing
buffers - check the state of the SWQ and update the TIM appropriately.
* when clearing the TIM bit, if the SWQ is not empty then delay clearing
it.
This is racy, but it's no less racy than the current net80211 power
save queue management code. Specifically, with multiple TX threads,
it's quite plausible that parallel state updates will race and the
TIM will be left in an inconsistent state. I'll address that in
a follow-up commit.
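As a rough illustration of the buffer-transition check above - the helper
and the an_tim_set/an_swq_depth field names are hypothetical, and I'm
assuming net80211's iv_set_tim vap method here:
static void
ath_tx_node_update_tim(struct ieee80211_node *ni)
{
        struct ath_node *an = ATH_NODE(ni);

        /* Caller holds the ATH_NODE lock protecting the psq/TIM state. */
        if (an->an_swq_depth > 0 && ! an->an_tim_set) {
                /* Frames pending in the SWQ - make sure the TIM bit is set. */
                an->an_tim_set = 1;
                ni->ni_vap->iv_set_tim(ni, 1);
        } else if (an->an_swq_depth == 0 && an->an_tim_set) {
                /* Only clear the TIM bit once the SWQ has fully drained. */
                an->an_tim_set = 0;
                ni->ni_vap->iv_set_tim(ni, 0);
        }
}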
support with ath(4) and VIMAGE.
Right now the VIMAGE code doesn't supply a default vnet context during:
* hotplug attach;
* any device detach.
It special cases kldload/boot time probing (by setting the context to
vnet0) but that doesn't occur when probing devices during a bus rescan -
eg, adding a cardbus card.
These will eventually go away when the VIMAGE support extends to providing
default contexts to hotplug attach/detach.
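As a sketch of the work-around (the bus-glue details are elided and the
wrapper name is mine; ath_attach() is the driver's usual attach entry
point), the hotplug path just borrows vnet0 for the duration:
#include <net/vnet.h>

static int
ath_hotplug_attach(struct ath_softc *sc, uint16_t devid)
{
        int error;

        /*
         * Hotplug attach (eg a cardbus rescan) arrives without a vnet
         * context, so temporarily supply vnet0 around the driver attach,
         * which calls down into the ifnet/net80211 attach paths.
         */
        CURVNET_SET(vnet0);
        error = ath_attach(devid, sc);
        CURVNET_RESTORE();

        /* The detach path needs the same CURVNET_SET()/CURVNET_RESTORE() wrapping. */
        return (error);
}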
fragment rate lookups correctly, add a comment describing exactly that.
The assumption in the fragment duration code is that the next fragment
will be sent at the same rate as the current fragment, so its duration
can be calculated from that rate. But I think a rate lookup is being done
for _each_ fragment. For older pre-sample rate control this would almost
always be the case, but for sample it may be incorrect more often than
correct.
stashed away in ath_node.
As much as I tried to stuff that behind the ATH_NODE lock, unfortunately
the locking is just too plain hairy (for me! And I wrote it!) to do
cleanly. Hence using atomics here instead of a lock. The ATH_NODE lock
just isn't currently used anywhere besides the rate control updates.
If in the future everything gets migrated back to using a single ATH_NODE
lock or a single global ATH_TX lock (ie, a single TX lock for all TX and
TX completion) then fine, I'll remove the atomics.
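Assuming, purely for illustration, that the state in question is a simple
per-node counter (the field name below is hypothetical), the atomic(9)
usage is just:
#include <machine/atomic.h>

/* Hypothetical field in struct ath_node: u_int an_swq_depth; */

static __inline void
ath_node_swq_inc(struct ath_node *an)
{
        atomic_add_int(&an->an_swq_depth, 1);
}

static __inline void
ath_node_swq_dec(struct ath_node *an)
{
        atomic_subtract_int(&an->an_swq_depth, 1);
}

static __inline u_int
ath_node_swq_depth(struct ath_node *an)
{
        return (atomic_load_acq_int(&an->an_swq_depth));
}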
it run out of multiple concurrent contexts.
Right now the ath(4) TX processing is a bit hairy. Specifically:
* It was running out of ath_start(), which could occur from multiple
concurrent sending processes (as if_start() can be called from multiple
sending threads nowadays.. sigh)
* during RX if fast frames are enabled (so not really at the moment, not
until I fix this particular feature again..)
* during ath_reset() - so anything which calls that
* during ath_tx_proc*() in the ath taskqueue - ie, TX is attempted again
after TX completion, as there's now hopefully some ath_bufs available.
* Then, the ic_raw_xmit() method can queue raw frames for transmission
at any time, from any net80211 TX context. Ew.
This has caused packet ordering issues in the past - specifically,
there's absolutely no guarantee that preemption won't occur _during_
ath_start(), with the TX completion processing then calling ath_start()
again. It's a mess - 802.11 really, really wants things to be in
sequence or things go all kinds of loopy.
So:
* create a new task struct for TX'ing;
* make the if_start method simply queue the task on the ath taskqueue;
* make ath_start() just be called by the new TX task;
* make ath_tx_kick() just schedule the ath TX task, rather than directly
calling ath_start().
Now yes, this means that I've taken a step backwards in terms of
concurrency - TX -and- RX now occur in the same single-task taskqueue.
But there's nothing stopping me from separating out the TX / TX completion
code into a separate taskqueue which runs in parallel with the RX path,
if that ends up being appropriate for some platforms.
This fixes the CCMP/seqno concurrency issues that creep up when you
transmit large amounts of uni-directional UDP traffic (>200MBit) on a
FreeBSD STA -> AP, as now there's only one TX context no matter what's
going on (TX completion->retry/software queue,
userland->net80211->ath_start(), TX completion -> ath_start());
but it won't fix any concurrency issues between raw transmitted frames
and non-raw transmitted frames (eg EAPOL frames on TID 16 and any other
TID 16 multicast traffic that gets put on the CABQ.) That is going to
require a bunch more re-architecture before it's feasible to fix.
In any case, this is a big step towards making the majority of the TX
path locking irrelevant, as now almost all TX activity occurs in the
taskqueue.
Phew.
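A minimal sketch of the deferred dispatch described above - the task,
field and function names here are illustrative rather than the exact ones
in the driver:
static void
ath_tx_task(void *arg, int npending)
{
        struct ath_softc *sc = arg;

        /* All TX now funnels through here, in the ath taskqueue. */
        ath_start(sc->sc_ifp);
}

static void
ath_start_queue(struct ifnet *ifp)
{
        struct ath_softc *sc = ifp->if_softc;

        /* if_start: just punt the work to the taskqueue. */
        taskqueue_enqueue(sc->sc_tq, &sc->sc_txtask);
}

/* At attach time: TASK_INIT(&sc->sc_txtask, 0, ath_tx_task, sc); */
/* ath_tx_kick() similarly just enqueues sc_txtask instead of calling ath_start(). */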
Right now processing a full 512 frame queue takes quite a while (measured
on the order of milliseconds.) Because of this, the TX processing ends up
sometimes preempting the taskqueue:
* userland sends a frame
* it goes in through net80211 and out to ath_start()
* ath_start() will end up either direct dispatching or software queuing a
frame.
If TX had to wait for RX to finish, it would add quite a few ms of
additional latency to the packet transmission. This in the past has
caused issues with TCP throughput.
Now, as part of my attempt to bring sanity to the TX/RX paths, the first
step is to make the RX processing happen in smaller 'parts'. That way
when TX is pushed into the ath taskqueue, there won't be so much latency
in the way of things.
The bigger scale change (which will come much later) is to actually
process the RX descriptors in the ath_intr taskqueue but process the
received _frames_ themselves in the ath driver taskqueue. That would
reduce the latency between processing and requeuing new descriptors.
But that'll come later.
The actual work:
* Add ATH_RX_MAX at 128 (static for now);
* break out of the processing loop if npkts reaches ATH_RX_MAX;
* if we processed ATH_RX_MAX or more frames during the processing loop,
immediately reschedule another RX taskqueue run. This will handle
the further frames in the taskqueue.
This should have very minimal impact on the general throughput case,
unless the scheduler is being very very strange or the ath taskqueue
ends up spending a lot of time on non-RX operations (such as TX
completion.)
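A rough sketch of the bounded pass - the per-frame helper here is
illustrative, standing in for the existing descriptor/frame processing:
#define ATH_RX_MAX      128

static void
ath_rx_proc(struct ath_softc *sc, int resched)
{
        int npkts = 0;

        /* Process at most ATH_RX_MAX frames in one pass. */
        while (ath_rx_one_frame(sc)) {
                if (++npkts >= ATH_RX_MAX)
                        break;
        }

        /*
         * If the pass was cut short, immediately requeue the RX task so
         * the remaining frames are handled without waiting for another
         * interrupt, while still letting other work (eg TX completion)
         * interleave.
         */
        if (resched && npkts >= ATH_RX_MAX)
                taskqueue_enqueue(sc->sc_tq, &sc->sc_rxtask);
}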
the ATH_TXQ_* macros.
* Introduce the new macros;
* rename the TID queue and TID filtered frame queue so the compiler
tells me I'm using the wrong macro.
These should correspond 1:1 to the existing code.
AR5416 and AR9280, but leave it disabled by default.
TL;DR: don't enable this code at all unless you go through the process
of getting the NIC re-certified. This is purely to be used as a
reference and NOT a certified solution by any stretch of the imagination.
The background:
The AR5112 RF synth, right up to the AR5133 RF synth (used on the AR5416;
a derivative is used for the AR9130/AR9160), only implements down to 2.5MHz
channel spacing in 5GHz. Ie, the RF synth is programmed in steps of 2.5MHz
(or 5, 10, 20MHz.) So they can't represent the quarter rate channels
in the 4.9GHz PSB (which end in xxx2MHz and xxx7MHz). They support
fractional spacing in 2GHz (1MHz spacing) (or things wouldn't work,
right?)
So instead of doing this, the RF synth programming for the AR5112 and
later code will round to the nearest available frequency.
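For example, the rounding is just snapping to the nearest 2.5MHz step
(a standalone illustration, not the HAL's code):
/* Snap a 5GHz centre frequency (in kHz) to the nearest 2.5MHz synth step. */
static uint32_t
synth_round_khz(uint32_t freq_khz)
{
        const uint32_t step_khz = 2500;

        return (((freq_khz + step_khz / 2) / step_khz) * step_khz);
}

/*
 * eg a 4922MHz quarter-rate PSB channel (4922000kHz) becomes 4922500kHz -
 * the 500KHz offset that the AR9280 "fudge" mentioned below has to match.
 */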
If all NICs were RF5112 or later, they'd inter-operate fine - they all
program the same. (And for reference, only the latest revision of the
RF5111 NICs do it, but the driver doesn't yet implement the programming.)
However:
* The AR5416 programming didn't at all implement the fractional synth
work around as above;
* The AR9280 programming actually programmed the accurate centre frequency
and thus wouldn't inter-operate with the legacy NICs.
So this patch:
* Implements the 4.9GHz PSB fractional synth workaround, exactly as the
RF5112 and later code does;
* Adds a very dirty workaround from me to calculate the same channel
centre "fudge" to the AR9280 code when operating on fractional frequencies
in 5GHz.
HOWEVER however:
It is disabled by default. Since the HAL didn't implement this feature,
it's highly unlikely that the AR5416 and AR928x have been tested in these
centre frequencies. There's a lot of regulatory compliance testing required
before a NIC can have this enabled - checking for centre frequency,
for drift, for synth spurs, for distortion and spectral mask compliance.
There's likely a lot of other things that need testing so please don't
treat this as an exhaustive, authoritative list. There's a perfectly good
process out there to get a NIC certified by your regulatory domain, please
go and engage someone to do that for you and pay the relevant fees.
If a company wishes to grab this work and certify existing 802.11n NICs
for work in these bands then please be my guest. The AR9280 works fine
on the correct fractional synth channels (49x2 and 49x7MHz) so you don't
need to get certification for that. But the 500KHz offset hack may have
the above issues (spur, distortion, accuracy, etc) so you will need to
get the NIC recertified.
Please note that it's also CARD dependent. Just because the RF synth
will behave correctly doesn't at all mean that the card design will also
behave correctly. So no, I won't enable this by default if someone
verifies a specific AR5416/AR9280 NIC works. Please don't ask.
Tested:
I used the following NICs to do basic interoperability testing at
half and quarter rates. However, I only did very minimal spectrum
analyser testing (mostly "am I about to blow things up" testing;
not "certification ready" testing):
* AR5212 + AR5112 synth
* AR5413 + AR5413 synth
* AR5416 + AR5113 synth
* AR9280
net80211 node power save state.
* Add an ATH_NODE_UNLOCK_ASSERT() check
* Add a new node field - an_is_powersave
* Pause/unpause the queue based on the node state
* Attempt to handle net80211 concurrency issues so the queue
doesn't get paused/unpaused more than once at a time from
the net80211 power save code.
Whilst here (and breaking my usual rule), set CLRDMASK when a queue
is unpaused, regardless of whether the queue has some pending traffic.
This means the first frame from that TID (now or later) will have
CLRDMASK set.
Also whilst here, bump the swretrymax counters whenever the
filtered frames code expires a frame. Again, breaking my rule, but
this is just a statistics thing rather than a functional change.
This doesn't fix ps-poll (but it doesn't break it too much worse
than it is at the present) or correcting the TID updates.
That's next on the list.
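A hedged sketch of that single pause/unpause transition - the handler
shape and the pause/resume helper names are illustrative, not a claim
about the exact code:
static void
ath_node_powersave(struct ath_softc *sc, struct ath_node *an, int enable)
{
        int changed = 0;

        /* Flip an_is_powersave under the ATH_NODE lock .. */
        ATH_NODE_LOCK(an);
        if ((enable != 0) != an->an_is_powersave) {
                an->an_is_powersave = (enable != 0);
                changed = 1;
        }
        ATH_NODE_UNLOCK(an);

        /*
         * .. but pause/unpause with the node lock dropped, since those
         * paths take the TX queue locks. Acting only on a real state
         * transition is what stops the queues being paused/unpaused more
         * than once.
         */
        if (changed && enable)
                ath_tx_node_pause(sc, an);
        else if (changed && ! enable)
                ath_tx_node_resume(sc, an);
}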
Tested:
* AR9220 AP (Atheros AP96 reference design)
* Macbook Pro and LG Optimus 1 Android phone, both setting
and clearing power save state (but not using PS-POLL.)
This doesn't specifically fix the issue(s) I'm seeing in this 2GHz
environment (where setting/increasing spur immunity causes OFDM restart
errors to skyrocket through the roof; but leaving it at 0 would leave
the environment cleaner..)
Pointy-hat-to: me, for committing this broken code in the first place.
things like EAPOL frames make it out.
After a whole bunch of hacking/testing, I discovered that they weren't
being early-dropped by the stack (but I should look at ensuring that
later..) but were even making it to the hardware transmit queue.
They were mostly even being received by the remote end. However, the
remote end was completely ignoring them.
This didn't happen under 150-170MBit TCP tests as I'm guessing the TX
queue stayed very busy and the STA didn't do any scanning. However, when
doing 100Mbit/s of TCP traffic, the STA would do background scanning -
which involves it coming in and out of powersave mode with the AP.
Now, this is a total and utter hack around the real problems, which are:
* I need to implement proper power save handling and integrate it into
the filtered frames support, so the driver/stack doesn't send frames
whilst the station is actually in sleep;
* .. but frames were actually making it to the STA (macbook pro) and
the AP did receive an ACK; but a tcpdump on the receiving side showed
the EAPOL frame never made it. So the stack was dropping it for
some reason;
* Importantly - the EAPOL frames are currently going into the non-QoS
TID, which maps to the BE queue and is susceptible to that queue being
busy doing other things, but;
* There's other traffic going on in the non-QoS TID from other contexts
when scanning is going on and it's possible there's some races causing
sequence number/IV issues, but;
* Importantly importantly, I think the interaction with TID 16 multicast
traffic in power save mode is causing issues - since I -believe- the
sequence number space being used by the EAPOL frames on TID 16 overlaps
with the multicast frames that have sequence numbers allocated and
are then stuffed on the cabq. Since with EAPOL frames being in TID 16
and queued to the BE queue, it's going to be waiting to be serviced
with all of the aggregate traffic going on - and if the CABQ gets
emptied beforehand, those TID 16 multicast frames with sequence numbers
will go out beforehand.
Now, there's quite likely a bunch of "stuff happening slightly out of
sequence" going on due to the nature of the TX path (read: lots of
overlapping and concurrent ath_start() and ath_raw_xmit() calls going
on, sigh) but I thought I had caught them all and stuffed each TID TX
behind a lock (that lasted as long as it needed to in order to get
the frame onto the relevant destination queue - thus keeping things
in order.)
Unfortunately the last problem is the big one and I'm going to stare at
it some more. If it _is_
So this is a work around for now to ensure that EAPOL frames actually
make it out before any other stuff in the non-QoS TID and HOPEFULLY
before the CABQ gets active.
I'm now going to spend a little time in the TX path figuring out exactly
why the sender is rejecting things. There are two (well, three if you count
EAPOL contents invalid) possibilities:
* The sequence number is out of order (ie, something else like the multicast
traffic on CABQ) is going out first on TID 16;
* The CCMP IV is out of order (similar to above - but less likely, as the
TX key for multicast traffic is different to unicast traffic);
* EAPOL contents strangely invalid.
AP: Ubiquiti RSPRO, AR9160/AR9220 NICs
STA: Macbook Pro, Broadcom 11n NIC
lock may be held.
Kim reported that the TID lock wasn't held when ath_tx_update_clrdmask()
was called - well, the lock for the underlying hardware TXQ for that TID.
I'm betting it's the cabq stuff. ath_tx_xmit_normal() can be called
for both the real and the software cabq. For the software cabq, the real
destination txq is different from the TID's txq, so the lock check will fail.
Reported by: Kim Culhan <w8hdkim@gmail.com>
This should eventually be unified with ATH_DEBUG() so I can get both
from one macro; that may take some time.
Add some new probes for TX and TX completion.
* use the correct frame status - although the completion descriptor is
the _last_ in the frame/aggregate, the status is currently stored in
the _first_ buffer.
* Print out ath_buf specific fields once, not per descriptor in an ath_buf.
it's disabled.
The previous commit to enable CLRDMASK setting didn't do it at all
correctly for non-aggregate sessions - so the CLRDMASK bit would be
cleared and never re-set.
* move ath_tx_update_clrdmask() to be called by functions that setup
descriptors and queue frames to the hardware, rather than scattered
everywhere.
* Force CLRDMASK to be set on all non-aggregate session frames being
transmitted.
* Use ath_tx_normal_comp() now on non-aggregate session frames
that are queued via ath_tx_xmit_normal(). That way the TID hwq is
updated and they can trigger (eventual) filter frame queue resets
and software retransmits.
There's still a bit more work to do in this area to reverse the silly
short-sightedness on my part, however it's likely going to be better
to fix this now than just reverting the patch.
Thanks to people on the freebsd-wireless@ mailing list for promptly
pointing this out.
frames to occur.
* Create a new function which will set the bf_flags CLRDMASK bit
if required.
* For raw frames, always set CLRDMASK.
* For BAR, ADDBA frames, always set CLRDMASK.
* For everything else, check if CLRDMASK needs to be set before
calling tx_setds() or tx_setds11n().
* When unpausing a queue or drain/resetting it, set tid->clrdmask=1
just to ensure traffic starts flowing.
What I need to do:
* Modify that function to _clear_ the CLRDMASK if it's not required,
or retried frames may have CLRDMASK set when they don't need to.
(Which isn't a huge deal, but..)
Whilst I'm here:
* ath_tx_normal_xmit() should really act like the AMPDU session TX
functions - any incomplete frames will end up being assigned
ath_tx_normal_comp() which will decrement tid->hwq_depth - but that
won't have been incremented.
So whilst I'm here, add a comment to do that.
* Fix the debug print function to be slightly clearer about things;
it's not a good sign when I can't interpret my own debugging output.
I've done some testing on AR9280/AR5416/AR9160 STA and AP modes.
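A hedged sketch of the CLRDMASK helper described above - the flag and
field names follow the HAL/driver style but should be read as illustrative:
static void
ath_tx_update_clrdmask(struct ath_tid *tid, struct ath_buf *bf)
{
        /* Caller holds the TID (ie, destination hardware TXQ) lock. */
        if (tid->clrdmask == 1) {
                /* Ask the hardware to clear its "filter to this node" state. */
                bf->bf_state.bfs_txflags |= HAL_TXDESC_CLRDMASK;
                tid->clrdmask = 0;
        }
}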
stack.
There are unfortunately quite a few odd cases in BAR TX and BAR TX
retransmission that I haven't yet fully diagnosed. So for now, add
this work-around so the resume() function isn't called too often,
decrementing the pause count to -1 (and causing things to stay paused.)
is done.
The aggregate path was definitely accessing 'ts' before it was actually
being assigned.
This had the side effect of over-filtering frames, since occasionally that
bit would be '1'.
Whilst here, do the same thing in the non-aggregate completion function -
as calling the filter function may also invalidate bf.
Pointy hat to: adrian, for not noticing this over many, many code reviews.
The hardware can optionally "filter" frames if successive transmissions
to a given node (ie, "entry in the keycache") fail. That way the hardware
can implement a kind of early abort of all the other frames queued to
that destination, rather than simply trying to TX each frame to that
destination (and failing.)
The background:
* If a frame comes back as being filtered, the hardware didn't try to
TX it (or it was outside the TX burst opportunity.) So, take it as a hint
that some (but not all, see below) frames to the destination may be
filtered.
* If the CLRDMASK bit is set in a TX descriptor, the "filter to this
destination" bit in the keycache entry is cleared and TX to that host
will be unconditionally retried.
* Right now everything has the CLRDMASK bit set, so filtered frames
tend to be aggregates and frames that fall outside of the WME burst
window. It was a bit worse in the past as I had messed up the TX
flags and CLRDMASK wasn't being set on aggregate frames.
The annoying bits:
* It's easy (ish) to do for aggregate session frames - firstly, they
can be retried in any order as long as they're within the BAW, and
there's already a bunch of infrastructure tracking how many frames
the TID has queued to the hardware (tid->hwq_depth.) However, for
frames that bypassed the software queue, hwq_depth doesn't get
incremented. I'll fix that in a subsequent commit.
* For non-aggregate session frames, the only retries that can occur
are ones for sequence numbers that haven't successfully been TXed yet.
Since there's no re-ordering going on in non-aggregate sessions, if any
subsequent seqno frames make it out, any filtered frames before that
seqno need to be dropped.
Hence why this initially is just for aggregate session frames.
* Since there may be intermediary frames to the destination that
have CLRDMASK set - for example, any directly dispatched management
frames to that destination - it's possible that there will be some
filtered frames followed up by some non filtered frames. Thus,
it can't be assumed that once you see a filtered frame for the given
destination node, all subsequent frames for all TIDs will be filtered.
Ok, with that in mind:
* Create a per-TID filtered frame queue for frames that the hardware
returns as filtered.
* Track filtered frames per-tid, rather than per-node. It just makes
the locking much easier.
* When a filtered frame appears in the completion function, the node
transitions to "filtered", and all subsequent completed error frames
(filtered or otherwise) are put on the filtered frame queue. The TID
is paused once (during the transition from non-filtered to filtered).
* If a filtered frame retry count exceeds SWMAX_RETRIES, a BAR should be
sent.
* Once all the frames queued to the hardware for the given filtered-frame
TID have completed, transition back from filtered to non-filtered, which
means pre-pending all the filtered frames onto the head of the software
queue, clearing the filtered frame state and unpausing the TID (a rough
sketch of that transition follows this list.)
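Roughly, that filtered -> non-filtered transition looks like the
following - the queue and field names are illustrative (the real code
goes through the driver's TID queue macros):
static void
ath_tx_tid_filt_complete(struct ath_softc *sc, struct ath_tid *tid)
{
        struct ath_buf *bf;

        /* Caller holds the TID (hardware TXQ) lock. */
        if (tid->hwq_depth != 0 || ! tid->isfiltered)
                return;

        /*
         * Walk the filtered frame queue from the tail, pushing each entry
         * onto the head of the software queue; this preserves the original
         * frame ordering.
         */
        while ((bf = TAILQ_LAST(&tid->filtq, ath_bufhead)) != NULL) {
                TAILQ_REMOVE(&tid->filtq, bf, bf_list);
                TAILQ_INSERT_HEAD(&tid->tid_q, bf, bf_list);
        }

        tid->isfiltered = 0;
        ath_tx_tid_resume(sc, tid);     /* unpause the TID */
}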
Things get quite hairy around handling completion (aggr, non-aggr, norm,
direct-dispatched frames to a hardware queue); whether it's an "error",
"cleanup" or "BAR" state as well as filtered, which order to do things
in (eg do filtered BEFORE checking for BAR, as the filter completion
may be needed to actually transmit a BAR frame.)
This work has definitely reminded me that I have to tidy up all the locking
and remove some of the ridiculous lock/unlock/lock/unlock going on in the
completion functions.
It's also reminded me that I should really split out TID versus hardware TXQ
locking, even if the underlying locking is still the destination hardware TXQ.
Finally, this is all pre-requisite for working on AP mode power save support
(PS-POLL, uAPSD) as well as improving performance to misbehaving nodes (as
they can transition into filter mode, stopping any TX until everything has
caught up.)
Finally (ish) - this should also be done for non-aggregate sessions as
there are still plenty of laptops and mobile devices that don't speak
802.11n but do wish for stable, useful power save AP support where packets
aren't simply dropped. This requires software retransmission for
non-aggregate sessions to be implemented, which includes the caveats I've
mentioned above.
Finally finally - this doesn't yet do anything about the CLRDMASK bit in the
TX descriptor. That's still unconditionally set to 1. I'll debug the
current work (mostly ensuring I haven't busted up the hairy transitions
between BAR, filtered, error (all frames in an aggregate failing) and
cleanup (when transitioning from aggregation -> non-aggregation.))
Finally finally finally - this is all original work by yours truly, rather
than ported from the Atheros internal driver codebase or Linux ath9k.
Tested:
* AR9280, AR5416 in STA mode
* AR9280, AR9130 in hostap mode
* Lots and lots of iperf testing in very marginal and non-marginal conditions,
complete with inducing filtered frames + BAR TX conditions.
These are intended for software TX filtering support, where the NIC
decides there have been too many successive failures to a destination
and will filter it.
Although the filtering is done per-destination (via the keycache),
the state and queue is kept per-TID for now. It simplifies the overall
architecture design and locking.
Whilst here, add ATH_TID_UNLOCK_ASSERT().
* Don't treat high percentage failures as "successive failures" - high
MCS rates are very picky and will quite happily "fade" from low
to high failure % and back again within a few seconds. If they really
don't work, the aggregate will just plain fail.
* Only sample MCS rates +/- 3 from the current MCS. Sample will back off
quite quickly, so there's no need to sample _all_ MCS rates between
a high MCS rate and MCS0; there may be a lot of them.
* Modify the smoothing rate to be 75% rather than 95% - it's more adaptive
but it comes with a cost of being slightly less stable at times.
A per-node, hysteresis behaviour would be nicer.
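Illustratively, the smoothing change is just the EWMA weighting (not the
sample module's exact fixed-point arithmetic):
/* 75% of the history is kept; each new sample contributes 25%. */
static int
sample_smooth(int old_avg, int new_sample)
{
        const int smoothing_rate = 75;

        return ((old_avg * smoothing_rate +
            new_sample * (100 - smoothing_rate)) / 100);
}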
I'm not sure where in the deep, distant past I found the AR_PHY_MODE
registers for half/quarter rate mode, but unfortunately that doesn't
seem to work "right" for non-AR9280 chips.
Specifically:
* don't touch AR_PHY_MODE
* set the PLL bits when configuring half/quarter rate
I've verified this on the AR9280 (5ghz fast clock) and the AR5416.
The AR9280 works in both half/quarter rate; the AR5416 unfortunately
only currently works at half rate. It fails to calibrate on quarter rate.
No, this isn't HT/5 and HT/10 support. This is the 11a half/quarter
rate support primarily used by the 4.9GHz and GSM band regulatory
domains.
This is definitely a work in progress.
TODO:
* everything in the last commit;
* lots more interoperability testing with the AR5212 half/quarter rate
support for the relevant chips;
* Do some interop testing on half/quarter rate support between _all_
the 11n chips - AR5416, AR9160, AR9280 (and AR9285/AR9287 when 2GHz
half/quarter rate support is coded up.)
used when running the chips in half/quarter rate.
This sets up some default parameters which are then overridden by the
driver (which manually configures things like slot timing at interface
start time.)
Although this is a copy-and-modify from the AR5212 HAL, I did peek
at the reference HAL and the ath9k driver to see what they did.
Ath9k in particular doesn't hard-code this - instead, their version
of ar5416InitUserSettings() does all of the relevant math.
TODO:
* do the math, not hard code things!
* fix the MAC clock calculation for the AR9287, since it runs the
MAC clock at a higher rate, requiring all the duration calculations
to change;
* Do a whole lot more validation for half/quarter rates.
Obtained from: Qualcomm Atheros, Linux ath9k
Some of the math is a little wrong thanks to clocks in 11a mode running
at 44MHz when in fast clock mode (rather than 40MHz, which the chips
before AR9280 ran 11a in). That'll have to be addressed in a future commit.
This fixes the incorrect slot (and likely ACK/RTS timeout) values
which I see when enabling half/quarter rate support on the AR9280.
The resulting math matches the expected calculated default values.
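For reference, the underlying conversion is just microseconds scaled by
the MAC clock - a simplified sketch (the HAL's real routine covers more
cases than this):
/* usec -> MAC clock ticks for an 11a channel (simplified). */
static u_int
mac_usec_to_clks(u_int usecs, int fast_clock, int is_half, int is_quarter)
{
        u_int clk_mhz = fast_clock ? 44 : 40;   /* AR9280+ fast clock vs older chips */

        if (is_half)
                clk_mhz /= 2;
        else if (is_quarter)
                clk_mhz /= 4;

        return (usecs * clk_mhz);
}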