* Grab the reset lock first, so any subsequent interrupt, TX, RX work
will fail
* Then shut down interrupts
* Then wait for TX/RX to finish running
At this point no further work will be running, so it's safe to do the
reset path code.
PR: kern/179232
The main problem here is that fast and driver RX diversity isn't actually
configured; I need to figure out why that is. That said, this makes
the single-antenna connected AR9285 and AR2427 (AR9285 w/ no 11n) work
correctly.
PR: kern/179269
and if queue mechanism; also fix up (non-11n) TX fragment handling.
This may result in a bit of a performance drop for now but I plan on
debugging and resolving this at a later stage.
Whilst here, fix the transmit path so fragment transmission works.
The TX fragmentation handling is a bit more special. In order to
correctly transmit TX fragments, there's a bunch of corner cases that
need to be handled:
* They must be transmitted back to back, in the same order..
* .. ie, you need to hold the TX lock whilst transmitting this
set of fragments rather than interleaving it with other MSDUs
destined to other nodes;
* The length of the next fragment is required when transmitting, in
order to correctly set the NAV field in the current frame to the
length of the next frame; which requires ..
* .. that we know the transmit duration of the next frame, which ..
* .. requires us to use the same rate for all fragments, or make the
rate decision up-front, etc.
To facilitate this, I've added a new ath_buf field to describe the
length of the next fragment. This avoids having to keep the mbuf
chain together. This used to work before my 11n TX path work because
the ath_tx_start() routine would be handed a single mbuf with m_nextpkt
pointing to the next frame, and that would be maintained all the way
up to when the duration calculation was done. This doesn't hold
true any longer - the actual queuing may occur at any point in the
future (think ath_node TID software queuing) so this information
needs to be maintained.
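As a rough sketch of that bookkeeping (the structure and field names here
are illustrative stand-ins, not necessarily the exact driver symbols):

    #include <stddef.h>

    /*
     * Sketch: stamp each buffer in a fragment burst with the length of
     * the *next* fragment, so the NAV/duration maths can be done later,
     * long after the m_nextpkt chain has been pulled apart.
     */
    struct frag_buf {
        struct frag_buf *bf_next;   /* next fragment in this burst */
        int bf_pktlen;              /* length of this fragment */
        int bf_nextfraglen;         /* length of the next fragment, 0 if last */
    };

    static void
    stamp_frag_lengths(struct frag_buf *first)
    {
        struct frag_buf *bf;

        for (bf = first; bf != NULL; bf = bf->bf_next)
            bf->bf_nextfraglen =
                (bf->bf_next != NULL) ? bf->bf_next->bf_pktlen : 0;
    }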
Right now this does work for non-11n frames but it doesn't at all
enforce the same rate control decision for all frames in the fragment.
I plan on fixing this in a followup commit.
RTS/CTS has the same issue, I'll look at fixing this in a subsequent
commit.
Finally, 11n fragment support requires the driver to have fully
decided what the rate scenario setup is - including 20/40MHz,
short/long GI, STBC, LDPC, number of streams, etc. Right now that
decision is made _after_ the NAV field value is updated.
I'll fix all of this in subsequent commits.
Tested:
* AR5416, STA, transmitting 11abg fragments
* AR5416, STA, 11n fragments work but the NAV field is incorrect for
the reasons above.
TODO:
* It would be nice to be able to queue mbufs per-node and per-TID so
we can only queue ath_buf entries when it's time to assemble frames
to send to the hardware.
But honestly, we should just do that level of software queue management
in net80211 rather than ath(4), so I'm going to leave this alone for now.
* More thorough AP, mesh and adhoc testing.
* Ensure that net80211 doesn't hand us fragmented frames when A-MPDU has
been negotiated, as we can't do software retransmission of fragments.
* .. set CLRDMASK when transmitting fragments, just to be sure.
traffic.
When transmitting non-aggregate traffic, we need to keep the hardware
busy whilst transmitting or small bursts in txdone/tx latency will
kill us.
This restores non-aggregate iperf performance, especially when doing
TDMA.
Tested:
* AR5416<->AR5416, TDMA
* AR5416 STA <-> AR9280 AP
of course.)
There's a few things that needed to happen:
* In case someone decides to set the beacon transmission rate to be
at an MCS rate, use the MCS-aware version of the duration calculation
to figure out how long the received beacon frame was.
* If TxOP enforcing is available on the hardware and we're doing TDMA,
enable it after a reset and set the TDMA guard interval to zero.
This seems to behave fine.
TODO:
* Although I haven't yet seen packet loss, the PHY errors that would be
triggered (specifically Transmit-Override-Receive) aren't enabled
by the 11n HAL. I'll have to do some work to enable these PHY errors
for debugging.
What broke:
* My recent changes to the TX queue handling have resulted in the driver
not keeping the hardware queue properly filled when doing non-aggregate
traffic. I have a patch to commit soon which fixes this situation
(albeit by reminding me about how my ath driver locking isn't working
out, sigh.)
So if you want to test this without updating to the next set of patches
that I commit, just bump the sysctl dev.ath.X.hwq_limit from 2 to 32.
Tested:
* AR5416 <-> AR5416, with ampdu disabled, HT40, 5GHz, MCS12+Short-GI.
I saw 30mbit/sec in both directions using a bidirectional UDP test.
The list-based DMA engine has the following behaviour:
* When the DMA engine is in the init state, you can write the first
descriptor address to the QCU TxDP register and it will work.
* Then when it hits the end of the list (ie, it either hits a NULL
link pointer, OR it hits a descriptor with VEOL set) the QCU
stops, and the TxDP points to the last descriptor that was transmitted.
* Then when you want to transmit a new frame, you can then either:
+ write the head of the new list into TxDP, or
+ you write the head of the new list into the link pointer of the
last completed descriptor (ie, where TxDP points), then kick
TxE to restart transmission on that QCU.
* The hardware then will re-read the descriptor to pick up the link
pointer and then jump to that.
Now, the quirks:
* If you write a TxDP when there's been no previous TxDP (ie, it's 0),
it works.
* If you write a TxDP in any other instance, the TxDP write may actually
fail. Thus, when you start transmission, it will re-read the last
transmitted descriptor to get the link pointer, NOT just start a new
transmission.
So the correct thing to do here is:
* ALWAYS use the holding descriptor (ie, the last transmitted descriptor
that we've kept safe) and use the link pointer in _THAT_ to transmit
the next frame.
* NEVER write to the TxDP after you've done the initial write.
* .. also, don't do this whilst you're also resetting the NIC.
With this in mind, the following patch does basically the above.
* Since this encapsulates Sam's issues with the QCU behaviour w/ TDMA,
kill the TDMA special case and replace it with the above.
* Add a new TXQ flag - PUTRUNNING - which indicates that we've started
DMA.
* Clear that flag when DMA has been shutdown.
* Ensure that we're not restarting DMA with PUTRUNNING enabled.
* Fix the link pointer logic during TXQ drain - we should always ensure
the link pointer does point to something if there's a list of frames.
Having it be NULL as an indication that DMA has finished or during
a reset causes trouble.
Now, given all of this, I want to nuke axq_link from orbit. There are now
HAL methods to get and set the link pointer of a descriptor, so what we
should do instead is to update the right link pointer (see the sketch below):
* If there's a holding descriptor and an empty TXQ list, set the
link pointer of said holding descriptor to the new frame.
* If there's a non-empty TXQ list, set the link pointer of the
last descriptor in the list to the new frame.
* Nuke axq_link from orbit.
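Here's the sketch referred to above (simplified types and stand-in
functions; the real driver would use the new HAL get/set link methods
and the real ath_buf/ath_txq fields):

    #include <stddef.h>

    struct lbuf {
        void *lastds;           /* last descriptor of this frame */
        unsigned long daddr;    /* physical address of the first descriptor */
    };

    struct ltxq {
        struct lbuf *last;      /* last frame on the hardware queue, if any */
        struct lbuf *holdingbf; /* last completed (held) frame, if any */
    };

    /* stand-ins for the HAL "set descriptor link" and "write TxDP" calls */
    static void
    hal_set_link(void *ds, unsigned long daddr) { (void)ds; (void)daddr; }
    static void
    hal_put_txdp(unsigned long daddr) { (void)daddr; }

    static void
    txq_chain_frame(struct ltxq *txq, struct lbuf *bf)
    {
        if (txq->last != NULL)
            /* Non-empty TXQ: chain off the last queued descriptor. */
            hal_set_link(txq->last->lastds, bf->daddr);
        else if (txq->holdingbf != NULL)
            /* Empty TXQ but a holding descriptor: chain off that. */
            hal_set_link(txq->holdingbf->lastds, bf->daddr);
        else
            /* Nothing ever queued: the one and only TxDP write. */
            hal_put_txdp(bf->daddr);
        txq->last = bf;
    }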
Note:
* The AR9380 doesn't need this. FIFO TX writes are atomic. As long as
we don't append to a list of frames that we've already passed to the
hardware, all of the above doesn't apply. The holding descriptor stuff
is still needed to ensure the hardware can re-read a completed
descriptor to move onto the next one, but we restart DMA by pushing in
a new FIFO entry into the TX QCU. That doesn't require any real
gymnastics.
Tested:
* AR5210, AR5211, AR5212, AR5416, AR9380 - STA mode.
doesn't match the actual hardware queue this frame is queued to.
I'm trying to ensure that the buffers are actually being queued to the
same TX queue as the holding buffer list that they end up on.
I'm pretty sure this is all correct so if this complains, it'll be due
to some kind of subtle broken-ness that needs fixing.
This is only done for legacy hardware, not EDMA hardware.
Tested:
* AR5416 STA mode, very lightly
PS-POLL support.
This implements PS-POLL awareness in the ath(4) driver:
* Implement frame "leaking", which allows for a software queue
to be scheduled even though it's asleep
* Track whether a frame has been leaked or not
* Leak out a single non-AMPDU frame when transmitting aggregates
* Queue BAR frames if the node is asleep
* Direct-dispatch the rest of control and management frames.
This allows for things like re-association to occur (which involves
sending probe req/resp as well as assoc request/response) when
the node is asleep and then tries reassociating.
* Limit how many frames can sit in the software node queue whilst
the node is asleep. net80211 is already buffering frames for us
so this is mostly just paranoia.
* Add a PS-POLL method which leaks out a frame if there's something
in the software queue, else it calls net80211's ps-poll routine.
Since the ath PS-POLL routine marks the node as having a single frame
to leak, either a software queued frame would leak, OR the next queued
frame would leak. The next queued frame could be something from the
net80211 power save queue, OR it could be a NULL frame from net80211.
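A minimal sketch of that decision, with illustrative names (the real
method hangs off the driver's per-node handling):

    struct ps_node {
        int swq_depth;      /* frames sitting in this node's software queue */
        int leak_count;     /* frames allowed to "leak" out whilst asleep */
    };

    /* stand-ins for scheduling the node's swq and for net80211's handler */
    static void sched_node_tx(struct ps_node *an) { (void)an; }
    static void net80211_recv_pspoll(struct ps_node *an) { (void)an; }

    static void
    ath_node_recv_pspoll(struct ps_node *an)
    {
        /* Mark the node as being allowed to leak exactly one frame. */
        an->leak_count = 1;

        if (an->swq_depth > 0)
            /* Something is software queued; schedule it so one frame leaks. */
            sched_node_tx(an);
        else
            /*
             * Nothing queued here - let net80211 respond, either from its
             * power save queue or with a NULL data frame.
             */
            net80211_recv_pspoll(an);
    }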
TODO:
* Don't transmit further BAR frames (eg via a timeout) if the node is
currently asleep. Otherwise we may end up exhausting management frame
buffers due to lots of queued BAR frames.
I may just undo this bit later on and direct-dispatch BAR frames
even if the node is asleep.
* It would be nice to burst out a single A-MPDU frame if both ends
support this. I may end up adding a FreeBSD IE soon to negotiate
this power save behaviour.
* I should make STAs time out of power save mode if they've been in power
save for more than a handful of seconds. This way cards that get
"stuck" in power save mode don't stay there for the "inactivity" timeout
in net80211.
* Move the queue depth check into the driver layer (ath_start / ath_transmit)
rather than doing it in the TX path.
* There could be some naughty corner cases with ps-poll leaking.
Specifically, if net80211 generates a NULL data frame whilst another
transmitter sends a normal data frame out net80211 output / transmit,
we need to ensure that the NULL data frame goes out first.
This is one of those things that should occur inside the VAP/ic TX lock.
Grr, more investigations to do..
Tested:
* STA: AR5416, AR9280
* AP: AR5416, AR9280, AR9160
* Move the node sleep/wake state under the TX lock rather than the
node lock. Let's leave the node lock protecting rate control only
for now.
* When reassociating, various state needs to be cleared. For example,
the aggregate session needs to be torn down, including any pending
aggregation negotiation and BAR TX waiting.
* .. and we need to do a "cleanup" pass since frames in the hardware
TX queue need to be transmitted.
Modify ath_tx_tid_cleanup() to be called with the TX lock held and push
frames into a completion list. This allows for the cleanup to be
done atomically for all TIDs in a node rather than grabbing and
releasing the TX lock each time.
a non-loss reset.
When the drain functions are called, the holding descriptor and link pointers
are NULLed out.
But when the processq function is called during a non-loss reset, this
doesn't occur. So the next time a DMA occurs, it's chained to a descriptor
that no longer exists and the hardware gets angry.
Tested:
* AR5416, STA mode; use sysctl dev.ath.X.forcebstuck=1 to force a non-loss
reset.
TODO:
* Further AR9380 testing just to check that the behaviour for the EDMA
chips is sane.
PR: kern/178477
of "right".)
Flip back on the "always continue TX DMA using the holding descriptor"
code - by always setting ATH_BUF_BUSY and never setting axq_link to NULL.
Since the holding descriptor is accessed via txq->axq_link and _that_
is done behind the TXQ lock rather than the TX path lock, the holding
descriptor stuff itself needs to be behind the TXQ lock.
So, do the mental gymnastics needed to do this.
I've not seen any of the hardware failures that I was seeing when
I last tried to do this.
Tested:
* AR5416, STA mode
I'm not sure why this is failing. The holding descriptor should be being
re-read when starting DMA of the next frame. Obviously something here
isn't totally correct.
I'll review the TX queue handling and see if I can figure out why this
is failing. I'll then re-revert this patch out and use the holding
descriptor again.
but partly to just tidy up things.
The problem here - there are too many TX buffers in the queue! By the
time one needs to transmit an EAPOL frame (for this PR, it's the response
to the group rekey notification from the AP) there are no ath_buf entries
free and the EAPOL frame doesn't go out.
Now, the problem!
* Enforcing the TX buffer limitation _before_ we dequeue the frame?
Bad idea. Because..
* .. it means I can't check whether the mbuf has M_EAPOL set.
The solution(s):
* De-queue the frame first
* Don't bother doing the TX buffer minimum free check until after
we know whether it's an EAPOL frame or not.
* If it's an EAPOL frame, allocate the buffer from the mgmt pool
rather than the default pool.
Whilst I'm here:
* Add a tweak to limit how many buffers a single node can acquire.
* Don't enforce that for EAPOL frames.
* .. set that to default to 1/4 of the available buffers, or 32,
whichever is more sane.
This doesn't fix issues due to a sleeping node or a very poor performing
node; but this doesn't make it worse.
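A sketch of the resulting check (names and numbers here are purely
illustrative):

    #include <stdbool.h>

    #define TOTAL_TX_BUFS   512
    #define NODE_MAX_BUFS   ((TOTAL_TX_BUFS / 4) > 32 ? (TOTAL_TX_BUFS / 4) : 32)

    struct txnode { int nbufs; };   /* buffers this node currently holds */

    /* Called _after_ the frame has been dequeued and inspected. */
    static bool
    tx_buf_check(struct txnode *an, bool is_eapol, int free_bufs, int min_free)
    {
        /* EAPOL frames come from the mgmt pool and skip the limits. */
        if (is_eapol)
            return (true);
        /* Keep a reserve of free buffers for management/EAPOL traffic. */
        if (free_bufs < min_free)
            return (false);
        /* Don't let a single (eg sleeping, or slow) node eat every buffer. */
        if (an->nbufs >= NODE_MAX_BUFS)
            return (false);
        return (true);
    }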
Tested:
* AR5416 STA, TX'ing 100+ mbit UDP to an AP, but only 50mbit being received
(thus the TX queue fills up.)
* .. with CCMP / WPA2 encryption configured
* .. and the group rekey time set to 10 seconds, just to elicit the
behaviour very quickly.
PR: kern/138379
just "when the queue is busy."
After talking with the MAC team, it turns out that the linked list
implementation sometimes will not accept a TxDP update and will
instead re-read the link pointer. So even if the hardware has
finished transmitting a chain and has hit EOL/VEOL, it may still
re-read the link pointer to begin transmitting again.
So, always set ATH_BUF_BUSY on the last buffer in the chain (to
mark the last descriptor as the holding descriptor) and never
blank the axq_link pointer.
Tested:
* AR5416, STA mode
TODO:
* much more thorough testing with the pre-11n NICs, just to verify
that they behave the same way.
* test TDMA on the 11n and non-11n hardware.
The QCA9565 is a 1x1 2.4GHz 11n chip with integrated on-chip bluetooth.
The AR9300 HAL already has support for this chip; it just wasn't
included in the probe/attach path.
Tested:
* This commit brought to you over a QCA9565 wifi connection from
FreeBSD.
* .. ie, basic STA, pings, no iperf or antenna diversity checking just yet.
* That lock isn't actually held during reset - just the whole TX/RX path
is paused. So, remove the assertion.
* Log the TX queue status - how many hardware frames are active in the
MAC and whether the queue is active.
the pause/resume code to not be called completely symmetrically.
I'll chase down the root cause of that soon; this at least works around
the bug and tells me when it happens.
is compiled in or not.
This fixes issues with people running -HEAD but who build modules
without doing a "make buildkernel KERNCONF=XXX", thus picking up
opt_*.h. The resulting module wouldn't have 11n enabled and the
chainmask configuration would just be plain wrong.
* Add ah_ratesArray[] to the ar5416 HAL state - this stores the maximum
values permissible per rate.
* Since different chip EEPROM formats store this value in a different place,
store the HT40 power detector increment value in the ar5416 HAL state.
* Modify the target power setup code to store the maximum values in the
ar5416 HAL state rather than using a local variable.
* Add ar5416RateToRateTable() - to convert a hardware rate code to the
ratesArray enum / index.
* Add ar5416GetTxRatePower() - which goes through the gymnastics required
to correctly calculate the target TX power:
+ Add the power detector increment for ht40;
+ Take the power offset into account for AR9280 and later;
+ Offset the TX power correctly when doing open-loop TX power control;
+ Enforce the per-rate maximum value allowable.
Note - setting a TPC value of 0x0 in the TX descriptor on (at least)
the AR9160 resulted in the TX power being very high indeed. This didn't
happen on the AR9220. I'm guessing it's a chip bug that was fixed at
some point. So for now, just assume the AR5416/AR5418 and AR9130 are
also suspect and clamp the minimum value here at 1.
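A sketch of the calculation (illustrative only - assume half-dBm units;
the real code works from the ratesArray index and per-chip state):

    static int
    get_tx_rate_power(int target_power, int ht40_delta, int power_offset,
        int rate_max, int is_ht40)
    {
        int txpower = target_power;

        /* HT40 rates get the HT40 power detector increment added. */
        if (is_ht40)
            txpower += ht40_delta;

        /* AR9280 and later apply a per-chip power table offset. */
        txpower -= 2 * power_offset;

        /* Never exceed the per-rate maximum from the target power tables. */
        if (txpower > rate_max)
            txpower = rate_max;

        /*
         * A TPC value of 0 made (at least) the AR9160 transmit at a very
         * high power, so clamp the minimum at 1 on the suspect chips.
         */
        if (txpower < 1)
            txpower = 1;

        return (txpower);
    }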
Tested:
* AR5416, AR9160, AR9220 hostap, verified using (2GHz) spectrum analyser
* Looked at target TX power in TX descriptor (using athalq) as well as TX
power on the spectrum analyser.
TODO:
* The TX descriptor code sets the target TX power to 0 for AR9285 chips.
I'm not yet sure why. Disable this for TPC and ensure that the TPC
TX power is set.
* AR9280, AR9285, AR9227, AR9287 testing!
* 5GHz testing!
Quirks:
* The per-packet TPC code is only exercised when the tpc sysctl is set
to 1. (dev.ath.X.tpc=1.) This needs to be done before you bring the
interface up.
* When TPC is enabled, setting the TX power doesn't end up with a call
through to the HAL to update the maximum TX power. So ensure that
you set the TPC sysctl before you bring the interface up and configure
a lower TX power or the hardware will be clamped by the lower TX
power (at least until the next channel change.)
Thanks to Qualcomm Atheros for all the hardware, and Sam Leffler for use
of his spectrum analyser to verify the TX channel power.
ath_tx_rate_fill_rcflags(). Include setting up the TX power cap in the
rate scenario setup code being passed to the HAL.
Other things:
* add a tx power cap field in ath_rc.
* Add a three-stream flag in ath_rc.
* Delete the LDPC flag from ath_rc - it's not a per-rate flag, it's a
global flag for the transmission.
directly referencing ni->ni_txpower.
This provides the hardware with a slightly more accurate idea of
the maximum TX power to be using.
This is part of a series to get per-packet TPC to work (better).
Tested:
* AR5416, hostap mode
is configured for higher rates (lower than max) but higher TX power
is configured for the lower rates, above the configured cap, to improve
long distance behaviour.
* Add the rest of the missing GPIO output mux types;
* Add in a new debug category;
* And a new MCI btcoex configuration option in ath_hal.ah_config
Obtained from: Qualcomm Atheros
buffers (ie, >4GB on amd64.)
The underlying problem was that PREREAD doesn't sync the mbuf
with the DMA memory (ie, bounce buffer), so the bounce buffer may
have had stale information. Thus it was always considering the
buffer completed and things just went off the rails.
This change does the following:
* Make ath_rx_pkt() always consume the mbuf somehow; it no longer
passes error mbufs (eg CRC errors, crypt errors, etc) back up
to the RX path to recycle. This means that a new mbuf is always
allocated each time, but it's cleaner.
* Push the RX buffer map/unmap to occur in the RX path, not
ath_rx_pkt(). Thus, ath_rx_pkt() now assumes (a) it has to consume
the mbuf somehow, and (b) that it's already been unmapped and
synced.
* For the legacy path, the descriptor isn't mapped, it comes out of
coherent, DMA memory anyway. So leave it there.
* For the EDMA path, the RX descriptor has to be cleared before
it's passed to the hardware, so that when we check with
a POSTREAD sync, we actually get either a blank (not finished)
or a filled out descriptor (finished.) Otherwise we get stale
data in the DMA memory.
* .. so, for EDMA RX path, we need PREREAD|PREWRITE to sync the
data -> DMA memory, then POSTREAD|POSTWRITE to finish syncing
the DMA memory -> data.
* Whilst we're here, make sure that the EDMA buffer setup (ie,
bzero'ing the descriptor part) is done before the mbuf is
mapped/synced.
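To make the EDMA RX sync ordering above concrete, here's a rough sketch
using the FreeBSD busdma calls (the helper names and the blank-descriptor
test are simplifications, not the driver's actual code):

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <machine/bus.h>

    /* Before handing the buffer to the hardware; descriptor already bzero'ed. */
    static void
    edma_rxbuf_pre(bus_dma_tag_t tag, bus_dmamap_t map)
    {
        /* Push the CPU's writes (the cleared descriptor) out to DMA memory. */
        bus_dmamap_sync(tag, map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
    }

    /* When checking whether the hardware has finished with the buffer. */
    static int
    edma_rxbuf_done(bus_dma_tag_t tag, bus_dmamap_t map, const uint32_t *ds)
    {
        /* Pull the device's writes back before looking at the descriptor. */
        bus_dmamap_sync(tag, map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
        /* A still-blank descriptor means the hardware hasn't finished yet. */
        return (ds[0] != 0);
    }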
NOTE: there's been a lot of commits besides this one with regards to
tidying up the busdma handling in ath(4). Please check the recent
commit history.
Discussed with and thanks to: scottl
Tested:
* AR5416 (non-EDMA) on i386, with the DMA tag for the driver
set to 2^^30, not 2^^32, STA
* AR9580 (EDMA) on i386, as above, STA
* User - tested AR9380 on amd64 with 32GB RAM.
PR: kern/177530
before the TX path is being aborted.
Right now it's in the TDMA code and I can live with that; but it really
should get fixed.
I'll do a more thorough audit of this code soon.
* Don't use BUS_DMA_ALLOCNOW for descriptor DMA maps; we never use
bounce buffers for the descriptors themselves.
* Add some XXX's to mark where the ath_buf has its mbuf ripped from
underneath it without actually cleaning up the dmamap. I haven't
audited those particular code paths to see if the DMA map is guaranteed
to be setup there; I'll do that later.
* Print out a warning if the descdma tidyup code is given some descriptors
w/ maps to free. Ideally the owner will free the mbufs and unmap
the descriptors before freeing the descriptor/ath_buf pairs, but
right now that's not guaranteed to be done.
Reviewed by: scottl (BUS_DMA_ALLOCNOW tag)
the buffer is being freed.
* When buffers are cloned, the original mapping isn't copied but it
wasn't freeing the mapping until later. To be safe, free the
mapping when the buffer is cloned.
* ath_freebuf() now no longer calls the busdma sync/unmap routines.
* ath_tx_freebuf() now calls sync/unmap.
* Call sync first, before calling unmap.
Tested:
* AR5416, STA mode
The normal RX path (ath_rx_pkt()) will sync and unmap the
buffer before passing it up the stack. We only need to do this
if we're flushing the FIFO during reset/shutdown.
(Yes, the previous code temporarily broke EDMA TX. I'm sorry; I should've
actually setup ATH_BUF_FIFOEND on frames so txq->axq_fifo_depth was
cleared!)
This code implements a whole bunch of sorely needed EDMA TX improvements
along with CABQ TX support.
The specifics:
* When filling/refilling the FIFO, use the new TXQ staging queue
for FIFO frames
* Tag frames with ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND correctly.
For now the non-CABQ transmit path pushes one frame into the TXQ
staging queue without setting up the intermediary link pointers
to chain them together, so draining frames from the txq staging
queue to the FIFO queue occurs one A-MPDU / MPDU at a time.
* In the CABQ case, manually tag the list with ATH_BUF_FIFOPTR and
ATH_BUF_FIFOEND so a chain of frames is pushed into the FIFO
at once.
* Now that frames are in a FIFO pending queue, we can top up the
FIFO after completing a single frame. This means we can keep
it filled rather than waiting for it to drain and _then_ adding
more frames.
* The EDMA restart routine now walks the FIFO queue in the TXQ
rather than the pending queue and re-initialises the FIFO with
that.
* When restarting EDMA, we may have partially completed sending
a list. So stamp the first frame that we see in a list with
ATH_BUF_FIFOPTR and push _that_ into the hardware.
* When completing frames, only check those on the FIFO queue.
We should never ever queue frames from the pending queue
direct to the hardware, so there's no point in checking.
* Until I figure out what's going on, if a TXSTATUS for an empty
queue pops up, complain loudly and continue.
This will stop the panics that people are seeing. I'll add
some code later which will assist in ensuring I'm populating
each descriptor with the correct queue ID.
* When considering whether to queue frames to the hardware queue
directly or software queue frames, make sure the depth of
the FIFO is taken into account now.
* When completing frames, tag them with ATH_BUF_BUSY if they're
not the final frame in a FIFO list. The same holding descriptor
behaviour is required when handling descriptors linked together
with a link pointer as the hardware will re-read the previous
descriptor to refresh the link pointer before continuing.
* .. and if we complete the FIFO list (ie, the buffer has
ATH_BUF_FIFOEND set), then we don't need the holding buffer
any longer. Thus, free it.
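A sketch of that completion-side holding logic (simplified structures
and flag names, not the real ath_buf):

    #include <stddef.h>

    struct fifo_buf {
        struct fifo_buf *next;
        unsigned int flags;
    #define F_FIFOPTR   0x1     /* first frame pushed into this FIFO slot */
    #define F_FIFOEND   0x2     /* last frame in this FIFO slot */
    #define F_BUSY      0x4     /* hardware may still re-read this descriptor */
    };

    /* Complete one frame; returns the next frame to complete. */
    static struct fifo_buf *
    fifo_complete_one(struct fifo_buf *bf, struct fifo_buf **holding)
    {
        if (bf->flags & F_FIFOEND) {
            /* The FIFO list is done; the holding buffer is no longer needed. */
            *holding = NULL;
        } else {
            /*
             * Not the final frame: the hardware may re-read this descriptor
             * to pick up the link pointer, so it becomes the holding buffer.
             */
            bf->flags |= F_BUSY;
            *holding = bf;
        }
        return (bf->next);
    }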
Tested:
* AR9380/AR9580, STA and hostap
* AR9280, STA/hostap
TODO:
* I don't yet trust that the EDMA restart routine is totally correct
in all circumstances. I'll continue to thrash this out under heavy
multiple-TXQ traffic load and fix whatever pops up.
Each set of frames pushed into a FIFO is represented by a list of
ath_bufs - the first ath_buf in the FIFO list is marked with
ATH_BUF_FIFOPTR; the last ath_buf in the FIFO list is marked with
ATH_BUF_FIFOEND.
Multiple lists of frames are just glued together in the TAILQ as per
normal - except that at the end of a FIFO list, the descriptor link
pointer will be NULL and it'll be tagged with ATH_BUF_FIFOEND.
For non-EDMA chipsets this is a no-op - the ath_txq frame list (axq_q)
stays the same and is treated the same.
For EDMA chipsets the frames are pushed into axq_q and then when
the FIFO is to be (re) filled, frames will be moved onto the FIFO
queue and then pushed into the FIFO.
So:
* Add a new queue in each hardware TXQ (ath_txq) for staging FIFO frame
lists. It's a TAILQ (like the normal hardware frame queue) rather than
the ath9k list-of-lists to represent FIFO entries.
* Add new ath_buf flags - ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND.
* When allocating ath_buf entries, clear out the flag value before
returning it or it'll end up having stale flags.
* When cloning ath_buf entries, only clone ATH_BUF_MGMT. Don't clone
the FIFO related flags.
* Extend ath_tx_draintxq() to first drain the FIFO staging queue, _then_
drain the normal hardware queue.
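A sketch of what one FIFO slot's worth of frames looks like once it's
tagged (simplified names; the real flags are ATH_BUF_FIFOPTR and
ATH_BUF_FIFOEND):

    #include <stddef.h>

    struct stage_buf {
        struct stage_buf *next;
        unsigned int flags;
    #define S_FIFOPTR   0x1
    #define S_FIFOEND   0x2
    };

    /* Tag a chain pulled off the staging queue as one FIFO entry. */
    static void
    fifo_tag_chain(struct stage_buf *first, struct stage_buf *last)
    {
        struct stage_buf *bf;

        for (bf = first; bf != NULL; bf = bf->next)
            bf->flags &= ~(S_FIFOPTR | S_FIFOEND);
        first->flags |= S_FIFOPTR;  /* this buffer's address goes into the FIFO */
        last->flags |= S_FIFOEND;   /* completion knows where the slot ends */
    }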
Tested:
* AR9280, hostap
* AR9280, STA
* AR9380/AR9580 - hostap
TODO:
* Test on other chipsets, just to be thorough.
instead of axq_link.
This (among a bunch of uncommitted work) is required for EDMA chips
to correctly transmit frames on the CABQ.
Tested:
* AR9280, hostap mode
* AR9380/AR9580, hostap mode (staggered beacons)
TODO:
* This code only really gets called when burst beacons are used;
it glues multiple CABQ queues together when sending to the hardware.
* More thorough bursted beacon testing! (first requires some work with
the beacon queue code for bursted beacons, as that currently uses the
link pointer and will fail on EDMA chips.)
the descriptor link pointer, rather than directly.
This is needed on AR9380 and later (ie, EDMA) NICs so the multicast queue
has a chance in hell of being put together right.
Tested:
* AR9380, AR9580 in hostap mode, CABQ traffic (but with other patches..)
related issues.
Moving the TX locking under one lock made things easier to progress on
but it had one important side-effect - it increased the latency when
handling CABQ setup when sending beacons.
This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.
The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.
The second half of this commit is the CABQ related setup changes needed
for sane looking EDMA CABQ support. Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot. For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.
The aim here is to setup the CABQ list and then push _that list_ to
the hardware for transmission. I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.
Major:
* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
a single lock.
* Leave the software queue side of things under the ATH_TX_LOCK lock,
(continuing) to serialise things as they are.
* Add a new function which is called whenever there's a beacon miss,
to print out some debugging. This is primarily designed to help
me figure out if the beacon miss events are due to a noisy environment,
issues with the PHY/MAC, or other.
* Move the CABQ setup/enable to occur _after_ all the VAPs have been
looked at. This means that for multiple VAPS in bursted mode, the
CABQ gets primed once all VAPs are checked, rather than being primed
on the first VAP and then having frames appended after this.
Minor:
* Add a (disabled) twiddle to let me enable/disable cabq traffic.
It's primarily there to let me easily debug what's going on with beacon
and CABQ setup/traffic; there's some DMA engine hangs which I'm finally
trying to trace down.
* Clear bf_next when flushing frames; it should quieten some warnings
that show up when a node goes away.
Tested:
* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)
TODO:
* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
ensure the CABQ stuff is all sane (eg, the MORE bits that aren't being
correctly set when chaining descriptors.)
to stuck beacons.
* Set the cabq readytime (ie, how long to burst for) to 50% of the total
beacon interval time
* fix the cabq adjustment calculation based on how the beacon offset is
calculated (the SWBA/DBA time offset.)
This is all still a bit magic voodoo but it does seem to have further
quietened issues with missed/stuck beacons under my local testing.
In any case, it better matches what the reference HAL implements.
Obtained from: Qualcomm Atheros
"complete RX frames."
The 128 entry RX FIFO is really easy to fill up and miss refilling
when it's done in the ath taskq - as that gets blocked up doing
RX completion, TX completion and other random things.
So the 128 entry RX FIFO now gets emptied and refilled in the ath_intr()
task (and it grabs / releases locks, so now ath_intr() can't just be
a FAST handler yet!) but the locks aren't held for very long. The
completion part is done in the ath taskqueue context.
Details:
* Create a new completed frame list - sc->sc_rx_rxlist;
* Split the EDMA RX process queue into two halves - one that
processes the RX FIFO and refills it with new frames; another
that completes the completed frame list;
* When tearing down the driver, flush whatever is in the deferred
queue as well as what's in the FIFO;
* Create two new RX methods - one that processes all RX queues,
one that processes the given RX queue. When MSI is implemented,
we get told which RX queue the interrupt came in on so we can
specifically schedule that. (And I can do that with the non-MSI
path too; I'll figure that out later.)
* Convert the legacy code over to use these new RX methods;
* Replace all the instances of the RX taskqueue enqueue with a call
to a relevant RX method to enqueue one or all RX queues.
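Roughly, the split looks like this (illustrative names; locking,
ordering and the actual descriptor handling are glossed over):

    #include <stddef.h>

    struct rx_frame { struct rx_frame *next; };

    struct rx_state {
        struct rx_frame *fifo;      /* frames the hardware has completed */
        struct rx_frame *rxlist;    /* deferred completion list (sc_rx_rxlist) */
    };

    /* Stage 1: run from ath_intr() - drain and refill the 128-entry FIFO. */
    static void
    rx_proc_fifo(struct rx_state *rx)
    {
        struct rx_frame *f;

        while ((f = rx->fifo) != NULL) {
            rx->fifo = f->next;
            f->next = rx->rxlist;   /* park it on the completed list */
            rx->rxlist = f;
            /* ... allocate and push a replacement buffer into the FIFO ... */
        }
    }

    /* Stage 2: run from the ath taskqueue - the expensive completion work. */
    static void
    rx_flush_list(struct rx_state *rx)
    {
        struct rx_frame *f;

        while ((f = rx->rxlist) != NULL) {
            rx->rxlist = f->next;
            /* ... ath_rx_pkt()-style completion, pass it up the stack ... */
        }
    }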
Tested:
* AR9380, STA
* AR9580, STA
* AR5413, STA
* when pulling frames off of the TID queue, the ATH_TID_REMOVE()
macro decrements the axq_depth field. So don't do it twice.
* in ath_tx_comp_cleanup_aggr(), bf wasn't being reset to bf_first
before walking the buffer list to complete buffers; so those buffers
will leak.
Since this is being done during buffer free, it's a crap shoot whether
the TX path lock is held or not. I tried putting the ath_freebuf() code
inside the TX lock and I got all kinds of locking issues - it turns out
that the buffer free path sometimes is called with the lock held and
sometimes isn't. So I'll go and fix that soon.
Hence for now the holdingbf buffers are protected by the TXBUF lock.
When working on TDMA, Sam Leffler found that the MAC DMA hardware
would re-read the last TX descriptor when getting ready to transmit
the next one. Thus the whole ATH_BUF_BUSY came into existence -
the descriptor must be left alone (very specifically the link pointer
must be maintained) until the hardware has moved onto the next frame.
He saw this in TDMA because the MAC would be frequently stopping during
active transmit (ie, when it wasn't its turn to transmit.)
Fast-forward to today. It turns out that this is a problem not with
a single MAC DMA instance, but with each QCU (from 0->9). They each
maintain separate descriptor pointers and will re-read the last
descriptor when starting to transmit the next.
So when your AP is busy transmitting from multiple TX queues, you'll
(more) frequently see one QCU stopped, waiting for a higher-priority QCU
to finish transmitting, before it'll go ahead and continue. If you mess
up the descriptor (ie by freeing it) then you're out of luck.
Thanks to rpaulo for sticking with me whilst I diagnosed this issue
that he was quite reliably triggering in his environment.
This is a reimplementation; it doesn't have anything in common with
the ath9k or the Qualcomm Atheros reference driver.
Now - in theory it doesn't apply on the EDMA chips, as long as you
push one complete frame into the FIFO at a time. But the MAC can DMA
from a list of frames pushed into the hardware queue (ie, you concat
'n' frames together with link pointers, and then push the head pointer
into the TXQ FIFO.) Since that's likely how I'm going to implement
CABQ handling in hostap mode, it's likely that I will end up teaching
the EDMA TX completion code about busy buffers, just to be "sure"
this doesn't creep up.
Tested - iperf ap->sta and sta->ap (with both sides running this code):
* AR5416 STA
* AR9160/AR9220 hostap
To validate that it doesn't break the EDMA (FIFO) chips:
* AR9380, AR9485, AR9462 STA
Using iperf with the -S <tos byte decimal value> to set the TCP client
side DSCP bits, mapping to different TIDs and thus different TX queues.
TODO:
* Make this work on the EDMA chips, if we end up pushing lists of frames
to the hardware (eg how we eventually will handle cabq in hostap/ibss
mode.)
* a flags field that lets me know what's going on;
* the hardware ratecode, unmolested by conversion to a bitrate;
* the HAL rs_flags field, useful for debugging;
* specifically mark aggregate sub-frames.
This stuff sorely needs tidying up - it's missing some important
stuff (eg numdelims) and it would be nice to put the flags at the
beginning rather than at the end.
Tested:
* AR9380, STA mode, 2x2 HT40, monitoring RSSI and EVM values
I can 100% reliably trigger this on TID 1 traffic by using iperf -S 32
<client fields> to create traffic that maps to TID 1.
The reference driver doesn't do this check.
I stumbled across this whilst trying to debug another weird hang reported
on the freebsd-wireless list.
Whilst here, add in the STBC check to ath_rateseries_setup().
Whilst here, fix the short preamble flag to be set only for legacy rates.
Whilst here, comment that we should be using the full set of decisions
made by ath_rateseries_setup() rather than recalculating them!
routine.
There were still corner cases where the EWMA update stats are being
called on a rix which didn't have an intermediary stats update; thus
no packets were counted against it. Sigh.
This should fix the crashes I've been seeing on recent -HEAD.
* If both ends have negotiated (at least) one stream;
* Only if it's a single stream rate (MCS0-7);
* Only if there's more than one TX chain enabled.
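In other words, roughly (an illustrative sketch; the parameter names are
mine, not the driver's):

    #include <stdbool.h>

    static bool
    use_stbc(bool peer_rx_stbc, int mcs, int tx_chains)
    {
        return (peer_rx_stbc &&     /* both ends negotiated (>= 1) STBC stream */
            mcs >= 0 && mcs <= 7 && /* single-stream rates only */
            tx_chains > 1);         /* need a second TX chain to send STBC */
    }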
Tested:
* AR9280 STA mode -> Atheros AP; tested both MCS2 (STBC) and MCS12 (no STBC.)
Verified using athalq to inspect the TX descriptors.
TODO:
* Test AR5416 - no STBC should be enabled;
* Test AR9280 with one TX chain enabled - no STBC should be enabled.
The HAL already included the STBC fields; it just needed to be exposed
to the driver and net80211 stack.
This should allow single-stream STBC TX and RX to be negotiated; however
the driver and rate control code currently don't do anything with it.
rate.
This fixes two things:
* The intermediary rates now also have their EWMA values changed;
* The existing code was using the wrong value for longtries - so the
EWMA stats were only adjusted for the first rate and not subsequent
rates in a MRR setup.
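A sketch of the intended MRR walk (illustrative; not the sample(4) code
itself, and the EWMA update is just a stand-in):

    struct mrr_series { int rix; int tries; };

    /* stand-in for the real EWMA update */
    static void
    update_ewma(int rix, int tries, int ok)
    {
        (void)rix; (void)tries; (void)ok;
    }

    static void
    tx_complete_mrr(const struct mrr_series mrr[4], int final_series, int ok)
    {
        int i;

        for (i = 0; i < 4 && i <= final_series; i++) {
            if (mrr[i].tries == 0)
                continue;
            /* Only the series the frame actually went out on can succeed. */
            update_ewma(mrr[i].rix, mrr[i].tries,
                (i == final_series) ? ok : 0);
        }
    }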
TODO:
* Merge the EWMA updates into update_stats() now..
* Remove ar5416UpdateChainmasks();
* Remove the TX chainmask override code from the ar5416 TX descriptor
setup routines;
* Write a driver method to calculate the current chainmask based on the
operating mode and update the driver state;
* Call the HAL chainmask method before calling ath_hal_reset();
* Use the currently configured chainmask in the TX descriptors rather than
the hardware TX chainmasks.
Tested:
* AR5416, STA/AP mode - legacy and 11n modes
Right now the only way to set the chainmask is to set the hardware
configured chainmask through capabilities. This is fine for forcing
the chainmask to be something other than what the hardware is capable
of (eg to reduce TX/RX to one connected antenna) but it does change what
the HAL hardware chainmask configuration is.
For operational mode changes, it (may?) make sense to separately control
the TX/RX chainmask.
Right now it's done as part of ar5416_reset.c - ar5416UpdateChainMasks()
calculates which TX/RX chainmasks to enable based on the operating mode.
(1 for legacy and whatever is supported for 11n operation.) But doing
this in the HAL is suboptimal - the driver needs to know the currently
configured chainmask in order to correctly enable things for each
TX descriptor. This is currently done by overriding the chainmask
config in the ar5416 TX routines but this has to disappear - the AR9300
HAL support requires the driver to dynamically set the TX chainmask based
on the TX power and TX rate in order to meet mini-PCIe slot power
requirements.
So:
* Introduce a new HAL method to set the operational chainmask variables;
* Introduce null methods for the previous generation chipsets;
* Add new driver state to record the current chainmask separate from
the hardware configured chainmask.
Part #2 of this will involve disabling ar5416UpdateChainMasks() and moving
it into the driver; as well as properly programming the TX chainmask
based on the currently configured HAL chainmask.
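A sketch of the shape of that split (the method and field names here are
illustrative, not the committed API):

    #include <stdint.h>

    struct sk_hal {
        /* new HAL method: tell the HAL which chains are operational */
        void (*ah_setChainMasks)(struct sk_hal *, uint32_t tx, uint32_t rx);
    };

    struct sk_softc {
        struct sk_hal *sc_ah;
        uint32_t sc_cur_txchainmask;    /* driver's view of the active chains */
        uint32_t sc_cur_rxchainmask;
    };

    static void
    update_chainmasks(struct sk_softc *sc, int is_11n, uint32_t hw_txchainmask,
        uint32_t hw_rxchainmask)
    {
        /* Legacy operation uses a single chain; 11n uses what the hardware has. */
        sc->sc_cur_txchainmask = is_11n ? hw_txchainmask : 0x1;
        sc->sc_cur_rxchainmask = hw_rxchainmask;
        sc->sc_ah->ah_setChainMasks(sc->sc_ah,
            sc->sc_cur_txchainmask, sc->sc_cur_rxchainmask);
        /* ... followed by ath_hal_reset() so the MAC actually picks it up ... */
    }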
Tested:
* AR5416, STA mode - both legacy (11a/11bg) and 11n rates - verified
that AR_SELFGEN_MASK (the chainmask used for self-generated frames like
ACKs and RTSes) is correct, as well as the TX descriptor contents is
correct.
an incorrectly calculated RTS duration value when transmitting aggregates.
These earlier 802.11n NICs incorrectly used the ACK duration time when
calculating what to put in the RTS of an aggregate frame. Instead it
should have used the block-ack time. The result is that other stations
may not reserve enough time and start transmitting _over_ the top of
the in-progress blockack field. Tsk.
This workaround is to populate the burst duration field with the delta
between the ACK duration the hardware is using and the required duration
for the block-ack. The result is that the RTS field should now contain
the correct duration for the subsequent block-ack.
This doesn't apply for AR9280 and later NICs.
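Conceptually the fix-up is just a difference of durations (a sketch, in
microseconds):

    /*
     * The hardware computes the RTS NAV as if a normal ACK follows; the
     * burst duration field makes up the difference so the NAV actually
     * covers the block-ack.
     */
    static int
    rts_burst_duration(int block_ack_duration, int ack_duration)
    {
        int delta = block_ack_duration - ack_duration;

        /* Don't program a negative value if the block-ack is somehow shorter. */
        return (delta > 0 ? delta : 0);
    }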
Obtained from: Qualcomm Atheros
Specifically - never jack the TX FIFO threshold up to the absolute
maximum; always leave enough space for two DMA transactions to
appear.
This is a bit of paranoia borrowed from the Linux ath9k driver. It can't hurt.
Obtained from: Linux ath9k
The default is to limit them to what the hardware is capable of.
Add sysctl twiddles for both the non-RTS and RTS protected aggregate
generation.
Whilst here, add some comments about stuff that I've discovered during
my exploration of the TX aggregate / delimiter setup path from the
reference driver.
This has reduced the number of TX delimiter and data underruns when
doing large UDP transfers (>100mbit).
This stops any HAL_INT_TXURN interrupts from occurring, which is a good
sign!
Obtained from: Qualcomm Atheros
* Delete this debugging print - I used it when debugging the initial
TX descriptor chaining code. It now works, so let's toss it.
It just confuses people if they enable TX descriptor debugging as they
get two slightly different versions of the same descriptor.
* Indenting.
part of ts_status. Thus:
* make sure we decode them from ts_flags, rather than ts_status;
* make sure we decode them regardless of whether there's an error or not.
This correctly exposes descriptor configuration errors, TX delimiter
underruns and TX data underruns.
actually do have to reinitialise the RX side of things after an RX
descriptor EOL error.
* Revert a change of mine from quite a while ago - don't shortcut the
RX initialisation path. There's a RX FIFO bug in the earlier chips
(I'm not sure when it was fixed in this series, but it's fixed
with the AR9380 and later) which causes the same RX descriptor to
be written to over and over. This causes the descriptor to be
marked as "done", and this ends up causing the whole RX path to
go very strange. This should fix the "kickpcu; handled X packets"
message spam where "X" is consistently small.
My change had some rather significant behavioural effects on throughput.
The two issues I noticed:
* With if_start and the ifnet mbuf queue, any temporary latency
would get eaten up by some mbufs being queued. With ath_transmit()
queuing things to ath_buf's, I'd only get 512 TX buffers before I
couldn't queue any further frames.
* There's also some non-zero latency involved with TX being pushed
into a taskqueue via direct dispatch. Any time the scheduler didn't
immediately schedule the ath TX task would cause extra latency.
Various 1ge/10ge drivers implement both direct dispatch (if the TX
lock can be acquired) and deferred task transmission (if the TX lock
can't be acquired), with frames being pushed into a drbr queue.
I'll have to do this at some point, but until I figure out how to
deal with 802.11 fragments, I'll have to wait a while longer.
So what I saw:
* lots of extra latency, specially under load - if the taskqueue
wasn't immediately scheduled, things went pear shaped;
* any extra latency would result in TX ath_buf's taking their sweet time
being replenished, so any further calls to ath_transmit() would drop
mbufs.
* .. yes, there's no explicit backpressure here - things are just dropped.
Eek.
With this, the general performance has gone up, but those subtle if_start()
related race conditions are back. For some reason, this is doubly-obvious
with the AR5416 NIC and I don't quite understand why yet.
There's an unrelated issue with AR5416 performance in STA mode (it's
fine in AP mode when bridging frames, weirdly..) that requires a little
further investigation. Specifically - it works fine on a Lenovo T40
(single core CPU) running a March 2012 9-STABLE kernel, but a Lenovo T60
(dual core) running an early November 2012 kernel behaves very poorly.
The same hardware with an AR9160 or AR9280 behaves perfectly.
when they're being called from the TX completion handler.
Going (back) through the taskqueue is just adding extra locking and
latency to packet operations. This improves performance a little bit
on most NICs.
It still hasn't restored the original performance of the AR5416 NIC
but the AR9160, AR9280 and later NICs behave very well with this.
Tested:
* AR5416 STA (still tops out at ~ 70mbit TCP, rather than 150mbit TCP..)
* AR9160 hostap (good for both TX and RX)
* AR9280 hostap (good for both TX and RX)
crappy 802.11n performance, sigh.)
With the AR5416, aggregates need to be limited to 8KiB if RTS/CTS is
enabled. However, larger aggregates were going out with RTSCTS enabled.
The following was going on:
* The first buffer in the list would have RTS/CTS enabled in
bf->bf_state.txflags;
* The aggregate would be formed;
* The "copy over the txflags from the first buffer" logic that I added
blanked the RTS/CTS TX flags fields, and then copied the bf_first
RTS/CTS flags over;
* .. but that'd cause bf_first to be blanked out! And thus the flag
was cleared;
* So the rest of the aggregate formation would run with those flags
cleared, and thus > 8KiB aggregates were formed.
The driver is now (again) correctly limiting aggregate formation for
the AR5416 but there are still other pending issues to resolve.
Tested:
* AR5416, STA mode
Right now, ic_curchan seems to be updated rather quickly (ie, during
the ioctl) and before the driver gets notified of what's going on.
So what I was seeing was:
* NIC was in channel X;
* It generates PHY errors for channel X;
* an ioctl comes along from userland and changes things to channel Y;
* .. this updates ic_curchan, but hasn't yet reset the hardware;
* in parallel, RX is occurring and it looks at ic_curchan;
* .. which is channel Y, so events get stamped with that now.
Sigh.
the separate ath0 TX taskq.
Whilst here, make sure that the TX software scheduler is also
running out of the TX task, rather than the ath0 taskqueue.
Make sure that the tx taskqueue is blocked/unblocked as necessary.
This allows for a little more parallelism on multi-core machines,
as well as (eventually) supporting a higher task priority for TX
tasks, allowing said TX task to preempt an already running RX or
TX completion task.
Tested:
* AR5416, AR9280 hostap and STA modes
This is easily possible now that the TX is protected by a single
lock, rather than a per-TXQ (and thus per-TID) lock.
Only set CLRDMASK if none of the destinations are filtered.
This likely will need some tuning when it comes time to do U-APSD/PS-POLL
TX, however at that point it should be manually set anyway.
Tested:
* AR9280, STA mode
TODO:
* More thorough testing in AP mode
* test other chipsets, just to be safe/sure.
chip hangs.
* Always do a reset in ath_bmiss_proc(), regardless of whether the
hardware is "hung" or not. Specifically, for spectral scan, there's
likely a whole bunch of potential hangs that we don't (yet) recognise
in the HAL. So to avoid the RX deafness persisting until the station
disassociates, just do a no-loss reset.
* Set sc_beacons=1 in STA mode. During a reset, the beacon programming
isn't done. (It's likely I need to set sc_syncbeacons during a hang
reset, but I digress.) Thus after a reset, there's no beacon timer
programming to send a BMISS interrupt if beacons aren't heard ..
thus if the AP disappears, you won't get notified and you'll have to
reset your interface.
This hasn't yet fixed all of the hangs that I've seen when debugging
spectral scan, but it's certainly reduced the hang frequency and it
should improve general STA stability in very noisy environments.
Tested:
* AR9280, STA mode, spectral scan off/on
PR: kern/175227
when an interface is going down.
Right now it's quite possible (but very unlikely!) that ath_reset()
or similar is called, leading to a beacon config call, in parallel with
the last VAP being destroyed.
This likely should be fixed by making sure the bmiss/bstuck/watchdog
taskqueues are canceled whenever the last VAP is destroyed.
if_start().
This removes the overlapping data path TX from occurring, which
solves quite a number of the potential TX queue races in ath(4).
It doesn't fix the net80211 layer TX queue races and it doesn't
fix the raw TX path yet, but it's an important step towards this.
This hasn't dropped the TX performance in my testing; primarily
because now the TX path can quickly queue frames and continue
along processing.
This involves a few rather deep changes:
* Use the ath_buf as a queue placeholder for now, as we need to be
able to support queuing a list of mbufs (ie, when transmitting
fragments) and m_nextpkt can't be used here (because it's what is
joining the fragments together)
* if_transmit() now simply allocates the ath_buf and queues it to
a driver TX staging queue.
* TX is now moved into a taskqueue function.
* The TX taskqueue function now dequeues and transmits frames.
* Fragments are handled correctly here - as the current API passes
the fragment list as one mbuf list (joined with m_nextpkt) through
to the driver if_transmit().
* For the couple of places where ath_start() may be called (mostly
from net80211 when starting the VAP up again), just reimplement
it using the new enqueue and taskqueue methods.
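A sketch of the shape of the new transmit entry point (illustrative
names; locking, error handling and the real ath_buf/taskqueue plumbing
are omitted):

    #include <stddef.h>

    struct tx_entry { struct tx_entry *next; void *mbuf_chain; };

    struct tx_softc {
        struct tx_entry *stage_head, *stage_tail;   /* TX staging queue */
    };

    /* stand-ins for ath_buf allocation and taskqueue_enqueue() */
    static struct tx_entry *
    get_txbuf(struct tx_softc *sc) { (void)sc; return (NULL); }
    static void
    sched_tx_task(struct tx_softc *sc) { (void)sc; }

    static int
    sketch_transmit(struct tx_softc *sc, void *m)
    {
        struct tx_entry *bf = get_txbuf(sc);

        if (bf == NULL)
            return (-1);        /* no buffers; the caller frees the mbuf(s) */
        bf->mbuf_chain = m;     /* fragments stay joined via m_nextpkt */
        bf->next = NULL;
        if (sc->stage_tail != NULL)
            sc->stage_tail->next = bf;
        else
            sc->stage_head = bf;
        sc->stage_tail = bf;
        sched_tx_task(sc);      /* the TX taskqueue dequeues and transmits */
        return (0);
    }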
What I don't like (about this work and the TX code in general):
* I'm using the same lock for the staging TX queue management and the
actual TX. This isn't required; I'm just being slack.
* I haven't yet moved TX to a separate taskqueue (but the taskqueue is
created); it's easy enough to do this later if necessary. I just need
to make sure it's a higher priority queue, so TX has the same
behaviour as it used to (where it would preempt existing RX..)
* I need to re-review the TX path a little more and make sure that
ieee80211_node_*() functions aren't called within the TX lock.
When queueing, I should just push failed frames into a queue and
when I'm wrapping up the TX code, unlock the TX lock and
call ieee80211_node_free() on each.
* It would be nice if I could hold the TX lock for the entire
TX and TX completion, rather than this release/re-acquire behaviour.
But that requires that I shuffle around the TX completion code
to handle actual ath_buf free and net80211 callback/free outside
of the TX lock. That's one of my next projects.
* the ic_raw_xmit() path doesn't use this yet - so it still has
sequencing problems with parallel, overlapping calls to the
data path. I'll fix this later.
Tested:
* Hostap - AR9280, AR9220
* STA - AR5212, AR9280, AR5416
This is intended to support reporting FFT results during active channel
scans, for users who would like to fiddle around with writing applications
that do both FFT visualisation _and_ AP scanning.
* add a new ioctl to enable/trigger spectral scan at channel change/reset;
* set do_spectral consistently if it's enabled, so a channel set/reset
will carry forth the correct PHY error configuration so frames
are actually received;
* for NICs that don't do spectral scan, don't bother checking the
spectral scan state on channel change/reset.
Tested:
* AR9280 - STA and scanning;
* AR5416 - STA, ensured that the SS code doesn't panic