These are needed to diagnose the TX hangs that hiren and I are seeing.
Without them, the only debugging we'd see would require ATH_DEBUG_SW_TX
to be enabled, and that is going to be very, very spammy.
ATH_DEBUG_RESET is fine; it's only going to be done during stuck beacon
situations in AP mode.
Whilst I'm here, and now that it's behind debugging, let's just disable
the "print only one" conditional. I'll eventually make it more tunable.
Tested:
* AR9220, hostap mode.
The term WEP originates in IEEE Std 802.11-1997, where the bit indicates
whether the frame body of a MAC frame has been encrypted using the WEP
algorithm or not.
IEEE Std 802.11-2007 renames WEP to Protected Frame; the bit now indicates
whether the frame is protected by a cryptographic encapsulation
algorithm.
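As a rough reference, checking the bit in FreeBSD's net80211 looks something
like the sketch below (the struct and macro names are from
sys/net80211/ieee80211.h; the helper itself is illustrative):

    #include <net80211/ieee80211.h>

    /*
     * Return non-zero if the frame body is protected by a cryptographic
     * encapsulation algorithm (the bit formerly known as the WEP bit).
     */
    static __inline int
    frame_is_protected(const struct ieee80211_frame *wh)
    {
        return ((wh->i_fc[1] & IEEE80211_FC1_PROTECTED) != 0);
    }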
Reviewed by: adrian, rpaulo
to this event, adding if_var.h to files that do need it. Also, explicitly
include the headers that were previously being pulled in implicitly via if_var.h.
Sponsored by: Netflix
Sponsored by: Nginx, Inc.
and if queue mechanism; also fix up (non-11n) TX fragment handling.
This may result in a bit of a performance drop for now but I plan on
debugging and resolving this at a later stage.
Whilst here, fix the transmit path so fragment transmission works.
The TX fragmentation handling is a bit more involved. In order to
correctly transmit TX fragments, there are a number of corner cases that
need to be handled:
* They must be transmitted back to back, in the same order..
* .. ie, you need to hold the TX lock whilst transmitting this
set of fragments rather than interleaving it with other MSDUs
destined to other nodes;
* The length of the next fragment is required when transmitting, in
order to correctly set the NAV field in the current frame to the
length of the next frame; which requires ..
* .. that we know the transmit duration of the next frame, which ..
* .. requires us to use the same rate for all fragments, or make the
  rate decision up-front, etc.
To facilitate this, I've added a new ath_buf field to describe the
length of the next fragment. This avoids having to keep the mbuf
chain together. This used to work before my 11n TX path work because
the ath_tx_start() routine would be handed a single mbuf with m_nextpkt
pointing to the next frame, and that would be maintained all the way
up to when the duration calculation was done. This doesn't hold
true any longer - the actual queuing may occur at any point in the
future (think ath_node TID software queuing) so this information
needs to be maintained.
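As a rough sketch of the idea (the field name bf_nextfraglen and the helper
are illustrative), recording the next fragment's length at queue time might
look like:

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <dev/ath/if_athvar.h>

    /*
     * Sketch only: stash the length of the following fragment on each
     * ath_buf so the NAV/duration can be computed later, without keeping
     * the mbuf chain linked together via m_nextpkt.
     */
    static void
    ath_tx_stash_frag_lengths(struct ath_buf **bfs, struct mbuf **frags,
        int nfrags)
    {
        int i;

        for (i = 0; i < nfrags; i++) {
            /* The last fragment has nothing following it. */
            bfs[i]->bf_nextfraglen =
                (i + 1 < nfrags) ? frags[i + 1]->m_pkthdr.len : 0;
        }
    }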
Right now this does work for non-11n frames, but it doesn't at all
enforce the same rate control decision for all frames in the fragment
list. I plan on fixing this in a followup commit.
RTS/CTS has the same issue; I'll look at fixing this in a subsequent
commit.
Finally, 11n fragment support requires the driver to have fully
decided what the rate scenario setup is - including 20/40MHz,
short/long GI, STBC, LDPC, number of streams, etc. Right now that
decision is made _after_ the NAV field value is updated.
I'll fix all of this in subsequent commits.
Tested:
* AR5416, STA, transmitting 11abg fragments
* AR5416, STA, 11n fragments work but the NAV field is incorrect for
the reasons above.
TODO:
* It would be nice to be able to queue mbufs per-node and per-TID so
we can only queue ath_buf entries when it's time to assemble frames
to send to the hardware.
But honestly, we should just do that level of software queue management
in net80211 rather than ath(4), so I'm going to leave this alone for now.
* More thorough AP, mesh and adhoc testing.
* Ensure that net80211 doesn't hand us fragmented frames when A-MPDU has
been negotiated, as we can't do software retransmission of fragments.
* .. set CLRDMASK when transmitting fragments, just to be sure.
traffic.
When transmitting non-aggregate traffic, we need to keep the hardware
busy whilst transmitting or small bursts in txdone/tx latency will
kill us.
This restores non-aggregate iperf performance, especially when doing
TDMA.
Tested:
* AR5416<->AR5416, TDMA
* AR5416 STA <-> AR9280 AP
The list-based DMA engine has the following behaviour:
* When the DMA engine is in the init state, you can write the first
descriptor address to the QCU TxDP register and it will work.
* Then when it hits the end of the list (ie, it either hits a NULL
link pointer, OR it hits a descriptor with VEOL set) the QCU
stops, and the TxDP points to the last descriptor that was transmitted.
* Then when you want to transmit a new frame, you can then either:
+ write the head of the new list into TxDP, or
+ write the head of the new list into the link pointer of the
last completed descriptor (ie, where TxDP points), then kick
TxE to restart transmission on that QCU.
* The hardware then will re-read the descriptor to pick up the link
pointer and then jump to that.
Now, the quirks:
* If you write a TxDP when there's been no previous TxDP (ie, it's 0),
it works.
* If you write a TxDP in any other instance, the TxDP write may actually
fail. Thus, when you start transmission, it will re-read the last
transmitted descriptor to get the link pointer, NOT just start a new
transmission.
So the correct thing to do here is:
* ALWAYS use the holding descriptor (ie, the last transmitted descriptor
that we've kept safe) and use the link pointer in _THAT_ to transmit
the next frame.
* NEVER write to the TxDP after you've done the initial write.
* .. also, don't do this whilst you're also resetting the NIC.
With this in mind, the following patch does basically the above.
* Since this encapsulates Sam's issues with the QCU behaviour w/ TDMA,
kill the TDMA special case and replace it with the above.
* Add a new TXQ flag - PUTRUNNING - which indicates that we've started
DMA.
* Clear that flag when DMA has been shutdown.
* Ensure that we're not restarting DMA with PUTRUNNING enabled.
* Fix the link pointer logic during TXQ drain - we should always ensure
the link pointer does point to something if there's a list of frames.
Having it be NULL as an indication that DMA has finished or during
a reset causes trouble.
Now, given all of this, I want to nuke axq_link from orbit. There are now
HAL methods to get and set the link pointer of a descriptor, so what we
should do instead is update the right link pointer:
* If there's a holding descriptor and an empty TXQ list, set the
link pointer of said holding descriptor to the new frame.
* If there's a non-empty TXQ list, set the link pointer of the
last descriptor in the list to the new frame.
* Nuke axq_link from orbit.
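A rough sketch of that handoff logic follows (the axq_holdingbf field name
and the exact flow are illustrative; the HAL wrappers are the ones referred
to above):

    #include <dev/ath/if_athvar.h>

    /*
     * Sketch only: chain a new frame onto a QCU without re-writing TxDP
     * after the initial write.
     */
    static void
    ath_tx_handoff_sketch(struct ath_softc *sc, struct ath_txq *txq,
        struct ath_buf *bf)
    {
        struct ath_hal *ah = sc->sc_ah;
        struct ath_buf *last;

        if (TAILQ_EMPTY(&txq->axq_q)) {
            if (txq->axq_holdingbf != NULL) {
                /*
                 * Hardware is idle at the holding descriptor; point its
                 * link field at the new frame and kick TxE rather than
                 * touching TxDP.
                 */
                ath_hal_settxdesclink(ah,
                    txq->axq_holdingbf->bf_lastds, bf->bf_daddr);
            } else {
                /* Initial state: the one legal TxDP write. */
                ath_hal_puttxbuf(ah, txq->axq_qnum, bf->bf_daddr);
            }
        } else {
            /* Append via the last descriptor already on the list. */
            last = ATH_TXQ_LAST(txq, axq_q_s);
            ath_hal_settxdesclink(ah, last->bf_lastds, bf->bf_daddr);
        }
        ATH_TXQ_INSERT_TAIL(txq, bf, bf_list);
        ath_hal_txstart(ah, txq->axq_qnum);
    }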
Note:
* The AR9380 doesn't need this. FIFO TX writes are atomic. As long as
we don't append to a list of frames that we've already passed to the
hardware, none of the above applies. The holding descriptor stuff
is still needed to ensure the hardware can re-read a completed
descriptor to move onto the next one, but we restart DMA by pushing in
a new FIFO entry into the TX QCU. That doesn't require any real
gymnastics.
Tested:
* AR5210, AR5211, AR5212, AR5416, AR9380 - STA mode.
doesn't match the actual hardware queue this frame is queued to.
I'm trying to ensure that frames are actually being queued to the same
TX queue as the holding buffer that they end up on.
I'm pretty sure this is all correct so if this complains, it'll be due
to some kind of subtle broken-ness that needs fixing.
This is only done for legacy hardware, not EDMA hardware.
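The check itself is roughly of this shape (bfs_tx_queue is assumed to be the
queue id recorded at queue time; treat the helper as illustrative):

    #include <dev/ath/if_athvar.h>

    /*
     * Sketch only: complain if a completed buffer claims a different TX
     * queue to the one it was actually completed on.
     */
    static void
    ath_tx_check_qid(struct ath_softc *sc, struct ath_txq *txq,
        struct ath_buf *bf)
    {
        if (bf->bf_state.bfs_tx_queue != txq->axq_qnum) {
            device_printf(sc->sc_dev,
                "%s: bf=%p: qid mismatch: buf qid=%d, txq qid=%d\n",
                __func__, bf, bf->bf_state.bfs_tx_queue,
                txq->axq_qnum);
        }
    }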
Tested:
* AR5416 STA mode, very lightly
PS-POLL support.
This implements PS-POLL awareness in the driver:
* Implement frame "leaking", which allows for a software queue
to be scheduled even though it's asleep
* Track whether a frame has been leaked or not
* Leak out a single non-AMPDU frame when transmitting aggregates
* Queue BAR frames if the node is asleep
* Direct-dispatch the rest of control and management frames.
This allows for things like re-association to occur (which involves
sending probe req/resp as well as assoc request/response frames)
whilst the node is still marked as asleep.
* Limit how many frames can sit in the software node queue whilst
the node is asleep. net80211 is already buffering frames for us
so this is mostly just paranoia.
* Add a PS-POLL method which leaks out a frame if there's something
in the software queue, else it calls net80211's ps-poll routine.
Since the ath PS-POLL routine marks the node as having a single frame
to leak, either a software queued frame would leak, OR the next queued
frame would leak. The next queued frame could be something from the
net80211 power save queue, OR it could be a NULL frame from net80211.
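The overall shape of that PS-POLL method is sketched below (an_swq_depth,
an_leak_count, the saved handler pointer and the reschedule helper are
assumptions for illustration, not necessarily the real symbols):

    #include <dev/ath/if_athvar.h>

    static void
    ath_recv_pspoll_sketch(struct ath_softc *sc, struct ieee80211_node *ni,
        struct mbuf *m)
    {
        struct ath_node *an = ATH_NODE(ni);

        ATH_TX_LOCK(sc);
        if (an->an_swq_depth > 0) {
            /* Something is software queued: mark one frame to leak. */
            an->an_leak_count = 1;
            ath_tx_swq_sched(sc, an);   /* reschedule the node's TIDs */
            ATH_TX_UNLOCK(sc);
            return;
        }
        ATH_TX_UNLOCK(sc);

        /*
         * Nothing queued locally; fall back to net80211 so it can answer
         * the PS-POLL from its power save queue or with a NULL data frame.
         */
        sc->sc_recv_pspoll_saved(ni, m);
    }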
TODO:
* Don't transmit further BAR frames (eg via a timeout) if the node is
currently asleep. Otherwise we may end up exhausting management frames
due to lots of queued BAR frames.
I may just undo this bit later on and direct-dispatch BAR frames
even if the node is asleep.
* It would be nice to burst out a single A-MPDU frame if both ends
support this. I may end up adding a FreeBSD IE soon to negotiate
this power save behaviour.
* I should make STAs time out of power save mode if they've been in power
save for more than a handful of seconds. This way cards that get
"stuck" in power save mode don't stay there until the "inactivity" timeout
in net80211 kicks in.
* Move the queue depth check into the driver layer (ath_start / ath_transmit)
rather than doing it in the TX path.
* There could be some naughty corner cases with ps-poll leaking.
Specifically, if net80211 generates a NULL data frame whilst another
transmitter sends a normal data frame via the net80211 output / transmit
path, we need to ensure that the NULL data frame goes out first.
This is one of those things that should occur inside the VAP/ic TX lock.
Grr, more investigations to do..
Tested:
* STA: AR5416, AR9280
* AP: AR5416, AR9280, AR9160
the pause/resume code to not be called completely symmetrically.
I'll chase down the root cause of that soon; this at least works around
the bug and tells me when it happens.
directly referencing ni->ni_txpower.
This provides the hardware with a slightly more accurate idea of
the maximum TX power to use.
This is part of a series to get per-packet TPC to work (better).
Tested:
* AR5416, hostap mode
before the TX path is being aborted.
Right now it's in the TDMA code and I can live with that, but it really
should get fixed.
I'll do a more thorough audit of this code soon.
the buffer is being freed.
* When buffers are cloned, the original mapping isn't copied, but the
mapping wasn't being freed until later. To be safe, free the
mapping when the buffer is cloned.
* ath_freebuf() no longer calls the busdma sync/unmap routines.
* ath_tx_freebuf() now calls sync/unmap.
* Call sync first, before calling unmap.
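The required ordering is just the usual busdma teardown, sketched here with
the driver's existing tag/map handles:

    #include <dev/ath/if_athvar.h>

    /* Sketch only: sync the TX mapping, then unload it. */
    static void
    ath_tx_unmap_sketch(struct ath_softc *sc, struct ath_buf *bf)
    {
        bus_dmamap_sync(sc->sc_dmat, bf->bf_dmamap,
            BUS_DMASYNC_POSTWRITE);
        bus_dmamap_unload(sc->sc_dmat, bf->bf_dmamap);
    }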
Tested:
* AR5416, STA mode
(Yes, the previous code temporarily broke EDMA TX. I'm sorry; I should've
actually set ATH_BUF_FIFOEND on frames so txq->axq_fifo_depth was
cleared!)
This code implements a whole bunch of sorely needed EDMA TX improvements
along with CABQ TX support.
The specifics:
* When filling/refilling the FIFO, use the new TXQ staging queue
for FIFO frames
* Tag frames with ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND correctly.
For now the non-CABQ transmit path pushes one frame into the TXQ
staging queue without setting up the intermediary link pointers
to chain them together, so draining frames from the txq staging
queue to the FIFO queue occurs one A-MPDU / MPDU at a time.
* In the CABQ case, manually tag the list with ATH_BUF_FIFOPTR and
ATH_BUF_FIFOEND so a chain of frames is pushed into the FIFO
at once.
* Now that frames are in a FIFO pending queue, we can top up the
FIFO after completing a single frame. This means we can keep
it filled rather than waiting for it to drain and _then_ adding
more frames.
* The EDMA restart routine now walks the FIFO queue in the TXQ
rather than the pending queue and re-initialises the FIFO with
that.
* When restarting EDMA, we may have partially completed sending
a list. So stamp the first frame that we see in a list with
ATH_BUF_FIFOPTR and push _that_ into the hardware.
* When completing frames, only check those on the FIFO queue.
We should never ever queue frames from the pending queue
direct to the hardware, so there's no point in checking.
* Until I figure out what's going on, make sure that if a TXSTATUS
for an empty queue pops up, we complain loudly and continue.
This will stop the panics that people are seeing. I'll add
some code later which will assist in ensuring I'm populating
each descriptor with the correct queue ID.
* When considering whether to queue frames to the hardware queue
directly or software queue frames, make sure the depth of
the FIFO is taken into account now.
* When completing frames, tag them with ATH_BUF_BUSY if they're
not the final frame in a FIFO list. The same holding descriptor
behaviour is required when handling descriptors linked together
with a link pointer as the hardware will re-read the previous
descriptor to refresh the link pointer before continuing.
* .. and if we complete the FIFO list (ie, the buffer has
ATH_BUF_FIFOEND set), then we don't need the holding buffer
any longer. Thus, free it.
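Tagging a chain destined for one FIFO slot is roughly this (queue/flag names
as used above; the exact push flow is illustrative):

    #include <dev/ath/if_athvar.h>

    /*
     * Sketch only: push one frame list from the TXQ staging queue into
     * the FIFO.  The first buffer's address goes into the FIFO register;
     * completing the last buffer means this FIFO slot has drained.
     */
    static void
    ath_edma_push_list_sketch(struct ath_softc *sc, struct ath_txq *txq,
        struct ath_buf *bf_first, struct ath_buf *bf_last)
    {
        bf_first->bf_flags |= ATH_BUF_FIFOPTR;
        bf_last->bf_flags |= ATH_BUF_FIFOEND;

        ath_hal_puttxbuf(sc->sc_ah, txq->axq_qnum, bf_first->bf_daddr);
        txq->axq_fifo_depth++;
        ath_hal_txstart(sc->sc_ah, txq->axq_qnum);
    }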
Tested:
* AR9380/AR9580, STA and hostap
* AR9280, STA/hostap
TODO:
* I don't yet trust that the EDMA restart routine is totally correct
in all circumstances. I'll continue to thrash this out under heavy
multiple-TXQ traffic load and fix whatever pops up.
related issues.
Moving the TX locking under one lock made things easier to progress on,
but it had one important side-effect - it increased the latency of
CABQ setup when sending beacons.
This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.
The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.
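In other words, the beacon-time CABQ handoff should only ever need something
like the following (per-TXQ lock names as in the driver; the handoff call is
illustrative):

    #include <dev/ath/if_athvar.h>

    /*
     * Sketch only: hand the multicast frame list to the CABQ under the
     * CABQ's own per-TXQ lock rather than the global TX lock.
     */
    static void
    ath_beacon_cabq_start_sketch(struct ath_softc *sc, struct ath_buf *bf_list)
    {
        struct ath_txq *cabq = sc->sc_cabq;

        ATH_TXQ_LOCK(cabq);
        ath_tx_handoff(sc, cabq, bf_list);  /* queue + start DMA on the CABQ */
        ATH_TXQ_UNLOCK(cabq);
    }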
The second half of this commit is the CABQ related setup changes needed
for sane looking EDMA CABQ support. Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot. For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.
The aim here is to set up the CABQ list and then push _that list_ to
the hardware for transmission. I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.
Major:
* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
a single lock.
* Leave the software queue side of things under the ATH_TX_LOCK lock,
(continuing) to serialise things as they are.
* Add a new function which is called whenever there's a beacon miss,
to print out some debugging. This is primarily designed to help
me figure out if the beacon miss events are due to a noisy environment,
issues with the PHY/MAC, or other.
* Move the CABQ setup/enable to occur _after_ all the VAPs have been
looked at. This means that for multiple VAPs in bursted mode, the
CABQ gets primed once all VAPs are checked, rather than being primed
on the first VAP and then having frames appended after this.
Minor:
* Add a (disabled) twiddle to let me enable/disable cabq traffic.
It's primarily there to let me easily debug what's going on with beacon
and CABQ setup/traffic; there's some DMA engine hangs which I'm finally
trying to trace down.
* Clear bf_next when flushing frames; it should quieten some warnings
that show up when a node goes away.
Tested:
* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)
TODO:
* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
ensure the CABQ stuff is all sane (eg, the MORE bits, which aren't being
correctly set when chaining descriptors).
* when pulling frames off of the TID queue, the ATH_TID_REMOVE()
macro decrements the axq_depth field. So don't do it twice.
* in ath_tx_comp_cleanup_aggr(), bf wasn't being reset to bf_first
before walking the buffer list to complete buffers, so those buffers
will leak (see the sketch after this list).
I can 100% reliably trigger this on TID 1 traffic by using iperf -S 32
<client fields> to create traffic that maps to TID 1.
The reference driver doesn't do this check.
* Delete this debugging print - I used it when debugging the initial
TX descriptor chaining code. It now works, so let's toss it.
It just confuses people if they enable TX descriptor debugging as they
get two slightly different versions of the same descriptor.
* Indenting.
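The bf_first fix referred to above is roughly this shape (the completion
call is illustrative):

    #include <dev/ath/if_athvar.h>

    /*
     * Sketch only: rewind to the first buffer in the aggregate before
     * walking and completing the chain, so no buffers are leaked.
     */
    static void
    ath_tx_comp_cleanup_sketch(struct ath_softc *sc, struct ath_buf *bf_first)
    {
        struct ath_buf *bf, *bf_next;

        bf = bf_first;                  /* the fix: start from bf_first */
        while (bf != NULL) {
            bf_next = bf->bf_next;      /* grab this before bf is freed */
            ath_tx_default_comp(sc, bf, 0);
            bf = bf_next;
        }
    }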
My change had some rather significant behavioural effects on throughput.
The two issues I noticed:
* With if_start and the ifnet mbuf queue, any temporary latency
would get eaten up by some mbufs being queued. With ath_transmit()
queuing things to ath_buf's, I'd only get 512 TX buffers before I
couldn't queue any further frames.
* There's also some non-zero latency involved with TX being pushed
into a taskqueue via direct dispatch. Any time the scheduler didn't
immediately schedule the ath TX task would cause extra latency.
Various 1ge/10ge drivers implement both direct dispatch (if the TX
lock can be acquired) and deferred task transmission (if the TX lock
can't be acquired), with frames being pushed into a drbr queue.
I'll have to do this at some point, but until I figure out how to
deal with 802.11 fragments, I'll have to wait a while longer.
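For reference, that drbr pattern looks roughly like the sketch below (generic
names, not ath(4)'s; the drain routine is driver-specific and omitted):

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/mutex.h>
    #include <sys/taskqueue.h>
    #include <sys/buf_ring.h>
    #include <net/if.h>
    #include <net/if_var.h>

    struct example_softc {
        struct ifnet        *sc_ifp;
        struct buf_ring     *sc_br;
        struct mtx          sc_tx_mtx;
        struct taskqueue    *sc_tq;
        struct task         sc_tx_task;
    };

    /* Drains sc_br to the hardware; driver-specific, not shown here. */
    static void example_tx_drain(struct example_softc *sc);

    static int
    example_transmit(struct ifnet *ifp, struct mbuf *m)
    {
        struct example_softc *sc = ifp->if_softc;
        int error;

        error = drbr_enqueue(ifp, sc->sc_br, m);
        if (error != 0)
            return (error);

        if (mtx_trylock(&sc->sc_tx_mtx)) {
            /* Direct dispatch: we own the TX lock, drain now. */
            example_tx_drain(sc);
            mtx_unlock(&sc->sc_tx_mtx);
        } else {
            /* Lock busy: defer, the task will drain sc_br later. */
            taskqueue_enqueue(sc->sc_tq, &sc->sc_tx_task);
        }
        return (0);
    }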
So what I saw:
* lots of extra latency, especially under load - if the taskqueue
wasn't immediately scheduled, things went pear shaped;
* any extra latency would result in TX ath_buf's taking their sweet time
being replenished, so any further calls to ath_transmit() would drop
mbufs.
* .. yes, there's no explicit backpressure here - things are just dropped.
Eek.
With this, the general performance has gone up, but those subtle if_start()
related race conditions are back. For some reason, this is doubly-obvious
with the AR5416 NIC and I don't quite understand why yet.
There's an unrelated issue with AR5416 performance in STA mode (it's
fine in AP mode when bridging frames, weirdly..) that requires a little
further investigation. Specifically - it works fine on a Lenovo T40
(single core CPU) running a March 2012 9-STABLE kernel, but a Lenovo T60
(dual core) running an early November 2012 kernel behaves very poorly.
The same hardware with an AR9160 or AR9280 behaves perfectly.
the separate ath0 TX taskq.
Whilst here, make sure that the TX software scheduler is also
running out of the TX task, rather than the ath0 taskqueue.
Make sure that the tx taskqueue is blocked/unblocked as necessary.
This allows for a little more parallelism on multi-core machines,
as well as (eventually) supporting a higher task priority for TX
tasks, allowing said TX task to preempt an already running RX or
TX completion task.
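Setting up a dedicated TX taskqueue at attach time is roughly the following
(the sc_tx_tq field name and the priority are illustrative):

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <sys/taskqueue.h>
    #include <sys/priority.h>
    #include <dev/ath/if_athvar.h>

    /* Sketch only: create a TX taskqueue separate from the general one. */
    static void
    ath_tx_taskq_setup_sketch(struct ath_softc *sc)
    {
        sc->sc_tx_tq = taskqueue_create("ath_tx_taskq", M_NOWAIT,
            taskqueue_thread_enqueue, &sc->sc_tx_tq);
        taskqueue_start_threads(&sc->sc_tx_tq, 1, PI_NET, "%s ath tx",
            device_get_nameunit(sc->sc_dev));
    }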
Tested:
* AR5416, AR9280 hostap and STA modes
This is easily possible now that the TX is protected by a single
lock, rather than a per-TXQ (and thus per-TID) lock.
Only set CLRDMASK if none of the destinations are filtered.
This likely will need some tuning when it comes time to do U-APSD/PS-POLL
TX; however, at that point it should be manually set anyway.
Tested:
* AR9280, STA mode
TODO:
* More thorough testing in AP mode
* test other chipsets, just to be safe/sure.
if_start().
This stops the overlapping data path TX from occurring, which
solves quite a number of the potential TX queue races in ath(4).
It doesn't fix the net80211 layer TX queue races and it doesn't
fix the raw TX path yet, but it's an important step towards this.
This hasn't dropped the TX performance in my testing, primarily
because the TX path can now quickly queue frames and continue
along processing.
This involves a few rather deep changes:
* Use the ath_buf as a queue placeholder for now, as we need to be
able to support queuing a list of mbufs (ie, when transmitting
fragments) and m_nextpkt can't be used here (because it's what is
joining the fragments together)
* if_transmit() now simply allocates the ath_buf and queues it to
a driver TX staging queue (see the sketch after this list).
* TX is now moved into a taskqueue function.
* The TX taskqueue function now dequeues and transmits frames.
* Fragments are handled correctly here - as the current API passes
the fragment list as one mbuf list (joined with m_nextpkt) through
to the driver if_transmit().
* For the couple of places where ath_start() may be called (mostly
from net80211 when starting the VAP up again), just reimplement
it using the new enqueue and taskqueue methods.
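The overall shape of the new path is sketched below (the staging queue and
task names are illustrative; the buffer allocation call follows if_athvar.h):

    #include <dev/ath/if_athvar.h>

    /*
     * Sketch only: if_transmit just parks the mbuf (and its fragment
     * list, still linked via m_nextpkt) on an ath_buf in a staging
     * queue and pokes the TX task.
     */
    static int
    ath_transmit_sketch(struct ifnet *ifp, struct mbuf *m)
    {
        struct ath_softc *sc = ifp->if_softc;
        struct ath_buf *bf;

        ATH_TX_LOCK(sc);
        bf = ath_getbuf(sc, ATH_BUFTYPE_NORMAL);
        if (bf == NULL) {
            /* No buffers: drop; there's no backpressure here yet. */
            ATH_TX_UNLOCK(sc);
            m_freem(m);
            return (ENOBUFS);
        }
        bf->bf_m = m;
        TAILQ_INSERT_TAIL(&sc->sc_txbuf_staging, bf, bf_list);
        ATH_TX_UNLOCK(sc);

        taskqueue_enqueue(sc->sc_tq, &sc->sc_txtask);
        return (0);
    }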
What I don't like (about this work and the TX code in general):
* I'm using the same lock for the staging TX queue management and the
actual TX. This isn't required; I'm just being slack.
* I haven't yet moved TX to a separate taskqueue (but the taskqueue is
created); it's easy enough to do this later if necessary. I just need
to make sure it's a higher priority queue, so TX has the same
behaviour as it used to (where it would preempt existing RX..)
* I need to re-review the TX path a little more and make sure that
ieee80211_node_*() functions aren't called within the TX lock.
When queueing, I should just push failed frames into a queue and
when I'm wrapping up the TX code, unlock the TX lock and
call ieee80211_node_free() on each.
* It would be nice if I could hold the TX lock for the entire
TX and TX completion, rather than this release/re-acquire behaviour.
But that requires that I shuffle around the TX completion code
to handle actual ath_buf free and net80211 callback/free outside
of the TX lock. That's one of my next projects.
* the ic_raw_xmit() path doesn't use this yet - so it still has
sequencing problems with parallel, overlapping calls to the
data path. I'll fix this later.
Tested:
* Hostap - AR9280, AR9220
* STA - AR5212, AR9280, AR5416
I couldn't think of a way to maintain the hardware TXQ locks _and_ layer
on top of that per-TXQ software queuing and any other kind of fine-grained
locks (eg per-TID or per-node locks).
So for now, to facilitate some further code refactoring and development
as part of the final push to get software queue ps-poll and u-apsd handling
into this driver, just do away with them entirely.
I may eventually bring them back at some point, when it looks architecturally
cleaner to do so. But as it stands at present, it's
not really buying us much:
* in order to properly serialise things and not get bitten by scheduling
and locking interactions with things higher up in the stack, we need to
wrap the whole TX path in a long held lock. Otherwise we can end up
being pre-empted during frame handling, resulting in some out of order
frame handling between sequence number allocation and encryption handling
(ie, the seqno and the CCMP IV get out of sequence);
* .. so whilst that's the case, holding the lock for that long means that
we're acquiring and releasing the TXQ lock _inside_ that context;
* And we also acquire it per-frame during frame completion, but we currently
can't hold the lock for the duration of the TX completion as we need
to call net80211 layer things with the locks _unheld_ to avoid LOR.
* .. the other places where we grab that lock are reset/flush, which don't happen
often.
My eventual aim is to change the TX path so all rejected frame transmissions
and all frame completions result in any ieee80211_free_node() calls to occur
outside of the TX lock; then I can cut back on the amount of locking that
goes on here.
There may be some LORs that occur when ieee80211_free_node() is called when
the TX queue path fails; I'll begin to address these in follow-up commits.
* upon setup, tell the alq code what the chip information is.
* add TX/RX path logging for legacy chips.
* populate the tx/rx descriptor length fields with a best-estimate.
It's overly big (96 bytes when AH_SUPPORT_AR5416 is enabled)
but it'll do for now.
Whilst I'm here, add CURVNET_RESTORE() during probe/attach as a
partial solution to fixing crashes when the attach fails.
There are other attach failures that I have to deal with; those'll come
later.