Commit Graph

38 Commits

Author SHA1 Message Date
Adrian Chadd
cd7dffd058 Migrate ath(4) to now use if_transmit instead of the legacy if_start
and if queue mechanism; also fix up (non-11n) TX fragment handling.

This may result in a bit of a performance drop for now but I plan on
debugging and resolving this at a later stage.

Whilst here, fix the transmit path so fragment transmission works.

The TX fragmentation handling is a bit special.  In order to
correctly transmit TX fragments, there are a bunch of corner cases that
need to be handled:

* They must be transmitted back to back, in the same order..
* .. ie, you need to hold the TX lock whilst transmitting this
  set of fragments rather than interleaving it with other MSDUs
  destined to other nodes;
* The length of the next fragment is required when transmitting, in
  order to correctly set the NAV field in the current frame to the
  length of the next frame; which requires ..
* .. that we know the transmit duration of the next frame, which ..
* .. requires us to set all fragments to the same rate, or make the
  rate decision up-front, etc.

To facilitate this, I've added a new ath_buf field to describe the
length of the next fragment.  This avoids having to keep the mbuf
chain together.  This used to work before my 11n TX path work because
the ath_tx_start() routine would be handed a single mbuf with m_nextpkt
pointing to the next frame, and that would be maintained all the way
up to when the duration calculation was done.  This doesn't hold
true any longer - the actual queuing may occur at any point in the
future (think ath_node TID software queuing) so this information
needs to be maintained.
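
Roughly, the idea looks like this (a sketch only - the field and helper
names such as bf_nextfraglen and frag_duration() are illustrative, not
necessarily what the driver actually uses):

#include <stdint.h>
#include <stddef.h>

struct ath_buf_sketch {
    uint32_t bf_pktlen;      /* length of this fragment */
    uint32_t bf_nextfraglen; /* length of the next fragment; 0 if last */
};

/*
 * At queue time, stamp each fragment with its successor's length so the
 * mbuf chain doesn't need to survive until the duration is computed.
 */
static void
stamp_frag_lengths(struct ath_buf_sketch *frags, size_t nfrags)
{
    for (size_t i = 0; i < nfrags; i++)
        frags[i].bf_nextfraglen =
            (i + 1 < nfrags) ? frags[i + 1].bf_pktlen : 0;
}

/* Illustrative duration: microseconds to send 'len' bytes at 'rate_kbps'. */
static uint32_t
frag_duration(uint32_t len, uint32_t rate_kbps)
{
    return (len * 8 * 1000) / rate_kbps;    /* rate assumed non-zero */
}

/*
 * Much later, at hardware queue time: the NAV in this frame covers the
 * next fragment (SIFS/ACK overhead omitted for brevity).
 */
static uint32_t
frag_nav(const struct ath_buf_sketch *bf, uint32_t rate_kbps)
{
    return (bf->bf_nextfraglen != 0) ?
        frag_duration(bf->bf_nextfraglen, rate_kbps) : 0;
}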

Right now this does work for non-11n frames, but it doesn't at all
enforce the same rate control decision for all frames in the fragment list.
I plan on fixing this in a followup commit.

RTS/CTS has the same issue, I'll look at fixing this in a subsequent
commit.

Finally, 11n fragment support requires the driver to have fully
decided what the rate scenario setup is - including 20/40MHz,
short/long GI, STBC, LDPC, number of streams, etc.  Right now that
decision is made _after_ the NAV field value is updated.
I'll fix all of this in subsequent commits.

Tested:

* AR5416, STA, transmitting 11abg fragments
* AR5416, STA, 11n fragments work but the NAV field is incorrect for
  the reasons above.

TODO:

* It would be nice to be able to queue mbufs per-node and per-TID so
  we can only queue ath_buf entries when it's time to assemble frames
  to send to the hardware.

  But honestly, we should just do that level of software queue management
  in net80211 rather than ath(4), so I'm going to leave this alone for now.

* More thorough AP, mesh and adhoc testing.

* Ensure that net80211 doesn't hand us fragmented frames when A-MPDU has
  been negotiated, as we can't do software retransmission of fragments.

* .. set CLRDMASK when transmitting fragments, just to ensure.
2013-05-26 22:23:39 +00:00
Adrian Chadd
9be82a4209 Be (very) careful about how to add more TX DMA work.
The list-based DMA engine has the following behaviour:

* When the DMA engine is in the init state, you can write the first
  descriptor address to the QCU TxDP register and it will work.

* Then when it hits the end of the list (ie, it either hits a NULL
  link pointer, OR it hits a descriptor with VEOL set) the QCU
  stops, and the TxDP points to the last descriptor that was transmitted.

* Then when you want to transmit a new frame, you can then either:
  + write the head of the new list into TxDP, or
  + you write the head of the new list into the link pointer of the
    last completed descriptor (ie, where TxDP points), then kick
    TxE to restart transmission on that QCU.

* The hardware then will re-read the descriptor to pick up the link
  pointer and then jump to that.

Now, the quirks:

* If you write a TxDP when there's been no previous TxDP (ie, it's 0),
  it works.

* If you write a TxDP in any other instance, the TxDP write may actually
  fail.  Thus, when you start transmission, it will re-read the last
  transmitted descriptor to get the link pointer, NOT just start a new
  transmission.

So the correct thing to do here is:

* ALWAYS use the holding descriptor (ie, the last transmitted descriptor
  that we've kept safe) and use the link pointer in _THAT_ to transmit
  the next frame.

* NEVER write to the TxDP after you've done the initial write.

* .. also, don't do this whilst you're also resetting the NIC.

With this in mind, the following patch does basically the above.

* Since this encapsulates Sam's issues with the QCU behaviour w/ TDMA,
  kill the TDMA special case and replace it with the above.

* Add a new TXQ flag - PUTRUNNING - which indicates that we've started
  DMA.

* Clear that flag when DMA has been shutdown.

* Ensure that we're not restarting DMA with PUTRUNNING enabled.

* Fix the link pointer logic during TXQ drain - we should always ensure
  the link pointer does point to something if there's a list of frames.
  Having it be NULL as an indication that DMA has finished or during
  a reset causes trouble.

Now, given all of this, I want to nuke axq_link from orbit.  There are now
HAL methods to get and set the link pointer of a descriptor, so what we
should do instead is to update the right link pointer (roughly as sketched
after the list below).

* If there's a holding descriptor and an empty TXQ list, set the
  link pointer of said holding descriptor to the new frame.

* If there's a non-empty TXQ list, set the link pointer of the
  last descriptor in the list to the new frame.

* Nuke axq_link from orbit.
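
Here's a rough sketch of those rules, simplified to a single-descriptor
handoff (the helper names and struct layout are stand-ins for this
example, not the real HAL/driver API):

struct desc { struct desc *link; };

struct txq_sketch {
    struct desc *axq_tail;       /* last pending descriptor, or NULL */
    struct desc *axq_holding;    /* last completed ("holding") descriptor */
    int          axq_putrunning; /* DMA started and not yet shut down */
};

static void hal_set_link(struct desc *d, struct desc *next) { d->link = next; }
static void hal_write_txdp(struct desc *d) { (void)d; /* program TxDP */ }
static void hal_tx_start(void) { /* kick TxE on this QCU */ }

static void
tx_handoff(struct txq_sketch *q, struct desc *d)
{
    if (q->axq_tail != NULL) {
        /* Non-empty queue: chain onto the last pending descriptor. */
        hal_set_link(q->axq_tail, d);
    } else if (q->axq_holding != NULL) {
        /*
         * Empty queue but DMA has run before: chain onto the holding
         * descriptor; the hardware re-reads it to find the link.
         */
        hal_set_link(q->axq_holding, d);
    }

    if (!q->axq_putrunning) {
        /* Only ever program TxDP for the initial start (or after a
         * full DMA shutdown has cleared the flag). */
        hal_write_txdp(d);
        q->axq_putrunning = 1;
    }
    hal_tx_start();
    q->axq_tail = d;
}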

Note:

* The AR9380 doesn't need this.  FIFO TX writes are atomic.  As long as
  we don't append to a list of frames that we've already passed to the
  hardware, all of the above doesn't apply.  The holding descriptor stuff
  is still needed to ensure the hardware can re-read a completed
  descriptor to move onto the next one, but we restart DMA by pushing
  a new FIFO entry into the TX QCU.  That doesn't require any real
  gymnastics.

Tested:

* AR5210, AR5211, AR5212, AR5416, AR9380 - STA mode.
2013-05-18 18:27:53 +00:00
Adrian Chadd
caa16e6960 This shouldn't have made it into this commit, sorry. 2013-05-08 08:53:55 +00:00
Adrian Chadd
d3731e4b21 Revert a previous commit - this is causing hardware errors.
I'm not sure why this is failing.  The holding descriptor should be being
re-read when starting DMA of the next frame.  Obviously something here
isn't totally correct.

I'll review the TX queue handling and see if I can figure out why this
is failing.  I'll then undo this revert and use the holding
descriptor again.
2013-05-08 07:30:33 +00:00
Adrian Chadd
3f3a5dbd2c Ensure that we only call the busdma unmap/flush routines once, when
the buffer is being freed.

* When buffers are cloned, the original mapping isn't copied, but it
  wasn't being freed until later.  To be safe, free the
  mapping when the buffer is cloned.

* ath_freebuf() now no longer calls the busdma sync/unmap routines.

* ath_tx_freebuf() now calls sync/unmap.

* Call sync first, before calling unmap.

Tested:

* AR5416, STA mode
2013-04-01 20:57:13 +00:00
Adrian Chadd
3feffbd796 Add per-TXQ EDMA FIFO staging queue support.
Each set of frames pushed into a FIFO is represented by a list of
ath_bufs - the first ath_buf in the FIFO list is marked with
ATH_BUF_FIFOPTR; the last ath_buf in the FIFO list is marked with
ATH_BUF_FIFOEND.

Multiple lists of frames are just glued together in the TAILQ as per
normal - except that at the end of a FIFO list, the descriptor link
pointer will be NULL and it'll be tagged with ATH_BUF_FIFOEND.

For non-EDMA chipsets this is a no-op - the ath_txq frame list (axq_q)
stays the same and is treated the same.

For EDMA chipsets the frames are pushed into axq_q and then when
the FIFO is to be (re) filled, frames will be moved onto the FIFO
queue and then pushed into the FIFO.

So:

* Add a new queue in each hardware TXQ (ath_txq) for staging FIFO frame
  lists.  It's a TAILQ (like the normal hardware frame queue) rather than
  the ath9k list-of-lists to represent FIFO entries.

* Add new ath_buf flags - ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND (see the
  sketch after this list).

* When allocating ath_buf entries, clear out the flag value before
  returning it or it'll end up having stale flags.

* When cloning ath_buf entries, only clone ATH_BUF_MGMT.  Don't clone
  the FIFO related flags.

* Extend ath_tx_draintxq() to first drain the FIFO staging queue, _then_
  drain the normal hardware queue.
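
A condensed sketch of one FIFO refill pass, just to show how the two
flags mark the list boundaries (queue macros are the standard sys/queue.h
ones; the struct and helper names are stand-ins for this example, and the
flag values are arbitrary):

#include <sys/queue.h>

#define ATH_BUF_FIFOPTR 0x01    /* first buffer of a FIFO list */
#define ATH_BUF_FIFOEND 0x02    /* last buffer of a FIFO list */

struct ath_buf_sketch {
    TAILQ_ENTRY(ath_buf_sketch) bf_list;
    int bf_flags;
};
TAILQ_HEAD(ath_bufhead_sketch, ath_buf_sketch);

/*
 * Move one list's worth of frames from the staging queue onto the FIFO
 * queue and tag the boundary buffers; the head would then be pushed
 * into the hardware FIFO and the last descriptor given a NULL link.
 */
static void
fifo_push_list(struct ath_bufhead_sketch *staging,
    struct ath_bufhead_sketch *fifo, int nframes)
{
    struct ath_buf_sketch *bf, *first = NULL, *last = NULL;

    for (int i = 0; i < nframes; i++) {
        bf = TAILQ_FIRST(staging);
        if (bf == NULL)
            break;
        TAILQ_REMOVE(staging, bf, bf_list);
        TAILQ_INSERT_TAIL(fifo, bf, bf_list);
        if (first == NULL)
            first = bf;
        last = bf;
    }
    if (first != NULL)
        first->bf_flags |= ATH_BUF_FIFOPTR;
    if (last != NULL)
        last->bf_flags |= ATH_BUF_FIFOEND;
}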

Tested:

* AR9280, hostap
* AR9280, STA
* AR9380/AR9580 - hostap

TODO:

* Test on other chipsets, just to be thorough.
2013-03-26 19:46:51 +00:00
Adrian Chadd
b837332d0a Overhaul the TXQ locking (again!) as part of some beacon/cabq timing
related issues.

Moving the TX locking under one lock made things easier to progress on
but it had one important side-effect - it increased the latency when
handling CABQ setup when sending beacons.

This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.

The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.

The second half of this commit is the CABQ related setup changes needed
for sane looking EDMA CABQ support.  Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot.  For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.

The aim here is to setup the CABQ list and then push _that list_ to
the hardware for transmission.  I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.

Major:

* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
  a single lock.

* Leave the software queue side of things under the ATH_TX_LOCK lock,
  (continuing) to serialise things as they are.

* Add a new function which is called whenever there's a beacon miss,
  to print out some debugging.  This is primarily designed to help
  me figure out if the beacon miss events are due to a noisy environment,
  issues with the PHY/MAC, or other.

* Move the CABQ setup/enable to occur _after_ all the VAPs have been
  looked at.  This means that for multiple VAPS in bursted mode, the
  CABQ gets primed once all VAPs are checked, rather than being primed
  on the first VAP and then having frames appended after this.

Minor:

* Add a (disabled) twiddle to let me enable/disable cabq traffic.
  It's primarily there to let me easily debug what's going on with beacon
  and CABQ setup/traffic; there's some DMA engine hangs which I'm finally
  trying to trace down.

* Clear bf_next when flushing frames; it should quieten some warnings
  that show up when a node goes away.

Tested:

* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)

TODO:

* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
  ensure the CABQ stuff is all sane (eg, the MORE bits that aren't being
  correctly set when chaining descriptors.)
2013-03-24 00:03:12 +00:00
Adrian Chadd
1a85141ad4 Pull out the if_transmit() work and revert back to ath_start().
My change had some rather significant behavioural effects on throughput.
The two issues I noticed:

* With if_start and the ifnet mbuf queue, any temporary latency
  would get eaten up by some mbufs being queued.  With ath_transmit()
  queuing things to ath_buf's, I'd only get 512 TX buffers before I
  couldn't queue any further frames.

* There's also some non-zero latency involved with TX being pushed
  into a taskqueue via direct dispatch.  Any time the scheduler didn't
  immediately schedule the ath TX task, extra latency would result.
  Various 1ge/10ge drivers implement both direct dispatch (if the TX
  lock can be acquired) and deferred task transmission (if the TX lock
  can't be acquired), with frames being pushed into a drbr queue
  (roughly as sketched below).  I'll have to do this at some point, but
  until I figure out how to deal with 802.11 fragments, I'll have to
  wait a while longer.
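
That pattern, in sketch form (generic placeholders only - this is not
the FreeBSD drbr/mtx API and not ath(4) code):

#include <stdbool.h>

struct pkt;

/* Placeholder primitives assumed for this sketch. */
bool tx_lock_try(void);
void tx_unlock(void);
int  ring_enqueue(struct pkt *p);   /* returns 0 on success */
struct pkt *ring_dequeue(void);
void hw_transmit(struct pkt *p);
void schedule_tx_task(void);        /* runs drain_ring() later, with the lock */

static void
drain_ring(void)
{
    struct pkt *p;

    while ((p = ring_dequeue()) != NULL)
        hw_transmit(p);
}

int
sketch_if_transmit(struct pkt *p)
{
    int error;

    /* Always enqueue first so per-flow ordering is preserved. */
    if ((error = ring_enqueue(p)) != 0)
        return (error);             /* ring full: explicit backpressure */

    if (tx_lock_try()) {
        /* Direct dispatch: we own the TX path right now. */
        drain_ring();
        tx_unlock();
    } else {
        /* Someone else is transmitting; a task finishes the job. */
        schedule_tx_task();
    }
    return (0);
}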

So what I saw:

* lots of extra latency, especially under load - if the taskqueue
  wasn't immediately scheduled, things went pear shaped;

* any extra latency would result in TX ath_buf's taking their sweet time
  being replenished, so any further calls to ath_transmit() would drop
  mbufs.

* .. yes, there's no explicit backpressure here - things are just dropped.
  Eek.

With this, the general performance has gone up, but those subtle if_start()
related race conditions are back.  For some reason, this is doubly-obvious
with the AR5416 NIC and I don't quite understand why yet.

There's an unrelated issue with AR5416 performance in STA mode (it's
fine in AP mode when bridging frames, weirdly..) that requires a little
further investigation.  Specifically - it works fine on a Lenovo T40
(single core CPU) running a March 2012 9-STABLE kernel, but a Lenovo T60
(dual core) running an early November 2012 kernel behaves very poorly.
The same hardware with an AR9160 or AR9280 behaves perfectly.
2013-02-13 05:32:19 +00:00
Adrian Chadd
53a835d2db Put this back into the ath taskqueue rather than the ath TX taskqueue.
This now should mean all the entry points into the software TX
scheduler are back in the same taskqueue.
2013-02-11 07:49:40 +00:00
Adrian Chadd
21bca442b9 Methodize the process of adding the software TX queue to the taskqueue.
Move it (for now) to the TX taskqueue.
2013-02-07 02:15:25 +00:00
Adrian Chadd
f28a552089 Migrate the TX sending code out from under the ath0 taskq and into
the separate ath0 TX taskq.

Whilst here, make sure that the TX software scheduler is also
running out of the TX task, rather than the ath0 taskqueue.

Make sure that the tx taskqueue is blocked/unblocked as necessary.

This allows for a little more parallelism on multi-core machines,
as well as (eventually) supporting a higher task priority for TX
tasks, allowing said TX task to preempt an already running RX or
TX completion task.

Tested:

* AR5416, AR9280 hostap and STA modes
2013-01-26 00:14:34 +00:00
Adrian Chadd
c5239edb98 Implement frame (data) transmission using if_transmit(), rather than
if_start().

This removes the overlapping data path TX from occuring, which
solves quite a number of the potential TX queue races in ath(4).
It doesn't fix the net80211 layer TX queue races and it doesn't
fix the raw TX path yet, but it's an important step towards this.

This hasn't dropped the TX performance in my testing; primarily
because now the TX path can quickly queue frames and continue
along processing.

This involves a few rather deep changes:

* Use the ath_buf as a queue placeholder for now, as we need to be
  able to support queuing a list of mbufs (ie, when transmitting
  fragments) and m_nextpkt can't be used here (because it's what is
  joining the fragments together)

* if_transmit() now simply allocates the ath_buf and queues it to
  a driver TX staging queue (see the sketch after this list).

* TX is now moved into a taskqueue function.

* The TX taskqueue function now dequeues and transmits frames.

* Fragments are handled correctly here - as the current API passes
  the fragment list as one mbuf list (joined with m_nextpkt) through
  to the driver if_transmit().

* For the couple of places where ath_start() may be called (mostly
  from net80211 when starting the VAP up again), just reimplement
  it using the new enqueue and taskqueue methods.
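
In sketch form, the new flow looks something like this (all names are
placeholders for this example, not the actual driver symbols):

#include <stddef.h>

struct mbuf;

struct txbuf_sketch {
    struct mbuf *m;     /* may carry a whole fragment list via m_nextpkt */
};

/* Assumed helpers for this sketch. */
struct txbuf_sketch *getbuf(void);
void stageq_put(struct txbuf_sketch *bf);
struct txbuf_sketch *stageq_get(void);
void tx_start_one(struct txbuf_sketch *bf);  /* the "real" TX path */
void schedule_tx_task(void);

/* if_transmit(): cheap - allocate a buffer, queue it, kick the task. */
int
sketch_transmit(struct mbuf *m)
{
    struct txbuf_sketch *bf = getbuf();

    if (bf == NULL)
        return (-1);            /* no TX buffers; caller sees the error */
    bf->m = m;                  /* fragments stay joined by m_nextpkt */
    stageq_put(bf);
    schedule_tx_task();
    return (0);
}

/* The TX task: dequeue and transmit, serialised in one context. */
void
sketch_tx_task(void *arg)
{
    struct txbuf_sketch *bf;

    (void)arg;
    while ((bf = stageq_get()) != NULL)
        tx_start_one(bf);
}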

What I don't like (about this work and the TX code in general):

* I'm using the same lock for the staging TX queue management and the
  actual TX.  This isn't required; I'm just being slack.

* I haven't yet moved TX to a separate taskqueue (but the taskqueue is
  created); it's easy enough to do this later if necessary.  I just need
  to make sure it's a higher priority queue, so TX has the same
  behaviour as it used to (where it would preempt existing RX..)

* I need to re-review the TX path a little more and make sure that
  ieee80211_node_*() functions aren't called within the TX lock.
  When queueing, I should just push failed frames into a queue and
  when I'm wrapping up the TX code, unlock the TX lock and
  call ieee80211_node_free() on each.

* It would be nice if I could hold the TX lock for the entire
  TX and TX completion, rather than this release/re-acquire behaviour.
  But that requires that I shuffle around the TX completion code
  to handle actual ath_buf free and net80211 callback/free outside
  of the TX lock.  That's one of my next projects.

* the ic_raw_xmit() path doesn't use this yet - so it still has
  sequencing problems with parallel, overlapping calls to the
  data path.  I'll fix this later.

Tested:

* Hostap - AR9280, AR9220
* STA - AR5212, AR9280, AR5416
2013-01-15 18:01:23 +00:00
Adrian Chadd
1b5c5f5ad0 I give up - introduce a TX lock to serialise TX operations.
I've tried serialising TX using queues and such but unfortunately
due to how this interacts with the locking going on elsewhere in the
networking stack, the TX task gets delayed, resulting in quite a
noticeable throughput loss:

* baseline TCP for 2x2 11n HT40 is ~ 170mbit/sec;
* TCP for TX task in the ath taskq, with the RX also going on - 80mbit/sec;
* TCP for TX task in a separate, second taskq - 100mbit/sec.

So for now I'm going with the Linux wireless stack approach - lock TX
early.  The Linux code does this in the wireless stack, before the 802.11
state stuff happens and before it's punted to the driver.
But TX locking needs to also occur at the driver layer as the TX
completion code _also_ begins to drain the ifnet TX queue.

Whilst I'm here, add some KTR traces for the TX path.

Note:

* This really should be done at the net80211 layer (as well, at least.)
  But that'll have to wait for a little more thought to happen.
2012-10-31 06:27:58 +00:00
Adrian Chadd
548a605d0d Begin fleshing out some software queue awareness for TIM handling with
the power save queue.

* introduce some new ATH_NODE lock protected fields, tracking the
  net80211 psq and TIM state;
* when doing buffer transitions - ie, when sending and completing
  buffers - check the state of the SWQ and update the TIM appropriately.
* when clearing the TIM bit, if the SWQ is not empty then delay clearing
  it (see the sketch below).
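
The buffer-transition check is roughly the following (a sketch with
made-up field names; set_tim_bit() stands in for whatever net80211 hook
ends up being used):

struct node_sketch {
    int swq_depth;  /* frames on the driver software queue */
    int psq_depth;  /* frames on the net80211 power save queue */
    int tim_set;    /* TIM bit currently set for this station */
};

void set_tim_bit(struct node_sketch *n, int enable);    /* assumed hook */

/* Called (under the ATH_NODE lock) when a buffer is queued to, or
 * completed from, the software queue. */
static void
update_tim(struct node_sketch *n)
{
    int want = (n->swq_depth > 0 || n->psq_depth > 0);

    if (want && !n->tim_set) {
        set_tim_bit(n, 1);
        n->tim_set = 1;
    } else if (!want && n->tim_set) {
        /* Only clear once both queues have drained. */
        set_tim_bit(n, 0);
        n->tim_set = 0;
    }
}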

This is racy, but it's no less racy than the current net80211 power
save queue management code.  Specifically, with multiple TX threads,
it's quite plausible that parallel state updates will race and the
TIM will be left in an inconsistent state.  I'll address that in
a follow-up commit.
2012-10-28 21:13:12 +00:00
Adrian Chadd
8e7393944d Push the actual TX processing into the ath taskqueue, rather than having
it run out of multiple concurrent contexts.

Right now the ath(4) TX processing is a bit hairy. Specifically:

* It was running out of ath_start(), which could occur from multiple
  concurrent sending processes (as if_start() can be started from multiple
  sending threads nowadays.. sigh)

* during RX if fast frames are enabled (so not really at the moment, not
  until I fix this particular feature again..)

* during ath_reset() - so anything which calls that

* during ath_tx_proc*() in the ath taskqueue - ie, TX is attempted again
  after TX completion, as there's now hopefully some ath_bufs available.

* Then, the ic_raw_xmit() method can queue raw frames for transmission
  at any time, from any net80211 TX context. Ew.

This has caused packet ordering issues in the past - specifically,
there's absolutely no guarantee that preemption won't occur _during_
ath_start() by the TX completion processing, which will call ath_start()
again. It's a mess - 802.11 really, really wants things to be in
sequence or things go all kinds of loopy.

So:

* create a new task struct for TX'ing;
* make the if_start method simply queue the task on the ath taskqueue;
* make ath_start() just be called by the new TX task;
* make ath_tx_kick() just schedule the ath TX task, rather than directly
  calling ath_start() (see the sketch below).
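
In sketch form, using the stock taskqueue(9) API (the softc/task field
names and ath_tx_task() are assumptions for this example; taskqueue
creation is omitted):

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/taskqueue.h>

struct ath_softc_sketch {
    struct taskqueue *sc_tq;    /* the per-device ath taskqueue */
    struct task sc_txtask;      /* deferred TX work */
};

static void
ath_tx_task(void *arg, int npending)
{
    /* ath_start()-style frame processing would run here, now always
     * in the one taskqueue context. */
    (void)arg;
    (void)npending;
}

static void
ath_tx_task_setup(struct ath_softc_sketch *sc)
{
    TASK_INIT(&sc->sc_txtask, 0, ath_tx_task, sc);
}

/* What if_start and ath_tx_kick() reduce to: just schedule the task. */
static void
ath_tx_kick_sketch(struct ath_softc_sketch *sc)
{
    taskqueue_enqueue(sc->sc_tq, &sc->sc_txtask);
}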

Now yes, this means that I've taken a step backwards in terms of
concurrency - TX -and- RX now occur in the same single-task taskqueue.
But there's nothing stopping me from separating out the TX / TX completion
code into a separate taskqueue which runs in parallel with the RX path,
if that ends up being appropriate for some platforms.

This fixes the CCMP/seqno concurrency issues that creep up when you
transmit large amounts of uni-directional UDP traffic (>200MBit) on a
FreeBSD STA -> AP, as now there's only one TX context no matter what's
going on (TX completion->retry/software queue,
userland->net80211->ath_start(), TX completion -> ath_start());
but it won't fix any concurrency issues between raw transmitted frames
and non-raw transmitted frames (eg EAPOL frames on TID 16 and any other
TID 16 multicast traffic that gets put on the CABQ.)  That is going to
require a bunch more re-architecture before it's feasible to fix.

In any case, this is a big step towards making the majority of the TX
path locking irrelevant, as now almost all TX activity occurs in the
taskqueue.

Phew.
2012-10-14 20:44:08 +00:00
Adrian Chadd
bad98824c0 Break out the TX completion code into a separate function, so it can be
re-used by the upcoming EDMA TX completion code.

Make ath_stoptxdma() public, again so the EDMA TX code can use it.

Don't check for the TXQ bitmap in the ISR when doing EDMA work as it
doesn't apply for EDMA.
2012-08-14 22:32:20 +00:00
Adrian Chadd
1762ec944a Revert the ath_tx_draintxq() method, and instead teach it the minimum
necessary to "do" EDMA.

It was just using the TX completion status for logging information about
the descriptor completion.  Since with EDMA we don't know this without
checking the TX completion FIFO, we can't provide this information.
So don't.
2012-08-12 00:46:15 +00:00
Adrian Chadd
788e6aa99c Break out ath_draintxq() into a method and un-methodize ath_tx_processq().
Now that I understand what's going on with this, I've realised that
it's going to be quite difficult to implement a processq method in
the EDMA case.  Because there's a separate TX status FIFO, I can't
just run processq() on each EDMA TXQ to see what's finished.
I have to actually run the TX status queue and handle individual
TXQs.

So:

* unmethodize ath_tx_processq();
* leave ath_tx_draintxq() as a method, as it only uses the completion status
  for debugging rather than actively completing the frames (ie, all frames
  here are failed);
* Methodize ath_draintxq().

The EDMA ath_draintxq() will have to take care of running the TX
completion FIFO before (potentially) freeing frames in the queue.

The only two places where ath_tx_draintxq() (on a single TXQ) is used:

* ath_draintxq(); and
* the CABQ handling in the beacon setup code - it drains the CABQ before
  populating the CABQ with frames for a new beacon (when doing multi-VAP
  operation.)

So it's quite possible that once I methodize the CABQ and beacon handling,
I can just drop ath_tx_draintxq() in its entirety.

Finally, it's also quite possible that I can remove ath_tx_draintxq()
in the future and just "teach" it to not check the status when doing
EDMA.
2012-08-12 00:37:29 +00:00
Adrian Chadd
f8418db57e Migrate some more TX side setup routines to be methods. 2012-07-31 03:09:48 +00:00
Adrian Chadd
b39722d6dd Migrate the descriptor allocation function to not care about the number
of buffers, only the number of descriptors.

This involves:

* Change the allocation function to not use nbuf at all;
* When calling it, pass in "nbuf * ndesc" to correctly update how many
  descriptors are being allocated.

Whilst here, fix the descriptor allocation code to correctly allocate
a larger buffer size if the Merlin 4KB WAR is required.  It overallocates
descriptors when allocating a block that doesn't ever have a 4KB boundary
being crossed, but that can be fixed at a later stage.
2012-07-27 05:48:42 +00:00
Adrian Chadd
c9f78537bc Refactor out the descriptor allocation code from the buffer allocation
code.

The TX EDMA completion path is going to need descriptors allocated but
not any buffers.  This code will form the basis for that.
2012-07-27 05:34:45 +00:00
Adrian Chadd
1006fc0c3b Modify ath_descdma_setup() to take a descriptor size parameter.
The AR9300 and later descriptors are 128 bytes, however I'd like to make
sure that isn't used for earlier chips.

* Populate the TX descriptor length field in the softc with
  sizeof(ath_desc)

* Use this field when allocating the TX descriptors

* Pre-AR93xx TX/RX descriptors will use the ath_desc size; newer ones will
  query the HAL for these sizes (sketched below).
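
Sketched out, the plumbing is roughly this (names are stand-ins; the
real change passes the chosen size into ath_descdma_setup() rather than
computing it separately):

#include <stddef.h>

struct ath_desc_sketch { unsigned int ds_words[10]; }; /* legacy layout stand-in */

struct softc_sketch {
    size_t sc_tx_desclen;   /* bytes per TX descriptor for this chip */
};

static void
pick_desc_size(struct softc_sketch *sc, int is_ar93xx, size_t hal_desclen)
{
    /* Pre-AR93xx chips use the fixed ath_desc size; AR93xx and later
     * report their (larger, 128-byte) descriptor size via the HAL. */
    sc->sc_tx_desclen = is_ar93xx ? hal_desclen :
        sizeof(struct ath_desc_sketch);
}

/* The DMA allocation then sizes memory from the descriptor count. */
static size_t
descdma_bytes(const struct softc_sketch *sc, int nbuf, int ndesc)
{
    return ((size_t)nbuf * (size_t)ndesc * sc->sc_tx_desclen);
}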
2012-07-23 23:40:13 +00:00
Adrian Chadd
39abbd9bd2 Fix EDMA RX to actually work without panicing the machine.
I was setting up the RX EDMA buffer to be 4096 bytes rather than the
RX data buffer portion.  The hardware was likely getting very confused
and DMAing descriptor portions into places it shouldn't, leading to
memory corruption and occasional panics.

Whilst here, don't bother allocating descriptors for the RX EDMA case.
We don't use those descriptors. Instead, just allocate ath_buf entries.
2012-07-14 02:07:51 +00:00
Adrian Chadd
3d184db2f8 Further preparations for the RX EDMA support.
Break out the DMA descriptor setup/teardown code into a method.
The EDMA RX code doesn't allocate descriptors, just ath_buf entries.
2012-07-09 08:37:59 +00:00
Adrian Chadd
af33d486ab Implement a separate, smaller pool of ath_buf entries for use by management
traffic.

* Create sc_mgmt_txbuf and sc_mgmt_txdesc, initialise/free them appropriately.
* Create an enum to represent buffer types in the API.
* Extend ath_getbuf() and _ath_getbuf_locked() to take the above enum (see
  the sketch after this list).
* Right now anything sent via ic_raw_xmit() allocates via ATH_BUFTYPE_MGMT.
  This may not be very useful.
* Add ATH_BUF_MGMT flag (ath_buf.bf_flags) which indicates the current buffer
  is a mgmt buffer and should go back onto the mgmt free list.
* Extend 'txagg' to include debugging output for both normal and mgmt txbufs.
* When checking/clearing ATH_BUF_BUSY, do it on both TX pools.
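
The allocation-side change is roughly this (the enum and flag names
follow the text above; the flag value and pool helpers are made up for
the sketch):

#include <stddef.h>

typedef enum {
    ATH_BUFTYPE_NORMAL = 0,
    ATH_BUFTYPE_MGMT   = 1
} ath_buf_type_sketch_t;

#define ATH_BUF_MGMT    0x04    /* buffer came from the mgmt pool */

struct buf_sketch { int bf_flags; };

/* Assumed pool helpers for this sketch. */
struct buf_sketch *pool_get_normal(void);
struct buf_sketch *pool_get_mgmt(void);

/*
 * Management/raw frames draw from the small reserved pool so a full
 * data TX queue can't starve BARs and probe responses.
 */
struct buf_sketch *
getbuf_sketch(ath_buf_type_sketch_t btype)
{
    struct buf_sketch *bf;

    bf = (btype == ATH_BUFTYPE_MGMT) ? pool_get_mgmt() : pool_get_normal();
    if (bf != NULL && btype == ATH_BUFTYPE_MGMT)
        bf->bf_flags |= ATH_BUF_MGMT;   /* free back to the mgmt pool */
    return (bf);
}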

Tested:

* STA mode, with heavy UDP injection via iperf.  This filled the TX queue
  however BARs were still going out successfully.

TODO:

* Initialise the mgmt buffers with ATH_BUF_MGMT and then ensure the right
  type is being allocated and freed on the appropriate list.  That'd save
  a write operation (to bf->bf_flags) on each buffer alloc/free.

* Test on AP mode, ensure that BAR TX and probe responses go out nicely
  when the main TX queue is filled (eg with paused traffic to a TID,
  awaiting a BAR to complete.)

PR:		kern/168170
2012-06-13 06:57:55 +00:00
Adrian Chadd
e1a50456b6 Replace the direct sc_txbuf manipulation with a pair of functions.
This is preparation work for having a separate ath_buf queue for
management traffic.

PR:		kern/168170
2012-06-13 05:39:16 +00:00
Adrian Chadd
9f95609828 Mostly revert previous commit(s). After doing a bunch of local testing,
it turns out that it negatively affects performance.  I'm still investigating
exactly why deferring the IO causes such negative TCP performance but
doesn't affect UDP performance.

Leave the ath_tx_kick() change in there however; it's going to be useful
to have that there for if_transmit() work.

PR:		kern/168649
2012-06-05 06:03:55 +00:00
Adrian Chadd
14d33c7e35 Create a function - ath_tx_kick() - which is called where ath_start() is
called to "kick" along TX.

For now, schedule a taskqueue call.

Later on I may go back to the direct call of ath_rx_tasklet() - but for
now, this will do.

I've tested UDP and TCP TX. UDP TX still achieves 240MBit, but TCP
TX gets stuck at around 100MBit or so, instead of the 150MBit it should
be at.  I'll re-test with no ACPI/power/sleep states enabled at startup
and see what effect it has.

This is in preparation for supporting an if_transmit() path, which will
turn ath_tx_kick() into a no-op (as there won't be an ifnet
queue to service.)

Tested:
	* AR9280 STA

TODO:
	* test on AR5416, AR9160, AR928x STA/AP modes

PR:		kern/168649
2012-06-05 03:14:49 +00:00
Adrian Chadd
470a7f4191 Migrate the TX path to a taskqueue for now, until a better way of
implementing parallel TX and TX/RX completion can be done without
simply abusing long-held locks.

Right now, multiple concurrent ath_start() entries can result in
frames being dequeued out of order.  Well, they're dequeued in order
fine, but if there's any preemption or race between CPUs between:

* removing the frame from the ifnet, and
* calling and running ath_tx_start(), until the frame is placed on a
  software or hardware TXQ

Then although dequeueing the frame is in-order, queueing it to the hardware
may be out of order.

This is solved in a lot of other drivers by just holding a TX lock over
a rather long period of time.  This lets them continue to direct dispatch
without races between dequeue and hardware queue.

Note to observers: if_transmit() doesn't necessarily solve this.
It removes the ifnet from the main path, but the same issue exists if
there's some intermediary queue (eg a bufring, which as an aside also
may pull in ifnet when you're using ALTQ.)

So, until I can sit down and code up a much better way of doing parallel
TX, I'm going to leave the TX path using a deferred taskqueue task.
What I will likely head towards is doing a direct dispatch to hardware
or software via if_transmit(), but it'll require some driver changes to
allow queues to be made without using the really large ath_buf / ath_desc
entries.

TODO:

* Look at how feasible it'll be to just do direct dispatch to
  ath_tx_start() from if_transmit(), avoiding doing _any_ intermediary
  serialisation into a global queue.  This may break ALTQ for example,
  so I have to be delicate.

* It's quite likely that I should break up ath_tx_start() so it
  deposits frames onto the software queues first, and then only fill
  in the 802.11 fields when it's being queued to the hardware.
  That will make the if_transmit() -> software queue path very
  quick and lightweight.

* This has some very bad behaviour when using ACPI and Cx states.
  I'll do some subsequent analysis using KTR and schedgraph and file
  a follow-up PR or two.

PR:		kern/168649
2012-06-04 22:01:12 +00:00
Adrian Chadd
ba5c15d9ba Migrate most of the beacon handling functions out to if_ath_beacon.c.
This is also in preparation for supporting AR9300 and later NICs.
2012-05-20 04:14:29 +00:00
Adrian Chadd
a35dae8d87 Migrate the TDMA management functions out of if_ath.c into if_ath_tdma.c.
There's some TX path TDMA code in if_ath_tx.c which should be migrated
out, but first I should likely try and verify/fix/repair the TDMA support
in 9.x and -HEAD.
2012-05-20 02:49:42 +00:00
Adrian Chadd
e60c4fc2c9 Migrate the bulk of the RX routines out from if_ath.c to if_ath_rx.[ch].
* migrate the rx processing out into if_ath_rx.c
* migrate the TSF functions into if_ath_tsf.h, as inlines

This is in preparation for supporting the EDMA RX routines, required to
support the AR93xx series NICs.

TODO:

* ath_start() shouldn't be private, but it's called as part of
  the RX path. I should likely migrate ath_rx_tasklet() back into
  if_ath.c and then return this to be 'static'.  The RX code really
  shouldn't need to see TX routines (and vice versa.)

* ath_beacon_* should be in if_ath_beacon.[ch].

* ath_tdma_* should be in if_ath_tdma.[ch] ...
2012-05-20 02:05:10 +00:00
Adrian Chadd
eb6f0de09d Introduce TX aggregation and software TX queue management
for Atheros AR5416 and later wireless devices.

This is a very large commit - the complete history can be
found in the user/adrian/if_ath_tx branch.

Legacy (ie, pre-AR5416) devices also use the per-software
TXQ support and (in theory) can support non-aggregation
ADDBA sessions. However, the net80211 stack doesn't currently
support this.

In summary:

TX path:

* queued frames normally go onto a per-TID, per-node queue
* some special frames (eg ADDBA control frames) are thrown
  directly onto the relevant hardware queue so they can
  go out before any software queued frames are queued.
* Add methods to create, suspend, resume and tear down an
  aggregation session.
* Add in software retransmission of both normal and aggregate
  frames.
* Add in completion handling of aggregate frames, including
  parsing the block ack bitmap provided by the hardware.
* Write an aggregation function which can assemble frames into
  an aggregate based on the selected rate control and channel
  configuration.
* The per-TID queues are locked based on their target hardware
  TX queue. This matches what ath9k/atheros does, and thus
  simplified porting over some of the aggregation logic.
* When doing TX aggregation, stick the sequence number allocation
  in the TX path rather than net80211 TX path, and protect it
  by the TXQ lock.

Rate control:

* Delay rate control selection until the frame is about to
  be queued to the hardware, so retried frames can have their
  rate control choices changed. Frames with a static rate
  control selection have that applied before each TX, just
  to simplify the TX path (ie, not have "static" and "dynamic"
  rate control special cased.)
* Teach ath_rate_sample about aggregates - both completion and
  errors.
* Add an EWMA for tracking what the current "good" MCS rate is
  based on failure rates (a generic EWMA sketch follows this list).
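
For illustration, a generic fixed-point EWMA of the success rate could
look like the following; this is not the actual ath_rate_sample
arithmetic, just the shape of the idea:

#include <stdint.h>

#define EWMA_LEVEL  96  /* weight (out of 100) given to history */

struct mcs_stats_sketch {
    uint32_t ewma_pct;  /* smoothed success rate, 0..10000 (== 100.00%) */
};

/* Fold one (A-)MPDU completion result into the running average. */
static void
ewma_update(struct mcs_stats_sketch *st, uint32_t ok, uint32_t tried)
{
    uint32_t sample_pct;

    if (tried == 0)
        return;
    sample_pct = ok * 10000 / tried;
    st->ewma_pct = (st->ewma_pct * EWMA_LEVEL +
        sample_pct * (100 - EWMA_LEVEL)) / 100;
}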

Misc:

* Introduce a bunch of dirty hacks and workarounds so TID mapping
  and net80211 frame inspection can be kept out of the net80211
  layer. Because of the way this code works (and it's from Atheros
  and Linux ath9k), there is a consistent, 1:1 mapping between
  TID and AC. So we need to ensure that frames going to a specific
  TID will _always_ end up on the right AC, and vice versa, or the
  completion/locking will simply get very confused. I plan on
  addressing this mess in the future.

Known issues:

* There is no BAR frame transmission just yet. A whole lot of
  tidying up needs to occur before BAR frame TX can occur in the
  "correct" place - ie, once the TID TX queue has been drained.

* Interface reset/purge/etc results in frames in the TX and RX
  queues being removed. This creates holes in the sequence numbers
  being assigned and the TX/RX AMPDU code (on either side) just
  hangs.

* There's no filtered frame support at the present moment, so
  stations going into power saving mode will simply have a number
  of frames dropped - likely resulting in a traffic "hang".

* Raw frame TX is going to just not function with 11n aggregation.
  Likely this needs to be modified to always override the sequence
  number if the frame is going into an aggregation session.
  However, raw frame injection currently doesn't work in general
  in net80211, so let's just ignore this for now until
  this is sorted out.

* HT protection is just not implemented and won't be until the above
  is sorted out. In addition, the AR5416 has issues RTS protecting
  large aggregates (anything >8k), so the work around needs to be
  ported and tested. Thus, this will be put on hold until the above
  work is complete.

* The rate control module 'sample' is the only currently supported
  module; onoe/amrr haven't been tested and have likely bit rotted
  a little. I'll follow up with some commits to make them work again
  for non-11n rates, but they won't be updated to handle 11n and
  aggregation. If someone wishes to do so then they're welcome to
  send along patches.

* .. and "sample" doesn't really do a good job of 11n TX. Specifically,
  the metrics used (packet TX time and failure/success rates) aren't as
  useful for 11n. It's likely that it should be extended to take into
  account the aggregate throughput possible and then choose a rate
  which maximises that. Ie, it may be acceptable for a higher MCS rate
  with a higher failure rate to be used if it gives a more acceptable
  throughput/latency than a lower MCS rate @ a lower error rate.
  Again, patches will be gratefully accepted.

Because of this, ATH_ENABLE_11N is still not enabled by default.

Sponsored by:	Hobnob, Inc.
Obtained from:	Linux, Atheros
2011-11-08 22:43:13 +00:00
Adrian Chadd
9352fb7ab0 Refactor out the TX buffer management and completion code in
preparation for TX aggregation.

* Add in logic which calls ath_buf bf->bf_comp if it's set.
  This allows for AMPDU (and RIFS, and FF, if someone desires) code
  to handle completion - which includes freeing subframes, retransmitting
  subframes, etc.

* Break out the buffer free, buffer busy/unbusy default completion handler
  code into separate functions. This allows bf_comp methods to free and
  unbusy each subframe ath_buf as required.

* Break out the statistics update code into a separate function, just
  to clean up the TX completion path a little.

Sponsored by:	Hobnob, Inc.
2011-11-08 21:49:33 +00:00
Adrian Chadd
e346b0732c Change ath_buf allocation to:
* Immediately return NULL if a buffer isn't available;
* Track the "buffers not available" count;
* Clear some fields used for tx aggregation;
* Add ath_buf_clone() which clones the majority of buffer state.
  This is needed when retransmission of a "busy" buffer is required.

Sponsored by:	Hobnob, Inc.
2011-11-08 21:13:05 +00:00
Adrian Chadd
517526efe8 In preparation for supporting 11n TX/RX properly, allow for TX queue draining
and interface resets to be marked as ATH_RESET_DEFAULT, ATH_RESET_FULL,
ATH_RESET_NOLOSS.

Currently a reset is still a reset - ie, all tx/rx frames in the hardware
queues are purged. This means that those frames will be lost to the 11n TX
and RX aggregation state tracking, breaking AMPDU sessions.

The (eventual) new semantics:

* ATH_RESET_DEFAULT:
      full reset, this is the default for reset situations
      which I haven't yet figured out what they should be.
* ATH_RESET_FULL:
      A full reset - for things such as channel changes.
* ATH_RESET_NOLOSS:
      Don't flush TX/RX queues - handle pending RX frames and leave TX
      frames where they are; restart TX DMA from where it was.
2011-11-08 18:56:52 +00:00
Adrian Chadd
6079fdbede Migrate the sysctl related routines (statistics, debugging, etc) out of
if_ath.c and into if_ath_sysctl.c.
2011-03-02 16:03:19 +00:00
Adrian Chadd
b8e788a53a Migrate the TX path code out of if_ath and into a separate source file.
There's two reasons for this:

* the raw and non-raw TX path shares a lot of duplicate code which should be
  refactored;
* the 11n-ready chip TX path needs a little reworking.
2011-01-29 11:35:23 +00:00