Commit Graph

569 Commits

Author SHA1 Message Date
Adrian Chadd
dfaf8de927 Dump out the holding buffer descriptor contents and addresses stopping DMA. 2013-05-16 17:45:01 +00:00
Adrian Chadd
22a3aee637 Implement my first cut at "correct" node power-save and
PS-POLL support.

This implements PS-POLL awareness in the driver:

* Implement frame "leaking", which allows for a software queue
  to be scheduled even though the node is asleep
* Track whether a frame has been leaked or not
* Leak out a single non-AMPDU frame when transmitting aggregates
* Queue BAR frames if the node is asleep
* Direct-dispatch the rest of control and management frames.
  This allows for things like re-association to occur (which involves
  sending probe req/resp as well as assoc request/response) when
  the node is asleep and then tries reassociating.
* Limit how many frames can sit in the software node queue whilst
  the node is asleep.  net80211 is already buffering frames for us
  so this is mostly just paranoia.
* Add a PS-POLL method which leaks out a frame if there's something
  in the software queue, else it calls net80211's ps-poll routine.
  Since the ath PS-POLL routine marks the node as having a single frame
  to leak, either a software queued frame would leak, OR the next queued
  frame would leak. The next queued frame could be something from the
  net80211 power save queue, OR it could be a NULL frame from net80211.
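
A rough sketch of that "leak one frame" behaviour follows.  The types,
fields and helper functions are simplified stand-ins for illustration
only - they are not the real ath(4)/net80211 structures or API.

    /* Sketch only: hypothetical node/queue types, not the driver's. */
    #include <sys/queue.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct pkt {
        TAILQ_ENTRY(pkt) next;
    };

    struct psnode {
        TAILQ_HEAD(, pkt) swq;  /* per-node software queue */
        bool is_asleep;         /* node power-save state */
        bool leak_one;          /* allow one frame out while asleep */
    };

    /* Hypothetical hooks. */
    static void hw_transmit(struct pkt *p) { (void)p; }
    static void net80211_pspoll_fallback(struct psnode *n) { (void)n; }

    /*
     * PS-POLL handler: if the software queue has something, mark the
     * node so the next scheduled frame leaks out despite power save;
     * otherwise punt to net80211 (which may send a NULL frame).
     */
    void
    node_pspoll(struct psnode *n)
    {
        if (!TAILQ_EMPTY(&n->swq))
            n->leak_one = true;
        else
            net80211_pspoll_fallback(n);
    }

    /*
     * Software queue scheduler: an asleep node is normally not
     * scheduled, but exactly one frame may leak out when requested.
     */
    void
    node_sched(struct psnode *n)
    {
        struct pkt *p;

        while ((p = TAILQ_FIRST(&n->swq)) != NULL) {
            if (n->is_asleep && !n->leak_one)
                break;                  /* stay paused while asleep */
            TAILQ_REMOVE(&n->swq, p, next);
            hw_transmit(p);
            if (n->leak_one) {
                n->leak_one = false;
                break;                  /* only a single frame leaks */
            }
        }
    }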

TODO:

* Don't transmit further BAR frames (eg via a timeout) if the node is
  currently asleep.  Otherwise we may end up exhausting management frames
  due to lots of queued BAR frames.

  I may just undo this bit later on and direct-dispatch BAR frames
  even if the node is asleep.

* It would be nice to burst out a single A-MPDU frame if both ends
  support this.  I may end up adding a FreeBSD IE soon to negotiate
  this power save behaviour.

* I should make STAs time out of power save mode if they've been in power
  save for more than a handful of seconds.  This way cards that get
  "stuck" in power save mode don't stay there for the "inactivity" timeout
  in net80211.

* Move the queue depth check into the driver layer (ath_start / ath_transmit)
  rather than doing it in the TX path.

* There could be some naughty corner cases with ps-poll leaking.
  Specifically, if net80211 generates a NULL data frame whilst another
  transmitter sends a normal data frame via the net80211 output / transmit path,
  we need to ensure that the NULL data frame goes out first.
  This is one of those things that should occur inside the VAP/ic TX lock.
  Grr, more investigations to do..

Tested:

* STA: AR5416, AR9280
* AP: AR5416, AR9280, AR9160
2013-05-15 18:33:05 +00:00
Adrian Chadd
370f81fab6 Add ALQ beacon debugging. 2013-05-13 21:18:00 +00:00
Adrian Chadd
9b48fb4b32 Improve the debugging output - use the MAC address rather than various
pointer values everywhere.
2013-05-13 19:52:35 +00:00
Adrian Chadd
ba83edd45c Since the node state is 100% back under the TX lock, just kill the use
of atomics.

I'll re-think this nonsense later.
2013-05-13 19:03:12 +00:00
Adrian Chadd
4bed2b67ca Begin tidying up the reassociation and node sleep/wakeup paths.
* Move the node sleep/wake state under the TX lock rather than the
  node lock.  Let's leave the node lock protecting rate control only
  for now.

* When reassociating, various state needs to be cleared.  For example,
  the aggregate session needs to be torn down, including any pending
  aggregation negotiation and BAR TX waiting.

* .. and we need to do a "cleanup" pass since frames in the hardware
  TX queue need to be transmitted.

Modify ath_tx_tid_cleanup() to be called with the TX lock held and push
frames into a completion list.  This allows for the cleanup to be
done atomically for all TIDs in a node rather than grabbing and
releasing the TX lock each time.
2013-05-13 18:56:04 +00:00
Adrian Chadd
8328d6e4d4 Make sure the holding descriptor and link pointer are both freed during
a non-loss reset.

When the drain functions are called, the holding descriptor and link pointers
are NULLed out.

But when the processq function is called during a non-loss reset, this
doesn't occur.  So the next time a DMA occurs, it's chained to a descriptor
that no longer exists and the hardware gets angry.

Tested:

* AR5416, STA mode; use sysctl dev.ath.X.forcebstuck=1 to force a non-loss
  reset.

TODO:

* Further AR9380 testing just to check that the behaviour for the EDMA
  chips is sane.

PR:		kern/178477
2013-05-10 10:06:45 +00:00
Adrian Chadd
5e0185081d Fix the holding descriptor logic to actually be "right" (for values
of "right".)

Flip back on the "always continue TX DMA using the holding descriptor"
code - by always setting ATH_BUF_BUSY and never setting axq_link to NULL.

Since the holding descriptor is accessed via txq->axq_link and _that_
is done behind the TXQ lock rather than the TX path lock, the holding
descriptor stuff itself needs to be behind the TXQ lock.

So, do the mental gymnastics needed to do this.

I've not seen any of the hardware failures that I was seeing when
I last tried to do this.

Tested:

* AR5416, STA mode
2013-05-08 21:23:51 +00:00
Adrian Chadd
d3731e4b21 Revert a previous commit - this is causing hardware errors.
I'm not sure why this is failing.  The holding descriptor should be being
re-read when starting DMA of the next frame.  Obviously something here
isn't totally correct.

I'll review the TX queue handling and see if I can figure out why this
is failing.  I'll then re-revert this patch out and use the holding
descriptor again.
2013-05-08 07:30:33 +00:00
Adrian Chadd
7dcb2bea01 Re-work how transmit buffer limits are enforced - partly to fix the PR,
but partly to just tidy up things.

The problem here - there are too many TX buffers in the queue! By the
time one needs to transmit an EAPOL frame (for this PR, it's the response
to the group rekey notification from the AP) there are no ath_buf entries
free and the EAPOL frame doesn't go out.

Now, the problem!

* Enforcing the TX buffer limitation _before_ we dequeue the frame?
  Bad idea. Because..
* .. it means I can't check whether the mbuf has M_EAPOL set.

The solution(s):

* De-queue the frame first
* Don't bother doing the TX buffer minimum free check until after
  we know whether it's an EAPOL frame or not.
* If it's an EAPOL frame, allocate the buffer from the mgmt pool
  rather than the default pool.
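
In sketch form, the reordering looks something like this (hypothetical
helpers and types - the real driver uses ath_buf/mbuf and its own pool
accessors):

    /* Sketch only: stand-in types and pool helpers. */
    #include <stddef.h>

    #define M_EAPOL 0x1             /* stand-in for the real mbuf flag */

    struct mbuf { int m_flags; };
    struct buf  { int unused; };

    static struct mbuf *dequeue_frame(void)      { return NULL; }
    static struct buf  *getbuf_mgmt(void)        { return NULL; }
    static struct buf  *getbuf_normal(void)      { return NULL; }
    static int          normal_free_count(void)  { return 0; }
    static int          min_free_threshold(void) { return 32; }

    struct buf *
    getbuf_for_next_frame(struct mbuf **mp)
    {
        struct mbuf *m;

        /* Dequeue first - only then can M_EAPOL be checked. */
        m = dequeue_frame();
        if (m == NULL)
            return NULL;
        *mp = m;

        if (m->m_flags & M_EAPOL) {
            /* EAPOL frames draw from the management pool ... */
            return getbuf_mgmt();
        }
        /* ... everything else obeys the minimum-free check. */
        if (normal_free_count() < min_free_threshold())
            return NULL;            /* caller requeues or drops */
        return getbuf_normal();
    }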

Whilst I'm here:

* Add a tweak to limit how many buffers a single node can acquire.
* Don't enforce that for EAPOL frames.
* .. set that to default to 1/4 of the available buffers, or 32,
  whichever is more sane.

This doesn't fix issues due to a sleeping node or a very poor performing
node; but this doesn't make it worse.

Tested:

* AR5416 STA, TX'ing 100+ mbit UDP to an AP, but only 50mbit being received
  (thus the TX queue fills up.)
* .. with CCMP / WPA2 encryption configured
* .. and the group rekey time set to 10 seconds, just to elicit the
  behaviour very quickly.

PR:		kern/138379
2013-05-07 07:52:18 +00:00
Adrian Chadd
4136c09143 The holding buffer logic needs to be used for _all_ transmission, not
just "when the queue is busy."

After talking with the MAC team, it turns out that the linked list
implementation sometimes will not accept a TxDP update and will
instead re-read the link pointer.  So even if the hardware has
finished transmitting a chain and has hit EOL/VEOL, it may still
re-read the link pointer to begin transmitting again.

So, always set ATH_BUF_BUSY on the last buffer in the chain (to
mark the last descriptor as the holding descriptor) and never
blank the axq_link pointer.

Tested:

* AR5416, STA mode

TODO:

* much more thorough testing with the pre-11n NICs, just to verify
  that they behave the same way.
* test TDMA on the 11n and non-11n hardware.
2013-05-04 04:03:50 +00:00
Adrian Chadd
8d06054291 Debugging changes!
* That lock isn't actually held during reset - just the whole TX/RX path
  is paused.  So, remove the assertion.

* Log the TX queue status - how many hardware frames are active in the
  MAC and whether the queue is active.
2013-04-29 07:28:29 +00:00
Adrian Chadd
07187d1109 Conditionally compile this only if ATH_DEBUG is defined. 2013-04-26 22:22:38 +00:00
Adrian Chadd
ed261a611b Dump the entire TXQ descriptor contents during a reset, rather than only
completed descriptors.
2013-04-26 21:51:17 +00:00
Adrian Chadd
ff5b563430 Initialise the chainmask fields regardless of whether 11n support
is compiled in or not.

This fixes issues with people running -HEAD but who build modules
without doing a "make buildkernel KERNCONF=XXX", thus picking up
opt_*.h.  The resulting module wouldn't have 11n enabled and the
chainmask configuration would just be plain wrong.
2013-04-19 21:49:11 +00:00
Adrian Chadd
7904f51655 Add a debug statement to log the currently chosen chainmask configuration. 2013-04-19 08:06:45 +00:00
Adrian Chadd
b0bf95ff15 .. don't know how this snuck into this commit. Sorry.
Fix the compile before anyone notices.
2013-04-19 08:01:34 +00:00
Adrian Chadd
b661bd2e52 Print out the chainmask configuration. 2013-04-19 07:56:22 +00:00
Adrian Chadd
6f4fb2d8e6 Use uint32_t for fields that are fetched via ath_hal_getcapability(). 2013-04-19 06:59:10 +00:00
Adrian Chadd
5d4dedadb6 Use a per-RX-queue deferred list, rather than a single deferred list for
both queues.

Since ath_rx_pkt() does multi-mbuf frame recombining based on the RX queue,
this needs to occur.

Tested:

* AR9380 (XB112), hostap mode
2013-04-16 20:21:02 +00:00
Adrian Chadd
6961e9eda4 Always enable TXOK interrupts when setting up TX queues for EDMA NICs. 2013-04-11 22:02:35 +00:00
Adrian Chadd
a91ab3c099 Some TX dmamap cleanups.
* Don't use BUS_DMA_ALLOCNOW for descriptor DMA maps; we never use
  bounce buffers for the descriptors themselves.

* Add some XXX's to mark where the ath_buf has its mbuf ripped from
  underneath it without actually cleaning up the dmamap.  I haven't
  audited those particular code paths to see if the DMA map is guaranteed
  to be setup there; I'll do that later.

* Print out a warning if the descdma tidyup code is given some descriptors
  w/ maps to free.  Ideally the owner will free the mbufs and unmap
  the descriptors before freeing the descriptor/ath_buf pairs, but
  right now that's not guaranteed to be done.

Reviewed by:	scottl (BUS_DMA_ALLOCNOW tag)
2013-04-02 06:24:22 +00:00
Adrian Chadd
3f3a5dbd2c Ensure that we only call the busdma unmap/flush routines once, when
the buffer is being freed.

* When buffers are cloned, the original mapping isn't copied but it
  wasn't freeing the mapping until later.  To be safe, free the
  mapping when the buffer is cloned.

* ath_freebuf() now no longer calls the busdma sync/unmap routines.

* ath_tx_freebuf() now calls sync/unmap.

* Call sync first, before calling unmap.

Tested:

* AR5416, STA mode
2013-04-01 20:57:13 +00:00
Adrian Chadd
587feafb5a Remove an un-needed comment. 2013-04-01 20:44:21 +00:00
Adrian Chadd
09067b6e9a Use ATH_MAX_SCATTER rather than ATH_TXDESC.
ATH_MAX_SCATTER is used to size the ath_buf DMA segment array.
We thus should use it when checking sizes of things.
2013-04-01 20:12:21 +00:00
Adrian Chadd
3feffbd796 Add per-TXQ EDMA FIFO staging queue support.
Each set of frames pushed into a FIFO is represented by a list of
ath_bufs - the first ath_buf in the FIFO list is marked with
ATH_BUF_FIFOPTR; the last ath_buf in the FIFO list is marked with
ATH_BUF_FIFOEND.

Multiple lists of frames are just glued together in the TAILQ as per
normal - except that at the end of a FIFO list, the descriptor link
pointer will be NULL and it'll be tagged with ATH_BUF_FIFOEND.

For non-EDMA chipsets this is a no-op - the ath_txq frame list (axq_q)
stays the same and is treated the same.

For EDMA chipsets the frames are pushed into axq_q and then when
the FIFO is to be (re) filled, frames will be moved onto the FIFO
queue and then pushed into the FIFO.

So:

* Add a new queue in each hardware TXQ (ath_txq) for staging FIFO frame
  lists.  It's a TAILQ (like the normal hardware frame queue) rather than
  the ath9k list-of-lists to represent FIFO entries.

* Add new ath_buf flags - ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND.

* When allocating ath_buf entries, clear out the flag value before
  returning it or it'll end up having stale flags.

* When cloning ath_buf entries, only clone ATH_BUF_MGMT.  Don't clone
  the FIFO related flags.

* Extend ath_tx_draintxq() to first drain the FIFO staging queue, _then_
  drain the normal hardware queue.
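
As a sketch (with stand-in types; only the flag names come from the
description above), moving one staged list into the FIFO looks roughly
like this:

    #include <sys/queue.h>
    #include <stddef.h>

    #define ATH_BUF_FIFOPTR 0x1     /* first buffer of a FIFO entry */
    #define ATH_BUF_FIFOEND 0x2     /* last buffer of a FIFO entry */

    struct buf {
        TAILQ_ENTRY(buf) next;
        int flags;
    };
    TAILQ_HEAD(buflist, buf);

    /* Hypothetical hook handing the chain head to the hardware. */
    static void hw_push_fifo(struct buf *head) { (void)head; }

    /*
     * Tag the first/last buffers of the staged list, glue the whole
     * list onto the FIFO queue, then push the head into the hardware.
     */
    void
    txq_push_staged(struct buflist *staging, struct buflist *fifo_q)
    {
        struct buf *first = TAILQ_FIRST(staging);
        struct buf *last;

        if (first == NULL)
            return;
        last = TAILQ_LAST(staging, buflist);
        first->flags |= ATH_BUF_FIFOPTR;
        last->flags |= ATH_BUF_FIFOEND;
        TAILQ_CONCAT(fifo_q, staging, next);    /* staging is now empty */
        hw_push_fifo(first);
    }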

Tested:

* AR9280, hostap
* AR9280, STA
* AR9380/AR9580 - hostap

TODO:

* Test on other chipsets, just to be thorough.
2013-03-26 19:46:51 +00:00
Adrian Chadd
b837332d0a Overhaul the TXQ locking (again!) as part of some beacon/cabq timing
related issues.

Moving the TX locking under one lock made things easier to progress on
but it had one important side-effect - it increased the latency when
handling CABQ setup when sending beacons.

This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.

The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.

The second half of this commit is the CABQ related setup changes needed
for sane looking EDMA CABQ support.  Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot.  For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.

The aim here is to setup the CABQ list and then push _that list_ to
the hardware for transmission.  I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.

Major:

* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
  a single lock.

* Leave the software queue side of things under the ATH_TX_LOCK lock,
  (continuing) to serialise things as they are.

* Add a new function which is called whenever there's a beacon miss,
  to print out some debugging.  This is primarily designed to help
  me figure out if the beacon miss events are due to a noisy environment,
  issues with the PHY/MAC, or other.

* Move the CABQ setup/enable to occur _after_ all the VAPs have been
  looked at.  This means that for multiple VAPS in bursted mode, the
  CABQ gets primed once all VAPs are checked, rather than being primed
  on the first VAP and then having frames appended after this.

Minor:

* Add a (disabled) twiddle to let me enable/disable cabq traffic.
  It's primarily there to let me easily debug what's going on with beacon
  and CABQ setup/traffic; there are some DMA engine hangs which I'm finally
  trying to trace down.

* Clear bf_next when flushing frames; it should quieten some warnings
  that show up when a node goes away.

Tested:

* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)

TODO:

* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
  ensure the CABQ stuff is all sane (eg, the MORE bits that aren't being
  correctly set when chaining descriptors.)
2013-03-24 00:03:12 +00:00
Adrian Chadd
f0db652cf6 Break out the RX completion path into "FIFO check / refill" and
"complete RX frames."

The 128 entry RX FIFO is really easy to fill up and miss refilling
when it's done in the ath taskq - as that gets blocked up doing
RX completion, TX completion and other random things.

So the 128 entry RX FIFO now gets emptied and refilled in the ath_intr()
task (and it grabs / releases locks, so now ath_intr() can't just be
a FAST handler yet!) but the locks aren't held for very long. The
completion part is done in the ath taskqueue context.

Details:

* Create a new completed frame list - sc->sc_rx_rxlist;
* Split the EDMA RX process queue into two halves - one that
  processes the RX FIFO and refills it with new frames; another
  that completes the completed frame list;
* When tearing down the driver, flush whatever is in the deferred
  queue as well as what's in the FIFO;
* Create two new RX methods - one that processes all RX queues,
  one that processes the given RX queue.  When MSI is implemented,
  we get told which RX queue the interrupt came in on so we can
  specifically schedule that.  (And I can do that with the non-MSI
  path too; I'll figure that out later.)
* Convert the legacy code over to use these new RX methods;
* Replace all the instances of the RX taskqueue enqueue with a call
  to a relevant RX method to enqueue one or all RX queues.
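
A rough sketch of that split follows (stand-in names; the real code
works on ath_buf/mbuf, the EDMA RX FIFO registers and proper locking):

    #include <sys/queue.h>
    #include <stddef.h>

    struct rxbuf {
        TAILQ_ENTRY(rxbuf) next;
    };
    TAILQ_HEAD(rxlist, rxbuf);

    /* Hypothetical hardware/driver hooks. */
    static struct rxbuf *fifo_pop_completed(void) { return NULL; }
    static void fifo_push_fresh_buffer(void)      { }
    static void complete_frame(struct rxbuf *rb)  { (void)rb; }

    /* Stage 1: short pass from ath_intr() - drain and refill the FIFO. */
    void
    rx_fifo_check_refill(struct rxlist *deferred)
    {
        struct rxbuf *rb;

        while ((rb = fifo_pop_completed()) != NULL) {
            TAILQ_INSERT_TAIL(deferred, rb, next);
            fifo_push_fresh_buffer();
        }
    }

    /* Stage 2: heavyweight completion, run from the ath taskqueue. */
    void
    rx_complete_deferred(struct rxlist *deferred)
    {
        struct rxbuf *rb;

        while ((rb = TAILQ_FIRST(deferred)) != NULL) {
            TAILQ_REMOVE(deferred, rb, next);
            complete_frame(rb);
        }
    }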

Tested:

* AR9380, STA
* AR9580, STA
* AR5413, STA
2013-03-19 19:32:28 +00:00
Adrian Chadd
5f2f0e616b Add locking around the new holdingbf code.
Since this is being done during buffer free, it's a crap shoot whether
the TX path lock is held or not.  I tried putting the ath_freebuf() code
inside the TX lock and I got all kinds of locking issues - it turns out
that the buffer free path sometimes is called with the lock held and
sometimes isn't. So I'll go and fix that soon.

Hence for now the holdingbf buffers are protected by the TXBUF lock.
2013-03-15 02:52:37 +00:00
Adrian Chadd
629ce2188a Implement "holding buffers" per TX queue rather than globally.
When working on TDMA, Sam Leffler found that the MAC DMA hardware
would re-read the last TX descriptor when getting ready to transmit
the next one.  Thus the whole ATH_BUF_BUSY came into existance -
the descriptor must be left alone (very specifically the link pointer
must be maintained) until the hardware has moved onto the next frame.

He saw this in TDMA because the MAC would be frequently stopping during
active transmit (ie, when it wasn't its turn to transmit.)

Fast-forward to today.  It turns out that this is a problem not with
a single MAC DMA instance, but with each QCU (from 0->9).  They each
maintain separate descriptor pointers and will re-read the last
descriptor when starting to transmit the next.

So when your AP is busy transmitting from multiple TX queues, you'll
(more) frequently see one QCU stopped, waiting for a higher-priority QCU
to finish transmitting, before it'll go ahead and continue.  If you mess
up the descriptor (ie by freeing it) then you're out of luck.

Thanks to rpaulo for sticking with me whilst I diagnosed this issue
that he was quite reliably triggering in his environment.

This is a reimplementation; it doesn't have anything in common with
the ath9k or the Qualcomm Atheros reference driver.

Now - it in theory doesn't apply on the EDMA chips, as long as you
push one complete frame into the FIFO at a time.  But the MAC can DMA
from a list of frames pushed into the hardware queue (ie, you concat
'n' frames together with link pointers, and then push the head pointer
into the TXQ FIFO.)  Since that's likely how I'm going to implement
CABQ handling in hostap mode, it's likely that I will end up teaching
the EDMA TX completion code about busy buffers, just to be "sure"
this doesn't creep up.
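
In sketch form, the per-QCU holding buffer amounts to something like the
following (simplified stand-ins for ath_buf/ath_txq, not the real code):

    #include <stdbool.h>
    #include <stddef.h>

    struct buf {
        bool busy;              /* descriptor must stay intact for the QCU */
    };

    struct txq {
        struct buf *holding;    /* last buffer handed to this QCU */
    };

    static void free_buf(struct buf *b) { (void)b; }

    /*
     * Called once a new chain has been handed to the hardware: the
     * previous holding buffer can finally be released, and the tail of
     * the new chain becomes this queue's holding buffer.
     */
    void
    txq_update_holding(struct txq *q, struct buf *new_tail)
    {
        if (q->holding != NULL) {
            q->holding->busy = false;
            free_buf(q->holding);
        }
        new_tail->busy = true;
        q->holding = new_tail;
    }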

Tested - iperf ap->sta and sta->ap (with both sides running this code):

* AR5416 STA
* AR9160/AR9220 hostap

To validate that it doesn't break the EDMA (FIFO) chips:

* AR9380, AR9485, AR9462 STA

Using iperf with the -S <tos byte decimal value> to set the TCP client
side DSCP bits, mapping to different TIDs and thus different TX queues.

TODO:

* Make this work on the EDMA chips, if we end up pushing lists of frames
  to the hardware (eg how we eventually will handle cabq in hostap/ibss
  mode.)
2013-03-14 06:20:02 +00:00
Adrian Chadd
9d2a962bf3 Print out the queue flags during a TX DMA shutdown. 2013-03-09 06:11:58 +00:00
Adrian Chadd
6606ba811c Add in the STBC TX/RX capability support into the HAL and driver.
The HAL already included the STBC fields; it just needed to be exposed
to the driver and net80211 stack.

This should allow single-stream STBC TX and RX to be negotiated; however
the driver and rate control code currently don't do anything with it.
2013-02-27 00:25:44 +00:00
Adrian Chadd
6322256b83 Part #2 of the TX chainmask changes:
* Remove ar5416UpdateChainmasks();
* Remove the TX chainmask override code from the ar5416 TX descriptor
  setup routines;
* Write a driver method to calculate the current chainmask based on the
  operating mode and update the driver state;
* Call the HAL chainmask method before calling ath_hal_reset();
* Use the currently configured chainmask in the TX descriptors rather than
  the hardware TX chainmasks.

Tested:

* AR5416, STA/AP mode - legacy and 11n modes
2013-02-25 22:45:02 +00:00
Adrian Chadd
ce597531f2 Disable debugging entries about BAW issues. I haven't seen any issues
to do with BAW tracking in the last 9 months or so.
2013-02-21 21:47:35 +00:00
Adrian Chadd
a54ecf784a Add an option to allow the minimum number of delimiters to be tweaked.
This is primarily for debugging purposes.

Tested:

* AR5416, STA mode
2013-02-21 06:38:49 +00:00
Adrian Chadd
4a502c332a Add a new option to limit the maximum size of aggregates.
The default is to limit them to what the hardware is capable of.

Add sysctl twiddles for both the non-RTS and RTS protected aggregate
generation.

Whilst here, add some comments about stuff that I've discovered during
my exploration of the TX aggregate / delimiter setup path from the
reference driver.
2013-02-21 06:18:40 +00:00
Adrian Chadd
69930f8794 Enable TX FIFO underrun interrupts. This allows the TX FIFO threshold
adjustment code to now run.

Tested:

* AR5416, STA

TODO:

* Much more thorough testing on the other chips, AR5210 -> AR9287
2013-02-20 11:20:51 +00:00
Adrian Chadd
bab336db27 oops, tab! 2013-02-20 11:17:29 +00:00
Adrian Chadd
a26f33276f Post interrupts in the ath alq trace. 2013-02-20 11:17:03 +00:00
Adrian Chadd
158cb431db CFG_ERR, DATA_UNDERRUN and DELIM_UNDERRUN are all flags, rather than
part of ts_status. Thus:

* make sure we decode them from ts_flags, rather than ts_status;
* make sure we decode them regardless of whether there's an error or not.

This correctly exposes descriptor configuration errors, TX delimiter
underruns and TX data underruns.
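
A small sketch of the decode (the ts_status/ts_flags split mirrors the
description above; the bit names and struct here are stand-ins, not the
HAL's definitions):

    #include <stdio.h>

    #define TS_F_CFG_ERR        0x1
    #define TS_F_DATA_UNDERRUN  0x2
    #define TS_F_DELIM_UNDERRUN 0x4

    struct tx_status {
        int ts_status;          /* overall completion status */
        int ts_flags;           /* per-attempt flag bits */
    };

    static void
    count_tx_flags(const struct tx_status *ts, unsigned long *cfg,
        unsigned long *data, unsigned long *delim)
    {
        /* Decode from ts_flags, even when ts_status says "no error". */
        if (ts->ts_flags & TS_F_CFG_ERR)
            (*cfg)++;
        if (ts->ts_flags & TS_F_DATA_UNDERRUN)
            (*data)++;
        if (ts->ts_flags & TS_F_DELIM_UNDERRUN)
            (*delim)++;
    }

    int
    main(void)
    {
        struct tx_status ts = { 0, TS_F_DATA_UNDERRUN };
        unsigned long cfg = 0, data = 0, delim = 0;

        count_tx_flags(&ts, &cfg, &data, &delim);
        printf("cfg=%lu data=%lu delim=%lu\n", cfg, data, delim);
        return (0);
    }
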
2013-02-20 11:14:55 +00:00
Adrian Chadd
1a85141ad4 Pull out the if_transmit() work and revert back to ath_start().
My change had some rather significant behavioural effects on throughput.
The two issues I noticed:

* With if_start and the ifnet mbuf queue, any temporary latency
  would get eaten up by some mbufs being queued.  With ath_transmit()
  queuing things to ath_buf's, I'd only get 512 TX buffers before I
  couldn't queue any further frames.

* There's also some non-zero latency involved with TX being pushed
  into a taskqueue via direct dispatch.  Any time the scheduler didn't
  immediately schedule the ath TX task would cause extra latency.
  Various 1ge/10ge drivers implement both direct dispatch (if the TX
  lock can be acquired) and deferred task transmission (if the TX lock
  can't be acquired), with frames being pushed into a drbr queue.
  I'll have to do this at some point, but until I figure out how to
  deal with 802.11 fragments, I'll have to wait a while longer.

So what I saw:

* lots of extra latency, especially under load - if the taskqueue
  wasn't immediately scheduled, things went pear shaped;

* any extra latency would result in TX ath_buf's taking their sweet time
  being replenished, so any further calls to ath_transmit() would drop
  mbufs.

* .. yes, there's no explicit backpressure here - things are just dropped.
  Eek.

With this, the general performance has gone up, but those subtle if_start()
related race conditions are back.  For some reason, this is doubly-obvious
with the AR5416 NIC and I don't quite understand why yet.

There's an unrelated issue with AR5416 performance in STA mode (it's
fine in AP mode when bridging frames, weirdly..) that requires a little
further investigation.  Specifically - it works fine on a Lenovo T40
(single core CPU) running a March 2012 9-STABLE kernel, but a Lenovo T60
(dual core) running an early November 2012 kernel behaves very poorly.
The same hardware with an AR9160 or AR9280 behaves perfectly.
2013-02-13 05:32:19 +00:00
Adrian Chadd
a40880ade4 Go back to direct-dispatch of the software queue and frame TX paths
when they're being called from the TX completion handler.

Going (back) through the taskqueue is just adding extra locking and
latency to packet operations.  This improves performance a little bit
on most NICs.

It still hasn't restored the original performance of the AR5416 NIC
but the AR9160, AR9280 and later NICs behave very well with this.

Tested:

* AR5416 STA (still tops out at ~ 70mbit TCP, rather than 150mbit TCP..)
* AR9160 hostap (good for both TX and RX)
* AR9280 hostap (good for both TX and RX)
2013-02-11 07:48:26 +00:00
Adrian Chadd
1b3502e5a1 Create a new TX lock specifically for queuing frames.
This now separates out the act of queuing frames from the act of running
TX and TX completion.
2013-02-07 07:50:16 +00:00
Adrian Chadd
21bca442b9 Methodize the process of adding the software TX queue to the taskqueue.
Move it (for now) to the TX taskqueue.
2013-02-07 02:15:25 +00:00
Adrian Chadd
f28a552089 Migrate the TX sending code out from under the ath0 taskq and into
the separate ath0 TX taskq.

Whilst here, make sure that the TX software scheduler is also
running out of the TX task, rather than the ath0 taskqueue.

Make sure that the tx taskqueue is blocked/unblocked as necessary.

This allows for a little more parallelism on multi-core machines,
as well as (eventually) supporting a higher task priority for TX
tasks, allowing said TX task to preempt an already running RX or
TX completion task.

Tested:

* AR5416, AR9280 hostap and STA modes
2013-01-26 00:14:34 +00:00
Adrian Chadd
a74ebfe59e Fix hangs (exposed by spectral scan activity) in STA mode when the
chip hangs.

* Always do a reset in ath_bmiss_proc(), regardless of whether the
  hardware is "hung" or not.  Specifically, for spectral scan, there's
  likely a whole bunch of potential hangs that we don't (yet) recognise
  in the HAL.  So to avoid the RX deafness persisting until the station
  disassociates, just do a no-loss reset.

* Set sc_beacons=1 in STA mode.  During a reset, the beacon programming
  isn't done.  (It's likely I need to set sc_syncbeacons during a hang
  reset, but I digress.)  Thus after a reset, there's no beacon timer
  programming to send a BMISS interrupt if beacons aren't heard ..
  thus if the AP disappears, you won't get notified and you'll have to
  reset your interface.

This hasn't yet fixed all of the hangs that I've seen when debugging
spectral scan, but it's certainly reduced the hang frequency and it
should improve general STA stability in very noisy environments.

Tested:

* AR9280, STA mode, spectral scan off/on

PR:		kern/175227
2013-01-17 16:43:59 +00:00
Adrian Chadd
c5239edb98 Implement frame (data) transmission using if_transmit(), rather than
if_start().

This prevents the overlapping data path TX from occurring, which
solves quite a number of the potential TX queue races in ath(4).
It doesn't fix the net80211 layer TX queue races and it doesn't
fix the raw TX path yet, but it's an important step towards this.

This hasn't dropped the TX performance in my testing; primarily
because now the TX path can quickly queue frames and continue
along processing.

This involves a few rather deep changes:

* Use the ath_buf as a queue placeholder for now, as we need to be
  able to support queuing a list of mbufs (ie, when transmitting
  fragments) and m_nextpkt can't be used here (because it's what is
  joining the fragments together)

* if_transmit() now simply allocates the ath_buf and queues it to
  a driver TX staging queue.

* TX is now moved into a taskqueue function.

* The TX taskqueue function now dequeues and transmits frames.

* Fragments are handled correctly here - as the current API passes
  the fragment list as one mbuf list (joined with m_nextpkt) through
  to the driver if_transmit().

* For the couple of places where ath_start() may be called (mostly
  from net80211 when starting the VAP up again), just reimplement
  it using the new enqueue and taskqueue methods.
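
As a sketch, the new path looks roughly like this (stand-in types and
helpers; the real code uses ath_buf, the driver's staging TAILQ and the
ath taskqueue):

    #include <sys/queue.h>
    #include <stddef.h>

    struct mbuf { struct mbuf *m_nextpkt; };

    struct txbuf {
        TAILQ_ENTRY(txbuf) next;
        struct mbuf *m;         /* frame head; fragments stay on m_nextpkt */
    };
    TAILQ_HEAD(stageq, txbuf);

    static struct txbuf *getbuf(void)          { return NULL; }
    static void schedule_tx_task(void)         { }
    static void transmit_one(struct txbuf *bf) { (void)bf; }

    /* if_transmit()-style entry: wrap the mbuf, enqueue, kick the task. */
    int
    drv_transmit(struct stageq *q, struct mbuf *m)
    {
        struct txbuf *bf = getbuf();

        if (bf == NULL)
            return (-1);        /* no buffers: caller sees the error */
        bf->m = m;
        TAILQ_INSERT_TAIL(q, bf, next);
        schedule_tx_task();
        return (0);
    }

    /* Taskqueue function: dequeue the staged frames and transmit them. */
    void
    drv_tx_task(struct stageq *q)
    {
        struct txbuf *bf;

        while ((bf = TAILQ_FIRST(q)) != NULL) {
            TAILQ_REMOVE(q, bf, next);
            transmit_one(bf);
        }
    }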

What I don't like (about this work and the TX code in general):

* I'm using the same lock for the staging TX queue management and the
  actual TX.  This isn't required; I'm just being slack.

* I haven't yet moved TX to a separate taskqueue (but the taskqueue is
  created); it's easy enough to do this later if necessary.  I just need
  to make sure it's a higher priority queue, so TX has the same
  behaviour as it used to (where it would preempt existing RX..)

* I need to re-review the TX path a little more and make sure that
  ieee80211_node_*() functions aren't called within the TX lock.
  When queueing, I should just push failed frames into a queue and
  when I'm wrapping up the TX code, unlock the TX lock and
  call ieee80211_node_free() on each.

* It would be nice if I could hold the TX lock for the entire
  TX and TX completion, rather than this release/re-acquire behaviour.
  But that requires that I shuffle around the TX completion code
  to handle actual ath_buf free and net80211 callback/free outside
  of the TX lock.  That's one of my next projects.

* the ic_raw_xmit() path doesn't use this yet - so it still has
  sequencing problems with parallel, overlapping calls to the
  data path.  I'll fix this later.

Tested:

* Hostap - AR9280, AR9220
* STA - AR5212, AR9280, AR5416
2013-01-15 18:01:23 +00:00
Adrian Chadd
9af351f9e8 Add a new (skeleton) spectral mode manager module. 2013-01-02 03:59:02 +00:00
Baptiste Daroussin
661c81c3d6 Fix typo in comment.
Submitted by:	Christoph Mallon <christoph.mallon@gmx.de>
2012-12-28 21:59:47 +00:00
Adrian Chadd
375307d411 Delete the per-TXQ locks and replace them with a single TX lock.
I couldn't think of a way to maintain the hardware TXQ locks _and_ layer
on top of that per-TXQ software queuing and any other kind of fine-grained
locks (eg per-TID, or per-node locks.)

So for now, to facilitate some further code refactoring and development
as part of the final push to get software queue ps-poll and u-apsd handling
into this driver, just do away with them entirely.

I may eventually bring them back at some point, when it looks slightly more
architectually cleaner to do so.  But as it stands at the present, it's
not really buying us much:

* in order to properly serialise things and not get bitten by scheduling
  and locking interactions with things higher up in the stack, we need to
  wrap the whole TX path in a long held lock.  Otherwise we can end up
  being pre-empted during frame handling, resulting in some out of order
  frame handling between sequence number allocation and encryption handling
  (ie, the seqno and the CCMP IV get out of sequence);

* .. so whilst that's the case, holding the lock for that long means that
  we're acquiring and releasing the TXQ lock _inside_ that context;

* And we also acquire it per-frame during frame completion, but we currently
  can't hold the lock for the duration of the TX completion as we need
  to call net80211 layer things with the locks _unheld_ to avoid LOR.

* .. the other places where we grab that lock are reset/flush, which don't happen
  often.

My eventual aim is to change the TX path so all rejected frame transmissions
and all frame completions result in any ieee80211_free_node() calls to occur
outside of the TX lock; then I can cut back on the amount of locking that
goes on here.

There may be some LORs that occur when ieee80211_free_node() is called when
the TX queue path fails; I'll begin to address these in follow-up commits.
2012-12-02 06:24:08 +00:00
Adrian Chadd
8bf4020830 Call if_free() with the correct vnet context if and only if ifp_vnet
isn't NULL.

If the attach fails prematurely and there's no if_vnet context, calling
CURVNET_SET(ifp->if_vnet) is going to dereference a NULL pointer.
2012-11-28 07:12:08 +00:00
Adrian Chadd
bb327d284b ALQ logging enhancements:
* upon setup, tell the alq code what the chip information is.
* add TX/RX path logging for legacy chips.
* populate the tx/rx descriptor length fields with a best-estimate.
  It's overly big (96 bytes when AH_SUPPORT_AR5416 is enabled)
  but it'll do for now.

Whilst I'm here, add CURVNET_RESTORE() here during probe/attach as a
partial solution to fixing crashes during attach when the attach fails.
There are other attach failures that I have to deal with; those'll come
later.
2012-11-16 19:57:16 +00:00
Adrian Chadd
603280386b Correctly fix the 'scan during STA mode' crash. 2012-11-11 21:58:18 +00:00
Adrian Chadd
89d2e576a4 Don't compile in my (not yet committed) ath_alq code unless ATH_DEBUG_ALQ
is defined.

This will unbreak ATH_DEBUG builds.
2012-11-07 16:34:09 +00:00
Adrian Chadd
bdbb6e5b8c Disable my software queue TIM and PS handling for now.
ps-poll is totally broken in its current form.

This should unbreak things enough to let people use PS-POLL devices,
but leave it in place for me to finish PS-POLL handling.
2012-11-07 06:29:45 +00:00
Adrian Chadd
5540369b93 Add a new HAL call to extract out the HAL enterprise bits from the
AR9300 HAL.
2012-11-03 22:12:35 +00:00
Adrian Chadd
1b5c5f5ad0 I give up - introduce a TX lock to serialise TX operations.
I've tried serialising TX using queues and such but unfortunately
due to how this interacts with the locking going on elsewhere in the
networking stack, the TX task gets delayed, resulting in quite a
noticeable throughput loss:

* baseline TCP for 2x2 11n HT40 is ~ 170mbit/sec;
* TCP for TX task in the ath taskq, with the RX also going on - 80mbit/sec;
* TCP for TX task in a separate, second taskq - 100mbit/sec.

So for now I'm going with the Linux wireless stack approach - lock TX
early.  The Linux code does this in the wireless stack, before the 802.11
state stuff happens and before it's punted to the driver.
But TX locking needs to also occur at the driver layer as the TX
completion code _also_ begins to drain the ifnet TX queue.

Whilst I'm here, add some KTR traces for the TX path.

Note:

* This really should be done at the net80211 layer (as well, at least.)
  But that'll have to wait for a little more thought to happen.
2012-10-31 06:27:58 +00:00
Adrian Chadd
548a605d0d Begin fleshing out some software queue awareness for TIM handling with
the power save queue.

* introduce some new ATH_NODE lock protected fields, tracking the
  net80211 psq and TIM state;
* when doing buffer transitions - ie, when sending and completing
  buffers - check the state of the SWQ and update the TIM appropriately.
* when clearing the TIM bit, if the SWQ is not empty then delay clearing
  it.
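
A sketch of that bookkeeping follows (stand-in node fields and TIM
helper; the real code works on the ATH_NODE state and the net80211
psq/TIM calls):

    #include <stdbool.h>

    struct psnode {
        int swq_depth;          /* frames sitting in the software queue */
        bool tim_set;           /* cached view of the TIM bit */
    };

    static void set_tim(struct psnode *n, bool on) { n->tim_set = on; }

    /* Frame queued to the node's software queue: make sure TIM is set. */
    void
    node_swq_add(struct psnode *n)
    {
        n->swq_depth++;
        if (!n->tim_set)
            set_tim(n, true);
    }

    /*
     * Frame completed: only clear the TIM once the software queue is
     * actually empty, otherwise leave it alone.
     */
    void
    node_swq_complete(struct psnode *n)
    {
        if (n->swq_depth > 0)
            n->swq_depth--;
        if (n->swq_depth == 0 && n->tim_set)
            set_tim(n, false);
    }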

This is racy, but it's no less racy than the current net80211 power
save queue management code.  Specifically, with multiple TX threads,
it's quite plausible that parallel state updates will race and the
TIM will be left in an inconsistent state.  I'll address that in
a follow-up commit.
2012-10-28 21:13:12 +00:00
Adrian Chadd
a93c5097c9 Add a temporary (for values of "temporary") workaround for hotplug
support with ath(4) and VIMAGE.

Right now the VIMAGE code doesn't supply a default vnet context during:

* hotplug attach;
* any device detach.

It special cases kldload/boot time probing (by setting the context to
vnet0) but that doesn't occur when probing devices during a bus rescan -
eg, adding a cardbus card.

These will eventually go away when the VIMAGE support extends to providing
default contexts to hotplug attach/detach.
2012-10-28 18:46:06 +00:00
Adrian Chadd
8e7393944d Push the actual TX processing into the ath taskqueue, rather than having
it run out of multiple concurrent contexts.

Right now the ath(4) TX processing is a bit hairy. Specifically:

* It was running out of ath_start(), which could occur from multiple
  concurrent sending processes (as if_start() can be started from multiple
  sending threads nowadays.. sigh)

* during RX if fast frames are enabled (so not really at the moment, not
  until I fix this particular feature again..)

* during ath_reset() - so anything which calls that

* during ath_tx_proc*() in the ath taskqueue - ie, TX is attempted again
  after TX completion, as there's now hopefully some ath_bufs available.

* Then, the ic_raw_xmit() method can queue raw frames for transmission
  at any time, from any net80211 TX context. Ew.

This has caused packet ordering issues in the past - specifically,
there's absolutely no guarantee that preemption won't occur _during_
ath_start() by the TX completion processing, which will call ath_start()
again. It's a mess - 802.11 really, really wants things to be in
sequence or things go all kinds of loopy.

So:

* create a new task struct for TX'ing;
* make the if_start method simply queue the task on the ath taskqueue;
* make ath_start() just be called by the new TX task;
* make ath_tx_kick() just schedule the ath TX task, rather than directly
  calling ath_start().

Now yes, this means that I've taken a step backwards in terms of
concurrency - TX -and- RX now occur in the same single-task taskqueue.
But there's nothing stopping me from separating out the TX / TX completion
code into a separate taskqueue which runs in parallel with the RX path,
if that ends up being appropriate for some platforms.

This fixes the CCMP/seqno concurrency issues that creep up when you
transmit large amounts of uni-directional UDP traffic (>200MBit) on a
FreeBSD STA -> AP, as now there's only one TX context no matter what's
going on (TX completion->retry/software queue,
userland->net80211->ath_start(), TX completion -> ath_start());
but it won't fix any concurrency issues between raw transmitted frames
and non-raw transmitted frames (eg EAPOL frames on TID 16 and any other
TID 16 multicast traffic that gets put on the CABQ.)  That is going to
require a bunch more re-architecture before it's feasible to fix.

In any case, this is a big step towards making the majority of the TX
path locking irrelevant, as now almost all TX activity occurs in the
taskqueue.

Phew.
2012-10-14 20:44:08 +00:00
Adrian Chadd
943e37a120 Initialise an uninitialised variable. 2012-10-05 16:44:00 +00:00
Adrian Chadd
0eb8162623 Pause and unpause the software queues for a given node based on the
net80211 node power save state.

* Add an ATH_NODE_UNLOCK_ASSERT() check
* Add a new node field - an_is_powersave
* Pause/unpause the queue based on the node state
* Attempt to handle net80211 concurrency issues so the queue
  doesn't get paused/unpaused more than once at a time from
  the net80211 power save code.

Whilst here (and breaking my usual rule), set CLRDMASK when a queue
is unpaused, regardless of whether the queue has some pending traffic.
This means the first frame from that TID (now or later) will have
CLRDMASK set.

Also whilst here, bump the swretrymax counters whenever the
filtered frames code expires a frame.  Again, breaking my rule, but
this is just a statistics thing rather than a functional change.

This doesn't fix ps-poll (but it doesn't break it too much worse
than it is at the present) or correcting the TID updates.
That's next on the list.

Tested:
	* AR9220 AP (Atheros AP96 reference design)
	* Macbook Pro and LG Optimus 1 Android phone, both setting
	  and clearing power save state (but not using PS-POLL.)
2012-10-03 23:23:45 +00:00
Adrian Chadd
0368251456 Migrate the ath(4) KTR logging to use an ATH_KTR() macro.
This should eventually be unified with ATH_DEBUG() so I can get both
from one macro; that may take some time.

Add some new probes for TX and TX completion.
2012-09-24 20:35:56 +00:00
Adrian Chadd
a71362cec6 Remove TDMA #define entries from if_ath.c; they now exist in if_ath_tdma.h. 2012-09-09 04:53:10 +00:00
Adrian Chadd
fb20fd244e There's no need to allocate a DMA map just before calling bus_dmamem_alloc().
In fact, bus_dmamem_alloc() happily NULLs the dmat pointer passed in,
before replacing it with its own.

This fixes a MIPS crash when kldload'ing if_ath/if_ath_pci -
bus_dmamap_destroy() was passed in a NULL dmat pointer and was doing
all kinds of very bad things.

Reviewed by:	scottl
2012-08-29 16:58:51 +00:00
Adrian Chadd
85bf9bc3d5 Implement a sequential descriptor ID value and stuff it in the ath_buf.
This will be used by the EDMA TX code to assign descriptor IDs in order
to provide some debugging.
2012-08-15 06:48:34 +00:00
Adrian Chadd
bad98824c0 Break out the TX completion code into a separate function, so it can be
re-used by the upcoming EDMA TX completion code.

Make ath_stoptxdma() public, again so the EDMA TX code can use it.

Don't check for the TXQ bitmap in the ISR when doing EDMA work as it
doesn't apply for EDMA.
2012-08-14 22:32:20 +00:00
Adrian Chadd
1762ec944a Revert the ath_tx_draintxq() method, and instead teach it the minimum
necessary to "do" EDMA.

It was just using the TX completion status for logging information about
the descriptor completion.  Since with EDMA we don't know this without
checking the TX completion FIFO, we can't provide this information.
So don't.
2012-08-12 00:46:15 +00:00
Adrian Chadd
788e6aa99c Break out ath_draintxq() into a method and un-methodize ath_tx_processq().
Now that I understand what's going on with this, I've realised that
it's going to be quite difficult to implement a processq method in
the EDMA case.  Because there's a separate TX status FIFO, I can't
just run processq() on each EDMA TXQ to see what's finished.
I have to actually run the TX status queue and handle individual
TXQs.

So:

* unmethodize ath_tx_processq();
* leave ath_tx_draintxq() as a method, as it only uses the completion status
  for debugging rather than actively completing the frames (ie, all frames
  here are failed);
* Methodize ath_draintxq().

The EDMA ath_draintxq() will have to take care of running the TX
completion FIFO before (potentially) freeing frames in the queue.

The only two places where ath_tx_draintxq() (on a single TXQ) are used:

* ath_draintxq(); and
* the CABQ handling in the beacon setup code - it drains the CABQ before
  populating the CABQ with frames for a new beacon (when doing multi-VAP
  operation.)

So it's quite possible that once I methodize the CABQ and beacon handling,
I can just drop ath_tx_draintxq() in its entirety.

Finally, it's also quite possible that I can remove ath_tx_draintxq()
in the future and just "teach" it to not check the status when doing
EDMA.
2012-08-12 00:37:29 +00:00
Adrian Chadd
e1252ce1d2 Extend the beacon code slightly to support AP mode beaconing for the
EDMA HAL hardware.

* The EDMA HAL code assumes the nexttbtt and intval values are in TU/8
  units, rather than TU.  For now, just "hack" around that here, at least
  until I code up something to translate it in the HAL.
* Setup some different TXQ flags for EDMA hardware.
* The EDMA HAL doesn't support setting the first rate series via
  ath_hal_setuptxdesc() - instead, a call to ath_hal_set11nratescenario()
  is always required.  So for now, just do an 11n rate series setup
  for EDMA beacon frames.

This allows my AR9380 to successfully transmit beacon frames.

However, CABQ TX and all normal data frame TX and TX completion is
still not functional and will require some more significant code churn
to make work.
2012-08-11 23:26:19 +00:00
Adrian Chadd
af01710118 Allow 802.11n hardware to support multi-rate retry when RTS/CTS is
enabled.

The legacy (pre-802.11n) hardware doesn't support this - although
the AR5212 era hardware supports MRR, it doesn't have all the bits
needed to support MRR + RTS/CTS.  The AR5416 and later support
a packet duration and RTS/CTS flags per rate scenario, so we should
support it.

Tested:

* AR9280, STA

PR:		kern/170302
2012-07-31 23:54:15 +00:00
Adrian Chadd
f8418db57e Migrate some more TX side setup routines to be methods. 2012-07-31 03:09:48 +00:00
Adrian Chadd
7ef7f613c2 Fix breakage introduced in r238824 - correctly calculate the descriptor
wrapping.

The previous code was only wrapping descriptor "block" boundaries rather
than individual descriptors.  It sounds equivalent but it isn't.

r238824 changed the descriptor allocation to enforce that an individual
descriptor doesn't wrap a 4KiB boundary rather than the whole block
of descriptors.  Eg, for TX descriptors, they're allocated in blocks
of 10 descriptors for each ath_buf (for scatter/gather DMA.)
2012-07-29 08:52:32 +00:00
Adrian Chadd
4bf404ea10 Add a missing call to ath_txdma_teardown(). 2012-07-28 04:40:52 +00:00
Adrian Chadd
9ed9f02b67 Modify ath_descdma_cleanup() to handle ath_descdma instances with no
buffers.

ath_descdma is now being used for things other than the classical
combination of ath_buf + ath_desc allocations.  In this particular case,
don't try to free and blank out the ath_buf list if it's not passed in.
2012-07-27 10:38:17 +00:00
Adrian Chadd
b39722d6dd Migrate the descriptor allocation function to not care about the number
of buffers, only the number of descriptors.

This involves:

* Change the allocation function to not use nbuf at all;
* When calling it, pass in "nbuf * ndesc" to correctly update how many
  descriptors are being allocated.

Whilst here, fix the descriptor allocation code to correctly allocate
a larger buffer size if the Merlin 4KB WAR is required.  It overallocates
descriptors when allocating a block that doesn't ever have a 4KB boundary
being crossed, but that can be fixed at a later stage.
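
A rough back-of-the-envelope for the sizing (illustrative only - the
actual allocator and the exact layout of the Merlin 4KB workaround
differ; the padding scheme below is an assumption):

    #include <stdio.h>
    #include <stddef.h>

    #define BOUNDARY 4096u          /* descriptors must not cross this */

    /* Size a descriptor block from ndesc alone (assumes desc_len <= 4KB). */
    static size_t
    descdma_size(unsigned nbuf, unsigned ndesc, size_t desc_len,
        int do_4kb_war)
    {
        unsigned total = nbuf * ndesc;      /* descriptors, not buffers */
        size_t len = (size_t)total * desc_len;

        if (do_4kb_war) {
            /* Pack only whole descriptors per 4KB page; pad the rest. */
            unsigned per_page = BOUNDARY / desc_len;
            unsigned pages = (total + per_page - 1) / per_page;

            len = (size_t)pages * BOUNDARY;
        }
        return (len);
    }

    int
    main(void)
    {
        /* e.g. 512 buffers x 10 descriptors, 96-byte descriptors */
        printf("%zu bytes\n", descdma_size(512, 10, 96, 1));
        return (0);
    }
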
2012-07-27 05:48:42 +00:00
Adrian Chadd
c9f78537bc Refactor out the descriptor allocation code from the buffer allocation
code.

The TX EDMA completion path is going to need descriptors allocated but
not any buffers.  This code will form the basis for that.
2012-07-27 05:34:45 +00:00
Adrian Chadd
1006fc0c3b Modify ath_descdma_setup() to take a descriptor size parameter.
The AR9300 and later descriptors are 128 bytes, however I'd like to make
sure that isn't used for earlier chips.

* Populate the TX descriptor length field in the softc with
  sizeof(ath_desc)

* Use this field when allocating the TX descriptors

* Pre-AR93xx TX/RX descriptors will use the ath_desc size; newer ones will
  query the HAL for these sizes.
2012-07-23 23:40:13 +00:00
Adrian Chadd
3fdfc33024 Begin separating out the TX DMA setup in preparation for TX EDMA support.
* Introduce TX DMA setup/teardown methods, mirroring what's done in
  the RX path.

  Although the TX DMA descriptors are set up via ath_desc_alloc() /
  ath_desc_free(), the TX status descriptor ring will be allocated
  in this path.

* Remove some of the TX EDMA capability probing from the RX path and
  push it into the new TX EDMA path.
2012-07-23 03:52:18 +00:00
Adrian Chadd
3d9b15965e Begin modifying the descriptor allocation functions to support a variable
sized TX descriptor.

This is required for the AR93xx EDMA support which requires 128 byte
TX descriptors (which is significantly larger than the earlier
hardware.)
2012-07-23 02:26:33 +00:00
Adrian Chadd
b8f2a85349 Enable the basic node-based rate control statistics via an ioctl(). 2012-07-20 01:36:46 +00:00
Adrian Chadd
b5b60f35b7 Ensure that error is set.
Noticed by:	rui
2012-07-14 05:51:54 +00:00
Adrian Chadd
8d467c41b0 Don't free the descriptor allocation/map if it doesn't exist.
I missed this in my previous commit.
2012-07-14 02:47:16 +00:00
Adrian Chadd
39abbd9bd2 Fix EDMA RX to actually work without panicing the machine.
I was setting up the RX EDMA buffer to be 4096 bytes rather than the
RX data buffer portion.  The hardware was likely getting very confused
and DMAing descriptor portions into places it shouldn't, leading to
memory corruption and occasional panics.

Whilst here, don't bother allocating descriptors for the RX EDMA case.
We don't use those descriptors. Instead, just allocate ath_buf entries.
2012-07-14 02:07:51 +00:00
Adrian Chadd
bcbb08ceb5 Flip on EDMA RX of both HP and LP queue frames.
Yes, this is in the legacy interrupt path.  The NIC does support
MSI but I haven't yet sat down and written that code.
2012-07-10 07:43:31 +00:00
Adrian Chadd
2633dc9382 Migrate the ATH_KTR_* fields out to if_ath_debug.h . 2012-07-10 06:11:39 +00:00
Adrian Chadd
3d184db2f8 Further preparations for the RX EDMA support.
Break out the DMA descriptor setup/teardown code into a method.
The EDMA RX code doesn't allocate descriptors, just ath_buf entries.
2012-07-09 08:37:59 +00:00
Adrian Chadd
f8cc9b09b0 Begin abstracting out the RX path in preparation for RX EDMA support.
The RX EDMA support requires a modified approach to the RX descriptor
handling.

Specifically:

* There's now two RX queues - high and low priority;
* The RX queues are implemented as FIFOs; they're now an array of pointers
  to buffers;
* .. and the RX buffer and descriptor are in the same "buffer", rather than
  being separate.
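
For illustration, the FIFO-style queue amounts to something like this
(sizes, names and fields are stand-ins, not the driver's):

    #include <stddef.h>

    #define RX_FIFO_DEPTH 128

    struct rxbuf {
        /* descriptor/status words share the buffer with the payload */
        char data[2048];
    };

    struct rx_fifo {
        struct rxbuf *slot[RX_FIFO_DEPTH];  /* array of buffer pointers */
        unsigned head;                      /* oldest hardware-owned slot */
        unsigned tail;                      /* next slot to refill */
        unsigned count;
    };

    /* Hand a fresh buffer to the hardware (software -> FIFO). */
    int
    rx_fifo_push(struct rx_fifo *f, struct rxbuf *rb)
    {
        if (f->count == RX_FIFO_DEPTH)
            return (-1);
        f->slot[f->tail] = rb;
        f->tail = (f->tail + 1) % RX_FIFO_DEPTH;
        f->count++;
        return (0);
    }

    /* Take back the oldest buffer once the hardware has completed it. */
    struct rxbuf *
    rx_fifo_pop(struct rx_fifo *f)
    {
        struct rxbuf *rb;

        if (f->count == 0)
            return (NULL);
        rb = f->slot[f->head];
        f->head = (f->head + 1) % RX_FIFO_DEPTH;
        f->count--;
        return (rb);
    }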

So to that end, this commit abstracts out most of the RX related functions
from the bulk of the driver.  Notably, the RX DMA/buffer allocation isn't
updated, primarily because I haven't yet fleshed out what it should look
like.

Whilst I'm here, create a set of matching but mostly unimplemented EDMA
stubs.

Tested:

  * AR9280, station mode

TODO:

  * Thorough AP and other mode testing for non-EDMA chips;
  * Figure out how to allocate RX buffers suitable for RX EDMA, including
    correctly setting the mbuf length to compensate for the RX descriptor
    and completion status area.
2012-07-03 06:59:12 +00:00
Adrian Chadd
f8aa9fd500 Shuffle these initialisations to where they should be. 2012-06-24 08:28:06 +00:00
Adrian Chadd
e1b5ab97e8 Introduce an optional ath(4) radiotap vendor extension.
This includes a few new fields in each RXed frame:

* per chain RX RSSI (ctl and ext);
* current RX chainmask;
* EVM information;
* PHY error code;
* basic RX status bits (CRC error, PHY error, etc).

This is primarily to allow me to do some userland PHY error processing
for radar and spectral scan data.  However since EVM and per-chain RSSI
is provided, others may find it useful for a variety of tasks.

The default is to not compile in the radiotap vendor extensions, primarily
because tcpdump doesn't seem to handle the particular vendor extension
layout I'm using, and I'd rather not break existing code out there that
may be (badly) parsing the radiotap data.

Instead, add the option 'ATH_ENABLE_RADIOTAP_VENDOR_EXT' to your kernel
configuration file to enable these options.
2012-06-24 07:01:49 +00:00
Adrian Chadd
d1328898eb After some discussion with bschmidt@, it's likely better to just go
through ieee80211_suspend_all() and ieee80211_resume_all().
All the other wireless drivers are doing that particular dance.

PR:		kern/169084
2012-06-17 03:08:33 +00:00
Adrian Chadd
af0c4b9e4f oops, remove this, it wasn't supposed to be committed. 2012-06-16 22:26:45 +00:00
Konstantin Belousov
ec528f07de Fix build. 2012-06-16 20:49:08 +00:00
Adrian Chadd
375d4f068a Shuffle some more fields in ath_buf so it's not too big.
This shaves off 20 bytes - from 288 bytes to 268 bytes.

However, it's still too big.
2012-06-16 04:41:35 +00:00
Adrian Chadd
021a0db52e Convert ath(4) to just use ieee80211_suspend_all() and ieee80211_resume_all().
The existing code tries to use the beacon miss timer to signal that the AP
has gone away.  Unfortunately this doesn't seem to be behaving itself.
I'll try to investigate why this is for the sake of completeness.

The result is the STA will stay "associated" to the AP it was associated
with when it suspended.  It never receives a bmiss notification so it
never tries reassociating.

PR:		kern/169084
2012-06-15 01:15:59 +00:00
Adrian Chadd
3b324f5772 Disable BGSCAN for 802.11n for now. Until scanning during traffic is
fixed for 802.11n TX, this needs to be disabled or users will see randomly
hanging aggregation sessions.

Whilst I'm here, remove the warning about 802.11n being full of dragons.
It's nowhere near that scary now.
2012-06-14 04:14:06 +00:00
Adrian Chadd
23ced6c117 Implement a global (all non-mgmt traffic) TX ath_buf limitation when
ath_start() is called.

This (defaulting to 10 frames) gives a little headway in the TX ath_buf
allocation, so buffer cloning is still possible.

This requires a lot more experimenting and tuning.

It also doesn't stop a node/TID from consuming all of the available
ath_buf's, especially when the node is going through high packet loss
or only talking at a low TX rate.  It also doesn't stop a paused TID
from taking all of the ath_bufs.  I'll look at fixing that up in subsequent
commits.

PR:	kern/168170
2012-06-14 00:51:53 +00:00
Adrian Chadd
af33d486ab Implement a separate, smaller pool of ath_buf entries for use by management
traffic.

* Create sc_mgmt_txbuf and sc_mgmt_txdesc, initialise/free them appropriately.
* Create an enum to represent buffer types in the API.
* Extend ath_getbuf() and _ath_getbuf_locked() to take the above enum.
* Right now anything sent via ic_raw_xmit() allocates via ATH_BUFTYPE_MGMT.
  This may not be very useful.
* Add ATH_BUF_MGMT flag (ath_buf.bf_flags) which indicates the current buffer
  is a mgmt buffer and should go back onto the mgmt free list.
* Extend 'txagg' to include debugging output for both normal and mgmt txbufs.
* When checking/clearing ATH_BUF_BUSY, do it on both TX pools.
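
In sketch form (stand-in types; the flag and enum names loosely mirror
the ones above, but the real allocation path does considerably more):

    #include <sys/queue.h>
    #include <stddef.h>

    #define BUF_MGMT 0x1            /* buffer came from the mgmt pool */

    enum buftype { BUFTYPE_NORMAL, BUFTYPE_MGMT };

    struct buf {
        TAILQ_ENTRY(buf) next;
        int flags;
    };
    TAILQ_HEAD(bufhead, buf);

    struct buf *
    getbuf(struct bufhead *normal, struct bufhead *mgmt, enum buftype t)
    {
        struct bufhead *pool = (t == BUFTYPE_MGMT) ? mgmt : normal;
        struct buf *bf = TAILQ_FIRST(pool);

        if (bf == NULL)
            return (NULL);
        TAILQ_REMOVE(pool, bf, next);
        /* Tag it so freeing returns it to the right pool. */
        bf->flags = (t == BUFTYPE_MGMT) ? BUF_MGMT : 0;
        return (bf);
    }

    void
    freebuf(struct bufhead *normal, struct bufhead *mgmt, struct buf *bf)
    {
        if (bf->flags & BUF_MGMT)
            TAILQ_INSERT_TAIL(mgmt, bf, next);
        else
            TAILQ_INSERT_TAIL(normal, bf, next);
    }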

Tested:

* STA mode, with heavy UDP injection via iperf.  This filled the TX queue
  however BARs were still going out successfully.

TODO:

* Initialise the mgmt buffers with ATH_BUF_MGMT and then ensure the right
  type is being allocated and freed on the appropriate list.  That'd save
  a write operation (to bf->bf_flags) on each buffer alloc/free.

* Test on AP mode, ensure that BAR TX and probe responses go out nicely
  when the main TX queue is filled (eg with paused traffic to a TID,
  awaiting a BAR to complete.)

PR:		kern/168170
2012-06-13 06:57:55 +00:00
Adrian Chadd
e1a50456b6 Replace the direct sc_txbuf manipulation with a pair of functions.
This is preparation work for having a separate ath_buf queue for
management traffic.

PR:		kern/168170
2012-06-13 05:39:16 +00:00
Adrian Chadd
94fe37d25c Add a new ioctl for ath(4) which returns the aggregate statistics. 2012-06-10 06:42:18 +00:00