ps-poll is totally broken in its current form.
This should unbreak things enough to let people use PS-POLL devices,
but leave it in place for me to finish PS-POLL handling.
the non-aggregate path.
I "cheated" by using some TX setup code in our HAL that isn't present
in the atheros HAL (or Linux ath9k.)
The old path for forming aggregates was:
* setup the rate control in the first descriptor;
* call chaintxdesc() on all the frames;
* call setupfirsttxdesc() on the first descriptor in the first
frame;
* call setuplasttxdesc() on the last descriptor in the last frame.
The new path for forming aggregates looks like the non-aggregate path:
* call setuptxdesc() on the first descriptor in the first frame;
* setup the rate control in the first descriptor;
* call filltxdesc() on each descriptor in the frame;
* if it's an aggregate - call set11n_aggr_{first, middle, last} as
appropriate (see the code for a description of what is "appropriate".)
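A rough sketch of that ordering (illustrative only - the loop shape is an
assumption, the HAL argument lists are trimmed, and one descriptor per frame
is shown for brevity):

        for (bf = bf_first; bf != NULL; bf = bf->bf_next) {
                if (bf == bf_first) {
                        /* setuptxdesc() only on the first frame .. */
                        ath_hal_setuptxdesc(ah, bf->bf_desc /* , ... */);
                        /* .. and the rate control series go into the
                         * first descriptor here as well. */
                }

                /* filltxdesc() on each descriptor in the frame
                 * (just one descriptor shown per frame). */
                ath_hal_filltxdesc(ah, bf->bf_desc /* , ... */);

                /* Then flag the frame's position within the aggregate. */
                if (bf == bf_first)
                        ath_hal_set11n_aggr_first(ah, bf->bf_desc /* , ... */);
                else if (bf->bf_next == NULL)
                        ath_hal_set11n_aggr_last(ah, bf->bf_desc);
                else
                        ath_hal_set11n_aggr_middle(ah, bf->bf_desc /* , ... */);
        }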
Now, this is done primarily for the AR9300 HAL - it doesn't implement
the first set of aggregate functions. It just has the older methods
and the "first/middle/last" aggregate methods. So, let's convert the
code to use these.
Note: the AR5416 HAL in FreeBSD had that code (from me, a while ago)
and a previous commit brought it up to behave the same as the AR9300
HAL routines.
There are some further tidyups to be done - specifically, avoid doing
multiple calls to the 11n descriptor functions. I shouldn't call
clr11n_aggr(), then set11n_aggr_middle(), then also set11n_aggr_first().
On (at least) MIPS, the TX descriptors are in non-cacheable memory and
this will cause multiple slow writes.
I'll debug/tidy that up in a future commit.
Tested:
* AR9280, STA
* AR9280/AR9160, AP
* AR9380, STA (using a local, closed source HAL, sorry!)
them, please let me know if not). Most of these are of the form:
static const struct bzzt_type {
[...list of members...]
} const bzzt_devs[] = {
[...list of initializers...]
};
The second const is unnecessary, as arrays cannot be modified anyway,
and if the elements are const, the whole thing is const automatically
(e.g. it is placed in .rodata).
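For example, with a hypothetical table (made-up members and entries), the
marked const is the redundant one:

        static const struct bzzt_type {
                int     vendor;
                int     device;
        } /* const */ bzzt_devs[] = {   /* <- this second const adds nothing */
                { 0x1234, 0x0001 },
                { 0x1234, 0x0002 },
        };
        /*
         * With or without it, a write such as "bzzt_devs[0].vendor = 0;"
         * fails to compile, and the array lands in .rodata either way.
         */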
I have verified this does not change the binary output of a full kernel
build (except for build timestamps embedded in the object files).
Reviewed by: yongari, marius
MFC after: 1 week
* don't poke ath_hal_txstart() if nothing was pushed into the FIFO during
the refill process;
* shuffle around the TX debugging output a little so it's logged at
TX hardware enqueue;
* Add logging of the TX status processing.
of small (< 256 byte) aggregate frames.
This needs to be done or 11n aggregation TX just simply doesn't work
on these NICs.
Whilst here, extend some debug printing; I was using this whilst
debugging the TX power setup in the TX descriptor(s) on the AR9380.
* introduce a new HAL API method to pull out the TX status descriptor
contents.
* Add num_delims to the 11n first aggr method. This isn't used by the
driver at the moment so it won't affect anything.
* Add some more ANI spur immunity levels.
* For AR5111 radios attached to an AR5212, limit the 5GHz channels
that are available. A later revision of the AR5111 supports the 4.9GHz
PSB channels but right now there's no check in place for the radio
revision.
If someone wants PSB support on AR5212+AR5111 radios then please let
me know and I'll add the relevant version check.
Obtained from: Qualcomm Atheros
the internet as "AR9380 and later which didn't get its PCI ID written
in at power-on", so it's hardly an unknown constant.
Obtained from: Qualcomm Atheros
in some very degenerate conditions.
However, until ath_rate_form_aggr() is taught to not form aggregates
if ANY selected rate is non-MCS, this can't yet be enabled.
So, just add a comment.
I've tried serialising TX using queues and such but unfortunately
due to how this interacts with the locking going on elsewhere in the
networking stack, the TX task gets delayed, resulting in quite a
noticeable throughput loss:
* baseline TCP for 2x2 11n HT40 is ~ 170mbit/sec;
* TCP for TX task in the ath taskq, with the RX also going on - 80mbit/sec;
* TCP for TX task in a separate, second taskq - 100mbit/sec.
So for now I'm going with the Linux wireless stack approach - lock TX
early. The Linux code does this in the wireless stack, before the 802.11
state stuff happens and before it's punted to the driver.
But TX locking needs to also occur at the driver layer as the TX
completion code _also_ begins to drain the ifnet TX queue.
Whilst I'm here, add some KTR traces for the TX path.
Note:
* This really should be done at the net80211 layer (as well, at least.)
But that'll have to wait for a little more thought to happen.
the power save queue.
* introduce some new ATH_NODE lock protected fields, tracking the
net80211 psq and TIM state;
* when doing buffer transitions - ie, when sending and completing
buffers - check the state of the SWQ and update the TIM appropriately.
* when clearing the TIM bit, if the SWQ is not empty then delay clearing
it.
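A sketch of the intended TIM handling (the function, field and hook names
here are illustrative, not the actual driver symbols):

        /* Placeholder for whatever hook actually flips the node's TIM bit. */
        static void node_set_tim(struct ath_node *an, int set);

        /*
         * Sketch: called (under the ATH_NODE lock) whenever a buffer is
         * queued to or completed from this node's software queues.
         */
        static void
        ath_tx_node_update_tim(struct ath_node *an, int swq_depth)
        {
                if (swq_depth > 0 && an->an_tim_set == 0) {
                        /* Buffered traffic exists; make sure TIM is set. */
                        an->an_tim_set = 1;
                        node_set_tim(an, 1);
                } else if (swq_depth == 0 && an->an_tim_set == 1) {
                        /* Only clear TIM once the SWQ has fully drained. */
                        an->an_tim_set = 0;
                        node_set_tim(an, 0);
                }
                /* If the SWQ still has frames, clearing TIM is deferred. */
        }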
This is racy, but it's no less racy than the current net80211 power
save queue management code. Specifically, with multiple TX threads,
it's quite plausible that parallel state updates will race and the
TIM will be left in an inconsistent state. I'll address that in
a follow-up commit.
support with ath(4) and VIMAGE.
Right now the VIMAGE code doesn't supply a default vnet context during:
* hotplug attach;
* any device detach.
It special cases kldload/boot time probing (by setting the context to
vnet0) but that doesn't occur when probing devices during a bus rescan -
eg, adding a cardbus card.
These will eventually go away when the VIMAGE support extends to providing
default contexts to hotplug attach/detach.
fragment rate lookups correctly, add a comment describing exactly that.
The assumption in the fragment duration code is that the next fragment's
duration can be calculated from the rate used by the current fragment.
But I think a rate lookup is being done for _each_ fragment. For older,
pre-sample rate control this would almost always be the case, but for sample
it may be incorrect more often than correct.
stashed away in ath_node.
As much as I tried to stuff that behind the ATH_NODE lock, unfortunately
the locking is just too plain hairy (for me! And I wrote it!) to do
cleanly. Hence using atomics here instead of a lock. The ATH_NODE lock
just isn't currently used anywhere besides the rate control updates.
If in the future everything gets migrated back to using a single ATH_NODE
lock or a single global ATH_TX lock (ie, a single TX lock for all TX and
TX completion) then fine, I'll remove the atomics.
it run out of multiple concurrent contexts.
Right now the ath(4) TX processing is a bit hairy. Specifically:
* It was running out of ath_start(), which could occur from multiple
concurrent sending processes (as if_start() can be started from multiple
sending threads nowadays.. sigh)
* during RX if fast frames are enabled (so not really at the moment, not
until I fix this particular feature again..)
* during ath_reset() - so anything which calls that
* during ath_tx_proc*() in the ath taskqueue - ie, TX is attempted again
after TX completion, as there's now hopefully some ath_bufs available.
* Then, the ic_raw_xmit() method can queue raw frames for transmission
at any time, from any net80211 TX context. Ew.
This has caused packet ordering issues in the past - specifically,
there's absolutely no guarantee that preemption won't occur _during_
ath_start(), by the TX completion processing, which will call ath_start()
again. It's a mess - 802.11 really, really wants things to be in
sequence or things go all kinds of loopy.
So:
* create a new task struct for TX'ing;
* make the if_start method simply queue the task on the ath taskqueue;
* make ath_start() just be called by the new TX task;
* make ath_tx_kick() just schedule the ath TX task, rather than directly
calling ath_start().
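In taskqueue(9) terms the change looks roughly like this (function and softc
field names are approximations of the description above):

        /* TX task handler: now the only context that runs the TX path. */
        static void
        ath_tx_task(void *arg, int npending)
        {
                struct ath_softc *sc = arg;

                ath_start(sc->sc_ifp);
        }

        /* New if_start method: don't transmit here, just poke the taskqueue. */
        static void
        ath_start_queue(struct ifnet *ifp)
        {
                struct ath_softc *sc = ifp->if_softc;

                taskqueue_enqueue(sc->sc_tq, &sc->sc_txtask);
        }

        /* Done once at attach time. */
        static void
        ath_tx_task_init(struct ath_softc *sc)
        {
                TASK_INIT(&sc->sc_txtask, 0, ath_tx_task, sc);
        }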
Now yes, this means that I've taken a step backwards in terms of
concurrency - TX -and- RX now occur in the same single-task taskqueue.
But there's nothing stopping me from separating out the TX / TX completion
code into a separate taskqueue which runs in parallel with the RX path,
if that ends up being appropriate for some platforms.
This fixes the CCMP/seqno concurrency issues that creep up when you
transmit large amounts of uni-directional UDP traffic (>200MBit) on a
FreeBSD STA -> AP, as now there's only one TX context no matter what's
going on (TX completion->retry/software queue,
userland->net80211->ath_start(), TX completion -> ath_start());
but it won't fix any concurrency issues between raw transmitted frames
and non-raw transmitted frames (eg EAPOL frames on TID 16 and any other
TID 16 multicast traffic that gets put on the CABQ.) That is going to
require a bunch more re-architecture before it's feasible to fix.
In any case, this is a big step towards making the majority of the TX
path locking irrelevant, as now almost all TX activity occurs in the
taskqueue.
Phew.
Right now processing a full 512 frame queue takes quite a while (measured
on the order of milliseconds.) Because of this, the TX processing ends up
sometimes preempting the taskqueue:
* userland sends a frame
* it goes in through net80211 and out to ath_start()
* ath_start() will end up either direct dispatching or software queuing a
frame.
If TX had to wait for RX to finish, it would add quite a few ms of
additional latency to the packet transmission. This in the past has
caused issues with TCP throughput.
Now, as part of my attempt to bring sanity to the TX/RX paths, the first
step is to make the RX processing happen in smaller 'parts'. That way
when TX is pushed into the ath taskqueue, there won't be so much latency
in the way of things.
The bigger scale change (which will come much later) is to pull the frames
off the RX descriptor list in the ath_intr taskqueue but process the _frames_
themselves in the ath driver taskqueue. That would reduce the latency between
processing and requeuing new descriptors. But that'll come later.
The actual work:
* Add ATH_RX_MAX at 128 (static for now);
* break out of the processing loop if npkts reaches ATH_RX_MAX;
* if we processed ATH_RX_MAX or more frames during the processing loop,
immediately reschedule another RX taskqueue run. This will handle
the further frames in the taskqueue.
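In rough code form (ath_rx_one() is a placeholder for the existing
per-descriptor work; the task field names are assumptions):

        #define ATH_RX_MAX      128     /* static for now */

        static void
        ath_rx_proc_sketch(struct ath_softc *sc)
        {
                int npkts = 0;

                while (ath_rx_one(sc) == 0) {
                        if (++npkts >= ATH_RX_MAX)
                                break;  /* cap the work done in this pass */
                }

                /* Frames may still be pending; schedule another RX pass. */
                if (npkts >= ATH_RX_MAX)
                        taskqueue_enqueue(sc->sc_tq, &sc->sc_rxtask);
        }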
This should have very minimal impact on the general throughput case,
unless the scheduler is being very very strange or the ath taskqueue
ends up spending a lot of time on non-RX operations (such as TX
completion.)
the ATH_TXQ_* macros.
* Introduce the new macros;
* rename the TID queue and TID filtered frame queue fields so the compiler
tells me when I'm using the wrong macro.
These should correspond 1:1 to the existing code.
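The macros are just thin TAILQ wrappers over the renamed fields; something
along these lines (macro and field names illustrative):

        /* Accessors for the per-TID software queue .. */
        #define ATH_TID_INSERT_TAIL(_tid, _bf, _field) \
                TAILQ_INSERT_TAIL(&(_tid)->tid_q, (_bf), _field)
        #define ATH_TID_REMOVE(_tid, _bf, _field) \
                TAILQ_REMOVE(&(_tid)->tid_q, (_bf), _field)
        #define ATH_TID_FIRST(_tid) \
                TAILQ_FIRST(&(_tid)->tid_q)

        /* .. and for the per-TID filtered frame queue. */
        #define ATH_TID_FILT_INSERT_TAIL(_tid, _bf, _field) \
                TAILQ_INSERT_TAIL(&(_tid)->tid_filtq, (_bf), _field)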
AR5416 and AR9280, but leave it disabled by default.
TL;DR: don't enable this code at all unless you go through the process
of getting the NIC re-certified. This is purely to be used as a
reference and NOT a certified solution by any stretch of the imagination.
The background:
The AR5112 RF synth right up to the AR5133 RF synth (used on the AR5416,
a derivative is used for the AR9130/AR9160) only implement down to 2.5MHz
channel spacing in 5GHz. Ie, the RF synth is programmed in steps of 2.5MHz
(or 5, 10, 20MHz.) So they can't represent the quarter rate channels
in the 4.9GHz PSB (which end in xxx2MHz and xxx7MHz). They support
fractional spacing in 2GHz (1MHz spacing) (or things wouldn't work,
right?)
So instead of doing this, the RF synth programming for the AR5112 and
later code will round to the nearest available frequency.
If all NICs were RF5112 or later, they'd inter-operate fine - they all
program the same. (And for reference, only the latest revision of the
RF5111 NICs does it, but the driver doesn't yet implement the programming.)
However:
* The AR5416 programming didn't at all implement the fractional synth
workaround as above;
* The AR9280 programming actually programmed the accurate centre frequency
and thus wouldn't inter-operate with the legacy NICs.
So this patch:
* Implements the 4.9GHz PSB fractional synth workaround, exactly as the
RF5112 and later code does;
* Adds a very dirty workaround from me to calculate the same channel
centre "fudge" to the AR9280 code when operating on fractional frequencies
in 5GHz.
HOWEVER however:
It is disabled by default. Since the HAL didn't implement this feature,
it's highly unlikely that the AR5416 and AR928x have been tested on these
centre frequencies. There's a lot of regulatory compliance testing required
before a NIC can have this enabled - checking for centre frequency,
for drift, for synth spurs, for distortion and spectral mask compliance.
There's likely a lot of other things that need testing so please don't
treat this as an exhaustive, authoritative list. There's a perfectly good
process out there to get a NIC certified by your regulatory domain, please
go and engage someone to do that for you and pay the relevant fees.
If a company wishes to grab this work and certify existing 802.11n NICs
for work in these bands then please be my guest. The AR9280 works fine
on the correct fractional synth channels (49x2 and 49x7 MHz) so you don't
need to get certification for that. But the 500KHz offset hack may have
the above issues (spur, distortion, accuracy, etc) so you will need to
get the NIC recertified.
Please note that it's also CARD dependent. Just because the RF synth
will behave correctly doesn't at all mean that the card design will also
behave correctly. So no, I won't enable this by default just because someone
verifies that a specific AR5416/AR9280 NIC works. Please don't ask.
Tested:
I used the following NICs to do basic interoperability testing at
half and quarter rates. However, I only did very minimal spectrum
analyser testing (mostly "am I about to blow things up" testing;
not "certification ready" testing):
* AR5212 + AR5112 synth
* AR5413 + AR5413 synth
* AR5416 + AR5113 synth
* AR9280
net80211 node power save state.
* Add an ATH_NODE_UNLOCK_ASSERT() check
* Add a new node field - an_is_powersave
* Pause/unpause the queue based on the node state
* Attempt to handle net80211 concurrency issues so the queue
doesn't get paused/unpaused more than once at a time from
the net80211 power save code.
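Behaviourally, the concurrency handling amounts to something like this
sketch (names approximate):

        static void
        ath_node_powersave_sketch(struct ath_node *an, int enable)
        {
                ATH_NODE_LOCK(an);
                if (enable && ! an->an_is_powersave) {
                        /* Node went to sleep: pause its TID queues, once. */
                        an->an_is_powersave = 1;
                        /* .. pause TIDs here .. */
                } else if (! enable && an->an_is_powersave) {
                        /* Node woke up: unpause once; CLRDMASK gets set
                         * on unpause. */
                        an->an_is_powersave = 0;
                        /* .. unpause TIDs here .. */
                }
                /* Duplicate calls from net80211 fall through unchanged. */
                ATH_NODE_UNLOCK(an);
        }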
Whilst here (and breaking my usual rule), set CLRDMASK when a queue
is unpaused, regardless of whether the queue has some pending traffic.
This means the first frame from that TID (now or later) will have
CLRDMASK set.
Also whilst here, bump the swretrymax counters whenever the
filtered frames code expires a frame. Again, breaking my rule, but
this is just a statistics thing rather than a functional change.
This doesn't fix ps-poll (but it doesn't break it much worse
than it is at present) or correct the TID updates.
That's next on the list.
Tested:
* AR9220 AP (Atheros AP96 reference design)
* Macbook Pro and LG Optimus 1 Android phone, both setting
and clearing power save state (but not using PS-POLL.)
This doesn't specifically fix the issue(s) I'm seeing in this 2GHz
environment (where setting/increasing spur immunity causes OFDM restart
errors to skyrocket through the roof; but leaving it at 0 would leave
the environment cleaner..)
Pointy-hat-to: me, for committing this broken code in the first place.
things like EAPOL frames make it out.
After a whole bunch of hacking/testing, I discovered that they weren't
being early-dropped by the stack (but I should look at ensuring that
later..) but were even making it to the hardware transmit queue.
They were mostly even being received by the remote end. However, the
remote end was completely ignoring them.
This didn't happen under 150-170MBit TCP tests as I'm guessing the TX
queue stayed very busy and the STA didn't do any scanning. However, when
doing 100Mbit/s of TCP traffic, the STA would do background scanning -
which involves it coming in and out of powersave mode with the AP.
Now, this is a total and utter hack around the real problems, which are:
* I need to implement proper power save handling and integrate it into
the filtered frames support, so the driver/stack doesn't send frames
whilst the station is actually in sleep;
* .. but frames were actually making it to the STA (macbook pro) and
the AP did receive an ACK; but a tcpdump on the receiving side showed
the EAPOL frame never made it. So the stack was dropping it for
some reason;
* Importantly - the EAPOL frames are currently going into the non-QoS
TID, which maps to the BE queue and is susceptible to that queue being
busy doing other things, but;
* There's other traffic going on in the non-QoS TID from other contexts
when scanning is going on and it's possible there's some races causing
sequence number/IV issues, but;
* Importantly importantly, I think the interaction with TID 16 multicast
traffic in power save mode is causing issues - since I -believe- the
sequence number space being used by the EAPOL frames on TID 16 overlaps
with the multicast frames that have sequence numbers allocated and
are then stuffed on the cabq. Since with EAPOL frames being in TID 16
and queued to the BE queue, it's going to be waiting to be serviced
with all of the aggregate traffic going on - and if the CABQ gets
emptied beforehand, those TID 16 multicast frames with sequence numbers
will go out beforehand.
Now, there's quite likely a bunch of "stuff happening slightly out of
sequence" going on due to the nature of the TX path (read: lots of
overlapping and concurrent ath_start() and ath_raw_xmit() calls going
on, sigh) but I thought I had caught them all and stuffed each TID TX
behind a lock (that lasted as long as it needed to in order to get
the frame onto the relevant destination queue - thus keeping things
in order.)
Unfortunately the last problem is the big one and I'm going to stare at
it some more. If it _is_
So this is a work around for now to ensure that EAPOL frames actually
make it out before any other stuff in the non-QoS TID and HOPEFULLY
before the CABQ gets active.
I'm now going to spend a little time in the TX path figuring out exactly
why the sender is rejecting things. There are two (well, three if you count
invalid EAPOL contents) possibilities:
* The sequence number is out of order (ie, something else, like the multicast
traffic on the CABQ, is going out first on TID 16);
* The CCMP IV is out of order (similar to above - but less likely, as the
TX key for multicast traffic is different to unicast traffic);
* EAPOL contents strangely invalid.
AP: Ubiquiti RSPRO, AR9160/AR9220 NICs
STA: Macbook Pro, Broadcom 11n NIC
lock may be held.
Kim reported that the TID lock wasn't held when ath_tx_update_clrdmask()
was called - well, the lock of the underlying hardware TXQ for that TID.
I'm betting it's the cabq stuff. ath_tx_xmit_normal() can be called
for both the real and the software cabq. For the software cabq, the real
destination txq is different from the txq. So, the lock check will fail.
Reported by: Kim Culhan <w8hdkim@gmail.com>
This should eventually be unified with ATH_DEBUG() so I can get both
from one macro; that may take some time.
Add some new probes for TX and TX completion.
* use the correct frame status - although the completion descriptor is
the _last_ in the frame/aggregate, the status is currently stored in
the _first_ buffer.
* Print out ath_buf specific fields once, not per descriptor in an ath_buf.
it's disabled.
The previous commit to enable CLRDMASK setting didn't do it at all
correctly for non-aggregate sessions - so the CLRDMASK bit would be
cleared and never re-set.
* move ath_tx_update_clrdmask() to be called by functions that setup
descriptors and queue frames to the hardware, rather than scattered
everywhere.
* Force CLRDMASK to be set on all non-aggregate session frames being
transmitted.
* Use ath_tx_normal_comp() now on non-aggregate session frames
that are queued via ath_tx_xmit_normal(). That way the TID hwq is
updated and they can trigger (eventual) filter frame queue resets
and software retransmits.
There's still a bit more work to do in this area to reverse the silly
short-sightedness on my part, however it's likely going to be better
to fix this now than just reverting the patch.
Thanks to people on the freebsd-wireless@ mailing list for promptly
pointing this out.
frames to occur.
* Create a new function which will set the bf_flags CLRDMASK bit
if required.
* For raw frames, always set CLRDMASK.
* For BAR, ADDBA frames, always set CLRDMASK.
* For everything else, check if CLRDMASK needs to be set before
calling tx_setds() or tx_setds11n().
* When unpausing a queue or drain/resetting it, set tid->clrdmask=1
just to ensure traffic starts flowing.
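The new helper is essentially this (a sketch; the exact ath_buf/ath_tid
field names may differ):

        static void
        ath_tx_update_clrdmask(struct ath_softc *sc, struct ath_tid *tid,
            struct ath_buf *bf)
        {
                /* If the TID has asked for it, set CLRDMASK on this frame .. */
                if (tid->clrdmask == 1) {
                        bf->bf_state.bfs_txflags |= HAL_TXDESC_CLRDMASK;
                        /* .. and only on the first such frame. */
                        tid->clrdmask = 0;
                }
        }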
What I need to do:
* Modify that function to _clear_ the CLRDMASK if it's not required,
or retried frames may have CLRDMASK set when they don't need to.
(Which isn't a huge deal, but..)
Whilst I'm here:
* ath_tx_normal_xmit() should really act like the AMPDU session TX
functions - any incomplete frames will end up being assigned
ath_tx_normal_comp() which will decrement tid->hwq_depth - but that
won't have been incremented.
So whilst I'm here, add a comment to do that.
* Fix the debug print function to be slightly clearer about things;
it's not a good sign when I can't interpret my own debugging output.
I've done some testing on AR9280/AR5416/AR9160 STA and AP modes.
stack.
There are unfortunately quite a few odd cases in BAR TX and BAR TX
retransmission that I haven't yet fully diagnosed. So for now, add
this work-around so the resume() function isn't called too often,
decrementing pause to -1 (and causing things to stay paused.)
is done.
The aggregate path was definitely accessing 'ts' before it was actually
being assigned.
This had the side effect of over-filtering frames, since occasionally that
bit would be '1'.
Whilst here, do the same thing in the non-aggregate completion function -
as calling the filter function may also invalidate bf.
Pointy hat to: adrian, for not noticing this over many, many code reviews.
The hardware can optionally "filter" frames if successive transmissions
to a given node (ie, "entry in the keycache") fail. That way the hardware
can implement a kind of early abort of all the other frames queued to
that destination, rather than simply trying to TX each frame to that
destination (and failing.)
The background:
* If a frame comes back as being filtered, the hardware didn't try to
TX it (or it was outside the TX burst opportunity.) So, take it as a hint
that some (but not all, see below) frames to the destination may be
filtered.
* If the CLRDMASK bit is set in a TX descriptor, the "filter to this
destination" bit in the keycache entry is cleared and TX to that host
will be unconditionally retried.
* Right now everything has the CLRDMASK bit set, so filtered frames
tend to be aggregates and frames that fall outside of the WME burst
window. It was a bit worse in the past as I had messed up the TX
flags and CLRDMASK wasn't being set on aggregate frames.
The annoying bits:
* It's easy (ish) to do for aggregate session frames - firstly, they
can be retried in any order as long as they're within the BAW, and
there's already a bunch of infrastructure tracking how many frames
the TID has queued to the hardware (tid->hwq_depth.) However, for
frames that bypassed the software queue, hwq_depth doesn't get
incremented. I'll fix that in a subsequent commit.
* For non-aggregate session frames, the only retries that can occur
are ones for sequence numbers that haven't successfully been TXed yet.
Since there's no re-ordering going on in non-aggregate sessions, if any
subsequent seqno frames make it out, any filtered frames before that
seqno need to be dropped.
Hence why this initially is just for aggregate session frames.
* Since there may be intermediary frames to the destination that
have CLRDMASK set - for example, any directly dispatched management
frames to that destination - it's possible that there will be some
filtered frames followed up by some non-filtered frames. Thus,
it can't be assumed that once you see a filtered frame for the given
destination node, all subsequent frames for all TIDs will be filtered.
Ok, with that in mind:
* Create a per-TID filtered frame queue for frames that the hardware
returns as filtered.
* Track filtered frames per-tid, rather than per-node. It just makes
the locking much easier.
* When a filtered frame appears in the completion function, the node
transitions to "filtered", and all subsequent completed error frames
(filtered or otherwise) are put on the filtered frame queue. The TID
is paused once (during the transition from non-filtered to filtered).
* If a filtered frame retry count exceeds SWMAX_RETRIES, a BAR should be
sent.
* Once all the frames queued to the hardware for the given filtered-frame
TID have completed, transition back from filtered to non-filtered, which
means prepending all the filtered frames onto the head of the software
queue, clearing the filtered frame state and unpausing the TID.
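A condensed sketch of those transitions (names approximate; locking, BAR and
cleanup interactions are elided):

        /* Completion path: the hardware reported this frame as filtered. */
        if (! tid->isfiltered) {
                tid->isfiltered = 1;
                ath_tx_tid_pause(sc, tid);      /* pause once, on transition */
        }
        ATH_TID_FILT_INSERT_TAIL(tid, bf, bf_list);     /* stash for later */

        /* Later, once the hardware queue for this TID has fully drained: */
        if (tid->isfiltered && tid->hwq_depth == 0) {
                /* Pop from the tail of the filtered queue and push onto the
                 * head of the software queue, preserving the original order. */
                while ((bf = ATH_TID_FILT_LAST(tid)) != NULL) {
                        ATH_TID_FILT_REMOVE(tid, bf, bf_list);
                        ATH_TID_INSERT_HEAD(tid, bf, bf_list);
                }
                tid->isfiltered = 0;
                ath_tx_tid_resume(sc, tid);     /* unpause the TID */
        }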
Things get quite hairy around handling completion (aggr, non-aggr, norm,
direct-dispatched frames to a hardware queue); whether it's an "error",
"cleanup" or "BAR" state as well as filtered, which order to do things
in (eg do filtered BEFORE checking for BAR, as the filter completion
may be needed to actually transmit a BAR frame.)
This work has definitely reminded me that I have to tidy up all the locking
and remove some of the ridiculous lock/unlock/lock/unlock going on in the
completion functions.
It's also reminded me that I should really split out TID versus hardware TXQ
locking, even if the underlying locking is still the destination hardware TXQ.
Finally, this is all pre-requisite for working on AP mode power save support
(PS-POLL, uAPSD) as well as improving performance to misbehaving nodes (as
they can transition into filter mode, stopping any TX until everything has
caught up.)
Finally (ish) - this should also be done for non-aggregate sessions as
there are still plenty of laptops and mobile devices that don't speak
802.11n but do wish for stable, useful power save AP support where packets
aren't simply dropped. This requires software retransmission for
non-aggregate sessions to be implemented, which includes the caveats I've
mentioned above.
Finally finally - this doesn't yet do anything about the CLRDMASK bit in the
TX descriptor. That's still unconditionally set to 1. I'll debug the
current work (mostly ensuring I haven't busted up the hairy transitions
between BAR, filtered, error (all frames in an aggregate failing) and
cleanup (when transitioning from aggregation -> non-aggregation.))
Finally finally finally - this is all original work by yours truly, rather
than ported from the Atheros internal driver codebase or Linux ath9k.
Tested:
* AR9280, AR5416 in STA mode
* AR9280, AR9130 in hostap mode
* Lots and lots of iperf testing in very marginal and non-marginal conditions,
complete with inducing filtered frames + BAR TX conditions.