TSFOOR (TSF out-of-range) happens if a beacon isn't received within the
programmed/expected TSF value, plus or minus a fudge range. If this
happens it could be because the baseband/MAC is stuck, or the baseband
is deaf. So, do a cold reset and resync the beacon to try to unstick
the hardware.
It also happens when a badly-behaved AP decides to, er, slew its TSF
because it is itself resetting and doesn't preserve the TSF "well."
This has fixed a bunch of weird corner cases on my upstairs 2GHz AP
radio, which occasionally goes deaf due to how much 2GHz noise is up
there (and ANI gets a little sideways); this unsticks the station
VAP.
For AP modes a hung baseband/MAC usually shows up as a stuck beacon,
and those have long been addressed by just resetting the
hardware. Similar hangs in station mode, however, had no such
recovery mechanism.
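As a rough sketch (names approximate; sc_tsfoor_task is a hypothetical
task name, not the exact driver symbol), the interrupt path change
looks something like this:

    /* On a TSFOOR interrupt in STA mode, ask for a beacon resync and
     * schedule a (cold) reset to unstick the hardware. */
    if (status & HAL_INT_TSFOOR) {
        device_printf(sc->sc_dev, "%s: TSFOOR\n", __func__);
        sc->sc_syncbeacon = 1;      /* resync beacon timers on next RX */
        taskqueue_enqueue(sc->sc_tq, &sc->sc_tsfoor_task);
    }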
Tested:
* AR9380, STA mode, 2GHz/5GHz
* AR9580, STA mode, 5GHz
* QCA9344 SoC w/ on-board wifi (TL-WDR4300/3600 devices); 2GHz
STA mode
My initial rate control code was .. suboptimal. I wanted to at least get MCS
rates sent, but it did nowhere near enough to handle low-signal-level links
or keep remotely accurate statistics.
So, 8 years later, here's what I should've done back then.
* Firstly, I wasn't tracking packet sizes other than the two buckets
(250 and 1600 bytes.) So, extend it to include 4096, 8192, 16384, 32768 and
65536. I may go add 2048 at some point if I find it's useful. (There's a
bucket-lookup sketch after this list.)
This is important for a few reasons. First, when forming A-MPDU or A-MSDU
aggregates the frame sizes are larger, and thus the TX time calculation
is woefully, increasingly wrong. Secondly, the behaviour of 802.11 channels
isn't some fixed thing, both due to channel conditions and the radios
themselves. Notably, there were some observations made a few years ago on
11n chipsets which noticed that longer aggregates showed an increase in
failed A-MPDU sub-frame reception the further along in the transmit time
you got. It could be due to a variety of things - transmitter linearity,
channel conditions changing, frequency/phase drift, etc - but the takeaway
was to potentially form shorter aggregates to improve BER.
* .. and then modify the ath TX path to report the length of the aggregate
sent, so that the statistics kept line up with the correct bucket.
* Then on the rate control lookup side - I was also only using the first
frame length for an A-MPDU rate control lookup, which isn't good enough here.
So, add a new method that walks the TID software queue for that node to
find out the likely length of data available. It isn't ALL of the
data in the queue, because we'll only ever send enough data to fit inside the
block-ack window, so limit how many bytes we return to roughly what
ath_tx_form_aggr() would do.
* .. and cache that in the first ath_buf in the aggregate so it and the
eventual A-MPDU length can be returned to the rate control code.
* THEN, modify the rate control code to look at them both when deciding which
bucket to attribute the sent frame to. I'm erring on the side of caution and
using the size bucket that the lookup is based on.
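For illustration, the bucket lookup is essentially the following (a
sketch in the style of ath_rate_sample's size bins; exact symbol names
may differ):

    /* Frame/aggregate size buckets, in bytes. */
    static const int packet_size_bins[] =
        { 250, 1600, 4096, 8192, 16384, 32768, 65536 };
    #define NUM_PACKET_SIZE_BINS \
        ((int)(sizeof(packet_size_bins) / sizeof(packet_size_bins[0])))

    /* Map a frame/aggregate length to the statistics bucket to use. */
    static int
    size_to_bin(int size)
    {
        int x;

        for (x = 0; x < NUM_PACKET_SIZE_BINS; x++)
            if (size <= packet_size_bins[x])
                return (x);
        return (NUM_PACKET_SIZE_BINS - 1);
    }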
Ok, so now the rate lookups and statistics are "more correct". However, MCS
rates are not like 11abg rates: they're not a monotonically increasing set
of faster rates, and you can't assume that just because a given MCS rate
fails, the next higher one wouldn't work better or have a lower average TX
time. So, I had to do a bunch of surgery on the best-rate and sample-rate
math. This is the bit that's still a WIP.
* First, simplify the statistics updates (update_stats()) to do a single pass on
all rates.
* Next, make sure that each rate's average TX time is updated based on /its/
success/failure. E.g., if you sent a frame with { MCS15, MCS12, MCS8 } and
MCS8 succeeded, MCS15 and MCS12 would have their average TX time updated
for /their/ part of the transmission, not the whole transmission.
* Next, the EWMA wasn't being fully calculated based on the /failures/ in
each of the rate attempts. So, if MCS15 and MCS12 failed above but MCS8
didn't, ensure that the statistics note that /all/ subframes failed at those
rates, rather than just the eventual set of transmitted/sent frames. This
ensures the EWMA /and/ average TX time are updated correctly. (See the EWMA
sketch after this list.)
* When picking a sample rate and initial rate, probe rates around the current
MCS but limit it to MCS0..7 /for all spatial streams/, rather than doing
crazy things like hitting MCS7 and then probing MCS8 - MCS8 is basically
MCS0 but with two spatial streams, and it's a /lot/ slower than MCS7. The
reverse is also true - if we're at MCS8 then don't probe MCS7 as part of it;
it's not likely to succeed.
* Fix bugs in pick_best_rate() where I was /immediately/ choosing the highest
MCS rate if no frames had yet been transmitted. I was defaulting to a 25%
EWMA and .. then each comparison would accept the higher rate. Just skip
those; sampling will fill in the details.
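To illustrate the EWMA bookkeeping, here's a sketch (assumed names and
a 75/25 weighting; not the exact ath_rate_sample code):

    /*
     * Per-rate EWMA of sub-frame success. nframes is the number of
     * sub-frames attempted at this rate; nbad is how many of those
     * failed at this rate (not just in the eventual transmission.)
     */
    #define EWMA_LEVEL  75      /* weight given to history, in percent */

    static int
    update_ewma(int old_pct, int nframes, int nbad)
    {
        int cur_pct;

        if (nframes == 0)
            return (old_pct);
        cur_pct = 100 * (nframes - nbad) / nframes;
        if (old_pct < 0)        /* no history yet */
            return (cur_pct);
        return ((old_pct * EWMA_LEVEL +
            cur_pct * (100 - EWMA_LEVEL)) / 100);
    }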
So, this seems to work a lot better. It's not perfect; I'm still seeing a
lot of instability around higher MCS rates because there are bursts of
loss/retransmissions that aren't /too/ bad. But I'll keep iterating over
this and tidying up my hacks.
Ok, so why is this still something I'm poking at, rather than porting
minstrel_ht?
ath_rate_sample tries to minimise airtime, not maximise throughput. I have
extended it with an EWMA based on sub-frame success/failures - high MCS rates
that have partially successful receptions still show super short average frame
times, but a /lot/ of retransmits have to happen for that to work.
So for MCS rates I also track this EWMA and ensure that the rates I'm
choosing don't have super crappy packet failure rates. I don't mind getting
lower peak throughput versus minstrel_ht; instead I want to see if I can
make "minimise airtime" work well.
Tested:
* AR9380, STA mode
* AR9344, STA mode
* AR9580, STA/AP mode
These are some fun issues I've found with my upstairs wifi link at a
ridiculously low signal level (like, < 5dB.)
* Add per-station TX/RX RSSI statistics, in preparation for potentially
using them in the RX rate control.
* Call into the rate control on each received frame so it can potentially
use it as a hint for which rates to use. It's a no-op right now.
* Do ANI calibration during scan as well. The ath_newstate() call was
disabling the ANI timer and only re-enabling it during transitions to
_RUN. This has the unfortunate side-effect that if ANI deafened the NIC
because of interference and it disassociated, ANI wouldn't be reset and the
scan would never hear beacons. The ANI configuration is stored globally on
some HALs and per-channel on others. Because of this a NIC reset wouldn't
help; the ANI parameters would simply be programmed back in.
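A sketch of the state-change side of the fix (using the usual ath(4)
calibration callout names; the interval handling is simplified here):

    /* In ath_newstate(): keep the ANI/calibration callout running
     * for SCAN as well as RUN, instead of only re-arming it on
     * transitions to RUN. */
    if (nstate == IEEE80211_S_RUN || nstate == IEEE80211_S_SCAN)
        callout_reset(&sc->sc_cal_ch, hz, ath_calibrate, sc);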
Now, I have a feeling I need to do this during AUTH/ASSOC too and maybe,
if I'm feeling clever, reset the ANI parameters on a given channel
during a transition through INIT or when the VAP is destroyed/re-created.
However, for now this gets me out of the immediate weeds with connectivity
upstairs (and thus I /can/ commit); I'll keep chipping away at tidying this
stuff up in subsequent commits.
Tested:
* AR9344 (Wasp), 2G STA mode
One of the fun issues with scanning has been how the existing
ANI values were programmed into the hardware when channels were
changed. If you're on a really crappy channel and ANI has made
you deaf, then when you scan you continue to be deaf on all channels.
This code passes a flag to startpcureceive which on AR5416 and later
is also used to enable ANI. This lets the HAL know whether it's a normal
operation or a scan operation.
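The call-site change is roughly this (hedged sketch; exactly how the
driver tracks "am I scanning" is approximated by sc_scanning here):

    /* RX start path: tell the HAL whether this is a scan-related
     * (re)start, so AR5416 and later can avoid reloading a deaf set
     * of ANI parameters onto every scanned channel. */
    ath_hal_startpcurecv(sc->sc_ah, sc->sc_scanning);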
This fixes my situation at home where a device temporarily going deaf
due to interference starts scanning and .. can't hear anything until I
restart it.
Now, this isn't the full fix - ideally:
(a) all the ANI config and per-channel information would be migrated
to the shared HAL stuff and enabled for all of the NICs;
(b) when a station reassociates, and on some other error conditions
(like missed beacons, NF calibration failures, etc), a knob
to reset the ANI parameters would likely help recovery.
But hey, I'm committing bits of code again! woo!
Tested:
* AR9344 (2G), STA operation
Mainly focus on files that use the BSD 2-Clause license. However, the tool I
was using misidentified many licenses, so this was mostly a manual - and
error-prone - task.
The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well-known
open source licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
I keep asking myself "what do these fields mean" and so now I've clarified
it for myself.
Tested:
* Reading the comments, going "a-ha!" a couple times.
Approved by: re (gjb)
This is the initial framework to call into the MCI HAL routines and drive
the basic state engine.
The MCI bluetooth coex model uses a command channel between wlan and
bluetooth, rather than a 2-wire or 3-wire signaling protocol to control things.
This means the wlan and bluetooth chips exchange a lot more information and
signaling, even at the per-packet level. The NICs in question can share
the input LNA and output PA on the die, so they absolutely can't stomp
on each other in a silly fashion. It also allows the bluetooth side
to signal when profiles come and go, so the driver can take appropriate
action. There's also the possibility of dynamic bluetooth/wlan duty cycle
control, which I haven't yet really played with.
It configures things with a static "wlan wins everything" coexistence
policy, sets up the available 2GHz channel map for bluetooth, sets a static
duty cycle for bluetooth/wifi traffic priority and drives the basics needed
to keep the MCI HAL code happy. It doesn't do any actual coexistence beyond
that default - but bluetooth inquiry frames still trump wifi (including
beacons), so things really do indeed seem to work.
Tested:
* AR9462 (WB222), STA mode + bt
* QCA9565 (WB335), STA mode + bt
TODO:
* .. the rest of coexistence. yes, bluetooth, not people. That stuff's hard.
* It doesn't do the initial BT side calibration, which requires a WLAN chip
reset. I'll fix up the reset path a bit more first before I enable that.
* The 1-ant and 2-ant configuration bits aren't being set correctly in
if_ath_btcoex.c - I'll dig into that and fix it in a subsequent commit.
* It's not enabled by default for WB222/WB225 even though I believe it now
can be - I'll chase that up in a subsequent commit.
Obtained from: Qualcomm Atheros, Linux ath9k
This enables LDPC receive support for the AR9300 chips that support it.
It'll announce LDPC support via net80211.
Tested:
* AR9380, STA mode
* AR9331 (to verify the HAL didn't attach it to a chip which
doesn't support LDPC.)
TODO:
* Add in net80211 machinery to make this configurable at runtime.
Right now the only way to force a cold reset is:
* The HAL itself detects it's needed, or
* The sysctl, setting all resets to be cold.
Trouble is, cold resets take quite a bit longer than warm resets.
However, there are situations where a cold reset would be nice.
Specifically, after a stuck beacon, BB/MAC hang, stuck calibration results,
etc.
The vendor HAL has a separate method to set the reset reason (which is
how HAL_RESET_BBPANIC gets set) which informs the HAL during the reset path
why it occurred. This is almost but not quite the same; I may eventually
unify the two approaches in the future.
This commit just extends HAL_RESET_TYPE to include both a status (eg
BBPANIC) and a type (eg do COLD.) None of the HAL code uses it yet though;
that'll come later.
It's also a big no-op in each HAL - I need to go teach each of the HALs
about cold/warm resets through this path.
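Illustratively (the values here are made up; the real encoding lives in
the HAL headers), the combined status+type looks like:

    /* Sketch: HAL_RESET_TYPE carrying both a status and a type. */
    typedef enum {
        /* Status: why the reset is occurring. */
        HAL_RESET_NORMAL        = 0,
        HAL_RESET_BBPANIC       = 1,
        /* Type: how the reset should be performed. */
        HAL_RESET_FORCE_COLD    = 0x10,
    } HAL_RESET_TYPE;

    /* eg: reset_type = HAL_RESET_BBPANIC | HAL_RESET_FORCE_COLD; */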
Change how device drivers providing wireless connectivity interact with the
net80211 stack.
Historical background: originally wireless devices created an interface,
just like Ethernet devices do. The name of the interface matched the name
of the driver that created it. Later, the wlan(4) layer was introduced, and
the wlanX interfaces became the actual interfaces, leaving the original
ones as "parent interfaces" of wlanX. Kernel-wise, the KPI between the
net80211 layer and a driver became a mix of methods that pass a pointer to
struct ifnet as the identifier and methods that pass a pointer to struct
ieee80211com. From the user's point of view, the parent interface just
hangs around in the ifconfig list, and the user can't do anything useful
with it.
Now, the struct ifnet goes away. The struct ieee80211com is the only
KPI between a device driver and net80211. Details:
- The struct ieee80211com is embedded into the driver's softc.
- Packets are sent via new ic_transmit method, which is very much like
the previous if_transmit.
- Bringing parent up/down is done via new ic_parent method, which notifies
driver about any changes: number of wlan(4) interfaces, number of them
in promisc or allmulti state.
- Device specific ioctls (if any) are received on new ic_ioctl method.
- Packet/error accounting is done by the stack. In certain cases, when the
driver experiences errors and cannot attribute them to any specific
interface, the driver updates the ic_oerrors or ic_ierrors counters.
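As an illustration, a converted driver ends up shaped roughly like this
(a hypothetical foo(4); most setup and all error handling omitted):

    struct foo_softc {
        struct ieee80211com sc_ic;      /* embedded; no parent ifnet */
        /* ... device state ... */
    };

    static int  foo_transmit(struct ieee80211com *, struct mbuf *);
    static void foo_parent(struct ieee80211com *);
    static int  foo_ioctl(struct ieee80211com *, u_long, void *);

    static void
    foo_net80211_attach(struct foo_softc *sc)
    {
        struct ieee80211com *ic = &sc->sc_ic;

        ic->ic_softc = sc;
        ic->ic_transmit = foo_transmit; /* was if_transmit */
        ic->ic_parent = foo_parent;     /* up/down, promisc, allmulti */
        ic->ic_ioctl = foo_ioctl;       /* device-specific ioctls */
        ieee80211_ifattach(ic);
    }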
Details on interface configuration with the new world order:
- The sequence of commands needed to bring up wireless DOESN'T change.
- /etc/rc.conf parameters DON'T change.
- List of devices that can be used to create wlan(4) interfaces is
now provided by net.wlan.devices sysctl.
Most drivers in this change were converted by me, except for wpi(4),
which was done by Andriy Voskoboinyk. Big thanks to Kevin Lo for testing
changes to at least 8 drivers. Thanks to pluknet@, Oliver Hartmann,
Olivier Cochard, gjb@, mmoll@, op@ and lev@, who also participated in
testing.
Reviewed by: adrian
Sponsored by: Netflix
Sponsored by: Nginx, Inc.
Revert the following revisions:
* r286410
* r286413
* r286416
The initial commit broke a variety of debugging options and features that
aren't enabled in the GENERIC kernels but are enabled on other platforms.
Change how wireless device drivers interact with the net80211 stack.
Historical background: originally wireless devices created an interface,
just like Ethernet devices do. The name of the interface matched the name
of the driver that created it. Later, the wlan(4) layer was introduced, and
the wlanX interfaces became the actual interfaces, leaving the original
ones as "parent interfaces" of wlanX. Kernel-wise, the KPI between the
net80211 layer and a driver became a mix of methods that pass a pointer to
struct ifnet as the identifier and methods that pass a pointer to struct
ieee80211com. From the user's point of view, the parent interface just
hangs around in the ifconfig list, and the user can't do anything useful
with it.
Now, the struct ifnet goes away. The struct ieee80211com is the only
KPI between a device driver and net80211. Details:
- The struct ieee80211com is embedded into the driver's softc.
- Packets are sent via new ic_transmit method, which is very much like
the previous if_transmit.
- Bringing parent up/down is done via new ic_parent method, which notifies
driver about any changes: number of wlan(4) interfaces, number of them
in promisc or allmulti state.
- Device specific ioctls (if any) are received on new ic_ioctl method.
- Packet/error accounting is done by the stack. In certain cases, when the
driver experiences errors and cannot attribute them to any specific
interface, the driver updates the ic_oerrors or ic_ierrors counters.
Details on interface configuration with the new world order:
- The sequence of commands needed to bring up wireless DOESN'T change.
- /etc/rc.conf parameters DON'T change.
- List of devices that can be used to create wlan(4) interfaces is
now provided by net.wlan.devices sysctl.
Most drivers in this change were converted by me, except for wpi(4),
which was done by Andriy Voskoboinyk. Big thanks to Kevin Lo for testing
changes to at least 8 drivers. Thanks to Olivier Cochard, gjb@, mmoll@,
op@ and lev@, who also participated in testing. Details here:
https://wiki.freebsd.org/projects/ifnet/net80211
Still, some drivers - ndis, wtap, mwl, ipw, bwn, wi, upgt and uath - were
not tested. The changes to mwl, ipw, bwn, wi and upgt are trivial, and the
chances of problems are low. The wtap wasn't compilable even before this
change. But the ndis driver is complex, and it is likely to be broken by
this commit. Help with testing and debugging it is appreciated.
Differential Revision: D2655, D2740
Sponsored by: Nginx, Inc.
Sponsored by: Netflix
Smart NICs with firmware (eg wpi, iwn, the newer Atheros parts, the Intel
7260 series, etc) support doing a lot of things in firmware. This includes but
isn't limited to things like scanning, sending probe requests and receiving
probe responses. However, net80211 doesn't know about any of this - it still
drives the whole scan/probe infrastructure itself.
In order to move towards supporting smart NICs, the receive path needs to
know the channel/details for each received packet. In at least
the iwn and 7260 firmware (and I believe wpi, but I haven't tried it yet)
it will do the scanning, power-save and off-channel buffering for you -
all you need to do is handle receiving beacons and probe responses on
channels that aren't what you're currently on. However the whole receive
path is peppered with ic->ic_curchan and manual scan/powersave handling.
The beacon parsing code also checks ic->ic_curchan to determine if the
received beacon is on the correct channel or not.[1]
So:
* add freq/ieee values to ieee80211_rx_stats;
* change ieee80211_parse_beacon() to accept the 'current' channel
as an argument;
* modify the iv_input() and iv_recv_mgmt() methods to include the rx_stats;
* add a new method - ieee80211_lookup_channel_rxstats() - that looks up
a channel based on the contents of ieee80211_rx_stats;
* if it exists, use it in the mgmt path to switch the current channel
(which still defaults to ic->ic_curchan) over to something determined
by rx_stats.
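In a driver RX path this ends up looking roughly like the following
(a sketch; rx_chan_ieee stands in for whatever channel the firmware
reports for the frame):

    struct ieee80211_rx_stats rxs;

    bzero(&rxs, sizeof(rxs));
    /* Tag the frame with the channel it was actually received on. */
    rxs.r_flags |= IEEE80211_R_IEEE | IEEE80211_R_FREQ;
    rxs.c_ieee = rx_chan_ieee;
    rxs.c_freq = ieee80211_ieee2mhz(rxs.c_ieee, 0);
    (void) ieee80211_input_mimo_all(ic, m, &rxs);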
This is enough to kick-start scan offload support in the Intel 7260
driver that Rui and I are working on. It's also a good start for scan
offload support for a handful of existing NICs (wpi, iwn, some USB
parts) and it'll very likely dramatically improve stability/performance
there. It's not the whole thing - notably, we don't need to do powersave,
we should not scan all channels, and we should leave probe request sending
to the firmware and not do it ourselves. But, this allows for continued
development on the above features whilst actually having a somewhat
working NIC.
TODO:
* Finish tidying up how the net80211 input path works.
Right now ieee80211_input / ieee80211_input_all act as the top-level
that everything feeds into; it should change so the MIMO input routines
are those and the legacy routines are phased out.
* The band selection should be done by the driver, not by the net80211
layer.
* ieee80211_lookup_channel_rxstats() only determines 11b or 11g channels
for now - this is enough for scanning, but not 100% true in all cases.
If we ever need to handle off-channel scan support for things like
static-40MHz or static-80MHz, or turbo-G, or half/quarter rates,
then we should extend this.
[1] This is a side effect of frequency-hopping and CCK modes - you
can receive beacons when you think you're on a different channel.
In particular, CCK (which is used by the low 11b rates, eg beacons!)
is decodable from adjacent channels - just at a low SNR.
FH is a side effect of having the hardware/firmware do the frequency
hopping - it may pick up beacons transmitted from other FH networks
that are in a different phase of hopping frequencies.
These variants have a few differences from the default AR9485 NIC,
namely:
* a non-default antenna switch config;
* slightly different RX gain table setup;
* an external XLNA hooked up to a GPIO pin;
* (and not yet done) RSSI threshold differences when
doing slow diversity.
To make this possible:
* Add the PCI device list from Linux ath9k, complete with vendor and
sub-vendor IDs for various things to be enabled;
* .. and until FreeBSD learns about a PCI device list like this,
write a search function inspired by the USB device enumeration code;
* add HAL_OPS_CONFIG to the HAL attach methods; the HAL can use this
to initialise its local driver parameters upon attach;
* copy these parameters over in the AR9300 HAL;
* don't default to overriding the antenna switch - only do it for
the chips that require it;
* I brought over ar9300_attenuation_apply() from ath9k, which is cleaner
and easier to read for this particular NIC.
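The device-list search is conceptually just this (sketch; table
contents omitted):

    struct ath_pci_id {
        uint16_t vendor, device;
        uint16_t subvendor, subdevice;
        uint32_t flags;             /* eg a CUS198 quirk flag */
    };

    /* Linear search, in the spirit of the USB enumeration code. */
    static const struct ath_pci_id *
    ath_pci_find_id(const struct ath_pci_id *tbl, int nentries,
        uint16_t vid, uint16_t did, uint16_t svid, uint16_t sdid)
    {
        int i;

        for (i = 0; i < nentries; i++, tbl++)
            if (tbl->vendor == vid && tbl->device == did &&
                tbl->subvendor == svid && tbl->subdevice == sdid)
                return (tbl);
        return (NULL);
    }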
This is a work in progress. I'm worried that there's some post-AR9380
NIC out there which doesn't work without the antenna override set, as
I currently haven't implemented bluetooth coexistence for the AR9380
and later HAL. But I'd rather have this code in the tree and fix it
up before 11.0-RELEASE happens, versus having a set of newer NICs
in laptops be effectively RX-deaf.
Tested:
* AR9380 (STA)
* AR9485 CUS198 (STA)
Obtained from: Qualcomm Atheros, Linux ath9k
The original code was .. well, slightly more than incorrect.
It showed up as stalled RX queues if the NIC needed to be frequently
reinitialised (eg during scans.)
This is inspired by work done by Matt Dillon over at the DragonflyBSD
project.
So:
* track when EDMA RX has been stopped and when the MAC has been reset;
* re-initialise the ring only after a reset;
* track whether RX has been stopped/started - just for debugging now;
* don't bother with the RX EOL stuff for EDMA - we don't need the
interrupt at all. We also don't need to disable/enable the interrupt
or start DMA - once new frames are pushed into the ring via the
normal RX path, it'll just restart RX DMA on its own.
Tested:
* AR9380, STA mode
* AR9380, AP mode
* AR9485, STA mode
* AR9462, STA mode
Don't free an RX buffer whose descriptor may still be re-read by the
hardware.
It turns out that the RX DMA engine does the same last-descriptor-link-
pointer-re-reading trick as the TX DMA engine. That is, the hardware
re-reads the link pointer before it moves onto the next descriptor.
Thus we can't free a descriptor before we move on; it's possible the
hardware will need to re-read the link pointer before we overwrite
it with a new one.
Tested:
* AR5416, STA mode
TODO:
* more thorough AP and STA mode testing!
* test on other pre-AR9380 NICs, just to be sure.
* Break out the RX descriptor grabbing bits from the RX completion
bits, like what is done in the RX EDMA code, so ..
* .. the RX lock can be held during ath_rx_proc(), but not across
packet input.
The hardware can generate its own frames (eg RTS/CTS exchanges, other
kinds of 802.11 management stuff, especially when it comes to 802.11n)
and these also have PWRMGT flags. So if the VAP is asleep but the
NIC is in force-awake for some reason, ensure that the self-generated
frames have PWRMGT set to 1.
Now, this (like basically everything to do with powersave) is still
racy - the only way to guarantee that it's all actually consistent
is to pause transmit and let it finish before transitioning the VAP
to sleep, but this at least gets the basic method of tracking and
updating the state debugged.
Tested:
* AR5416, STA mode
* AR9380, STA mode
Bring the initial power-save tracking and wakeup code, transmit path
fixes and beacon programming / debugging into the ath(4) driver.
The basic power save tracking:
* Add some new code to track the current desired powersave state; and
* Add some reference count tracking so we know when the NIC is awake; then
* Add code at all the points where we're about to touch the hardware, to
push it to force-wake.
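The usage pattern around any hardware access then becomes (hedged
sketch; the names follow the ath_power_* style but are approximate):

    /* Bump the refcount and force the chip awake before touching it. */
    ath_power_set_power_state(sc, HAL_PM_AWAKE);

    /* ... touch registers / program the hardware ... */

    /* Drop the refcount; the chip may sleep again once it hits zero. */
    ath_power_restore_power_state(sc);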
Then, how things are moved into power save:
* Only move into network-sleep during a RUN->SLEEP transition;
* Force wake the hardware up everywhere that we're about to touch
the hardware.
The net80211 stack takes care of doing RUN<->SLEEP<->(other) state
transitions so we don't have to do it in the driver.
Next, when to wake things up:
* In short - everywhere we touch the hardware.
* The hardware will take care of staying awake if things are queued
in the transmit queue(s); it'll then transit down to sleep if
there's nothing left. This way we don't have to track the
software / hardware transmit queue(s) and keep the hardware
awake for those.
Then, some transmit path fixes that aren't related but are useful:
* Force EAPOL frames to go out at the lowest rate. This improves
reliability during the encryption handshake after 802.11
negotiation.
Next, some reset path fixes!
* Fix the overlap between reset and transmit pause so we don't
transmit frames during a reset.
* Some noisy environments will end up taking a lot longer to reset
than normal, so extend the reset period and raise the
reset interval to be more realistic and give the hardware some
time to finish calibration.
* Skip calibration during the reset path. Tsk!
Then, beacon fixes in station mode!
* Add a _lot_ more debugging in the station beacon reset path.
This is all quite fluid right now.
* Modify the STA beacon programming code to try to take
the TU gap between the desired TSF and the target TU into
account. (Lifted from QCA.)
Tested:
* AR5210
* AR5211
* AR5212
* AR5413
* AR5416
* AR9280
* AR9285
TODO:
* More AP, IBSS, mesh, TDMA testing
* Thorough AR9380 and later testing!
* AR9160 and AR9287 testing
Obtained from: QCA
The AR9485 chip and AR933x SoC both implement LNA diversity.
There are a few extra things that need to happen before this can be
flipped on for those chips (mostly to do with setting up the different
bias values and LNA1/LNA2 RSSI differences) but the first stage is
putting this code into the driver layer so it can be reused.
This has the added benefit of making it easier to expose configuration
options and diagnostic information via the ioctl API. That's not yet
being done but it sure would be nice to do so.
Tested:
* AR9285, with LNA diversity enabled
* AR9285, with LNA diversity disabled in EEPROM
Expose the LNA mixer configuration via the RX antenna field.
The AR9285/AR9485 use an LNA mixer to determine how to combine the signals
from the two antennas. This is encoded in the RSSI fields (ctl/ext) for
chain 2. So, let's use that here.
This maps RX antennas 0->3 to the RX mixer configuration used to
receive a frame. There's more that can be done but this is good enough
to diagnose if the hardware is doing "odd" things like trying to
receive frames on LNA2 (ie, antenna 2 or "alt" antenna) when there's
only one antenna connected.
Tested:
* AR9285, STA mode
Migrate the driver to use if_transmit() rather than if_start()
and the if queue mechanism; also fix up (non-11n) TX fragment handling.
This may result in a bit of a performance drop for now but I plan on
debugging and resolving this at a later stage.
Whilst here, fix the transmit path so fragment transmission works.
The TX fragmentation handling is a bit more special. In order to
correctly transmit TX fragments, there's a bunch of corner cases that
need to be handled:
* They must be transmitted back to back, in the same order..
* .. ie, you need to hold the TX lock whilst transmitting this
set of fragments rather than interleaving it with other MSDUs
destined for other nodes;
* The length of the next fragment is required when transmitting, in
order to correctly set the NAV field in the current frame to cover
the duration of the next frame; which requires ..
* .. that we know the transmit duration of the next frame, which ..
* .. requires us to set the rate of all fragments to be the same,
or make the rate decision up-front, etc.
To facilitate this, I've added a new ath_buf field to describe the
length of the next fragment. This avoids having to keep the mbuf
chain together. This used to work before my 11n TX path work because
the ath_tx_start() routine would be handed a single mbuf with m_nextpkt
pointing to the next frame, and that would be maintained all the way
up to when the duration calculation was done. This no longer holds
true - the actual queuing may occur at any point in the
future (think ath_node TID software queuing), so this information
needs to be maintained.
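A sketch of that bookkeeping (field name approximate):

    /*
     * At queue time, whilst the fragment list (the m_nextpkt chain)
     * is still intact, record the next fragment's length so the NAV
     * can be calculated later - long after the chain is broken up.
     */
    if (m0->m_nextpkt != NULL)
        bf->bf_nextfraglen = m0->m_nextpkt->m_pkthdr.len;
    else
        bf->bf_nextfraglen = 0;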
Right now this does work for non-11n frames, but it doesn't at all
enforce the same rate control decision for all frames in the fragment
list. I plan on fixing this in a follow-up commit.
RTS/CTS has the same issue, I'll look at fixing this in a subsequent
commit.
Finally, 11n fragment support requires the driver to have fully
decided what the rate scenario setup is - including 20/40MHz,
short/long GI, STBC, LDPC, number of streams, etc. Right now that
decision is made _after_ the NAV field value is updated.
I'll fix all of this in subsequent commits.
Tested:
* AR5416, STA, transmitting 11abg fragments
* AR5416, STA, 11n fragments work but the NAV field is incorrect for
the reasons above.
TODO:
* It would be nice to be able to queue mbufs per-node and per-TID so
we can only queue ath_buf entries when it's time to assemble frames
to send to the hardware.
But honestly, we should just do that level of software queue management
in net80211 rather than ath(4), so I'm going to leave this alone for now.
* More thorough AP, mesh and adhoc testing.
* Ensure that net80211 doesn't hand us fragmented frames when A-MPDU has
been negotiated, as we can't do software retransmission of fragments.
* .. set CLRDMASK when transmitting fragments, just to ensure.
Keep the hardware transmit queue busy when sending non-aggregate
traffic.
When transmitting non-aggregate traffic, we need to keep the hardware
busy whilst transmitting or small bursts in txdone/tx latency will
kill us.
This restores non-aggregate iperf performance, especially when doing
TDMA.
Tested:
* AR5416<->AR5416, TDMA
* AR5416 STA <-> AR9280 AP
Make TDMA work on the 802.11n chips (with 11n rates, of
course.)
There are a few things that needed to happen:
* In case someone decides to set the beacon transmission rate to
an MCS rate, use the MCS-aware version of the duration calculation
to figure out how long the received beacon frame was.
* If TxOP enforcing is available on the hardware and we're doing TDMA,
enable it after a reset and set the TDMA guard interval to zero.
This seems to behave fine.
TODO:
* Although I haven't yet seen packet loss, the PHY errors that would be
triggered (specifically Transmit-Override-Receive) aren't enabled
by the 11n HAL. I'll have to do some work to enable these PHY errors
for debugging.
What broke:
* My recent changes to the TX queue handling have resulted in the driver
not keeping the hardware queue properly filled when doing non-aggregate
traffic. I have a patch to commit soon which fixes this situation
(albeit by reminding me about how my ath driver locking isn't working
out, sigh.)
So if you want to test this without updating to the next set of patches
that I commit, just bump the sysctl dev.ath.X.hwq_limit from 2 to 32.
Tested:
* AR5416 <-> AR5416, with ampdu disabled, HT40, 5GHz, MCS12+Short-GI.
I saw 30mbit/sec in both directions using a bidirectional UDP test.
The list-based DMA engine has the following behaviour:
* When the DMA engine is in the init state, you can write the first
descriptor address to the QCU TxDP register and it will work.
* Then when it hits the end of the list (ie, it either hits a NULL
link pointer, OR it hits a descriptor with VEOL set) the QCU
stops, and the TxDP points to the last descriptor that was transmitted.
* Then when you want to transmit a new frame, you can either:
+ write the head of the new list into TxDP, or
+ write the head of the new list into the link pointer of the
last completed descriptor (ie, where TxDP points), then kick
TxE to restart transmission on that QCU.
* The hardware will then re-read the descriptor to pick up the link
pointer and jump to that.
Now, the quirks:
* If you write a TxDP when there's been no previous TxDP (ie, it's 0),
it works.
* If you write a TxDP in any other instance, the TxDP write may actually
fail. Thus, when you start transmission, the hardware will re-read the
last transmitted descriptor to get the link pointer, NOT just start a new
transmission.
So the correct thing to do here is:
* ALWAYS use the holding descriptor (ie, the last transmitted descriptor
that we've kept safe) and use the link pointer in _THAT_ to transmit
the next frame.
* NEVER write to the TxDP after you've done the initial write.
* .. and don't do any of this whilst you're also resetting the NIC.
With this in mind, the following patch does basically the above.
* Since this encapsulates Sam's issues with the QCU behaviour w/ TDMA,
kill the TDMA special case and replace it with the above.
* Add a new TXQ flag - PUTRUNNING - which indicates that we've started
DMA.
* Clear that flag when DMA has been shutdown.
* Ensure that we're not restarting DMA with PUTRUNNING enabled.
* Fix the link pointer logic during TXQ drain - we should always ensure
the link pointer points to something if there's a list of frames.
Having it be NULL as an indication that DMA has finished, or during
a reset, causes trouble.
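Put together, the (re)start logic is roughly the following (sketch;
the names follow the driver's conventions but are approximate):

    if (txq->axq_flags & ATH_TXQ_PUTRUNNING) {
        /*
         * DMA has already started on this QCU: chain the new list
         * off the holding descriptor's link pointer and kick TxE.
         * Never rewrite TxDP.
         */
        ath_hal_settxdesclink(sc, txq->axq_holdingbf->bf_lastds,
            bf->bf_daddr);
        ath_hal_txstart(ah, txq->axq_qnum);
    } else {
        /* First start since init/reset: the TxDP write is valid. */
        ath_hal_puttxbuf(ah, txq->axq_qnum, bf->bf_daddr);
        txq->axq_flags |= ATH_TXQ_PUTRUNNING;
        ath_hal_txstart(ah, txq->axq_qnum);
    }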
Now, given all of this, I want to nuke axq_link from orbit. There are now
HAL methods to get and set the link pointer of a descriptor, so what we
should do instead is update the right link pointer:
* If there's a holding descriptor and an empty TXQ list, set the
link pointer of said holding descriptor to the new frame.
* If there's a non-empty TXQ list, set the link pointer of the
last descriptor in the list to the new frame.
* Nuke axq_link from orbit.
Note:
* The AR9380 doesn't need this. FIFO TX writes are atomic. As long as
we don't append to a list of frames that we've already passed to the
hardware, none of the above applies. The holding descriptor stuff
is still needed to ensure the hardware can re-read a completed
descriptor to move onto the next one, but we restart DMA by pushing
a new FIFO entry into the TX QCU. That doesn't require any real
gymnastics.
Tested:
* AR5210, AR5211, AR5212, AR5416, AR9380 - STA mode.
Add PS-POLL support.
This implements PS-POLL awareness in the ath(4) driver:
* Implement frame "leaking", which allows a software queue
to be scheduled even though the node is asleep
* Track whether a frame has been leaked or not
* Leak out a single non-AMPDU frame when transmitting aggregates
* Queue BAR frames if the node is asleep
* Direct-dispatch the rest of control and management frames.
This allows for things like re-association to occur (which involves
sending probe req/resp as well as assoc request/response) when
the node is asleep and then tries reassociating.
* Limit how many frames can sit in the software node queue whilst
the node is asleep. net80211 is already buffering frames for us,
so this is mostly just paranoia.
* Add a PS-POLL method which leaks out a frame if there's something
in the software queue, else it calls net80211's ps-poll routine.
Since the ath PS-POLL routine marks the node as having a single frame
to leak, either an already software-queued frame will leak out, OR the
next queued frame will. The next queued frame could be something from the
net80211 power save queue, OR it could be a NULL frame from net80211.
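Conceptually the method looks like this (heavily simplified sketch;
ath_node_has_swq_frames(), ath_tx_swq_kick() and the an_sc back-pointer
are hypothetical stand-ins, and all locking is omitted):

    static void
    ath_node_recv_pspoll(struct ieee80211_node *ni, struct mbuf *m)
    {
        struct ath_node *an = ATH_NODE(ni);
        struct ath_softc *sc = an->an_sc;

        if (ath_node_has_swq_frames(an)) {
            an->an_leak_count = 1;          /* leak exactly one frame */
            ath_tx_swq_kick(sc, an);        /* schedule this node's TIDs */
        } else {
            /* Nothing queued locally; chain to net80211's routine. */
            sc->sc_node_recv_pspoll(ni, m);
        }
    }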
TODO:
* Don't transmit further BAR frames (eg via a timeout) if the node is
currently asleep. Otherwise we may end up exhausting management frames
due to lots of queued BAR frames.
I may just undo this bit later on and direct-dispatch BAR frames
even if the node is asleep.
* It would be nice to burst out a single A-MPDU frame if both ends
support this. I may end up adding a FreeBSD IE soon to negotiate
this power save behaviour.
* I should make STAs time out of power save mode if they've been in power
save for more than a handful of seconds. This way cards that get
"stuck" in power save mode don't stay there for the "inactivity" timeout
in net80211.
* Move the queue depth check into the driver layer (ath_start / ath_transmit)
rather than doing it in the TX path.
* There could be some naughty corner cases with PS-POLL leaking.
Specifically, if net80211 generates a NULL data frame whilst another
transmitter sends a normal data frame via the net80211 output / transmit
path, we need to ensure that the NULL data frame goes out first.
This is one of those things that should occur inside the VAP/ic TX lock.
Grr, more investigation to do..
Tested:
* STA: AR5416, AR9280
* AP: AR5416, AR9280, AR9160
of "right".)
Flip back on the "always continue TX DMA using the holding descriptor"
code - by always setting ATH_BUF_BUSY and never setting axq_link to NULL.
Since the holding descriptor is accessed via txq->axq_link and _that_
is done behind the TXQ lock rather than the TX path lock, the holding
descriptor stuff itself needs to be behind the TXQ lock.
So, do the mental gymnastics needed to do this.
I've not seen any of the hardware failures that I was seeing when
I last tried to do this.
Tested:
* AR5416, STA mode
Fix EAPOL frame transmission when the TX buffer pool is exhausted -
partly to address the PR below, but partly to just tidy up things.
The problem here - there are too many TX buffers in the queue! By the
time one needs to transmit an EAPOL frame (for this PR, it's the response
to the group rekey notification from the AP) there are no ath_buf entries
free and the EAPOL frame doesn't go out.
Now, the problem!
* Enforcing the TX buffer limitation _before_ we dequeue the frame?
Bad idea. Because ..
* .. it means I can't check whether the mbuf has M_EAPOL set.
The solution(s):
* Dequeue the frame first.
* Don't bother doing the TX buffer minimum-free check until after
we know whether it's an EAPOL frame or not.
* If it's an EAPOL frame, allocate the buffer from the mgmt pool
rather than the default pool.
Whilst I'm here:
* Add a tweak to limit how many buffers a single node can acquire.
* Don't enforce that for EAPOL frames.
* .. set that to default to 1/4 of the available buffers, or 32,
whichever is more sane.
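The dequeue-then-decide flow becomes roughly (sketch; the buffer type
names mirror the mgmt/normal pool split described above):

    /* Dequeue first, so the mbuf flags can be inspected. */
    IFQ_DEQUEUE(&ifp->if_snd, m);
    if (m == NULL)
        return;

    /*
     * EAPOL frames come from the mgmt pool so they can't be starved
     * by a full data pool; everything else obeys the minimum-free
     * and per-node limits.
     */
    if (m->m_flags & M_EAPOL)
        bf = ath_getbuf(sc, ATH_BUFTYPE_MGMT);
    else
        bf = ath_getbuf(sc, ATH_BUFTYPE_NORMAL);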
This doesn't fix issues due to a sleeping node or a very poorly performing
node, but it doesn't make them worse either.
Tested:
* AR5416 STA, TX'ing 100+ mbit UDP to an AP, but only 50mbit being received
(thus the TX queue fills up.)
* .. with CCMP / WPA2 encryption configured
* .. and the group rekey time set to 10 seconds, just to elicit the
behaviour very quickly.
PR: kern/138379
Each set of frames pushed into a FIFO is represented by a list of
ath_bufs - the first ath_buf in the FIFO list is marked with
ATH_BUF_FIFOPTR; the last ath_buf in the FIFO list is marked with
ATH_BUF_FIFOEND.
Multiple lists of frames are just glued together in the TAILQ as per
normal - except that at the end of a FIFO list, the descriptor link
pointer will be NULL and it'll be tagged with ATH_BUF_FIFOEND.
For non-EDMA chipsets this is a no-op - the ath_txq frame list (axq_q)
stays the same and is treated the same.
For EDMA chipsets the frames are pushed into axq_q and then when
the FIFO is to be (re) filled, frames will be moved onto the FIFO
queue and then pushed into the FIFO.
So:
* Add a new queue in each hardware TXQ (ath_txq) for staging FIFO frame
lists. It's a TAILQ (like the normal hardware frame queue) rather than
the ath9k list-of-lists to represent FIFO entries.
* Add new ath_buf flags - ATH_BUF_FIFOPTR and ATH_BUF_FIFOEND.
* When allocating ath_buf entries, clear out the flag value before
returning the buffer, or it'll end up carrying stale flags.
* When cloning ath_buf entries, only clone ATH_BUF_MGMT. Don't clone
the FIFO related flags.
* Extend ath_tx_draintxq() to first drain the FIFO staging queue, _then_
drain the normal hardware queue.
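The tagging itself is simple (sketch; names approximate):

    /*
     * Mark one FIFO entry's worth of frames: the first frame gets
     * ATH_BUF_FIFOPTR, the last gets ATH_BUF_FIFOEND, and the last
     * descriptor's link pointer is NULLed to terminate the list.
     */
    bf_first->bf_flags |= ATH_BUF_FIFOPTR;
    bf_last->bf_flags |= ATH_BUF_FIFOEND;
    ath_hal_settxdesclink(sc, bf_last->bf_lastds, 0);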
Tested:
* AR9280, hostap
* AR9280, STA
* AR9380/AR9580 - hostap
TODO:
* Test on other chipsets, just to be thorough.
Rework the CABQ locking and setup to fix beacon and CABQ
related issues.
Moving the TX locking under one lock made things easier to progress on,
but it had one important side-effect - it increased the latency when
handling CABQ setup when sending beacons.
This commit introduces a bunch of new changes and a few unrelated changes
that are just easier to lump in here.
The aim is to have the CABQ locking separate from other locking.
The CABQ transmit path in the beacon process thus doesn't have to grab
the general TX lock, reducing lock contention/latency and making it
more likely that we'll make the beacon TX timing.
The second half of this commit is the CABQ related setup changes needed
for sane looking EDMA CABQ support. Right now the EDMA TX code naively
assumes that only one frame (MPDU or A-MPDU) is being pushed into each
FIFO slot. For the CABQ this isn't true - a whole list of frames is
being pushed in - and thus CABQ handling breaks very quickly.
The aim here is to set up the CABQ list and then push _that list_ to
the hardware for transmission. I can then extend the EDMA TX code
to stamp that list as being "one" FIFO entry (likely by tagging the
last buffer in that list as "FIFO END") so the EDMA TX completion code
correctly tracks things.
Major:
* Migrate the per-TXQ add/removal locking back to per-TXQ, rather than
a single lock.
* Leave the software queue side of things under the ATH_TX_LOCK lock,
(continuing) to serialise things as they are.
* Add a new function which is called whenever there's a beacon miss,
to print out some debugging. This is primarily designed to help
me figure out if the beacon miss events are due to a noisy environment,
issues with the PHY/MAC, or other.
* Move the CABQ setup/enable to occur _after_ all the VAPs have been
looked at. This means that for multiple VAPs in bursted mode, the
CABQ gets primed once all VAPs are checked, rather than being primed
on the first VAP and then having frames appended after that.
Minor:
* Add a (disabled) twiddle to let me enable/disable CABQ traffic.
It's primarily there to let me easily debug what's going on with beacon
and CABQ setup/traffic; there are some DMA engine hangs which I'm finally
trying to trace down.
* Clear bf_next when flushing frames; it should quieten some warnings
that show up when a node goes away.
Tested:
* AR9280, STA/hostap, up to 4 vaps (staggered)
* AR5416, STA/hostap, up to 4 vaps (staggered)
TODO:
* (Lots) more AR9380 and later testing, as I may have missed something here.
* Leverage this to fix CABQ handling for AR9380 and later chips.
* Force bursted beaconing on the chips that default to staggered beacons and
ensure the CABQ stuff is all sane (eg, the MORE bits that aren't being
correctly set when chaining descriptors.)
"complete RX frames."
The 128-entry RX FIFO is really easy to fill up and miss refilling
when the refill is done in the ath taskq - that taskq gets blocked up
doing RX completion, TX completion and other random things.
So the 128-entry RX FIFO now gets emptied and refilled in ath_intr()
(which grabs / releases locks, so ath_intr() can't become a FAST handler
just yet!) but the locks aren't held for very long. The
completion part is done in the ath taskqueue context.
Details:
* Create a new completed frame list - sc->sc_rx_rxlist;
* Split the EDMA RX process queue into two halves - one that
processes the RX FIFO and refills it with new frames; another
that completes the completed frame list;
* When tearing down the driver, flush whatever is in the deferred
queue as well as what's in the FIFO;
* Create two new RX methods - one that processes all RX queues, and
one that processes a given RX queue. When MSI is implemented,
we'll get told which RX queue the interrupt came in on, so we can
specifically schedule that. (And I can do that with the non-MSI
path too; I'll figure that out later.)
* Convert the legacy code over to use these new RX methods;
* Replace all the instances of the RX taskqueue enqueue with a call
to a relevant RX method to enqueue one or all RX queues.
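In outline (sketch; function names approximate):

    /* In ath_intr(): quick, lock-light FIFO service for each queue. */
    ath_edma_recv_proc_queue(sc, HAL_RX_QUEUE_HP, 1); /* drain FIFO onto  */
    ath_edma_recv_proc_queue(sc, HAL_RX_QUEUE_LP, 1); /* sc_rx_rxlist and */
                                                      /* refill the FIFO  */
    taskqueue_enqueue(sc->sc_tq, &sc->sc_rxtask);     /* defer completion */

    /* Later, in the ath taskqueue: complete the deferred frame lists. */
    ath_edma_recv_proc_deferred_queue(sc, HAL_RX_QUEUE_HP, 1);
    ath_edma_recv_proc_deferred_queue(sc, HAL_RX_QUEUE_LP, 1);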
Tested:
* AR9380, STA
* AR9580, STA
* AR5413, STA
Since this is being done during buffer free, it's a crapshoot whether
the TX path lock is held or not. I tried putting the ath_freebuf() code
inside the TX lock and got all kinds of locking issues - it turns out
that the buffer free path is sometimes called with the lock held and
sometimes isn't. So I'll go and fix that soon.
Hence, for now, the holdingbf buffers are protected by the TXBUF lock.
When working on TDMA, Sam Leffler found that the MAC DMA hardware
would re-read the last TX descriptor when getting ready to transmit
the next one. Thus the whole ATH_BUF_BUSY thing came into existence -
the descriptor must be left alone (very specifically, the link pointer
must be maintained) until the hardware has moved onto the next frame.
He saw this in TDMA because the MAC would frequently stop during
active transmit (ie, when it wasn't its turn to transmit.)
Fast-forward to today. It turns out that this is a problem not with
a single MAC DMA instance, but with each QCU (from 0->9). They each
maintain separate descriptor pointers and will re-read the last
descriptor when starting to transmit the next.
So when your AP is busy transmitting from multiple TX queues, you'll
(more) frequently see one QCU stopped, waiting for a higher-priority QCU
to finish transmitting, before it'll go ahead and continue. If you mess
up the descriptor (ie, by freeing it) then you're out of luck.
Thanks to rpaulo for sticking with me whilst I diagnosed this issue
that he was quite reliably triggering in his environment.
This is a reimplementation; it doesn't have anything in common with
the ath9k or the Qualcomm Atheros reference driver.
Now - in theory it doesn't apply to the EDMA chips, as long as you
push one complete frame into the FIFO at a time. But the MAC can DMA
from a list of frames pushed into the hardware queue (ie, you concat
'n' frames together with link pointers, and then push the head pointer
into the TXQ FIFO.) Since that's likely how I'm going to implement
CABQ handling in hostap mode, it's likely that I will end up teaching
the EDMA TX completion code about busy buffers, just to be "sure"
this doesn't creep up.
Tested - iperf ap->sta and sta->ap (with both sides running this code):
* AR5416 STA
* AR9160/AR9220 hostap
To validate that it doesn't break the EDMA (FIFO) chips:
* AR9380, AR9485, AR9462 STA
Using iperf with the -S <tos byte decimal value> to set the TCP client
side DSCP bits, mapping to different TIDs and thus different TX queues.
TODO:
* Make this work on the EDMA chips, if we end up pushing lists of frames
to the hardware (eg how we eventually will handle cabq in hostap/ibss
mode.)
The HAL already included the STBC fields; it just needed to be exposed
to the driver and net80211 stack.
This should allow single-stream STBC TX and RX to be negotiated; however
the driver and rate control code currently don't do anything with it.
Right now the only way to set the chainmask is to set the hardware
configured chainmask through capabilities. This is fine for forcing
the chainmask to be something other than what the hardware is capable
of (eg to reduce TX/RX to one connected antenna) but it does change what
the HAL hardware chainmask configuration is.
For operational mode changes, it (may?) make sense to separately control
the TX/RX chainmask.
Right now it's done as part of ar5416_reset.c - ar5416UpdateChainMasks()
calculates which TX/RX chainmasks to enable based on the operating mode
(1 for legacy, and whatever is supported for 11n operation.) But doing
this in the HAL is suboptimal - the driver needs to know the currently
configured chainmask in order to correctly enable things for each
TX descriptor. This is currently done by overriding the chainmask
config in the ar5416 TX routines but this has to disappear - the AR9300
HAL support requires the driver to dynamically set the TX chainmask based
on the TX power and TX rate in order to meet mini-PCIe slot power
requirements.
So:
* Introduce a new HAL method to set the operational chainmask variables;
* Introduce null methods for the previous generation chipsets;
* Add new driver state to record the current chainmask separate from
the hardware configured chainmask.
Part #2 of this will involve disabling ar5416UpdateChainMasks() and moving
it into the driver; as well as properly programming the TX chainmask
based on the currently configured HAL chainmask.
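Driver-side, the usage then looks like this (sketch; the method name
may differ slightly from what's committed):

    /* Record the operating chainmasks, then program the HAL. */
    sc->sc_cur_txchainmask = tx_chainmask;
    sc->sc_cur_rxchainmask = rx_chainmask;
    ath_hal_setchainmasks(sc->sc_ah, sc->sc_cur_txchainmask,
        sc->sc_cur_rxchainmask);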
Tested:
* AR5416, STA mode - both legacy (11a/11bg) and 11n rates - verified
that AR_SELFGEN_MASK (the chainmask used for self-generated frames like
ACKs and RTSes) is correct, and that the TX descriptor contents are
also correct.