cce6344402
My initial rate control code was .. suboptimal. I wanted to at least get MCS rates sent, but it didn't do anywhere near enough to handle low signal level links or keep remotely accurate statistics. So, 8 years later, here's what I should've done back then.

* Firstly, I wasn't tracking packet sizes other than two buckets (250 and 1600 bytes.) So, extend that to also include 4096, 8192, 16384, 32768 and 65536. (I may add 2048 at some point if it turns out to be useful.) This is important for a few reasons. First, when forming A-MPDU or A-MSDU aggregates the frame sizes are much larger, so the TX time calculation becomes increasingly wrong as aggregates grow. Secondly, the behaviour of an 802.11 channel isn't fixed, both due to channel conditions and the radios themselves. Notably, observations a few years ago on 11n chipsets showed that in longer aggregates, failed A-MPDU sub-frame receptions increase the further you get into the transmit time. It could be due to a variety of things - transmitter linearity, channel conditions changing, frequency/phase drift, etc - but the observation was that forming shorter aggregates could improve BER. (There's a sketch of the bucket mapping after this list.)

* .. and then modify the ath TX path to report the length of the aggregate sent, so the statistics kept line up with the correct bucket.

* Then, on the rate control lookup side - I was also only using the first frame length for an A-MPDU rate control lookup, which isn't good enough here. So, add a new method that walks the TID software queue for that node to find out how much data is likely available. It isn't ALL of the data in the queue, because we'll only ever send enough data to fit inside the block-ack window, so limit how many bytes we return to roughly what ath_tx_form_aggr() would form. (Also sketched below.)

* .. and cache that in the first ath_buf in the aggregate, so both it and the eventual A-MPDU length can be returned to the rate control code.

* THEN, modify the rate control code to look at both when deciding which bucket to attribute the sent frame to. I'm erring on the side of caution and using the size bucket that the lookup is based on.
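A quick sketch of the bucket mapping, assuming a simple sorted boundary table. The names packet_size_bins / size_to_bin() follow the shape of the sample.h helpers, but treat this as illustrative rather than the committed code:

/* Bucket boundaries from the list above; illustrative, not the driver's table. */
static const int packet_size_bins[] = {
	250, 1600, 4096, 8192, 16384, 32768, 65536
};
#define	NUM_PACKET_SIZE_BINS \
	((int)(sizeof(packet_size_bins) / sizeof(packet_size_bins[0])))

/* Return the first bucket big enough for a 'length' byte frame/aggregate. */
static int
size_to_bin(int length)
{
	int i;

	for (i = 0; i < NUM_PACKET_SIZE_BINS - 1; i++) {
		if (length <= packet_size_bins[i])
			return (i);
	}
	return (NUM_PACKET_SIZE_BINS - 1);
}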
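And a rough sketch of the TID software queue walk used for the lookup length. The helper name and the ath_buf fields here are simplified stand-ins for the ath(4) ones, and max_aggr_bytes stands for whatever ath_tx_form_aggr() would actually allow:

#include <sys/queue.h>

struct ath_buf {
	TAILQ_ENTRY(ath_buf) bf_list;
	int bf_len;			/* frame length, in bytes */
};
TAILQ_HEAD(ath_bufq, ath_buf);

/*
 * Estimate how many bytes an aggregate formed from this TID's software
 * queue would carry. Stop at max_aggr_bytes - we'll never send more
 * than fits in the block-ack window anyway.
 */
static int
ath_tx_tid_swq_bytes(struct ath_bufq *q, int max_aggr_bytes)
{
	struct ath_buf *bf;
	int nbytes = 0;

	TAILQ_FOREACH(bf, q, bf_list) {
		nbytes += bf->bf_len;
		if (nbytes >= max_aggr_bytes)
			return (max_aggr_bytes);
	}
	return (nbytes);
}

The returned estimate is what gets cached in the first ath_buf, so TX completion can hand both the projected and the actual aggregate lengths back to the rate control code.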
Ok, so now the rate lookups and statistics are "more correct". However, MCS rates are not like 11abg rates: they aren't a monotonically increasing set of faster rates, and you can't assume that just because a given MCS rate fails, the next higher one won't work better or have a lower average TX time. So, I had to do a bunch of surgery on the best-rate and sample-rate math. This is the bit that's still a WIP.

* First, simplify the statistics updates (update_stats()) to do a single pass over all rates.

* Next, make sure each rate's average TX time is updated based on /its/ failure/success. Eg, if you sent a frame with { MCS15, MCS12, MCS8 } and MCS8 succeeded, MCS15 and MCS12 would have their average TX times updated for /their/ part of the transmission, not the whole transmission.

* Next, the EWMA wasn't being fully calculated from the /failures/ in each rate attempt. So, if MCS15 and MCS12 failed above but MCS8 didn't, ensure the statistics note that /all/ sub-frames failed at those rates, rather than crediting them with the eventually transmitted/sent frames. This ensures the EWMA /and/ the average TX time are updated correctly. (There's a worked sketch of this pass after the "Tested" list below.)

* When picking a sample rate and an initial rate, probe rates around the current MCS, but limit it to MCS0..7 /within the same number of spatial streams/, rather than doing crazy things like hitting MCS7 and then probing MCS8 - MCS8 is basically MCS0 with two spatial streams, and it's a /lot/ slower than MCS7. The reverse is also true - if we're at MCS8, don't probe MCS7 as part of it; it's not likely to succeed.

* Fix bugs in pick_best_rate() where I was /immediately/ choosing the highest MCS rate if no frames had been transmitted yet. I was defaulting to 25% EWMA and .. then each comparison would accept the higher rate. Just skip those rates; sampling will fill in the details. (Also sketched after the "Tested" list.)

So, this seems to work a lot better. It's not perfect; I'm still seeing a lot of instability around the higher MCS rates, because there are bursts of loss/retransmission that aren't /too/ bad. But I'll keep iterating on this and tidying up my hacks.

Ok, so why is this still something I'm poking at, rather than porting minstrel_ht? ath_rate_sample tries to minimise airtime, not maximise throughput. I've extended it with an EWMA based on sub-frame successes/failures - high MCS rates with partially successful receptions still show super short average frame times, but a /lot/ of retransmits have to happen for that to work. So for MCS rates I also track this EWMA and ensure that the rates I'm choosing don't have super crappy packet failure rates. I don't mind getting lower peak throughput versus minstrel_ht; instead, I want to see if I can make "minimise airtime" work well.

Tested:

* AR9380, STA mode
* AR9344, STA mode
* AR9580, STA/AP mode
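To make the per-rate statistics pass concrete, here's a small standalone sketch of the { MCS15, MCS12, MCS8 } example, with made-up names and an assumed smoothing weight: each rate in the retry chain is charged its own airtime, and the rates that were exhausted are charged with /all/ sub-frames failing.

#include <stdio.h>

#define	EWMA_LEVEL	96	/* weight given to history, out of 100 (assumed) */

struct rate_stats {
	int average_tx_time;	/* smoothed airtime, usec */
	int ewma_pct;		/* smoothed sub-frame success, 0..100 */
};

static void
update_one_rate(struct rate_stats *rs, int tx_time, int nframes, int nbad)
{
	int pct;

	if (nframes == 0)
		return;
	pct = 100 * (nframes - nbad) / nframes;
	rs->average_tx_time = (rs->average_tx_time * EWMA_LEVEL +
	    tx_time * (100 - EWMA_LEVEL)) / 100;
	rs->ewma_pct = (rs->ewma_pct * EWMA_LEVEL +
	    pct * (100 - EWMA_LEVEL)) / 100;
}

int
main(void)
{
	/* 32 sub-frames tried at { MCS15, MCS12, MCS8 }; MCS8 mostly worked. */
	struct rate_stats mcs15 = { 400, 90 }, mcs12 = { 500, 90 }, mcs8 = { 900, 90 };

	update_one_rate(&mcs15, 600, 32, 32);	/* all sub-frames failed here */
	update_one_rate(&mcs12, 700, 32, 32);	/* all sub-frames failed here */
	update_one_rate(&mcs8, 1100, 32, 2);	/* 30 of 32 got through */

	printf("ewma: MCS15 %d%%, MCS12 %d%%, MCS8 %d%%\n",
	    mcs15.ewma_pct, mcs12.ewma_pct, mcs8.ewma_pct);
	return (0);
}

Note how MCS15 and MCS12 start decaying towards 0% even though the frame eventually went out at MCS8; the old behaviour would have credited them with the final success.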
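Finally, a sketch of the probing restriction and the pick_best_rate() guard, again with illustrative names rather than the driver's. 11n MCS rates come in groups of eight per spatial-stream count, so a candidate probe rate has to stay within the current group, and a rate with no history is skipped instead of being compared on its optimistic defaults:

#include <stdbool.h>

/*
 * MCS0..7 is one spatial stream, MCS8..15 two, and so on. Probing from
 * MCS7 "up" to MCS8 just moves to MCS0 over two streams - a much
 * slower rate - so stay inside the current group.
 */
static bool
sample_rate_ok(int cur_mcs, int cand_mcs)
{
	return ((cur_mcs / 8) == (cand_mcs / 8));
}

/*
 * pick_best_rate() guard: a rate we've never transmitted at shouldn't
 * win on its default EWMA; leave it for the sampler to fill in.
 */
static bool
rate_has_history(int packets_sent_at_rate)
{
	return (packets_sent_at_rate > 0);
}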