This includes the HAL routines to set up, enable/activate/disable spectral
scan and configure the relevant registers.
This still requires driver interaction to enable spectral scan reporting.
Specifically:
* call ah_spectralConfigure() to configure and enable spectral scan;
* .. there's currently no way to disable spectral scan... that will have
to follow.
* call ah_spectralStart() to force start a spectral report;
* call ah_spectralStop() to force stop an active spectral report.
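A rough sketch of the intended call flow (the function is invented for
illustration and the HAL_SPECTRAL_PARAM fields are placeholders; the real
layout comes from Qualcomm Atheros):

    /*
     * Illustrative only: configure, force-start and force-stop
     * spectral scan via the new HAL methods.
     */
    static void
    ath_spectral_demo(struct ath_hal *ah)
    {
        HAL_SPECTRAL_PARAM sp;

        OS_MEMZERO(&sp, sizeof(sp));
        sp.ss_period = 35;      /* placeholder field */
        sp.ss_count = 128;      /* placeholder field */

        ah->ah_spectralConfigure(ah, &sp);  /* configure + enable */
        ah->ah_spectralStart(ah);           /* force a spectral report */
        /* ... spectral reports arrive as PHY error frames ... */
        ah->ah_spectralStop(ah);            /* stop the active report */
    }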
The spectral scan results appear as PHY errors (type 0x5 on the AR9280,
the same as radar) but with the spectral scan bit set (0x10 in the last
byte of the frame), identifying them as spectral reports rather than
radar FFT reports.
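To make that concrete, here's a sketch of an RX-side check; only the
0x5 PHY error type and the 0x10 bit come from above, the constant and
function names are invented:

    #define AR9280_PHYERR_RADAR     0x5     /* shared radar/spectral type */
    #define SPECTRAL_SCAN_BIT       0x10    /* set in the frame's last byte */

    /* Returns 1 for a spectral report, 0 for a radar FFT report. */
    static int
    ath_is_spectral_report(const struct ath_rx_status *rs,
        const uint8_t *buf, int len)
    {
        if (rs->rs_phyerr != AR9280_PHYERR_RADAR || len < 1)
            return (0);
        return ((buf[len - 1] & SPECTRAL_SCAN_BIT) != 0);
    }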
Caveats:
* It's likely quite difficult to run spectral _and_ radar at the same
time. Enabling spectral scan disables the radar thresholds but
leaves radar enabled. Thus, the driver (for now) needs to ensure
that only one or the other is enabled.
* .. it needs testing in HT40 mode.
Tested:
* AR9280 in STA mode, HT/20 only
TODO:
* Test on AR9285, AR9287;
* Test in both HT20 and HT40 modes;
* .. all the driver glue.
Obtained from: Qualcomm Atheros
* Finish adding the HAL capability to announce whether a NIC supports
spectral scan or not;
* Add spectral scan methods to the HAL structure;
* Add HAL_SPECTRAL_PARAM for configuration of the spectral scan logic.
The capability ID and HAL_SPECTRAL_PARAM struct are from Qualcomm
Atheros.
I couldn't think of a way to maintain the hardware TXQ locks _and_ layer
per-TXQ software queuing and other fine-grained locking (eg per-TID or
per-node locks) on top of that.
So for now, to facilitate some further code refactoring and development
as part of the final push to get software queue ps-poll and u-apsd handling
into this driver, just do away with them entirely.
I may eventually bring them back at some point, when it looks slightly more
architecturally cleaner to do so. But as it stands at present, it's
not really buying us much:
* in order to properly serialise things and not get bitten by scheduling
and locking interactions with things higher up in the stack, we need to
wrap the whole TX path in a long-held lock. Otherwise we can end up
being pre-empted during frame handling, resulting in some out of order
frame handling between sequence number allocation and encryption handling
(ie, the seqno and the CCMP IV get out of sequence);
* .. so whilst that's the case, holding the lock for that long means that
we're acquiring and releasing the TXQ lock _inside_ that context;
* and we also acquire it per-frame during frame completion, but we
  currently can't hold the lock for the duration of the TX completion as
  we need to call net80211 layer things with the locks _unheld_ to avoid
  a LOR;
* .. the other places where we grab that lock are reset/flush, which don't
  happen often.
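A minimal sketch of the hazard from the first point, with illustrative
lock and helper names in the ath(4) style:

    /*
     * Without one long-held TX lock covering both steps, a preempting
     * thread can interleave here and the seqno/CCMP IV ordering inverts.
     */
    static void
    ath_tx_one(struct ath_softc *sc, struct ieee80211_node *ni,
        struct ath_buf *bf, struct ath_txq *txq)
    {
        ATH_TX_LOCK(sc);
        ath_tx_seqno_assign(sc, ni, bf);    /* allocate the 802.11 seqno */
        ath_tx_crypto_encap(sc, ni, bf);    /* assign the CCMP IV */
        ath_tx_handoff(sc, txq, bf);        /* takes the TXQ lock inside */
        ATH_TX_UNLOCK(sc);
    }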
My eventual aim is to change the TX path so that all rejected frame
transmissions and all frame completions result in any ieee80211_free_node()
calls occurring outside of the TX lock; then I can cut back on the amount
of locking that goes on here.
There may be some LORs that occur when ieee80211_free_node() is called when
the TX queue path fails; I'll begin to address these in follow-up commits.
Enforce the TXOP and TBTT limits in the TX path:
* Frames which will overlap with TBTT will not TX;
* Frames which will exceed TXOP will be filtered.
This is not enabled by default; it's intended to be enabled by the
TDMA code on 802.11n capable chipsets.
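A sketch of the two checks (all names are invented; pktdur is an
estimate of the frame's airtime in usec):

    /* Returns 1 if the frame may TX now, 0 if it must be held/filtered. */
    static int
    ath_tdma_tx_ok(uint64_t tsf, uint32_t pktdur, uint64_t nexttbtt,
        uint32_t txop_used, uint32_t txop_limit)
    {
        if (tsf + pktdur >= nexttbtt)        /* would overlap TBTT: no TX */
            return (0);
        if (txop_used + pktdur > txop_limit) /* would exceed TXOP: filter */
            return (0);
        return (1);
    }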
Now this works for both non-debug and debug builds.
* Add a comment reminding me (or someone) to audit all of the relevant
math to ensure there's no weird wrapping issues still lurking about.
But yes, this does seem to be mostly working.
Pointy-hat-to: adrian, yet again
* add some further debugging prints, which are quite nice to have
* add in ALQ hooks (optional!) to allow for the TDMA information to be
logged in-line with the TX and RX descriptor information.
The existing logic wrapped the nexttbtt programming at 65535 TU.
This is not good enough for the 11n chips, whose nexttbtt register
(GENERIC_TIMER_0) takes an initial value from 0..2^31-1 TSF.
So converting the TU to TSF had the counter wrap at (65535 << 10) TSF.
Once this wrap occurred, the nexttbtt value was very, very low, much
lower than the current TSF value. At this point, the nexttbtt timer
would constantly fire, leading to the TX queue being constantly gated
open.. and when this occurred, the sender was not correctly transmitting
in its slot but was just able to continuously transmit. The master would
then delay transmitting its beacon until after the air became free
(which I guess would be after the burst interval, before the next burst
interval would quickly follow) and that big delta in master beacon TX
would start causing big swings in the slot timing adjustment.
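Worked through in numbers (a sketch):

    /* The old behaviour: nexttbtt kept to 16 bits of TU, then shifted. */
    static uint32_t
    old_nexttbtt_tsf(uint32_t nexttbtt_tu)
    {
        nexttbtt_tu &= 0xffff;          /* 16 bit TU wrap */
        return (nexttbtt_tu << 10);     /* 1 TU = 1024 usec; so the TSF
                                         * value wraps at 67,107,840 usec
                                         * (~67 sec), far below 2^31-1 */
    }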
With this change, the nexttbtt value is allowed to go all the way up
to the maximum value permissible by the 32 bit representation.
I haven't yet tested it to that point; I really should. The AR5212
HAL now filters out values above 65535 TU for the beacon configuration
(and the relevant legal values for SWBA, DBA and NEXTATIM) and the
AR5416 HAL just dutifully programs in what it should.
With this, TDMA is now useful on the 802.11n chips.
Tested:
* AR5416, AR9280 TDMA slave
* AR5413 TDMA slave
Have the HAL enforce what the maximum legal beacon timer values are.
The current beacon timer configuration from TDMA wraps things at
HAL_BEACON_PERIOD-1 TU. For the 11a chips this is fine, but for
the 11n chips it's not enough resolution. Since the 11a chips have a
limit on what's "valid", just enforce this so when I do write larger
values in, they get suitably wrapped before programming.
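A sketch of the enforcement, assuming HAL_BEACON_PERIOD is the usual
16 bit TU mask:

    #define HAL_BEACON_PERIOD   0x0000ffff  /* assumed 16 bit TU limit */

    /* Wrap a driver-supplied TU value into the pre-11n legal range. */
    static uint32_t
    ar5212_wrap_tu(uint32_t tu)
    {
        return (tu % HAL_BEACON_PERIOD);    /* 0 .. HAL_BEACON_PERIOD-1 */
    }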
Tested:
* AR5413, TDMA slave
Todo:
* Run it for a (lot) longer on a clear channel, ensure that no strange
slippages occur.
* Re-validate this on STA configurations, just to be sure.
After chatting with the MAC team, the TSF writes (at least on the 11n
MACs, I don't know about pre-11n MACs) are done as 64 bit writes that
can take some time. So, doing a 32 bit TSF write is definitely not
supported. Leave a comment here which explains that.
Whilst here, add a comment which outlines that after a reset or TSF
write, the TSF update may take a while (up to 50uS) to complete.
A write or reset shouldn't be done whilst the previous one is in
flight. Also (and this isn't currently done) a read shouldn't
occur until the SLEEP32_TSF_WRITE_STAT is clear. Right now we're
not doing that, mostly because we haven't been doing lots of TSF
resets/writes until recently.
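A sketch of what a conforming path would do (the poll loop is
illustrative; the register/bit names follow the description above):

    /* Wait for an in-flight TSF write/reset to land before trusting
     * a TSF read; the update can take up to ~50uS. */
    static void
    ath_tsf_wait_write(struct ath_hal *ah)
    {
        int i;

        for (i = 0; i < 10; i++) {
            if ((OS_REG_READ(ah, AR_SLP32_MODE) &
                AR_SLP32_TSF_WRITE_STATUS) == 0)
                break;
            OS_DELAY(10);
        }
    }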
Do the TSF adjustment using a 64 bit TSF write rather than a 32 bit
TSF write.
The TSF_L32 update is fine for the AR5413 (and later, I guess) 11abg NICs;
however, on the 11n NICs this didn't work. The TSF writes were causing
a much larger time to be skipped, leading the timing to never
converge.
I've tested this 64 bit TSF read, adjust and write on both the
11n NICs and the AR5413 NIC I've been using for testing. It works
fine on each.
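The adjustment itself reduces to a 64 bit read-modify-write; a sketch,
assuming ath_hal_gettsf64()/ath_hal_settsf64() style accessors:

    static void
    ath_tdma_adjust_tsf(struct ath_hal *ah, int64_t tsfdelta)
    {
        uint64_t tsf;

        tsf = ath_hal_gettsf64(ah);     /* 64 bit TSF read */
        tsf += tsfdelta;                /* apply the slot adjustment */
        ath_hal_settsf64(ah, tsf);      /* 64 bit TSF write */
    }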
This patch allows the AR5416/AR9280 to be used as a TDMA member.
I don't yet know why the AR9280 is ~7uS accurate rather than ~3uS;
I'll look into it soon.
Tested:
* AR5413, TDMA slave (~ 3us accuracy)
* AR5416, TDMA slave (~ 3us accuracy)
* AR9280, TDMA slave (~ 7us accuracy)
Constrain nexttbtt correctly on the 802.11n NICs.
The 802.11n NICs return a TBTT value that continues far past the 16 bit
HAL_BEACON_PERIOD time (in TU.) The code would constrain nextslot to
HAL_BEACON_PERIOD, but it wasn't constraining nexttbtt - the pre-11n
NICs would only return TU values from 0 -> HAL_BEACON_PERIOD. Thus,
when nexttbtt exceeded 65535 TU, it would not wrap (but nextslot
did), which led to a huge tsfdelta.
So until the slot calculation is converted to work in TSF rather than
a mix of TSF and TU, "make" the nexttbtt values match the TU assumptions
for pre-11n NICs.
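Roughly (a sketch; HAL_BEACON_PERIOD again assumed to be the 16 bit
TU limit):

    nexttbtt = nexttbtt % HAL_BEACON_PERIOD;    /* mirror nextslot's wrap,
                                                 * keeping tsfdelta sane */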
This fixes the crazy deltatsf calculations but it doesn't fix the
non-convergent tsfdelta issue. That'll be fixed in a subsequent commit.
Disable hardware encryption on the AR5210, for all encryption types.
The AR5210 only has four WEP key slots, in contrast to what the
later MACs have (ie, the keycache.) So there's no way to store a "clear"
key.
Even if the driver is taught to not allocate CLR key entries for
the AR5210, the hardware will actually attempt to decode the encrypted
frames with the (likely all 0!) WEP keys.
So for now, disable the hardware encryption entirely and just do it
all in software. That allows both WEP -and- WPA to actually work.
If someone wishes to try and make hardware WEP _but_ software WPA work,
they'll have to create a HAL capability to enable/disable hardware
encryption based on the current STA/Hostap mode. However, making
multi-vap work with one WEP and one WPA VAP will require hardware
encryption to be disabled anyway.
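One way to express "no hardware crypto at all" (a sketch only; the
actual change may use a different mechanism):

    /* Illustrative: report no hardware cipher support from the AR5210
     * HAL so both WEP and WPA are done in software. */
    static HAL_STATUS
    ar5210_cap_cipher(HAL_CAPABILITY_TYPE type)
    {
        switch (type) {
        case HAL_CAP_CIPHER:
            return (HAL_ENOTSUPP);  /* no usable hardware key slots */
        default:
            return (HAL_OK);        /* other capabilities elided */
        }
    }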
* For CABQ traffic, I -can- chain the frames together using the next pointer
and just push that particular chain head to the CABQ. However, this
doesn't magically make EDMA TX CABQ work - I have to do some further
hoop jumping.
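Roughly, the chaining looks like this (a sketch; the helper names
follow the ath(4)/HAL style but are assumptions here):

    /* Link each buffer's last descriptor to the next buffer, then
     * push only the chain head to the CABQ. */
    static void
    ath_cabq_push_chain(struct ath_softc *sc, struct ath_txq *cabq,
        struct ath_buf *first)
    {
        struct ath_buf *bf;

        for (bf = first; bf->bf_next != NULL; bf = bf->bf_next)
            ath_hal_settxdesclink(sc->sc_ah, bf->bf_lastds,
                bf->bf_next->bf_daddr);
        ath_hal_puttxbuf(sc->sc_ah, cabq->axq_qnum, first->bf_daddr);
        ath_hal_txstart(sc->sc_ah, cabq->axq_qnum);
    }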
* upon setup, tell the alq code what the chip information is.
* add TX/RX path logging for legacy chips.
* populate the tx/rx descriptor length fields with a best estimate.
  It's overly big (96 bytes when AH_SUPPORT_AR5416 is enabled)
  but it'll do for now.
Whilst I'm here, add CURVNET_RESTORE() during probe/attach as a
partial fix for crashes during attach when the attach fails.
There are other attach failures that I have to deal with; those'll come
later.
* Add a new method which allows the driver to push the MAC/phy/hal info
into the logging stream.
* Add a new ALQ logging entry which logs the mac/phy/hal information.
* Modify the ALQ startup path to log the MAC/phy/hal information
so the decoder knows which HAL/chip is generating this information.
* Convert the header and mac/phy/hal information to use be32, rather than
  host order. I'd like to make this stuff endian-agnostic so I can
  decode MIPS-generated logs on a PC.
This requires some further driver modifications to correctly log the
right initial chip information.
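A sketch of what such a be32 device-info record could look like (the
struct, fields and op id are illustrative, following the if_ath_alq
style):

    struct ath_alq_devinfo {
        uint32_t macVersion;    /* illustrative record layout */
        uint32_t macRev;
        uint32_t phyRev;
        uint32_t halMagic;
    };

    static void
    ath_alq_post_devinfo(struct if_ath_alq *alq, uint32_t macver,
        uint32_t macrev, uint32_t phyrev, uint32_t magic)
    {
        struct ath_alq_devinfo d;

        d.macVersion = htobe32(macver);     /* be32, not host order */
        d.macRev = htobe32(macrev);
        d.phyRev = htobe32(phyrev);
        d.halMagic = htobe32(magic);
        if_ath_alq_post(alq, ATH_ALQ_INIT_STATE, sizeof(d),
            (const char *) &d);
    }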
Also - although no-one bar me is currently using this, I've shifted the
debug bitmask around a bit. Consider yourself warned!