It's also marked inactive by the initvals, and enabled after
the baseband/PLL has been configured, but before the RF
registers have been programmed.
The origin and reason for this particular change is currently unknown.
Obtained from: Linux ath9k
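As a very rough sketch of the ordering described above (every name,
register and bit value below is a placeholder, not the actual HAL code):

    #include <stdint.h>

    /* Hypothetical register writer; register/bit values are illustrative only. */
    typedef void (*reg_write_t)(uint32_t reg, uint32_t val);

    static void
    reset_order_sketch(reg_write_t wr)
    {
        /* 1. The initvals are loaded and leave the bit inactive. */
        /* 2. The baseband and PLL are configured. */
        /* 3. Only then is the bit enabled ... */
        wr(0x9800 /* placeholder register */, 0x1 /* placeholder enable bit */);
        /* 4. ... and only after that are the RF registers programmed. */
    }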
Antenna diversity on the AR5416 and later chips is implemented differently
than on the AR5212 and earlier chips. So for now, and to not confuse
things, just disable antenna diversity.
This is a bit dirty and likely should be revised at a later date,
with an eye to unifying/tidying up the whole diversity setup
and allowing developers to do "tricky stuff" as they desire.
For now, this works.
* add a new method, specifically for doing per-RX packet
antenna diversity
* set that HAL method only if the chip is a Kite chip that does
diversity (see the sketch below).
* add a diversity flag to the HAL debugging section
* add a check to make sure the Kite diversity code doesn't run
on boards that don't require it, as not all Kite chips will
implement it.
* add some debug statements when the diversity code makes
changes to the antenna diversity/combining setup.
Note: this HAL currently only supports the AR9285.
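As a rough idea of how the pieces above might hang together - the type,
method pointer and field names here are illustrative placeholders, not
the actual HAL symbols:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative stand-ins for the HAL object; not the real definitions. */
    struct ath_hal_sketch {
        bool    ah_is_kite;             /* AR9285 (Kite) family */
        bool    ah_does_diversity;      /* board actually wires up combined diversity */
        void    (*ah_rx_ant_diversity)(struct ath_hal_sketch *, int rssi, int ant);
    };

    static void
    kite_ant_div_scan_sketch(struct ath_hal_sketch *ah, int rssi, int ant)
    {
        /* Guard: boards that don't implement diversity must not run this. */
        if (!ah->ah_does_diversity)
            return;
        (void) rssi;
        (void) ant;
        /* ... per-RX-packet LNA combining / antenna selection would go here,
         * with debug output whenever the combining setup is changed ... */
    }

    static void
    kite_attach_sketch(struct ath_hal_sketch *ah)
    {
        /* Only hook up the per-packet method on Kite parts that do diversity. */
        if (ah->ah_is_kite && ah->ah_does_diversity)
            ah->ah_rx_ant_diversity = kite_ant_div_scan_sketch;
    }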
From Linux ath9k:
The problem is that when the attenuation is increased, the rate will
start to drop from MCS7 -> MCS6, and finally will drop from MCS1 ->
CCK_11Mbps. When the rate changes between CCK and OFDM, the register
desired_scale is used to calculate how much the tx gain needs to change.
The output power for CCK and OFDM modulated signals with the same tx gain
is different. This difference is constant for the AR9280, but not for the
AR9285/AR9271: they have a different PA architecture, so the difference
is not a constant and needs to be calibrated against the PA
characteristic.
The driver has to read the calibrated values from EEPROM and set
the tx power registers accordingly.
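As a rough illustration of the idea (the field names, units and math
below are assumptions, not the actual ath9k/HAL calibration code), the
per-board delta read from EEPROM replaces what would otherwise be a
hard-coded CCK-vs-OFDM constant:

    #include <stdint.h>

    struct pa_cal_sketch {
        int8_t  cck_ofdm_delta;     /* calibrated CCK vs OFDM power delta (placeholder units) */
    };

    static uint8_t
    tx_power_for_rate_sketch(const struct pa_cal_sketch *cal,
        uint8_t ofdm_txpower, int is_cck)
    {
        int power = ofdm_txpower;

        /*
         * With the same tx gain, CCK and OFDM frames come out at different
         * powers; on AR9285/AR9271 that difference isn't a constant, so
         * apply the per-board value read from EEPROM.
         */
        if (is_cck)
            power += cal->cck_ofdm_delta;
        if (power < 0)
            power = 0;
        return ((uint8_t) power);
    }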
Add a routine which returns the current ctl/ext noise floor values.
This routine doesn't check to see whether the radio is MIMO
capable - instead, it simply returns either the raw values,
the "nominal" values if the raw values aren't yet available
or are invalid, or '0' values if there's no valid channel/
no valid MIMO values.
Callers are expected to verify the radio is a MIMO radio
(which for now means an 11n chipset; there are non-11n MIMO
chipsets out there, but I don't think we support them, at
least not in MIMO mode) before exporting the MIMO values.
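A minimal sketch of that fallback behaviour (the structure layout,
chain count and the "nominal" constant are placeholders, not the real
HAL fields):

    #include <stdint.h>
    #include <stddef.h>

    #define NF_NOMINAL_SKETCH   (-95)   /* placeholder nominal noise floor, dBm */
    #define NCHAINS_SKETCH      3

    struct chan_nf_sketch {
        int     nf_valid;                   /* raw NF values are usable */
        int16_t nf_ctl[NCHAINS_SKETCH];     /* per-chain ctl channel NF */
        int16_t nf_ext[NCHAINS_SKETCH];     /* per-chain ext channel NF */
    };

    static void
    get_nf_sketch(const struct chan_nf_sketch *ch,
        int16_t ctl[NCHAINS_SKETCH], int16_t ext[NCHAINS_SKETCH])
    {
        int i;

        for (i = 0; i < NCHAINS_SKETCH; i++) {
            if (ch == NULL) {
                /* no valid channel / no valid MIMO values */
                ctl[i] = ext[i] = 0;
            } else if (!ch->nf_valid) {
                /* raw values not yet available or invalid */
                ctl[i] = ext[i] = NF_NOMINAL_SKETCH;
            } else {
                ctl[i] = ch->nf_ctl[i];
                ext[i] = ch->nf_ext[i];
            }
        }
    }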
Make the per-chain noise floor values available to the upper-level HAL.
Right now the per-chain noise floor values aren't used anywhere in
the upper-level HAL, so the driver currently has no real reference
to compare the per-chain RSSI values to.
This is needed before per-chain RSSI values (for the ctl and ext
channels) can be thrown upstairs to the net80211 code.
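For reference, the eventual consumer looks something like this
(hypothetical helper; Atheros hardware reports per-chain RSSI relative
to the noise floor, so the per-chain NF is the missing reference):

    #include <stdint.h>

    /* Illustrative only: absolute per-chain signal level from NF + RSSI. */
    static int16_t
    chain_signal_dbm_sketch(int16_t chain_nf_dbm, uint8_t chain_rssi_db)
    {
        return (chain_nf_dbm + (int16_t) chain_rssi_db);
    }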
From the ath9k source:
==
11N: we can no longer afford to self link the last descriptor.
MAC acknowledges BA status as long as it copies frames to host
buffer (or rx fifo). This can incorrectly acknowledge packets
to a sender if last desc is self-linked.
==
Since this is useful for pre-AR5416 chips that communicate PHY errors
via error frames rather than by on-chip counters, leave the support
in there, but disable it for AR5416 and later.
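A sketch of the descriptor-chain behaviour described in that note (the
descriptor layout and chip test below are simplified placeholders):

    #include <stdint.h>
    #include <stdbool.h>

    /* Placeholder RX descriptor; the real one is a DMA-visible structure. */
    struct rx_desc_sketch {
        uint32_t    ds_link;    /* physical address of the next descriptor */
        uint32_t    ds_data;    /* physical address of the RX buffer */
    };

    static void
    finish_last_rxdesc_sketch(struct rx_desc_sketch *ds, uint32_t ds_paddr,
        bool is_ar5416_or_later)
    {
        if (is_ar5416_or_later) {
            /*
             * 11n: don't self-link; otherwise the MAC keeps looping over
             * this descriptor and acknowledges block-ack frames it hasn't
             * really stored.
             */
            ds->ds_link = 0;
        } else {
            /*
             * Pre-11n parts: self-link the last descriptor so RX (and the
             * PHY error frames these chips rely on) keeps flowing even if
             * the host is slow to replenish the list.
             */
            ds->ds_link = ds_paddr;
        }
    }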
Introduce the AHB glue for Atheros embedded systems. Right now it's
hard-coded for the AR9130 chip whose support isn't yet in this HAL;
it'll be added in a subsequent commit.
Kernel configuration files now need both 'ath' and 'ath_pci' devices; both
modules need to be loaded for the ath device to work.
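For example, a kernel configuration would now carry both devices (the
entries beyond 'ath' and 'ath_pci' are just the usual companions and
depend on the rest of the configuration):

    device  ath             # core Atheros driver
    device  ath_pci         # PCI/PCIe bus glue, now a separate device
    device  ath_hal         # Atheros HAL
    device  ath_rate_sample # rate control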
Add a counter which tracks whether the self-linked RX descriptor is hit
in the RX path when doing 11n and block-ack'ed frames. Apparently, the MAC
will loop over that self-linked descriptor and treat it as "good enough"
for (incorrectly!) ACKing the frames in the block-ack.
Until I figure out how to work around this issue in the future, this counter
will tell me if packet RX processing ever gets to the point where it's
touching the self-linked descriptor. If there are ever enough packets to
get to that point, BAs will be invalid and likely very unhappy.
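The check itself is tiny; something along these lines (names are
illustrative, and in the driver the counter would be a sysctl-visible
statistic bumped from the RX processing loop):

    #include <stdint.h>

    struct rx_stats_sketch {
        uint64_t    rx_hit_selflinked;  /* RX processing reached the self-linked desc */
    };

    static void
    note_selflinked_sketch(struct rx_stats_sketch *st, uint32_t desc_paddr,
        uint32_t selflinked_paddr)
    {
        /* If we're about to process the self-linked descriptor, RX has
         * caught up with the end of the list and BA state is now suspect. */
        if (desc_paddr == selflinked_paddr)
            st->rx_hit_selflinked++;
    }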
I'll clarify with Bernhard how it's supposed to work and then look
at enabling this in the correct situations.
But this -does- enable HT RTS protection (using the appropriate legacy
rates) if this bit of code is enabled.
It is left disabled by default.
Adventurous souls with an AR9220/AR9280 or later who have a device that
sends PS-POLL frames may wish to try tinkering with this option and get
back to me.
Linux ath9k only enables this for AR9280 and later NICs, so create a
capability for it so it isn't enabled on earlier NICs.
Enabling hardware PS-POLL support will come in a later commit
and will be disabled by default.
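The capability gate would be something like the following; the version
constant and helper names are placeholders rather than actual HAL
identifiers:

    #include <stdbool.h>

    #define MAC_VER_AR9280_SKETCH   0x80    /* placeholder MAC version code */

    static bool
    has_hw_pspoll_sketch(int mac_version)
    {
        /* Per ath9k, hardware PS-POLL handling is only trusted on AR9280+. */
        return (mac_version >= MAC_VER_AR9280_SKETCH);
    }

    static bool
    want_hw_pspoll_sketch(int mac_version, bool admin_enabled)
    {
        /* Off by default; only honoured where the capability exists. */
        return (admin_enabled && has_hw_pspoll_sketch(mac_version));
    }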
Even though the phyerr and radar bits map to setting the error filter
register, ath9k also writes them untouched to AR_RX_FILTER.
The Force-BSSID-match bit can stay at a high bit position, as it maps to
a misc mode register setting rather than an RX filter bit.
The phyerr, radar and bssid-match bits aren't real RX filter bits; they
map to enabling bits in other registers. Move those out of the way of
the valid RX filter bits.
Add a few new fields from ath9k - compba, ps-poll, mcast-bcast-all.
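Conceptually the split looks like this - the bit values are made up for
illustration, and this glosses over the ath9k detail above of passing
the phyerr/radar bits straight through to AR_RX_FILTER:

    #include <stdint.h>

    /* Illustrative values only; not the real HAL_RX_FILTER_* definitions. */
    #define RXF_HW_MASK_SKETCH  0x0000ffff  /* bits that exist in AR_RX_FILTER */
    #define RXF_PHYERR_SKETCH   0x00010000  /* pseudo: error filter register */
    #define RXF_RADAR_SKETCH    0x00020000  /* pseudo: error filter register */
    #define RXF_BSSID_SKETCH    0x00040000  /* pseudo: misc mode register */

    static void
    set_rx_filter_sketch(uint32_t bits,
        void (*wr_rx_filter)(uint32_t), void (*wr_err_filter)(uint32_t),
        void (*wr_misc_mode)(uint32_t))
    {
        /* Only real hardware bits land in AR_RX_FILTER ... */
        wr_rx_filter(bits & RXF_HW_MASK_SKETCH);
        /* ... the pseudo-bits enable bits in other registers instead. */
        wr_err_filter((bits & (RXF_PHYERR_SKETCH | RXF_RADAR_SKETCH)) ? 1 : 0);
        wr_misc_mode((bits & RXF_BSSID_SKETCH) ? 1 : 0);
    }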
The channel width to use is ni->ni_chw, which is set to the negotiated
channel width; ni->ni_htflags is the capability, rather than the
negotiated value.
Teach both the TX path and the sample rate module about this.
This seems to work fine for STA but not HT/20 AP mode.
Further discussion with net80211 people will need to take place
to ensure that the right flags are set based on the negotiated
capabilities of the remote peer, rather than whatever the local
parameters are.
Sending short-GI frames at 20MHz may work on some chips, but it certainly
isn't supported by anything this HAL currently drives; and sending HT40
frames in HT20 mode just plain won't work.
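A sketch of the intended TX-side check (ni_chw is the net80211 field
referred to above; the structure, flag names and the short-GI capability
test here are otherwise placeholders):

    #include <stdint.h>
    #include <stdbool.h>

    struct node_sketch {
        int     ni_chw;     /* negotiated channel width: 20 or 40 MHz */
    };

    #define TXFLAG_CW40_SKETCH      0x1
    #define TXFLAG_SHORTGI_SKETCH   0x2

    static uint32_t
    tx_ht_flags_sketch(const struct node_sketch *ni, bool peer_does_sgi40)
    {
        uint32_t flags = 0;

        /* Key off the negotiated width, not the local channel/capabilities. */
        if (ni->ni_chw == 40) {
            flags |= TXFLAG_CW40_SKETCH;
            /* short-GI only where the peer negotiated it */
            if (peer_does_sgi40)
                flags |= TXFLAG_SHORTGI_SKETCH;
        }
        /* HT20 peer: no HT40 frames, and no short-GI-20 on these chips. */
        return (flags);
    }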