* Call it before sending probe responses, so the ACL code has the
chance to reject sending them.
* Pass the whole frame to the ACL code now, rather than just the
destination MAC - that way the ACL module can look at the frame
contents to determine what the response should be.
This is part of some uncommitted work to support band steering.
Sponsored by: Hobnob, Inc.
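As a rough illustration (not the committed diff; the helper is hypothetical and the aclator member names are from memory), the check hook ends up shaped like this:

    /*
     * Sketch: the ACL check now sees the whole 802.11 frame rather than
     * just a MAC address, so a policy module (e.g. for band steering)
     * can inspect the frame contents before a probe response is sent.
     */
    static int
    acl_check(struct ieee80211vap *vap, const struct ieee80211_frame *wh)
    {
            /* Old behaviour: decide purely on the transmitter's address. */
            return (acl_macaddr_allowed(vap, wh->i_addr2)); /* hypothetical helper */
    }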
Some hardware (e.g. the AR9160 in STA mode) seems to "leak" unicast FROMDS
frames which aren't destined to it. This angers the net80211 stack -
the existing code would fail to find an address in the node table and try
passing the frame up to each vap BSS. It would then be accepted in the
input routine and its contents would update the local crypto and sequence
number state.
If the sequence number / crypto IV replay counters from the leaked frame
were greater than the "real" state, subsequent "real" frames would be
rejected due to out of sequence / IV replay conditions.
This is also likely helpful if/when multi-STA modes are added to net80211.
Sponsored by: Hobnob, Inc.
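A rough sketch of the check this implies in the STA-mode input path (simplified; the drop path and statistics counter are placeholders):

    /*
     * Discard unicast frames whose receiver address isn't ours before
     * they can update local crypto / sequence number state.
     */
    if (vap->iv_opmode == IEEE80211_M_STA &&
        !IEEE80211_IS_MULTICAST(wh->i_addr1) &&
        !IEEE80211_ADDR_EQ(wh->i_addr1, vap->iv_myaddr)) {
            vap->iv_stats.is_rx_wrongbss++;     /* placeholder counter */
            goto out;                           /* drop the leaked frame */
    }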
This supports both station and hostap modes:
* Station mode quiet time element support listens to quiet time
IEs and modifies the local quiet time configuration as appropriate;
* Hostap mode both obeys the locally configured quiet time period
and includes it in beacon frames so stations also can obey as needed.
Submitted by: Himali Patel <himali.patel@sibridgetech.com>
Sponsored by: Sibridge Technologies
The SYSCTL_NODE macro defines a list that stores all child-elements of
that node. If there's no SYSCTL_DECL macro anywhere else, there's no
reason why it shouldn't be static.
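For example, a node that no other file references through SYSCTL_DECL() can simply be declared static (sketch with invented names):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int example_debug = 0;

    /* No SYSCTL_DECL(_net_example) anywhere else, so the node can be static. */
    static SYSCTL_NODE(_net, OID_AUTO, example, CTLFLAG_RW, 0,
        "example subtree");
    SYSCTL_INT(_net_example, OID_AUTO, debug, CTLFLAG_RW, &example_debug, 0,
        "debug level");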
I found this useful when trying to debug the AR9160 STA RX filter issue -
I'd get crypto replay errors but it wasn't entirely clear which TID it
was for.
The ieee80211_swbmiss() callout is not called with the ic lock held, so it's
quite possible the scheduler will run the callout during a state change.
This patch:
* changes the swbmiss callout to be locked by the ic lock
* enforces the ic lock being held across the beacon vap functions
by grabbing it inside beacon_miss() and beacon_swmiss().
This ensures that the ic lock is held (and thus the VAP state
stays constant) during beacon miss and software miss processing.
Since the callout is removed whilst the ic lock is held, it also
ensures that the callout can't run during a state change
or hit any of the race conditions seen above.
Both Edgar and Joel report that this patch fixes the crash and
doesn't introduce new issues.
Reported by: Edgar Martinez <emartinez@kbcnetworks.com>
Reported by: Joel Dahl <joel@vnode.se>
Reported by: emaste
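The shape of the change, as a sketch (the lock macros are the existing net80211 ones; the handler is simplified and its exact name/signature is from memory):

    /* Initialise the software beacon miss callout with the ic lock ... */
    callout_init_mtx(&vap->iv_swbmiss, IEEE80211_LOCK_OBJ(ic), 0);

    /* ... so the miss handlers can rely on the VAP state staying put. */
    static void
    beacon_swmiss(void *arg, int npending)
    {
            struct ieee80211vap *vap = arg;
            struct ieee80211com *ic = vap->iv_ic;

            IEEE80211_LOCK(ic);
            if (vap->iv_state == IEEE80211_S_RUN)
                    vap->iv_bmiss(vap);     /* state can't change underneath us */
            IEEE80211_UNLOCK(ic);
    }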
didn't set a sequence number; it didn't show up earlier because the
hardware most people use for hostap (i.e., AR5212 series stuff) sets the
sequence numbers in hardware. Later hardware (AR5416, etc), which
can do 11n and aggregation, requires sequence numbers to be generated in
software.
Submitted by: paradyse@gmail.com
Approved by: re (kib)
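The usual in-tree pattern for assigning the sequence number in software looks roughly like this (sketch):

    /* Management frames use the non-QoS TID sequence number space. */
    ieee80211_seq seqno;

    seqno = ni->ni_txseqs[IEEE80211_NONQOS_TID]++;
    *(uint16_t *)&wh->i_seq[0] =
        htole16(seqno << IEEE80211_SEQ_SEQ_SHIFT);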
On a TX failure, ic_raw_xmit will still call ieee80211_node_free().
There's no need to call it here.
Submitted by: moonlightakkiy@yahoo.ca
Approved by: re (kib)
channel list ends up with 2 entries, the HT and the legacy channel.
The scan itself is currently always done at legacy rates so we end
up receiving scan results for legacy networks on the HT channel and
erroneously assigning the BSS to the 11n channel. As the channel's
capabilities are used to set up the adapter we might end up with
non-working settings and/or firmware crashes.
Fix this by ensuring that scan results received on an HT channel
are only assigned to that channel if the htcap IE is available;
otherwise they are assigned to the equivalent legacy channel.
Tested by: Pawel Worach, Raoul Megelas, Maciej Milewski,
Andrei <az at azsupport dot com>
Approved by: re (kib)
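A sketch of the idea (the scan-parameter field name is an assumption):

    /*
     * Only credit a result to the HT channel if an htcap IE was seen;
     * otherwise demote it to the matching legacy channel.
     */
    struct ieee80211_channel *c = ic->ic_curchan;

    if (IEEE80211_IS_CHAN_HT(c) && sp->htcap == NULL)
            c = ieee80211_find_channel(ic, c->ic_freq,
                c->ic_flags & ~IEEE80211_CHAN_HT);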
TX for the given TID needs to be paused during ADDBA requests (and unpaused
once the session is established). Since net80211 currently doesn't implement
software aggregation, if this pause/unpause is done in the driver (as it
is in my development branch) then TX will need to be unpaused both on
ADDBA response and on ADDBA timeout.
This callback allows the driver to unpause TX for the relevant TID.
Reviewed by: bschmidt
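A driver might hook the new callback roughly like this (sketch; the method name is as I recall it and the driver helper is hypothetical):

    static void
    mydrv_addba_response_timeout(struct ieee80211_node *ni,
        struct ieee80211_tx_ampdu *tap)
    {
            /* ADDBA timed out: resume normal, non-aggregate TX for this TID. */
            mydrv_tid_resume(ni, tap->txa_tid);     /* hypothetical helper */
    }

    /* during attach: */
    ic->ic_addba_response_timeout = mydrv_addba_response_timeout;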
Intel 4965 devices, for example, have HT40 completely disabled on 2GHz
but still supported on 5GHz. To handle that in sta mode we
need to check if we can "upgrade" to an HT40 channel after the
association; if that is not possible but we are still announcing
support to the remote side, we are left with a very flaky connection.
Reviewed by: adrian
too. In that case don't fiddle with the seqno as drivers are supposed
to handle that.
Currently only the powersave feature sends QoS-null-data frames
before and after a background scan, which must be handled correctly. Due
to this being quite rare we don't fiddle around with starting aggregation
sessions.
* Revert a local path change that shouldn't have made it into the commit.
* Fix some indenting/wrapping.
* Fix the ale data copy - I should be copying into the ale data pointer,
not over the ale entry itself.
handling.
The current sequence number code does a few things incorrectly:
* It didn't try eliminating duplicates from HT nodes. I guess it was assumed
that out of order / retransmission handling would be done by the AMPDU RX
routines. If an HT node isn't doing AMPDU RX, then retransmissions need to
be eliminated. Since most of my debugging is based on this (as AMPDU TX
software packet aggregation isn't yet handled), handle this corner case.
* When a sequence number of 4095 was received, any subsequent sequence number
is going to be (by definition) less than 4095. So if the frame with the
following sequence number (0) is initially lost and its retransmission is
received, the retransmission is incorrectly eliminated by the
IEEE80211_FC1_RETRY && SEQ_LEQ() check. Try to handle this better.
This almost completely eliminates out of order TCP statistics showing up during
iperf testing for the 11a, 11g and non-aggregate 11n AMPDU RX case. The only
other packet loss conditions leading to this are due to baseband resets or
heavy interference.
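In rough terms the receive-side check becomes (sketch; the wrap test is illustrative, not the exact committed expression):

    rxseq = le16toh(*(uint16_t *)wh->i_seq);
    if ((wh->i_fc[1] & IEEE80211_FC1_RETRY) &&
        SEQ_LEQ(rxseq, ni->ni_rxseqs[tid]) &&
        !SEQNO_HAS_WRAPPED(ni->ni_rxseqs[tid], rxseq)) {    /* new wrap check */
            vap->iv_stats.is_rx_dup++;
            goto out;                   /* genuine duplicate retransmission */
    }
    ni->ni_rxseqs[tid] = rxseq;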
defines struct in6_addr, which is needed by struct ip6_hdr used here.
Reviewed by: gnn
Sponsored by: The FreeBSD Foundation
Sponsored by: iXsystems
MFC after: 5 days
This is destined to be a lightweight and optional set of ALQ
probes for debugging events which are just impossible to debug
with printf/log (e.g. packet TX/RX handling; AMPDU handling).
The probes and operations themselves will appear in subsequent
commits.
This introduces struct ieee80211_rx_stats, which stores the various kinds
of RX statistics which MIMO and non-MIMO 802.11 devices can export.
It also fleshes out the MIMO export to userland (node_getmimoinfo()).
It assumes that MIMO radios (for now) export both ctl and ext channels.
Non-11n MIMO radios are possible (and I believe Atheros made at least
one), so if support for such a chipset is added, extra flags can be
added to struct ieee80211_rx_stats to extend this support.
Two new input functions have been added - ieee80211_input_mimo() and
ieee80211_input_mimo_all() - which MIMO-aware devices can call with
MIMO specific statistics.
802.11 devices calling the non-MIMO input functions will still function.
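A driver feeds the per-chain statistics in roughly like this (sketch; the rx_stats member names are paraphrased from memory and may not match the header exactly):

    struct ieee80211_rx_stats rxs;

    memset(&rxs, 0, sizeof(rxs));
    rxs.c_chain = 2;                    /* chains reported (assumed name) */
    rxs.c_rssi_ctl[0] = rssi_chain0;    /* per-chain ctl-channel RSSI (assumed) */
    rxs.c_rssi_ctl[1] = rssi_chain1;
    rxs.c_nf_ctl[0] = nf_chain0;        /* per-chain noise floor (assumed) */
    rxs.c_nf_ctl[1] = nf_chain1;

    (void) ieee80211_input_mimo(ni, m, &rxs);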
The symptom: sometimes 11n (and non-11n) throughput is great.
Sometimes it isn't. Much teeth gnashing occurred, and much kernel
bisecting happened, until someone figured out it was the order
in which things were rebooted, not the kernel versions.
(Which was great news to me: it meant that I hadn't broken if_ath.)
What we found was that sometimes the WME parameters for the best-effort
queue had a burst window ("txop") in which the station would be allowed
to TX as many packets as it could fit inside that particular burst
window. This improved throughput.
After initially thinking it was a bug - the WME parameters for the
best-effort queue -should- have a txop of 0 - Bernard and I discovered
"aggressive mode" in net80211, where the WME BE queue parameters
are changed if there's not a lot of high priority traffic going on.
The WME parameters announced in the association response and beacon
frames just "change" based on what the current traffic levels are.
So in fact yes, the STA was actually supposed to be doing this higher
throughput stuff as it's just meant to be configuring things based on
the WME parameters - but it wasn't.
What was eventually happening was this:
* at startup, the wme qosinfo count field would be 0;
* it'd be parsed in ieee80211_parse_wmeparams();
* and it would be bumped (to say 10);
* .. and the WME queue parameters would be correctly parsed and set.
But then, when you restarted the association (e.g. the hostap goes away and
comes back with the same qosinfo count field of 10, or you
destroy the sta VIF and re-create it), the WME qosinfo count field -
which is associated not to the VIF, but to the main interface -
wouldn't be cleared, so the queue default parameters would be used
(which include no burst setting for the BE queue) and would remain
that way until the hostap qosinfo count field changed, or the STA
was actually rebooted.
This fix simply clears the wme capability field (which has the count
field) to 0, forcing it to be reset by the next received beacon.
Thanks go to Milu for finding it and helping me track down what was
going on, and Bernard Schmidt for working through the net80211 and
WME specific magic.
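The fix itself boils down to a one-liner in the WME (re)initialisation path (field names from memory):

    /*
     * Forget the previously seen qosinfo count so the WME parameters in
     * the next received beacon are parsed and applied again.
     */
    wme->wme_wmeChanParams.cap_info = 0;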
uses of ic_curchan occur. Due to the nature of a scan - channels are
switched constantly, and all of this happens without any kind of lock
held - ic_curchan might end up pointing to nowhere, leading to
panics. Fix this by not allowing frame injections while in SCAN state.
Tested by: Paul B. Mahol <onemda at gmail.com>
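The guard amounts to something like this in the raw transmit path (sketch; the error value is illustrative):

    /* ic_curchan is unstable while scanning; refuse injected frames. */
    if (vap->iv_state == IEEE80211_S_SCAN) {
            m_freem(m);
            return (ENETDOWN);
    }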
If multiple networks are available the max bandwidth is one
condition used for selecting the "best" BSS. To achieve that
we should consider all parameters which affect the max RX rate.
This includes 20/40MHz, SGI and of course the MCS set.
If the TX MCS parameters are available we should use those,
because an AP announcing support for receiving frames at 450Mbps
might only be able to transmit at 150Mbps (1T3R). I haven't seen
devices with support for transmitting at higher rates than
receiving, so preferring TX over RX information should be safe.
While here, remove the hardcoded assumption that MCS15 is the max
possible MCS rate; use MCS31 instead, which really is the highest
rate (according to the 802.11n std). Also, fix a mismatch in a
40MHz/SGI check.
Contrary to the rateset information in legacy frames, the MCS Set
field also contains TX capability information in cases where the
number of available TX and RX spatial streams differs. Because a
rateset doesn't contain that information, we have to pull it
directly from the hardware capabilities.
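For reference, the TX bits live in byte 12 of the 16-byte Supported MCS Set field; a sketch of pulling them out (the helper itself is illustrative, the bit layout follows the standard):

    static int
    ht_tx_streams(const uint8_t mcsset[16], int rx_streams)
    {
            uint8_t txparams = mcsset[12];          /* TX MCS Set parameters */

            if ((txparams & 0x01) == 0)             /* TX MCS Set Defined == 0 */
                    return (rx_streams);            /* no TX info; assume same as RX */
            if ((txparams & 0x02) == 0)             /* TX/RX MCS Set Not Equal == 0 */
                    return (rx_streams);            /* TX capabilities equal RX */
            return (((txparams >> 2) & 0x3) + 1);   /* 1 .. 4 TX spatial streams */
    }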
Get rid of the assumption that every device is capable of 40MHz,
SGI and 2 spatial streams. Instead of printing, in the worst case,
8 times 76 MCS rates, print logically connected ranges and the
supported RX/TX streams.
A device without 40MHz and SGI support looks like:
ath0: 2T2R
ath0: 11na MCS 20Mhz
ath0: MCS 0-7: 6.5Mbps - 65Mbps
ath0: MCS 8-15: 13Mbps - 130Mbps
ath0: 11ng MCS 20Mhz
ath0: MCS 0-7: 6.5Mbps - 65Mbps
ath0: MCS 8-15: 13Mbps - 130Mbps
Initialize ic_rxstream/ic_txstream with 2, for compatibility reasons.
Introduce 4 new HTC flags, which are used in addition to ic_rxstream
and ic_txstream to compute the hc_mcsset content and also for initializing
ni_htrates. The number of spatial streams is enough to determine support
for MCS0-31, but not for MCS32-76 or for some TX parameters in the
hc_mcsset field.
adjust the IEEE80211_HTRATE_MAXSIZE constant; only MCS 0-76 are valid,
the other bits in the mcsset IE (77-127) are either reserved or used
for TX parameters.
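A minimal sketch of deriving the MCS 0-31 part of the mcsset bitmask from the RX stream count (MCS 32-76 and the TX parameter bits, which need the extra HTC flags, are left out):

    /* Each RX spatial stream contributes MCS N*8 .. N*8+7 to the bitmask. */
    static void
    set_rx_mcs_bitmask(uint8_t mcsset[16], int rxstreams)
    {
            int i;

            memset(mcsset, 0, 16);
            for (i = 0; i < rxstreams * 8; i++)
                    mcsset[i / 8] |= 1 << (i % 8);
    }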