Right now the only way to force a cold reset is:
* The HAL itself detects it's needed, or
* The sysctl, setting all resets to be cold.
Trouble is, cold resets take quite a bit longer than warm resets.
However, there are situations where a cold reset would be nice.
Specifically, after a stuck beacon, BB/MAC hang, stuck calibration results,
etc.
The vendor HAL has a separate method to set the reset reason (which is
how HAL_RESET_BBPANIC gets set) which informs the HAL during the reset path
why it occurred. This is almost but not quite the same; I may unify
the two approaches in the future.
This commit just extends HAL_RESET_TYPE to include both a status (eg BBPANIC)
and a type (eg "do a cold reset".) None of the HAL code uses it yet though; that'll
come later.
It also is a big no-op in each HAL - I need to go teach each of the HALs
about cold/warm reset through this path.
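To make the intent concrete, here's a rough sketch of what a combined
status/type could look like - the names and bit layout are illustrative only,
not the actual HAL definitions:

    typedef enum {
            /* status: why the reset is happening (low nibble) */
            HAL_RESET_NORMAL        = 0x00,
            HAL_RESET_BBPANIC       = 0x01,         /* eg baseband panic */
            /* type: how to reset (high nibble) */
            HAL_RESET_WARM          = 0x00,
            HAL_RESET_COLD          = 0x10,         /* force a cold reset */
    } HAL_RESET_TYPE;

    #define HAL_RESET_STATUS(t)     ((t) & 0x0f)    /* eg BBPANIC */
    #define HAL_RESET_KIND(t)       ((t) & 0xf0)    /* warm versus cold */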
to transmit the buffer.
ath_tx_start() may manipulate/reallocate the mbuf as part of the DMA
code, so we can't expect the mbuf to be returned to the caller.
Now, the net80211 ifnet work changed the semantics slightly so
if an error is returned here, the mbuf/reference is freed by the
caller (here, it's net80211.)
So, once we reach ath_tx_start(), we never return failure. If we fail
then we still return OK and we free the mbuf/noderef ourselves, and
we increment OERRORS.
* Create ieee80211_free_mbuf() which frees a list of mbufs (see the sketch
after this list).
* Use it in the fragment transmit path and ath / uath transmit paths.
* Call it in xmit_pkt() if the transmission fails; otherwise fragments
may be leaked.
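A minimal sketch of what ieee80211_free_mbuf() amounts to - walk the
m_nextpkt fragment list and free each chain; treat this as an illustration
rather than the literal net80211 implementation:

    #include <sys/param.h>
    #include <sys/mbuf.h>

    void
    ieee80211_free_mbuf(struct mbuf *m)
    {
            struct mbuf *next;

            if (m == NULL)
                    return;
            do {
                    next = m->m_nextpkt;
                    m->m_nextpkt = NULL;
                    m_freem(m);     /* frees this fragment's whole m_next chain */
            } while ((m = next) != NULL);
    }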
This should be a big no-op.
Submitted by: <s3erios@gmail.com>
Differential Revision: https://reviews.freebsd.org/D3769
connectivity interact with the net80211 stack.
Historical background: originally wireless devices created an interface,
just like Ethernet devices do. The name of an interface matched the name of
the driver that created it. Later, the wlan(4) layer was introduced, and the
wlanX interfaces became the actual interfaces, leaving the original ones as
"parent interfaces" of wlanX. Kernel-wise, the KPI between the net80211 layer
and a driver became a mix of methods that pass a pointer to struct ifnet
as the identifier and methods that pass a pointer to struct ieee80211com. From
the user's point of view, the parent interface just hangs around in the
ifconfig list, and the user can't do anything useful with it.
Now, the struct ifnet goes away. The struct ieee80211com is the only
KPI between a device driver and net80211. Details:
- The struct ieee80211com is embedded into the driver's softc.
- Packets are sent via the new ic_transmit method, which is very much like
the previous if_transmit (see the sketch after this list).
- Bringing the parent up/down is done via the new ic_parent method, which
notifies the driver about any changes: the number of wlan(4) interfaces and
the number of them in promisc or allmulti state.
- Device specific ioctls (if any) are received via the new ic_ioctl method.
- Packet/error accounting is done by the stack. In certain cases, when the
driver experiences errors and cannot attribute them to any specific
interface, the driver updates the ic_oerrors or ic_ierrors counters.
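A rough sketch of what a driver looks like under the new KPI - the softc and
function names here are made up, but ic_softc, ic_transmit and ic_parent are
the fields described above:

    /* usual net80211 driver includes omitted */

    struct foo_softc {
            struct ieee80211com     sc_ic;  /* embedded, no struct ifnet */
            /* ... device state, locks, DMA rings ... */
    };

    static int
    foo_transmit(struct ieee80211com *ic, struct mbuf *m)
    {
            struct foo_softc *sc = ic->ic_softc;

            /* sc: queue the frame for DMA here; the stack does the accounting */
            m_freem(m);             /* placeholder: a real driver transmits m */
            return (0);
    }

    static void
    foo_parent(struct ieee80211com *ic)
    {
            /* bring the device up/down based on ic->ic_nrunning, promisc, etc. */
    }

    static void
    foo_attach_net80211(struct foo_softc *sc)
    {
            struct ieee80211com *ic = &sc->sc_ic;

            ic->ic_softc = sc;
            ic->ic_transmit = foo_transmit;
            ic->ic_parent = foo_parent;
            /* ic->ic_ioctl = foo_ioctl;  if the device has private ioctls */
            ieee80211_ifattach(ic);
    }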
Details on interface configuration with new world order:
- The sequence of commands needed to bring up wireless DOESN'T change.
- /etc/rc.conf parameters DON'T change.
- The list of devices that can be used to create wlan(4) interfaces is
now provided by the net.wlan.devices sysctl.
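For example, to see the candidate parent devices after boot (the output line
is illustrative and depends on the hardware present):

# sysctl net.wlan.devices
net.wlan.devices: ath0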
Most drivers in this change were converted by me, except for wpi(4),
which was done by Andriy Voskoboinyk. Big thanks to Kevin Lo for testing
changes to at least 8 drivers. Thanks to pluknet@, Oliver Hartmann,
Olivier Cochard, gjb@, mmoll@, op@ and lev@, who also participated in
testing.
Reviewed by: adrian
Sponsored by: Netflix
Sponsored by: Nginx, Inc.
* 286410
* 286413
* 286416
The initial commit broke a variety of debug options and features that aren't
in the GENERIC kernels but are enabled on other platforms.
with the net80211 stack.
Historical background: originally wireless devices created an interface,
just like Ethernet devices do. The name of an interface matched the name of
the driver that created it. Later, the wlan(4) layer was introduced, and the
wlanX interfaces became the actual interfaces, leaving the original ones as
"parent interfaces" of wlanX. Kernel-wise, the KPI between the net80211 layer
and a driver became a mix of methods that pass a pointer to struct ifnet
as the identifier and methods that pass a pointer to struct ieee80211com. From
the user's point of view, the parent interface just hangs around in the
ifconfig list, and the user can't do anything useful with it.
Now, the struct ifnet goes away. The struct ieee80211com is the only
KPI between a device driver and net80211. Details:
- The struct ieee80211com is embedded into the driver's softc.
- Packets are sent via the new ic_transmit method, which is very much like
the previous if_transmit.
- Bringing the parent up/down is done via the new ic_parent method, which
notifies the driver about any changes: the number of wlan(4) interfaces and
the number of them in promisc or allmulti state.
- Device specific ioctls (if any) are received via the new ic_ioctl method.
- Packet/error accounting is done by the stack. In certain cases, when the
driver experiences errors and cannot attribute them to any specific
interface, the driver updates the ic_oerrors or ic_ierrors counters.
Details on interface configuration with new world order:
- The sequence of commands needed to bring up wireless DOESN'T change.
- /etc/rc.conf parameters DON'T change.
- The list of devices that can be used to create wlan(4) interfaces is
now provided by the net.wlan.devices sysctl.
Most drivers in this change were converted by me, except for wpi(4),
which was done by Andriy Voskoboinyk. Big thanks to Kevin Lo for testing
changes to at least 8 drivers. Thanks to Olivier Cochard, gjb@, mmoll@,
op@ and lev@, who also participated in testing. Details here:
https://wiki.freebsd.org/projects/ifnet/net80211
Still, the ndis, wtap, mwl, ipw, bwn, wi, upgt and uath drivers were not
tested. The changes to mwl, ipw, bwn, wi and upgt are trivial and the chances
of problems are low. The wtap driver wasn't compilable even before this change.
But the ndis driver is complex, and it is likely to be broken with this
commit. Help with testing and debugging it is appreciated.
Differential Revision: D2655, D2740
Sponsored by: Nginx, Inc.
Sponsored by: Netflix
is detaching.
This mostly fixes a panic - the reset path shouldn't run whilst
the NIC is being torn down.
It's not locked, so it's "mostly" ok, but most of the rest of
the driver doesn't read sc->invalid with sensible locking. Grr.
The real solution is to cleanly tear down taskqueues in the detach/suspend
phase, but ..
years for head. However, it is continuously misused as the mpsafe argument
for callout_init(9). Deprecate the flag and clean up callout_init() calls
to make them more consistent.
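The shape of the cleanup, using a made-up softc field for illustration - the
flag was only ever acting as a boolean here:

    /* before: the flag happened to work only because it's non-zero */
    callout_init(&sc->sc_timer, CALLOUT_MPSAFE);

    /* after: pass the mpsafe argument explicitly */
    callout_init(&sc->sc_timer, 1);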
Differential Revision: https://reviews.freebsd.org/D2613
Reviewed by: jhb
MFC after: 2 weeks
This symptom is "calibrations don't ever run", which may cause some
pretty spectacularly bad behaviour in noisy environments or with longer
uptimes.
Thanks to dtrace for making it easy to check whether specific non-inlined
functions are getting called by things like the ANI and calibration HAL methods.
Grr.
Tested:
* AR9380, STA mode
which showed up after I started changing addresses this early.
It turns out that there's some other malarkey going on behind the scenes
in the HAL and merely setting the net80211/ifp mac address this early
isn't enough. If the MAC is set from kenv at attach time, the HAL
also needs to be programmed early.
Without this, the VAP wouldn't work enough for finishing association -
probe requests would be fine as they're broadcast, but association
request would fail.
This is used by the AR71xx platform code to choose a local MAC based on
the "board MAC address", versus whatever potentially invalid/garbage
values are stored in the Atheros calibration data.
I did this wrong - I should've included a state flag for each callout
to see if it was supposed to run or not. I didn't do that.
Instead, just use mutexes anyway.
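A minimal sketch of the mutex approach, with illustrative field names - tying
the callout to the driver lock means callout_stop() under that lock can't race
a handler that's about to run:

    /* attach: associate the calibration callout with the driver mutex */
    callout_init_mtx(&sc->sc_cal_ch, &sc->sc_mtx, 0);

    /* stop path: with the mutex held, the handler can't be running concurrently */
    mtx_lock(&sc->sc_mtx);
    callout_stop(&sc->sc_cal_ch);
    mtx_unlock(&sc->sc_mtx);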
Suggested by: jhb
These variants have a few differences from the default AR9485 NIC,
namely:
* a non-default antenna switch config;
* slightly different RX gain table setup;
* an external XLNA hooked up to a GPIO pin;
* (and not yet done) RSSI threshold differences when
doing slow diversity.
To make this possible:
* Add the PCI device list from Linux ath9k, complete with vendor and
sub-vendor IDs for various things to be enabled;
* .. and until FreeBSD learns about a PCI device list like this,
write a search function inspired by the USB device enumeration code
(see the sketch after this list);
* add HAL_OPS_CONFIG to the HAL attach methods; the HAL can use this
to initialise its local driver parameters upon attach;
* copy these parameters over in the AR9300 HAL;
* don't default to overriding the antenna switch - only do it for
the chips that require it;
* I brought over ar9300_attenuation_apply() from ath9k which is cleaner
and easier to read for this particular NIC.
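The lookup is conceptually just a linear scan over vendor/device/subvendor/
subdevice tuples; a sketch follows (the struct, the table contents and the
flag name are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bus.h>
    #include <dev/pci/pcivar.h>

    #define ATH_PCI_CUS198          0x0001  /* illustrative "CUS198 variant" flag */

    struct ath_pci_id {
            uint16_t vendor, device;
            uint16_t subvendor, subdevice;
            uint32_t flags;
    };

    static const struct ath_pci_id ath_pci_ids[] = {
            /* vendor, device, subvendor, subdevice - values illustrative */
            { 0x168c, 0x0032, 0x1a3b, 0x2c97, ATH_PCI_CUS198 },
            /* ... */
    };

    static uint32_t
    ath_pci_lookup_flags(device_t dev)
    {
            const struct ath_pci_id *id;
            int i;

            for (i = 0; i < nitems(ath_pci_ids); i++) {
                    id = &ath_pci_ids[i];
                    if (pci_get_vendor(dev) == id->vendor &&
                        pci_get_device(dev) == id->device &&
                        pci_get_subvendor(dev) == id->subvendor &&
                        pci_get_subdevice(dev) == id->subdevice)
                            return (id->flags);
            }
            return (0);
    }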
This is a work in progress. I'm worried that there's some post-AR9380
NIC out there which doesn't work without the antenna override set as
I currently haven't implemented bluetooth coexistence for the AR9380
and later HAL. But I'd rather have this code in the tree and fix it
up before 11.0-RELEASE happens versus having a set of newer NICs
in laptops be effectively RX deaf.
Tested:
* AR9380 (STA)
* AR9485 CUS198 (STA)
Obtained from: Qualcomm Atheros, Linux ath9k
The original code was .. well, slightly more than incorrect.
It showed up as stalled RX queues if the NIC needed to be frequently
reinitialised (eg during scans.)
This is inspired by work done by Matt Dillon over at the DragonflyBSD
project.
So:
* track when EDMA RX has been stopped and when the MAC has been reset;
* re-initialise the ring only after a reset;
* track whether RX has been stopped/started - just for debugging now;
* don't bother with the RX EOL stuff for EDMA - we don't need the
interrupt at all. We also don't need to disable/enable the interrupt
or start DMA - once new frames are pushed into the ring via the
normal RX path, it'll just restart RX DMA on its own.
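In code terms this boils down to a couple of state bits, roughly like the
following (the field names are from memory and should be treated as a sketch):

    /* RX stop path */
    sc->sc_rx_stopped = 1;

    /* chip reset path, once the MAC has actually been reset */
    sc->sc_rx_resetted = 1;

    /* RX (re)start path: only re-initialise the FIFO/ring after a reset */
    if (sc->sc_rx_resetted) {
            /* rewrite the RX FIFO contents here */
            sc->sc_rx_resetted = 0;
    }
    sc->sc_rx_stopped = 0;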
Tested:
* AR9380, STA mode
* AR9380, AP mode
* AR9485, STA mode
* AR9462, STA mode
to get upset.
The Qualcomm Atheros reference design code goes through significant
hacks to shut down RX before TX. It doesn't even try to do it in the
driver - it actually makes the DMA stop routines in the HAL shut down
RX before shutting down TX.
So, to make this work for chips that aren't the AR9380 and later, do
it in the driver. Shuffle the TX stop/drain HAL calls to be called
*after* the RX stop HAL call.
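In other words, the reset path ordering becomes RX first, then TX; roughly
(the ath(4) helper names are used here, but treat the exact arguments as an
assumption):

    /* stop RX DMA before touching TX */
    ath_stoprecv(sc, 1);

    /* only now drain/stop the TX queues */
    ath_draintxq(sc, reset_type);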
Tested:
* AR5413 (STA)
* AR5212 (STA)
* AR5416 (STA)
* AR9380 (STA)
* AR9331 (AP)
* AR9341 (AP)
TODO:
* test the AR92xx series NICs and the AR5210/AR5211, in case there's something
even odder about those.
These changes prevent sysctl(8) from returning proper output,
such as:
1) no output from sysctl(8)
2) erroneously returning ENOMEM with tools like truss(1)
or uname(1)
truss: can not get etype: Cannot allocate memory
there is an environment variable which shall initialize the SYSCTL
during early boot. This works for all SYSCTL types, both statically and
dynamically created ones, except for the SYSCTL NODE type and SYSCTLs
which belong to VNETs. A new flag, CTLFLAG_NOFETCH, has been added to
be used in the case where a tunable sysctl has a custom initialisation
function, allowing the sysctl to still be marked as a tunable. The
kernel SYSCTL API is mostly the same, with a few exceptions for some
special operations like iterating the children of a static/extern SYSCTL
node. This operation should probably be factored out into a
common macro, since some device drivers use it. The reason for
changing the SYSCTL API was the need for a SYSCTL parent OID pointer
and not only the SYSCTL parent OID list pointer in order to quickly
generate the sysctl path. The motivation behind this patch is to avoid
parameter loading kludges inside the OFED driver subsystem. Instead of
adding special code to the OFED driver subsystem to post-load tunables
into dynamically created sysctls, we generalize this in the kernel.
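As a small, hedged example (the node and OID names are made up): declaring a
sysctl with a tunable flag is now enough for it to be initialised from the
kernel environment when it's created.

    SYSCTL_NODE(_hw, OID_AUTO, foo, CTLFLAG_RD, 0, "example device");
    static int foo_limit = 8;
    SYSCTL_INT(_hw_foo, OID_AUTO, limit, CTLFLAG_RDTUN,
        &foo_limit, 0, "picked up from hw.foo.limit in the kernel environment");

Setting hw.foo.limit in loader.conf(5), or with kenv before kldload, fills in
foo_limit when the sysctl is registered; CTLFLAG_NOFETCH opts a tunable out of
this automatic fetch.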
Other changes:
- Corrected a possibly incorrect sysctl name from "hw.cbb.intr_mask"
to "hw.pcic.intr_mask".
- Removed redundant TUNABLE statements throughout the kernel.
- Some minor code rewrites in connection to removing not needed
TUNABLE statements.
- Added a missing SYSCTL_DECL().
- Wrapped two very long lines.
- Avoid malloc()/free() inside sysctl string handling, in case it is
called to initialize a sysctl from a tunable, since malloc()/free() is
not ready when sysctls from the sysctl dataset are registered.
- Bumped FreeBSD version to indicate SYSCTL API change.
MFC after: 2 weeks
Sponsored by: Mellanox Technologies
call, which assumes the hardware is awake.
Turn ath_update_mcast() into a routine that's only called from the
net80211 layer - and it forces the hardware awake first.
This fixes a LOR from the EDMA RX path which calls ath_mode_init()
with the RX lock held - the driver lock can't also be grabbed.
This path assumes that the ath_mode_init() callers all wake up
the NIC first.
Tested:
* AR9485, STA mode, powersave
The hardware can generate its own frames (eg RTS/CTS exchanges, other
kinds of 802.11 management stuff, especially when it comes to 802.11n)
and these also have PWRMGT flags. So if the VAP is asleep but the
NIC is in force-awake for some reason, ensure that the self-generated
frames have PWRMGT set to 1.
Now, this (like basically everything to do with powersave) is still
racy - the only way to guarantee that it's all actually consistent
is to pause transmit and let it finish before transitioning the VAP
to sleep, but this at least gets the basic method of tracking and
updating the state debugged.
Tested:
* AR5416, STA mode
* AR9380, STA mode
fixes and beacon programming / debugging into the ath(4) driver.
The basic power save tracking:
* Add some new code to track the current desired powersave state; and
* Add some reference count tracking so we know when the NIC is awake; then
* Add code in all the points where we're about to touch the hardware and
push it to force-wake.
Then, how things are moved into power save:
* Only move into network-sleep during a RUN->SLEEP transition;
* Force wake the hardware up everywhere that we're about to touch
the hardware.
The net80211 stack takes care of doing RUN<->SLEEP<->(other) state
transitions so we don't have to do it in the driver.
Next, when to wake things up:
* In short - everywhere we touch the hardware.
* The hardware will take care of staying awake if things are queued
in the transmit queue(s); it'll then transit down to sleep if
there's nothing left. This way we don't have to track the
software / hardware transmit queue(s) and keep the hardware
awake for those.
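The resulting pattern around any hardware access looks roughly like this
(ath_power_set_power_state()/ath_power_restore_power_state() are the
ref-counting helpers this work adds; the exact locking shown is a sketch):

    ATH_LOCK(sc);
    ath_power_set_power_state(sc, HAL_PM_AWAKE);    /* ref++, force-wake if needed */
    ATH_UNLOCK(sc);

    /* ... touch registers / program the hardware ... */

    ATH_LOCK(sc);
    ath_power_restore_power_state(sc);              /* ref--, may let the chip sleep */
    ATH_UNLOCK(sc);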
Then, some transmit path fixes that aren't related but useful:
* Force EAPOL frames to go out at the lowest rate. This improves
reliability during the encryption handshake after 802.11
negotiation.
Next, some reset path fixes!
* Fix the overlap between reset and transmit pause so we don't
transmit frames during a reset.
* Some noisy environments will end up taking a lot longer to reset
than normal, so extend the reset period and raise the
reset interval to be more realistic and give the hardware some
time to finish calibration.
* Skip calibration during the reset path. Tsk!
Then, beacon fixes in station mode!
* Add a _lot_ more debugging in the station beacon reset path.
This is all quite fluid right now.
* Modify the STA beacon programming code to try and take
the TU gap between desired TSF and the target TU into
account. (Lifted from QCA.)
Tested:
* AR5210
* AR5211
* AR5212
* AR5413
* AR5416
* AR9280
* AR9285
TODO:
* More AP, IBSS, mesh, TDMA testing
* Thorough AR9380 and later testing!
* AR9160 and AR9287 testing
Obtained from: QCA
concurrent updates from any completing transmits in other threads.
This was exposed when doing power save work - net80211 is constantly
doing reassociations and it's causing the rate control state to get
blanked out. This could cause the rate control code to assert.
This should be MFCed to stable/10 as it's a stability fix.
Tested:
* AR5416, STA
MFC after: 7 days
Yes, this means that sc_invalid is slightly racy, but there are other
issues here which need fixing.
This fixes a source of eventual LORs - ath_init() grabs ATH_LOCK to do
work and releases it before it calls ieee80211_start_all().
ieee80211_start_all() will grab the net80211 comlock to iterate over
the VAPs.
TODO:
* .. I should just migrate the ieee80211_start_all() work to a
deferred task so it can be done later; it doesn't have to be
immediately done.
Tested:
* AR5416, STA mode
to this event, adding if_var.h to files that do need it. Also, add
all headers that were previously pulled in only via implicit pollution
through if_var.h.
Sponsored by: Netflix
Sponsored by: Nginx, Inc.
The AR9485 chip and AR933x SoC both implement LNA diversity.
There are a few extra things that need to happen before this can be
flipped on for those chips (mostly to do with setting up the different
bias values and LNA1/LNA2 RSSI differences) but the first stage is
putting this code into the driver layer so it can be reused.
This has the added benefit of making it easier to expose configuration
options and diagnostic information via the ioctl API. That's not yet
being done but it sure would be nice to do so.
Tested:
* AR9285, with LNA diversity enabled
* AR9285, with LNA diversity disabled in EEPROM
for the WB195 combo NIC - an AR9285 w/ an AR3011 USB bluetooth NIC.
The AR3011 is wired up using a 3-wire coexistence scheme to the AR9285.
The code in if_ath_btcoex.c sets up the initial hardware mapping
and coexistence configuration. There's nothing special about it -
it's static; it doesn't try to configure bluetooth / MAC traffic priorities
or try to figure out what's actually going on. It's enough to stop basic
bluetooth traffic from causing traffic stalls and disassociation from
the wireless network.
To use this code, you must have the above NIC. No, it won't work
for the AR9287+AR3012, nor the AR9485, AR9462 or AR955x combo cards.
Then you set a kernel hint before boot or before kldload, where 'X'
is the unit number of your AR9285 NIC:
# kenv hint.ath.X.btcoex_profile=wb195
This will then appear in your boot messages:
[100482] athX: Enabling WB195 BTCOEX
This code is going to evolve pretty quickly (well, depending upon my
spare time) so don't assume the btcoex API is going to stay stable.
In order to use the bluetooth side, you must also load in firmware using
ath3kfw and the binary firmware file (ath3k-1.fw in my case.)
Tested:
* AR9280, no interference
* WB195 - AR9285 + AR3011 combo; STA mode; basic bluetooth inquiries
were enough to cause traffic stalls and disassociations. This has
stopped with the btcoex profile code.
TODO:
* Importantly - the AR9285 needs ASPM disabled if bluetooth coexistence
is enabled. No, I don't know why. It's likely some kind of bug to do
with the AR3011 sending bluetooth coexistence signals whilst the device
is asleep. Since we don't actually sleep the MAC just yet, it shouldn't
be a problem. That said, to be totally correct:
+ ASPM should be disabled - upon attach and wakeup
+ The PCIe powersave HAL code should never be called
Look at what the ath9k driver does for inspiration.
* Add WB197 (AR9287+AR3012) support
* Add support for the AR9485, which is another combo like the AR9285
* The later NICs have a different signaling mechanism between the MAC
and the bluetooth device; I haven't even begun to experiment with
making that HAL code work. But it should be a lot more automatic.
* The hardware can do much more interesting traffic weighting with
bluetooth and wifi traffic. None of this is currently used.
Ideally someone would code up something to watch the bluetooth traffic
GPIO (via an interrupt) and then watch it go high/low; then figure out
what the bluetooth traffic is and adjust things appropriately.
* If I get the time I may add in some code to at least track this stuff
and expose statistics. But it's up to someone else to experiment with
the bluetooth coexistence support and add the interesting stuff (like
"real" detection of bulk, audio, etc bluetooth traffic patterns and
change wifi parameters appropriately - eg, maximum aggregate length,
transmit power, using quiet time to control TX duty cycle, etc.)
the RX antenna field.
The AR9285/AR9485 use an LNA mixer to determine how to combine the signals
from the two antennas. This is encoded in the RSSI fields (ctl/ext) for
chain 2. So, let's use that here.
This maps RX antennas 0->3 to the RX mixer configuration used to
receive a frame. There's more that can be done but this is good enough
to diagnose if the hardware is doing "odd" things like trying to
receive frames on LNA2 (ie, antenna 2 or "alt" antenna) when there's
only one antenna connected.
Tested:
* AR9285, STA mode
* Grab the reset lock first, so any subsequent interrupt, TX, RX work
will fail
* Then shut down interrupts
* Then wait for TX/RX to finish running
At this point no further work will be running, so it's safe to do the
reset path code.
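Roughly, that ordering looks like this in the driver (simplified; the counters
and exact waiting mechanism are a sketch):

    ATH_PCU_LOCK(sc);
    sc->sc_inreset_cnt++;           /* new interrupt/TX/RX work will now bail out */
    ath_hal_intrset(ah, 0);         /* then shut down interrupts */
    ATH_PCU_UNLOCK(sc);

    /*
     * Wait for any in-flight TX/RX processing (eg sc->sc_txproc_cnt and
     * sc->sc_rxproc_cnt dropping to zero) before doing the actual reset.
     */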
PR: kern/179232
and if queue mechanism; also fix up (non-11n) TX fragment handling.
This may result in a bit of a performance drop for now but I plan on
debugging and resolving this at a later stage.
Whilst here, fix the transmit path so fragment transmission works.
The TX fragmentation handling is a bit more special. In order to
correctly transmit TX fragments, there's a bunch of corner cases that
need to be handled:
* They must be transmitted back to back, in the same order..
* .. ie, you need to hold the TX lock whilst transmitting this
set of fragments rather than interleaving it with other MSDUs
destined to other nodes;
* The length of the next fragment is required when transmitting, in
order to correctly set the NAV field in the current frame to the
length of the next frame; which requires ..
* .. that we know the transmit duration of the next frame, which ..
* .. requires us to set the rate of all fragments to the same length,
or make the decision up-front, etc.
To facilitate this, I've added a new ath_buf field to describe the
length of the next fragment. This avoids having to keep the mbuf
chain together. This used to work before my 11n TX path work because
the ath_tx_start() routine would be handed a single mbuf with m_nextpkt
pointing to the next frame, and that would be maintained all the way
up to when the duration calculation was done. This doesn't hold
true any longer - the actual queuing may occur at any point in the
future (think ath_node TID software queuing) so this information
needs to be maintained.
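Conceptually the setup path just stashes the next fragment's length on the
current buffer (the field name is illustrative):

    /* at queue time, while the fragment list is still linked via m_nextpkt */
    bf->bf_nextfraglen = (m->m_nextpkt != NULL) ?
        m->m_nextpkt->m_pkthdr.len : 0;

    /*
     * Later, when the descriptor/duration (NAV) is actually set up, the
     * mbuf list may already be unlinked, so use bf->bf_nextfraglen instead.
     */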
Right now this does work for non-11n frames but it doesn't at all
enforce the same rate control decision for all frames in the fragment list.
I plan on fixing this in a followup commit.
RTS/CTS has the same issue, I'll look at fixing this in a subsequent
commit.
Finally, 11n fragment support requires the driver to have fully
decided what the rate scenario setup is - including 20/40MHz,
short/long GI, STBC, LDPC, number of streams, etc. Right now that
decision is (currently) made _after_ the NAV field value is updated.
I'll fix all of this in subsequent commits.
Tested:
* AR5416, STA, transmitting 11abg fragments
* AR5416, STA, 11n fragments work but the NAV field is incorrect for
the reasons above.
TODO:
* It would be nice to be able to queue mbufs per-node and per-TID so
we can only queue ath_buf entries when it's time to assemble frames
to send to the hardware.
But honestly, we should just do that level of software queue management
in net80211 rather than ath(4), so I'm going to leave this alone for now.
* More thorough AP, mesh and adhoc testing.
* Ensure that net80211 doesn't hand us fragmented frames when A-MPDU has
been negotiated, as we can't do software retransmission of fragments.
* .. set CLRDMASK when transmitting fragments, just to make sure.
traffic.
When transmitting non-aggregate traffic, we need to keep the hardware
busy whilst transmitting or small bursts in txdone/tx latency will
kill us.
This restores non-aggregate iperf performance, especially when doing
TDMA.
Tested:
* AR5416<->AR5416, TDMA
* AR5416 STA <-> AR9280 AP
of course.)
There's a few things that needed to happen:
* In case someone decides to set the beacon transmission rate to be
at an MCS rate, use the MCS-aware version of the duration calculation
to figure out how long the received beacon frame was.
* If TxOP enforcing is available on the hardware and we're doing TDMA,
enable it after a reset and set the TDMA guard interval to zero.
This seems to behave fine.
TODO:
* Although I haven't yet seen packet loss, the PHY errors that would be
triggered (specifically Transmit-Override-Receive) aren't enabled
by the 11n HAL. I'll have to do some work to enable these PHY errors
for debugging.
What broke:
* My recent changes to the TX queue handling have resulted in the driver
not keeping the hardware queue properly filled when doing non-aggregate
traffic. I have a patch to commit soon which fixes this situation
(albeit by reminding me about how my ath driver locking isn't working
out, sigh.)
So if you want to test this without updating to the next set of patches
that I commit, just bump the sysctl dev.ath.X.hwq_limit from 2 to 32.
Tested:
* AR5416 <-> AR5416, with ampdu disabled, HT40, 5GHz, MCS12+Short-GI.
I saw 30mbit/sec in both directions using a bidirectional UDP test.
The list-based DMA engine has the following behaviour:
* When the DMA engine is in the init state, you can write the first
descriptor address to the QCU TxDP register and it will work.
* Then when it hits the end of the list (ie, it either hits a NULL
link pointer, OR it hits a descriptor with VEOL set) the QCU
stops, and the TxDP points to the last descriptor that was transmitted.
* Then when you want to transmit a new frame, you can either:
+ write the head of the new list into TxDP, or
+ write the head of the new list into the link pointer of the
last completed descriptor (ie, where TxDP points), then kick
TxE to restart transmission on that QCU.
* The hardware then will re-read the descriptor to pick up the link
pointer and then jump to that.
Now, the quirks:
* If you write a TxDP when there's been no previous TxDP (ie, it's 0),
it works.
* If you write a TxDP in any other instance, the TxDP write may actually
fail. Thus, when you start transmission, it will re-read the last
transmitted descriptor to get the link pointer, NOT just start a new
transmission.
So the correct thing to do here is:
* ALWAYS use the holding descriptor (ie, the last transmitted descriptor
that we've kept safe) and use the link pointer in _THAT_ to transmit
the next frame.
* NEVER write to the TxDP after you've done the initial write.
* .. also, don't do this whilst you're also resetting the NIC.
With this in mind, the following patch does basically the above.
* Since this encapsulates Sam's issues with the QCU behaviour w/ TDMA,
kill the TDMA special case and replace it with the above.
* Add a new TXQ flag - PUTRUNNING - which indicates that we've started
DMA.
* Clear that flag when DMA has been shutdown.
* Ensure that we're not restarting DMA with PUTRUNNING enabled.
* Fix the link pointer logic during TXQ drain - we should always ensure
the link pointer does point to something if there's a list of frames.
Having it be NULL as an indication that DMA has finished or during
a reset causes trouble.
Now, given all of this, I want to nuke axq_link from orbit. There are now HAL
methods to get and set the link pointer of a descriptor, so what we
should do instead is to update the right link pointer.
* If there's a holding descriptor and an empty TXQ list, set the
link pointer of said holding descriptor to the new frame.
* If there's a non-empty TXQ list, set the link pointer of the
last descriptor in the list to the new frame.
* Nuke axq_link from orbit.
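A sketch of what the restart logic looks like with the link-pointer HAL
accessors (field and macro names are as I remember them, so treat this as
illustrative rather than the literal driver code):

    if (!TAILQ_EMPTY(&txq->axq_q)) {
            /* frames already pending: chain onto the last descriptor in the list */
            bf_last = ATH_TXQ_LAST(txq, axq_q_s);
            ath_hal_settxdesclink(ah, bf_last->bf_lastds, bf->bf_daddr);
    } else if (txq->axq_holdingbf != NULL) {
            /* idle queue: chain onto the holding (last completed) descriptor */
            ath_hal_settxdesclink(ah,
                txq->axq_holdingbf->bf_lastds, bf->bf_daddr);
    } else {
            /* first frame after init/reset: the TxDP write is safe here */
            ath_hal_puttxbuf(ah, txq->axq_qnum, bf->bf_daddr);
    }
    ath_hal_txstart(ah, txq->axq_qnum);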
Note:
* The AR9380 doesn't need this. FIFO TX writes are atomic. As long as
we don't append to a list of frames that we've already passed to the
hardware, none of the above applies. The holding descriptor stuff
is still needed to ensure the hardware can re-read a completed
descriptor to move onto the next one, but we restart DMA by pushing in
a new FIFO entry into the TX QCU. That doesn't require any real
gymnastics.
Tested:
* AR5210, AR5211, AR5212, AR5416, AR9380 - STA mode.