In fact, bus_dmamem_alloc() happily NULLs the dmat pointer passed in,
before replacing it with its own.
This fixes a MIPS crash when kldload'ing if_ath/if_ath_pci -
bus_dmamap_destroy() was being passed a NULL dmat pointer and was doing
all kinds of very bad things.
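A minimal sketch of the teardown ordering this implies, using an
illustrative bookkeeping struct (the field names are not the committed
ones):

#include <sys/param.h>
#include <sys/bus.h>
#include <machine/bus.h>

/* Illustrative bookkeeping; the real driver's struct differs. */
struct descdma_sketch {
	bus_dma_tag_t	dd_dmat;	/* tag the memory was allocated with */
	bus_dmamap_t	dd_dmamap;	/* map from bus_dmamem_alloc() */
	void		*dd_desc;	/* descriptor memory */
};

static void
descdma_cleanup_sketch(struct descdma_sketch *dd)
{
	/*
	 * Every call below must get the same non-NULL tag that the
	 * allocation used; handing the busdma layer a NULL tag is
	 * exactly the MIPS crash described above.
	 */
	if (dd->dd_desc != NULL) {
		bus_dmamap_unload(dd->dd_dmat, dd->dd_dmamap);
		bus_dmamem_free(dd->dd_dmat, dd->dd_desc, dd->dd_dmamap);
		dd->dd_desc = NULL;
	}
	if (dd->dd_dmat != NULL) {
		bus_dma_tag_destroy(dd->dd_dmat);
		dd->dd_dmat = NULL;
	}
}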
Reviewed by: scottl
re-used by the upcoming EDMA TX completion code.
Make ath_stoptxdma() public, again so the EDMA TX code can use it.
Don't check the TXQ bitmap in the ISR when doing EDMA work, as it
doesn't apply to EDMA.
necessary to "do" EDMA.
It was just using the TX completion status for logging information about
the descriptor completion. Since with EDMA we don't know this without
checking the TX completion FIFO, we can't provide this information.
So don't.
Now that I understand what's going on with this, I've realised that
it's going to be quite difficult to implement a processq method in
the EDMA case. Because there's a separate TX status FIFO, I can't
just run processq() on each EDMA TXQ to see what's finished.
I have to actually run the TX status queue and handle individual
TXQs.
So:
* Unmethodize ath_tx_processq();
* Leave ath_tx_draintxq() as a method, as it only uses the completion status
for debugging rather than actively completing the frames (ie, all frames
here are failed);
* Methodize ath_draintxq().
The EDMA ath_draintxq() will have to take care of running the TX
completion FIFO before (potentially) freeing frames in the queue.
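So, a rough sketch of the resulting split (the member names here are
illustrative, not the committed ones):

struct ath_softc;		/* opaque for this sketch */
struct ath_txq;

struct ath_tx_methods_sketch {
	/*
	 * Chip-specific: the EDMA version has to reap the TX status
	 * FIFO before it knows which frames on which TXQs completed.
	 */
	void	(*sc_draintxq)(struct ath_softc *sc);
	/*
	 * Still a method: it only uses the completion status for
	 * debugging, since every frame drained here is failed anyway.
	 */
	void	(*sc_tx_draintxq)(struct ath_softc *sc,
		    struct ath_txq *txq);
};

/* No longer a method: legacy-only, called directly. */
int	ath_tx_processq(struct ath_softc *sc, struct ath_txq *txq,
	    int dosched);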
The only two places where ath_tx_draintxq() (on a single TXQ) is used are:
* ath_draintxq(); and
* the CABQ handling in the beacon setup code - it drains the CABQ before
populating the CABQ with frames for a new beacon (when doing multi-VAP
operation.)
So it's quite possible that once I methodize the CABQ and beacon handling,
I can just drop ath_tx_draintxq() in its entirety.
Finally, it's also quite possible that I can remove ath_tx_draintxq()
as a method in the future and just "teach" the one function not to
check the status when doing EDMA.
EDMA HAL hardware.
* The EDMA HAL code assumes the nexttbtt and intval values are in TU/8
units, rather than TU. For now, just "hack" around that here (see the
sketch below), at least until I code up something to translate it in
the HAL.
* Setup some different TXQ flags for EDMA hardware.
* The EDMA HAL doesn't support setting the first rate series via
ath_hal_setuptxdesc() - instead, a call to ath_hal_set11nratescenario()
is always required. So for now, just do an 11n rate series setup
for EDMA beacon frames.
This allows my AR9380 to successfully transmit beacon frames.
However, CABQ TX and all normal data frame TX and TX completion is
still not functional and will require some more significant code churn
to make work.
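For reference, the interim TU -> TU/8 workaround looks roughly like the
following sketch; sc_isedma and the flag-mask handling are my
assumptions, not the committed code:

if (sc->sc_isedma) {		/* assumed "is an EDMA chip" flag */
	nexttbtt <<= 3;		/* TU -> TU/8 */
	intval = ((intval & HAL_BEACON_PERIOD) << 3) |
	    (intval & ~HAL_BEACON_PERIOD);	/* scale period, keep flags */
}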
enabled.
The legacy (pre-802.11n) hardware doesn't support this - although
the AR5212 era hardware supports MRR, it doesn't have all the bits
needed to support MRR + RTS/CTS. The AR5416 and later support
a packet duration and RTS/CTS flags per rate scenario, so we should
support it.
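To illustrate, a hedged sketch of a per-series RTS/CTS setup via the
11n rate scenario call; the field and flag names are from memory, and
the rate codes (rate0, rate1, ctsrate) are placeholders:

HAL_11N_RATE_SERIES series[4];

memset(series, 0, sizeof(series));
series[0].Tries = 2;
series[0].Rate = rate0;				/* placeholder rate code */
series[0].RateFlags |= HAL_RATESERIES_RTS_CTS;	/* protect this series */
series[1].Tries = 2;
series[1].Rate = rate1;				/* fallback, also protected */
series[1].RateFlags |= HAL_RATESERIES_RTS_CTS;
/* ... series 2 and 3 filled in as further fallbacks ... */
ath_hal_set11nratescenario(ah, ds, 0, ctsrate, series, 4, 0);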
Tested:
* AR9280, STA
PR: kern/170302
wrapping.
The previous code was only preventing wrapping at descriptor "block"
boundaries rather than for individual descriptors. It sounds equivalent,
but it isn't.
r238824 changed the descriptor allocation to enforce that an individual
descriptor doesn't wrap a 4KiB boundary rather than the whole block
of descriptors. Eg, for TX descriptors, they're allocated in blocks
of 10 descriptors for each ath_buf (for scatter/gather DMA.)
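The per-descriptor condition is along these lines (a sketch, not the
committed macro):

#include <sys/types.h>

/*
 * True if a single descriptor of desclen bytes at physical address
 * daddr would cross a 4KiB boundary.
 */
static __inline int
ath_desc_crosses_4kb(uint32_t daddr, int desclen)
{

	return (((daddr & 0xfff) + desclen) > 0x1000);
}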
buffers.
ath_descdma is now being used for things other than the classical
combination of ath_buf + ath_desc allocations. In this particular case,
don't try to free and blank out the ath_buf list if it's not passed in.
of buffers, only the number of descriptors.
This involves:
* Change the allocation function to not use nbuf at all;
* When calling it, pass in "nbuf * ndesc" to correctly update how many
descriptors are being allocated.
Whilst here, fix the descriptor allocation code to correctly allocate
a larger buffer size if the Merlin 4KB WAR is required. It overallocates
descriptors even for blocks that would never cross a 4KB boundary, but
that can be fixed at a later stage.
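A hedged sketch of the two changes; the allocator and capability-check
names here are assumptions:

/* Descriptor-only allocation: no buffer count, just a descriptor count. */
error = ath_descdma_alloc_desc_sketch(sc, dd, name, ds_size,
    nbuf * ndesc);

/* Inside the allocator: pad for the Merlin 4KB WAR if required. */
dd_len = ndesc * ds_size;
if (ath_hal_split4ktrans_sketch(sc->sc_ah)) {	/* assumed capability check */
	/* Worst case: one skipped descriptor per 4KiB page. */
	dd_len += ((dd_len / 4096) + 1) * ds_size;
}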
The AR9300 and later descriptors are 128 bytes; however, I'd like to
make sure that size isn't used for earlier chips.
* Populate the TX descriptor length field in the softc with
sizeof(ath_desc) (see the sketch below)
* Use this field when allocating the TX descriptors
* Pre-AR93xx TX/RX descriptors will use the ath_desc size; newer ones will
query the HAL for these sizes.
* Introduce TX DMA setup/teardown methods, mirroring what's done in
the RX path.
Although the TX DMA descriptors are set up via ath_desc_alloc() /
ath_desc_free(), the TX status descriptor ring will be allocated
in this path.
* Remove some of the TX EDMA capability probing from the RX path and
push it into the new TX EDMA path.
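A rough sketch of the plumbing described above; the softc field and
method names are assumed rather than the committed ones:

/* Attach time, pre-AR93xx: fixed-size descriptors. */
sc->sc_tx_desclen = sizeof(struct ath_desc);
sc->sc_tx_statuslen = 0;	/* legacy: status lives in the descriptor */

/* EDMA chips would instead query the HAL for both lengths. */

/* TX DMA setup/teardown methods, mirroring the RX path. */
sc->sc_tx.xmit_setup = ath_legacy_dma_txsetup;		/* names assumed */
sc->sc_tx.xmit_teardown = ath_legacy_dma_txteardown;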
sized TX descriptor.
This is required for the AR93xx EDMA support which requires 128 byte
TX descriptors (which is significantly larger than the earlier
hardware.)
I was setting the RX EDMA buffer size to the full 4096 bytes rather
than just the RX data buffer portion. The hardware was likely getting
very confused
and DMAing descriptor portions into places it shouldn't, leading to
memory corruption and occasional panics.
Whilst here, don't bother allocating descriptors for the RX EDMA case.
We don't use those descriptors. Instead, just allocate ath_buf entries.
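In sketch form (the field names are assumptions), the fix is to program
only the data portion:

/* Buggy: told the hardware the full 4096-byte allocation size. */
/* Fixed (sketch): subtract the in-buffer descriptor/status area. */
bufsize = MJUMPAGESIZE - sc->sc_rx_statuslen;	/* data portion only */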
The RX EDMA support requires a modified approach to the RX descriptor
handling.
Specifically:
* There are now two RX queues - high and low priority;
* The RX queues are implemented as FIFOs; they're now an array of pointers
to buffers (sketched below);
* .. and the RX buffer and descriptor are in the same "buffer", rather than
being separate.
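A sketch of the resulting per-queue state (the field names are my
assumptions):

struct ath_buf;

struct ath_rx_edma_sketch {
	struct ath_buf	**m_fifo;	/* array of pointers: the FIFO */
	int		m_fifolen;	/* FIFO depth (hardware limit) */
	int		m_fifo_head;	/* next slot to hand to hardware */
	int		m_fifo_tail;	/* next slot to reap */
	int		m_fifo_depth;	/* entries currently pushed */
};

/* One FIFO each for the high- and low-priority RX queues. */
struct ath_rx_edma_sketch sc_rxedma_sketch[2];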
So to that end, this commit abstracts out most of the RX related functions
from the bulk of the driver. Notably, the RX DMA/buffer allocation isn't
updated, primarily because I haven't yet fleshed out what it should look
like.
Whilst I'm here, create a set of matching but mostly unimplemented EDMA
stubs.
Tested:
* AR9280, station mode
TODO:
* Thorough AP and other mode testing for non-EDMA chips;
* Figure out how to allocate RX buffers suitable for RX EDMA, including
correctly setting the mbuf length to compensate for the RX descriptor
and completion status area.
This includes a few new fields in each RXed frame:
* per chain RX RSSI (ctl and ext);
* current RX chainmask;
* EVM information;
* PHY error code;
* basic RX status bits (CRC error, PHY error, etc).
This is primarily to allow me to do some userland PHY error processing
for radar and spectral scan data. However, since EVM and per-chain RSSI
are provided, others may find it useful for a variety of tasks.
The default is to not compile in the radiotap vendor extensions, primarily
because tcpdump doesn't seem to handle the particular vendor extension
layout I'm using, and I'd rather not break existing code out there that
may be (badly) parsing the radiotap data.
Instead, add the option 'ATH_ENABLE_RADIOTAP_VENDOR_EXT' to your kernel
configuration file to enable these options.
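That is, in the kernel configuration file:

options 	ATH_ENABLE_RADIOTAP_VENDOR_EXT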
The existing code tries to use the beacon miss timer to signal that the AP
has gone away. Unfortunately this doesn't seem to be behaving itself.
I'll try to investigate why this is for the sake of completeness.
The result is the STA will stay "associated" to the AP it was associated
with when it suspended. It never receives a bmiss notification so it
never tries reassociating.
PR: kern/169084
fixed for 802.11n TX, this needs to be disabled or users will see
randomly hanging aggregation sessions.
Whilst I'm here, remove the warning about 802.11n being full of dragons.
It's nowhere near that scary now.
ath_start() is called.
This (defaulting to 10 frames) allows for a little headroom in the TX
ath_buf allocation, so buffer cloning is still possible.
This requires a lot more experimenting and tuning.
It also doesn't stop a node/TID from consuming all of the available
ath_buf's, especially when the node is going through high packet loss
or only talking at a low TX rate. It also doesn't stop a paused TID
from taking all of the ath_bufs. I'll look at fixing that up in subsequent
commits.
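The allocation-time check is roughly this sketch (the constant and
field names are assumptions):

#define	ATH_TXBUF_RESERVE_SKETCH	10

	/* In the buffer allocator: keep some headroom for cloning. */
	if (sc->sc_txbuf_cnt <= ATH_TXBUF_RESERVE_SKETCH)
		return (NULL);		/* caller must retry later */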
PR: kern/168170
traffic.
* Create sc_mgmt_txbuf and sc_mgmt_txdesc, initialise/free them appropriately.
* Create an enum to represent buffer types in the API (sketched below).
* Extend ath_getbuf() and _ath_getbuf_locked() to take the above enum.
* Right now anything sent via ic_raw_xmit() allocates via ATH_BUFTYPE_MGMT.
This may not be very useful.
* Add ATH_BUF_MGMT flag (ath_buf.bf_flags) which indicates the current buffer
is a mgmt buffer and should go back onto the mgmt free list.
* Extend 'txagg' to include debugging output for both normal and mgmt txbufs.
* When checking/clearing ATH_BUF_BUSY, do it on both TX pools.
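The API extension, sketched (the flag bit value is an assumption):

struct ath_softc;
struct ath_buf;

typedef enum {
	ATH_BUFTYPE_NORMAL	= 0,	/* allocate from the data pool */
	ATH_BUFTYPE_MGMT	= 1,	/* allocate from the mgmt pool */
} ath_buf_type_t;

/* bf_flags bit marking a buffer that returns to the mgmt free list. */
#define	ATH_BUF_MGMT	0x00000004	/* bit value assumed */

struct ath_buf *ath_getbuf(struct ath_softc *sc, ath_buf_type_t btype);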
Tested:
* STA mode, with heavy UDP injection via iperf. This filled the TX queue
however BARs were still going out successfully.
TODO:
* Initialise the mgmt buffers with ATH_BUF_MGMT and then ensure the right
type is being allocated and freed on the appropriate list. That'd save
a write operation (to bf->bf_flags) on each buffer alloc/free.
* Test on AP mode, ensure that BAR TX and probe responses go out nicely
when the main TX queue is filled (eg with paused traffic to a TID,
awaiting a BAR to complete.)
PR: kern/168170
it turns out that it negatively affects performance. I'm still
investigating exactly why deferring the IO causes such negative TCP
performance but doesn't affect UDP performance.
Leave the ath_tx_kick() change in there however; it's going to be useful
to have that there for if_transmit() work.
PR: kern/168649
called to "kick" along TX.
For now, schedule a taskqueue call.
Later on I may go back to the direct call of ath_rx_tasklet() - but for
now, this will do.
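In sketch form, the deferral (shown here for the RX side) looks like
this; the task field name is an assumption:

static void
ath_rx_kick_sketch(struct ath_softc *sc)
{

	/* Defer to the driver taskqueue instead of calling the tasklet. */
	taskqueue_enqueue(sc->sc_tq, &sc->sc_rxtask);
}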
I've tested UDP and TCP TX. UDP TX still achieves 240MBit, but TCP
TX gets stuck at around 100MBit or so, instead of the 150MBit it should
be at. I'll re-test with no ACPI/power/sleep states enabled at startup
and see what effect it has.
This is in preparation for supporting an if_transmit() path, which will
turn ath_tx_kick() into a no-op (as there won't be an ifnet
queue to service.)
Tested:
* AR9280 STA
TODO:
* test on AR5416, AR9160, AR928x STA/AP modes
PR: kern/168649
implementing parallel TX and TX/RX completion can be done without
simply abusing long-held locks.
Right now, multiple concurrent ath_start() entries can result in
frames being dequeued out of order. Well, they're dequeued in order
fine, but if there's any preemption or race between CPUs between:
* removing the frame from the ifnet, and
* calling and running ath_tx_start(), until the frame is placed on a
software or hardware TXQ
Then although dequeueing the frame is in-order, queueing it to the hardware
may be out of order.
This is solved in a lot of other drivers by just holding a TX lock over
a rather long period of time. This lets them continue to direct dispatch
without races between dequeue and hardware queue.
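In sketch form (the lock macros are illustrative; the node and ath_buf
lookups are elided), the long-held-lock approach is:

ATH_TX_LOCK(sc);			/* assumed driver-wide TX lock */
for (;;) {
	IFQ_DRV_DEQUEUE(&ifp->if_snd, m);	/* pull the next frame */
	if (m == NULL)
		break;
	/* ... look up the node, grab an ath_buf ... */
	ath_tx_start(sc, ni, bf, m);	/* still under the same lock */
}
ATH_TX_UNLOCK(sc);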
Note to observers: if_transmit() doesn't necessarily solve this.
It removes the ifnet from the main path, but the same issue exists if
there's some intermediary queue (eg a bufring, which as an aside also
may pull in ifnet when you're using ALTQ.)
So, until I can sit down and code up a much better way of doing parallel
TX, I'm going to leave the TX path using a deferred taskqueue task.
What I will likely head towards is doing a direct dispatch to hardware
or software via if_transmit(), but it'll require some driver changes to
allow queues to be made without using the really large ath_buf / ath_desc
entries.
TODO:
* Look at how feasible it'll be to just do direct dispatch to
ath_tx_start() from if_transmit(), avoiding doing _any_ intermediary
serialisation into a global queue. This may break ALTQ for example,
so I have to be delicate.
* It's quite likely that I should break up ath_tx_start() so it
deposits frames onto the software queues first, and then only fill
in the 802.11 fields when it's being queued to the hardware.
That will make the if_transmit() -> software queue path very
quick and lightweight.
* This has some very bad behaviour when using ACPI and Cx states.
I'll do some subsequent analysis using KTR and schedgraph and file
a follow-up PR or two.
PR: kern/168649
not to disable the PCIe PHY in preparation for reset.
Extend the enablepci method to have a "poweroff" flag which, if true,
means the hardware is about to go to sleep.
* Flesh out the pcie disable method for 11n chips, as they were defaulting
to the AR5212 (empty) PCIe disable method.
* Add accessor macros for the HAL PCIe enable/disable calls (sketched below).
* Call disable on ath_suspend()
* Call enable on ath_resume()
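The accessor macros, sketched from memory of the HAL method names
(treat these as assumptions):

#define	ath_hal_enablepcie(_ah, _restore, _poweroff) \
	((*(_ah)->ah_configPCIE)((_ah), (_restore), (_poweroff)))
#define	ath_hal_disablepcie(_ah) \
	((*(_ah)->ah_disablePCIE)((_ah)))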
NOTE:
* This has nothing to do with the NIC sleep/run state - the NIC still
will stay in network-run state rather than supporting network-sleep
state. This is preparation work for supporting correct suspend/resume
WARs for the 11n PCIe NICs.
TODO:
* It may be feasible at this point to keep the chip powered down during
initial probe/attach and only power it up upon the first configure/reset
pass. This however would require correct (for values of "correct")
tracking of the NIC power configuration state from the driver and that
just isn't attempted at the moment.
Tested:
* AR9280 on my Lenovo T60, but with no suspend/resume pass (yet).
There's some TX path TDMA code in if_ath_tx.c which should be migrated
out, but first I should likely try and verify/fix/repair the TDMA support
in 9.x and -HEAD.
* Migrate the RX processing out into if_ath_rx.c
* Migrate the TSF functions into if_ath_tsf.h, as inlines
This is in preparation for supporting the EDMA RX routines, required to
support the AR93xx series NICs.
TODO:
* ath_start() shouldn't be private, but it's called as part of
the RX path. I should likely migrate ath_rx_tasklet() back into
if_ath.c and then return this to be 'static'. The RX code really
shouldn't need to see TX routines (and vice versa.)
* ath_beacon_* should be in if_ath_beacon.[ch].
* ath_tdma_* should be in if_ath_tdma.[ch] ...
TX and RX PCU stop/drain routines have been thoroughly debugged.
It's also very likely that I should add hooks back up to the
interface glue (if_ath_pci / if_ath_ahb) to do any relevant
bus flushes that are required. A WMAC DDR flush may be required
for the AR9130 SoC.