Note: This is totally sub-optimal and a work in progress.
* Support filling an empty FIFO TXQ with frames from the ath_buf queue
in the ath_txq list. However, since there's (currently) no clean, easy
way to separate the frames that are in the FIFO from those that are
just waiting, the code waits for the FIFO to be totally empty before it
attempts to queue more (see the sketch after this list). This is highly
sub-optimal but is enough to get the ball rolling.
* A _lot_ of the code assumes that the TX status is filled out in the
struct ath_buf bf_status field. So for now, memcpy() the completion over.
* None of the TX drain / reset routines will attempt to complete completed
frames before draining, so it can't be used for 802.11n TX aggregation.
(This won't work anyway, as the aggregation TX descriptor API hasn't
yet been converted; and that'll happen in some future commits.)
* Fix an issue where the FIFO counter wasn't being incremented, leading
to the queue logic just plain not working.
* HAL_EIO means "descriptor wasn't valid", versus "not finished, don't
continue." So don't stop processing descriptors when HAL_EIO is hit.
* Don't service frame completion from the beacon queue. It isn't currently
fully set up like a real queue, and the first attempt at accessing the
queue lock will panic the kernel.
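A minimal sketch of that interim refill logic, assuming ath(4)'s axq_q,
axq_fifo_depth and HAL accessor names; ath_edma_tx_fifo_fill() itself is
a hypothetical name, not the actual function from this change:

    /*
     * Sketch only: refill the FIFO solely when the hardware has
     * drained it entirely, since frames in the FIFO can't yet be
     * told apart from frames that are merely software-queued.
     */
    static void
    ath_edma_tx_fifo_fill(struct ath_softc *sc, struct ath_txq *txq)
    {
        struct ath_buf *bf;

        /* Highly sub-optimal: require a completely empty FIFO. */
        if (txq->axq_fifo_depth != 0)
            return;

        while ((bf = TAILQ_FIRST(&txq->axq_q)) != NULL) {
            TAILQ_REMOVE(&txq->axq_q, bf, bf_list);
            ath_hal_puttxbuf(sc->sc_ah, txq->axq_qnum, bf->bf_daddr);
            txq->axq_fifo_depth++;  /* the counter that was being missed */
        }
        ath_hal_txstart(sc->sc_ah, txq->axq_qnum);
    }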
Tested:
* AR9380, STA mode
This commit is brought to you by said AR9380 in STA mode.
sizeof(struct ath_desc). This isn't correct for EDMA TX descriptors.
This popped up during iperf tests. Ping tests never created frames that
had enough segments to overflow into a second descriptor. However,
an iperf TCP test would do that after a few seconds; the second descriptor
would almost certainly contain garbage.
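A sketch of the shape of the fix; sc_tx_desclen is assumed here to hold
the chip's true TX descriptor size, which for EDMA chips is larger than
sizeof(struct ath_desc):

    /*
     * Sketch only: step through the TX descriptor list using a
     * run-time descriptor length rather than sizeof(struct ath_desc),
     * so the second and subsequent descriptors are addressed correctly.
     */
    static struct ath_desc *
    ath_txdesc(struct ath_softc *sc, struct ath_buf *bf, int idx)
    {
        return ((struct ath_desc *)
            ((caddr_t)bf->bf_desc + idx * sc->sc_tx_desclen));
    }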
Tested:
* AR9380, STA mode
* AR9280, STA mode (802.11n TX, legacy TX)
that we still have a problem with this whole structure of
locks and ip_input.c [it does not take the lock, which it should not
have to, but this *can* lead to crashes]. (I have seen it in our SQA
testbed... besides the one with a refcnt issue that I will
have SQA work on next week ;-)
EDMA code.
* create a new TX EDMA descriptor struct to represent TX EDMA descriptors
when doing debugging;
* implement an EDMA printing function which:
+ hardcodes the TX map size to 4 for now;
+ correctly prints out the number of segments - there's one descriptor
for up to 4 buffers (segments), not one for each segment;
+ prints out the 4 DS buffer pointers and lengths;
+ prints out the correct number of DWORDs in the TX descriptor.
TODO:
* Remove all of the hard-coded stuff. Ew.
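A sketch of such a printer; the struct layout below is purely
illustrative (the real AR93xx TX descriptor differs in detail) and the
map size stays hard-coded at 4, per the TODO:

    /* Illustrative debug-only layout, not the real chip layout. */
    struct ath_tx_edma_dbg {
        uint32_t ds_info;
        uint32_t ds_link;
        struct {
            uint32_t ds_ptr;    /* buffer pointer */
            uint32_t ds_len;    /* buffer length */
        } ds_buf[4];            /* hard-coded TX map size of 4 */
        uint32_t ds_hw[13];     /* remaining descriptor DWORDs */
    };

    static void
    ath_printtxdesc_edma(const struct ath_tx_edma_dbg *ds, int nseg)
    {
        int i, j, ndesc;

        /* One descriptor covers up to 4 buffers (segments). */
        ndesc = (nseg + 3) / 4;
        for (i = 0; i < ndesc; i++, ds++) {
            printf("D[%d]: link=0x%08x\n", i, ds->ds_link);
            for (j = 0; j < 4; j++)
                printf("  buf[%d]: ptr=0x%08x len=%u\n", j,
                    ds->ds_buf[j].ds_ptr, ds->ds_buf[j].ds_len);
        }
    }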
is marked correctly.
The existing logic assumed that the first descriptor is i == 0, which
doesn't hold for EDMA TX. In this instance, the first time filltxdesc()
is called can be up to i == 3.
So for a two-buffer descriptor:
* firstSeg is set to 0;
* lastSeg is set to 1;
* the ath_hal_filltxdesc() code will treat it as the last segment in
a descriptor chain and blank some of the descriptor fields, causing
the TX to stop.
When firstSeg is set to 1 (regardless of lastSeg), it overrides the
lastSeg setting. Thus, ath_hal_filltxdesc() won't blank out these
fields.
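A sketch of the corrected flag logic; 'seglen', 'ds', 'ds0' and
sc_tx_desclen are placeholders from the surrounding TX setup path, and
the multi-buffer length API is elided:

    int i, isFirstDesc = 1, isLastDesc;

    /* One EDMA TX descriptor covers up to four segments, so derive
     * the first/last flags from the descriptor's position for this
     * ath_buf, not from i == 0. */
    for (i = 0; i < bf->bf_nseg; i += 4) {
        isLastDesc = (i + 4 >= bf->bf_nseg);
        ath_hal_filltxdesc(sc->sc_ah, ds, seglen,
            isFirstDesc, isLastDesc, ds0);
        /* Only the first descriptor of the chain gets firstSeg = 1;
         * that also stops the HAL blanking the link fields. */
        isFirstDesc = 0;
        ds = (struct ath_desc *)((caddr_t)ds + sc->sc_tx_desclen);
    }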
Tested: AR9380, STA mode. With this, association is successful.
used, provides very little value given that FreeBSD has run on real H/W
for a long time.
Note that SKI is open-source (see http://ski.sourceforge.net), so
if there's interest and value again, then this code can be revived.
Discussed with: jhb
segments for the entire allocation to use kmem_alloc_attr() to allocate
KVM rather than using kmem_alloc_contig(). This avoids requiring
a single physically contiguous chunk in this case.
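A sketch against the 9.x-era KPI; 'low', 'high', 'alignment' and
'boundary' stand in for the tag's constraints:

    /* Multiple segments are acceptable: scattered pages mapped into
     * KVA suffice, so physical contiguity isn't demanded. */
    vaddr = (void *)kmem_alloc_attr(kernel_map, size, mflags,
        low, high, VM_MEMATTR_DEFAULT);

    /* Only when a single physically contiguous chunk is genuinely
     * required is the stricter, more failure-prone variant needed: */
    vaddr = (void *)kmem_alloc_contig(kernel_map, size, mflags,
        low, high, alignment, boundary, VM_MEMATTR_DEFAULT);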
Submitted by: Peter Jeremy (original version)
MFC after: 1 month
ensure that *all* tables and such are removed before
we start to free. This won't protect the hash in ip_input.c,
but in theory it should protect any other uses that *do* use locks.
MFC after: 1 week (or more)
First, pmap_clear_modify() is write protecting all mappings to the specified
page, not just clearing the modified bit. Specifically, it sets PTE_RO on
the PTE, which is wrong. Moreover, it is calling vm_page_dirty(), which is
not the expected behavior for pmap_clear_modify(). Generally speaking, the
machine-independent VM layer masks these mistakes. For example, setting
PTE_RO will result in additional soft faults, but not a catastrophe.
Second, pmap_clear_modify() may not clear the modified bits because it only
iterates over the PV list when the page has the PV_TABLE_MOD flag set and
elsewhere the pmap clears the PV_TABLE_MOD flag anytime a modified mapping
is write protected or destroyed. However, the page may still have other
mappings with the modified bit set.
Eliminate a stale comment.
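A sketch of the expected behaviour, with MIPS-style names and with
locking and TLB shootdown details elided:

    void
    pmap_clear_modify(vm_page_t m)
    {
        pv_entry_t pv;
        pt_entry_t *pte;

        /* Visit every mapping, not just those gated by PV_TABLE_MOD. */
        TAILQ_FOREACH(pv, &m->md.pv_list, pv_list) {
            pte = pmap_pte(pv->pv_pmap, pv->pv_va);
            if (pte_test(pte, PTE_D)) {
                /* Clear only the dirty bit: no PTE_RO, and no
                 * vm_page_dirty() call. */
                pte_clear(pte, PTE_D);
                pmap_update_page(pv->pv_pmap, pv->pv_va, *pte);
            }
        }
    }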
timestamp related stack variables to reference ms directly instead of ticks.
The h_ertt(4) Khelp module relies on TCP timestamp information in order to
calculate its enhanced RTT estimates, but was not updated as part of r231767.
Consequently, h_ertt has not been calculating correct RTT estimates since
r231767 was committed, which in turn broke all delay-based congestion control
algorithms because they rely on the h_ertt RTT estimates.
Fix the breakage by switching h_ertt to use tcp_ts_getticks() in place of all
previous uses of the ticks variable. This ensures all timestamp related
variables in h_ertt use the same units as the TCP stack and therefore results in
meaningful comparisons and RTT estimate calculations.
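A sketch of the substitution; 'tx_ts' is a hypothetical per-segment send
timestamp already recorded via tcp_ts_getticks():

    /* Both sides of the subtraction now come from tcp_ts_getticks(),
     * so the difference is a meaningful RTT in milliseconds. */
    uint32_t rtt_ms;

    rtt_ms = tcp_ts_getticks() - txsi->tx_ts;   /* was: ticks - txsi->tx_ts */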
Reported & tested by: Naeem Khademi (naeemk at ifi uio no)
Discussed with: bz
MFC after: 3 days
Basically, this is automatic rx zero copy when feasible. TCP payload is
DMA'd directly into the userspace buffer described by the uio submitted
in soreceive by an application.
- Works with sockets that are being handled by the TCP offload engine
of a T4 chip (you need t4_tom.ko module loaded after cxgbe, and an
"ifconfig +toe" on the cxgbe interface).
- Does not require any modification to the application.
- Not enabled by default. Use hw.t4nex.<X>.toe.ddp="1" to enable it
  (see the example after this list).
- Set up multiple DDP page sizes. When the driver attempts DDP it will
try to combine physically contiguous pages into regions of these sizes.
- Set the indicate size such that the payload carried in the indicate can
be copied in the header mbuf (and the 16K rx buffer can be recycled).
- Set DDP threshold to the max payload that the chip will coalesce and
deliver to the driver (this is ~16K by default, which is also why the
offload rx queue is backed by 16K buffers). If the chip is able to
coalesce up to the max it's allowed to, it's a good sign that the peer
is transmitting in bulk without any TCP PSH.
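A minimal enablement example, assuming adapter instance 0 and port
cxgbe0 (both illustrative):

    # /boot/loader.conf: opt in to DDP for adapter 0 (off by default)
    hw.t4nex.0.toe.ddp="1"

    # load the TOE module after cxgbe(4), then enable TOE on the port
    kldload t4_tom
    ifconfig cxgbe0 toe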
MFC after: 2 weeks
TCB. Filters are programmed by modifying the TCB too (via a different
routine) and the reply to any TCB update is delivered via a
CPL_SET_TCB_RPL. Figure out whether the reply is for a filter-write or
something else and route it appropriately.
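A sketch of that routing; handler names approximate cxgbe's and
do_set_tcb_rpl() is hypothetical:

    static int
    set_tcb_rpl_handler(struct sge_iq *iq, const struct rss_header *rss,
        struct mbuf *m)
    {
        struct adapter *sc = iq->adapter;
        const struct cpl_set_tcb_rpl *rpl = (const void *)(rss + 1);
        unsigned int tid = GET_TID(rpl);

        /* tids in the filter range mean the TCB update was a
         * filter write; hand the reply to the filter code. */
        if (is_ftid(sc, tid))
            return (t4_filter_rpl(iq, rss, m));

        /* Otherwise it's a reply to some other TCB update. */
        return (do_set_tcb_rpl(iq, rss, m));
    }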
MFC after: 2 weeks