- Move destruction of per-ring locks to netmap_dtor_locked to mirror the
initialization that happens in NIOCREGIF. Otherwise unloading a netmap-
capable interface that was never put into netmap mode would try to
mtx_destroy an uninitialized mutex, and panic.
- Destroy core_lock in netmap_detach, mirroring init in netmap_attach.
- Also comment out the knlist_destroy for now as there is currently no
knlist_init.
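A minimal sketch of the intended symmetry (ring and lock field names here are
assumptions, not the actual netmap structures): the mtx_destroy() calls belong
next to the teardown that only runs for interfaces that were actually
registered, mirroring the mtx_init() done in the NIOCREGIF path.

    /* NIOCREGIF handler: per-ring locks are created here... */
    mtx_init(&na->tx_rings[i].q_lock, "nm_txq_lock", NULL, MTX_DEF);

    /* ...so netmap_dtor_locked(), which is only reached for interfaces
     * that were put into netmap mode, is where they get destroyed: */
    mtx_destroy(&na->tx_rings[i].q_lock);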
Sponsored by: ADARA Networks
Reviewed by: luigi@
CCB at a time outstanding reliable. It's not there yet, but this
is the direction to go in so might as well commit. So far,
multiple CCBs at a time work (see the ISP_INTERNAL_TARGET test mode),
but it fails if there are more downstream than the SIM wants
to handle and SRR is sort of confused when this happens, plus
it is not entirely clear what one does if a CCB/CTIO fails
and you have more in flight (that don't fail, say) and more queued
up at the SIM level that haven't been started yet.
Some of this is driven because there apparently is no flow control
to requeue XPT_CONTINUE_IO requests like there is for XPT_SCSI_IO
requests. It is also driven by the fact that the few target mode
periph drivers there are aren't really set up for handling pushback;
heck, most of them don't even check for errors (and what would they
really do with them anyway? It's the initiator's problem, really....).
The data transfer arithmetic has been worked over again to handle
multiple outstanding commands, so you have a notion of what's been
moved already as well as what's currently in flight. It turns out that
this led to uncovering a REPORT_LUNS bug in the ISP_INTERNAL_TARGET
code which was sending back 24 bytes of rpl data instead of the
specified 16. What happened here, furthermore, is that sending back
16 bytes and reporting an overrun of 8 bytes made the initiator
(running FC-Tape aware f/w) mad enough to request, and keep
requesting, another FCP response (I guess it didn't like the answer
so kept asking for it again).
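For reference, the REPORT LUNS length arithmetic involved (standard SCSI
layout, shown as a sketch rather than the actual emulation code):

    /* REPORT LUNS parameter data: 4-byte LUN list length, 4 reserved
     * bytes, then one 8-byte descriptor per LUN. */
    rpl_len = 8 + (nluns * 8);    /* one LUN -> 16 bytes, not 24 */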
Sponsored by: Spectralogic
MFC after: 1 month
Starting or stopping the IPMI watchdog is rather expensive with the
current implementation as all IPMI requests are bounced via a thread.
This is not viable during shutdown or dumps, and this also avoids the
headache in the common case where the watchdog is not enabled. The IPMI
watchdog should probably be reworked to not use a separate thread, to fix
this for the case when the watchdog timer is enabled.
MFC after: 2 weeks
packet delivery, always enqueue when possible. Also
correct the DEPLETED test as multiple bits might be
set. Thanks to Randall Stewart for the changes!
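The shape of the DEPLETED fix, sketched generically (flag and helper names
illustrative): test the bit rather than comparing the whole word, since other
state bits may be set at the same time.

    if ((state & QUEUE_DEPLETED) != 0)    /* correct: bit test */
        requeue_packet();
    /* not: if (state == QUEUE_DEPLETED), which fails once any other
     * bit is also set. */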
on the secondary side of a bridge will not be propagated to the primary
bus unless this is enabled. Busmastering is not enabled by default (we
have relied on firmware to set this bit to date). The OS needs to set it
for any bridges not configured by system firmware.
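A hedged sketch of what this looks like in a bridge driver's attach or resume
path, using the standard pci(9) helper:

    /* Allow transactions initiated behind the bridge to be forwarded
     * to the primary bus; firmware may not have set this bit for us. */
    pci_enable_busmaster(dev);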
Tested by: Steve Polyack korvus comcast net
MFC after: 2 weeks
'fw_hdr_intfver' into an anonymous enum, which avoids a clang 3.2
warning about all the enum values being the same value.
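The general shape of the change, with illustrative member names (the real
list lives in the cxgbe firmware interface header):

    /* Previously these lived in named enums whose members all carried
     * the same value, which clang 3.2 warns about; an anonymous enum
     * avoids the warning. */
    enum {
        FW_HDR_INTFVER_NIC  = 0x00,
        FW_HDR_INTFVER_VNIC = 0x00,
        FW_HDR_INTFVER_OFLD = 0x00,
    };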
Reviewed by: np
MFC after: 1 week
When issuing a non-DMA command, make sure to set the "remaining length of
command to be transferred via DMA" (sc_cmdlen) to zero up-front, otherwise
we might get confused on the command completion interrupt (no DMA active but
still data left to transfer).
- Implement handling of MSG_IGN_WIDE_RESIDUE which some targets produce, as
just rejecting these leads to a resend and disconnect loop.
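A rough sketch of the idea (message buffer and residual fields are
hypothetical, simplified from the real state machine): IGNORE WIDE RESIDUE is
a two-byte message whose second byte says how many bytes of the last wide data
transfer were padding, so back those bytes out of the accounting instead of
sending MESSAGE REJECT.

    case MSG_IGN_WIDE_RESIDUE:
        residue = sc->sc_imess[1];    /* typically 1 on a 16-bit bus */
        sc->sc_dleft += residue;      /* those bytes were never valid data */
        sc->sc_dp -= residue;
        break;                        /* handled; no MESSAGE REJECT */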
Reported and tested by: mjacob
MFC after: 3 days
to pull vm_param.h was removed. The other big dependency of vm_page.h on
vm_param.h is the PA_LOCK* definitions, which are only needed for
in-kernel code, because modules use KBI-safe functions to lock the
pages.
Stop including vm_param.h into vm_page.h. Include vm_param.h
explicitly for the kernel code which needs it.
Suggested and reviewed by: alc
MFC after: 2 weeks
that the wrong UART reference clock will be used for a few of the IDs.
It is currently not possible to figure that out because the Linux FTDI
driver detects this at run time, not compile time, based on the bcdDevice
field of the USB device descriptor. Some of the IDs in usbdevs are not
sorted according to the product ID value. Please feel free to fix this.
I'm out of my xemacs magic today.
This synchronises us with the Linux kernel at kernel.org (HEAD).
MFC after: 2 weeks
array, similar to what filltxdesc() uses.
This removes the last reference to ds_data in the TX path outside of
debugging statements. These need to be adjusted/fixed.
Tested:
* AR9280 STA/AP with iperf TCP traffic
The existing API only exposes 'seglen' (the current buffer (segment) length)
with the data buffer pointer set in 'ds_data'. This is fine for the legacy
DMA engine but it won't work for the EDMA engines.
The EDMA engine has a significantly different TX descriptor layout.
* The legacy DMA engine had a ds_data pointer at the same offset in the
descriptor for both TX and RX buffers;
* The EDMA engine has no ds_data for RX - the data is DMAed after the
descriptor;
* The EDMA engine has support for 4 TX buffer/segment pairs in the TX
DMA descriptor;
* The EDMA TX completion is in a different FIFO, and the driver will
'link' the status completion entry to a QCU by a "QCU ID".
I don't know why it's just not filled in by the hardware, alas.
So given that, here are the changes:
* Instead of directly fondling 'ds_data' in ath_desc, change
ath_hal_filltxdesc() to take an array of buffer pointers as well
as segment len pointers (see the sketch after this list);
* The EDMA TX completion status wants a descriptor and queue id.
This (for now) uses bf_state.bfs_txq and will extract the hardware QCU
ID from that.
* .. and this is ugly and wasteful; it should change to just store
the QCU in the bf_state and save 3/7 bytes in the process.
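As a sketch of the driver-side shape of the new call (array names are
illustrative, not the exact HAL prototype): the busdma segment list gets
gathered into small parallel arrays that are passed in, instead of the HAL
reading ds_data out of the descriptor itself.

    bus_addr_t bufs[4];    /* EDMA allows up to 4 buffers per TX descriptor */
    uint32_t lens[4];
    int i;

    for (i = 0; i < nsegs && i < 4; i++) {
        bufs[i] = segs[i].ds_addr;
        lens[i] = segs[i].ds_len;
    }
    /* ...then hand 'bufs'/'lens' to ath_hal_filltxdesc() rather than
     * poking ds->ds_data directly. */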
Now, the weird crap:
* The aggregate TX path was using bf_state->bfs_txq for the TXQ, rather than
taking a function argument. I've tidied that up.
* The multicast queue frames get put on a software TXQ and then that is
appended to the hardware CABQ when appropriate. So for now, make sure
that bf_state->bfs_txq points at the CABQ when adding frames to the
multicast queue.
* .. but the multicast queue TX path for now doesn't use the software
queue and instead
(a) directly sets up the descriptor contents at that point;
(b) the frames on the vap->avp_mcastq are then just appended wholesale
to the CABQ.
So for now, I don't have to worry about making the multicast path
work with aggregation or the per-TID software queue. Phew.
What's left to do:
* I need to modify the 11n ath_hal_chaintxdesc() API to do the same.
I'll do that in a subsequent commit.
* Remove bf_state.bfs_txq entirely and store the QCU as appropriate.
* .. then do the runtime "is this going on the right HWQ?" checks using
that, rather than comparing pointer values.
Tested on:
* AR9280 STA/AP
* AR5416 STA/AP
Use the interface number from the USB interface descriptor
like in the other USB serial drivers. These numbers are not
supposed to be different, though in theory they can be. Make sure
that the driver then uses the interface number given by the USB
descriptor, and not the logical index of the USB stack.
For the future:
Whenever the term "index" is used in the USB code, it refers to
a number computed by the USB stack.
Whenever the term "number" is used in the USB code, it refers to
a number in a USB descriptor.
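A minimal sketch of the distinction, assuming the usual usb(4) attach glue
(the softc field name is illustrative):

    struct usb_interface_descriptor *id;

    /* "number": taken straight from the descriptor the device provides. */
    id = usbd_get_interface_descriptor(uaa->iface);
    sc->sc_iface_no = id->bInterfaceNumber;

    /* "index": uaa->info.bIfaceIndex is the stack's logical position and
     * is not guaranteed to match the number above. */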
MFC after: 2 weeks
support for only the first port, but the CP2105 can have multiple ports.
Although this allowed the first port to mostly work on multi-port devices,
there could be issues with this arrangement.
Update the man page to reflect support for both ports and the CP2105.
Many thanks to Silicon Labs (www.silabs.com) for providing a CP2105-EK
dev board for testing.
MFC after: 2 weeks
When forming aggregates, the last descriptor was now not being
correctly set up - instead, the "setuplasttxdesc" call was being
handed the first descriptor in the last subframe, rather than the
last descriptor in the last subframe.
This showed up as "bad series0 hwrate" messages, as the final
descriptor just didn't have any of the rate control information
squirreled away.
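In effect (buffer and descriptor field names as used elsewhere in ath(4);
treat them as assumptions here):

    /* Point the last-descriptor fixup at the final descriptor of the
     * final subframe, not at that subframe's first descriptor. */
    ath_hal_setuplasttxdesc(sc->sc_ah, bf_last->bf_lastds,
        bf_first->bf_desc);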
Tested:
* AR9280 STA -> 11n AP, iperf TCP
system with sparse CPU IDs, you can have a valid CPU ID > mp_ncpus (e.g. if
you have two CPUs 0 and 4, with mp_maxid == 4 and mp_ncpus == 2).
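A hedged sketch of the safe pattern: iterate with CPU_FOREACH() (which bounds
by mp_maxid and skips absent IDs) instead of assuming every ID below mp_ncpus
is valid.

    u_int cpu, count = 0;

    CPU_FOREACH(cpu)
        count++;    /* visits CPUs 0 and 4 in the example above */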
Introduced at svn r235210
Submitted by: jhb@
Reviewed by: jfv@
- remove special handling of zero length transfers in mpi_pre_fw_upload();
- add missing MPS_CM_FLAGS_DATAIN flag in mpi_pre_fw_upload();
- move mps_user_setup_request() call into proper place;
- increase user command timeout from 30 to 60 seconds;
- avoid NULL dereference panic in case of firmware crash.
Set max DMA segment size to 24bit, as MPI SGE supports it.
Use mps_add_dmaseg() to add empty SGE instead of custom code.
Tune endianness safety.
Reviewed by: Desai, Kashyap <Kashyap.Desai@lsi.com>
Sponsored by: iXsystems, Inc.
enabled.
The legacy (pre-802.11n) hardware doesn't support this - although
the AR5212 era hardware supports MRR, it doesn't have all the bits
needed to support MRR + RTS/CTS. The AR5416 and later support
a packet duration and RTS/CTS flags per rate scenario, so we should
support it.
Tested:
* AR9280, STA
PR: kern/170302
This allows my TI1510 cardbus/PCI bridge to work after a suspend/resume,
without having to unload/reload the cbb driver.
I've also tested this on stable/9. I'll MFC it shortly.
PR: kern/170058
Reviewed by: jhb
MFC after: 1 day
code is called and remove it from ath_buf_set_rate().
For the legacy (non-11n API) TX routines, ath_hal_filltxdesc() takes care
of setting up the intermediary and final descriptors right, complete
with copying the rate control info into the final descriptor so the
rate modules can grab it.
The 11n version doesn't do this - ath_hal_chaintxdesc() doesn't
copy the rate control bits over, nor does it clear isaggr/moreaggr/
pad delimiters. So the call to setuplasttxdesc() is needed here.
So:
* legacy NICs - never call the 11n rate control stuff, so filltxdesc
copies the rate control info right;
* 11n NICs transmitting legacy or 11n non-aggregate frames -
ath_hal_set11nratescenario() is called to setup rate control and
then ath_hal_filltxdesc() chains them together - so the rate control
info is right;
* 11n aggregate frames - set11nratescenario() is called, then
ath_hal_chaintxdesc() is called to chain a list of aggregate and subframes
together. This requires a call to ath_hal_setuplasttxdesc() to complete
things.
Tested:
* AR9280 in station mode
TODO:
* I really should make sure that the descriptor contents get blanked
out correctly or garbage left over from aggregate frames may show
up in non-aggregate frames, leading to badness.
functions, for both legacy and 802.11n.
This will simplify supporting the EDMA chipsets as these two descriptor
setup functions can just be overridden in their entirety, hiding all of
the subtle differences in setting things up.
It's not a permanent solution, as eventually the AR5416 HAL should grow
similar versions of the 11n descriptor functions and then those can be
used.
TODO:
* Push the "clr11naggr" call into the legacy setds, just to ensure
that retried frames don't end up with the aggregate bits set
inappropriately;
* Remove the "setlasttxdesc" call from the 11n TX path and push it
into setds_11n.
* Ensure that setds_11n will work correctly for non-aggregate frames;
* .. and then when it does, just unconditionally call "setds_11n" for
11n NICs and "setds" for non-11n NICs.
For C1 and C2 states use cpu_ticks() to measure sleep time instead of the much
slower ACPI timer. We can't do it for C3, as the TSC may stop there. But it is
less important there, as the wakeup latency is high anyway.
For C1 and C2 states do not check/clear bus mastering activity status, as
it is important only for C3. As a side effect this can make the CPU enter C2
instead of C3 if the last BM activity was two sleeps back (rather than one),
but that may even be good, because more statistics get collected. A premature
BM wakeup from C3, entered because of overestimation, can easily be worse than
entering C2 from both performance and power consumption points of view.
Together, on a dual Xeon E5645 system running a sequential 512-byte read test,
this change makes cpu_idle_acpi() as fast as the simplest cpu_idle_hlt() and
only a few percent slower than cpu_idle_mwait(), while deeper states are still
actively used during idle periods.
actively used during idle periods.
To help with diagnostics, add C-state type into dev.cpu.X.cx_supported.
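A minimal sketch of the shallow-state measurement (conversion shown only for
illustration; the real bookkeeping in the idle/acpi_cpu code differs):

    uint64_t before, after, sleep_us;

    before = cpu_ticks();    /* cheap TSC-derived counter */
    /* ...enter C1/C2 here; the TSC keeps running in these states... */
    after = cpu_ticks();
    sleep_us = (after - before) * 1000000 / cpu_tickrate();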
Sponsored by: iXsystems, Inc.
* Changed the KASSERT to be a debug printf (DWTAP_PRINTF). If the state is not
IEEE80211_S_RUN we return without scheduling a new callout;
* When the net80211 stack changes state to IEEE80211_S_INIT we stop the
beacon callout task;
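Roughly, for the first change (simplified; the surrounding wtap beacon callout
code is assumed):

    if (vap->iv_state != IEEE80211_S_RUN) {
        DWTAP_PRINTF("%s: not in RUN state, not rescheduling beacon\n",
            __func__);
        return;    /* previously an assertion would have fired here */
    }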