sdhci encapsulates the generic SD Host Controller logic and relies on
an actual hardware driver for register access.
sdhci_pci implements a driver for PCI SDHC controllers using the new
SDHCI interface.
No kernel config modifications are required, but if you load sdhci
as a module you must switch to sdhci_pci instead.
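For example, if the driver is loaded from loader.conf, the module name
changes accordingly (a sketch, assuming the obvious module names):

    # before: sdhci_load="YES"
    sdhci_pci_load="YES"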
- Use device_printf() and device_get_unit() instead of storing the unit
number in the softc.
- Remove use of explicit bus space handles and tags.
- Remove the global dpt_softcs list and use devclass_get_device() instead,
as sketched after this list.
- Use pci_enable_busmaster() rather than frobbing the PCI command register
directly.
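A minimal sketch of the lookup that replaces the global list (helper
name hypothetical; assumes the driver's dpt_devclass):

    struct dpt_softc *
    dpt_find_softc(int unit)
    {
            device_t dev;

            dev = devclass_get_device(dpt_devclass, unit);
            if (dev == NULL)
                    return (NULL);
            return (device_get_softc(dev));
    }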
Tested by: no one
- Use device_printf() and device_get_unit() instead of storing the unit
number in the softc.
- Remove use of explicit bus space handles and tags.
- Return an errno value from bt_eisa_attach() if an error occurs rather
than -1, as sketched after this list.
- Use BUS_PROBE_DEFAULT rather than 0.
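Roughly, with a hypothetical bt_init() helper:

    static int
    bt_eisa_attach(device_t dev)
    {
            int error;

            error = bt_init(dev);
            if (error != 0)
                    return (error);         /* an errno value, not -1 */
            return (0);
    }

    static int
    bt_eisa_probe(device_t dev)
    {
            /* ... match the device, set the description ... */
            return (BUS_PROBE_DEFAULT);     /* not a bare 0 */
    }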
Tested by: no one
- Move 'free_scbs' into the softc rather than having it be a global list,
and convert it to an SLIST instead of a hand-rolled linked list (see the
sketch after this list).
- Use device_printf() and device_get_unit() instead of storing the unit
number in the softc.
- Remove use of explicit bus space handles and tags.
- Don't call device_set_desc() in the pccard attach routine, instead
set a default description during the pccard probe if the matching
product doesn't have a name.
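A sketch of the SLIST conversion using queue(3) (struct names
approximate):

    #include <sys/queue.h>

    struct aic_scb {
            SLIST_ENTRY(aic_scb) links;
            /* ... per-SCB state ... */
    };

    struct aic_softc {
            /* was a global, hand-rolled linked list */
            SLIST_HEAD(, aic_scb) free_scbs;
            /* ... */
    };

    static struct aic_scb *
    aic_get_scb(struct aic_softc *sc)
    {
            struct aic_scb *scb;

            scb = SLIST_FIRST(&sc->free_scbs);
            if (scb != NULL)
                    SLIST_REMOVE_HEAD(&sc->free_scbs, links);
            return (scb);
    }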
Tested by: no one
- Use device_printf() and device_get_unit() instead of storing the unit
number in the softc.
- Remove use of explicit bus space handles and tags.
- Compare pointers against NULL.
- Let new-bus allocate the softc rather than doing it by hand, as sketched
after this list.
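A sketch of the new-bus pattern (driver names hypothetical; foo_methods
is the usual device_method_t table, not shown):

    struct foo_softc {
            struct resource *res;
            int rid;
    };

    static driver_t foo_driver = {
            "foo",
            foo_methods,
            sizeof(struct foo_softc),       /* new-bus allocates the softc */
    };

    static int
    foo_attach(device_t dev)
    {
            struct foo_softc *sc;

            sc = device_get_softc(dev);     /* no hand-rolled malloc() */
            sc->res = bus_alloc_resource_any(dev, SYS_RES_IOPORT,
                &sc->rid, RF_ACTIVE);
            if (sc->res == NULL)            /* compare pointers against NULL */
                    return (ENXIO);
            return (0);
    }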
Tested by: no one
- Use device_printf() and device_get_nameunit() instead of adw_name().
- Remove use of explicit bus space handles and tags.
- Use pci_enable_busmaster() rather than frobbing the PCI command register
directly, as sketched after this list.
- Use the softc provided by new-bus rather than allocating a new one.
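The busmaster change, roughly:

    /* before: frobbing the command register by hand */
    cmd = pci_read_config(dev, PCIR_COMMAND, 2);
    cmd |= PCIM_CMD_BUSMASTEREN;
    pci_write_config(dev, PCIR_COMMAND, cmd, 2);

    /* after */
    pci_enable_busmaster(dev);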
Tested by: no one
only available via the new @every_second shortcut. ENOTIME to
implement crontab(5) format extensions to allow more flexible
scheduling.
To address concerns expressed by Terry Lambert when the topic was
discussed a few years ago, that a per-second cron could have bad
effects on /etc/crontab by stat()ing it every second instead of every
minute (i.e. atime updates), only check whether the database needs to
be reloaded on every 60th loop run, as sketched below. This should be
close enough to the current behaviour.
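Roughly (names hypothetical):

    for (loops = 0; ; loops++) {
            if (loops % 60 == 0)
                    reload_database_if_changed();   /* stat(2)s /etc/crontab */
            run_due_jobs();
            sleep_until_next_second();
    }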
Add "@every_minute" shortcut while I am here.
MFC after: 1 month
ip6_output(), the IPv6 stack is working in network byte order.
The reason this code worked before is that ip6_output()
doesn't look at ip6_plen at all and recalculates it based
on the mbuf length.
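A minimal sketch of the convention (fields per netinet/ip6.h; the
helper is hypothetical):

    #include <sys/types.h>
    #include <netinet/in.h>
    #include <netinet/ip6.h>

    static void
    ip6_set_plen(struct ip6_hdr *ip6, uint16_t payload_len)
    {
            /* the stack expects network byte order */
            ip6->ip6_plen = htons(payload_len);
    }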
* Record TX mbufs when we get them so we can release them.
* Set TX/RX mbuf slots to NULL when we are no longer responsible for them.
* Move the DMA sync on RX into the RX interrupt routine (sketched below).
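A sketch of the ownership rules (softc field names hypothetical):

    /* TX start: remember the mbuf so completion can free it */
    sc->tx_mbuf[slot] = m;

    /* TX completion: release it and drop ownership */
    m_freem(sc->tx_mbuf[slot]);
    sc->tx_mbuf[slot] = NULL;

    /* RX interrupt: sync before reading the descriptors */
    bus_dmamap_sync(sc->rx_dtag, sc->rx_dmap, BUS_DMASYNC_POSTREAD);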
execution of the nfsd threads while it is reloading the exports.
This prevents clients from getting intermittent access errors
when the exports are being reloaded non-atomically.
It is not an ideal solution, since requests will back up while
the nfsd threads are suspended. Also, when this option is used,
if mountd crashes while reloading exports, mountd will have to
be restarted to get the nfsd threads to resume execution.
This has been tested by Vincent Hoffman (vince at unsane.co.uk)
and John Hickey (jh at deterlab.net).
The nfse patch offers a more comprehensive solution for this issue.
PR: kern/9619, kern/131342
Reviewed by: kib
MFC after: 2 weeks
stashed away in ath_node.
As much as I tried to stuff that behind the ATH_NODE lock, unfortunately
the locking is just too plain hairy (for me! And I wrote it!) to do
cleanly. Hence using atomics here instead of a lock (sketched below). The ATH_NODE lock
just isn't currently used anywhere besides the rate control updates.
If in the future everything gets migrated back to using a single ATH_NODE
lock or a single global ATH_TX lock (ie, a single TX lock for all TX and
TX completion) then fine, I'll remove the atomics.
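That is, roughly (the field name here is hypothetical):

    #include <machine/atomic.h>

    /* adjust the counter without taking ATH_NODE_LOCK */
    atomic_add_int(&an->an_swq_depth, 1);
    atomic_subtract_int(&an->an_swq_depth, 1);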
processes running as root to suspend/resume execution
of the kernel nfsd threads. An earlier version of this
patch was tested by Vincent Hoffman (vince at unsane.co.uk)
and John Hickey (jh at deterlab.net).
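From a root process this looks roughly like the following (flag names
assumed; see nfssvc(2)):

    #include <nfs/nfssvc.h>

    nfssvc(NFSSVC_SUSPENDNFSD, NULL);
    /* ... reload the exports atomically ... */
    nfssvc(NFSSVC_RESUMENFSD, NULL);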
Reviewed by: kib
MFC after: 2 weeks
it run out of multiple concurrent contexts.
Right now the ath(4) TX processing is a bit hairy. Specifically:
* It was running out of ath_start(), which could occur from multiple
concurrent sending processes (as if_start() can be called from multiple
sending threads nowadays.. sigh)
* during RX if fast frames are enabled (so not really at the moment, not
until I fix this particular feature again..)
* during ath_reset() - so anything which calls that
* during ath_tx_proc*() in the ath taskqueue - ie, TX is attempted again
after TX completion, as there's now hopefully some ath_bufs available.
* Then, the ic_raw_xmit() method can queue raw frames for transmission
at any time, from any net80211 TX context. Ew.
This has caused packet ordering issues in the past - specifically,
there's absolutely no guarantee that preemption won't occur _during_
ath_start() by the TX completion processing, which will call ath_start()
again. It's a mess - 802.11 really, really wants things to be in
sequence or things go all kinds of loopy.
So:
* create a new task struct for TX'ing (sketched below);
* make the if_start method simply queue the task on the ath taskqueue;
* make ath_start() just be called by the new TX task;
* make ath_tx_kick() just schedule the ath TX task, rather than directly
calling ath_start().
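In code form, approximately (sc_tq is the existing ath taskqueue; the
task field and handler names are assumed):

    static void
    ath_tx_tasklet(void *arg, int npending)
    {
            struct ath_softc *sc = arg;

            ath_start(sc->sc_ifp);
    }

    /* at attach time */
    TASK_INIT(&sc->sc_txtask, 0, ath_tx_tasklet, sc);

    /* the if_start method now just schedules the task */
    taskqueue_enqueue(sc->sc_tq, &sc->sc_txtask);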
Now yes, this means that I've taken a step backwards in terms of
concurrency - TX -and- RX now occur in the same single-task taskqueue.
But there's nothing stopping me from separating out the TX / TX completion
code into a separate taskqueue which runs in parallel with the RX path,
if that ends up being appropriate for some platforms.
This fixes the CCMP/seqno concurrency issues that creep up when you
transmit large amounts of uni-directional UDP traffic (>200MBit) on a
FreeBSD STA -> AP, as now there's only one TX context no matter what's
going on (TX completion->retry/software queue,
userland->net80211->ath_start(), TX completion -> ath_start());
but it won't fix any concurrency issues between raw transmitted frames
and non-raw transmitted frames (eg EAPOL frames on TID 16 and any other
TID 16 multicast traffic that gets put on the CABQ.) That is going to
require a bunch more re-architecture before it's feasible to fix.
In any case, this is a big step towards making the majority of the TX
path locking irrelevant, as now almost all TX activity occurs in the
taskqueue.
Phew.
Right now, processing a full 512-frame queue takes quite a while (measured
on the order of milliseconds.) Because of this, the TX processing ends up
sometimes preempting the taskqueue:
* userland sends a frame
* it goes in through net80211 and out to ath_start()
* ath_start() will end up either direct dispatching or software queuing a
frame.
If TX had to wait for RX to finish, it would add quite a few ms of
additional latency to the packet transmission. This in the past has
caused issues with TCP throughput.
Now, as part of my attempt to bring sanity to the TX/RX paths, the first
step is to make the RX processing happen in smaller 'parts'. That way
when TX is pushed into the ath taskqueue, there won't be so much latency
in the way of things.
The bigger scale change (which will come much later) is to process the
RX descriptors in the ath_intr taskqueue but process the _frames_ in
the ath driver taskqueue. That would reduce the latency between
processing and requeuing new descriptors. But that'll come later.
The actual work:
* Add ATH_RX_MAX at 128 (static for now);
* break out of the processing loop if npkts reaches ATH_RX_MAX;
* if we processed ATH_RX_MAX or more frames during the processing loop,
immediately reschedule another RX taskqueue run (sketched below). This
will handle any frames remaining in the queue.
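A sketch of the loop change (the helper names are hypothetical):

    #define ATH_RX_MAX      128

    static void
    ath_rx_proc(struct ath_softc *sc)
    {
            int npkts = 0;

            while (ath_rx_frame_ready(sc)) {
                    ath_rx_one(sc);         /* process a single frame */
                    if (++npkts >= ATH_RX_MAX)
                            break;
            }
            /* more frames may be waiting; schedule another RX pass */
            if (npkts >= ATH_RX_MAX)
                    taskqueue_enqueue(sc->sc_tq, &sc->sc_rxtask);
    }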
This should have very minimal impact on the general throughput case,
unless the scheduler is being very very strange or the ath taskqueue
ends up spending a lot of time on non-RX operations (such as TX
completion.)
counter, without actually allocating the vnodes. The intended use of
getnewvnode_reserve(9) is to reclaim enough free vnodes while the code
does not yet hold any resources that might be needed during the
reclamation, and to consume the slack later for getnewvnode() calls
made from the innards. After the critical block is finished, the
caller should free any remaining reserve with
getnewvnode_drop_reserve(9), as sketched below.
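The intended calling pattern, roughly:

    getnewvnode_reserve(1);         /* reclaim while holding nothing */
    /*
     * Critical block: code here may take locks and call getnewvnode()
     * from its innards, consuming the reserve instead of reclaiming.
     */
    getnewvnode_drop_reserve();     /* return whatever was not used */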
Reviewed by: avg
Tested by: pho
MFC after: 1 week
and Sierra Wireless MC8790V. Also implement the .ucom_poll method.
Note: this makes it possible to use lqr/echo in ppp.conf, and it
resolves ppp hanging during the PPp> phase.
Reviewed by: hps
MFC after: 1 week
The AMD BKDG for CPU families 10h and later requires that memory-mapped
config space always be read into or written from the al/ax/eax register,
as sketched below.
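The idea, sketched for a 32-bit read (the committed code may differ):
the "a" constraint pins the value to %eax.

    #include <sys/types.h>

    static uint32_t
    cfg_mmio_read32(volatile uint32_t *reg)
    {
            uint32_t val;

            __asm __volatile("movl %1, %0" : "=a" (val) : "m" (*reg));
            return (val);
    }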
Discussed with: kib, alc
Reviewed by: kib (earlier version)
MFC after: 25 days