parent adapter's _DOD list, only check the low 16 bits of both _ADR and
_DOD. The language in the ACPI spec seems to indicate that the _ADR values
should exactly match the entries in _DOD. However, I assume that the
masking of _DOD values was added to work around some known busted
machines (the commit history doesn't indicate either way), and the ACPI
spec does require that the low 16 bits be unique for all video outputs,
so only checking the low 16 bits should be fine.
This fixes recognition of video outputs that use the new standardized
device ID scheme introduced in ACPI 3.0 (which sets bit 31), such as on
certain Dell laptops.
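Roughly, the matching described above boils down to a low-16-bit compare;
a minimal sketch (the helper name is illustrative, not the actual
acpi_video code):

    #include <stdint.h>

    /*
     * Illustrative only: compare a video output's _ADR against a _DOD
     * entry using just the low 16 bits, which the ACPI spec requires to
     * be unique per output.  Bit 31 (the ACPI 3.0 scheme bit) and the
     * other high bits are ignored on both sides.
     */
    static int
    vid_output_matches(uint32_t adr, uint32_t dod_entry)
    {
        return ((adr & 0xffff) == (dod_entry & 0xffff));
    }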
Tested by: Juergen Lock nox jelal kn-bremen de
MFC after: 3 days
(Model 0x2D /* Per Intel document 253669-044US 08/2012. */)
Add manpage to document all the goodness that is available in this
processor model.
No support for uncore events at this time.
Submitted by: hiren panchasara <hiren.panchasara@gmail.com>
Reviewed by: jimharris@ fabient@
Obtained from: Yahoo! Inc.
MFC after: 2 weeks
that revises the netmap memory allocator so that the
various parameters (number and size of buffers, rings, descriptors)
can be modified at runtime through sysctl variables.
The changes become effective when no netmap clients are active.
The API is mostly unchanged, although the NIOCUNREGIF ioctl no longer
brings the interface back to normal mode; you need to close the file
descriptor for that.
This change was necessary to track who is using the mapped region,
and since it is a simplification of the API there was no
incentive to try to preserve NIOCUNREGIF.
We will remove the ioctl from the kernel next time we need
a real API change (and version bump).
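For illustration, a hedged userspace sketch of the new attach/detach flow
(struct field names follow the public netmap headers as I recall them;
treat the details as assumptions rather than the exact API):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <net/netmap.h>

    static int
    netmap_attach_detach(const char *ifname)
    {
        struct nmreq nmr;
        int fd;

        fd = open("/dev/netmap", O_RDWR);
        if (fd < 0)
            return (-1);
        memset(&nmr, 0, sizeof(nmr));
        strlcpy(nmr.nr_name, ifname, sizeof(nmr.nr_name));
        nmr.nr_version = NETMAP_API;
        if (ioctl(fd, NIOCREGIF, &nmr) < 0) {   /* put ifname in netmap mode */
            close(fd);
            return (-1);
        }
        /* ... mmap(2) the shared region and do I/O ... */
        close(fd);  /* closing the fd, not NIOCUNREGIF, restores normal mode */
        return (0);
    }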
Among other things, buffer allocation when opening devices is
now much faster: it used to take O(N^2) time, now it is linear.
Submitted by: Giuseppe Lettieri
doesn't automatically clear when VDD rises above Vlow again and needs to be
cleared manually. However, clearing it apparently requires all of the time
registers to be set, i.e. a full pcf8563_settime(), and not just
PCF8563_R_SECOND, in order for the clearing of PCF8563_R_SECOND_VL to stick.
Thus, we just issue a warning during pcf8563_attach() rather than failing
with ENXIO in case it is set.
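A rough sketch of the attach-time behaviour this describes (the register
read helper is hypothetical; the real driver goes through iicbus):

        uint8_t secs;

        /* pcf8563_read() is a made-up helper for this sketch. */
        if (pcf8563_read(dev, PCF8563_R_SECOND, &secs) == 0 &&
            (secs & PCF8563_R_SECOND_VL) != 0)
            device_printf(dev,
                "voltage-low flag set; time and date may be invalid\n");
        /* No "return (ENXIO)" here; attach carries on regardless. */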
MFC after: 3 days
This eliminates the need to manage queue depth at the nvd(4) level for
Chatham prototype board workarounds, and also adds the ability to
accept a number of requests on a single qpair that is much larger
than the number of trackers allocated.
Sponsored by: Intel
nvme_ctrlr_submit_io_request().
While here, also fix the case where a uio may have more than one iovec.
NVMe's definition of SGEs (called PRPs) only allows for the first SGE to
start on a non-page boundary. The simplest way to handle this is to
construct a temporary uio for each iovec, and submit an NVMe request
for each.
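The per-iovec splitting could look roughly like this; a sketch under
assumptions (nvme_submit_one_uio() stands in for the real single-request
submission path, and error handling is minimal):

    static int
    nvme_submit_uio_per_iovec(struct nvme_namespace *ns, struct uio *uio,
        nvme_cb_fn_t cb_fn, void *cb_arg)
    {
        struct uio tmp;
        off_t off;
        int error, i;

        off = uio->uio_offset;
        for (i = 0; i < uio->uio_iovcnt; i++) {
            tmp = *uio;                 /* copy rw, segflg, td, ... */
            tmp.uio_iov = &uio->uio_iov[i];
            tmp.uio_iovcnt = 1;
            tmp.uio_offset = off;
            tmp.uio_resid = uio->uio_iov[i].iov_len;
            /* One NVMe request per iovec, so only its first PRP
             * may start on a non-page boundary. */
            error = nvme_submit_one_uio(ns, &tmp, cb_fn, cb_arg);
            if (error != 0)
                return (error);
            off += uio->uio_iov[i].iov_len;
        }
        return (0);
    }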
Sponsored by: Intel
from an NVMe consumer.
This allows us to mostly build NVMe command buffers without holding the
qpair lock, and also allows for future queueing of nvme_request objects
in cases where the submission queue is full and no nvme_tracker objects
are available.
Sponsored by: Intel
This simplifies the driver significantly where it is constructing
commands to be submitted to hardware. By reducing the number of
PRPs (NVMe parlance for SGE) from 128 to 32, it ensures we do not
allocate too much memory for more common smaller I/O sizes, while
still supporting up to 128KB I/O sizes.
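For reference, the arithmetic behind the 32-entry choice, assuming 4KB PRP
pages (the macro names here are purely illustrative):

    #define ILLUSTRATIVE_PRP_PAGE_SIZE  4096    /* 4KB per PRP entry */
    #define ILLUSTRATIVE_MAX_PRPS       32
    /* 32 entries * 4KB each = 128KB maximum I/O size. */
    #define ILLUSTRATIVE_MAX_XFER \
        (ILLUSTRATIVE_MAX_PRPS * ILLUSTRATIVE_PRP_PAGE_SIZE)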
This also paves the way for pre-allocation of nvme_tracker objects
for each queue which will simplify the I/O path even further.
Sponsored by: Intel
around the problem where high speed interfaces (such as ixgbe(4))
are not able to report their real ifi_baudrate. Basically, take a spare
byte from struct if_data and use it to store an ifi_baudrate power
factor. In other words:
real ifi_baudrate = ifi_baudrate * 10 ^ ifi_baudrate power factor
This should be backwards compatible with old binaries. Use ixgbe(4)
as an example of how drivers would set the ifi_baudrate power factor.
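To make the encoding concrete, a small sketch of how a consumer could
expand the value (the power-factor parameter name is an assumption for
illustration; only ifi_baudrate itself is taken from struct if_data):

    #include <stdint.h>

    /* real baudrate = ifi_baudrate * 10^pf, per the formula above. */
    static uint64_t
    expand_baudrate(uint64_t ifi_baudrate, uint8_t ifi_baudrate_pf)
    {
        uint64_t bps = ifi_baudrate;

        while (ifi_baudrate_pf-- > 0)
            bps *= 10;
        return (bps);
    }

With this scheme a driver like ixgbe(4) could, for instance, report a
baudrate of 1G together with a power factor of 1 to describe a 10Gb/s
link without overflowing the existing field.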
Discussed with: kib, scottl, glebius
MFC after: 1 week
now use function calls:
if_clone_simple()
if_clone_advanced()
to initialize a cloner, instead of macros that initialize an if_clone
structure.
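A hedged sketch of what registering a simple cloner might now look like
(the foo_* callbacks and the exact argument order are assumptions for
illustration, not the definitive API):

    static struct if_clone *foo_cloner;

    static int  foo_clone_create(struct if_clone *, int, caddr_t);
    static void foo_clone_destroy(struct ifnet *);

    static void
    foo_clone_init(void)
    {
        /* Assumed order: name, create callback, destroy callback, min count. */
        foo_cloner = if_clone_simple("foo", foo_clone_create,
            foo_clone_destroy, 0);
    }

    static void
    foo_clone_fini(void)
    {
        if_clone_detach(foo_cloner);
    }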
Discussed with: brooks, bz, 1 year ago
sdhci encapsulates generic SD Host Controller logic that relies on the
actual hardware driver for register access.
sdhci_pci implements a driver for PCI SDHC controllers using the new
SDHCI interface.
No kernel config modifications are required, but if you load sdhci
as a module you must switch to sdhci_pci instead.
- Use device_printf() and device_get_unit() instead of storing the unit
number in the softc.
- Remove use of explicit bus space handles and tags.
- Remove the global dpt_softcs list and use devclass_get_device() instead.
- Use pci_enable_busmaster() rather than frobbing the PCI command register
directly.
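A rough before/after sketch of the last two items (illustrative only, not
the literal dpt diff):

        /* Before: unit number stored in the softc, command register frobbed. */
        printf("dpt%d: resetting HBA\n", dpt->unit);
        cmd = pci_read_config(dev, PCIR_COMMAND, 2);
        pci_write_config(dev, PCIR_COMMAND, cmd | PCIM_CMD_BUSMASTEREN, 2);

        /* After: lean on new-bus and the PCI bus code instead. */
        device_printf(dev, "resetting HBA\n");
        pci_enable_busmaster(dev);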
Tested by: no one
- Use device_printf() and device_get_unit() instead of storing the unit
number in the softc.
- Remove use of explicit bus space handles and tags.
- Return an errno value from bt_eisa_attach() if an error occurs rather
than -1.
- Use BUS_PROBE_DEFAULT rather than 0.
Tested by: no one
- Move 'free_scbs' into the softc rather than having it be a global list
and convert it to an SLIST instead of a hand-rolled linked-list.
- Use device_printf() and device_get_unit() instead of storing the unit
number in the softc.
- Remove use of explicit bus space handles and tags.
- Don't call device_set_desc() in the pccard attach routine; instead,
set a default description during the pccard probe if the matching
product doesn't have a name.
Tested by: no one
- Use device_printf() and device_get_unit() instead of storing the unit
number in the softc.
- Remove use of explicit bus space handles and tags.
- Compare pointers against NULL.
- Let new-bus allocate a softc rather than doing it by hand.
Tested by: no one
- Use device_printf() and device_get_nameunit() instead of adw_name().
- Remove use of explicit bus space handles and tags.
- Use pci_enable_busmaster() rather than frobbing the PCI command register
directly.
- Use the softc provided by new-bus rather than allocating a new one.
Tested by: no one
stashed away in ath_node.
As much as I tried to stuff that behind the ATH_NODE lock, unfortunately
the locking is just too plain hairy (for me! And I wrote it!) to do
cleanly. Hence using atomics here instead of a lock. The ATH_NODE lock
just isn't currently used anywhere besides the rate control updates.
If in the future everything gets migrated back to using a single ATH_NODE
lock or a single global ATH_TX lock (ie, a single TX lock for all TX and
TX completion) then fine, I'll remove the atomics.
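Conceptually the accounting becomes something like the following (the
counter field name is made up for illustration):

        /* Buffer queued to the node: */
        atomic_add_int(&an->an_tx_bufs, 1);

        /* Buffer completed or freed: */
        atomic_subtract_int(&an->an_tx_bufs, 1);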
it run out of multiple concurrent contexts.
Right now the ath(4) TX processing is a bit hairy. Specifically:
* It was running out of ath_start(), which could occur from multiple
concurrent sending processes (as if_start() can be called from multiple
sending threads nowadays.. sigh)
* during RX if fast frames are enabled (so not really at the moment, not
until I fix this particular feature again..)
* during ath_reset() - so anything which calls that
* during ath_tx_proc*() in the ath taskqueue - ie, TX is attempted again
after TX completion, as there's now hopefully some ath_bufs available.
* Then, the ic_raw_xmit() method can queue raw frames for transmission
at any time, from any net80211 TX context. Ew.
This has caused packet ordering issues in the past - specifically,
there's absolutely no guarantee that preemption won't occur _during_
ath_start() by the TX completion processing, which will call ath_start()
again. It's a mess - 802.11 really, really wants things to be in
sequence or things go all kinds of loopy.
So:
* create a new task struct for TX'ing;
* make the if_start method simply queue the task on the ath taskqueue;
* make ath_start() just be called by the new TX task;
* make ath_tx_kick() just schedule the ath TX task, rather than directly
calling ath_start().
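A hedged sketch of the resulting shape (function and field names like
ath_start_queue / sc_txtask are illustrative, not the exact diff):

    /* The TX task is now the only context that runs ath_start(). */
    static void
    ath_tx_task(void *arg, int npending)
    {
        struct ath_softc *sc = arg;

        ath_start(sc->sc_ifp);
    }

    /* The if_start method just defers to the ath taskqueue. */
    static void
    ath_start_queue(struct ifnet *ifp)
    {
        struct ath_softc *sc = ifp->if_softc;

        taskqueue_enqueue(sc->sc_tq, &sc->sc_txtask);
    }

    /* At attach time: TASK_INIT(&sc->sc_txtask, 0, ath_tx_task, sc); */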
Now yes, this means that I've taken a step backwards in terms of
concurrency - TX -and- RX now occur in the same single-task taskqueue.
But there's nothing stopping me from separating out the TX / TX completion
code into a separate taskqueue which runs in parallel with the RX path,
if that ends up being appropriate for some platforms.
This fixes the CCMP/seqno concurrency issues that creep up when you
transmit large amounts of uni-directional UDP traffic (>200MBit) on a
FreeBSD STA -> AP, as now there's only one TX context no matter what's
going on (TX completion->retry/software queue,
userland->net80211->ath_start(), TX completion -> ath_start());
but it won't fix any concurrency issues between raw transmitted frames
and non-raw transmitted frames (eg EAPOL frames on TID 16 and any other
TID 16 multicast traffic that gets put on the CABQ.) That is going to
require a bunch more re-architecture before it's feasible to fix.
In any case, this is a big step towards making the majority of the TX
path locking irrelevant, as now almost all TX activity occurs in the
taskqueue.
Phew.
Right now, processing a full 512-frame queue takes quite a while (measured
on the order of milliseconds). Because of this, the TX processing ends up
sometimes preempting the taskqueue:
* userland sends a frame
* it goes in through net80211 and out to ath_start()
* ath_start() will end up either direct dispatching or software queuing a
frame.
If TX had to wait for RX to finish, it would add quite a few ms of
additional latency to the packet transmission. This in the past has
caused issues with TCP throughput.
Now, as part of my attempt to bring sanity to the TX/RX paths, the first
step is to make the RX processing happen in smaller 'parts'. That way
when TX is pushed into the ath taskqueue, there won't be so much latency
in the way of things.
The bigger scale change (which will come much later) is to actually
process the descriptors in the ath_intr taskqueue but process the _frames_
in the ath driver taskqueue. That would reduce the latency between
processing and requeuing new descriptors. But that'll come later.
The actual work:
* Add ATH_RX_MAX at 128 (static for now);
* break out of the processing loop if npkts reaches ATH_RX_MAX;
* if we processed ATH_RX_MAX or more frames during the processing loop,
immediately reschedule another RX taskqueue run. This will handle
the further frames in the taskqueue.
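A sketch of the loop shape this describes (the two helpers are
placeholders, not the actual ath RX code):

    #define ATH_RX_MAX  128

    static void
    ath_rx_tasklet(void *arg, int npending)
    {
        struct ath_softc *sc = arg;
        int npkts = 0;

        while (ath_rx_frame_ready(sc)) {    /* placeholder */
            ath_rx_process_one(sc);         /* placeholder */
            if (++npkts >= ATH_RX_MAX)
                break;
        }
        /* Hit the cap? Reschedule so the remaining frames get handled soon. */
        if (npkts >= ATH_RX_MAX)
            taskqueue_enqueue(sc->sc_tq, &sc->sc_rxtask);
    }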
This should have very minimal impact on the general throughput case,
unless the scheduler is being very very strange or the ath taskqueue
ends up spending a lot of time on non-RX operations (such as TX
completion.)
and Sierra Wireless MC8790V. Also implement the .ucom_poll method.
Note: This makes it possible to use lqr/echo in ppp.conf, and it
resolves ppp hanging during the PPp> phase.
Reviewed by: hps
MFC after: 1 week