- Replace divisor numbers with more descriptive names
- Properly calculate minimum frequency for SDHCI 3.0
- Properly calculate frequency for SDHCI 3.0 in mmcbr_set_clock
- Add a min_freq method to sdhci_if.m and provide a default
implementation. By re-implementing this method, hardware
drivers can control the frequency the controller operates at
when executing the initialization sequence.
actually do have to reinitialise the RX side of things after an RX
descriptor EOL error.
* Revert a change of mine from quite a while ago - don't shortcut the
RX initialisation path. There's a RX FIFO bug in the earlier chips
(I'm not sure when it was fixed in this series, but it's fixed
with the AR9380 and later) which causes the same RX descriptor to
be written to over and over. This causes the descriptor to be
marked as "done", and this ends up causing the whole RX path to
go very strange. This should fix the "kickpcu; handled X packets"
message spam where "X" is consistently small.
When a cylinder group runs short of inodes, a new block for inodes is
allocated, zero'ed, and written to the disk. The zero'ed inodes must
be on the disk before the cylinder group can be updated to claim them.
If the cylinder group claiming the new inodes were written before the
zero'ed block of inodes, the system could crash with the filesystem in
an unrecoverable state.
Rather than adding a soft updates dependency to ensure that the new
inode block is written before it is claimed by the cylinder group
map, we just do a barrier write of the zero'ed inode block to ensure
that it will get written before the updated cylinder group map can
be written. This change should only slow down bulk loading of newly
created filesystems since that is the primary time that new inode
blocks need to be created.
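A minimal sketch of the write itself, assuming the block number and size
have already been computed by the inode allocation code (the helper name
is made up):

    /*
     * Zero a freshly allocated block of inodes and push it out with an
     * asynchronous barrier write, so it is ordered ahead of the later
     * write of the cylinder group map that claims those inodes.
     */
    static void
    init_new_inode_block(struct vnode *devvp, daddr_t dblkno, int bsize)
    {
        struct buf *bp;

        bp = getblk(devvp, dblkno, bsize, 0, 0, 0);
        bzero(bp->b_data, bsize);
        babarrierwrite(bp);     /* instead of a plain bawrite() */
    }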
Reported by: Robert Watson
Reviewed by: kib
Tested by: Peter Holm
write is a disk write request that tells the disk that the buffer
being written must be committed to the media along with any writes
that preceded it before any future blocks may be written to the drive.
Barrier writes are provided by adding the functions bbarrierwrite
(bwrite with barrier) and babarrierwrite (bawrite with barrier).
Following a bbarrierwrite the client knows that the requested buffer
is on the media. It does not ensure that buffers written before that
buffer are on the media. It only ensures that buffers written before
that buffer will get to the media before any buffers written after
that buffer. A flush command must be sent to the disk to ensure that
all earlier written buffers are on the media.
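A plausible shape for the two entry points, assuming they are thin
wrappers that mark the buffer with a barrier flag before handing it to
the existing write paths:

    /* Synchronous barrier write: ordered ahead of all later writes. */
    int
    bbarrierwrite(struct buf *bp)
    {
        bp->b_flags |= B_BARRIER;
        return (bwrite(bp));
    }

    /* Asynchronous barrier write with the same ordering guarantee. */
    void
    babarrierwrite(struct buf *bp)
    {
        bp->b_flags |= B_BARRIER;
        bawrite(bp);
    }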
Reviewed by: kib
Tested by: Peter Holm
blocking modes described in section 3.4.3 of RFC 2783, allowing the caller
to retrieve the most recent values without blocking, to block for a specified
time, or to block forever.
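A sketch of the three modes through time_pps_fetch(), per my reading of
RFC 2783 (a zero timeout is non-blocking, a NULL or {-1,-1} timeout
blocks indefinitely):

    #include <sys/time.h>
    #include <sys/timepps.h>

    static int
    fetch_examples(pps_handle_t handle)
    {
        pps_info_t info;
        struct timespec timeout;

        /* Most recent values, without blocking. */
        timeout.tv_sec = 0;
        timeout.tv_nsec = 0;
        if (time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout) < 0)
            return (-1);

        /* Block for at most three seconds. */
        timeout.tv_sec = 3;
        timeout.tv_nsec = 0;
        if (time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout) < 0)
            return (-1);

        /* Block until the next event arrives. */
        return (time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, NULL));
    }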
Reviewed by: discussion on hackers@
length packets, which was actually harmless.
Note that peers with different versions of head/ may grow this
counter, but it is harmless - all pfsync data is processed.
Reported & tested by: Anton Yuzhaninov <citrin citrin.ru>
Sponsored by: Nginx, Inc
* The following bit flags were incorrectly defined:
o Mesh Control Present
o Mesh Power Save Level
o RSPI
This is now corrected according to Table 8.4 of IEEE 802.11-2012.
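For reference, the relevant bit positions in the 16-bit QoS Control
field for mesh data frames in IEEE 802.11-2012 (the macro names below
are illustrative only, not the ones used in the tree):

    #define QOS_MESH_CONTROL_PRESENT    0x0100  /* bit 8 */
    #define QOS_MESH_POWER_SAVE_LEVEL   0x0200  /* bit 9 */
    #define QOS_MESH_RSPI               0x0400  /* bit 10 */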
Approved by: adrian (mentor)
now disables read-ahead. It used to effectively restore the system default
readahead heuristic if it had been changed; a negative value now restores
the default.
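Assuming this refers to the F_READAHEAD fcntl, a small sketch of the
resulting behaviour:

    #include <fcntl.h>

    static void
    tune_readahead(int fd)
    {
        fcntl(fd, F_READAHEAD, 0);      /* disable read-ahead entirely */
        fcntl(fd, F_READAHEAD, -1);     /* restore the default heuristic */
    }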
Reviewed by: kib
enumeration lock. Make sure all callers of usbd_enum_lock() check the return
value. Remove the control transfer specific lock. Bump the FreeBSD version
number, as external USB modules may need to be recompiled due to a USB
device structure change.
MFC after: 1 week
My change had some rather significant behavioural effects on throughput.
The two issues I noticed:
* With if_start and the ifnet mbuf queue, any temporary latency
would get eaten up by some mbufs being queued. With ath_transmit()
queuing things to ath_buf's, I'd only get 512 TX buffers before I
couldn't queue any further frames.
* There's also some non-zero latency involved with TX being pushed
into a taskqueue via direct dispatch. Any time the scheduler didn't
immediately schedule the ath TX task would cause extra latency.
Various 1ge/10ge drivers implement both direct dispatch (if the TX
lock can be acquired) and deferred task transmission (if the TX lock
can't be acquired), with frames being pushed into a drbr queue (see the
sketch below).
I'll have to do this at some point, but until I figure out how to
deal with 802.11 fragments, I'll have to wait a while longer.
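A generic sketch of that direct-dispatch-or-defer pattern (the softc
layout, drain routine and task are hypothetical; drbr_enqueue() and
taskqueue_enqueue() are the stock APIs):

    struct foo_softc {
        struct buf_ring     *sc_br;     /* per-queue drbr ring */
        struct mtx          sc_txlock;
        struct taskqueue    *sc_tq;
        struct task         sc_txtask;  /* runs foo_tx_drain() */
    };

    static void foo_tx_drain(struct foo_softc *sc);

    static int
    foo_transmit(struct ifnet *ifp, struct mbuf *m)
    {
        struct foo_softc *sc = ifp->if_softc;
        int error;

        /* Always enqueue first so frame ordering is preserved. */
        error = drbr_enqueue(ifp, sc->sc_br, m);
        if (error != 0)
            return (error);

        if (mtx_trylock(&sc->sc_txlock)) {
            foo_tx_drain(sc);           /* direct dispatch */
            mtx_unlock(&sc->sc_txlock);
        } else {
            /* TX lock busy: defer the drain to the taskqueue. */
            taskqueue_enqueue(sc->sc_tq, &sc->sc_txtask);
        }
        return (0);
    }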
So what I saw:
* lots of extra latency, especially under load - if the taskqueue
wasn't immediately scheduled, things went pear shaped;
* any extra latency would result in TX ath_buf's taking their sweet time
being replenished, so any further calls to ath_transmit() would drop
mbufs.
* .. yes, there's no explicit backpressure here - things are just dropped.
Eek.
With this, the general performance has gone up, but those subtle if_start()
related race conditions are back. For some reason, this is doubly-obvious
with the AR5416 NIC and I don't quite understand why yet.
There's an unrelated issue with AR5416 performance in STA mode (it's
fine in AP mode when bridging frames, weirdly..) that requires a little
further investigation. Specifically - it works fine on a Lenovo T40
(single core CPU) running a March 2012 9-STABLE kernel, but a Lenovo T60
(dual core) running an early November 2012 kernel behaves very poorly.
The same hardware with an AR9160 or AR9280 behaves perfectly.
thread structure pointer atomically from r13 (the pcpu pointer)
for the current CPU/core.
Add a CTASSERT in machdep.c to make sure that pc_curthread is in
fact the first field in struct pcpu.
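The assertion is presumably along these lines:

    /*
     * pc_curthread must stay at offset 0 so the ia64 code can fetch
     * curthread with a single load relative to r13.
     */
    CTASSERT(offsetof(struct pcpu, pc_curthread) == 0);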
The only non-atomic operations left were those related to process-
space operations, such as casuword, subyte, suword16, fubyte,
fuword16, copyin, copyout and their variations.
The casuword function has been re-structured more completely than
the others. This gives us an example of better bundling without
introducing a lot of risk should we get it wrong. The
other functions can be rebundled in separate commits and with
the appropriate testing.
every architecture's busdma_machdep.c. It is done by unifying the
bus_dmamap_load_buffer() routines so that they may be called from MI
code. The MD busdma is then given a chance to do any final processing
in the complete() callback.
The cam changes unify the bus_dmamap_load* handling in cam drivers.
The arm and mips implementations are updated to track virtual
addresses for sync(). Previously this was done in a type specific
way. Now it is done in a generic way by recording the list of
virtuals in the map.
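On the cam side, a hypothetical driver-level use of the unified load
call (assuming the bus_dmamap_load_ccb() entry point; error handling
trimmed):

    /* Hypothetical helper inside a cam driver's command setup path. */
    static int
    foo_load_ccb(bus_dma_tag_t dmat, bus_dmamap_t map, union ccb *ccb,
        bus_dmamap_callback_t *cb, void *cb_arg)
    {
        /*
         * One call covers the various ccb data representations; the MD
         * code gets its chance to finish up via its completion hook.
         */
        return (bus_dmamap_load_ccb(dmat, map, ccb, cb, cb_arg, 0));
    }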
Submitted by: jeff (sponsored by EMC/Isilon)
Reviewed by: kan (previous version), scottl,
mjacob (isp(4), no objections for target mode changes)
Discussed with: ian (arm changes)
Tested by: marius (sparc64), mips (jmallet), isci(4) on x86 (jharris),
amd64 (Fabian Keil <freebsd-listen@fabiankeil.de>)
since the former is defined everywhere. This cuts out some code that is
not necessary on non-strict-alignment arches.
Reviewed by: adrian
Sponsored by: Nginx, Inc.
Fixes several problems when working with read-only pools.
Changed code originally introduced in onnv-gate 13061:bda0decf867b
Contains changes up to illumos-gate 13700:4bc0783f6064
PR: kern/175897
Suggested by: avg
MFC after: 2 weeks
Prior to this change pinning was implemented via an ioctl (VM_SET_PINNING)
that called 'sched_bind()' on behalf of the user thread.
The ULE implementation of 'sched_bind()' bumps up 'td_pinned' which in turn
runs afoul of the assertion '(td_pinned == 0)' in userret().
Using the cpuset affinity to implement pinning of the vcpu threads works with
both 4BSD and ULE schedulers and has the happy side-effect of getting rid
of a bunch of code in vmm.ko.
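A minimal sketch of pinning a thread to one host CPU with the cpuset
affinity interface (how bhyve selects the vcpu thread and host CPU is
omitted):

    #include <sys/param.h>
    #include <sys/cpuset.h>

    static int
    pin_current_thread(int hostcpu)
    {
        cpuset_t mask;

        CPU_ZERO(&mask);
        CPU_SET(hostcpu, &mask);
        /* An id of -1 with CPU_WHICH_TID means the calling thread. */
        return (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID,
            -1, sizeof(mask), &mask));
    }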
Discussed with: grehan
Import vendor bugfixes regarding SA rounding, header size and layout.
This was already partially fixed by avg.
Illumos ZFS issues:
3512 rounding discrepancy in sa_find_sizes()
3513 mismatch between SA header size and layout
References:
https://www.illumos.org/issues/3512
https://www.illumos.org/issues/3513
MFC after: 2 weeks