My change had some rather significant behavioural effects on throughput.
The two issues I noticed:
* With if_start and the ifnet mbuf queue, any temporary latency
would simply be absorbed by more mbufs being queued. With ath_transmit()
queuing things to ath_buf's, I'd only get 512 TX buffers before I
couldn't queue any further frames.
* There's also some non-zero latency involved with TX being pushed
into a taskqueue via direct dispatch. Any time the scheduler didn't
immediately run the ath TX task, extra latency would result.
Various 1ge/10ge drivers implement both direct dispatch (if the TX
lock can be acquired) and deferred task transmission (if the TX lock
can't be acquired), with frames being pushed into a drbr queue.
I'll have to do this at some point, but until I figure out how to
deal with 802.11 fragments, I'll have to wait a while longer.
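For reference, a minimal sketch of that pattern (hypothetical xx_
driver names; assumes a buf_ring-backed drbr queue that is drained
by xx_start_locked()):

    static int
    xx_transmit(struct ifnet *ifp, struct mbuf *m)
    {
            struct xx_softc *sc = ifp->if_softc;
            int error;

            /* Always enqueue first so frame ordering is preserved. */
            error = drbr_enqueue(ifp, sc->sc_br, m);
            if (error != 0)
                    return (error);

            if (mtx_trylock(&sc->sc_tx_mtx)) {
                    /* TX lock was free: direct dispatch from here. */
                    xx_start_locked(sc);
                    mtx_unlock(&sc->sc_tx_mtx);
            } else {
                    /* TX lock busy: defer to the TX task. */
                    taskqueue_enqueue(sc->sc_tq, &sc->sc_tx_task);
            }
            return (0);
    }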
So what I saw:
* lots of extra latency, especially under load - if the taskqueue
wasn't immediately scheduled, things went pear shaped;
* any extra latency would result in TX ath_buf's taking their sweet time
being replenished, so any further calls to ath_transmit() would drop
mbufs.
* .. yes, there's no explicit backpressure here - things are just dropped.
Eek.
With this, the general performance has gone up, but those subtle if_start()
related race conditions are back. For some reason, this is doubly-obvious
with the AR5416 NIC and I don't quite understand why yet.
There's an unrelated issue with AR5416 performance in STA mode (it's
fine in AP mode when bridging frames, weirdly..) that requires a little
further investigation. Specifically - it works fine on a Lenovo T40
(single core CPU) running a March 2012 9-STABLE kernel, but a Lenovo T60
(dual core) running an early November 2012 kernel behaves very poorly.
The same hardware with an AR9160 or AR9280 behaves perfectly.
thread structure pointer atomically from r13 (the pcpu pointer)
for the current CPU/core.
Add a CTASSERT in machdep.c to make sure that pc_curthread is in
fact the first field in struct pcpu.
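For illustration, such a compile-time check could look like this
(a sketch; the exact form in machdep.c may differ):

    /* curthread is loaded via a single r13-relative dereference,
     * which is only correct if pc_curthread sits at offset 0. */
    CTASSERT(offsetof(struct pcpu, pc_curthread) == 0);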
The only non-atomic operations left were those related to process-
space operations, such as casuword, subyte, suword16, fubyte,
fuword16, copyin, copyout and their variations.
The casuword function has been restructured more completely than
the others, so that we have an example of better bundling without
introducing a lot of risk of getting it wrong. The other functions
can be re-bundled in separate commits, with appropriate testing.
Reform the busdma API so that new types may be added without modifying
every architecture's busdma_machdep.c. It is done by unifying the
bus_dmamap_load_buffer() routines so that they may be called from MI
code. The MD busdma is then given a chance to do any final processing
in the complete() callback.
The cam changes unify the bus_dmamap_load* handling in cam drivers.
The arm and mips implementations are updated to track virtual
addresses for sync(). Previously this was done in a type specific
way. Now it is done in a generic way by recording the list of
virtuals in the map.
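A condensed sketch of the resulting control flow (not the literal
tree code; error and EINPROGRESS handling omitted):

    int
    bus_dmamap_load(bus_dma_tag_t dmat, bus_dmamap_t map, void *buf,
        bus_size_t buflen, bus_dmamap_callback_t *callback,
        void *callback_arg, int flags)
    {
            bus_dma_segment_t *segs;
            int error, nsegs;

            nsegs = -1;
            /* MD: translate the buffer into a DMA segment list. */
            error = _bus_dmamap_load_buffer(dmat, map, buf, buflen,
                kernel_pmap, flags, NULL, &nsegs);
            nsegs++;
            /* MD: one chance at final per-architecture processing. */
            segs = _bus_dmamap_complete(dmat, map, NULL, nsegs, error);
            /* MI: hand the result to the driver's callback. */
            (*callback)(callback_arg, segs, nsegs, error);
            return (error);
    }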
Submitted by: jeff (sponsored by EMC/Isilon)
Reviewed by: kan (previous version), scottl,
mjacob (isp(4), no objections for target mode changes)
Discussed with: ian (arm changes)
Tested by: marius (sparc64), jmallett (mips), isci(4) on x86 (jharris),
amd64 (Fabian Keil <freebsd-listen@fabiankeil.de>)
since the former is defined everywhere. This compiles out some code
that is not necessary on non-strict-alignment arches.
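A small hypothetical example of the idiom: the guarded code compiles
only on strict-alignment architectures:

    struct ip *ip, hdr;

    #ifndef __NO_STRICT_ALIGNMENT
            /* Strict-alignment arch: copy the header to an aligned
             * local instead of casting a possibly misaligned pointer. */
            bcopy(mtod(m, caddr_t), &hdr, sizeof(hdr));
            ip = &hdr;
    #else
            ip = mtod(m, struct ip *);
    #endif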
Reviewed by: adrian
Sponsored by: Nginx, Inc.
Fixes several problems when working with read-only pools.
Changed code originally introduced in onnv-gate 13061:bda0decf867b
Contains changes up to illumos-gate 13700:4bc0783f6064
PR: kern/175897
Suggested by: avg
MFC after: 2 weeks
Prior to this change pinning was implemented via an ioctl (VM_SET_PINNING)
that called 'sched_bind()' on behalf of the user thread.
The ULE implementation of 'sched_bind()' bumps up 'td_pinned' which in turn
runs afoul of the assertion '(td_pinned == 0)' in userret().
Using the cpuset affinity to implement pinning of the vcpu threads works with
both 4BSD and ULE schedulers and has the happy side-effect of getting rid
of a bunch of code in vmm.ko.
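For illustration, pinning a thread with the cpuset API from userland
looks roughly like this (a sketch, not vmm's actual code):

    #include <sys/param.h>
    #include <sys/cpuset.h>

    /* Pin the calling thread to a single host CPU. */
    static int
    pin_self_to_cpu(int hostcpu)
    {
            cpuset_t mask;

            CPU_ZERO(&mask);
            CPU_SET(hostcpu, &mask);
            /* An id of -1 means "the current thread". */
            return (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID,
                -1, sizeof(mask), &mask));
    }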
Discussed with: grehan
Import vendor bugfixes regarding SA rounding, header size and layout.
This was already partially fixed by avg.
Illumos ZFS issues:
3512 rounding discrepancy in sa_find_sizes()
3513 mismatch between SA header size and layout
References:
https://www.illumos.org/issues/3512
https://www.illumos.org/issues/3513
MFC after: 2 weeks
ABORT cause, since the user can also provide this kind of
information. So the receiver doesn't know who provided the
information.
While there: Fix a bug where the stack would send a malformed
ABORT chunk when using a send() call with SCTP_ABORT|SCTP_SENDALL
flags.
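For context, a sketch of the kind of call that exercised the bug
(hypothetical values; assumes a one-to-many SCTP socket):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>

    /* Abort all associations on sd, supplying a user ABORT cause. */
    static void
    abort_all(int sd)
    {
            const char reason[] = "shutting down";

            (void)sctp_sendmsg(sd, reason, sizeof(reason),
                NULL, 0,            /* no destination address */
                0,                  /* ppid */
                SCTP_ABORT | SCTP_SENDALL,
                0, 0, 0);           /* stream, ttl, context */
    }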
MFC after: 3 days
of helper functions:
- carp_master() - boolean function which is true if an address
is in the MASTER state.
- ifa_preferred() - boolean function that compares two addresses,
and is aware of CARP.
Utilize ifa_preferred() in ifa_ifwithnet().
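A rough sketch of what such a comparison can look like (illustrative
only, not the committed code): next is preferred over cur when cur is
a CARP address that is not MASTER, while next is either a non-CARP
address or a CARP MASTER:

    static int
    ifa_preferred(struct ifaddr *cur, struct ifaddr *next)
    {

            return (cur->ifa_carp != NULL && !carp_master(cur) &&
                (next->ifa_carp == NULL || carp_master(next)));
    }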
The previous version of the patch also changed the source address
selection logic in jails using carp_master(), but we failed to
negotiate this part with Bjoern. Maybe we will approach this problem
again later.
Reported & tested by: Anton Yuzhaninov <citrin citrin.ru>
Sponsored by: Nginx, Inc
when they're being called from the TX completion handler.
Going (back) through the taskqueue is just adding extra locking and
latency to packet operations. This improves performance a little bit
on most NICs.
It still hasn't restored the original performance of the AR5416 NIC
but the AR9160, AR9280 and later NICs behave very well with this.
Tested:
* AR5416 STA (still tops out at ~ 70mbit TCP, rather than 150mbit TCP..)
* AR9160 hostap (good for both TX and RX)
* AR9280 hostap (good for both TX and RX)
so that simultaneous access cannot happen. Protect scratch area using
the enumeration lock. Also reduce stack usage in usbd_transfer_setup()
by moving some big stack members to the scratch area. This saves around
200 bytes of stack.
- Fix whitespace.
MFC after: 1 week
freed memory cannot be used during detach.
- Remove all panic() calls from the urtw driver because
panic() is not appropriate here.
- Remove redundant checks for device detached in
device detach callbacks.
- Use DEVMETHOD_END to mark end of device methods.
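For illustration, a method table terminated with DEVMETHOD_END
(entries abbreviated):

    static device_method_t urtw_methods[] = {
            DEVMETHOD(device_probe,         urtw_match),
            DEVMETHOD(device_attach,        urtw_attach),
            DEVMETHOD(device_detach,        urtw_detach),

            DEVMETHOD_END
    };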
MFC after: 2 weeks
function, implementing the sysctl vfs.ffs.set_bufoutput (not used in
the tree yet).
- The current directory vnode dereference is unsafe since fd_cdir
could be changed and unreferenced; lock the filedesc around the
access and vref the fd_cdir.
- The VTOI() conversion of the fd_cdir is unsafe without first
checking that the vnode indeed comes from an FFS mount; otherwise
the code dereferences random memory.
- The cdir could be reclaimed from under us; lock it around the
checks.
- The fp vnode might not be a disk vnode, or its type might have
changed while the thread was in flight; check the type.
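Put together, the safe sequence sketched by the fixes above looks
roughly like this (simplified; assumes a helper such as
ffs_own_mount() to verify the mount type):

    FILEDESC_SLOCK(fdp);
    vp = fdp->fd_cdir;
    vref(vp);                       /* keep fd_cdir from being freed */
    FILEDESC_SUNLOCK(fdp);
    vn_lock(vp, LK_SHARED | LK_RETRY);
    if ((vp->v_iflag & VI_DOOMED) != 0 ||   /* reclaimed under us? */
        !ffs_own_mount(vp->v_mount)) {      /* really an FFS vnode? */
            VOP_UNLOCK(vp, 0);
            vrele(vp);
            return (EINVAL);
    }
    ip = VTOI(vp);                  /* safe only after the checks above */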
Reviewed and tested by: mckusick
MFC after: 2 weeks
tmpfs_mapped{read, write}() functions:
- tmpfs_mapped{read, write}() are only called within VOP_{READ, WRITE}(),
which check beforehand that they work only on valid VREG vnodes. Also,
the vnode is locked for the duration of the work, making vnode
reclamation during the operation impossible. Hence, vobj can never be
NULL.
- The current check on resident and cached pages without the vm object
lock held is racy and can do even more harm than good, as a page could
be transitioning between these two pools and then be skipped entirely.
Skip the checks, as lookups on empty splay trees are very cheap.
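In other words, the removed NULL check can become an invariant (a
sketch; MPASS() is the kernel's assertion macro, and tn_reg.tn_aobj
is the regular file's backing object):

    vobj = node->tn_reg.tn_aobj;
    /* Guaranteed by VOP_{READ,WRITE}() on a locked VREG vnode. */
    MPASS(vobj != NULL);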
Discussed with: alc
Tested by: flo
MFC after: 2 weeks
* Illumos zfs issue #3035 [1] LZ4 compression support in ZFS.
LZ4 is a new high-speed BSD-licensed compression algorithm created
by Yann Collet that delivers very high compression and decompression
performance compared to lzjb (>50% faster on compression, >80% faster
on decompression and around 3x faster on compression of incompressible
data), while giving better compression ratio [1].
This version of LZ4 corresponds to upstream's [2] revision 85.
Please note that for obvious reasons this is not backward read
compatible: once a pool has LZ4-compressed data, that data can no
longer be read by older ZFS implementations.
Local changes:
- The on-stack hash table is disabled; the kernel slab allocator is
used instead, at this time. Keeping the table on the stack would
require a larger kernel thread stack for zio workers. This may change
in the future should we adjust the zio workers' thread stack size.
- likely and unlikely are undefined first if they are already defined;
this is required for the i386 XEN build.
- Removed the De Bruijn sequence based implementation in favor of the
__builtin_ctz family of builtins, which both GCC and clang support
(see the short example after the references below).
- Changed the way the LZ4 code detects endianness.
- Manual page modifications to mention the feature, based on the
Illumos counterpart.
- Boot loader changes to make it support LZ4 decompression.
[1] https://www.illumos.org/issues/3035
[2] http://code.google.com/p/lz4/source/list
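A tiny standalone example of the builtin in question (not LZ4's
actual code):

    #include <stdio.h>

    int
    main(void)
    {
            unsigned int v = 0x50;  /* binary 0101 0000 */

            /* Count trailing zero bits, i.e. locate the lowest set
             * bit, without a De Bruijn multiply-and-lookup table. */
            printf("%d\n", __builtin_ctz(v));       /* prints 4 */
            return (0);
    }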
Obtained from: Illumos (13921:9d721847e469)
Tested on: FreeBSD/amd64
MFC after: 1 month