- We need to add "-mllvm -arm-enable-ehabi" to clang's CFLAGS when
generating the unwind tables to tell it to add the required directives to
the assembly it generates.
when the kernel attempts to unwind through this function.
The .fnstart and .fnend in this function should be moved to macros, but we
are currently missing an END macro on ARM.
they can easily be used by later post-processing. When searching for
a compiled-in fdt blob, use the section headers to get the size and
location of the .dynsym section to do a symbol search.
This fixes a problem where the search could overshoot the symbol
table and wander into the string table. Sometimes that was harmless
and sometimes it led to spurious panic messages about an offset
bigger than the module size.
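A minimal sketch of the bounded search, with illustrative identifiers
(find_sym() and its arguments are not from the commit):

    /*
     * Bound the symbol walk by the size of .dynsym taken from the
     * section header, so the loop can no longer run off the end of the
     * symbol table into the string table.
     */
    static const Elf_Sym *
    find_sym(const Elf_Shdr *dynsym, const char *strtab, const char *name)
    {
            const Elf_Sym *sym, *end;

            sym = (const Elf_Sym *)(uintptr_t)dynsym->sh_addr;
            end = sym + dynsym->sh_size / sizeof(Elf_Sym);
            for (; sym < end; sym++) {
                    if (strcmp(strtab + sym->st_name, name) == 0)
                            return (sym);
            }
            return (NULL);  /* not found; never wanders past .dynsym */
    }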
elf headers, mask out the high nibble of that address. This effectively makes
the entry point the offset from the load address, and it gets adjusted for
the actual load address before jumping to it.
Masking the high nibble makes assumptions about memory layout that are true
for all the arm platforms we support right now, but it makes me uneasy.
This needs to be revisited.
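A minimal sketch of the masking, assuming 32-bit addresses and
illustrative names (ehdr, load_addr):

    /* e_entry is a virtual address such as 0xc0200100; masking the high
     * nibble leaves the offset from the load address (0x00200100), and
     * the real load address is added back before jumping. */
    Elf32_Addr entry_off, entry;

    entry_off = ehdr->e_entry & 0x0fffffff; /* strip the high nibble */
    entry = load_addr + entry_off;          /* adjust for load address */
    ((void (*)(void))entry)();              /* jump to the kernel */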
Change the size passed to malloc(9) now that callwheel buckets are
callout_list and not callout_tailq anymore. This change was already
in place, but it seems it got lost in the code churn of r248032.
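The fix amounts to sizing the buckets by the type actually in use,
roughly (a sketch, not the verbatim diff; M_CALLOUT is assumed):

    cc->cc_callwheel = malloc(
        sizeof(struct callout_list) *  /* was: sizeof(struct callout_tailq) */
        callwheelsize, M_CALLOUT, M_WAITOK);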
Reported by: alc, kib
- Use u_int for the length and max_length values
- Add a way to reset the max_length heuristic so that the mechanism can
  be reused consecutively without rebooting the machine
- Add a way to quickly display the top 5 contended buckets in the system
  for the max_length value.
This should give a quick overview of the quality of the hash table
distribution.
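A minimal sketch of the reset knob, with illustrative names (max_length,
the sysctl path, and the handler are not from the commit):

    static u_int max_length; /* lengths are u_int, per the change above */

    static int
    sysctl_max_length_reset(SYSCTL_HANDLER_ARGS)
    {
            int error, v;

            v = 0;
            error = sysctl_handle_int(oidp, &v, 0, req);
            if (error != 0 || req->newptr == NULL)
                    return (error);
            if (v != 0)
                    max_length = 0;  /* restart the measurement */
            return (0);
    }
    SYSCTL_PROC(_debug, OID_AUTO, max_length_reset,
        CTLTYPE_INT | CTLFLAG_RW, NULL, 0, sysctl_max_length_reset, "I",
        "Reset the max_length heuristic");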
Sponsored by: EMC / Isilon storage division
Reviewed by: jeff, davide
proper locking. This does not in any way prevent reclaim of the vnode.
Avoid this by not going over-the-wire in this case, relying on a
subsequent smbfs_getattr() call to restore consistency.
While I'm here, change a couple of SMBVDEBUG() calls into MPASS().
smbfs_smb_lookup() doesn't and shouldn't know about '.' and '..'
Reported by: pho's stress2 suite
I can 100% reliably trigger this on TID 1 traffic by using iperf -S 32
<client fields> to create traffic that maps to TID 1.
The reference driver doesn't do this check.
future further optimizations where the vm_object lock will be held
in read mode most of the time the page cache resident pool of pages
is accessed for reading purposes.
The change is mostly mechanical but a few notes are worth reporting:
* The KPI changes as follows:
- VM_OBJECT_LOCK() -> VM_OBJECT_WLOCK()
- VM_OBJECT_TRYLOCK() -> VM_OBJECT_TRYWLOCK()
- VM_OBJECT_UNLOCK() -> VM_OBJECT_WUNLOCK()
- VM_OBJECT_LOCK_ASSERT(MA_OWNED) -> VM_OBJECT_ASSERT_WLOCKED()
(in order to avoid visibility of implementation details)
- The read-mode operations are added:
VM_OBJECT_RLOCK(), VM_OBJECT_TRYRLOCK(), VM_OBJECT_RUNLOCK(),
VM_OBJECT_ASSERT_RLOCKED(), VM_OBJECT_ASSERT_LOCKED()
* The vm/vm_pager.h namespace pollution avoidance (its inline functions
  use VM_OBJECT_LOCK(), which forces consumers to include sys/mutex.h
  directly) means that all vm/vm_pager.h consumers must now also
  include sys/rwlock.h.
* zfs requires a rather convoluted fix to bring FreeBSD rwlocks into
  the compat layer, because a name clash between the FreeBSD and
  Solaris versions must be avoided.
  For this purpose, zfs redefines the vm_object locking functions
  directly, isolating the FreeBSD components in specific compat stubs.
This commit heavily breaks the KPI. Third-party ports must be updated
accordingly (I can think off-hand of VirtualBox, for example).
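For instance, the mechanical part of the conversion in a consumer looks
roughly like this (a sketch under the new names listed above, not taken
from the actual diff):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>  /* now required by vm/vm_pager.h consumers */
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>

    static vm_page_t
    lookup_resident(vm_object_t obj, vm_pindex_t pindex)
    {
            vm_page_t m;

            VM_OBJECT_WLOCK(obj);   /* was: VM_OBJECT_LOCK(obj) */
            m = vm_page_lookup(obj, pindex);
            VM_OBJECT_WUNLOCK(obj); /* was: VM_OBJECT_UNLOCK(obj) */
            return (m);
    }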
Sponsored by: EMC / Isilon storage division
Reviewed by: jeff
Reviewed by: pjd (ZFS specific review)
Discussed with: alc
Tested by: pho
Introduce a new KPI that checks whether the page cache is empty for a
specified vm_object. This KPI makes no assumptions about the locking,
so that it can also be used for building assertions at init and
destroy time.
It is mostly used to hide implementation details of the page cache.
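A minimal sketch of what such a predicate can look like (name and
internals assumed; the reviewed version may be vm_radix based, per the
note below):

    /* Lock-free on purpose, so it can back assertions at init/destroy
     * time where no lock ordering is established yet. */
    static __inline boolean_t
    vm_object_cache_is_empty(vm_object_t object)
    {

            return (object->cache == NULL);
    }

    /* e.g. at object termination: */
    KASSERT(vm_object_cache_is_empty(object),
        ("object %p terminated with cached pages", object));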
Sponsored by: EMC / Isilon storage division
Reviewed by: jeff
Reviewed by: alc (vm_radix based version)
Tested by: flo, pho, jhb, davide
Use the RTM_PINNED flag to mark routes as immutable.
Forbid deleting immutable routes without special rtrequest1_fib() flag.
Adding an interface address whose prefix is already in the routing table
is handled by atomically deleting the old prefix route and adding the
interface one.
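A minimal sketch of the delete-time check, assuming the pinned bit lives
in rt_flags (spelled RTF_PINNED here; the helper name and error code are
illustrative):

    static int
    rt_check_delete(struct rtentry *rt, int req_flags)
    {

            /* A pinned (immutable) route may only be deleted when the
             * caller passes the special flag to rtrequest1_fib(). */
            if ((rt->rt_flags & RTF_PINNED) != 0 &&
                (req_flags & RTF_PINNED) == 0)
                    return (EADDRINUSE);
            return (0);
    }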
Discussed with: andre, eri
MFC after: 3 weeks
This patchset implements a new TX lock, covering both the per-VAP (and
thus per-node) TX locking and the serialisation through to the underlying
physical device.
This implements the hard requirement that frames to the underlying physical
device are scheduled to the underlying device in the same order that they
are processed at the VAP layer. This includes adding extra encapsulation
state (such as sequence numbers and CCMP IV numbers.) Any order mismatch
here will result in dropped packets at the receiver.
There are multiple transmit contexts from the upper protocol layers as well
as the "raw" interface via the management and BPF transmit paths.
All of these need to be correctly serialised or bad behaviour will result
under load.
The specifics:
* Add a new TX IC lock - it will eventually just be used for serialisation
to the underlying physical device but for now it's used for both the
VAP encapsulation/serialisation and the physical device dispatch.
This lock is specifically non-recursive. (A sketch of the lock macros
follows this list.)
* Methodize the parent transmit, vap transmit and ic_raw_xmit function
pointers; use lock assertions in the parent/vap transmit routines.
* Add a lock assertion in ieee80211_encap() - the TX lock must be held
here to guarantee sensible behaviour.
* Refactor out the packet sending code from ieee80211_start() - now
ieee80211_start() is just a loop over the ifnet queue and it dispatches
each VAP packet send through ieee80211_start_pkt().
Yes, I will likely rename ieee80211_start_pkt() to something that
better reflects its status as a VAP packet transmit path. More on
that later.
* Add locking around the management and BAR TX sending - to ensure that
encapsulation and TX are done hand-in-hand.
* Add locking in the mesh code - again, to ensure that encapsulation
and mesh transmit are done hand-in-hand.
* Add locking around the power save queue and ageq handling, when
dispatching to the parent interface.
* Add locking around the WDS handoff.
* Add a note in the mesh dispatch code that the TX path needs to be
re-thought-out - right now it's doing a direct parent device transmit
rather than going via the vap layer. It may "work", but it's likely
incorrect (as it bypasses any possible per-node power save and
aggregation handling.)
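A hedged sketch of what the lock and assertion items above amount to;
the macro and field names are assumed to follow net80211 conventions
and are not the literal patch:

    /* MTX_DEF without MTX_RECURSE makes the lock non-recursive. */
    #define IEEE80211_TX_LOCK_INIT(ic, name) \
            mtx_init(&(ic)->ic_txlock, name, "802.11 TX lock", MTX_DEF)
    #define IEEE80211_TX_LOCK(ic)        mtx_lock(&(ic)->ic_txlock)
    #define IEEE80211_TX_UNLOCK(ic)      mtx_unlock(&(ic)->ic_txlock)
    #define IEEE80211_TX_LOCK_ASSERT(ic) \
            mtx_assert(&(ic)->ic_txlock, MA_OWNED)

    /* ieee80211_encap() and the methodized parent/vap transmit paths
     * then begin with something like: */
    IEEE80211_TX_LOCK_ASSERT(vap->iv_ic);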
Why not a per-VAP or per-node lock?
Because in order to ensure per-VAP ordering, we'd have to hold the
VAP lock across parent->if_transmit(). There are a few problems
with this:
* There's some state being set up during each driver transmit - specifically,
the encryption encap / CCMP IV setup. That should eventually be dragged
back into the encapsulation phase but for now it lives in the driver TX path.
This should be locked.
* Two drivers (ath, iwn) re-use the node->ni_txseqs array in order to
allocate sequence numbers when doing transmit aggregation. This should
also be locked.
* Drivers may have multiple frames queued already - so when one calls
if_transmit(), it may end up dispatching multiple frames for different
VAPs/nodes, each needing a different lock when handling that particular
end destination.
So to be "correct" locking-wise, we'd end up needing to grab a VAP or
node lock inside the driver TX path when setting up crypto / AMPDU sequence
numbers, and we may already _have_ a TX lock held - mostly for the same
destination vap/node, but sometimes it'll be for others. That could lead
to LORs and thus deadlocks.
So for now, I'm sticking with an IC TX lock. It has the advantage of
papering over the above and it also has the added advantage that I can
assert that it's being held when doing a parent device transmit.
I'll look at splitting the locks out a bit more later on.
General outstanding net80211 TX path issues / TODO:
* Look into separating out the VAP serialisation and the IC handoff.
It's going to be tricky as parent->if_transmit() doesn't give me the
opportunity to split queuing from driver dispatch. See above.
* Work with monthadar to fix up the mesh transmit path so it doesn't go via
the parent interface when retransmitting frames.
* Push the encryption handling back into the driver, if it's at all
architecturally sane to do so. I know it's possible - it's what mac80211
in Linux does.
* Make ieee80211_raw_xmit() queue a frame into VAP or parent queue rather
than doing a short-cut direct into the driver. There are QoS issues
here - you do want your management frames to be encapsulated and pushed
onto the stack sooner than the (large, bursty) amount of data frames
that are queued. But there has to be a saner way to do this.
* Fragments are still broken - drivers need to be upgraded to an if_transmit()
implementation and then fragmentation handling needs to be properly fixed.
Tested:
* STA - AR5416, AR9280, Intel 5300 abgn wifi
* Hostap - AR5416, AR9160, AR9280
* Mesh - some testing by monthadar@, more to come.
of upgrading older machines using ataraid(4) to newer releases.
This optional parameter is controlled via kern.geom.raid.legacy_aliases
and will create a /dev/ar0 device that points at /dev/raid/r0, for
example.
Tested on Dell SC 1425 DDF-1 format software RAID controllers,
installing from stable/7 and upgrading to stable/9 without having to
adjust /etc/fstab.
Reviewed by: mav
Obtained from: Yahoo!
MFC after: 2 Weeks
This will avoid a 0-byte read (in g_read_data()) leading to a panic if
the previously read data is erroneous.
Suggested by: John-Mark Gurney <jmg@funkthat.com>
Turn on the CTL disable tunable by default.
This will allow GENERIC configurations to boot on small memory boxes, but
not require end users who want to use CTL to recompile their kernel. They
can simply set kern.cam.ctl.disable=0 in loader.conf.
from being indirectly called via cpu_startup()+vm_ksubmap_init().
The boot order position remains the same at SI_SUB_CPU.
Allocation of the callout array is changed to standard kernel malloc
from a slightly obscure direct kernel_map allocation.
kern_timeout_callwheel_alloc() is renamed to callout_callwheel_init()
to better describe its purpose.
kern_timeout_callwheel_init() is removed, simplifying the per-cpu
initialization.
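The shape of the change, with the exact SYSINIT arguments assumed
rather than quoted:

    static void callout_callwheel_init(void *dummy);
    SYSINIT(callwheel_init, SI_SUB_CPU, SI_ORDER_ANY,
        callout_callwheel_init, NULL);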
Reviewed by: davide
kern_timeout_callwheel_alloc() where it is actually used.
This is a mechanical move and no tuning parameters are changed.
The pre-allocated callout array is only used for legacy timeout(9)
calls and is only allocated and active on cpu0. Eventually all
remaining users of timeout(9) should switch to the callout_* API.
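For reference, the auto-tuning that moved into
kern_timeout_callwheel_alloc() looks roughly like this (values and
formulas assumed for illustration, not quoted from the diff):

    /* Variables are the kern_timeout.c globals; imin()/fls() are the
     * standard kernel helpers. */
    ncallout = imin(16 + maxproc + maxfiles, 18508);
    TUNABLE_INT_FETCH("kern.ncallout", &ncallout);
    callwheelsize = 1 << fls(ncallout); /* round up to a power of two */
    callwheelmask = callwheelsize - 1;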
Reviewed by: davide