in6p_outputopts at the entrance of the functions. This trick was
necessary when we passed an in6 pcb to in6_embedscope(), within which
the in6p_outputopts member was used, but we do not use this kind of
interface any more.
Submitted by: Keiichi SHIMA <keiichi__at__iijlab.net>
Obtained from: KAME
data link type of the hook. It will be used to ease autoconfiguration
of netgraph and also to print warning messages when incompatible nodes
are connected together.
At the end of ng_snd_item(), the node queue is processed. In certain
netgraph setups deep recursive calls can occur.
For example, this happens when two nodes are connected and can send
items to each other in both directions. If, for some reason, both nodes
have a lot of items in their queues, then the processing thread will
recurse between these two nodes, delivering items left and right, going
deeper into the stack. Other setups can suffer from deep recursion, too.
The following factors can increase the risk of deep netgraph recursion:
- periodic write-access events on a node
- a combination of a slow link and a fast one in the same graph
- net.inet.ip.fastforwarding
Changes made:
- In ng_acquire_{read,write}() do not dequeue another item. Instead,
call ng_setisr() for this node (the deferral pattern is sketched below).
- At the end of ng_snd_item(), do not process the queue. Call ng_setisr()
if there are any dequeueable items on the node queue.
- In ng_setisr(), narrow the scope over which the worklist mutex is held.
- In ng_setisr(), assert that the queue mutex is held.
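For clarity, the deferral pattern behind the first two changes looks roughly
like the following. This is only an illustrative sketch: the node/queue
field names and sketch_setisr() are hypothetical stand-ins, not the actual
ng_base.c code.

/*
 * Illustrative sketch of the deferral pattern described above; not the
 * actual ng_base.c code.  Node/queue field names are hypothetical.
 */
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

struct sketch_node {
        struct mtx q_mtx;   /* protects the item queue */
        int        q_len;   /* number of queued items */
        /* ... */
};

static void sketch_setisr(struct sketch_node *);  /* stands in for ng_setisr() */

static void
sketch_release(struct sketch_node *node)
{
        mtx_lock(&node->q_mtx);
        /*
         * Old behaviour: dequeue the next item here and deliver it
         * inline, which can recurse back into the peer node and nest
         * arbitrarily deep when two busy nodes keep feeding each other.
         *
         * New behaviour: if anything is still queued, just put the node
         * on the netgraph ISR worklist and return; the queue is drained
         * later from a flat loop, so kernel stack depth stays bounded.
         */
        if (node->q_len > 0)
                sketch_setisr(node);
        mtx_unlock(&node->q_mtx);
}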
Theoretically, the first two changes should negatively affect performance.
To check this, some profiling was done:
1) On general real-world tasks, no noticeable performance difference was found.
2) The following test was run: two multithreaded nodes and one
single-threaded node were connected into a ring. A large queue of packets
was sent around this ring, and the time to pass the ring N times was measured.
This is a purely synthetic test: no items/mbufs are allocated, and no upcalls
or downcalls happen outside of netgraph. It doesn't represent a real load; it
is a stress test for ng_acquire_{read,write}() and the item queueing functions.
Surprisingly, the performance impact was positive: in this particular test the
new code is 13% faster on UP and 17% faster on SMP.
The problem was originally found, described, and analyzed by Roselyn Lee
from Vernier Networks, who also wrote the original patch. Thanks!
Submitted by: Roselyn Lee <rosel verniernetworks com>
and return a printable representation.
This fixes recognition of the PC Engines WRAP and improves the
recognition of the Soekris boards (the BIOS version can now be
seen in the dmesg output, for instance).
Also, add watchdog support for PCM-582x platforms.
Submitted by: Adrian Steinmann <ast@marabu.ch>
Slightly changed by: phk
PR: 81360
avg/median/stddev bars onto separate lines for readability if the
ranges overlapped. In 2005, ministat was extended to support more than
2 datasets, but the -s code was not updated. It will coredump if run
with -s and >2 sets.
PR: 82909
Submitted by: Dan Nelson <dnelson@allantgroup.com>
by Vladimir Dergachev for inclusion in DRM CVS, with minor modifications for
FreeBSD CVS and the appropriate license from Nicolai Haehnle on r300_reg.h.
Fixes hangs when using r300.sf.net userland, tested on a Radeon 9600 on amd64.
make the b_iodone callback responsible for setting it if it is needed.
Previously, it was set unconditionally by bufdone() without holding
whichever lock is shared by the b_iodone callback and the corresponding
top-half function. Consequently, in a race, the top-half function could
conclude that the operation was done before the b_iodone callback finished.
See, for example, aio_physwakeup() and aio_fphysio().
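The race being closed looks roughly like this; an illustrative sketch with a
hypothetical shared lock and wait function, not the real bufdone()/aio code.

/*
 * Illustrative sketch of the race described above; the lock and the
 * wait function are hypothetical, not the real bufdone()/aio code.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/buf.h>

static struct mtx sketch_lock;          /* lock shared by both sides */

static void
sketch_iodone(struct buf *bp)           /* runs as bp->b_iodone */
{
        mtx_lock(&sketch_lock);
        /* ... update the request state that the top half inspects ... */
        bp->b_flags |= B_DONE;          /* now set here, under the shared lock */
        wakeup(bp);
        mtx_unlock(&sketch_lock);
}

static void
sketch_wait(struct buf *bp)             /* top-half side */
{
        mtx_lock(&sketch_lock);
        while ((bp->b_flags & B_DONE) == 0)
                msleep(bp, &sketch_lock, PRIBIO, "skwait", 0);
        /*
         * Previously bufdone() set B_DONE before b_iodone ran and
         * without holding sketch_lock, so this side could observe
         * B_DONE while the callback was still updating the state.
         */
        mtx_unlock(&sketch_lock);
}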
Note: I don't believe that the other, more widely-used b_iodone callbacks
are affected.
Discussed with: jeff
Reviewed by: phk
MFC after: 2 weeks
states - has to drop the lock when calling back to ip_output(), the state
purge timeout might run and gc the state. This results in an rb-tree
inconsistency. With this change, we flag expiring states while holding the
lock and back off if the flag is already set.
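The flag-and-back-off idea amounts to roughly the following; a sketch with a
hypothetical state structure and flag name, not the actual pf code.

/*
 * Sketch of the flag-and-back-off pattern described above; the state
 * structure and flag name are hypothetical, not the actual pf code.
 */
#define SKETCH_EXPIRING 0x01

struct sketch_state {
        int flags;
        /* ... RB tree linkage, addresses, timeouts ... */
};

/* Called with the state lock held. */
static int
sketch_begin_expire(struct sketch_state *s)
{
        if (s->flags & SKETCH_EXPIRING)
                return (0);     /* already being expired elsewhere; back off */
        s->flags |= SKETCH_EXPIRING;
        /*
         * The state is marked while the lock is still held, so the lock
         * may now be dropped (e.g. around ip_output()) without the purge
         * timeout removing the state from the RB tree underneath us.
         */
        return (1);
}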
Reported by: glebius
MFC after: 2 weeks
- Add a new uma_zfree_internal() flag, ZFREE_STATFREE, which causes it
to update the zone's uz_frees statistic (sketched below). Previously, the
statistic was updated unconditionally.
- Use the flag in situations where a "real" free occurs: i.e., one where
the caller is freeing an allocated item, to be differentiated from
situations where uma_zfree_internal() is used to tear down the item
during slab teardown in order to invoke its fini() method. Also use
the flag when UMA is freeing its internal objects.
- When exchanging a bucket with the zone from the per-CPU cache when
freeing an item, flush cache statistics back to the zone (since the
zone lock and critical section are both held) to match the allocation
case.
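The statistics gating amounts to roughly the following; a simplified sketch
with a shortened argument list and an illustrative flag value, not the actual
uma_core.c implementation.

/*
 * Simplified sketch of the flag-gated statistics update described
 * above; not the actual uma_core.c implementation.
 */
#include <vm/uma.h>
#include <vm/uma_int.h>

#define ZFREE_STATFREE  0x00000001      /* value is illustrative */

static void
sketch_zfree_internal(uma_zone_t zone, void *item, int flags)
{
        /* ... run dtor/fini as requested and return the item to its slab ... */

        /*
         * Only account for a free when the caller is genuinely freeing
         * an allocated item (ZFREE_STATFREE), not when the item is being
         * torn down as part of slab teardown just to invoke fini().
         */
        if (flags & ZFREE_STATFREE)
                zone->uz_frees++;       /* with the zone lock held */
}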
MFC after: 3 days
section on "make world". The old link still works fine, but all the
hyperlinks of the referenced document are broken; the same links
work find if /doc/en_US.ISO8859-1 is used instead of plain /doc to
reach the online Handbook copy.