ng_apply_item(). There are possible (and I have hit one) use-after-free
class panics because of it.
If a hook is specified, require it to be valid at apply time. The only
exceptions are the internal ng_con_part2(), ng_con_part3() and
ng_rmhook_part2() functions, which are specifically written to work with
invalid hooks.
virtualization work done by Marko Zec (zec@).
This is the first in a series of commits over the course
of the next few weeks.
Mark all uses of global variables to be virtualized
with a V_ prefix.
Use macros to map them back to their global names for
now, so this is a NOP change only.
We hope to have caught at least 85-90% of what is needed
so we do not invalidate a lot of outstanding patches again.
Obtained from: //depot/projects/vimage-commit2/...
Reviewed by: brooks, des, ed, mav, julian,
jamie, kris, rwatson, zec, ...
(various people I forgot, different versions)
md5 (with a bit of help)
Sponsored by: NLnet Foundation, The FreeBSD Foundation
X-MFC after: never
V_Commit_Message_Reviewed_By: more people than the patch
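A minimal sketch of the V_ mapping described above, with a hypothetical
variable; at this stage the macro expands straight back to the global,
so the change is a NOP:

    /* Hypothetical example of the V_ prefix mapping (NOP stage). */
    extern int ipforwarding;             /* global to be virtualized later */
    #define V_ipforwarding ipforwarding  /* maps back to the global for now */

    /* Code is already written against the virtualized name: */
    if (V_ipforwarding)
            forward_this_packet();       /* hypothetical consumer */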
dispatched without Giant, and add NETISR_FORCEQUEUE, which allows specific
netisr handlers to always be dispatched via a queue (deferred). Mark the
usb and if_ppp netisr handlers as NETISR_FORCEQUEUE, and explicitly
acquire Giant in those handlers.
Previously, any netisr handler not marked NETISR_MPSAFE would necessarily
run deferred and with Giant acquired. This change removes Giant
scaffolding from the netisr infrastructure, but NETISR_FORCEQUEUE allows
non-MPSAFE handlers to continue to force deferred dispatch so as to avoid
lock order reversals between their acquisition of Giant and any calling
context.
It is likely we will be able to remove NETISR_FORCEQUEUE once
IFF_NEEDSGIANT is removed, as non-MPSAFE usb and if_ppp drivers will no
longer be supported.
Reviewed by: bz
MFC after: 1 month
X-MFC note: We can't remove NETISR_MPSAFE from stable/7 for KPI reasons,
but the rest can go back.
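A hedged sketch of the registration pattern described above, assuming
the netisr_register() interface of that era; the handler and queue
names are illustrative:

    /* A non-MPSAFE handler: force deferred dispatch via the queue and
     * take Giant explicitly, since the framework no longer does it. */
    static struct ifqueue example_intrq;

    static void
    example_intr(struct mbuf *m)
    {
            mtx_lock(&Giant);
            /* ... process the queued packet ... */
            mtx_unlock(&Giant);
    }

    static void
    example_init(void)
    {
            netisr_register(NETISR_PPP, example_intr, &example_intrq,
                NETISR_FORCEQUEUE);
    }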
rev. 1.149 rework.
It saves several percent of CPU time on SMP by using UMA's internal
per-CPU allocation limits instead of a private global variable updated
with atomics on every allocation.
Tested with: Netperf cluster
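A minimal sketch of the approach, assuming the standard UMA API (zone
and limit names illustrative):

    /* Let UMA enforce the item limit through its per-CPU caches instead
     * of a private counter updated with atomics on every allocation. */
    ng_qzone = uma_zcreate("NetGraph items", sizeof(struct ng_item),
        NULL, NULL, NULL, NULL, UMA_ALIGN_CACHE, 0);
    uma_zone_set_max(ng_qzone, maxalloc);   /* replaces the global counter */

    /* Allocations now fail cheaply once the zone limit is reached: */
    item = uma_zalloc(ng_qzone, M_NOWAIT | M_ZERO);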
- reorder structure fields (XX_refs) a bit, to group fields that are
modified at the same time. According to my tests this gives up to a 10%
SMP performance benefit on real workloads, due to reduced inter-CPU
cache thrashing.
- change q_flags from long to int, as long is not really needed there
and its use with atomics is disputed by some people.
- move the NGF_WORKQ flag into a separate field, q_flags2, as it is
protected by the queue mutex instead of the node writer protection used
by the rest of the flags.
- move the nd_work queue entry into the ng_queue structure, to which it
is more closely related, and make it an STAILQ instead of a TAILQ, as
it is now a classic FIFO (see the sketch after this message).
- remove the q_node pointer from the ng_queue structure, as it is not
really needed.
- reimplement the item queue using STAILQ instead of an equivalent
private implementation. Since the BT subsystem has its own item queues
using ng_item.el_next, update it as well.
- change the depth field in ng_item from uintptr_t to u_int. It had
been made uintptr_t only to keep ABI compatibility.
Reviewed by: julian, emax
Tested with: Netperf cluster
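An illustrative sketch of the STAILQ-based FIFO mentioned above:

    #include <sys/queue.h>

    STAILQ_HEAD(ng_itemq, ng_item);

    struct ng_item {
            STAILQ_ENTRY(ng_item) el_next;
            /* ... */
    };

    /* Enqueue at the tail, dequeue from the head: a classic FIFO. */
    STAILQ_INSERT_TAIL(&ngq->queue, item, el_next);
    item = STAILQ_FIRST(&ngq->queue);
    STAILQ_REMOVE_HEAD(&ngq->queue, el_next);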
will never exit ngintr() while there are ready requests on the queue.
It was done years ago in the hope of parallel queue processing by
several net threads. But even when we do have several threads, we have
no right to process a queue in parallel, as it would break the original
request serialization that is critically important for some setups.
Before this patch the callback returned the result of the last finished
call chain. Now it returns the last nonzero result from all call chain
results in this request.
Since this improvement gives reliable error reporting, it is now
possible to remove the dirty workaround in ng_socket that was made to
return the ENOBUFS error statuses of request-response operations. That
workaround was responsible for returning ENOBUFS errors to completely
unrelated requests working on the socket at the same time.
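A sketch of the "last nonzero result" rule, with hypothetical names for
the request tracker:

    /* Accumulate call-chain results: zero never overwrites a real error. */
    static void
    ng_example_apply(void *context, int error)
    {
            struct ng_example_req *req = context;   /* hypothetical */

            if (error != 0)
                    req->error = error;     /* keep the last nonzero result */
            if (--req->refs == 0)
                    wakeup(req);            /* all call chains finished */
    }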
to avoid terrible, unpredictable effects on netgraph operation when
they are exhausted while allocating control messages.
Add a separate, configurable limit of 512 items for data item
allocation, as DoS/overload protection.
Discussed with: julian
for that argument. This will allow DDB to detect the broad category of
reason why the debugger has been entered, which it can use for the
purposes of deciding which DDB script to run.
Assign approximate why values to all current consumers of the
kdb_enter() interface.
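A usage sketch; the constant follows the interface described above:

    #include <sys/kdb.h>

    /* The KDB_WHY_* category tells DDB which script to run on entry. */
    kdb_enter(KDB_WHY_PANIC, "example failure");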
removing some copy&pasted code.
- Reduce copy and paste in ng_apply_item().
- Resurrect ng_send_fn() as a valid symbol, not a define.
Reviewed by: mav, julian
When an item is forwarded, the reference counter is incremented; when
the item is processed, the counter is decremented. When the counter
reaches zero, the apply handler is called.
This now makes it possible to report the correct connect() call status
to user level at the right time.
Without it, some errors could be left unnoticed and unhandled, leading
to hooks being left in a half-connected state.
Reviewed by: julian@
Approved by: re (kensmith), glebius (mentor)
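A hedged sketch of the refcounted apply handler described above
(structure and names hypothetical):

    struct apply_info {
            void  (*apply)(void *context, int error);
            void   *context;
            u_int   refs;    /* one reference per forwarded item */
            int     error;
    };

    static void
    apply_deref(struct apply_info *ap, int error)
    {
            if (error != 0)
                    ap->error = error;
            /* atomic_fetchadd_int() returns the old value: 1 means last. */
            if (atomic_fetchadd_int(&ap->refs, -1) == 1)
                    (*ap->apply)(ap->context, ap->error);
    }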
sysctl_handle_int is not sizeof the int type you want to export.
The type must always be an int or an unsigned int.
Remove the instances where a sizeof(variable) is passed, to stop people
from accidentally cutting and pasting these examples.
In a few places sysctl_handle_int was being used on 64-bit types, which
would truncate the value being exported. In these cases use
sysctl_handle_quad to export them, and change the format to Q so that
sysctl(1) can still print them.
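A sketch of the corrected export of a 64-bit counter (names
hypothetical):

    static uint64_t ng_example_count;

    /* sysctl_handle_int would truncate this; use the quad handler and
     * the "Q" format so sysctl(1) can still print it. */
    SYSCTL_PROC(_net_graph, OID_AUTO, example_count,
        CTLTYPE_QUAD | CTLFLAG_RD, &ng_example_count, 0,
        sysctl_handle_quad, "Q", "64-bit example counter");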
for doing this job. This change will make it easy to migrate from using
spinning locks to adaptive ones.
Reviewed by: glebius, julian
Approved by: cognet (mentor)
from whoever has dequeued the item from the queue. Generally they have
no interest in the result, and even if the handler is called directly
by the queuer, it should still pretend that it was queued. The queuer
should assume that the call was queued; giving callers false confidence
that they are getting status leads to hard-to-find bugs.
Make it void, and remove all the code that tried to return status
through it.
- Introduce ng_topo_mtx, a mutex to protect topology changes.
- In ng_destroy_node() protect with ng_topo_mtx the process
of checking and pointing at ng_deadnode. [1]
- In ng_con_part2() check that our peer is not a ng_deadnode,
and protect the check with ng_topo_mtx.
- Add KASSERTs to ng_acquire_read()/ng_acquire_write(), to give a more
understandable panic message if they are called on ng_deadnode.
Reported by: Roselyn Lee [1]
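A hedged sketch of the assertion, as it might read at the top of the
acquire functions:

    KASSERT(node != &ng_deadnode,
        ("%s: working on deadnode", __func__));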
- Introduce new flags NGQF_QREADER and NGQF_QWRITER, which tell how the
item should actually be applied, overriding the NGQF_READER/NGQF_WRITER
flags.
- Do not distinguish between a pending reader and a pending writer. Use
only one flag, which is raised when there are pending items.
- Schedule netgraph ISR in ng_queue_rw(), so that callers
do not need to do this job.
- Fix several comments.
Submitted by: julian
semantics and was then reused for the next node, it would still be
applied as a writer again.
To fix the regression, the decision was made never to alter
item->el_flags after the item has been allocated. This requires
checking for overrides both in ng_dequeue() and in ng_snd_item().
Details:
- The caller of ng_apply_item() knows what the current access to the
node is and specifies it to ng_apply_item(). The latter drops the given
access after the item has been applied.
- ng_dequeue() needs to be supplied with an int pointer, where it
stores the access obtained on the node.
- Check for node/hook access overrides in ng_dequeue().
times in a row, without checking whether the callout had been serviced
or not (ng_pptpgre and ng_ppp were caught doing this).
- In ng_callout(), save the old item before calling callout_reset(). If
the latter returned 1, free this item (see the sketch after this
message).
- In ng_uncallout(), clear c->c_arg.
Problem reported by: Alexandre Kardanev
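A hedged sketch of the ng_callout() part of the fix; it relies on
callout_reset() returning 1 when it cancelled a still-pending callout,
and the trampoline name is an assumption about the surrounding code:

    item_p oitem = c->c_arg;    /* item from the previous arming, if any */

    if (callout_reset(c, ticks, &ng_callout_trampoline, item) == 1 &&
        oitem != NULL)
            NG_FREE_ITEM(oitem);    /* pending call cancelled: free it */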
either the reader or the writer flag on the item in the function that
allocates the item. Do not modify these flags when the item is applied
or queued.
The only exceptions are node and hook overrides; they can change item
flags to writer.
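A minimal sketch, with a hypothetical helper name, of fixing the access
mode once at allocation time:

    static item_p
    ng_alloc_item(int rw)
    {
            item_p item;

            item = uma_zalloc(ng_qzone, M_NOWAIT | M_ZERO);
            if (item != NULL)       /* the mode is decided exactly once */
                    item->el_flags =
                        (rw == NGQRW_W) ? NGQF_WRITER : NGQF_READER;
            return (item);
    }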
At the end of ng_snd_item(), node queue is processed. In certain
netgraph setups deep recursive calls can occur.
For example this happens, when two nodes are connected and can send
items to each other in both directions. If, for some reason, both nodes
have a lot of items in their queues, then the processing thread will
recurse between these two nodes, delivering items left and right, going
deeper in the stack. Other setups can suffer from deep recursion, too.
The following factors can influence risk of deep netgraph call:
- periodical write-access events on node
- combination of slow link and fast one in one graph
- net.inet.ip.fastforwarding
Changes made:
- In ng_acquire_{read,write}() do not dequeue another item. Instead,
call ng_setisr() for this node.
- At the end of ng_snd_item(), do not process the queue. Call
ng_setisr() if there are any dequeueable items on the node queue.
- In ng_setisr(), narrow the time the worklist mutex is held.
- In ng_setisr(), assert the queue mutex.
Theoretically, the first two changes should negatively affect performance.
To check this, some profiling was made:
1) In general real tasks, no noticeable performance difference was
found.
2) The following test was made: two multithreaded nodes and one
single-threaded one were connected into a ring. Large queues of packets
were sent around this ring. The time to pass the ring N times was
measured.
This is a very vacuous test: no items/mbufs are allocated, and there
are no upcalls or downcalls outside of netgraph. It doesn't represent a
real load; it is a stress test for ng_acquire_{read,write}() and the
item queueing functions.
Surprisingly, the performance impact was positive! New code is 13% faster
on UP and 17% faster on SMP, in this particular test.
The problem was originally found, described and analyzed, and the
original patch written, by Roselyn Lee from Vernier Networks. Thanks!
Submitted by: Roselyn Lee <rosel verniernetworks com>
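A hedged sketch of the first two changes; the queue predicate is
hypothetical:

    /* Instead of dequeueing and delivering the next item from the
     * current stack frame (which recurses), mark the node for the
     * netgraph SWI thread, keeping stack depth bounded. */
    mtx_lock(&node->nd_input_queue.q_mtx);
    if (NG_QUEUE_HAS_READY_ITEM(&node->nd_input_queue)) /* hypothetical */
            ng_setisr(node);        /* ngintr() will drain the queue */
    mtx_unlock(&node->nd_input_queue.q_mtx);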
an item may be queued and processed later. While this is OK for mbufs,
this is a problem for control messages.
In the framework:
- Add an optional callback function pointer to an item. When the item
gets applied, the callback is executed from ng_apply_item().
- Add a new flag, NG_PROGRESS. If this flag is supplied, then return
EINPROGRESS instead of 0 in case the item failed to deliver
synchronously and was queued.
- Honor NG_PROGRESS in ng_snd_item().
In ng_socket:
- When userland sends a control message, add the callback to the item
(see the sketch after this message).
- If ng_snd_item() returns EINPROGRESS, then sleep.
This change fixes possible races in ngctl(8) scripts.
Reviewed by: julian
Approved by: re (scottl)
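A hedged sketch of the ng_socket side, with hypothetical names for the
callback hookup and sleep channel:

    /* Attach the callback, then sleep only if delivery was deferred. */
    item->apply = ng_socket_item_applied;   /* hypothetical callback */
    item->context = priv;

    error = ng_snd_item(item, NG_PROGRESS);
    if (error == EINPROGRESS)               /* queued: wait for apply */
            error = tsleep(priv, PCATCH, "ngsnd", 0);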
specified by caller.
- Change the ng_snd_item() interface - use a 'flags' argument instead
of a boolean 'queue'.
- Extend the ng_send_fn(), ng_package_data() and ng_package_msg()
interfaces - add the possibility to pass flags. Rename ng_send_fn() to
ng_send_fn1(), and create a macro for ng_send_fn() (see the sketch
after this message).
- Update all macros that use ng_package_data() and ng_package_msg().
Reviewed by: julian
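A sketch of the compatibility macro described above, assuming a default
flags value of 0:

    /* ng_send_fn() keeps the old interface; ng_send_fn1() takes flags. */
    #define ng_send_fn(node, hook, fn, arg1, arg2) \
            ng_send_fn1((node), (hook), (fn), (arg1), (arg2), 0)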
SI_SUB_INIT_IF but before SI_SUB_DRIVERS. Make the Netgraph(4)
framework initialize at the SI_SUB_NETGRAPH level.
This does not address the bigger problem (MODULE_DEPEND does not seem
to work when modules are compiled into the kernel), but it fixes the
problem with Netgraph Bluetooth device drivers reported by a few folks.
PR: i386/69876
Reviewed by: julian, rik, scottl
MFC after: 3 days
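A hedged sketch of attaching the framework at the new level; the module
event handler name is an assumption:

    static moduledata_t netgraph_mod = {
            "netgraph",
            ngb_mod_event,      /* framework module event handler */
            NULL
    };
    DECLARE_MODULE(netgraph, netgraph_mod, SI_SUB_NETGRAPH, SI_ORDER_MIDDLE);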
out c->c_func, we can't read it after callout_stop(). To read it
before, we need to acquire callout_lock to avoid a race. This commit
narrows down the area where the lock is held, but the hack is still
present.
This should be redesigned.
Approved by: julian (mentor)
using linker_load_module(). This works OK if the NGM_MKPEER message
came from userland and we have a process associated with the thread.
But when NGM_MKPEER was queued because the target node was busy,
linker_load_module() is called from the netisr thread, leading to
panic.
To work around that, we no longer load modules from the framework;
instead, ng_socket loads the module (if required) before sending
NGM_MKPEER.
However, the race condition between the return from NgSendMsg() and the
actual creation of the node still exists and needs to be solved.
PR: kern/62789
Approved by: julian
Also introduce a macro, to be called by persistent nodes during
shutdown to signal their persistence, hiding this mechanism from the
node author.
Make node flags have a consistent style in naming.
Document the change.
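A hedged sketch of a persistent node's shutdown method; I am assuming
the macro is NG_NODE_REVIVE(), which hides the flag manipulation:

    static int
    ng_example_shutdown(node_p node)
    {
            /* Persistent node: signal that we survive a generic shutdown. */
            NG_NODE_REVIVE(node);
            return (0);
    }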
is obviously not run a lot (but is in some test cases).
This code is not usually run because it covers a case that doesn't
happen often: removing a node that has data traversing it.
for unknown events.
A number of modules return EINVAL in this instance, and I have left
those alone for now and instead taught MOD_QUIESCE to accept this
as "didn't do anything".
- Assert the mutex in NG_IDHASH_FIND() since the mutex is required to
safely walk the node lists in the ng_ID_hash table.
- Acquire the ng_nodelist_mtx when walking ng_allnodes or ng_allhooks
to generate state dump output from the netgraph sysctls.
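A sketch of the lookup macro with the assertion, reconstructed from the
description above (the hash helper names are assumptions):

    #define NG_IDHASH_FIND(ID, node)                                  \
            do {                                                      \
                    mtx_assert(&ng_idhash_mtx, MA_OWNED);             \
                    LIST_FOREACH(node, &ng_ID_hash[NG_IDHASH_FN(ID)], \
                        nd_idnodes) {                                 \
                            if (NG_NODE_IS_VALID(node) &&             \
                                (NG_NODE_ID(node) == ID))             \
                                    break;                            \
                    }                                                 \
            } while (0)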
behaviour lost in the change from 4.x style netgraph tee nodes.
Alter the tee node to use the new method. Document the behaviour.
Step the ABI version number... old netgraph klds will refuse to load.
Better than just crashing.
Submitted by: Gleb Smirnoff <glebius@cell.sick.ru>
whether or not the isr needs to hold Giant when running; Giant-less
operation is also controlled by the setting of debug_mpsafenet
o mark all netisr's except NETISR_IP as needing Giant
o add a GIANT_REQUIRED assertion to the top of netisr's that need Giant
o pickup Giant (when debug_mpsafenet is 1) inside ip_input before
calling up with a packet
o change netisr handling so swi_net runs w/o Giant; instead we grab
Giant before invoking handlers based on whether the handler needs Giant
o change netisr handling so that netisr's that are marked MPSAFE may
have multiple instances active at a time
o add netisr statistics for packets dropped because the isr is inactive
Supported by: FreeBSD Foundation
conservative lock. The problem with the lock-less algorithm is that it
suffers from the ABA problem. Running an application that funnels a
couple of hundred kpkts/s through the netgraph system on a dual-CPU
system with MPSAFE drivers will panic almost immediately with the old
algorithm.
It may be possible to eliminate the contention between threads that insert
free items into the list and those that get free items by using the
Michael/Scott queue algorithm that has two locks.
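An illustrative sketch of the ABA hazard in a lock-free LIFO free list
like the one replaced here (not the original code; the list head name
is hypothetical):

    struct ng_item *head, *next;

    /* Thread 1 reads head A and next B, then stalls. Thread 2 pops A
     * and B, frees B, and pushes A back. Thread 1's CAS now succeeds
     * because the head is A again, installing the stale pointer B. */
    do {
            head = ngq_freelist;            /* read A */
            next = head->el_next;           /* read B */
    } while (atomic_cmpset_ptr((volatile uintptr_t *)&ngq_freelist,
        (uintptr_t)head, (uintptr_t)next) == 0);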
of asserting that an mbuf has a packet header. Use it instead of hand-
rolled versions wherever applicable.
Submitted by: Hiten Pandya <hiten@unixdaemons.com>
drain routines are done by swi_net, which allows for better queue
control at some future point. Packets may also be directly dispatched
to a netisr instead of queued; this may be of interest at some
installations, but it currently defaults to off.
Reviewed by: hsu, silby, jayanth, sam
Sponsored by: DARPA, NAI Labs
queue items that can be allocated by netgraph and the number of free queue
items that are cached on a private list.
Netgraph places an upper limit on the number of queue items it may allocate.
When there is a large number of netgraph messages travelling through
the system (100k/sec and more), there is a high probability that
messages get queued at the nodes and netgraph runs out of queue items.
In this case the data flow through netgraph gets blocked. The tunable
for the number of free items lets one trade memory for performance.
The tunables are also available as read-only sysctls.
PR: kern/47393
Reviewed by: julian
Approved by: jake (mentor)
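A hedged sketch of the knobs described above (names and defaults
illustrative):

    static int maxalloc = 4096;     /* limit on allocated queue items */
    static int maxfree  = 1024;     /* limit on cached free queue items */

    TUNABLE_INT("net.graph.maxalloc", &maxalloc);
    TUNABLE_INT("net.graph.maxfree", &maxfree);

    /* Also exported as read-only sysctls: */
    SYSCTL_INT(_net_graph, OID_AUTO, maxalloc, CTLFLAG_RD, &maxalloc, 0,
        "Maximum number of queue items to allocate");
    SYSCTL_INT(_net_graph, OID_AUTO, maxfree, CTLFLAG_RD, &maxfree, 0,
        "Maximum number of free queue items to cache");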