a new rwlock, ipx_ifaddr_rw, wrapped with macros. This locking is
necessary, but not by itself sufficient, to satisfy the stability
requirements of a fully parallel IPX input path during interface
reconfiguration.
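A minimal sketch of the pattern, assuming the standard rwlock(9) API;
the macro spellings are illustrative rather than confirmed against the
tree:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    extern struct rwlock ipx_ifaddr_rw;

    #define IPX_IFADDR_LOCK_INIT()  rw_init(&ipx_ifaddr_rw, "ipx_ifaddr_rw")
    #define IPX_IFADDR_RLOCK()      rw_rlock(&ipx_ifaddr_rw)   /* input path */
    #define IPX_IFADDR_RUNLOCK()    rw_runlock(&ipx_ifaddr_rw)
    #define IPX_IFADDR_WLOCK()      rw_wlock(&ipx_ifaddr_rw)   /* reconfig */
    #define IPX_IFADDR_WUNLOCK()    rw_wunlock(&ipx_ifaddr_rw)

The input path takes the read lock while walking the address list;
interface reconfiguration takes the write lock.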
MFC after: 3 weeks
using raw IPX sockets. While functional, this support is disabled
by a flag that can't be changed from userspace, and a Google search
reveals no documentation or use of that flag anywhere. Removing the
support eliminates a potential lock order reversal and reentrancy
issue in which the IPX output path reentered the input path.
An alternative to removal would be to use the netisr, as a comment I
added in 2005 suggests. While that change would be fairly
straightforward, the lack of any consumers, and the absence of any easy
way to become one (a kernel modification and recompile are required),
suggest that this is simply an unused feature.
Update README to remove this TODO, and a TODO regarding IPX/IP
encapsulation which was also removed a few years ago.
MFC after: 1 week
threads:
- Support up to one netisr thread per CPU, each processing its own
workstream, or set of per-protocol queues. Threads may be bound
to specific CPUs, or allowed to migrate, based on a global policy.
In the future it would be desirable to support topology-centric
policies, such as "one netisr per package".
- Allow each protocol to advertise an ordering policy, which can
currently be one of:
NETISR_POLICY_SOURCE: packets must maintain ordering with respect to
an implicit or explicit source (such as an interface or socket).
NETISR_POLICY_FLOW: make use of mbuf flow identifiers to place work,
as well as allowing protocols to provide a flow generation function
for mbufs without flow identifiers (m2flow). Falls back on
NETISR_POLICY_SOURCE if no flow ID is available.
NETISR_POLICY_CPU: allow protocols to inspect and assign a CPU for
each packet handled by netisr (m2cpuid).
- Provide utility functions for querying the number of workstreams
being used, as well as a mapping function from workstream to CPU ID,
which protocols may use in work placement decisions.
- Add explicit interfaces to get and set per-protocol queue limits, and
get and clear drop counters, which query data or apply changes across
all workstreams.
- Add a more extensible netisr registration interface, in which
protocols declare 'struct netisr_handler' structures for each
registered NETISR_ type. These include name, handler function,
optional mbuf to flow ID function, optional mbuf to CPU ID function,
queue limit, and ordering policy. Padding is present to allow these
to be expanded in the future. If no queue limit is declared, then
a default is used. (A registration sketch follows this list.)
- Queue limits are now per-workstream, and raised from the previous
IFQ_MAXLEN default of 50 to 256.
- All protocols are updated to use the new registration interface,
and, with the exception of netnatm, to use the default queue limits.
Most protocols
register as NETISR_POLICY_SOURCE, except IPv4 and IPv6, which use
NETISR_POLICY_FLOW, and will therefore take advantage of driver-
generated flow IDs if present.
- Formalize a non-packet based interface between interface polling and
the netisr, rather than having polling pretend to be two protocols.
Provide two explicit hooks in the netisr worker for start and end
events for runs: netisr_poll() and netisr_pollmore(), as well as a
function, netisr_sched_poll(), to allow the polling code to schedule
netisr execution. DEVICE_POLLING still embeds single-netisr
assumptions in its implementation, so for now if it is compiled into
the kernel, a single and unbound netisr thread is enforced
regardless of tunable configuration.
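As promised above, a sketch of a registration under the new interface,
modeled loosely on how IPv4 might hook up; treat the exact field names
and init function as illustrative:

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <net/netisr.h>

    extern void ip_input(struct mbuf *m);   /* protocol input routine */

    static struct netisr_handler ip_nh = {
            .nh_name = "ip",
            .nh_handler = ip_input,
            .nh_proto = NETISR_IP,
            /* nh_qlimit unset: the framework default applies. */
            .nh_policy = NETISR_POLICY_FLOW,
            /* nh_m2flow unset: use driver flow IDs when present,
             * fall back on source ordering otherwise. */
    };

    static void
    ip_netisr_init(void)
    {
            netisr_register(&ip_nh);
    }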
In the default configuration, the new netisr implementation maintains
the same basic assumptions as the previous implementation: a single,
unbound worker thread processes all deferred work, and direct dispatch
is enabled by default wherever possible.
Performance measurement shows a marginal performance improvement over
the old implementation due to the use of batched dequeue.
An rmlock is used to synchronize use of the framework against
registration and unregistration; currently, synchronized use is disabled
(replicating current netisr policy) due to a measurable 3%-6% hit in
ping-pong micro-benchmarking. It will be enabled once further rmlock
optimization has taken place. However, in practice, netisrs are
rarely registered or unregistered at runtime.
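For reference, the rmlock pattern that makes it attractive for a
read-mostly table like this, assuming the standard rmlock(9) API; the
lock name and surrounding structure are illustrative:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rmlock.h>

    static struct rmlock netisr_rmlock;     /* rm_init()ed at startup */

    /* Per-packet read side: cheap, runs concurrently on all CPUs. */
    static void
    netisr_use(void)
    {
            struct rm_priotracker tracker;

            rm_rlock(&netisr_rmlock, &tracker);
            /* ... look up the protocol handler and dispatch ... */
            rm_runlock(&netisr_rmlock, &tracker);
    }

    /* Rare write side: registration and unregistration. */
    static void
    netisr_change(void)
    {
            rm_wlock(&netisr_rmlock);
            /* ... modify the handler table ... */
            rm_wunlock(&netisr_rmlock);
    }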
A new man page for netisr will follow, but since one doesn't currently
exist, it hasn't been updated.
This change is not appropriate for MFC, although the polling shutdown
handler should be merged to 7-STABLE.
Bump __FreeBSD_version.
Reviewed by: bz
dispatched without Giant, and add NETISR_FORCEQUEUE, which allows specific
netisr handlers to always be dispatched via a queue (deferred). Mark the
usb and if_ppp netisr handlers as NETISR_FORCEQUEUE, and explicitly
acquire Giant in those handlers.
Previously, any netisr handler not marked NETISR_MPSAFE would necessarily
run deferred and with Giant acquired. This change removes Giant
scaffolding from the netisr infrastructure, but NETISR_FORCEQUEUE allows
non-MPSAFE handlers to continue to force deferred dispatch so as to avoid
lock order reversals between their acquisition of Giant and any calling
context.
It is likely we will be able to remove NETISR_FORCEQUEUE once
IFF_NEEDSGIANT is removed, as non-MPSAFE usb and if_ppp drivers will no
longer be supported.
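A sketch of what the usb side of this looks like, assuming the pre-8.x
netisr_register() signature; the handler body is illustrative:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/mbuf.h>
    #include <net/netisr.h>

    static void
    usbintr(struct mbuf *m)
    {
            /* Handler is non-MPSAFE: take Giant here, now that the
             * netisr framework no longer does it on our behalf. */
            mtx_lock(&Giant);
            /* ... process the queued packet ... */
            mtx_unlock(&Giant);
    }

    static void
    usb_register_netisr(void)
    {
            /* Always queue; never direct-dispatch in caller context. */
            netisr_register(NETISR_USB, usbintr, NULL, NETISR_FORCEQUEUE);
    }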
Reviewed by: bz
MFC after: 1 month
X-MFC note: We can't remove NETISR_MPSAFE from stable/7 for KPI reasons,
but the rest can go back.
are the contents of the forwarded mbuf ever copied into mcopy, so there's
no need to have mcopy, to conditionally look at it, or to conditionally
free it.
Noticed by: Coverity Prevent analysis tool
MFC after: 3 days
structures in IPX/SPX -- primarily, sequence numbering, PCB lists,
and PCBs for IPX raw sockets, IPX datagram sockets, and IPX/SPX.
As such, remove NET_NEEDS_GIANT() for IPX, and remove the
assertion of Giant in the ipxintr() IPX input path.
Note that IPX/SPX is not fully MPSAFE, and that there are some
problems with IPX/SPX locking that will require some further work.
However, it is now safe enough to run in general without the Giant
lock.
MFC after: 4 weeks
portion of IPX/SPX:
- Protect IPX PCB lists with the IPX PCB list mutex, in particular
when calling PCB and PCB list manipulation routines in ipx_pcb.c.
- Protect both IPX PCB state and SPX PCB state using the IPX PCB
mutex.
- Generally annotate locking, as well as adding liberal use of lock
assertions to document locking requirements.
- Where possible, use unlocked reads when reading integer or smaller
sized socket options on SPX sockets.
- De-spl throughout.
Notes:
- spx_input() expects both the list mutex and PCB mutex to be held
on entry, but will release both on return. Because sonewconn() is
called from spx_input(), it may actually drop one PCB lock and
acquire another during generation of a new connection, meaning the
caller is not in a position to unlock the PCB mutex (sketched below).
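Because that convention is asymmetric (locked on entry, unlocked on
return), callers have to be written carefully. A caller-side sketch;
the lock macro names and the lookup helper are illustrative:

    static void
    spx_input_caller(struct mbuf *m)
    {
            struct ipxpcb *ipxp;

            IPX_LIST_LOCK();
            ipxp = spx_lookup_pcb(m);       /* hypothetical helper */
            if (ipxp == NULL) {
                    IPX_LIST_UNLOCK();
                    m_freem(m);
                    return;
            }
            IPX_LOCK(ipxp);
            /* spx_input() returns with both locks released; it may
             * even have traded PCB locks via sonewconn(). */
            spx_input(m, ipxp);
            /* Neither ipxp nor the list may be touched here. */
    }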
MFC after: 3 weeks
When processing socket options against IPX PCBs, generally protect
PCB fields using the IPX PCB mutex. Where possible, use unlocked
reads on integer values to avoid locking overhead.
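Only word-sized fields qualify for the unlocked read, since such a read
cannot tear on supported platforms. A fragment of a ctloutput-style
switch as a sketch; the option and field names are illustrative:

    int val, error;

    switch (sopt->sopt_name) {
    case SO_DEFAULT_HEADERS:                /* illustrative option */
            /* Word-sized field: read without taking the PCB mutex. */
            val = ipxp->ipxp_dpt;           /* illustrative field */
            error = sooptcopyout(sopt, &val, sizeof(val));
            break;
    }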
MFC after: 3 weeks
IPX PCB lists. Add macros to initialize, destroy, lock, unlock,
and assert the mutex. Initialize the mutex when IPX is started.
Add per-IPX PCB mutexes, ipxp_mtx in struct ipxpcb, to protect
per-PCB IPX/SPX state. Add macros to initialize, destroy, lock,
unlock, and assert the mutex. Initialize the mutex when a new
PCB is allocated; destroy it when the PCB is free'd.
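A sketch of the macro pattern this describes, assuming mutex(9); the
global name and macro spellings are illustrative, while ipxp_mtx in
struct ipxpcb follows the text above:

    /* Global list mutex and wrappers. */
    static struct mtx ipx_list_mtx;

    #define IPX_LIST_LOCK_INIT()    mtx_init(&ipx_list_mtx, "ipx_list", \
                                        NULL, MTX_DEF)
    #define IPX_LIST_LOCK()         mtx_lock(&ipx_list_mtx)
    #define IPX_LIST_UNLOCK()       mtx_unlock(&ipx_list_mtx)
    #define IPX_LIST_LOCK_ASSERT()  mtx_assert(&ipx_list_mtx, MA_OWNED)

    /* Per-PCB mutex and wrappers. */
    #define IPX_LOCK_INIT(ipxp)     mtx_init(&(ipxp)->ipxp_mtx, "ipxp", \
                                        NULL, MTX_DEF)
    #define IPX_LOCK_DESTROY(ipxp)  mtx_destroy(&(ipxp)->ipxp_mtx)
    #define IPX_LOCK(ipxp)          mtx_lock(&(ipxp)->ipxp_mtx)
    #define IPX_UNLOCK(ipxp)        mtx_unlock(&(ipxp)->ipxp_mtx)
    #define IPX_LOCK_ASSERT(ipxp)   mtx_assert(&(ipxp)->ipxp_mtx, MA_OWNED)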
MFC after: 2 weeks
whether or not the isr needs to hold Giant when running; Giant-less
operation is also controlled by the setting of debug_mpsafenet
o mark all netisr's except NETISR_IP as needing Giant
o add a GIANT_REQUIRED assertion to the top of netisr's that need Giant
o pickup Giant (when debug_mpsafenet is 1) inside ip_input before
calling up with a packet
o change netisr handling so swi_net runs w/o Giant; instead we grab
Giant before invoking handlers based on whether the handler needs
Giant (see the sketch after this list)
o change netisr handling so that netisr's that are marked MPSAFE may
have multiple instances active at a time
o add netisr statistics for packets dropped because the isr is inactive
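The per-handler Giant logic reduces to something like the following
sketch; the structure and field names are illustrative:

    extern int debug_mpsafenet;

    static void
    netisr_invoke(struct netisr *ni, struct mbuf *m)
    {
            /* Hold Giant unless the handler was registered MPSAFE
             * and Giant-free networking is enabled globally. */
            int giant = !debug_mpsafenet ||
                (ni->ni_flags & NETISR_MPSAFE) == 0;

            if (giant)
                    mtx_lock(&Giant);
            ni->ni_handler(m);
            if (giant)
                    mtx_unlock(&Giant);
    }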
Supported by: FreeBSD Foundation
drain routines are done by swi_net, which allows for better queue control
at some future point. Packets may also be directly dispatched to a netisr
instead of queued; this may be of interest at some installations, but
currently defaults to off.
Reviewed by: hsu, silby, jayanth, sam
Sponsored by: DARPA, NAI Labs
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.
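Two call sites as a sketch of the distinction; the softc fields are
illustrative:

    /* Common case: no shared type, pass NULL. */
    mtx_init(&sc->sc_mtx, "mydev", NULL, MTX_DEF);

    /* Network driver: per-instance name, shared witness type. */
    mtx_init(&sc->sc_mtx, device_get_nameunit(sc->sc_dev),
        MTX_NETWORK_LOCK, MTX_DEF);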
Tested on: i386, alpha, sparc64
before adding/removing packets from the queue. Also, the if_obytes and
if_omcasts fields should only be manipulated under protection of the mutex.
IF_ENQUEUE, IF_PREPEND, and IF_DEQUEUE perform all necessary locking on
the queue. An IF_LOCK macro is provided, as well as the old (mutex-less)
versions of the macros in the form _IF_ENQUEUE, _IF_QFULL, for code which
needs them, but their use is discouraged.
Two new macros are introduced: IF_DRAIN() to drain a queue, and IF_HANDOFF,
which takes care of locking/enqueue, and also statistics updating/start
if necessary.
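A driver-side sketch of IF_HANDOFF replacing the old open-coded
IF_QFULL/IF_ENQUEUE/if_start sequence; the function and error handling
are illustrative:

    static int
    mydev_output(struct ifnet *ifp, struct mbuf *m)
    {
            /* IF_HANDOFF locks the queue, enqueues, updates statistics,
             * and calls if_start if needed; on a full queue it frees
             * the mbuf and returns 0. */
            if (!IF_HANDOFF(&ifp->if_snd, m, ifp))
                    return (ENOBUFS);       /* m already freed */
            return (0);
    }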
include this in all kernels. Declare some const *intrq_present
variables that can be checked by a module prior to using *intrq
to queue data.
Make the if_tun module capable of processing atm, ip, ip6, ipx,
natm and netatalk packets when TUNSIFHEAD is ioctl()d on.
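A module-side sketch of the check, from a tun-style output path
dispatching by address family; the exact variable spelling per protocol
may differ:

    switch (family) {
    case AF_IPX:
            if (!ipxintrq_present) {        /* IPX not compiled in */
                    m_freem(m);
                    return (EAFNOSUPPORT);
            }
            /* Safe to enqueue on ipxintrq and schednetisr(NETISR_IPX). */
            break;
    }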
Review not required by: freebsd-hackers
running IPXRouted -s) between IPX-configured interfaces, it generates
syslog messages "ipx_ctlinput: cmd 15." even if the kernel is compiled
with the IPXPRINTFS=0 and IPX_ERRPRINTFS=0 options.
PR: 6875
Reviewed by: phk
Submitted by: Vladimir A. Jakovenko <vovik@ntu-kpi.kiev.ua>