ethernet frames to a netgraph hook.
Submitted by: "Vitaly V. Belekhov" <vitaly@riss-telecom.ru>
translated to 5.0 by me. man page not yet written.
This node still needs a little work.. don't use yet. Not yet linked into
the build.
and add a sysctl to pppoe to activate nonstandard ethertypes
so that idiot ISPs (apparently in France) who use
equipment from idiot suppliers (rumour says 3com)
who use nonstandard ethertypes can still connect.
"yep, sure we do pppoe, we use a different identifier to that dictated in
the standard, but sure it's pppoe!"
sysctl -w net.graph.stupid_isp=1 enables the changeover.
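Roughly what the knob might look like in the node source (a sketch; the
variable name and the ETHERTYPE_* constants are placeholders, only the
sysctl name comes from the text above):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    /* 0 = standard PPPoE ethertypes, 1 = the vendor's odd ones (sketch). */
    static int ng_pppoe_stupid_isp = 0;

    SYSCTL_DECL(_net_graph);
    SYSCTL_INT(_net_graph, OID_AUTO, stupid_isp, CTLFLAG_RW,
        &ng_pppoe_stupid_isp, 0, "use non-standard PPPoE ethertypes");

    /*
     * The transmit path would then choose the session ethertype with
     * something like (placeholder constant names for illustration):
     *
     *      eh->ether_type = htons(ng_pppoe_stupid_isp ?
     *          ETHERTYPE_PPPOE_STUPID_SESS : ETHERTYPE_PPPOE_SESS);
     */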
packet flow into two unidirectional flows.
Part of a suite of nodes developed for packet flow control.
More to follow as I have time to port them to 5.x or
as others do so. The ipfw node will be the hardest..
Submitted by: "Vitaly V. Belekhov" <vitaly@riss-telecom.ru>
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
similarly, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
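Roughly, the conversion looks like this (lock names other than sched_lock
are arbitrary):

    /* Before */
    mtx_enter(&foo_mtx, MTX_DEF);           /* sleep lock */
    mtx_exit(&foo_mtx, MTX_DEF);
    mtx_enter(&sched_lock, MTX_SPIN);       /* spin lock */
    mtx_exit(&sched_lock, MTX_SPIN);

    /* After */
    mtx_lock(&foo_mtx);
    mtx_unlock(&foo_mtx);
    mtx_lock_spin(&sched_lock);
    mtx_unlock_spin(&sched_lock);

    /* Flags, when still needed, go through the wrappers */
    mtx_lock_flags(&foo_mtx, MTX_QUIET);
    mtx_unlock_flags(&foo_mtx, MTX_QUIET);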
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that the spin locks we do have and use heavily (i.e. sched_lock)
do recurse; therefore, in an effort to reduce function call overhead
on some architectures (such as alpha), we inline recursion for this
case.
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
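That boils down to something like (a sketch of the declarations, not the
exact diff):

    #ifdef WITNESS
    MALLOC_DECLARE(M_WITNESS);                                  /* header */
    MALLOC_DEFINE(M_WITNESS, "witness", "witness structure");   /* witness code */
    #endif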
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
Also try to implement the documented behaviour in socket nodes
so that when there is only one hook, an unaddressed write/send
will DTRT and send the data to that hook.
(e.g. ethernet nodes are persistent until you rip out the hardware)
Use this support in the ethernet and sample nodes.
Add some more abstraction on the 'item's so that node and
hook reference counting can be checked easier.
Slight man page correction.
Make pppoe type dependent on ethernet type.
Clean up node shutdown a little.
Move a mutex from MTX_SPIN to MTX_DEF (oops)
Fix small ref-counting bug.
remove warning on one2many type.
The new method is 'flood' (in addition to the old round-robin)
in which incoming packets are sent to more than one outgoing hook.
(I'm not sure what Rogier is using this for but it seems generally useful
and isn't much extra)
Submitted by: Rogier R. Mulhuijzen (drwilco@drwilco.net )
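A sketch of the flood delivery path, assuming the node keeps an array of
active "many" hooks (the priv->many/numActiveMany bookkeeping names are
made up, not the node's actual fields):

    int i, error = 0;
    struct mbuf *m2;

    for (i = 0; i < priv->numActiveMany; i++) {
            hook_p dest = priv->many[i].hook;       /* hypothetical field */

            /* Last hook gets the original; earlier hooks get copies. */
            if (i == priv->numActiveMany - 1)
                    m2 = m;
            else if ((m2 = m_dup(m, M_DONTWAIT)) == NULL)
                    continue;                       /* copy failed; skip hook */
            NG_SEND_DATA_ONLY(error, dest, m2);
    }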
from a node, but does it via the locking queue, thus ensuring that the
node is locked when its hook is removed.
Add 'deadnode' and 'deadhook' structures for when a node or hook is
invalidated but not yet freed.
This version is functional and is approaching solid..
notice I said APPROACHING. There are many node types I cannot test
I have tested: echo hole ppp socket vjc iface tee bpf async tty
The rest compile and "Look" right. More changes to follow.
DEBUGGING is enabled in this code to help if people have problems.
format version number. (userland programs should not need to be
recompiled when the netgraph kernel internal ABI is changed.)
Also fix modules that don't handle the fact that a caller may not supply
a return message pointer. (benign at the moment because the calling code
checks, but that will change)
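The defensive pattern in a node's rcvmsg method is roughly (a sketch;
rptr and resp are the usual reply-pointer names, not the exact diff):

    /* Hand the reply back only if the caller asked for one. */
    if (rptr != NULL)
            *rptr = resp;
    else if (resp != NULL)
            free(resp, M_NETGRAPH);         /* nobody wants it; don't leak */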
This clears out my outstanding netgraph changes.
There is a netgraph change of design in the offing and this is to some
extent a superset of some of the new functionality and some of the old
functionality that may be removed.
This code works as before, but allows some new features that I want to
work with and evaluate. It is the basis for a version of netgraph
with integral locking for SMP use.
This is running on my test machine with no new problems :-)
with Julian and Archie.
Implement a new ``sizedstring'' parse type for dealing with field pairs
consisting of a uint16_t followed by a data field of that size, and use
this to deal with the data_len and data fields.
Written by: Archie with some input by me
Agreed in principle by: julian
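The wire layout the type describes is simply a length-prefixed blob,
i.e. something like (hypothetical struct, for illustration only):

    struct sized_field {                    /* made-up name */
            u_int16_t       data_len;       /* length of the data that follows */
            u_char          data[0];        /* data_len bytes, not NUL-terminated */
    };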
macros which provide the same functionality and are a bit more
efficient, convert use of CIRCLEQ's in netgraph PPP code to TAILQ's.
Reviewed by: Archie Cobbs <archie@dellroad.org>
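The conversion is mechanical; for example (the element and field names
here are made up):

    #include <sys/queue.h>

    struct frag {
            TAILQ_ENTRY(frag) f_link;       /* was CIRCLEQ_ENTRY(frag) */
    };
    TAILQ_HEAD(fraglist, frag);             /* was CIRCLEQ_HEAD(fraglist, frag) */

    /* Typical operations map one to one, e.g.:
     *   CIRCLEQ_INSERT_TAIL(&q, f, f_link)  ->  TAILQ_INSERT_TAIL(&q, f, f_link)
     *   CIRCLEQ_FOREACH(f, &q, f_link)      ->  TAILQ_FOREACH(f, &q, f_link)
     *   CIRCLEQ_REMOVE(&q, f, f_link)       ->  TAILQ_REMOVE(&q, f, f_link)
     */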
terminated and the data_len field is no longer necessary.
Add ASCII2BINARY and BINARY2ASCII capabilities.
The old format is still understood and dealt with, but can't do
the ASCII2BINARY and BINARY2ASCII stuff.
Approved by: archie
<sys/proc.h> to <sys/systm.h>.
Correctly document the #includes needed in the manpage.
Add one now needed #include of <sys/systm.h>.
Remove the consequent 48 unused #includes of <sys/proc.h>.
of the code in the kernel properly checks for read-onlyness before
writing into an mbuf data area. When that code is fixed, the m_dup()
can go back to being m_copypacket().
Requested by: nsayer
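In practice the change is just (sketch):

    /*
     * m_copypacket() can return a read-only reference to the original
     * cluster data; m_dup() makes a writable deep copy.  Until all
     * consumers check for read-only mbuf data before writing, take
     * the copy.
     */
    m2 = m_dup(m, M_DONTWAIT);      /* was: m2 = m_copypacket(m, M_DONTWAIT); */
    if (m2 == NULL)
            return (ENOBUFS);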
include:
* Mutual exclusion is used instead of spl*(). See mutex(9). (Note: The
alpha port is still in transition and currently uses both.)
* Per-CPU idle processes.
* Interrupts are run in their own separate kernel threads and can be
preempted (i386 only).
Partially contributed by: BSDi (BSD/OS)
Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh
control field compression. The ng_ppp(4) node correctly follows this
rule. However, PPPoE is an exception: when doing PPPoE *all* frames
are sent with address and control field compression.
Alter this node's behavior so that when an outgoing frame is received,
any leading address and control field bytes are removed. This makes
this node compatible with ng_ppp(4).
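The stripping amounts to something like this in the transmit path
(a sketch; 0xff 0x03 are the HDLC address/control bytes):

    /* PPPoE always uses address/control field compression, so drop
     * any leading 0xff 0x03 supplied by the upstream PPP node. */
    if (m->m_len < 2 && (m = m_pullup(m, 2)) == NULL)
            return (ENOBUFS);
    if (mtod(m, u_char *)[0] == 0xff && mtod(m, u_char *)[1] == 0x03)
            m_adj(m, 2);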
- It's worthwhile to use untimeout(9), even though we must still protect
against "false" timeouts, because most of the time it saves having to
handle a dummy timeout event.
- Slight tweaks to the delayed ACK algorithm parameters.
- Fix slowness when operating over fast connections, where the timeout(9)
granularity is on the same order of magnitude as the round trip time.
timeout(9) can happen up to 1 tick early, which was causing receive
ack timeouts to happen too early, causing bogus "lost" packets.
- Increase the local time counter to 64 bits to avoid roll-over.
- Keep statistics on memory allocation failures.
- Add a new option to always include the ack when sending data packets.
Might be useful in high packet loss situations. Might not.
otherwise, the ng_ether.ko KLD will never be unloadable after
all Ethernet interfaces are detached, as it should be, because
of the lingering extra reference.
Submitted by: "Yevmenkin, Maksim N, CSCIO" <myevmenkin@att.com>
netgraph. Eventually we may need to have a separate hook for packets
that already have a source MAC address but for now just drop it in.
Should fix PPPoE.
instead of bumping the recvAck counter by one, pretend that
all outstanding xmit packets are acknowledged, and restart
transmitting anew, with an empty (but halved) transmit window.
Put a lower bound on the adaptive timeout value.
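In code, the recovery described above looks roughly like this (the field
and constant names are guesses at the node's private state, not the
actual diff):

    /* Pretend everything outstanding was acked, then restart with a
     * halved (and now empty) transmit window. */
    a->recvAck = a->xmitSeq;
    a->xmitWin = MAX(a->xmitWin / 2, 1);

    /* Keep the adaptive ack timeout from collapsing to zero. */
    a->ato = MAX(a->ato, PPTP_MIN_TIMEOUT);     /* placeholder constant */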