Allow ``set server open'' to re-open the diagnostic socket.
Handle SIGUSR1 by re-opening the diagnostic socket.
When receiving SIGUSR2 (and in ``set server none''), don't forget the
socket details so that ``set server open'' and SIGUSR1 open it again.
Don't create the diagnostic socket as uid 0!  It's far too dangerous.
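For example (the socket path, password and mask below are illustrative
only):

    # In ppp.conf:
    set server /var/run/ppp/loop "" 0177

    # Later, at the prompt:
    set server none        # close the socket; the details are remembered
    set server open        # re-open it using the remembered details

    # Or equivalently, from a shell:
    kill -USR2 <ppp-pid>   # close the diagnostic socket
    kill -USR1 <ppp-pid>   # re-open it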
Use the larger of the input and output traffic rather than the total
of the two when calculating the MP throughput average for the ``set
autoload'' implementation.
This makes more sense as all links I know of are full-duplex. This
also means that people may need to adjust their autoload settings
as 100% bandwidth is now the theoretical maximum rather than 200%
(but of course, halving the current settings is probably not the
correct answer either!).
This involves a ppp version bump as we need to pass an extra
throughput array through the MP local domain socket.
Base the MP throughput measurement on the cumulative total of all
active links rather than basing it on the total of PROTO_MP traffic.
This fixes a problem whereby Cisco routers send PROTO_IP rather than
PROTO_MP packets when there's only one link in the bundle (hmm, what
a good idea!).
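Roughly, and with made-up names (ppp's real throughput and datalink
structures differ), the combined effect of these two changes is:

    struct link_sample {
        unsigned long in_octets;    /* received this sample period */
        unsigned long out_octets;   /* transmitted this sample period */
    };

    /*
     * The figure fed into the ``set autoload'' rolling average: totals
     * are accumulated over every active link (not just PROTO_MP
     * traffic), and the busier of the two directions is used because
     * the links are full-duplex - a saturated bundle now reads as 100%
     * of its bandwidth, not 200%.
     */
    static unsigned long
    bundle_sample(const struct link_sample *links, int nlinks)
    {
        unsigned long in = 0, out = 0;
        int i;

        for (i = 0; i < nlinks; i++) {
            in += links[i].in_octets;
            out += links[i].out_octets;
        }

        return in > out ? in : out;
    }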
o If the new ``filter-decapsulation'' is enabled, delve into UDP packets
that contain 0xff 0x03 as the first two bytes, and if we recognise it
as PROTO_IP, decapsulate it for the purpose of filter checking.
If we recognise it as PROTO_<anything else>, mention this for logging
purposes only.
This change is aimed at people running PPPoUDP where the UDP traffic is
being sent over another PPP link. It's desirable to have the top level
link connected all the time, but to have the bottom level link capable
of decapsulating the traffic and comparing the payload against the filters,
thus allowing ``set filter dial ...'' to work in tunnelled environments.
The caveat here is that the top ppp cannot employ any compression layers
without making the data unreadable for the bottom ppp. ``disable deflate
pred1 vj'' and ``deny deflate pred1 vj'' are suggested (see the example
below).
This is invaluable for dial-on-demand connections...
In ppp.linkup:

    set log -dns -tcp/ip

and in ppp.linkdown:

    set log +dns +tcp/ip

giving a much better account of why the link came up.
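For the ``filter-decapsulation'' setup described above, a configuration
along these lines might be used (the filter rules are only examples):

    # Bottom-level (carrier) ppp profile: look inside the UDP payload
    # so that the dial filter sees the tunnelled traffic.
    enable filter-decapsulation
    set filter dial 0 deny udp dst eq 53
    set filter dial 1 permit 0/0 0/0

    # Top-level (tunnelled) ppp profile: keep the payload readable for
    # the bottom-level ppp by refusing all compression layers.
    disable deflate pred1 vj
    deny deflate pred1 vj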
The previous method avoided all race conditions, but suffered from
sometimes running out of buffer space if enough clients
were piled up at the same time.
Now, the client pushes the link descriptor, one end of a
socketpair() and the ppp version via sendmsg() at the
server. The server replies with a pid. The client then
transfers any link lock with uu_lock_txfr() and writev()s
the actual link contents. The socketpair is now the only
place we need to have large socket buffers and the bind()ed
socket can keep the default 4k buffer while still handling
around 90 racing clients.
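A rough client-side sketch of this exchange (the structure and function
names are made up and most error handling is omitted; ppp's real code
differs):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical serialised link description; ppp's layout differs. */
    struct linkdata {
        int version;            /* ppp version, checked by the server */
        char name[20];
        /* ... link state, throughput counters, etc ... */
    };

    /*
     * Push a link at the server: pass the link's device descriptor and
     * one end of a socketpair() via SCM_RIGHTS, read the server's pid
     * back over our end of the pair, then writev() the bulky link
     * contents down it so that the bind()ed socket can keep its default
     * 4k buffer.
     */
    static int
    push_link(int server, int link_fd, struct linkdata *data,
              const struct iovec *contents, int niov)
    {
        int sv[2], fds[2], ret = -1;
        struct msghdr msg;
        struct iovec iov;
        struct cmsghdr *cmsg;
        char cbuf[CMSG_SPACE(sizeof fds)];
        pid_t serverpid;

        if (socketpair(AF_LOCAL, SOCK_DGRAM, 0, sv) == -1)
            return -1;

        fds[0] = link_fd;
        fds[1] = sv[1];         /* the end the server keeps */

        iov.iov_base = data;
        iov.iov_len = sizeof *data;
        memset(&msg, 0, sizeof msg);
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof cbuf;
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_len = CMSG_LEN(sizeof fds);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        memcpy(CMSG_DATA(cmsg), fds, sizeof fds);

        /* Send the link data and both descriptors, wait for the server's
         * pid, transfer any uucp lock (uu_lock_txfr(), omitted) and
         * finally push the actual link contents down the socketpair. */
        if (sendmsg(server, &msg, 0) != -1 &&
            read(sv[0], &serverpid, sizeof serverpid) == sizeof serverpid &&
            writev(sv[0], contents, niov) != -1)
            ret = 0;

        close(sv[0]);
        close(sv[1]);
        return ret;
    }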
Previously, ppp attempted to bind() to a local domain tcp socket
based on the peer authname & enddisc. If it succeeded, it listen()ed
and became MP server. If it failed, it connect()ed and became MP
client. The server then select()ed on the descriptor, accept()ed
it and wrote its pid to it then read the link data & link file descriptor,
and finally sent an ack (``!''). The client would read() the server
pid, transfer the link lock to that pid, send the link data & descriptor
and read the ack. It would then close the descriptor and clean up.
There was a race between the bind() and listen() where someone could
attempt to connect() and fail.
This change removes the race. Now ppp makes the RCVBUF big enough on a
socket descriptor and attempts to bind() to a local domain *udp* socket
(same name as before). If it succeeds, it becomes MP server. If it
fails, it sets the SNDBUF and connect()s, becoming MP client. The server
select()s on the descriptor and recvmsg()s the message, insisting on at
least two descriptors (plus the link data). It uses the second descriptor
to write() its pid then read()s an ack (``!''). The client creates a
socketpair() and sendmsg()s the link data, link descriptor and one of
the socketpair descriptors. It then read()s the server pid from the
other socketpair descriptor, transfers any locks and write()s an ack.
Now, there can be no race, and a connect() failure indicates a stale
socket file.
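The server's side of the exchange looks roughly like this sketch (again
with made-up names; the real code also validates the link data and the
ppp version):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>
    #include <unistd.h>

    struct linkdata { int version; char name[20]; /* ... */ };

    /*
     * Receive a link from a client: the recvmsg() must yield the link
     * data plus at least two passed descriptors - the link itself and
     * one end of the client's socketpair().  We identify ourselves by
     * writing our pid down the socketpair and then wait for the
     * client's ``!'' acknowledgement.
     */
    static int
    receive_link(int server, struct linkdata *data, int *link_fd)
    {
        struct msghdr msg;
        struct iovec iov;
        struct cmsghdr *cmsg;
        char cbuf[CMSG_SPACE(sizeof(int) * 2)];
        int fds[2], reply_fd;
        pid_t pid = getpid();
        char ack;

        iov.iov_base = data;
        iov.iov_len = sizeof *data;
        memset(&msg, 0, sizeof msg);
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof cbuf;

        if (recvmsg(server, &msg, 0) == -1)
            return -1;

        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg == NULL || cmsg->cmsg_level != SOL_SOCKET ||
            cmsg->cmsg_type != SCM_RIGHTS ||
            cmsg->cmsg_len < CMSG_LEN(sizeof fds))
            return -1;          /* insist on both descriptors */

        memcpy(fds, CMSG_DATA(cmsg), sizeof fds);
        *link_fd = fds[0];
        reply_fd = fds[1];

        if (write(reply_fd, &pid, sizeof pid) != sizeof pid ||
            read(reply_fd, &ack, 1) != 1 || ack != '!') {
            close(*link_fd);
            close(reply_fd);
            return -1;
        }

        close(reply_fd);
        return 0;
    }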
This also fixes MP ppp over ethernet, where the struct msghdr was being
misconstructed when transferring the control socket descriptor.
Also, if we fail to send the link, don't hang around in a ``session
owner'' state, just do the setsid() and fork() if it's required to
disown a tty.
UDP idea suggested by: Chris Bennet from Mindspring at FreeBSDCon
Urgent packets with ip_tos == IPTOS_LOWDELAY now get precedence over
urgent packets with ip_tos != IPTOS_LOWDELAY and non-urgent packets
with ip_tos == IPTOS_LOWDELAY.
Enhance the ``set urgent'' syntax to allow for urgent UDP
packets as well as urgent TCP packets.
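A sketch of the resulting classification (hypothetical names; the
relative priority of the two middle cases is an assumption - the text
above only guarantees that urgent low-delay traffic beats both):

    #include <sys/types.h>
    #include <netinet/in.h>
    #include <netinet/in_systm.h>
    #include <netinet/ip.h>

    /* Hypothetical queue indices - higher is transmitted first. */
    enum { QUEUE_NORMAL, QUEUE_LOWDELAY, QUEUE_URGENT, QUEUE_URGENT_LOWDELAY };

    /* Assumed helper consulting the ``set urgent'' tcp/udp port lists. */
    int port_is_urgent(int proto, u_short port);

    static int
    output_queue(const struct ip *ip, u_short dstport)
    {
        int urgent = port_is_urgent(ip->ip_p, dstport);
        int lowdelay = (ip->ip_tos & IPTOS_LOWDELAY) != 0;

        if (urgent)
            return lowdelay ? QUEUE_URGENT_LOWDELAY : QUEUE_URGENT;
        return lowdelay ? QUEUE_LOWDELAY : QUEUE_NORMAL;
    }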
o Maintain separate output queues: one for protocol traffic
(LCP/CCP/IPCP), one for urgent IP traffic and one for everything else.
o Add the ``set urgent'' command for adjusting the list of
urgent port numbers. The default urgent ports are 21, 22,
23, 513, 514, 543 and 544 (Ports 80 and 81 have been
removed from the default priority list).
o Increase the buffered packet threshold from 20 to 30.
o Report the number of packets in the IP output queue and the
list of urgent ports under ``show ipcp''.
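For example (illustrative port choices; the ``udp'' form is the syntax
enhancement noted further up):

    set urgent 21 22 23 513 514 543 544   # replace the urgent (tcp) ports
    set urgent udp 514                    # mark a udp port as urgent too
    show ipcp                             # reports the queue lengths and
                                          # the current urgent port lists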
that ppp stays in the foreground.
o Add the -quiet switch to quieten ppp's startup.
o Add the -nat flag and discourage the use of the -alias flag. Both do
the same thing.
o Correct some nat usage strings.
o Change the internal ``alias'' command to ``nat''.
o If we're using RADIUS and the RADIUS mtu is less than our
peer's mru/mrru, reduce our mtu to this value for NetBSD too.
o Make struct throughput's sample period dynamic and tweak the ppp
version number to reflect the extra stuff being passed through
the local domain socket as a result (MP mode).
o Measure the current throughput based on the number of samples actually
taken rather than on the full sample period.
o Keep the throughput statistics persistent while being passed to
another ppp invocation through the local domain socket.
o When showing throughput statistics after the timer has stopped, use
the stopped time for overall calculations, not the current time.
Also show the stopped time and how long the current throughput has
been sampled for.
o Use time() consistently in throughput.c
o Tighten up the ``show bundle'' output.
o Introduce the ``set bandwidth'' command.
o Rewrite the ``set autoload'' command. It now takes three arguments
and works based on a rolling bundle throughput average compared against
the theoretical bundle bandwidth over a given period (read: it's now
functional).
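For example (hypothetical figures; the three ``set autoload'' arguments
are assumed to be the low threshold, the high threshold and the sampling
period):

    # What the link is capable of, assumed to be in bits per second:
    set bandwidth 115200

    # Bring up another link when the rolling bundle average exceeds 80%
    # of the theoretical bundle bandwidth for the given period, and drop
    # one when it falls below 30%:
    set autoload 30 80 30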