numbers in all commands.
If people use hostnames and have dodgy resolvers or try to resolve
the hostname before the link is up, they get what they deserve....
Requested by: ru
don't bother to re-initialise the NCPs. Instead wait for
bundle_LinkClosed() to be called - IFF it actually is called.
By initialising the NCPs at this point, ppp was recursing
back into the fsm_Down() routine for the link and losing
track of the reason that the link was being brought down.
The end result was that ``set reconnect'' would never do
anything.
Patiently pointed out by: ru
if the child's exec() has succeeded or failed by taking advantage
of the fact that both processes share the same memory.
FWIW:
I tried to implement this by doing a pipe(), setting the
write descriptor's close-on-exec flag in the child and writing
errno to the descriptor if the exec() fails. The parent can
then ``if (read()) got errno else exec worked''.
This didn't work though - the child could write() to fd[1] on
exec failure, but the parent got 0 trying to read() from fd[0]!
Is this a bug in execve()?
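For reference, the pipe/close-on-exec trick described above is
normally coded like this (a minimal sketch; spawn() and its error
handling are made up, not ppp's code):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /*
     * The write side of the pipe is marked close-on-exec, so a
     * successful execv() closes it and the parent's read() sees
     * EOF (0).  If the exec fails, the child writes errno down
     * the pipe instead and the parent's read() returns sizeof(int).
     */
    static pid_t
    spawn(char *const argv[])
    {
        int fd[2], err, n;
        pid_t pid;

        if (pipe(fd) == -1)
            return -1;

        switch ((pid = fork())) {
        case -1:
            return -1;
        case 0:                            /* child */
            close(fd[0]);
            fcntl(fd[1], F_SETFD, FD_CLOEXEC);
            execv(argv[0], argv);
            err = errno;                   /* exec failed */
            write(fd[1], &err, sizeof err);
            _exit(127);
        }

        close(fd[1]);                      /* parent */
        n = read(fd[0], &err, sizeof err);
        close(fd[0]);
        if (n == sizeof err) {             /* got an errno => failure */
            errno = err;
            return -1;
        }
        return pid;                        /* EOF => exec succeeded */
    }

The close-on-exec flag is what turns a successful exec into EOF on
the read side; note the parent must close fd[1] before the read(),
otherwise the read() blocks rather than returning 0.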
dropping out of background/foreground/direct mode.
This avoids either having to wait for the redial timer before
exiting or jamming up in select() waiting for something that'll
never happen.
This is invaluable for dial-on-demand connections...
In ppp.linkup:
    set log -dns -tcp/ip
and in ppp.linkdown:
    set log +dns +tcp/ip
giving a much better account of why the link came up.
value.
This has minimal impact here, but if ppp ever needs to frequently
remove timers before they've timed out, it can badly skew the next
item in the timer list without this change.
The correct fix would be to store usecs in `rest' rather than
TICKUNITs, but the math is easier if we just round...
that we adjust that timer's `rest' value (with the current getitimer()
values) before using that to adjust the next item's `rest' value.
After adjusting that value, restart the timer service so that we've
now got the correct setitimer() values.
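For context, the timers live in a delta list where each entry's
`rest' is relative to the entry before it.  Removing the head entry
then goes roughly like this (a simplified sketch; the struct layout,
TICKUNIT value and function name are illustrative, not the real
timer.c):

    #include <sys/time.h>

    #define TICKUNIT 100000                /* illustrative: 100ms ticks */

    struct ptimer {                        /* illustrative layout */
        struct ptimer *next;
        unsigned long rest;                /* ticks relative to prev */
    };

    /*
     * The head's `rest' was correct when the itimer was last loaded,
     * so refresh it from getitimer() (rounding rather than truncating)
     * before folding it into the next entry, then restart the timer
     * service so setitimer() reflects the new head.
     */
    static void
    timer_remove_head(struct ptimer **head)
    {
        struct itimerval itv;
        struct ptimer *t = *head;

        if (t == NULL)
            return;

        getitimer(ITIMER_REAL, &itv);
        t->rest = (itv.it_value.tv_sec * 1000000UL + itv.it_value.tv_usec
                   + TICKUNIT / 2) / TICKUNIT;  /* round, don't truncate */

        if (t->next != NULL)
            t->next->rest += t->rest;      /* keep the deltas correct */
        *head = t->next;

        timerclear(&itv.it_value);
        timerclear(&itv.it_interval);
        if (*head != NULL) {
            itv.it_value.tv_sec = (*head)->rest * TICKUNIT / 1000000UL;
            itv.it_value.tv_usec = (*head)->rest * TICKUNIT % 1000000UL;
        }
        setitimer(ITIMER_REAL, &itv, NULL); /* restart the service */
    }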
I don't claim to own the code and certainly don't want to discourage
people from fixing or updating it.
[I know it's the 29th, but the FREEZE hasn't yet been posted to committers]
passed to libalias. If there's not enough space, things like ftp
PORT commands start failing....
Reported by: Gianmarco Giovannelli <gmarco@giovannelli.it>
4096 - sizeof struct mbuf, and set MAX_MRU and MAX_MTU
back to 2048.
2048 is big enough as an MTU/MRU, but we need to be able
to allocate larger mbufs after reassembling IP fragments.
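Spelled out (the struct below is a trimmed stand-in for ppp's own
struct mbuf, not the kernel's, and MBUFSZ is a hypothetical name for
the allocation unit):

    #include <stddef.h>

    struct mbuf {                  /* trimmed, illustrative layout */
        size_t m_size;             /* allocated data bytes */
        size_t m_offset;           /* current start of data */
        size_t m_len;              /* bytes used */
        struct mbuf *m_next;
    };

    /* 2048 is plenty for a link MTU/MRU... */
    #define MAX_MRU 2048
    #define MAX_MTU 2048

    /* ...but reassembled IP fragments can be bigger, so each
     * allocation takes a full 4k page minus the header overhead.
     */
    #define MBUFSZ  (4096 - sizeof(struct mbuf))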
twice (once for the arg parsing and once to make it a normal character).
Make the man page example consistent.
Reminded by: Bryan Liesner <bleez@netaxs.com>
a running timer. This fixes a problem where a dial is manually
aborted, the hangup script kicks in, and the chat timer ends up
on the timer queue twice (tick tick tick tick *boom*).
method avoided all race conditions, but suffered from
sometimes running out of buffer space if enough clients
were piled up at the same time.
Now, the client pushes the link descriptor, one end of a
socketpair() and the ppp version to the server via sendmsg().
The server replies with a pid. The client then
transfers any link lock with uu_lock_txfr() and writev()s
the actual link contents. The socketpair is now the only
place we need to have large socket buffers and the bind()ed
socket can keep the default 4k buffer while still handling
around 90 racing clients.
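A rough sketch of the client's half of that exchange (send_link()
and its arguments are invented names; error paths trimmed):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /*
     * Pass the link descriptor and one end of a socketpair() to the
     * server with SCM_RIGHTS, then read the server's pid back over
     * our end of the pair.
     */
    static int
    send_link(int server, int linkfd, void *data, size_t len, pid_t *pid)
    {
        struct msghdr msg;
        struct iovec iov;
        struct cmsghdr *cmsg;
        union {
            struct cmsghdr hdr;            /* for alignment */
            char buf[CMSG_SPACE(sizeof(int) * 2)];
        } cmsgbuf;
        int sv[2], fds[2];

        if (socketpair(AF_LOCAL, SOCK_STREAM, 0, sv) == -1)
            return -1;

        fds[0] = linkfd;                   /* the link itself */
        fds[1] = sv[1];                    /* the server's reply channel */

        memset(&msg, 0, sizeof msg);
        iov.iov_base = data;               /* link data, version etc. */
        iov.iov_len = len;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsgbuf.buf;
        msg.msg_controllen = sizeof cmsgbuf.buf;

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_len = CMSG_LEN(sizeof fds);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        memcpy(CMSG_DATA(cmsg), fds, sizeof fds);

        if (sendmsg(server, &msg, 0) == -1)
            return -1;
        close(sv[1]);                      /* the server holds a copy now */

        if (read(sv[0], pid, sizeof *pid) != sizeof *pid)
            return -1;
        /* ... uu_lock_txfr() the lock to *pid, writev() the link
         * contents down sv[0] and write the ack ... */
        return sv[0];
    }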
length field rather than the one byte message length field embedded
in the packet. This steps slightly outside the protocol boundaries,
but should not cause any problems.
Limitation noted by: Simon Winwood <simon@winwood.org>
Previously, ppp attempted to bind() to a local domain tcp socket
based on the peer authname & enddisc. If it succeeded, it listen()ed
and became MP server. If it failed, it connect()ed and became MP
client. The server then select()ed on the descriptor, accept()ed
it and wrote its pid to it, then read the link data & link file descriptor,
and finally sent an ack (``!''). The client would read() the server
pid, transfer the link lock to that pid, send the link data & descriptor
and read the ack. It would then close the descriptor and clean up.
There was a race between the bind() and listen() where someone could
attempt to connect() and fail.
This change removes the race. Now ppp makes the RCVBUF big enough on a
socket descriptor and attempts to bind() to a local domain *udp* socket
(same name as before). If it succeeds, it becomes MP server. If it
fails, it sets the SNDBUF and connect()s, becoming MP client. The server
select()s on the descriptor and recvmsg()s the message, insisting on at
least two descriptors (plus the link data). It uses the second descriptor
to write() its pid then read()s an ack (``!''). The client creates a
socketpair() and sendmsg()s the link data, link descriptor and one of
the socketpair descriptors. It then read()s the server pid from the
other socketpair descriptor, transfers any locks and write()s an ack.
Now, there can be no race, and a connect() failure indicates a stale
socket file.
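As a rough sketch of the server's half of that exchange (recv_link()
is an invented name; real code must also validate the link data and
handle partial I/O):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /*
     * recvmsg() the link data, insisting on the two descriptors,
     * then use the second one to send our pid and wait for the ack.
     */
    static int
    recv_link(int server, void *data, size_t len, int *linkfd)
    {
        struct msghdr msg;
        struct iovec iov;
        struct cmsghdr *cmsg;
        union {
            struct cmsghdr hdr;            /* for alignment */
            char buf[CMSG_SPACE(sizeof(int) * 2)];
        } cmsgbuf;
        pid_t pid = getpid();
        int fds[2];
        char ack;

        memset(&msg, 0, sizeof msg);
        iov.iov_base = data;
        iov.iov_len = len;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsgbuf.buf;
        msg.msg_controllen = sizeof cmsgbuf.buf;

        if (recvmsg(server, &msg, 0) == -1)
            return -1;

        /* Insist on both descriptors: the link and our reply channel */
        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg == NULL || cmsg->cmsg_level != SOL_SOCKET ||
            cmsg->cmsg_type != SCM_RIGHTS ||
            cmsg->cmsg_len < CMSG_LEN(sizeof fds))
            return -1;
        memcpy(fds, CMSG_DATA(cmsg), sizeof fds);
        *linkfd = fds[0];

        write(fds[1], &pid, sizeof pid);   /* tell the client our pid */
        if (read(fds[1], &ack, 1) != 1 || ack != '!')
            return -1;                     /* client gave up */
        close(fds[1]);
        return 0;
    }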
This also fixes MP ppp over ethernet, where the struct msghdr was being
misconstructed when transferring the control socket descriptor.
Also, if we fail to send the link, don't hang around in a ``session
owner'' state, just do the setsid() and fork() if it's required to
disown a tty.
UDP idea suggested by: Chris Bennet from Mindspring at FreeBSDCon
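Disowning the tty as described amounts to roughly this (a sketch of
the generic technique, not ppp's exact code):

    #include <stdlib.h>
    #include <unistd.h>

    /*
     * fork() first so we're not a process group leader (setsid()
     * fails for group leaders), let the parent exit, then setsid()
     * in the child to start a session with no controlling terminal.
     */
    static void
    disown_tty(void)
    {
        switch (fork()) {
        case -1:
            exit(1);                  /* illustrative error handling */
        case 0:
            setsid();                 /* child: new session, no ctty */
            break;
        default:
            _exit(0);                 /* parent goes away */
        }
    }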
inserting a new item. Without this, it's possible to
mis-insert quite badly... but only by as much as the load of
the first item, which is almost always 1 second.
Initialise the timer service with `restart' set if we're inserting
at the start of the list.
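Schematically, the insert walks the delta list consuming the new
timer's `rest' (after the head's `rest' has been refreshed from
getitimer(), as in the removal sketch above), makes the displaced
entry relative to the new one, and restarts the service if the new
timer lands at the front.  Names and layout are again illustrative:

    #include <stddef.h>

    struct ptimer {                   /* illustrative layout */
        struct ptimer *next;
        unsigned long rest;           /* ticks relative to prev entry */
    };

    extern void timer_restart(void);  /* hypothetical: re-arm itimer */

    static void
    timer_insert(struct ptimer **head, struct ptimer *nt)
    {
        struct ptimer **pt = head;

        while (*pt != NULL && (*pt)->rest <= nt->rest) {
            nt->rest -= (*pt)->rest;  /* we fire after this entry */
            pt = &(*pt)->next;
        }
        if (*pt != NULL)
            (*pt)->rest -= nt->rest;  /* follower is now relative to us */
        nt->next = *pt;
        *pt = nt;

        if (pt == head)               /* new head: reload the itimer */
            timer_restart();
    }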
doing a HangupDone(). The HangupDone() may trigger
bundle_CleanDatalinks(), and if so, the bogus
UpdateSet() ends up select()ing on a closed
descriptor...
Change the main `do/while' loop to a `for' loop so
that any `continue's do the bundle_CleanDatalinks()
& bundle_IsDead() bit.
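The difference matters because a `continue' in a do/while jumps
straight to the while() test, whereas in a for loop it first
executes the third clause.  Schematically (DoLoop() is a made-up
stand-in for the loop body, not ppp's real main.c):

    struct bundle;
    extern int bundle_IsDead(struct bundle *);
    extern void bundle_CleanDatalinks(struct bundle *);
    extern int DoLoop(struct bundle *);    /* hypothetical body */

    void
    MainLoop(struct bundle *bundle)
    {
        /*
         * With the cleanup in the third for clause, it runs on every
         * iteration, `continue' included, and bundle_IsDead() is
         * re-tested immediately afterwards.  In the old do/while, a
         * `continue' skipped straight to the while() test.
         */
        for (; !bundle_IsDead(bundle); bundle_CleanDatalinks(bundle)) {
            if (DoLoop(bundle) == 0)
                continue;                  /* still cleans up & re-tests */
            /* ... handle the active descriptors ... */
        }
    }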