size we receive here should fit into the receive buffer. Unfortunately,
there's no 100% foolproof way to distinguish a ridiculously large record
size that a client actually meant to send us from a ridiculously large
record size that was sent as a spoof attempt.
The one value that we can positively identify as bogus is zero. A
zero-sized record makes absolutely no sense, and sending an endless
supply of zeroes will cause the server to loop forever trying to
fill its receive buffer.
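To make that concrete, here is a minimal sketch of the kind of zero-size
check being described, written against the usual XDR record-marking layout
(a 32-bit header word whose high bit is the last-fragment flag and whose
remaining bits give the fragment length). The function name and the
LAST_FRAG macro are illustrative, not the library's actual code:

    #include <stdint.h>

    #define LAST_FRAG ((uint32_t)1 << 31)   /* high bit of the record mark */

    /*
     * Sketch: validate a record-marking header that has already been
     * read from the stream and converted to host byte order.  A zero
     * fragment length is the one value we can reject outright.
     */
    static int
    fragment_header_ok(uint32_t header, uint32_t *lenp)
    {
            uint32_t len = header & ~LAST_FRAG;

            if (len == 0)           /* zero-sized record: definitely bogus */
                    return (0);
            *lenp = len;
            return (1);
    }

Ridiculously large lengths are deliberately left alone here, for the reason
given above: we cannot tell them apart from what the client really meant to
send.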
Note that the changes made to readtcp() make it okay to revert this
sanity test, since the deadlock case where a client can keep the server
occupied forever in the readtcp() select() loop can no longer happen.
This solution is not ideal, but it is relatively easy to implement. The
ideal solution would be to rearrange the way dispatching is handled so
that the select() loop in readtcp() can be eliminated, but that is much
harder to do. I do plan to implement the complete solution eventually,
but in the meantime I don't want to leave the RPC library totally
vulnerable.
Thank you very much Sun, may I have another.
uses readtcp() to gather data from the network; readtcp() uses select(),
with a timeout of 35 seconds. The problem with this is that if you
connect to a TCP server, send two bytes of data, then just pause, the
server will remain blocked in readtcp() for up to 35 seconds, which is
sort of a long time. If you keep doing this every 35 seconds, you can
keep the server occupied indefinitely.
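For illustration, the shape of the loop being described is roughly the
following. This is a simplified sketch, not the actual svc_tcp.c code; the
function name and structure are made up:

    #include <errno.h>
    #include <sys/select.h>
    #include <sys/time.h>
    #include <unistd.h>

    /*
     * Simplified sketch of the vulnerable pattern: select() on only
     * this connection's socket with a fixed 35-second timeout.  While
     * the server is parked here, nothing else gets serviced, so a
     * client that sends a couple of bytes and then goes quiet ties the
     * process up for the full timeout, over and over.
     */
    static int
    read_with_timeout(int sock, void *buf, int len)
    {
            fd_set readfds;
            struct timeval tv;
            int n;

            for (;;) {
                    FD_ZERO(&readfds);
                    FD_SET(sock, &readfds);
                    tv.tv_sec = 35;
                    tv.tv_usec = 0;
                    n = select(sock + 1, &readfds, NULL, NULL, &tv);
                    if (n == -1 && errno == EINTR)
                            continue;
                    if (n <= 0)
                            return (-1);    /* error or 35-second timeout */
                    return (read(sock, buf, len));
            }
    }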
To fix this, I modified readtcp() (and its cousin, readunix() in svc_unix.c)
to monitor all service transport handles instead of just the current socket.
This allows the server to keep handling new connections that arrive while
readtcp() is running. This prevents one client from potentially monopolizing
a server.
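A rough sketch of that approach, using the public svc_fdset and
svc_getreqset() interfaces from <rpc/rpc.h>; the committed readtcp()
differs in its details, and the timeout bookkeeping is omitted here:

    #include <sys/select.h>
    #include <rpc/rpc.h>

    /*
     * Sketch: while waiting for more data on this connection, also
     * watch every registered transport handle.  If some other
     * transport becomes ready first, dispatch its request instead of
     * sitting idle, then go back to waiting for our own socket.
     */
    static int
    wait_for_data(int sock)
    {
            fd_set readfds;
            struct timeval tv;

            for (;;) {
                    readfds = svc_fdset;            /* all registered transports */
                    FD_SET(sock, &readfds);
                    tv.tv_sec = 35;
                    tv.tv_usec = 0;
                    if (select(FD_SETSIZE, &readfds, NULL, NULL, &tv) <= 0)
                            return (-1);            /* error or timeout */
                    if (FD_ISSET(sock, &readfds))
                            return (0);             /* our data has arrived */
                    svc_getreqset(&readfds);        /* service other connections */
            }
    }

The key difference from the original loop is that a stalled client no longer
prevents requests on other descriptors from being handled.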
Also, while I was here, I fixed a bug in the timeout calculations.
Someone had attempted to adjust the timeout so that if select() returned
EINTR and the loop was restarted, the timeout would be reduced by the
time already spent waiting; rather than waiting another full 35 seconds,
you could never wait more than 35 seconds total. Unfortunately, the
calculation was wrong, and the timeout could expire much sooner than
35 seconds.
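One way to get that bookkeeping right is sketched below: remember when the
wait started and, on each EINTR restart, hand select() only the time
remaining out of the 35-second budget. The helper name and its arguments
are invented for the example; this is not the actual patch:

    #include <errno.h>
    #include <sys/select.h>
    #include <sys/time.h>

    /*
     * Sketch: bound the total wait at 35 seconds even across EINTR
     * restarts.  'tmpl' is the set of descriptors to watch; it is
     * copied on every pass because select() modifies its argument.
     */
    static int
    select_bounded(int nfds, const fd_set *tmpl, fd_set *readfds)
    {
            struct timeval start, now, elapsed, limit, tv;
            int n;

            limit.tv_sec = 35;
            limit.tv_usec = 0;
            gettimeofday(&start, NULL);
            for (;;) {
                    gettimeofday(&now, NULL);
                    timersub(&now, &start, &elapsed);
                    if (!timercmp(&elapsed, &limit, <))
                            return (0);              /* budget used up: timeout */
                    timersub(&limit, &elapsed, &tv); /* time remaining */
                    *readfds = *tmpl;
                    n = select(nfds, readfds, NULL, NULL, &tv);
                    if (n == -1 && errno == EINTR)
                            continue;                /* restart with less time */
                    return (n);
            }
    }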
recently in BUGTRAQ. The set_input_fragment() routine in the XDR
record-marking code blindly trusts that the first four bytes it sees will
in fact be an actual record header and that the specified size will be sane. In
fact, if you just telnet to a listening port of an RPC service and send a
few carriage returns, set_input_fragment() will obtain a ridiculously large
record size and sit there for a long time trying to read from the network.
A sanity test is required: if the record size is larger than the receive
buffer, punt.
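A sketch of such a test on the decoded record-marking header follows; the
names and the LAST_FRAG macro are illustrative, not the actual
set_input_fragment() code:

    #include <stddef.h>
    #include <stdint.h>

    #define LAST_FRAG ((uint32_t)1 << 31)   /* high bit of the record mark */

    /*
     * Sketch: reject a fragment whose claimed length could never fit
     * in our receive buffer, rather than looping trying to read it.
     */
    static int
    fragment_fits(uint32_t header, size_t recvsize)
    {
            uint32_t len = header & ~LAST_FRAG;

            return (len <= recvsize);
    }

A caller that gets zero back here would punt, i.e. treat the stream as
broken rather than try to fill the buffer.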
recently in BUGTRAQ. If a stream-oriented transport fails to properly decode
an RPC message header structure where there should be one, it should mark
the stream as dead so that the connection will be dropped.
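In terms of the standard SunRPC types, the idea looks roughly like this;
the conn_data structure below is invented for the example, and the real
per-connection bookkeeping in the library is laid out differently:

    #include <rpc/rpc.h>

    /*
     * Sketch: if xdr_callmsg() cannot decode an RPC call header at the
     * point in the stream where one is expected, record that the
     * stream has died so the connection gets dropped instead of being
     * read from (or replied to) again.
     */
    struct conn_data {                      /* illustrative only */
            enum xprt_stat strm_stat;       /* connection state */
            unsigned long x_id;             /* last transaction id seen */
            XDR xdrs;                       /* decode stream */
    };

    static bool_t
    recv_call(struct conn_data *cd, struct rpc_msg *msg)
    {
            if (xdr_callmsg(&cd->xdrs, msg)) {
                    cd->x_id = msg->rm_xid;
                    return (TRUE);
            }
            /* garbage where a header should be: mark the stream dead */
            cd->strm_stat = XPRT_DIED;
            return (FALSE);
    }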
Use rpcgen's -C option, although using it for non-headers breaks K&R
support. A local copy of yp.h is built to avoid adding
-I/usr/include/rpcsvc to CFLAGS. This version of yp.h differed from
<rpcsvc/yp.h> only in not declaring prototypes.
Fixed style bugs.
but also assumes that they are 32 bits. This is one place where I don't
think it is appropriate to change 'long' to 'int'. I don't see why the
code couldn't be fixed so that using natural long variables does the
right thing. It's spaghetti code, so it'll take some effort. Obviously
NetBSD thought so too, because they changed 'long' to 'int32_t' etc.
and left it at that. As a temporary measure, FreeBSD/Alpha can use the
NetBSD code and put this on the list of things to fix.
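A small illustration of the mismatch, under the assumption that the code
in question moves 32-bit wire quantities through variables it declares as
'long'; the functions here are made up, not taken from the source:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /*
     * On i386, long is 4 bytes and this works by accident.  On
     * FreeBSD/Alpha, long is 8 bytes, so both the copy and the cursor
     * advance are wrong for a field that is 32 bits on the wire.
     */
    static size_t
    put_value_old(unsigned char *buf, long value)
    {
            memcpy(buf, &value, sizeof(value));     /* 8 bytes on Alpha */
            return (sizeof(value));
    }

    /*
     * The NetBSD-style fix: spell the type as int32_t so the size is
     * 32 bits everywhere, regardless of what 'long' happens to be.
     */
    static size_t
    put_value_fixed(unsigned char *buf, int32_t value)
    {
            memcpy(buf, &value, sizeof(value));     /* always 4 bytes */
            return (sizeof(value));
    }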
One bug was relatively harmless (select()'s timeout had an uninitialized
tv_usec); about the other I'm not so sure (the code neglected to catch
select() returning less than zero). Both of these were irrelevant on
kernels with poll().
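For reference, the corrected usage amounts to something like the following
generic sketch, not the actual patch:

    #include <sys/select.h>
    #include <sys/time.h>

    /*
     * Sketch of both fixes: initialize every field of the timeout, and
     * treat a negative return from select() as an error rather than
     * falling through as if a descriptor were ready.
     */
    static int
    wait_readable(int fd, int seconds)
    {
            fd_set readfds;
            struct timeval tv;
            int n;

            FD_ZERO(&readfds);
            FD_SET(fd, &readfds);
            tv.tv_sec = seconds;
            tv.tv_usec = 0;                 /* previously left uninitialized */
            n = select(fd + 1, &readfds, NULL, NULL, &tv);
            if (n < 0)                      /* previously not checked */
                    return (-1);
            return (n > 0 && FD_ISSET(fd, &readfds));
    }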
chunks of res_comp.c and replacing it with chunks of bind-8.1.1's resolver
code. (There are no interface changes, though.)
The other parts are related to better bounds checking.