is allocated or not, rather than keeping a count and attempting to
know whether it is in use. POSIX says that once a key is deleted, using the
key again results in undefined behaviour.
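For illustration, the scheme might look roughly like this (a sketch only;
the names are made up and this is not the committed libc_r source):

#include <errno.h>
#include <limits.h>
#include <pthread.h>

/*
 * Each slot records only whether it has been allocated.  No use count
 * is needed, because POSIX already makes any use of a deleted key
 * undefined behaviour.
 */
struct key_entry {
    int allocated;
    void (*destructor)(void *);
};

static struct key_entry key_table[PTHREAD_KEYS_MAX];

int
sketch_key_create(pthread_key_t *key, void (*destructor)(void *))
{
    int i;

    for (i = 0; i < PTHREAD_KEYS_MAX; i++) {
        if (!key_table[i].allocated) {
            key_table[i].allocated = 1;
            key_table[i].destructor = destructor;
            *key = (pthread_key_t)i;
            return (0);
        }
    }
    return (EAGAIN);
}

int
sketch_key_delete(pthread_key_t key)
{
    if ((int)key >= PTHREAD_KEYS_MAX || !key_table[key].allocated)
        return (EINVAL);
    key_table[key].allocated = 0;   /* the slot may be reused at once */
    return (0);
}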
available and the kernel MIB setting is zero.
Return the result from getpagesize() if the p1003_1b.pagesize MIB
value is zero.
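A rough sketch of the fallback described above (illustrative, not the
committed code):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <unistd.h>

/*
 * Trust the p1003_1b.pagesize MIB when it is set; fall back to
 * getpagesize() when it reports zero or is unavailable.
 */
static long
query_pagesize(void)
{
    int pagesize;
    size_t len = sizeof(pagesize);

    if (sysctlbyname("p1003_1b.pagesize", &pagesize, &len, NULL, 0) == -1 ||
        pagesize == 0)
        return ((long)getpagesize());
    return ((long)pagesize);
}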
Suggested by: Joerg Schilling <schilling@fokus.gmd.de>
Here are some examples to avoid confusion. They assume the logged
host's domain is "spec.co.jp". Each example hostname is longer than
the UT_HOSTNAMELEN value (see the sketch after the examples).
1) turbo.tama.spec.co.jp: 192.19.0.2 -> turbo.tama
2) turbo.tama.foo.co.jp : 192.19.0.2 -> 192.19.0.2
3) specgw.spec.co.jp : 202.32.13.1 -> specgw
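For illustration, the rule the examples show might be sketched like this
(function and parameter names are made up, not the committed code):

#include <stdio.h>
#include <string.h>

/*
 * A host inside the local domain is logged by its name with the
 * domain stripped; anything else falls back to the numeric address.
 */
static void
utmp_host(char *out, size_t outlen, const char *fqdn,
    const char *domain, const char *ipaddr)
{
    size_t flen = strlen(fqdn), dlen = strlen(domain);

    if (flen > dlen + 1 && fqdn[flen - dlen - 1] == '.' &&
        strcmp(fqdn + flen - dlen, domain) == 0)
        /* e.g. turbo.tama.spec.co.jp -> turbo.tama */
        snprintf(out, outlen, "%.*s", (int)(flen - dlen - 1), fqdn);
    else
        /* e.g. turbo.tama.foo.co.jp -> 192.19.0.2 */
        snprintf(out, outlen, "%s", ipaddr);
}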
Submitted by: Atsushi Murai <amurai@spec.co.jp>
point to it rather than libscrypt.
This was how it was done before libscrypt was added. This should
stop more people from getting burnt by the /usr/lib -> /usr/lib/aout
transition, and likewise when the ELF libs come online.
Move a.out libraries to /usr/lib/aout to make space for ELF libs.
Make rtld use /usr/lib/aout as the default library path.
Make ldconfig reject /usr/lib as an a.out library path.
Fix various Makefiles for LIBDIR!=/usr/lib breakage.
After a make world & reboot, this gives a system that no
longer uses /usr/lib/*; in fact, one could remove all the old
libraries there, as they are no longer used.
We are getting close to an ELF make world, but I'll let this
all settle for a week or two...
written without returning to the caller. This only occurs on pipes
where either the number of bytes written is greater than the pipe
buffer size, or there is insufficient space in the pipe buffer because
the reader is reading more slowly than the writer is writing.
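A minimal userland demonstration of that blocking case (illustrative
only, not the commit's kernel code):

#include <limits.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    int fds[2];
    static char big[PIPE_BUF * 256];    /* comfortably exceeds the pipe buffer */
    char sink[4096];

    if (pipe(fds) == -1)
        return (1);
    memset(big, 'x', sizeof(big));
    if (fork() == 0) {                  /* slow reader */
        close(fds[1]);
        while (read(fds[0], sink, sizeof(sink)) > 0)
            usleep(1000);               /* writer blocks while we lag */
        _exit(0);
    }
    close(fds[0]);
    write(fds[1], big, sizeof(big));    /* blocks until the reader frees space */
    close(fds[1]);
    wait(NULL);
    return (0);
}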
size we receive here should fit into the receive buffer. Unfortunately,
there's no 100% foolproof way to distinguish a ridiculously large record
size that a client actually meant to send us from a ridiculously large
record size that was sent as a spoof attempt.
The one value that we can positively identify as bogus is zero. A
zero-sized record makes absolutely no sense, and sending an endless
supply of zeroes will cause the server to loop forever trying to
fill its receive buffer.
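The check itself is tiny; a sketch (not the actual xdr_rec.c source):

#include <arpa/inet.h>
#include <stdint.h>

#define LAST_FRAG ((uint32_t)1 << 31)   /* high bit marks the last fragment */

/* A record header whose length bits are all zero is rejected outright. */
static int
record_len_nonzero(uint32_t header_net)
{
    return ((ntohl(header_net) & ~LAST_FRAG) != 0);
}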
Note that the changes made to readtcp() make it okay to revert this
sanity test since the deadlock case where a client can keep the server
occupied forever in the readtcp() select() loop can't happen anymore.
This solution is not ideal, but is relatively easy to implement. The
ideal solution would be to re-arrange the way dispatching is handled
so that the select() loop in readtcp() can be eliminated, but this is
difficult to implement. I do plan to implement the complete solution
eventually but in the meantime I don't want to leave the RPC library
totally vulnerable.
Thank you very much Sun, may I have another.
uses readtcp() to gather data from the network; readtcp() uses select(),
with a timeout of 35 seconds. The problem with this is that if you
connect to a TCP server, send two bytes of data, then just pause, the
server will remain blocked in readtcp() for up to 35 seconds, which is
sort of a long time. If you keep doing this every 35 seconds, you can
keep the server occupied indefinitely.
To fix this, I modified readtcp() (and its cousin, readunix() in svc_unix.c)
to monitor all service transport handles instead of just the current socket.
This allows the server to keep handling new connections that arrive while
readtcp() is running. This prevents one client from potentially monopolizing
a server.
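Roughly, the idea looks like this (an illustrative sketch, not the
library code; 'allfds' and 'maxfd' stand in for the RPC library's
svc_fdset bookkeeping):

#include <sys/select.h>
#include <sys/time.h>
#include <errno.h>

static int
wait_readable(int sock, fd_set *allfds, int maxfd, int timeout_secs)
{
    fd_set readfds;
    struct timeval tv;

    if (maxfd < sock)
        maxfd = sock;
    for (;;) {
        readfds = *allfds;              /* watch every transport handle */
        FD_SET(sock, &readfds);
        tv.tv_sec = timeout_secs;       /* 35 seconds in the real code */
        tv.tv_usec = 0;
        switch (select(maxfd + 1, &readfds, NULL, NULL, &tv)) {
        case -1:
            if (errno == EINTR)
                continue;               /* see the timeout sketch below */
            return (-1);
        case 0:
            return (-1);                /* timed out: give up on this client */
        default:
            if (FD_ISSET(sock, &readfds))
                return (0);             /* data on our socket: go read it */
            return (1);                 /* another handle is ready: dispatch it */
        }
    }
}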
Also, while I was here, I fixed a bug in the timeout calculations. Someone
attempted to adjust the timeout so that if select() returned EINTR and the
loop was restarted, the timeout would be reduced so that rather than waiting
for another 35 seconds, you could never wait for more than 35 seconds total.
Unfortunately, the calculation was wrong, and the timeout could expire much
sooner than 35 seconds.
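A sketch of the corrected bookkeeping (illustrative, not the committed
code): keep an absolute deadline and recompute the remaining time after
every EINTR, so restarts neither extend the total wait past 35 seconds
nor, as with the bug being fixed, cut it short.

#include <sys/select.h>
#include <sys/time.h>
#include <errno.h>

static int
select_deadline(int maxfd, const fd_set *watch, int total_secs)
{
    struct timeval now, deadline, tv;
    fd_set readfds;
    int n;

    gettimeofday(&now, NULL);
    deadline = now;
    deadline.tv_sec += total_secs;

    for (;;) {
        readfds = *watch;                   /* select() clobbers the set */
        gettimeofday(&now, NULL);
        timersub(&deadline, &now, &tv);     /* time left in the budget */
        if (tv.tv_sec < 0)
            return (0);                     /* budget exhausted: timeout */
        n = select(maxfd + 1, &readfds, NULL, NULL, &tv);
        if (n >= 0 || errno != EINTR)
            return (n);
        /* EINTR: loop and recompute the remaining time. */
    }
}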
recently in BUGTRAQ. The set_input_fragment() routine in the XDR record
marking code blindly trusts that the first two bytes it sees will in fact
be an actual record header and that the specified size will be sane. In
fact, if you just telnet to a listening port of an RPC service and send a
few carriage returns, set_input_fragment() will obtain a ridiculously large
record size and sit there for a long time trying to read from the network.
A sanity test is required: if the record size is larger than the receive
buffer, punt.
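A sketch of that test (illustrative; 'recvsize' stands in for the
stream's receive buffer size, and the high bit of the header is the
last-fragment flag):

#include <arpa/inet.h>
#include <stdint.h>

/* Punt on any record claiming to be larger than the buffer could hold. */
static int
record_fits(uint32_t header_net, unsigned long recvsize)
{
    uint32_t len = ntohl(header_net) & ~((uint32_t)1 << 31);

    return ((unsigned long)len <= recvsize);
}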
recently in BUGTRAQ. If a stream oriented transport fails to properly decode
an RPC message header structure where there should be one, it should mark
the stream as dead so that the connection will be dropped.
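Sketched out, the receive path change looks something like this
(modeled loosely on the svc_tcp.c structure; 'struct conn_data' is
made up for illustration):

#include <rpc/rpc.h>

struct conn_data {
    enum xprt_stat strm_stat;   /* per-connection stream state */
};

static bool_t
sketch_recv(struct conn_data *cd, XDR *xdrs, struct rpc_msg *msg)
{
    if (xdr_callmsg(xdrs, msg))
        return (TRUE);
    cd->strm_stat = XPRT_DIED;  /* decode failed: drop the connection */
    return (FALSE);
}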