to be clever by avoiding the 'check all domains in the search list'
cycle in certain cases, but this would lose if handed a name like
"foo.ctr" which refers to an FQDN of "foo.ctr.columbia.edu". If
"columbia.edu" is in the search list in /etc/resolv.conf then the
DNS lookup code should resolve it, but it didn't.
(a.k.a. /var/yp/Makefile.dist) refers to an obsolete usage of the
-m option of rpc.yppasswdd. That functionality has since been taken
over by the -t option; -m is used for a different purpose now.
PR: 7279
Reviewed by: phk
Submitted by: Amakawa Shuhei <amakawa@nebula.sf.t.u-tokyo.ac.jp>
a buffer overflow, but might negatively impact those hosts that have
enough aliases to fill MAXHOSTNAMELEN * 2 characters in them.
Good candidate for merging back into -stable. Lightly tested by me, but
it came from OpenBSD a while ago.
Obtained from: OpenBSD
I will not commit late at night. I will not commit late at night.
I swear it's been Monday all week for me.
Apply proper fix for services target submitted by Andre Albsmeier
<andre.albsmeier@mchp.siemens.de>. Sorry for botching this last
time, Andre. (Could have been worse: at least I didn't break the build.)
/etc/services entries with any protocol instead of just udp and tcp.
Rather than having the awk script explicitly search for 'udp' or 'tcp'
in the second field using index(), use split() to break up the field
at the '/' character if it exists, which extracts the protocol from
the field no matter what it is.
PR: 2206
underlying database code works. When dealing with first/next queries, you
have the notion of a database 'cursor,' which is essentially a file pointer
for the database. To select the first entry, you do a fetch with the
R_FIRST flag set, then you can use the R_NEXT flag to enumerate the other
entries in the database. Unfortunately, doing a direct fetch with no flag
does _not_ set the 'cursor,' so you can't do a direct fetch and then
enumerate the table from there.
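
To make the cursor behavior concrete, here is a minimal sketch against
the Berkeley DB 1.85 API from <db.h>; the function and map path are
hypothetical, not the actual ypserv code:

    #include <sys/types.h>
    #include <limits.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <db.h>

    /* Walk an entire map. A direct (dbp->get)() fetch would NOT position
     * the cursor, so enumeration has to start with R_FIRST and continue
     * with R_NEXT. */
    int
    dump_map(const char *path)    /* e.g. a map file under /var/yp/<domain> */
    {
        DB *dbp;
        DBT key, data;
        int rv;

        if ((dbp = dbopen(path, O_RDONLY, 0, DB_HASH, NULL)) == NULL)
            return (-1);

        for (rv = (dbp->seq)(dbp, &key, &data, R_FIRST); rv == 0;
            rv = (dbp->seq)(dbp, &key, &data, R_NEXT))
            printf("%.*s %.*s\n", (int)key.size, (char *)key.data,
                (int)data.size, (char *)data.data);

        (dbp->close)(dbp);
        return (0);
    }

Calling (dbp->get)() on some key and then (dbp->seq)() with R_NEXT would
not resume from that key, which is exactly the mismatch described below.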
The bug is that cached handles generated as the result of a YPPROC_MATCH
were being treated as though they were the same as handles generated by
a YPPROC_FIRST, which is not the case. The manifestation is that if you
do a 'ypmatch first-key-in-map map' followed by a yp_first()/yp_next()
pair, the yp_first() and yp_next() both return the first key in the
table, which makes the entry appear to be duplicated.
A couple smaller things since I'm here:
- yp_main.c and yp_error.c both have a global 'int debug' in them.
For some reason, our cc/ld doesn't flag this as a multiply defined
symbol even though it should. Removed the declaration from yp_main.c;
we want the one in yp_error.c.
- The Makefile wasn't installing ypinit in the right place.
/tmp/ypmake, thereby fixing problems with successive map updates
possibly reading stale copies of this file left behind by a previous
failed run.
PR: 5571
to work on FreeBSD, man page written by me.)
Also change Makefile.yp a little to be more tolerant in the face of
missing source files. Print a message if we can't find /var/yp/master.passwd
telling the user what to do to fix things.
at the end of gethostanswer()/getanswer()/whatever where it used to
return TRY_AGAIN. This breaks the domain list traversal in ypserv's
async DNS lookup module: it would only retry using the domain(s) from
the 'domain' or 'search' lines in /etc/resolv.conf if __dns_getanswer()
returned TRY_AGAIN.
Changed the test so that either TRY_AGAIN or NO_RECOVERY will work.
This seemed to me the best solution in the event somebody tries to
compile this code on an older system with a different version of BIND.
(You shouldn't do that of course, but then there's a lot of things
in the world that you shouldn't do and people do them anyway.)
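
A minimal sketch of the relaxed test, assuming only the standard
h_errno constants from <netdb.h> (the surrounding ypserv code is not
reproduced here, and the function name is made up):

    #include <netdb.h>

    /* Treat either status as "try the next domain in the resolv.conf
     * search list" rather than only TRY_AGAIN, since newer BIND code
     * returns NO_RECOVERY in this situation. */
    static int
    should_try_next_domain(int herr)
    {
        return (herr == TRY_AGAIN || herr == NO_RECOVERY);
    }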
is not sane: if the TTL on a pending but unanswered query hits 0 and the
circular queue entry is removed and free()d, the for() loop may still try
to use the entry pointer (which now points at memory that is no
longer valid). Usually, deleting only the last entry off the end of
the queue worked, but
if more than one was deleted, the server would crash. I changed things a
bit so this shouldn't happen anymore.
Also arranged to call the prune routine a bit more often.
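
The safe-deletion pattern, sketched here with a <sys/queue.h> TAILQ
rather than ypserv's actual circular queue (entry and field names are
illustrative):

    #include <sys/queue.h>
    #include <stdlib.h>

    struct dnsq_entry {
        TAILQ_ENTRY(dnsq_entry) links;
        int ttl;
        /* ... pending DNS request state ... */
    };

    TAILQ_HEAD(dnsq_head, dnsq_entry);

    /* Save the next pointer BEFORE freeing the current entry so the
     * loop never touches memory that has already been released. */
    static void
    prune_expired(struct dnsq_head *head)
    {
        struct dnsq_entry *ent, *next;

        for (ent = TAILQ_FIRST(head); ent != NULL; ent = next) {
            next = TAILQ_NEXT(ent, links);
            if (--ent->ttl <= 0) {
                TAILQ_REMOVE(head, ent, links);
                free(ent);
            }
        }
    }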
we decide to do a DNS lookup, we NUL terminate the key string provided
by the client before passing it into the DNS lookup module. This is
actually wrong. Assume the key is 'foo.com'. In this case, key.keydat_val
will be "foo.com" and key.keydat_len will be 7 (seven characters; the
string is not NUL-terminated so it is not 8 as you might expect).
The string "foo.com" is actually allocated by the XDR routines when the
RPC request is decoded; exactly 7 bytes are allocated. By adding a NUL,
the string becomes "foo.com\0", but the '\0' goes into an 8th byte which
was never allocated for this string and which could be anywhere. The result
is that while the initial request may succeed, we could trash other
dynamically allocated structures (like, oh, I dunno, the circular map
cache queue?) and SEGV later. This is in fact what happens.
The fix is to copy the string into a larger local buffer and NUL-terminate
that buffer instead.
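
A sketch of the fix, assuming the rpcgen-generated keydat type from
<rpcsvc/yp.h>; the helper name and buffer handling are illustrative,
not the exact ypserv change:

    #include <string.h>
    #include <rpcsvc/yp.h>

    /* Copy the key into a caller-supplied buffer that has room for the
     * terminator instead of writing a '\0' one byte past the end of the
     * XDR-allocated string. */
    static int
    make_nul_terminated(const keydat *key, char *buf, size_t buflen)
    {
        if (key->keydat_len >= buflen)
            return (-1);        /* key too long for the buffer */
        memcpy(buf, key->keydat_val, key->keydat_len);
        buf[key->keydat_len] = '\0';
        return (0);
    }

The caller would pass something like a local char buf[YPMAXRECORD + 1]
and then hand buf to the DNS lookup module.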
Crash first reported by: Ricky Chan <ricky@come.net.uk>
Bug finally located with: Electric Fence 2.0.5
in the transfer request actually exist. Technically ypxfr can do this too,
but why waste the cycles getting ypxfr off the ground for a transfer we
already know is going to fail.
Also apply stricter access control rules; ypproc_xfr_2_svc() is in a
different class than the normal map access procedures.
- servers should be the first target listed in 'all:' in order for slave
servers to be updated correctly: yppush reads the ypservers map to figure
out where all the slaves are, so it needs to be loaded onto the master
ASAP.
- Fixed small bogon in publickey target which nobody has noticed since
we're not using the publickey.byname map yet.
yp_next_record() is called without a key (from xdr_my_ypresp_all()),
in which case it returns the first key in the map. When doing this,
it also needs to update the key index in the map queue entry. Without
this, ypproc_all_2_svc() (and hence ypcat) don't work correctly.
Noticed by: Michael L. Hench <hench@watt.cae.uwm.edu>
This will make a number of things easier in the future, as well as (finally!)
avoiding the Id-smashing problem which has plagued developers for so long.
Boy, I'm glad we're not using sup anymore. This update would have been
insane otherwise.
- Fail YPPROC_ALL requests when we hit the child process limit. This
is a little harsh, but it helps prevent the parent from blocking
and causing other requests to time out.
yp_dnslookup.c:
- Check for duplicate RPC transaction IDs that indicate duplicate
requests sent due to RPC retransmissions. We don't want to send
a second DNS request for the same data while an existing request
is in progress.
- Fix small formatting bogon in snprintf() in yp_async_lookup_addr().
assume that the timeval will be preserved. As the man page says:
".. it is unwise to assume that the timeout value will be unmodified
by the select() call." This happens on Linux and on my system at least.
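
A sketch of the corrected loop shape (descriptor set, timeout value,
and function name are all illustrative):

    #include <sys/select.h>
    #include <sys/time.h>

    static void
    service_loop(int maxfd, const fd_set *watched)
    {
        fd_set readfds;
        struct timeval tv;

        for (;;) {
            readfds = *watched;    /* select() also modifies the fd_set */
            tv.tv_sec = 10;        /* re-arm the timeout on every pass */
            tv.tv_usec = 0;

            switch (select(maxfd + 1, &readfds, NULL, NULL, &tv)) {
            case -1:
                return;            /* error (or EINTR) */
            case 0:
                /* timeout: age out stale pending DNS requests */
                break;
            default:
                /* dispatch ready descriptors, e.g. svc_getreqset() */
                break;
            }
        }
    }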
- yp_main.c: Always add the resolver socket to the set of fds
monitored by select(). It can happen that pending == 0 but we
still have some data in the socket buffer from an old query.
This way, the data will be flushed in a timely manner.
- yp_extern.h: remove proto for yp_dns_pending() since we don't need
it anymore.
- yp_server.c: call yp_async_lookup_name()/yp_async_lookup_addr()
functions with the svc_req pointer as an arg instead of the xprt.
(The svc_req struct includes a pointer to the transport handle, and
it also has the service version number, which the async DNS code
will need; see below.)
- yp_dnslookup.c:
o Nuke yp_dns_pending() since we don't need it anymore.
o In yp_run_dnsq(), swallow up and ignore replies if no requests
are pending or the ID doesn't match any of the IDs in the queue.
o In yp_send_dns_reply(), we assume that we will always be
replying to an NIS v2 client. While this will probably always
be the case, we do support the v1 'match' procedure, and it
has a different result struct than v2. For completeness,
support replying to both NIS v1 and v2 clients.
o Update the queue entry structure to include a member to
keep track of the NIS version number.
o Have yp_async_lookup_name/addr() extract the version number
from the svc_req structure and save it with the queue entry
for yp_send_dns_reply() to inspect later.
o Add some comments.
- Don't dereference a NULL hostent pointer (if T_PTR lookup fails).
- Today I asked myself: "Self, you wrote this nifty async resolver
that does a great job handling delayed replies to clients using
the UDP transport, and the yplib code in libc always uses UDP
(except for yp_all()). But what if some dork makes a DNS lookup using
TCP?" Being the only dork on hand at the time, I tried it and was
enlightened. As I suspected, my transaction ID frobbing hacks cause
fireworks if called on a TCP transport handle (duh: the structures
are different). Fix: check the type of socket in xprt->xp_sock using
getsockopt() and don't use svcudp_get_xid() and svcudp_set_xid() for
anything except SOCK_DGRAM sockets. (Since accept() gives you a
new socket for each connection, the transaction ID munging isn't
needed for TCP anyway.)
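
A sketch of that socket-type check, assuming the old SVCXPRT layout
with an xp_sock member (as referenced above); the wrapper function
itself is illustrative, and the xid munging is only done when this
returns non-zero:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <rpc/rpc.h>

    /* Only SOCK_DGRAM transports carry the UDP-private data that
     * svcudp_get_xid()/svcudp_set_xid() poke at. */
    static int
    xprt_is_udp(SVCXPRT *xprt)
    {
        int type;
        socklen_t len = sizeof(type);

        if (getsockopt(xprt->xp_sock, SOL_SOCKET, SO_TYPE, &type, &len) == -1)
            return (0);
        return (type == SOCK_DGRAM);
    }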
- yp_dblookup.c: Create non-DB-specific database access functions.
Using these allows access to the underlying database functions without
needing explicit knowledge of Berkeley DB. (These are used only
when DB_CACHE is #defined. Other programs that use the non-caching
functions (yp_mkdb, ypxfr, yppush, rpc.yppasswdd) shouldn't notice
the difference.)
- yp_dnslookup: Implement async DNS lookups. We send our own DNS
requests using UDP and put the request in a queue. When the response
arrives, we use the ID in the header to find the corresponding queue
entry and then send the response to the client. We can go about our
business and handle other YP requests in the meantime. This way, we
can deal with time consuming DNS requests without blocking and without
forking.
- yp_server.c: Convert to using new non-DB-specific database access
functions. This simplifies the code a bit and removes the need for
this module to know anything about Berkeley DB. Also convert the
ypproc_match_2_svc() function to use the async DNS lookup routines.
- yp_main.c: tweak yp_svc_run() to add the resolver socket to the
set of descriptors monitored in the select() loop. Also add a
timeout to select(); we may get stale DNS requests stuck in the
queue which we want to invalidate after a while. If the timeout
hits, we decrement the ttl on all pending DNS requests and nuke
those requests that aren't handled before ttl hits zero.
- yp_extern.h: Add prototypes for new stuff.
- yp_svc_udp.c (new file): The async resolver code needs to be able
to rummage around inside the RPC UDP transport handle in order to
work correctly. There's basically one transport handle, and each time
a request comes in, the transaction ID in the handle is changed.
This means that if we queue a DNS request, then we handle some other
unrelated requests, we will be unable to send the DNS response because
the transaction ID and remote address of the client that made the DNS
request will have been lost. What we need to do is save the client
address and transaction ID in the queue entry for the DNS request,
then put the transaction ID and address back in the transport handle
when we're ready to reply. (And then we have to undo the change so
as not to confuse any other part of the server.) The trouble is that
the transaction ID is hidden in an opaque part of the transport handle,
and only the code in the svc_udp module in the RPC library knows how
to handle it. This file contains a couple of functions that let us
read and set the transaction ID in spite of this. This is really a
dirty trick and I should be taken out and shot for even thinking about
it, but there's no other way to get this stuff to work.
- Makefile: add yp_svc_udp.c to SRCS.
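
To make the yp_svc_udp.c item above concrete, here is a sketch of the
save/restore idea. The structure, field names, helper prototypes, and
the use of svc_getcaller() are assumptions made for illustration; only
the svcudp_get_xid()/svcudp_set_xid() names come from the description
above:

    #include <sys/types.h>
    #include <netinet/in.h>
    #include <rpc/rpc.h>

    /* Per-request state saved when a DNS lookup is queued. */
    struct dns_pending {
        unsigned long xid;          /* saved RPC transaction ID */
        struct sockaddr_in client;  /* saved client address */
        unsigned long vers;         /* NIS protocol version (1 or 2) */
        int ttl;                    /* entry expires when this hits 0 */
        /* ... DNS query ID, map name, etc. ... */
    };

    /* Assumed prototypes for the yp_svc_udp.c helpers. */
    extern unsigned long svcudp_get_xid(SVCXPRT *);
    extern void svcudp_set_xid(SVCXPRT *, unsigned long);

    /* Remember who asked, and under which transaction ID. */
    static void
    save_caller(SVCXPRT *xprt, struct dns_pending *p)
    {
        p->xid = svcudp_get_xid(xprt);
        p->client = *svc_getcaller(xprt);
    }

    /* Put the saved state back into the shared transport handle just
     * long enough to send the reply, then restore the original xid so
     * the rest of the server isn't confused. */
    static void
    restore_and_reply(SVCXPRT *xprt, const struct dns_pending *p)
    {
        unsigned long saved = svcudp_get_xid(xprt);

        svcudp_set_xid(xprt, p->xid);
        *svc_getcaller(xprt) = p->client;
        /* ... svc_sendreply() with the v1 or v2 result, per p->vers ... */
        svcudp_set_xid(xprt, saved);
    }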
when I came up with this idea weren't strong enough to help me see it
through. If this was a self-contained application and I had complete
control over what data got sent through what socket and when, I might
be able to get everything to work right without blocking, but instead
I have RPC/XDR in between me and the socket layer, and they have their
own ideas about what to do.
Maybe one day I'll go totally mad and figure out the right way to do
this; in the meantime this mess goes on the back burner.
_without_ using fork().
The problem with YPPROC_ALL is that it transmits an entire map through
a TCP pipe as the result of a single RPC call. First of all, this requires
certain hackery in the XDR filter. Second, if the map being sent is
large, the server can end up spending lots of time in the XDR filter
sending to just the one client, while requests for other clients will
go unanswered.
My original solution for this was to fork() the request into a child
process which terminates after the map has been transmitted (or the
transfer is interrupted due to an error). This leaves the parent free
to handle other requests. But this solution is kind of lame: fork()
is relatively expensive, and we have to keep a cap on the number of
child processes to keep from swamping the system.
What we do now is grab control of the service transport handle and XDR
handle from the RPC library and send the records one at a time ourselves
instead of letting the RPC library do it. We send a record, then go
back to the svc_run() loop and select() on the socket. If select() says
we can still write data, we send the next record. Then we call
svc_getreqset() and handle other RPCs and loop around again. This way,
we can handle other RPCs between records.
We manage multiple YPPROC_ALL requests using a circular queue. When a
request is done, we dequeue it and destroy the handle. We also tag
each request with a ttl which is decremented whenever we run the queue
and a handle isn't serviced. This lets us nuke requests that have sat
idle for too long (if we didn't do this, we might run out of socket
descriptors.)
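
A sketch of that bookkeeping. All structure, field, and function names
are illustrative rather than the real ypserv code, the XDR step that
actually pushes a record down the pipe is elided, and the old SVCXPRT
layout with an xp_sock member is assumed:

    #include <sys/types.h>
    #include <sys/queue.h>
    #include <sys/select.h>
    #include <limits.h>
    #include <db.h>
    #include <rpc/rpc.h>
    #include <stdlib.h>

    struct all_req {
        TAILQ_ENTRY(all_req) links;
        SVCXPRT *xprt;    /* client's transport handle */
        DB *dbp;          /* open map being dumped */
        int ttl;          /* drop the request if it idles too long */
    };

    TAILQ_HEAD(all_head, all_req);

    /* Send the next key/data pair to the client. Returns non-zero when
     * the map is exhausted; handing the record to the XDR layer on
     * req->xprt is omitted here. */
    static int
    send_next_record(struct all_req *req)
    {
        DBT key, data;

        return ((req->dbp->seq)(req->dbp, &key, &data, R_NEXT) != 0);
    }

    /* Run the queue after each select() pass: writable requests get one
     * more record, idle ones are aged, and finished or expired ones are
     * torn down so we don't run out of descriptors. */
    static void
    run_all_queue(struct all_head *head, fd_set *writable)
    {
        struct all_req *req, *next;

        for (req = TAILQ_FIRST(head); req != NULL; req = next) {
            next = TAILQ_NEXT(req, links);
            if (FD_ISSET(req->xprt->xp_sock, writable)) {
                if (send_next_record(req) == 0)
                    continue;    /* more records still to send */
            } else if (--req->ttl > 0)
                continue;        /* not writable yet; keep waiting */
            TAILQ_REMOVE(head, req, links);
            (req->dbp->close)(req->dbp);
            svc_destroy(req->xprt);
            free(req);
        }
    }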
Now all I have to do is come up with an async resolver, and ypserv
won't need to fork() at all. :)
Note: these changes should not go into 2.2 unless they get a very
thorough shakedown before the final cutoff date.
and set the B and S variables here, but I forgot to actually add them to
the master.passwd and hosts.* targets. In other words, they weren't being
passed to yp_mkdb as needed.
This needs to go into 2.2; it doesn't break things a lot, but it leaves
your master.passwd maps available to unprivileged users without you
realizing it.
any maps that may have them. If the YP_SECURE key is present, ypserv
will only allow access to the map from clients on reserved ports.
If the YP_INTERDOMAIN key is present, the server will do DNS lookups
for hostnames that it can't find in hosts.byname or hosts.byaddr.
This is the same as the -d flag (which is retained for backwards
compatibility) but it can be set on a per-map/per-domain basis.
Also modified /var/yp/Makefile to add YP_INTERDOMAIN to the hosts.*
maps and YP_SECURE to master.passwd.* maps by default.
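
Since these flags are just special keys stored in the maps themselves,
checking for one is a plain exact-match fetch. A minimal sketch against
the Berkeley DB 1.85 API (function name and shape are illustrative):

    #include <sys/types.h>
    #include <limits.h>
    #include <string.h>
    #include <db.h>

    static int
    map_has_flag(DB *dbp, const char *flag)    /* e.g. "YP_SECURE" */
    {
        DBT key, data;

        key.data = (void *)flag;
        key.size = strlen(flag);
        return ((dbp->get)(dbp, &key, &data, 0) == 0);
    }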
the callback is a fatal error for this function; return immediately if
this happens. Also make the "failed to establish callback handle" error
message print the IP address of the target callback host.
have it check to see that it doesn't contain any '/' characters. This
prevents possible silliness like ypcat "../../../kernel". We already
test the domain name for this in yp_validdomain(), and ypserv itself
tests the map name in yp_open_db(), but it doesn't hurt to be paranoid
and test for it in the generic access routine too. rpc.ypxfrd does not
test the map name for slashes, but it does call yp_access() with the
map name, so this removes a potential vulnerability from there.
Also make the tests for IPPORT_RESERVED a little more selective: make
sure it trips when map == master.passwd.*, prog == YPPROG and proc ==
YPPROC_XFR, and prog == YPXFRD_FREEBSD_PROG and proc == YPXFRD_GETMAP.
Also use IPPORT_RESERVED instead of hard-coded value.
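
A sketch of the two checks in isolation (the function shapes are
illustrative; the real tests live inside yp_access() and yp_open_db()):

    #include <sys/types.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>

    /* Reject map names that could escape /var/yp, e.g. "../../../kernel". */
    static int
    map_name_ok(const char *map)
    {
        return (strchr(map, '/') == NULL);
    }

    /* Privileged maps and the transfer procedures must be requested
     * from a reserved port. */
    static int
    port_is_reserved(const struct sockaddr_in *addr)
    {
        return (ntohs(addr->sin_port) < IPPORT_RESERVED);
    }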