associated changes that had to happen to make this possible as well as
bugs fixed along the way.
Bring in required TLI library routines to support this.
Since we don't support TLI we've essentially copied what NetBSD
has done, adding a thin layer that emulates the TLI calls by
translating them directly into BSD socket calls.
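
Roughly, the idea looks like this (a sketch only; the names below are
made up for illustration and are not the actual shim functions):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /*
     * Illustrative only: where TLI code would t_open() a transport
     * device and t_bind() an endpoint, the compatibility layer just
     * creates and binds an ordinary BSD socket.
     */
    int
    shim_t_open_udp(void)
    {
        return (socket(AF_INET, SOCK_DGRAM, 0));
    }

    int
    shim_t_bind(int fd, struct sockaddr_in *sin)
    {
        return (bind(fd, (struct sockaddr *)sin, sizeof(*sin)));
    }
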
This is mostly from Sun's tirpc release that was made in 1994,
however some fixes were backported from the 1999 release (supposedly
only made available after this porting effort was underway).
The submitter has agreed to continue on and bring us up to the
1999 release.
Several key features are introduced with this update:
Client calls are thread safe. (The 1999 code adds server-side
thread safety.)
An updated, more modern interface.
Many userland updates were done to bring the code up to par with
the recent RPC API.
The pthreads library has also been updated: a function,
pthread_main_np(), was added to emulate a function of Sun's
threads library.
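
For reference, pthread_main_np() plays the role of Sun's thr_main();
here is a minimal usage sketch (the example below is illustrative,
not code from the RPC library):

    #include <err.h>
    #include <pthread.h>
    #include <pthread_np.h>

    static void
    report_context(void)
    {
        /* pthread_main_np() returns 1 in the initial thread. */
        if (pthread_main_np() == 1)
            warnx("running in the main thread");
        else
            warnx("running in some other thread");
    }
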
While we're at it, bring in NetBSD's lockd, it's been far too
long of a wait.
New rpcbind(8) replaces portmap(8) (supporting communication over
an authenticated Unix-domain socket, and by default only allowing
set and unset requests over that channel). It's much more secure
than the old portmapper.
The umount(8), mountd(8), mount_nfs(8), and nfsd(8) utilities have
also been upgraded to support TI-RPC and IPv6.
umount(8) has also been fixed to unmount pathnames longer than 80
characters, which are currently truncated by the kernel statfs
structure.
Submitted by: Martin Blapp <mb@imp.ch>
Manpage review: ru
Secure RPC implemented by: wpaul
to be clever by avoiding the 'check all domains in the search list'
cycle in certain cases, but this would lose if handed a name like
"foo.ctr" which refers to an FQDN of "foo.ctr.columbia.edu". If
"columbia.edu" is in the search list in /etc/resolv.conf then the
DNS lookup code should resolve it, but it didn't.
a buffer overflow, but might negatively impact those hosts that have
enough aliases to fill MAXHOSTNAMELEN * 2 characters.
Good candidate for merging back into -stable. Lightly tested by me, but
it came from OpenBSD a while ago.
Obtained from: OpenBSD
at the end of gethostanswer()/getanswer()/whatever where it used to
return TRY_AGAIN. This breaks the domain list traversal in ypserv's
async DNS lookup module: it would only retry using the domain(s) from
the 'domain' or 'search' lines in /etc/resolv.conf if __dns_getanswer()
returned TRY_AGAIN.
Changed the test so that either TRY_AGAIN or NO_RECOVERY will work.
This seemed to me the best solution in the event somebody tries to
compile this code on an older system with a different version of BIND.
(You shouldn't do that of course, but then there's a lot of things
in the world that you shouldn't do and people do them anyway.)
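
The new test boils down to something like this (a sketch with
made-up names, not the exact ypserv code):

    #include <netdb.h>

    /*
     * Sketch only: treat either resolver error as "try the next
     * domain from /etc/resolv.conf" rather than giving up.
     */
    static int
    should_try_next_domain(struct hostent *hp)
    {
        return (hp == NULL &&
            (h_errno == TRY_AGAIN || h_errno == NO_RECOVERY));
    }
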
is not sane: if the TTL on a pending but unanswered query hits 0 and the
circular queue entry is removed and free()d, the for() loop may still try
to use the entry pointer (which now points at no longer valid memory).
Usually, deleting only the last entry off the end of the queue worked, but
if more than one was deleted, the server would crash. I changed things a
bit so this shouldn't happen anymore.
Also arranged to call the prune routine a bit more often.
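
The fix amounts to the usual safe-removal pattern; roughly like this
(the real code uses a circular queue, and the names here are made up
for illustration):

    #include <sys/queue.h>
    #include <stdlib.h>

    struct dnsq_entry {
        TAILQ_ENTRY(dnsq_entry) links;
        int ttl;
    };
    TAILQ_HEAD(dnsq_head, dnsq_entry);

    static void
    prune_expired(struct dnsq_head *head)
    {
        struct dnsq_entry *q, *qnext;

        /* Grab the next pointer before the entry can be free()d. */
        for (q = TAILQ_FIRST(head); q != NULL; q = qnext) {
            qnext = TAILQ_NEXT(q, links);
            if (--q->ttl <= 0) {
                TAILQ_REMOVE(head, q, links);
                free(q);
            }
        }
    }
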
This will make a number of things easier in the future, as well as (finally!)
avoiding the Id-smashing problem which has plagued developers for so long.
Boy, I'm glad we're not using sup anymore. This update would have been
insane otherwise.
- Fail YPPROC_ALL requests when we hit the child process limit. This
is a little harsh, but it helps prevent the parent from blocking
and causing other requests to time out.
yp_dnslookup.c:
- Check for duplicate RPC transaction IDs that indicate duplicate
requests sent due to RPC retransmissions. We don't want to send
a second DNS request for the same data while an existing request
is in progress.
- Fix small formatting bogon in snprintf() in yp_async_lookup_addr().
- yp_main.c: Always add the resolver socket to the set of fds
monitored by select(). It can happen that pending == 0 but we
still have some data in the socket buffer from an old query.
This way, the data will be flushed in a timely manner.
- yp_extern.h: remove the prototype for yp_dns_pending() since we
  don't need it anymore.
- yp_server.c: call yp_async_lookup_name()/yp_async_lookup_addr()
functions with the svc_req pointer as an arg instead of the xprt.
(The svc_req struct includes a pointer to the transport handle,
and it also has the service version number which the async DNS
code will need. (see below))
- yp_dnslookup.c:
o Nuke yp_dns_pending() since we don't need it anymore.
o In yp_run_dnsq(), swallow up and ignore replies if no requests
are pending or the ID doesn't match any of the IDs in the queue.
o In yp_send_dns_reply(), we assume that we will always be
replying to an NIS v2 client. While this will probably always
be the case, we do support the v1 'match' procedure, and it
has a different result struct than v2. For completeness,
support replying to both NIS v1 and v2 clients.
o Update the queue entry structure to include a member to
keep track of the NIS version number.
o Have yp_async_lookup_name/addr() extract the version number
from the svc_req structure and save it with the queue entry
for yp_send_dns_reply() to inspect later.
o Add some comments.
- Don't dereference a NULL hostent pointer (if T_PTR lookup fails).
- Today I asked myself: "Self, you wrote this nifty async resolver
that does a great job handling delayed replies to clients using
the UDP transport, and the yplib code in libc always uses UDP
(except for yp_all()). But what if some dork makes a DNS lookup using
TCP?" Being the only dork on hand at the time, I tried it and was
enlightened. As I suspected, my transaction ID frobbing hacks cause
fireworks if called on a TCP transport handle (duh: the structures
are different). Fix: check the type of socket in xprt->xp_sock using
getsockopt() and don't use svcudp_get_xid() and svcudp_set_xid() for
anything except SOCK_DGRAM sockets. (Since accept() gives you a
new socket for each connection, the transaction ID munging isn't
needed for TCP anyway.)
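
  The check itself is tiny; roughly like this (a sketch only, assuming
  the old SVCXPRT with its xp_sock member; callers would then skip
  svcudp_get_xid()/svcudp_set_xid(), described further below, whenever
  this returns false):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <rpc/rpc.h>

    /*
     * Sketch only: decide whether it is safe to touch the RPC
     * transaction ID, i.e. whether this is a datagram transport.
     */
    static int
    xprt_is_dgram(SVCXPRT *xprt)
    {
        int type;
        socklen_t len = sizeof(type);

        if (getsockopt(xprt->xp_sock, SOL_SOCKET, SO_TYPE,
            &type, &len) == -1)
            return (0);
        return (type == SOCK_DGRAM);
    }
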
- yp_dblookup.c: Create non-DB specific database access functions.
Using these allows access to the underlying database functions without
needing explicit knowledge of Berkeley DB. (These are used only
when DB_CACHE is #defined. Other programs that use the non-caching
functions (yp_mkdb, ypxfr, yppush, rpc.yppasswdd) shouldn't notice
the difference.)
- yp_dnslookup: Implement async DNS lookups. We send our own DNS
requests using UDP and put the request in a queue. When the response
arrives, we use the ID in the header to find the corresponding queue
entry and then send the response to the client. We can go about our
business and handle other YP requests in the meantime. This way, we
can deal with time consuming DNS requests without blocking and without
forking.
- yp_server.c: Convert to using new non-DB-specific database access
functions. This simplifies the code a bit and removes the need for
this module to know anything about Berkeley DB. Also convert the
ypproc_match_2_svc() function to use the async DNS lookup routines.
- yp_main.c: tweak yp_svc_run() to add the resolver socket to the
set of descriptors monitored in the select() loop. Also add a
timeout to select(); we may get stale DNS requests stuck in the
queue which we want to invalidate after a while. If the timeout
hits, we decrement the ttl on all pending DNS requests and nuke
those requests that aren't handled before ttl hits zero.
- yp_extern.h: Add prototypes for new stuff.
- yp_svc_udp.c (new file): The async resolver code needs to be able
to rummage around inside the RPC UDP transport handle in order to
work correctly. There's basically one transport handle, and each time
a request comes in, the transaction ID in the handle is changed.
This means that if we queue a DNS request, then we handle some other
unrelated requests, we will be unable to send the DNS response because
the transaction ID and remote address of the client that made the DNS
request will have been lost. What we need to do is save the client
address and transaction ID in the queue entry for the DNS request,
then put the transaction ID and address back in the transport handle
when we're ready to reply. (And then we have to undo the change so
as not to confuse any other part of the server.) The trouble is that
the transaction ID is hidden in an opaque part of the transport handle,
and only the code in the svc_udp module in the RPC library knows how
to handle it. This file contains a couple of functions that let us
read and set the transaction ID in spite of this (a rough sketch of
them follows this list). This is really a
dirty trick and I should be taken out and shot for even thinking about
it, but there's no other way to get this stuff to work.
- Makefile: add yp_svc_udp.c to SRCS.
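
As for the two yp_svc_udp.c accessors mentioned above, they look
roughly like this. This is a sketch only: it assumes the RPC library's
svc_udp code keeps its private per-handle state, including the current
transaction ID, behind the handle's xp_p2 pointer, and the structure
below is just a stand-in that would have to mirror the library's real
(private) layout:

    #include <rpc/rpc.h>

    /* Stand-in for the library's private svc_udp per-handle data. */
    struct svcudp_data_shadow {
        u_int su_iosz;      /* size of the I/O buffer */
        u_long su_xid;      /* current RPC transaction ID */
        /* ... the rest of the library's private state ... */
    };

    u_long
    svcudp_get_xid(SVCXPRT *xprt)
    {
        return (((struct svcudp_data_shadow *)xprt->xp_p2)->su_xid);
    }

    u_long
    svcudp_set_xid(SVCXPRT *xprt, u_long xid)
    {
        struct svcudp_data_shadow *su;

        su = (struct svcudp_data_shadow *)xprt->xp_p2;
        su->su_xid = xid;
        return (xid);
    }
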
Fix some comments to reflect reality (in some cases I made changes
to code but not to the comments).
Change some instances of 'inline' to '__inline' to pacify
gcc -ansi -pedantic.
Use rcsid strings more consistently.
Make 'oldaddr' static in yp_access().
Use strcpy()/strcat() in yp_open_db_cache() instead of snprintf().
(Seems to be a little faster this way.)
equivalent to the old ypserv, except that it doesn't support the
-p [port] option to force the server to use a particular port.
The server stubs and yp.h header file are auto-generated from the yp.x
protocol definition file. The auto-generated XDR routines in libc/yp
are also used. The database access code has been broken out into a
separate module so that other NIS utilities (ypxfr in particular)
can use it.
Note that the old mknetid script is being temporarily moved here; it
will be replaced by an mknetid program which will eventually have
a home under /usr/src/libexec. (The existing script is actually
somewhat broken -- it doesn't handle hosts -- but this isn't a big
deal at this point since the netid.byname map is really only useful
for Secure RPC, which we don't have yet.)