This will make a number of things easier in the future, as well as (finally!)
avoid the Id-smashing problem which has plagued developers for so long.
Boy, I'm glad we're not using sup anymore. This update would have been
insane otherwise.
- yp_main.c: Always add the resolver socket to the set of fds
monitored by select(). It can happen that pending == 0 but we
still have some data in the socket buffer from an old query.
This way, the data will be flushed in a timely manner.
- yp_extern.h: remove proto for yp_dns_pending() since we don't need
it anymore.
- yp_server.c: call yp_async_lookup_name()/yp_async_lookup_addr()
functions with the svc_req pointer as an arg instead of the xprt.
(The svc_req struct includes a pointer to the transport handle,
and it also has the service version number, which the async DNS
code will need; see below.)
- yp_dnslookup.c:
o Nuke yp_dns_pending() since we don't need it anymore.
o In yp_run_dnsq(), swallow up and ignore replies if no requests
are pending or the ID doesn't match any of the IDs in the queue.
o In yp_send_dns_reply(), we previously assumed that we would always
be replying to an NIS v2 client. While this will probably always
be the case, we do support the v1 'match' procedure, and it
has a different result struct than v2. For completeness,
support replying to both NIS v1 and v2 clients.
o Update the queue entry structure to include a member to
keep track of the NIS version number.
o Have yp_async_lookup_name/addr() extract the version number
from the svc_req structure and save it with the queue entry
for yp_send_dns_reply() to inspect later (see the sketch after
this list).
o Add some comments.
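To make the reply handling above concrete, here's a rough sketch of the
idea. The names (dns_pending, dns_find_pending() and so on) are made up
for illustration and are not the actual ypserv code; the point is just
that unmatched replies get dropped on the floor and that the saved NIS
version picks the reply format.

    #include <sys/types.h>

    struct dns_pending {            /* hypothetical queue entry */
        u_int16_t id;               /* ID copied from the DNS query header */
        u_long  vers;               /* NIS version saved from the svc_req */
        int     ttl;                /* decremented on each select() timeout */
        struct dns_pending *next;
    };

    static struct dns_pending *dns_queue;

    /* Called with the ID taken from a reply just read off the resolver
     * socket (same byte order as the queued query ID).  If nothing in
     * the queue matches, the caller simply discards the reply. */
    static struct dns_pending *
    dns_find_pending(u_int16_t id)
    {
        struct dns_pending *q;

        for (q = dns_queue; q != NULL; q = q->next)
            if (q->id == id)
                return (q);
        return (NULL);
    }

    /* The saved version number decides which result struct to build. */
    static void
    dns_send_reply(struct dns_pending *q)
    {
        if (q->vers == 1) {
            /* build and send a v1-style 'match' result */
        } else {
            /* build and send a v2 ypresp_val result */
        }
    }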
- yp_dblookup.c: Create non-DB specific database access functions.
Using these allows access to the underlying database functions without
needing explicit knowledge of Berkeley DB. (These are used only
when DB_CACHE is #defined. Other programs that use the non-caching
functions (yp_mkdb, ypxfr, yppush, rpc.yppasswdd) shouldn't notice
the difference.)
- yp_dnslookup: Implement async DNS lookups. We send our own DNS
requests using UDP and put the request in a queue. When the response
arrives, we use the ID in the header to find the corresponding queue
entry and then send the response to the client. We can go about our
business and handle other YP requests in the meantime. This way, we
can deal with time consuming DNS requests without blocking and without
forking.
- yp_server.c: Convert to using new non-DB-specific database access
functions. This simplifies the code a bit and removes the need for
this module to know anything about Berkeley DB. Also convert the
ypproc_match_2_svc() function to use the async DNS lookup routines.
- yp_main.c: tweak yp_svc_run() to add the resolver socket to the
set of descriptors monitored in the select() loop. Also add a
timeout to select(); we may get stale DNS requests stuck in the
queue which we want to invalidate after a while. If the timeout
hits, we decrement the ttl on all pending DNS requests and nuke
those requests that aren't handled before ttl hits zero.
- yp_extern.h: Add prototypes for new stuff.
- yp_svc_udp.c (new file): The async resolver code needs to be able
to rummage around inside the RPC UDP transport handle in order to
work correctly. There's basically one transport handle, and each time
a request comes in, the transaction ID in the handle is changed.
This means that if we queue a DNS request and then handle some other
unrelated requests, we will be unable to send the DNS response because
the transaction ID and remote address of the client that made the DNS
request will have been lost. What we need to do is save the client
address and transaction ID in the queue entry for the DNS request,
then put the transaction ID and address back in the transport handle
when we're ready to reply. (And then we have to undo the change so
as not to confuse any other part of the server.) The trouble is that
the transaction ID is hidden in an opaque part of the transport handle,
and only the code in the svc_udp module in the RPC library knows how
to handle it. This file contains a couple of functions that let us
read and set the transaction ID in spite of this (a sketch of the
idea follows this list). This is really a dirty trick and I should
be taken out and shot for even thinking about it, but there's no
other way to get this stuff to work.
- Makefile: add yp_svc_udp.c to SRCS.
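Since the transaction ID trick above is the hairiest part, here's a rough
sketch of what yp_svc_udp.c does. The struct layout mirrors the private
per-transport data that the old svc_udp module in the RPC library keeps
behind the xp_p2 pointer, and the helper names here are illustrative
rather than guaranteed to match the ones in the tree.

    #include <rpc/rpc.h>

    /* Mirror of the opaque data svc_udp hides behind xprt->xp_p2
     * (this is the dirty part). */
    struct svcudp_data {
        u_int   su_iosz;                     /* size of I/O buffers */
        u_long  su_xid;                      /* current transaction ID */
        XDR     su_xdrs;                     /* XDR handle */
        char    su_verfbody[MAX_AUTH_BYTES]; /* verifier body */
        char   *su_cache;                    /* reply cache, if any */
    };
    #define su_data(xprt)   ((struct svcudp_data *)((xprt)->xp_p2))

    /* Read the transaction ID of the request currently in the handle. */
    u_long
    svcudp_get_xid(SVCXPRT *xprt)
    {
        return (su_data(xprt)->su_xid);
    }

    /* Stuff a saved transaction ID back into the handle just before we
     * send a deferred DNS reply; returns the old one so the caller can
     * restore it afterwards. */
    u_long
    svcudp_set_xid(SVCXPRT *xprt, u_long xid)
    {
        u_long old = su_data(xprt)->su_xid;

        su_data(xprt)->su_xid = xid;
        return (old);
    }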
when I came up with this idea weren't strong enough to help me see it
through. If this was a self-contained application and I had complete
control over what data got sent through what socket and when, I might
be able to get everything to work right without blocking, but instead
I have RPC/XDR in between me and the socket layer, and they have their
own ideas about what to do.
Maybe one day I'll go totally mad and figure out the right way to do
this; in the meantime this mess goes on the back burner.
Handle YPPROC_ALL requests _without_ using fork().
The problem with YPPROC_ALL is that it transmits an entire map through
a TCP pipe as the result of a single RPC call. First of all, this requires
certain hackery in the XDR filter. Second, if the map being sent is
large, the server can end up spending lots of time in the XDR filter
sending to just the one client, while requests for other clients will
go unanswered.
My original solution for this was to fork() the request into a child
process which terminates after the map has been transmitted (or the
transfer is interrupted due to an error). This leaves the parent free
to handle other requests. But this solution is kind of lame: fork()
is relatively expensive, and we have to keep a cap on the number of
child processes to keep from swamping the system.
What we do now is grab control of the service transport handle and XDR
handle from the RPC library and send the records one at a time ourselves
instead of letting the RPC library do it. We send a record, then go
back to the svc_run() loop and select() on the socket. If select() says
we can still write data, we send the next record. Then we call
svc_getreqset() and handle other RPCs and loop around again. This way,
we can handle other RPCs between records.
We manage multiple YPPROC_ALL requests using a circular queue. When a
request is done, we dequeue it and destroy the handle. We also tag
each request with a ttl which is decremented whenever we run the queue
and a handle isn't serviced. This lets us nuke requests that have sat
idle for too long (if we didn't do this, we might run out of socket
descriptors.)
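For what it's worth, the record-at-a-time idea boils down to something
like the sketch below. The names are made up, the actual sending and
dequeueing are elided, and the real thing uses a circular queue rather
than the plain list shown here.

    #include <sys/types.h>
    #include <sys/select.h>
    #include <rpc/rpc.h>

    struct all_xfer {               /* hypothetical per-request state */
        SVCXPRT *xprt;              /* transport handle we took over */
        int     fd;                 /* TCP socket behind the handle */
        int     ttl;                /* request is nuked when this hits 0 */
        struct all_xfer *next;
    };

    static struct all_xfer *all_queue;

    /* Called from the svc_run()-style loop after select() returns:
     * push one record per writable handle, age the idle ones. */
    static void
    run_all_queue(fd_set *writefds)
    {
        struct all_xfer *a, *next;

        for (a = all_queue; a != NULL; a = next) {
            next = a->next;
            if (FD_ISSET(a->fd, writefds)) {
                /* XDR-encode and send exactly one record;
                 * dequeue and svc_destroy() on end-of-map or error */
            } else if (--a->ttl <= 0) {
                /* sat idle too long: dequeue and destroy the handle
                 * so we don't run out of socket descriptors */
            }
        }
    }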
Now all I have to do is come up with an async resolver, and ypserv
won't need to fork() at all. :)
Note: these changes should not go into 2.2 unless they get a very
thorough shakedown before the final cutoff date.
Teach ypserv to check for the YP_SECURE and YP_INTERDOMAIN special keys in
any maps that may have them. If the YP_SECURE key is present, ypserv
will only allow access to the map from clients on reserved ports.
If the YP_INTERDOMAIN key is present, the server will do DNS lookups
for hostnames that it can't find in hosts.byname or hosts.byaddr.
This is the same as the -d flag (which is retained for backwards
compatibility) but it can be set on a per-map/per-domain basis.
Also modified /var/yp/Makefile to add YP_INTERDOMAIN to the hosts.*
maps and YP_SECURE to master.passwd.* maps by default.
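A rough sketch of how the special keys end up being used follows; the
struct and function names are invented for illustration, and the flags
would be set by probing the open map for the literal YP_SECURE and
YP_INTERDOMAIN keys.

    #include <sys/types.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    struct mapinfo {                /* hypothetical per-map state */
        int     secure;             /* map contains a YP_SECURE key */
        int     interdomain;        /* map contains a YP_INTERDOMAIN key */
    };

    /* YP_SECURE: only clients bound to reserved ports may read the map. */
    static int
    yp_client_allowed(const struct mapinfo *mi, const struct sockaddr_in *who)
    {
        if (mi->secure && ntohs(who->sin_port) >= IPPORT_RESERVED)
            return (0);
        return (1);
    }

    /* YP_INTERDOMAIN: fall back to DNS when a hosts map has no match. */
    static int
    yp_try_dns(const struct mapinfo *mi, int found_in_map)
    {
        return (!found_in_map && mi->interdomain);
    }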
yp_dblookup.c:
- Implement database handle caching. What this means is that instead
of opening and closing map databases for each request, we open a
database and save the handle (and, if requested, the key index)
in an array. This saves a bit of overhead on things like repeated
YPPROC_NEXT calls, such as you'd get from getpwent(). Normally,
each YPPROC_NEXT would require open()ing the database, seeking
to the location supplied by the caller (which is time consuming with
hash databases as the R_CURSOR flag doesn't work), reading the
data, close()ing the database and then shipping the data off to
the caller. The system call overhead is prohibitive, especially
with very large maps. By caching the handle to an open database,
we eliminate at least the open()/close() system calls, as well
as the associated DB setup and tear-down operations, for a large
percentage of the time. This improves performance substantially at
the cost of consuming a little more memory than before (see the
sketch after this list).
Note that all the caching support is surrounded by #ifdef DB_CACHE
so that this same source module can still be used by other programs
that don't need it.
- Make yp_open_db() call yp_validdomain(). Doing it here saves cycles
when caching is enabled since a hit on the map cache list by
definition means that the domain being referenced is valid.
- Also make yp_open_db() check for exhaustion of file descriptors,
just in case.
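The handle caching boils down to something like this sketch. The names
(yp_open_db_cache(), MAXDBS and friends) are made up for illustration,
and the eviction logic is omitted; only the general shape matches the
description above.

    #include <sys/types.h>
    #include <sys/param.h>
    #include <db.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    #define MAXDBS  20              /* assumption: small fixed-size cache */
    #define YP_DIR  "/var/yp"       /* standard map directory */

    struct dbent {
        DB      *dbp;                   /* open Berkeley DB handle */
        char    name[MAXPATHLEN];       /* "domain/map" this handle serves */
    };

    static struct dbent dbcache[MAXDBS];

    static DB *
    yp_open_db_cache(const char *domain, const char *map)
    {
        char name[MAXPATHLEN], path[MAXPATHLEN];
        int i, slot = -1;

        snprintf(name, sizeof(name), "%s/%s", domain, map);

        /* Cache hit: no open()/close(), no DB setup/tear-down, and the
         * domain was already validated when the entry was added. */
        for (i = 0; i < MAXDBS; i++) {
            if (dbcache[i].dbp == NULL) {
                slot = i;
                continue;
            }
            if (strcmp(dbcache[i].name, name) == 0)
                return (dbcache[i].dbp);
        }
        if (slot == -1)
            return (NULL);          /* cache full: a real version evicts */

        /* Miss: open the map for real and remember the handle. */
        snprintf(path, sizeof(path), "%s/%s", YP_DIR, name);
        dbcache[slot].dbp = dbopen(path, O_RDONLY, 0, DB_HASH, NULL);
        if (dbcache[slot].dbp != NULL)
            snprintf(dbcache[slot].name, sizeof(dbcache[slot].name),
                "%s", name);
        return (dbcache[slot].dbp);
    }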
yp_server.c:
- Reorganize things a little to take advantage of the database
handle caching. Add a call to yp_flush_all() in ypproc_clear_2_svc().
- Remove calls to yp_validdomain() from some of the service procedures.
yp_validdomain() is called inside yp_open_db() now, so procedures that
call into the database package don't need to use yp_validdomain()
themselves.
- Fix a bogosity in ypproc_maplist_2_svc(): don't summarily initialize
the result.maps pointer to NULL; doing so causes yp_maplist_free()
to fail and leaks memory.
- Make ypproc_master_2_svc() copy the string it gets from the database
package into a private static buffer before trying to NUL terminate it.
This is necessary with the DB handle caching: stuffing a NUL into the
data returned by the DB package will goof it up internally (see the
sketch after this list).
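The master-string fix is basically this (the buffer size and the helper
name are just placeholders): copy first, then terminate the copy, so the
bytes the cached DB handle still owns are never modified.

    #include <stddef.h>
    #include <string.h>

    static char master_buf[1024];   /* assumption: master names fit here */

    static const char *
    yp_copy_master(const char *data, size_t len)
    {
        if (len >= sizeof(master_buf))
            len = sizeof(master_buf) - 1;
        memcpy(master_buf, data, len);  /* leave the DB's own copy alone */
        master_buf[len] = '\0';
        return (master_buf);
    }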
yp_main.c:
- Stuff for DB handle caching: call yp_init_dbs() to clear the
handle array and add a call to yp_flush_all() to the SIGHUP
signal handler (see the sketch below).
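In other words, roughly this (yp_init_dbs() and yp_flush_all() are the
names used above; the handler placement is just a sketch):

    #include <signal.h>

    extern void yp_init_dbs(void);  /* zero out the cached handle array */
    extern void yp_flush_all(void); /* close every cached DB handle */

    static void
    yp_hup(int sig)
    {
        (void)sig;
        yp_flush_all();             /* drop the cache when maps change */
    }

    /* somewhere early in main():
     *      yp_init_dbs();
     *      signal(SIGHUP, yp_hup);
     */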
Makefile.yp:
- Reorganize to deal with database caching. yp_mkdb(8) can now be used
to send a YPPROC_CLEAR request to ypserv(8). Call it after each map
is created to refresh ypserv's cache.
- Add support for mail.alias map.
Contributed by Mike Murphy (mrm@sceard.com).
- Make default location for the netgroups source file be /var/yp/netgroup
instead of /etc/netgroup.
mkaliases:
- New file: script to generate mail.alias map.
Contributed by Mike Murphy (mrm@sceard.com).
Makefile:
- Install Makefile.yp as /var/yp/Makefile.dist and link it to
/var/yp/Makefile only if /var/yp/Makefile doesn't already exist.
Suggested by Peter Wemm.
- Install new mkaliases script in /usr/libexec along with mknetid.
- Use a somewhat saner approach to generating rpcgen-dependent files
as suggested by Garrett Wollman.
Add support for /var/yp/securenets-based access control, handled in the
same way as the SunOS ypserv (same format, described in the ypserv man
page). If the user wants tcpwrapper-style access control, they can
recompile ypserv to use that instead. This way we get securenets without
having to ship libwrap.a and tcpd.h with the core FreeBSD distribution.
If /var/yp/securenets doesn't exist, ypserv allows all connections.
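Roughly, the check looks like the sketch below; the parsed-entry struct
and the function name are invented here, and the file format itself is
the one described in the ypserv man page.

    #include <sys/types.h>
    #include <netinet/in.h>

    struct securenet {              /* hypothetical parsed entry */
        struct in_addr  net;
        struct in_addr  mask;
        struct securenet *next;
    };

    static struct securenet *securenets;   /* NULL if the file was absent */

    static int
    yp_access_ok(const struct sockaddr_in *client)
    {
        struct securenet *sn;

        if (securenets == NULL)     /* no file: allow everyone */
            return (1);

        for (sn = securenets; sn != NULL; sn = sn->next)
            if ((client->sin_addr.s_addr & sn->mask.s_addr) ==
                sn->net.s_addr)
                return (1);
        return (0);
    }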
- Add a ypxfr_callback() function that we can use to signal failure to
yppush(8) in the event that we can't fork()/exec() ypxfr(8). yppush
only checks the return status from YPPROC_XFR enough to determine
that the RPC succeeded: it relies on its callback service to figure
out whether or not the transfer actually worked.
- Give yp_dblookup.c its own debug variable (ypdb_debug) so that DB
access debugging messages can be turned on or off independently of the
program's global debug messages.
- Have the Makefile rpcgen the yppushresp_xfr_1() client stub for us and
nuke the unneeded rule for yp_xdr.c that I left in by mistake (the XDR
filters live in libc now).