scheme. Additionally, add the capability to check for unexpected
kernel page faults. The maximum amount of kva space for buffers hasn't
been decreased from where it is, but it will now be possible to do so.
This scheme manages the kva space similarly to the buffers themselves. If
there isn't enough kva space because of usage or fragmentation, buffers
will be reclaimed until a buffer allocation succeeds. This scheme
should be very resistant to fragmentation problems until/if the LFS code
is fixed and uses the bogus buffer locking scheme -- but a 'fixed' LFS
is not likely to use such a scheme.
Now there should be NO problem allocating buffers up to MAXPHYS.
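As a minimal sketch of the allocation strategy described above (the
helper names here are hypothetical, not the actual vfs_bio code), the
idea is simply to retry the kva allocation after reclaiming a buffer:

    /*
     * Hypothetical sketch: if a buffer's kva allocation fails because
     * of usage or fragmentation, reclaim an idle buffer and retry
     * until the allocation succeeds.
     */
    static caddr_t
    alloc_buffer_kva(vm_size_t size)
    {
            caddr_t kva;

            while ((kva = kva_alloc_from_buffer_map(size)) == NULL) {
                    /* Release the kva held by an idle buffer, then retry. */
                    reclaim_one_buffer();
            }
            return (kva);
    }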
Handle YPPROC_ALL requests _without_ using fork().
The problem with YPPROC_ALL is that it transmits an entire map through
a TCP pipe as the result of a single RPC call. First of all, this requires
certain hackery in the XDR filter. Second, if the map being sent is
large, the server can end up spending lots of time in the XDR filter
sending to just the one client, while requests from other clients go
unanswered.
My original solution for this was to fork() the request into a child
process which terminates after the map has been transmitted (or the
transfer is interrupted due to an error). This leaves the parent free
to handle other requests. But this solution is kind of lame: fork()
is relatively expensive, and we have to keep a cap on the number of
child processes to keep from swamping the system.
What we do now is grab control of the service transport handle and XDR
handle from the RPC library and send the records one at a time ourselves
instead of letting the RPC library do it. We send a record, then go
back to the svc_run() loop and select() on the socket. If select() says
we can still write data, we send the next record. Then we call
svc_getreqset() and handle other RPCs and loop around again. This way,
we can handle other RPCs between records.
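Roughly, the main loop now looks like the following sketch (hedged:
all_request_fds and yp_run_all_queue() are hypothetical stand-ins, not
the actual ypserv code; svc_fdset and svc_getreqset() are the usual RPC
library interfaces):

    #include <sys/select.h>
    #include <rpc/rpc.h>

    /* Hypothetical stand-ins for the real ypserv bookkeeping. */
    extern fd_set all_request_fds;          /* sockets with a YPPROC_ALL in progress */
    extern void   yp_run_all_queue(fd_set *);

    static void
    service_loop(int maxfd)
    {
            struct timeval tv = { 1, 0 };
            fd_set readfds, writefds;

            for (;;) {
                    readfds  = svc_fdset;   /* RPC library's descriptor set */
                    writefds = all_request_fds;
                    if (select(maxfd + 1, &readfds, &writefds, NULL, &tv) == -1)
                            break;
                    yp_run_all_queue(&writefds);  /* one more record per writable handle */
                    svc_getreqset(&readfds);      /* then handle other RPCs */
            }
    }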
We manage multiple YPPROC_ALL requests using a circular queue. When a
request is done, we dequeue it and destroy the handle. We also tag
each request with a ttl which is decremented whenever we run the queue
and a handle isn't serviced. This lets us nuke requests that have sat
idle for too long (if we didn't do this, we might run out of socket
descriptors).
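The bookkeeping could be sketched roughly as follows (a hypothetical
structure, not the actual ypserv source):

    #include <sys/queue.h>
    #include <rpc/rpc.h>

    /*
     * One entry per in-progress YPPROC_ALL transfer.  The ttl is
     * decremented whenever the queue is run without this handle being
     * serviced; when it reaches zero the request is dequeued and the
     * handle destroyed.
     */
    struct yp_all_req {
            SVCXPRT                   *xprt;  /* service transport handle */
            XDR                       *xdrs;  /* XDR handle for the reply stream */
            int                        ttl;   /* idle time-to-live */
            CIRCLEQ_ENTRY(yp_all_req)  link;  /* circular queue linkage */
    };
    CIRCLEQ_HEAD(yp_all_head, yp_all_req);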
Now all I have to do is come up with an async resolver, and ypserv
won't need to fork() at all. :)
Note: these changes should not go into 2.2 unless they get a very
thorough shakedown before the final cutoff date.
BREAKs always arrive together with Framing Errors and were incorrectly
treated as FEs and discarded.
Reorganized the BREAK/FE/PE tests.
Found by: NIST-PCTS
with sio devices (not perfectly, since there is no way to flush the tx
holding register on 8250-16450s. I'm not sure if resetting the FIFOs
flushes the tx shift register).
Reminded by: NIST-PCTS
is completely empty. There is no interrupt for output completion, so
poll for it every 10 ms after output is nearly complete. Now ttywait()
works right.
Reminded by: NIST-PCTS
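A hedged sketch of the polling idea (hypothetical names and a minimal
softc, using the old timeout()/wakeup() kernel interfaces; not the
actual sio code):

    #define LSR_TEMT        0x40    /* tx shift and holding registers empty */

    struct com_s {                  /* hypothetical, minimal per-port softc */
            int     lsr_port;       /* I/O address of the Line Status Register */
            int     drain_flag;     /* sleep/wakeup channel for ttywait() */
    };

    static void
    siodrainpoll(void *arg)
    {
            struct com_s *com = arg;

            if ((inb(com->lsr_port) & LSR_TEMT) == 0) {
                    /* Not drained yet: check again in 10 ms (hz / 100 ticks). */
                    timeout(siodrainpoll, com, hz / 100);
                    return;
            }
            wakeup(&com->drain_flag);       /* output is completely empty */
    }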
succeeds. Writing an action now succeeds iff the handler isn't changed.
(POSIX allows attempts to change the handler either to be ignored or to
cause an error. Changing other parts of the action is allowed, except
that attempts to mask unmaskable signals are silently ignored as usual.)
Found by: NIST-PCTS
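For example, with these semantics the following userland calls behave
as described (an illustration of the rules above, not a regression
test):

    #include <signal.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct sigaction sa;

            /* Reading the action for an uncatchable signal succeeds. */
            if (sigaction(SIGKILL, NULL, &sa) == -1)
                    perror("read action");

            /*
             * Writing it back without changing the handler also succeeds;
             * the attempt to add SIGKILL to the mask is silently ignored.
             */
            sigaddset(&sa.sa_mask, SIGKILL);
            if (sigaction(SIGKILL, &sa, NULL) == -1)
                    perror("rewrite, handler unchanged");

            /* Changing the handler of an uncatchable signal fails. */
            sa.sa_handler = SIG_IGN;
            if (sigaction(SIGKILL, &sa, NULL) == -1)
                    perror("change handler");
            return (0);
    }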
registers.) Also clean up some namespace pollution, and remove
gcc-1 support (nothing really works with it anymore anyway.)
Submitted by: Bruce Evans <bde@freebsd.org> and me.
A BREAK now flushes the queues and generates a SIGINT. Previously,
this wasn't done if ISIG
was clear or the VINTR character was disabled, and it was done by
converting the BREAK to a VINTR character and sometimes bogusly echoing
this character.
Found by: NIST-PCTS
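A small illustration of the corrected behavior (a sketch of the POSIX
rule, with a hypothetical helper name; not the driver change itself):
with BRKINT set and IGNBRK clear, a received BREAK flushes the queues
and raises SIGINT even when ISIG is clear and VINTR is disabled.

    #include <termios.h>

    /* Hypothetical helper: arrange for BREAK to flush and interrupt. */
    int
    enable_break_interrupt(int fd)
    {
            struct termios t;

            if (tcgetattr(fd, &t) == -1)
                    return (-1);
            t.c_iflag |= BRKINT;            /* BREAK -> flush queues + SIGINT */
            t.c_iflag &= ~IGNBRK;
            t.c_lflag &= ~ISIG;             /* SIGINT from BREAK still arrives */
            return (tcsetattr(fd, TCSANOW, &t));
    }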
consistent stack frame in fastmove() so that only one new fault handler
is necessary.
Should be in 2.2. Harmless until the i586 versions are reenabled.
Per Wayne Scott of Intel, the old sequence took 20 cycles!!! on a P6.
Another nice side-benefit is that the kernel is about 3K smaller!!!
Submitted by: Wayne Scott <wscott@ichips.intel.com>