1. All the suggestions made earlier by Bruce: renaming some symbols,
stricter error checking, removing redundant code, etc.
2. The `psm' driver preserves the default counter resolution and
report rate, whatever they are after reset. (Based on reports and
suggestions from Nate and Rob Bolin).
3. The `psm' driver now does not check the so-called sync. bit in the
first byte of the data packet by default, so that the tapping feature
of the ALPS GlidePoint works (based on reports from Louis Mamakos). I
tested the code with an ALPS Desktop GlidePoint (M/N GP101) and found
no problem; tapping worked. It appears ALPS produces several models
of GlidePoint. I hope the other models are OK too.
The check code can still be activated by defining the PSM_CHECKSYNC
option in the config file; a minimal sketch of the check follows this
entry. (The bit checking slightly reduces, if not completely
eliminates, weird mouse behavior caused by unsynchronized mouse data
packets. It also helps us detect whether a mouse interrupt has ever
been lost. But, well, if there are devices which cannot be supported
this way...)
4. The `psm' driver does not include the protocol emulation code by
default. The code can still be compiled in if the PSM_EMULATION option
is specified in the config file. Louis Mamakos suggests the emulation
code is putting too much in the kernel, and `moused' works well.
I will think about this later and decide if the entire emulation
code should be removed.
5. And, of course, the fix in `scprobe()' from Bruce to cure the
UserConfig problem. My code in `kbdio.c' is slightly different from
his patch, but has the same effect. There is still a possibility that
`scprobe()' gets confused if, for whatever reason, the user holds
down a key for a very long time during the boot process. But we cannot
cope with everything, can we?
Submitted by: Kazutaka YOKOTA (yokota@zodiac.mech.utsunomiya-u.ac.jp)
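
For reference, the check that PSM_CHECKSYNC turns back on amounts to
testing bit 3 of the first byte of each three-byte PS/2 packet, which
the protocol keeps set in every properly aligned packet. The sketch
below only illustrates that test; the names and the exact mask are
made up here and are not taken from the driver.

    /*
     * Hypothetical illustration of the sync check that PSM_CHECKSYNC
     * enables.  In the standard PS/2 protocol, bit 3 (0x08) of the first
     * byte of each three-byte packet is always set; ALPS GlidePoint pads
     * apparently emit packets that fail this check when tapping, which is
     * why the check is now off by default.
     */
    #include <stdio.h>

    #define PSM_SYNC_BIT 0x08   /* always 1 in byte 0 of a standard packet */

    static int
    packet_in_sync(const unsigned char pkt[3])
    {
    #ifdef PSM_CHECKSYNC
        return (pkt[0] & PSM_SYNC_BIT) != 0;
    #else
        (void)pkt;
        return 1;               /* accept everything, as the driver now does */
    #endif
    }

    int
    main(void)
    {
        unsigned char good[3] = { 0x09, 0x02, 0xfe };  /* sync bit set */
        unsigned char tap[3]  = { 0x00, 0x00, 0x00 };  /* tap-style packet */

        printf("good packet: %s\n", packet_in_sync(good) ? "accepted" : "dropped");
        printf("tap packet:  %s\n", packet_in_sync(tap)  ? "accepted" : "dropped");
        return 0;
    }
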
Most of the standard utilities that depended on (or were broken in
a different way by) the old behaviour of interpreting "" as "."
were fixed a year or two ago. There is still a fairly harmless
bug in tar and a harmless bug in gzip. Tar apparently replaces
"/" by "" when it strips leading slashes.
decrease the size of buffer_map to approx 2/3 of what it used to be
(buffer_map can be smaller now.) The original commit of these changes
increased the size of buffer_map to the point where the system would
not boot on large systems -- now large systems with large caches will
have even fewer problems than before.
the sd & od drivers. There are also slight changes to fdisk & newfs
in order to cope with different sector sizes.
Currently sectors of size 512, 1024 & 2048 are supported, the only
restriction being in fdisk, which hunts for the sector size of the
device (a hypothetical probe of this kind is sketched after this
entry).
This is based on patches to od.c and the other system files by
John Gumb & Barry Scott, minor changes and the sd.c patches by
me.
There also exist some patches for the msdos filesystem code, but I
haven't been able to test those (yet).
John Gumb (john@talisker.demon.co.uk)
Barry Scott (barry@scottb.demon.co.uk)
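
The fdisk change above "hunts" for the sector size; the commit does
not show how, but the idea can be modelled as attempting a one-sector
read at each of the supported sizes and keeping the first that works.
Everything in this sketch (the probe function, the assumption that a
raw device rejects reads smaller than a sector) is hypothetical and
is not the actual fdisk code.

    /*
     * Hypothetical sketch of probing a device's sector size by trying a
     * one-sector read at each size the kernel now supports.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static const size_t candidates[] = { 512, 1024, 2048 };

    static ssize_t
    probe_sector_size(int fd)
    {
        char buf[2048];
        size_t i;

        for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
            if (lseek(fd, (off_t)0, SEEK_SET) == -1)
                return -1;
            /* A raw device typically rejects reads that are not a multiple
               of its sector size, so the first size that reads cleanly wins. */
            if (read(fd, buf, candidates[i]) == (ssize_t)candidates[i])
                return (ssize_t)candidates[i];
        }
        return -1;
    }

    int
    main(int argc, char **argv)
    {
        int fd;
        ssize_t secsize;

        if (argc != 2) {
            fprintf(stderr, "usage: %s /dev/rdevice\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd == -1) {
            perror("open");
            return 1;
        }
        secsize = probe_sector_size(fd);
        if (secsize == -1)
            fprintf(stderr, "could not determine sector size\n");
        else
            printf("sector size: %zd\n", secsize);
        close(fd);
        return 0;
    }
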
a) Removal of private typedefs tulip_uint*_t, use the standard u_int*_t types.
b) Change [Dd][Cc]21.4. to just 21.4.; it seems DEC has done this to all
of the drivers for all OS's. (Did they get in trouble with someone?)
[The few that remain can either not be eliminated, or are waiting for
additional driver functional changes that will remove them.]
c) Move some code from dc21040.h into the driver; later a whole block of that
code (and more) will move to devar.h, but for now this makes it easier
to study diffs.
d) Add a big bold comment to the README.de file about it not reflecting
reality anymore.
Note that these are all cosmetic changes; there should be no functional
change in the driver whatsoever. If _anyone_ spots a problem introduced
by this, please let me know ASAP!
scheme. Additionally, add the capability for checking for unexpected
kernel page faults. The maximum amount of kva space for buffers hasn't
been decreased from where it is, but it will now be possible to do so.
This scheme manages the kva space similar to the buffers themselves. If
there isn't enough kva space because of usage or fragmentation, buffers
will be reclaimed until a buffer allocation is successful. This scheme
should be very resistant to fragmentation problems until/if the LFS code
is fixed and uses the bogus buffer locking scheme -- but a 'fixed' LFS
is not likely to use such a scheme.
Now there should be NO problem allocating buffers up to MAXPHYS.
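
A toy model of the allocate/reclaim loop described above; all names
here are invented. The point is only the control flow: when carving
kva for a buffer's mapping fails, idle buffers are reclaimed
(releasing their mappings) and the allocation is retried until it
fits or nothing is left to reclaim.

    /*
     * Toy model of the strategy described above; this is not kernel code.
     */
    #include <stdio.h>

    #define KVA_TOTAL (64 * 1024)      /* pretend buffer kva arena */

    static size_t kva_used;

    /* Stand-in for allocating kva out of buffer_map. */
    static int
    kva_alloc(size_t size)
    {
        if (kva_used + size > KVA_TOTAL)
            return 0;                  /* no room left in the map */
        kva_used += size;
        return 1;
    }

    /* Stand-in for reclaiming one idle buffer and releasing its mapping. */
    static int
    reclaim_one_buffer(void)
    {
        size_t freed = 8 * 1024;       /* pretend each buffer mapped 8K */

        if (kva_used == 0)
            return 0;                  /* nothing left to reclaim */
        kva_used -= (kva_used < freed) ? kva_used : freed;
        return 1;
    }

    /* The pattern from the commit: retry the allocation, reclaiming as needed. */
    static int
    allocbuf_kva(size_t size)
    {
        while (!kva_alloc(size)) {
            if (!reclaim_one_buffer())
                return 0;              /* truly out of kva */
        }
        return 1;
    }

    int
    main(void)
    {
        size_t i;

        for (i = 0; i < 12; i++)
            printf("alloc %2zu: %s (used %zu)\n", i,
                allocbuf_kva(16 * 1024) ? "ok" : "failed", kva_used);
        return 0;
    }
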
_without_ using fork().
The problem with YPPROC_ALL is that it transmits an entire map through
a TCP pipe as the result of a single RPC call. First of all, this requires
certain hackery in the XDR filter. Second, if the map being sent is
large, the server can end up spending lots of time in the XDR filter
sending to just the one client, while requests for other clients will
go unanswered.
My original solution for this was to fork() the request into a child
process which terminates after the map has been transmitted (or the
transfer is interrupted due to an error). This leaves the parent free
to handle other requests. But this solution is kind of lame: fork()
is relatively expensive, and we have to keep a cap on the number of
child processes to keep from swamping the system.
What we do now is grab control of the service transport handle and XDR
handle from the RPC library and send the records one at a time ourselves
instead of letting the RPC library do it. We send a record, then go
back to the svc_run() loop and select() on the socket. If select() says
we can still write data, we send the next record. Then we call
svc_getreqset() and handle other RPCs and loop around again. This way,
we can handle other RPCs between records.
We manage multiple YPPROC_ALL requests using a circular queue. When a
request is done, we dequeue it and destroy the handle. We also tag
each request with a ttl which is decremented whenever we run the queue
and a handle isn't serviced. This lets us nuke requests that have sat
idle for too long (if we didn't do this, we might run out of socket
descriptors.)
Now all I have to do is come up with an async resolver, and ypserv
won't need to fork() at all. :)
Note: these changes should not go into 2.2 unless they get a very
thorough shakedown before the final cutoff date.
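
A toy model of the queue handling described above, with every name
invented. One record is sent per pass to each transfer whose socket
is writable; transfers that are not serviced have their ttl
decremented and are nuked when it hits zero, and the loop then
returns to the svc_run()/svc_getreqset() machinery for other RPCs.

    /*
     * Toy model of the non-forking YPPROC_ALL scheme; not the ypserv code.
     */
    #include <stdio.h>

    #define MAXXFR  4
    #define XFR_TTL 5

    struct xfr {
        int active;
        int records_left;     /* records still to send for this map */
        int ttl;              /* passes left before an idle transfer is nuked */
    };

    static struct xfr queue[MAXXFR];

    static void
    enqueue(int records)
    {
        int i;

        for (i = 0; i < MAXXFR; i++)
            if (!queue[i].active) {
                queue[i].active = 1;
                queue[i].records_left = records;
                queue[i].ttl = XFR_TTL;
                return;
            }
    }

    /*
     * One trip around the svc_run()-style loop.  "writable" stands in for
     * the answer select() would give for the transfer's socket.
     */
    static void
    run_queue(int pass)
    {
        int i, writable;

        for (i = 0; i < MAXXFR; i++) {
            if (!queue[i].active)
                continue;
            writable = (pass + i) % 3 != 0;    /* pretend select() result */
            if (writable) {
                queue[i].records_left--;       /* send exactly one record */
                queue[i].ttl = XFR_TTL;        /* assume ttl is refreshed on
                                                  progress (not stated) */
                if (queue[i].records_left == 0) {
                    printf("xfr %d: done\n", i);
                    queue[i].active = 0;       /* dequeue, destroy handle */
                }
            } else if (--queue[i].ttl == 0) {
                printf("xfr %d: idle too long, nuked\n", i);
                queue[i].active = 0;
            }
        }
        /* here the real server would call svc_getreqset() for other RPCs */
    }

    int
    main(void)
    {
        int pass;

        enqueue(4);
        enqueue(7);
        for (pass = 0; pass < 20; pass++)
            run_queue(pass);
        return 0;
    }
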
are always together with Framing Errors and they were incorrectly
treated as FE's and discarded.
Reorganized the BREAK/FE/PE tests.
Found by: NIST-PCTS
with sio devices (not perfectly, since there is no way to flush the tx
holding register on 8250-16450's. I'm not sure if resetting the fifos
flushes the tx shift register).
Reminded by: NIST-PCTS
is completely empty. There is no interrupt for output completion, so
poll for it every 10 ms after output is nearly complete. Now ttywait()
works right.
Reminded by: NIST-PCTS
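
The condition being polled for is the UART's transmitter-empty state,
reported on the 8250/16450/16550 by the TEMT bit (0x40) of the line
status register. The sketch below is a user-level illustration of the
10 ms poll, not the driver's code; read_lsr() is a made-up stand-in
for reading the register.

    /*
     * User-level illustration of the polling described above: there is no
     * "output finished" interrupt, so once output is nearly done, check the
     * line status register every 10 ms until the transmitter is truly empty.
     */
    #include <stdio.h>
    #include <unistd.h>

    #define LSR_TEMT 0x40       /* transmitter (holding + shift reg) empty */

    static unsigned char
    read_lsr(void)
    {
        static int polls;

        /* Pretend the shift register drains after a few polls. */
        return ++polls >= 3 ? LSR_TEMT : 0;
    }

    int
    main(void)
    {
        while ((read_lsr() & LSR_TEMT) == 0) {
            usleep(10 * 1000);  /* the driver rechecks on a 10 ms timeout */
            printf("still draining...\n");
        }
        printf("output complete, ttywait() may return\n");
        return 0;
    }
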
succeeds. Writing an action now succeeds iff the handler isn't changed.
(POSIX allows attempts to change the handler to be ignored or cause an
error. Changing other parts of the action is allowed (except attempts
to mask unmaskable signals are silently ignored as usual).)
Found by: NIST-PCTS
registers.) Also clean up some namespace pollution, and remove
gcc-1 support (nothing really works with it anymore anyway.)
Submitted by: Bruce Evans <bde@freebsd.org> and me.
the queues and generate a SIGINT. Previously, this wasn't done if ISIG
was clear or the VINTR character was disabled, and it was done by
converting the BREAK to a VINTR character and sometimes bogusly echoing
this character.
Found by: NIST-PCTS
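
For context, this is the standard termios behaviour: with IGNBRK
clear and BRKINT set, a received BREAK flushes the input and output
queues and sends SIGINT to the foreground process group, independent
of ISIG or the VINTR setting. A minimal user-level request for
exactly that behaviour:

    /*
     * Ask for the POSIX BREAK semantics described above: BRKINT set and
     * IGNBRK clear, so a BREAK flushes the queues and raises SIGINT.
     */
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct termios t;

        if (tcgetattr(STDIN_FILENO, &t) == -1) {
            perror("tcgetattr");
            return 1;
        }
        t.c_iflag &= ~IGNBRK;   /* do not ignore BREAK */
        t.c_iflag |= BRKINT;    /* BREAK flushes queues and raises SIGINT */
        if (tcsetattr(STDIN_FILENO, TCSANOW, &t) == -1) {
            perror("tcsetattr");
            return 1;
        }
        printf("BRKINT set; a BREAK on this terminal now raises SIGINT\n");
        return 0;
    }
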