from last time. Some people have pointed out that there were some odd
side-effects in the changes I made. Two things are different:
- sc_print_addr() will print 'foodev0:' (i.e. sd0:, st0:, cd0:, etc...)
if the device name is known. If it's not known, it'll use a longer
notation (see the sketch after this list). This shortens error messages
back to a sane length.
- Added a small function called sc_print_init() to set the sc_printing
flag so that sc_print_addr() will know that we want it to print a
linefeed. Used this in scsi_device_attach() to restore proper carriage
return printing behavior which I broke.
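Roughly, the new output logic is as follows (a sketch only; the structure
and field names here are made up, not the real scsi_link layout):

    /* Illustrative sketch; not the actual scsi_base code. */
    struct example_link {
        const char *dev_name;   /* e.g. "sd", "st", "cd"; NULL if unknown */
        int         dev_unit;   /* e.g. 0 for sd0 */
        int         bus, target, lun;
    };

    void
    example_print_addr(struct example_link *l)
    {
        if (l->dev_name != NULL)
            printf("%s%d: ", l->dev_name, l->dev_unit);   /* short: "sd0: " */
        else
            printf("scbus%d: target %d lun %d: ",         /* longer notation */
                   l->bus, l->target, l->lun);
    }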
Remaining bogons: the NCR SCSI driver prints out information, with its
own linefeeds, while the device-specific attach routine is running. This
breaks up the individual messages emitted by the subdriver modules and
causes at least one message to appear on a line by itself without a
device spec prefix. I'm not sure of the correct way to fix this, and
I don't have any NCR SCSI hardware to test with anyway.
There's probably more, but I gather that a rewrite of the SCSI subsystem
is pending anyway, so I'll leave the rest to Those Who Know More About
This Than I (tm).
complained so it cannot be entirely bad :-)
I include the email that probably explains it for people who already know:
> >Compiling with -O3 inlines functions. However the function that is being
> >inlined in makeinfo.c (add_word_args()) is a vararg function and must not be
> >inlined.
> >
> >The code in question is K&R style, and AFAIK, there is no way for the compiler
> >to determine that the function uses varargs. Either change the code to use
> >prototypes, or use stdarg, or add a directive to prevent inlining.
>
> Not declaring a varargs function as varargs before it is used gives
> undefined behaviour.
>
> However, in practice the bug is probably in FreeBSD's <varargs.h>, which
> doesn't use gcc's __builtin_next_arg(). gcc should notice that it is
> used and not inline functions that have it. <stdarg.h> uses it, but I
> think there's another gcc builtin that it should be using.
Patch attached. The ellipsis causes gcc to flag this as a varargs function,
and the name "__builtin_va_alist" is special cased in gcc to hide the last
argument in the arglist.
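For those who have never seen the old interface, here is a minimal sketch
of a K&R-style <varargs.h> function (the function itself is made up for
illustration; only its shape matters):

    #include <varargs.h>
    #include <stdio.h>

    /* Old-style (K&R) varargs: nothing in the parameter list says
     * "variadic", so gcc at -O3 could wrongly treat this as an
     * inlining candidate. */
    void
    report(va_alist)
         va_dcl
    {
        va_list ap;
        char *format;

        va_start(ap);                   /* <varargs.h> va_start: no 2nd arg */
        format = va_arg(ap, char *);    /* even the first arg comes from ap */
        vprintf(format, ap);
        va_end(ap);
    }

    /* caller: report("%d warnings\n", 3); */

With the patched <varargs.h>, the expansion ends in an ellipsis, which
makes gcc flag the function as varargs (and so never inline it), and the
hidden parameter is named __builtin_va_alist, which gcc special-cases to
keep it out of the argument list.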
Reviewed by: bde & phk
Submitted by: jlemon@americantv.com (Jonathan Lemon)
The PS/2 mouse device responds to a reset command with a sequence of
ACK(fa), RESULT(aa) and ID(00). Most PS/2 mice immediately return
ACK but spend some time before sending RESULT. The Armada takes time
before sending ACK; an extra delay is necessary before the call that
reads the ACK.
The problem was reported in comp.unix.bsd.freebsd.misc and the patch
was tested by the reporter. No PR was filed, by the way.
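A rough sketch of the resulting reset handshake (the helper names and the
delay value below are stand-ins, not the real psm driver code):

    extern void write_aux_byte(int b); /* send a byte to the aux (mouse) port */
    extern int  read_aux_byte(void);   /* read the next byte from the port    */
    extern void DELAY(int usec);

    #define PSM_ACK     0xfa
    #define PSM_RESULT  0xaa            /* self-test passed */
    #define PSM_ID      0x00            /* mouse device ID  */

    static int
    reset_mouse(void)
    {
        write_aux_byte(0xff);           /* RESET command */
        DELAY(1000);                    /* extra delay: the Armada is slow
                                         * even to send the ACK */
        if (read_aux_byte() != PSM_ACK)
            return (-1);
        if (read_aux_byte() != PSM_RESULT)  /* most mice pause here instead */
            return (-1);
        if (read_aux_byte() != PSM_ID)
            return (-1);
        return (0);
    }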
read-mode access to CD-ROM media in the worm(4) driver. No whistles
and bells yet, like all the CDIO* commands, but at least a start.
In order to do this, I had to slightly rearrange the semantics of an
open(2) on the worm driver: opening it with O_NONBLOCK set now means
that no actual I/O operations are intended and only ioctls are to be
processed. This mode is used by wormcontrol(8) to prepare a track
and/or session.
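From userland the new semantics look roughly like this (the device path
and the ioctl name below are placeholders, not the actual worm(4)
interface):

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Placeholder ioctl for illustration only. */
    #define WORMIO_PREP_TRACK   _IO('W', 42)

    int
    main(void)
    {
        /* O_NONBLOCK: no reads or writes intended, ioctls only. */
        int fd = open("/dev/rworm0", O_RDONLY | O_NONBLOCK);

        if (fd < 0) {
            perror("open");
            return (1);
        }
        if (ioctl(fd, WORMIO_PREP_TRACK, NULL) < 0)
            perror("ioctl");
        close(fd);
        return (0);
    }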
I have only been able to test this on a 2.2-GAMMA system so far, and
only the !DEVFS part has been tested yet. Also, I have only done a dummy
burn so far, but I wouldn't expect many other surprises. Report bugs to
me ASAP; if there's reasonable demand and I hear no objections, I
might consider merging it into the 2.2 branch as well.
affect programs that sit on top of divert(4) sockets. The
multicast routing code already unconditionally zeros the sum
before recalculating.
Any code that unconditionally sums a packet without first zeroing
the sum (assuming that it's already zeroed) will break. No such
code seems to exist.
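The convention being relied on is the usual two-liner (a fragment of the
idiom, not a quote of the changed code; ip and m are the usual struct ip
pointer and mbuf):

    ip->ip_sum = 0;                           /* never trust the stored sum */
    ip->ip_sum = in_cksum(m, ip->ip_hl << 2); /* recompute over the header  */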
This parameter is intended to allow new kernels to work with old LKM binaries,
provided the revision ID is incremented whenever the PCI LKM interface is
changed. The revision ID does not at all protect against changes in data
structures accessed by the driver.
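The intended use is a straight revision match at registration time; a
sketch of the idea with made-up names (this is not the actual LKM
interface):

    #include <sys/errno.h>

    #define PCI_LKM_REVISION  1         /* bumped on every interface change */

    struct pci_lkm_entry {
        int revision;                  /* revision the LKM was built against */
        /* ... driver entry points ... */
    };

    static int
    pci_lkm_register(struct pci_lkm_entry *e)
    {
        /* Reject modules built against a different interface revision.
         * As noted above, this offers no protection against changed
         * layouts of data structures the driver accesses directly. */
        if (e->revision != PCI_LKM_REVISION)
            return (EINVAL);
        /* ... hook the driver into the PCI probe/attach lists ... */
        return (0);
    }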
Disabled the DMA byte counters - I had it this way originally and this is
the recommended setting.
Set crscdt to CRS only (0) since this is what it should be for an MII PHY.
Also fixed some comments.
to search the QINFIFO to remove any possible command that is waiting;
otherwise our abort request may be held up, still waiting for the
first command to complete.
Since the I/O port addresses of most devices are not contiguous, the
return value of a probe routine is not very useful for detecting
conflicts. The return value was too big, and the kernel sometimes
detected a conflict even though the two devices did not actually
conflict in their I/O addresses.
Suggested by: Chiharu Shibata <chi@rd.njk.co.jp>
Fix a few panics during error recovery:
1) Stupid mistake in the "no SCB match handler" where I was using the wrong
variable (busy_scbid instead of scb_index).
2) Unbusy the target of an abort request if the command we are trying to
abort is an untagged transaction. If we don't, we get a fatal NO_MATCH_BUSY
condition which "should never happen".
3) When an abort completes, turn off ahc->in_timeout or else the next timeout
will hit the protective "scb timesout again" panic.
4) Fix a typo that caused the requeued "abort" SCB to have its TAG_ENB and
disconnect bits cleared (missing ~), which made devices complain
about overlapped commands.
Be sure to turn off the unexpected busfree interrupt after we do a bus
reset since we are expecting the bus to go free in that case.
Return XS_TIMEOUT instead of XS_DRIVERSTUFFUP in certain scenarios. XS_TIMEOUT
allows for retries, XS_DRIVERSTUFFUP does not.
Allow commands with SDTR and WDTR negotiation to be tagged. The SCSI II spec
says that you probably should not do this for fear of hitting bogus devices.
The driver did this in the past for almost two years without any problem,
and not doing it causes problems during error recovery to a tag-capable
device: since the number of openings is higher than two, we'll start
sending it tagged commands, causing "overlapped commands attempted" type
errors. The real fix needs to happen in the generic SCSI layer, which can
efficiently limit the number and type of transactions to a device during
error recovery.
Give ourselves at least 100ms to perform a request sense instead of relying
on the original timeout to be long enough to complete this new command as
well as the one that generated the condition.
Removed some redundant code.
host DMAs. The additional test to ensure that the DMA has stopped is also
unnecessary since we've already waited for the DMA to complete.
Update my copyright for the new year.
(since T_DIRECT just incidentally happens to be equal to 0). This causes
more harm than good. Instead, get it at the uk driver.
Reviewed by: obrien@NUXI.com (David O'Brien)
set it in the first place, independent of whether sin->sin_port
is set.
The result is that diverted packets that are being forwarded
will be diverted once and only once on the way in (ip_input())
and again, once and only once on the way out (ip_output()) -
twice in total. ICMP packets that don't contain a port will
now also be diverted.
with <= 100 usec between each character arrival time. This didn't happen
until rev.1.75 of clock.c because DELAY(100) used to delay for closer to
80 usec than 100 usec, and the minimum time between character arrivals is
87.8 usec at the maximum supported speed of 115200 bps 8N1.
Clear DCD timestamp flag on close (the input timestamp flag is already
cleared).
key "print scrn".
It used to stop at the first non-open vty; now it skips the non-open
ones and thereby enables one to cycle around all open vtys by pressing
"print scrn".
I have code to calibrate the overhead fairly accurately, but there
is little point in using it since it is most accurate on machines
where an estimate of 0 works well. On slow machines, the accuracy
of DELAY() has a large variance since it is limited by the resolution
of getit() even if the initial delay is calibrated perfectly.
Use fixed point and long longs to speed up scaling in DELAY().
The old method slowed down a lot when the frequency became variable.
Assume the default frequency for short delays so that the fixed
point calculation can be exact.
Fast scaling is only important for small delays. Scaling is done
after looking at the counter and outside the loop, so it doesn't
decrease accuracy or resolution provided it completes before the
delay is up. The comment in the code is still confused about this.
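In outline, the conversion from microseconds to timer ticks now looks
something like this (the constants are mine, derived from the default
1193182 Hz i8254 clock, and need not match clock.c exactly):

    /*
     * Short delays: assume the default timer frequency and use fixed
     * point.  1193182 * 2^15 / 10^6 is about 39098; rounding the constant
     * up keeps short delays from coming up short, and the multiply stays
     * within 32 bits.  Long delays: do the exact 64-bit calculation with
     * the (possibly retuned) timer_freq.
     */
    #define SHORT_DELAY_MAX 20000       /* usec; keeps usec * 39099 < 2^31 */

    static unsigned int
    usec_to_ticks(unsigned int usec, unsigned int timer_freq)
    {
        if (usec <= SHORT_DELAY_MAX)
            return ((usec * 39099 + (1 << 15) - 1) >> 15);
        return (((unsigned long long)usec * timer_freq + 999999) / 1000000);
    }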
- don't uselessly initialize the fifo "DMA" bit at attach time.
- initialize the fifo "DMA" bit at open time (see the sketch after this
list). Without this, the device interrupts for every character received,
reducing input performance to that of an 8250.
- don't uselessly initialize the fifo trigger level to 8 (scaled to
256) at attach time.
- don't scale the fifo trigger level to 512 bytes. The driver's pseudo-
dma buffer has size 256, so it can't handle bursts of size 512 or 256.
It should be able to handle the second lowest ftl (2 scaled to 64).
- don't reset the fifos in siostop(). Reset triggers a hardware bug
involving wedging of the output interrupt bit. This workaround
unfortunately requires ESP support to be configured.
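For reference, the FIFO programming discussed above looks roughly like
this (the FCR bit layout is the standard 16550 one; the fragment is
illustrative, not a quote of sio.c):

    /* Standard 16550 FIFO Control Register (FCR) bits. */
    #define FIFO_ENABLE     0x01
    #define FIFO_RCV_RST    0x02
    #define FIFO_XMT_RST    0x04
    #define FIFO_DMA_MODE   0x08        /* the "DMA" bit discussed above */
    #define FIFO_TRIGGER_8  0x80        /* RX trigger level of 8 bytes   */

    /*
     * At open time: enable the FIFOs, set the "DMA" bit (per the notes
     * above, without it the device interrupts for every received
     * character), and keep the trigger level small enough that a burst
     * still fits in the driver's 256-byte pseudo-DMA buffer.  outb(),
     * iobase and com_fifo stand in for the driver's usual port-I/O
     * helper and register offset.
     */
    outb(iobase + com_fifo, FIFO_ENABLE | FIFO_DMA_MODE | FIFO_TRIGGER_8);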