This mod makes sure that the Natoma chipset is set into the correct mode. In
the case of my P6DNF, when booting a UP kernel, I see a substantial improvement
in the latency of certain operations. It appears that the cache hit
latency is curiously improved the most, per lat_mem_rd.
found by taking my HP800CT apart, perusing HP's (very good!) service
manual, and inferring from a bad GIF file I found in Finland.
Sigh... But it's a nice machine :-)
certain variants of the NCR chip from FE_CACHE_SET: FE_CLSE (enable
cache-line size register) and FE_ERMP (enable read-multiple). They
will be re-enabled if a fix for the underlying problem (a restriction
in the memory-to-memory move logic of some chips) has been implemented.
I changed a few bits here and there, mainly renaming wd82371.c
to ide_pci.c now that it's supposed to handle different chipsets.
It runs on my P6 Natoma board with two Maxtor drives, and also
on a Fujitsu machine I have at work with an Opti chipset and
a Quantum drive.
Submitted by: cgull@smoke.marlboro.vt.us <John Hood>
Original readme:
*** WARNING ***
This code has so far been tested on exactly one motherboard with two
identical drives known for their good DMA support.
This code, in the right circumstances, could corrupt data subtly,
silently, and invisibly, in much the same way that older PCI IDE
controllers do. It's ALPHA-quality code; there are one or two major
gaps in my understanding of PCI IDE still. Don't use this code on any
system with data that you care about; it's only good for hack boxes.
Expect that any data may be silently and randomly corrupted at any
moment. It's a disk driver. It has bugs. Disk drivers with bugs
munch data. It's a fact of life.
I also *STRONGLY* recommend getting a copy of your chipset's manual
and the ATA-2 or ATA-3 spec and making sure that timing modes on your
disk drives and IDE controller are being set up correctly by the BIOS--
because the driver makes only the lamest of attempts to do this just
now.
*** END WARNING ***
that said, i happen to think the code is working pretty well...
WHAT IT DOES:
this code adds support to the wd driver for bus mastering PCI IDE
controllers that follow the SFF-8038 standard. (all the bus mastering
PCI IDE controllers i've seen so far do follow this standard.) it
should provide busmastering on nearly any current P5 or P6 chipset,
specifically including any Intel chipset using one of the PIIX south
bridges-- this includes the '430FX, '430VX, '430HX, '430TX, '440LX,
and (i think) the Orion '450GX chipsets. specific support is also
included for the VIA Apollo VP-1 chipset, as it appears in the
relabeled "HXPro" incarnation seen on cheap US$70 taiwanese
motherboards (that's what's in my development machine). it works out
of the box on controllers that do DMA mode2; if my understanding is
correct, it'll probably work on Ultra-DMA33 controllers as well.
it'll probably work on busmastering IDE controllers in PCI slots, too,
but this is an area i am less sure about.
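for readers without the spec handy, here's a rough sketch of the
register layout such a controller presents; the names are mine rather
than the driver's, so check the spec before trusting the details:

  /*
   * my sketch of the SFF-8038i programming model, not the names used in
   * wd82371.c.  each IDE channel gets an 8-byte register block in PCI
   * i/o space (primary channel at offset 0x00, secondary at 0x08).
   */
  #include <sys/types.h>

  #define BMI_COMMAND   0x00    /* bit 0: start/stop the engine,
                                   bit 3: 1 = write to memory (a disk read) */
  #define BMI_STATUS    0x02    /* bit 0: active, bit 1: DMA error,
                                   bit 2: interrupt seen */
  #define BMI_PRDTABLE  0x04    /* physical address of the PRD table,
                                   dword aligned */

  /* one physical region descriptor (scatter/gather entry) */
  struct prd_entry {
          u_int32_t base;       /* physical buffer address, word aligned */
          u_int16_t count;      /* byte count; 0 means 64KB; the region
                                   must not cross a 64KB boundary */
          u_int16_t flags;      /* bit 15 marks the last entry */
  };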
it cuts CPU usage considerably and improves drive performance
slightly. usable numbers are difficult to come by with existing
benchmark tools, but experimentation on my K5-P90 system, with VIA
VP-1 chipset and Quantum Fireball 1080 drives, shows that disk i/o on
raw partitions imposes perhaps 5% cpu load. cpu load during
filesystem i/o drops a lot, from near 100% to anywhere between 30% and
70%. (the improvement may not be as large on an Intel chipset; from
what i can tell, the VIA VP-1 may not be very efficient with PCI I/O.)
disk performance improves by 5% or 10% with these drives.
real, visible, end-user performance improvement on a single user
machine is about nil. :) a kernel compile was sped up by a whole three
seconds. it *does* feel a bit better-behaved when the system is
swapping heavily, but a better disk driver is not the fix for *that*
problem.
THE CODE:
this code is a patch to wd.c and wd82371.c, and associated header
files. it should be considered alpha code; more work needs to be
done.
wd.c has fairly clean patches to add calls to busmaster code, as
implemented in wd82371.c and potentially elsewhere (one could imagine,
say, a Mac having a different DMA controller).
wd82371.c has been considerably reworked: the wddma interface that it
presents has been changed (expect more changes), many bugs have been
fixed, a new internal interface has been added for supporting
different chipsets, and the PCI probe has been considerably extended.
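to give a feel for the shape of that split, a chipset-independent hook
table might look something like the sketch below; this is only my
illustration of the concept, not the actual wddma declarations, which
are still changing:

  /*
   * hypothetical sketch of a per-chipset busmaster hook table; the real
   * wddma interface in wd82371.c uses different names and details.
   */
  #include <sys/types.h>

  struct wddma_hooks {
          /* is there a usable DMA engine behind this controller? */
          int     (*candma)(int iobase, int unit);
          /* build the PRD list for one transfer; fail if the buffer
             can't be expressed (misaligned, too scattered, ...) */
          int     (*dmasetup)(void *softc, char *buf, u_long count,
                              int is_read);
          /* start the engine once the drive has accepted the command */
          void    (*dmastart)(void *softc);
          /* stop the engine and return its error/status bits */
          int     (*dmadone)(void *softc);
  };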
the interface between wd82371.c and wd.c is still fairly clean, but
i'm not sure it's in the right place. there's a mess of issues around
ATA/ATAPI that need to be sorted out, including ATAPI support, CD-ROM
support, tape support, LS-120/Zip support, SFF-8038i DMA, UltraDMA,
PCI IDE controllers, bus probes, buggy controllers, controller timing
setup, drive timing setup, world peace and kitchen sinks. whatever
happens with all this and however it gets partitioned, it is fairly
clear that wd.c needs some significant rework-- probably a complete
rewrite.
timing setup on disk controllers is something i've entirely punted on.
on my development machine, it appears that the BIOS does at least some
of the necessary timing setup. i chose to restrict operation to
drives that are already configured for Mode4 PIO and Mode2 multiword
DMA, since the timing is essentially the same and many if not most
chipsets use the same control registers for DMA and PIO timing.
does anybody *know* whether BIOSes are required to do timing setup for
DMA modes on drives under their care?
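concretely, the test boils down to looking at a couple of words of the
IDENTIFY DEVICE data, roughly as below; this is my sketch of the ATA-2
layout, not the driver's actual code:

  #include <sys/types.h>

  /*
   * sketch of the mode check described above, per my reading of ATA-2;
   * wp points at the 256-word IDENTIFY DEVICE block.
   */
  static int
  drive_modes_ok(u_int16_t *wp)
  {
          int pio4      = (wp[64] & 0x0002) != 0; /* word 64 bit 1: PIO mode 4 supported */
          int mwdma2    = (wp[63] & 0x0004) != 0; /* word 63 bit 2: multiword DMA 2 supported */
          int mwdma2_on = (wp[63] & 0x0400) != 0; /* word 63 bit 10: multiword DMA 2 selected */

          return (pio4 && mwdma2 && mwdma2_on);
  }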
error recovery is probably weak. early on in development, i was
getting drive errors induced by bugs in the driver; i used these to
flush out the worst of the bugs in the driver's error handling, but
problems may remain. i haven't got a drive with bad sectors i can
watch the driver flail on.
complaints about how wd82371.c has been reindented will be ignored
until the FreeBSD project has a real style policy, there is a
mechanism for individual authors to match it (indent flags or an emacs
c-mode or whatever), and it is enforced. if i'm going to use a source
style i don't like, it would help if i could figure out what it *is*
(style(9) is about half of a policy), and a way to reasonably
duplicate it. i ended up wasting a while trying to figure out what
the right thing to do was before deciding reformatting the whole thing
was the worst possible thing to do, except for all the other
possibilities.
i have maintained wd.c's indentation; that was not too hard,
fortunately.
TO INSTALL:
my dev box is freebsd 2.2.2 release. fortunately, wd.c is a living
fossil, and has diverged very little recently. included in this
tarball are:
- 'otherdiffs', a patch file for all files except wd82371.c
- my edited wd82371.c
- 'wd82371.c-diff-exact', a patch file against the 2.2.2 dist of wd82371.c
- 'wd82371.c-diff-whitespace', another wd82371.c patch file, generated
  with diff -b (ignore whitespace)
most of you not using 2.2.2 will probably have to use this last
patchfile with 'patch --ignore-whitespace'. apply from the
kernel source tree root. as far as i can tell, this should apply
cleanly on anything from -current back to 2.2.2 and probably back to
2.2.0. you, the kernel hacker, can figure out what to do from here.
if you need more specific directions, you probably should not be
experimenting with this code yet.
to enable DMA support, set flag 0x2000 for that drive in your config
file or in userconfig, as you would the 32-bit-PIO flag. the driver
will then turn on DMA support if your drive and controller pass its
tests. it's a bit picky, probably. on discovering DMA mode failures
or disk errors or transfers that the DMA controller can't deal with,
the driver will fall back to PIO, so it is wise to set up the flags as
if PIO were still important.
'controller wdc0 at isa? port "IO_WD1" bio irq 14 flags 0xa0ffa0ff
vector wdintr' should work with nearly any PCI IDE controller.
i would *strongly* suggest booting single-user at first, and thrashing
the drive a bit while it's still mounted read-only. this should be
fairly safe, even if the driver goes completely out to lunch. it
might save you a reinstall.
one way to tell whether the driver is really using DMA is to check the
interrupt count during disk i/o with vmstat; DMA mode will add an
extremely low number of interrupts, as compared to even multi-sector
PIO.
boot -v will give you a copious register dump of timing-related info
on Intel and VIAtech chipsets, as well as PIO/DMA mode information on
all hard drives. refer to your ATA and chipset documentation to
interpret these.
WHAT I'D LIKE FROM YOU and THINGS TO TEST:
reports. success reports, failure reports, any kind of reports. :)
send them to cgull+ide@smoke.marlboro.vt.us.
i'd also like to see the kernel messages from various BIOSes (boot -v;
dmesg), along with info on the motherboard and BIOS on that machine.
i'm especially interested in reports on how this code works on the
various Intel chipsets, and whether the register dump works
correctly. i'm also interested in hearing about other chipsets.
i'm especially interested in hearing success/failure reports for PCI
IDE controllers on cards, such as CMD's or Promise's new busmastering
IDE controllers.
UltraDMA-33 reports.
interoperation with ATAPI peripherals-- FreeBSD doesn't work with my
old Hitachi IDE CDROM, so i can't tell if I've broken anything. :)
i'd especially like to hear how the drive copes in DMA operation on
drives with bad sectors. i haven't been able to find any such yet.
success/failure reports on older IDE drives with early support for DMA
modes-- those introduced between 1.5 and 3 years ago, typically
ranging from perhaps 400MB to 1.6GB.
failure reports on operation with more than one drive would be
appreciated. the driver was developed with two drives on one
controller, the worst-case situation, and has been tested with one
drive on each controller, but you never know...
any reports of messages from the driver during normal operation,
especially "reverting to PIO mode", or "dmaverify odd vaddr or length"
(the DMA controller is strongly halfword oriented, and i'm curious to
know if any FreeBSD usage actually needs misaligned transfers).
performance reports. beware that bonnie's CPU usage reporting is
useless for IDE drives; the best test i've found has been to run a
program that runs a spin loop at an idle priority and reports how many
iterations it manages, and even that sometimes produces numbers i
don't believe. performance reports of multi-drive operation are
especially interesting; my system cannot sustain full throughput on
two drives on separate controllers, but that may just be a lame
motherboard.
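for reference, the sort of load meter i mean is just a throwaway along
the lines below; it isn't part of the driver, and it uses nice +20 as a
stand-in for a true idle priority:

  /*
   * crude cpu-idle meter: spin at the lowest conventional priority and
   * report iterations per second; fewer iterations means the disk i/o
   * is eating more cpu.  numbers are only comparable on one machine.
   */
  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/time.h>
  #include <sys/resource.h>

  static volatile sig_atomic_t tick;

  static void
  onalrm(int sig)
  {
          (void)sig;
          tick = 1;
  }

  int
  main(void)
  {
          unsigned long count;

          if (setpriority(PRIO_PROCESS, 0, PRIO_MAX) == -1)
                  perror("setpriority");
          signal(SIGALRM, onalrm);

          for (;;) {
                  tick = 0;
                  count = 0;
                  alarm(1);               /* measure one second at a time */
                  while (!tick)
                          count++;
                  printf("%lu iterations/sec\n", count);
                  fflush(stdout);
          }
  }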
THINGS I'M STILL MISSING CLUE ON:
* who's responsible for configuring DMA timing modes on IDE drives?
the BIOS or the driver?
* is there a spec for dealing with Ultra-DMA extensions?
* are there any chipsets or controllers with bugs relating to DMA transfers that
should be blacklisted?
* are there any ATA interfaces that use some other kind of DMA
controller in conjunction with standard ATA protocol?
FINAL NOTE:
after having looked at the ATA-3 spec, all i can say is, "it's ugly".
*especially* electrically. the IDE bus is best modeled as an
unterminated transmission line, these days.
for maximum reliability, keep your IDE cables as short as possible and
as few as possible. from what i can tell, most current chipsets have
both IDE ports wired into a single bus, to a greater or lesser
degree. using two cables means you double the length of this bus.
SCSI may have its warts, but at least the basic analog design of the
bus is still somewhat reasonable. IDE passed beyond the veil two
years ago.
--John Hood, cgull@smoke.marlboro.vt.us
changes relative to the 2.2 compatible version are include-file
related, the new multicast interface (!), and the new PCI interface.
This should work "as-is" but has not been tested (I have not been able
to get a dc21x4x based card for testing).
- OVERRIDE_TUNER: allows you to manually choose the tuner type for those
cards that fail to probe properly. See source for legal
values.
- OVERRIDE_DBX: allows you to manually choose DBX or NO DBX for those
cards that fail to probe properly.
0 == no DBX circuit present, 1 == DBX circuit present.
should work with no driver changes, though not all features are currently
used.
Remove code that was conditional on NEW_SCSICONF not being defined. This
was temporary code that at one time was excluded correctly, until the new
scsiconf became the default and NEW_SCSICONF was no longer specified.
Add support for quirks defined in scsiconf.c. For now only the HP3724/5
needs an entry, since that drive can't be used with tags.
The device probe of a host-to-PCI bridge may modify that value, based on
its knowledge of device-specific registers. This makes the Intel XXpress
work, as verified by: Terje Marthinussen <terjem@cc.uit.no>.
Return failure if the enable bit corresponding to the map type has not
been set in the command register. This feature was requested by Justin
Gibbs, who pointed out that some early PCI to PCI bridges do not correctly
support memory windows (I assume because of the risk of deadlocks that
have been taken care of in the PCI 2.2 spec) and that some BIOSes clear
the memory address decode enable bit in the command register of a PCI
device if they find it behind such a bridge.
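In other words, the check amounts to roughly the following; this is a
simplified sketch, not the literal committed code:

  /*
   * Sketch of the enable bit check: a base address register with bit 0
   * set decodes I/O space, otherwise memory space; the matching enable
   * bit lives in the PCI command register.
   */
  #define PCI_COMMAND_IO_ENABLE   0x0001  /* command register bit 0 */
  #define PCI_COMMAND_MEM_ENABLE  0x0002  /* command register bit 1 */

  static int
  map_enabled(unsigned int map_reg, unsigned int command_reg)
  {
          if (map_reg & 1)        /* I/O map */
                  return ((command_reg & PCI_COMMAND_IO_ENABLE) != 0);
          return ((command_reg & PCI_COMMAND_MEM_ENABLE) != 0);
  }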
1) Stop at the first map register that contains a zero value.
2) When testing for the map size, work up from low values, since
   this works around a bug in some BusLogic SCSI cards, which have
   the 16 upper port base address bits hardwired to zero (see the
   sketch below).
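The usual size probe writes all ones to a map register and derives the
size from what reads back; point 2 amounts to taking the lowest writable
bit as the size rather than trusting the upper bits, roughly as in this
sketch (not the committed code):

  #include <sys/types.h>

  /*
   * readback is what a base address register returns after 0xffffffff
   * was written to it, with the low type bits (1 bit for I/O maps,
   * 4 bits for memory maps) already masked off.
   */
  static u_int32_t
  map_size(u_int32_t readback)
  {
          u_int32_t bit;

          for (bit = 1; bit != 0; bit <<= 1)
                  if (readback & bit)
                          return (bit);   /* lowest writable bit == size */
          return (0);                     /* nothing writable: no map */
  }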
The config register dump printed in the bootverbose case has
been slightly rearranged.
reality. There will be a new call interface, but for now the file
pci_compat.c (which is to be deleted, after all drivers are converted)
provides an emulation of the old PCI bus driver functions. The only
change that might be visible to drivers is that the type pcici_t
(which had been meant to be just a handle, whose exact definition
should not be relied on) has been converted into a pcicfgregs *.
The Tekram AMD SCSI driver bogusly relied on the definition of pcici_t
and has been converted to just call the PCI driver's functions to access
configuration space registers, instead of inventing its own ...
This code is by no means complete, but is assumed to be fully operational,
and brings the official code base more in line with my development code.
A new generic device descriptor data type has to be agreed on. The PCI
code will then use that data type to provide new functionality:
1) userconfig support
2) "wired" PCI devices
3) conflicts checking against ISA/EISA
4) maps will depend on the command register enable bits
5) PCI to Anything bridges can be defined as devices,
and are probed like any "standard" PCI device.
The following features are currently missing, but will be added back,
soon:
1) unknown device probe message
2) suppression of "mirrored" devices caused by ancient, broken chip-sets
This code relies on generic shared interrupt support just committed to
kern_intr.c (plus the modifications of isa.c and isa_device.h).
Added [SR]RGBMASKs ioctl for byte swapping.
1.16 4/20/97 Randall Hopper <rhh@ct.picker.com>
Generalized RGBMASK ioctls for general pixel
format setting [SG]ACTPIXFMT, and added query API
to return driver-supported pix fmts GSUPPIXFMT.
1.17 4/21/97 hasty@rah.star-gate.com
Clipping support added.
1.18 4/23/97 Clean up after failed CAP_SINGLEs where the bt
interrupt isn't delivered, and fixed CAP_SINGLEs
for ODD_ONLY fields.
Submitted by: individuals in above log messages.
There are various options documented in i386/conf/LINT; more is to
come over the next few days.
The kernel should run pretty much "as before" without the options to
activate SMP mode.
There are a handful of known "loose ends" that need to be fixed, but
fixing them has been put off since the SMP kernel is in a moderately
good condition at the moment.
This commit is the result of the tinkering and testing over the last 14
months by many people. A special thanks to Steve Passe for implementing
the APIC code!
large enough to contain the ethernet header. There appears to be a
condition where the card can return "0" in some failure cases, and this
causes bad things to happen (a panic).
type mismatches. There was no problem in practice (at least on 386's).
Removed NetBSD-related TIMEOUT macro. NetBSD uses the same BSD4.4Lite
timeout interface as FreeBSD. As a concession to portability, declare
the timeout function without using the FreeBSD timeout_t typedef.
This patch fixes the problem of vic capturing only an even or odd frame,
plus my earlier patch for missing frames with resolutions higher than
320x240 in rgb mode.
The yuv422 patch introduces a minor bug in that a green line appears at
the bottom of the captured window. There is no easy workaround for this
right now.
Reviewed by: various bt848 hackers
Submitted by: Amancio Hasty <hasty@rah.star-gate.com>
Randall Hopper <rhh@ct.picker.com> GHUE/GBRIGHT bug
Louis Mamakos made a new bt848 struct, including massive changes to the entire
body of code, replacing array offsets with struct members.
Randall Hopper added fixes for BT848_GHUE & BT848_GBRIG.
I (fsmp):
added polled hardware i2c routines,
removed all existing software i2c routines.
added eeprom support.
form `tv = time'. Use a new function gettime(). The current version
just forces atomicity without fixing precision or efficiency bugs.
Simplified some related valid accesses by using the central function.
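For review purposes, the whole of gettime() amounts to something like
the following; this is a paraphrase of the idea, not necessarily the
committed text:

  #include <sys/time.h>

  extern struct timeval time;     /* the kernel's global time-of-day */

  /*
   * Read `time' at splclock() so a clock interrupt cannot update it
   * between reading tv_sec and tv_usec; splclock()/splx() are the
   * kernel's usual interrupt priority primitives.  Atomicity only;
   * precision and efficiency are unchanged.
   */
  void
  gettime(struct timeval *tvp)
  {
          int s;

          s = splclock();
          *tvp = time;
          splx(s);
  }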
Michael submitted code to activate the audio muxes.
fsmp:
extended those changes for different boards.
auto-detection of board types.
auto-detection of tuner types.
auto-detection of stereo option.
Fixed a bug in fxp_mdi_write - a hex number was missing a preceding 0x,
which caused the routine not to wait for a PHY write to complete.
Added support for link0, link1, and link2 flags to toggle auto-
negotiation, 10/100, and half/full duplex:
link0	disable auto-negotiation
When link0 is set, these flags then have meaning:
-link1	10Mbps
 link1	100Mbps
-link2	half duplex
 link2	full duplex
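For example, assuming the standard ifconfig link-flag syntax,
'ifconfig fxp0 link0 link1' (with link2 left clear) should force
100Mbps half duplex.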
...needs a manual page.
I broke the cable tuning with my 'TEST_A' code. Remove TEST_A define
till I finish this change for both tuning modes. Note that this
will effectively break the new TVTUNER_SETFREQ/TVTUNER_GETFREQ ioctl()s.
These aren't used by anyone but me yet (attempt to provide full resolution
fine tuning for "fringe" stations), so it should be no problem.
written:
1) Full duplex mode is now supported (and works!)
2) The 10Mbps-only PCI Pro/10 should now work (untested, however)
Thanks to Justin Gibbs for providing a PCI bus analyzer trace while the
Intel Windows driver was configuring the board...this made it possible
to figure out the mystery bit that I wasn't setting in the PHY for full
duplex to work.
The CMD640B chip can't perform overlapping commands on both of its channels.
To enable the CMD640B work-around, the kernel must be compiled with
"options CMD640". Without that option there should be no difference
in the code produced compared to the previous revision of wd.c.
Submitted by: Wolfgang Helbig <helbig@ba-stuttgart.de>
This parameter is intended to allow new kernels to work with old LKM binaries,
provided the revision ID is incremented whenever the PCI LKM interface is
changed. The revision ID does not at all protect against changes in data
structures accessed by the driver.
Disabled the DMA byte counters - I had it this way originally and this is
the recommended setting.
Set crscdt to CRS only (0) since this is what it should be for an MII PHY.
Also fixed some comments.
Add auto-termination support as well as support for setting the high byte
termination. Booting with '-v' will display the settings that the driver
chose. If you stick narrow devices onto the external wide port, you had
better make sure that your converter cable terminates the bus, that you
have a wide device on there that terminates the bus, or that you manually
set the termination properly in SCSI-Select instead of using "Automatic".
The code will get the setting right regardless, if you *don't* have
internal wide devices in this type of configuration. Unfortunately this
is a limitation of the design of the Adaptec cards.