interrupts which now defers them until the transmit queue is filled
up with completed buffers. This has two advantages: first, it reduces
the number of transmitter interrupts to just 1/120th of the rate
at which they occurred previously, and second, running down many buffers
at once has much improved cache effects.
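For illustration, a rough sketch of the scheme (the structure and symbol
names here are made up, not the driver's actual ones): request a
TX-complete interrupt only on the last descriptor handed to the chip,
then reclaim the whole batch when that interrupt arrives.

    /* Sketch only: tx_desc/tx_softc and the flag names are assumptions. */
    static void
    tx_queue_frame(struct tx_softc *sc, struct tx_desc *d, int last)
    {
            d->flags = TX_OWNED_BY_CHIP;
            if (last)
                    d->flags |= TX_REQUEST_INTR;    /* interrupt only here */
    }

    static void
    tx_reclaim(struct tx_softc *sc)
    {
            struct tx_desc *d;

            /* Many buffers are run down at once: far fewer interrupts
             * and better cache behaviour than one interrupt per packet. */
            while ((d = sc->tx_head) != NULL &&
                (d->flags & TX_OWNED_BY_CHIP) == 0) {
                    free_buffer(d->buf);            /* placeholder helper */
                    sc->tx_head = d->next;
            }
    }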
- probe for PHYs by checking the BMSR (phy status) register instead
of the vendor ID register.
- fix the autonegotiation routine so that it figures out the autonegotiated
modes correctly.
- add tweaks to support the Olicom OC-2326 now that I've actually had
a chance to test one
o Olicom appears to encode the ethernet address in the EEPROM
in 16-bit chunks in network byte order. If we detect an
Olicom card (based on the PCI vendor ID), byte-swap the station
address accordingly.
XXX The Linux driver does not do this. I find this odd since
the README from the Linux driver indicates that patches to
support the Olicom cards came from somebody at Olicom; you'd
think if anyone would get that right, it'd be them. Regardless,
I accepted the word of the diagnostic program that came bundled
with the card as gospel and fixed the attach routine to make
the station address match what it says (the swap is sketched below).
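A minimal sketch of that swap (the helper name and the way the address
is passed in are made up; ntohs() is the only real interface used):

    #include <sys/types.h>
    #include <netinet/in.h>             /* ntohs() */

    /* Swap each 16-bit word of a 6-byte station address read from an
     * Olicom EEPROM; purely illustrative, not the driver's code. */
    static void
    olicom_swap_eaddr(u_char eaddr[6])
    {
            u_int16_t *word = (u_int16_t *)eaddr;
            int i;

            for (i = 0; i < 3; i++)
                    word[i] = ntohs(word[i]);
    }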
o The version of the 2326 card that I got for testing is a
strange beast: the card does not look like the picture on
the box in which it was packed. For one thing, the picture
shows what looks like an external NS 83840A PHY, but the
actual card doesn't have one. The card has a TNETE100APCM
chip, which appears to have not only the usual internal
tlan 10Mbps PHY at MII address 32, but also a 10/100 PHY
at MII address 0. Curiously, this PHY's vendor and device ID
registers always return 0x0000. I suspect that this is
a mutant version of the ThunderLAN chip with 100Mbps support.
This combination behaves a little strangely and required the
following changes:
- The internal PHY has to be enabled in tl_softreset().
- The internal PHY doesn't seem to come to life after
detecting the 100Mbps PHY unless it's reset twice.
- If you want to use 100Mbps modes, you have to isolate
the internal PHY.
- If you want to use 10Mbps modes, you have to un-isolate
the internal PHY.
The latter two changes are handled at the end of tl_init(): if
the PHY vendor ID is 0x0000 (which should not be possible if we
have a real external PHY), then tl_init() forces the internal
PHY's BMCR register to the proper values.
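A rough sketch of that fixup (the PHY access helpers and softc fields
are invented for illustration; only the standard MII BMCR isolate bit
value is real):

    #define PHY_BMCR        0x00        /* MII control register */
    #define BMCR_ISOLATE    0x0400      /* electrically isolate the PHY */

    /* End-of-init fixup, used only when the PHY vendor ID reads 0x0000,
     * i.e. the mutant internal 10/100 part is in use. */
    static void
    fixup_internal_phy(struct tl_softc *sc, int want_100mbps)
    {
            u_int16_t bmcr;

            bmcr = phy_read(sc, TL_INTERNAL_PHY, PHY_BMCR);
            if (want_100mbps)
                    bmcr |= BMCR_ISOLATE;   /* keep the 10Mbps PHY off the bus */
            else
                    bmcr &= ~BMCR_ISOLATE;  /* 10Mbps modes need it active */
            phy_write(sc, TL_INTERNAL_PHY, PHY_BMCR, bmcr);
    }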
Change the port address argument to pci_map_port to pci_port_t* which is
defined as u_int on the alpha, u_short on i386. This is a stopgap with a
hopefully limited lifetime.
Discussed with: Stefan Esser <se@freebsd.org>
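For reference, a sketch of what such a stopgap type looks like (the
conditional used here is an assumption; only the u_int/u_short split
comes from the log above):

    #ifdef __alpha__
    typedef u_int   pci_port_t;     /* alpha: wider I/O port handles */
    #else
    typedef u_short pci_port_t;     /* i386: 16-bit I/O port addresses */
    #endif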
respectively. Most of the longs should probably have been
u_longs, but this change is just to prevent warnings about
casts between pointers and integers of different sizes, not
to fix poorly chosen types.
cure the problems I was having with interrupts not being acknowledged
on time. This fixes a problem I observed where starting two ping -f
processes at 10Mbps would cause an adapter check due to TX GO commands
being issued before TXEOC interrupts were being acked.
Also fix a small problem with tl_start(): the mechanism I was using
to queue new packets onto the TX chain was bogus.
Change adapter check handler so that it resets card state after
tl_softreset() is called.
Moved all EEPROM-related macro definitions into if_tlreg.h.
Don't allow an autoneg session to start until after the TX queue has
been drained, and don't transmit anything until after the autoneg
session is complete.
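A sketch of the ordering this enforces (flag and helper names are
invented, not the driver's):

    /* In the transmit-start routine: hold new frames while autoneg runs. */
    if (sc->autoneg_in_progress)
            return;

    /* Where a session is requested: wait for the TX queue to drain first. */
    if (sc->autoneg_wanted && tx_queue_empty(sc)) {
            sc->autoneg_in_progress = 1;
            do_autoneg(sc);                 /* clears the flag when done */
    }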
Also add support for two more Compaq ThunderLAN-based cards, and three
cards from Olicom which also use the ThunderLAN chip. The only thing
different about the Olicom cards is that they store the station address
at a different location within the EEPROM.
hidden). Now "ticks" are used, which are 4 byte, not 8 byte in size.
The size mismatch did not matter due to sufficient padding at the end
of the structure that holds time stamps (there is an unused member).
The fix suggested by Bruce Evans used "sizeof (ticks_t)", but I prefer
to use "sizeof ticks", and didn't seem to object in his last mail on
this topic.
Submitted by: bde
`#if defined(ONE_THING)' is a style bug, and i386 instead of __i386__
is a bug, since i386 is never defined when the kernel is compiled
with the default flags (`gcc -ansi ...'). Here the bug disabled
the call to pmap_setvidram(), so ISA video memory was not mapped
WC on 686's. The bug may have been masked by bugs in the committer's
version of gcc - `gcc -ansi' incorrectly defines i386 for gcc = the
version of egcs on the 2.2.6 cdrom.
and don't depend on them being declared there. This will cause lots of
warnings for a few minutes until config is updated. Interrupt handlers
should never have been configured by config, and the machine generated
declarations get in the way of changing the arg type from int to void *.
- connector selection values (should fix aui/bnc),
- non-shifting version of crc calculation using a table,
- interrupt mask adjustments,
- add some brackets where an #ifdef could break an if() (example below),
- don't reset the card unless it's up.
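The bracket change guards against the classic trap shown here (example
made up, not taken from the driver):

    #include <stdio.h>

    static void
    broken(int up)
    {
            if (up)
    #ifdef FOO
                    printf("reset\n");
    #endif
            printf("start\n");  /* becomes the if() body when FOO is undefined */
    }

    static void
    fixed(int up)
    {
            if (up) {
    #ifdef FOO
                    printf("reset\n");
    #endif
            }
            printf("start\n");  /* always runs, as intended */
    }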
work in progress and has never booted a real machine. Initial
development and testing was done using SimOS (see
http://simos.stanford.edu for details). On the SimOS simulator, this
port successfully reaches single-user mode and has been tested with
loads as high as one copy of /bin/ls :-).
Obtained from: partly from NetBSD/alpha
or unsigned int (this doesn't change the struct layout, size or
alignment in any of the files changed in this commit, at least for
gcc on i386's. Using bitfields of type u_char may affect size and
alignment but not packing)).
FreeBSD/alpha. The most significant item is to change the command
argument to ioctl functions from int to u_long. This change brings us
inline with various other BSD versions. Driver writers may like to
use (__FreeBSD_version == 300003) to detect this change.
The prototype FreeBSD/alpha machdep will follow in a couple of days
time.
I had a reason for doing this, but it violates the principle of least
astonishment. (At some point I may put this back but attach it to one of
the LINK flags so the behavior can be toggled on and off.)
Also replace my tl_calchash() with a much less disgusting and substantially
smaller one supplied by Bill Fenner.
These are probably generated by other PCI devices sharing the TLAN's
interrupt. The programmer's guide says to simply re-enable interrupts
and return if one of these is detected.
Prompted by bug report from: Bill Fenner
in -current is over, I'll put a 2.2.x specific version in the RELENG_2_2
branch. If somebody wants a 2.2 version of this driver now, they can check
out the previous version from CVS or ask me via e-mail.
Gee people, I didn't mean to stir up such a controversy. I just wanted
to make sure I could get this thing to work with both kernel versions
and didn't want to have to maintain two separate copies. All ya hadda
do was ask. :)
drivers here do and it also blows up in building GENERIC during
a release build if you try and include <osreldate.h> (which shot
my SNAP dead - argh!). Use __FreeBSD__ instead.
This driver supports the following cards/integrated ethernet controllers:
Compaq Netelligent 10, Compaq Netelligent 10/100, Compaq Netelligent 10/100,
Compaq Netelligent 10/100 Proliant, Compaq Netelligent 10/100 Dual Port,
Compaq NetFlex-3/P Integrated, Compaq NetFlex-3/P Integrated,
Compaq NetFlex 3/P w/ BNC, Compaq Deskpro 4000 5233MMX.
It should also support Texas Instruments NICs that use the ThunderLAN
chip, though I don't have any to test. If you've got a card that uses
the ThunderLAN chip but isn't listed in the PCI vendor/product list in
if_tl.c, try adding it and see what happens.
The driver supports any MII compliant PHY at 10 or 100Mbps speeds in
full or half duplex. (Those I've personally tested are the National
Semiconductor DP83840A (Prosignia server), the Level 1 LXT970 (Deskpro
desktop), and the ThunderLAN's internal 10baseT PHY.) Autonegotiation,
hardware multicast filtering, BPF and ifmedia support are included.
This chip is pretty fast; Prosignia servers with NCR SCSI, ThunderLAN
ethernet and FreeBSD make for a nice combination.
Submitted by: Roger Hardiman <roger@cs.strath.ac.uk>
options BROOKTREE_SYSTEM_DEFAULT=BROOKTREE_PAL
in the kernel config file makes the driver's video_open() function
select PAL rather than NTSC. This fixed all the hangs on my
Dual Crystal card when using a PAL video signal.
As a result, you can lose the tsleep (of 2 seconds - now 0.25!!)
which I previously added. (Unless someone else wanted the 0.25
second tsleep).
submitted ioctl to clear the video buffer
prior to starting video capture
Amancio: clean up yuv12 so that it does not
affect rgb capture. Basically, in fxtv, capturing
in yuv12 mode and then switching to rgb
would cause the video capture to be too bright.
1.32 disable inverse gamma function for rgb and yuv
capture. Fixed the meteor brightness ioctl: it now
converts the brightness value from unsigned to
signed.
1.33 added sysctl: hw.bt848.tuner, hw.bt848.reverse_mute,
hw.bt848.card
card takes a value from 0 to bt848_max_card
tuner takes a value from 0 to bt848_max_tuner
reverse_mute: 0 no effect, 1 reverse the tuner
mute function; some tuners are wired reversed :(
as variables declared in the main block in the function, so shadowing
of parameters by variables declared in the main block is not just an
obfuscation).
Found by: lint
Technologies' Socket 7 chipsets. This covers all of the Apollo chipsets
except the Master (82C570) and the MVP3, and it also covers the cheap
VXPro and VXTWO knockoffs of the VP1 and VPX.
PR: 6481
Reviewed by: phk
Submitted by: Lee Cremeans <lcremean@tidalwave.net>
Submitted by: Roger Hardiman <roger@cs.strath.ac.uk>
Roger Hardiman <roger@cs.strath.ac.uk> :
Revised autodetection code to correctly handle both
old and new VideoLogic Captivator PCI cards.
Added tsleep of 2 seconds to initialisation code for PAL users.
Corrected clock selection code on format change.
--- Amancio
- Attempt to handle PCI devices where the interrupt is
an ISA/EISA interrupt according to the mp table.
- Attempt to handle multiple IO APIC pins connected to
the same PCI or ISA/EISA interrupt source. Print a
warning if this happens, since performance is suboptimal.
This workaround is only used for PCI devices.
With these two workarounds, the -SMP kernel is capable of running on
my Asus P/I-P65UP5 motherboard when version 1.4 of the MP table is disabled.
"time" wasn't a atomic variable, so splfoo() protection were needed
around any access to it, unless you just wanted the seconds part.
Most uses of time.tv_sec now uses the new variable time_second instead.
gettime() changed to getmicrotime(0.
Remove a couple of unneeded splfoo() protections, the new getmicrotime()
is atomic (until Bruce sets a breakpoint in it).
A couple of places needed random data, so use read_random() instead
of mucking about with time which isn't random.
Add a new nfs_curusec() function.
Mark a couple of bogosities involving the now disappeared time variable.
Update ffs_update() to avoid the weird "== &time" checks, by fixing the
one remaining call that passed &time as an argument.
Change profiling in ncr.c to use ticks instead of time. Resolution is
the same.
Add new function "tvtohz()" to avoid the bogus "splfoo(), add time, call
hzto() which subtracts time" sequences.
Reviewed by: bde
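For illustration, typical driver-side conversions under the interfaces
above (the surrounding function is invented; getmicrotime(), time_second
and tvtohz() are the real interfaces named in the log):

    static void
    example_time_usage(struct timeval *delta)
    {
            struct timeval now;
            time_t secs;
            int ticks;

            /* Old: s = splclock(); now = time; splx(s);
             * New: getmicrotime() is atomic, no spl needed. */
            getmicrotime(&now);

            /* Seconds-only consumers just read time_second, no locking. */
            secs = time_second;

            /* Old: splfoo(), add time, call hzto() which subtracts time.
             * New: hand the relative interval straight to tvtohz(). */
            ticks = tvtohz(delta);

            (void)now; (void)secs; (void)ticks;     /* sketch only */
    }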
Fixed pedantic semantics errors (in ANSI C, static arrays must have
a size, and static objects should be consistently declared as static
unless you know more than anyone should have to know about the
linkage rules).
1. Takeshi Ohashi <ohashi@atohasi.mickey.ai.kyutech.ac.jp> submitted
code to support bktr_read. /usr/src/share/examples/rgb24.c now works 8)
2. Flemming Jacobsen <fj@schizo.dk.tfs.com> submitted code to support
radio available in some bt848 based cards; additionally, wrote
code to correctly recognize his bt848 card.
3. Roger Hardiman <roger@cs.strath.ac.uk> submitted various fixes to smooth
out the microcode and made all modes consistent.
4. Added support for yuv12 so we can now capture raw streams and feed them
to mpeg_encode. The upshot is that we can now mpeg encode more and save
nearly 100 percent of the disk space previously required by programs such
as fxtv, which first save the raw video image to disk and then convert it
to a format suitable for mpeg_encode.
a BSD4.4Lite1 feature, not a FreeBSD feature. <sys/ioctl.h> is a
compatibility misfeature.
Moved NPCI ifdef. This file didn't compile if NPCI <= 0. It shouldn't
be configured in that case, but it is easy to support (mis)configuration
of drivers without buses by generating null objects, and many drivers
do it.
Removed unused includes.
This introduces an xxxFS_BOOT for each of the rootable filesystems.
(Presently not required, but encouraged to allow a smooth move of option *FS
to opt_dontuse.h later.)
LFS is temporarily disabled, and will be re-enabled tomorrow.
and initializes the next two ports in order starting at 03e0. This
also patches pcic_p.h to reduce the I/O ports mapped from 4 to 2.
Submitted by: Ted Faber <faber@ISI.EDU>
This will not make any of the object files that LINT creates change; there
might be differences with INET disabled, but hardly anything compiled
before without INET anyway. Now the 'obvious' things will give a
proper error if compiled without inet - ipx_ip, ipfw, tcp_debug. The
only thing that _should_ work (but can't be made to compile reasonably
easily) is sppp :-(
This commit moves struct arpcom from <netinet/if_ether.h> to
<net/if_arp.h>.
Submitted by: Jonathan Hanna <pangolin@rogers.wave.ca>
The patch is for a Hauppauge Win/TV dbx with FM. I still need to
config OVERRIDE_TUNER, but it works nicely.
The #ifdef IPXIP in netipx/ipx_if.h is OK (used from ipx_usrreq.c and
ifconfig.c only).
I also fixed a typo IPXTUNNEL -> IPTUNNEL (and #ifdef'ed out the code
inside, as it never could have compiled - doh.)
number of tags (NCR_SCSI_DFLT_TAGS), which is 0 in the FAILSAFE case.
This should fix the incompatibility between kernel and ncrcontrol,
which is the result of FAILSAFE being defined in the kernel config
file, invisible to the build of ncrcontrol. (See kern/5133, which
should be fixed by this change.)
mode. Currently, the only supported controller is the Cirrus Logic
PD6832, but others can be supported with docs on them.
Submitted by: Ted Faber <faber@ISI.EDU>
2) Fix temporal decimation, disable it when
doing CAP_SINGLEs, and in dual-field capture, don't
capture fields for different frames
Submitted by: Luigi Rizzo & Randall Hopper
This makes the Miro PCTV work for me, including audio, and should
hopefully fix the other audio problems some people have been having.
Reviewed by: ahasty & Luigi Rizzo (freebsd-multimedia)
a change that might have an effect on the problems some have seen
with older chips, it looks like the driver may have mistakenly thought
there was an SIA when there isn't.
This driver includes the following patches submitted by:
1.0 Hideyuki Suzuki <hideyuki@sat.t.u-tokyo.ac.jp>
Japanese Cable support
2.0 Keith Sklower <sklower@CS.Berkeley.EDU>
Minor update to the BSDI section so it compiles cleanly on BSDI
3.0 Joao Carlos Mendes Luis <jonny@coppe.ufrj.br>
ioctl interface to select video format, NTSC, PAL, etc...
overruns (not that it was a problem, but it could be):
1) Doubled the number of receive buffers in the DMA chain to 64.
2) Do packet receive processing before transmit in the interrupt routine.
if it is in 10Mbps mode and gets certain types of garbage prior to
the packet header. The work-around involves reprogramming the
multicast filter if nothing is received in some number of seconds
(currently set at 15). As a side effect, implemented complete support
for multicasting rather than the previous 'receive all multicasts'
hack, since we now have the ability to program the filter table.
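Roughly what such a workaround looks like (structure, field and helper
names are illustrative; timeout() and hz are real, and the 15-second
figure is from the log above):

    #define RX_QUIET_LIMIT  15      /* seconds without a received packet */

    static void
    rx_watchdog(void *arg)
    {
            struct foo_softc *sc = arg;

            /* sc->rx_idle is zeroed in the receive path on every packet. */
            if (++sc->rx_idle >= RX_QUIET_LIMIT) {
                    /* At 10Mbps the chip can wedge on garbage seen before
                     * a packet header; reloading the filter unwedges it. */
                    foo_setmulti(sc);
                    sc->rx_idle = 0;
            }
            timeout(rx_watchdog, sc, hz);   /* check again in one second */
    }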
Fixed a serious bug which crept in with the timeout() changes;
the cookie was only saved on the first timeout() call in fxp_init()
and wasn't updated in the most common place in fxp_stats_update()
when the timeout was rescheduled. This bug would have resulted in
an eventual panic if fxp_stop() was called (which happens when any
interface flags are changed, for example).
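The fix, sketched (the softc field and function names are approximations,
not necessarily the driver's exact ones; timeout()/untimeout() are the
real interfaces):

    static void
    stats_update(void *arg)
    {
            struct fxp_softc *sc = arg;

            /* ... collect statistics ... */

            /* Save the cookie on *every* rescheduling, not just on the
             * first timeout() call in the init path. */
            sc->stat_ch = timeout(stats_update, sc, hz);
    }

    static void
    stop_example(struct fxp_softc *sc)
    {
            /* With a stale cookie this would cancel the wrong entry. */
            untimeout(stats_update, sc, sc->stat_ch);
    }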
Fixed a bug in Alpha support that would have caused the TxCB
descriptor chain to span a page boundary, causing serious problems
if the pages didn't happen to be contiguous.
Removed some gratuitous bit masking that was left over from an
older implementation.
Fixed a bug where too much was copied from the configuration
template, spilling over into memory that followed it.
Fixed handling of if_timer...it was cleared too early in some cases.
of multiple PCI IDE controllers (Dyson), and some updates and cleanups from
John Hood, who originally made our IDE DMA stuff work :-).
I have run tests with 7 IDE drives connected to my system, all in DMA
mode, with no errors. Modulo any bugs, this stuff makes IDE look
really good (within its limitations).
Submitted by: John Hood <cgull@smoke.marlboro.vt.us>
rather than extracting the diff from Mark's patch, but it turns out that
I was freeing one allocation twice due to a previous cut/paste braino.
My botch, not Mark's.
Pointed out by: Mark Valentine <mv@pobox.com>
large (over 4KB) softc struct. The descriptor array is accessed by
busmaster dma and must be physically contiguous in memory. malloc() of
a block greater than a page is only virtually contiguous, and not
necessarily physically contiguous.
contigmalloc() could do this, but that is a bit on the overkill side.
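For illustration, the constraint can be expressed like this (not the
commit's actual fix; malloc(), contigmalloc() and PAGE_SIZE are real,
the wrapper is an assumption):

    /* A descriptor ring touched by busmaster DMA must be physically
     * contiguous.  A single malloc() page is; anything larger is only
     * virtually contiguous and would need contigmalloc(). */
    static void *
    alloc_desc_ring(unsigned long len)
    {
            if (len <= PAGE_SIZE)
                    return (malloc(len, M_DEVBUF, M_NOWAIT));
            return (contigmalloc(len, M_DEVBUF, M_NOWAIT,
                0, 0xffffffffUL, PAGE_SIZE, 0));
    }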
I'm not sure of the origins of the problem report and diagnosis, I learned
of the problem via mail forwarded from Jim Shankland <jas@flyingfox.com>.
Jim said that Matt Thomas's workaround was to reduce the number of
transmit descriptors from 128 to 32, but I was concerned that it might
cost performance. Anyway, this change is my fault, not Jim's. :-)
Reviewed by: davidg
* lots of fixes to error handling-- mostly works now
* improve DMA timing config for Triton chipsets-- PIIX4 and UDMA drive
still untested
* generally improve DMA config in many ways-- mostly cleanup
* clean up boot-time messages
* rewrite PRD generation algorithm
* first wd timeout is now longer, to handle drive spinup
Submitted by: John Hood <cgull@smoke.marlboro.vt.us>
could cause a solid system lockup in the driver attach:
These chips do not abort an access to the internal SRAM, when
the driver set the software reset bit in the istat register. But
the chip will never acknowledge the requested PCI bus transfer
in the situation, causing an infinite wait and a lockout of other
bus-masters.
The problem has been reported for rev 0x11 of the 53c825a and
rev 0x01 of the 53c875.
Revisions 0x13 of the 53c825a and 0x03 of the 53c875 are known
to support SRAM accesses, even in the software reset state.
- Do not malloc SCRIPTS memory for those parts of the microcode that
are to be loaded into the on-chip SRAM of the 53c825a or 875 ...
- Modify ncr_chip_lookup to make adding new entries easier.
- Disable use of on-chip SRAM for the 53c825 rev 0x10 to 0x12, since
there seems to be a problem with rev 0x11, while 0x13 is known to
work. (Tested by Chuck Robey <chuckr@glue.umd.edu>).
This code will be merged into 2.2-stable after a few more days of
testing in -current.
mod makes sure that the Natoma chipset is set into the correct mode. In
the case of my P6DNF, when booting a UP kernel, I see a substantial improvement
in the latency of certain operations. It appears that the cache hit
latency is curiously improved the most, per lat_mem_rd.
found by taking my HP800CT apart, perusing HP's (very good!) service
manual and inference from a bad gif file I found in Finland.
Sigh... But it's a nice machine :-)
certain variants of the NCR chip from FE_CACHE_SET: FE_CLSE (enable
cache-line size register) and FE_ERMP (enable read-multiple). They
will be re-enabled, if a fix for the underlying problem (a restriction
in the memory to memory move logic of some chips) has been implemented.
I changed a few bits here and there, mainly renaming wd82371.c
to ide_pci.c now that it's supposed to handle different chipsets.
It runs on my P6 natoma board with two Maxtor drives, and also
on a Fujitsu machine I have at work with an Opti chipset and
a Quantum drive.
Submitted by: John Hood <cgull@smoke.marlboro.vt.us>
Original readme:
*** WARNING ***
This code has so far been tested on exactly one motherboard with two
identical drives known for their good DMA support.
This code, in the right circumstances, could corrupt data subtly,
silently, and invisibly, in much the same way that older PCI IDE
controllers do. It's ALPHA-quality code; there's one or two major
gaps in my understanding of PCI IDE still. Don't use this code on any
system with data that you care about; it's only good for hack boxes.
Expect that any data may be silently and randomly corrupted at any
moment. It's a disk driver. It has bugs. Disk drivers with bugs
munch data. It's a fact of life.
I also *STRONGLY* recommend getting a copy of your chipset's manual
and the ATA-2 or ATA-3 spec and making sure that timing modes on your
disk drives and IDE controller are being setup correctly by the BIOS--
because the driver makes only the lamest of attempts to do this just
now.
*** END WARNING ***
that said, i happen to think the code is working pretty well...
WHAT IT DOES:
this code adds support to the wd driver for bus mastering PCI IDE
controllers that follow the SFF-8038 standard. (all the bus mastering
PCI IDE controllers i've seen so far do follow this standard.) it
should provide busmastering on nearly any current P5 or P6 chipset,
specifically including any Intel chipset using one of the PIIX south
bridges-- this includes the '430FX, '430VX, '430HX, '430TX, '440LX,
and (i think) the Orion '450GX chipsets. specific support is also
included for the VIA Apollo VP-1 chipset, as it appears in the
relabeled "HXPro" incarnation seen on cheap US$70 taiwanese
motherboards (that's what's in my development machine). it works out
of the box on controllers that do DMA mode2; if my understanding is
correct, it'll probably work on Ultra-DMA33 controllers as well.
it'll probably work on busmastering IDE controllers in PCI slots, too,
but this is an area i am less sure about.
it cuts CPU usage considerably and improves drive performance
slightly. usable numbers are difficult to come by with existing
benchmark tools, but experimentation on my K5-P90 system, with VIA
VP-1 chipset and Quantum Fireball 1080 drives, shows that disk i/o on
raw partitions imposes perhaps 5% cpu load. cpu load during
filesystem i/o drops a lot, from near 100% to anywhere between 30% and
70%. (the improvement may not be as large on an Intel chipset; from
what i can tell, the VIA VP-1 may not be very efficient with PCI I/O.)
disk performance improves by 5% or 10% with these drives.
real, visible, end-user performance improvement on a single user
machine is about nil. :) a kernel compile was sped up by a whole three
seconds. it *does* feel a bit better-behaved when the system is
swapping heavily, but a better disk driver is not the fix for *that*
problem.
THE CODE:
this code is a patch to wd.c and wd82371.c, and associated header
files. it should be considered alpha code; more work needs to be
done.
wd.c has fairly clean patches to add calls to busmaster code, as
implemented in wd82371.c and potentially elsewhere (one could imagine,
say, a Mac having a different DMA controller).
wd82371.c has been considerably reworked: the wddma interface that it
presents has been changed (expect more changes), many bugs have been
fixed, a new internal interface has been added for supporting
different chipsets, and the PCI probe has been considerably extended.
the interface between wd82371.c and wd.c is still fairly clean, but
i'm not sure it's in the right place. there's a mess of issues around
ATA/ATAPI that need to be sorted out, including ATAPI support, CD-ROM
support, tape support, LS-120/Zip support, SFF-8038i DMA, UltraDMA,
PCI IDE controllers, bus probes, buggy controllers, controller timing
setup, drive timing setup, world peace and kitchen sinks. whatever
happens with all this and however it gets partitioned, it is fairly
clear that wd.c needs some significant rework-- probably a complete
rewrite.
timing setup on disk controllers is something i've entirely punted on.
on my development machine, it appears that the BIOS does at least some
of the necessary timing setup. i chose to restrict operation to
drives that are already configured for Mode4 PIO and Mode2 multiword
DMA, since the timing is essentially the same and many if not most
chipsets use the same control registers for DMA and PIO timing.
does anybody *know* whether BIOSes are required to do timing setup for
DMA modes on drives under their care?
error recovery is probably weak. early on in development, i was
getting drive errors induced by bugs in the driver; i used these to
flush out the worst of the bugs in the driver's error handling, but
problems may remain. i haven't got a drive with bad sectors i can
watch the driver flail on.
complaints about how wd82371.c has been reindented will be ignored
until the FreeBSD project has a real style policy, there is a
mechanism for individual authors to match it (indent flags or an emacs
c-mode or whatever), and it is enforced. if i'm going to use a source
style i don't like, it would help if i could figure out what it *is*
(style(9) is about half of a policy), and a way to reasonably
duplicate it. i ended up wasting a while trying to figure out what
the right thing to do was before deciding reformatting the whole thing
was the worst possible thing to do, except for all the other
possibilities.
i have maintained wd.c's indentation; that was not too hard,
fortunately.
TO INSTALL:
my dev box is freebsd 2.2.2 release. fortunately, wd.c is a living
fossil, and has diverged very little recently. included in this
tarball is a patch file, 'otherdiffs', for all files except wd82371.c,
my edited wd82371.c, a patch file, 'wd82371.c-diff-exact', against the
2.2.2 dist of 82371.c, and another patch file,
'wd82371.c-diff-whitespace', generated with diff -b (ignore
whitespace). most of you not using 2.2.2 will probably have to use
this last patchfile with 'patch --ignore-whitespace'. apply from the
kernel source tree root. as far as i can tell, this should apply
cleanly on anything from -current back to 2.2.2 and probably back to
2.2.0. you, the kernel hacker, can figure out what to do from here.
if you need more specific directions, you probably should not be
experimenting with this code yet.
to enable DMA support, set flag 0x2000 for that drive in your config
file or in userconfig, as you would the 32-bit-PIO flag. the driver
will then turn on DMA support if your drive and controller pass its
tests. it's a bit picky, probably. on discovering DMA mode failures
or disk errors or transfers that the DMA controller can't deal with,
the driver will fall back to PIO, so it is wise to setup the flags as
if PIO were still important.
'controller wdc0 at isa? port "IO_WD1" bio irq 14 flags 0xa0ffa0ff
vector wdintr' should work with nearly any PCI IDE controller.
i would *strongly* suggest booting single-user at first, and thrashing
the drive a bit while it's still mounted read-only. this should be
fairly safe, even if the driver goes completely out to lunch. it
might save you a reinstall.
one way to tell whether the driver is really using DMA is to check the
interrupt count during disk i/o with vmstat; DMA mode will add an
extremely low number of interrupts, as compared to even multi-sector
PIO.
boot -v will give you a copious register dump of timing-related info
on Intel and VIAtech chipsets, as well as PIO/DMA mode information on
all hard drives. refer to your ATA and chipset documentation to
interpret these.
WHAT I'D LIKE FROM YOU and THINGS TO TEST:
reports. success reports, failure reports, any kind of reports. :)
send them to cgull+ide@smoke.marlboro.vt.us.
i'd also like to see the kernel messages from various BIOSes (boot -v;
dmesg), along with info on the motherboard and BIOS on that machine.
i'm especially interested in reports on how this code works on the
various Intel chipsets, and whether the register dump works
correctly. i'm also interested in hearing about other chipsets.
i'm especially interested in hearing success/failure reports for PCI
IDE controllers on cards, such as CMD's or Promise's new busmastering
IDE controllers.
UltraDMA-33 reports.
interoperation with ATAPI peripherals-- FreeBSD doesn't work with my
old Hitachi IDE CDROM, so i can't tell if I've broken anything. :)
i'd especially like to hear how the driver copes in DMA operation on
drives with bad sectors. i haven't been able to find any such yet.
success/failure reports on older IDE drives with early support for DMA
modes-- those introduced between 1.5 and 3 years ago, typically
ranging from perhaps 400MB to 1.6GB.
failure reports on operation with more than one drive would be
appreciated. the driver was developed with two drives on one
controller, the worst-case situation, and has been tested with one
drive on each controller, but you never know...
any reports of messages from the driver during normal operation,
especially "reverting to PIO mode", or "dmaverify odd vaddr or length"
(the DMA controller is strongly halfword oriented, and i'm curious to
know if any FreeBSD usage actually needs misaligned transfers).
performance reports. beware that bonnie's CPU usage reporting is
useless for IDE drives; the best test i've found has been to run a
program that runs a spin loop at an idle priority and reports how many
iterations it manages, and even that sometimes produces numbers i
don't believe. performance reports of multi-drive operation are
especially interesting; my system cannot sustain full throughput on
two drives on separate controllers, but that may just be a lame
motherboard.
THINGS I'M STILL MISSING CLUE ON:
* who's responsible for configuring DMA timing modes on IDE drives?
the BIOS or the driver?
* is there a spec for dealing with Ultra-DMA extensions?
* are there any chipsets with bugs relating to DMA transfer that
should be blacklisted?
* are there any ATA interfaces that use some other kind of DMA
controller in conjunction with standard ATA protocol?
FINAL NOTE:
after having looked at the ATA-3 spec, all i can say is, "it's ugly".
*especially* electrically. the IDE bus is best modeled as an
unterminated transmission line, these days.
for maximum reliability, keep your IDE cables as short as possible and
as few as possible. from what i can tell, most current chipsets have
both IDE ports wired into a single buss, to a greater or lesser
degree. using two cables means you double the length of this bus.
SCSI may have its warts, but at least the basic analog design of the
bus is still somewhat reasonable. IDE passed beyond the veil two
years ago.
--John Hood, cgull@smoke.marlboro.vt.us
changes relative to the 2.2 compatible version are include file
related, the new multicast interface (!) and the new PCI interface.
This should work "as-is" but has not been tested (I have not been able
to get a dc21x4x based card for testing).
- OVERRIDE_TUNER: allows you to manually choose the tuner type for those
cards that fail to probe properly. See source for legal
values.
- OVERRIDE_DBX: allows you to manually choose DBX or NO DBX for those
cards that fail to probe properly.
0 == no DBX circuit present, 1 == DBX circuit present.
should work with no driver changes, though not all features are currently
used.
Remove code that was conditional on NEW_SCSICONF not being defined. This
was temporary code, that at a time got excluded correctly, until the new
scsiconf became the default, and NEW_SCSICONF was no longer specified.
Add support for quirks defined in scsiconf.c. For now only the HP3724/5
needs an entry, since that drive can't be used with tags.
device probe of a host to PCI bridge may modify that value, based on
its knowledge of device specific registers. This makes the Intel XXpress
work, as verified by: Terje Marthinussen <terjem@cc.uit.no>.
Return failure, if the enable bit corresponding to the map type has not
been set in the command register. This feature was requested by Justin
Gibbs, who pointed out that some early PCI to PCI bridges do not correctly
support memory windows (I assume because of the risk of deadlocks that
have been taken care of in the PCI 2.2 spec) and that some BIOS clears
the memory address decode enable bit in the command register of the PCI
device, if it finds them behind such a bridge.
1) Stop at the first map register that contains a zero value.
2) When testing for the map size, work up from low values, since
this works around a bug in some BusLogic SCSI cards, which have
the 16 upper port base address bits hardwired to zero (see the
sketch below).
The config register dump printed in the bootverbose case has
been slightly rearranged.
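For reference, the classic map-sizing sequence looks roughly like this
(the pci_cfgread/pci_cfgwrite helpers stand in for the real config-space
accessors; the commit's variant additionally probes from the low address
bits upward so cards with hardwired-zero upper bits still size correctly):

    static u_long
    map_size_example(pcicfgregs *cfg, int reg)
    {
            u_long base, probe;

            base = pci_cfgread(cfg, reg, 4);
            if (base == 0)
                    return (0);                     /* empty map: stop here */

            pci_cfgwrite(cfg, reg, 0xffffffff, 4);  /* see which bits stick */
            probe = pci_cfgread(cfg, reg, 4);
            pci_cfgwrite(cfg, reg, base, 4);        /* restore the original */

            probe &= (base & 1) ? ~0x3UL : ~0xfUL;  /* strip I/O vs. mem flag bits */
            return ((~probe + 1) & 0xffffffffUL);   /* lowest writable bit = size */
    }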
reality. There will be a new call interface, but for now the file
pci_compat.c (which is to be deleted, after all drivers are converted)
provides an emulation of the old PCI bus driver functions. The only
change that might be visible to drivers is, that the type pcici_t
(which had been meant to be just a handle, whose exact definition
should not be relied on), has been converted into a pcicfgregs* .
The Tekram AMD SCSI driver bogusly relied on the definition of pcici_t
and has been converted to just call the PCI driver's functions to access
configuration space registers, instead of inventing its own ...
This code is by no means complete, but assumed to be fully operational,
and brings the official code base more in line with my development code.
A new generic device descriptor data type has to be agreed on. The PCI
code will then use that data type to provide new functionality:
1) userconfig support
2) "wired" PCI devices
3) conflicts checking against ISA/EISA
4) maps will depend on the command register enable bits
5) PCI to Anything bridges can be defined as devices,
and are probed like any "standard" PCI device.
The following features are currently missing, but will be added back,
soon:
1) unknown device probe message
2) suppression of "mirrored" devices caused by ancient, broken chip-sets
This code relies on generic shared interrupt support just committed to
kern_intr.c (plus the modifications of isa.c and isa_device.h).
Added [SR]RGBMASKs ioctl for byte swapping.
1.16 4/20/97 Randall Hopper <rhh@ct.picker.com>
Generalized RGBMASK ioctls for general pixel
format setting [SG]ACTPIXFMT, and added query API
to return driver-supported pix fmts GSUPPIXFMT.
1.17 4/21/97 hasty@rah.star-gate.com
Clipping support added.
1.18 4/23/97 Clean up after failed CAP_SINGLEs where the bt
interrupt isn't delivered, and fixed
CAP_SINGLEs for ODD_ONLY fields.
Submitted by: individuals in above log messages.
There are various options documented in i386/conf/LINT, there is more to
come over the next few days.
The kernel should run pretty much "as before" without the options to
activate SMP mode.
There are a handful of known "loose ends" that need to be fixed, but
have been put off since the SMP kernel is in a moderately good condition
at the moment.
This commit is the result of the tinkering and testing over the last 14
months by many people. A special thanks to Steve Passe for implementing
the APIC code!
large enough to contain the ethernet header. There appears to be a
condition where the card can return "0" in some failure cases, and this
causes bad things to happen (a panic).
type mismatches. There was no problem in practice (at least on 386's).
Removed NetBSD-related TIMEOUT macro. NetBSD uses the same BSD4.4Lite
timeout interface as FreeBSD. As a concession to portability, declare
the timeout function without using the FreeBSD timeout_t typedef.
This patch fixes the problem of vic only capturing an even or odd frame, plus
my earlier patch for missing frames with resolutions higher than 320x240
in rgb mode.
The yuv422 patch introduces a minor bug in that a green line appears at the
bottom of the captured window. There is no easy work around for this right
now.
Reviewed by: various bt848 hackers
Submitted by: Amancio Hasty <hasty@rah.star-gate.com>
Randall Hopper <rhh@ct.picker.com> GHUE/GBRIGHT bug
Louis Mamakos made a new bt848 struct, including massive changes to the entire
body of code, substituting array offsets with struct members.
Randall Hopper added fixes of BT848_GHUE & BT848_GBRIG.
I (fsmp):
added polled hardware i2c routines,
removed all existing software i2c routines.
added eeprom support.
form `tv = time'. Use a new function gettime(). The current version
just forces atomicity without fixing precision or efficiency bugs.
Simplified some related valid accesses by using the central function.
Michael submitted code to activate the audio muxes.
fsmp:
extended those changes for different boards.
auto-detection of board types.
auto-detection of tuner types.
auto-detection of stereo option.
Fixed a bug in fxp_mdi_write - a hex number was missing a preceding 0x
and this was causing the routine to not wait for a PHY write to complete.
Added support for link0, link1, and link2 flags to toggle auto-
negotiation, 10/100, and half/full duplex (see the sketch after this list):
link0 disable auto-negotiation
When set, these flags then have meaning:
-link1 10Mbps
link1 100Mbps
-link2 half duplex
link2 full duplex
...needs a manual page.
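In driver terms the mapping is roughly this (illustrative fragment, not
the actual implementation; IFF_LINK0/1/2 are the real interface flags):

    if ((ifp->if_flags & IFF_LINK0) == 0) {
            start_autonegotiation(sc);              /* the default */
    } else {
            sc->want_100mbps    = (ifp->if_flags & IFF_LINK1) != 0;
            sc->want_fullduplex = (ifp->if_flags & IFF_LINK2) != 0;
            force_media(sc);
    }

So e.g. "ifconfig fxp0 link0 link1 link2" would force 100Mbps full
duplex under this scheme, and "ifconfig fxp0 -link0" returns to
autonegotiation.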
I broke the cable tuning with my 'TEST_A' code. Remove TEST_A define
till I finish this change for both tuning modes. Note that this
will effectively break the new TVTUNER_SETFREQ/TVTUNER_GETFREQ ioctl()s.
These aren't used by anyone but me yet (attempt to provide full resolution
fine tuning for "fringe" stations) so it should be no problem
written:
1) Full duplex mode is now supported (and works!)
2) The 10Mbps-only PCI Pro/10 should now work (untested, however)
Thanks to Justin Gibbs for providing a PCI bus analyzer trace while the
Intel Windows driver was configuring the board...this made it possible
to figure out the mystery bit that I wasn't setting in the PHY for full
duplex to work.
can't perform overlapping commands on both of its channels.
To enable the CMD640B work-around, the kernel must be compiled with
"options CMD640". Without that option there should be no difference
in the code produced compared to the previous revision of wd.c.
Submitted by: Wolfgang Helbig <helbig@ba-stuttgart.de>