address, account for cards which report the Texas Instruments PCI vendor ID
in addition to Compaq and Olicom. (I don't actually have a card that
reports the TI vendor/device ID, but it appears that some Racore adapters
work this way, and failing to account for it when we have the ID listed
in the supported devices list is a bug.)
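For illustration, the probe-time check boils down to something like the
following sketch (the macro and function names are hypothetical, though
the vendor IDs themselves are the standard ones):

    #include <sys/types.h>

    #define TI_VENDORID     0x104C          /* Texas Instruments */
    #define COMPAQ_VENDORID 0x0E11          /* Compaq */
    #define OLICOM_VENDORID 0x108D          /* Olicom */

    /* accept any of the three vendor IDs before matching device IDs */
    static int
    tl_vendor_match(u_int16_t vid)
    {
            return (vid == TI_VENDORID || vid == COMPAQ_VENDORID ||
                vid == OLICOM_VENDORID);
    }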
iicbus(4) and smb(4).
User programs are available to retrieve SDRAM and sensor info; contact
the author.
Submitted by: Takanori Watanabe <takawata@shidahara1.planet.sci.kobe-u.ac.jp>
Reviewed by: Mike Smith <msmith@freebsd.org>
of the CRC as the multicast hash table bit, not the lower six bits. Plus
we have to flip on all bits in the table for multicast mode.
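As a sketch of the intended computation, a generic MSB-first Ethernet
CRC (assume this routine; it is not necessarily the driver's own code):

    #include <sys/types.h>

    /* generic big-endian (MSB-first) Ethernet CRC-32 */
    static u_int32_t
    ether_crc32_be(const u_int8_t *addr, int len)
    {
            u_int32_t crc = 0xFFFFFFFF;
            u_int8_t data;
            int i, bit;

            for (i = 0; i < len; i++) {
                    for (data = addr[i], bit = 0; bit < 8;
                        bit++, data >>= 1) {
                            if (((crc >> 31) ^ data) & 1)
                                    crc = (crc << 1) ^ 0x04C11DB7;
                            else
                                    crc <<= 1;
                    }
            }
            return (crc);
    }

    /* take the UPPER six bits of the CRC, not the lower six: */
    /* hashbit = (ether_crc32_be(enaddr, 6) >> 26) & 0x3F;    */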
Pointed out by: Kazushi SUGYO <k-sugyou@nwsl.mesh.ad.jp>
The previous code just ignored the invalid map register, but this gave
surprising results because of the way pci_map_port() associated the
supplied map register offset with an entry in the map array.
to look up cookies properly, at least for standard controllers.
Cookies are used so that we don't have to pass around lots of args.
All of the dmainit functions use the unit number so it is essential
that we pass them a cookie with the correct unit number.
This may break working configurations if there are bugs in the
dmainit functions like the ones I just fixed for VIA chipsets.
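Roughly, the idea is that a cookie records the unit it describes and
lookups must match on it; a minimal sketch (the names here are
illustrative, not the actual structures in ide_pci.c):

    #include <stddef.h>

    struct ide_pci_cookie {
            int iobase;             /* controller I/O base address */
            int unit;               /* 0 = master, 1 = slave */
    };

    static struct ide_pci_cookie *
    cookie_lookup(struct ide_pci_cookie *tab, int n, int iobase, int unit)
    {
            int i;

            /* match the unit number too, not just the controller */
            for (i = 0; i < n; i++)
                    if (tab[i].iobase == iobase && tab[i].unit == unit)
                            return (&tab[i]);
            return (NULL);
    }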
Broken in: rev 1.4 of ide_pci.c and rev 1.139 of wd.c.
Prefetch/postwrite was enabled for the wrong controller. (VIA
is bitwise big-endian and we confused ourselves by shifting left
instead of right.)
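The pitfall in miniature (purely illustrative; no actual VIA register
layout is implied):

    #include <sys/types.h>

    /*
     * When a datasheet numbers bits from the MSB end ("bit 0" is the
     * most significant bit), reach bit n by shifting right from 0x80,
     * not left from 0x01.
     */
    #define MSB_BIT(n)      (0x80 >> (n))

    static u_int8_t
    enable_datasheet_bit(u_int8_t reg, int n)
    {
            return (reg | MSB_BIT(n));      /* not (reg | (1 << n)) */
    }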
Extracted from: last set of patches from the author
(john hood <cgull@smoke.marlboro.vt.us>) on 7 Feb 1998
Instead of initializing UDMA mode, we turned it off and made sure that
it stays off by turning on the "UDMA enable by SET FEATURES" disable.
The damage was limited by bugs in cookie lookup, and suitable
initialization by some BIOSes. The cookie list has slaves before
masters, and the unit number is ignored when cookies are looked up,
so cookie lookup always finds cookies for slaves and the bug only
clobbers slaves, so the bug was harmless for common configurations
with no slaves or only non-UDMA slaves. UDMA initialization for
masters actually worked if the BIOS turned on the UDMA mode bit and
turned off the "UDMA enable by SET FEATURES" disable.
- In wb_rxeof(), if the received packet is less than MINCLSIZE bytes,
copy it to an mbuf chain so as to be more frugal in our use of mbuf
clusters.
- The Winbond chip, like the ASIX, wants the 'TX interrupt request'
bit set in the _first_ fragment of a transmitted frame, not the
last. (At least the Winbond manual states this unambiguously; too
bad I wasn't paying attention when I read it the first time. See the
sketch after this list.)
- Turn off the transmit threshold mechanism (initialize the threshold
to 0). This effectively puts the chip in 'store and forward' mode
which seems to cut down on transmit errors a little. It may also
reduce transmit performance a bit, but I'm willing to do that if it
means better reliability.
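In sketch form, the last two changes look like this (the bit and field
names are made up for illustration, not the driver's definitions):

    #include <sys/types.h>

    #define WB_TXCTL_FINT           0x80000000  /* 'TX interrupt request' */
    #define WB_NETCFG_TX_THRESH     0x00007C00  /* transmit threshold field */

    struct wb_desc {
            u_int32_t wb_status;
            u_int32_t wb_ctl;
    };

    static void
    wb_tx_fixups(struct wb_desc *first_frag, u_int32_t *netcfg)
    {
            /* the interrupt-request bit goes in the FIRST fragment */
            first_frag->wb_ctl |= WB_TXCTL_FINT;

            /* a threshold of 0 selects store-and-forward mode */
            *netcfg &= ~WB_NETCFG_TX_THRESH;
    }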
- Normally, the driver allocates an mbuf cluster for each receive
descriptor. This is because we have to be prepared to accommodate up to
1500 bytes (a cluster buffer can hold up to 2K). However, using up a
whole cluster buffer for a tiny packet is a bit of a waste. Also,
it seems to me that sometimes mbufs will linger in the kernel for
a while after being passed out of the driver, which means we might
drain the mbuf cluster pool. The cluster pool is smaller than the
mbuf pool in general, so we do the following: if the packet is less
than MINCLSIZE bytes, then we copy it into a small mbuf chain and
leave the mbuf cluster in place for another go-round (sketched after
this list). This saves
mbuf clusters in some cases while still allowing them to be used
for heavy traffic exchanges with lots of full-sized frames.
- The transmit descriptor has a bit in the control word which allows
the driver to request that a 'TX OK' interrupt be generated when
a frame has been completed. Sometimes, a frame can be fragmented
across several descriptors. The manual for the real DEC 21140A says
that if this happens, the 'TX interrupt request' bit is only valid
in the descriptor of the last fragment. With the ASIX chip, it seems
the 'TX interrupt request' bit is only valid in the descriptor of
the _first_ fragment. Actually, the manual contains conflicting
information, but I think it's supposed to be the first fragment.
To play it safe, set the bit in both the first and last fragment to
be sure that we get a TX OK interrupt. Without this fix, the driver
can sometimes be late in releasing mbufs from the transmit queue
after transmission.
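A sketch of both changes; m_devget(), mtod() and MINCLSIZE are
standard kernel interfaces, while the descriptor type and bit name are
placeholders:

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <net/if.h>

    struct ax_desc {
            u_int32_t ax_ctl;       /* illustrative control word */
    };
    #define AX_TXCTL_FINT   0x40000000      /* hypothetical 'TX OK' bit */

    /* rx: small frames are copied so the cluster stays in the ring */
    static struct mbuf *
    ax_rx_copy(struct ifnet *ifp, struct mbuf *m_cluster, int total_len)
    {
            if (total_len < MINCLSIZE)
                    return (m_devget(mtod(m_cluster, char *), total_len,
                        0, ifp, NULL));
            return (m_cluster);     /* caller must then load a new cluster */
    }

    /* tx: request the interrupt in BOTH the first and last fragments */
    static void
    ax_tx_intr_req(struct ax_desc *first, struct ax_desc *last)
    {
            first->ax_ctl |= AX_TXCTL_FINT;
            last->ax_ctl |= AX_TXCTL_FINT;
    }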
if option CY_PCI_FASTINTR is configured and mapping the irq to a
fastintr is possible. Unfortunately, this has to be optional because
pci_map_int_right() doesn't handle the INTR_EXCL flag right --
INTR_EXCL is honoured even if the interrupt needs to be non-exclusive
for other devices to work.
using the new pci_map_int_right() variant of pci_map_int(). Fast
interrupts work for PCI devices if and only if they are exclusive.
(The PCI interrupt mux doesn't support fast interrupts and can't
support a mixture of fast and slow interrupts even in principle.)
Don't assume that intrmask_t == unsigned in pci_map_int().
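For reference, the mapping logic in outline (argument lists assumed
from the era's pci_map_int(); treat this as a sketch, not the committed
code):

    mapped = 0;
    #ifdef CY_PCI_FASTINTR
            mapped = pci_map_int_right(config_id, cy_intr, (void *)unit,
                &tty_imask, INTR_EXCL | INTR_FAST) != 0;
    #endif
            if (!mapped)
                    mapped = pci_map_int(config_id, cy_intr, (void *)unit,
                        &tty_imask) != 0;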
register for the PLX id). Merge the vendor's modification of the 2.2.*
release version into -current for reference. Will be cleaned up in next
commit.
Obtained from: ftp://ftp.cyclades.com/pub/cyclades/cyclom-y/freebsd/3.0/cyy30.tar.gz
performance and reliability a little. There was a condition before
where transmission would stall during periods of heavy traffic
exchange between two hosts. Also set the 'want interrupt' bit in
receive descriptor control words.
on the ASIX AX88140A chip. Update /sys/conf/files, RELNOTES.TXT,
/sys/i386/i386/userconfig.c, sysinstall/devices.c, GENERIC and LINT
accordingly.
For now, the only board that I know of that uses this chip is the
Alfa Inc. GFC2204. (Its predecessor, the GFC2202, was a DEC tulip card.)
Thanks again to Ulf for obtaining the board for me. If anyone runs
across another, please feel free to update the man page and/or the
release notes. (The same applies for the other drivers.)
FreeBSD should now have support for all of the DEC tulip workalike
chipsets currently on the market (Macronix, Lite-On, Winbond, ASIX).
And unless I'm mistaken, it should also have support for all PCI fast
ethernet chipsets in general (except maybe the SMC FEAST chip, which
nobody seems to ever use, including SMC). Now if only we could convince
3Com, Intel or whoever to cough up some documentation for gigabit
ethernet hardware.
Also updated RELNOTES.TXT to mention that the SVEC PN102TX is supported
by the Macronix driver (assuming you actually have an SVEC PN102TX with
a Macronix chip on it; I tried to order a PN102TX once and got a box
labeled 'Hawking Technology PN102TX' that had a VIA Rhine board inside
it).
and mx_setcfg() so that we can read the internal MII registers on the
MX98713 chip correctly. With these changes, media autoselection now
works correctly on the original 98713. All Macronix chips should now
be properly supported (unless there's a surprise waiting in the 98725).
Thanks to Ulf Zimmermann for providing a 98713 board.
isolated to revision 33 PNIC chips is also present in revision 32 chips.
Cards with rev. 32 chips include the LinkSys LNE100TX and the Matrox
FastNIC 10/100. This accounts for all the cards that I have to test
with.
(I was never able to personally trip the bug on this chip rev, but today
one of the guys in the lab did it with the software they're working on
for their cellular IP project, which uses BPF and promiscuous mode
extensively.)
This commit enables the promiscuous mode software workaround code for
both revision 32 and revision 33 chips. It's possible all of the PNIC
chips suffer from this bug, but these are the only two revs where I
know for a fact it exists.
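The gating test amounts to something like this (function name assumed;
the revision numbers are the PCI revision IDs mentioned above):

    /* both known-bad silicon revisions get the software workaround */
    static int
    pn_promisc_bug(unsigned char revid)
    {
            return (revid == 32 || revid == 33);
    }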
chip revisions. (A buggy Taiwanese chip? I'm just shocked; shocked I tell
you.) So far I have only observed the anomalous behavior on boards with
PCI revision 33 chips. At the moment, this seems to include only the
Netgear FA310-TX rev D1 boards with chips labeled NGMC169B. (Possibly this
means it's an 82c169B part from Lite-On.)
The bug only manifests itself in promiscuous mode, and usually only at
10Mbps half-duplex. (I have not observed the problem in full-duplex mode,
and I don't think it ever happens at 100Mbps.) The bug appears to be in
the receiver DMA engine. Normally, the chip is programmed with a linked
list of receiver descriptors, each with a receive buffer capable of holding
a complete full-sized ethernet frame. During periods of heavy traffic
(i.e. ping -c 100 -f 8100 <otherhost>), the receiver will sometimes appear
to upload its entire FIFO memory contents instead of just uploading the
desired received frame. The uploaded data will span several receive
buffers, in spite of the fact that the chip has been told to only use
one descriptor per frame, and appears to consist of previously transmitted
frames with the correct received frame appended to the end.
Unfortunately, there is no way to determine exactly how much data is
uploaded when this happens; the chip doesn't tell you anything except the
size of the desired received frame, and the amount of bogus data varies.
Sometimes, the desired frame is also split across multiple buffers.
The workaround is ugly and nasty. The driver assembles all of the data
from the bogus frames into a single buffer. The receive buffers are always
zeroed out, and we program the chip to always include the receive CRC
at the end of each frame. We therefore know that we can start from the
end of the buffer and scan back until we encounter a non-zero data byte,
and say conclusively that this is the end of the desired frame. We can
then subtract the frame length from this address to determine the real
start of the frame, and copy it into an mbuf and pass it on.
This is kludgy and time consuming, but it's better than dropping frames.
It's not too bad since the problem only happens at 10Mbps.
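In outline, the recovery step looks like this (a sketch, not the
driver's exact code; the caller is assumed to have assembled the whole
bogus upload into one contiguous buffer):

    #include <stddef.h>

    /*
     * The rx buffers are zeroed before use and the chip appends the
     * CRC, so the last non-zero byte marks the end of the desired
     * frame; its start is frame_len bytes back from there.
     */
    static char *
    pnic_frame_start(char *buf, int buflen, int frame_len)
    {
            char *end = buf + buflen;

            while (end > buf && *(end - 1) == 0)
                    end--;
            if (end - buf < frame_len)
                    return (NULL);
            return (end - frame_len);
    }

The frame found this way is then copied into an mbuf and passed up as
usual.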
The workaround is only enabled for chips with PCI revision == 33. The
LinkSys LNE100TX and Matrox FastNIC 10/100 cards use a revision 32 chip
and work fine in promiscuous mode. Netgear support has confirmed that
they "have some previous knowledge of problems in promiscuous mode" but
didn't have a workaround. The people at Lite-On who would be able to
suggest a possible fix are on vacation. So, I decided to implement a
workaround of my own until I hear from them. I suppose this problem made
it through Netgear's QA department since Windows doesn't normally use
promiscuous mode, and if Windows doesn't need the feature then it can't
possibly be important, right? Grrr.
this has a problem with capture, but I am not sure if it is related
to the mixer or something else. But in the meantime, this is OK
for listening to MPEGs.
I also have a much simpler version of the driver in the works which
reuses a lot more of the existing "pcm" routines. Next year...