the receive code so that it correctly chains receive descriptors together
and handles the case that only a part of a packet is done at the time
we get here.
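For illustration, descriptor chaining on a tulip-class receive ring usually
boils down to something like the sketch below. Every name in it (struct
rx_desc, RDES_OWN, RDES_LAST, RX_RING_CNT) is an assumption made up for the
example, not this driver's actual symbols:

    #include <sys/types.h>

    /* Illustrative layout and flags -- not the driver's real definitions. */
    struct rx_desc {
            volatile u_int32_t status;      /* ownership + fragment flags */
            u_int32_t          buf_addr;
    };
    #define RDES_OWN    0x80000000          /* still owned by the chip */
    #define RDES_LAST   0x00000100          /* last fragment of a frame */
    #define RX_RING_CNT 64

    /*
     * A frame may span several chained descriptors.  Only consume it once
     * the descriptor carrying the last-fragment bit has been handed back;
     * until then only part of the packet has been DMAed.
     */
    static int
    rx_frame_complete(struct rx_desc *ring, int start)
    {
            int i, pos = start;

            for (i = 0; i < RX_RING_CNT; i++) {
                    if (ring[pos].status & RDES_OWN)
                            return (0);     /* chip still filling this frame */
                    if (ring[pos].status & RDES_LAST)
                            return (1);     /* whole frame is in memory */
                    pos = (pos + 1) % RX_RING_CNT;  /* follow the chain */
            }
            return (0);     /* defensive: no end-of-frame found */
    }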
This is just to make sure we initialize the chip correctly: we need to
make sure the port select bit in CSR6 is set properly so that we
use the internal PHY for 10/100 support. (The eval boards I have also
include an external HomePNA PHY, but I need to play with that more
before I can support it.)
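A minimal sketch of the CSR6 fix-up this describes, borrowing dc-style
register macros; the DC_NETCFG/DC_NETCFG_PORTSEL names and the polarity of
the bit are assumptions here and depend on the particular chip:

    /* Sketch only: whether port-select must be set or cleared for the
     * internal PHY varies by chip; names are illustrative. */
    u_int32_t netcfg;

    netcfg = CSR_READ_4(sc, DC_NETCFG);     /* CSR6 */
    netcfg |= DC_NETCFG_PORTSEL;            /* steer traffic to the internal PHY */
    CSR_WRITE_4(sc, DC_NETCFG, netcfg);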
Collect together the components of several drivers and export eisa from
the i386-only area (it's not i386-only; it's found on some Alphas too). The
code hasn't been updated to work on the Alpha yet, but that can come later.
Repository copies were done a while ago.
Moving these now keeps them in a consistent place across the 4.x series
as the newbusification progresses.
Submitted by: mdodd
packets into a single buffer, and set the DC_TX_COALESCE flag for the
Davicom DM9102 chip. I thought I had escaped this problem, but... This
chip appears to silently corrupt or discard transmitted frames when
using scatter/gather DMA (i.e. DMAing each packet fragment in place
with a separate descriptor). The only way to ensure reliable transmission
is to coalesce transmitted packets into a single cluster buffer. (There
may also be an alignment constraint here, but mbuf cluster buffers are
naturally aligned on 2K boundaries, which seems to be good enough.)
The DM9102 driver for Linux written by Davicom also uses this workaround.
Unfortunately, the Davicom datasheet has no errata section describing
this or any other apparently known defect.
Problem noted by: allan_chou@davicom.com.tw
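For reference, the workaround amounts to copying the outgoing chain into one
cluster before handing it to the chip. This is a hedged sketch using the
standard mbuf routines, not the driver's actual encap path:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    /*
     * Sketch: collapse a fragmented mbuf chain into a single cluster so the
     * chip DMAs one contiguous buffer from one descriptor.  Clusters are
     * naturally 2K-aligned, which also covers any alignment constraint.
     */
    static struct mbuf *
    coalesce_chain(struct mbuf *m_head)
    {
            struct mbuf *m_new;

            MGETHDR(m_new, M_DONTWAIT, MT_DATA);
            if (m_new == NULL)
                    return (NULL);
            MCLGET(m_new, M_DONTWAIT);
            if ((m_new->m_flags & M_EXT) == 0) {
                    m_freem(m_new);
                    return (NULL);
            }
            m_copydata(m_head, 0, m_head->m_pkthdr.len, mtod(m_new, caddr_t));
            m_new->m_len = m_new->m_pkthdr.len = m_head->m_pkthdr.len;
            m_freem(m_head);
            return (m_new);
    }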
makes it a little easier to notice that parity checking on an 8-bit SRAM
isn't working.
Turn on SCB and internal data-path parity checking for all PCI chip types.
We were only doing this for ultra2 chips.
After clearing the parity interrupt status, clear the BRKADRINT. This
avoids seeing a bogus BRKADRINT interrupt after external SCB probing
once normal interrupts are enabled.
controllers will run at U2 speeds until I can complete the U160 support
for this driver.
Correct a termination buglet for the 2940UW-Pro.
Be more paranoid in how we probe and enable external RAM, fast external
RAM timing, and external RAM parity checking. We should now work with
20ns and 8-bit SRAM parts.
Perform initial setup for the DT feature on cards that support it.
Factorize and clean up code. Use tables where it makes sense, etc.
Add some delays in dealing with the board control logic. I've never
seen this code fail, but with the ever-increasing speed of processors,
it's better to insert deterministic delays just to be safe. This stuff
is only touched during probe and attach, so the extra delay is of no
concern.
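An illustration of the sort of delay being added; the register name and the
100us figure are made up for the example, but DELAY(9) (a busy-wait in
microseconds) is the mechanism:

    /* Sketch: probe/attach path only, so a fixed busy-wait is harmless. */
    ahc_outb(ahc, BCTL, val);       /* poke the board control logic */
    DELAY(100);                     /* give the external logic time to settle */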
driver seems relatively functional, but could use some souping up,
particularly in the performance area. This has both NetBSD and FreeBSD
attachment code and a fair amount of effort has been put into making
it easy to port to different *BSD platforms.
The basic design is one tfd per mbuf on transmit (with no transmit-related
interrupts- tfds are gc'd as needed). The receive ring
uses a 2K buffer per rfd with a +2 byte adjust for the ethernet
header (so the payload is aligned). There's support that *almost*
works for doing large packets- the rfd chaining code works, but there's
some problem with getting good checksums at the IP reassembly level
(ditto for doing short tfds too).
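The 2K-buffer-plus-2 receive setup is the usual trick for longword-aligning
the IP header behind the 14-byte Ethernet header. A hedged sketch of what
that looks like when filling in an rfd (the rfd field names are assumptions,
and error handling is omitted):

    #define ETHER_ALIGN     2       /* conventional name for the +2 adjust */

    /* Sketch: shift the DMA start 2 bytes into the cluster so the 14-byte
     * Ethernet header leaves the IP payload 4-byte aligned. */
    MGETHDR(m, M_DONTWAIT, MT_DATA);
    MCLGET(m, M_DONTWAIT);
    m->m_data += ETHER_ALIGN;
    rfd->rfd_buffer_addr = vtophys(mtod(m, caddr_t));
    rfd->rfd_buffer_length = MCLBYTES - ETHER_ALIGN;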
The chip has support for TCP checksum insertion on transmit and
TCP checksum calculation on receive (for both you have to do some
appropriate backoff && twiddling), but this isn't in place.
This is nearly entirely reverse engineered from the released Intel
driver, so there's a lot of "We have to do this but do not know why"
stuff. There is somebody who has the chip specs and works on FreeBSD,
but they're being a bit standoffish about even sharing hints, which is
somewhat annoying. It's also apparent that all I had to work with
were the first rev boards.
This driver has been lightly tested on intel && alpha, but only
point-to-point. There may be some issues with switches- use of
boot-time environment variables that override EEPROM settings
(e.g., 'set wx_ilos=1' which inverts the sense of optical signal
loss) may help with this.
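The override mechanism itself is just a kernel-environment lookup at attach
time; a sketch of the idea (only the wx_ilos variable name comes from above,
the softc field and bit name are assumed):

    /* Sketch: let a boot-time variable override the EEPROM-derived value. */
    int ival;

    if (getenv_int("wx_ilos", &ival) && ival != 0)
            sc->wx_dcr |= WX_DCR_ILOS;      /* invert optical loss-of-signal sense */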
I had this out for review for three weeks, and nobody said anything
negative or positive, ergo, this checkin has no 'reviewed by' field
which I would have preferred.
down, the dc driver and the chip's receiver can fall out of sync with one another,
resulting in a condition where the chip continues to receive packets
but the driver never notices. Normally, the receive handler checks each
descriptor starting from the current producer index to see if the chip
has relinquished ownership, indicating that a packet has been received.
The driver hands the packet off to ether_input() and then prepares the
descriptor to receive another frame before moving on to the next
descriptor in the ring. But sometimes, the chip appears to skip a
descriptor. This leaves the driver testing the status word in a descriptor
that never gets updated. The driver still gets "RX done" interrupts but
never advances further into the RX ring, until the ring fills up and the
chip interrupts again to signal an error condition. Sometimes, the
driver will remain in this desynchronized state, resulting in spotty
performance until the interface is reset.
Fortunately, it's fairly simple to detect this condition: if we call
the rxeof routine but the number of received packets doesn't increase,
we suspect that there could be a problem. In this case, we call a new
routine called dc_rx_resync(), which scans ahead in the RX ring to see
if there's a frame waiting for us somewhere beyond what the driver thinks
is the current producer index. If it finds one, it bumps up the index
and calls the rxeof handler again to snarf up the packet and bring the
driver back in sync with the chip. (It may actually do this several times
in the event that there's more than one "hole" in the ring.)
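In outline, the recovery path looks roughly like the sketch below. It follows
the description above, but the descriptor and ring names (dc_rx_prod,
dc_rx_list, dc_status, DC_RXSTAT_OWN, DC_RX_LIST_CNT) and the use of a packet
count returned by dc_rxeof() are assumptions rather than the committed code:

    /*
     * Sketch of dc_rx_resync(): scan forward from the stale descriptor for
     * one the chip has already relinquished; if found, bump the producer
     * index past the hole so the next rxeof pass picks the frame up.
     */
    static int
    dc_rx_resync(struct dc_softc *sc)
    {
            int i, pos;

            pos = sc->dc_cdata.dc_rx_prod;
            for (i = 0; i < DC_RX_LIST_CNT; i++) {
                    if (!(sc->dc_ldata->dc_rx_list[pos].dc_status &
                        DC_RXSTAT_OWN))
                            break;          /* completed frame beyond the hole */
                    pos = (pos + 1) % DC_RX_LIST_CNT;
            }
            if (i == DC_RX_LIST_CNT)
                    return (0);             /* nothing waiting; ring really is empty */

            sc->dc_cdata.dc_rx_prod = pos;  /* jump past the skipped descriptor */
            return (1);
    }

    /* Caller, in the interrupt handler (sketch): if rxeof consumed nothing
     * but a frame is waiting further along, resync and try again. */
    if (dc_rxeof(sc) == 0 && dc_rx_resync(sc))
            dc_rxeof(sc);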
So far the only card supported by if_dc which has exhibited this problem
is a LinkSys LNE100TX v2.0 (82c115 PNIC II), and it only seems to happen
on one particular system; however, the fix is general enough and has low
enough overhead that we may as well apply it for all supported chipsets.
I also implemented the same fix for the 3Com xl driver, which is apparently
vulnerable to the same problem.
Problem originally noted and patch tested by: Matt Dillon
probes are at the 'chip' level and will get overridden by pcic_p if it is
compiled in. It's still nice to get the better probe message if it's not...
Requested by: imp
is an application-space macro and applications are supposed to be free
to use it as they please (but cannot). This is consistent with the other
BSDs, which made this change quite some time ago. More commits to come.