like the original PNIC and the MX98715A (from which the PNIC II is derived).
This requires special handling. Save the card type, and in mx_calchash(),
if we see that the card is a PNIC, return only the low 7 bits of the
hash instead of the low 9 bits.
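As a rough sketch (constant and argument names here are illustrative, not
the actual driver code), the masking amounts to:

    /*
     * Sketch: a 7-bit index addresses a 128-entry hash table, a 9-bit
     * index a 512-entry one.  MX_TYPE_PNIC is an illustrative name for
     * the saved card type.
     */
    static u_int32_t
    mx_hash_index(int card_type, u_int32_t crc)
    {
            if (card_type == MX_TYPE_PNIC)
                    return (crc & 0x7F);    /* low 7 bits */
            return (crc & 0x1FF);           /* low 9 bits */
    }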
support is compiled in)
2) Add probing for generic USB host controllers as well so we get them all
3) Make the returned strings look alike in the whole file
Also removed the BSDI support (for now)
This allows the driver to be loaded/unloaded as a KLD
and loaded in the boot loader phase without making a custom kernel.
happens if you have a BIOS with a 'Plug & Play OS' setting and you leave
it set to 'Yes.' This is wrong for FreeBSD (and LoseNT): it should be set
to 'No.' Apparently it's still possible to map the iobase of the NIC and
have the card work by reading the config space manually (which is what
the driver does if pci_map_port() fails), but the user should still fix
their machine. So, warn the user to check the 'Plug & Play OS' setting in
their BIOS if mapping the I/O space fails.
The driver now identifies the IBM PCI-PCI Bridge fitted to newer
Matrox cards and initialises it.
Submitted by: Anton Berezin <tobez@plab.ku.dk>
The Protein Laboratory, University of Copenhagen
via an IBM PCI-PCI bridge (82351 or 82352 or 82353)
The driver must identify if it is on a secondary PCI bus, which is
created via the IBM PCI-PCI bridge. If it is, then it must initialise
the IBM PCI-PCI bridge correctly.
To do this, the following new functions are added.
Because they use the pcici_t tag, they are considered 2.2 compatibility APIs
pcici_t * pci_get_parent_from_tag(pcici_t tag);
int pci_get_bus_from_tag(pcici_t tag);
(The _from_tag suffix is used to prevent clashes with similarly named
newbus PCI API functions)
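A rough sketch of how a driver might use these calls (the NULL check and
the bridge test are assumptions for illustration, not part of the API
description above):

    /* Sketch: is this device sitting behind a PCI-PCI bridge? */
    static int
    foo_behind_bridge(pcici_t tag)
    {
            pcici_t *parent;

            if (pci_get_bus_from_tag(tag) == 0)
                    return (0);             /* on the primary bus */
            parent = pci_get_parent_from_tag(tag);
            if (parent == NULL)             /* assumed failure mode */
                    return (0);
            /* the driver would then check whether *parent is the IBM bridge */
            return (1);
    }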
Submitted by: Anton Berezin <tobez@plab.ku.dk>
Reviewed by: Doug Rabson <dfr@nlsystems.com>
Reworked by: Me (roger)
The cdevsw_add() function now finds the major number(s) in the
struct cdevsw passed to it. cdevsw_add_generic() is no longer
needed, cdevsw_add() does the same thing.
cdevsw_add() will print a message if the d_maj field looks bogus.
Remove nblkdev and nchrdev variables. Most places they were used
bogusly. Instead check a dev_t for validity by seeing if devsw()
or bdevsw() returns NULL.
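For example, a driver now validates a dev_t roughly like this (a sketch,
not a copy of any particular driver):

    /* Sketch: the devsw() lookup itself is the validity check. */
    if (devsw(dev) == NULL)
            return (ENXIO);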
Move bdevsw() and devsw() functions to kern/kern_conf.c
Bump __FreeBSD_version to 400006
This commit removes:
72 bogus makedev() calls
26 bogus SYSINIT functions
if_xe.c bogusly accessed cdevsw[], author/maintainer please fix.
I4b and vinum not changed. Patches emailed to authors. LINT
probably broken until they catch up.
chipset. First you thrilled to the 3c905, then you trembled at the
3c905B, now gaze in wonder at: the 3c905C! This appears to be another
3c90X series chip called the Tornado (PCI ID 0x10B7/0x9200) and should
be equivalent (from the driver API perspective) to the 3c905B, so all
we have to do is add the PCI ID to the list.
Reformat and correctly initialize all "struct cdevsw" declarations.
Initialize the d_maj and d_bmaj fields.
The d_reset field was not removed, although it is never used.
I used a program to do most of this, so all the files now use the
same consistent format. Please keep it that way.
Vinum and i4b not modified, patches emailed to respective authors.
similar to the PNIC I (supported by the pn driver). In fact, it's really
a Macronix 98715A with wake on LAN support added. According to LinkSys,
the PNIC II was jointly developed by Lite-On and Macronix. I get the
feeling Macronix did most of the work. (The datasheet has the Macronix
logo on it, and is in fact nearly identical to the 98715 datasheet, except
for the extra wake on LAN registers.) In any case, the PNIC II works just
fine with the Macronix driver.
The changes are:
- Move PCI ID for the PNIC II from the pn driver to the mx driver.
- Mention PNIC II support in mx.4.
- Mention PNIC II support in RELNOTES.TXT and HARDWARE.TXT.
Allow chipset drivers to specify the direct-mapped DMA window's mask in
preparation for tsunami support. Previous chipsets' direct-mapped DMA
mask was always 1024*1024*1024. The Tsunami chipset needs it to be
2*1024*1024*1024
These changes should not affect the i386 port
Reviewed by: Doug Rabson <dfr@nlsystems.com>
- Clear the IFF_OACTIVE flag when al_txeof() runs down the last TX mbuf chain.
- Mark the workaround for the transmitter stalling bug with
#ifdef AL_TX_STALL_WAR/#endif.
only support 'mirroring' the vendor and device ids, so we don't
lose any information. Certain revisions of the aic7880 will not
perform the mirroring, so matching all possibilities would double
the number of table entries. This change also allows us to match
things like the 2944B which I missed in the original table.
This is an old OPTi chipset.
If you use a Bt878 card with this chipset, be sure to enable
the SIS/VIA chipset compatibility mode workaround.
Tested By: Ben Laurie <ben@algroup.co.uk>
SIS/VIA/OPTi chipset PCI bus workarounds.
These make the Bt878/879 chips more stable on certain
older and non-Intel motherboards.
Use options BKTR_430_FX_MODE
or options BKTR_SIS_VIA_MODE
to enable these modes.
Also rename 849 to 849A
in the transmit code: the TX descriptor ring, and a 'shadow' ring of mbuf
pointers, one for each TX descriptor. When transmitting a packet that
consists of several fragments in an mbuf chain, we link each fragment
to a descriptor in the TX ring, but we only save a pointer to the mbuf
chain. This pointer is saved in the shadow ring entry which corresponds
to the first fragment in the packet. Later, ti_txeof() can release the
whole chain with a single m_freem() call. (We need the second ring to
keep track of the virtual addresses of the mbuf chains.)
The problem with this is that the Tigon isn't actually through with the
mbuf chain until it reaches the last fragment (which has the TI_BDFLAG_END
bit set), however the current scheme releases the mbuf chain as soon as
the first fragment is consumed. This is wrong, since the mbufs can then
be yanked out from under the Tigon and modified before the other fragments
can be transmitted.
The fix is to make a one line change to ti_encap() so that it saves the
mbuf chain pointer in the shadow ring entry that corresponds to the last
fragment in the TX ring instead of the first. This prevents the mbufs from
being released until the last fragment is transmitted.
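As a hedged sketch (array and index names are illustrative, not the
literal driver code), the change amounts to:

    /*
     * Sketch: record the mbuf chain against the LAST descriptor used
     * for this packet, so ti_txeof() frees it only after the whole
     * chain has been transmitted.
     */
    sc->ti_cdata.ti_tx_chain[last_idx] = m_head;    /* was: [first_idx] */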
Painstakingly diagnosed and fixed by: Robert Picco <picco@mail.wevinc.com>
Brought to my attention by: dg
driver lacks error recovery and still needs more testing, but it's
about time I got it under revision control.
Submitted by: Tekram Inc.
Bus Space/DMA and cleanup: gibbs
ADMtek AL981 "Comet" chipset. The AL981 is yet another DEC tulip clone,
except with simpler receive filter options. The AL981 has a built-in
transceiver, power management support, wake on LAN and flow control.
This chip performs extremely well; it's on par with the ASIX chipset
in terms of speed, which is pretty good (it can do 11.5MB/sec with TCP
easily).
I would have committed this driver sooner, except I ran into one problem
with the AL981 that required a workaround. When the chip is transmitting
at full speed, it will sometimes wedge if you queue a series of packets
that wrap from the end of the transmit descriptor list back to the
beginning. I can't explain why this happens, and none of the other tulip
clones behave this way. The workaround is to just watch for the end
of the transmit ring and make sure that al_start() breaks out of its
packet queuing loop and waits until the current batch of transmissions
completes before wrapping back to the start of the ring. Fortunately, this
does not significantly impact transmit performance.
This is one of those things that takes weeks of analysis just to come
up with two or three lines of code changes.
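The workaround looks roughly like this sketch (names are illustrative,
not the committed code):

    /*
     * Sketch: if queuing another packet would wrap past the end of
     * the TX ring, stop here; al_txeof() restarts us once the current
     * batch of transmissions has completed.
     */
    if (next_idx >= AL_TX_LIST_CNT) {
            ifp->if_flags |= IFF_OACTIVE;
            break;
    }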
The specific intent of this commit is to pave the way for importing
Compaq XP1000 support. These changes should not affect the i386 port.
Reviewed by: Doug Rabson <dfr@nlsystems.com>
(actually, he walked me through most of it & deserves more than reviewed-by
credit.)
Change Intel GPIO mask to hopefully stop turning the Intel Camera off
Fixed tuner selection on Hauppauge card with tuner 0x0a
Replaced the "none" tuner with "no tuner" for Theo de Raadt <deraadt@openbsd.org>.
Ivan Brawley <brawley@internode.com.au> added
the Australian channel frequencies.
Sync up device Ids with the master Adaptec list.
Add probe support for the 2940 Pro although it isn't obvious that
all of the termination support is correct for this adapter yet.
driver to use bus_space_read_foo()/bus_space_write_foo(). The line is not
visible unless you compile the driver to use PCI memory mapped mode, which
is not done by default, but it should be fixed anyway.
displace a real driver.
Revert rev 1.109.
Pick up a few things from elsewhere (a couple of SiS id's).
As an *experiment*, have the chip* driver claim (for reporting purposes)
IDE controllers if there isn't another PCI-aware ide or ata driver to
grab them. I've exported the match function since it could be used from
the ata-all.c code replacing ata_pcimatch() - but I have not touched the
ata code. I'd like to catch a few more devices this way, including USB
and other bridges etc.
after some of the previous commits). Add in support for the 1240
dual channel ISP card. Try the dance of unmapping a PCI interrupt
if we don't configure (if that ever works it'll be helpful).
bttv's audio mux values.
Automatically locate the EEPROM i2c address and read the subsystem_vendor_id
from EEPROM and not the PCI registers.
Add NSMBUS checks around smbus/iicbus i2c bus code
Add GPIO mask for the audio mux to each card type.
Add CARD_ZOLTRIX and CARD_KISS from mailing list searches.
Tested by: Paul Reece <paul@fastlane.net.au>,
Ivan Brawley <brawley@internode.com.au> and
Gilad Rom <rom_glsa@ein-hashofet.co.il>
was available to the programmer to hold chip state information:
Use the SDID register instead of CTEST3. This change actually
simplifies the SCRIPTS code, but I'm not absolutely sure that
it is OK for all variants of NCR chips around and all device
combinations. I have had this code running on several systems
with 53c810, 875 and 895 controllers for several months.
Suggested by: Gerard Roudier <groudier@club-internet.fr>
#define COMPAT_PCI_DRIVER(name,data) DATA_SET(pcidevice_set,data)
.. to 2.2.x and 3.x if people think it's worth it. Driver writers can do
this if it's not defined. (The reason for this is that I'm trying to
progressively eliminate use of linker_sets where it hurts modularity and
runtime load capability, and these DATA_SET's keep getting in the way.)
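Driver writers targeting older branches can supply the fallback
themselves; roughly (foo_pci_driver is an illustrative name):

    #ifndef COMPAT_PCI_DRIVER
    #define COMPAT_PCI_DRIVER(name, data)   DATA_SET(pcidevice_set, data)
    #endif

    COMPAT_PCI_DRIVER(foo, foo_pci_driver); /* registers the old-style entry */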
interrupt configuration reported. (I just discovered my vga card is
being configured for irq 5... :-) This is just reporting. The vga_isa
driver does the real work using the isa compat mappings.
Restore 0x710110b9 ("AcerLabs M15x3 Power Management Unit") - but only
if NALPM == 0.
Restore 0x00051166 ("Ross (?) host to PCI bridge") so that
fixbushigh_Ross() gets called.
Delete generic_pci_bridge(), it's been replaced by other mechanisms (see
the isab and pcib match/probes and the pci_bridge_type() function)
NOTE: These changes will require recompilation of any userland
applications, like cdrecord, xmcd, etc., that use the CAM passthrough
interface. A make world is recommended.
camcontrol.[c8]:
- We now support two new commands, "tags" and "negotiate".
- The tags command allows users to view the number of tagged
openings for a device as well as a number of other related
parameters, and it allows users to set tagged openings for
a device.
- The negotiate command allows users to enable and disable
disconnection and tagged queueing, set sync rates, offsets
and bus width. Note that not all of those features are
available for all controllers. Only the adv, ahc, and ncr
drivers fully support all of the features at this point.
Some cards do not allow the setting of sync rates, offsets and
the like, and some of the drivers don't have any facilities to
do so. Some drivers, like the adw driver, only support enabling
or disabling sync negotiation, but do not support setting sync
rates.
- new description in the camcontrol man page of how to format a disk
- cleanup of the camcontrol inquiry command
- add support in the 'devlist' command for skipping unconfigured devices if
-v was not specified on the command line.
- make use of the new base_transfer_speed in the path inquiry CCB.
- fix CCB bzero cases
cam_xpt.c, cam_sim.[ch], cam_ccb.h:
- new flags on many CCB function codes to designate whether they're
non-immediate, use a user-supplied CCB, and can only be passed from
userland programs via the xpt device. Use these flags in the transport
layer and pass driver to categorize CCBs.
- new flag in the transport layer device matching code for device nodes
that indicates whether a device is unconfigured
- bump the CAM version from 0x10 to 0x11
- Change the CAM ioctls to use the version as their group code, so we can
force users to recompile code even when the CCB size doesn't change.
- add + fill in a new value in the path inquiry CCB, base_transfer_speed.
Remove a corresponding field from the cam_sim structure, and add code to
every SIM to set this field to the proper value.
- Fix the set transfer settings code in the transport layer.
scsi_cd.c:
- make some variables volatile instead of just casting them in various
places
- fix a race condition in the changer code
- attach unless we get a "logical unit not supported" error. This should
fix all of the cases where people have devices that return weird errors
when they don't have media in the drive.
scsi_da.c:
- attach unless we get a "logical unit not supported" error
scsi_pass.c:
- for immediate CCBs, just malloc a CCB to send the user request in. This
gets rid of the 'held' count problem in camcontrol tags.
scsi_pass.h:
- change the CAM ioctls to use the CAM version as their group code.
adv driver:
- Allow changing the sync rate and offset separately.
adw driver:
- Allow changing the sync rate and offset separately.
aha driver:
- Don't return CAM_REQ_CMP for SET_TRAN_SETTINGS CCBs.
ahc driver:
- Allow setting offset and sync rate separately
bt driver:
- Don't return CAM_REQ_CMP for SET_TRAN_SETTINGS CCBs.
NCR driver:
- Fix the ultra/ultra 2 negotiation bug
- allow setting both the sync rate and offset separately
Other HBA drivers:
- Put code in to set the base_transfer_speed field for
XPT_GET_TRAN_SETTINGS CCBs.
Reviewed by: gibbs, mjacob (isp), imp (aha)
- Change to the same transmit scheme as the PNIC driver.
- Dynamically set the cache alignment, and set burst size the same as
the PNIC driver in mx_init().
- Enable 'store and forward' mode by default. This is the slowest option
and it does reduce 100Mbps performance somewhat, but it's the most
reliable setting I can find. I'm more interested in having the driver
work reliably than trying to squeeze the best performance out of it.
The reason I'm doing this is that on *some* systems you may see a lot
of transmit underruns (which I can't explain: these are *fast* test
systems) and these errors seem to cause unusual and decidedly
non-tulip-like behavior. In normal 10Mbps mode, performance is fine
(you can easily saturate a 10Mbps link).
Also tweak some of the other drivers:
- Increase the size of the TX ring for the Winbond, ASIX, VIA Rhine
and PNIC drivers.
- Set a larger value for ifq_maxlen in the ThunderLAN driver. The setting
of TL_TX_LIST_CNT - 1 is too low (the ThunderLAN driver only allocates
20 transmit descriptors, and I don't want to fiddle with that now
because the ThunderLAN's descriptor structure is an oddball size
compared to the others).
uses the AUI port with an on-board AUI to 10baseFL transceiver, not the
10baseT port like I had earlier suspected. The 3c900B-FL should be properly
supported now.
bug in the stats accounting (nicSendBDs counter was bogus when TX ring was
configured to be in host memory).
Update if_tireg.h to look for new firmware fix level.
* Make the network code in the bootstrap more chatty (helps debugging)
* Add nfs root stuff to cpu_rootconf(). I also added a check to make sure
it really was netbooting which allows the use of the same kernel for local
and network boots.
* Tweak the de driver so that it takes the speed setting from the console
for the alpha (some PWSs have broken de chipsets). This is the same
behaviour as NetBSD/alpha.
Submitted by: Andrew Gallatin <gallatin@cs.duke.edu>
resource. Avoids useless interrupts occurring between the allocation
of the interrupt resource and the final initialisation of the
kernel. The cause of these interrupts is unknown (a resuming device?).
- Try to unbreak what I broke by screwing with the tx queuing again.
I'm waiting for a few more people to test out this code and report back
before I move it into current. Hopefully it will be soon. Basically I
reverted to the old TX queuing strategy.
- Add experimental support for the 3c900B-FL (10Mbps ST fiber). The card
should be detected properly and the 10baseFL mode supported, but again
I'm still waiting for word from a tester to see if this actually works.
It shouldn't affect the other cards though; all the differences are in
media selection.
- Set the TX start threshold register to get better performance.
- Increase the size of the RX and TX rings. UDP performance was pretty
bad because the TX ring was too small. Should be substantially better
now (I can saturate the link with either TCP or UDP now).
- Change some of the #defines to reflect proper 3Com ASIC names (boomerang,
cyclone, krakatoa, hurricane).
- Simplify and reorganize interrupt handler; ack all interrupts right
away and then process them. This avoids a potential race condition.
(Noted by Matt Dillon.)
- Reorganize the bridging code to eliminate using a goto to jump into
the middle of an if() {} clause. Sorry, that just made my brain itch.
- Use m_adj() in xl_rxeof().
- Make the payload alignment in xl_newbuf() the default (instead of
just conditionally defined for the alpha) to improve NFS performance
(avoids need for nfs_realign()).
from ever catching up to the transmit consumer index. We can't let this
happen because ti_txeof() depends on the assumption that producer == consumer
means the ring is empty, and producer != consumer means the ring has some
number of active descriptors in it.
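The usual way to preserve that invariant is to treat the ring as full one
slot early, roughly (index names are illustrative):

    /*
     * Sketch: refuse to queue a descriptor if doing so would make the
     * producer index catch up to the consumer index.
     */
    if (((prod + 1) % TI_TX_RING_CNT) == cons)
            return (ENOBUFS);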
This will allow software teletext/intercast/subtitles decoding
while watching a TV station.
Based on code from Hiroki Mori <mori@infocity.co.jp> but reworked by
myself.
Add new #ifdef. By defining BKTR_NO_MSP_RESET you can prevent the
MSP34xx being reset by the bt848 driver. This is handy
if you pre-initialise the MSP34xx stereo audio chip in another
operating system first (eg MS Windows).
Suggested by: Randal Hopper<aa8vb@ipass.net>
Suggested by: Yuri Gindin <yuri@xpert.com>
style pci drivers with a simple one-line change to use a module that
registers itself under new-bus and should in theory enable just about all
of the pci drivers to be loadable (kldload and loader(8)) but without
having the impact of converting the APIs yet.
This also fixes the problem of having undefined variables when only
new-style pci drivers are present.
Convert to new bus and bus dma.
Use latest PCI API.
bt_pci.c:
Fix a few bugs in how resources are released left over from
when this driver was converted to new bus.
Interrupts under the new scheme are managed by the i386 nexus with the
awareness of the resource manager. There is further room for optimizing
the interfaces still. All the users of register_intr()/intr_create()
should be gone, with the exception of pcic and i386/isa/clock.c.
had a quirk that made a shim rather hard to implement properly and it was
just easier to convert the drivers in one go. The changes to the
buslogic driver go beyond just this - the whole driver was new-bus'ed
including pci and isa. I have only tested the EISA part of this so far.
Submitted by: Doug Rabson <dfr@nlsystems.com>
i386 platform boots, it is no longer ISA-centric, and is fully dynamic.
Most old drivers compile and run without modification via 'compatibility
shims' to enable a smoother transition. eisa, isapnp and pccard* are
not yet using the new resource manager. Once fully converted, all drivers
will be loadable, including PCI and ISA.
(Some other changes appear to have snuck in, including a port of Soren's
ATA driver to the Alpha. Soren, back this out if you need to.)
This is a checkpoint of work-in-progress, but is quite functional.
The bulk of the work was done over the last few years by Doug Rabson and
Garrett Wollman.
Approved by: core
3c900B-TPC (twisted pair and coax). Treated similarly to the
3c900B-COMBO, except no AUI port.
- Fix media selection so that it's possible to select the AUI and BNC
ports on the 3c905B-COMBO. This board is now fully supported.
- Change TX queueing strategy to hopefully be more efficient by avoiding
register accesses in xl_start(). Should provide small performance
improvement and a little better reliability.
transceiver. Note in the manual page that autoselection doesn't
work on the 82c168 because the built-in NWAY support is horribly
broken. Manual mode selection works fine, but autoneg is broken for
everything except maybe 10Mbps half-duplex. There's no simple way
to fix this at the moment, so I have to settle for documenting the
bug for now. Fortunately, there aren't anywhere near as many 82c168
boards around as there are 82c169s.
All it did was match a specific device ID and turn on a quirk for
the wdc driver.
Incidentally, at line 1462 there is a return that prevents the generic
ide_pci code from trying to look at the device. I'd be interested
to know if we can take out the return and let the generic code "see" it.
I've left the return in because that's the way it worked before.
(Be sure to rerun config after cvsup or you'll get undefined files!)
perform a cleanup/unifdef sweep over it to tidy things up. The atapi
code is permanently attached to the wd driver and is always probed.
I will add an extra option bit in the flags to disable an atapi probe on
either the master or slave if needed, if people want this.
Remember, this driver is destined to die some time. It's possible that
it will lose all atapi support down the track and only be used for
dumb non-ATA disks and all ata/atapi devices will be handled by the new
ata system.
ATAPI, ATAPI_STATIC and CMD640 are no longer options, all are implicit.
Previously discussed with: sos
- It turns out that the 'promiscuous mode' bug that I discovered with the
PNIC is not restricted to promiscuous mode. I've been doing some remote
debugging for someone with a P75 system, and at 100Mbps, the receiver
screws up even when the NIC is in normal mode. Thus, enable the workaround
for this bug all the time. Note that the workaround is still not enabled
for the PNIC II, since I haven't tested one yet.
- Set the 'arbitration' bit in the bus configuration register and set the
maximum burst size to 16 longwords. This seems to fix problems with
transmit corruption on the P75 system mentioned above. (It probably hurts
performance a bit too, but I've given up trying to make the PNIC perform
well.)
- Rewrite the transmit section to be a little less bogus.
- Set ifq_maxlen correctly. RL_TX_LIST_CNT - 1 is wrong, because for the
RealTek, RL_TX_LIST_CNT is 4. Set it to IFQ_MAXLEN instead.
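In other words, a one-line sketch of the change:

    /* Sketch: don't tie the send queue length to the tiny TX ring. */
    ifp->if_snd.ifq_maxlen = IFQ_MAXLEN;    /* was RL_TX_LIST_CNT - 1 */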
(cut-down version of the "cyclone" for the small office/home office
"cheap bastard" market). Basically the same as a 3c905B but without
Wake-on-LAN, ROM socket, etc...
- Wait longer for the reset to complete in xl_attach() to try and avoid
'command never completed' warnings.
- Clean up a few odds and ends in xl_attach().
- Add PCI ID for the 3c905B-COMBO (a new card). Right now this is
treated as a 3c905B; I need to dig up one of these cards for testing
before I can make the AUI and BNC ports work.
- Add a hack to force reading the I/O address directly from the PCI
registers if pci_map_port() fails. I SHOULD NOT HAVE TO DO THIS:
SOMEBODY WITH MORE PCI CLUES THAN I SHOULD INVESTIGATE WHY THIS
HAPPENS.
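The fallback itself is conceptually simple (variable names are
illustrative; XL_PCI_LOIO here stands for the I/O BAR at config offset
0x10):

    /*
     * Sketch: pci_map_port() failed, so read the I/O base straight
     * from config space and strip the low I/O-space indicator bits.
     */
    iobase = pci_conf_read(config_id, XL_PCI_LOIO) & 0xFFFFFFFC;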
no more memory (M_WAITOK -> M_NOWAIT). It may be called early enough
during boot that M_WAITOK isn't OK. (In theory - right now it isn't called
from anywhere).
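The calling pattern simply becomes (a sketch; M_DEVBUF is an illustrative
malloc type):

    /* Sketch: never sleep here; the caller must cope with failure. */
    p = malloc(size, M_DEVBUF, M_NOWAIT);
    if (p == NULL)
            return (ENOMEM);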
transceiver. Thanks to Brian Walenze for donating a NIC with this chip
on it (LinkSys didn't really sell that many of them and they're not
in production anymore). The driver now distinguishes between the
82c168 and 82c169 when probing. If no MII transceiver is detected,
it switches over to using the internal one.
Oh, I forgot to mention: this driver also works on FreeBSD/alpha (big
thanks to Andrew Gallatin). And there is a 2.2.x version available for
those who stubbornly refuse to upgrade.
Networks Tigon 1 and Tigon 2 chipsets. There are a _lot_ of OEM'ed
gigabit ethernet adapters out there which use the Alteon chipset so
this driver covers a fair amount of hardware. I know that it works with
the Alteon AceNIC, 3Com 3c985 and Netgear GA620, however it should also
work with the DEC/Compaq EtherWORKS 1000, Silicon Graphics Gigabit
ethernet board, NEC Gigabit Ethernet board and maybe even the IBM
and Sun boards. The Netgear board is the cheapest (~$350US) but still
yields fairly good performance.
Support is provided for jumbo frames with all adapters (just set the
MTU to something larger than 1500 bytes), as well as hardware multicast
filtering and vlan tagging (in conjunction with the vlan support in
-current, which I should merge into -stable soon). There are some hooks
for checksum offload support, but they're turned off for now since
FreeBSD doesn't have an officially sanctioned way to support checksum
offloading (yet).
I have not added the 'device ti0' entry to GENERIC since the driver
with all the firmware compiled in is quite large, and it doesn't really
fit into the category of generic hardware.
Like the PNIC, we have to copy packet headers in the receive handler
because the chip will only DMA to longword aligned buffers.
Also do some minor cleanups.
the alpha. Now the ThunderLAN driver works on the alpha (both my
sample cards check out.) Update the alpha GENERIC config to include
ThunderLAN driver now that I've tested it.
- When trying to map ports, if mapping TL_PCI_LOIO or TL_PCI_LOMEM fails,
try mapping the other one. Apparently, some ThunderLAN parts swap these
two registers while others don't.
- Add support for bitrate (non-MII) PHYs. If no MII-based PHY is found,
program the chip for bitrate mode. This is required for the TNETE110
part, which doesn't have MII support. (It's also obsolete, but there
are still some people out there who have them.) With this change and the
change above, the Compaq Netflex-3/P 10baseT/BNC board works correctly.
(Thanks to Matthew Dodd for getting me one of these cards.)
- Convert to bus_space_foo() for register accesses.
- Add changes to support FreeBSD/Alpha. I still have to actually test
this in my Alpha box so I'm not going to update /sys/alpha/conf/GENERIC
yet.
Contributed-by: "Richard Seaman, Jr." <dick@tar.com>
Tested-by: Chris Piazza <cpiazza@home.net>
Tugrul Galatali <tugrul@ianai.BlackSun.org>
grog
This code includes lots of stuff for verbose probing. I'm not 100%
sure that the output of the verbose probe is correct, but everything
else works fine, and -CURRENT was broken for the 5591 before, so I'm
committing it anyway.
sys/alpha/conf/GENERIC.
Note: the PNIC ignores the lower few bits of the RX buffer DMA address,
which means we have to add yet another kludge to make it happy. Since
we can't offset the packet data, we copy the first few bytes of the
received data into a separate mbuf with proper alignment. This puts
the IP header where it needs to be to prevent unaligned accesses.
Also modified the PNIC driver to use a non-interrupt driven TX
strategy. This improves performance somewhat on x86/SMP systems where
interrupt delivery doesn't seem to be as fast with an SMP kernel as
with a UP kernel.
Recognize aic7895 controllers that have been "acquired" by a RAIDPort
card as normal aic7895s.
Recognize the aic7815 Raid Parity/Memory controller chip and notify
the user that its RAID functionality will be ignored.
gave yet another internal register layout model for what is
*still* the same architecture. I hope they saved billyuns of gates
'coz otherwise this is *really* annoying.
chip int. and ext. clock synchronisation). Fixed workaround for
transmit threshold underrun. Added volatile keyword to CSR_READ_* and
CSR_WRITE_* macros. Added DELAYs to eliminate randomness caused
by processor speed. Fixed all TXCON and RXCON registers to be accessed
only when the chip is idle, as the manual requires. Changed epic_init_phy
to drop the link by isolating the PHY and going into loopback, which
should force the link partner to restart autonegotiation.
PR: kern/10535, kern/9742, kern/10575
Submitted by: Peter Jeremy, David Greenman
a weird double-queue arrangement.. It always empties the if_snd queue
then puts the transmit packets into a different queue that is limited
by the number of TX descriptors and does its own discards...
This should stop the boot-time XXX warning anyway.
The i++ loop from 1..1000 is too small on very fast machines like
PII 450 MHz. Increasing the loop from 1..100000 lets the machine
access the PHY. After this patch it's possible to use an SMC PCI card
on an HP Kayak XA series PC Workstation. The workaround until this fix
was to enable debugging in the driver (#define EPIC_DEBUG 1).
Without that patch you get an undefined state:
while true
do
ifconfig -a | grep status:
done
The status message flaps between two values, but never shows
"connected".
Obtained from: Ustimenko Semen <semen@iclub.nsu.ru>
tulip_addr_filter() on SIOCSIFFLAGS, and was nuking the IFF_ALLMULTI
on entering tulip_addr_filter(). As a result it was impossible to run
a multicast router on a machine with a "de" interface.
Added autodetection of MMAC Osprey 100 card for
Jan Schmidt <mmedia@rz.uni-greifswald.de>. The MMAC card has an EEPROM
which contains an ASCII string beginning with "MMAC".
Corrected Hauppauge Audio Mux Mute value from 0x01 to 0x04.
Fixed a typo.
Submitted change:
Added ALPS Tuner Type submitted by Hiroki Mori <mori@infocity.co.jp>
Submitted by: Roger Hardiman and Hiroki Mori <mori@infocity.co.jp>
Addtron appear to have their own VIA Rhine II and RealTek 8139 boards
with custom PCI vendor and device IDs. This commit updates the PCI
vendor and device lists in the vr and rl drivers so that we can probe
the additional devices.
Found by: nosing around the PCI vendor and device code list at:
http://www.halcyon.com/scripts/jboemler/pci/pcicode
AX88140A with power management and magic packet support. Correct the
addresses of the PCI power management registers and add some code to
detect the revision ID of the AX88141 and identify it in the probe
messages.
No other changes are needed since the AX88141 is functionally
identical to the AX88140A.
Now should be able to report speed for cards using NatSemi PHY.
(if you have one please let me know if it works as I
only have the Intel version)
Reviewed by: David Greenman <dg@root.com>
3c905B, the RX and TX reset commands also reset the cyclone chip's internal
PHY, which causes it to restart its autonegotiation session. This takes a
second or two to complete, which makes the interface seem to stop responding
for a few seconds every time you do something that reinitializes it.
Issuing the RX and TX resets on the older 3c905 boomerang adapters doesn't
cause any delay because the boomerang chip requires an external PHY.
This should fix the problem where people doing network installs via 3c905B
cards experience a delay after the interface is first initialized, among
other things.
Submitted by Roger Hardiman.
Added ioctl TVTUNER_GETCHANSET to discover which regions the bktr driver
supports. Submitted by Vsevolod Lobko <seva@alex-ua.com>
Added BT848_GPIO_SET_EN,BT848_GPIO_SET_DATA (and GETs) to allow user land
control of the GPIO pins. This allows a Radio module on the GPIO port
to be controlled. Submitted by Vsevolod Lobko <seva@alex-ua.com>
The kernel option BKTR_GPIO_ACCESS must be used to enable the GPIO ioctls.
Submitted by: Roger Hardiman and Vsevolod Lobko <seva@alex-ua.com>
Improved MSP34xx reset for bt848 Hauppauge boards.
Added detection for Bt848a.
Vsevolod Lobko<seva@sevasoft.alex-ua.com> added more XUSSR channels.
Submitted by: parts from Vsevolod Lobko<seva@sevasoft.alex-ua.com>
Obtained from: parts from OpenBSD
v_caddr_t with extreme prejudice. Here the bogons were originally
the same as for c_caddr_t (half-baked K&R support), but rev.1.95
changed one wrong cast and one harmless cast to 2 wrong casts,
and rev.1.96 only fixed the originally wrong cast.
c_caddr_t with extreme prejudice. Here the original casts to
caddr_t were to support K&R compilers (or missing prototypes),
but the relevant source files require an ANSI compiler.
Auto Detection Mode. This leaves MSP3400C owners still unsupported.
Thanks to Gerd Knorr <kraxel@cs.tu-berlin.de> for providing some early
assistance and sample code in the linux bttv driver.
Nicolas Souchu <nsouch@freebsd.org> ported the msp_read/write/reset
functions to smbus/iicbus.
METEOR_INPUT_DEV2 now selects a composite camera on the SVIDEO port.
For true SVIDEO, use METEOR_INPUT_DEV_SVIDEO.
If you get a monochrome image from the SVIDEO port, you have
selected the wrong input type.
Tested by: Johan Larsson<gozer@ludd.luth.se>
address, account for cards which report the Texas Instruments PCI vendor ID
in addition to Compaq and Olicom. (I don't actually have a card that
reports the TI vendor/device ID, but it appears that some Racore adapters
work this way, and failing to account for it when we have the ID listed
in the supported devices list is a bug.)
iicbus(4) and smb(4).
User programs are available to retrieve SDRAM and sensor info, contact
the author.
Submitted by: Takanori Watanabe <takawata@shidahara1.planet.sci.kobe-u.ac.jp>
Reviewed by: Mike Smith <msmith@freebsd.org>
of the CRC as the multicast hash table bit, not the lower six bits. Plus
we have to flip on all bits in the table for multicast mode.
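Roughly, the hash setup becomes (a sketch with illustrative variable
names; the CRC itself is computed elsewhere):

    u_int32_t hashes[2] = { 0, 0 };         /* 64-bit multicast hash table */
    int idx;

    idx = crc >> 26;                        /* UPPER six bits of the CRC */
    hashes[idx >> 5] |= 1 << (idx & 0x1F);  /* not (crc & 0x3F), the lower six */
    /* and in the all-bits case, hashes[0] = hashes[1] = 0xFFFFFFFF */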
Pointed out by: Kazushi SUGYO <k-sugyou@nwsl.mesh.ad.jp>
The previous code just ignored the invalid map register, but this gave
surprising results because of the way pci_map_port() associated the map
register offset supplied with a map entry in the map array.
to look up cookies properly, at least for standard controllers.
Cookies are used so that we don't have to pass around lots of args.
All of the dmainit functions use the unit number so it is essential
that we pass them a cookie with the correct unit number.
This may break working configurations if there are bugs in the
dmainit functions like the ones I just fixed for VIA chipsets.
Broken in: rev 1.4 of ide_pci.c and rev.1.139 of wd.c.
Prefetch/postwrite was enabled for the wrong controller. (VIA
is bitwise big endian and we confused ourself by shifting left
instead of right.)
Extracted from: last set of patches from the author
(john hood <cgull@smoke.marlboro.vt.us>) on 7 Feb 1998
Instead of initializing UDMA mode, we turned it off and made sure that
it stays off by turning on the "UDMA enable by SET FEATURES" disable.
The damage was limited by bugs in cookie lookup, and suitable
initialization by some BIOSes. The cookie list has slaves before
masters, and the unit number is ignored when cookies are looked up,
so cookie lookup always finds cookies for slaves and the bug only
clobbers slaves, so the bug was harmless for common configurations
with no slaves or only non-UDMA slaves. UDMA initialization for
masters actually worked if the BIOS turns on the UDMA mode bit and
turns off the "UDMA enable by SET FEATURES" disable.
- In wb_rxeof(), if the received packet is less than MINCLSIZE bytes,
copy it to an mbuf chain so as to be more frugal in our use of mbuf
clusters.
- The Winbond chip, like the ASIX, wants the 'TX interrupt request'
bit set in the _first_ fragment of a transmitted frame, not the
last. (At least the Winbond manual states this unambiguously; too
bad I wasn't paying attention when I read it the first time.)
- Turn off the transmit threshold mechanism (initialize the threshold
to 0). This effectively puts the chip in 'store and forward' mode
which seems to cut down on transmit errors a little. It may also
reduce transmit performance a bit, but I'm willing to do that if it
means better reliability.
- Normally, the driver allocates an mbuf cluster for each receive
descriptor. This is because we have to be prepared to accommodate up to
1500 bytes (a cluster buffer can hold up to 2K). However, using up a
whole cluster buffer for a tiny packet is a bit of a waste. Also,
it seems to me that sometimes mbufs will linger in the kernel for
a while after being passed out of the driver, which means we might
drain the mbuf cluster pool. The cluster pool is smaller than the
mbuf pool in general, so we do the following: if the packet is less
than MINCLSIZE bytes, then we copy it into a small mbuf chain and
leave the mbuf cluster in place for another go-round. This saves
mbuf clusters in some cases while still allowing them to be used
for heavy traffic exchanges with lots of full-sized frames. (A sketch
of this copy follows this list.)
- The transmit descriptor has a bit in the control word which allows
the driver to request that a 'TX OK' interrupt be generated when
a frame has been completed. Sometimes, a frame can be fragmented
across several descriptors. The manual for the real DEC 21140A says
that if this happens, the 'TX interrupt request' bit is only valid
in the descriptor of the last fragment. With the ASIX chip, it seems
the 'TX interrupt request' bit is only valid in the descriptor of
the _first_ fragment. Actually, the manual contains conflicting
information, but I think it's supposed to be the first fragment.
To play it safe, set the bit in both the first and last fragment to
be sure that we get a TX OK interrupt. Without this fix, the driver
can sometimes be late in releasing mbufs from the transmit queue
after transmission.
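A sketch of the small-packet copy described in the first item above
(names are illustrative; m0 is the cluster mbuf already sitting on the
RX ring, and m_devget() copies the data into a fresh small chain):

    if (total_len < MINCLSIZE) {
            m = m_devget(mtod(m0, char *), total_len, 0, ifp, NULL);
            /* m0 and its cluster stay on the RX ring for reuse */
            if (m == NULL) {
                    ifp->if_ierrors++;
                    continue;       /* drop this frame */
            }
    }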
if option CY_PCI_FASTINTR is configured and mapping the irq to a
fastintr is possible. Unfortunately, this has to be optional because
pci_map_int_right() doesn't handle the INTR_EXCL flag right --
INTR_EXCL is honoured even if the interrupt needs to be non-exclusive
for other devices to work.
using the new pci_map_int_right() variant of pci_map_int(). Fast
interrupts work for PCI devices if and only if they are exclusive.
(The PCI interrupt mux doesn't support fast interrupts and can't
support a mixture of fast and slow interrupts even in principle.)
Don't assume that intrmask_t == unsigned in pci_map_int().
register for the PLX id). Merge the vendor's modification of the 2.2.*
release version into -current for reference. Will be cleaned up in next
commit.
Obtained from: ftp://ftp.cyclades.com/pub/cyclades/cyclom-y/freebsd/3.0/cyy30.tar.gz
performance and reliability a little. There was a condition before
where transmission would stall during periods of heavy traffic
exchange between two hosts. Also set the 'want interrupt' bit in
receive descriptor control words.
on the ASIX AX88140A chip. Update /sys/conf/files, RELNOTES.TXT,
/sys/i386/i386/userconfig.c, sysinstall/devices.c, GENERIC and LINT
accordingly.
For now, the only board that I know of that uses this chip is the
Alfa Inc. GFC2204. (Its predecessor, the GFC2202, was a DEC tulip card.)
Thanks again to Ulf for obtaining the board for me. If anyone runs
across another, please feel free to update the man page and/or the
release notes. (The same applies for the other drivers.)
FreeBSD should now have support for all of the DEC tulip workalike
chipsets currently on the market (Macronix, Lite-On, Winbond, ASIX).
And unless I'm mistaken, it should also have support for all PCI fast
ethernet chipsets in general (except maybe the SMC FEAST chip, which
nobody seems to ever use, including SMC). Now if only we could convince
3Com, Intel or whoever to cough up some documentation for gigabit
ethernet hardware.
Also updated RELNOTES.TXT to mention that the SVEC PN102TX is supported
by the Macronix driver (assuming you actually have an SVEC PN102TX with
a Macronix chip on it; I tried to order a PN102TX once and got a box
labeled 'Hawking Technology PN102TX' that had a VIA Rhine board inside
it).
and mx_setcfg() so that we can read the internal MII registers on the
MX98713 chip correctly. With these changes, media autoselection now
works correctly on the original 98713. All Macronix chips should now
be properly supported (unless there's a surprise waiting in the 98725).
Thanks to Ulf Zimmermann for providing a 98713 board.
isolated to revision 33 PNIC chips is also present in revision 32 chips.
Cards with rev. 32 chips include the LinkSys LNE100TX and the Matrox
FastNIC 10/100. This accounts for all the cards that I have to test
with.
(I was never able to personally trip the bug on this chip rev, but today
one of the guys in the lab did it with the software they're working on
for their cellular IP project, which uses BPF and promiscuous mode
extensively.)
This commit enables the promiscuous mode software workaround code for
both revision 32 and revision 33 chips. It's possible all of the PNIC
chips suffer from this bug, but these are the only two revs where I
know for a fact it exists.
chip revisions. (A buggy Taiwanese chip? I'm just shocked; shocked I tell
you.) So far I have only observed the anomalous behavior on boards with
PCI revision 33 chips. At the moment, this seems to include only the
Netgear FA310-TX rev D1 boards with chips labeled NGMC169B. (Possibly this
means it's an 82c169B part from Lite-On.)
The bug only manifests itself in promiscuous mode, and usually only at
10Mbps half-duplex. (I have not observed the problem in full-duplex mode,
and I don't think it ever happens at 100Mbps.) The bug appears to be in
the receiver DMA engine. Normally, the chip is programmed with a linked
list of receiver descriptors, each with a receive buffer capable of holding
a complete full-sized ethernet frame. During periods of heavy traffic
(i.e. ping -c 100 -f 8100 <otherhost>), the receiver will sometimes appear
to upload its entire FIFO memory contents instead of just uploading the
desired received frame. The uploaded data will span several receive
buffers, in spite of the fact that the chip has been told to only use
one descriptor per frame, and appears to consist of previously transmitted
frames with the correct received frame appended to the end.
Unfortunately, there is no way to determine exactly how much data is
uploaded when this happens; the chip doesn't tell you anything except the
size of the desired received frame, and the amount of bogus data varies.
Sometimes, the desired frame is also split across multiple buffers.
The workaround is ugly and nasty. The driver assembles all of the data
from the bogus frames into a single buffer. The receive buffers are always
zeroed out, and we program the chip to always include the receive CRC
at the end of each frame. We therefore know that we can start from the
end of the buffer and scan back until we encounter a non-zero data byte,
and say conclusively that this is the end of the desired frame. We can
then subtract the frame length from this address to determine the real
start of the frame, and copy it into an mbuf and pass it on.
This is kludgy and time consuming, but it's better than dropping frames.
It's not too bad since the problem only happens at 10Mbps.
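A sketch of the scan (buffer pointers and lengths use illustrative names;
frame_len is what the chip reports):

    /*
     * Sketch: the buffers are pre-zeroed and frames keep their trailing
     * CRC, so the last non-zero byte marks the end of the wanted frame.
     */
    ptr = buf + buflen - 1;
    while (ptr > buf && *ptr == 0)
            ptr--;
    start = ptr + 1 - frame_len;    /* back up over the whole frame */
    m = m_devget(start, frame_len, 0, ifp, NULL);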
The workaround is only enabled for chips with PCI revision == 33. The
LinkSys LNE100TX and Matrox FastNIC 10/100 cards use a revision 32 chip
and work fine in promiscuous mode. Netgear support has confirmed that
they "have some previous knowledge of problems in promiscuous mode" but
didn't have a workaround. The people at Lite-On who would be able to
suggest a possible fix are on vacation. So, I decided to implement a
workaround of my own until I hear from them. I suppose this problem made
it through Netgear's QA department since Windows doesn't normally use
promiscuous mode, and if Windows doesn't need the feature then it can't
possibly be important, right? Grrr.
this has a problem with capture, but I am not sure if it is related
to the mixer or to something else.
But in the meantime, this is OK for listening to MPEGs.
I also have a much simpler version of the driver in the works which
reuses a lot more of the existing "pcm" routines. Next year...
they use the same value in the VID register.
PR: kern/9137: Matrox Mystique chip name typo error
Submitted by: Alex D. Chen <dhchen@Canvas.dorm7.nccu.edu.tw>
AcerLabs Aladdin-V. It makes PCI probing work when the system boots. I
will try to merge some additional functionality (i.e. the wdc1 problem
that causes tons of PRs to appear :<) ASAP if I can.
Remind me if something is wrong after this commit, thanks!