What the heck, the OpenBSD version will benefit.
1. Add wx_txint_delay as a tunable (defaults to 5000 now, or ~5ms) and switch
to using delayed TXDW interrupts. Since the chip reloads the TIDV with this
value for each descriptor written back, the actual interrupt keeps being
deferred until the last packet completes (assuming that ~5ms between
successive packets is reasonable). See the first sketch after this list.
2. Add two other SYSCTL entities:
hw.wx.dump_stats
hw.wx.clear_stats
to be used, hackey hackey, to get the watchdog routine to dump/clear
the current softc statistics (see the second sketch after this list).
Usage would be:
sysctl -w hw.wx.dump_stats=UNIT
to cause the current stats to be dumped for UNIT.
3. Attempt to clean up the wx_detach routine so we don't panic. Well, things
still panic, but given that the code is just like other NIC drivers,
I suspect the real culprit is something elsewhere, like e1000phy.
4. Skip the entire test for runt packets- after doing some thinking
and experimenting, I believe the chip only objects when the whole frame
to xmit is < 16 bytes- each TFD can be an arbitrarily small fragment of
that frame. This should improve performance a chunk because of all of the
(14 byte ETHERHEADER + DATA) mbuf chains. See the length-handling sketch
after this list.
5. Keep track of total frame length. Try not to xmit an odd-length frame-
this is supposed to get around some dumb Cisco switch problems.
6. On the last packet, also set the Interrupt Delay && Report Packet Sent
descriptor bits (see #1 above; both appear in the first sketch below).
7. Attempt to do xmit garbage collection *first* in order to avoid setting
IFF_OACTIVE if at all possible (see the last sketch after this list).
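
To make #1 and #6 concrete, a minimal sketch of the delayed-interrupt setup.
Apart from wx_txint_delay itself, the names here are assumptions, not the
driver's actual identifiers- in particular the hw.wx.txint_delay tunable
path, WRITE_CSR, WXREG_TIDV, and the WX_TXCMD_* descriptor bits:

    /* Sketch only; register/bit names and tunable path are assumptions. */
    static int wx_txint_delay = 5000;       /* new tunable, ~5ms */
    TUNABLE_INT("hw.wx.txint_delay", &wx_txint_delay);

    static void
    wx_txint_setup(struct wx_softc *sc)
    {
            /*
             * Program the TX Interrupt Delay Value. The chip reloads
             * this timer on every descriptor write-back, so back-to-back
             * packets keep deferring the TXDW interrupt until the last
             * one completes.
             */
            WRITE_CSR(sc, WXREG_TIDV, wx_txint_delay);
    }

    /*
     * #6: on the last descriptor of a frame, request a delayed
     * interrupt and a completion report along with end-of-packet.
     */
    txd->cmd |= WX_TXCMD_IDE | WX_TXCMD_RPS | WX_TXCMD_EOP;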
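For #2, a sketch of how the watchdog-driven stat knobs could hang together.
The hw.wx.dump_stats/hw.wx.clear_stats names are from above; the softc
fields and the wx_print_stats helper are assumptions:

    /* Sketch only; softc layout and helper names are assumptions. */
    SYSCTL_NODE(_hw, OID_AUTO, wx, CTLFLAG_RD, 0, "wx driver");
    static int wx_dump_stats = -1;          /* unit to dump, -1 = idle */
    static int wx_clear_stats = -1;         /* unit to clear, -1 = idle */
    SYSCTL_INT(_hw_wx, OID_AUTO, dump_stats, CTLFLAG_RW,
        &wx_dump_stats, 0, "unit whose stats to dump");
    SYSCTL_INT(_hw_wx, OID_AUTO, clear_stats, CTLFLAG_RW,
        &wx_clear_stats, 0, "unit whose stats to clear");

    static void
    wx_watchdog(struct ifnet *ifp)
    {
            struct wx_softc *sc = ifp->if_softc;

            /* Poll the knobs, hackey hackey, resetting them once done. */
            if (wx_dump_stats == sc->wx_unit) {
                    wx_print_stats(sc);     /* hypothetical helper */
                    wx_dump_stats = -1;
            }
            if (wx_clear_stats == sc->wx_unit) {
                    bzero(&sc->wx_stats, sizeof (sc->wx_stats));
                    wx_clear_stats = -1;
            }
            /* ... normal watchdog work ... */
    }

So sysctl -w hw.wx.dump_stats=0 would dump unit 0's stats on the next
watchdog tick, per the usage shown above.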
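For #4 and #5, a sketch of the length handling in the encap path; the
function shape, struct wx_txdesc, and the field names are assumptions:

    /* Sketch only; descriptor layout is an assumption. */
    static int
    wx_encap(struct wx_softc *sc, struct mbuf *m)
    {
            struct wx_txdesc *txd = NULL;   /* last descriptor filled */
            int len = m->m_pkthdr.len;

            /*
             * #4: only a *whole* frame under 16 bytes upsets the chip;
             * each fragment (TFD) may be arbitrarily small, so no
             * per-fragment runt test. Pad or drop short frames here.
             */
            if (len < 16)
                    return (EINVAL);

            /* ... walk the mbuf chain, one descriptor per fragment,
             * leaving txd pointing at the last one ... */

            /*
             * #5: stretch the final fragment by one byte so the frame
             * on the wire is even-length (Cisco switch workaround).
             */
            if (len & 1)
                    txd->length++;
            return (0);
    }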
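And for #7, the reordered start routine; wx_gc (the descriptor reclaim
step) and the tx_free counter are assumed names:

    /* Sketch only; reclaim-before-OACTIVE ordering. */
    static void
    wx_start(struct ifnet *ifp)
    {
            struct wx_softc *sc = ifp->if_softc;
            struct mbuf *m;

            /*
             * #7: reclaim completed descriptors before deciding the
             * ring is full, so IFF_OACTIVE is only set when we are
             * genuinely out of space.
             */
            wx_gc(sc);

            for (;;) {
                    if (sc->tx_free == 0) {
                            ifp->if_flags |= IFF_OACTIVE;
                            break;
                    }
                    IF_DEQUEUE(&ifp->if_snd, m);
                    if (m == NULL)
                            break;
                    if (wx_encap(sc, m) != 0) {
                            IF_PREPEND(&ifp->if_snd, m);
                            ifp->if_flags |= IFF_OACTIVE;
                            break;
                    }
            }
    }
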
MFC after: 1 week