One is to notify the system that the MTU for a VLAN can be 1500, so the vlan
interface will automatically be configured with a 1500 MTU; the other is to
ignore the error case if the received frame is too long.
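A hedged sketch of the shape of the two changes (the DC_RXSTAT_* bit names
are illustrative, and using the IFCAP_VLAN_MTU capability flag is my
assumption about how the stack gets notified):

    /* In attach: advertise that full-sized frames still fit when a VLAN
     * tag is added, so a vlan interface gets a 1500-byte MTU. */
    ifp->if_capabilities |= IFCAP_VLAN_MTU;

    /* In the receive path: a 1500-byte payload plus a VLAN tag is four
     * bytes over the plain Ethernet maximum, so "frame too long" by
     * itself is not an error. */
    static int
    dc_rx_frame_ok(u_int32_t rxstat, int total_len)
    {
            if ((rxstat & DC_RXSTAT_RXERR) == 0)
                    return (1);
            if ((rxstat & DC_RXSTAT_TOOLONG) != 0 &&
                total_len <= ETHER_MAX_LEN + ETHER_VLAN_ENCAP_LEN)
                    return (1);
            return (0);
    }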
The frame size notification came from code in the SIS driver, and
the support for long frames derived from the NetBSD Tulip driver.
Tested on: 4-port D-Link DFE-570TX adapter (4x Intel 21143),
           Netgear card with 82c169 PNIC 10/100BaseTX
Reviewed by: ru (manpage), wpaul (not objected to), archie
Approved by: imp
Obtained from: NetBSD
motherboard chipsets. We need to force the chip to reload its MAC address
into the receive filter, and enable software access mode for the PHY.
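Roughly, with illustrative register and bit names standing in for the real
if_sisreg.h definitions:

    u_int16_t *ea = (u_int16_t *)sc->arpcom.ac_enaddr;
    int i;

    /* Reload the station address into the receive filter, one 16-bit
     * word at a time. */
    for (i = 0; i < 3; i++) {
            CSR_WRITE_4(sc, SIS_RXFILT_CTL, SIS_FILTADDR_PAR0 + i);
            CSR_WRITE_4(sc, SIS_RXFILT_DATA, ea[i]);
    }
    /* Then set the configuration bit that gives software access to the
     * PHY (bit name is a placeholder). */
    SIS_SETBIT(sc, SIS_CFG, SIS_CFG_PHY_SWACCESS);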
PR: kern/33294
According to Phil Kernick:
"The problem is that in full duplex mode, the Conexant chip always reports a
carrier lost error, even when the frame is successfully sent. So, if we
have a Conexant chip, then ignore carrier lost when in full duplex
mode."
Since the Xircom chips seem to have the same issue and since we already
have a workaround for this, just expand the workaround test to also
check for DC_IS_CONEXANT().
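A sketch of the expanded test in the transmit-completion path; the status
bit names and the sc->dc_link_fdx "link is full duplex" field are
illustrative, while DC_IS_CONEXANT()/DC_IS_XIRCOM() are the macros referred
to above:

    if (txstat & DC_TXSTAT_ERRSUM) {
            if ((txstat & DC_TXSTAT_NOCARRIER) != 0 &&
                (DC_IS_XIRCOM(sc) || DC_IS_CONEXANT(sc)) &&
                sc->dc_link_fdx) {
                    /* Bogus carrier-lost report in full duplex: the
                     * frame really went out, so don't count it as an
                     * output error. */
                    txstat &= ~(DC_TXSTAT_ERRSUM | DC_TXSTAT_NOCARRIER);
            }
    }
    if (txstat & DC_TXSTAT_ERRSUM)
            ifp->if_oerrors++;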
If the only reference to a label is inside an #ifdef block, then the label
should *also* be inside an #ifdef block. Hide the "done:" label, which is
only used if DEVICE_POLLING is enabled, under #ifdef DEVICE_POLLING.
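The pattern in miniature, using the IFF_POLLING flag that the polling work
adds to if.h; this keeps non-DEVICE_POLLING builds from tripping over an
unused label:

    #ifdef DEVICE_POLLING
            if (ifp->if_flags & IFF_POLLING)
                    goto done;
    #endif
            /* ... normal interrupt processing ... */
    #ifdef DEVICE_POLLING
    done:
    #endif
            return;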
a major slowdown, and re-enable stats overflow interrupts.
For future reference, the bug was in our code, and not
some bug in the 3com chips.
Reviewed by: wpaul
MFC after: 2 days
Non-SMP, i386-only, no polling in the idle loop at the moment.
To use this code you must compile a kernel with
options DEVICE_POLLING
and at runtime enable polling with
sysctl kern.polling.enable=1
The percentage of CPU reserved to userland can be set with
sysctl kern.polling.user_frac=NN (default is 50)
while the remainder is used by polling device drivers and netisr's.
These are the only two variables that you should need to touch. There
are a few more parameters in kern.polling but the default values
are adequate for all purposes. See the code in kern_poll.c for
more details on them.
Polling in the idle loop will be implemented shortly by introducing
a kernel thread which does the job. Until then, the amount of CPU
dedicated to polling will never exceed (100-user_frac).
The equivalent (actually, better) code for -stable is at
http://info.iet.unipi.it/~luigi/polling/
and also supports polling in the idle loop.
NOTE to Alpha developers:
There is really nothing in this code that is i386-specific.
If you move the 2 lines supporting the new option from
sys/conf/{files,options}.i386 to sys/conf/{files,options} I am
pretty sure that this should work on the Alpha as well, just that
I do not have a suitable test box to try it. If someone feels like
trying it, I would appreciate it.
NOTE to other developers:
I am sure some things could be done better, and as always I am open to
constructive criticism, which a few of you have already given and
which I greatly appreciate.
However, before proposing radical architectural changes, please
take some time to possibly try out this code, or at the very least
read the comments in kern_poll.c, especially re. the reason why I
am using a soft netisr and cannot (I believe) replace it with a
simple timeout.
Quick description of files touched by this commit:
sys/conf/files.i386
        new file kern/kern_poll.c
sys/conf/options.i386
        new option
sys/i386/i386/trap.c
        poll in trap (disabled by default)
sys/kern/kern_clock.c
        initialization and hardclock hooks.
sys/kern/kern_intr.c
        minor swi_net changes
sys/kern/kern_poll.c
        the bulk of the code.
sys/net/if.h
        new flag
sys/net/if_var.h
        declaration for functions used in device drivers.
sys/net/netisr.h
        NETISR_POLL
sys/dev/fxp/if_fxp.c
sys/dev/fxp/if_fxpvar.h
sys/pci/if_dc.c
sys/pci/if_dcreg.h
sys/pci/if_sis.c
sys/pci/if_sisreg.h
        device driver modifications
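For the driver side, a hedged sketch of what the per-driver poll hook looks
like; the signature and command names reflect my reading of the kern_poll.c
interface, and the xx_* helpers are placeholders:

    static void
    xx_poll(struct ifnet *ifp, enum poll_cmd cmd, int count)
    {
            struct xx_softc *sc = ifp->if_softc;

            if (cmd == POLL_DEREGISTER) {
                    /* Final call: polling is being turned off, so put
                     * the chip back into interrupt mode. */
                    xx_enable_intr(sc);
                    return;
            }
            /* Do a bounded amount of RX/TX work; "count" is the budget. */
            xx_rxeof(sc);
            xx_txeof(sc);
            if (cmd == POLL_AND_CHECK_STATUS)
                    xx_check_status(sc);    /* rare slow-path checks */
    }

Registration would be done with ether_poll_register(xx_poll, ifp) from the
interrupt handler once kern.polling.enable is set, after which the chip's
own interrupts are masked.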
Introduce an additional device flag for those NICs which require the
transmit buffers to be aligned to 32-bit boundaries.
(The equivalent fix for -stable is slightly simpler because there are
no supported chips which require this alignment there.)
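A hedged sketch of how such a flag gets honored in the transmit encap path
(the DC_TX_ALIGN flag name is illustrative; the mbuf calls are the stock
ones):

    if ((sc->dc_flags & DC_TX_ALIGN) != 0 &&
        (mtod(m, uintptr_t) & 3) != 0) {
            struct mbuf *m_new;

            /* Copy the chain into a single mbuf/cluster, whose data
             * area is naturally 32-bit aligned. */
            MGETHDR(m_new, M_DONTWAIT, MT_DATA);
            if (m_new != NULL && m->m_pkthdr.len > MHLEN) {
                    MCLGET(m_new, M_DONTWAIT);
                    if ((m_new->m_flags & M_EXT) == 0) {
                            m_freem(m_new);
                            m_new = NULL;
                    }
            }
            if (m_new == NULL)
                    return (ENOBUFS);
            m_copydata(m, 0, m->m_pkthdr.len, mtod(m_new, caddr_t));
            m_new->m_pkthdr.len = m_new->m_len = m->m_pkthdr.len;
            m_freem(m);
            m = m_new;
    }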
controllers. There still seem to be some issues with the DRI copying code
for some adapters, but at least it doesn't hang the system now. Input would be
appreciated.
PR: 32301
Obtained from: Eric Anholt <eanholt@gladstone.uoregon.edu>, Joe <joeo@nks.net>
The reason we are required to commit to -current first is so that later
MFC's do not risk the loss of existing bug fixes. Even if this was not
strictly required in -current, it should still be fixed there too.
mbuf allocation fails, and fix (I hope) a couple of style bugs.
I believe these printf() calls are extremely dangerous because they can
now occur on every incoming packet and are not rate limited. They were
meant to warn the sysadmin about a lack of resources, but now they
can become a nice way to panic your system under load.
Other drivers (e.g. the fxp driver) have nothing like this.
There is a pending discussion on putting this kind of warnings
elsewhere, and I hope we can fix this soon.
underlying unaligned bcopy) on incoming packets that are already
available (albeit unaligned) in a buffer.
The performance improvement varies, depending on CPU and memory
speed, but can be quite large, especially on slow CPUs. I have seen
over a 50% increase in forwarding speed on the 486/133 used in embedded
systems with the sis driver, which does exactly the same thing.
The behaviour is controlled by a sysctl variable, hw.dc_quick, which
defaults to 1. Set it to 0 to restore the old behaviour.
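Roughly, the receive-path decision looks like this (a sketch; the
dc_newbuf() calling convention and the surrounding ring loop are
assumptions):

    /* Inside the rxeof ring loop, for a good frame of total_len bytes
     * sitting in mbuf m at slot i. */
    if (dc_quick && dc_newbuf(sc, i, NULL) == 0) {
            /* Fast path: a fresh buffer now occupies the ring slot and
             * the filled mbuf m goes up the stack as-is. */
    } else {
            /* Old path: copy into a fresh chain and recycle the ring
             * buffer. */
            struct mbuf *m0;

            m0 = m_devget(mtod(m, char *), total_len, 0, ifp, NULL);
            dc_newbuf(sc, i, m);
            if (m0 == NULL) {
                    ifp->if_ierrors++;
                    continue;
            }
            m = m0;
    }
    /* ... fix up m_pkthdr and hand m to the network stack ... */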
After running a few experiments (in userland, though) I am convinced
that doing the m_devget() is detrimental to performance in almost
all cases.
Even if your CPU has degraded performance with misaligned data,
the bcopy() in the driver has the same overhead due to misalignment
as the one that you save in the uiomove(), plus you do one extra
copy and pollute the cache.
But more often than not, you do not even have to touch the payload,
e.g. when you are forwarding packets, and even in the often-cited
case of NFS, you often end up passing a pointer to the payload to
the disk controller.
In any case, you can play with the sysctl variable to toggle between
the two behaviours, and see if it makes a difference.
MFC after: 3 days
have alignment problems.
On small boxes (e.g. the net4501 from Soekris, featuring a 486/133)
this provides huge performance benefits: the peak forwarding rate
with average-sized packets goes up by 50-70% because of this change
alone. Faster CPUs might benefit less from this change, but in any
case the CPU has better things to do than waste time on useless
memory-to-memory copies.
Several drivers (for Tulip-like cards) might benefit from a similar
change.
Right now the new behaviour is controlled by a sysctl variable,
hw.sis_quick, which defaults to 1 (on); you can set it to 0 to
reintroduce the old behaviour (and compare the results). The
variable is only there to show how much you can gain with this
change; it will go away soon.
Also, slightly simplify the code to initialize the ring buffers,
and remove a couple of dangerous printf's which could trigger on
any packet in case of mbuf shortage.
MFC after: 3 days
idle and the driver would not detect the event, requiring userland
to cycle the interface to bring it up again.
The fix consists of adding SIS_IMR_RX_IDLE to the interrupt mask and
adding a command in sis_intr() to restart the receiver when this happens.
While at it, make the test of status bits more efficient.
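The interrupt-handler side of the fix, sketched with bit and register names
that are my reading of if_sisreg.h rather than verbatim definitions:

    /* In sis_intr(), with SIS_IMR_RX_IDLE now unmasked. */
    if (status & SIS_ISR_RX_IDLE) {
            sis_rxeof(sc);                  /* drain the ring first */
            SIS_SETBIT(sc, SIS_CSR, SIS_CSR_RX_ENABLE); /* restart RX */
    }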
in the 21143, instead of giving priority to the receive unit.
This gives a 10-15% performance improvement in the forwarding rate
under heavy load.
Reviewed by: Bill Paul
Jonathan Lemon's driver (gx) is at least as fast and has more features
and is likely to be better supported.
It is also possible that Intel might support this chipset in FreeBSD
with their own driver. Somewhat secretive and furtive rumblings from
certain Yahoo employees have indicated that this might happen soon.
I'm a little unhappy at the lack of discussion on the net list about
this, or on developers, or on hackers, or the lack of mention on
audit. This then leaves me to try and figure out the right thing
to do.
I've concluded that the right thing to do is to remove wx from FreeBSD,
as this is probably best for FreeBSD.
What the heck, the OpenBSD version will benefit.
1. Add wx_txint_delay as a tunable (defaults to 5000 now, or ~5ms) and switch
to using delayed TXDW interrupts. Since the chip continues to reload the
TIDV with this value for each descriptor written back, this allows continued
deferral of the actual interrupt until the last packet completes (assuming
that 5ms between multiple packets transmitting is reasonable).
2. Add two other SYSCTL entities:
hw.wx.dump_stats
hw.wx.clear_stats
to be used, hackey hackey, to get the watchdog routine to dump/clear
the current softc statistics.
Usage would be:
sysctl -w hw.wx.dump_stats=UNIT
to cause the current stats to be dumped for UNIT.
3. Attempt to clean up the wx_detach routine so we don't panic. Well, things
still panic, but given that the code is just like other NIC drivers,
I suspect it's something elsewhere, like e1000phy, that's actually
blowing up.
4. Skip the entire test for runt packets. After doing some thinking
and experimenting, I believe that the chip only doesn't like it if
the whole frame to xmit is < 16 bytes; each TFD can be some fragment
of that. This should improve performance a chunk because of all of the
(14 byte ETHERHEADER + DATA) mbuf chains.
5. Keep track of total frame length. Try not to xmit an odd-length frame;
this is supposed to get around some dumb Cisco switch problems.
6. On the last packet, also set Interrupt Delay && Report Packet Sent
(see #1 above)
7. Attempt to do xmit garbage collection *first* in order to avoid setting
IFF_OACTIVE if at all possible (see the sketch after this list).
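For item 7, a hedged sketch of the ordering (the softc fields, the slot
threshold and the helper names are placeholders):

    static void
    wx_start(struct ifnet *ifp)
    {
            struct wx_softc *sc = ifp->if_softc;
            struct mbuf *m;

            for (;;) {
                    if (sc->wx_tx_free < WX_TX_SLOTS_NEEDED) {
                            /* Reclaim completed TFDs before giving up. */
                            wx_txgc(sc);
                            if (sc->wx_tx_free < WX_TX_SLOTS_NEEDED) {
                                    ifp->if_flags |= IFF_OACTIVE;
                                    return;
                            }
                    }
                    IF_DEQUEUE(&ifp->if_snd, m);
                    if (m == NULL)
                            return;
                    wx_encap(sc, m);
            }
    }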
MFC after: 1 week
We only support the size of frame we are currently allocating, which
is MCLBYTES - sizeof (struct ether_header) usable, so don't set an
MTU that would go over this.
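The corresponding check in the driver's ioctl switch would be along these
lines (a sketch of the SIOCSIFMTU case):

    case SIOCSIFMTU:
            /* Usable space in a receive buffer is one cluster minus the
             * Ethernet header, so refuse anything larger. */
            if (ifr->ifr_mtu > (int)(MCLBYTES - sizeof(struct ether_header)))
                    error = EINVAL;
            else
                    ifp->if_mtu = ifr->ifr_mtu;
            break;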