Add some overflow checks to read/write (from bde).
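A minimal sketch of the kind of guard meant here (function name and
placement are illustrative, not the committed code): reject a request
whose byte count cannot fit in the return type, or whose count would
wrap the file offset.

    #include <errno.h>
    #include <limits.h>
    #include <stdint.h>
    #include <sys/types.h>

    /* Hypothetical overflow guard for a read/write path. */
    static int
    check_rw_overflow(off_t offset, size_t nbyte)
    {
            if (nbyte > INT_MAX)            /* must fit in the int result */
                    return (EINVAL);
            if (offset < 0 || offset > INT64_MAX - (off_t)nbyte)
                    return (EINVAL);        /* offset + nbyte would wrap */
            return (0);
    }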
Change all modifications to vm_page::flags, vm_page::busy, vm_object::flags
and vm_object::paging_in_progress to use operations which are not
interruptible.
Reviewed by: Bruce Evans <bde@zeta.org.au>
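The point of "not interruptible": a plain "m->flags |= PG_BUSY"
compiles to a load/modify/store sequence that an interrupt handler can
split. A sketch of the replacement idiom (helper name is illustrative;
the flag fields are shorts in this era's vm_page):

    #include <sys/types.h>
    #include <machine/atomic.h>

    /* Hypothetical helper: set flag bits in one atomic step. */
    static __inline void
    page_flag_set(volatile u_short *flagsp, u_short bits)
    {
            atomic_set_short(flagsp, bits); /* single RMW, no window */
    }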
stopped being used in /usr/src almost 2 years ago.
Don't pretend to support gcc-[3-infinity].
Added __printf0like(). Feature tests for the __printf0__ feature
are problematic, so this can't be used for the err() family yet
- it's only in recent versions of FreeBSD's hacked version of gcc.
Added comments about __unused and __*like().
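For reference, a sketch of the macro in the sys/cdefs.h style
(__printf0__ is the archetype in FreeBSD's hacked gcc which, unlike
__printf__, tolerates a NULL format string; that is what the err()
family needs):

    #define __printf0like(fmtarg, firstvararg) \
            __attribute__((__format__(__printf0__, fmtarg, firstvararg)))

    /* e.g., once the feature tests are sorted out: */
    void my_err(int eval, const char *fmt, ...) __printf0like(2, 3);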
for the Lite2 fix for always returning EIO in dead_read().
Cleaned up the cdevswitch initializers for all tty drivers.
Removed explicit calls to ttsetwater() from all (tty) drivers. ttsetwater()
is now called centrally for opens, not just for parameter changes.
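The shape of the centralized call, sketched (this is not the committed
diff; the wrapper name is made up, ttyopen() being the usual TTYDISC
l_open entry):

    #include <sys/param.h>
    #include <sys/tty.h>

    static int
    sketch_l_open(dev_t dev, struct tty *tp)
    {
            ttsetwater(tp);         /* was duplicated in every driver */
            return (ttyopen(dev, tp));
    }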
another specialized mbuf type in the process. Also clean up some
of the cruft surrounding IPFW, multicast routing, RSVP, and other
ill-explored corners.
of associated mbuf clusters) in the RX ring from 4 to 16. On my
really fast PII 400MHz test machines, 4 descriptors (and associated
mbuf clusters) are enough to achieve decent performance without any
RX overruns. However, one person reported problems with the following
scenario:
- P90 system running FreeBSD with a 3c905B-TX adapter, slow IDE hard
disk (Quantum Bigfoot?)
- PII 266 with SCSI disks running LoseNT and also with a 3c905B-TX
- Both machines connected together via crossover cable at 100Mbps
full-duplex
- LoseNT machine writing large amounts of data (2.5 GB worth of
files each in the neighborhood of 1 to 2 MB in size) via samba to
the FreeBSD machine
In this case, the LoseNT machine is sending data very fast. Apparently
there weren't any problems initially because the user was writing to
one particular disk which was relatively fast; however, after this disk
filled up and the user started writing to the second, slower disk, RX
overruns would occur and sometimes the RX DMA engine would stall after
100 to 500MB had been transferred. The xl_rxeof() handler is supposed
to detect this condition and restart the upload engine; I'm not sure
why it doesn't, unless interrupts are being lost and the rx handler
isn't getting called.
This is still an improvement over the Linux driver, which uses 32
descriptors in its receive ring. :)
Problem reported by: Heiko Schaefer <hschaefer@fto.de>
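For context, the change itself is just the ring-size constant,
sketched with an if_xl-style name (the exact identifier is assumed;
the values are from this log):

    #define XL_RX_LIST_CNT  16      /* was 4: one mbuf cluster per
                                       descriptor, so the chip can absorb
                                       a 16-frame burst while the host
                                       drains the ring */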
'three-stage' bootstrap.
There are a number of caveats with the code in its current state:
- The i386 bootstrap only supports booting from a floppy.
- The kernel and kld do not yet know how to deal with the extended
information and module summary passed in.
- PnP-based autodetection and demand loading of modules is not implemented.
- i386 ELF kernel loading is not ready yet.
- The i386 bootstrap is loaded via an ugly blockmap.
On the alpha, both net- and disk-booting (SRM console machines only) are
supported. No blockmaps are used by this code.
Obtained from: Parts from the NetBSD/i386 standalone bootstrap.
They shouldn't be used there either. They should have gone away
about 3 years ago when the statically initialized devswitches went
away, but su.c unfortunately still frobs the cdevswitch in the old
way.
The check for dropping unicast packets not sent to our ethernet
address is after the bpf tap, but not conditioned on it. All packets
received should get handed to bpf, and unicast packets not to us (mac)
should get dropped whether or not there is a bpf listener. I believe
that the common optimization that the interface is in hw promisc mode
iff there is a bpf listener is in general wrong, but more frequently
so on wavelans.
I think Max's fix makes bpf listeners not see unicast packets sent to
others, but I'm not sure.
One can argue that checking on MOD_ENAL is wrong, but the code only
drops packets that shouldn't be received. The correctness condition
is that it be run whenever unicast packets without our mac address can
be received.
PR: kern/7144
Submitted by: Greg Troxel <gdt@ir.bbn.com>
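A sketch of the receive-path ordering argued for here (generic names,
not the actual driver code; sc_macaddr stands in for the station
address):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>
    #include <net/if.h>
    #include <net/bpf.h>
    #include <net/ethernet.h>

    static u_char sc_macaddr[ETHER_ADDR_LEN];   /* placeholder */

    static void
    sketch_rx(struct ifnet *ifp, struct ether_header *eh, struct mbuf *m)
    {
            if (ifp->if_bpf)
                    bpf_mtap(ifp, m);       /* every frame reaches bpf... */

            /* ...but unicast frames for other stations are dropped
             * whether or not a listener forced promiscuous mode on. */
            if ((eh->ether_dhost[0] & 0x01) == 0 &&
                bcmp(eh->ether_dhost, sc_macaddr, ETHER_ADDR_LEN) != 0) {
                    m_freem(m);
                    return;
            }
            ether_input(ifp, eh, m);
    }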
If I'm reading the manual correctly, the 3c905B actually loses its
PCI configuration during the transition from D3(hot) back to D0, not
during the transition from D0 to D3(hot). This means it should be possible
to save the existing PCI settings, reset the power state, then restore
the PCI settings afterwards. Changed xl_attach() to attempt this first
thing before the normal PCI setup. I'm not certain this will work correctly,
but it shouldn't hurt.
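A sketch of that attempt, using the old-style pci_conf_read()/
pci_conf_write() accessors (the 0xe0 PMCSR offset and the exact set of
registers saved are assumptions):

    #include <pci/pcivar.h>

    static void
    sketch_powerup(pcici_t tag)
    {
            u_long bar0, bar1, irq, cmd;

            /* Save what the D3(hot)->D0 transition will clobber. */
            bar0 = pci_conf_read(tag, 0x10);        /* I/O base */
            bar1 = pci_conf_read(tag, 0x14);        /* memory base */
            irq  = pci_conf_read(tag, 0x3c);        /* interrupt line */
            cmd  = pci_conf_read(tag, 0x04);        /* command/status */

            /* PMCSR's low two bits select the state; 00 is D0. */
            pci_conf_write(tag, 0xe0, pci_conf_read(tag, 0xe0) & ~0x3);

            /* Restore the configuration lost in the transition. */
            pci_conf_write(tag, 0x10, bar0);
            pci_conf_write(tag, 0x14, bar1);
            pci_conf_write(tag, 0x3c, irq);
            pci_conf_write(tag, 0x04, cmd);
    }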
If xl_init() is called while an autoneg session is in progress, the
autoneg timeout and chip state will get clobbered. Try to avoid this
by checking sc->xl_autoneg at the start of xl_init() and defer
the initialization until later if it's set. (xl_init() is always called
at the end of an autoneg session by xl_autoneg_mii().)
Problem pointed out by: Larry Baird <lab@gta.com>
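The guard itself is tiny; sketched with a stand-in softc (only the
xl_autoneg field comes from this log):

    struct xl_softc_sketch {
            int     xl_autoneg;     /* nonzero while autoneg runs */
    };

    static void
    xl_init_sketch(struct xl_softc_sketch *sc)
    {
            if (sc->xl_autoneg)
                    return;         /* deferred: xl_autoneg_mii() calls
                                       xl_init() again at session end */
            /* ... normal chip reset and RX/TX ring setup ... */
    }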
for 1 second's worth of input) and larger tty output buffers. The
interrupt-level buffers are still too small for speeds above 115200
bps (only a little too small for 230400 bps if RTS flow control is
enabled).
Don't call ttsetwater() explicitly in open(). It is now called by
the TTYDISC l_open() and should be static.
Don't attempt to register the cdevsw more than once.
since (hardware) ttys have too low a bandwidth to benefit significantly
from large buffers. Use twice the old limit for the new-default case
and 8 times the old limit for the driver-specifies-watermark case.
Nothing uses these cases yet.
Removed related debugging code.
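A sketch of the sizing rule (names are placeholders; only the 2x/8x
factors come from this log):

    #define OLD_OBUFLIM     512     /* illustrative old output limit */

    static int
    out_buf_limit(int driver_specifies_watermark)
    {
            /* Twice the old cap by default, 8 times it when the
             * driver asks for its own watermark; neither case is
             * used by anything yet. */
            return (driver_specifies_watermark ? 8 * OLD_OBUFLIM
                                               : 2 * OLD_OBUFLIM);
    }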
I don't have access to a real VT220 to verify this against.
However, I'm committing the patch in `good faith' because
(a) getting hold of a real VT220 is going to be increasingly difficult
the longer the PR sits around,
(b) someone was troubled enough to send in a PR, and
(c) the fix is minor and has no other implications.
PR: 7559
Submitted by: Christian Weisgerber <naddy@mips.rhein-neckar.de>
in an SMP system. Unexpected things could happen if each cpu
has a different ldt setting and one cpu tries to use the value
of currentldt set by another cpu.
The fix is to move currentldt to the per-cpu area. It includes
patches I filed in PR i386/6219 which are also user ldt related.
PR: i386/7591, i386/6219
Submitted by: Luoqi Chen <luoqi@watermarkgroup.com>
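The shape of the fix, sketched (struct and field names assumed; lldt()
is the usual machine/cpufunc.h inline):

    #include <sys/types.h>
    #include <machine/cpufunc.h>

    struct percpu_sketch {
            u_short gd_currentldt;  /* selector this CPU last loaded */
    };

    static void
    set_user_ldt_sketch(struct percpu_sketch *gd, u_short sel)
    {
            /* Compare against this CPU's own copy, never a global
             * another CPU may have written. */
            if (gd->gd_currentldt != sel) {
                    lldt(sel);
                    gd->gd_currentldt = sel;
            }
    }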
clustering is obsolescent technology, so hardly anyone noticed. On
a DORS 32160 SCSI drive with 4 tags, read clustering makes very
little difference even for huge sequential reads. However, on a
ZIP SCSI drive with 0 tags, the minimum overhead per block is about
40 msec, so very large clusters must be used to get anywhere near
the maximum transfer rate. Using clusters consisting of one 8K block
reduces the transfer rate to about 250K/sec. Under msdosfs, read
clustering is normally missing, and a cluster size of one 512-byte
block reduces the transfer rate to about 25K/sec.
Broken in: rev.1.18
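The arithmetic behind those rates, as a back-of-envelope model (the 40
msec overhead is from this log; the raw media rate is a guess and real
drives will differ):

    #include <stdio.h>

    int
    main(void)
    {
            double overhead = 0.040;   /* sec of fixed cost per transfer */
            double media = 1.0e6;      /* assumed raw media bytes/sec */
            int sizes[] = { 512, 8192, 65536 }, i;

            /* rate = size / (overhead + size / media): small clusters
             * are dominated by the fixed cost, hence results in the
             * 25K/sec class for single 512-byte blocks. */
            for (i = 0; i < 3; i++) {
                    double t = overhead + sizes[i] / media;
                    printf("%6d-byte clusters: %8.1f K/sec\n",
                        sizes[i], sizes[i] / t / 1024.0);
            }
            return (0);
    }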