They shouldn't be used there either. They should have gone away
about 3 years ago when the statically initialized devswitches went
away, but su.c unfortunately still frobs the cdevswitch in the old
way.
The check for dropping unicast packets not sent to our ethernet
address is after the bpf tap, but not conditioned on it. All packets
received should get handed to bpf, and unicast packets not to us (mac)
should get dropped whether or not there is a bpf listener. I believe
the common optimization of keeping the interface in hw promisc mode
iff there is a bpf listener is wrong in general, and more frequently
so on wavelans.
I think Max's fix makes bpf listeners not see unicast packets sent to
others, but I'm not sure.
One can argue that checking on MOD_ENAL is wrong, but the code only
drops packets that shouldn't be received. The correctness condition
is that it be run whenever unicast packets without our mac address can
be received.
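In outline, the receive path argued for here looks like the following
(a sketch with placeholder softc field names, not the actual wavelan
driver code):

    /* Hand every received frame to bpf first, unconditionally. */
    if (ifp->if_bpf)
        bpf_tap(ifp, (u_char *)eh, pktlen);

    /*
     * Then drop unicast frames not addressed to our mac, whether
     * or not a bpf listener exists.  The MOD_ENAL test only
     * matters when the hardware can pass us foreign unicasts.
     */
    if ((sc->mode & MOD_ENAL) &&             /* placeholder field */
        (eh->ether_dhost[0] & 1) == 0 &&     /* not mcast/bcast */
        bcmp(eh->ether_dhost, sc->macaddr, 6) != 0) {
        m_freem(m);
        return;
    }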
PR: kern/7144
Submitted by: Greg Troxel <gdt@ir.bbn.com>
If I'm reading the manual correctly, the 3c905B actually loses its
PCI configuration during the transition from D3(hot) back to D0, not
during the transition from D0 to D3(hot). This means it should be possible
to save the existing PCI settings, reset the power state, then restore
the PCI settings afterwards. Changed xl_attach() to attempt this first
thing before the normal PCI setup. I'm not certain this will work correctly,
but it shouldn't hurt.
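In outline, the approach amounts to this (an illustrative sketch; the
pci_conf_read()/pci_conf_write() style interface and the XL_PCI_*
offsets are assumptions, not the committed code):

    /* Save the settings the D3(hot)->D0 transition clobbers. */
    iobase  = pci_conf_read(tag, XL_PCI_LOIO);
    membase = pci_conf_read(tag, XL_PCI_LOMEM);
    irq     = pci_conf_read(tag, XL_PCI_INTLINE);

    /* Put the chip back into the D0 power state. */
    command = pci_conf_read(tag, XL_PCI_PWRMGMTCTRL);
    command &= ~0x3;                  /* state bits 00 == D0 */
    pci_conf_write(tag, XL_PCI_PWRMGMTCTRL, command);

    /* Restore what the transition lost. */
    pci_conf_write(tag, XL_PCI_LOIO, iobase);
    pci_conf_write(tag, XL_PCI_LOMEM, membase);
    pci_conf_write(tag, XL_PCI_INTLINE, irq);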
If xl_init() is called while an autoneg session is in progress, the
autoneg timeout and chip state will get clobbered. Try to avoid this
by checking sc->xl_autoneg at the start of xl_init() and defer
the initialization until later if it's set. (xl_init() is always called
at the end of an autoneg session by xl_autoneg_mii().)
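The guard amounts to a simplified sketch like this:

    static void
    xl_init(void *xsc)
    {
        struct xl_softc *sc = xsc;

        /*
         * An autoneg session owns the timeout and chip state;
         * defer until xl_autoneg_mii() calls xl_init() itself.
         */
        if (sc->xl_autoneg)
            return;

        /* ... normal chip initialization ... */
    }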
Problem pointed out by: Larry Baird <lab@gta.com>
for 1 second's worth of input) and larger tty output buffers. The
interrupt-level buffers are still too small for speeds above 115200
bps (only a little too small for 230400 bps if RTS flow control is
enabled).
Don't call ttsetwater() explicitly in open(). It is now called by the
TTYDISC l_open() and should be static.
Don't attempt to register the cdevsw more than once.
since (hardware) ttys have too low a bandwidth to benefit significantly
from large buffers. Use twice the old limit for the new-default case
and 8 times the old limit for the driver-specifies-watermark case.
Nothing uses these cases yet.
Removed related debugging code.
I don't have access to a real VT220 to verify this against.
However, I'm committing the patch in `good faith' because
(a) getting hold of a real VT220 is going to be increasingly difficult
the longer the PR sits around,
(b) someone was troubled enough to send in a PR, and
(c) the fix is minor and has no other implications.
PR: 7559
Submitted by: Christian Weisgerber <naddy@mips.rhein-neckar.de>
in an SMP system. Unexpected things could happen if each cpu
has a different ldt setting and one cpu tries to use the value
of currentldt set by another cpu.
The fix is to move currentldt to the per-cpu area. It includes
patches I filed in PR i386/6219 which are also user ldt related.
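Conceptually the change looks like this (made-up names; the actual
patch uses the kernel's per-cpu private area rather than a plain
array):

    /* Before: one global cached selector, racy between cpus. */
    int currentldt;

    /* After: one cached selector per cpu. */
    int currentldt[NCPU];

    /* On a context switch to a process with a private ldt: */
    if (currentldt[cpuid] != new_ldt_sel) {
        lldt(new_ldt_sel);               /* load the ldt register */
        currentldt[cpuid] = new_ldt_sel;
    }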
PR: i386/7591, i386/6219
Submitted by: Luoqi Chen <luoqi@watermarkgroup.com>
clustering is obsolescent technology so hardly anyone noticed. On
a DORS 32160 SCSI drive with 4 tags, read clustering makes very
little difference even for huge sequential reads. However, on a
ZIP SCSI drive with 0 tags, the minimum overhead per block is about
40 msec, so very large clusters must be used to get anywhere near
the maximum transfer rate. Using clusters consisting of one 8K block
reduces the transfer rate to about 250K/sec. Under msdosfs, missing
read clustering is normal, and a cluster size of one 512-byte block
reduces the transfer rate to about 25K/sec.
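(For scale: at ~40 msec minimum per block, one 8K block moves at most
8192 / 0.040 ≈ 200K/sec and one 512-byte block at most ≈ 12.5K/sec,
the same order of magnitude as the rates quoted above.)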
Broken in: rev.1.18
not necessarily the same as the seconds part of getmicrotime()
yet, and anyway, we should have used `time_second' if we only wanted
a sloppy value for the seconds part. There is no point in making
ibcs2's time(2) more efficient than FreeBSD's time(3).
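The distinction, sketched in kernel terms (variable names are
illustrative):

    struct timeval tv;
    time_t secs;

    getmicrotime(&tv);       /* seconds and microseconds agree */
    secs = tv.tv_sec;

    secs = time_second;      /* cheaper, but only a sloppy seconds value */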
description of DPT_SHUTDOWN_SLEEP in LINT. Didn't add timestamps
so that the (combined?) sleep interval can be printed as intended
in the original printf.
stability now. Also modify /sys/conf/files, /sys/i386/conf/GENERIC
and /sys/i386/conf/LINT to add entries for the XL driver. Deactivate
support for the XL adapters in the vortex driver. Lastly, add a man
page.
(Also added an MLINKS entry for the ThunderLAN man page which I forgot
previously.)
applications. Here's how it works.
The kernel should include <machine/elf.h> to get the definitions
for the native architecture. It can reference the various ELF
structures with generic names like Elf_Sym, Elf_Shdr, etc. A define
__ELF_WORD_SIZE is also available with the value 32 or 64 as
appropriate for the native architecture.
Generic applications should include <elf.h>, which is just a wrapper
for <machine/elf.h>.
Applications such as object file dumpers that need to deal with
foreign ELF files can include <sys/elf32.h> and/or <sys/elf64.h>.
Both can be included from the same source file if desired. The
structure names must be referenced using wordsize-specific names
like Elf32_Sym, Elf64_Shdr, etc.
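For example (a hypothetical fragment showing both styles of use):

    /* Kernel or native-only code: */
    #include <machine/elf.h>

    Elf_Ehdr eh;                     /* native word size */
    #if __ELF_WORD_SIZE == 64
        /* 64-bit-specific handling */
    #endif

    /* An object file dumper handling both word sizes: */
    #include <sys/elf32.h>
    #include <sys/elf64.h>

    Elf32_Shdr sh32;
    Elf64_Shdr sh64;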
I haven't changed the alpha stuff, but I haven't broken it either.