controller ported from NetBSD. It supports the following Gigabit
Ethernet adapters.
o Antares Microsystems Gigabit Ethernet
o ASUS NX1101 Gigabit Ethernet
o D-Link DL-4000 Gigabit Ethernet
o IC Plus IP1000A Gigabit Ethernet
o Sundance ST-2021 Gigabit Ethernet
o Sundance ST-2023 Gigabit Ethernet
o Sundance TC9021 Gigabit Ethernet
o Tamarack TC9021 Gigabit Ethernet
The IP1000A Gigabit Ethernet is also found on some motherboards
(LOM) from ABIT.
Unlike the NetBSD stge(4) driver it does not require promiscuous mode
operation to receive packets, and it supports all hardware features
(TCP/UDP/IP checksum offload, VLAN tag stripping/insertion and jumbo
frames) as well as polling(4).
Due to lack of hardware, adapters with TBI transceivers were not
tested at all.
Special thanks to wpaul, who provided a valuable datasheet for the
controller and helped to debug jumbo frame related issues. Without
his datasheet I would have spent many more hours debugging this chip.
Tested on: i386, sparc64
mac-io bus, we cannot set up FAST interrupt handlers. This is because
we use spinlocks to protect the hardware and all interrupt resources
are assigned the same interrupt handler. When the interrupt handler is
invoked for interrupt X, it could be preempted for interrupt Y while
it was holding the lock (where X and Y are the interrupt resources
corresponding to a single instance of this driver). This is a deadlock.
By only using an MPSAFE handler in that case we prevent preemption.
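A minimal sketch of that arrangement, with invented names (examp_softc,
examp_intr) and the 6.x-era bus_setup_intr() prototype: the handlers
are registered INTR_MPSAFE, never INTR_FAST, and both interrupt
resources of an instance share one spin mutex.

/*
 * Hedged sketch, hypothetical driver names: register MPSAFE handlers
 * (no INTR_FAST) so a handler holding the shared spin lock is not
 * preempted by the instance's other interrupt resource.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/rman.h>
#include <machine/bus.h>
#include <machine/resource.h>

#define	EXAMP_NIRQ	2	/* hypothetical: two IRQ resources, one handler */

struct examp_softc {
	struct mtx	 sc_mtx;	/* spin lock protecting the hardware */
	struct resource	*sc_irq_res[EXAMP_NIRQ];
	void		*sc_ih[EXAMP_NIRQ];
};

static void
examp_intr(void *arg)
{
	struct examp_softc *sc = arg;

	mtx_lock_spin(&sc->sc_mtx);
	/* ... acknowledge and service the hardware ... */
	mtx_unlock_spin(&sc->sc_mtx);
}

static int
examp_setup_intr(device_t dev, struct examp_softc *sc)
{
	int error, i;

	mtx_init(&sc->sc_mtx, "examp hw", NULL, MTX_SPIN);
	for (i = 0; i < EXAMP_NIRQ; i++) {
		/* MPSAFE, deliberately without INTR_FAST. */
		error = bus_setup_intr(dev, sc->sc_irq_res[i],
		    INTR_TYPE_MISC | INTR_MPSAFE, examp_intr, sc,
		    &sc->sc_ih[i]);
		if (error != 0)
			return (error);
	}
	return (0);
}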
specific routines from uipc_socket2.c following repo-copy. We might
rethink the location of one or two at some point, but the division was
relatively clean. uipc_sockbuf.c is now the home of routines that
manipulate socket buffers.
take a timeval indicating when the packet was captured. Move
microtime() to the calling functions and grab the timestamp as soon
as we know that we're going to call catchpacket() at least once.
This means that we call microtime() once per matched packet, as
opposed to once per matched packet per bpf listener. It also means
that we return the same timestamp to all bpf listeners, rather than
slightly different ones.
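In code, the change amounts to a lazily-taken timestamp in the tap
loop. A hedged sketch follows; the descriptor and field names are
written from memory of bpf.c, so treat them as illustrative:

/*
 * Sketch of the "one microtime() per matched packet" pattern; names
 * are illustrative, not a verbatim copy of bpf.c.
 */
#include <sys/param.h>
#include <sys/time.h>
#include <sys/queue.h>
#include <net/bpf.h>
#include <net/bpfdesc.h>

static void
example_bpf_tap(struct bpf_if *bp, u_char *pkt, u_int pktlen)
{
	struct bpf_d *d;
	struct timeval tv;
	u_int slen;
	int gottime = 0;

	LIST_FOREACH(d, &bp->bif_dlist, bd_next) {
		slen = bpf_filter(d->bd_filter, pkt, pktlen, pktlen);
		if (slen == 0)
			continue;	/* this listener's filter rejected it */
		if (!gottime) {
			/* First match: take the timestamp exactly once. */
			microtime(&tv);
			gottime = 1;
		}
		/* Every matching listener gets the same timeval. */
		catchpacket(d, pkt, pktlen, slen, bcopy, &tv);
	}
}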
It would be more accurate to call microtime() even earlier for all
packets, as you have to grab (1 + #listeners) locks before you can
determine if the packet will be logged. You could always grab a
timestamp before the locks, but microtime() can be costly, so this
didn't seem like a good idea.
(I guess most Ethernet interfaces will have a bpf listener these
days because of dhclient. That means that we could be doing two bpf
locks on most packets going through the interface.)
PR: 71711
soreceive(), and sopoll(), which are wrappers for pru_sosend,
pru_soreceive, and pru_sopoll, and are now used universally by socket
consumers rather than either directly invoking the old so*() functions
or directly invoking the protocol switch method (about an even split
prior to this commit).
This completes an architectural change that was begun in 1996 to permit
protocols to provide substitute implementations, as now used by UDP.
Consumers now uniformly invoke sosend(), soreceive(), and sopoll() to
perform these operations on sockets -- in particular, distributed file
systems and socket system calls.
Architectural head nod: sam, gnn, wollman
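As a hedged sketch of what the wrapper layer looks like (prototype
abridged; treat it as illustrative rather than the exact uipc_socket.c
function), sosend() simply dispatches through the protocol switch,
which is what lets a protocol such as UDP substitute its own
implementation:

/*
 * Illustrative wrapper; the real prototypes may differ in detail.
 */
#include <sys/param.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <sys/protosw.h>

int
sosend(struct socket *so, struct sockaddr *addr, struct uio *uio,
    struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
{

	/* The protocol (e.g. UDP) may have installed its own pru_sosend. */
	return (so->so_proto->pr_usrreqs->pru_sosend(so, addr, uio, top,
	    control, flags, td));
}

soreceive() and sopoll() follow the same one-line dispatch pattern
through pru_soreceive and pru_sopoll.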
The register layout has changed since the original NV4 - sigh.
Hotplug support has been fixed for all nVidia chipsets that support it
(including the MCP51/55).
HW donated by: Kingsley College
the kernel malloc(9) state for vmstat -m. libmemstat is now used to
generate a machine-readable version, which vmstat -m then converts
into a human-readable one.
Not for MFC.
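For reference, a hedged userland sketch of the libmemstat(3) flow that
vmstat -m now follows; the memstat_* accessor names are written from
memory of the library's manual page, so verify them before reuse:

/*
 * Minimal libmemstat(3) consumer sketch: fetch malloc(9) statistics
 * over sysctl and print a vmstat -m style summary.  Link with -lmemstat.
 */
#include <sys/types.h>

#include <err.h>
#include <memstat.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	struct memory_type_list *mtlp;
	struct memory_type *mtp;

	if ((mtlp = memstat_mtl_alloc()) == NULL)
		err(1, "memstat_mtl_alloc");
	/* Populate the list with the kernel's malloc(9) type statistics. */
	if (memstat_sysctl_malloc(mtlp, 0) < 0)
		errx(1, "memstat_sysctl_malloc: error %d",
		    memstat_mtl_geterror(mtlp));
	for (mtp = memstat_mtl_first(mtlp); mtp != NULL;
	    mtp = memstat_mtl_next(mtp))
		printf("%16s %10ju allocs %10ju bytes in use\n",
		    memstat_get_name(mtp),
		    (uintmax_t)memstat_get_numallocs(mtp),
		    (uintmax_t)memstat_get_bytes(mtp));
	memstat_mtl_free(mtlp);
	return (0);
}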
used to mark UNIX domain sockets as being in the process of binding or
connecting. Use these to prevent simultaneous bind or connect
operations on the same socket by multiple threads or processes, which
closes race conditions present in the UNIX domain socket implementation
since its inception.
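A hedged sketch of the guard those flags provide, using hypothetical
names throughout (example_pcb, EX_BINDING, example_create_vnode)
rather than the real uipc_usrreq.c code: the pcb is flagged before the
sleepable part of bind() and unflagged afterwards, so a concurrent
bind on the same socket fails cleanly instead of racing.

/*
 * Sketch only, hypothetical names: mark the pcb while a bind is in
 * progress so other threads back off; connect() uses the same idea
 * with its own flag.
 */
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/lock.h>
#include <sys/mutex.h>

#define	EX_BINDING	0x0001		/* bind() in progress on this pcb */

struct vnode;

struct example_pcb {
	struct mtx	 ex_mtx;
	int		 ex_flags;
	struct vnode	*ex_vnode;	/* non-NULL once bound */
};

/* Stand-in for the namei()/VOP_CREATE() work; may sleep. */
static int	example_create_vnode(struct example_pcb *, struct vnode **);

static int
example_bind(struct example_pcb *pcb)
{
	struct vnode *vp = NULL;
	int error;

	mtx_lock(&pcb->ex_mtx);
	if (pcb->ex_vnode != NULL || (pcb->ex_flags & EX_BINDING) != 0) {
		/* Already bound, or another thread is binding right now. */
		mtx_unlock(&pcb->ex_mtx);
		return (EINVAL);
	}
	pcb->ex_flags |= EX_BINDING;
	mtx_unlock(&pcb->ex_mtx);

	/* Sleepable vnode creation runs unlocked; the flag keeps others out. */
	error = example_create_vnode(pcb, &vp);

	mtx_lock(&pcb->ex_mtx);
	pcb->ex_flags &= ~EX_BINDING;
	if (error == 0)
		pcb->ex_vnode = vp;	/* later binds now see it as bound */
	mtx_unlock(&pcb->ex_mtx);
	return (error);
}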