so that last_work_seen has a reasonable value at the transition
to the SYNCER_SHUTTING_DOWN state, even if net_worklist_len happened
to be zero at the time.
Initialize last_work_seen to zero as a safety measure in case the
syncer never ran in the SYNCER_RUNNING state.
Tested by: phk
device is open. This allows certain old and rather special dual
floppy controllers to work on both channels, as long as you only
have one open at a time.
When two drivers share an ISA DMA channel, they both call isa_dmainit()
and the second call fails if DIAGNOSTIC is on.
If isa_dmainit() was already called successfully, just return silently.
This only works if both drivers agree on the bounce buffer size,
but since sharing DMA is usually only possible on very special
hardware and then typically only for devices of the same type (which
would have multiple instances of the same device driver), this is
not a problem in practice.
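A minimal sketch of the early-return idea, with hypothetical names (the
real logic lives in isa_dmainit(); the bookkeeping shown here is only an
illustration): remember the bounce buffer size per channel and return
quietly when the channel was already set up with the same size.

        #include <errno.h>

        #define NDMACHAN        8

        /* 0 means "channel not initialized yet" (hypothetical table). */
        static unsigned dma_bounce_size[NDMACHAN];

        int
        dma_channel_init(int chan, unsigned bouncebufsize)
        {

                if (chan < 0 || chan >= NDMACHAN)
                        return (EINVAL);
                if (dma_bounce_size[chan] != 0) {
                        /* Second caller: fine as long as the sizes agree. */
                        return (dma_bounce_size[chan] == bouncebufsize ?
                            0 : EBUSY);
                }
                /* ... allocate a bounce buffer of bouncebufsize bytes ... */
                dma_bounce_size[chan] = bouncebufsize;
                return (0);
        }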
belong in the respective drivers. I've not removed ALL of them, as a
few still haven't moved. I've just removed the ones that aren't used.
# these can be removed from amd64, but I'm having issues getting to
# sledge at the moment for a build.
named link, foo_link or link_foo to lnk, foo_lnk or lnk_foo, fixing
signed / unsigned comparisons, and shoving unused function arguments
under the carpet.
I was hoping WARNS?=6 might reveal more serious problems, and perhaps
the source of the -O2 breakage, but found no smoking gun.
- Eliminate the use of a recursive mutex.
- Mark the driver INTR_MPSAFE.
This work is incomplete and will be refined in a future commit.
- Most notably, _locked() variants of entry points need to be introduced
  (see the sketch below).
- The mii upcall/downcall may still be racy.
- Add a stubbed-out guard against racing rl_detach() for the time being.
Tested on: UP, debug.mpsafenet && !debug.mpsafenet
Reviewed by: silence on -net
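As a rough illustration of the _locked() split mentioned above (the names
RL_LOCK/RL_UNLOCK/RL_LOCK_ASSERT and rl_start_locked() are assumptions for
this sketch, not necessarily what the driver ends up with): the ifnet entry
point takes the softc mutex and hands off to a worker that expects the lock
to be held, so code paths that already own the lock can call the worker
directly.

        static void     rl_start_locked(struct ifnet *ifp);

        static void
        rl_start(struct ifnet *ifp)
        {
                struct rl_softc *sc = ifp->if_softc;

                RL_LOCK(sc);
                rl_start_locked(ifp);
                RL_UNLOCK(sc);
        }

        static void
        rl_start_locked(struct ifnet *ifp)
        {
                struct rl_softc *sc = ifp->if_softc;

                RL_LOCK_ASSERT(sc);
                /* ... hand queued packets to the hardware ... */
        }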
Use C99 types. Use ANSI function definitions. Sort prototypes.
Split long lines correctly. Punctuate/wordsmith comments.
Use device_printf()/if_printf() where possible.
Reviewed by: -net (silence)
- Eliminate the use of a recursive mutex.
- Mark the driver as INTR_MPSAFE.
- Split the default media choice code out into xl_choose_media() to
avoid making poor assumptions about the state of the lock during attach.
- The miibus upcall/downcall paths may still be racy.
  Change to commented-out locking assertions there for now (see the
  sketch below).
- Tested with nfsclient, routed, ssh, ntp, dhclient and quagga bgpd.
- This needs SMP test coverage. I do not have such resources.
Tested on: UP, !debug.mpsafenet && debug.mpsafenet
Hardware: 3C905B-TX (0x905510b7)
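For concreteness, the disabled assertions mentioned above would look
roughly like this in the miibus paths (XL_LOCK_ASSERT is assumed to wrap
mtx_assert(9); the exact macro name in the driver may differ):

        /* Disabled until the miibus upcall/downcall locking is settled: */
        /* XL_LOCK_ASSERT(sc); */
        /* ... equivalent to: mtx_assert(&sc->xl_mtx, MA_OWNED); */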
Speed up the syncer when shutting down by sleeping for a shorter
period of time instead of cranking up rushjob and using the
normal one second sleep.
Skip empty worklist slots when shutting down to avoid lengthy
intervals of inactivity.
Give I/O more time to complete between steps by not speeding up the
syncer quite as much.
Terminate the syncer after one full pass through the worklist
plus one second with the worklist containing nothing but syncer
vnodes.
Print an indication of shutdown progress to the console.
Add a sysctl, vfs.worklist_len, to allow the size of the syncer worklist
to be monitored.
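For monitoring from userland, the new sysctl can be read like any other
integer sysctl; a small example using sysctlbyname(3), assuming the value
is exported as an int:

        #include <sys/types.h>
        #include <sys/sysctl.h>
        #include <stdio.h>

        int
        main(void)
        {
                int len;
                size_t sz = sizeof(len);

                if (sysctlbyname("vfs.worklist_len", &len, &sz, NULL, 0) == -1) {
                        perror("sysctlbyname(vfs.worklist_len)");
                        return (1);
                }
                printf("syncer worklist length: %d\n", len);
                return (0);
        }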
- Use device_printf() during device probe/attach.
- Move if_xname initialization to before xl_reset() is called.
- Use if_printf() at all other times after struct ifnet has been
initialized.
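Illustrative calls only (the messages are made up): before if_xname is
set up, report via the device; once the ifnet is initialized, prefix
messages with the interface name.

        /* During probe/attach, before struct ifnet is initialized: */
        device_printf(dev, "couldn't map interrupt\n");

        /* Afterwards, when the interface name is meaningful: */
        if_printf(ifp, "watchdog timeout\n");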
with sleepable locks held from further up in the network stack, and
attempts to allocate memory to hold multicast group membership information
with M_WAITOK.
This panic was triggered specifically when an exiting routing daemon
process closes its raw sockets after joining multicast groups on them.
While we're here, comment some possible locking badness.
PR: kern/48560
this more accurately reflects the underlying hardware of most
acpi machines that don't have child PCI buses.
We still need a better way to get this information from acpi/hardware.
dereferenced directly. Toss an ifdef around it for the moment and
allow this to compile. This likely means that priority packets aren't
queued to the special high priority queue. The maintainer of this
should look into the problem.
This is likely fallout from the recent netgraph migration to the more
generic mbuf meta tags.
Fixes: pc98 tinderbox
and WITNESS is not built, then force all M_WAITOK allocations to
M_NOWAIT behavior (transparently). This is to be used temporarily
if weird deadlocks are reported because we still have code paths
that perform M_WAITOK allocations with lock(s) held, which can
lead to deadlock. If WITNESS is compiled, then the sysctl is ignored
and we ask witness to tell us whether we have locks held, converting
to M_NOWAIT behavior only if it tells us that we do.
Note this removes the previous mbuf.h inclusion as well (only needed
by last revision), and cleans up unneeded [artificial] comparisons
to just the mbuf zones. The problem described above has nothing to
do with previous mbuf wait behavior; it is a general problem.
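A sketch of the conversion described above, under stated assumptions (the
knob name and the locks_held parameter are hypothetical placeholders for
the sysctl and the witness query; the real code sits in the kernel
allocation paths):

        static int debug_force_nowait;  /* hypothetical sysctl-backed knob */

        static int
        adjust_wait_flags(int flags, int locks_held)
        {

                if ((flags & M_WAITOK) == 0)
                        return (flags);
        #ifdef WITNESS
                /* 'locks_held' stands in for asking witness directly. */
                if (locks_held)
                        flags = (flags & ~M_WAITOK) | M_NOWAIT;
        #else
                if (debug_force_nowait)
                        flags = (flags & ~M_WAITOK) | M_NOWAIT;
        #endif
                return (flags);
        }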
zones, and do it by direct comparison of uma_zone_t instead of strcmp.
The mbuf subsystem used to provide M_TRYWAIT/M_DONTWAIT semantics, but
this is mostly no longer the case. M_WAITOK has taken over the spot
M_TRYWAIT used to have, and for mbuf things, still may return NULL if
the code path is incorrectly holding a mutex going into mbuf allocation
functions.
The M_WAITOK/M_NOWAIT semantics are absolute; even though trying to
malloc or uma_zalloc something with a mutex held and M_WAITOK specified
may deadlock the system, an M_WAITOK allocation is absolutely required
not to return NULL, and returning NULL would result in instability
and/or security breaches.
There is still room to add the WITNESS_WARN() to all cases so that
we are notified of the possibility of deadlocks, but it cannot change
the value of the "badness" variable and allow allocation to actually
fail except for the specialized cases which used to be M_TRYWAIT.
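The WITNESS_WARN() check alluded to above could be added along these
lines (placement and message are illustrative only):

        if (flags & M_WAITOK)
                WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
                    "malloc(M_WAITOK) with non-sleepable lock held");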
functionality by setting it to a non-zero value. The sysctl is an
integer, but the code treats it as a boolean, so clamp it to a boolean
value when it is set, to avoid an unnecessary bridge reinitialization
when it merely changes from one non-zero value to another (see the
sketch below).
PR: kern/61174
Requested by: Bruce Cran
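One way to express the clamp is a SYSCTL_PROC handler along these lines
(variable and function names are hypothetical; this is not the actual
bridge code):

        static int bridge_enable;       /* used as a boolean by the code */

        static int
        sysctl_bridge_enable(SYSCTL_HANDLER_ARGS)
        {
                int error, v;

                v = bridge_enable;
                error = sysctl_handle_int(oidp, &v, 0, req);
                if (error != 0 || req->newptr == NULL)
                        return (error);
                v = (v != 0);                   /* clamp to 0 or 1 */
                if (v != bridge_enable) {
                        bridge_enable = v;
                        /* ... reinitialize only on a real change ... */
                }
                return (0);
        }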
around in the vnode's surroundings when we allocate a block.
Assign a blocksize when we create a vnode, and yell a warning (and ignore it)
if we got the wrong size.
Please email all such warnings to me.
generic filesystem events to userspace. Currently only mount and unmount
of filesystems are signalled. Soon to be added, up/down status of NFS.
Introduce a sysctl node used to route requests to/from filesystems
based on filesystem ids.
Introduce a new vfsop, vfs_sysctl(mp, req) that is used as the callback/
entrypoint by the sysctl code to change individual filesystems.
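A filesystem with nothing to report can supply a trivial handler; a
sketch using the signature given above (the vfsops glue is omitted, and
returning EOPNOTSUPP as the "not handled" answer is an assumption):

        static int
        examplefs_sysctl(struct mount *mp, struct sysctl_req *req)
        {

                /* No filesystem-specific sysctl state to expose yet. */
                return (EOPNOTSUPP);
        }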
ffs_mount -> bdevvp -> getnewvnode(..., mp = NULL, ...) ->
insmntque(vp, mp = NULL) -> KASSERT -> panic
Make getnewvnode() only call insmntque() if the mountpoint parameter
is not NULL.
our cached 'next vnode' being removed from this mountpoint. If we
find that it was recycled, we restart our traversal from the start
of the list.
Code to do that is in all local disk filesystems (and a few other
places) and looks roughly like this:
        MNT_ILOCK(mp);
loop:
        for (vp = TAILQ_FIRST(&mp->mnt_nvnodelist);
            vp != NULL;
            vp = nvp) {
                if (vp->v_mount != mp)
                        goto loop;
                nvp = TAILQ_NEXT(vp, v_nmntvnodes);
                MNT_IUNLOCK(mp);
                ...
                MNT_ILOCK(mp);
        }
        MNT_IUNLOCK(mp);
The code which takes vnodes off a mountpoint looks like this:
        MNT_ILOCK(vp->v_mount);
        ...
        TAILQ_REMOVE(&vp->v_mount->mnt_nvnodelist, vp, v_nmntvnodes);
        ...
        MNT_IUNLOCK(vp->v_mount);
        ...
        vp->v_mount = something;
(Take a moment and try to spot the locking error before you read on.)
On an SMP system, one CPU could have removed nvp from our mountlist
but not yet gotten to assign a new value to vp->v_mount while another
CPU simultaneously gets to the top of the traversal loop where it
finds that (vp->v_mount != mp) is not true despite the fact that
the vnode has indeed been removed from our mountpoint.
Fix:
Introduce the macro MNT_VNODE_FOREACH() to traverse the list of
vnodes on a mountpoint while taking into account that vnodes may
be removed from the list as we go. This saves approx 65 lines of
duplicated code.
Split insmntque(), which potentially moves a vnode from one mount
point to another, into delmntque() and insmntque(), which do just
what their names say.
Fix delmntque() to set vp->v_mount to NULL while holding the
mountpoint lock.
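A sketch of the delmntque() side of the fix, based only on what is
described above (field names are taken from the snippets earlier in this
message; the real function may do more):

        void
        delmntque(struct vnode *vp)
        {
                struct mount *mp;

                mp = vp->v_mount;
                if (mp == NULL)
                        return;
                MNT_ILOCK(mp);
                /* Clear v_mount while the interlock is held ... */
                vp->v_mount = NULL;
                /* ... so traversers re-checking it under the lock see it. */
                TAILQ_REMOVE(&mp->mnt_nvnodelist, vp, v_nmntvnodes);
                MNT_IUNLOCK(mp);
        }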