calls MiniportQueryInformation(), it will return NDIS_STATUS_PENDING.
When this happens, ndis_get_info() will sleep waiting for a completion
event. If two threads call ndis_get_info() and both have to sleep,
they will end up waiting on the same wait channel, which can cause a
panic in sleepq_add() if INVARIANTS is enabled.
Fix this by having ndis_get_info() use a common mutex rather than
using the process mutex with PROC_LOCK(). Also do the same for
ndis_set_info(). Note that Pierre's original patch also made ndis_thsuspend()
use the new mutex, but ndis_thsuspend() shouldn't need this since
it will make each thread that calls it sleep on a unique wait channel.
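A minimal sketch of the resulting pattern, assuming a driver-wide mutex
created for this purpose (the mutex name, wait message and timeout below
are illustrative, not the actual kern_ndis.c code):

    static struct mtx ndis_req_mtx;     /* shared sleep lock */

    MTX_SYSINIT(ndis_req, &ndis_req_mtx, "ndis request lock", MTX_DEF);

    static int
    ndis_wait(void *event)
    {
            int error;

            mtx_lock(&ndis_req_mtx);
            /*
             * All sleepers on a given wait channel now pass the same
             * lock to msleep(), which is what sleepq_add() expects
             * when INVARIANTS is enabled.
             */
            error = msleep(event, &ndis_req_mtx, 0, "ndisw", 5 * hz);
            mtx_unlock(&ndis_req_mtx);
            return (error);
    }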
Also, it occurred to me that we probably don't want to enter
MiniportQueryInformation() or MiniportSetInformation() from more
than one thread at any given time, so now we acquire a Windows
spinlock before calling either of them. The Microsoft documentation
says that MiniportQueryInformation() and MiniportSetInformation()
are called at DISPATCH_LEVEL, and previously we would call
KeRaiseIrql() to set the IRQL to DISPATCH_LEVEL before entering
either routine, but this only guarantees mutual exclusion on
uniprocessor machines. To make it SMP safe, we need to use a real
spinlock. For now, I'm abusing the spinlock embedded in the
NDIS_MINIPORT_BLOCK structure for this purpose. (This may need to be
applied to some of the other routines in kern_ndis.c at a later date.)
Export ntoskrnl_init_lock() (KeInitializeSpinlock()) from subr_ntoskrnl.c
since we need to use it in kern_ndis.c, and since it's technically part
of the Windows kernel DDK API along with the other spinlock routines. Use
it in subr_ndis.c too rather than frobbing the spinlock directly.
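Roughly, the query/set paths now look like this (sketch only: the
KeAcquireSpinLock()/KeReleaseSpinLock() calls and the nmb_lock field
name are approximations of the NDISulator code, and the other variables
come from the surrounding function):

    uint8_t irql;

    /*
     * Take the spinlock embedded in the NDIS_MINIPORT_BLOCK before
     * entering the miniport.  This raises the IRQL to DISPATCH_LEVEL
     * and provides real mutual exclusion on SMP, which KeRaiseIrql()
     * alone cannot.
     */
    KeAcquireSpinLock(&sc->ndis_block.nmb_lock, &irql);
    rval = queryfunc(adapter, oid, buf, buflen, &written, &needed);
    KeReleaseSpinLock(&sc->ndis_block.nmb_lock, irql);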
the last action of kern_exit(). Instead, it is an MD callout to clean up
per-process state during exit.
- Add notes of concern to Alpha and ia64 about the possible need to drop
fp state in cpu_thread_exit() rather than in cpu_exit() since it is
per-thread state rather than per-process.
- ip_fw_chk() now returns the action as its function return value. The
retval field has been removed from the args structure. The action is no
longer a flag; it is one of a set of integer constants.
- Any action-specific cookie is returned either in the new "cookie" field
of the args structure (dummynet, future netgraph glue), or in an mbuf tag
attached to the packet (divert, tee, some future actions).
o Convert parsing of the return value from ip_fw_chk() in ipfw_check_{in,out}()
to a switch statement, so that the functions are more readable and future
actions can be added with fewer modifications (see the sketch below).
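A sketch of the new calling convention in ipfw_check_{in,out}() (the
constant and field names follow the description above, but the snippet
is illustrative rather than quoted from the tree; m0 is the mbuf pointer
argument of the pfil hook):

    struct ip_fw_args args;
    int ipfw, ret;

    bzero(&args, sizeof(args));
    args.m = *m0;
    ipfw = ip_fw_chk_ptr(&args);

    switch (ipfw) {
    case IP_FW_PASS:
            ret = 0;                /* accept the packet */
            break;
    case IP_FW_DENY:
            ret = EACCES;           /* drop it */
            break;
    case IP_FW_DUMMYNET:
            /* args.cookie carries the dummynet pipe/queue number */
            /* ...hand the packet to dummynet here... */
            ret = 0;
            break;
    case IP_FW_DIVERT:
    case IP_FW_TEE:
            /* the divert port travels in an mbuf tag on the packet */
            /* ...divert/tee handling... */
            ret = 0;
            break;
    default:
            KASSERT(0, ("%s: unknown ip_fw_chk() return", __func__));
    }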
Approved by: andre
MFC after: 2 months
and KASSERT coverage.
After this check there is only one "nasty" cast in this code but there
is a KASSERT to protect against the wrong argument structure behind
that cast.
Un-inlining the meat of VOP_FOO() saves 35kB of text segment on a typical
kernel with no change in performance.
We also now run the checking and tracing on VOPs which have been layered
by nullfs, umapfs, deadfs or unionfs.
Add new (non-inline) VOP_FOO_AP() functions which take a "struct
foo_args" argument and do everything the VOP_FOO() macros
used to do with checks and debugging code.
Add a KASSERT to VOP_FOO_AP() to check that the argument type is
correct.
Slim down VOP_FOO() inline functions to just stuff arguments
into the struct foo_args and call VOP_FOO_AP().
Put a function pointer to VOP_FOO_AP() into the vop_foo_desc structure
and make VCALL() use it instead of the current offsetof() hack.
Retire vcall(), which implemented the offsetof()-based dispatch.
Make deadfs and unionfs use VOP_FOO_AP() calls instead of
VCALL(); we already know which specific call we want.
Remove unneeded arguments to VCALL() in nullfs and umapfs bypass
functions.
Remove unused vdesc_offset and VOFFSET().
Generally improve style/readability of the generated code.
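As a rough illustration of the shape of the generated code (the real
functions are emitted by vnode_if.awk; VOP_READ is used as an example
and its argument list is abbreviated):

    int
    VOP_READ_AP(struct vop_read_args *a)
    {

            KASSERT(a->a_gen.a_desc == &vop_read_desc,
                ("Wrong a_desc in %s", __func__));
            /* ...assertion, debugging and tracing code lives here... */
            return (VCALL(a));
    }

    static __inline int
    VOP_READ(struct vnode *vp, struct uio *uio, int ioflag,
        struct ucred *cred)
    {
            struct vop_read_args a;

            a.a_gen.a_desc = &vop_read_desc;
            a.a_vp = vp;
            a.a_uio = uio;
            a.a_ioflag = ioflag;
            a.a_cred = cred;
            return (VOP_READ_AP(&a));
    }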
so we need to acquire Giant in netgraph methods, so that we don't
race with line discipline methods. Remove NET_NEEDS_GIANT.
- Packets coming into the node from netgraph are queued in an ifqueue
attached to the node's private data.
- The mutex in struct ifqueue is used to lock not only the queue, but
also the whole private data and the tp->t_lsc field.
- The tp->t_lsc pointer is used to indicate whether the line discipline
is attached to netgraph or not.
- Use FLG_DIE flag to indicate that node may be destroyed.
(This protection doesn't work, and it didn't before. Must be redesigned.)
- Increment ngt_unit atomically, removing the mutex.
- Acquire Giant, when executing ngt_start() from netgraph context.
- Acquire Giant, when {,de}registering line discipline.
- Uncomment forcing queue mode on the peer's hook, since this is reasonable.
- Force queue mode on our hook, to avoid acquiring Giant when coming from
the network stack; we may already hold some mutexes at this point (see the
sketch below).
Cleanups:
- Use callout_pending() instead of our own flag.
- Remove spl(9) calls. Now we can use return() instead of ERROUT().
style(9):
- Sort includes.
- Sparse initializer for struct linesw.
- Remove some empty lines, sort declarations.
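The queue-mode part of this, in sketch form (a standard netgraph
connect method; not the verbatim ng_tty code):

    static int
    ngt_connect(hook_p hook)
    {
            /*
             * Force queue mode on both ends, so data entering from the
             * network stack is delivered by the netgraph queueing
             * framework, where acquiring Giant is safe, rather than
             * directly while arbitrary mutexes may be held.
             */
            NG_HOOK_FORCE_QUEUE(hook);
            NG_HOOK_FORCE_QUEUE(NG_HOOK_PEER(hook));
            return (0);
    }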
Reviewed by: julian, phk
MFC after: 1 month
of len in tcp_output(), in the case where the FIN has already been
transmitted. The miscomputation of len is caused by a gcc optimization
issue, which this change works around.
Submitted by: Mohan Srinivasan
interrupt is wired up to all the I/O APICs in the system. If the system
has only one I/O APIC, then just act as if the entry specified that APIC.
We still don't try to handle global entries in a system with multiple I/O
APICs.
Tested by: Peter Trifonov pvtrifonov at mail dot ru
MFC after: 1 week
up its pending error state, which may be set under some rare conditions,
resulting in the connect() syscall returning that bogus error and making the
application believe that the attempt to change the association has failed,
while in fact it has not.
There is a sockets/reconnect regression test which exercises this bug.
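The idea in sketch form (assuming the clearing happens in soconnect();
the surrounding code is elided):

    /*
     * Forget any error left over from the previous association before
     * starting the new connection attempt, so that a stale so_error is
     * not reported back through connect(2).
     */
    so->so_error = 0;
    error = (*so->so_proto->pr_usrreqs->pru_connect)(so, nam, td);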
MFC after: 2 weeks
errno can potentially be tampered with by a nested signal handler.
Now all error codes are returned as negative values; positive values are
reserved for future expansion.
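An illustration of the convention (the wrapper and raw-call names are
hypothetical):

    static int
    call_wrapper(void *arg)
    {
            int ret;

            /*
             * raw_call() is a hypothetical raw entry point that returns
             * 0 on success or a negated error code, never touching
             * errno, so a nested signal handler cannot clobber the
             * result.
             */
            ret = raw_call(arg);
            if (ret < 0)
                    return (-ret);  /* positive error code for the caller */
            return (0);
    }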
external source (i.e., _STA). The previous case only handled calls
occurring within AML. This should fix Toshibas, among others. Thanks
to Robert Moore of Intel for the fix.
MFC after: 2 days