"scheduler" here has very little to do with scheduling. It is actually
the swapper, and it really must be the last SYSINIT'ed item like its
comment says, since proc0 metamorphoses into swapper by calling
scheduler() last in mi_start(), and scheduler() never returns. Rev. 1.29
of subr_4bsd.c broke this by adding another SI_ORDER_FIRST item
(kproc_start() for schedcpu_thread()) onto the SI_SUB_RUN_SCHEDULER list.
The sorting of SYSINITs with identical orders (at all levels) is
apparently nondeterministic, so this resulted in scheduler() sometimes
being called second last and schedcpu_thread() not being called at all.
This quick fix just changes the code to almost match the comment
(SI_ORDER_FIRST -> SI_ORDER_ANY). "LAST" is misspelled "ANY", and
there is no way to ensure that there is only 1 very last SYSINIT.
A more complete fix would remove the SYSINIT obfuscation.
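A minimal sketch of the ordering involved, for reference (the function body
and the swapper_start name are illustrative, not the actual vm_glue.c code):

    #include <sys/param.h>
    #include <sys/kernel.h>

    static void
    swapper_start(void *dummy)
    {
            /* proc0 metamorphoses into the swapper here and never
             * returns, so this must be the very last SYSINIT to run. */
            for (;;)
                    ;       /* stand-in for the real swapper loop */
    }
    /* SI_ORDER_ANY sorts last within SI_SUB_RUN_SCHEDULER; with
     * SI_ORDER_FIRST, other items of the same order can sort after it. */
    SYSINIT(swapperstart, SI_SUB_RUN_SCHEDULER, SI_ORDER_ANY,
        swapper_start, NULL);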
won't associate in BSS mode if you use AUTHMODE_SHARED. I probably don't
understand enough to know when SHARED should be used vs. OPEN or WPA.
For now, go back to what works.
bit for this being the last CTIO2. It didn't matter since it really was the
last CTIO2 and the resources recycled, but still....
Add in CTIO3 define for future DAC work.
for direct-mapped addresses. Assume that any address less than KVA
is one of these and return it. Also assert that an address that is KVA
does have a valid mapping - callers of pmap_kextract don't check
the return value, since they assume that they have a valid virtual
address.
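A hedged sketch of the behaviour described above, using x86-style page
table helpers (pmap_pte(), PG_V, PG_FRAME) as stand-ins for the real
machine-dependent code:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <vm/vm.h>
    #include <vm/vm_param.h>
    #include <vm/pmap.h>
    #include <machine/pmap.h>

    vm_paddr_t
    pmap_kextract(vm_offset_t va)
    {
            pt_entry_t pte;

            if (va < VM_MIN_KERNEL_ADDRESS)
                    return (va);    /* assumed direct-mapped, 1:1 */
            pte = *pmap_pte(kernel_pmap, va);
            /* Callers never check the return value, so a missing mapping
             * is a bug; catch it here instead of handing back garbage. */
            KASSERT((pte & PG_V) != 0,
                ("pmap_kextract: no mapping for va %#jx", (uintmax_t)va));
            return ((pte & PG_FRAME) | (va & PAGE_MASK));
    }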
addressing of memory. Makes a substantial improvement for apps that
stress the limited amount of KVM on PPC (e.g. untarring the ports tree).
uma_machdep.c stolen from amd64/ia64.
updated for the regparm ABI on amd64.
Context switch debug regs.
Update for fpu simplification
Don't needlessly reload %cr3, in case the cpu has the tlb flush filter
turned off. Re-add LAZY_SWITCH stubs.
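The idea, rendered as C for illustration (the real change is in the
context-switch path, and the names below are assumptions):

    #include <machine/cpufunc.h>

    static void
    switch_cr3(u_long new_cr3)
    {
            /* Only write %cr3 when the incoming pmap really uses a
             * different page directory; an unconditional write flushes
             * the TLB whenever the flush filter is turned off. */
            if (rcr3() != new_cr3)
                    load_cr3(new_cr3);
    }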
profiling buffers and hash table. This makes it a lot easier to
do multiple profiling runs without rebooting or performing
gratuitous arithmetic. The sysctl is named debug.mutex.prof.reset.
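A hedged sketch of what such a reset handler can look like (the buffer,
hash table and lock names are placeholders, not the real kern_mutex.c
ones):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/sysctl.h>

    static struct mtx mprof_locked_mtx;     /* assumed profiling lock */
    static uintmax_t mprof_buf[4096];       /* assumed record buffer */
    static int mprof_hash[4096];            /* assumed hash table */

    static int
    mprof_reset_sysctl(SYSCTL_HANDLER_ARGS)
    {
            int error, v;

            v = 0;
            error = sysctl_handle_int(oidp, &v, 0, req);
            if (error != 0 || req->newptr == NULL)
                    return (error);
            mtx_lock(&mprof_locked_mtx);
            bzero(mprof_buf, sizeof(mprof_buf));
            bzero(mprof_hash, sizeof(mprof_hash));
            mtx_unlock(&mprof_locked_mtx);
            return (0);
    }
    /* the debug.mutex.prof node is assumed to exist already */
    SYSCTL_DECL(_debug_mutex_prof);
    SYSCTL_PROC(_debug_mutex_prof, OID_AUTO, reset, CTLTYPE_INT | CTLFLAG_RW,
        NULL, 0, mprof_reset_sysctl, "I", "Reset mutex profiling data");

With something like this in place, "sysctl debug.mutex.prof.reset=1" starts
a fresh profiling run.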
Reviewed by: jake
- witness_lock() is split into two pieces: witness_checkorder() and
witness_lock(). Witness_checkorder() determines if acquiring a specified
lock at the time it is called would result in a lock order violation. It
optionally adds a new lock order relationship as well. witness_lock()
updates witness's data structures to assume that a lock has been acquired
by sticking a new lock instance in the appropriate lock instance list.
- The mutex and sx lock functions now call checkorder() prior to trying to
acquire a lock and continue to call witness_lock() after the acquire is
completed. This will let witness catch a deadlock before it happens
rather than trying to do so after the threads have deadlocked (i.e., never
actually reporting it).
- A new function witness_defineorder() has been added that adds a lock
order between two locks at runtime without having to acquire the locks.
If the lock order cannot be added it will return an error. This function
is available to programmers via the WITNESS_DEFINEORDER() macro which
accepts either two mutexes or two sx locks as its arguments (see the usage
sketch after this list).
- A few simple wrapper macros were added to allow developers to call
witness_checkorder() anywhere as a way of enforcing locking assertions
in code that might acquire a certain lock in some situations. The
macros are witness_check_{mutex,shared_sx,exclusive_sx}, and each takes an
appropriate lock as its sole argument.
- The code to remove a lock instance from a lock list in witness_unlock()
was unnested by using a goto to vastly improve the readability of this
function.
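A hedged usage sketch for the new interfaces (foo/bar are made-up locks;
the macro spellings follow the description above):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx foo_mtx, bar_mtx;

    static void
    foo_bar_init(void)
    {
            mtx_init(&foo_mtx, "foo", NULL, MTX_DEF);
            mtx_init(&bar_mtx, "bar", NULL, MTX_DEF);
            /* Record "foo before bar" without acquiring either lock; an
             * error comes back if the relationship cannot be added. */
            if (WITNESS_DEFINEORDER(&foo_mtx, &bar_mtx) != 0)
                    printf("could not define foo -> bar lock order\n");
    }

    static void
    foo_do_work(int need_bar)
    {
            mtx_lock(&foo_mtx);
            /* Assert that taking bar_mtx here would be a legal order,
             * even on the paths that never actually take it. */
            witness_check_mutex(&bar_mtx);
            if (need_bar)
                    mtx_lock(&bar_mtx);
            /* ... the actual work ... */
            if (need_bar)
                    mtx_unlock(&bar_mtx);
            mtx_unlock(&foo_mtx);
    }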
instead of taskqueue_swi. This shaves from 1 to 10% off the overhead.
Overhaul the locking once more; there were a few possible races that
are now closed.
panic() so that the buffer overflow just beyond this point is always
caught, even when the code is not compiled with INVARIANTS.
Change chn_setblocksize() buffer reallocation code to attempt to avoid
the feed_vchan16() buffer overflow by always keeping the bufsoft buffer
at least as large as the bufhard buffer.
Print a diagnostic message
Danger! %s bufsoft size increasing from %d to %d after CHANNEL_SETBLOCKSIZE()
if our best attempts fail. If feed_vchan16() were to be called by
the interrupt handler while locks are dropped in chn_setblocksize()
to increase the size of bufsoft to match that of bufhard, the panic()
code in feed_vchan16() would be triggered. If the diagnostic message
is printed, it is a warning that a panic is possible if the system
were to see events in an "unlucky" order.
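A hedged sketch of that safety net (the struct pcm_channel fields and the
sndbuf accessors are assumed to behave as their names suggest; the resize
call stands in for whatever chn_setblocksize() actually does):

    #include <dev/sound/pcm/sound.h>

    static void
    chn_checkbufsoft(struct pcm_channel *c)
    {
            struct snd_dbuf *hw = c->bufhard, *sw = c->bufsoft;

            if (sndbuf_getsize(sw) >= sndbuf_getsize(hw))
                    return;         /* already large enough, nothing to do */
            printf("Danger! %s bufsoft size increasing from %d to %d "
                "after CHANNEL_SETBLOCKSIZE()\n", c->name,
                sndbuf_getsize(sw), sndbuf_getsize(hw));
            /* Growing bufsoft drops the channel lock across the
             * allocation; feed_vchan16() running in that window is what
             * would trip its panic(). */
            sndbuf_resize(sw, sndbuf_getblkcnt(hw), sndbuf_getblksz(hw));
    }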
Change the locking code to avoid the need for MTX_RECURSIVE mutexes.
Add the MTX_DUPOK option to the channel mutexes and change the locking
sequence to always lock the parent channel before its children to avoid
the possibility of deadlock.
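A hedged sketch of the rule (CHN_LOCK()/CHN_UNLOCK() are the usual channel
lock macros; the mutex here stands in for a channel's lock):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <dev/sound/pcm/sound.h>

    static struct mtx ch_mtx;

    static void
    vchan_lock_init(void)
    {
            /* MTX_DUPOK instead of MTX_RECURSE: a parent vchan and its
             * children share one lock class, so holding two at once must
             * be allowed without recursing on a single lock. */
            mtx_init(&ch_mtx, "pcm channel", NULL, MTX_DEF | MTX_DUPOK);
    }

    static void
    vchan_lock_pair(struct pcm_channel *parent, struct pcm_channel *child)
    {
            CHN_LOCK(parent);       /* parent always first... */
            CHN_LOCK(child);        /* ...then the child: no order cycle */
    }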
Actually implement locking assertions for the channel mutexes and fix
the problems found by the resulting assertion violations.
Clean up the locking code in dsp_ioctl().
Allocate the channel buffers using the malloc() M_WAITOK option instead
of M_NOWAIT so that buffer allocation won't fail. Drop locks across
the malloc() calls.
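A hedged sketch of that allocation pattern (M_DEVBUF is a stand-in malloc
type):

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <dev/sound/pcm/sound.h>

    static void *
    chn_alloc(struct pcm_channel *c, size_t size)
    {
            void *p;

            CHN_UNLOCK(c);          /* must not sleep with the lock held */
            p = malloc(size, M_DEVBUF, M_WAITOK);   /* never returns NULL */
            CHN_LOCK(c);            /* caller must revalidate channel state */
            return (p);
    }

The price of M_WAITOK is the dropped lock: anything cached about the
channel may have changed by the time the lock is retaken, so callers have
to recheck their state.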
Add/modify KASSERT()s in an attempt to detect problems early.
Abuse layering by adding a pointer to the snd_dbuf structure that points
back to the pcm_channel that owns it. This allows sndbuf_resize() to do
proper locking without having to change its API, which is used by
the hardware drivers.
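A hedged fragment showing the shape of that back pointer (the field name
is illustrative, and CHN_LOCKASSERT() is assumed to expand to a
mtx_assert() on the channel lock):

    struct snd_dbuf {
            /* ... existing buffer fields ... */
            struct pcm_channel *channel;    /* owning channel, for locking */
    };

    int
    sndbuf_resize(struct snd_dbuf *b, unsigned int blkcnt, unsigned int blksz)
    {
            CHN_LOCKASSERT(b->channel);     /* owner's lock must be held */
            /* ... resize b exactly as before; hardware drivers keep
             * calling this with the same arguments ... */
            return (0);
    }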
Don't dereference a NULL pointer when setting hw.snd.maxautovchans
if a hardware driver is not loaded. Noticed by Ryan Sommers
<ryans at gamersimpact.com>.
Tested by: Stefan Ehmann <shoesoft AT gmx.net>
Tested by: matk (Mathew Kanner)
Tested by: Gordon Bergling <gbergling AT 0xfce3.net>
debug.ddb_use_printf sysctl, output kernel debugger data to both the
console and kernel message buffer via printf. This fixes the case where
backtrace() went directly to the console and should help debugging greatly.
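For illustration, a hedged sketch of the kind of switch involved (the
variable and function names are stand-ins, not the committed code):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/cons.h>
    #include <sys/sysctl.h>

    static int db_use_printf = 0;
    SYSCTL_INT(_debug, OID_AUTO, ddb_use_printf, CTLFLAG_RW,
        &db_use_printf, 0, "send ddb output through printf(9)");

    static void
    db_putchar(int c)
    {
            if (db_use_printf)
                    printf("%c", c);        /* console + message buffer */
            else
                    cnputc(c);              /* console only */
    }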
Thanks to Ian Dowse for the work; minor edits and any bugs are my own.
Submitted by: iedowse