we update the registers. That way we don't have any dirty registers to
worry about and also know that bsp=bspstore, which makes updating the
RSE-related registers predictable.
This is not the end of it. We need more validity checks, but for now
this allows us to complete the gdb testsuite without crashing the
kernel.
if_start routines cannot currently be entered without Giant. When
the kernel is running with debug.mpsafenet != 0, this will defer
if_start execution to a task queue thread holding Giant, which may
introduce additional latency, but avoid incorrect execution.
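A rough sketch of that deferral, assuming taskqueue_swi_giant as the
Giant-holding queue; the task variable and wrapper names below are
illustrative, not necessarily what the committed code uses:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/socket.h>
#include <sys/taskqueue.h>
#include <net/if.h>
#include <net/if_var.h>

/* Illustrative: real code would keep the task per-ifnet and initialize
 * it once at interface attach time. */
static struct task if_start_task;

static void
if_start_deferred(void *context, int pending)
{
	struct ifnet *ifp = context;

	/* Runs in a taskqueue context that holds Giant. */
	(*ifp->if_start)(ifp);
}

static void
if_start_giant(struct ifnet *ifp)
{
	TASK_INIT(&if_start_task, 0, if_start_deferred, ifp);
	taskqueue_enqueue(taskqueue_swi_giant, &if_start_task);
}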
Suggested by: dfr
full, avoiding the cost of mutex operations if it is. We re-test
once the mutex is acquired to make sure it's still true before doing
the -modify-write part of the read-modify-write. Note that due to
the maximum fifo depth being pretty deep, this is unlikely to improve
harvesting performance yet.
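A minimal sketch of the unlocked-test/locked-retest pattern described
above; the fifo structure, field and function names are stand-ins, not
the identifiers used by the harvesting code:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

struct ex_fifo {
	struct mtx	lock;		/* protects count and the entries */
	u_int		count;
	u_int		max;
};

static void
ex_fifo_add(struct ex_fifo *f, void (*store)(struct ex_fifo *))
{
	/* Cheap unlocked read: if the fifo already looks full, skip the
	 * mutex entirely. */
	if (f->count >= f->max)
		return;
	mtx_lock(&f->lock);
	/* Re-test under the lock before doing the -modify-write half. */
	if (f->count < f->max) {
		store(f);
		f->count++;
	}
	mtx_unlock(&f->lock);
}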
Approved by: markm
to allow dumping per-thread machine-specific notes. On ia64 we use this
function to flush the dirty registers onto the backing store before we
write out the PRSTATUS notes.
Tested on: alpha, amd64, i386, ia64 & sparc64
Not tested on: arm, powerpc
a standard configuration similar to [NO_]ADAPTIVE_MUTEXES. This
feature causes Giant to be included in the set of mutexes adaptively
spun on. It appears to have a positive effect on SMP performance across
several workloads, including a measured 16% improvement on buildworld, a
30%+ improvement for MySQL under the supersmack benchmark with Giant over
the network stack, and a 6% improvement without Giant on the network stack
(as a result of less Giant contention).
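Enabling the feature should amount to one kernel configuration line; the
option spelling ADAPTIVE_GIANT is assumed here from the
[NO_]ADAPTIVE_MUTEXES parallel above:

options 	ADAPTIVE_GIANT		# adaptively spin on Giant as well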
we may sleep when doing so. Allocate into a local pointer first, then
check that we didn't race with another thread allocating storage for the
vnode, and only update the vnode pointer if it is still NULL. Otherwise,
accept that another thread got there first and release the local storage.
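A minimal sketch of the pattern with hypothetical names (foo_softc,
foo_mtx, M_FOODATA); the committed code uses its own storage field and
lock:

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/vnode.h>

static MALLOC_DEFINE(M_FOODATA, "foodata", "per-vnode data (example)");

struct foo_softc {
	int	dummy;
};

static struct mtx foo_mtx;	/* assumed initialized elsewhere with mtx_init() */

static void
foo_alloc_data(struct vnode *vp)
{
	struct foo_softc *sc;

	/* The allocation may sleep, so do it without holding the lock. */
	sc = malloc(sizeof(*sc), M_FOODATA, M_WAITOK | M_ZERO);

	mtx_lock(&foo_mtx);
	if (vp->v_data == NULL) {
		vp->v_data = sc;	/* we won the race */
		sc = NULL;
	}
	mtx_unlock(&foo_mtx);

	/* Another thread got there first: release the local storage. */
	if (sc != NULL)
		free(sc, M_FOODATA);
}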
Discussed with: jmg
Implement the protection check required by the pmap_extract_and_hold()
specification.
Remove the acquisition and release of Giant from pmap_extract_and_hold() and
pmap_protect().
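A sketch of just the protection check; pte_lookup() and friends are
hypothetical helpers standing in for the real page-table walk, and the
pmap locking that replaces Giant is elided:

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <vm/vm_page.h>

typedef uint64_t	pte_t;			/* illustrative PTE type */
extern pte_t		pte_lookup(pmap_t, vm_offset_t);
extern int		pte_valid(pte_t);
extern int		pte_writable(pte_t);
extern vm_page_t	pte_page(pte_t);

/*
 * Return a held page only when the mapping at 'va' permits the access
 * described by 'prot'; for a write request the mapping must be writable.
 */
vm_page_t
pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
{
	pte_t pte;
	vm_page_t m;

	m = NULL;
	pte = pte_lookup(pmap, va);
	if (pte_valid(pte) &&
	    ((prot & VM_PROT_WRITE) == 0 || pte_writable(pte))) {
		m = pte_page(pte);
		vm_page_lock_queues();
		vm_page_hold(m);
		vm_page_unlock_queues();
	}
	return (m);
}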
Many thanks to Ken Smith for resolving a sparc64-specific initialization
problem in my original patch.
Tested by: kensmith@
bio_driver1 (as all the rest).
This introduced a small memory leak, but it wasn't really critical,
because the maximum memory for g_stripe_zone is always set, so after a few
requests gstripe was working in "economic" mode.
contents of /usr/src/rescue. Until now, the files were shipped with
releases but sysinstall would ignore them (resulting in a non-buildable
source tree).
Sanity checked by: jhb
are reserved: they can be written with anything, but they always read
as zero. We should simulate this in set_regs(), as if we were reading and
writing the real hardware %rflags register.
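A sketch of the masking this implies; RFLAGS_DEFINED is a hypothetical
mask of the architecturally defined %rflags bits (the value is
illustrative, and the real set_regs() change may express the check
differently):

#include <sys/types.h>

/* CF, PF, AF, ZF, SF, TF, IF, DF, OF, IOPL, NT, RF, VM, AC, VIF, VIP, ID;
 * bit 1, which the hardware reads back as 1, is ignored here for
 * simplicity. */
#define	RFLAGS_DEFINED	0x00000000003f7fd5UL

static __inline uint64_t
rflags_sanitize(uint64_t rflags)
{
	/* Accept whatever was written into the reserved bits, but make
	 * them read back as zero, just as the hardware does. */
	return (rflags & RFLAGS_DEFINED);
}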
contributed to the transferable load count. This prevents any potential
problems with sched_pin() being used around calls to setrunqueue().
- Change the sched_add() load balancing algorithm to try to migrate on
wakeup. This attempts to place threads that communicate with each other
on the same CPU.
- Don't clear the idle counts in kseq_transfer(), let the cpus do that when
they call sched_add() from kseq_assign().
- Correct a few out of date comments.
- Make sure the ke_cpu field is correct when we preempt.
- Call kseq_assign() from sched_clock() to catch any assignments that were
done without IPI. Presently all assignments are done with an IPI, but I'm
trying a patch that limits that.
- Don't migrate a thread if it is still runnable in sched_add(). Previously,
this could only happen for KSE threads, but due to changes to
sched_switch() all threads went through this path.
- Remove some code that was added with preemption but is not necessary.