85f5b24573
In such cases, the busying of the page and the unlocking of the containing object by vm_map_pmap_enter() and vm_fault_prefault() is unnecessary overhead. To eliminate this overhead, this change modifies pmap_enter_quick() so that it expects the object to be locked on entry, and it assumes the responsibility for busying the page and unlocking the object if it must sleep. Note: alpha, amd64, i386, and ia64 are the only implementations optimized by this change; arm, powerpc, and sparc64 still conservatively busy the page and unlock the object within every pmap_enter_quick() call. Additionally, this change is the first case where we synchronize access to the page's PG_BUSY flag and busy field using the containing object's lock rather than the global page queues lock. (Modifications to the page's PG_BUSY flag and busy field have asserted both locks for several weeks, enabling an incremental transition.)
Files in this directory:

apic_vector.s
atomic.c
autoconf.c
bios.c
bioscall.s
busdma_machdep.c
critical.c
db_disasm.c
db_interface.c
db_trace.c
dump_machdep.c
elan-mmcr.c
elf_machdep.c
exception.s
gdb_machdep.c
genassym.c
geode.c
i686_mem.c
identcpu.c
in_cksum.c
initcpu.c
intr_machdep.c
io_apic.c
io.c
k6_mem.c
legacy.c
local_apic.c
locore.s
longrun.c
machdep.c
mem.c
mp_clock.c
mp_machdep.c
mp_watchdog.c
mpboot.s
mptable_pci.c
mptable.c
nexus.c
p4tcc.c
perfmon.c
pmap.c
support.s
swtch.s
symbols.raw
sys_machdep.c
trap.c
tsc.c
uio_machdep.c
vm86.c
vm86bios.s
vm_machdep.c