85f5b24573
In such cases, the busying of the page and the unlocking of the containing object by vm_map_pmap_enter() and vm_fault_prefault() is unnecessary overhead. To eliminate this overhead, this change modifies pmap_enter_quick() so that it expects the object to be locked on entry and it assumes the responsibility for busying the page and unlocking the object if it must sleep.

Note: alpha, amd64, i386, and ia64 are the only implementations optimized by this change; arm, powerpc, and sparc64 still conservatively busy the page and unlock the object within every pmap_enter_quick() call.

Additionally, this change is the first case where we synchronize access to the page's PG_BUSY flag and busy field using the containing object's lock rather than the global page queues lock. (Modifications to the page's PG_BUSY flag and busy field have asserted both locks for several weeks, enabling an incremental transition.)