5d1ae027f0
Modify UMA to use critical sections to protect per-CPU caches, rather than mutexes, which offers lower overhead on both UP and SMP. When allocating from or freeing to the per-CPU cache, without INVARIANTS enabled, we now no longer perform any mutex operations, which offers a 1%-3% performance improvement in a variety of micro-benchmarks. We rely on critical sections to prevent (a) preemption resulting in reentrant access to UMA on a single CPU, and (b) migration of the thread during access.

In the event we need to go back to the zone for a new bucket, we release the critical section to acquire the global zone mutex, and must re-acquire the critical section and re-evaluate which cache we are accessing in case migration has occurred, or circumstances have changed in the current cache.

Per-CPU cache statistics are now gathered lock-free by the sysctl, which can result in small races in statistics reporting for caches.

Reviewed by:	bmilekic, jeff (somewhat)
Tested by:	rwatson, kris, gnn, scottl, mike at sentex dot net, others
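To make the protocol concrete, the following is a minimal, hypothetical sketch of an allocation path written in this style. It is not the committed uma_core.c code: the structure fields (uz_cpu, uc_allocbucket, uc_allocs, ub_cnt, ub_bucket, uz_full_bucket) and the ZONE_LOCK()/ZONE_UNLOCK() macros only approximate the UMA data structures of this era, and INVARIANTS checks, the free path, statistics export, and the slab-layer fallback are all omitted.

```c
/*
 * Hypothetical sketch of the per-CPU allocation fast path described in the
 * commit message above; NOT the committed uma_core.c code.  Field and macro
 * names approximate the UMA structures of this era.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/queue.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/pcpu.h>

#include <vm/uma.h>
#include <vm/uma_int.h>

static void *
zalloc_fastpath_sketch(uma_zone_t zone)
{
	uma_cache_t cache;
	uma_bucket_t bucket;
	void *item;

	for (;;) {
		/*
		 * The critical section prevents preemption (and thus
		 * reentrant access to this CPU's cache) and migration to
		 * another CPU, so the fast path needs no mutex at all.
		 */
		critical_enter();
		cache = &zone->uz_cpu[curcpu];
		bucket = cache->uc_allocbucket;
		if (bucket != NULL && bucket->ub_cnt > 0) {
			item = bucket->ub_bucket[--bucket->ub_cnt];
			bucket->ub_bucket[bucket->ub_cnt] = NULL;
			cache->uc_allocs++;	/* lock-free statistic */
			critical_exit();
			return (item);
		}

		/*
		 * The cache is empty.  The global zone mutex can block, so
		 * it must not be acquired inside the critical section; drop
		 * the critical section first, then take the zone lock and
		 * re-enter the critical section to re-evaluate which
		 * per-CPU cache we are now using, since the thread may have
		 * migrated while waiting for the mutex.
		 */
		critical_exit();
		ZONE_LOCK(zone);
		critical_enter();
		cache = &zone->uz_cpu[curcpu];
		if (cache->uc_allocbucket == NULL) {
			bucket = LIST_FIRST(&zone->uz_full_bucket);
			if (bucket == NULL) {
				/* Slab-layer fallback elided in this sketch. */
				critical_exit();
				ZONE_UNLOCK(zone);
				return (NULL);
			}
			LIST_REMOVE(bucket, ub_link);
			cache->uc_allocbucket = bucket;
		}
		critical_exit();
		ZONE_UNLOCK(zone);
		/* Retry the fast path with the (possibly refilled) cache. */
	}
}
```

The per-CPU uc_allocs counter bumped on the fast path stands in for the statistics that the sysctl handler now reads without taking any lock, which is why reported totals can be momentarily inconsistent; re-evaluating the cache pointer after reacquiring the critical section is what makes migration harmless, since the worst case is that a refilled bucket lands on a different CPU's cache than the one that started the allocation.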
default_pager.c
device_pager.c
memguard.c
memguard.h
phys_pager.c
pmap.h
swap_pager.c
swap_pager.h
uma_core.c
uma_dbg.c
uma_dbg.h
uma_int.h
uma.h
vm_contig.c
vm_extern.h
vm_fault.c
vm_glue.c
vm_init.c
vm_kern.c
vm_kern.h
vm_map.c
vm_map.h
vm_meter.c
vm_mmap.c
vm_object.c
vm_object.h
vm_page.c
vm_page.h
vm_pageout.c
vm_pageout.h
vm_pageq.c
vm_pager.c
vm_pager.h
vm_param.h
vm_unix.c
vm_zeroidle.c
vm.h
vnode_pager.c
vnode_pager.h