77b7809eb0
- When deciding between small and large zone initialization, compare the
  zone element size (+1 for the byte of linkage) against
  UMA_SLAB_SIZE - sizeof(struct uma_slab), and not just UMA_SLAB_SIZE.
  Add a KASSERT in zone_small_init to make sure that the computed ipers
  (items per slab) for the zone is not zero, despite the addition of the
  check, just to be sure (this part submitted by: silby).

- UMA_ZONE_VM used to imply BUCKETCACHE.  Now it implies CACHEONLY
  instead.  CACHEONLY is like BUCKETCACHE in the case of bucket
  allocations, but in addition also ensures that we don't set up the
  zone with OFFPAGE slab headers allocated from the slabzone.  This
  means that we're not allowed to have a UMA_ZONE_VM zone initialized
  for large items (zone_large_init), because that would require the slab
  headers to be allocated from the slabzone, and hence from kmem_map.
  Some of the zones init'd with UMA_ZONE_VM are so init'd before
  kmem_map is suballoc'd from kernel_map, which is why this change is
  necessary.
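To make the sizing rule in the first item concrete, here is a minimal
userland sketch, not the actual uma_core.c code: UMA_SLAB_SIZE is assumed
to be a 4K page, struct uma_slab is a hypothetical stand-in with just
enough fields to give sizeof() a plausible value, and assert() takes the
place of the kernel's KASSERT.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define UMA_SLAB_SIZE 4096	/* assumption: one 4K page per slab */

/* Hypothetical stand-in for the real slab header in uma_int.h. */
struct uma_slab {
	void		*us_link;	/* linkage */
	int		 us_freecount;	/* free items on this slab */
	unsigned char	 us_freelist[1];
};

/*
 * Small vs. large init: the item (+1 byte of linkage) must fit in what
 * remains of the slab after the embedded slab header, i.e. compare
 * against UMA_SLAB_SIZE - sizeof(struct uma_slab), not UMA_SLAB_SIZE.
 */
static int
fits_small_init(size_t size)
{
	return (size + 1 <= UMA_SLAB_SIZE - sizeof(struct uma_slab));
}

/*
 * Items per slab for a "small" zone; the assert mirrors the KASSERT
 * added to zone_small_init.
 */
static int
ipers_small(size_t size)
{
	int ipers;

	ipers = (UMA_SLAB_SIZE - sizeof(struct uma_slab)) / (size + 1);
	assert(ipers != 0 && "zone_small_init: ipers is 0");
	return (ipers);
}

int
main(void)
{
	size_t sz = 256;

	if (fits_small_init(sz))
		printf("size %zu: small init, ipers = %d\n",
		    sz, ipers_small(sz));
	else
		printf("size %zu: large init (OFFPAGE slab header)\n", sz);
	return (0);
}
```

For an item size near the slab size, fits_small_init() fails and the zone
would take the zone_large_init path with OFFPAGE headers, which, per the
second item above, is exactly what CACHEONLY forbids for UMA_ZONE_VM
zones.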
default_pager.c
device_pager.c
phys_pager.c
pmap.h
swap_pager.c
swap_pager.h
uma_core.c
uma_dbg.c
uma_dbg.h
uma_int.h
uma.h
vm_contig.c
vm_extern.h
vm_fault.c
vm_glue.c
vm_init.c
vm_kern.c
vm_kern.h
vm_map.c
vm_map.h
vm_meter.c
vm_mmap.c
vm_object.c
vm_object.h
vm_page.c
vm_page.h
vm_pageout.c
vm_pageout.h
vm_pageq.c
vm_pager.c
vm_pager.h
vm_param.h
vm_unix.c
vm_zeroidle.c
vm.h
vnode_pager.c
vnode_pager.h