Make the slabrefzone, the zone from which we allocate slabs with
internal reference counters, UMA_ZONE_NOFREE.  This way, those slabs
(with their ref counts) will be effectively type-stable, so using a
trick like this on the refcount is no longer dangerous:

	MEXT_REM_REF(m);
	if (atomic_cmpset_int(m->m_ext.ref_cnt, 0, 1)) {
		if (m->m_ext.ext_type == EXT_PACKET) {
			uma_zfree(zone_pack, m);
			return;
		} else if (m->m_ext.ext_type == EXT_CLUSTER) {
			uma_zfree(zone_clust, m->m_ext.ext_buf);
			m->m_ext.ext_buf = NULL;
		} else {
			(*(m->m_ext.ext_free))(m->m_ext.ext_buf,
			    m->m_ext.ext_args);
			if (m->m_ext.ext_type != EXT_EXTREF)
				free(m->m_ext.ref_cnt, M_MBUF);
		}
	}
	uma_zfree(zone_mbuf, m);

Previously, a second thread hitting the above cmpset might read the
refcnt AFTER it had already been freed.  A very rare occurrence, but
now we know the memory will never be freed out from under it.

Spotted by:	julian, pjd
commit e66468ea7a
parent daac3fffc8
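As background, here is a minimal sketch of the pattern the message
describes and why it needs a type-stable counter.  The names
(struct obj, obj_rele(), obj_destroy()) are hypothetical, not code
from this commit; atomic_subtract_int() and atomic_cmpset_int() are
the real machine/atomic.h primitives:

#include <sys/types.h>
#include <machine/atomic.h>

struct obj {
	volatile u_int	*ref_cnt;	/* lives in a slabrefzone slab */
	/* ... payload ... */
};

void	obj_destroy(struct obj *o);	/* frees o and its buffers */

void
obj_rele(struct obj *o)
{
	/* Drop our reference (the analogue of MEXT_REM_REF()). */
	atomic_subtract_int(o->ref_cnt, 1);

	/*
	 * Exactly one thread can move the counter from 0 to 1 and so
	 * win the right to destroy the object.  A losing thread still
	 * has to read *o->ref_cnt here, possibly after the winner has
	 * already freed the slab holding the counter.  With the
	 * counter zone marked UMA_ZONE_NOFREE, that slab is never
	 * returned to the VM, so the stale read hits mapped,
	 * type-stable memory rather than freed memory.
	 */
	if (atomic_cmpset_int(o->ref_cnt, 0, 1))
		obj_destroy(o);
}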
@@ -1473,7 +1473,8 @@ uma_startup(void *bootmem)
 	slabrefzone = uma_zcreate("UMA RCntSlabs",
 	    slabsize,
 	    NULL, NULL, NULL, NULL,
-	    UMA_ALIGN_PTR, UMA_ZFLAG_INTERNAL);
+	    UMA_ALIGN_PTR,
+	    UMA_ZFLAG_INTERNAL|UMA_ZONE_NOFREE);
 
 	hashzone = uma_zcreate("UMA Hash",
 	    sizeof(struct slabhead *) * UMA_HASH_SIZE_INIT,
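For reference, a self-contained sketch of creating such a zone.  The
struct ext_refcnt type and refcnt_zone_init() hook are hypothetical;
uma_zcreate() and the flag are the real UMA interface used in the
change above:

#include <sys/types.h>
#include <vm/uma.h>

/* Hypothetical counter object whose memory must stay type-stable. */
struct ext_refcnt {
	volatile u_int	rc_count;
};

static uma_zone_t refcnt_zone;

/* Called once during subsystem initialization (hypothetical hook). */
static void
refcnt_zone_init(void)
{
	/*
	 * UMA_ZONE_NOFREE: UMA may cache and reuse freed items, but
	 * it never returns the backing slabs to the VM, so pointers
	 * into a slab stay valid to dereference for the lifetime of
	 * the system.
	 */
	refcnt_zone = uma_zcreate("ext refcnt",	/* zone name */
	    sizeof(struct ext_refcnt),		/* item size */
	    NULL, NULL, NULL, NULL,		/* ctor, dtor, init, fini */
	    UMA_ALIGN_PTR,			/* pointer alignment */
	    UMA_ZONE_NOFREE);			/* never free slabs */
}

The tradeoff is that the zone's peak memory use becomes permanent:
slabs are cached for reuse but never reclaimed.  That is tolerable
here because refcount slabs are small and heavily recycled.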