cache: ensure that the number of bucket locks does not exceed hash size

The hash size can be changed as a side effect of modifying kern.maxvnodes.

Since numbucketlocks was not adjusted to match, setting a sufficiently low
value would leave more locks than actual buckets, which would then lead to
corruption.
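
To illustrate the failure mode, here is a minimal userland sketch. It assumes,
as a simplification, that a name's bucket is chosen by hash & nchash and its
lock by hash % numbucketlocks; the real mappings live in the kernel's name
cache code (vfs_cache.c) and may differ in detail. With more locks than
buckets, two hashes that collide on a bucket can pick different locks, letting
two threads modify the same chain concurrently.

/* Hypothetical, simplified mappings; not the kernel's actual macros. */
#include <stdio.h>

int
main(void)
{
	unsigned nchash = 255;            /* hash mask after shrinking: 256 buckets */
	unsigned numbucketlocks = 1024;   /* stale lock count, never reduced */
	unsigned h1 = 0x00000042, h2 = 0x00000142;

	/* Both hashes land in the same bucket chain... */
	printf("bucket: %u vs %u\n", h1 & nchash, h2 & nchash);
	/* ...but pick different locks, so the chain can be modified concurrently. */
	printf("lock:   %u vs %u\n", h1 % numbucketlocks, h2 % numbucketlocks);
	return (0);
}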

Force the number of buckets to be no smaller than the number of bucket locks.

Note this should not matter for real-world cases.

Reported and tested by:	pho

Author:	Mateusz Guzik
Date:	2016-11-23 19:50:12 +00:00
parent	c85d716664
commit	8b0e0c91e0

@@ -1780,6 +1780,8 @@ nchinit(void *dummy __unused)
 	nchashtbl = hashinit(desiredvnodes * 2, M_VFSCACHE, &nchash);
 	numbucketlocks = cache_roundup_2(mp_ncpus * 64);
+	if (numbucketlocks > nchash + 1)
+		numbucketlocks = nchash + 1;
 	bucketlocks = malloc(sizeof(*bucketlocks) * numbucketlocks, M_VFSCACHE,
 	    M_WAITOK | M_ZERO);
 	for (i = 0; i < numbucketlocks; i++)
@@ -1828,7 +1830,11 @@ cache_changesize(int newmaxvnodes)
 	uint32_t hash;
 	int i;
 
-	new_nchashtbl = hashinit(newmaxvnodes * 2, M_VFSCACHE, &new_nchash);
+	newmaxvnodes = cache_roundup_2(newmaxvnodes * 2);
+	if (newmaxvnodes < numbucketlocks)
+		newmaxvnodes = numbucketlocks;
+
+	new_nchashtbl = hashinit(newmaxvnodes, M_VFSCACHE, &new_nchash);
 	/* If same hash table size, nothing to do */
 	if (nchash == new_nchash) {
 		free(new_nchashtbl, M_VFSCACHE);
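
As a sanity check on the clamping above, a hedged userland sketch follows. It
assumes that cache_roundup_2() rounds its argument up to the next power of two
and that hashinit() produces a power-of-two table no larger than its size
argument; both are assumptions here, not quotes of the kernel code. Under those
assumptions, the clamp keeps the bucket count at or above numbucketlocks for
any newmaxvnodes.

#include <assert.h>
#include <stdint.h>

/* Stand-in for cache_roundup_2(): next power of two >= val (assumption). */
static uint32_t
roundup_2(uint32_t val)
{
	uint32_t res;

	for (res = 1; res < val; res <<= 1)
		continue;
	return (res);
}

/* Largest power of two <= n, mimicking how hashinit() is assumed to size tables. */
static uint32_t
table_size(uint32_t n)
{
	uint32_t res;

	for (res = 1; res * 2 <= n; res <<= 1)
		continue;
	return (res);
}

int
main(void)
{
	uint32_t numbucketlocks = 2048;	/* e.g. 32 CPUs * 64, already a power of two */
	uint32_t newmaxvnodes, n;

	for (newmaxvnodes = 1; newmaxvnodes < 100000; newmaxvnodes += 777) {
		n = roundup_2(newmaxvnodes * 2);
		if (n < numbucketlocks)
			n = numbucketlocks;
		/* The resulting bucket count never drops below the lock count. */
		assert(numbucketlocks <= table_size(n));
	}
	return (0);
}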