locks: tweak backoff a little bit

Previous limits were chosen when locking primitives had spurious lock
accesses.

Flipping the starting point to 1 (or rather 2, as the first call shifts it)
provides a modest win under mild contention while not hurting the more
contended cases. Tested on a bunch of one-, two- and four-socket systems,
old and new (Westmere, Skylake, Threadripper and others), by doing
concurrent page faults, buildkernel/buildworld and other workloads
(although not all systems got all the tests).
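
For context, a minimal sketch of the exponential backoff being tuned here,
loosely paraphrased from the kernel's lock_delay() of the era (struct and
function names follow the kernel's; the body is illustrative, not the
verbatim source). The stored delay doubles on every call before spinning,
which is why a base of 1 yields an effective first-round delay of 2:

	struct lock_delay_config {
		u_int	base;	/* starting delay, in cpu_spinwait() rounds */
		u_int	max;	/* cap on the per-call delay */
	};

	struct lock_delay_arg {
		struct lock_delay_config *config;
		u_int	delay;		/* initialized to lc->base */
		u_int	spin_cnt;
	};

	static void
	lock_delay_sketch(struct lock_delay_arg *la)
	{
		struct lock_delay_config *lc = la->config;
		u_int i;

		/* Double first; with base == 1 the first call spins twice. */
		la->delay <<= 1;
		if (la->delay > lc->max)
			la->delay = lc->max;

		/* Busy-wait without hammering the lock's cache line. */
		for (i = la->delay; i > 0; i--)
			cpu_spinwait();	/* PAUSE-style CPU hint */

		la->spin_cnt += la->delay;
	}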

Another thing is the upper limit. It is semi-arbitrarily chosen, as the
previously computed value was getting out of hand even on moderately sized
systems (e.g. a 128-thread one).
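
To put numbers on the 128-thread example, a standalone back-of-the-envelope
check using the formulas from the diff below (roundup_pow2() is a stand-in
for the kernel's lock_roundup_2(); the program is illustrative only):

	#include <stdio.h>

	/* Round up to the next power of two, like lock_roundup_2(). */
	static unsigned
	roundup_pow2(unsigned n)
	{
		unsigned p;

		for (p = 1; p < n; p <<= 1)
			;
		return (p);
	}

	int
	main(void)
	{
		unsigned ncpus = 128;
		unsigned old_base, old_max, new_max;

		old_base = roundup_pow2(ncpus) / 4;	/* 32 */
		old_max = old_base * 1024;		/* 32768 */

		new_max = roundup_pow2(ncpus) * 256;	/* 32768... */
		if (new_max > 32678)
			new_max = 32678;		/* ...capped */

		printf("old: base=%u max=%u\n", old_base, old_max);
		printf("new: base=%u max=%u\n", 1U, new_max);
		return (0);
	}

On such a box the dominant change is thus the starting point dropping from
32 to 1; the cap mostly matters on even bigger systems (at 256 threads the
old formula would have allowed a max of 65536).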

Note that backoff is fundamentally a speculative bandaid; this change just
makes it fit a little better. It remains completely oblivious to the
hardware topology and to the contention pattern. This area is still being
experimented with.
Author: mjg
Date:   2018-04-08 16:34:10 +0000
Parent: a8ff05273f
Commit: 8d689b8e84

@@ -156,8 +156,10 @@
 void
 lock_delay_default_init(struct lock_delay_config *lc)
 {
-	lc->base = lock_roundup_2(mp_ncpus) / 4;
-	lc->max = lc->base * 1024;
+	lc->base = 1;
+	lc->max = lock_roundup_2(mp_ncpus) * 256;
+	if (lc->max > 32678)
+		lc->max = 32678;
 }
 
 #ifdef DDB