308 Commits

Author SHA1 Message Date
delphij
0a9ff24bbc INVARIANTS: treat LA_LOCKED the same as LA_XLOCKED in mtx_assert.
The Linux lockdep API assumes LA_LOCKED semantics in lockdep_assert_held(),
meaning that either a shared lock or a write lock is OK.  On the other hand,
the timeout code uses lc_assert() with LA_XLOCKED, and we need both to
work.

For mutexes, because they cannot be shared (this is unique among all lock
classes, and it is unlikely that we would add a new lock class anytime soon),
it is easier to simply extend mtx_assert to handle LA_LOCKED there, even
though the change itself can be viewed as a slight abstraction violation.

Reviewed by:	mjg, cem, jhb
MFC after:	1 month
Differential Revision:	https://reviews.freebsd.org/D21362
2019-08-23 06:39:40 +00:00
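
A minimal sketch of the idea, assuming INVARIANTS and the standard
mtx_owned()/KASSERT machinery; this is illustrative, not the committed diff:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    /*
     * Sketch: a mutex can never be held shared, so LA_LOCKED and
     * LA_XLOCKED collapse into the same ownership check.
     */
    static void
    mtx_assert_sketch(const struct mtx *m, int what)
    {
        switch (what) {
        case LA_LOCKED:         /* shared or exclusive would do... */
        case LA_XLOCKED:        /* ...but a mutex is always exclusive */
            KASSERT(mtx_owned(m), ("mutex %s not owned",
                m->lock_object.lo_name));
            break;
        case LA_UNLOCKED:
            KASSERT(!mtx_owned(m), ("mutex %s owned",
                m->lock_object.lo_name));
            break;
        }
    }
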
mjg
da67d6603f locks: plug warnings about uninitialized variables
They only showed up after I redefined LOCKSTAT_ENABLED to 0.

doing_lockprof in mutex.c is a real (but harmless) bug. Should the
value be non-zero, the code will perform checks for lock profiling which
would otherwise be skipped.

state in rwlock.c is a compiler wart; the value can't be used if lock
profiling is not enabled.

Sponsored by:	The FreeBSD Foundation
2018-11-13 21:29:56 +00:00
jhb
81a93c8824 Add a KPI for the delay while spinning on a spin lock.
Replace a call to DELAY(1) with a new cpu_lock_delay() KPI.  Currently
cpu_lock_delay() is defined to DELAY(1) on all platforms.  However,
platforms with a DELAY() implementation that uses spin locks should
implement a custom cpu_lock_delay() that doesn't use locks.

Reviewed by:	kib
MFC after:	3 days
2018-11-05 21:34:17 +00:00
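
A sketch of the two shapes this KPI can take; only one would be in effect on
a given platform, and raw_timer_read()/RAW_TICKS_PER_US are made-up stand-ins
for an MD lock-free time source:

    /* MI default, per the commit: */
    #define cpu_lock_delay()    DELAY(1)

    /*
     * Hypothetical MD override for a platform whose DELAY() takes a
     * spin lock: poll a raw counter directly instead.
     */
    void
    cpu_lock_delay(void)
    {
        uint64_t end = raw_timer_read() + RAW_TICKS_PER_US;

        while (raw_timer_read() < end)
            cpu_spinwait();
    }
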
mjg
cdca29b9c6 Remove an unused argument to turnstile_unpend.
PR:	228694
Submitted by:	Julian Pszczołowski <julian.pszczolowski@gmail.com>
2018-06-02 22:37:53 +00:00
markj
b6855b6d9d Drop KTR_CONTENTION.
It is incomplete, has not been adopted in the other locking primitives,
and we have other means of measuring lock contention (lock_profiling,
lockstat, KTR_LOCK). Drop it to slightly de-clutter the mutex code and
free up a precious KTR class index.

Reviewed by:	jhb, mjg
MFC after:	1 week
Differential Revision:	https://reviews.freebsd.org/D14771
2018-03-20 15:51:05 +00:00
mjg
bce760d1e8 locks: slightly depessimize lockstat
The slow path is always taken when lockstat is enabled. This induces
rdtsc (or other) calls to get the cycle count even when there was no
contention.

Still go to the slow path to not mess with the fast path, but avoid
the heavy lifting unless necessary.

This reduces sys and real time during -j 80 buildkernel:
before: 3651.84s user 1105.59s system 5394% cpu 1:28.18 total
after: 3685.99s user 975.74s system 5450% cpu 1:25.53 total
disabled: 3697.96s user 411.13s system 5261% cpu 1:18.10 total

So note this is still a significant hit.

LOCK_PROFILING results are not affected.
2018-03-17 19:26:33 +00:00
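
Sketched below is the lazy-timestamp idea; lockstat_nsecs(),
_mtx_obtain_lock_fetch() and the adaptive__spin probe are assumed, and the
real wait logic is abbreviated to cpu_spinwait():

    /*
     * Sketch: read the time counter only once the first acquire
     * attempt has failed, so uncontended acquisitions skip it even
     * with lockstat enabled.
     */
    static void
    mtx_lock_lazy_lockstat_sketch(struct mtx *m, uintptr_t tid)
    {
        uint64_t spin_start = 0;
        uintptr_t v = MTX_UNOWNED;

        while (!_mtx_obtain_lock_fetch(m, &v, tid)) {
            if (spin_start == 0)
                spin_start = lockstat_nsecs(&m->lock_object);
            cpu_spinwait();     /* stand-in for the real wait logic */
            v = MTX_UNOWNED;
        }
        if (spin_start != 0)
            LOCKSTAT_RECORD1(adaptive__spin, m,
                lockstat_nsecs(&m->lock_object) - spin_start);
    }
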
mjg
11dd702fc8 mtx: tidy up recursion handling in thread lock
Normally, after grabbing the lock it has to be verified that we got the right
one to begin with. However, if we are recursing, the lock must not have
changed, so the check can be avoided. In particular this avoids a lock read in
the non-recursing case when it turns out the lock was changed.

While here, avoid an irq trip if this happens.

Tested by:	pho (previous version)
2018-03-04 22:01:23 +00:00
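
Roughly, the short-circuit looks like this (a sketch, not the committed diff;
v is the lock word observed by the failed fast-path attempt):

    if (v == tid) {
        /*
         * Recursing: we already own the lock, so td->td_lock cannot
         * have been repointed; skip the re-check and its extra read.
         */
        m->mtx_recurse++;
        return;
    }
    /*
     * Not recursing: after acquiring m, verify td->td_lock still
     * points at m; if it moved, drop m and retry on the new lock.
     */
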
mjg
428cfb036c mtx: add debug assertions to mtx_spin_wait_unlocked 2018-02-20 20:39:34 +00:00
mjg
2fd1c912f6 mtx: add mtx_spin_wait_unlocked
The primitive can be used to wait for the lock to be released. Intended
usage is for locks in structures which are about to be freed.

The benefit is the avoided interrupt enable/disable trip + atomic op to
grab the lock, and a shorter wait if the lock is held (since there is no
worry that someone will contend on the lock, re-reads can be more aggressive).

Briefly discussed with:	 kib
2018-02-19 00:38:14 +00:00
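
The primitive might look roughly like the following sketch; the committed
version also gained debug assertions in a follow-up (see the entry above)
and uses smarter wait pacing:

    void
    mtx_spin_wait_unlocked(struct mtx *m)
    {
        /* Plain read loop: no interrupt disable, no atomic op. */
        while (atomic_load_acq_ptr(&m->mtx_lock) != MTX_UNOWNED)
            cpu_spinwait();
    }
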
mjg
ddfd5797d3 mtx: use fcmpset to cover setting MTX_CONTESTED 2018-01-12 13:40:50 +00:00
mjg
c7b2a94e2a mtx: deduplicate indefinite wait check in spinlocks and thread lock 2017-12-31 00:34:29 +00:00
mjg
a59af230d8 mtx: pre-read the lock value in thread_lock_flags_
Since this function is effectively a slow path, if we get here the lock is
most likely already taken, in which case it is cheaper to not blindly attempt
the atomic op.

While here, move the hwpmc probe out of the loop to match other primitives.
2017-12-31 00:33:28 +00:00
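
In other words (sketch; MTX_READ_VALUE() is assumed to be a plain read of
the lock word):

    v = MTX_READ_VALUE(m);
    if (v == MTX_UNOWNED && _mtx_obtain_lock_fetch(m, &v, tid))
        return;     /* fast acquire, no wasted atomic on a held lock */
    /* otherwise fall through to the spin/sleep loop */
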
pfg
cc22a86800 sys/kern: adoption of SPDX licensing ID tags.
Mainly focus on files that use the BSD 2-Clause license; however, the tool I
was using misidentified many licenses, so this was mostly a manual,
error-prone task.

The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well known
opensource licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
2017-11-27 15:20:12 +00:00
mjg
16d5d58622 Add the missing lockstat check for thread lock. 2017-11-25 20:49:27 +00:00
mjg
c8a2652582 locks: pass the found lock value to unlock slow path
This avoids an explicit read later.

While here whack the cheaply obtainable 'tid' argument.
2017-11-22 22:04:04 +00:00
mjg
b1c2309fd4 locks: remove the file + line argument from internal primitives when not used
The pair is of use only in debug or LOCKPROF kernels, but was passed (zeroed)
for many locks even in production kernels.

While here whack the tid argument from wlock hard and xlock hard.

There is no kbi change of any sort - "external" primitives still accept the
pair.
2017-11-22 21:51:17 +00:00
mjg
3fb5232780 locks: fix compilation issues without SMP or KDTRACE_HOOKS 2017-11-17 23:27:06 +00:00
mjg
23988560e7 mtx: add missing parts of the diff in r325920
Fixes build breakage.
2017-11-17 02:59:28 +00:00
mjg
24a0d3819f mtx: unlock before traversing threads to wake up
This shortens the lock hold time while not affecting correctness.
All the woken-up threads end up competing and can lose the race against
a completely unrelated thread getting the lock anyway.
2017-11-17 02:25:04 +00:00
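
The reordering amounts to something like this sketch (turnstile details
elided; the exact call sequence and queue constant are abbreviated):

    turnstile_chain_lock(&m->lock_object);
    ts = turnstile_lookup(&m->lock_object);
    /* Release the lock word first, shortening the hold time... */
    atomic_store_rel_ptr(&m->mtx_lock, MTX_UNOWNED);
    /* ...then wake the waiters; they could lose the race to an
       unrelated acquirer either way. */
    turnstile_broadcast(ts, TS_EXCLUSIVE_QUEUE);
    turnstile_unpend(ts);
    turnstile_chain_unlock(&m->lock_object);
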
mjg
630e712410 mtx: implement thread lock fastpath
MFC after:	1 week
2017-10-21 22:40:09 +00:00
mjg
002d36b454 mtx: fix up UP build after r324778
Reported by:	Michael Butler
2017-10-20 14:04:01 +00:00
mjg
cb00d2eba3 mtx: stop testing SCHEDULER_STOPPED in kabi funcs for spin mutexes
There is nothing panic-breaking to do in the unlock case, and the lock
case will fall back to the slow path, which does the check already.

MFC after:	1 week
2017-10-20 00:34:25 +00:00
mjg
96ff69bdc9 mtx: clean up locking spin mutexes
1) shorten the fast path by pushing the lockstat probe to the slow path
2) test for kernel panic only after it turns out we will have to spin,
in particular test only after we know we are not recursing

MFC after:	1 week
2017-10-20 00:30:35 +00:00
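
Sketched, the reordered spin-lock slow path:

    if (v == tid) {
        m->mtx_recurse++;       /* recursing: no spin, no panic test */
        return;
    }
    if (SCHEDULER_STOPPED())
        return;                 /* tested only once spinning is certain */
    /* ... spin for the lock; fire the lockstat probe on acquire ... */
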
mjg
e99dca871b mtx: fix up owner_mtx after r324609
Now that MTX_UNOWNED is 0, the test was always false.
2017-10-14 00:47:30 +00:00
mjg
77579b97a2 mtx: drop the tid argument from _mtx_lock_sleep
tid must be equal to curthread, and the target routine was already reading
it anyway, which is not a problem. Not passing it as a parameter allows for
slightly shorter code in callers.

MFC after:	1 week
2017-09-27 00:57:05 +00:00
mjg
21bf4a2f0f Annotate Giant with __exclusive_cache_line 2017-09-08 06:46:24 +00:00
mjg
fb0f2cc9b2 Sprinkle __read_frequently on few obvious places.
Note that some of the annotated variables should probably change their types
to something smaller, preferably bit-sized.
2017-09-06 20:33:33 +00:00
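
Usage is a plain annotation on the declaration; the flag below is a
hypothetical example:

    /*
     * Frequently-read, rarely-written globals get packed together so
     * their cache lines stay clean; a smaller (ideally bit-sized) type
     * would let more of them share a line.
     */
    static bool my_feature_enabled __read_frequently;   /* hypothetical */
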
markj
e0ef97d135 Correct the predicates on which lockstat:::{thread,spin}-spin fire.
In particular, they should fire only if the lock was owned by another
thread when we first attempted to acquire that lock.

MFC after:	1 week
2017-07-31 00:59:28 +00:00
markj
611738a7d7 Fix the !TD_IS_IDLETHREAD(curthread) locking assertions.
Most of the lock slowpaths assert that the calling thread isn't an idle
thread. However, this may not be true if the system has panicked, and in
some cases the assertion appears before a SCHEDULER_STOPPED() check.

MFC after:	3 days
Sponsored by:	Dell EMC Isilon
2017-06-19 21:09:50 +00:00
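
The fix amounts to weakening the assertion along these lines (a sketch; the
committed form may differ):

    KASSERT(SCHEDULER_STOPPED() || !TD_IS_IDLETHREAD(curthread),
        ("mtx_lock() by idle thread %p on mutex %s @ %s:%d",
        curthread, m->lock_object.lo_name, file, line));
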
mjg
bbc11bbfe1 mtx: fix whitespace damage in _mtx_trylock_flags_
MFC after:	3 days
2017-05-30 02:25:47 +00:00
imp
712c7fc2ea KDTRACE_HOOKS isn't guaranteed to be defined. Change the check to
test whether it is defined rather than whether it is non-zero.

Sponsored by: Netflix, Inc
2017-02-24 01:39:08 +00:00
mjg
867b4a4739 mtx: microoptimize lockstat handling in spin mutexes and thread lock
While here, make the code compilable on kernels with LOCK_PROFILING but
without KDTRACE_HOOKS.
2017-02-23 22:46:01 +00:00
mjg
48e0ee172a mtx: fix spin mutexes interaction with failed fcmpset
While doing so, move recursion support down to the fallback routine.
2017-02-20 19:08:36 +00:00
mjg
093d7b0fcc locks: make trylock routines check for 'unowned' value
Since fcmpset can fail without lock contention, e.g. on arm, it was possible
to get spurious failures when the caller was expecting the primitive to
succeed.

Reported by:	mmel
2017-02-19 16:28:46 +00:00
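
Sketched, the trylock loop only reports failure once the lock genuinely
reads as owned:

    uintptr_t v = MTX_UNOWNED;

    for (;;) {
        if (_mtx_obtain_lock_fetch(m, &v, tid))
            return (1);         /* acquired */
        if (v != MTX_UNOWNED)
            return (0);         /* really owned by someone else */
        /* spurious LL/SC failure: the lock still reads free, retry */
    }
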
mjg
9d1d07d1cb locks: clean up trylock primitives
In particular this reduces accesses of the lock itself.
2017-02-18 22:06:03 +00:00
mjg
3f4eab20e8 mtx: plug the 'opts' argument when not used 2017-02-18 01:52:10 +00:00
mjg
889b809c62 mtx: get rid of file/line args from slow paths if they are unused
This denotes changes which went in by accident in r313877.

On most production kernels both said parameters are zeroed and have nothing
reading them in either __mtx_lock_sleep or __mtx_unlock_sleep. Thus this
change stops passing them for internal consumers where this is the case.

Kernel modules use _flags variants which are not affected kbi-wise.
2017-02-17 15:40:24 +00:00
mjg
2480531c43 mtx: restrict r313875 to kernels without LOCK_PROFILING 2017-02-17 15:34:40 +00:00
mjg
51590b2f62 mtx: microoptimize lockstat handling in __mtx_lock_sleep
This saves a function call and multiple branches after the lock is acquired.
2017-02-17 14:55:59 +00:00
mjg
56448704f5 locks: let primitives for modules unlock without always going to the slow path
It is only needed if LOCK_PROFILING is enabled. The code always has to check
whether the lock is about to be released, which requires a read that is
avoidable if the option is not specified.
2017-02-17 05:39:40 +00:00
mjg
92dde5f426 locks: remove SCHEDULER_STOPPED checks from primitives for modules
They all fall back to the slow path if necessary, and the check is there.

This means a panicked kernel executing code from modules will be able to
succeed doing actual lock/unlock, but this was already the case for core code
which has said primitives inlined.
2017-02-17 05:09:51 +00:00
mjg
bcee8cb651 locks: tidy up unlock fallback paths
Update comments to note these functions are reachable if lockstat is
enabled.

Check if the lock has any bits set before attempting unlock, which saves
an unnecessary atomic operation.
2017-02-09 08:19:30 +00:00
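
Sketch of the cheap pre-check (v is the previously read lock word):

    /*
     * If flag bits are set the cmpset must fail anyway, so attempt
     * the atomic release only for the plain, uncontested value.
     */
    if ((v & (MTX_CONTESTED | MTX_RECURSED)) == 0 &&
        atomic_cmpset_rel_ptr(&m->mtx_lock, tid, MTX_UNOWNED))
        return;
    /* fall through to recursion/turnstile handling */
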
mjg
210d7e9a55 locks: change backoff to exponential
The previous implementation used a random factor to spread readers and
reduce the chance of starvation. This visibly reduces the effectiveness of
the mechanism.

Switch to the more traditional exponential variant. Try to limit starvation
by imposing an upper limit of spins, after which a thread's spinning is half
of what other threads get. Note the mechanism is turned off by default.

Reviewed by:	kib (previous version)
2017-02-07 14:49:36 +00:00
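
A minimal sketch of a capped exponential backoff; try_acquire() and
MAX_DELAY are hypothetical stand-ins, and the committed version additionally
halves the spin budget of threads that have hit the cap:

    static void
    lock_backoff_sketch(struct mtx *m)
    {
        u_int delay = 1, i;

        while (!try_acquire(m)) {
            for (i = 0; i < delay; i++)
                cpu_spinwait();
            if (delay < MAX_DELAY)
                delay <<= 1;    /* exponential growth up to the cap */
        }
    }
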
mjg
331b02c0ea locks: fix recursion support after recent changes
When a relevant lockstat probe is enabled, the fallback primitive is called
with a constant signifying a free lock. This works fine for typical cases but
breaks with recursion, since it checks whether the passed value is that of
the executing thread.

Read the value if necessary.
2017-02-06 09:40:14 +00:00
mjg
adc87445e0 mtx: fixup r313278, the assignment was supposed to go inside the loop 2017-02-05 09:53:13 +00:00
mjg
bd7ac91205 mtx: fix up _mtx_obtain_lock_fetch usage in thread lock
Since _mtx_obtain_lock_fetch no longer sets the argument to MTX_UNOWNED,
callers have to do it on their own.
2017-02-05 09:35:17 +00:00
mjg
91ae39e258 mtx: move lockstat handling out of inline primitives
Lockstat requires checking if it is enabled and, if so, calling a 6-argument
function. Further, determining whether to call it on unlock requires
pre-reading the lock value.

This is problematic in at least 3 ways:
- more branches in the hot path than necessary
- additional cacheline ping pong under contention
- bigger code

Instead, check first if lockstat handling is necessary and if so, just fall
back to regular locking routines. For this purpose a new macro is introduced
(LOCKSTAT_PROFILE_ENABLED).

LOCK_PROFILING uninlines all primitives. Fold the current inline lock
variant into _mtx_lock_flags to retain the support. With this change
the inline variants are not used when LOCK_PROFILING is defined and thus
can ignore its existence.

This results in:
   text	   data	    bss	    dec	    hex	filename
22259667	1303208	4994976	28557851	1b3c21b	kernel.orig
21797315	1303208	4994976	28095499	1acb40b	kernel.patched

i.e. about 3% reduction in text size.

A remaining action is to remove spurious arguments for internal kernel
consumers.
2017-02-05 08:04:11 +00:00
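
The resulting inline fast path is roughly the following sketch (argument
lists abbreviated relative to the committed macro):

    #define __mtx_lock_sketch(mp, tid, opts, file, line) do {           \
        uintptr_t _v = MTX_UNOWNED;                                     \
                                                                        \
        /* One predict-false branch covers both contention and          \
           enabled lockstat; either way, take the slow path. */         \
        if (__predict_false(LOCKSTAT_PROFILE_ENABLED(adaptive__acquire) \
            || !_mtx_obtain_lock_fetch((mp), &_v, (tid))))              \
            _mtx_lock_sleep((mp), _v, (opts), (file), (line));          \
    } while (0)
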
mjg
d593add0a5 mtx: switch to fcmpset
The found value is passed to locking routines in order to reduce cacheline
accesses.

mtx_unlock grows an explicit check for regular unlock. On ll/sc architectures
the routine can fail even if the lock could have been handled by the inline
primitive.

Discussed with:	jhb
Tested by:	pho (previous version)
2017-02-05 03:26:34 +00:00
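
In a nutshell (sketch): cmpset only reports failure, while fcmpset also
hands back the observed value, which can then be passed straight to the
slow path:

    uintptr_t v = MTX_UNOWNED;

    /* fcmpset updates v in place on failure... */
    if (!atomic_fcmpset_acq_ptr(&m->mtx_lock, &v, tid))
        /* ...so the slow path gets the found value for free. */
        _mtx_lock_sleep(m, v, opts, file, line);
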
mjg
5004e7bcb7 Sprinkle __read_mostly on backoff and lock profiling code.
MFC after:	1 month
2017-01-27 15:03:51 +00:00
mjg
f62b14bceb mtx: plug open-coded mtx_lock access missed in r311172 2017-01-04 02:25:31 +00:00