Commit Graph

320 Commits

Alexander Motin
4730a8972b callout(9): Allow spin lock use with callout_init_mtx().
Implement the lock_spin()/unlock_spin() lock class methods, moving the
assertion to _sleep() instead.  Change assertions in callout(9) to
allow spin locks for both regular and C_DIRECT_EXEC cases.  For
C_DIRECT_EXEC callouts, spin locks are in fact the only locks allowed.

As the first use case, allow taskqueue_enqueue_timeout() to be used on
fast task queues.  It actually becomes more efficient, since
C_DIRECT_EXEC avoids extra context switches in callout(9).

MFC after:	2 weeks
Reviewed by:	hselasky
Differential Revision:	https://reviews.freebsd.org/D31778
2021-09-02 21:16:46 -04:00
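
For illustration, a minimal sketch of what this change permits.  The
softc layout and the mysc/my_* names are hypothetical, not from the
commit:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/callout.h>
    #include <sys/taskqueue.h>

    struct mysc {
            struct mtx          lock;       /* spin lock, now allowed */
            struct callout      tick;
            struct timeout_task ttask;
    };

    static void
    my_timer_fn(void *arg)
    {
            /* C_DIRECT_EXEC: runs from interrupt context with
             * sc->lock (a spin lock) held by callout(9). */
    }

    static void
    my_task_fn(void *ctx, int pending)
    {
            /* Runs on the fast taskqueue once the timeout fires. */
    }

    static void
    mysc_start(struct mysc *sc)
    {
            mtx_init(&sc->lock, "mysc", NULL, MTX_SPIN);
            callout_init_mtx(&sc->tick, &sc->lock, 0);
            callout_reset_sbt(&sc->tick, SBT_1MS, 0, my_timer_fn, sc,
                C_DIRECT_EXEC);

            /* The first use case: a timeout task on a fast taskqueue. */
            TIMEOUT_TASK_INIT(taskqueue_fast, &sc->ttask, 0, my_task_fn, sc);
            taskqueue_enqueue_timeout(taskqueue_fast, &sc->ttask, hz);
    }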
Eric van Gyzen
428624130a Fix lockstat:::thread-spin dtrace probe with LOCK_PROFILING
The spinning start time was missing from the calculation due to a
misplaced #endif.  Move the #endif back to where it belongs.

Submitted by:	Alexander Alexeev <aalexeev@isilon.com>
Reviewed by:	bdrewery, mjg
MFC after:	1 week
Sponsored by:	Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D31384
2021-08-02 14:44:23 -05:00
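
A condensed, hypothetical sketch of the pattern at stake (not the
actual Isilon diff): the two samples must bracket the spin loop, so a
misplaced #endif can compile the start sample out in LOCK_PROFILING
kernels and leave the probe with a garbage duration.

    #ifdef KDTRACE_HOOKS
            spin_time -= lockstat_nsecs(&m->lock_object);   /* spin begins */
    #endif
            while (!_mtx_obtain_lock(m, tid))
                    cpu_spinwait();
    #ifdef KDTRACE_HOOKS
            spin_time += lockstat_nsecs(&m->lock_object);   /* spin ends */
            LOCKSTAT_RECORD1(thread__spin, m, spin_time);
    #endif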
Mateusz Guzik
6a467cc5e1 lockprof: pass lock type as an argument instead of reading the spin flag 2021-05-23 17:55:27 +00:00
Mateusz Guzik
f90d57b808 locks: push lock_delay_arg_init calls down
Minor cleanup to skip doing them when recursing on locks, and so that
they can act on the found lock value if need be.
2020-11-24 03:49:37 +00:00
Mateusz Guzik
bd66a0750f mtx: add mtx_wait_unlocked 2020-08-04 23:00:00 +00:00
Mateusz Guzik
c795344ff7 locks: fix a long-standing bug for primitives with kdtrace but without spinning
In such a case the second argument to lock_delay_arg_init was NULL,
which immediately caused a null pointer dereference.

Since the structure is only used for the spin count, provide a
dedicated routine to initialize it.

Reported by:	andrew
2020-07-23 17:26:53 +00:00
Pawel Biernacki
7029da5c36 Mark more nodes as CTLFLAG_MPSAFE or CTLFLAG_NEEDGIANT (17 of many)
r357614 added CTLFLAG_NEEDGIANT to make it easier to find nodes that are
still not MPSAFE (or already are but aren’t properly marked).
Use it in preparation for a general review of all nodes.

This is a non-functional change that adds annotations to SYSCTL_NODE and
SYSCTL_PROC nodes using one of the soon-to-be-required flags.

Mark all obvious cases as MPSAFE.  All entries that haven't been marked
as MPSAFE before are marked as NEEDGIANT by default.

Approved by:	kib (mentor, blanket)
Commented by:	kib, gallatin, melifaro
Differential Revision:	https://reviews.freebsd.org/D23718
2020-02-26 14:26:36 +00:00
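
A small sketch of the two annotations in question; the node and handler
names below are invented for illustration:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int example_handler(SYSCTL_HANDLER_ARGS);

    /* Known not to depend on Giant: mark MPSAFE. */
    SYSCTL_NODE(_kern, OID_AUTO, example, CTLFLAG_RD | CTLFLAG_MPSAFE, 0,
        "example subtree");

    /* Legacy handler that may still rely on Giant: mark NEEDGIANT. */
    SYSCTL_PROC(_kern_example, OID_AUTO, legacy,
        CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_NEEDGIANT, NULL, 0,
        example_handler, "I", "legacy tunable");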
Mateusz Guzik
879e0604ee Add KERNEL_PANICKED macro for use in place of direct panicstr tests 2020-01-12 06:07:54 +00:00
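
The intent, sketched under the assumption that the macro simply wraps
the existing panicstr test:

    /* Before: open-coded test of a global. */
    if (panicstr != NULL)
            return;

    /* After: the same condition, explicit and greppable. */
    if (KERNEL_PANICKED())
            return;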
Mateusz Guzik
2e77cad11d locks: add default delay struct
Use it for all primitives. This makes everything fit in 8 bytes.
2020-01-05 12:48:19 +00:00
Mateusz Guzik
6b8dd26e7c locks: convert delay times to u_short
int is just a waste of space for this purpose.
2020-01-05 12:47:29 +00:00
Mateusz Guzik
3fd19ce7a5 mtx: eliminate recursion support from thread lock
It is no longer used now that the schedlock changes have been merged.

Note that the unlock routine temporarily still checks for it, on account
of just using the regular spin unlock.

This is a prelude towards a general clean up.
2019-12-16 00:04:33 +00:00
Jeff Roberson
61a74c5ccd schedlock 1/4
Eliminate recursion from most thread_lock consumers.  Return from
sched_add() without the thread_lock held.  This eliminates unnecessary
atomics and lock word loads as well as reducing the hold time for
scheduler locks.  This will eventually allow for lockless remote adds.

Discussed with:	kib
Reviewed by:	jhb
Tested by:	pho
Differential Revision:	https://reviews.freebsd.org/D22626
2019-12-15 21:11:15 +00:00
Xin LI
a11bf9a49b INVARIANTS: treat LA_LOCKED the same as LA_XLOCKED in mtx_assert.
The Linux lockdep API assumes LA_LOCKED semantics in
lockdep_assert_held(), meaning that either a shared lock or a write
lock is OK.  On the other hand, the timeout code uses lc_assert() with
LA_XLOCKED, and we need both to work.

For mutexes, because they cannot be shared (this is unique among all
lock classes, and it is unlikely that we would add a new lock class
anytime soon), it is easier to simply extend mtx_assert to handle
LA_LOCKED there, even though the change itself can be viewed as a
slight abstraction violation.

Reviewed by:	mjg, cem, jhb
MFC after:	1 month
Differential Revision:	https://reviews.freebsd.org/D21362
2019-08-23 06:39:40 +00:00
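
A sketch of the resulting semantics: since a mutex is never shared,
"locked" and "exclusively locked" coincide, so after this change both
lock-class assertions below pass on an owned mutex.

    struct mtx m;

    mtx_init(&m, "example", NULL, MTX_DEF);
    mtx_lock(&m);
    /* What lockdep_assert_held() needs: */
    LOCK_CLASS(&m.lock_object)->lc_assert(&m.lock_object, LA_LOCKED);
    /* What the timeout code asserts: */
    LOCK_CLASS(&m.lock_object)->lc_assert(&m.lock_object, LA_XLOCKED);
    mtx_unlock(&m);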
Mateusz Guzik
f183fb162c locks: plug warnings about uninitialized variables
They only showed up after I redefined LOCKSTAT_ENABLED to 0.

doing_lockprof in mutex.c is a real (but harmless) bug.  Should the
value be non-zero, it will perform lock-profiling checks that would
otherwise be skipped.

state in rwlock.c is a compiler wart; the value can't be used if lock
profiling is not enabled.

Sponsored by:	The FreeBSD Foundation
2018-11-13 21:29:56 +00:00
John Baldwin
4cbbb74888 Add a KPI for the delay while spinning on a spin lock.
Replace a call to DELAY(1) with a new cpu_lock_delay() KPI.  Currently
cpu_lock_delay() is defined to DELAY(1) on all platforms.  However,
platforms with a DELAY() implementation that uses spin locks should
implement a custom cpu_lock_delay() that doesn't use locks.

Reviewed by:	kib
MFC after:	3 days
2018-11-05 21:34:17 +00:00
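
The shape of the KPI, condensed (the spin loop below is illustrative
rather than the literal kern_mutex.c code):

    /* Default on all platforms; a platform whose DELAY() takes a spin
     * lock is expected to override this with a lock-free delay. */
    #define cpu_lock_delay()        DELAY(1)

    /* Spinning on a contested spin lock can then back off safely: */
    while (!atomic_cmpset_acq_ptr(&m->mtx_lock, MTX_UNOWNED, tid))
            cpu_lock_delay();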
Mateusz Guzik
d0a22279db Remove an unused argument to turnstile_unpend.
PR:	228694
Submitted by:	Julian Pszczołowski <julian.pszczolowski@gmail.com>
2018-06-02 22:37:53 +00:00
Mark Johnston
8c7549da2b Drop KTR_CONTENTION.
It is incomplete, has not been adopted in the other locking primitives,
and we have other means of measuring lock contention (lock_profiling,
lockstat, KTR_LOCK). Drop it to slightly de-clutter the mutex code and
free up a precious KTR class index.

Reviewed by:	jhb, mjg
MFC after:	1 week
Differential Revision:	https://reviews.freebsd.org/D14771
2018-03-20 15:51:05 +00:00
Mateusz Guzik
09bdec20a0 locks: slightly depessimize lockstat
The slow path is always taken when lockstat is enabled. This induces
rdtsc (or other) calls to get the cycle count even when there was no
contention.

Still go to the slow path to not mess with the fast path, but avoid
the heavy lifting unless necessary.

This reduces sys and real time during -j 80 buildkernel:
before: 3651.84s user 1105.59s system 5394% cpu 1:28.18 total
after: 3685.99s user 975.74s system 5450% cpu 1:25.53 total
disabled: 3697.96s user 411.13s system 5261% cpu 1:18.10 total

Note that this is still a significant hit.

LOCK_PROFILING results are not affected.
2018-03-17 19:26:33 +00:00
Mateusz Guzik
9f4e008d4a mtx: tidy up recursion handling in thread lock
Normally, after grabbing the lock it has to be verified that we got the
right one to begin with.  However, if we are recursing, it must not
have changed, so the check can be avoided.  In particular, this avoids
a lock read in the non-recursing case when the lock turns out to have
changed.

While here, avoid an irq trip if this happens.

Tested by:	pho (previous version)
2018-03-04 22:01:23 +00:00
Mateusz Guzik
500ca73d43 mtx: add debug assertions to mtx_spin_wait_unlocked 2018-02-20 20:39:34 +00:00
Mateusz Guzik
d257698833 mtx: add mtx_spin_wait_unlocked
The primitive can be used to wait for the lock to be released. Intended
usage is for locks in structures which are about to be freed.

The benefits are the avoided interrupt enable/disable trip plus the
atomic op to grab the lock, and a shorter wait if the lock is held
(since there is no worry that someone will contend on the lock,
re-reads can be more aggressive).

Briefly discussed with:	 kib
2018-02-19 00:38:14 +00:00
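
A sketch of the intended usage; the object type and field names are
hypothetical:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/malloc.h>

    struct obj {
            struct mtx      o_lock;         /* MTX_SPIN */
            /* ... payload ... */
    };

    static void
    obj_destroy(struct obj *o)
    {
            /*
             * o is already unreachable, so nobody can newly contend on
             * o_lock; just wait out any last holder instead of paying
             * the interrupt-disable trip and atomic to take the lock.
             */
            mtx_spin_wait_unlocked(&o->o_lock);
            mtx_destroy(&o->o_lock);
            free(o, M_TEMP);
    }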
Mateusz Guzik
310f24d72a mtx: use fcmpset to cover setting MTX_CONTESTED 2018-01-12 13:40:50 +00:00
Mateusz Guzik
15140a8ade mtx: deduplicate indefinite wait check in spinlocks and thread lock 2017-12-31 00:34:29 +00:00
Mateusz Guzik
1f4d28c7ea mtx: pre-read the lock value in thread_lock_flags_
Since this function is effectively a slow path, if we get here the lock
is most likely already taken, in which case it is cheaper not to
blindly attempt the atomic op.

While here, move the hwpmc probe out of the loop to match other
primitives.
2017-12-31 00:33:28 +00:00
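
A condensed sketch of the pattern (simplified, not the literal diff):

    uintptr_t tid, v;

    tid = (uintptr_t)curthread;
    v = m->mtx_lock;        /* plain read first: the atomic would
                               most likely fail here anyway */
    for (;;) {
            if (v == MTX_UNOWNED) {
                    /* Only now attempt the atomic; on failure v is
                     * refreshed with the current lock word. */
                    if (_mtx_obtain_lock_fetch(m, &v, tid))
                            break;
                    continue;
            }
            /* owned: spin until the lock looks free, then retry */
    }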
Pedro F. Giffuni
8a36da99de sys/kern: adoption of SPDX licensing ID tags.
Mainly focus on files that use the BSD 2-Clause license; however, the
tool I was using misidentified many licenses, so this was mostly a
manual, error-prone task.

The Software Package Data Exchange (SPDX) group provides a
specification to make it easier for automated tools to detect and
summarize well-known open source licenses.  We are gradually adopting
the specification, noting that the tags are considered only advisory
and do not, in any way, supersede or replace the license texts.
2017-11-27 15:20:12 +00:00
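
The adopted form is a single advisory comment line at the top of each
file; for the 2-clause files it looks like this:

    /*-
     * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
     *
     * Copyright (c) ...
     * (full license text retained below; the tag does not replace it)
     */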
Mateusz Guzik
2c50bafef5 Add the missing lockstat check for thread lock. 2017-11-25 20:49:27 +00:00
Mateusz Guzik
b584eb2e90 locks: pass the found lock value to unlock slow path
This avoids an explicit read later.

While here whack the cheaply obtainable 'tid' argument.
2017-11-22 22:04:04 +00:00
Mateusz Guzik
013c0b493f locks: remove the file + line argument from internal primitives when not used
The pair is of use only in debug or LOCKPROF kernels, but was passed (zeroed)
for many locks even in production kernels.

While here whack the tid argument from wlock hard and xlock hard.

There is no kbi change of any sort - "external" primitives still accept the
pair.
2017-11-22 21:51:17 +00:00
Mateusz Guzik
284194f183 locks: fix compilation issues without SMP or KDTRACE_HOOKS 2017-11-17 23:27:06 +00:00
Mateusz Guzik
2ccee9cc52 mtx: add missing parts of the diff in r325920
Fixes build breakage.
2017-11-17 02:59:28 +00:00
Mateusz Guzik
8448e02081 mtx: unlock before traversing threads to wake up
This shortens the lock hold time while not affecting correctness.
All the woken-up threads end up competing and can lose the race against
a completely unrelated thread getting the lock anyway.
2017-11-17 02:25:04 +00:00
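
A condensed, hedged sketch of the new ordering in the unlock slow path
(turnstile chain locking elided; names follow the style of the code,
not the exact diff):

    ts = turnstile_lookup(&m->lock_object);
    /* Release the lock word first... */
    atomic_store_rel_ptr(&m->mtx_lock, (uintptr_t)MTX_UNOWNED);
    /* ...then wake the waiters; they, and anyone else, now race
     * for the lock afresh. */
    turnstile_broadcast(ts, TS_EXCLUSIVE_QUEUE);
    turnstile_unpend(ts);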
Mateusz Guzik
be49509eea mtx: implement thread lock fastpath
MFC after:	1 week
2017-10-21 22:40:09 +00:00
Mateusz Guzik
62bf13cbf9 mtx: fix up UP build after r324778
Reported by:	Michael Butler
2017-10-20 14:04:01 +00:00
Mateusz Guzik
cbc2d7c218 mtx: stop testing SCHEDULER_STOPPED in kabi funcs for spin mutexes
There is nothing panic-breaking to do in the unlock case, and the lock
case will fall back to the slow path, which already does the check.

MFC after:	1 week
2017-10-20 00:34:25 +00:00
Mateusz Guzik
0d74fe267b mtx: clean up locking spin mutexes
1) shorten the fast path by pushing the lockstat probe to the slow path
2) test for kernel panic only after it turns out we will have to spin,
in particular test only after we know we are not recursing

MFC after:	1 week
2017-10-20 00:30:35 +00:00
Mateusz Guzik
e280ce465b mtx: fix up owner_mtx after r324609
Now that MTX_UNOWNED is 0, the test was always false.
2017-10-14 00:47:30 +00:00
Mateusz Guzik
2f1ddb89fc mtx: drop the tid argument from _mtx_lock_sleep
tid must be equal to curthread, and the target routine was already
reading it anyway, so this is not a problem.  Not passing it as a
parameter allows for slightly shorter code in callers.

MFC after:	1 week
2017-09-27 00:57:05 +00:00
Mateusz Guzik
6a569d3525 Annotate Giant with __exclusive_cache_line 2017-09-08 06:46:24 +00:00
Mateusz Guzik
574adb65c8 Sprinkle __read_frequently on few obvious places.
Note that some of the annotated variables should probably change their
types to something smaller, preferably bit-sized.
2017-09-06 20:33:33 +00:00
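
The annotation gathers hot, rarely-written globals into a read-mostly
region so they do not share cache lines with frequently-written data;
a hypothetical example:

    /* Read on nearly every lock operation, written almost never. */
    bool __read_frequently lock_checks_enabled;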
Mark Johnston
d0f68f913b Correct the predicates on which lockstat:::{thread,spin}-spin fire.
In particular, they should fire only if the lock was owned by another
thread when we first attempted to acquire that lock.

MFC after:	1 week
2017-07-31 00:59:28 +00:00
Mark Johnston
704cb42f2a Fix the !TD_IS_IDLETHREAD(curthread) locking assertions.
Most of the lock slowpaths assert that the calling thread isn't an idle
thread. However, this may not be true if the system has panicked, and in
some cases the assertion appears before a SCHEDULER_STOPPED() check.

MFC after:	3 days
Sponsored by:	Dell EMC Isilon
2017-06-19 21:09:50 +00:00
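
A sketch of the corrected assertion shape (the message text is
illustrative):

    KASSERT(kdb_active || SCHEDULER_STOPPED() ||
        !TD_IS_IDLETHREAD(curthread),
        ("mtx_lock() by idle thread %p on mutex %s @ %s:%d",
        curthread, m->lock_object.lo_name, file, line));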
Mateusz Guzik
c7a6a1b325 mtx: fix whitespace damage in _mtx_trylock_flags_
MFC after:	3 days
2017-05-30 02:25:47 +00:00
Warner Losh
df1c30f6bd KDTRACE_HOOKS isn't guaranteed to be defined. Change to check to see
if it is defined or not rather than if it is non-zero.

Sponsored by: Netflix, Inc
2017-02-24 01:39:08 +00:00
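
The distinction in preprocessor terms (the hook body is a placeholder):

    /* Fragile: when the option is absent, the undefined identifier
     * evaluates to 0, which compiles but warns under -Wundef. */
    #if KDTRACE_HOOKS
            /* ... dtrace hook ... */
    #endif

    /* Robust: kernel config options are about definedness. */
    #ifdef KDTRACE_HOOKS
            /* ... dtrace hook ... */
    #endif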
Mateusz Guzik
dfaa7859d6 mtx: microoptimize lockstat handling in spin mutexes and thread lock
While here, make the code compilable on kernels with LOCK_PROFILING but
without KDTRACE_HOOKS.
2017-02-23 22:46:01 +00:00
Mateusz Guzik
13d2ef0f3a mtx: fix spin mutexes interaction with failed fcmpset
While doing so move recursion support down to the fallback routine.
2017-02-20 19:08:36 +00:00
Mateusz Guzik
b247fd395d locks: make trylock routines check for 'unowned' value
Since fcmpset can fail without lock contention, e.g. on arm, it was
possible to get spurious failures when the caller was expecting the
primitive to succeed.

Reported by:	mmel
2017-02-19 16:28:46 +00:00
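
A condensed sketch of the fixed retry logic (written in the style of
the primitives, not the literal diff): on LL/SC architectures such as
arm, fcmpset may fail even while the lock word still reads "unowned",
so a trylock must only report failure once it has actually observed an
owner.

    uintptr_t v = MTX_UNOWNED;

    for (;;) {
            if (_mtx_obtain_lock_fetch(m, &v, tid))
                    return (1);     /* acquired */
            if (v != MTX_UNOWNED)
                    return (0);     /* genuinely owned by someone */
            /* spurious fcmpset failure: v still unowned, retry */
    }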
Mateusz Guzik
5c5df0d99b locks: clean up trylock primitives
In particular this reduces accesses of the lock itself.
2017-02-18 22:06:03 +00:00
Mateusz Guzik
a24c8eb847 mtx: plug the 'opts' argument when not used 2017-02-18 01:52:10 +00:00
Mateusz Guzik
cbebea4e67 mtx: get rid of file/line args from slow paths if they are unused
This denotes changes which went in by accident in r313877.

On most production kernels both of these parameters are zeroed and
nothing reads them in either __mtx_lock_sleep or __mtx_unlock_sleep.
Thus this change stops passing them for the internal consumers where
this is the case.

Kernel modules use _flags variants which are not affected kbi-wise.
2017-02-17 15:40:24 +00:00
Mateusz Guzik
09f1319acd mtx: restrict r313875 to kernels without LOCK_PROFILING 2017-02-17 15:34:40 +00:00