obtaining and releasing shared and exclusive locks. The algorithms for
manipulating the lock cookie are very similar to those of rwlocks. This patch
also adds support for exclusive locks using the same algorithm as mutexes.
A new sx_init_flags() function has been added so that optional flags can be
specified to alter a given lock's behavior. The flags include SX_DUPOK,
SX_NOWITNESS, SX_NOPROFILE, and SX_QUIET, which are all identical in nature
to the corresponding flags for mutexes.
Adaptive spinning on select locks may be enabled via the ADAPTIVE_SX
kernel option. Only locks initialized with the SX_ADAPTIVESPIN flag via
sx_init_flags() will adaptively spin.
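As an illustration, a minimal sketch of initializing such a lock (the lock
and function names here are hypothetical):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/sx.h>

    static struct sx foo_lock;

    static void
    foo_lock_init(void)
    {
            /*
             * SX_ADAPTIVESPIN only takes effect in kernels built with
             * "options ADAPTIVE_SX"; SX_DUPOK silences WITNESS complaints
             * about holding two locks of this type at once.
             */
            sx_init_flags(&foo_lock, "foo lock", SX_DUPOK | SX_ADAPTIVESPIN);
    }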
The common cases for sx_slock(), sx_sunlock(), sx_xlock(), and sx_xunlock()
are now performed inline in non-debug kernels. As a result, <sys/sx.h> now
requires <sys/lock.h> to be included prior to <sys/sx.h>.
The new kernel option SX_NOINLINE can be used to disable the aforementioned
inlining in non-debug kernels.
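A hypothetical consumer showing the new include order and the now-inlined
common cases (names are made up):

    #include <sys/param.h>
    #include <sys/lock.h>   /* must now precede <sys/sx.h> */
    #include <sys/sx.h>

    static struct sx data_lock;
    static int data_value;

    static int
    data_read(void)
    {
            int v;

            sx_slock(&data_lock);   /* inline fast path in non-debug kernels */
            v = data_value;
            sx_sunlock(&data_lock);
            return (v);
    }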
The size of struct sx has changed, so the kernel ABI is disturbed; kernel
modules that use sx locks will need to be rebuilt.
MFC after: 1 month
Submitted by: attilio
Tested by: kris, pjd
the mutex locked. Also tweak the wording to make it more consistent
between pthread_cond_wait and pthread_cond_timedwait.
Confirmed with the Open Group's web site that this is a valid return
value. The wording used is deliberately not that of the Open Group's
online man pages.
MFC after: 1 week
imitating an Ethernet device, so vlan(4) and if_bridge(4) can be
attached to it for testing and benchmarking purposes. Its source
can serve as an introduction to the anatomy of a network interface
driver thanks to its simplicity and the generous comments in it.
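For instance, a quick vlan(4) test on top of it might look like the
following (interface names and vlan tag are illustrative; see edsc(4) for
the exact usage):

    ifconfig edsc0 create
    ifconfig vlan0 create vlan 42 vlandev edsc0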
argument from a mutex to a lock_object. Add cv_*wait*() wrapper macros
that accept either a mutex, rwlock, or sx lock as the second argument and
convert it to a lock_object and then call _cv_*wait*(). Basically, the
visible difference is that you can now use rwlocks and sx locks with
condition variables using the same API as with mutexes.
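A minimal hypothetical sketch of the new API, pairing a condition variable
with an sx lock instead of a mutex:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/sx.h>
    #include <sys/condvar.h>

    static struct sx data_lock;
    static struct cv data_cv;
    static int data_ready;

    static void
    data_wait(void)
    {
            sx_xlock(&data_lock);
            while (!data_ready)
                    cv_wait(&data_cv, &data_lock);  /* an sx lock, not a mutex */
            data_ready = 0;
            sx_xunlock(&data_lock);
    }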
This is supposed to be a brief overview of the locking primitives.
It is not yet complete and contains many placeholders for information
I do not know.
The locking is getting so diverse that I've lost track of it all.
We need this page to keep ourselves in sync with what the primitives do.
Note: not part of the build yet.
the background fsck indefinitely. This allows the administrator to run
it at a convenient time. To support running it from cron, the
forcestart argument now causes the fsck to start with no delay and all
output to be suppressed.
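For example, a crontab(5) entry along these lines (the schedule and the
rc.d script name are illustrative) would start the delayed background fsck
early on Saturday mornings:

    0 4 * * 6    root    /etc/rc.d/bgfsck forcestart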
event. Locking primitives that support this (mtx, rw, and sx) now each
include their own foo_sleep() routine.
- Rename msleep() to _sleep() and change its 'struct mtx' argument to a
'struct lock_object' pointer. _sleep() uses the recently added
lc_unlock() and lc_lock() function pointers for the lock class of the
specified lock to release the lock while the thread is suspended.
- Add wrappers around _sleep() for mutexes (mtx_sleep()), rw locks
(rw_sleep()), and sx locks (sx_sleep()); see the sketch after this list.
msleep() still exists and is now identical to mtx_sleep(), but it is
deprecated.
- Rename SLEEPQ_MSLEEP to SLEEPQ_SLEEP.
- Rewrite much of sleep.9 to not be msleep(9) centric.
- Flesh out the 'RETURN VALUES' section in sleep.9 and add an 'ERRORS'
section.
- Add __nonnull(1) to _sleep() and msleep_spin() so that the compiler will
warn if you try to pass a NULL wait channel. The functions already have
a KASSERT to that effect.
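A hypothetical use of one of the new wrappers; internally, sx_sleep()
releases the sx lock via the lock class's lc_unlock() while the thread
sleeps and retakes it with lc_lock() before returning:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/lock.h>
    #include <sys/sx.h>

    /* Wait up to one second for chan; the caller holds lock exclusively. */
    static int
    event_wait(struct sx *lock, void *chan)
    {
            return (sx_sleep(chan, lock, PRIBIO, "evwait", hz));
    }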
interrupt sleeps. Rather, unmasked signals interrupt sleeps and can
either interrupt the system call by having it return EINTR in userland or
force the system call to be restarted.
- Don't claim that the mutex is atomically reacquired when a cv_wait
routine returns. There's nothing atomic or magical about the lock
reacquire. The only magic is that the drop is atomic: the thread is
placed on the sleep queue before the lock is released, so a wakeup sent
in between cannot be missed.
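This is also why the predicate is always re-tested in a loop: nothing
guarantees it still holds once the lock has been retaken. A minimal,
hypothetical illustration:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/condvar.h>

    static struct mtx m;
    static struct cv cv;
    static int condition;

    static void
    consumer(void)
    {
            mtx_lock(&m);
            while (!condition) {
                    /*
                     * The thread is queued on the sleep queue before the
                     * mutex is dropped, so a concurrent cv_signal() cannot
                     * be lost.  The reacquire on wakeup is an ordinary
                     * mtx_lock(), so condition must be re-tested.
                     */
                    cv_wait(&cv, &m);
            }
            condition = 0;
            mtx_unlock(&m);
    }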