Commit Graph

94 Commits

Author SHA1 Message Date
Robert Watson
2b59d50cfb In lockstatus(), don't lock and unlock the interlock when testing the
sleep lock status while kdb_active, or we risk contending with the
mutex on another CPU, resulting in a panic when using "show
lockedvnods" while in DDB.

MFC after:	3 days
Reviewed by:	jhb
Reported by:	kris
2005-09-27 21:02:59 +00:00
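A minimal sketch of the kdb-aware short-circuit described above, using the era's struct lock fields (lk_interlock, lk_exclusivecount, lk_sharecount, lk_lockholder); the function name and exact layout are assumptions, not the committed code.

#include <sys/param.h>
#include <sys/kdb.h>
#include <sys/lock.h>
#include <sys/lockmgr.h>
#include <sys/mutex.h>
#include <sys/proc.h>

/*
 * Sketch: report a sleep lock's status without taking its interlock
 * while the debugger is active, so "show lockedvnods" cannot spin on
 * a mutex held by a CPU that is stopped in DDB.
 */
static int
lockstatus_sketch(struct lock *lkp, struct thread *td)
{
	int lock_type = 0;

	if (!kdb_active)
		mtx_lock(lkp->lk_interlock);	/* normal, non-debugger path */
	if (lkp->lk_exclusivecount != 0)
		lock_type = (td == NULL || td == lkp->lk_lockholder) ?
		    LK_EXCLUSIVE : LK_EXCLOTHER;
	else if (lkp->lk_sharecount != 0)
		lock_type = LK_SHARED;
	if (!kdb_active)
		mtx_unlock(lkp->lk_interlock);
	return (lock_type);
}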
Suleiman Souhlal
1f71de49e1 Print out a warning and a backtrace if we try to unlock a lockmgr that
we do not hold.

Glanced at by:	phk
MFC after:	3 days
2005-09-02 15:56:01 +00:00
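A sketch of the kind of check the warning could hang off of; the helper name is hypothetical, and the real test lives inside lockmgr() itself.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kdb.h>
#include <sys/lock.h>
#include <sys/lockmgr.h>
#include <sys/proc.h>

/*
 * Sketch: on LK_RELEASE, if the exclusive holder is neither this
 * thread nor LK_KERNPROC, complain and dump a stack trace instead of
 * silently proceeding.
 */
static void
lockmgr_release_check(struct lock *lkp, struct thread *td)
{
	if (lkp->lk_exclusivecount != 0 &&
	    lkp->lk_lockholder != td && lkp->lk_lockholder != LK_KERNPROC) {
		printf("lockmgr: thread %p unlocking %s it does not hold\n",
		    td, lkp->lk_wmesg);
		kdb_backtrace();
	}
}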
Pawel Jakub Dawidek
e37a499443 Add a 'depth' argument to the CTRSTACK() macro, which makes it possible
to reduce the number of KTR slots used. If 'depth' is equal to 0, the whole
stack is logged, just like before.
2005-08-29 11:34:08 +00:00
Jeff Roberson
7499fd8de9 - Fix a problem that slipped through review; the stack member of the lockmgr
structure should have the lk_ prefix.
 - Add stack_print(lkp->lk_stack) to the information printed with
   lockmgr_printinfo().
2005-08-03 04:59:07 +00:00
Jeff Roberson
e8ddb61d38 - Replace the series of DEBUG_LOCKS hacks which tried to save the vn_lock
caller by saving the stack of the last locker/unlocker in lockmgr.  We
   also put the stack in KTR at the moment.

Contributed by:		Antoine Brodin <antoine.brodin@laposte.net>
2005-08-03 04:48:22 +00:00
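Roughly, the idea is to capture the caller's stack into the lock itself on every lock/unlock instead of threading file/line arguments through vn_lock(); a hedged sketch, with lockmgr_note_caller() as a hypothetical helper and lk_stack taken from the follow-up commit above.

#include <sys/param.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/lockmgr.h>
#include <sys/stack.h>

/*
 * Sketch: remember who last touched this lock and also feed the
 * event to KTR for post-mortem analysis.
 */
static void
lockmgr_note_caller(struct lock *lkp)
{
#ifdef DEBUG_LOCKS
	stack_save(&lkp->lk_stack);		/* stack of the last locker/unlocker */
	CTR1(KTR_LOCK, "lockmgr: lock %p touched", lkp);
#endif
}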
Jeff Roberson
436901a86b - Differentiate two UPGRADE panics so I have a better idea of what's going
on here.
2005-04-12 05:43:03 +00:00
Jeff Roberson
a96ab77002 - Remove dead code. 2005-04-06 10:11:14 +00:00
Jeff Roberson
20728d8f63 - Slightly restructure acquire() so I can add more ktr information and
an assert to help find two strange bugs.
 - Remove some nearby spls.
2005-04-03 11:49:02 +00:00
Jeff Roberson
c4c0ec5ba7 - Add a LK_NOSHARE flag which forces all shared lock requests to be
treated as exclusive lock requests.

Sponsored by:	Isilon Systems, Inc.
2005-03-31 05:18:19 +00:00
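In effect, LK_NOSHARE rewrites LK_SHARED requests into LK_EXCLUSIVE ones before the normal processing runs; a small sketch (the helper name is assumed):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/lockmgr.h>

/*
 * Sketch: if the lock was initialized with LK_NOSHARE, treat a shared
 * request as exclusive and let the usual exclusive path handle it.
 */
static int
lockmgr_adjust_op(struct lock *lkp, int flags)
{
	int op = flags & LK_TYPE_MASK;

	if ((lkp->lk_flags & LK_NOSHARE) != 0 && op == LK_SHARED)
		op = LK_EXCLUSIVE;
	return (op);
}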
Jeff Roberson
b641353e3a - Remove apause(). It makes no sense with our present mutex implementation
since simply unlocking a mutex does not ensure that one of the waiters
   will run and acquire it.  We're more likely to reacquire the mutex
   before anyone else has a chance.  It has also bit me three times now, as
   it's not safe to drop the interlock before sleeping in many cases.

Sponsored by:	Isilon Systems, Inc.
2005-03-31 04:25:59 +00:00
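The point that unlocking a mutex does not hand it to a waiter is easy to see in userland, too. This small pthread program (an illustration only, not kernel code) typically shows the "waiter" winning the mutex only a small fraction of the time on most systems, because the releasing thread usually reacquires before the waiter is even scheduled.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static volatile int waiter_acquired;
static volatile int done;

static void *
waiter(void *arg)
{
	while (!done) {
		pthread_mutex_lock(&m);
		waiter_acquired++;
		pthread_mutex_unlock(&m);
	}
	return (NULL);
}

int
main(void)
{
	pthread_t t;
	int i;

	pthread_mutex_lock(&m);
	pthread_create(&t, NULL, waiter, NULL);
	for (i = 0; i < 1000000; i++) {
		pthread_mutex_unlock(&m);	/* "drop" the lock ... */
		pthread_mutex_lock(&m);		/* ... and usually win it right back */
	}
	done = 1;
	pthread_mutex_unlock(&m);
	pthread_join(t, NULL);
	printf("waiter acquired the mutex %d times out of 1000000 drops\n",
	    waiter_acquired);
	return (0);
}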
Jeff Roberson
bf5c2a1940 - Don't bump the count twice in the LK_DRAIN case.
Sponsored by:	Isilon Systems, Inc.
2005-03-28 12:52:10 +00:00
Jeff Roberson
f158df07ab - Restore COUNT() in all of its original glory. Don't make it dependent
on DEBUG as ufs will soon grow a dependency on this count.

Discussed with:	bde
Sponsored by:	Isilon Systems, Inc.
2005-03-25 00:00:44 +00:00
Jeff Roberson
92e251caf7 - Complete the implementation of td_locks. Track the number of outstanding
lockmgr locks that this thread owns.  This is complicated due to
   LK_KERNPROC and because lockmgr tolerates unlocking an unlocked lock.

Sponsored by:	Isilon Systems, Inc.
2005-03-24 09:35:06 +00:00
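A hedged sketch of the bookkeeping (helper names hypothetical); the real code also has to cope with LK_KERNPROC hand-offs and with unlocking an unlocked lock, both of which this ignores.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/lockmgr.h>
#include <sys/proc.h>

/* Sketch: count outstanding lockmgr locks per thread in td->td_locks. */
static void
lockmgr_note_acquire(struct thread *td)
{
	if (td != NULL && td != LK_KERNPROC)
		td->td_locks++;
}

static void
lockmgr_note_release(struct thread *td)
{
	if (td != NULL && td != LK_KERNPROC)
		td->td_locks--;
}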
Jeff Roberson
f5f0da0a0e - transferlockers() requires the interlock to be SMP safe.
Sponsored by:	Isilon Systems, Inc.
2005-03-15 09:27:45 +00:00
Jeff Roberson
04186764a4 - Include LK_INTERLOCK in LK_EXTFLG_MASK so that it makes its way into
acquire.
 - Correct the condition that causes us to skip apause() to only require
   the presence of LK_INTERLOCK.

Sponsored by:	Isilon Systems, Inc.
2005-01-25 16:06:05 +00:00
Jeff Roberson
41bd6c15f2 - Do not use APAUSE if LK_INTERLOCK is set. We lose synchronization
if the lockmgr interlock is dropped after the caller's interlock
   is dropped.
 - Change some lockmgr KTRs to be slightly more helpful.

Sponsored By:	Isilon Systems, Inc.
2005-01-24 10:20:59 +00:00
Warner Losh
9454b2d864 /* -> /*- for copyright notices, minor format tweaks as necessary 2005-01-06 23:35:40 +00:00
Paul Saab
d8b8e875a2 When upgrading the shared lock to an exclusive lock, if we discover
that the exclusive lock is already held, then we call panic. Don't
clobber internal lock state before panic'ing. This change improves
debugging if this case were to happen.

Submitted by:	Mohan Srinivasan mohans at yahoo-inc dot com
Reviewed by:	rwatson
2004-11-29 22:58:32 +00:00
Alexander Kabaev
4cef6d5a53 Reintroduce slightly modified patch from kern/69964. Check for
LK_HAVE_EXL in both acquire invocations.

MFC after:	5 days
2004-08-27 01:41:28 +00:00
Alexander Kabaev
cffdaf2dce Temporarily back out r1.74 as it seems to cause a number of regressions
according to numerous reports. It might get reintroduced later, once the
exact failure mode is better understood.
2004-08-23 02:39:45 +00:00
Alexander Kabaev
c8b876219f Upgrading a lock does not play well with acquiring an exclusive lock and
can lead to two threads being granted exclusive access. Check that no one
holds the same lock in exclusive mode before proceeding to acquire it.

The LK_WANT_EXCL and LK_WANT_UPGRADE bits act as mini-locks and can block
other threads.  Normally this is not a problem since the mini-locks are
upgraded to full locks and the release of the locks will unblock the other
threads.  However, if a thread resets the bits without obtaining a full lock,
the other threads are not awoken. Add the missing wakeups for these cases.

PR:		kern/69964
Submitted by:	Stephan Uphoff <ups at tree dot com>
Very good catch by: Stephan Uphoff <ups at tree dot com>
2004-08-16 15:01:22 +00:00
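A sketch of the wakeup side of the fix, using the lock fields named in the message; the helper is hypothetical.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/lockmgr.h>

/*
 * Sketch: when a thread gives up on LK_WANT_UPGRADE (or LK_WANT_EXCL)
 * without converting it into a full exclusive lock, any threads
 * sleeping on the lock must be woken explicitly or they sleep forever.
 */
static void
lockmgr_abort_upgrade(struct lock *lkp)
{
	lkp->lk_flags &= ~LK_WANT_UPGRADE;	/* drop the mini-lock */
	if (lkp->lk_waitcount != 0)
		wakeup(lkp);			/* don't strand the sleepers */
}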
Robert Watson
ff381670df Don't include a "\n" in KTR output, it confuses automatic parsing. 2004-07-23 20:12:56 +00:00
Tim J. Robbins
fa2a4d0595 Move TDF_DEADLKTREAT into td_pflags (and rename it accordingly) to avoid
having to acquire sched_lock when manipulating it in lockmgr(), uiomove(),
and uiomove_fromphys().

Reviewed by:	jhb
2004-06-03 01:47:37 +00:00
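The benefit is that td_pflags is only ever touched by curthread, so no lock is needed; a sketch of how the flag is set and cleared after the move (assuming the TDP_DEADLKTREAT name produced by the rename):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/proc.h>

/*
 * Sketch: private per-thread flags need no sched_lock, unlike
 * td_flags, because only the owning thread reads or writes them.
 */
static void
deadlock_treat_region(void)
{
	struct thread *td = curthread;

	td->td_pflags |= TDP_DEADLKTREAT;	/* no lock: only we touch td_pflags */
	/* ... code that wants deadlock-avoidance treatment in lockmgr ... */
	td->td_pflags &= ~TDP_DEADLKTREAT;
}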
Alexander Kabaev
c969c60c60 Add pid to the info printed in lockmgr_printinfo. This makes VFS
diagnostic messages slightly more useful.
2004-01-06 04:34:13 +00:00
Don Lewis
6ff1481d5c Rearrange the SYSINIT order to call lockmgr_init() earlier so that
the runtime lockmgr initialization code in lockinit() can be eliminated.

Reviewed by:	jhb
2003-07-16 01:00:39 +00:00
Don Lewis
857d9c60d0 Extend the mutex pool implementation to permit the creation and use of
multiple mutex pools with different options and sizes.  Mutex pools can
be created with either the default sleep mutexes or with spin mutexes.
A dynamically created mutex pool can now be destroyed if it is no longer
needed.

Create two pools by default: one that matches the existing pool, which
uses the MTX_NOWITNESS option and should be used for building higher-level
locks, and a new pool with witness checking enabled.

Modify the users of the existing mutex pool to use the appropriate pool
in the new implementation.

Reviewed by:	jhb
2003-07-13 01:22:21 +00:00
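A hedged usage sketch of the resulting pool KPI; treat the exact signatures as assumptions recalled from that era rather than a definitive reference.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static struct mtx_pool *example_pool;

static void
example_pool_init(void)
{
	/* 128 default (sleep) mutexes, witness-checked. */
	example_pool = mtx_pool_create("example pool", 128, MTX_DEF);
}

static void
example_pool_use(void *obj)
{
	struct mtx *m;

	m = mtx_pool_find(example_pool, obj);	/* stable obj -> mutex mapping */
	mtx_lock(m);
	/* ... short critical section protecting obj ... */
	mtx_unlock(m);
}

static void
example_pool_fini(void)
{
	mtx_pool_destroy(&example_pool);
}

mtx_pool_find() maps an object pointer to one of the pool's mutexes, which is why a short-lived, low-contention "leaf" lock never needs its own embedded mutex.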
David E. O'Brien
677b542ea2 Use __FBSDID(). 2003-06-11 00:56:59 +00:00
John Baldwin
c06394f53f Use the KTR_LOCK mask for logging events via KTR in lockmgr() rather
than KTR_LOCKMGR.  lockmgr locks are locks just like other locks.
2003-03-11 20:00:37 +00:00
John Baldwin
263067951a Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
to WITNESS_WARN().
2003-03-04 21:03:05 +00:00
Jeff Roberson
17661e5ac4 - Add an interlock argument to BUF_LOCK and BUF_TIMELOCK.
- Remove the buftimelock mutex and acquire the buf's interlock to protect
   these fields instead.
 - Hold the vnode interlock while locking bufs on the clean/dirty queues.
   This reduces some cases from one BUF_LOCK with a LK_NOWAIT and another
   BUF_LOCK with a LK_TIMEFAIL to a single lock.

Reviewed by:	arch, mckusick
2003-02-25 03:37:48 +00:00
Jeff Roberson
5e8feb5bed - Add a WITNESS_SLEEP() for the appropriate cases in lockmgr(). 2003-02-16 10:39:49 +00:00
Julian Elischer
822ded67fe The lock manager has to keep track of locks per thread, not per process.
Submitted by:	David Xu (davidxu@)
Reviewed by:	jhb@
2003-02-05 19:36:58 +00:00
Julian Elischer
6f8132a867 Revert davidxu's commit plus the fixes applied since.
I'm not convinced there is anything major wrong with the patch, but
them's the rules...

I am using my "David's mentor" hat to revert this as he's
offline for a while.
2003-02-01 12:17:09 +00:00
David Xu
0dbb100b9b Move the UPCALL-related data structures out of the KSE and introduce a new
data structure called kse_upcall to manage upcalls. All KSE binding
and loaning code is gone.

A thread that owns an upcall can collect all completed syscall contexts in
its ksegrp, turn itself into UPCALL mode, and take those contexts back
to userland. Any thread without an upcall structure has to export its
context and exit at the user boundary.

Any thread running in user mode owns an upcall structure. When it enters
the kernel, if the KSE mailbox's current thread pointer is not NULL, then
when the thread blocks in the kernel a new UPCALL thread is created and
the upcall structure is transferred to the new UPCALL thread. If the KSE
mailbox's current thread pointer is NULL, then no UPCALL thread is created
when a thread blocks in the kernel.

Each upcall always has an owner thread. Userland can remove an upcall by
calling kse_exit; when all upcalls in a ksegrp are removed, the group is
automatically shut down. An upcall owner thread also exits when the process
is in the exiting state; when an owner thread exits, the upcall it owns is
removed as well.

A KSE is a pure scheduler entity; it represents a virtual CPU. When a thread
is running, it always has a KSE associated with it. The scheduler is free to
assign a KSE to a thread according to thread priority; if a thread's priority
changes, its KSE can be moved to another thread.

When a ksegrp is created, N KSEs are created in the group, where N is the
number of physical CPUs in the current system. This makes it possible for
threads in the kernel to execute on different CPUs in parallel even if the
userland UTS is only single-CPU safe. Userland calls kse_create to add more
upcall structures to a ksegrp and so increase concurrency in userland; the
kernel is not restricted by the number of upcalls userland provides.

The code hasn't been tested under SMP by the author due to lack of hardware.

Reviewed by: julian
2003-01-26 11:41:35 +00:00
Kirk McKusick
c6964d3bc9 Remove a race condition / deadlock from snapshots. When
converting from individual vnode locks to the snapshot
lock, be sure to pass any waiting processes along to the
new lock as well. This transfer is done by a new function
in the lock manager, transferlockers(from_lock, to_lock).
Thanks to Lamont Granquist <lamont@scriptkiddie.org> for
his help in pounding on snapshots beyond all reason and
finding this deadlock.

Sponsored by:   DARPA & NAI Labs.
2002-11-30 19:00:51 +00:00
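In other words, processes sleeping on the per-vnode lock are handed over to the snapshot lock rather than being left behind; a minimal sketch of the call, with the wrapper name being hypothetical.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/lockmgr.h>

/*
 * Sketch: when a vnode's private lock is being replaced by the shared
 * snapshot lock, migrate any waiters already asleep on the old lock so
 * they contend for the new one instead of sleeping forever.
 */
static void
snapshot_adopt_waiters(struct lock *vnode_lock, struct lock *snap_lock)
{
	transferlockers(vnode_lock, snap_lock);
}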
Kirk McKusick
3a096f6c09 Have lockinit() initialize the debugging fields of a lock
when DEBUG_LOCKS is defined.

Sponsored by:	DARPA & NAI Labs.
2002-10-18 01:34:10 +00:00
Bruce Evans
8302d183f3 Include <sys/lockmgr.h> for the definitions of the locking interfaces that
are implemented here instead of depending on namespace pollution in
<sys/lock.h>.  Fixed nearby include messes (1 disordered include and 1
unused include).
2002-08-27 09:59:47 +00:00
Philippe Charnier
93b0017f88 Replace various spellings with FALLTHROUGH, which is lint()able 2002-08-25 13:23:09 +00:00
Jeff Roberson
7181624aaa Record the file, line, and pid of the last successful shared lock holder. This
is useful as a last effort in debugging file system deadlocks.  This is enabled
via 'options DEBUG_LOCKS'
2002-05-30 05:55:22 +00:00
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
Eivind Eklund
04858e7ee4 Change wmesg to const char * instead of char * 2002-03-05 17:45:12 +00:00
Matthew Dillon
23b590188f Fix a BUF_TIMELOCK race against BUF_LOCK and fix a deadlock in vget()
against VM_WAIT in the pageout code.  Both fixes involve adjusting
the lockmgr's timeout capability so locks obtained with timeouts do not
interfere with locks obtained without a timeout.

Hopefully MFC: before the 4.5 release
2001-12-20 22:42:27 +00:00
Matthew Dillon
f286003909 Create a mutex pool API for short term leaf mutexes.
Replace the manual mutex pool in kern_lock.c (lockmgr locks) with the new API.
Replace the mutexes embedded in sxlocks with the new API.
2001-11-13 21:55:13 +00:00
John Baldwin
61d80e90a9 Add missing includes of sys/ktr.h. 2001-10-11 17:53:43 +00:00
John Baldwin
6a40eccec3 Malloc'd mutexes that are not pre-zeroed may contain random garbage (including
0xdeadcode) that may trigger the check to make sure we don't initialize a mutex twice.
2001-10-10 20:43:50 +00:00
John Baldwin
bce9841972 Fix locking on td_flags for TDF_DEADLKTREAT. If the comments in the code
are true that curthread can change during this function, then this flag
needs to become a KSE flag, not a thread flag.
2001-09-13 22:33:37 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
Make the kernel aware that there are smaller units of scheduling than the
process (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doosie!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
John Baldwin
3f085c228e If we've panic'd already, then just bail in lockmgr rather than blocking or
possibly panic'ing again.
2001-08-10 23:29:15 +00:00
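A sketch of the early return; the surrounding prototype is approximated from the later thread-based lockmgr() and is not the committed code.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/lockmgr.h>
#include <sys/mutex.h>
#include <sys/proc.h>

/* Sketch: once panicstr is set, pretend every lock request succeeds. */
static int
lockmgr_sketch(struct lock *lkp, u_int flags, struct mtx *interlkp,
    struct thread *td)
{
	if (panicstr != NULL)
		return (0);	/* don't block or re-panic during a panic */
	/* ... normal lockmgr processing would follow ... */
	return (0);
}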
Alfred Perlstein
6157b69f4a Instead of asserting that a mutex is not still locked after unlocking it,
assert that the mutex is owned and not recursed prior to unlocking it.

This should give a clearer diagnostic when a programming error is caught.
2001-04-28 12:11:01 +00:00
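A sketch of the effect on an arbitrary mutex release; the wrapper exists only for illustration.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/*
 * Sketch: assert correct ownership state *before* releasing, so a
 * programming error is reported at the offending unlock rather than
 * as a vaguer "still locked afterwards" post-condition.
 */
static void
release_checked(struct mtx *m)
{
	mtx_assert(m, MA_OWNED | MA_NOTRECURSED);
	mtx_unlock(m);
}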
Alfred Perlstein
98689e1e70 Assert that when using an interlock mutex it is not recursed when lockmgr()
is called.

Ok'd by: jhb
2001-04-20 22:38:40 +00:00