When the number of dirty buffers rises too high, the buf_daemon runs
to help clean up. After buf_daemon selects a potential buffer to
write, this patch has it acquire a lock on the vnode that owns the
buffer before trying to write it. The vnode lock is necessary to
avoid a race with some other process that holds the vnode locked
while trying to flush its dirty buffers. In particular, if the vnode
in question is a snapshot file, the race can lead to a deadlock. To
avoid slowing down the buf_daemon, the lock request is non-blocking;
if buf_daemon fails to get the lock, it skips over the buffer and
continues down its queue looking for buffers to flush.

Sponsored by:	DARPA & NAI Labs.
Kirk McKusick 2002-10-18 01:29:59 +00:00
parent ef6c0bb296
commit bc7bdd50c1
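
For illustration, a minimal userland sketch of the pattern the message
describes, using POSIX threads in place of vnode locks. Everything in it
(struct fakebuf, vnode_a, vnode_b, owner(), flusher()) is an invented
stand-in and not code from the FreeBSD tree: one thread plays a process
that holds a vnode locked while flushing it, the other plays buf_daemon,
making a non-blocking lock request for each buffer's vnode and skipping
the buffer when the lock is busy instead of sleeping on it.

/*
 * Illustrative stand-in only: "vnodes" are plain mutexes and "buffers"
 * are array entries.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NBUF	4

struct fakebuf {
	int		 id;
	int		 dirty;
	pthread_mutex_t	*vnode_lock;	/* lock of the owning "vnode" */
};

static pthread_mutex_t vnode_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vnode_b = PTHREAD_MUTEX_INITIALIZER;

static struct fakebuf bufs[NBUF] = {
	{ 0, 1, &vnode_a },
	{ 1, 1, &vnode_b },
	{ 2, 1, &vnode_a },
	{ 3, 1, &vnode_b },
};

/* A process that holds "vnode A" locked while flushing its own buffers. */
static void *
owner(void *arg)
{

	(void)arg;
	pthread_mutex_lock(&vnode_a);
	printf("owner: holding vnode A while flushing\n");
	sleep(2);			/* pretend the flush takes a while */
	pthread_mutex_unlock(&vnode_a);
	return (NULL);
}

/* The buf_daemon analogue: write what can be locked, skip what cannot. */
static void *
flusher(void *arg)
{
	int i, flushed = 0;

	(void)arg;
	sleep(1);			/* let owner() take vnode A first */
	for (i = 0; i < NBUF; i++) {
		if (!bufs[i].dirty)
			continue;
		/* Non-blocking lock request, as with LK_NOWAIT above. */
		if (pthread_mutex_trylock(bufs[i].vnode_lock) != 0) {
			printf("flusher: buf %d vnode busy, skipping\n",
			    bufs[i].id);
			continue;
		}
		printf("flusher: wrote buf %d\n", bufs[i].id);
		bufs[i].dirty = 0;
		flushed++;
		pthread_mutex_unlock(bufs[i].vnode_lock);
	}
	printf("flusher: flushed %d buffer(s)\n", flushed);
	return (NULL);
}

int
main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, owner, NULL);
	pthread_create(&t2, NULL, flusher, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return (0);
}

With owner() still holding vnode A when the scan runs, the flusher
should report the two vnode_b buffers written and the two vnode_a
buffers skipped rather than blocking on them.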

@@ -2042,6 +2042,8 @@ buf_daemon()
 static int
 flushbufqueues(void)
 {
+	struct thread *td = curthread;
+	struct vnode *vp;
 	struct buf *bp;
 	int r = 0;
 
@@ -2070,9 +2072,21 @@ flushbufqueues(void)
 				bp = TAILQ_FIRST(&bufqueues[QUEUE_DIRTY]);
 				continue;
 			}
-			vfs_bio_awrite(bp);
-			++r;
-			break;
+			/*
+			 * We must hold the lock on a vnode before writing
+			 * one of its buffers. Otherwise we may confuse, or
+			 * in the case of a snapshot vnode, deadlock the
+			 * system. Rather than blocking waiting for the
+			 * vnode, we just push on to the next buffer.
+			 */
+			if ((vp = bp->b_vp) == NULL ||
+			    vn_lock(vp, LK_EXCLUSIVE | LK_NOWAIT, td) == 0) {
+				vfs_bio_awrite(bp);
+				++r;
+				if (vp != NULL)
+					VOP_UNLOCK(vp, 0, td);
+				break;
+			}
 		}
 		bp = TAILQ_NEXT(bp, b_freelist);
 	}
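
In the new check, a return value of 0 from vn_lock() means the lock was
obtained, so the write goes ahead only when the buffer has no owning
vnode or when the LK_EXCLUSIVE | LK_NOWAIT request succeeds without
sleeping; the vnode is unlocked again as soon as vfs_bio_awrite() has
started the asynchronous write, and a busy vnode simply leaves the
buffer in place while the scan moves on.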