The paradigm of a callout is that it cycles through three consecutive
states:

not scheduled -> scheduled -> running -> not scheduled. The API and the
manual page assume that, some comments in the code assume that, and it
looks like some contributors to the code assumed it as well. The problem
is that this paradigm isn't true: a callout can be scheduled and running
at the same time, which makes the API description ambiguous. In such a
case the callout_stop() family of functions/macros would have to return
1 and 0 at the same time, since it successfully unscheduled the future
callout but the current one is still running.
Before this change we returned 1 in such a case, with the exception that
if the running callout was migrating we returned 0, unless CS_MIGRBLOCK
was specified.

With this change, we now return 0 when a future callout was unscheduled
but another one is still executing, indicating to API users that the
resources are not yet safe to be freed.
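
To illustrate the new contract, here is an editor's sketch (not part of
the commit): a hypothetical driver with a self-rescheduling callout must
treat a zero return from callout_stop() as "a handler may still be
executing" and wait in callout_drain() before freeing. The foo_softc
structure, foo_detach() function and M_FOO malloc type are invented for
the example; callout_stop(), callout_drain() and free() are the real
kernel APIs.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/callout.h>

/* Hypothetical per-device state; not from the commit. */
struct foo_softc {
	struct callout	fs_callout;	/* periodically rescheduled */
	/* ... other resources freed below ... */
};

static MALLOC_DEFINE(M_FOO, "foo", "foo example softc");

static void
foo_detach(struct foo_softc *sc)
{
	/*
	 * A return of 0 now also covers "a future invocation was
	 * unscheduled, but the current one is still running", so the
	 * softc must not be freed until callout_drain() has waited
	 * for the running handler to finish.
	 */
	if (callout_stop(&sc->fs_callout) == 0)
		callout_drain(&sc->fs_callout);
	free(sc, M_FOO);
}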

However, the sleepqueue code relies on getting a return code of 1 in that
case, and the pre-existing CS_MIGRBLOCK flag already covered one of the
edge cases. The new return path now honors this flag as well, to keep the
sleepqueue code safe.

Since the CS_MIGRBLOCK flag doesn't actually block migration and is no
longer limited to the migration edge case, rename it to CS_EXECUTING.
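
As a summary, the return value after this change can be modeled by the
following editor's sketch: callout_stop_retval() and its 'pending' and
'running' arguments are invented abstractions of the callout state (the
migration subtleties are ignored), and only the CS_EXECUTING value is
taken from sys/sys/callout.h.

#include <stdbool.h>

#define	CS_EXECUTING	0x0002	/* from sys/sys/callout.h, after rename */

static int
callout_stop_retval(bool pending, bool running, int flags)
{
	if (running) {
		/*
		 * Return 0 even if a pending future run was also
		 * unscheduled, unless CS_EXECUTING asks for the
		 * legacy 1 that the sleepqueue code depends on.
		 */
		return ((flags & CS_EXECUTING) != 0);
	}
	if (pending)
		return (1);	/* unscheduled before it could run */
	return (-1);		/* never scheduled or already completed */
}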

This change fixes panics on a highly loaded TCP server.

Reviewed by:	jch, hselasky, rrs, kib
Approved by:	re (gjb)
Differential Revision:	https://reviews.freebsd.org/D7042
Gleb Smirnoff, 2016-07-05 18:47:17 +00:00
commit d153eeee97 (parent a0d20ecb94)
4 changed files with 27 additions and 30 deletions

share/man/man9/timeout.9

@@ -29,7 +29,7 @@
 .\"
 .\" $FreeBSD$
 .\"
-.Dd September 14, 2015
+.Dd July 4, 2016
 .Dt TIMEOUT 9
 .Os
 .Sh NAME
@@ -247,6 +247,10 @@ has already been serviced, then
 negative one is returned.
 If the callout is currently being serviced and cannot be stopped,
 then zero will be returned.
+If the callout is currently being serviced and cannot be stopped, and at the
+same time a next invocation of the same callout is also scheduled, then
+.Fn callout_stop
+unschedules the next run and returns zero.
 If the callout has an associated lock,
 then that lock must be held when this function is called.
 .Pp
@@ -814,7 +818,7 @@ and
 .Fn callout_drain
 functions return a value of one if the callout was still pending when it was
 called, a zero if the callout could not be stopped and a negative one if it
-was either not running or haas already completed.
+was either not running or has already completed.
 The
 .Fn timeout
 function returns a

sys/kern/kern_timeout.c

@@ -1166,7 +1166,7 @@ _callout_stop_safe(struct callout *c, int flags, void (*drain)(void *))
 	struct callout_cpu *cc, *old_cc;
 	struct lock_class *class;
 	int direct, sq_locked, use_lock;
-	int not_on_a_list;
+	int cancelled, not_on_a_list;
 
 	if ((flags & CS_DRAIN) != 0)
 		WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, c->c_lock,
@@ -1236,28 +1236,14 @@ _callout_stop_safe(struct callout *c, int flags, void (*drain)(void *))
 	}
 
 	/*
-	 * If the callout isn't pending, it's not on the queue, so
-	 * don't attempt to remove it from the queue.  We can try to
-	 * stop it by other means however.
+	 * If the callout is running, try to stop it or drain it.
 	 */
-	if (!(c->c_iflags & CALLOUT_PENDING)) {
+	if (cc_exec_curr(cc, direct) == c) {
 		/*
-		 * If it wasn't on the queue and it isn't the current
-		 * callout, then we can't stop it, so just bail.
-		 * It probably has already been run (if locking
-		 * is properly done).  You could get here if the caller
-		 * calls stop twice in a row for example.  The second
-		 * call would fall here without CALLOUT_ACTIVE set.
+		 * Whether we succeed in stopping it or not, we must
+		 * clear the active flag - this is what API users expect.
 		 */
 		c->c_flags &= ~CALLOUT_ACTIVE;
-		if (cc_exec_curr(cc, direct) != c) {
-			CTR3(KTR_CALLOUT, "failed to stop %p func %p arg %p",
-			    c, c->c_func, c->c_arg);
-			CC_UNLOCK(cc);
-			if (sq_locked)
-				sleepq_release(&cc_exec_waiting(cc, direct));
-			return (-1);
-		}
 
 		if ((flags & CS_DRAIN) != 0) {
			/*
@@ -1376,20 +1362,28 @@ _callout_stop_safe(struct callout *c, int flags, void (*drain)(void *))
 				cc_exec_drain(cc, direct) = drain;
 			}
 			CC_UNLOCK(cc);
-			return ((flags & CS_MIGRBLOCK) != 0);
+			return ((flags & CS_EXECUTING) != 0);
 		}
 		CTR3(KTR_CALLOUT, "failed to stop %p func %p arg %p",
 		    c, c->c_func, c->c_arg);
 		if (drain) {
 			cc_exec_drain(cc, direct) = drain;
 		}
 		CC_UNLOCK(cc);
 		KASSERT(!sq_locked, ("sleepqueue chain still locked"));
-		return (0);
-	}
-
+		cancelled = ((flags & CS_EXECUTING) != 0);
+	} else
+		cancelled = 1;
+
 	if (sq_locked)
 		sleepq_release(&cc_exec_waiting(cc, direct));
 
+	if ((c->c_iflags & CALLOUT_PENDING) == 0) {
+		CTR3(KTR_CALLOUT, "failed to stop %p func %p arg %p",
+		    c, c->c_func, c->c_arg);
+		CC_UNLOCK(cc);
+		return (cancelled);
+	}
+
 	c->c_iflags &= ~CALLOUT_PENDING;
 	c->c_flags &= ~CALLOUT_ACTIVE;
@@ -1406,7 +1400,7 @@ _callout_stop_safe(struct callout *c, int flags, void (*drain)(void *))
 	}
 	callout_cc_del(c, cc);
 	CC_UNLOCK(cc);
-	return (1);
+	return (cancelled);
 }
 
 void

sys/kern/subr_sleepqueue.c

@@ -600,7 +600,7 @@ sleepq_check_timeout(void)
 	 * another CPU, so synchronize with it to avoid having it
 	 * accidentally wake up a subsequent sleep.
 	 */
-	else if (_callout_stop_safe(&td->td_slpcallout, CS_MIGRBLOCK, NULL)
+	else if (_callout_stop_safe(&td->td_slpcallout, CS_EXECUTING, NULL)
 	    == 0) {
 		td->td_flags |= TDF_TIMEOUT;
 		TD_SET_SLEEPING(td);

sys/sys/callout.h

@@ -64,9 +64,8 @@ struct callout_handle {
 
 /* Flags for callout_stop_safe() */
 #define	CS_DRAIN	0x0001		/* callout_drain(), wait allowed */
-#define	CS_MIGRBLOCK	0x0002		/* Block migration, return value
-					   indicates that the callout was
-					   executing */
+#define	CS_EXECUTING	0x0002		/* Positive return value indicates that
+					   the callout was executing */
 
 #ifdef _KERNEL
 /*