callout_cpu_switch() allows preemption when dropping the outgoing
callout cpu lock (and after having dropped it).  If the newly scheduled
thread wants to acquire the old queue, it will just spin forever.

Fix this by disabling preemption and interrupts entirely (because fast
interrupt handlers may run into the same problem too) while switching
locks.

Reported by:	hrs, Mike Tancsa <mike AT sentex DOT net>,
		Chip Camden <sterling AT camdensoftware DOT com>
Tested by:	hrs, Mike Tancsa <mike AT sentex DOT net>,
		Chip Camden <sterling AT camdensoftware DOT com>,
		Nicholas Esborn <nick AT desert DOT net>
Approved by:	re (kib)
MFC after:	10 days
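For context, a minimal sketch of how callout_cpu_switch() reads with this change applied, reconstructed from the hunk below; the return type and the local declaration of new_cc do not appear in the diff and are assumptions.

/*
 * Sketch only: reconstructed from the hunk below.  The return type and
 * the declaration of new_cc are not visible in the diff and are assumed.
 */
static struct callout_cpu *
callout_cpu_switch(struct callout *c, struct callout_cpu *cc, int new_cpu)
{
	struct callout_cpu *new_cc;	/* assumed local; implied by return (new_cc) */

	MPASS(c != NULL && cc != NULL);
	CC_LOCK_ASSERT(cc);

	c->c_cpu = CPUBLOCK;		/* the callout's cpu is blocked while the switch is in flight */
	spinlock_enter();		/* added: disable preemption and interrupts */
	CC_UNLOCK(cc);			/* drop the outgoing callout cpu lock */
	new_cc = CC_CPU(new_cpu);
	CC_LOCK(new_cc);		/* acquire the new callout cpu lock */
	spinlock_exit();		/* added: re-enable preemption and interrupts */
	c->c_cpu = new_cpu;
	return (new_cc);
}

Without the spinlock_enter()/spinlock_exit() pair, a thread scheduled on this CPU between CC_UNLOCK() and the final assignment of c->c_cpu and wanting the old queue would spin forever, as described above.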
This commit is contained in:
parent de67b4966c
commit e75baa2802
@@ -269,10 +269,17 @@ callout_cpu_switch(struct callout *c, struct callout_cpu *cc, int new_cpu)
 	MPASS(c != NULL && cc != NULL);
 	CC_LOCK_ASSERT(cc);
 
+	/*
+	 * Avoid interrupts and preemption firing after the callout cpu
+	 * is blocked in order to avoid deadlocks as the new thread
+	 * may be willing to acquire the callout cpu lock.
+	 */
 	c->c_cpu = CPUBLOCK;
+	spinlock_enter();
 	CC_UNLOCK(cc);
 	new_cc = CC_CPU(new_cpu);
 	CC_LOCK(new_cc);
+	spinlock_exit();
 	c->c_cpu = new_cpu;
 	return (new_cc);
 }