Cap the priority calculated from the current thread's running tick count
at SCHED_PRI_RANGE to prevent overflows in the priority value. This can
happen due to irregularities with clock interrupts under certain
virtualization environments.

Tested by:	Larry Rosenman ler lerctr org
MFC after:	2 weeks
commit 0c0d27d5dd (parent 9de96e891c)
@@ -1434,7 +1434,8 @@ sched_priority(struct thread *td)
 	} else {
 		pri = SCHED_PRI_MIN;
 		if (td->td_sched->ts_ticks)
-			pri += SCHED_PRI_TICKS(td->td_sched);
+			pri += min(SCHED_PRI_TICKS(td->td_sched),
+			    SCHED_PRI_RANGE);
 		pri += SCHED_PRI_NICE(td->td_proc->p_nice);
 		KASSERT(pri >= PRI_MIN_BATCH && pri <= PRI_MAX_BATCH,
 		    ("sched_priority: invalid priority %d: nice %d, "
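For context, the sketch below illustrates the clamping pattern the diff introduces. It is a minimal standalone C program, not FreeBSD kernel code: the constant values and the ticks_to_pri() helper are assumed stand-ins for the kernel's SCHED_PRI_* definitions and the SCHED_PRI_TICKS() macro. It only shows how a tick count inflated by irregular clock interrupts can push the computed priority past the top of the batch band unless the tick-derived contribution is capped at the band's range.

/*
 * Illustrative sketch of the cap added by this commit. All constants
 * below are assumed demonstration values, not the kernel's real
 * definitions.
 */
#include <stdio.h>

#define	SCHED_PRI_MIN	100	/* assumed base of the batch band */
#define	SCHED_PRI_RANGE	36	/* assumed width of the band */
#define	PRI_MAX_BATCH	(SCHED_PRI_MIN + SCHED_PRI_RANGE)

static int
imin(int a, int b)
{

	return (a < b ? a : b);
}

/*
 * Stand-in for SCHED_PRI_TICKS(): scale the accumulated run ticks into
 * the priority band.  If "ticks" exceeds "window" (e.g. after a burst
 * of clock interrupts on a virtualized host), the result exceeds the
 * band's width.
 */
static int
ticks_to_pri(int ticks, int window)
{

	return (ticks * SCHED_PRI_RANGE / window);
}

int
main(void)
{
	int window = 1000;	/* ticks meant to map onto the full band */
	int ticks = 5000;	/* inflated tick count */
	int uncapped, capped;

	uncapped = SCHED_PRI_MIN + ticks_to_pri(ticks, window);
	capped = SCHED_PRI_MIN + imin(ticks_to_pri(ticks, window),
	    SCHED_PRI_RANGE);

	printf("uncapped pri %d, capped pri %d, PRI_MAX_BATCH %d\n",
	    uncapped, capped, PRI_MAX_BATCH);
	return (0);
}

In the kernel the same effect comes from wrapping SCHED_PRI_TICKS() in min(), so the bounds checked by the KASSERT above continue to hold even when ts_ticks is inflated.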