sched/fair: Fix the negative lag increase fix

Vincent reported that my rework of his original patch lost a little
something.

Specifically, it got the return value wrong: it should not compare
against the old se->vlag, but against the current value, since what
matters is whether the effective vruntime of the entity changed and it
therefore needs repositioning in the tree or not.

Fixes: 059258b0d4 ("sched/fair: Prevent negative lag increase during delayed dequeue")
Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://patch.msgid.link/20260423094107.GT3102624%40noisy.programming.kicks-ass.net
Author: Peter Zijlstra
Date:   2026-04-23 13:22:22 +02:00
parent 254f49634e
commit c5cd6fd75b


@@ -847,13 +847,19 @@ static s64 entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se, u64 avruntime)
  * Similarly, check that the entity didn't gain positive lag when DELAY_ZERO
  * is set.
  *
- * Return true if the lag has been adjusted.
+ * Return true if the vlag has been modified. Specifically:
+ *
+ *   se->vlag != avg_vruntime() - se->vruntime
+ *
+ * This can be due to clamping in entity_lag() or clamping due to
+ * sched_delayed. Either way, when vlag is modified and the entity is
+ * retained, the tree needs to be adjusted.
  */
 static __always_inline
 bool update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	s64 vlag = entity_lag(cfs_rq, se, avg_vruntime(cfs_rq));
-	bool ret;
+	u64 avruntime = avg_vruntime(cfs_rq);
+	s64 vlag = entity_lag(cfs_rq, se, avruntime);
 
 	WARN_ON_ONCE(!se->on_rq);
@@ -863,10 +869,9 @@ bool update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (sched_feat(DELAY_ZERO))
 		vlag = min(vlag, 0);
 	}
 
-	ret = (vlag == se->vlag);
 	se->vlag = vlag;
-	return ret;
+	return avruntime - vlag != se->vruntime;
 }
 
 /*