sched/fair: Propagate asynchronous detach
author     Vincent Guittot <vincent.guittot@linaro.org>
           Tue, 8 Nov 2016 09:53:46 +0000 (10:53 +0100)
committer  Ingo Molnar <mingo@kernel.org>
           Wed, 16 Nov 2016 09:29:10 +0000 (10:29 +0100)
A task can be asynchronously detached from a cfs_rq when it migrates
between CPUs. The load of the migrated task is then removed from the
source cfs_rq during that cfs_rq's next update. We use this event to
set the propagation flag.
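
For reference, set_tg_cfs_propagate() was added by the earlier patch in
this series ("sched/fair: Propagate load during synchronous
attach/detach"). A minimal sketch of that helper, assuming the
propagate_avg field introduced there:

  /*
   * Only mark the cfs_rq; the actual propagation is deferred to the
   * next update_load_avg() of the corresponding group entity.
   */
  static inline void set_tg_cfs_propagate(struct cfs_rq *cfs_rq)
  {
          cfs_rq->propagate_avg = 1;
  }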

During load balancing, we take advantage of the update of blocked load
to propagate any pending changes up the hierarchy.
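
On the consumer side, update_load_avg() on a group entity tests and
clears the flag and folds the child cfs_rq's delta into its parent. A
simplified sketch of that path, following the shape of the helpers from
the earlier propagation patch (locking and the update_tg_cfs_*()
internals elided):

  static inline int propagate_entity_load_avg(struct sched_entity *se)
  {
          struct cfs_rq *cfs_rq;

          /* Tasks carry no child cfs_rq to propagate from. */
          if (entity_is_task(se))
                  return 0;

          cfs_rq = group_cfs_rq(se);              /* child cfs_rq */
          if (!test_and_clear_tg_cfs_propagate(cfs_rq))
                  return 0;

          cfs_rq = cfs_rq_of(se);                 /* parent cfs_rq */
          set_tg_cfs_propagate(cfs_rq);           /* keep rippling up */

          update_tg_cfs_util(cfs_rq, se);
          update_tg_cfs_load(cfs_rq, se);

          return 1;
  }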

The propagation relies on patch:

  "sched: Fix hierarchical order in rq->leaf_cfs_rq_list"

... which orders children and parents, to ensure that it's done in one pass.
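
To see why that ordering matters, here is a toy, self-contained model
(plain C with hypothetical names, not kernel code): with children listed
before parents, a single pass over the list carries a pending delta all
the way to the root, whereas the opposite order would need one pass per
level of the hierarchy.

  #include <stdio.h>

  /* Toy stand-in for a cfs_rq on the leaf list. */
  struct node {
          struct node *parent;
          long pending;   /* delta flagged for propagation, cf. propagate_avg */
  };

  static void update(struct node *n)
  {
          if (n->pending && n->parent) {
                  n->parent->pending += n->pending;   /* ripple one level up */
                  n->pending = 0;
          }
  }

  int main(void)
  {
          struct node root = { 0 };
          struct node mid  = { .parent = &root };
          struct node leaf = { .parent = &mid  };
          /* Child-before-parent order, as the prerequisite patch guarantees: */
          struct node *list[] = { &leaf, &mid, &root };

          leaf.pending = 1024;    /* e.g. a task detached asynchronously */

          for (int i = 0; i < 3; i++)     /* one pass suffices */
                  update(list[i]);

          printf("delta at root after one pass: %ld\n", root.pending);
          return 0;
  }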

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-6-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8cf26fd7ce586ea5c57fcf65eead4d1ea13e8793..090a9bb51ab2a4a63ed601e718e563d43aac41d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3219,6 +3219,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
                sub_positive(&sa->load_avg, r);
                sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);
                removed_load = 1;
+               set_tg_cfs_propagate(cfs_rq);
        }
 
        if (atomic_long_read(&cfs_rq->removed_util_avg)) {
@@ -3226,6 +3227,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
                sub_positive(&sa->util_avg, r);
                sub_positive(&sa->util_sum, r * LOAD_AVG_MAX);
                removed_util = 1;
+               set_tg_cfs_propagate(cfs_rq);
        }
 
        decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
@@ -6872,6 +6874,10 @@ static void update_blocked_averages(int cpu)
 
                if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, true))
                        update_tg_load_avg(cfs_rq, 0);
+
+               /* Propagate pending load changes to the parent */
+               if (cfs_rq->tg->se[cpu])
+                       update_load_avg(cfs_rq->tg->se[cpu], 0);
        }
        raw_spin_unlock_irqrestore(&rq->lock, flags);
 }