sched: Move up affinity check to mitigate useless redoing overhead
author		Joonsoo Kim <iamjoonsoo.kim@lge.com>
		Tue, 23 Apr 2013 08:27:40 +0000 (17:27 +0900)
committer	Ingo Molnar <mingo@kernel.org>
		Wed, 24 Apr 2013 06:52:44 +0000 (08:52 +0200)
commit		d31980846f9688db3ee3e5863525c6ff8ace4c7c
tree		92ec80802dd11061e5b1515a0bf41d9b9be22cb1
parent		cfc03118047172f5bdc58d63c607d16d33ce5305
sched: Move up affinity check to mitigate useless redoing overhead

Currently, LBF_ALL_PINNED is cleared only after the affinity check
has passed. So, if task migration is skipped in move_tasks() because
of a small load value or a small imbalance value, we never clear
LBF_ALL_PINNED and end up triggering 'redo' in load_balance().
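For reference, a minimal sketch of the problematic ordering, loosely
based on the pre-patch move_tasks() loop in kernel/sched/fair.c (loop
bookkeeping trimmed, so this is an illustration rather than the exact
code):

	while (!list_empty(tasks)) {
		p = list_first_entry(tasks, struct task_struct, se.group_node);

		load = task_h_load(p);

		/* A tiny load or a tiny imbalance skips the task here... */
		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
			goto next;

		if ((load / 2) > env->imbalance)
			goto next;

		/*
		 * ...so can_migrate_task(), whose affinity check is what
		 * clears LBF_ALL_PINNED, is never reached for this task.
		 */
		if (!can_migrate_task(p, env))
			goto next;

		move_task(p, env);
		/* ... */
	}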

The imbalance value is often so small that no task can be moved to
another cpu and, of course, this situation may persist after we
change the target cpu. So this patch moves the affinity check up and
clears LBF_ALL_PINNED before evaluating the load value, in order to
mitigate the useless redo overhead, as sketched below.
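A sketch of the reordered loop, under the same assumptions as the
sketch above (simplified; the relative order of the checks is the
point):

	while (!list_empty(tasks)) {
		p = list_first_entry(tasks, struct task_struct, se.group_node);

		/*
		 * Run the affinity (and other) checks first, so that
		 * LBF_ALL_PINNED is cleared even when the load checks
		 * below decide to skip this task.
		 */
		if (!can_migrate_task(p, env))
			goto next;

		load = task_h_load(p);

		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
			goto next;

		if ((load / 2) > env->imbalance)
			goto next;

		move_task(p, env);
		/* ... */
	}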

In addition, correct the order of some comments.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Jason Low <jason.low2@hp.com>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1366705662-3587-5-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c