sched: minor fast-path overhead reduction
author Mike Galbraith <efault@gmx.de>
Fri, 17 Oct 2008 13:33:21 +0000 (15:33 +0200)
committer Ingo Molnar <mingo@elte.hu>
Fri, 17 Oct 2008 13:36:58 +0000 (15:36 +0200)
Greetings,

103638d added a bit of avoidable overhead to the fast-path.

Use sysctl_sched_min_granularity, a constant read, instead of calling sched_slice(), which recomputes the entity's fair slice from runqueue weights on every pick, to restrict buddy wakeups.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/sched_fair.c

index 18fd17172eb66bb567ca4bcc47ca6c0cea923462..67084936b6029037448aba1bdacefaf6b381d99c 100644 (file)
@@ -747,7 +747,7 @@ pick_next(struct cfs_rq *cfs_rq, struct sched_entity *se)
        struct rq *rq = rq_of(cfs_rq);
        u64 pair_slice = rq->clock - cfs_rq->pair_start;
 
-       if (!cfs_rq->next || pair_slice > sched_slice(cfs_rq, cfs_rq->next)) {
+       if (!cfs_rq->next || pair_slice > sysctl_sched_min_granularity) {
                cfs_rq->pair_start = rq->clock;
                return se;
        }