sched/rt: Remove redundant nr_cpus_allowed test
author Shawn Bohrer <sbohrer@rgmadvisors.com>
Fri, 4 Oct 2013 19:24:53 +0000 (14:24 -0500)
committer Ingo Molnar <mingo@kernel.org>
Sun, 6 Oct 2013 09:28:40 +0000 (11:28 +0200)
In 76854c7e8f3f4172fef091e78d88b3b751463ac6 ("sched: Use
rt.nr_cpus_allowed to recover select_task_rq() cycles") an
optimization was added to select_task_rq_rt() that returns
immediately at the beginning of the function when
p->nr_cpus_allowed == 1.
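
For reference, the shape of that early return (a simplified,
abridged sketch of select_task_rq_rt(), not the verbatim kernel
source) is:

	static int
	select_task_rq_rt(struct task_struct *p, int sd_flag, int flags)
	{
		int cpu = task_cpu(p);

		/* Task is pinned to a single CPU: nothing to select. */
		if (p->nr_cpus_allowed == 1)
			goto out;

		/*
		 * ... wakeup path: if rq->curr is an RT task that would be
		 * hard to push elsewhere, try find_lowest_rq(p) instead ...
		 */

	out:
		return cpu;
	}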

Since any task that gets past that early return necessarily has
p->nr_cpus_allowed > 1, the later p->nr_cpus_allowed > 1 check is
redundant and can now be removed.

Signed-off-by: Shawn Bohrer <sbohrer@rgmadvisors.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Mike Galbraith <mgalbraith@suse.de>
Cc: tomk@rgmadvisors.com
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1380914693-24634-1-git-send-email-shawn.bohrer@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/rt.c

index 01970c8e64df64def4585bf3bd517c3bdb8a9354..ceebfba0a1dd52272f73fc6ffa4eb118062a1c4d 100644
@@ -1213,8 +1213,7 @@ select_task_rq_rt(struct task_struct *p, int sd_flag, int flags)
         */
        if (curr && unlikely(rt_task(curr)) &&
            (curr->nr_cpus_allowed < 2 ||
-            curr->prio <= p->prio) &&
-           (p->nr_cpus_allowed > 1)) {
+            curr->prio <= p->prio)) {
                int target = find_lowest_rq(p);
 
                if (target != -1)