sched: move_task_off_dead_cpu(): Remove retry logic
author     Oleg Nesterov <oleg@redhat.com>
           Mon, 15 Mar 2010 09:10:14 +0000 (10:10 +0100)
committer  Ingo Molnar <mingo@elte.hu>
           Fri, 2 Apr 2010 18:12:02 +0000 (20:12 +0200)
The previous patch preserved the retry logic, but it looks unneeded.

__migrate_task() can only fail if we raced with migration after we dropped
the lock; in that case the caller of set_cpus_allowed()/etc must itself
initiate the migration if ->on_rq == T.

We have already fixed p->cpus_allowed, and the changes to the active/online
masks must be visible to the racer, so it should migrate the task to an
online CPU correctly.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091014.GA9138@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
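
For reference, the set_cpus_allowed_ptr() path is the "racer" the message
refers to. The sketch below is abridged and paraphrased from the 2.6.33-era
kernel/sched.c, not quoted verbatim; validity checks and the sched_class
hook for updating cpus_allowed are elided, so treat the exact bodies as
assumptions. It illustrates the point above: when the task is on a runqueue
(->on_rq == T), the racer itself queues a migration request and waits for
the migration thread, so move_task_off_dead_cpu() need not retry a failed
__migrate_task().

    int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
    {
            struct migration_req req;
            unsigned long flags;
            struct rq *rq;

            rq = task_rq_lock(p, &flags);
            /* Validity checks against cpu_active_mask elided in this sketch. */
            cpumask_copy(&p->cpus_allowed, new_mask);

            if (cpumask_test_cpu(task_cpu(p), new_mask))
                    goto out;

            if (migrate_task(p, cpumask_any_and(cpu_active_mask, new_mask), &req)) {
                    /*
                     * ->on_rq == T: the task must be moved by the migration
                     * thread; drop the lock, kick the thread and wait for it.
                     */
                    struct task_struct *mt = rq->migration_thread;

                    get_task_struct(mt);
                    task_rq_unlock(rq, &flags);
                    wake_up_process(mt);
                    put_task_struct(mt);
                    wait_for_completion(&req.done);
                    return 0;
            }
    out:
            task_rq_unlock(rq, &flags);
            return 0;
    }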
kernel/sched.c

index 27774b5aeb61196a72b1bdeb633df3515c83e98d..f475c608b073d43de7b101b6f09a48f9487ec146 100644
@@ -5456,7 +5456,7 @@ static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
        struct rq *rq = cpu_rq(dead_cpu);
        int needs_cpu, uninitialized_var(dest_cpu);
        unsigned long flags;
-again:
+
        local_irq_save(flags);
 
        raw_spin_lock(&rq->lock);
@@ -5464,14 +5464,13 @@ again:
        if (needs_cpu)
                dest_cpu = select_fallback_rq(dead_cpu, p);
        raw_spin_unlock(&rq->lock);
-
-       /* It can have affinity changed while we were choosing. */
+       /*
+        * It can only fail if we race with set_cpus_allowed(),
+        * in which case the racer should migrate the task anyway.
+        */
        if (needs_cpu)
-               needs_cpu = !__migrate_task(p, dead_cpu, dest_cpu);
+               __migrate_task(p, dead_cpu, dest_cpu);
        local_irq_restore(flags);
-
-       if (unlikely(needs_cpu))
-               goto again;
 }
 
 /*
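
Applying both hunks yields the function below. The needs_cpu computation
falls in the gap between the two hunks and is reconstructed from the
preceding patch in this series, so treat that one line as an assumption
rather than quoted context:

    static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
    {
            struct rq *rq = cpu_rq(dead_cpu);
            int needs_cpu, uninitialized_var(dest_cpu);
            unsigned long flags;

            local_irq_save(flags);

            raw_spin_lock(&rq->lock);
            /* Reconstructed: this line sits in the gap between the two hunks. */
            needs_cpu = (task_cpu(p) == dead_cpu) && (p->state != TASK_WAKING);
            if (needs_cpu)
                    dest_cpu = select_fallback_rq(dead_cpu, p);
            raw_spin_unlock(&rq->lock);
            /*
             * It can only fail if we race with set_cpus_allowed(),
             * in which case the racer should migrate the task anyway.
             */
            if (needs_cpu)
                    __migrate_task(p, dead_cpu, dest_cpu);
            local_irq_restore(flags);
    }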