sched: Provide rt_mutex specific scheduler helpers
author     Peter Zijlstra <peterz@infradead.org>
           Fri, 8 Sep 2023 16:22:51 +0000 (18:22 +0200)
committer  Peter Zijlstra <peterz@infradead.org>
           Wed, 20 Sep 2023 07:31:12 +0000 (09:31 +0200)
commit     6b596e62ed9f90c4a97e68ae1f7b1af5beeb3c05
tree       5fdde551fcf4a48f0a1046c34b6d994e0be0ffd6
parent     de1474b46d889ee0367f6e71d9adfeb0711e4a8d
sched: Provide rt_mutex specific scheduler helpers

With PREEMPT_RT there is an rt_mutex recursion problem where
sched_submit_work() can use an rtlock (aka spinlock_t). More
specifically, what happens is:

  mutex_lock() /* really rt_mutex */
    ...
      __rt_mutex_slowlock_locked()
        task_blocks_on_rt_mutex()
          // enqueue current task as waiter
          // do PI chain walk
        rt_mutex_slowlock_block()
          schedule()
            sched_submit_work()
              ...
              spin_lock() /* really rtlock */
                ...
                  __rt_mutex_slowlock_locked()
                    task_blocks_on_rt_mutex()
                      // enqueue current task as waiter *AGAIN*
                      // *CONFUSION*
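
For illustration, a simplified sketch of where the inner rtlock can come
from (the block-layer flush is one example path, not the only one; the
rest of sched_submit_work() is elided):

  static void sched_submit_work(struct task_struct *tsk)
  {
          /* wq/io-worker notification elided ... */

          /*
           * Flushing plugged block I/O can take a spinlock_t, which on
           * PREEMPT_RT is an rtlock built on rt_mutex -- hence the
           * nested slowlock above.
           */
          blk_flush_plug(tsk->plug, true);
  }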

Fix this by making rt_mutex do the sched_submit_work() early, before
it enqueues itself as a waiter -- before it even knows *if* it will
wait.
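
Sketched, the slowpath ordering then looks roughly like this (helper
names and details are illustrative of the intent, not a verbatim copy of
the patch):

  static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
                                       unsigned int state)
  {
          int ret;

          /*
           * Do the sched_submit_work() part of schedule() up front,
           * while current is not (yet) enqueued as a waiter anywhere.
           */
          rt_mutex_pre_schedule();

          raw_spin_lock_irq(&lock->wait_lock);
          /*
           * try_to_take_rt_mutex(), task_blocks_on_rt_mutex(),
           * rt_mutex_slowlock_block() -- blocking now uses a schedule()
           * variant that skips the already-done submit-work step.
           */
          ret = __rt_mutex_slowlock_locked(lock, NULL, state);
          raw_spin_unlock_irq(&lock->wait_lock);

          /* Resume the deferred worker bookkeeping on the way out. */
          rt_mutex_post_schedule();

          return ret;
  }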

[[ basically Thomas' patch but with different naming and a few asserts
   added ]]

Originally-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230908162254.999499-5-bigeasy@linutronix.de
include/linux/sched.h
include/linux/sched/rt.h
kernel/sched/core.c