sched: Optimize rq_lockp() usage
author    Peter Zijlstra <peterz@infradead.org>
          Wed, 3 Mar 2021 15:45:41 +0000 (16:45 +0100)
committer Peter Zijlstra <peterz@infradead.org>
          Wed, 12 May 2021 09:43:27 +0000 (11:43 +0200)
commit    9ef7e7e33bcdb57be1afb28884053c28b5f05240
tree      40e43fa4c6d82adf7cd39fbc1f5dfb868701165b
parent    9edeaea1bc452372718837ed2ba775811baf1ba1
sched: Optimize rq_lockp() usage

rq_lockp() includes a static_branch(), which is asm-goto, which is
asm volatile, which defeats regular CSE (common subexpression
elimination). This means that:

	if (!static_branch(&foo))
		return simple;

	if (static_branch(&foo) && cond)
		return complex;

doesn't fold, and we get horrible code. Introduce __rq_lockp(), a
variant without the static_branch() in it.
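
For illustration, a sketch of the resulting pair of helpers, following
the shape of the core-scheduling series (the exact bodies live in
kernel/sched/sched.h; the field and helper names below match that
series but are quoted from memory, not from this patch):

	/* Lock for a runqueue; the static_branch() inside
	 * sched_core_enabled() keeps the !core-sched fast path cheap,
	 * but its asm volatile blocks CSE across repeated calls. */
	static inline raw_spinlock_t *rq_lockp(struct rq *rq)
	{
		if (sched_core_enabled(rq))
			return &rq->core->__lock;
		return &rq->__lock;
	}

	/* Same answer without the static_branch(), so the compiler is
	 * free to fold repeated evaluations into one. */
	static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
	{
		if (rq->core_enabled)
			return &rq->core->__lock;
		return &rq->__lock;
	}

Call sites that evaluate the lock pointer more than once in quick
succession (lock-equality checks, lockdep assertions) can then use
__rq_lockp() and let CSE do its job, while one-shot callers keep the
static_branch() hint in rq_lockp().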

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Don Hiatt <dhiatt@digitalocean.com>
Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210422123308.316696988@infradead.org
kernel/sched/core.c
kernel/sched/deadline.c
kernel/sched/fair.c
kernel/sched/sched.h