perf/core: Use local64_try_cmpxchg in perf_swevent_set_period
author Uros Bizjak <ubizjak@gmail.com>
Sat, 8 Jul 2023 08:10:57 +0000 (10:10 +0200)
committer Peter Zijlstra <peterz@infradead.org>
Mon, 10 Jul 2023 07:52:35 +0000 (09:52 +0200)
Use local64_try_cmpxchg() instead of local64_cmpxchg(*ptr, old, new) == old
in perf_swevent_set_period().  The x86 CMPXCHG instruction returns success in
the ZF flag, so this change saves a compare after the cmpxchg (and the related
move instruction in front of the cmpxchg).
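
As an illustration of where the saved compare comes from, here is a
simplified sketch of an x86 try_cmpxchg wrapper (not the kernel's actual
macro; the helper name and exact constraints are made up for illustration).
The ZF result of CMPXCHG is handed straight back to the caller as the boolean
return value, so the generated code can branch on the flag instead of redoing
a comparison:

  static inline bool try_cmpxchg64_sketch(long *ptr, long *old, long new)
  {
          bool success;
          long prev = *old;

          /* CMPXCHG compares RAX with *ptr; ZF reports the outcome. */
          asm volatile("lock cmpxchgq %[new], %[ptr]"
                       : "=@ccz" (success),        /* success <- ZF */
                         [ptr] "+m" (*ptr),
                         "+a" (prev)               /* expected in, observed out */
                       : [new] "r" (new)
                       : "memory");
          if (!success)
                  *old = prev;                     /* report what was actually seen */
          return success;
  }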

Also, try_cmpxchg() implicitly assigns the old *ptr value to "old" when the
cmpxchg fails, so there is no need to re-read the value in the loop.
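
On architectures without a native try_cmpxchg, the operation can still be
expressed on top of plain cmpxchg along the following lines (a sketch of the
fallback semantics, not the kernel's exact macro; the helper name is made up
for illustration).  Because the observed value is written back to *old on
failure, the caller's loop never has to re-read period_left:

  static inline bool local64_try_cmpxchg_sketch(local64_t *l, s64 *old, s64 new)
  {
          s64 ret = local64_cmpxchg(l, *old, new);

          if (ret != *old) {
                  *old = ret;     /* failure: remember the value actually seen */
                  return false;
          }
          return true;
  }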

No functional change intended.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230708081129.45915-1-ubizjak@gmail.com
kernel/events/core.c

index 78ae7b6f90fdbf8c4b509ebba86eacc25f0050e8..f84e2640ea2fc55f26912e741cb01d4d45a283fa 100644
@@ -9595,16 +9595,16 @@ u64 perf_swevent_set_period(struct perf_event *event)
 
        hwc->last_period = hwc->sample_period;
 
-again:
-       old = val = local64_read(&hwc->period_left);
-       if (val < 0)
-               return 0;
+       old = local64_read(&hwc->period_left);
+       do {
+               val = old;
+               if (val < 0)
+                       return 0;
 
-       nr = div64_u64(period + val, period);
-       offset = nr * period;
-       val -= offset;
-       if (local64_cmpxchg(&hwc->period_left, old, val) != old)
-               goto again;
+               nr = div64_u64(period + val, period);
+               offset = nr * period;
+               val -= offset;
+       } while (!local64_try_cmpxchg(&hwc->period_left, &old, val));
 
        return nr;
 }