MN10300: Prevent cnt32_to_63() from being preempted in sched_clock()
author    David Howells <dhowells@redhat.com>
          Wed, 27 Oct 2010 16:28:35 +0000 (17:28 +0100)
committer David Howells <dhowells@redhat.com>
          Wed, 27 Oct 2010 16:28:35 +0000 (17:28 +0100)
Prevent cnt32_to_63() from being preempted in sched_clock() because it may
read its internal counter, get preempted, get delayed for longer than half the
period of the 'TSC' and then write the internal counter, thus corrupting it.

Whilst some callers of sched_clock() have interrupts disabled or hold
spinlocks, not all do, and so preemption must be disabled here.

Note that sched_clock() is called from lockdep, but that shouldn't be a problem
because although preempt_disable() calls into lockdep, lockdep has a recursion
counter to deal with this.

Signed-off-by: David Howells <dhowells@redhat.com>
arch/mn10300/kernel/time.c

index 8f7f6d22783d5065a61bb8bb4636c776b5892e5f..0b5c856b42665ef75208e1f3377686a8ddb60d1e 100644
@@ -54,6 +54,9 @@ unsigned long long sched_clock(void)
        unsigned long tsc, tmp;
        unsigned product[3]; /* 96-bit intermediate value */
 
+       /* cnt32_to_63() is not safe with preemption */
+       preempt_disable();
+
        /* read the TSC value
         */
        tsc = 0 - get_cycles(); /* get_cycles() counts down */
@@ -64,6 +67,8 @@ unsigned long long sched_clock(void)
         */
        tsc64.ll = cnt32_to_63(tsc) & 0x7fffffffffffffffULL;
 
+       preempt_enable();
+
        /* scale the 64-bit TSC value to a nanosecond value via a 96-bit
         * intermediate
         */