x86/mm: Use new merged flush logic in arch_tlbbatch_flush()
author Andy Lutomirski <luto@kernel.org>
Sun, 28 May 2017 17:00:13 +0000 (10:00 -0700)
committer Ingo Molnar <mingo@kernel.org>
Mon, 5 Jun 2017 07:59:43 +0000 (09:59 +0200)
Now there's only one copy of the local tlb flush logic for
non-kernel pages on SMP kernels.

The only functional change is that arch_tlbbatch_flush() will now
leave_mm() on the local CPU if that CPU is in the batch and is in
TLBSTATE_LAZY mode.
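
For context, the leave_mm() behavior comes from the shared flush path this
series introduced: when the local CPU is in lazy TLB mode, the common flush
code drops the lazy mm instead of flushing it. Below is a simplified sketch
of that branch of flush_tlb_func_common(); it is reconstructed from the
surrounding series rather than copied from the kernel source, so the details
are approximate:

	static void flush_tlb_func_common(const struct flush_tlb_info *f,
					  bool local, enum tlb_flush_reason reason)
	{
		if (this_cpu_read(cpu_tlbstate.state) != TLBSTATE_OK) {
			/*
			 * Lazy TLB mode: this CPU has no live user mappings
			 * for the mm, so drop the mm rather than flush.
			 */
			leave_mm(smp_processor_id());
			return;
		}

		if (f->end == TLB_FLUSH_ALL) {
			local_flush_tlb();
			if (local)
				count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
			trace_tlb_flush(reason, TLB_FLUSH_ALL);
		}
		/* ... per-page range flush elided ... */
	}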

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/mm/tlb.c

index 12b8812e8926a224cdc907c3328937f166154465..c03b4a0ce58c2143093f98b91dccdb5967afe3f1 100644
@@ -382,12 +382,8 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 
        int cpu = get_cpu();
 
-       if (cpumask_test_cpu(cpu, &batch->cpumask)) {
-               count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-               local_flush_tlb();
-               trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
-       }
-
+       if (cpumask_test_cpu(cpu, &batch->cpumask))
+               flush_tlb_func_local(&info, TLB_LOCAL_SHOOTDOWN);
        if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids)
                flush_tlb_others(&batch->cpumask, &info);
        cpumask_clear(&batch->cpumask);
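
With the hunk above applied, arch_tlbbatch_flush() ends up looking roughly
like the sketch below. The flush_tlb_info initializer and the final put_cpu()
are not visible in this hunk and are assumed from the rest of the series:

	void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
	{
		struct flush_tlb_info info = {
			.mm = NULL,
			.start = 0UL,
			.end = TLB_FLUSH_ALL,
		};

		int cpu = get_cpu();

		/* Local CPU in the batch: take the shared local flush path. */
		if (cpumask_test_cpu(cpu, &batch->cpumask))
			flush_tlb_func_local(&info, TLB_LOCAL_SHOOTDOWN);

		/* Other CPUs in the batch get the IPI-based remote flush. */
		if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids)
			flush_tlb_others(&batch->cpumask, &info);

		cpumask_clear(&batch->cpumask);

		put_cpu();
	}

Both paths funnel into flush_tlb_func_common(), which is why the
TLBSTATE_LAZY/leave_mm() behavior described in the changelog now applies to
the local CPU here as well.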