kfence, slab: fix cache_alloc_debugcheck_after() for bulk allocations
author:    Marco Elver <elver@google.com>
           Sat, 13 Mar 2021 05:07:53 +0000 (21:07 -0800)
committer: Linus Torvalds <torvalds@linux-foundation.org>
           Sat, 13 Mar 2021 19:27:30 +0000 (11:27 -0800)
cache_alloc_debugcheck_after() performs checks on an object, including
adjusting the returned pointer.  None of this should apply to KFENCE
objects.  While these checks are already skipped for non-bulk allocations
served by KFENCE, bulk allocations still reach cache_alloc_debugcheck_after()
via cache_alloc_debugcheck_after_bulk().

Fix it by skipping cache_alloc_debugcheck_after() for KFENCE objects.

Link: https://lkml.kernel.org/r/20210304205256.2162309-1-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Jann Horn <jannh@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/slab.c

index 51fd424e0d6d03bc0f16338aee251bc47c60249e..ae651bf540b7033180c37221bdb4c2a4842d98fa 100644 (file)
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2992,7 +2992,7 @@ static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
                                gfp_t flags, void *objp, unsigned long caller)
 {
        WARN_ON_ONCE(cachep->ctor && (flags & __GFP_ZERO));
-       if (!objp)
+       if (!objp || is_kfence_address(objp))
                return objp;
        if (cachep->flags & SLAB_POISON) {
                check_poison_obj(cachep, objp);