drm/msm: don't allocate pages from the MOVABLE zone
author	Lucas Stach <l.stach@pengutronix.de>
Thu, 28 Feb 2019 06:23:29 +0000 (07:23 +0100)
committer	Rob Clark <robdclark@chromium.org>
Thu, 18 Apr 2019 17:04:09 +0000 (10:04 -0700)
The pages backing the GEM objects are kept pinned in place as
long as they are alive, so they must not be allocated from the
MOVABLE zone. Blocking page migration for too long will cause
the VM subsystem headaches and will outright break CMA, as a
few pinned pages in CMA will lead to failure to find the
required large contiguous regions.
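A minimal sketch of the pattern, separate from the hunk below (the helper name example_gem_init() is hypothetical, not part of this patch): once drm_gem_object_init() has set up the shmem backing file, clearing __GFP_MOVABLE from the mapping's GFP mask keeps every future page allocation for that object out of the MOVABLE zone.

  /*
   * Hypothetical sketch: restrict the shmem mapping backing a GEM
   * object so its pages are never allocated from the MOVABLE zone.
   * GFP_HIGHUSER is GFP_HIGHUSER_MOVABLE without __GFP_MOVABLE, so
   * pages allocated for this mapping can stay pinned without blocking
   * page migration or fragmenting CMA.
   */
  #include <linux/pagemap.h>
  #include <drm/drm_gem.h>

  static int example_gem_init(struct drm_device *dev,
			      struct drm_gem_object *obj, size_t size)
  {
	  int ret = drm_gem_object_init(dev, obj, size);

	  if (ret)
		  return ret;

	  /* Drop __GFP_MOVABLE for all future shmem page allocations. */
	  mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
	  return 0;
  }

The mask only affects allocations made after it is set, which is why the call sits directly after drm_gem_object_init(), before any pages are populated.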

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 18ca651ab942a3d65dd78a5189e58a38c476f517..76940a9da9805537aa9d087e83bd0568f3d8e64b 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -1026,6 +1026,13 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
                ret = drm_gem_object_init(dev, obj, size);
                if (ret)
                        goto fail;
+               /*
+                * Our buffers are kept pinned, so allocating them from the
+                * MOVABLE zone is a really bad idea, and conflicts with CMA.
+                * See comments above new_inode() why this is required _and_
+                * expected if you're going to pin these pages.
+                */
+               mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
        }
 
        return obj;