ceph: zero the dir_entries memory when allocating it
author Xiubo Li <xiubli@redhat.com>
Thu, 17 Feb 2022 08:15:42 +0000 (16:15 +0800)
committer Ilya Dryomov <idryomov@gmail.com>
Tue, 1 Mar 2022 17:26:37 +0000 (18:26 +0100)
Otherwise this could trigger a bug in the future: an old ceph version
that sends a smaller inode struct causes some members to be skipped
when decoding the reply in handle_reply, leaving them holding stale,
uninitialized data.

Signed-off-by: Xiubo Li <xiubli@redhat.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
fs/ceph/mds_client.c

index 0b7bde73bf03a81ee3460329d73334d62f93d09c..ef9145477aaef99eee84bdc42e565628dc0434e2 100644 (file)
@@ -2202,7 +2202,8 @@ int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
        order = get_order(size * num_entries);
        while (order >= 0) {
                rinfo->dir_entries = (void*)__get_free_pages(GFP_KERNEL |
-                                                            __GFP_NOWARN,
+                                                            __GFP_NOWARN |
+                                                            __GFP_ZERO,
                                                             order);
                if (rinfo->dir_entries)
                        break;