mm/swap: avoid a xa load for swapout path
author	Kairui Song <kasong@tencent.com>
	Tue, 17 Oct 2023 01:17:28 +0000 (09:17 +0800)
committer	Andrew Morton <akpm@linux-foundation.org>
	Wed, 25 Oct 2023 23:47:11 +0000 (16:47 -0700)
A variable is never used on the swapout path (shadowp is NULL), and the
compiler is unable to optimize out the unneeded xas_load() since it is a
function call.
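
As a compile-only illustration of the pattern (lookup() and the
save_old_*() helpers below are made-up stand-ins, not the kernel's
xas_load() or add_to_swap_cache()): because the lookup is an out-of-line
call, the optimizer has to assume it may have side effects and cannot
drop it, so the only way to skip the load when the caller passes a NULL
output pointer is to test the pointer before making the call.

/*
 * Compile-only sketch.  lookup() stands in for xas_load(): it lives in
 * another translation unit, so the compiler must assume it has side
 * effects and keep the call even when its return value goes unused.
 */
#include <stddef.h>

void *lookup(void);		/* external, opaque to the optimizer */

/* Before: lookup() runs even when out == NULL (the swapout case). */
void save_old_before(void **out)
{
	void *old = lookup();

	if (old != NULL) {
		if (out)
			*out = old;
	}
}

/* After: lookup() is reached only when the caller wants the old value. */
void save_old_after(void **out)
{
	if (out) {
		void *old = lookup();

		if (old != NULL)
			*out = old;
	}
}

With this shape, a caller passing a NULL output pointer never issues the
call at all, which is what the hunk below does for shadowp.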

This was introduced by commit 3852f6768ede ("mm/swapcache: support to
handle the shadow entries").

Link: https://lkml.kernel.org/r/20231017011728.37508-1-ryncsn@gmail.com
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Huang Ying <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/swap_state.c

index b3b14bd0dd6447f47aea2df42e8348f15660c73a..ab79ffb71736af2c1903c8636e0bed4225b1fdb0 100644
@@ -109,9 +109,9 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
                        goto unlock;
                for (i = 0; i < nr; i++) {
                        VM_BUG_ON_FOLIO(xas.xa_index != idx + i, folio);
-                       old = xas_load(&xas);
-                       if (xa_is_value(old)) {
-                               if (shadowp)
+                       if (shadowp) {
+                               old = xas_load(&xas);
+                               if (xa_is_value(old))
                                        *shadowp = old;
                        }
                        xas_store(&xas, folio);