arm64: kaslr: Adjust the offset to avoid Image across alignment boundary
author Catalin Marinas <catalin.marinas@arm.com>
Tue, 22 Aug 2017 14:39:00 +0000 (15:39 +0100)
committer Will Deacon <will.deacon@arm.com>
Tue, 22 Aug 2017 17:15:42 +0000 (18:15 +0100)
With 16KB pages and a kernel Image larger than 16MB, the current
kaslr_early_init() logic for avoiding mappings across swapper table
boundaries fails since increasing the offset by kimg_sz just moves the
problem to the next boundary.

This patch instead rounds the offset down to a multiple of
(1 << SWAPPER_TABLE_SHIFT) if the Image crosses such a swapper table
boundary.

Fixes: afd0e5a87670 ("arm64: kaslr: Fix up the kernel image alignment")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arch/arm64/kernel/kaslr.c

index 1d95c204186b40f8e8225fd1885136db0dd0928b..47080c49cc7e77a3df120f061f7b10f90a8c2ec5 100644
@@ -131,8 +131,7 @@ u64 __init kaslr_early_init(u64 dt_phys)
        /*
         * The kernel Image should not extend across a 1GB/32MB/512MB alignment
         * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
-        * happens, increase the KASLR offset by the size of the kernel image
-        * rounded up by SWAPPER_BLOCK_SIZE.
+        * happens, round down the KASLR offset by (1 << SWAPPER_TABLE_SHIFT).
         *
         * NOTE: The references to _text and _end below will already take the
         *       modulo offset (the physical displacement modulo 2 MB) into
@@ -141,11 +140,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
         *       mapping we choose.
         */
        if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
-           (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT)) {
-               u64 kimg_sz = _end - _text;
-               offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
-                               & mask;
-       }
+           (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
+               offset = round_down(offset, 1 << SWAPPER_TABLE_SHIFT);
 
        if (IS_ENABLED(CONFIG_KASAN))
                /*