sfrench/cifs-2.6.git
21 months ago  seq_file: allocate seq_file from kmem_cache
Alexey Dobriyan [Tue, 10 Apr 2018 23:34:45 +0000 (16:34 -0700)]
seq_file: allocate seq_file from kmem_cache

For fine-grained debugging and usercopy protection.

Link: http://lkml.kernel.org/r/20180310085027.GA17121@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Glauber Costa <glommer@gmail.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  fs/reiserfs/journal.c: add missing reiserfs_warning() arg
Andrew Morton [Tue, 10 Apr 2018 23:34:41 +0000 (16:34 -0700)]
fs/reiserfs/journal.c: add missing reiserfs_warning() arg

One use of the reiserfs_warning() macro in journal_init_dev() is missing
a parameter, causing the following warning:

  REISERFS warning (device loop0): journal_init_dev: Cannot open '%s': %i journal_init_dev:

This also causes a WARN_ONCE() warning in the vsprintf code, and then a
panic if panic_on_warn is set.

  Please remove unsupported %/ in format string
  WARNING: CPU: 1 PID: 4480 at lib/vsprintf.c:2138 format_decode+0x77f/0x830 lib/vsprintf.c:2138
  Kernel panic - not syncing: panic_on_warn set ...

Just add another string argument to the macro invocation.

Addresses https://syzkaller.appspot.com/bug?id=0627d4551fdc39bf1ef5d82cd9eef587047f7718

Link: http://lkml.kernel.org/r/d678ebe1-6f54-8090-df4c-b9affad62293@infradead.org
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: <syzbot+6bd77b88c1977c03f584@syzkaller.appspotmail.com>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Jeff Mahoney <jeffm@suse.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jan Kara <jack@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  autofs4: use wait_event_killable
Matthew Wilcox [Tue, 10 Apr 2018 23:34:37 +0000 (16:34 -0700)]
autofs4: use wait_event_killable

This playing with signals to allow only fatal signals appears to predate
the introduction of wait_event_killable(), and I'm fairly sure that
wait_event_killable is what was meant to happen here.
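
For reference, the replacement pattern looks roughly like this (the
condition shown is illustrative, not necessarily autofs4's exact one):

  /* Sleep until the daemon answers, waking early only for fatal
   * signals; returns -ERESTARTSYS in that case. */
  if (wait_event_killable(wq->queue, wq->name.name == NULL))
          return -ERESTARTSYS;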

[avagin@openvz.org: use wake_up() instead of wake_up_interruptible]
Link: http://lkml.kernel.org/r/20180331022839.21277-1-avagin@openvz.org
Link: http://lkml.kernel.org/r/20180319191609.23880-1-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Acked-by: Ian Kent <raven@themaw.net>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  init/ramdisk: use pr_cont() at the end of ramdisk loading
Aaro Koskinen [Tue, 10 Apr 2018 23:34:34 +0000 (16:34 -0700)]
init/ramdisk: use pr_cont() at the end of ramdisk loading

Use pr_cont() at the end of ramdisk loading.  This avoids the rotator
and an extra newline appearing in dmesg.

Before:
  RAMDISK: Loading 2436KiB [1 disk] into ram disk... |
  done.

After:
  RAMDISK: Loading 2436KiB [1 disk] into ram disk... done.
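
A minimal sketch of the pattern (identifiers are illustrative, not the
exact rd_load_image() code):

  static void report_ramdisk_load(unsigned int kib)
  {
          pr_notice("RAMDISK: Loading %uKiB into ram disk... ", kib);
          /* ... copy the blocks ... */
          pr_cont("done.\n");     /* appends to the same log line */
  }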

Link: http://lkml.kernel.org/r/20180302205552.16031-1-aaro.koskinen@iki.fi
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: whinge about bool bitfields
Joe Perches [Tue, 10 Apr 2018 23:34:25 +0000 (16:34 -0700)]
checkpatch: whinge about bool bitfields

Using bool in a bitfield isn't a good idea, as the alignment behavior
is architecture/implementation defined.

Suggest using unsigned int or u<8|16|32> instead.
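
An illustrative before/after of what the new warning targets:

  struct link_state {
          bool            up:1;           /* checkpatch now warns here */
  };

  struct link_state_fixed {
          unsigned int    up:1;           /* suggested replacement */
  };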

Link: http://lkml.kernel.org/r/e22fb871b1b7f2fda4b22f3a24e0d7f092eb612c.camel@perches.com
Signed-off-by: Joe Perches <joe@perches.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: allow space between colon and bracket
Heinrich Schuchardt [Tue, 10 Apr 2018 23:34:14 +0000 (16:34 -0700)]
checkpatch: allow space between colon and bracket

Allow a space between a colon and subsequent opening bracket.  This
sequence may occur in inline assembler statements like

  asm(
          "ldr %[out], [%[in]]\n\t"
          : [out] "=r" (ret)
          : [in] "r" (addr)
  );

Link: http://lkml.kernel.org/r/20180403191655.23700-1-xypron.glpk@gmx.de
Signed-off-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Acked-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: add test for assignment at start of line
Joe Perches [Tue, 10 Apr 2018 23:34:04 +0000 (16:34 -0700)]
checkpatch: add test for assignment at start of line

Kernel style seems to prefer line wrapping an assignment with the
assignment operator on the previous line, like:

	<leading tabs> identifier =
		expression;

over:

	<leading tabs> identifier
		= expression;

by somewhere around a 50:1 ratio:

  $ git grep -P "[^=]=\s*$" -- "*.[ch]" | wc -l
  52008
  $ git grep -P "^\s+[\*\/\+\|\%\-]?=[^=>]" | wc -l
  1161

So add a --strict test for that condition.

Link: http://lkml.kernel.org/r/1522275726.2210.12.camel@perches.com
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: test SYMBOLIC_PERMS multiple times per line
Joe Perches [Tue, 10 Apr 2018 23:33:53 +0000 (16:33 -0700)]
checkpatch: test SYMBOLIC_PERMS multiple times per line

There are occasions where symbolic perms are used in a ternary like

return (channel == 0) ? S_IRUGO | S_IWUSR : S_IRUGO;

The current test will find the first use "S_IRUGO | S_IWUSR" but not the
second use "S_IRUGO" on the same line.

Improve the test to look for all instances on a line.
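
For the example above, the equivalent octal replacements would be 0644
for S_IRUGO | S_IWUSR and 0444 for S_IRUGO:

  return (channel == 0) ? 0644 : 0444;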

Link: http://lkml.kernel.org/r/1522127944.12357.49.camel@perches.com
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: two spelling fixes
Claudio Fontana [Tue, 10 Apr 2018 23:33:42 +0000 (16:33 -0700)]
checkpatch: two spelling fixes

completly -> completely
wacking -> whacking

Link: http://lkml.kernel.org/r/1520405394-5586-1-git-send-email-claudio.fontana@gliwa.com
Signed-off-by: Claudio Fontana <claudio.fontana@gliwa.com>
Acked-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: improve get_quoted_string for TRACE_EVENT macros
Joe Perches [Tue, 10 Apr 2018 23:33:34 +0000 (16:33 -0700)]
checkpatch: improve get_quoted_string for TRACE_EVENT macros

The get_quoted_string function does not expect invalid arguments.

The $stat test can return non-statements for complicated macros like
TRACE_EVENT.

Allow the $stat block and test for vsprintf misuses to exceed the actual
block length and possibly test invalid lines by validating the arguments
of get_quoted_string.

Return "" if either get_quoted_string argument is undefined.

Miscellanea:

o Properly align the comment for the vsprintf extension test

Link: http://lkml.kernel.org/r/9e9725342ca3dfc0f5e3e0b8ca3c482b0e5712cc.1520356392.git.joe@perches.com
Signed-off-by: Joe Perches <joe@perches.com>
Reported-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: warn for use of %px
Tobin C. Harding [Tue, 10 Apr 2018 23:33:31 +0000 (16:33 -0700)]
checkpatch: warn for use of %px

Usage of the new %px specifier potentially leaks sensitive information.
Printing kernel addresses exposes the kernel layout in memory, which is
potentially exploitable.  We have tools in the kernel to help us do the
right thing.  We can have checkpatch warn developers of potential
dangers of using %px.

Have checkpatch emit a warning for usage of specifier %px.
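
An illustrative contrast between the two specifiers:

  pr_info("hashed pointer: %p\n", ptr);   /* default: value is obscured */
  pr_info("raw pointer: %px\n", ptr);     /* checkpatch now warns here */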

Link: http://lkml.kernel.org/r/1519700648-23108-5-git-send-email-me@tobin.cc
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: add subroutine get_stat_here()
Tobin C. Harding [Tue, 10 Apr 2018 23:33:27 +0000 (16:33 -0700)]
checkpatch: add subroutine get_stat_here()

checkpatch currently contains duplicate code.  We can define a
subroutine and call that instead.  This reduces code duplication and
line count.

Add subroutine get_stat_here().

Link: http://lkml.kernel.org/r/1519700648-23108-4-git-send-email-me@tobin.cc
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: remove unused variable declarations
Tobin C. Harding [Tue, 10 Apr 2018 23:33:24 +0000 (16:33 -0700)]
checkpatch: remove unused variable declarations

Variables are declared but never used; we should remove them.

Link: http://lkml.kernel.org/r/1519700648-23108-3-git-send-email-me@tobin.cc
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: add subroutine get_stat_real()
Tobin C. Harding [Tue, 10 Apr 2018 23:33:20 +0000 (16:33 -0700)]
checkpatch: add subroutine get_stat_real()

checkpatch currently contains duplicate code.  We can define a
subroutine and call that instead.  This reduces code duplication and
line count.

Add subroutine get_stat_real().

Link: http://lkml.kernel.org/r/1519700648-23108-2-git-send-email-me@tobin.cc
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: add Crypto ON_STACK to declaration_macros
Gilad Ben-Yossef [Tue, 10 Apr 2018 23:33:17 +0000 (16:33 -0700)]
checkpatch: add Crypto ON_STACK to declaration_macros

Add the crypto API *_ON_STACK to $declaration_macros.

Resolves the following false warning:

WARNING: Missing a blank line after declarations
+ int err;
+ SHASH_DESC_ON_STACK(desc, ctx_p->shash_tfm);

Link: http://lkml.kernel.org/r/1518941636-4484-1-git-send-email-gilad@benyossef.com
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch.pl: add SPDX license tag check
Rob Herring [Tue, 10 Apr 2018 23:33:13 +0000 (16:33 -0700)]
checkpatch.pl: add SPDX license tag check

Add SPDX license tag check based on the rules defined in
Documentation/process/license-rules.rst.  To summarize, SPDX license
tags should be on the 1st line (or 2nd line in scripts) using the
appropriate comment style for the file type.
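
For a C source file the expected first line looks like this (the
license identifier itself is just an example); headers use the
/* SPDX-License-Identifier: GPL-2.0 */ form instead:

  // SPDX-License-Identifier: GPL-2.0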

Link: http://lkml.kernel.org/r/20180202154026.15298-1-robh@kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Joe Perches <joe@perches.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Igor Stoppa <igor.stoppa@huawei.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  checkpatch: improve parse_email signature checking
Joe Perches [Tue, 10 Apr 2018 23:33:09 +0000 (16:33 -0700)]
checkpatch: improve parse_email signature checking

Bare email addresses with non-alphanumeric characters require escape
quoting before being substituted in the parse_email routine.

e.g. Reported-by: syzbot+bbd8e9a06452cc48059b@syzkaller.appspotmail.com

Do so.

Link: http://lkml.kernel.org/r/1518631805.3678.12.camel@perches.com
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  lib/list_debug.c: print unmangled addresses
Matthew Wilcox [Tue, 10 Apr 2018 23:33:06 +0000 (16:33 -0700)]
lib/list_debug.c: print unmangled addresses

The entire point of printing the pointers in list_debug is to see if
there's any useful information in them (eg poison values, ASCII, etc);
obscuring them to see if they compare equal makes them much less useful.
If an attacker can force this message to be printed, we've already lost.

Link: http://lkml.kernel.org/r/20180401223237.GV13332@bombadil.infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Tobin C. Harding <me@tobin.cc>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  lib/test_ubsan.c: make test_ubsan_misaligned_access() static
Colin Ian King [Tue, 10 Apr 2018 23:33:02 +0000 (16:33 -0700)]
lib/test_ubsan.c: make test_ubsan_misaligned_access() static

test_ubsan_misaligned_access() is local to the source and does not need
to be in global scope, so make it static.

Cleans up sparse warning:

  lib/test_ubsan.c:91:6: warning: symbol 'test_ubsan_misaligned_access' was not declared. Should it be static?

Link: http://lkml.kernel.org/r/20180313103048.28513-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Cc: Jinbum Park <jinb.park7@gmail.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  lib: add testing module for UBSAN
Jinbum Park [Tue, 10 Apr 2018 23:32:58 +0000 (16:32 -0700)]
lib: add testing module for UBSAN

This is a test module for UBSAN.  It triggers all undefined behaviors
that Linux supports now, and detects them.

All test cases pass when compiled with gcc-5.5.0.

With gcc-4.9.x, misaligned, out-of-bounds, and object-size-mismatch
errors will not be detected, because gcc-4.9.x doesn't support them.

Link: http://lkml.kernel.org/r/20180309102247.GA2944@pjb1027-Latitude-E5410
Signed-off-by: Jinbum Park <jinb.park7@gmail.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  lib/test_bitmap.c: do not accidentally use stack VLA
Kees Cook [Tue, 10 Apr 2018 23:32:54 +0000 (16:32 -0700)]
lib/test_bitmap.c: do not accidentally use stack VLA

This avoids an accidental stack VLA (since the compiler thinks the value
of "len" can change, even when marked "const").  This just replaces it
with a #define so it will DTRT.

Seen with -Wvla.  Fixed as part of the directive to remove all VLAs from
the kernel: https://lkml.org/lkml/2018/3/7/621
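
A minimal illustration of the problem and the fix (not the exact
test_bitmap.c code):

  /* Before: despite const, buf is formally a VLA and -Wvla fires. */
  const int len = 8;
  char buf[len];

  /* After: a #define is a true integer constant expression. */
  #define BUF_LEN 8
  char buf2[BUF_LEN];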

Link: http://lkml.kernel.org/r/20180307212555.GA17927@beast
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Yury Norov <ynorov@caviumnetworks.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  lib/Kconfig.debug: Debug Lockups and Hangs: keep SOFTLOCKUP options together
Randy Dunlap [Tue, 10 Apr 2018 23:32:51 +0000 (16:32 -0700)]
lib/Kconfig.debug: Debug Lockups and Hangs: keep SOFTLOCKUP options together

Keep all of the SOFTLOCKUP kconfig symbols together (instead of
injecting the HARDLOCKUP symbols in the midst of them) so that the
config tools display them with their dependencies.

Tested with 'make {menuconfig/nconfig/gconfig/xconfig}'.

Link: http://lkml.kernel.org/r/6be2d9ed-4656-5b94-460d-7f051e2c7570@infradead.org
Fixes: 05a4a9527931 ("kernel/watchdog: split up config options")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  MAINTAINERS: update email address for Alexandre Bounine
Alexandre Bounine [Tue, 10 Apr 2018 23:32:48 +0000 (16:32 -0700)]
MAINTAINERS: update email address for Alexandre Bounine

Link: http://lkml.kernel.org/r/1522958149-6157-1-git-send-email-alex.bou9@gmail.com
Signed-off-by: Alexandre Bounine <alex.bou9@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Barry Wood <barry.wood@idt.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  task_struct: only use anon struct under randstruct plugin
Kees Cook [Tue, 10 Apr 2018 23:32:44 +0000 (16:32 -0700)]
task_struct: only use anon struct under randstruct plugin

The original intent for always adding the anonymous struct in
task_struct was to make sure we had compiler coverage.

However, this caused pathological padding of 40 bytes at the start of
task_struct.  Instead, move the anonymous struct to being only used when
struct layout randomization is enabled.
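
The mechanism is roughly the following (simplified from the compiler
headers):

  #ifdef CONFIG_GCC_PLUGIN_RANDSTRUCT
  /* Wrap the fields so the plugin may shuffle them as a unit. */
  #define randomized_struct_fields_start  struct {
  #define randomized_struct_fields_end    } __randomize_layout;
  #else
  /* No plugin: no anonymous struct, no pathological padding. */
  #define randomized_struct_fields_start
  #define randomized_struct_fields_end
  #endif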

Link: http://lkml.kernel.org/r/20180327213609.GA2964@beast
Fixes: 29e48ce87f1e ("task_struct: Allow randomized")
Signed-off-by: Kees Cook <keescook@chromium.org>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  clang-format: add configuration file
Miguel Ojeda [Tue, 10 Apr 2018 23:32:40 +0000 (16:32 -0700)]
clang-format: add configuration file

clang-format is a tool to format C/C++/...  code according to a set of
rules and heuristics.  Like most tools, it is not perfect nor covers
every single case, but it is good enough to be helpful.

In particular, it is useful for quickly re-formatting blocks of code
automatically, for reviewing full files in order to spot coding style
mistakes, typos and possible improvements.  It is also handy for sorting
``#includes``, for aligning variables and macros, for reflowing text and
other similar tasks.  It also serves as a teaching tool/guide for
newcomers.

The tool itself has been already included in the repositories of popular
Linux distributions for a long time.  The rules in this file are
intended for clang-format >= 4, which is easily available in most
distributions.

This commit adds the configuration file that contains the rules that the
tool uses to know how to format the code according to the kernel coding
style.  This gives us several advantages:

  * clang-format works out of the box with reasonable defaults;
    avoiding that everyone has to re-do the configuration.

  * Everyone agrees (eventually) on what is the most useful default
    configuration for most of the kernel.

  * If it becomes commonplace among kernel developers, clang-format
    may feel compelled to support us better. They already recognize
    the Linux kernel and its style in their documentation and in one
    of the style sub-options.

Some of clang-format's features relevant for the kernel are:

  * Uses clang's tooling support behind the scenes to parse and rewrite
    the code. It is not based on ad-hoc regexps.

  * Supports reasonably well the Linux kernel coding style.

  * Fast enough to be used at the press of a key.

  * There are already integrations (either built-in or third-party)
    for many common editors used by kernel developers (e.g. vim,
    emacs, Sublime, Atom...) that allow you to format an entire file
    or, more usefully, just your selection.

  * Able to parse unified diffs -- you can, for instance, reformat
    only the lines changed by a git commit.

  * Able to reflow text comments as well.

  * Widely supported and used by hundreds of developers in highly
    complex projects and organizations (e.g. the LLVM project itself,
    Chromium, WebKit, Google, Mozilla...). Therefore, it will be
    supported for a long time.

See more information about the tool at:

    https://clang.llvm.org/docs/ClangFormat.html
    https://clang.llvm.org/docs/ClangFormatStyleOptions.html

Link: http://lkml.kernel.org/r/20180318171632.qfkemw3mwbcukth6@gmail.com
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Joe Perches <joe@perches.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  uts: create "struct uts_namespace" from kmem_cache
Alexey Dobriyan [Tue, 10 Apr 2018 23:32:36 +0000 (16:32 -0700)]
uts: create "struct uts_namespace" from kmem_cache

So "struct uts_namespace" can enjoy fine-grained SLAB debugging and
usercopy protection.

I'd prefer the shorter name "utsns", but there is "user_namespace" already.
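
A sketch of the setup (the exact flags and the usercopy window are
illustrative):

  uts_ns_cache = kmem_cache_create_usercopy(
                  "uts_namespace", sizeof(struct uts_namespace), 0,
                  SLAB_PANIC | SLAB_ACCOUNT,
                  offsetof(struct uts_namespace, name),
                  sizeof(struct new_utsname), NULL);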

Link: http://lkml.kernel.org/r/20180228215158.GA23146@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Serge Hallyn <serge@hallyn.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  taint: add taint for randstruct
Kees Cook [Tue, 10 Apr 2018 23:32:33 +0000 (16:32 -0700)]
taint: add taint for randstruct

Since the randstruct plugin can intentionally produce extremely unusual
kernel structure layouts (even performance pathological ones), some
maintainers want to be able to trivially determine if an Oops is coming
from a randstruct-built kernel, so as to keep their sanity when
debugging.  This adds the new flag and initializes taint_mask
immediately when built with randstruct.

Link: http://lkml.kernel.org/r/1519084390-43867-4-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  taint: consolidate documentation
Kees Cook [Tue, 10 Apr 2018 23:32:29 +0000 (16:32 -0700)]
taint: consolidate documentation

This consolidates the taint bit documentation into a single place with
both numeric and letter values.  Additionally adds the missing TAINT_AUX
documentation.

Link: http://lkml.kernel.org/r/1519084390-43867-3-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  taint: convert to indexed initialization
Kees Cook [Tue, 10 Apr 2018 23:32:26 +0000 (16:32 -0700)]
taint: convert to indexed initialization

This converts to using indexed initializers instead of comments, adds a
comment on why the taint flags can't be an enum, and makes sure that no
one forgets to update the taint_flags when adding new bits.

Link: http://lkml.kernel.org/r/1519084390-43867-2-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: selftests: test /proc/uptime
Alexey Dobriyan [Tue, 10 Apr 2018 23:43:28 +0000 (16:43 -0700)]
proc: selftests: test /proc/uptime

The only tests I could come up with for /proc/uptime are:
 - test that values increase monotonically for 1 second,
 - bounce around CPUs and test the same thing.

Avoid glibc like the plague for affinity, given patches like this:
https://marc.info/?l=linux-kernel&m=152130031912594&w=4

Link: http://lkml.kernel.org/r/20180317165235.GB3445@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: use slower rb_first()
Alexey Dobriyan [Tue, 10 Apr 2018 23:32:20 +0000 (16:32 -0700)]
proc: use slower rb_first()

In the typical /proc "open+read+close" use case, a dentry is looked up
successfully on open only to be killed in dput() on close.  In fact,
dentries which aren't /proc/*/... or /proc/sys/* were almost NEVER
CACHED.  A simple printk in proc_lookup_de() shows that.

Now that the ->delete hook intelligently picks which dentries should
live in the dcache and which should not, rbtree caching is not
necessary, as the dcache does its job at last!

As a side effect, struct proc_dir_entry shrinks by one pointer, which
can go into the inline name.

Link: http://lkml.kernel.org/r/20180314231032.GA15854@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: selftests: shotgun testing of read/readdir/readlink/write
Alexey Dobriyan [Tue, 10 Apr 2018 23:46:19 +0000 (16:46 -0700)]
proc: selftests: shotgun testing of read/readdir/readlink/write

Perform reads with nearly everything in /proc, and some writing as well.

Hopefully memleak checkers and KASAN will find something.

[adobriyan@gmail.com: /proc/kmsg can and will block if read under root]
Link: http://lkml.kernel.org/r/20180316232147.GA20146@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
[adobriyan@gmail.com: /proc/sysrq-trigger lives on the ground floor]
Link: http://lkml.kernel.org/r/20180317164911.GA3445@avx2
Link: http://lkml.kernel.org/r/20180315201251.GA12396@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: switch struct proc_dir_entry::count to refcount
Alexey Dobriyan [Tue, 10 Apr 2018 23:32:14 +0000 (16:32 -0700)]
proc: switch struct proc_dir_entry::count to refcount

->count is an honest reference count, unlike ->in_use.

Link: http://lkml.kernel.org/r/20180313174550.GA4332@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: reject "." and ".." as filenames
Alexey Dobriyan [Tue, 10 Apr 2018 23:32:11 +0000 (16:32 -0700)]
proc: reject "." and ".." as filenames

Various subsystems can create files and directories in /proc with names
directly controlled by userspace.

Which means "/", "." and ".." are no-no.

"/" split is already taken care of, do the other 2 prohibited names.

Link: http://lkml.kernel.org/r/20180310001223.GB12443@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Florian Westphal <fw@strlen.de>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: add selftest for last field of /proc/loadavg
Alexey Dobriyan [Tue, 10 Apr 2018 23:42:23 +0000 (16:42 -0700)]
proc: add selftest for last field of /proc/loadavg

Test the fork counter formerly known as ->last_pid; it is the only part
of /proc/loadavg which can be tested.

Testing in the init pid namespace is not reliable because of background
activity.

Link: http://lkml.kernel.org/r/20180311152241.GA26247@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: do mmput ASAP for /proc/*/map_files
Alexey Dobriyan [Tue, 10 Apr 2018 23:32:05 +0000 (16:32 -0700)]
proc: do mmput ASAP for /proc/*/map_files

The mm_struct is not needed while printing, as all the data has
already been extracted.

Link: http://lkml.kernel.org/r/20180309223120.GC3843@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: faster /proc/cmdline
Alexey Dobriyan [Tue, 10 Apr 2018 23:32:01 +0000 (16:32 -0700)]
proc: faster /proc/cmdline

Use seq_puts() and skip format string processing.
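
The change, in spirit (a sketch of the show routine):

  static int cmdline_proc_show(struct seq_file *m, void *v)
  {
          seq_puts(m, saved_command_line);        /* no format parsing */
          seq_putc(m, '\n');
          return 0;
  }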

Link: http://lkml.kernel.org/r/20180309222948.GB3843@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: register filesystem last
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:57 +0000 (16:31 -0700)]
proc: register filesystem last

As soon as register_filesystem() exits, the filesystem can be mounted,
so it is better to present a fully operational /proc by then.

Of course it doesn't matter in practice because /proc is not modular,
but do it anyway.

Drop the error check; a failure here should be handled by panicking.

Link: http://lkml.kernel.org/r/20180309222709.GA3843@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: fix /proc/*/map_files lookup some more
Alexey Dobriyan [Tue, 10 Apr 2018 23:41:14 +0000 (16:41 -0700)]
proc: fix /proc/*/map_files lookup some more

I totally forgot that _parse_integer() accepts an arbitrary amount of
leading zeroes, leading to the following lookups:

OK
# readlink /proc/1/map_files/56427ecba000-56427eddc000
/lib/systemd/systemd

bogus
# readlink /proc/1/map_files/00000000000056427ecba000-56427eddc000
/lib/systemd/systemd
# readlink /proc/1/map_files/56427ecba000-00000000000056427eddc000
/lib/systemd/systemd

Link: http://lkml.kernel.org/r/20180303215130.GA23480@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Cyrill Gorcunov <gorcunov@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agoproc: move "struct proc_dir_entry" into kmem cache
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:52 +0000 (16:31 -0700)]
proc: move "struct proc_dir_entry" into kmem cache

"struct proc_dir_entry" is variable sized because of 0-length trailing
array for name, however, because of SLAB padding allocations it is
possible to make "struct proc_dir_entry" fixed sized and allocate same
amount of memory.

It buys fine-grained debugging with poisoning and usercopy protection
which is not possible with kmalloc-* caches.

Currently, on 32-bit 91+ byte allocations go into kmalloc-128 and on
64-bit 147+ byte allocations go to kmalloc-192 anyway.

Additional memory is allocated only for 38/46+ byte long names which are
rare or may not even exist in the wild.

Link: http://lkml.kernel.org/r/20180223205504.GA17139@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: test /proc/self/syscall
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:48 +0000 (16:31 -0700)]
proc: test /proc/self/syscall

A read from /proc/self/syscall should yield the read system call with
correct args in the output, as current is reading /proc/self/syscall.

Link: http://lkml.kernel.org/r/20180226212145.GB742@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: test /proc/self/wchan
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:45 +0000 (16:31 -0700)]
proc: test /proc/self/wchan

This patch starts testing /proc.  Many more tests to come (I promise).

A read from /proc/self/wchan should always return "0", as current is
in the TASK_RUNNING state while reading /proc/self/wchan.

Link: http://lkml.kernel.org/r/20180226212006.GA742@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  fs/proc/proc_sysctl.c: remove redundant link check in proc_sys_link_fill_cache()
Danilo Krummrich [Tue, 10 Apr 2018 23:31:41 +0000 (16:31 -0700)]
fs/proc/proc_sysctl.c: remove redundant link check in proc_sys_link_fill_cache()

proc_sys_link_fill_cache() does not need to check whether we're called
for a link - it's already done by scan().

Link: http://lkml.kernel.org/r/20180228013506.4915-2-danilokrummrich@dk-develop.de
Signed-off-by: Danilo Krummrich <danilokrummrich@dk-develop.de>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: "Luis R . Rodriguez" <mcgrof@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  fs/proc/proc_sysctl.c: fix potential page fault while unregistering sysctl table
Danilo Krummrich [Tue, 10 Apr 2018 23:31:38 +0000 (16:31 -0700)]
fs/proc/proc_sysctl.c: fix potential page fault while unregistering sysctl table

proc_sys_link_fill_cache() does not take currently unregistering sysctl
tables into account, which might result in a page fault in
sysctl_follow_link(); add a check to fix it.

This bug has been present since v3.4.

Link: http://lkml.kernel.org/r/20180228013506.4915-1-danilokrummrich@dk-develop.de
Fixes: 0e47c99d7fe25 ("sysctl: Replace root_list with links between sysctl_table_sets")
Signed-off-by: Danilo Krummrich <danilokrummrich@dk-develop.de>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: "Luis R . Rodriguez" <mcgrof@kernel.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: use seq_puts() at /proc/*/wchan
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:34 +0000 (16:31 -0700)]
proc: use seq_puts() at /proc/*/wchan

Link: http://lkml.kernel.org/r/20180217072011.GB16074@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Rasmus Villemoes <rasmus.villemoes@prevas.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: check permissions earlier for /proc/*/wchan
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:30 +0000 (16:31 -0700)]
proc: check permissions earlier for /proc/*/wchan

get_wchan() accesses the stack page before permissions are checked;
let's not play this game.

Link: http://lkml.kernel.org/r/20180217071923.GA16074@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Rasmus Villemoes <rasmus.villemoes@prevas.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: replace seq_printf by seq_put_smth to speed up /proc/pid/status
Andrei Vagin [Tue, 10 Apr 2018 23:31:26 +0000 (16:31 -0700)]
proc: replace seq_printf by seq_put_smth to speed up /proc/pid/status

seq_printf() works slower than seq_puts(), seq_putc(), etc.

== test_proc.c
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int n, i, fd;
	char buf[16384];

	n = atoi(argv[1]);
	for (i = 0; i < n; i++) {
		fd = open(argv[2], O_RDONLY);
		if (fd < 0)
			return 1;
		if (read(fd, buf, sizeof(buf)) <= 0)
			return 1;
		close(fd);
	}

	return 0;
}
==

$ time ./test_proc  1000000 /proc/1/status

== Before patch ==
real 0m5.171s
user 0m0.328s
sys 0m4.783s

== After patch ==
real 0m4.761s
user 0m0.334s
sys 0m4.366s

Link: http://lkml.kernel.org/r/20180212074931.7227-4-avagin@openvz.org
Signed-off-by: Andrei Vagin <avagin@openvz.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: optimize single-symbol delimiters to speed up seq_put_decimal_ull
Andrei Vagin [Tue, 10 Apr 2018 23:31:23 +0000 (16:31 -0700)]
proc: optimize single-symbol delimiters to speed up seq_put_decimal_ull

A delimiter is a string which is printed before a number.  A
single-symbol delimiter can be printed by seq_putc(), and this works
faster than printing by seq_puts().
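
Simplified, the fast path inside the seq_put_decimal_*() helpers looks
like this (a sketch, not the committed code):

  if (delimiter[0] && !delimiter[1])      /* single-character delimiter */
          seq_putc(m, delimiter[0]);      /* cheaper than seq_puts() */
  else
          seq_puts(m, delimiter);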

== test_proc.c
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int n, i, fd;
	char buf[16384];

	n = atoi(argv[1]);
	for (i = 0; i < n; i++) {
		fd = open(argv[2], O_RDONLY);
		if (fd < 0)
			return 1;
		if (read(fd, buf, sizeof(buf)) <= 0)
			return 1;
		close(fd);
	}

	return 0;
}
==

$ time ./test_proc  1000000 /proc/1/stat

== Before patch ==
  real 0m3.820s
  user 0m0.337s
  sys 0m3.394s

== After patch ==
  real 0m3.110s
  user 0m0.324s
  sys 0m2.700s

Link: http://lkml.kernel.org/r/20180212074931.7227-3-avagin@openvz.org
Signed-off-by: Andrei Vagin <avagin@openvz.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: replace seq_printf with seq_putc to speed up /proc/pid/smaps
Andrei Vagin [Tue, 10 Apr 2018 23:31:19 +0000 (16:31 -0700)]
proc: replace seq_printf with seq_putc to speed up /proc/pid/smaps

seq_putc() works much faster than seq_printf().

== Before patch ==
  $ time python test_smaps.py
  real    0m3.828s
  user    0m0.413s
  sys     0m3.408s

== After patch ==
  $ time python test_smaps.py
  real 0m3.405s
  user 0m0.401s
  sys 0m3.003s

== Before patch ==
-   75.51%     4.62%  python   [kernel.kallsyms]    [k] show_smap.isra.33
   - 70.88% show_smap.isra.33
      + 24.82% seq_put_decimal_ull_aligned
      + 19.78% __walk_page_range
      + 12.74% seq_printf
      + 11.08% show_map_vma.isra.23
      + 1.68% seq_puts

== After patch ==
-   69.16%     5.70%  python   [kernel.kallsyms]    [k] show_smap.isra.33
   - 63.46% show_smap.isra.33
      + 25.98% seq_put_decimal_ull_aligned
      + 20.90% __walk_page_range
      + 12.60% show_map_vma.isra.23
        1.56% seq_putc
      + 1.55% seq_puts

Link: http://lkml.kernel.org/r/20180212074931.7227-2-avagin@openvz.org
Signed-off-by: Andrei Vagin <avagin@openvz.org>
Reviewed-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: add seq_put_decimal_ull_width to speed up /proc/pid/smaps
Andrei Vagin [Tue, 10 Apr 2018 23:31:16 +0000 (16:31 -0700)]
proc: add seq_put_decimal_ull_width to speed up /proc/pid/smaps

seq_put_decimal_ull_w(m, str, val, width) prints a decimal number with a
specified minimum field width.

It is equivalent to seq_printf(m, "%s%*d", str, width, val), but it
works much faster.
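
Side by side, the stated equivalence (illustrative caller):

  seq_printf(m, "%s%*d", "Rss:", 8, val);         /* parses the format */
  seq_put_decimal_ull_w(m, "Rss:", val, 8);       /* writes directly */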

== test_smaps.py
  num = 0
  with open("/proc/1/smaps") as f:
          for x in xrange(10000):
                  data = f.read()
                  f.seek(0, 0)
==

== Before patch ==
  $ time python test_smaps.py
  real    0m4.593s
  user    0m0.398s
  sys     0m4.158s

== After patch ==
  $ time python test_smaps.py
  real    0m3.828s
  user    0m0.413s
  sys     0m3.408s

$ perf -g record python test_smaps.py
== Before patch ==
-   79.01%     3.36%  python   [kernel.kallsyms]    [k] show_smap.isra.33
   - 75.65% show_smap.isra.33
      + 48.85% seq_printf
      + 15.75% __walk_page_range
      + 9.70% show_map_vma.isra.23
        0.61% seq_puts

== After patch ==
-   75.51%     4.62%  python   [kernel.kallsyms]    [k] show_smap.isra.33
   - 70.88% show_smap.isra.33
      + 24.82% seq_put_decimal_ull_w
      + 19.78% __walk_page_range
      + 12.74% seq_printf
      + 11.08% show_map_vma.isra.23
      + 1.68% seq_puts

[akpm@linux-foundation.org: fix drivers/of/unittest.c build]
Link: http://lkml.kernel.org/r/20180212074931.7227-1-avagin@openvz.org
Signed-off-by: Andrei Vagin <avagin@openvz.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: account "struct pde_opener"
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:12 +0000 (16:31 -0700)]
proc: account "struct pde_opener"

The allocation is in fact persistent, as any fool can open a file in
/proc and sit on it.

Link: http://lkml.kernel.org/r/20180214082409.GC17157@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agoproc: move "struct pde_opener" to kmem cache
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:09 +0000 (16:31 -0700)]
proc: move "struct pde_opener" to kmem cache

"struct pde_opener" is fixed size and we can have more granular approach
to debugging.

For those who don't know, per cache SLUB poisoning and red zoning don't
work if there is at least one object allocated which is hopeless in case
of kmalloc-64 but not in case of standalone cache.  Although systemd
opens 2 files from the get go, so it is hopeless after all.

Link: http://lkml.kernel.org/r/20180214082306.GB17157@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: randomize "struct pde_opener"
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:05 +0000 (16:31 -0700)]
proc: randomize "struct pde_opener"

The more the merrier.

Link: http://lkml.kernel.org/r/20180214081935.GA17157@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: faster open/close of files without ->release hook
Alexey Dobriyan [Tue, 10 Apr 2018 23:31:01 +0000 (16:31 -0700)]
proc: faster open/close of files without ->release hook

The whole point of the code in fs/proc/inode.c is to make sure the
->release hook is called either at close() or at rmmod time.

All of it is unnecessary if there is no ->release hook.

Save the allocation and list manipulation under the spinlock in that
case.

Link: http://lkml.kernel.org/r/20180214063033.GA15579@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: move /proc/sysvipc creation to where it belongs
Alexey Dobriyan [Tue, 10 Apr 2018 23:30:58 +0000 (16:30 -0700)]
proc: move /proc/sysvipc creation to where it belongs

Move the proc_mkdir() call within the sysvipc subsystem such that we
avoid polluting proc_root_init() with petty cpp.

[dave@stgolabs.net: contributed changelog]
Link: http://lkml.kernel.org/r/20180216161732.GA10297@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: do less stuff under ->pde_unload_lock
Alexey Dobriyan [Tue, 10 Apr 2018 23:30:54 +0000 (16:30 -0700)]
proc: do less stuff under ->pde_unload_lock

Commit ca469f35a8e9ef ("deal with races between remove_proc_entry() and
proc_reg_release()") moved too much stuff under ->pde_unload_lock,
making the problem described in the series "[PATCH v5] procfs: Improve
Scaling in proc" worse.

While RCU is being figured out, move kfree() out of ->pde_unload_lock.

On my potato, the difference is only a 0.5% speedup with concurrent
open+read+close of /proc/cmdline, but the effect should be more
noticeable on more capable machines.

$ perf stat -r 16 -- ./proc-j 16

 Performance counter stats for './proc-j 16' (16 runs):

     130569.502377      task-clock (msec)         #   15.872 CPUs utilized            ( +-  0.05% )
            19,169      context-switches          #    0.147 K/sec                    ( +-  0.18% )
                15      cpu-migrations            #    0.000 K/sec                    ( +-  3.27% )
               437      page-faults               #    0.003 K/sec                    ( +-  1.25% )
   300,172,097,675      cycles                    #    2.299 GHz                      ( +-  0.05% )
    96,793,267,308      instructions              #    0.32  insn per cycle           ( +-  0.04% )
    22,798,342,298      branches                  #  174.607 M/sec                    ( +-  0.04% )
       111,764,687      branch-misses             #    0.49% of all branches          ( +-  0.47% )

       8.226574400 seconds time elapsed                                          ( +-  0.05% )
       ^^^^^^^^^^^

$ perf stat -r 16 -- ./proc-j 16

 Performance counter stats for './proc-j 16' (16 runs):

     129866.777392      task-clock (msec)         #   15.869 CPUs utilized            ( +-  0.04% )
            19,154      context-switches          #    0.147 K/sec                    ( +-  0.66% )
                14      cpu-migrations            #    0.000 K/sec                    ( +-  1.73% )
               431      page-faults               #    0.003 K/sec                    ( +-  1.09% )
   298,556,520,546      cycles                    #    2.299 GHz                      ( +-  0.04% )
    96,525,366,833      instructions              #    0.32  insn per cycle           ( +-  0.04% )
    22,730,194,043      branches                  #  175.027 M/sec                    ( +-  0.04% )
       111,506,074      branch-misses             #    0.49% of all branches          ( +-  0.18% )

       8.183629778 seconds time elapsed                                          ( +-  0.04% )
       ^^^^^^^^^^^

Link: http://lkml.kernel.org/r/20180213132911.GA24298@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  proc: get rid of task lock/unlock pair to read umask for the "status" file
Mateusz Guzik [Tue, 10 Apr 2018 23:30:51 +0000 (16:30 -0700)]
proc: get rid of task lock/unlock pair to read umask for the "status" file

get_task_umask locks/unlocks the task on its own.  The only caller does
the same thing immediately after.

Utilize the fact that the task has to be locked anyway and just do it once.
Since there are no other users and the code is short, fold it in.

Link: http://lkml.kernel.org/r/1517995608-23683-1-git-send-email-mguzik@redhat.com
Signed-off-by: Mateusz Guzik <mguzik@redhat.com>
Reviewed-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  procfs: optimize seq_pad() to speed up /proc/pid/maps
Andrei Vagin [Tue, 10 Apr 2018 23:30:47 +0000 (16:30 -0700)]
procfs: optimize seq_pad() to speed up /proc/pid/maps

seq_printf() is slow and it can be replaced by memset() in this case.
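
Simplified, the memset()-based version of seq_pad() looks roughly like
this (a sketch, overflow handling omitted, not necessarily the
committed code):

  void seq_pad(struct seq_file *m, char c)
  {
          int size = m->pad_until - m->count;

          if (size > 0) {
                  /* Write the padding straight into the buffer. */
                  memset(m->buf + m->count, ' ', size);
                  m->count += size;
          }
          if (c)
                  seq_putc(m, c);
  }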

== test.py
  num = 0
  with open("/proc/1/maps") as f:
          while num < 10000 :
                  data = f.read()
                  f.seek(0, 0)
                  num = num + 1
==

== Before patch ==
  $  time python test.py
  real 0m0.986s
  user 0m0.279s
  sys 0m0.707s

== After patch ==
  $ time python test.py
  real 0m0.932s
  user 0m0.261s
  sys 0m0.669s

$ perf record -g python test.py
== Before patch ==
-   47.35%     3.38%  python   [kernel.kallsyms] [k] show_map_vma.isra.23
   - 43.97% show_map_vma.isra.23
      + 20.84% seq_path
      - 15.73% show_vma_header_prefix
      + 6.96% seq_pad
   + 2.94% __GI___libc_read

== After patch ==
-   44.01%     0.34%  python   [kernel.kallsyms] [k] show_pid_map
   - 43.67% show_pid_map
      - 42.91% show_map_vma.isra.23
         + 21.55% seq_path
         - 15.68% show_vma_header_prefix
         + 2.08% seq_pad
        0.55% seq_putc

Link: http://lkml.kernel.org/r/20180112185812.7710-2-avagin@openvz.org
Signed-off-by: Andrei Vagin <avagin@openvz.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  procfs: add seq_put_hex_ll to speed up /proc/pid/maps
Andrei Vagin [Tue, 10 Apr 2018 23:30:44 +0000 (16:30 -0700)]
procfs: add seq_put_hex_ll to speed up /proc/pid/maps

seq_put_hex_ll() prints a number in hexadecimal notation and works
faster than seq_printf().
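
An illustrative caller, modeled on the show_vma_header_prefix() usage:

  seq_put_hex_ll(m, NULL, vma->vm_start, 8);
  seq_put_hex_ll(m, "-", vma->vm_end, 8);
  seq_putc(m, ' ');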

== test.py
  num = 0
  with open("/proc/1/maps") as f:
          while num < 10000 :
                  data = f.read()
                  f.seek(0, 0)
                  num = num + 1
==

== Before patch ==
  $  time python test.py

  real 0m1.561s
  user 0m0.257s
  sys 0m1.302s

== After patch ==
  $ time python test.py

  real 0m0.986s
  user 0m0.279s
  sys 0m0.707s

$ perf -g record python test.py:

== Before patch ==
-   67.42%     2.82%  python   [kernel.kallsyms] [k] show_map_vma.isra.22
   - 64.60% show_map_vma.isra.22
      - 44.98% seq_printf
         - seq_vprintf
            - vsnprintf
               + 14.85% number
               + 12.22% format_decode
                 5.56% memcpy_erms
      + 15.06% seq_path
      + 4.42% seq_pad
   + 2.45% __GI___libc_read

== After patch ==
-   47.35%     3.38%  python   [kernel.kallsyms] [k] show_map_vma.isra.23
   - 43.97% show_map_vma.isra.23
      + 20.84% seq_path
      - 15.73% show_vma_header_prefix
           10.55% seq_put_hex_ll
         + 2.65% seq_put_decimal_ull
           0.95% seq_putc
      + 6.96% seq_pad
   + 2.94% __GI___libc_read

[avagin@openvz.org: use unsigned int instead of int where it is suitable]
Link: http://lkml.kernel.org/r/20180214025619.4005-1-avagin@openvz.org
[avagin@openvz.org: v2]
Link: http://lkml.kernel.org/r/20180117082050.25406-1-avagin@openvz.org
Link: http://lkml.kernel.org/r/20180112185812.7710-1-avagin@openvz.org
Signed-off-by: Andrei Vagin <avagin@openvz.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  kasan: prevent compiler from optimizing away memset in tests
Andrey Konovalov [Tue, 10 Apr 2018 23:30:39 +0000 (16:30 -0700)]
kasan: prevent compiler from optimizing away memset in tests

A compiler can optimize away memset calls by replacing them with mov
instructions.  There are KASAN tests that specifically test that KASAN
correctly handles memset calls, so we don't want this optimization to
happen.

The solution is to add the -fno-builtin flag to the test_kasan.ko build.

Link: http://lkml.kernel.org/r/105ec9a308b2abedb1a0d1fdced0c22d765e4732.1519924383.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Chris Mason <clm@fb.com>
Cc: Yury Norov <ynorov@caviumnetworks.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "Luis R . Rodriguez" <mcgrof@kernel.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Jeff Layton <jlayton@redhat.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Cc: Kostya Serebryany <kcc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  kasan: fix invalid-free test crashing the kernel
Andrey Konovalov [Tue, 10 Apr 2018 23:30:35 +0000 (16:30 -0700)]
kasan: fix invalid-free test crashing the kernel

When an invalid-free is triggered by one of the KASAN tests, the object
doesn't actually get freed.  This later leads to a BUG failure in
kmem_cache_destroy that checks that there are no allocated objects in
the cache that is being destroyed.

Fix this by calling kmem_cache_free with the proper object address after
the call that triggers the invalid-free.
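
Sketched, the fixed test pattern (simplified from lib/test_kasan.c):

  p = kmem_cache_alloc(cache, GFP_KERNEL);
  kmem_cache_free(cache, p + 1);  /* triggers the invalid-free report */
  kmem_cache_free(cache, p);      /* really release the object */
  kmem_cache_destroy(cache);      /* no allocated objects left: no BUG */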

Link: http://lkml.kernel.org/r/286eaefc0a6c3fa9b83b87e7d6dc0fbb5b5c9926.1519924383.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Chris Mason <clm@fb.com>
Cc: Yury Norov <ynorov@caviumnetworks.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "Luis R . Rodriguez" <mcgrof@kernel.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Jeff Layton <jlayton@redhat.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Cc: Kostya Serebryany <kcc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months ago  kasan, slub: fix handling of kasan_slab_free hook
Andrey Konovalov [Tue, 10 Apr 2018 23:30:31 +0000 (16:30 -0700)]
kasan, slub: fix handling of kasan_slab_free hook

The kasan_slab_free hook's return value denotes whether the reuse of a
slab object must be delayed (e.g. when the object is put into memory
quarantine).

The current way SLUB handles this hook is by ignoring its return value
and hardcoding checks similar (but not identical) to the ones performed
in kasan_slab_free, which is prone to making mistakes.

The main difference between the hardcoded checks and the ones in
kasan_slab_free is whether we want to perform a free in the case when
an invalid-free or a double-free was detected (we don't).

This patch changes the way SLUB handles this by:
1. taking into account the return value of kasan_slab_free for each of
   the objects that are being freed;
2. reconstructing the freelist of objects to exclude the ones whose
   reuse must be delayed, as sketched below.
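
Very roughly, the reworked handling has this shape (a simplified sketch
with a NULL-terminated freelist and a hypothetical slab_free_hook()
wrapper around kasan_slab_free(), not the committed code):

  void *head = NULL, *tail = NULL, *object = freelist;

  while (object) {
          void *next = get_freepointer(s, object);

          /* The hook returns true when reuse must be delayed,
           * e.g. the object went into the KASAN quarantine. */
          if (!slab_free_hook(s, object)) {
                  set_freepointer(s, object, head);
                  head = object;
                  if (!tail)
                          tail = object;
          }
          object = next;
  }
  /* head/tail now form the freelist of objects safe to reuse. */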

[andreyknvl@google.com: eliminate unnecessary branch in slab_free]
Link: http://lkml.kernel.org/r/a62759a2545fddf69b0c034547212ca1eb1b3ce2.1520359686.git.andreyknvl@google.com
Link: http://lkml.kernel.org/r/083f58501e54731203801d899632d76175868e97.1519400992.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Kostya Serebryany <kcc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/thp: don't count ZONE_MOVABLE as the target for freepage reserving
Joonsoo Kim [Tue, 10 Apr 2018 23:30:27 +0000 (16:30 -0700)]
mm/thp: don't count ZONE_MOVABLE as the target for freepage reserving

There was a regression report for "mm/cma: manage the memory of the CMA
area by using the ZONE_MOVABLE" [1] and I think that it is related to
this problem.  The CMA patchset makes the system use one more zone
(ZONE_MOVABLE) and then increases min_free_kbytes.  That reduces usable
memory and could cause a regression.

ZONE_MOVABLE only has movable pages so we don't need to keep enough
freepages to avoid or deal with fragmentation.  So, don't count it.
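
The idea, as a simplified sketch (the helper below is hypothetical; the
real patch adjusts the existing min_free_kbytes recalculation paths):

  static unsigned long pages_needing_reserve(void)
  {
          struct zone *zone;
          unsigned long pages = 0;

          for_each_populated_zone(zone) {
                  /*
                   * ZONE_MOVABLE holds only movable pages, so it needs
                   * no freepage reserve against fragmentation.
                   */
                  if (zone_idx(zone) == ZONE_MOVABLE)
                          continue;
                  pages += zone->managed_pages;
          }
          return pages;
  }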

This greatly changes min_free_kbytes and thus the min watermark if
ZONE_MOVABLE is used.  It lets the user use more memory.

System:
  22GB ram, fakenuma, 2 nodes. 5 zones are used.

Before:
  min_free_kbytes: 112640

  zone_info (min_watermark):
  Node 0, zone      DMA
          min      19
  Node 0, zone    DMA32
          min      3778
  Node 0, zone   Normal
          min      10191
  Node 0, zone  Movable
          min      0
  Node 0, zone   Device
          min      0
  Node 1, zone      DMA
          min      0
  Node 1, zone    DMA32
          min      0
  Node 1, zone   Normal
          min      14043
  Node 1, zone  Movable
          min      127
  Node 1, zone   Device
          min      0

After:
  min_free_kbytes: 90112

  zone_info (min_watermark):
  Node 0, zone      DMA
          min      15
  Node 0, zone    DMA32
          min      3022
  Node 0, zone   Normal
          min      8152
  Node 0, zone  Movable
          min      0
  Node 0, zone   Device
          min      0
  Node 1, zone      DMA
          min      0
  Node 1, zone    DMA32
          min      0
  Node 1, zone   Normal
          min      11234
  Node 1, zone  Movable
          min      102
  Node 1, zone   Device
          min      0

[1] lkml.kernel.org/r/20180102063528.GG30397@yexl-desktop

Link: http://lkml.kernel.org/r/1522913236-15776-1-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agoARM: CMA: avoid double mapping to the CMA area if CONFIG_HIGHMEM=y
Joonsoo Kim [Tue, 10 Apr 2018 23:30:23 +0000 (16:30 -0700)]
ARM: CMA: avoid double mapping to the CMA area if CONFIG_HIGHMEM=y

The CMA area is now managed by a separate zone, ZONE_MOVABLE, to fix
many MM related problems.  In this implementation, if CONFIG_HIGHMEM=y,
then ZONE_MOVABLE is considered HIGHMEM and the memory of the CMA area
is also considered HIGHMEM.  That means it is treated as pages without a
direct mapping.  However, the CMA area could be in lowmem, in which case
the memory does have a direct mapping.

In ARM, when establishing a new mapping for DMA, the direct mapping
should be cleared, since two mappings with different cache policies
could cause unknown problems.  With this change, PageHighmem() returns
true for CMA memory located in lowmem, so the DMA mapping code can no
longer tell whether it needs to clear the direct mapping.  To handle
this situation, this patch always clears the direct mapping for such
CMA memory.

Link: http://lkml.kernel.org/r/1512114786-5085-4-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/cma: remove ALLOC_CMA
Joonsoo Kim [Tue, 10 Apr 2018 23:30:19 +0000 (16:30 -0700)]
mm/cma: remove ALLOC_CMA

Now all pages reserved for the CMA region belong to ZONE_MOVABLE, which
only serves requests with GFP_HIGHMEM && GFP_MOVABLE.

Therefore, we don't need to maintain ALLOC_CMA at all.
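
The reason, as a simplified sketch (ignoring the bit-table
implementation of the real gfp_zone()):

  static enum zone_type zone_for_request(gfp_t flags)
  {
          /*
           * Only this flag combination can reach ZONE_MOVABLE, and
           * thus the CMA memory, so no ALLOC_CMA bookkeeping in the
           * watermark check is needed anymore.
           */
          if ((flags & (__GFP_HIGHMEM | __GFP_MOVABLE)) ==
                       (__GFP_HIGHMEM | __GFP_MOVABLE))
                  return ZONE_MOVABLE;
          if (flags & __GFP_HIGHMEM)
                  return ZONE_HIGHMEM;
          return ZONE_NORMAL;
  }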

Link: http://lkml.kernel.org/r/1512114786-5085-3-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE
Joonsoo Kim [Tue, 10 Apr 2018 23:30:15 +0000 (16:30 -0700)]
mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE

Patch series "mm/cma: manage the memory of the CMA area by using the
ZONE_MOVABLE", v2.

0. History

This patchset is the follow-up of the discussion about the "Introduce
ZONE_CMA (v7)" [1].  Please reference it if more information is needed.

1. What does this patch do?

This patch changes how the memory of the CMA area is managed in the MM
subsystem.  Currently the memory of the CMA area is managed by the zone
that its pfns belong to.  However, this approach has some problems,
since the MM subsystem doesn't have enough logic to handle the
situation where memories with different characteristics are in a single
zone.  To solve this issue, this patch tries to manage all the memory
of the CMA area by using the MOVABLE zone.  From the MM subsystem's
point of view, the characteristics of the memory in the MOVABLE zone
and the memory of the CMA area are the same, so managing the memory of
the CMA area by using the MOVABLE zone will not cause any problem.

2. Motivation

There are some problems with the current approach; see the following.
Although these problems are not inherent and could be fixed without
this conceptual change, doing so would require adding many hooks in
various code paths, which would be intrusive to core MM and really
error-prone.  Therefore, I try to solve them with this new approach.
Anyway, the following are the problems of the current implementation.

CMA memory utilization:

First, the following is the freepage calculation logic in MM.

 - For movable allocation: freepage = total freepage
 - For unmovable allocation: freepage = total freepage - CMA freepage

Freepages in the CMA area are used only after the normal freepages in
the zone that the memory of the CMA area belongs to are exhausted.  At
that moment the number of normal freepages is zero, so

 - For movable allocation: freepage = total freepage = CMA freepage
 - For unmovable allocation: freepage = 0

If an unmovable allocation comes at this moment, the allocation request
fails the watermark check and reclaim is started.  After reclaim,
normal freepages exist again, so freepages in the CMA area would still
not be used.
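
This behavior comes from the watermark check, which looks roughly like
this (simplified from __zone_watermark_ok()):

  long free_pages = zone_page_state(z, NR_FREE_PAGES);

  #ifdef CONFIG_CMA
  /* Unmovable requests must not count on CMA freepages. */
  if (!(alloc_flags & ALLOC_CMA))
          free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
  #endif

  if (free_pages <= watermark + z->lowmem_reserve[classzone_idx])
          return false;   /* fail the check; reclaim gets started */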

FYI, there is another attempt [2] trying to solve this problem in lkml.
And, as far as I know, Qualcomm also has out-of-tree solution for this
problem.

Useless reclaim:

There is no logic to distinguish CMA pages in the reclaim path.  Hence,
a CMA page is reclaimed even if the system just needs a page that is
usable for kernel allocations.

Atomic allocation failure:

This is also related to the fallback allocation policy for the memory
of the CMA area.  Consider the situation where the number of normal
freepages is *zero* because a bunch of movable allocation requests have
come in.  Kswapd would not be woken up, due to the following freepage
calculation logic.

- For movable allocation: freepage = total freepage = CMA freepage

If an atomic unmovable allocation request comes at this moment, it
fails due to the following logic.

- For unmovable allocation: freepage = total freepage - CMA freepage = 0

It was reported by Aneesh [3].

Useless compaction:

The usual high-order allocation request is an unmovable allocation
request, which cannot be served from the memory of the CMA area.  In
compaction, the migration scanner tries to migrate pages in the CMA
area and to create high-order pages there.  As mentioned above, those
cannot be used for unmovable allocation requests, so the work is just
wasted.

3. Current approach and new approach

The current approach is that the memory of the CMA area is managed by
the zone that its pfns belong to.  However, this memory should be
distinguishable since it has a strong limitation.  So, it is marked as
MIGRATE_CMA in the pageblock flags and handled specially.  However, as
mentioned in section 2, the MM subsystem doesn't have enough logic to
deal with this special pageblock, so many problems have arisen.

The new approach is that the memory of the CMA area is managed by the
MOVABLE zone.  MM already has enough logic to deal with special zones
like the HIGHMEM and MOVABLE zones.  So, managing the memory of the CMA
area by the MOVABLE zone just naturally works well, because the
constraint for the memory of the CMA area, that the memory should
always be migratable, is the same as the constraint for the MOVABLE
zone.

There is one side-effect for the usability of the memory of the CMA
area.  The use of the MOVABLE zone is only allowed for a request with
GFP_HIGHMEM && GFP_MOVABLE, so now the memory of the CMA area is also
only allowed for this gfp flag.  Before this patchset, any request with
GFP_MOVABLE could use it.  IMO, it would not be a big issue since most
GFP_MOVABLE requests also have the GFP_HIGHMEM flag, for example, file
cache pages and anonymous pages.  However, file cache pages for
blockdev files are an exception: requests for them have no GFP_HIGHMEM
flag.  There are pros and cons to this exception.  In my experience,
blockdev file cache pages are one of the top reasons that cause
cma_alloc() to fail temporarily.  So, we can get more guarantee of
cma_alloc() success by discarding this case.

Note that there is no change from the admin's point of view, since this
patchset is just an internal implementation change in the MM subsystem.
The one minor difference for the admin is that the memory stats for the
CMA area will be printed in the MOVABLE zone.  That's all.

4. Result

Following is the experimental result related to utilization problem.

8 CPUs, 1024 MB, VIRTUAL MACHINE
make -j16

<Before>
  CMA area:               0 MB            512 MB
  Elapsed-time:           92.4            186.5
  pswpin:                 82              18647
  pswpout:                160             69839

<After>
  CMA area:               0 MB            512 MB
  Elapsed-time:           93.1            93.4
  pswpin:                 84              46
  pswpout:                183             92

akpm: "kernel test robot" reported a 26% improvement in
vm-scalability.throughput:
http://lkml.kernel.org/r/20180330012721.GA3845@yexl-desktop

[1]: lkml.kernel.org/r/1491880640-9944-1-git-send-email-iamjoonsoo.kim@lge.com
[2]: https://lkml.org/lkml/2014/10/15/623
[3]: http://www.spinics.net/lists/linux-mm/msg100562.html

Link: http://lkml.kernel.org/r/1512114786-5085-2-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/page_alloc: don't reserve ZONE_HIGHMEM for ZONE_MOVABLE request
Joonsoo Kim [Tue, 10 Apr 2018 23:30:11 +0000 (16:30 -0700)]
mm/page_alloc: don't reserve ZONE_HIGHMEM for ZONE_MOVABLE request

Freepages in ZONE_HIGHMEM don't work for kernel memory, so it's not
that important to reserve them.  When ZONE_MOVABLE is used, this
reserve would theoretically decrease the usable memory for
GFP_HIGHUSER_MOVABLE allocation requests, which are mainly used for
page cache and anon page allocation.  So, fix it by setting
sysctl_lowmem_reserve_ratio[ZONE_HIGHMEM] to 0.

Also, defining the sysctl_lowmem_reserve_ratio array with
MAX_NR_ZONES - 1 entries makes the code complex.  For example, on a
highmem system the following reserve ratio is activated for the
*NORMAL ZONE*, which could easily mislead people.

 #ifdef CONFIG_HIGHMEM
 32
 #endif

This patch also fixes this situation by defining the
sysctl_lowmem_reserve_ratio array with MAX_NR_ZONES entries and placing
the "#ifdef" in the right place.
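
The resulting table then reads naturally, something like (a sketch
mirroring the shape of the array in mm/page_alloc.c):

  int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES] = {
  #ifdef CONFIG_ZONE_DMA
          [ZONE_DMA]      = 256,
  #endif
  #ifdef CONFIG_ZONE_DMA32
          [ZONE_DMA32]    = 256,
  #endif
          [ZONE_NORMAL]   = 32,
  #ifdef CONFIG_HIGHMEM
          [ZONE_HIGHMEM]  = 0,    /* useless for kernel memory */
  #endif
          [ZONE_MOVABLE]  = 0,
  };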

Link: http://lkml.kernel.org/r/1504672525-17915-1-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Tony Lindgren <tony@atomide.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <linux-api@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm: unclutter THP migration
Michal Hocko [Tue, 10 Apr 2018 23:30:07 +0000 (16:30 -0700)]
mm: unclutter THP migration

THP migration is hacked into the generic migration path with rather
surprising semantics.  The migration allocation callback is supposed to
check whether the THP can be migrated at once and, if that is not the
case, it allocates a simple page to migrate.  unmap_and_move then fixes
that up by splitting the THP into small pages while moving the head
page to the newly allocated order-0 page.  The remaining pages are
moved to the LRU list by split_huge_page.  The same happens if the THP
allocation fails.  This is really ugly and error prone [1].

I also believe that split_huge_page to the LRU lists is inherently
wrong, because the tail pages are not migrated at all.  Some callers
will just work around that by retrying (e.g.  memory hotplug).  There
are other pfn walkers which are simply broken, though: e.g.
madvise_inject_error will migrate the head page and then advance the
next pfn by the huge page size.  do_move_page_to_node_array and
queue_pages_range (migrate_pages, mbind) will simply split the THP
before migration if THP migration is not supported, then fall back to
single page migration; but they don't handle the tail pages if the THP
migration path is not able to allocate a fresh THP, so we end up with
ENOMEM and fail the whole migration, which is questionable behavior.
Page compaction doesn't try to migrate large pages, so it should be
immune.

This patch tries to unclutter the situation by moving the special THP
handling up to the migrate_pages layer where it actually belongs.  We
simply split the THP page into the existing list if unmap_and_move fails
with ENOMEM and retry.  So we will _always_ migrate all THP subpages and
specific migrate_pages users do not have to deal with this case in a
special way.
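
The retry logic amounts to something like the following inside the
migrate_pages() loop (a simplified sketch of the -ENOMEM case):

  case -ENOMEM:
          /*
           * THP migration might be unsupported or the allocation
           * might have failed, so retry with the THP split into
           * base pages on the same list.
           */
          if (PageTransHuge(page)) {
                  lock_page(page);
                  rc = split_huge_page_to_list(page, from);
                  unlock_page(page);
                  if (!rc) {
                          list_safe_reset_next(page, page2, lru);
                          goto retry;
                  }
          }
          nr_failed++;
          goto out;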

[1] http://lkml.kernel.org/r/20171121021855.50525-1-zi.yan@sent.com

Link: http://lkml.kernel.org/r/20180103082555.14592-4-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Zi Yan <zi.yan@cs.rutgers.edu>
Cc: Andrea Reale <ar@linux.vnet.ibm.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm, migrate: remove reason argument from new_page_t
Michal Hocko [Tue, 10 Apr 2018 23:30:03 +0000 (16:30 -0700)]
mm, migrate: remove reason argument from new_page_t

No allocation callback is using this argument anymore.  new_page_node
used to use this parameter to convey the node_id or the migration error
up to the move_pages code (do_move_page_to_node_array).  The error
status never made it into the final status field, and we have a better
way to communicate the node id to the status field now.  All other
allocation callbacks simply ignored the argument, so we can finally
drop it.

[mhocko@suse.com: fix migration callback]
Link: http://lkml.kernel.org/r/20180105085259.GH2801@dhcp22.suse.cz
[akpm@linux-foundation.org: fix alloc_misplaced_dst_page()]
[mhocko@kernel.org: fix build]
Link: http://lkml.kernel.org/r/20180103091134.GB11319@dhcp22.suse.cz
Link: http://lkml.kernel.org/r/20180103082555.14592-3-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Zi Yan <zi.yan@cs.rutgers.edu>
Cc: Andrea Reale <ar@linux.vnet.ibm.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm, numa: rework do_pages_move
Michal Hocko [Tue, 10 Apr 2018 23:29:59 +0000 (16:29 -0700)]
mm, numa: rework do_pages_move

Patch series "unclutter thp migration"

Motivation:

THP migration is hacked into the generic migration path with rather
surprising semantics.  The migration allocation callback is supposed to
check whether the THP can be migrated at once and, if that is not the
case, it allocates a simple page to migrate.  unmap_and_move then fixes
that up by splitting the THP into small pages while moving the head
page to the newly allocated order-0 page.  The remaining pages are
moved to the LRU list by split_huge_page.  The same happens if the THP
allocation fails.  This is really ugly and error prone [2].

I also believe that split_huge_page to the LRU lists is inherently
wrong, because the tail pages are not migrated at all.  Some callers
will just work around that by retrying (e.g.  memory hotplug).  There
are other pfn walkers which are simply broken, though: e.g.
madvise_inject_error will migrate the head page and then advance the
next pfn by the huge page size.  do_move_page_to_node_array and
queue_pages_range (migrate_pages, mbind) will simply split the THP
before migration if THP migration is not supported, then fall back to
single page migration; but they don't handle the tail pages if the THP
migration path is not able to allocate a fresh THP, so we end up with
ENOMEM and fail the whole migration, which is questionable behavior.
Page compaction doesn't try to migrate large pages, so it should be
immune.

The first patch reworks do_pages_move which relies on a very ugly
calling semantic when the return status is pushed to the migration path
via private pointer.  It uses pre allocated fixed size batching to
achieve that.  We simply cannot do the same if a THP is to be split
during the migration path which is done in the patch 3.  Patch 2 is
follow up cleanup which removes the mentioned return status calling
convention ugliness.

On a side note:

There are some semantic issues I have encountered on the way when
working on patch 1, but I am not addressing them here.  E.g. trying to
move THP tail pages will result in either success or EBUSY (the latter
more likely once we isolate the head from the LRU list).  Hugetlb
reports EACCES on tail pages.  Some errors are reported via the status
parameter but migration failures are not, even though the original
`reason' argument suggests there was an intention to do so.  From a
quick look into git history this never worked.  I have tried to keep
the semantics unchanged.

Then there is a relatively minor thing that the page isolation might
fail because of pages not being on the LRU - e.g. because they are
sitting on the per-cpu LRU caches.  Easily fixable.

This patch (of 3):

do_pages_move is supposed to move user defined memory (an array of
addresses) to the user defined numa nodes (an array of nodes one for
each address).  The user provided status array then contains resulting
numa node for each address or an error.  The semantics of this function
are a little bit confusing because only some errors are reported back.
Notably, a migrate_pages error is only reported via the return value.
This patch doesn't try to address these semantic nuances but rather
changes the underlying implementation.

Currently we are processing user input (which can be really large) in
batches which are stored to a temporarily allocated page.  Each address
is resolved to its struct page and stored to page_to_node structure
along with the requested target numa node.  The array of these
structures is then conveyed down the page migration path via private
argument.  new_page_node then finds the corresponding structure and
allocates the proper target page.

What is the problem with the current implementation and why change it?
Apart from being quite ugly, it also doesn't cope with unexpected pages
showing up on the migration list inside the migrate_pages path.  That
doesn't happen currently, but the follow-up patch would like to make
the thp migration code clearer, and that would need to split a THP into
the list in some cases.

How does the new implementation work? Well, instead of batching into a
fixed size array we simply batch all pages that should be migrated to
the same node and isolate them onto a linked list, which doesn't
require any additional storage.  This should work reasonably well
because page migration usually migrates larger ranges of memory to a
specific node.  So the common case should work as well as the current
implementation.  Even if somebody constructs an input where the target
numa nodes would be interleaved, we shouldn't see a large performance
impact because page migration alone doesn't really benefit from
batching.  mmap_sem batching for the lookup is quite questionable, and
isolate_lru_page, which would benefit from batching, is not using it
even in the current implementation.
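
In outline, the new loop looks something like this (a sketch;
add_page_for_migration() and do_move_pages_to_node() stand for the
helpers this rework introduces, simplified here):

  int current_node = NUMA_NO_NODE;
  LIST_HEAD(pagelist);

  for (i = 0; i < nr_pages; i++) {
          /* addr and node are read from the user-supplied arrays. */
          if (current_node == NUMA_NO_NODE) {
                  current_node = node;
          } else if (node != current_node) {
                  /* Target changed: migrate the batch collected so far. */
                  err = do_move_pages_to_node(mm, &pagelist, current_node);
                  current_node = node;
          }

          /* Isolate the page straight onto the list; no extra storage. */
          err = add_page_for_migration(mm, addr, current_node, &pagelist);
  }

  /* Flush the final batch. */
  err = do_move_pages_to_node(mm, &pagelist, current_node);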

Link: http://lkml.kernel.org/r/20180103082555.14592-2-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Zi Yan <zi.yan@cs.rutgers.edu>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Reale <ar@linux.vnet.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/swapfile.c: make pointer swap_avail_heads static
Colin Ian King [Tue, 10 Apr 2018 23:29:55 +0000 (16:29 -0700)]
mm/swapfile.c: make pointer swap_avail_heads static

The pointer swap_avail_heads is local to the source and does not need to
be in global scope, so make it static.

Cleans up sparse warning:

  mm/swapfile.c:88:19: warning: symbol 'swap_avail_heads' was not declared. Should it be static?

Link: http://lkml.kernel.org/r/20180206215836.12366-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: "Huang, Ying" <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomemcg: fix per_node_info cleanup
Michal Hocko [Tue, 10 Apr 2018 23:29:52 +0000 (16:29 -0700)]
memcg: fix per_node_info cleanup

syzbot has triggered a NULL ptr dereference when allocation fault
injection enforces a failure and alloc_mem_cgroup_per_node_info
initializes memcg->nodeinfo only half way through.

But __mem_cgroup_free still tries to free all per-node data and
dereferences pn->lruvec_stat_cpu unconditionally, even if the specific
per-node data hasn't been initialized.

The bug is quite unlikely to hit because small allocations do not fail
and we would need quite some numa nodes to make struct
mem_cgroup_per_node large enough to cross the costly order.
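
The fix boils down to a NULL check in the per-node teardown, roughly:

  static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg,
                                            int node)
  {
          struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];

          /*
           * nodeinfo[node] can be NULL when the allocation failed
           * half way through; there is nothing to tear down then.
           */
          if (!pn)
                  return;

          free_percpu(pn->lruvec_stat_cpu);
          kfree(pn);
  }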

Link: http://lkml.kernel.org/r/20180406100906.17790-1-mhocko@kernel.org
Reported-by: syzbot+8a5de3cce7cdc70e9ebe@syzkaller.appspotmail.com
Fixes: 00f3ca2c2d66 ("mm: memcontrol: per-lruvec stats infrastructure")
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agoswap: divide-by-zero when zero length swap file on ssd
Tom Abraham [Tue, 10 Apr 2018 23:29:48 +0000 (16:29 -0700)]
swap: divide-by-zero when zero length swap file on ssd

Calling swapon() on a zero length swap file on SSD can lead to a
divide-by-zero.

Although creating such files isn't possible with mkswap and they would
be considered invalid, it would be better for the swapon code to be
more robust and handle this condition gracefully (return -EINVAL),
especially since the fix is small and straightforward.

To help with wear leveling on SSD, the swapon syscall calculates a
random position in the swap file using modulo p->highest_bit, which is
set to maxpages - 1 in read_swap_header.

If the swap file is zero length, read_swap_header sets maxpages=1 and
last_page=0, resulting in p->highest_bit=0 and we divide-by-zero when we
modulo p->highest_bit in swapon syscall.

This can be prevented by having read_swap_header return zero if
last_page is zero.
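
The check amounts to a few lines in read_swap_header() (sketch):

  last_page = swap_header->info.last_page;
  if (!last_page) {
          pr_warn("Empty swap-file\n");
          return 0;       /* swapon then rejects the file with -EINVAL */
  }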

Link: http://lkml.kernel.org/r/5AC747C1020000A7001FA82C@prv-mh.provo.novell.com
Signed-off-by: Thomas Abraham <tabraham@suse.com>
Reported-by: <Mark.Landis@Teradata.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm: memcg: make sure memory.events is uptodate when waking pollers
Johannes Weiner [Tue, 10 Apr 2018 23:29:45 +0000 (16:29 -0700)]
mm: memcg: make sure memory.events is uptodate when waking pollers

Commit a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
memory.stat reporting") added per-cpu drift to all memory cgroup stats
and events shown in memory.stat and memory.events.

For memory.stat this is acceptable.  But memory.events issues file
notifications, and somebody polling the file for changes will be
confused when the counters in it are unchanged after a wakeup.

Luckily, the events in memory.events - MEMCG_LOW, MEMCG_HIGH, MEMCG_MAX,
MEMCG_OOM - are sufficiently rare and high-level that we don't need
per-cpu buffering for them: MEMCG_HIGH and MEMCG_MAX would be the most
frequent, but they're counting invocations of reclaim, which is a
complex operation that touches many shared cachelines.

This splits memory.events from the generic VM events and tracks them in
their own, unbuffered atomic counters.  That's also cleaner, as it
eliminates the ugly enum nesting of VM and cgroup events.
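
The counters and the helper have roughly this shape (a sketch of the
change):

  enum memcg_memory_event {
          MEMCG_LOW,
          MEMCG_HIGH,
          MEMCG_MAX,
          MEMCG_OOM,
          MEMCG_NR_MEMORY_EVENTS,
  };

  /* in struct mem_cgroup:
   *         atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS];
   */

  static inline void memcg_memory_event(struct mem_cgroup *memcg,
                                        enum memcg_memory_event event)
  {
          atomic_long_inc(&memcg->memory_events[event]);
          cgroup_file_notify(&memcg->events_file);
  }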

[hannes@cmpxchg.org: "array subscript is above array bounds"]
Link: http://lkml.kernel.org/r/20180406155441.GA20806@cmpxchg.org
Link: http://lkml.kernel.org/r/20180405175507.GA24817@cmpxchg.org
Fixes: a983b5ebee57 ("mm: memcontrol: fix excessive complexity in memory.stat reporting")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Tejun Heo <tj@kernel.org>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/ksm.c: fix inconsistent accounting of zero pages
Claudio Imbrenda [Tue, 10 Apr 2018 23:29:41 +0000 (16:29 -0700)]
mm/ksm.c: fix inconsistent accounting of zero pages

When using KSM with use_zero_pages, we replace anonymous pages
containing only zeroes with actual zero pages, which are not anonymous.
We need to do proper accounting of the mm counters, otherwise we will
get wrong values in /proc and a BUG message in dmesg when tearing down
the mm.
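
The accounting fix is essentially one counter adjustment in the page
replacement path (sketch):

  if (is_zero_pfn(page_to_pfn(kpage))) {
          /*
           * The zero page is not anonymous: replacing an anonymous
           * page with it removes one anon page from the mm.
           */
          dec_mm_counter(mm, MM_ANONPAGES);
  }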

Link: http://lkml.kernel.org/r/1522931274-15552-1-git-send-email-imbrenda@linux.vnet.ibm.com
Fixes: e86c59b1b1 ("mm/ksm: improve deduplication of zero pages with colouring")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/z3fold.c: use gfpflags_allow_blocking
Matthew Wilcox [Tue, 10 Apr 2018 23:29:37 +0000 (16:29 -0700)]
mm/z3fold.c: use gfpflags_allow_blocking

We have a perfectly good macro to determine whether the gfp flags allow
you to sleep or not; use it instead of trying to infer it.
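
The macro in question is defined in include/linux/gfp.h:

  static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
  {
          return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
  }

So instead of testing gfp bits by hand, z3fold can simply use
can_sleep = gfpflags_allow_blocking(gfp).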

Link: http://lkml.kernel.org/r/20180408062206.GC16007@bombadil.infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Vitaly Wool <vitalywool@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agoz3fold: fix memory leak
Xidong Wang [Tue, 10 Apr 2018 23:29:34 +0000 (16:29 -0700)]
z3fold: fix memory leak

In z3fold_create_pool(), the memory allocated by __alloc_percpu() is
not released on the error path taken when pool->compact_wq, which holds
the return value of create_singlethread_workqueue(), is NULL.  This
results in a memory leak.
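
The corrected unwind order looks roughly like this (a sketch with
simplified sizes and label names):

  pool = kzalloc(sizeof(struct z3fold_pool), gfp);
  if (!pool)
          goto out;
  pool->unbuddied = __alloc_percpu(sizeof(struct list_head) * NCHUNKS,
                                   __alignof__(struct list_head));
  if (!pool->unbuddied)
          goto out_pool;
  pool->compact_wq = create_singlethread_workqueue("z3fold");
  if (!pool->compact_wq)
          goto out_unbuddied;
  /* ... further setup, unwinding through the labels on failure ... */

  out_unbuddied:
          free_percpu(pool->unbuddied);
  out_pool:
          kfree(pool);
  out:
          return NULL;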

[akpm@linux-foundation.org: fix oops on kzalloc() failure, check __alloc_percpu() retval]
Link: http://lkml.kernel.org/r/1522803111-29209-1-git-send-email-wangxidong_97@163.com
Signed-off-by: Xidong Wang <wangxidong_97@163.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Vitaly Wool <vitalywool@gmail.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomemcg, thp: do not invoke oom killer on thp charges
Michal Hocko [Tue, 10 Apr 2018 23:29:30 +0000 (16:29 -0700)]
memcg, thp: do not invoke oom killer on thp charges

A THP memcg charge can trigger the oom killer since 2516035499b9 ("mm,
thp: remove __GFP_NORETRY from khugepaged and madvised allocations").
We previously used an explicit __GFP_NORETRY, which ruled out the OOM
killer automagically.

The memcg charge path should be semantically compliant with the
allocation path, and that means that if we do not trigger the OOM
killer for costly orders in the allocation path, we should not do so in
the memcg charge path either.  Otherwise we are forcing callers to
distinguish the two and use different gfp masks, which is both
non-intuitive and bug prone.  As soon as we get a costly high order
kmalloc user, we do not even have any means to tell the memcg-specific
gfp mask to prevent OOM, because the charging happens deep within the
guts of the slab allocator.

The unexpected memcg OOM on THP has already been fixed upstream by
9d3c3354bb85 ("mm, thp: do not cause memcg oom for thp") but this is a
one-off fix rather than a generic solution.  Teach mem_cgroup_oom to
bail out on costly order requests to fix the THP issue as well as any
other costly OOM eligible allocations to be added in future.

Also revert 9d3c3354bb85 because special gfp for THP is no longer
needed.
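
The generic bail-out is a single check at the top of mem_cgroup_oom()
(sketch):

  static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask,
                             int order)
  {
          /*
           * The page allocator does not invoke the OOM killer for
           * costly orders; keep the memcg charge path consistent.
           */
          if (order > PAGE_ALLOC_COSTLY_ORDER)
                  return;

          /* ... existing OOM handling ... */
  }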

Link: http://lkml.kernel.org/r/20180403193129.22146-1-mhocko@kernel.org
Fixes: 2516035499b9 ("mm, thp: remove __GFP_NORETRY from khugepaged and madvised allocations")
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/migrate: properly preserve write attribute in special migrate entry
Ralph Campbell [Tue, 10 Apr 2018 23:29:27 +0000 (16:29 -0700)]
mm/migrate: properly preserve write attribute in special migrate entry

Use of pte_write(pte) is only valid for a present pte; the common code
which sets the migration entry can be reached for both a valid present
pte and a special swap entry (for device memory).  Fix the code to use
the mpfn value, which properly handles both cases.

On x86 this did not have any bad side effect, because the pte write bit
is below PAGE_BIT_GLOBAL and thus special swap entries have it set to
0, which in turn means we were always creating read-only special
migration entries.

So once migration did finish we always write protected the CPU page
table entry (moreover this is only an issue when migrating from device
memory to system memory).  End effect is that CPU write access would
fault again and restore write permission.

This behaviour isn't too bad; it just burns CPU cycles by forcing the
CPU to take a second fault on write access, i.e. double faulting the
same address.  There is no corruption or incorrect state (it behaves
like a COWed page from a fork with a mapcount of 1).
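
The fix itself is a one-liner in the collection path (sketch):

  /*
   * pte_write(pte) is undefined for non-present entries; the mpfn
   * flags are valid in both cases, so derive writability from them.
   */
  entry = make_migration_entry(page, mpfn & MIGRATE_PFN_WRITE);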

Link: http://lkml.kernel.org/r/20180402023506.12180-1-jglisse@redhat.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm: check __highest_present_section_nr directly in memory_dev_init()
Wei Yang [Tue, 10 Apr 2018 23:29:23 +0000 (16:29 -0700)]
mm: check __highest_present_section_nr directly in memory_dev_init()

__highest_present_section_nr is a stricter boundary than
NR_MEM_SECTIONS, so checking __highest_present_section_nr directly is
enough.
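
The loop in memory_dev_init() then becomes (sketch):

  for (i = 0; i <= __highest_present_section_nr;
       i += sections_per_block) {
          err = add_memory_block(i);
          if (!ret)
                  ret = err;
  }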

Link: http://lkml.kernel.org/r/20180330032044.21647-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agosched/numa: avoid trapping faults and attempting migration of file-backed dirty pages
Mel Gorman [Tue, 10 Apr 2018 23:29:20 +0000 (16:29 -0700)]
sched/numa: avoid trapping faults and attempting migration of file-backed dirty pages

change_pte_range is called from task work context to mark PTEs for
receiving NUMA faulting hints.  If the marked pages are dirty then
migration may fail.  Some filesystems cannot migrate dirty pages without
blocking so are skipped in MIGRATE_ASYNC mode which just wastes CPU.
Even when they can, it can be a waste of cycles when the pages are
shared forcing higher scan rates.  This patch avoids marking shared
dirty pages for hinting faults but also will skip a migration if the
page was dirtied after the scanner updated a clean page.
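
In change_pte_range() the filter amounts to checks like these (a sketch
close to the actual patch):

  if (prot_numa) {
          struct page *page = vm_normal_page(vma, addr, oldpte);

          if (!page || PageKsm(page))
                  continue;

          /* Skip pages shared across processes via copy-on-write. */
          if (is_cow_mapping(vma->vm_flags) &&
              page_mapcount(page) != 1)
                  continue;

          /*
           * Dirty file-backed pages often cannot be migrated without
           * blocking, so do not trap hinting faults on them.
           */
          if (page_is_file_cache(page) && PageDirty(page))
                  continue;
  }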

This is most noticeable running the NASA Parallel Benchmark when backed
by btrfs, the default root filesystem for some distributions, but also
noticeable when using XFS.

The following are results from a 4-socket machine running a 4.16-rc4
kernel with some scheduler patches that are pending for the next merge
window.

                        4.16.0-rc4             4.16.0-rc4
                 schedtip-20180309          nodirty-v1
  Time cg.D      459.07 (   0.00%)      444.21 (   3.24%)
  Time ep.D       76.96 (   0.00%)       77.69 (  -0.95%)
  Time is.D       25.55 (   0.00%)       27.85 (  -9.00%)
  Time lu.D      601.58 (   0.00%)      596.87 (   0.78%)
  Time mg.D      107.73 (   0.00%)      108.22 (  -0.45%)

is.D regresses slightly in terms of absolute time but note that that
particular load varies quite a bit from run to run.  The more relevant
observation is the total system CPU usage.

            4.16.0-rc4  4.16.0-rc4
          schedtip-20180309 nodirty-v1
  User        71471.91    70627.04
  System      11078.96     8256.13
  Elapsed       661.66      632.74

That is a substantial drop in system CPU usage and overall the workload
completes faster.  The NUMA balancing statistics are also interesting

  NUMA base PTE updates        111407972   139848884
  NUMA huge PMD updates           206506      264869
  NUMA page range updates      217139044   275461812
  NUMA hint faults               4300924     3719784
  NUMA hint local faults         3012539     3416618
  NUMA hint local percent             70          91
  NUMA pages migrated            1517487     1358420

While more PTEs are scanned due to changes in what faults are gathered,
it's clear that a far higher percentage of faults are local as the bulk
of the remote hits were dirty pages that, in this case with btrfs, had
no chance of migrating.

The following is a comparison when using XFS as that is a more realistic
filesystem choice for a data partition

                        4.16.0-rc4             4.16.0-rc4
                 schedtip-20180309          nodirty-v1r47
  Time cg.D      485.28 (   0.00%)      442.62 (   8.79%)
  Time ep.D       77.68 (   0.00%)       77.54 (   0.18%)
  Time is.D       26.44 (   0.00%)       24.79 (   6.24%)
  Time lu.D      597.46 (   0.00%)      597.11 (   0.06%)
  Time mg.D      142.65 (   0.00%)      105.83 (  25.81%)

That is a reasonable gain on two relatively long-lived workloads.
While not presented, there is also a substantial drop in system CPU
usage, and the NUMA balancing stats show similar improvements in
locality as btrfs did.

Link: http://lkml.kernel.org/r/20180326094334.zserdec62gwmmfqf@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agoDocumentation/vm/hmm.txt: typos and syntaxes fixes
Jérôme Glisse [Tue, 10 Apr 2018 23:29:16 +0000 (16:29 -0700)]
Documentation/vm/hmm.txt: typos and syntaxes fixes

This fixes typos and syntax; thanks to Randy Dunlap for pointing them
out (they were all my fault).

Link: http://lkml.kernel.org/r/20180409151859.4713-1-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: fix header file if/else/endif maze, again
Arnd Bergmann [Tue, 10 Apr 2018 23:29:13 +0000 (16:29 -0700)]
mm/hmm: fix header file if/else/endif maze, again

The last fix was still wrong, as we need the inline dummy functions
also for the case where CONFIG_HMM is enabled but CONFIG_HMM_MIRROR is
not:

  kernel/fork.o: In function `__mmdrop':
  fork.c:(.text+0x14f6): undefined reference to `hmm_mm_destroy'

This adds back the second copy of the dummy functions, hopefully
this time in the right place.
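
The intended arrangement, in outline (simplified from
include/linux/hmm.h):

  #if IS_ENABLED(CONFIG_HMM_MIRROR)
  void hmm_mm_destroy(struct mm_struct *mm);
  void hmm_mm_init(struct mm_struct *mm);
  #else /* IS_ENABLED(CONFIG_HMM_MIRROR) */
  /*
   * Needed both for CONFIG_HMM=y with CONFIG_HMM_MIRROR=n and for
   * CONFIG_HMM=n, since __mmdrop() calls hmm_mm_destroy()
   * unconditionally.
   */
  static inline void hmm_mm_destroy(struct mm_struct *mm) {}
  static inline void hmm_mm_init(struct mm_struct *mm) {}
  #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */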

Link: http://lkml.kernel.org/r/20180404110236.804484-1-arnd@arndb.de
Fixes: 8900d06a277a ("mm/hmm: fix header file if/else/endif maze")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm.c: remove superfluous RCU protection around radix tree lookup
Tejun Heo [Tue, 10 Apr 2018 23:29:09 +0000 (16:29 -0700)]
mm/hmm.c: remove superfluous RCU protection around radix tree lookup

hmm_devmem_find() requires rcu_read_lock_held() but there's nothing which
actually uses the RCU protection.  The only caller is
hmm_devmem_pages_create() which already grabs the mutex and does
superfluous rcu_read_lock/unlock() around the function.

This doesn't add anything and just adds to confusion.  Remove the RCU
protection and open-code the radix tree lookup.  If this needs to become
more sophisticated in the future, let's add them back when necessary.

Link: http://lkml.kernel.org/r/20180314194515.1661824-4-tj@kernel.org
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: use device driver encoding for HMM pfn
Jérôme Glisse [Tue, 10 Apr 2018 23:29:06 +0000 (16:29 -0700)]
mm/hmm: use device driver encoding for HMM pfn

Users of hmm_vma_fault() and hmm_vma_get_pfns() provide a flags array
and a pfn shift value, allowing them to define their own encoding for
the HMM pfns that are filled into the pfns array of the hmm_range
struct.  With this, device drivers can get pfns that match their own
private encoding out of HMM without having to do any conversion.
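
For example, a driver-side encoding could look like this (the bit
positions here are made up for illustration):

  static const uint64_t drv_flags[HMM_PFN_FLAG_MAX] = {
          [HMM_PFN_VALID]          = 1ULL << 0,
          [HMM_PFN_WRITE]          = 1ULL << 1,
          [HMM_PFN_DEVICE_PRIVATE] = 1ULL << 2,
  };

  /* later, when setting up the range: */
  range.flags = drv_flags;
  range.pfn_shift = 3;    /* the pfn sits above the flag bits */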

[rcampbell@nvidia.com: don't ignore specific pte fault flag in hmm_vma_fault()]
Link: http://lkml.kernel.org/r/20180326213009.2460-2-jglisse@redhat.com
[rcampbell@nvidia.com: clarify fault logic for device private memory]
Link: http://lkml.kernel.org/r/20180326213009.2460-3-jglisse@redhat.com
Link: http://lkml.kernel.org/r/20180323005527.758-16-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: change hmm_vma_fault() to allow write fault on page basis
Jérôme Glisse [Tue, 10 Apr 2018 23:29:02 +0000 (16:29 -0700)]
mm/hmm: change hmm_vma_fault() to allow write fault on page basis

This changes hmm_vma_fault() to not take a global write fault flag for
a range, but instead to rely on the caller to populate the HMM pfns
array with the proper fault flag, i.e. HMM_PFN_VALID if the driver
wants a read fault for that address, or HMM_PFN_VALID and HMM_PFN_WRITE
for a write fault.

Moreover, by setting HMM_PFN_DEVICE_PRIVATE the device driver can ask
for device private memory to be migrated back to system memory through
a page fault.

This is a more flexible API and it better reflects how the device
handles and reports faults.

Link: http://lkml.kernel.org/r/20180323005527.758-15-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: factor out pte and pmd handling to simplify hmm_vma_walk_pmd()
Jérôme Glisse [Tue, 10 Apr 2018 23:28:59 +0000 (16:28 -0700)]
mm/hmm: factor out pte and pmd handling to simplify hmm_vma_walk_pmd()

No functional change, just create one function to handle pmd and one to
handle pte (hmm_vma_handle_pmd() and hmm_vma_handle_pte()).

Link: http://lkml.kernel.org/r/20180323005527.758-14-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: move hmm_pfns_clear() closer to where it is used
Jérôme Glisse [Tue, 10 Apr 2018 23:28:54 +0000 (16:28 -0700)]
mm/hmm: move hmm_pfns_clear() closer to where it is used

Move hmm_pfns_clear() closer to where it is used, to make it clear that
it is not used by the page table walkers.

Link: http://lkml.kernel.org/r/20180323005527.758-13-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: rename HMM_PFN_DEVICE_UNADDRESSABLE to HMM_PFN_DEVICE_PRIVATE
Jérôme Glisse [Tue, 10 Apr 2018 23:28:49 +0000 (16:28 -0700)]
mm/hmm: rename HMM_PFN_DEVICE_UNADDRESSABLE to HMM_PFN_DEVICE_PRIVATE

Make the naming consistent across the code: DEVICE_PRIVATE is the name
used outside the HMM code, so use that one.

Link: http://lkml.kernel.org/r/20180323005527.758-12-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: do not differentiate between empty entry or missing directory
Jérôme Glisse [Tue, 10 Apr 2018 23:28:46 +0000 (16:28 -0700)]
mm/hmm: do not differentiate between empty entry or missing directory

There is no point in differentiating between a range for which there is
not even a directory (and thus no entries) and an empty entry
(pte_none() or pmd_none() returns true).

Simply drop the distinction, i.e. remove the HMM_PFN_EMPTY flag, and
merge the now-duplicate hmm_vma_walk_hole() and hmm_vma_walk_clear()
functions.

Link: http://lkml.kernel.org/r/20180323005527.758-11-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: cleanup special vma handling (VM_SPECIAL)
Jérôme Glisse [Tue, 10 Apr 2018 23:28:42 +0000 (16:28 -0700)]
mm/hmm: cleanup special vma handling (VM_SPECIAL)

A special vma (one with any of the VM_SPECIAL flags) cannot be accessed
by a device, because there is no consistent model across device drivers
for those vmas and their backing memory.

This patch directly uses the hmm_range struct as the hmm_pfns_special()
argument, as it always affects the whole vma and thus the whole range.

It also makes the behavior consistent: after this patch, both
hmm_vma_fault() and hmm_vma_get_pfns() return -EINVAL when facing such
a vma.  Previously hmm_vma_fault() returned 0 and hmm_vma_get_pfns()
returned -EINVAL, but both were filling the HMM pfn array with the
special entry.

Link: http://lkml.kernel.org/r/20180323005527.758-10-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: use uint64_t for HMM pfn instead of defining hmm_pfn_t to ulong
Jérôme Glisse [Tue, 10 Apr 2018 23:28:38 +0000 (16:28 -0700)]
mm/hmm: use uint64_t for HMM pfn instead of defining hmm_pfn_t to ulong

All the device drivers we care about use 64-bit page table entries.  In
order to match this, and to avoid a useless define, convert all HMM
pfns to directly use uint64_t.  It is a first step on the road to
letting drivers directly use the pfn values returned by HMM (saving the
memory and CPU cycles used for conversion between the two).

Link: http://lkml.kernel.org/r/20180323005527.758-9-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: remove HMM_PFN_READ flag and ignore peculiar architecture
Jérôme Glisse [Tue, 10 Apr 2018 23:28:34 +0000 (16:28 -0700)]
mm/hmm: remove HMM_PFN_READ flag and ignore peculiar architecture

Only peculiar architectures allow write without read, so assume that
any valid pfn also allows read.  Note that we do not care about
write-only mappings, because operations like atomic compare-and-
exchange let you retrieve the memory value anyway, so write effectively
implies read.

Link: http://lkml.kernel.org/r/20180323005527.758-8-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: use struct for hmm_vma_fault(), hmm_vma_get_pfns() parameters
Jérôme Glisse [Tue, 10 Apr 2018 23:28:30 +0000 (16:28 -0700)]
mm/hmm: use struct for hmm_vma_fault(), hmm_vma_get_pfns() parameters

Both hmm_vma_fault() and hmm_vma_get_pfns() were taking an hmm_range
struct as a parameter and were initializing that struct with their
other parameters.  Have the callers of those functions do this, as they
are likely to already do so, and only pass this struct to both
functions.  This shortens the function signatures and makes it easier
to add new parameters in the future, by simply adding them to the
structure.
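
A caller then looks something like this (sketch):

  struct hmm_range range = {
          .vma   = vma,
          .start = start,
          .end   = end,
          .pfns  = pfns,
  };

  ret = hmm_vma_get_pfns(&range);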

Link: http://lkml.kernel.org/r/20180323005527.758-7-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: hmm_pfns_bad() was accessing wrong struct
Jérôme Glisse [Tue, 10 Apr 2018 23:28:27 +0000 (16:28 -0700)]
mm/hmm: hmm_pfns_bad() was accessing wrong struct

The private field of the mm_walk struct points to an hmm_vma_walk
struct, not to the desired hmm_range struct.  Fix the code to get the
proper struct pointer.
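
The fix, in outline:

  static int hmm_pfns_bad(unsigned long addr, unsigned long end,
                          struct mm_walk *walk)
  {
          /* walk->private points to the hmm_vma_walk, not the range. */
          struct hmm_vma_walk *hmm_vma_walk = walk->private;
          struct hmm_range *range = hmm_vma_walk->range;

          /* ... mark the pfns of [addr, end) in range->pfns as bad ... */
          return 0;
  }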

Link: http://lkml.kernel.org/r/20180323005527.758-6-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: unregister mmu_notifier when last HMM client quit
Jérôme Glisse [Tue, 10 Apr 2018 23:28:23 +0000 (16:28 -0700)]
mm/hmm: unregister mmu_notifier when last HMM client quit

This code was lost in translation at one point.  This patch properly
calls mmu_notifier_unregister_no_release() once the last user is gone.
This fixes the zombie mm_struct problem, as without this patch we do
not drop the refcount we hold on it.

Link: http://lkml.kernel.org/r/20180323005527.758-5-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: HMM should have a callback before MM is destroyed
Ralph Campbell [Tue, 10 Apr 2018 23:28:19 +0000 (16:28 -0700)]
mm/hmm: HMM should have a callback before MM is destroyed

hmm_mirror_register() registers a callback for when the CPU pagetable
is modified.  Normally, the device driver will call
hmm_mirror_unregister() when the process using the device is finished.
However, if the process exits uncleanly, the mm_struct can be destroyed
with no warning to the device driver.

Link: http://lkml.kernel.org/r/20180323005527.758-4-jglisse@redhat.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: fix header file if/else/endif maze
Jérôme Glisse [Tue, 10 Apr 2018 23:28:15 +0000 (16:28 -0700)]
mm/hmm: fix header file if/else/endif maze

The #if/#else/#endif for IS_ENABLED(CONFIG_HMM) were wrong.  Because of
this, after multiple inclusions there were multiple definitions of both
hmm_mm_init() and hmm_mm_destroy(), leading to a build failure if HMM
was enabled (CONFIG_HMM set).

Link: http://lkml.kernel.org/r/20180323005527.758-3-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm/hmm: documentation editorial update to HMM documentation
Ralph Campbell [Tue, 10 Apr 2018 23:28:11 +0000 (16:28 -0700)]
mm/hmm: documentation editorial update to HMM documentation

Update the documentation for HMM to fix minor typos and phrasing to be a
bit more readable.

Link: http://lkml.kernel.org/r/20180323005527.758-2-jglisse@redhat.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Stephen Bates <sbates@raithlin.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
21 months agomm, vmscan, tracing: use pointer to reclaim_stat struct in trace event
Steven Rostedt [Tue, 10 Apr 2018 23:28:07 +0000 (16:28 -0700)]
mm, vmscan, tracing: use pointer to reclaim_stat struct in trace event

The trace event trace_mm_vmscan_lru_shrink_inactive() currently has 12
parameters! Seven of them are from the reclaim_stat structure.  This
structure is currently local to mm/vmscan.c.  By moving it to the
global vmstat.h header, we can also reference it from the vmscan
tracepoints.  Moving it brings down the overhead of passing so many
arguments to the trace event.  In the future, we may limit the number
of arguments that a trace event may pass (ideally just 6, but more
realistically it may be 8).

Before this patch, the code to call the trace event is this:

 0f 83 aa fe ff ff       jae    ffffffff811e6261 <shrink_inactive_list+0x1e1>
 48 8b 45 a0             mov    -0x60(%rbp),%rax
 45 8b 64 24 20          mov    0x20(%r12),%r12d
 44 8b 6d d4             mov    -0x2c(%rbp),%r13d
 8b 4d d0                mov    -0x30(%rbp),%ecx
 44 8b 75 cc             mov    -0x34(%rbp),%r14d
 44 8b 7d c8             mov    -0x38(%rbp),%r15d
 48 89 45 90             mov    %rax,-0x70(%rbp)
 8b 83 b8 fe ff ff       mov    -0x148(%rbx),%eax
 8b 55 c0                mov    -0x40(%rbp),%edx
 8b 7d c4                mov    -0x3c(%rbp),%edi
 8b 75 b8                mov    -0x48(%rbp),%esi
 89 45 80                mov    %eax,-0x80(%rbp)
 65 ff 05 e4 f7 e2 7e    incl   %gs:0x7ee2f7e4(%rip)        # 15bd0 <__preempt_count>
 48 8b 05 75 5b 13 01    mov    0x1135b75(%rip),%rax        # ffffffff8231bf68 <__tracepoint_mm_vmscan_lru_shrink_inactive+0x28>
 48 85 c0                test   %rax,%rax
 74 72                   je     ffffffff811e646a <shrink_inactive_list+0x3ea>
 48 89 c3                mov    %rax,%rbx
 4c 8b 10                mov    (%rax),%r10
 89 f8                   mov    %edi,%eax
 48 89 85 68 ff ff ff    mov    %rax,-0x98(%rbp)
 89 f0                   mov    %esi,%eax
 48 89 85 60 ff ff ff    mov    %rax,-0xa0(%rbp)
 89 c8                   mov    %ecx,%eax
 48 89 85 78 ff ff ff    mov    %rax,-0x88(%rbp)
 89 d0                   mov    %edx,%eax
 48 89 85 70 ff ff ff    mov    %rax,-0x90(%rbp)
 8b 45 8c                mov    -0x74(%rbp),%eax
 48 8b 7b 08             mov    0x8(%rbx),%rdi
 48 83 c3 18             add    $0x18,%rbx
 50                      push   %rax
 41 54                   push   %r12
 41 55                   push   %r13
 ff b5 78 ff ff ff       pushq  -0x88(%rbp)
 41 56                   push   %r14
 41 57                   push   %r15
 ff b5 70 ff ff ff       pushq  -0x90(%rbp)
 4c 8b 8d 68 ff ff ff    mov    -0x98(%rbp),%r9
 4c 8b 85 60 ff ff ff    mov    -0xa0(%rbp),%r8
 48 8b 4d 98             mov    -0x68(%rbp),%rcx
 48 8b 55 90             mov    -0x70(%rbp),%rdx
 8b 75 80                mov    -0x80(%rbp),%esi
 41 ff d2                callq  *%r10

After the patch:

 0f 83 a8 fe ff ff       jae    ffffffff811e626d <shrink_inactive_list+0x1cd>
 8b 9b b8 fe ff ff       mov    -0x148(%rbx),%ebx
 45 8b 64 24 20          mov    0x20(%r12),%r12d
 4c 8b 6d a0             mov    -0x60(%rbp),%r13
 65 ff 05 f5 f7 e2 7e    incl   %gs:0x7ee2f7f5(%rip)        # 15bd0 <__preempt_count>
 4c 8b 35 86 5b 13 01    mov    0x1135b86(%rip),%r14        # ffffffff8231bf68 <__tracepoint_mm_vmscan_lru_shrink_inactive+0x28>
 4d 85 f6                test   %r14,%r14
 74 2a                   je     ffffffff811e6411 <shrink_inactive_list+0x371>
 49 8b 06                mov    (%r14),%rax
 8b 4d 8c                mov    -0x74(%rbp),%ecx
 49 8b 7e 08             mov    0x8(%r14),%rdi
 49 83 c6 18             add    $0x18,%r14
 4c 89 ea                mov    %r13,%rdx
 45 89 e1                mov    %r12d,%r9d
 4c 8d 45 b8             lea    -0x48(%rbp),%r8
 89 de                   mov    %ebx,%esi
 51                      push   %rcx
 48 8b 4d 98             mov    -0x68(%rbp),%rcx
 ff d0                   callq  *%rax
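
At the source level the change amounts to passing one pointer instead of
seven scalars; roughly (local variable names as in mm/vmscan.c, call sites
abridged):

  /* Before: every reclaim_stat field passed individually. */
  trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id, nr_scanned,
                                      nr_reclaimed, stat.nr_dirty,
                                      stat.nr_writeback, stat.nr_congested,
                                      stat.nr_immediate, stat.nr_activate,
                                      stat.nr_ref_keep, stat.nr_unmap_fail,
                                      sc->priority, file);

  /* After: the trace event takes a pointer to the whole struct. */
  trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id, nr_scanned,
                                      nr_reclaimed, &stat, sc->priority,
                                      file);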

Link: http://lkml.kernel.org/r/2559d7cb-ec60-1200-2362-04fa34fd02bb@fb.com
Link: http://lkml.kernel.org/r/20180322121003.4177af15@gandalf.local.home
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reported-by: Alexei Starovoitov <ast@fb.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexei Starovoitov <ast@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>