Andrey Konovalov
beb3c23c69
lib/stackdepot: annotate racy pool_index accesses
...
Accesses to pool_index are protected by pool_lock everywhere except
in a sanity check in stack_depot_fetch. The read access there can race
with the write access in depot_alloc_stack.
Use WRITE/READ_ONCE() to annotate the racy accesses.
As the sanity check is only used to print a warning in case of a
violation of the stack depot interface usage, it does not make a lot
of sense to use proper synchronization.
[andreyknvl@google.com: s/pool_index/pool_index_cached/ in stack_depot_fetch()]
Link: https://lkml.kernel.org/r/95cf53f0da2c112aa2cc54456cbcd6975c3ff343.1676129911.git.andreyknvl@google.com
Link: https://lkml.kernel.org/r/359ac9c13cd0869c56740fb2029f505e41593830.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:51 -08:00
Andrey Konovalov
36aa1e6779
lib/stacktrace, kasan, kmsan: rework extra_bits interface
...
The current implementation of the extra_bits interface is confusing:
passing extra_bits to __stack_depot_save makes it seem that the extra
bits are somehow stored in stack depot. In reality, they are only
embedded into a stack depot handle and are not used within stack depot.
Drop the extra_bits argument from __stack_depot_save and instead provide
a new stack_depot_set_extra_bits function (similar to the existing
stack_depot_get_extra_bits) that saves extra bits into a stack depot
handle.
Update the callers of __stack_depot_save to use the new interface.
This change also fixes a minor issue in the old code: __stack_depot_save
does not return NULL if saving the stack trace fails and extra_bits is used.
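A minimal model of the split interface; the handle width, bit layout, and helper names here are illustrative, not the kernel's actual handle_parts layout:

```c
#include <stdint.h>

/* Toy model of a stack depot handle: the low bits locate the stack
 * trace, the top EXTRA_BITS are caller-owned.  Widths are
 * illustrative only. */
#define EXTRA_BITS 5

typedef uint32_t depot_stack_handle_t;

/* The new helper: embeds extra bits into an existing handle and never
 * touches stack depot storage.  Returning 0 for a 0 handle mirrors the
 * fix: a failed save must keep propagating the NULL handle. */
static depot_stack_handle_t
set_extra_bits(depot_stack_handle_t handle, unsigned int extra)
{
	if (!handle)
		return 0;
	return handle | ((depot_stack_handle_t)extra << (32 - EXTRA_BITS));
}

static unsigned int get_extra_bits(depot_stack_handle_t handle)
{
	return handle >> (32 - EXTRA_BITS);
}
```

The point of the split is visible here: setting extra bits is pure handle arithmetic, so it has no business being an argument to the save path.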
Link: https://lkml.kernel.org/r/317123b5c05e2f82854fc55d8b285e0869d3cb77.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:51 -08:00
Andrey Konovalov
d11a5621f3
lib/stackdepot: rename next_pool_inited to next_pool_required
...
Stack depot uses next_pool_inited to mark that either the next pool is
initialized or the limit on the number of pools is reached. However,
the flag name only reflects the former part of its purpose, which is
confusing.
Rename next_pool_inited to next_pool_required and invert its value.
Also annotate usages of next_pool_required with comments.
Link: https://lkml.kernel.org/r/484fd2695dff7a9bdc437a32f8a6ee228535aa02.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:51 -08:00
Andrey Konovalov
cd0fc64e76
lib/stackdepot: annotate depot_init_pool and depot_alloc_stack
...
Clean up the existing comments and add new ones to depot_init_pool and
depot_alloc_stack.
As a part of the clean-up, remove mentions of which variable is accessed
by smp_store_release and smp_load_acquire: it is clear as is from the
code.
Link: https://lkml.kernel.org/r/f80b02951364e6b40deda965b4003de0cd1a532d.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:51 -08:00
Andrey Konovalov
514d5c557b
lib/stacktrace: drop impossible WARN_ON for depot_init_pool
...
depot_init_pool has two call sites:
1. In depot_alloc_stack with a potentially NULL prealloc.
2. In __stack_depot_save with a non-NULL prealloc.
At the same time depot_init_pool can only return false when prealloc is
NULL.
As the second call site makes sure that prealloc is not NULL, the WARN_ON
there can never trigger. Thus, drop the WARN_ON and also move the prealloc
check from depot_init_pool to its first call site.
Also change the return type of depot_init_pool to void as it now always
returns true.
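A sketch of the resulting control flow; the pool bookkeeping, variable names, and constants are heavily simplified assumptions, not the kernel's implementation:

```c
#include <stddef.h>

#define DEPOT_MAX_POOLS 4

static void *pools[DEPOT_MAX_POOLS];
static int pool_index;             /* index of the current pool */
static int next_pool_required = 1; /* a next pool still has to be set up */

/* After the change: depot_init_pool assumes prealloc is usable and
 * returns void, so the old WARN_ON(!init) has no failure left to
 * report. */
static void depot_init_pool(void **prealloc)
{
	if (!next_pool_required)
		return;
	if (pool_index + 1 < DEPOT_MAX_POOLS) {
		pools[pool_index + 1] = *prealloc;
		*prealloc = NULL; /* preallocated page consumed */
	}
	next_pool_required = 0;
}

/* Call site with a potentially-NULL prealloc: the check moved here. */
static void depot_alloc_stack(void **prealloc)
{
	if (*prealloc)
		depot_init_pool(prealloc);
}
```

With the NULL check hoisted to the only caller that can pass NULL, depot_init_pool has a single success path and a void return type.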
Link: https://lkml.kernel.org/r/ce149f9bdcbc80a92549b54da67eafb27f846b7b.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:51 -08:00
Andrey Konovalov
cb788e84a4
lib/stackdepot: rename init_stack_pool
...
Rename init_stack_pool to depot_init_pool to align the name with
depot_alloc_stack.
No functional changes.
Link: https://lkml.kernel.org/r/23106a3e291d8df0aba33c0e2fe86dc596286479.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:50 -08:00
Andrey Konovalov
424cafee4a
lib/stackdepot: rename handle and pool constants
...
Change the "STACK_ALLOC_" prefix to "DEPOT_" for the constants that
define the number of bits in stack depot handles and the maximum number
of pools.
The old prefix is unclear and makes one wonder how these constants
are related to stack allocations. The new prefix is also shorter.
Also simplify the comment for DEPOT_POOL_ORDER.
No functional changes.
Link: https://lkml.kernel.org/r/84fcceb0acc261a356a0ad4bdfab9ff04bea2445.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:50 -08:00
Andrey Konovalov
961c949b01
lib/stackdepot: rename slab to pool
...
Use "pool" instead of "slab" for naming memory regions stack depot
uses to store stack traces. Using "slab" is confusing, as stack depot
pools have nothing to do with the slab allocator.
Also give better names to pool-related global variables: change
"depot_" prefix to "pool_" to point out that these variables are
related to stack depot pools.
Also rename the slabindex (poolindex) field in handle_parts to pool_index
to align its name with the pool_index global variable.
No functional changes.
Link: https://lkml.kernel.org/r/923c507edb350c3b6ef85860f36be489dfc0ad21.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:50 -08:00
Andrey Konovalov
4c2e9a6794
lib/stackdepot: rename hash table constants and variables
...
Give more meaningful names to hash table-related constants and variables:
1. Rename STACK_HASH_SCALE to STACK_HASH_TABLE_SCALE to point out that it
is related to scaling the hash table.
2. Rename STACK_HASH_ORDER_MIN/MAX to STACK_BUCKET_NUMBER_ORDER_MIN/MAX
to point out that it is related to the number of hash table buckets.
3. Rename stack_hash_order to stack_bucket_number_order for the same
reason as #2.
No functional changes.
Link: https://lkml.kernel.org/r/f166dd6f3cb2378aea78600714393dd568c33ee9.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:50 -08:00
Andrey Konovalov
0d249ac0e0
lib/stackdepot: reorder and annotate global variables
...
Group stack depot global variables by their purpose:
1. Hash table-related variables,
2. Slab-related variables,
and add comments.
Also clean up comments for hash table-related constants.
Link: https://lkml.kernel.org/r/5606a6c70659065a25bee59cd10e57fc60bb4110.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:49 -08:00
Andrey Konovalov
c60324fbf0
lib/stackdepot: lower the indentation in stack_depot_init
...
stack_depot_init does most things inside an if check. Move them out and
use a goto statement instead.
No functional changes.
Link: https://lkml.kernel.org/r/8e382f1f0c352e4b2ad47326fec7782af961fe8e.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:49 -08:00
Andrey Konovalov
df225c877d
lib/stackdepot: annotate init and early init functions
...
Add comments to stack_depot_early_init and stack_depot_init to explain
certain parts of their implementation.
Also add a pr_info message to stack_depot_early_init similar to the one
in stack_depot_init.
Also move the scale variable in stack_depot_init to the scope where it
is being used.
Link: https://lkml.kernel.org/r/d17fbfbd4d73f38686c5e3d4824a6d62047213a1.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:49 -08:00
Andrey Konovalov
735df3c3a3
lib/stackdepot: rename stack_depot_disable
...
Rename stack_depot_disable to stack_depot_disabled to make its name look
similar to the names of other stack depot flags.
Also put stack_depot_disabled's definition together with the other flags.
Also rename is_stack_depot_disabled to disable_stack_depot: this name
looks more conventional for a function that processes a boot parameter.
No functional changes.
Link: https://lkml.kernel.org/r/d78a07d222e689926e5ead229e4a2e3d87dc9aa7.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:49 -08:00
Andrey Konovalov
1c0310add7
lib/stackdepot, mm: rename stack_depot_want_early_init
...
Rename stack_depot_want_early_init to stack_depot_request_early_init.
The old name is confusing, as it hints at returning some kind of intention
of stack depot. The new name reflects that this function requests an
action from stack depot instead.
No functional changes.
[akpm@linux-foundation.org: update mm/kmemleak.c]
Link: https://lkml.kernel.org/r/359f31bf67429a06e630b4395816a967214ef753.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:49 -08:00
Andrey Konovalov
4a6b5314d6
lib/stackdepot: use pr_fmt to define message format
...
Use pr_fmt to define the format for printing stack depot messages instead
of duplicating the "Stack Depot" prefix in each message.
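The pr_fmt convention, modeled in user space; the macros, the logging buffer, and the exact prefix string are assumptions for illustration (the real pr_info() goes through printk()):

```c
#include <stdio.h>
#include <string.h>

/* pr_info() routes its format string through pr_fmt(), so defining
 * pr_fmt once prefixes every message in the file. */
#define pr_fmt(fmt) "stackdepot: " fmt
#define pr_info(fmt, ...) \
	snprintf(last_msg, sizeof(last_msg), pr_fmt(fmt), ##__VA_ARGS__)

static char last_msg[128];

static void report_hash_table(int buckets)
{
	/* No hand-written prefix needed at each call site. */
	pr_info("hash table with %d buckets\n", buckets);
}

static int has_depot_prefix(const char *s)
{
	return strncmp(s, "stackdepot: ", strlen("stackdepot: ")) == 0;
}
```

Because pr_fmt is resolved at each expansion, it must be defined before the printk helpers are pulled in, which is why kernel files place it above their #include lines.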
Link: https://lkml.kernel.org/r/3d09db0171a0e92ff3eb0ee74de74558bc9b56c4.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:48 -08:00
Andrey Konovalov
15ef6a982f
lib/stackdepot: put functions in logical order
...
Patch series "lib/stackdepot: fixes and clean-ups", v2.
A set of fixes, comments, and clean-ups I came up with while reading
the stack depot code.
This patch (of 18):
Put stack depot functions' declarations and definitions in a more logical
order:
1. Functions that save stack traces into stack depot.
2. Functions that fetch and print stack traces.
3. stack_depot_get_extra_bits that operates on stack depot handles
and does not interact with the stack depot storage.
No functional changes.
Link: https://lkml.kernel.org/r/cover.1676063693.git.andreyknvl@google.com
Link: https://lkml.kernel.org/r/daca1319b665d826b94c596b992a8d8117846147.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-16 20:43:48 -08:00
Jakub Wilk
6bdfc60cf0
mm: fix typo in __vm_enough_memory warning
...
Link: https://lkml.kernel.org/r/20230210203316.5613-1-jwilk@jwilk.net
Signed-off-by: Jakub Wilk <jwilk@jwilk.net>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:33 -08:00
SeongJae Park
620932cd28
mm/damon/dbgfs: print DAMON debugfs interface deprecation message
...
The DAMON debugfs interface was announced to be deprecated after the
next (>v5.15) LTS kernel is released. And v6.1.y has been announced as an LTS[1].
Though the announcement has been there for a while, some people might not
have noticed it so far. Also, some users could depend on the interface and
have problems moving to the alternative (the DAMON sysfs interface).
For such cases, warn about the DAMON debugfs interface deprecation, with
contacts to ask for help, whenever any DAMON debugfs interface file is opened.
[1] https://git.kernel.org/pub/scm/docs/kernel/website.git/commit/?id=332e9121320bc7461b2d3a79665caf153e51732c
[sj@kernel.org: split DAMON debugfs file open warning message, per Randy]
Link: https://lkml.kernel.org/r/20230209192009.7885-4-sj@kernel.org
Link: https://lkml.kernel.org/r/20230210044838.63723-4-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:33 -08:00
SeongJae Park
61e88a2f66
mm/damon/Kconfig: add DAMON debugfs interface deprecation notice
...
The DAMON debugfs interface was announced to be deprecated after the
next (>v5.15) LTS kernel is released. And v6.1.y has been announced as an LTS[1].
Though the announcement has been there for a while, some people might not
have noticed it so far. Also, some users could depend on the interface and
have problems moving to the alternative (the DAMON sysfs interface).
For such cases, mark the DAMON debugfs interface as deprecated, and list
contacts to ask for help, in the Kconfig.
[1] https://git.kernel.org/pub/scm/docs/kernel/website.git/commit/?id=332e9121320bc7461b2d3a79665caf153e51732c
Link: https://lkml.kernel.org/r/20230209192009.7885-3-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:32 -08:00
SeongJae Park
5445fcbc4c
Docs/admin-guide/mm/damon/usage: add DAMON debugfs interface deprecation notice
...
Patch series "mm/damon: deprecate DAMON debugfs interface".
The DAMON debugfs interface was announced to be deprecated after the
next (>v5.15) LTS kernel is released. And v6.1.y has been announced to be an LTS[1].
Though the announcement has been there for a while, some people might not
have noticed it so far. Also, some users could depend on it and have
problems moving to the alternative (the DAMON sysfs interface).
For such cases, keep the code and documents, with warning messages and
contacts to ask for help regarding the deprecation.
[1] https://git.kernel.org/pub/scm/docs/kernel/website.git/commit/?id=332e9121320bc7461b2d3a79665caf153e51732c
This patch (of 3):
The DAMON debugfs interface was announced to be deprecated after the
next (>v5.15) LTS kernel is released. And v6.1.y has been announced as an LTS[1].
Though the announcement has been there for a while, some people might not
have noticed it so far. Also, some users could depend on it and have
problems moving to the alternative (the DAMON sysfs interface).
For such cases, mark the DAMON debugfs interface as deprecated, and list
contacts to ask for help, in the document.
[1] https://git.kernel.org/pub/scm/docs/kernel/website.git/commit/?id=332e9121320bc7461b2d3a79665caf153e51732c
Link: https://lkml.kernel.org/r/20230209192009.7885-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20230209192009.7885-2-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:32 -08:00
Vishal Moola (Oracle)
280d724ac2
mm/migrate: convert putback_movable_pages() to use folios
...
Removes 6 calls to compound_head(), and replaces putback_movable_page()
with putback_movable_folio() as well.
Link: https://lkml.kernel.org/r/20230130214352.40538-5-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:32 -08:00
Vishal Moola (Oracle)
19979497c0
mm/migrate: convert isolate_movable_page() to use folios
...
Removes 6 calls to compound_head() and prepares the function to take in a
folio instead of page argument.
Link: https://lkml.kernel.org/r/20230130214352.40538-4-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:32 -08:00
Vishal Moola (Oracle)
da707a6d18
mm/migrate: add folio_movable_ops()
...
folio_movable_ops() does the same as page_movable_ops() except uses folios
instead of pages. This function will help make folio conversions in
migrate.c more readable.
Link: https://lkml.kernel.org/r/20230130214352.40538-3-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:31 -08:00
Vishal Moola (Oracle)
3c1ea2c729
mm: add folio_get_nontail_page()
...
Patch series "Convert a couple migrate functions to use folios", v2.
This patchset introduces folio_movable_ops() and converts 3 functions in
mm/migrate.c to use folios. It also introduces folio_get_nontail_page()
for folio conversions which may want to distinguish between head and tail
pages.
This patch (of 4):
folio_get_nontail_page() returns the folio associated with a head page.
This is necessary for folio conversions where the behavior of that
function differs between head pages and tail pages.
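A toy model of the head/tail distinction the helper encodes; the struct layout and refcounting are illustrative assumptions, not the kernel's compound-page machinery:

```c
#include <stddef.h>

/* Toy model of compound pages: every tail page points at its head;
 * a head page points at itself. */
struct page {
	struct page *head;
	int refcount;
};

/* Grab a reference and return the folio only for a head page; tail
 * pages yield NULL so callers can tell the two apart. */
static struct page *folio_get_nontail_page(struct page *page)
{
	if (page->head != page)
		return NULL;
	page->refcount++;
	return page;
}
```

A plain page-to-folio lookup would silently map a tail page to its containing folio; returning NULL instead preserves the head/tail distinction for callers that need it.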
Link: https://lkml.kernel.org/r/20230130214352.40538-1-vishal.moola@gmail.com
Link: https://lkml.kernel.org/r/20230130214352.40538-2-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:31 -08:00
Vishal Moola (Oracle)
4a64981dfe
mm/mempolicy: convert migrate_page_add() to migrate_folio_add()
...
Replace migrate_page_add() with migrate_folio_add(). migrate_folio_add()
does the same as migrate_page_add(), but takes in a folio instead of a page.
This removes a couple of calls to compound_head().
Link: https://lkml.kernel.org/r/20230130201833.27042-7-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:31 -08:00
Vishal Moola (Oracle)
d451b89dcd
mm/mempolicy: convert queue_pages_required() to queue_folio_required()
...
Replace queue_pages_required() with queue_folio_required().
queue_folio_required() does the same as queue_pages_required(), except
takes in a folio instead of a page.
Link: https://lkml.kernel.org/r/20230130201833.27042-6-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jane Chu <jane.chu@oracle.com>
Cc: "Yin, Fengwei" <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:31 -08:00
Vishal Moola (Oracle)
0a2c1e8183
mm/mempolicy: convert queue_pages_hugetlb() to queue_folios_hugetlb()
...
This change is in preparation for the conversion of queue_pages_required()
to queue_folio_required() and migrate_page_add() to migrate_folio_add().
Link: https://lkml.kernel.org/r/20230130201833.27042-5-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jane Chu <jane.chu@oracle.com>
Cc: "Yin, Fengwei" <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:31 -08:00
Vishal Moola (Oracle)
3dae02bbd0
mm/mempolicy: convert queue_pages_pte_range() to queue_folios_pte_range()
...
This function now operates on folios associated with ptes instead of
pages.
This change is in preparation for the conversion of queue_pages_required()
to queue_folio_required() and migrate_page_add() to migrate_folio_add().
Link: https://lkml.kernel.org/r/20230130201833.27042-4-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jane Chu <jane.chu@oracle.com>
Cc: "Yin, Fengwei" <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:30 -08:00
Vishal Moola (Oracle)
de1f505552
mm/mempolicy: convert queue_pages_pmd() to queue_folios_pmd()
...
The function now operates on a folio instead of the page associated with a
pmd.
This change is in preparation for the conversion of queue_pages_required()
to queue_folio_required() and migrate_page_add() to migrate_folio_add().
Link: https://lkml.kernel.org/r/20230130201833.27042-3-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jane Chu <jane.chu@oracle.com>
Cc: "Yin, Fengwei" <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:30 -08:00
Vishal Moola (Oracle)
fa4e3f5ffa
mm: add folio_estimated_sharers()
...
Patch series "Convert various mempolicy.c functions to use folios", v4.
This patch series converts migrate_page_add() and queue_pages_required()
to migrate_folio_add() and queue_folio_required(). It also converts the
callers of the functions to use folios as well, and introduces a helper
function to estimate the number of sharers of a folio.
This patch (of 6):
folio_estimated_sharers() takes in a folio and returns the precise number
of times the first subpage of the folio is mapped.
This function aims to provide an estimate for the number of sharers of a
folio. This is necessary for folio conversions where we care about the
number of processes that share a folio, but don't necessarily want to
check every single page within that folio.
This is in contrast to folio_mapcount() which calculates the total number
of the times a folio and all its subpages are mapped.
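The estimate-versus-exact trade-off can be sketched as follows; the struct layout, constant, and per-subpage mapcounts are illustrative assumptions, not the kernel's folio internals:

```c
/* Toy model: a folio spans FOLIO_NR_PAGES subpages, each with its own
 * mapcount. */
#define FOLIO_NR_PAGES 4

struct folio {
	int page_mapcount[FOLIO_NR_PAGES];
};

/* Cheap estimate: look only at the first subpage's mapcount. */
static int folio_estimated_sharers(const struct folio *folio)
{
	return folio->page_mapcount[0];
}

/* Exact but costly: walk every subpage. */
static int folio_mapcount(const struct folio *folio)
{
	int i, total = 0;

	for (i = 0; i < FOLIO_NR_PAGES; i++)
		total += folio->page_mapcount[i];
	return total;
}
```

When callers only need "is this folio shared?", reading one subpage's mapcount avoids an O(nr_pages) walk at the cost of possibly missing subpages mapped elsewhere.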
Link: https://lkml.kernel.org/r/20230130201833.27042-1-vishal.moola@gmail.com
Link: https://lkml.kernel.org/r/20230130201833.27042-2-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:30 -08:00
Sidhartha Kumar
192a502203
Documentation/mm: update hugetlbfs documentation to mention alloc_hugetlb_folio
...
Link: https://lkml.kernel.org/r/20230125170537.96973-9-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:30 -08:00
Sidhartha Kumar
371607a3c7
mm/hugetlb: convert hugetlb_wp() to take in a folio
...
Change the pagecache_page argument of hugetlb_wp to pagecache_folio.
Replaces a call to find_lock_page() with filemap_lock_folio().
Link: https://lkml.kernel.org/r/20230125170537.96973-8-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reported-by: gerald.schaefer@linux.ibm.com
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:29 -08:00
Sidhartha Kumar
9b91c0e277
mm/hugetlb: convert hugetlb_add_to_page_cache to take in a folio
...
Every caller of hugetlb_add_to_page_cache() is now passing in
&folio->page, change the function to take in a folio directly and clean up
the call sites.
Link: https://lkml.kernel.org/r/20230125170537.96973-7-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:29 -08:00
Sidhartha Kumar
d2d7bb44bf
mm/hugetlb: convert restore_reserve_on_error to take in a folio
...
Every caller of restore_reserve_on_error() is now passing in &folio->page,
change the function to take in a folio directly and clean up the call
sites.
Link: https://lkml.kernel.org/r/20230125170537.96973-6-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:29 -08:00
Sidhartha Kumar
d0ce0e47b3
mm/hugetlb: convert hugetlb fault paths to use alloc_hugetlb_folio()
...
Change alloc_huge_page() to alloc_hugetlb_folio() by changing all callers
to handle the now folio return type of the function. In this conversion,
alloc_huge_page_vma() is also changed to alloc_hugetlb_folio_vma() and
hugepage_add_new_anon_rmap() is changed to take in a folio directly. Many
additions of '&folio->page' are cleaned up in subsequent patches.
hugetlbfs_fallocate() is also refactored to use the RCU +
page_cache_next_miss() API.
Link: https://lkml.kernel.org/r/20230125170537.96973-5-sidhartha.kumar@oracle.com
Suggested-by: Mike Kravetz <mike.kravetz@oracle.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:29 -08:00
Sidhartha Kumar
ea8e72f411
mm/hugetlb: convert putback_active_hugepage to take in a folio
...
Convert putback_active_hugepage() to folio_putback_active_hugetlb(), this
removes one user of the Huge Page macros which take in a page. The
callers in migrate.c are also cleaned up by being able to directly use the
src and dst folio variables.
Link: https://lkml.kernel.org/r/20230125170537.96973-4-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:28 -08:00
Sidhartha Kumar
91a2fb956a
mm/hugetlb: convert hugetlbfs_pagecache_present() to folios
...
Refactor hugetlbfs_pagecache_present() to avoid getting and dropping a
refcount on a page. Use RCU and page_cache_next_miss() instead.
Link: https://lkml.kernel.org/r/20230125170537.96973-3-sidhartha.kumar@oracle.com
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:28 -08:00
Sidhartha Kumar
ea4c353df3
mm/hugetlb: convert hugetlb_install_page to folios
...
Patch series "convert hugetlb fault functions to folios", v2.
This series converts the hugetlb page faulting functions to operate on
folios. These include hugetlb_no_page(), hugetlb_wp(),
copy_hugetlb_page_range(), and hugetlb_mcopy_atomic_pte().
This patch (of 8):
Change hugetlb_install_page() to hugetlb_install_folio(). This reduces
one user of the Huge Page flag macros which take in a page.
Link: https://lkml.kernel.org/r/20230125170537.96973-1-sidhartha.kumar@oracle.com
Link: https://lkml.kernel.org/r/20230125170537.96973-2-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:28 -08:00
Sidhartha Kumar
bdd7be075a
mm/hugetlb: convert demote_free_huge_page to folios
...
Change demote_free_huge_page to demote_free_hugetlb_folio() and change
demote_pool_huge_page() to pass in a folio.
Link: https://lkml.kernel.org/r/20230113223057.173292-9-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:28 -08:00
Sidhartha Kumar
0ffdc38eb5
mm/hugetlb: convert restore_reserve_on_error() to folios
...
Use the hugetlb folio flag macros inside restore_reserve_on_error() and
update the comments to reflect the use of folios.
Link: https://lkml.kernel.org/r/20230113223057.173292-8-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:28 -08:00
Sidhartha Kumar
e37d3e838d
mm/hugetlb: convert alloc_migrate_huge_page to folios
...
Change alloc_huge_page_nodemask() to alloc_hugetlb_folio_nodemask() and
alloc_migrate_huge_page() to alloc_migrate_hugetlb_folio(). Both
functions now return a folio rather than a page.
Link: https://lkml.kernel.org/r/20230113223057.173292-7-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:27 -08:00
Sidhartha Kumar
ff7d853b03
mm/hugetlb: increase use of folios in alloc_huge_page()
...
Change hugetlb_cgroup_commit_charge{,_rsvd}(), dequeue_huge_page_vma() and
alloc_buddy_huge_page_with_mpol() to use folios, so that alloc_huge_page()
is cleaned up by operating on folios until its return.
Link: https://lkml.kernel.org/r/20230113223057.173292-6-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-02-13 15:54:27 -08:00
Sidhartha Kumar
3a740e8bb5
mm/hugetlb: convert alloc_surplus_huge_page() to folios
...
Change alloc_surplus_huge_page() to alloc_surplus_hugetlb_folio() and
update its callers.
Link: https://lkml.kernel.org/r/20230113223057.173292-5-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com >
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Muchun Song <songmuchun@bytedance.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-02-13 15:54:27 -08:00
Sidhartha Kumar
a36f1e9024
mm/hugetlb: convert dequeue_hugetlb_page functions to folios
...
dequeue_huge_page_node_exact() is changed to
dequeue_hugetlb_folio_node_exact() and dequeue_huge_page_nodemask() is
changed to dequeue_hugetlb_folio_nodemask(). Update their callers to pass
in a folio.
Link: https://lkml.kernel.org/r/20230113223057.173292-4-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Kravetz <mike.kravetz@oracle.com >
Cc: Muchun Song <songmuchun@bytedance.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-02-13 15:54:27 -08:00
Sidhartha Kumar
6f6956cf7e
mm/hugetlb: convert __update_and_free_page() to folios
...
Change __update_and_free_page() to __update_and_free_hugetlb_folio(), and
change its callers to pass in a folio.
Link: https://lkml.kernel.org/r/20230113223057.173292-3-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com >
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Muchun Song <songmuchun@bytedance.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-02-13 15:54:26 -08:00
Sidhartha Kumar
6aa3a92012
mm/hugetlb: convert isolate_hugetlb to folios
...
Patch series "continue hugetlb folio conversion", v3.
This series continues the conversion of core hugetlb functions to use
folios. This series converts many helper functions in the hugetlb fault
path. This is in preparation for another series to convert the hugetlb
fault code paths to operate on folios.
This patch (of 8):
Convert isolate_hugetlb() to take in a folio and convert its callers to
pass a folio. Using page_folio() to convert the callers is safe as
isolate_hugetlb() operates on a head page.
Link: https://lkml.kernel.org/r/20230113223057.173292-1-sidhartha.kumar@oracle.com
Link: https://lkml.kernel.org/r/20230113223057.173292-2-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com >
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Kravetz <mike.kravetz@oracle.com >
Cc: Muchun Song <songmuchun@bytedance.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-02-13 15:54:26 -08:00
Vishal Moola (Oracle)
f528260b1a
mm/khugepaged: fix invalid page access in release_pte_pages()
...
release_pte_pages() converts from a pfn to a folio by using pfn_folio().
If the pte is not mapped, pfn_folio() results in undefined behavior,
which ends up causing a kernel panic[1].
Fix the issue by only calling pfn_folio() once we have validated that the
pte is both valid and mapped.
[1] https://lore.kernel.org/linux-mm/ff300770-afe9-908d-23ed-d23e0796e899@samsung.com/
Link: https://lkml.kernel.org/r/20230213214324.34215-1-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com >
Fixes: 9bdfeea46f ("mm/khugepaged: convert release_pte_pages() to use folios")
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com >
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com >
Debugged-by: Alexandre Ghiti <alex@ghiti.fr >
Cc: Matthew Wilcox <willy@infradead.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-02-13 15:54:26 -08:00
Andrew Morton
f67d6b2664
Merge branch 'mm-hotfixes-stable' into mm-stable
...
To pick up depended-upon changes
2023-02-10 15:34:48 -08:00
Li Zhijian
223ec6ab26
mm/memremap.c: fix outdated comment in devm_memremap_pages
...
commit a4574f63ed ("mm/memremap_pages: convert to 'struct range'")
converted res to range; update the comment correspondingly.
Link: https://lkml.kernel.org/r/1675751220-2-1-git-send-email-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com >
Cc: Dan Williams <dan.j.williams@intel.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-02-09 16:51:46 -08:00
Thomas Weißschuh
e56397e8c4
mm/damon/sysfs: make kobj_type structures constant
...
Since commit ee6d3dd4ed ("driver core: make kobj_type constant.") the
driver core allows the use of const struct kobj_type.
Take advantage of this to constify the structure definitions to prevent
modification at runtime.
Link: https://lkml.kernel.org/r/20230207-kobj_type-damon-v1-1-9d4fea6a465b@weissschuh.net
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net >
Reviewed-by: SeongJae Park <sj@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-02-09 16:51:45 -08:00