Matthew Wilcox (Oracle)
c3a15bff46
mm: reimplement folio_order() and folio_nr_pages()
...
Instead of calling compound_order() and compound_nr(), use the folio
directly. Saves 1905 bytes from mm/filemap.o due to folio_test_large()
now being a cheaper check than PageHead().
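A minimal sketch of the reimplementation, assuming the underscore-prefixed
fields this series adds to struct folio (the exact field layout is
configuration-dependent):

    static inline unsigned int folio_order(struct folio *folio)
    {
            if (!folio_test_large(folio))
                    return 0;
            return folio->_folio_order;     /* lives in the first tail page */
    }

    static inline long folio_nr_pages(struct folio *folio)
    {
            if (!folio_test_large(folio))
                    return 1;
    #ifdef CONFIG_64BIT
            return folio->_folio_nr_pages;
    #else
            return 1L << folio->_folio_order;
    #endif
    }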
Link: https://lkml.kernel.org/r/20220902194653.1739778-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:44 -07:00
Matthew Wilcox (Oracle)
379708ffde
mm: add the first tail page to struct folio
...
Some of the static checkers get confused by extracting the page from the
folio and referring to fields in the first tail page. Adding these fields
to struct folio lets us avoid doing that. It has the risk that people
will refer to those fields without checking that the folio is actually a
large folio, so prefix them with underscores and document the preferred
function to use instead.
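Schematically, the change amounts to something like the following sketch
(member names and CONFIG guards are assumptions; the authoritative layout
is the struct folio definition in include/linux/mm_types.h):

    struct folio {
            /* ... members overlaying the head page ... */
            union {
                    struct {
                            unsigned long _flags_1;
                            unsigned long _head_1;
                            unsigned char _folio_dtor;
                            unsigned char _folio_order;
                            atomic_t _total_mapcount;
                            atomic_t _pincount;
    #ifdef CONFIG_64BIT
                            unsigned int _folio_nr_pages;
    #endif
                    };
                    struct page __page_1;   /* the first tail page */
            };
    };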
Link: https://lkml.kernel.org/r/20220902194653.1739778-3-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:44 -07:00
Matthew Wilcox (Oracle)
49fd9b6df5
mm/vmscan: fix a lot of comments
...
Patch series "MM folio changes for 6.1", v2.
My focus this round has been on shmem. I believe it is now fully
converted to folios. Of course, shmem interacts with a lot of the swap
cache and other parts of the kernel, so there are patches all over the MM.
This patch series survives a round of xfstests on tmpfs, which is nice,
but hardly an exhaustive test. Hugh was nice enough to run a round of
tests on it and found a bug which is fixed in this edition.
This patch (of 57):
A lot of comments mention pages when they should say folios.
Fix them up.
[akpm@linux-foundation.org: fixups for mglru additions]
Link: https://lkml.kernel.org/r/20220902194653.1739778-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20220902194653.1739778-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Cc: Hugh Dickins <hughd@google.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:44 -07:00
Qi Zheng
58730ab6c7
ksm: convert to use common struct mm_slot
...
Convert to use common struct mm_slot, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-8-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Rapoport <rppt@kernel.org >
Cc: Minchan Kim <minchan@kernel.org >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Yang Shi <shy828301@gmail.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:44 -07:00
Qi Zheng
79b0994156
ksm: convert ksm_mm_slot.link to ksm_mm_slot.hash
...
In order to use common struct mm_slot, convert ksm_mm_slot.link to
ksm_mm_slot.hash in advance, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-7-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Rapoport <rppt@kernel.org >
Cc: Minchan Kim <minchan@kernel.org >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Yang Shi <shy828301@gmail.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:44 -07:00
Qi Zheng
23f746e412
ksm: convert ksm_mm_slot.mm_list to ksm_mm_slot.mm_node
...
In order to use common struct mm_slot, convert ksm_mm_slot.mm_list to
ksm_mm_slot.mm_node in advance, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-6-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Rapoport <rppt@kernel.org >
Cc: Minchan Kim <minchan@kernel.org >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Yang Shi <shy828301@gmail.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:44 -07:00
Qi Zheng
21fbd59136
ksm: add the ksm prefix to the names of the ksm private structures
...
To prevent the names of ksm's private structures from colliding with
the name of the common structure used in subsequent patches, prefix
their names with ksm in advance.
Link: https://lkml.kernel.org/r/20220831031951.43152-5-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Rapoport <rppt@kernel.org >
Cc: Minchan Kim <minchan@kernel.org >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Yang Shi <shy828301@gmail.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:43 -07:00
Qi Zheng
79e1119b7e
ksm: remove redundant declarations in ksm.h
...
Currently, the declaration of struct stable_node in include/linux/ksm.h
is used neither there nor in the file that includes it, and the
declaration of struct mem_cgroup is likewise unused in ksm.h. They are
both redundant, so remove them.
Link: https://lkml.kernel.org/r/20220831031951.43152-4-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Rapoport <rppt@kernel.org >
Cc: Minchan Kim <minchan@kernel.org >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Yang Shi <shy828301@gmail.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:43 -07:00
Qi Zheng
b26e27015e
mm: thp: convert to use common struct mm_slot
...
Rename private struct mm_slot to struct khugepaged_mm_slot and convert to
use common struct mm_slot with no functional change.
[zhengqi.arch@bytedance.com: fix build error with CONFIG_SHMEM disabled]
Link: https://lkml.kernel.org/r/639fa8d5-8e5b-2333-69dc-40ed46219364@bytedance.com
Link: https://lkml.kernel.org/r/20220831031951.43152-3-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Rapoport <rppt@kernel.org >
Cc: Minchan Kim <minchan@kernel.org >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Yang Shi <shy828301@gmail.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:43 -07:00
Qi Zheng
7e736b8e36
mm: introduce common struct mm_slot
...
Patch series "add common struct mm_slot and use it in THP and KSM", v2.
At present, both the THP and KSM modules have similar mm_slot structures
for organizing and recording the information required for scanning an mm,
and each defines the following identical operation functions:
- alloc_mm_slot
- free_mm_slot
- get_mm_slot
- insert_to_mm_slots_hash
To deduplicate this code, this patch series introduces a common struct
mm_slot and converts THP and KSM to use it.
This patch (of 7):
At present, both the THP and KSM modules have similar mm_slot structures
for organizing and recording the information required for scanning an mm,
and each defines the following identical operation functions:
- alloc_mm_slot
- free_mm_slot
- get_mm_slot
- insert_to_mm_slots_hash
To deduplicate this code, this patch introduces a common struct mm_slot;
subsequent patches will convert THP and KSM to use it.
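A sketch of what the common structure could look like, assuming the
helpers keep the semantics listed above (the final form lives in
mm/mm_slot.h and may differ in detail):

    struct mm_slot {
            struct hlist_node hash;         /* link in the mm_slots hash */
            struct list_head mm_node;       /* link in the scan list */
            struct mm_struct *mm;           /* the mm this slot describes */
    };

    /* embedders (THP, KSM) wrap it and recover their own type with: */
    #define mm_slot_entry(ptr, type, member) \
            container_of(ptr, type, member)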
Link: https://lkml.kernel.org/r/20220831031951.43152-1-zhengqi.arch@bytedance.com
Link: https://lkml.kernel.org/r/20220831031951.43152-2-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mike Rapoport <rppt@kernel.org >
Cc: Minchan Kim <minchan@kernel.org >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Yang Shi <shy828301@gmail.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-10-03 14:02:43 -07:00
xu xin
21b7bdb504
ksm: add profit monitoring documentation
...
Add a description of KSM profit and how to determine it, both
system-wide and within a single process.
Link: https://lkml.kernel.org/r/20220830144003.299870-1-xu.xin16@zte.com.cn
Signed-off-by: xu xin <xu.xin16@zte.com.cn >
Reviewed-by: Xiaokai Ran <ran.xiaokai@zte.com.cn >
Reviewed-by: Yang Yang <yang.yang29@zte.com.cn >
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com >
Cc: Alexey Dobriyan <adobriyan@gmail.com >
Cc: Hugh Dickins <hughd@google.com >
Cc: Izik Eidus <izik.eidus@ravellosystems.com >
Cc: Matthew Wilcox <willy@infradead.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:29 -07:00
xu xin
cb4df4cae4
ksm: count allocated ksm rmap_items for each process
...
Patch series "ksm: count allocated rmap_items and update documentation",
v5.
KSM can save memory by merging identical pages, but it can also consume
additional memory, because it needs to generate rmap_items to save each
scanned page's brief rmap information.
To help users determine how much benefit the ksm policy (like madvise)
they are using brings, we add a new interface /proc/<pid>/ksm_stat for
each process. The value "ksm_rmap_items" in it indicates the total number
of ksm rmap_items allocated for this process.
The detailed description can be seen in the following patches' commit
messages.
This patch (of 2):
KSM can save memory by merging identical pages, but it can also consume
additional memory, because it needs to generate rmap_items to save each
scanned page's brief rmap information. Some of these pages may be merged,
but some may never be merged even after being checked several times; for
those, the memory consumed is pure overhead.
Whether KSM saves or consumes memory system-wide can be determined by a
comprehensive calculation over pages_sharing, pages_shared, pages_unshared
and pages_volatile. A simple approximate calculation:
profit =~ pages_sharing * sizeof(page) - (all_rmap_items) *
sizeof(rmap_item);
where all_rmap_items equals the sum of pages_sharing, pages_shared,
pages_unshared and pages_volatile.
But this kind of ksm profit cannot be calculated for a single process,
because the number of ksm rmap_items allocated by a process is not
exposed. For user applications, if this kind of information could be
obtained, it would help users know how much benefit the ksm policy (like
madvise) they are using brings, and then optimize their app code. For
example, if an application madvises 1000 pages as MERGEABLE while only a
few pages are really merged, then it's not cost-efficient.
So we add a new interface /proc/<pid>/ksm_stat for each process, in which
only the value of ksm_rmap_items is shown for now; more values can be
added in the future.
So similarly, we can calculate the ksm profit approximately for a single
process by:
profit =~ ksm_merging_pages * sizeof(page) - ksm_rmap_items *
sizeof(rmap_item);
where ksm_merging_pages is shown at /proc/<pid>/ksm_merging_pages, and
ksm_rmap_items is shown in /proc/<pid>/ksm_stat.
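As an illustrative worked example (the constants are assumptions: 4 KiB
pages and roughly 64 bytes per rmap_item, both of which depend on the
kernel configuration), a process showing ksm_merging_pages = 1000 and
ksm_rmap_items = 1200 would have
profit =~ 1000 * 4096 - 1200 * 64 = 4019200 bytes,
so the madvise policy clearly pays off for that process.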
Link: https://lkml.kernel.org/r/20220830143731.299702-1-xu.xin16@zte.com.cn
Link: https://lkml.kernel.org/r/20220830143838.299758-1-xu.xin16@zte.com.cn
Signed-off-by: xu xin <xu.xin16@zte.com.cn >
Reviewed-by: Xiaokai Ran <ran.xiaokai@zte.com.cn >
Reviewed-by: Yang Yang <yang.yang29@zte.com.cn >
Signed-off-by: CGEL ZTE <cgel.zte@gmail.com >
Cc: Alexey Dobriyan <adobriyan@gmail.com >
Cc: Bagas Sanjaya <bagasdotme@gmail.com >
Cc: Hugh Dickins <hughd@google.com >
Cc: Izik Eidus <izik.eidus@ravellosystems.com >
Cc: Matthew Wilcox <willy@infradead.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:29 -07:00
Shakeel Butt
e6ad640bc4
mm: deduplicate cacheline padding code
...
There are three users (mmzone.h, memcontrol.h, page_counter.h) using
similar code for forcing cacheline padding between fields of different
structures. Dedup that code.
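A sketch of the deduplicated helper, assuming it lands in
include/linux/cache.h (treat the exact names as assumptions based on the
description above):

    struct cacheline_padding {
            char x[0];
    } ____cacheline_internodealigned_in_smp;

    #define CACHELINE_PADDING(name) struct cacheline_padding name

    /* usage inside a structure such as struct zone: */
    struct zone_like {                      /* hypothetical embedder */
            /* fields hot on the allocation path ... */
            CACHELINE_PADDING(_pad1_);
            /* fields written from a different path ... */
    };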
Link: https://lkml.kernel.org/r/20220826230642.566725-1-shakeelb@google.com
Signed-off-by: Shakeel Butt <shakeelb@google.com >
Suggested-by: Feng Tang <feng.tang@intel.com >
Reviewed-by: Feng Tang <feng.tang@intel.com >
Acked-by: Michal Hocko <mhocko@suse.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:29 -07:00
Michal Hocko
974f4367dd
mm: reduce noise in show_mem for lowmem allocations
...
While discussing early DMA pool pre-allocation failure with Christoph [1]
I have realized that the allocation failure warning is rather noisy for
constrained allocations like GFP_DMA{32}. Those zones are often not
populated on all nodes, as their memory ranges are constrained.
This is an attempt to reduce the ballast that doesn't provide any relevant
information for investigating those allocation failures. Please note that
I have only compile tested it (in my default config setup) and I am
throwing it out mostly to see what people think about it.
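Illustratively, the approach amounts to skipping zones that the failed
allocation could never have used when dumping memory state; a simplified
sketch (max_zone_idx here is an assumed threshold derived from the
allocation's gfp mask):

    for_each_populated_zone(zone) {
            if (zone_idx(zone) > max_zone_idx)
                    continue;       /* irrelevant to this failure */
            /* ... dump per-zone counters as before ... */
    }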
[1] http://lkml.kernel.org/r/20220817060647.1032426-1-hch@lst.de
[mhocko@suse.com: update]
Link: https://lkml.kernel.org/r/Yw29bmJTIkKogTiW@dhcp22.suse.cz
[mhocko@suse.com: fix build]
[akpm@linux-foundation.org: fix it for mapletree]
[akpm@linux-foundation.org: update it for Michal's update]
[mhocko@suse.com: fix arch/powerpc/xmon/xmon.c]
Link: https://lkml.kernel.org/r/Ywh3C4dKB9B93jIy@dhcp22.suse.cz
[akpm@linux-foundation.org: fix arch/sparc/kernel/setup_32.c]
Link: https://lkml.kernel.org/r/YwScVmVofIZkopkF@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com >
Acked-by: Johannes Weiner <hannes@cmpxchg.org >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Cc: Christoph Hellwig <hch@infradead.org >
Cc: Mel Gorman <mgorman@suse.de >
Cc: Dan Carpenter <dan.carpenter@oracle.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:29 -07:00
David Hildenbrand
7014887a01
mm: fixup documentation regarding pte_numa() and PROT_NUMA
...
pte_numa() no longer exists -- replaced by pte_protnone() -- and PROT_NUMA
probably never existed: MM_CP_PROT_NUMA also ends up using PROT_NONE.
Let's fix up the doc.
Link: https://lkml.kernel.org/r/20220825164659.89824-4-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com >
Cc: Andrea Arcangeli <aarcange@redhat.com >
Cc: Hugh Dickins <hughd@google.com >
Cc: Jason Gunthorpe <jgg@nvidia.com >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mel Gorman <mgorman@suse.de >
Cc: Peter Xu <peterx@redhat.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:28 -07:00
David Hildenbrand
0cf459866a
mm/gup: use gup_can_follow_protnone() also in GUP-fast
...
There seems to be no reason why FOLL_FORCE during GUP-fast would have to
fall back to the slow path when stumbling over a PROT_NONE mapped page. We
only have to trigger hinting faults in case FOLL_FORCE is not set, and any
kind of fault handling naturally happens from the slow path -- where NUMA
hinting accounting/handling would be performed.
Note that the comment regarding THP migration is outdated: commit
2b4847e730 ("mm: numa: serialise parallel get_user_page against THP
migration") described that this was required for THP due to lack of PMD
migration entries. Nowadays, we do have proper PMD migration entries in
place -- see set_pmd_migration_entry(), which does a proper
pmdp_invalidate() when placing the migration entry.
So let's just reuse gup_can_follow_protnone() here to make it consistent
and drop the somewhat outdated comments.
Link: https://lkml.kernel.org/r/20220825164659.89824-3-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com >
Cc: Andrea Arcangeli <aarcange@redhat.com >
Cc: Hugh Dickins <hughd@google.com >
Cc: Jason Gunthorpe <jgg@nvidia.com >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mel Gorman <mgorman@suse.de >
Cc: Peter Xu <peterx@redhat.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:28 -07:00
David Hildenbrand
474098edac
mm/gup: replace FOLL_NUMA by gup_can_follow_protnone()
...
Patch series "mm: minor cleanups around NUMA hinting".
Working on some GUP cleanups (e.g., getting rid of some FOLL_ flags) and
preparing for other GUP changes (getting rid of FOLL_FORCE|FOLL_WRITE
for taking an R/O longterm pin), this is something I can easily send out
independently.
Get rid of FOLL_NUMA, allow FOLL_FORCE access to PROT_NONE mapped pages
in GUP-fast, and fix up some documentation around NUMA hinting.
This patch (of 3):
No need for a special flag that is not even properly documented to be
internal-only.
Let's just factor this check out and get rid of this flag. The separate
function has the nice benefit that we can centralize comments.
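The factored-out check boils down to something like this sketch (the
centralized comment in the real helper spells out the FOLL_FORCE
reasoning in more detail):

    static inline bool gup_can_follow_protnone(unsigned int flags)
    {
            /*
             * FOLL_FORCE has to be able to make progress even if the VMA
             * is inaccessible, so don't trigger NUMA hinting faults for it.
             */
            return flags & FOLL_FORCE;
    }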
Link: https://lkml.kernel.org/r/20220825164659.89824-2-david@redhat.com
Link: https://lkml.kernel.org/r/20220825164659.89824-1-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com >
Cc: Andrea Arcangeli <aarcange@redhat.com >
Cc: Hugh Dickins <hughd@google.com >
Cc: Jason Gunthorpe <jgg@nvidia.com >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mel Gorman <mgorman@suse.de >
Cc: Peter Xu <peterx@redhat.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:28 -07:00
Haiyue Wang
f7091ed64e
mm: fix the handling Non-LRU pages returned by follow_page
...
The code handling Non-LRU pages returned by follow_page() jumps out
directly without calling put_page(), even though the 'FOLL_GET' flag
passed to follow_page() caused get_page() to be called. Fix the zone
device page check by handling the page reference count correctly before
returning.
And as David reviewed, "device pages are never PageKsm pages", so drop
the zone device page check from break_ksm().
Since a zone device page can't be a transparent huge page, also drop the
redundant zone device page check from split_huge_pages_pid(). (by Miaohe)
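A sketch of the corrected flow (simplified; the surrounding error
handling is elided):

    page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
    if (IS_ERR_OR_NULL(page))
            goto out;
    if (is_zone_device_page(page)) {
            put_page(page);         /* drop the reference taken via FOLL_GET */
            goto out;
    }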
Link: https://lkml.kernel.org/r/20220823135841.934465-3-haiyue.wang@intel.com
Fixes: 3218f8712d ("mm: handling Non-LRU pages returned by vm_normal_pages")
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com >
Reviewed-by: "Huang, Ying" <ying.huang@intel.com >
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com >
Reviewed-by: Alistair Popple <apopple@nvidia.com >
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com >
Acked-by: David Hildenbrand <david@redhat.com >
Cc: Alex Sierra <alex.sierra@amd.com >
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com >
Cc: Mike Kravetz <mike.kravetz@oracle.com >
Cc: Muchun Song <songmuchun@bytedance.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:28 -07:00
Jakub Matěna
ca3d76b0aa
mm: add merging after mremap resize
...
When an mremap() call results in expansion, it might be possible to merge
the VMA with the next VMA, which may have become adjacent. This patch adds
a vma_merge() call after the expansion is done to attempt the merge.
[akpm@linux-foundation.org: coding-style cleanups]
Link: https://lkml.kernel.org/r/20220603145719.1012094-3-matenajakub@gmail.com
Signed-off-by: Jakub Matěna <matenajakub@gmail.com >
Reviewed-by: Vlastimil Babka <vbabka@suse.cz >
Cc: Hugh Dickins <hughd@google.com >
Cc: "Kirill A . Shutemov" <kirill@shutemov.name >
Cc: Liam Howlett <liam.howlett@oracle.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Mel Gorman <mgorman@techsingularity.net >
Cc: Michal Hocko <mhocko@kernel.org >
Cc: Peter Zijlstra (Intel) <peterz@infradead.org >
Cc: Rik van Riel <riel@surriel.com >
Cc: Steven Rostedt <rostedt@goodmis.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:28 -07:00
Jakub Matěna
eef199440d
mm: refactor of vma_merge()
...
Patch series "Refactor of vma_merge and new merge call", v4.
I am currently working on my master's thesis, trying to increase the
number of VMA merges that currently fail because of page offset
incompatibility and differences in their anon_vmas. The following
refactor and the added merge call included in this series are just two
smaller upgrades I created along the way.
This patch (of 2):
Refactor vma_merge() to make it shorter and more understandable. The
main change is the elimination of duplicated code in the merge-next
check. This is done by first doing the checks and caching the results
before executing the merge itself. The variable 'area' is divided into
'mid' and 'res', as previously it was used for two purposes: as the
middle VMA between prev and next, and as the result of the merge itself.
Exit paths are also unified.
Link: https://lkml.kernel.org/r/20220603145719.1012094-1-matenajakub@gmail.com
Link: https://lkml.kernel.org/r/20220603145719.1012094-2-matenajakub@gmail.com
Signed-off-by: Jakub Matěna <matenajakub@gmail.com >
Reviewed-by: Vlastimil Babka <vbabka@suse.cz >
Cc: Michal Hocko <mhocko@kernel.org >
Cc: Mel Gorman <mgorman@techsingularity.net >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Liam Howlett <liam.howlett@oracle.com >
Cc: Hugh Dickins <hughd@google.com >
Cc: "Kirill A . Shutemov" <kirill@shutemov.name >
Cc: Rik van Riel <riel@surriel.com >
Cc: Steven Rostedt <rostedt@goodmis.org >
Cc: Peter Zijlstra (Intel) <peterz@infradead.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:27 -07:00
Suren Baghdasaryan
b3541d912a
mm: delete unused MMF_OOM_VICTIM flag
...
With the last usage of MMF_OOM_VICTIM in exit_mmap gone, this flag is now
unused and can be removed.
[akpm@linux-foundation.org: remove comment about now-removed mm_is_oom_victim()]
Link: https://lkml.kernel.org/r/20220531223100.510392-2-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com >
Acked-by: Michal Hocko <mhocko@suse.com >
Cc: David Rientjes <rientjes@google.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: Roman Gushchin <guro@fb.com >
Cc: Minchan Kim <minchan@kernel.org >
Cc: "Kirill A . Shutemov" <kirill@shutemov.name >
Cc: Andrea Arcangeli <aarcange@redhat.com >
Cc: Christian Brauner (Microsoft) <brauner@kernel.org >
Cc: Christoph Hellwig <hch@infradead.org >
Cc: Oleg Nesterov <oleg@redhat.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: Jann Horn <jannh@google.com >
Cc: Shakeel Butt <shakeelb@google.com >
Cc: Peter Xu <peterx@redhat.com >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: Shuah Khan <shuah@kernel.org >
Cc: Liam Howlett <liam.howlett@oracle.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:27 -07:00
Suren Baghdasaryan
bf3980c852
mm: drop oom code from exit_mmap
...
The primary reason to invoke the oom reaper from the exit_mmap path used
to be the prevention of excessive oom killing if the oom victim's exit
races with the oom reaper (see [1] for more details). The invocation has
moved around since then because of the interaction with the munlock
logic, but the underlying reason has remained the same (see [2]).
The munlock code is no longer a problem since [3], and there shouldn't be
any blocking operation before the memory is unmapped by exit_mmap, so the
oom reaper invocation can be dropped. The unmapping part can be done with
the non-exclusive mmap_sem; the exclusive one is only required when page
tables are freed.
Remove the oom reaper from exit_mmap, which will make the code easier to
read. This is really unlikely to make any observable difference, although
some microbenchmarks could benefit from one less branch that needs to be
evaluated even though it almost never is true.
[1] 2129258024 ("mm: oom: let oom_reap_task and exit_mmap run concurrently")
[2] 27ae357fa8 ("mm, oom: fix concurrent munlock and oom reaper unmap, v3")
[3] a213e5cf71 ("mm/munlock: delete munlock_vma_pages_all(), allow oomreap")
[akpm@linux-foundation.org: restore Suren's mmap_read_lock() optimization]
Link: https://lkml.kernel.org/r/20220531223100.510392-1-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com >
Acked-by: Michal Hocko <mhocko@suse.com >
Cc: Andrea Arcangeli <aarcange@redhat.com >
Cc: Christian Brauner (Microsoft) <brauner@kernel.org >
Cc: Christoph Hellwig <hch@infradead.org >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Rientjes <rientjes@google.com >
Cc: Jann Horn <jannh@google.com >
Cc: Johannes Weiner <hannes@cmpxchg.org >
Cc: John Hubbard <jhubbard@nvidia.com >
Cc: "Kirill A . Shutemov" <kirill@shutemov.name >
Cc: Liam Howlett <liam.howlett@oracle.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Minchan Kim <minchan@kernel.org >
Cc: Oleg Nesterov <oleg@redhat.com >
Cc: Peter Xu <peterx@redhat.com >
Cc: Roman Gushchin <guro@fb.com >
Cc: Shakeel Butt <shakeelb@google.com >
Cc: Shuah Khan <shuah@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:27 -07:00
Liam Howlett
66071896cd
mm/mlock: drop dead code in count_mm_mlocked_page_nr()
...
The check for mm being null has never been needed since the only caller
has always passed in current->mm. Remove the check from
count_mm_mlocked_page_nr().
Link: https://lkml.kernel.org/r/20220615174050.738523-1-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com >
Suggested-by: Lukas Bulwahn <lukas.bulwahn@gmail.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:27 -07:00
Liam R. Howlett
c154124fe9
mm/mmap.c: pass in mapping to __vma_link_file()
...
__vma_link_file() resolves the mapping from the file, if there is one.
Pass the mapping through and check vm_file externally, since most callers
already have the required information and a check of vm_file.
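After the change, the function could look roughly like this (a sketch
assuming the caller has already resolved vma->vm_file->f_mapping):

    static void __vma_link_file(struct vm_area_struct *vma,
                                struct address_space *mapping)
    {
            if (vma->vm_flags & VM_SHARED)
                    mapping_allow_writable(mapping);

            flush_dcache_mmap_lock(mapping);
            vma_interval_tree_insert(vma, &mapping->i_mmap);
            flush_dcache_mmap_unlock(mapping);
    }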
Link: https://lkml.kernel.org/r/20220906194824.2110408-71-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:27 -07:00
Liam R. Howlett
d0601a500c
mm/mmap: drop range_has_overlap() function
...
Since there is no longer a linked list, the range_has_overlap() function
is identical to the find_vma_intersection() function.
Link: https://lkml.kernel.org/r/20220906194824.2110408-70-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:26 -07:00
Liam R. Howlett
763ecb0350
mm: remove the vma linked list
...
Replace any vm_next use with vma_find().
Update free_pgtables(), unmap_vmas(), and zap_page_range() to use the
maple tree.
Use the new free_pgtables() and unmap_vmas() in do_mas_align_munmap(). At
the same time, alter the loop to be more compact.
Now that free_pgtables() and unmap_vmas() take a maple tree as an
argument, rearrange do_mas_align_munmap() to use the new tree to hold the
vmas to remove.
Remove __vma_link_list() and __vma_unlink_list() as they are exclusively
used to update the linked list.
Drop linked list update from __insert_vm_struct().
Rework the tree validation, as it depended on the linked list.
[yang.lee@linux.alibaba.com: fix one kernel-doc comment]
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=1949
Link: https://lkml.kernel.org/r/20220824021918.94116-1-yang.lee@linux.alibaba.com
Link: https://lkml.kernel.org/r/20220906194824.2110408-69-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:26 -07:00
Liam R. Howlett
78ba531ff3
mm/vmscan: use vma iterator instead of vm_next
...
Use the vma iterator in get_next_vma() instead of the linked list.
[yuzhao@google.com: mm/vmscan: use the proper VMA iterator]
Link: https://lkml.kernel.org/r/Yx+QGOgHg1Wk8tGK@google.com
Link: https://lkml.kernel.org/r/20220906194824.2110408-68-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com >
Signed-off-by: Yu Zhao <yuzhao@google.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:26 -07:00
Liam R. Howlett
9b580a1d60
riscv: use vma iterator for vdso
...
Remove the linked list use in favour of the vma iterator.
Link: https://lkml.kernel.org/r/20220906194824.2110408-67-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:26 -07:00
Matthew Wilcox (Oracle)
8220543df1
nommu: remove uses of VMA linked list
...
Use the maple tree or VMA iterator instead. This is faster and will allow
us to shrink the VMA.
Link: https://lkml.kernel.org/r/20220906194824.2110408-66-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:26 -07:00
Matthew Wilcox (Oracle)
f683b9d613
i915: use the VMA iterator
...
Replace the linked list in probe_range() with the VMA iterator.
Link: https://lkml.kernel.org/r/20220906194824.2110408-65-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:25 -07:00
Liam R. Howlett
208c09db6d
mm/swapfile: use vma iterator instead of vma linked list
...
unuse_mm() no longer needs to reference the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-64-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:25 -07:00
Matthew Wilcox (Oracle)
9ec08f30f8
mm/pagewalk: use vma_find() instead of vma linked list
...
walk_page_range() no longer uses its one reference to the vma linked
list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-63-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:25 -07:00
Liam R. Howlett
e1c2c775d4
mm/oom_kill: use vma iterators instead of vma linked list
...
Use vma iterator in preparation of removing the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-62-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:25 -07:00
Liam R. Howlett
4267d1fd78
mm/msync: use vma_find() instead of vma linked list
...
Remove a single use of the vma linked list in preparation for the
removal of the linked list, using find_vma() to get the next element.
Link: https://lkml.kernel.org/r/20220906194824.2110408-61-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:25 -07:00
Liam R. Howlett
396a44cc58
mm/mremap: use vma_find_intersection() instead of vma linked list
...
Using the vma_find_intersection() call allows for cleaner code and
removes linked list users in preparation for the linked list removal.
Also remove one user of the linked list at the same time in favour of
find_vma().
Link: https://lkml.kernel.org/r/20220906194824.2110408-60-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:24 -07:00
Liam R. Howlett
70821e0b89
mm/mprotect: use maple tree navigation instead of VMA linked list
...
Switch to navigating the VMA list with the maple tree operators in
preparation for removing the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-59-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:24 -07:00
Matthew Wilcox (Oracle)
33108b05f3
mm/mlock: use vma iterator and maple state instead of vma linked list
...
Handle overflow checking in count_mm_mlocked_page_nr() differently.
Link: https://lkml.kernel.org/r/20220906194824.2110408-58-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:24 -07:00
Liam R. Howlett
66850be55e
mm/mempolicy: use vma iterator & maple state instead of vma linked list
...
Reworked the way mbind_range() finds the first VMA to reuse the maple
state and limit the number of tree walks needed.
Note, this drops the VM_BUG_ON(!vma) call, which would catch a start
address higher than the last VMA. The code was written in a way that
allowed no VMA updates to occur and still return success. There should be
no functional change to this scenario with the new code.
Link: https://lkml.kernel.org/r/20220906194824.2110408-57-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:24 -07:00
Liam R. Howlett
ba0aff8ea6
mm/memcontrol: stop using mm->highest_vm_end
...
Pass through ULONG_MAX instead.
Link: https://lkml.kernel.org/r/20220906194824.2110408-56-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:23 -07:00
Liam R. Howlett
3547481831
mm/madvise: use vma_find() instead of vma linked list
...
madvise_walk_vmas() no longer uses the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-55-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:23 -07:00
Matthew Wilcox (Oracle)
a5f18ba072
mm/ksm: use vma iterators instead of vma linked list
...
Remove the use of the linked list in preparation for its eventual
removal.
Link: https://lkml.kernel.org/r/20220906194824.2110408-54-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:23 -07:00
Matthew Wilcox (Oracle)
685405020b
mm/khugepaged: stop using vma linked list
...
Use vma iterator & find_vma() instead of vma linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-53-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:23 -07:00
Liam R. Howlett
c4d1a92d0d
mm/gup: use maple tree navigation instead of linked list
...
Use find_vma_intersection() to locate the VMAs in __mm_populate() instead
of using find_vma() and the linked list.
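A simplified sketch of the lookup pattern (the real loop also handles
gaps and mlock errors):

    for (nstart = start; nstart < end; nstart = nend) {
            vma = find_vma_intersection(mm, nstart, end);
            if (!vma)
                    break;
            nend = min(end, vma->vm_end);
            /* ... fault in and mlock [nstart, nend) of this vma ... */
    }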
Link: https://lkml.kernel.org/r/20220906194824.2110408-52-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:23 -07:00
Liam R. Howlett
becc8cdb6c
bpf: remove VMA linked list
...
Use vma_next() and remove the reference to the start of the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-51-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:22 -07:00
Matthew Wilcox (Oracle)
fa5e587679
fork: use VMA iterator
...
The VMA iterator is faster than the linked list and removing the linked
list will shrink the vm_area_struct.
Link: https://lkml.kernel.org/r/20220906194824.2110408-50-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:22 -07:00
Matthew Wilcox (Oracle)
0cd4d02c32
sched: use maple tree iterator to walk VMAs
...
The linked list is slower than walking the VMAs using the maple tree. We
can't use the VMA iterator here because it doesn't support moving to an
earlier position.
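A sketch of the maple state walk (mm->mm_mt is the maple tree holding the
VMAs; mas_set() is what permits repositioning to an earlier index, which
the VMA iterator does not offer; start_addr is a placeholder):

    MA_STATE(mas, &mm->mm_mt, 0, 0);
    struct vm_area_struct *vma;

    mas_for_each(&mas, vma, ULONG_MAX) {
            /* ... scan this vma ... */
    }
    /* rewind and continue from an earlier address if needed: */
    mas_set(&mas, start_addr);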
Link: https://lkml.kernel.org/r/20220906194824.2110408-49-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:22 -07:00
Matthew Wilcox (Oracle)
fcb72a585a
perf: use VMA iterator
...
The VMA iterator is faster than the linked list and removing the linked
list will shrink the vm_area_struct.
Link: https://lkml.kernel.org/r/20220906194824.2110408-48-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:22 -07:00
Matthew Wilcox (Oracle)
160c820023
acct: use VMA iterator instead of linked list
...
The VMA iterator is faster than the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-47-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:22 -07:00
Liam R. Howlett
01293a62ba
ipc/shm: use VMA iterator instead of linked list
...
The VMA iterator is faster than the linked list, and it can be walked
even when VMAs are being removed from the address space, so there's no
need to keep track of 'next'.
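A sketch of the resulting walk (VMA_ITERATOR() and for_each_vma() are the
interfaces this series introduces):

    struct vm_area_struct *vma;
    VMA_ITERATOR(vmi, mm, 0);

    for_each_vma(vmi, vma) {
            /* the body may unmap vma; no stale 'next' pointer is cached */
    }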
Link: https://lkml.kernel.org/r/20220906194824.2110408-46-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:21 -07:00
Liam R. Howlett
69dbe6daf1
userfaultfd: use maple tree iterator to iterate VMAs
...
Don't use the mm_struct linked list or vma->vm_next, in preparation for
their removal.
Link: https://lkml.kernel.org/r/20220906194824.2110408-45-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:21 -07:00