Jakub Matěna
eef199440d
mm: refactor of vma_merge()
...
Patch series "Refactor of vma_merge and new merge call", v4.
I am currently working on my master's thesis, trying to increase the
number of VMA merges that currently fail because of page offset
incompatibility or differences in their anon_vmas. The refactor and the
added merge call included in this series are two smaller improvements I
created along the way.
This patch (of 2):
Refactor vma_merge() to make it shorter and more understandable. The main
change is eliminating the duplicated code in the merge-next check. This
is done by performing the checks first and caching the results before
executing the merge itself. The variable 'area' is split into 'mid' and
'res', as it was previously used for two purposes: as the middle VMA
between prev and next, and as the result of the merge itself. Exit paths
are also unified.
Link: https://lkml.kernel.org/r/20220603145719.1012094-1-matenajakub@gmail.com
Link: https://lkml.kernel.org/r/20220603145719.1012094-2-matenajakub@gmail.com
Signed-off-by: Jakub Matěna <matenajakub@gmail.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Rik van Riel <riel@surriel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:27 -07:00
Suren Baghdasaryan
b3541d912a
mm: delete unused MMF_OOM_VICTIM flag
...
With the last usage of MMF_OOM_VICTIM in exit_mmap gone, this flag is now
unused and can be removed.
[akpm@linux-foundation.org: remove comment about now-removed mm_is_oom_victim()]
Link: https://lkml.kernel.org/r/20220531223100.510392-2-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christian Brauner (Microsoft) <brauner@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Liam Howlett <liam.howlett@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:27 -07:00
Suren Baghdasaryan
bf3980c852
mm: drop oom code from exit_mmap
...
The primary reason to invoke the oom reaper from the exit_mmap path used
to be to prevent excessive oom killing if the oom victim's exit races
with the oom reaper (see [1] for more details). The invocation has moved
around since then because of the interaction with the munlock logic, but
the underlying reason has remained the same (see [2]).
The munlock code is no longer a problem since [3], and there shouldn't be
any blocking operation before the memory is unmapped by exit_mmap, so the
oom reaper invocation can be dropped. The unmapping part can be done with
the non-exclusive mmap_sem; the exclusive one is only required when page
tables are freed.
Remove the oom_reaper from exit_mmap, which makes the code easier to
read. This is really unlikely to make any observable difference, although
some microbenchmarks could benefit from one less branch that needs to be
evaluated even though it is almost never true.
[1] 2129258024 ("mm: oom: let oom_reap_task and exit_mmap run concurrently")
[2] 27ae357fa8 ("mm, oom: fix concurrent munlock and oom reaper unmap, v3")
[3] a213e5cf71 ("mm/munlock: delete munlock_vma_pages_all(), allow oomreap")
[akpm@linux-foundation.org: restore Suren's mmap_read_lock() optimization]
Link: https://lkml.kernel.org/r/20220531223100.510392-1-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christian Brauner (Microsoft) <brauner@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:27 -07:00
Liam Howlett
66071896cd
mm/mlock: drop dead code in count_mm_mlocked_page_nr()
...
The check for mm being null has never been needed since the only caller
has always passed in current->mm. Remove the check from
count_mm_mlocked_page_nr().
Link: https://lkml.kernel.org/r/20220615174050.738523-1-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Suggested-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:27 -07:00
Liam R. Howlett
c154124fe9
mm/mmap.c: pass in mapping to __vma_link_file()
...
__vma_link_file() resolves the mapping from the file, if there is one.
Pass in the mapping and check vm_file externally, since most call sites
already have the required information and a vm_file check.
Link: https://lkml.kernel.org/r/20220906194824.2110408-71-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:27 -07:00
Liam R. Howlett
d0601a500c
mm/mmap: drop range_has_overlap() function
...
Since there is no longer a linked list, the range_has_overlap() function
is identical to the find_vma_intersection() function.
Link: https://lkml.kernel.org/r/20220906194824.2110408-70-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:26 -07:00
Liam R. Howlett
763ecb0350
mm: remove the vma linked list
...
Replace any vm_next use with vma_find().
Update free_pgtables(), unmap_vmas(), and zap_page_range() to use the
maple tree.
Use the new free_pgtables() and unmap_vmas() in do_mas_align_munmap(). At
the same time, alter the loop to be more compact.
Now that free_pgtables() and unmap_vmas() take a maple tree as an
argument, rearrange do_mas_align_munmap() to use the new tree to hold the
vmas to remove.
Remove __vma_link_list() and __vma_unlink_list(), as they were used
exclusively to update the linked list.
Drop the linked-list update from __insert_vm_struct().
Rework the tree validation, as it depended on the linked list.
[yang.lee@linux.alibaba.com: fix one kernel-doc comment]
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=1949
Link: https://lkml.kernel.org/r/20220824021918.94116-1-yang.lee@linux.alibaba.com
Link: https://lkml.kernel.org/r/20220906194824.2110408-69-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:26 -07:00
Liam R. Howlett
78ba531ff3
mm/vmscan: use vma iterator instead of vm_next
...
Use the vma iterator in get_next_vma() instead of the linked list.
[yuzhao@google.com: mm/vmscan: use the proper VMA iterator]
Link: https://lkml.kernel.org/r/Yx+QGOgHg1Wk8tGK@google.com
Link: https://lkml.kernel.org/r/20220906194824.2110408-68-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Yu Zhao <yuzhao@google.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:26 -07:00
Liam R. Howlett
9b580a1d60
riscv: use vma iterator for vdso
...
Remove the linked list use in favour of the vma iterator.
Link: https://lkml.kernel.org/r/20220906194824.2110408-67-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:26 -07:00
Matthew Wilcox (Oracle)
8220543df1
nommu: remove uses of VMA linked list
...
Use the maple tree or VMA iterator instead. This is faster and will allow
us to shrink the VMA.
Link: https://lkml.kernel.org/r/20220906194824.2110408-66-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:26 -07:00
Matthew Wilcox (Oracle)
f683b9d613
i915: use the VMA iterator
...
Replace the linked list in probe_range() with the VMA iterator.
Link: https://lkml.kernel.org/r/20220906194824.2110408-65-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:25 -07:00
Liam R. Howlett
208c09db6d
mm/swapfile: use vma iterator instead of vma linked list
...
unuse_mm() no longer needs to reference the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-64-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:25 -07:00
Matthew Wilcox (Oracle)
9ec08f30f8
mm/pagewalk: use vma_find() instead of vma linked list
...
walk_page_range() no longer uses the one vma linked list reference.
Link: https://lkml.kernel.org/r/20220906194824.2110408-63-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:25 -07:00
Liam R. Howlett
e1c2c775d4
mm/oom_kill: use vma iterators instead of vma linked list
...
Use vma iterator in preparation of removing the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-62-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:25 -07:00
Liam R. Howlett
4267d1fd78
mm/msync: use vma_find() instead of vma linked list
...
Remove a single use of the vma linked list in preparation for its
removal. Use find_vma() to get the next element.
Link: https://lkml.kernel.org/r/20220906194824.2110408-61-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:25 -07:00
Liam R. Howlett
396a44cc58
mm/mremap: use vma_find_intersection() instead of vma linked list
...
Using the vma_find_intersection() call allows for cleaner code and
removes linked list users in preparation of the linked list removal.
Also remove one user of the linked list at the same time in favour of
find_vma().
Link: https://lkml.kernel.org/r/20220906194824.2110408-60-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:24 -07:00
Liam R. Howlett
70821e0b89
mm/mprotect: use maple tree navigation instead of VMA linked list
...
Switch to navigating the VMA list with the maple tree operators in
preparation for removing the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-59-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:24 -07:00
Matthew Wilcox (Oracle)
33108b05f3
mm/mlock: use vma iterator and maple state instead of vma linked list
...
Handle overflow checking in count_mm_mlocked_page_nr() differently.
Link: https://lkml.kernel.org/r/20220906194824.2110408-58-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:24 -07:00
Liam R. Howlett
66850be55e
mm/mempolicy: use vma iterator & maple state instead of vma linked list
...
Rework the way mbind_range() finds the first VMA to reuse the maple
state and limit the number of tree walks needed.
Note, this drops the VM_BUG_ON(!vma) call, which would catch a start
address higher than the last VMA. The code was written in a way that
allowed no VMA updates to occur and still return success. There should be
no functional change to this scenario with the new code.
Link: https://lkml.kernel.org/r/20220906194824.2110408-57-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:24 -07:00
Liam R. Howlett
ba0aff8ea6
mm/memcontrol: stop using mm->highest_vm_end
...
Pass through ULONG_MAX instead.
Link: https://lkml.kernel.org/r/20220906194824.2110408-56-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:23 -07:00
Liam R. Howlett
3547481831
mm/madvise: use vma_find() instead of vma linked list
...
madvise_walk_vmas() no longer uses linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-55-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:23 -07:00
Matthew Wilcox (Oracle)
a5f18ba072
mm/ksm: use vma iterators instead of vma linked list
...
Remove the use of the linked list in preparation for its eventual
removal.
Link: https://lkml.kernel.org/r/20220906194824.2110408-54-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:23 -07:00
Matthew Wilcox (Oracle)
685405020b
mm/khugepaged: stop using vma linked list
...
Use vma iterator & find_vma() instead of vma linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-53-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:23 -07:00
Liam R. Howlett
c4d1a92d0d
mm/gup: use maple tree navigation instead of linked list
...
Use find_vma_intersection() to locate the VMAs in __mm_populate() instead
of using find_vma() and the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-52-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:23 -07:00
Liam R. Howlett
becc8cdb6c
bpf: remove VMA linked list
...
Use vma_next() and remove the reference to the start of the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-51-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:22 -07:00
Matthew Wilcox (Oracle)
fa5e587679
fork: use VMA iterator
...
The VMA iterator is faster than the linked list and removing the linked
list will shrink the vm_area_struct.
Link: https://lkml.kernel.org/r/20220906194824.2110408-50-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:22 -07:00
Matthew Wilcox (Oracle)
0cd4d02c32
sched: use maple tree iterator to walk VMAs
...
The linked list is slower than walking the VMAs using the maple tree. We
can't use the VMA iterator here because it doesn't support moving to an
earlier position.
Link: https://lkml.kernel.org/r/20220906194824.2110408-49-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:22 -07:00
Matthew Wilcox (Oracle)
fcb72a585a
perf: use VMA iterator
...
The VMA iterator is faster than the linked list and removing the linked
list will shrink the vm_area_struct.
Link: https://lkml.kernel.org/r/20220906194824.2110408-48-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:22 -07:00
Matthew Wilcox (Oracle)
160c820023
acct: use VMA iterator instead of linked list
...
The VMA iterator is faster than the linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-47-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:22 -07:00
Liam R. Howlett
01293a62ba
ipc/shm: use VMA iterator instead of linked list
...
The VMA iterator is faster than the linked list, and it can be walked
even when VMAs are being removed from the address space, so there's no
need to keep track of 'next'.
Link: https://lkml.kernel.org/r/20220906194824.2110408-46-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:21 -07:00
Liam R. Howlett
69dbe6daf1
userfaultfd: use maple tree iterator to iterate VMAs
...
Don't use the mm_struct linked list or vma->vm_next, in preparation for
their removal.
Link: https://lkml.kernel.org/r/20220906194824.2110408-45-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:21 -07:00
Matthew Wilcox (Oracle)
c4c84f0628
fs/proc/task_mmu: stop using linked list and highest_vm_end
...
Remove references to the mm_struct linked list and highest_vm_end ahead
of their removal.
Link: https://lkml.kernel.org/r/20220906194824.2110408-44-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:21 -07:00
Liam R. Howlett
5f14b9246e
fs/proc/base: use the vma iterators in place of linked list
...
Use the vma iterator instead of a for loop across the linked list. The
linked list of VMAs will be removed in this patch set.
Link: https://lkml.kernel.org/r/20220906194824.2110408-43-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:21 -07:00
Matthew Wilcox (Oracle)
19066e5868
exec: use VMA iterator instead of linked list
...
Remove a use of the vm_next list by doing the initial lookup with the VMA
iterator and then using it to find the next entry.
Link: https://lkml.kernel.org/r/20220906194824.2110408-42-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-26 19:46:21 -07:00
Matthew Wilcox (Oracle)
182ea1d717
coredump: remove vma linked list walk
...
Use the Maple Tree iterator instead. This is too complicated for the VMA
iterator to handle, so let's open-code it for now. If this turns out to
be a common pattern, we can migrate it to common code.
Link: https://lkml.kernel.org/r/20220906194824.2110408-41-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:20 -07:00
Matthew Wilcox (Oracle)
cbd43755ad
um: remove vma linked list walk
...
Use the VMA iterator instead.
Link: https://lkml.kernel.org/r/20220906194824.2110408-40-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:20 -07:00
Matthew Wilcox (Oracle)
df724cedcf
optee: remove vma linked list walk
...
Use the VMA iterator instead. Change the calling convention of
__check_mem_type() to pass in the mm instead of the first vma in the
range.
Link: https://lkml.kernel.org/r/20220906194824.2110408-39-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:20 -07:00
Matthew Wilcox (Oracle)
d9fa0e37cd
cxl: remove vma linked list walk
...
Use the VMA iterator instead. This requires a little restructuring of the
surrounding code to hoist the mm to the caller. That turns
cxl_prefault_one() into a trivial function, so call cxl_fault_segment()
directly.
Link: https://lkml.kernel.org/r/20220906194824.2110408-38-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:20 -07:00
Matthew Wilcox (Oracle)
49c40fb4b8
xtensa: remove vma linked list walks
...
Use the VMA iterator instead. Since the VMA can no longer be NULL within
the loop, the out-of-memory case is handled outside the loop. This means
a slightly longer run time in the failure case (-ENOMEM): iteration will
run to the end of the VMAs before erroring instead of stopping in the
middle of the loop.
Link: https://lkml.kernel.org/r/20220906194824.2110408-37-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:20 -07:00
Matthew Wilcox (Oracle)
a388462116
x86: remove vma linked list walks
...
Use the VMA iterator instead.
Link: https://lkml.kernel.org/r/20220906194824.2110408-36-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:20 -07:00
Matthew Wilcox (Oracle)
e7b6b990e5
s390: remove vma linked list walks
...
Use the VMA iterator instead.
Link: https://lkml.kernel.org/r/20220906194824.2110408-35-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:19 -07:00
Matthew Wilcox (Oracle)
405e669172
powerpc: remove mmap linked list walks
...
Use the VMA iterator instead.
Link: https://lkml.kernel.org/r/20220906194824.2110408-34-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Reviewed-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:19 -07:00
Matthew Wilcox (Oracle)
70fa203165
parisc: remove mmap linked list from cache handling
...
Use the VMA iterator instead.
Link: https://lkml.kernel.org/r/20220906194824.2110408-33-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:19 -07:00
Liam R. Howlett
ef770d180e
arm64: Change elfcore for_each_mte_vma() to use VMA iterator
...
Rework for_each_mte_vma() to use a VMA iterator instead of an explicit
linked list.
Link: https://lkml.kernel.org/r/20220906194824.2110408-32-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com >
Acked-by: Catalin Marinas <catalin.marinas@arm.com >
Link: https://lore.kernel.org/r/20220218023650.672072-1-Liam.Howlett@oracle.com
Signed-off-by: Will Deacon <will@kernel.org >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:19 -07:00
Matthew Wilcox (Oracle)
de2b84d24b
arm64: remove mmap linked list from vdso
...
Use the VMA iterator instead.
Link: https://lkml.kernel.org/r/20220906194824.2110408-31-Liam.Howlett@oracle.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:19 -07:00
Liam R. Howlett
67e7c16764
mm/mmap: change do_brk_munmap() to use do_mas_align_munmap()
...
do_brk_munmap() has already aligned the address and has a maple tree state
to be used. Use the new do_mas_align_munmap() to avoid unnecessary
alignment and error checks.
Link: https://lkml.kernel.org/r/20220906194824.2110408-30-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:18 -07:00
Liam R. Howlett
11f9a21ab6
mm/mmap: reorganize munmap to use maple states
...
Remove __do_munmap() in favour of do_munmap(), do_mas_munmap(), and
do_mas_align_munmap().
do_munmap() is a wrapper to create a maple state for any callers that have
not been converted to the maple tree.
do_mas_munmap() takes a maple state to munmap a range. This is just a
small function which checks for error conditions and aligns the end of
the range.
do_mas_align_munmap() munmaps the already-aligned range. It starts with
the first VMA in the range, then finds the last VMA in the range; both
start and end are split if necessary. The VMAs are then removed from the
linked list and the mm mlock count is updated at the same time, followed
by a single tree operation that overwrites the area with NULL. Finally,
the detached list is unmapped and freed.
By reorganizing the munmap calls as outlined, it is now possible to avoid
the extra work of aligning ranges for pre-aligned callers that are known
to be safe, and to avoid extra VMA lookups or tree walks for
modifications.
detach_vmas_to_be_unmapped() is no longer used, so drop this code.
vm_brk_flags() can just call do_mas_munmap(), as it checks for
intersecting VMAs directly.
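The reorganized call chain can be sketched as follows (a rough
illustration based on the description above, not the exact kernel code;
the do_mas_munmap() parameter list is an assumption):

```c
/* do_munmap(): wrapper that creates a maple state for callers not yet
 * converted to the maple tree. */
int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
	      struct list_head *uf)
{
	MA_STATE(mas, &mm->mm_mt, start, start);

	return do_mas_munmap(&mas, mm, start, len, uf, false);
}

/*
 * do_mas_munmap(): checks error conditions and aligns the end of the
 * range, then calls do_mas_align_munmap(), which splits the first and
 * last VMAs if necessary, detaches the range, overwrites it in the tree
 * with a single NULL store, and finally unmaps and frees the detached
 * list.
 */
```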
Link: https://lkml.kernel.org/r/20220906194824.2110408-29-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:18 -07:00
Liam R. Howlett
e99668a564
mm/mmap: move mmap_region() below do_munmap()
...
Code is relocated in preparation for the next commit; there are no
functional changes.
Link: https://lkml.kernel.org/r/20220906194824.2110408-28-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Vlastimil Babka <vbabka@suse.cz >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:18 -07:00
Liam R. Howlett
d7c6229557
mm: convert vma_lookup() to use mtree_load()
...
Unlike the rbtree, the Maple Tree will return NULL if there's nothing at
a particular address.
Since the previous commit dropped the vmacache, it is now possible to
consult the tree directly.
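Based on the description, the simplified lookup is roughly the following (a sketch of the idea, not necessarily the exact patch):

```c
struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
{
	/* mtree_load() returns NULL when nothing is stored at addr, so no
	 * separate start/end range check is needed here. */
	return mtree_load(&mm->mm_mt, addr);
}
```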
Link: https://lkml.kernel.org/r/20220906194824.2110408-27-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:18 -07:00
Liam R. Howlett
7964cf8caa
mm: remove vmacache
...
By using the maple tree and the maple tree state, the vmacache is no
longer beneficial and is complicating the VMA code. Remove the vmacache
to reduce the work in keeping it up to date and code complexity.
Link: https://lkml.kernel.org/r/20220906194824.2110408-26-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com >
Acked-by: Vlastimil Babka <vbabka@suse.cz >
Tested-by: Yu Zhao <yuzhao@google.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David Hildenbrand <david@redhat.com >
Cc: David Howells <dhowells@redhat.com >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org >
Cc: SeongJae Park <sj@kernel.org >
Cc: Sven Schnelle <svens@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2022-09-26 19:46:18 -07:00