Matthew Wilcox (Oracle)
a160e5377b
mm: convert do_swap_page() to use folio_free_swap()
...
Also convert should_try_to_free_swap() to use a folio. This removes a few
calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-47-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:53 -07:00
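The recurring win in the entries below is the same: a page-based helper must call compound_head() on every access, while a folio-based helper resolves the head page once at the boundary. A minimal userspace sketch of that pattern, with toy types and a call counter that are purely illustrative (not kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: a tail page points at its head page; page helpers call
 * compound_head() on every use, folio helpers resolve it once. */
struct page {
    struct page *head;   /* head page of the compound page (self if head) */
    int flags;
};
struct folio { struct page page; };

static int compound_head_calls;  /* counts the hidden conversions */

static struct page *compound_head(struct page *page)
{
    compound_head_calls++;
    return page->head;
}

static struct folio *page_folio(struct page *page)
{
    /* one conversion at the boundary; folio code never converts again */
    return (struct folio *)compound_head(page);
}

/* page API: every helper re-resolves the head page */
static void page_set_flag(struct page *page)  { compound_head(page)->flags = 1; }
static int  page_test_flag(struct page *page) { return compound_head(page)->flags; }

/* folio API: operates on the folio directly */
static void folio_set_flag(struct folio *folio)  { folio->page.flags = 1; }
static int  folio_test_flag(struct folio *folio) { return folio->page.flags; }
```

Two page-API calls cost two conversions; converting to a folio up front caps the cost at one no matter how many helpers follow, which is what "saves N calls to compound_head()" refers to throughout the series.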
Matthew Wilcox (Oracle)
b4e6f66e45
ksm: use a folio in replace_page()
...
Replace three calls to compound_head() with one.
Link: https://lkml.kernel.org/r/20220902194653.1739778-46-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:53 -07:00
Matthew Wilcox (Oracle)
5fcd079af9
uprobes: use folios more widely in __replace_page()
...
Remove a few hidden calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-45-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
Matthew Wilcox (Oracle)
98b211d641
madvise: convert madvise_free_pte_range() to use a folio
...
Saves a lot of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-44-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
Matthew Wilcox (Oracle)
2fad3d14b9
huge_memory: convert do_huge_pmd_wp_page() to use a folio
...
Removes many calls to compound_head(). Does not remove the assumption
that a folio may not be larger than a PMD.
Link: https://lkml.kernel.org/r/20220902194653.1739778-43-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
Matthew Wilcox (Oracle)
e4a2ed9490
mm: convert do_wp_page() to use a folio
...
Saves many calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-42-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
Matthew Wilcox (Oracle)
71fa1a533d
swap: convert swap_writepage() to use a folio
...
Removes many calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-41-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:52 -07:00
Matthew Wilcox (Oracle)
aedd74d439
swap_state: convert free_swap_cache() to use a folio
...
Saves several calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-40-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
Matthew Wilcox (Oracle)
cb691e2f28
mm: remove lookup_swap_cache()
...
All callers have now been converted to swap_cache_get_folio(), so we can
remove this wrapper.
Link: https://lkml.kernel.org/r/20220902194653.1739778-39-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
Matthew Wilcox (Oracle)
5a423081b2
mm: convert do_swap_page() to use swap_cache_get_folio()
...
Saves a folio->page->folio conversion.
Link: https://lkml.kernel.org/r/20220902194653.1739778-38-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
Matthew Wilcox (Oracle)
f102cd8b17
swapfile: convert unuse_pte_range() to use a folio
...
Delay fetching the precise page from the folio until we're in unuse_pte().
Saves many calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-37-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
Matthew Wilcox (Oracle)
2c3f6194b0
swapfile: convert __try_to_reclaim_swap() to use a folio
...
Saves five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-36-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:51 -07:00
Matthew Wilcox (Oracle)
000085b9af
swapfile: convert try_to_unuse() to use a folio
...
Saves five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-35-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
Matthew Wilcox (Oracle)
923e2f0e7c
shmem: remove shmem_getpage()
...
With all callers removed, remove this wrapper function. The flags are now
mysteriously called SGP, but I think we can live with that.
Link: https://lkml.kernel.org/r/20220902194653.1739778-34-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
Matthew Wilcox (Oracle)
12acf4fbc4
userfaultfd: convert mcontinue_atomic_pte() to use a folio
...
shmem_getpage() is being replaced by shmem_get_folio() so use a folio
throughout this function. Saves several calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-33-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
Matthew Wilcox (Oracle)
7459c149ae
khugepaged: call shmem_get_folio()
...
shmem_getpage() is being removed, so call its replacement and find the
precise page ourselves.
Link: https://lkml.kernel.org/r/20220902194653.1739778-32-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
Matthew Wilcox (Oracle)
e4b57722d0
shmem: convert shmem_get_link() to use a folio
...
Symlinks will never use a large folio, but using the folio API removes a
lot of unnecessary folio->page->folio conversions.
Link: https://lkml.kernel.org/r/20220902194653.1739778-31-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:50 -07:00
Matthew Wilcox (Oracle)
7ad0414bde
shmem: convert shmem_symlink() to use a folio
...
While symlinks will always be < PAGE_SIZE, using the folio APIs gets rid
of unnecessary calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-30-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
Matthew Wilcox (Oracle)
b0802b22a9
shmem: convert shmem_fallocate() to use a folio
...
Call shmem_get_folio() and use the folio APIs instead of the page APIs.
Saves several calls to compound_head() and removes assumptions about the
size of a large folio.
Link: https://lkml.kernel.org/r/20220902194653.1739778-29-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
Matthew Wilcox (Oracle)
4601e2fc8b
shmem: convert shmem_file_read_iter() to use shmem_get_folio()
...
Use a folio throughout, saving five calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-28-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
Matthew Wilcox (Oracle)
eff1f906c2
shmem: convert shmem_write_begin() to use shmem_get_folio()
...
Use a folio throughout this function, saving a couple of calls to
compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-27-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
Matthew Wilcox (Oracle)
a7f5862cc0
shmem: convert shmem_get_partial_folio() to use shmem_get_folio()
...
Get rid of an unnecessary folio->page->folio conversion.
Link: https://lkml.kernel.org/r/20220902194653.1739778-26-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
Matthew Wilcox (Oracle)
4e1fc793ad
shmem: add shmem_get_folio()
...
With no remaining callers of shmem_getpage_gfp(), add shmem_get_folio()
and reimplement shmem_getpage() as a call to shmem_get_folio().
Link: https://lkml.kernel.org/r/20220902194653.1739778-25-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:49 -07:00
Matthew Wilcox (Oracle)
a3a9c39704
shmem: convert shmem_read_mapping_page_gfp() to use shmem_get_folio_gfp()
...
Saves a couple of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-24-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
Matthew Wilcox (Oracle)
68a541001a
shmem: convert shmem_fault() to use shmem_get_folio_gfp()
...
No particular advantage for this function, but necessary to remove
shmem_getpage_gfp().
[hughd@google.com: fix crash]
Link: https://lkml.kernel.org/r/7693a84-bdc2-27b5-2695-d0fe8566571f@google.com
Link: https://lkml.kernel.org/r/20220902194653.1739778-23-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
Matthew Wilcox (Oracle)
fc26babbc7
shmem: convert shmem_getpage_gfp() to shmem_get_folio_gfp()
...
Add a shmem_getpage_gfp() wrapper for compatibility with current users.
Link: https://lkml.kernel.org/r/20220902194653.1739778-22-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
Matthew Wilcox (Oracle)
5739a81cf8
shmem: eliminate struct page from shmem_swapin_folio()
...
Convert shmem_swapin() to return a folio and use swap_cache_get_folio(),
removing all uses of struct page in this function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-21-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
Matthew Wilcox (Oracle)
c9edc24281
swap: add swap_cache_get_folio()
...
Convert lookup_swap_cache() into swap_cache_get_folio() and add a
lookup_swap_cache() wrapper around it.
[akpm@linux-foundation.org: add CONFIG_SWAP=n stub for swap_cache_get_folio()]
Link: https://lkml.kernel.org/r/20220902194653.1739778-20-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:48 -07:00
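This entry and the later "mm: remove lookup_swap_cache()" one bracket the series' standard migration move: reimplement the core to traffic in folios, keep the old page-returning name as a thin wrapper, convert callers one by one, then delete the wrapper. A userspace sketch of that shape, using a toy one-entry cache (the types and lookup logic are stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct page { int id; };
struct folio { struct page page; };

static struct folio cache_slot = { { 42 } };  /* toy one-entry swap cache */

/* new core API: callers that want a folio use this directly */
static struct folio *swap_cache_get_folio(unsigned long offset)
{
    return offset == 0 ? &cache_slot : NULL;
}

/* compatibility wrapper; deleted once the last caller is converted */
static struct page *lookup_swap_cache(unsigned long offset)
{
    struct folio *folio = swap_cache_get_folio(offset);
    return folio ? &folio->page : NULL;
}
```

The wrapper keeps every unconverted caller compiling during the transition, so each patch in the series stays independently bisectable.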
Matthew Wilcox (Oracle)
0d698e2572
shmem: convert shmem_replace_page() to shmem_replace_folio()
...
The caller has a folio, so convert the calling convention and rename the
function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-19-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
Matthew Wilcox (Oracle)
7a7256d5f5
shmem: convert shmem_mfill_atomic_pte() to use a folio
...
Assert that this is a single-page folio as there are several assumptions
in here that it's exactly PAGE_SIZE bytes large. Saves several calls to
compound_head() and removes the last caller of shmem_alloc_page().
Link: https://lkml.kernel.org/r/20220902194653.1739778-18-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
Matthew Wilcox (Oracle)
6599591816
memcg: convert mem_cgroup_swapin_charge_page() to mem_cgroup_swapin_charge_folio()
...
All callers now have a folio, so pass it in here and remove an unnecessary
call to page_folio().
Link: https://lkml.kernel.org/r/20220902194653.1739778-17-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
Matthew Wilcox (Oracle)
d4f9565ae5
mm: convert do_swap_page()'s swapcache variable to a folio
...
The 'swapcache' variable is used to track whether the page is from the
swapcache or not. It can do this equally well by being the folio of the
page rather than the page itself, and this saves a number of calls to
compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-16-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
Matthew Wilcox (Oracle)
63ad4add38
mm: convert do_swap_page() to use a folio
...
Removes quite a lot of calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-15-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:47 -07:00
Matthew Wilcox (Oracle)
4081f7446d
mm/swap: convert put_swap_page() to put_swap_folio()
...
With all callers now using a folio, we can convert this function.
Link: https://lkml.kernel.org/r/20220902194653.1739778-14-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00
Matthew Wilcox (Oracle)
a4c366f01f
mm/swap: convert add_to_swap_cache() to take a folio
...
With all callers using folios, we can convert add_to_swap_cache() to take
a folio and use it throughout.
Link: https://lkml.kernel.org/r/20220902194653.1739778-13-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00
Matthew Wilcox (Oracle)
a0d3374b07
mm/swap: convert __read_swap_cache_async() to use a folio
...
Remove a few hidden (and one visible) calls to compound_head().
Link: https://lkml.kernel.org/r/20220902194653.1739778-12-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00
Matthew Wilcox (Oracle)
bdb0ed54a4
mm/swapfile: convert try_to_free_swap() to folio_free_swap()
...
Add kernel-doc for folio_free_swap() and make it return bool. Add a
try_to_free_swap() compatibility wrapper.
Link: https://lkml.kernel.org/r/20220902194653.1739778-11-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00
Matthew Wilcox (Oracle)
14d01ee9fc
mm/swapfile: remove page_swapcount()
...
By restructuring folio_swapped(), it can use swap_swapcount() instead of
page_swapcount(). It's even a little more efficient.
Link: https://lkml.kernel.org/r/20220902194653.1739778-10-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:46 -07:00
Matthew Wilcox (Oracle)
907ea17eb2
shmem: convert shmem_replace_page() to use folios throughout
...
Introduce folio_set_swap_entry() to abstract how both folio->private and
swp_entry_t work. Use swap_address_space() directly instead of
indirecting through folio_mapping(). Include an assertion that the old
folio is not large as we only allocate a single-page folio to replace it.
Use folio_put_refs() instead of calling folio_put() twice.
Link: https://lkml.kernel.org/r/20220902194653.1739778-9-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:45 -07:00
Matthew Wilcox (Oracle)
4cd400fd1f
shmem: convert shmem_delete_from_page_cache() to take a folio
...
Remove the assertion that the page is not Compound as this function now
handles large folios correctly.
Link: https://lkml.kernel.org/r/20220902194653.1739778-8-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:45 -07:00
Matthew Wilcox (Oracle)
f530ed0e2d
shmem: convert shmem_writepage() to use a folio throughout
...
Even though we will split any large folio that comes in, write the code to
handle large folios so as to not leave a trap for whoever tries to handle
large folios in the swap cache.
Link: https://lkml.kernel.org/r/20220902194653.1739778-7-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:45 -07:00
Matthew Wilcox (Oracle)
681ecf6301
mm: add folio_add_lru_vma()
...
Convert lru_cache_add_inactive_or_unevictable() to folio_add_lru_vma()
and add a compatibility wrapper.
Link: https://lkml.kernel.org/r/20220902194653.1739778-6-willy@infradead.org
Signed-off-by: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:45 -07:00
Matthew Wilcox (Oracle)
d788f5b374
mm: add split_folio()
...
This wrapper removes a need to use split_huge_page(&folio->page). Convert
two callers.
Link: https://lkml.kernel.org/r/20220902194653.1739778-5-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:45 -07:00
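The wrapper this entry describes hides the `split_huge_page(&folio->page)` idiom behind a folio-native name. A toy userspace sketch of that shape (the "split" below just clears a flag; the real function does far more, and all types here are illustrative):

```c
#include <assert.h>

struct page { int dummy; };
struct folio { struct page page; int large; };

static int split_huge_page(struct page *page)
{
    /* recover the folio; page is the first member, so the cast is safe here */
    struct folio *folio = (struct folio *)page;
    folio->large = 0;
    return 0;                /* 0 on success, matching the kernel convention */
}

/* the new wrapper: callers with a folio no longer reach inside it */
static inline int split_folio(struct folio *folio)
{
    return split_huge_page(&folio->page);
}
```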
Matthew Wilcox (Oracle)
c3a15bff46
mm: reimplement folio_order() and folio_nr_pages()
...
Instead of calling compound_order() and compound_nr_pages(), use the folio
directly. Saves 1905 bytes from mm/filemap.o due to folio_test_large()
now being a cheaper check than PageHead().
Link: https://lkml.kernel.org/r/20220902194653.1739778-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:44 -07:00
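The reimplementation this entry describes reads the order out of the folio itself: a non-large folio has order 0, otherwise the stored order is used, and the page count is derived from it. A rough userspace model, with field names that are illustrative rather than the real `struct folio` layout:

```c
#include <assert.h>

struct folio {
    int large;             /* stands in for folio_test_large() */
    unsigned char order;   /* meaningful only when large is set */
};

static unsigned int folio_order(const struct folio *folio)
{
    return folio->large ? folio->order : 0;
}

static unsigned long folio_nr_pages(const struct folio *folio)
{
    return 1UL << folio_order(folio);
}
```

Checking a single "large" flag is cheaper than the PageHead() test the old compound helpers performed, which is where the quoted text-size saving in mm/filemap.o comes from.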
Matthew Wilcox (Oracle)
379708ffde
mm: add the first tail page to struct folio
...
Some of the static checkers get confused by extracting the page from the
folio and referring to fields in the first tail page. Adding these fields
to struct folio lets us avoid doing that. It has the risk that people
will refer to those fields without checking that the folio is actually a
large folio, so prefix them with underscores and document the preferred
function to use instead.
Link: https://lkml.kernel.org/r/20220902194653.1739778-3-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:44 -07:00
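The layout trick this entry describes can be sketched in plain C11: `struct folio` overlays the head page and, for large folios, the first tail page, so fields that physically live in the tail page get named, underscore-prefixed members instead of `page[1]` arithmetic. The sizes and field names below are illustrative only, and the static asserts play the role of the layout checks that keep the overlay honest:

```c
#include <assert.h>
#include <stddef.h>

struct page {
    unsigned long flags;
    unsigned long extra[3];
};

struct folio {
    union {
        struct {
            unsigned long flags;     /* mirrors the head page's flags */
            unsigned long _pad[3];
        };
        struct page page;            /* head page */
    };
    union {
        struct {
            unsigned long _flags_1;      /* valid only for large folios */
            unsigned char _folio_order;  /* hence the underscore prefix */
        };
        struct page __page_1;        /* first tail page */
    };
};

/* The overlay only works if the named fields line up with the pages. */
_Static_assert(offsetof(struct folio, __page_1) == sizeof(struct page),
               "first tail page must sit right after the head page");
_Static_assert(offsetof(struct folio, _flags_1) ==
               sizeof(struct page) + offsetof(struct page, flags),
               "_flags_1 must alias the tail page's flags");
```

With the aliases in place, static checkers see ordinary struct members rather than out-of-bounds-looking accesses into a page array.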
Matthew Wilcox (Oracle)
49fd9b6df5
mm/vmscan: fix a lot of comments
...
Patch series "MM folio changes for 6.1", v2.
My focus this round has been on shmem. I believe it is now fully
converted to folios. Of course, shmem interacts with a lot of the swap
cache and other parts of the kernel, so there are patches all over the MM.
This patch series survives a round of xfstests on tmpfs, which is nice,
but hardly an exhaustive test. Hugh was nice enough to run a round of
tests on it and found a bug which is fixed in this edition.
This patch (of 57):
A lot of comments mention pages when they should say folios.
Fix them up.
[akpm@linux-foundation.org: fixups for mglru additions]
Link: https://lkml.kernel.org/r/20220902194653.1739778-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20220902194653.1739778-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:44 -07:00
Qi Zheng
58730ab6c7
ksm: convert to use common struct mm_slot
...
Convert to use common struct mm_slot, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-8-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:44 -07:00
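The shape this conversion builds toward is the usual kernel one: each subsystem's private slot embeds the shared struct as a member, and code holding a pointer to the common part recovers the private type with a container_of()-style cast. A self-contained userspace sketch (the field and helper names below are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* minimal container_of, as the kernel macro works modulo type checking */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct mm_struct { int id; };

struct mm_slot {                  /* common part, shared across users */
    struct mm_struct *mm;
};

struct ksm_mm_slot {              /* ksm's private slot wraps the common one */
    struct mm_slot slot;
    int ksm_private_state;
};

static struct ksm_mm_slot *to_ksm(struct mm_slot *slot)
{
    return container_of(slot, struct ksm_mm_slot, slot);
}
```

Generic slot-management code can then operate on `struct mm_slot` alone, while ksm-specific code casts back only at the points where it needs its private fields; the preceding rename patches exist purely so the two structure names cannot collide.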
Qi Zheng
79b0994156
ksm: convert ksm_mm_slot.link to ksm_mm_slot.hash
...
In order to use common struct mm_slot, convert ksm_mm_slot.link to
ksm_mm_slot.hash in advance, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-7-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:44 -07:00
Qi Zheng
23f746e412
ksm: convert ksm_mm_slot.mm_list to ksm_mm_slot.mm_node
...
In order to use common struct mm_slot, convert ksm_mm_slot.mm_list to
ksm_mm_slot.mm_node in advance, no functional change.
Link: https://lkml.kernel.org/r/20220831031951.43152-6-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:44 -07:00
Qi Zheng
21fbd59136
ksm: add the ksm prefix to the names of the ksm private structures
...
In order to prevent the name of the private structure of ksm from being
the same as the name of the common structure used in subsequent patches,
prefix their names with ksm in advance.
Link: https://lkml.kernel.org/r/20220831031951.43152-5-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-10-03 14:02:43 -07:00