mm/memory: document restore_exclusive_pte()

Let's document how this function is to be used, and why the folio lock is
involved.

Link: https://lkml.kernel.org/r/20250226132257.2826043-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Author: David Hildenbrand <david@redhat.com>
Date:   2025-02-26 14:22:56 +01:00
Committer: Andrew Morton
commit 2f95381f8a (parent 248624f9c6)

@@ -716,6 +716,32 @@ struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
}
#endif
/**
* restore_exclusive_pte - Restore a device-exclusive entry
* @vma: VMA covering @address
* @folio: the mapped folio
* @page: the mapped folio page
* @address: the virtual address
* @ptep: pte pointer into the locked page table mapping the folio page
* @orig_pte: pte value at @ptep
*
* Restore a device-exclusive non-swap entry to an ordinary present pte.
*
* The folio and the page table must be locked, and MMU notifiers must have
* been called to invalidate any (exclusive) device mappings.
*
* Locking the folio makes sure that anybody who just converted the pte to
* a device-exclusive entry can map it into the device to make forward
* progress without others converting it back until the folio is unlocked.
*
* If the folio lock ever becomes an issue, we can stop relying on the folio
* lock; it might make some scenarios with heavy thrashing less likely to
* make forward progress, but these scenarios might not be valid use cases.
*
* Note that the folio lock does not protect against all cases of concurrent
* page table modifications (e.g., MADV_DONTNEED, mprotect), so device drivers
* must use MMU notifiers to sync against any concurrent changes.
*/
static void restore_exclusive_pte(struct vm_area_struct *vma,
struct folio *folio, struct page *page, unsigned long address,
pte_t *ptep, pte_t orig_pte)