iomap, xfs: lift zero range hole mapping flush into xfs

iomap zero range has a wart in that it also flushes dirty pagecache
over hole mappings (rather than only unwritten mappings). This was
included to accommodate a quirk in XFS where COW fork preallocation
can exist over a hole in the data fork, and the associated range is
reported as a hole. This is because the range actually is a hole,
but XFS also has an optimization where if COW fork blocks exist for
a range being written to, those blocks are used regardless of
whether the data fork blocks are shared or not. For zeroing, COW
fork blocks over a data fork hole are only relevant if the range is
dirty in pagecache, otherwise the range is already considered
zeroed.

The easiest way to deal with this corner case is to flush the
pagecache to trigger COW remapping into the data fork, and then
operate on the updated on-disk state. The problem is that ext4
cannot accommodate a flush from this context due to being a
transaction deadlock vector.

Outside of the hole quirk, ext4 can avoid the flush for zero range
by using the recently introduced folio batch lookup mechanism for
unwritten mappings. Therefore, take the next logical step and lift
the hole handling logic into the XFS iomap_begin handler. iomap will
still flush on unwritten mappings without a folio batch, and XFS
will flush and retry mapping lookups in the case where it would
otherwise report a hole with dirty pagecache during a zero range.

Note that this is intended to be a fairly straightforward lift that
otherwise does not change behavior. Now that the flush exists within
XFS, follow-on patches can further optimize it.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Carlos Maiolino <cem@kernel.org>
Author:    Brian Foster
Date:      2026-03-11 12:24:57 -04:00
Committer: Carlos Maiolino
Parent:    2f46c239fc
Commit:    a35bb0dec9
2 changed files with 23 additions and 4 deletions

@@ -1641,7 +1641,7 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 		    srcmap->type == IOMAP_UNWRITTEN)) {
 			s64 status;
 
-			if (range_dirty) {
+			if (range_dirty && srcmap->type == IOMAP_UNWRITTEN) {
 				range_dirty = false;
 				status = iomap_zero_iter_flush_and_stale(&iter);
 			} else {
@@ -1811,6 +1811,7 @@ xfs_buffered_write_iomap_begin(
 	if (error)
 		return error;
 
+restart:
 	error = xfs_ilock_for_iomap(ip, flags, &lockmode);
 	if (error)
 		return error;
@@ -1838,9 +1839,27 @@ xfs_buffered_write_iomap_begin(
 	if (eof)
 		imap.br_startoff = end_fsb; /* fake hole until the end */
 
-	/* We never need to allocate blocks for zeroing or unsharing a hole. */
-	if ((flags & (IOMAP_UNSHARE | IOMAP_ZERO)) &&
-	    imap.br_startoff > offset_fsb) {
+	/* We never need to allocate blocks for unsharing a hole. */
+	if ((flags & IOMAP_UNSHARE) && imap.br_startoff > offset_fsb) {
 		xfs_hole_to_iomap(ip, iomap, offset_fsb, imap.br_startoff);
 		goto out_unlock;
 	}
+
+	/*
+	 * We may need to zero over a hole in the data fork if it's fronted by
+	 * COW blocks and dirty pagecache. To make sure zeroing occurs, force
+	 * writeback to remap pending blocks and restart the lookup.
+	 */
+	if ((flags & IOMAP_ZERO) && imap.br_startoff > offset_fsb) {
+		if (filemap_range_needs_writeback(inode->i_mapping, offset,
+				offset + count - 1)) {
+			xfs_iunlock(ip, lockmode);
+			error = filemap_write_and_wait_range(inode->i_mapping,
+					offset, offset + count - 1);
+			if (error)
+				return error;
+			goto restart;
+		}
+		xfs_hole_to_iomap(ip, iomap, offset_fsb, imap.br_startoff);
+		goto out_unlock;
+	}