drm/ttm: Fix ttm_bo_shrink() infinite LRU walk on backup failure

Apply the same fix as b2ed01e7ad ("drm/ttm: Fix ttm_bo_swapout()
infinite LRU walk on swapout failure") to the ttm_bo_shrink() path.

Move the bulk-move removal from before the backup to after the backup
has succeeded, using ttm_resource_del_bulk_move_unevictable(), since
the resource becomes unevictable once fully backed up.

Fixes: 70d645deac ("drm/ttm: Add helpers for shrinking")
Cc: Christian König <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: dri-devel@lists.freedesktop.org
Cc: stable@vger.kernel.org # v6.15+
Assisted-by: GitHub_Copilot:claude-opus-4.6
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patch.msgid.link/20260511162443.24352-1-thomas.hellstrom@linux.intel.com
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Author: Thomas Hellström
Date:   2026-05-11 18:24:43 +02:00
parent 591711b326
commit 1d59f36e95


@@ -1112,19 +1112,14 @@ long ttm_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
 	if (lret < 0)
 		return lret;
 
-	if (bo->bulk_move) {
-		spin_lock(&bdev->lru_lock);
-		ttm_resource_del_bulk_move(bo->resource, bo);
-		spin_unlock(&bdev->lru_lock);
-	}
-
 	lret = ttm_tt_backup(bdev, bo->ttm, (struct ttm_backup_flags)
 			     {.purge = flags.purge,
 			      .writeback = flags.writeback});
 
-	if (lret <= 0 && bo->bulk_move) {
+	if (lret > 0) {
 		spin_lock(&bdev->lru_lock);
-		ttm_resource_add_bulk_move(bo->resource, bo);
+		ttm_resource_del_bulk_move_unevictable(bo->resource, bo);
+		ttm_resource_move_to_lru_tail(bo->resource);
 		spin_unlock(&bdev->lru_lock);
 	}
 