net/mlx5e: RX, Fix flush and close release flow of regular rq for legacy rq

Regular (non-XSK) RQs get flushed on XSK setup and re-activated on XSK
close. If the same regular RQ is closed (on a config change, for example)
soon after the XSK close, a double release occurs because the missing
WQEs get released a second time.

Fixes: 3f93f82988 ("net/mlx5e: RX, Defer page release in legacy rq for better recycling")
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Author: Dragos Tatulea
Date: 2023-05-22 21:18:53 +03:00
Committed by: Saeed Mahameed
commit 2e2d196579
parent d543b649ff


@@ -390,10 +390,18 @@ static void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix)
 {
 	struct mlx5e_wqe_frag_info *wi = get_frag(rq, ix);
 
-	if (rq->xsk_pool)
+	if (rq->xsk_pool) {
 		mlx5e_xsk_free_rx_wqe(wi);
-	else
+	} else {
 		mlx5e_free_rx_wqe(rq, wi);
+
+		/* Avoid a second release of the wqe pages: dealloc is called
+		 * for the same missing wqes on regular RQ flush and on regular
+		 * RQ close. This happens when XSK RQs come into play.
+		 */
+		for (int i = 0; i < rq->wqe.info.num_frags; i++, wi++)
+			wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
+	}
 }
 
 static void mlx5e_xsk_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)