xprtrdma: Post receive buffers after RPC completion

rpcrdma_post_recvs() runs in CQ poll context and its cost
falls on the latency-critical path between polling a Receive
completion and waking the RPC consumer. Every cycle spent
refilling the Receive Queue delays delivery of the reply to
the NFS layer.

Move the rpcrdma_post_recvs() call in rpcrdma_reply_handler()
to after the RPC has been decoded and completed. The larger
batch size from the preceding patch provides sufficient
Receive Queue headroom to absorb the brief delay before
buffers are replenished.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Chuck Lever, 2026-03-06 16:56:28 -05:00
committed by Trond Myklebust
parent 93b4791adb
commit 704f3f640f


@@ -1422,7 +1422,6 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)
 		credits = 1;	/* don't deadlock */
 	else if (credits > r_xprt->rx_ep->re_max_requests)
 		credits = r_xprt->rx_ep->re_max_requests;
-	rpcrdma_post_recvs(r_xprt, credits + (buf->rb_bc_srv_max_requests << 1));
 	if (buf->rb_credits != credits)
 		rpcrdma_update_cwnd(r_xprt, credits);
@@ -1441,15 +1440,20 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)
 	/* LocalInv completion will complete the RPC */
 	else
 		kref_put(&req->rl_kref, rpcrdma_reply_done);
-	return;
-
-out_badversion:
-	trace_xprtrdma_reply_vers_err(rep);
-	goto out;
+out_post:
+	rpcrdma_post_recvs(r_xprt,
+			   credits + (buf->rb_bc_srv_max_requests << 1));
+	return;
 
 out_norqst:
 	spin_unlock(&xprt->queue_lock);
 	trace_xprtrdma_reply_rqst_err(rep);
+	rpcrdma_rep_put(buf, rep);
+	goto out_post;
+
+out_badversion:
+	trace_xprtrdma_reply_vers_err(rep);
+	goto out;
 
 out_shortreply: