nvme/rdma: Move post WRs on send/recv queue after poll CQ

If nvme_rdma_qpair_submit_sends() returns -ENOMEM,
nvme_rdma_qpair_process_completions() returns immediately
without ever polling the CQ.

However, nvme_rdma_qpair_process_completions() can poll the CQ
even when there is no free slot in the SQ, and reaping
completions is what frees SQ slots in the first place.

Hence move the calls to nvme_rdma_qpair_submit_sends() and
nvme_rdma_qpair_submit_recvs() after the loop that polls the CQ.

nvme_rdma_qpair_submit_sends() and nvme_rdma_qpair_submit_recvs()
already log an error on failure, so checking their return codes is
unnecessary; this patch removes the checks.

This fixes part of GitHub issue #1271.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Icf22879c69c3f84e6b1d91dc061b6f44237eedd1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1342
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Shuhei Matsumoto 2020-03-18 06:53:27 +09:00 committed by Tomasz Zawadzki
parent a4a8080fad
commit c3d0a83347


@@ -2005,11 +2005,6 @@ nvme_rdma_qpair_process_completions(struct spdk_nvme_qpair *qpair,
 	struct spdk_nvme_rdma_req *rdma_req;
 	struct nvme_rdma_ctrlr *rctrlr;
 
-	if (spdk_unlikely(nvme_rdma_qpair_submit_sends(rqpair) ||
-			  nvme_rdma_qpair_submit_recvs(rqpair))) {
-		return -1;
-	}
-
 	if (max_completions == 0) {
 		max_completions = rqpair->num_entries;
 	} else {
@@ -2082,6 +2077,9 @@ nvme_rdma_qpair_process_completions(struct spdk_nvme_qpair *qpair,
 		}
 	} while (reaped < max_completions);
 
+	nvme_rdma_qpair_submit_sends(rqpair);
+	nvme_rdma_qpair_submit_recvs(rqpair);
+
 	if (spdk_unlikely(rqpair->qpair.ctrlr->timeout_enabled)) {
 		nvme_rdma_qpair_check_timeout(qpair);
 	}