635d0cbe75
With async connect, we need to avoid the case where the initiator is still sending the icreq while the application submits enough I/O to exhaust the request objects, leaving none for the FABRICS/CONNECT command that must be sent once the icreq completes. So allocate an extra request up front, and use it when sending the FABRICS/CONNECT command, rather than trying to pull one from the qpair's STAILQ.

Fixes issue #2371.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: If42a3fbb3fd9d863ee48cf5cae75a9ba1754c349
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11515
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
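A minimal sketch of the reservation pattern the commit describes, assuming a simplified qpair layout. The names here (tcp_qpair, qpair_reserve_connect_req, qpair_get_connect_req, and the nvme_request fields) are illustrative stand-ins, not the actual SPDK nvme_tcp internals: the idea is simply to pull one request off the qpair's free STAILQ during connect setup, hold it aside while the icreq is in flight, and hand it directly to the FABRICS/CONNECT path so application I/O can never starve it.

```c
#include <stddef.h>
#include <sys/queue.h>

struct nvme_request {
	STAILQ_ENTRY(nvme_request) stailq;
	/* command payload omitted for brevity */
};

struct tcp_qpair {
	/* Free list that application I/O draws from; it can run empty. */
	STAILQ_HEAD(, nvme_request) free_reqs;
	/* Set aside at connect time so FABRICS/CONNECT cannot be starved. */
	struct nvme_request *reserved_req;
};

/* Called while setting up the qpair, before the icreq is sent. */
static int
qpair_reserve_connect_req(struct tcp_qpair *qp)
{
	qp->reserved_req = STAILQ_FIRST(&qp->free_reqs);
	if (qp->reserved_req == NULL) {
		return -1; /* no requests available at all; cannot connect */
	}
	STAILQ_REMOVE_HEAD(&qp->free_reqs, stailq);
	return 0;
}

/*
 * Called from the icreq completion path: use the reserved request
 * directly instead of pulling from the (possibly empty) free list.
 */
static struct nvme_request *
qpair_get_connect_req(struct tcp_qpair *qp)
{
	struct nvme_request *req = qp->reserved_req;

	qp->reserved_req = NULL;
	return req;
}
```

Once the FABRICS/CONNECT command completes, the reserved request can be returned to the free list like any other, so the reservation only matters during the connect window.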
Makefile
nvme.c
nvme_ctrlr.c
nvme_ctrlr_cmd.c
nvme_ctrlr_ocssd_cmd.c
nvme_cuse.c
nvme_cuse.h
nvme_discovery.c
nvme_fabric.c
nvme_internal.h
nvme_io_msg.c
nvme_io_msg.h
nvme_ns.c
nvme_ns_cmd.c
nvme_ns_ocssd_cmd.c
nvme_opal.c
nvme_opal_internal.h
nvme_pcie.c
nvme_pcie_common.c
nvme_pcie_internal.h
nvme_poll_group.c
nvme_qpair.c
nvme_quirks.c
nvme_rdma.c
nvme_tcp.c
nvme_transport.c
nvme_vfio_user.c
nvme_zns.c
spdk_nvme.map