ea0aaf5e85
.ctrlr_connect_qpair

Previously this was assumed to be a synchronous process, so the generic transport layer code updated the state after .ctrlr_connect_qpair returned. In preparation for making this support an asynchronous mode, shift that responsibility down into the individual transports. While none of the transports actually do this asynchronously yet, insert a busy wait in nvme_transport_ctrlr_connect_qpair to wait for the qpair to exit the CONNECTING state. None of the upper layer code can actually handle a transport doing this asynchronously, so the busy wait will cover that.

Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I3c1a5c115264ffcb87e549765d891d796e0c81fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8909
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
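The following is a minimal sketch of the pattern the commit message describes, not the literal SPDK diff. It assumes the internal qpair state helpers nvme_qpair_set_state()/nvme_qpair_get_state() and the NVME_QPAIR_CONNECTING/NVME_QPAIR_CONNECTED states from nvme_internal.h; example_transport_ctrlr_connect_qpair() is a hypothetical transport callback used only for illustration, and the exact polling done inside the real busy wait may differ.

```c
#include "nvme_internal.h"

/* Transport side: after this change, each transport's .ctrlr_connect_qpair
 * marks the qpair as connected itself instead of relying on the generic
 * layer to do it once the callback returns. */
static int
example_transport_ctrlr_connect_qpair(struct spdk_nvme_ctrlr *ctrlr,
				       struct spdk_nvme_qpair *qpair)
{
	/* ... transport-specific connection setup would go here ... */

	/* The transport, not the generic layer, now records the final state. */
	nvme_qpair_set_state(qpair, NVME_QPAIR_CONNECTED);
	return 0;
}

/* Generic layer: none of the upper layers can cope with an asynchronous
 * connect yet, so busy-wait until the transport moves the qpair out of
 * the CONNECTING state. */
int
nvme_transport_ctrlr_connect_qpair(struct spdk_nvme_ctrlr *ctrlr,
				   struct spdk_nvme_qpair *qpair)
{
	int rc;

	nvme_qpair_set_state(qpair, NVME_QPAIR_CONNECTING);

	/* The real code dispatches through transport->ops.ctrlr_connect_qpair;
	 * the hypothetical example transport stands in for that here. */
	rc = example_transport_ctrlr_connect_qpair(ctrlr, qpair);
	if (rc != 0) {
		return rc;
	}

	/* Busy wait for a (hypothetically) asynchronous transport to finish. */
	while (nvme_qpair_get_state(qpair) == NVME_QPAIR_CONNECTING) {
		rc = spdk_nvme_qpair_process_completions(qpair, 0);
		if (rc < 0) {
			return rc;
		}
	}

	return 0;
}
```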
Files:

- Makefile
- nvme_ctrlr_cmd.c
- nvme_ctrlr_ocssd_cmd.c
- nvme_ctrlr.c
- nvme_cuse.c
- nvme_cuse.h
- nvme_fabric.c
- nvme_internal.h
- nvme_io_msg.c
- nvme_io_msg.h
- nvme_ns_cmd.c
- nvme_ns_ocssd_cmd.c
- nvme_ns.c
- nvme_opal_internal.h
- nvme_opal.c
- nvme_pcie_common.c
- nvme_pcie_internal.h
- nvme_pcie.c
- nvme_poll_group.c
- nvme_qpair.c
- nvme_quirks.c
- nvme_rdma.c
- nvme_tcp.c
- nvme_transport.c
- nvme_vfio_user.c
- nvme_zns.c
- nvme.c
- spdk_nvme.map