37ad7fd3b8
This value (num_outstanding_data_wrs) was not being decremented when we got SEND completions for write operations, because we were using the recv send to indicate when we had completed all writes associated with the request. I also erroneously assumed that spdk_nvmf_rdma_request_parse_sgl would properly reset this value to zero for all requests. However, for requests that return SPDK_NVME_DATA_NONE from spdk_nvmf_rdma_request_get_xfer, this function is skipped and the value is never reset. This can cause a coherency issue on admin queues when we request multiple log files. When the keep_alive request is resent, it can pick up an old rdma_req that reports the wrong number of outstanding_wrs, and it will permanently increment the qpair's curr_send_depth.

This change decrements num_outstanding_data_wrs on writes, and also resets that value when the request is freed, to ensure that this problem doesn't occur again.

Change-Id: I5866af97c946a0a58c30507499b43359fb6d0f64
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443811 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447462
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
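To make the accounting concrete, here is a minimal, self-contained C sketch of the two fixes the message describes. The struct and function names (rdma_qpair, rdma_request, request_send_completed, request_free) are simplified stand-ins invented for illustration, not SPDK's actual definitions in lib/nvmf/rdma.c; only the field names num_outstanding_data_wrs and curr_send_depth, and the identifiers cited in the prose, come from the commit message itself.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the SPDK structures involved; only the fields
 * this fix touches are modeled. */
struct rdma_qpair {
	uint32_t curr_send_depth;          /* WRs outstanding on the send queue */
};

struct rdma_request {
	struct rdma_qpair *rqpair;
	uint32_t num_outstanding_data_wrs; /* data (RDMA WRITE/READ) WRs in flight */
};

/* Hypothetical completion handler. The RDMA WRITEs for a data transfer are
 * followed by one SEND carrying the response; when that SEND completes, the
 * preceding writes have completed as well, so their send-queue slots must be
 * released here too (the decrement that was missing). */
static void
request_send_completed(struct rdma_request *rdma_req)
{
	struct rdma_qpair *rqpair = rdma_req->rqpair;

	rqpair->curr_send_depth -= rdma_req->num_outstanding_data_wrs + 1;
	rdma_req->num_outstanding_data_wrs = 0;
}

/* Hypothetical free path. Requests whose transfer type is SPDK_NVME_DATA_NONE
 * skip spdk_nvmf_rdma_request_parse_sgl, so nothing else rewinds this counter;
 * clearing it here keeps a recycled rdma_req (e.g. one picked up by a resent
 * keep_alive) from reporting stale outstanding WRs and permanently inflating
 * curr_send_depth. */
static void
request_free(struct rdma_request *rdma_req)
{
	rdma_req->num_outstanding_data_wrs = 0;
}

int
main(void)
{
	struct rdma_qpair qp = { .curr_send_depth = 0 };
	struct rdma_request req = { .rqpair = &qp };

	/* Suppose a request posts 2 RDMA WRITEs plus 1 response SEND. */
	req.num_outstanding_data_wrs = 2;
	qp.curr_send_depth += 3;

	request_send_completed(&req);
	request_free(&req);

	/* With both fixes applied, the queue accounting returns to zero. */
	assert(qp.curr_send_depth == 0);
	assert(req.num_outstanding_data_wrs == 0);
	printf("curr_send_depth=%u after completion and free\n",
	       qp.curr_send_depth);
	return 0;
}
```

Draining the data-WR count at SEND completion is safe because, on a reliable-connected QP, work requests complete in the order they were posted to the send queue: a completion for the trailing SEND implies every earlier RDMA WRITE for that request has also completed.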