59a1fbe937
We throttle the number of data_in operations per connection. Currently, after a read is completed, we try to send more data_in operations, since one has just been completed. But we are trying to send more too early: data_in_cnt doesn't actually get decremented until after the PDU is written to the socket.

This results in a case where data_in_cnt == 64 and all 64 read operations complete before any of them are actually transmitted on the TCP socket. There are no more read operations waiting, so we won't try to handle the data_in list again, and if none of these 64 resulted in a SCSI command completing, the initiator may not send us any more read I/O, which would otherwise have kicked the data_in list.

So the solution is to kick the data_in list after the PDU has been written, not after a read I/O is completed back from the SCSI layer.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia01cf96e8eb6e08ddcaaeff449386e78de7c5bc5
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/455454
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
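Below is a minimal, self-contained C sketch of the ordering change described above. It is not the actual patch: the struct and callback names (iscsi_conn, handle_queued_datain, read_task_done_old, pdu_write_done_new) are illustrative stand-ins; only data_in_cnt and the limit of 64 come from the commit message. The point is simply where the queued data_in list gets kicked relative to when the slot counter is decremented.

```c
/*
 * Illustrative sketch only: the struct and callback names below are
 * stand-ins, not the real SPDK symbols.  data_in_cnt and the limit of
 * 64 come from the commit message above.
 */
#include <stdio.h>

#define MAX_DATA_IN 64	/* per-connection throttle on outstanding data_in operations */

struct iscsi_conn {
	int data_in_cnt;		/* data_in operations currently in flight */
	int queued_datain_tasks;	/* reads waiting for a free data_in slot */
};

/* Start queued data_in operations while the connection is below the throttle. */
static void
handle_queued_datain(struct iscsi_conn *conn)
{
	while (conn->data_in_cnt < MAX_DATA_IN && conn->queued_datain_tasks > 0) {
		conn->queued_datain_tasks--;
		conn->data_in_cnt++;
		/* ...submit the read and build the data_in PDU here... */
	}
}

/*
 * Old ordering (the bug): kick the queued list when the read completes
 * from the SCSI layer.  data_in_cnt has not been decremented yet, so if
 * all 64 reads complete before any PDU reaches the socket, this finds
 * no free slot and nothing ever re-kicks the list.
 */
static void
read_task_done_old(struct iscsi_conn *conn)
{
	handle_queued_datain(conn);	/* too early: this operation still holds a slot */
}

/*
 * New ordering (the fix): decrement the counter and kick the queued list
 * only after the data_in PDU has actually been written to the TCP socket.
 */
static void
pdu_write_done_new(struct iscsi_conn *conn)
{
	conn->data_in_cnt--;
	handle_queued_datain(conn);	/* a slot is genuinely free now */
}

int
main(void)
{
	struct iscsi_conn conn = { .data_in_cnt = MAX_DATA_IN, .queued_datain_tasks = 3 };

	read_task_done_old(&conn);	/* at the cap: no queued task can start */
	printf("old path: data_in_cnt=%d queued=%d\n",
	       conn.data_in_cnt, conn.queued_datain_tasks);

	pdu_write_done_new(&conn);	/* frees a slot, then starts a queued task */
	printf("new path: data_in_cnt=%d queued=%d\n",
	       conn.data_in_cnt, conn.queued_datain_tasks);
	return 0;
}
```

Tying the kick to the PDU-write completion guarantees forward progress: every transmitted data_in PDU frees a slot and immediately revisits the queued list, so the connection can no longer stall with all 64 slots logically held by reads that finished at the SCSI layer but were never sent.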
bdev
blob
blobfs
conf
copy
env_dpdk
event
ftl
ioat
iscsi
json
jsonrpc
log
lvol
nbd
net
notify
nvme
nvmf
reduce
rocksdb
rpc
scsi
sock
thread
trace
ut_mock
util
vhost
virtio
Makefile