5b2c76f062
For some use cases with heavy large-read I/O, a performance bottleneck due to MAX_LARGE_DATAIN_PER_CONNECTION was reported. The following assumes that all I/Os are large reads.

A large-read primary task whose I/O size exceeds SPDK_BDEV_LARGE_BUF_MAX_SIZE (=64KB) is split into multiple read subtasks. spdk_iscsi_globals::MaxQueueDepth limits the maximum number of outstanding read primary tasks, and MAX_LARGE_DATAIN_PER_CONNECTION (=64) limits the maximum number of outstanding read subtasks. MAX_LARGE_DATAIN_PER_CONNECTION is also used to calculate the size of the PDU pool.

To remove the performance bottleneck, change the macro constant MAX_LARGE_DATAIN_PER_CONNECTION to a global variable, spdk_iscsi_globals::MaxLargeDataInPerConnection. We see no negative side effect from setting spdk_iscsi_globals::MaxLargeDataInPerConnection to 64. The use case that reported the performance issue will change the value of spdk_iscsi_globals::MaxLargeDataInPerConnection on its own responsibility.

The next patch will add the value of spdk_iscsi_globals::MaxLargeDataInPerConnection to the iSCSI options and make it configurable by JSON RPC.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ifc30cdb8e00d50f4d3755ff399263cf5d0b681b6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3755
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
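Conceptually, the change moves the subtask cap from compile time to run time. The following minimal C sketch illustrates the pattern under stated assumptions: the struct is a trimmed-down stand-in for spdk_iscsi_globals showing only the two limits discussed above, the default of 64 preserves the old macro's behavior, and both the PDU-pool sizing expression and the subtask-count helper are illustrative, not SPDK's exact code.

```c
#include <stdint.h>
#include <stdio.h>

#define SPDK_BDEV_LARGE_BUF_MAX_SIZE (64 * 1024)    /* 64KB split threshold */
#define DEFAULT_MAX_LARGE_DATAIN_PER_CONNECTION 64  /* old macro value */

/* Hypothetical, trimmed-down stand-in for struct spdk_iscsi_globals;
 * only the two limits discussed in the commit message are shown. */
struct iscsi_globals {
	uint32_t MaxQueueDepth;               /* caps outstanding read primary tasks */
	uint32_t MaxLargeDataInPerConnection; /* caps outstanding read subtasks;
					       * formerly the macro constant
					       * MAX_LARGE_DATAIN_PER_CONNECTION */
};

static struct iscsi_globals g_iscsi = {
	.MaxQueueDepth = 64,
	.MaxLargeDataInPerConnection = DEFAULT_MAX_LARGE_DATAIN_PER_CONNECTION,
};

/* A large read is split into ceil(io_size / 64KB) subtasks. */
static uint32_t
num_read_subtasks(uint64_t io_size)
{
	return (uint32_t)((io_size + SPDK_BDEV_LARGE_BUF_MAX_SIZE - 1) /
			  SPDK_BDEV_LARGE_BUF_MAX_SIZE);
}

/* PDU pool sizing now reads the runtime field instead of the macro.
 * The expression itself is illustrative only. */
static uint32_t
pdu_pool_size(uint32_t max_connections)
{
	return max_connections * g_iscsi.MaxLargeDataInPerConnection;
}

int
main(void)
{
	/* e.g. a 1MB read becomes 16 subtasks of 64KB each */
	printf("1MB read -> %u subtasks\n", num_read_subtasks(1024 * 1024));
	printf("PDU pool for 128 connections: %u PDUs\n", pdu_pool_size(128));
	return 0;
}
```

With a runtime field, a deployment that hits the subtask cap can raise the value (the follow-up patch exposes it via JSON RPC), while the default of 64 keeps existing behavior unchanged.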