4dba507224
The resources allocated to a queue pair do not need to correlate directly with the queue size requested by the initiator in NVMe-oF, as long as enough resources are present. The RDMA transport, for instance, does complex pooling of resources behind the scenes when using a shared receive queue.

Simplify resource allocation for a TCP qpair by always allocating the maximum allowed queue size right away. This maximum is a configurable parameter, so system administrators can adjust it to their needs. The initiator may then request a queue size less than or equal to that maximum, which is enforced only by queue depth counting and does not affect the number of resources actually allocated on the target.

This change relies on MaxC2HSize being equal to the reported Maximum Data Transfer Size (MDTS). That is the default configuration, but MDTS is configurable. With this patch, setting MDTS to a value larger than 128k will break the target; this is addressed in the next patch in this series.

Change-Id: Ibd4723785c6a4d8d444f9b7bbfa89f98de2320f5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479733
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
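The allocate-for-the-maximum, enforce-by-counting scheme described above can be sketched as follows. This is a minimal illustration, not the actual SPDK implementation: the struct and function names (`tcp_qpair`, `qpair_create`, `qpair_connect`, `qpair_submit`, `qpair_complete`) are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch: resources are sized for the configured maximum
 * up front; the initiator's requested depth is enforced purely by
 * counting outstanding requests, never by resource availability. */
struct tcp_qpair {
	uint32_t max_queue_depth;  /* configured maximum; resources allocated for this */
	uint32_t requested_depth;  /* depth negotiated with the initiator */
	uint32_t outstanding;      /* requests currently in flight */
	void   **reqs;             /* one slot per possible request */
};

static struct tcp_qpair *qpair_create(uint32_t max_depth)
{
	struct tcp_qpair *q = calloc(1, sizeof(*q));

	if (q == NULL) {
		return NULL;
	}
	q->max_queue_depth = max_depth;
	/* Allocate for the maximum right away, regardless of what the
	 * initiator will later request. */
	q->reqs = calloc(max_depth, sizeof(void *));
	if (q->reqs == NULL) {
		free(q);
		return NULL;
	}
	return q;
}

/* The initiator may request any depth up to the configured maximum. */
static int qpair_connect(struct tcp_qpair *q, uint32_t requested)
{
	if (requested == 0 || requested > q->max_queue_depth) {
		return -1;
	}
	q->requested_depth = requested;
	return 0;
}

/* Admission control is just a counter check; no resources are
 * allocated or freed per request here. */
static int qpair_submit(struct tcp_qpair *q)
{
	if (q->outstanding >= q->requested_depth) {
		return -1; /* queue full from the initiator's point of view */
	}
	q->outstanding++;
	return 0;
}

static void qpair_complete(struct tcp_qpair *q)
{
	if (q->outstanding > 0) {
		q->outstanding--;
	}
}
```

Note how a qpair created with a large maximum but connected at depth 2 rejects a third in-flight request even though slots for the maximum exist; the counter, not the allocation, is the limit.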