0bb626672b
According to the TP 8000 spec, page 26:

"Maximum Number of Outstanding R2T (MAXR2T): Specifies the maximum number of outstanding R2T PDUs for a command at any point in time on the connection."

Note that per the spec, the target may support only a single R2T (the minimum possible); it does not have to use multiple R2Ts even if the initiator supports them. So remove the maxr2t and pending_r2t variables from the tcp qpair structure.

In the original design, we assumed maxr2t was the maximum number of active R2Ts for the whole connection: if the initiator sent maxr2t=16, all commands on a qpair would share that many R2T PDUs, so a request had to wait for an available R2T once the connection-wide count reached the maximum. That was a misreading of the spec. In fact, each command has its own maximum number of R2Ts, so no wait-for-R2T mechanism is needed. Therefore we remove the state TCP_REQUEST_STATE_DATA_PENDING_FOR_R2T and adjust the related SPDK_TPOINT_ID definitions.

With this patch, the target supports one active R2T per write NVMe command, so the function spdk_nvmf_tcp_handle_queued_r2t_req is also removed.

Reported-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I7547b8facbc39139b4584637ccc51ba8b33ca285
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/455763
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Or Gerlitz <gerlitz.or@gmail.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Files changed:
  ctrlr_bdev.c
  ctrlr_discovery.c
  ctrlr.c
  Makefile
  nvmf_fc.h
  nvmf_internal.h
  nvmf.c
  rdma.c
  subsystem.c
  tcp.c
  transport.c
  transport.h