a2adca79d9
With our target design, there's no advantage to sending multiple R2T PDUs per NVMe command. This patch starts by setting up the math so that at most 1 R2T PDU is required per request. This can be guaranteed because the maximum data transfer size (MDTS) is pre-negotiated in NVMe-oF to a reasonable size at start-up. It then proceeds to simplify all of the logic around mapping requests to PDUs; it turns out that the mapping is now always 1:1. There are two additional cases where there is no request object at all but a PDU is still needed: the connection response and the termination request. Put an extra PDU on the queue object for that purpose. This is a major simplification.

Change-Id: I8d41f9bf95e70c354ece8fb786793624bec757ea
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479905
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
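As an illustration of the scheme the message describes, here is a minimal C sketch. The identifiers (`tcp_pdu`, `tcp_req`, `tcp_qpair`, `mgmt_pdu`, `build_single_r2t`) are hypothetical and are not the actual SPDK definitions; the sketch only shows the shape of the idea: each request embeds the one PDU it will ever need, the queue pair carries one spare PDU for the connection response and the termination request, and a single R2T can always cover a command's data because the transfer length is bounded by the negotiated MDTS.

```c
/*
 * Illustrative sketch only: the struct and field names below are hypothetical
 * and do not match SPDK's actual definitions in lib/nvmf/tcp.c.
 */
#include <assert.h>
#include <stdint.h>

enum pdu_type { PDU_IC_RESP, PDU_R2T, PDU_TERM_REQ };

struct tcp_pdu {
	enum pdu_type type;
	uint32_t      data_len;
	/* header and payload buffers omitted */
};

/* 1:1 mapping: each request embeds the single PDU it will ever need. */
struct tcp_req {
	uint32_t       transfer_len;  /* bounded by the negotiated MDTS */
	struct tcp_pdu pdu;
};

/*
 * Two cases have no request object but still need a PDU: the connection (IC)
 * response and the termination request. Keep one spare PDU on the queue pair.
 */
struct tcp_qpair {
	uint32_t       mdts;          /* max data transfer size, fixed at start-up */
	struct tcp_pdu mgmt_pdu;      /* IC response / termination request */
};

/*
 * Because transfer_len <= mdts, a single R2T covering the whole transfer is
 * always enough, so no per-request multi-R2T bookkeeping is required.
 */
static void build_single_r2t(const struct tcp_qpair *qp, struct tcp_req *req)
{
	assert(req->transfer_len <= qp->mdts);
	req->pdu.type     = PDU_R2T;
	req->pdu.data_len = req->transfer_len;  /* request all the data in one shot */
}
```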
ctrlr_bdev.c
ctrlr_discovery.c
ctrlr.c
custom_cmd_hdlr.c
fc_ls.c
fc.c
Makefile
nvmf_fc.h
nvmf_internal.h
nvmf_rpc.c
nvmf.c
rdma.c
subsystem.c
tcp.c
transport.c
transport.h