8a80461ac6
The RDMA transport calls spdk_nvmf_rdma_request_parse_sgl() only if the request is the first of the pending requests in the RDMA_REQUEST_STATE_NEED_BUFFER case of the state machine spdk_nvmf_rdma_requests_process(). This made it possible for the RDMA transport to use a STAILQ for pending requests, because STAILQ_REMOVE traverses from the head and is slow when the target element is in the middle of the STAILQ.

On the other hand, the TCP transport calls spdk_nvmf_tcp_req_parse_sgl() in the TCP_REQUEST_STATE_NEED_BUFFER case of the state machine spdk_nvmf_tcp_req_process() even if the request is in the middle of the pending requests, provided the request has in-capsule data. Hence the TCP transport has used a TAILQ for pending requests.

This patch removes the in-capsule data condition from the TCP_REQUEST_STATE_NEED_BUFFER case. The purpose of this patch is to unify I/O buffer management further. No performance degradation was observed with this patch.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idc97fe20f7013ca66fd58587773edb81ef7cbbfc
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466636
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
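The reasoning above hinges on the cost asymmetry between the two queue types in <sys/queue.h>: STAILQ_REMOVE must walk from the head, so a STAILQ is only a good fit when requests are dequeued strictly from the front, while TAILQ_REMOVE can unlink a middle element in O(1). The following is a minimal, self-contained sketch of that head-only dequeue pattern; it is not SPDK code, and struct pending_req, try_get_buffer(), and process_pending() are hypothetical names used only for illustration.

```c
#include <sys/queue.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a request waiting for an I/O buffer. */
struct pending_req {
	int id;
	STAILQ_ENTRY(pending_req) link;
};

STAILQ_HEAD(pending_queue, pending_req);

/* Stub allocator: pretend only two buffers are available. */
static bool
try_get_buffer(struct pending_req *req)
{
	static int buffers_left = 2;

	(void)req;
	return buffers_left-- > 0;
}

/*
 * Drain the pending queue strictly from the head, stopping at the first
 * request that cannot get a buffer.  Because only the head is ever removed,
 * the O(n) STAILQ_REMOVE is never needed and the lighter STAILQ suffices.
 */
static void
process_pending(struct pending_queue *q)
{
	struct pending_req *req;

	while ((req = STAILQ_FIRST(q)) != NULL) {
		if (!try_get_buffer(req)) {
			break;	/* Head stalls; later requests keep waiting. */
		}
		STAILQ_REMOVE_HEAD(q, link);
		printf("request %d got a buffer\n", req->id);
		free(req);
	}
}

int
main(void)
{
	struct pending_queue q = STAILQ_HEAD_INITIALIZER(q);

	for (int i = 0; i < 4; i++) {
		struct pending_req *req = calloc(1, sizeof(*req));

		req->id = i;
		STAILQ_INSERT_TAIL(&q, req, link);
	}
	process_pending(&q);
	return 0;
}
```

With the in-capsule data condition removed, the TCP NEED_BUFFER handling can follow this head-only pattern just as the RDMA transport does, which is what allows the pending-request queues to be unified on the cheaper STAILQ.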
ctrlr_bdev.c
ctrlr_discovery.c
ctrlr.c
fc_ls.c
fc.c
Makefile
nvmf_fc.h
nvmf_internal.h
nvmf_rpc.c
nvmf.c
rdma.c
subsystem.c
tcp.c
transport.c
transport.h |