Commit Graph

1116 Commits

Author SHA1 Message Date
yidong0635
024127dcfd rdma: Add return value check for memory map notify.
Previously this code always returned 0. Make it behave like
nvme_rdma_mr_map_notify, so that the caller can get the right return value.
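
A minimal sketch of the pattern described here, assuming the SPDK mem-map
notify callback signature from spdk/env.h; the callback name and translation
value are illustrative, and the point is only that the result of the
translation update is propagated instead of being discarded:

```c
#include "spdk/env.h"

/* Hypothetical notify callback: return the rc of the translation update
 * instead of always returning 0, so failures are visible to the caller. */
static int
rdma_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
		enum spdk_mem_map_notify_action action,
		void *vaddr, size_t size)
{
	int rc = 0;

	switch (action) {
	case SPDK_MEM_MAP_NOTIFY_REGISTER:
		/* translation value chosen for illustration only */
		rc = spdk_mem_map_set_translation(map, (uint64_t)vaddr, size,
						  (uint64_t)vaddr);
		break;
	case SPDK_MEM_MAP_NOTIFY_UNREGISTER:
		rc = spdk_mem_map_clear_translation(map, (uint64_t)vaddr, size);
		break;
	}

	return rc;	/* previously this path always returned 0 */
}
```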

Change-Id: Ief2924e14321b2062f6001e7ae3f50d507206594
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/468663
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2019-09-19 01:55:55 +00:00
Seth Howell
8126509c4f rdma: replace improperly aligned buffers in requests.
It is very rare for a buffer to be split over two memory regions. In fact,
it is only possible in DPDK versions where --match-allocations is not passed
as a startup parameter to DPDK but dynamic memory allocation is enabled.

By adding a small helper function, we avoid failing an I/O because it
was assigned one of these improperly aligned buffers. Also, we try to
remove the buffer from circulation so that it doesn't get picked up
again by another request.

Also, add a unit test to catch this case.
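
A hedged sketch of what such a helper might look like; the function names
and the replacement flow below are illustrative, not the exact SPDK
implementation. The idea is to use the memory-map translation length to
detect that a buffer does not fit inside a single registered region, and to
swap it for a fresh buffer from the pool without returning the bad buffer
to circulation:

```c
#include <errno.h>
#include <stdbool.h>
#include "spdk/env.h"

/* Illustrative check: a buffer cannot be described by a single SGE if the
 * registered region containing its start does not also cover its end. */
static bool
buffer_split_across_regions(struct spdk_mem_map *map, void *buf, size_t len)
{
	uint64_t translation_len = len;

	spdk_mem_map_translate(map, (uint64_t)buf, &translation_len);
	return translation_len < len;
}

/* Illustrative replacement: leave the bad buffer out of circulation and
 * take a new one from the transport's buffer pool. */
static int
replace_split_buffer(struct spdk_mempool *pool, struct spdk_mem_map *map,
		     void **buf, size_t len)
{
	if (!buffer_split_across_regions(map, *buf, len)) {
		return 0;
	}

	*buf = spdk_mempool_get(pool);
	if (*buf == NULL) {
		return -ENOMEM;
	}
	return 0;
}
```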

Change-Id: Ia09865c2f77160a960571665b29c4533b11758ae
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467446
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
2019-09-17 19:43:01 +00:00
Seth Howell
98233769f4 rdma: simplify nvmf_rdma_fill_buffers
Just cleaning up a few things like variable names and ordering to make
the whole function more readable.

Change-Id: I1503cdb43ddd73e063d6e57e9ff0cf2a06e79728
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467445
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
2019-09-17 19:43:01 +00:00
Evgeniy Kochetov
87ebcb08c1 nvmf/rdma: Handle completions for destroyed QP associated with SRQ
IB Architecture Specification vol. 1, rel. 1.3, ch. 10.3.1 "QUEUE PAIR
AND EE CONTEXT STATES" suggests the following destroy procedure for
QPs associated with an SRQ:
- Put the QP in the Error State;
- wait for the Affiliated Asynchronous Last WQE Reached Event;
- either:
  * drain the CQ by invoking the Poll CQ verb and either wait for CQ
    to be empty or the number of Poll CQ operations has exceeded CQ
    capacity size; or
  * post another WR that completes on the same CQ and wait for this WR
    to return as a WC;
- and then invoke a Destroy QP or Reset QP.

Without the drain step it is possible that the LAST_WQE_REACHED event is
received and the QP is destroyed before the last receive WR completion is
polled from the CQ.

In SPDK there is no risk of resource leakage in this case. So, instead of
draining, we can destroy the QP, simply ignore receive completions that no
longer have a QP, and post the receive WRs back to the SRQ.
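
A minimal sketch of that approach, under the assumption that a receive
completion whose owning qpair has already been destroyed can be recognized
(for example by a NULL qpair pointer on the receive context); such
completions are not processed, and their WRs are posted straight back to
the SRQ. The structure and field names are illustrative:

```c
#include <stdint.h>
#include <infiniband/verbs.h>

/* Illustrative receive context carried in wr_id. */
struct rdma_recv_ctx {
	struct ibv_recv_wr	wr;
	struct ibv_sge		sge;
	void			*qpair;	/* NULL once the QP was destroyed */
};

static void
handle_recv_completion(struct ibv_srq *srq, struct ibv_wc *wc)
{
	struct rdma_recv_ctx *recv = (struct rdma_recv_ctx *)(uintptr_t)wc->wr_id;
	struct ibv_recv_wr *bad_wr = NULL;

	if (recv->qpair == NULL) {
		/* QP already destroyed: ignore the completion and return
		 * the receive WR to the shared receive queue. */
		(void)ibv_post_srq_recv(srq, &recv->wr, &bad_wr);
		return;
	}

	/* ... normal completion processing for a live qpair ... */
}
```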

Fixes #903

Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ice6d3d5afc205c489f768e3b51c6cda8809bee9a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465747
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-12 17:04:48 +00:00
Michal Ben Haim
62615117f7 SPDK: changing TREQ value from 'not specified' to 'not required'.
Signed-off-by: Michal Ben Haim <michal.benhaim@kaminario.com>
Change-Id: Ia7bda5b18db24df97172d4500a499c4635d592d5
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467499
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: John Kariuki <John.K.Kariuki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-10 17:51:26 +00:00
Ben Walker
59e34aa865 nvmf/tcp: Don't set socket recvbuf size anymore
The default behavior is to set it to 2MB, so this isn't
required anymore.

Change-Id: I62d7605cd4d5bc41347128f32f9a1aa373a15744
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466993
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-09-10 17:48:49 +00:00
Ziye Yang
24eb7a84b0 nvme/tcp: fix the iov vector count.
Since we use pdu->data_iovcnt to build the iov in nvme_tcp_build_iovs,
a PDU being sent out has at most 2 + pdu->data_iovcnt iovecs, so we
change the comparison accordingly.

This makes sure that we can handle all the data owned by one PDU.
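
As a hedged illustration of the bound described above (not the exact code):
a PDU on the wire needs at most one iovec for the header (including the
header digest), one for the data digest, and pdu->data_iovcnt for the
payload, so any capacity check should compare against 2 + data_iovcnt:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: 1 iovec for the PDU header (plus header digest),
 * up to data_iovcnt iovecs for payload, and 1 iovec for the data digest. */
static bool
pdu_fits_in_iov_array(uint32_t data_iovcnt, uint32_t avail_iovs)
{
	return avail_iovs >= 2 + data_iovcnt;
}
```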

Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I2b9258cc5716d706c0fa38af609726c439708768
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467207
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-09-09 02:08:31 +00:00
Shuhei Matsumoto
9796768132 nvmf: Move pending_data_buf_queue to common struct spdk_nvmf_transport_poll_group
This unifies buffer management among transports further and is a
preparation to make buffer allocation asynchronous.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I8c588eeac4081f50fe32605feb7352f72c628d95
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466847
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
cb5c661274 nvmf/fc: Move pending_data_buf_queue from fc_conn to fc_poll_group
The I/O buffer cache is per transport_poll_group now. Hence moving
pending_data_buf_queue from struct spdk_nvmf_fc_conn to struct
spdk_nvmf_fc_poll_group is reasonable, and this patch does that.

This change follows the RDMA and TCP transports.

Further unification among transports will be done in subsequent
patches.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic857046be8da238cb3ff9e89b83cdac5f6349bcf
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466844
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
2ed1b6c253 nvmf/fc: Use transport pointer stored in transport_poll_group
The pointer to the transport is set in struct nvmf_transport_poll_group
by nvmf_transport_poll_group_create() after nvmf_fc_poll_group_create()
returns. Hence use it and remove the ftransport pointer from
struct nvmf_fc_poll_group.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9f2b2ade77afa18d0e97949fc0c2403eb000cdad
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467060
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
b913e01644 nvmf/fc: Rename pointer to nvmf_fc_transport from fc_transport to ftransport
The RDMA transport uses rtransport and the TCP transport uses ttransport,
respectively. So the FC transport changes to use ftransport instead of
fc_transport.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I7d98eb2f6efbae7e2b4784f31b9de5e1a81bc2ac
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467059
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
b9dc11f98d nvmf/fc: Rename transport_poll_group instance in nvmf_fc_poll_group to group
Both the RDMA and TCP transports use group for this case. Hence the
FC transport changes to use group instead of tp_poll_group.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic4b401179da506bb204c3ec48650db87f91fe72a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466843
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
01df17d007 nvmf/fc: Use pointer stored in transport_poll_group and remove it from fc_poll_group
The pointer to nvmf_poll_group is set in nvmf_transport_poll_group_create()
after nvmf_fc_poll_group_create() returns. Hence holding another copy in
struct spdk_nvmf_fc_poll_group is redundant and can be removed.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I7087c5cdb94b0b0c5f51b0b63b631c08266c90d0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466842
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
99ea1d3612 nvmf/fc: Rename nvmf_fc_poll_group pointer held in struct to fgroup
The RDMA transport uses rgroup and the TCP transport uses tgroup for this
case. Hence the FC transport changes to use fgroup instead of
fc_poll_group.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I91b7ad6a1c6e45caf92801b0635b18d48b3c9810
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466841
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Seth Howell
20b35d769d nvmf: don't keep a global discovery log page.
Keeping a global discovery log page was meant to be a time saving
mechanism, but in the current implementation, it doesn't work properly,
and can cause undesirable behavior and potential crashes. There are two
main problems with keeping a global log page.

1. Admin qpairs can be assigned to any SPDK thread. This means that when
multiple initiators connect to the host and request the discovery log,
they can both be running through the spdk_nvmf_ctrlr_get_log_page
function at the same time. In the event that the discovery generation
counter is incremented while these accesses are occurring, it can cause
one or both of the threads to update the log at the same time. This
results in both threads trying to free the old log page (double free) and
set their own log as the new one (possible memory leak).

2. The second problem is that each host is supposed to get a unique
discovery log based on the subsystems to which they have access.
Currently the code relies on whether the discovery log page offset in
the request is equal to 0 to determine if it should load a new discovery
log page or use the cached one. This is inherently faulty because it
relies on an initiator-provided value to determine what information to
provide from the log page. An initiator could easily send a discovery
request with an offset greater than 0 on purpose to procure most of a
log page provided to another host.

Overall, I think it's safest to not cache the log page at all anymore
and rely on a thread local fresh log page each time.
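
A hedged sketch of the per-call approach argued for here; the helper
build_discovery_log_for_host and the surrounding function are hypothetical.
Each get-log-page request builds a fresh, host-specific log on the calling
thread, copies out only the requested range, and frees it, so there is no
shared cached page to race on or to leak to another host:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: allocates and fills a log filtered for this host. */
void *build_discovery_log_for_host(const char *hostnqn, size_t *size);

static int
copy_discovery_log(const char *hostnqn, uint64_t offset, void *buf, size_t length)
{
	size_t log_size = 0;
	void *log = build_discovery_log_for_host(hostnqn, &log_size);

	if (log == NULL) {
		return -ENOMEM;
	}

	if (offset < log_size) {
		size_t copy_len = log_size - offset;

		if (copy_len > length) {
			copy_len = length;
		}
		memcpy(buf, (uint8_t *)log + offset, copy_len);
	}

	free(log);	/* nothing is cached across calls or threads */
	return 0;
}
```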

Reported-by: Curt Bruns <curt.e.bruns@intel.com>

Change-Id: Ib048e26f139927d888fed7019e0deec346359582
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466839
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-03 00:30:59 +00:00
Shuhei Matsumoto
0b068f8530 nvmf/rdma: Pass nvmf_request to nvmf_rdma_fill_buffers
Most variables related to the I/O buffer are in struct spdk_nvmf_request
now, so we can pass nvmf_request instead of nvmf_rdma_request to
nvmf_rdma_request_fill_buffers; this patch does that.

Additionally, this patch uses the cached pointer to nvmf_request in
spdk_nvmf_rdma_request_fill_iovs, the caller of
nvmf_rdma_request_fill_buffers.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ia7664e9688bd9fa157504b4f5075f79759d0e489
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466212
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
b4778363b4 nvmf/tcp: Pass nvmf_request to nvmf_tcp_req_fill_buffers
Most variables related to the I/O buffer are in struct spdk_nvmf_request
now, so we can pass nvmf_request instead of nvmf_tcp_req to
nvmf_tcp_req_fill_buffers; this patch does that.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I00eff578a98891e99fcb9a3aafa3d99126d6f1c1
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466089
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
90a2be2006 nvmf/fc: Pass nvmf_request to nvmf_fc_request_fill_buffers
Most variables related to the I/O buffer are in struct spdk_nvmf_request
now, so we can pass nvmf_request instead of nvmf_fc_request to
nvmf_fc_request_fill_buffers; this patch does that.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ibe87e7641e5c364b20a6d877ce7928c612b0b83a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466088
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
9412a8370d nvmf/fc: Use STAILQ for pending_data_buf_queue
This is a small performance optimization and an effort to unify I/O
buffer management further among transports.

It is ensured that the request is at the head of the STAILQ when
nvmf_fc_request_execute() completes successfully.

Hence change TAILQ_REMOVE to STAILQ_REMOVE_HEAD for that case.
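
For illustration, a minimal sketch of the queue pattern involved (generic
names, not the actual SPDK structures): removing at the head is what makes
the singly linked STAILQ a good fit, since STAILQ_REMOVE_HEAD is O(1)
while STAILQ_REMOVE of an arbitrary element has to walk the list:

```c
#include <assert.h>
#include <sys/queue.h>

struct pending_req {
	STAILQ_ENTRY(pending_req) link;
	/* ... */
};

STAILQ_HEAD(pending_queue, pending_req);

/* Remove a request that is guaranteed to be at the head of the queue. */
static void
pending_queue_complete_head(struct pending_queue *queue, struct pending_req *req)
{
	assert(req == STAILQ_FIRST(queue));
	(void)req;
	STAILQ_REMOVE_HEAD(queue, link);
}
```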

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: If982842bf53ba00426a854a18eaadf8a1b8d642d
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466676
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
6c8b297262 nvmf/fc: Rename pending_queue to pending_data_buf_queue
This is an effort to unify I/O buffer management further among
transports. The RDMA and TCP transports name their pending queue
pending_data_buf_queue, so the FC transport follows them.

The next patch will change pending_data_buf_queue to use STAILQ
instead of TAILQ.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I57c3c678a1e92ec262eb8940418529a62b6768c3
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466675
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
2bc819dd52 nvmf/tcp: Use STAILQ for queued_c2h_data_tcp_req and pending_data_buf_queue
This is a small performance optimization and an effort to unify
I/O buffer management further among transports.

It is ensured that the request is at the head of the STAILQ when
spdk_nvmf_tcp_send_c2h_data() is called or when the
TCP_REQUEST_STATE_NEED_BUFFER case is executed in spdk_nvmf_tcp_req_process().

Hence change TAILQ_REMOVE to STAILQ_REMOVE_HEAD for these two cases.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I0b195874ac22a8d5ecfb283a9865d2615b7d5912
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466637
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-30 16:56:46 +00:00
Ziye Yang
5e7b8d18f3 nvmf/tcp: Remove the potential pdu hdr memory copy.
In this patch, we point hdr_p directly at the memory owned by the
pdu_recv_buf to avoid a memory copy.

Change-Id: Iee0dd98058928f429bf7ad22103cd4826226400f
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465158
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 02:25:22 +00:00
Shuhei Matsumoto
8a80461ac6 nvmf/tcp: execute buffer allocation only if request is the first of pendings
RDMA transport executes spdk_nvmf_rdma_request_parse_sgl() only if
the request is the first of the pending requests in the case
RDMA_REQUEST_STATE_NEED_BUFFER in the state machine
spdk_nvmf_rdma_requests_process().

This made it possible for the RDMA transport to use a STAILQ for pending
requests, because STAILQ_REMOVE walks from the head and is slow when the
target is in the middle of the STAILQ.

On the other hand, the TCP transport executes spdk_nvmf_tcp_req_parse_sgl()
in the TCP_REQUEST_STATE_NEED_BUFFER case of the state machine
spdk_nvmf_tcp_req_process() even if the request is in the middle of the
pending requests, as long as the request has in-capsule data.

Hence the TCP transport has used a TAILQ for pending requests.

This patch removes the in-capsule data condition from the
TCP_REQUEST_STATE_NEED_BUFFER case.

The purpose of this patch is to unify I/O buffer management further.

Performance degradation was not observed even after this patch.
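
A hedged sketch of the state-machine condition this unification enables
(identifiers are illustrative): in the NEED_BUFFER state, only the request
at the head of the pending queue attempts buffer allocation; every other
request simply waits its turn, which is exactly what makes head-only
removal from an STAILQ sufficient:

```c
#include <stdbool.h>
#include <sys/queue.h>

struct tcp_req;
STAILQ_HEAD(buf_queue, tcp_req);

struct tcp_req {
	STAILQ_ENTRY(tcp_req) link;
};

/* Illustrative NEED_BUFFER handling: requests are served strictly in
 * queue order, regardless of whether they carry in-capsule data. */
static bool
try_fill_buffers(struct buf_queue *pending, struct tcp_req *req)
{
	if (req != STAILQ_FIRST(pending)) {
		return false;	/* not our turn yet */
	}

	/* ... parse SGL / allocate buffers for req ... */
	STAILQ_REMOVE_HEAD(pending, link);
	return true;
}
```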

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idc97fe20f7013ca66fd58587773edb81ef7cbbfc
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466636
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
0f73c253b5 nvmf/fc: Replace FC specific get/free_buffers by common APIs
Use spdk_nvmf_request_get_buffers() and spdk_nvmf_request_free_buffers(),
and then remove nvmf_fc_request_free_buffers() and nvmf_fc_request_get_buffers().

Set fc_req->data_from_pool to false after spdk_nvmf_request_free_buffers().

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I046a642156411da3935bc2fa2c2816fc2e025147
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465877
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
9968035884 nvmf/tcp: Replace TCP specific get/free_buffers by common APIs
Use spdk_nvmf_request_get_buffers() and spdk_nvmf_request_free_buffers(),
and then remove spdk_nvmf_tcp_request_free_buffers() and
spdk_nvmf_tcp_request_get_buffers().

Set tcp_req->data_from_pool to false after spdk_nvmf_request_free_buffers().

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I286b48149530c93784a4865b7215b5a33a4dd3c3
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465876
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
85b9e716e9 nvmf/rdma: Replace RDMA specific get/free_buffers by common APIs
Use spdk_nvmf_request_get_buffers() and spdk_nvmf_request_free_buffers(),
and then remove spdk_nvmf_rdma_request_free_buffers() and
nvmf_rdma_request_get_buffers().

Set rdma_req->data_from_pool to false after
spdk_nvmf_request_free_buffers().

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ie1fc4c261c3197c8299761655bf3138eebcea3bc
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465875
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
cc4d1f82cc nvmf: Add spdk_nvmf_request_get/free_buffers() usable among transports
This patch adds new APIs spdk_nvmf_request_get_buffers() and
spdk_nvmf_request_free_buffers() to be used among transports.
Subsequent patches will replace transport specific APIs by them.
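
A hedged sketch of how a common get/free pair of this kind can work; the
exact SPDK signatures are not reproduced here, and the per-poll-group cache
fields below are assumptions. Buffers are taken first from the poll group's
cache and only then from the shared transport pool, and are returned in the
reverse order:

```c
#include <errno.h>
#include <sys/queue.h>
#include "spdk/env.h"

/* Illustrative cache entry overlaid on the start of a free buffer. */
struct buf_cache_entry {
	STAILQ_ENTRY(buf_cache_entry) link;
};

struct poll_group_buf_cache {
	STAILQ_HEAD(, buf_cache_entry) bufs;
	uint32_t count;
	uint32_t size;			/* cache capacity */
	struct spdk_mempool *pool;	/* shared transport buffer pool */
};

static void *
request_get_buffer(struct poll_group_buf_cache *cache)
{
	struct buf_cache_entry *entry = STAILQ_FIRST(&cache->bufs);

	if (entry != NULL) {
		STAILQ_REMOVE_HEAD(&cache->bufs, link);
		cache->count--;
		return entry;
	}
	return spdk_mempool_get(cache->pool);
}

static void
request_free_buffer(struct poll_group_buf_cache *cache, void *buf)
{
	if (cache->count < cache->size) {
		STAILQ_INSERT_HEAD(&cache->bufs, (struct buf_cache_entry *)buf, link);
		cache->count++;
	} else {
		spdk_mempool_put(cache->pool, buf);
	}
}
```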

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ib153e2c5806b7276915a0aa91179fe9dbcb2a1f0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465874
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
005b053a02 nvmf: Move data_from_pool flag to common struct spdk_nvmf_request
This is a preparation to unify buffer management among transports.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I6b1c208207ae3679619239db4e6e9a77b33291d0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466002
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
04ae83ec93 nvmf: Move allocated buffer pointers to common struct spdk_nvmf_request
This is a preparation to unify buffer management among transports.
struct spdk_nvmf_request already has SPDK_NVMF_MAX_SGL_ENTRIES (16) * 2
iovecs, so doubling the number of buffers is not a problem.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idb525abbf35dc9f4b8547b785b5dfa77d106d8c9
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465873
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-29 18:17:38 +00:00
Evgeniy Kochetov
01887d3c96 nvmf/rdma: Fix data WR release
One of the stop conditions in the data WR release function was wrong. This
could cause release of uncompleted data WRs. Releasing WRs that have not
yet completed leads to various side effects, up to data corruption.

The issue was introduced with the send WR batching feature in commit
9d63933b7f.

This patch fixes the stop condition and contains some refactoring to
simplify the WR release function.

Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ie79f64da345e38038f16a0210bef240f63af325b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466029
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-29 18:09:14 +00:00
Ziye Yang
d50736776c nvmf/tcp: Use a big buffer for PDU receving.
Purpose: Reduce recv/readv system calls.
Method: Use a big recv buffer to perform the read.
Though this introduces an additional buffer copy, we expect the copy
overhead to be smaller than the overhead of frequent recv/readv system
calls; the design is a trade-off between the two.
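
A minimal, hedged sketch of the buffering idea (not the SPDK code): fill
one large buffer with a single recv() call, let the PDU parser consume from
it, and compact any partial remainder before the next read:

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

struct recv_buf {
	uint8_t	data[8192];	/* illustrative size */
	size_t	off;		/* bytes already consumed by the PDU parser */
	size_t	len;		/* bytes currently valid in data */
};

/* Refill the big buffer with one recv() instead of many small reads. */
static ssize_t
recv_buf_fill(int sock, struct recv_buf *b)
{
	ssize_t n;

	/* compact a leftover partial PDU to the front of the buffer */
	memmove(b->data, b->data + b->off, b->len - b->off);
	b->len -= b->off;
	b->off = 0;

	n = recv(sock, b->data + b->len, sizeof(b->data) - b->len, 0);
	if (n > 0) {
		b->len += (size_t)n;
	}
	return n;
}
```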

Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I9286fd9cec0b512cea8e3f2c335c5bf862b98573
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/464842
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-28 15:38:02 +00:00
Ziye Yang
ea5ad0b286 nvme/tcp: Change hdr in nvme_tcp_pdu to pointer
Purpose: Prepare for further optimization on the target side when
receiving PDU headers; we expect to use zero copy.

Change-Id: Iae7f9106844736d7160d39d0af1f5941084422ec
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465380
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-28 15:38:02 +00:00
Shuhei Matsumoto
eab7360bcb nvmf/tcp: Factor out getting and filling buffers from nvmf_tcp_req_fill_iovs
This follows the practice of RDMA transport and is a preparation to
unify buffer allocation among transports.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ib85625f2a0eca01ef4028685dd838d6c41faad7b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465872
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
72c10f7094 nvmf/tcp: Use spdk_mempool_get_bulk in nvmf_tcp_req_fill_iovs
This follows the practice of the RDMA transport and is a preparation to
unify buffer management among transports.
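
For reference, a hedged sketch of the bulk-get pattern using the SPDK env
API; the wrapper function is illustrative. spdk_mempool_get_bulk() either
returns all requested elements or none, which simplifies error handling
compared with looping over spdk_mempool_get():

```c
#include <errno.h>
#include "spdk/env.h"

#define NUM_BUFFERS 4	/* illustrative per-request buffer count */

static int
get_request_buffers(struct spdk_mempool *pool, void *buffers[NUM_BUFFERS])
{
	if (spdk_mempool_get_bulk(pool, buffers, NUM_BUFFERS) != 0) {
		return -ENOMEM;	/* none of the buffers were taken */
	}
	return 0;
}
```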

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I4e9b81b2bec813935064a6d49109b6a0365cb950
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465871
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
8aac212005 nvmf/tcp: Pass number of allocated buffers as param to nvmf_tcp_request_free_buffers
This is a preparation to the next patch to use spdk_mempool_get_bulk.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I28a5ad941004f139c9032d85c2ef92680081f1ce
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465870
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
5437470cdc nvmf/fc: Factor out getting and filling buffers from nvmf_fc_request_alloc_buffers
This follows the practice of the RDMA transport and is a preparation to
unify buffer allocation among transports.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I3cd4377ae31e47bbde697837be2d9bc1b1b582f1
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465869
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
71ae39594f nvmf/fc: Use buffer cache in nvmf_fc_request_alloc/free_buffers
The FC transport can now use the buffer cache in the same way as the RDMA
and TCP transports. The next patch will factor out getting buffers and
filling them into iovs in nvmf_fc_request_alloc_buffers().

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I0d7b4552f6ba053ba8fb5b3ca8fe7657b86f9984
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465868
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
fbb0f0faf9 nvmf/fc: Pass transport and num_buffers as params to nvmf_fc_request_free_buffers
This is a preparation to the next patch to use buffer cache in
FC transport.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I116b064ea0b0a437f9a3293a6f3d46a0e5fc8ecf
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465867
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
e3b8c31d03 nvmf/fc: Use spdk_mempool_get_bulk in nvmf_fc_request_alloc_buffers
This follows the practice of the RDMA transport and is a preparation to
unify buffer management among transport types.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic7dc8e6b826baf7f471d192630e8a048a35056ac
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465866
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
c5b15dde18 nvmf/fc: Use common buffer pool for FC transport
The NVMe-oF FC transport has used its own buffer pool and has not used
the common buffer pool yet.

There appears to be no particular reason to prevent the FC transport
from using the common buffer pool.

This patch removes the FC transport specific buffer pool and changes the
FC transport to use the common buffer pool instead. Transport is added
as a parameter to nvmf_fc_request_free_buffers() because the similar
APIs of the RDMA and TCP transports take it.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Iae3a117466c21eaddbe78a8e8023d80ef37bb3e9
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465865
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
cdf80adccc nvmf/fc: Check if buffer came from pool prior to nvmf_fc_request_free_buffers()
The NVMe-oF FC transport has used its own buffer pool and has not used
the common buffer pool yet.

There appears to be no particular reason to prevent the FC transport
from using the common buffer pool.

This patch extracts the fc_req->data_from_pool check out of
nvmf_fc_request_free_buffers() to make the transition easier.

fc_req->req.iovcnt and fc_req->req.data should be cleared regardless
of fc_req->data_from_pool. Hence that clearing is extracted into the callees.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I36420f0e573d1ec3f9f3a75f6b2ced82ade89dd3
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465864
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
cbd3500019 nvmf/fc: Use common setting to FC specific data buffer pool
The NVMe-oF FC transport has used its own buffer pool and has not used
the common buffer pool yet.

There appears to be no particular reason to prevent the FC transport
from using the common buffer pool.

This patch adjusts the settings of the FC transport specific buffer pool
to match the common buffer pool, to make the transition easier.

The larger alignment requirement consumes more memory but is acceptable.
The cache size calculation looks dated.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Id3224b65f39187c4d8e99c00cf54b1cfdd902250
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465863
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-26 19:04:24 +00:00
Seth Howell
f8433aad23 rpc/nvmf: add tgt_name options to relevant RPCs.
All of the RPCs in lib/nvmf/nvmf_rpc.c rely on knowing which nvmf_tgt
they should work with. They have historically relied on the assumption
that there will only be a single target in a given application. This is
true for the example application in the spdk repo, but it is not
necessarily true in general.

By adding an optional tgt_name parameter to the RPCs, we enable them for
multi-target NVMe-oF applications. We also further reduce the coupling
between the library and the example application.

Change-Id: I03b6695da05a42af3024842ed87d2ce2c296f33f
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465442
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
2019-08-21 17:20:28 +00:00
Seth Howell
a54a6a266c lib/nvmf: extract RPCs from the subsystem directory
There are one or two RPCs that deal with application specific
configuration. We can leave these there for now.

Change-Id: I9c40aa3403d32d3e2214c8c904fb1c414ad99967
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465365
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-21 17:20:28 +00:00
Seth Howell
79d876716c nvmf: add spdk_nvmf_get_tgt function
This function will allow applications (and RPCs)
to obtain an spdk_nvmf_tgt pointer by name.
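
A hedged sketch of a name-based lookup over a global target list; the list,
structure, and field names are assumptions for illustration (the other
commits in this series add the actual list and name field):

```c
#include <string.h>
#include <sys/queue.h>

/* Illustrative target structure and global list. */
struct nvmf_tgt {
	char name[256];
	TAILQ_ENTRY(nvmf_tgt) link;
};

static TAILQ_HEAD(, nvmf_tgt) g_nvmf_tgts = TAILQ_HEAD_INITIALIZER(g_nvmf_tgts);

static struct nvmf_tgt *
nvmf_get_tgt_by_name(const char *name)
{
	struct nvmf_tgt *tgt;

	TAILQ_FOREACH(tgt, &g_nvmf_tgts, link) {
		if (strcmp(tgt->name, name) == 0) {
			return tgt;
		}
	}
	return NULL;
}
```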

Change-Id: I82792e06a819e06d9fddb5429830008653d92cd1
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465349
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-21 17:20:28 +00:00
Seth Howell
8d6d26bd29 nvmf: add a name entry to the spdk_nvmf_tgt struct
This will provide a unique identifier which can be used to provide get
and set methods within the RPCs.

Change-Id: Idd144e99e49b8d26530f60530d2e908b18fa251b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465330
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-20 19:15:04 +00:00
Seth Howell
7d6d95db3c nvmf: change the function signature of spdk_nvmf_tgt_create
This is necessary to allow the spdk_nvmf_tgt structure to evolve over
time without having to further change the target API.
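
A hedged sketch of the options-struct pattern this refers to; the struct
and field names are illustrative, not the exact SPDK definition. Passing a
pointer to an options struct lets new fields be appended later without
changing the create call's signature again:

```c
#include <stdint.h>

/* Illustrative options struct: new members can be appended over time. */
struct nvmf_target_opts {
	char		name[256];
	uint32_t	max_subsystems;
};

struct nvmf_tgt;

/* Callers fill in an opts struct instead of passing individual arguments.
 * Example usage (hypothetical):
 *   struct nvmf_target_opts opts = { .name = "tgt0", .max_subsystems = 1024 };
 *   struct nvmf_tgt *tgt = nvmf_tgt_create(&opts);
 */
struct nvmf_tgt *nvmf_tgt_create(struct nvmf_target_opts *opts);
```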

Change-Id: Ib0f0f9b1f190913feff0229c96df4e84b1bf35f7
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465363
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
2019-08-20 19:15:04 +00:00
Seth Howell
0ac5050624 lib/nvmf: add a global list of targets
As part of moving the nvmf rpc code to the library, we will need to make
it more inclusive of use cases outside of the example spdk nvmf_tgt
application. That application only supports a single nvmf target
structure. As such, many of the RPCs have this assumption built into
them.
In order to enable the multi-target use case, we need a way to translate
between user-supplied RPC parameters and actual target objects in the
library.

Change-Id: I5d3745afe9c2ca1c33f6e1a1bcc2b8bb3196ccd6
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465329
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
2019-08-20 19:15:04 +00:00
Ben Walker
1e82ec0640 nvmf: Delay sending AER until subsystem resumes
Change-Id: Id5152a793c6b530cb1419c559ac3ed71ee042037
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/464614
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-14 21:24:27 +00:00
Ziye Yang
1917d3b413 nvmf: move the assigment of pdu outside the switch
Purpose: Reduce duplicated code.

One minor fix: add an empty line between two functions.

Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I12c9ddba6526c094cd2bd945e14f9d8bf5209adf
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/464504
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-09 07:37:12 +00:00