The transport API now allows for multiple transport
objects, so allocate them on demand instead of using
a single global.
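As a rough sketch of the new pattern (names illustrative, not the
exact SPDK symbols):

    /* before: one static, global transport, e.g. g_rdma */
    /* after: allocate a transport object on demand */
    struct spdk_nvmf_rdma_transport *rtransport;

    rtransport = calloc(1, sizeof(*rtransport));
    if (rtransport == NULL) {
            return NULL;
    }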
Change-Id: I5dd35f287fe7312e6185c75ae75e2488ec8cc78e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/371990
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For now, this is a name change of controller to poll_group
in the transport layer. Over time, the poll_group will
become a more general concept than a controller, allowing
for qpairs to be spread across cores.
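A hedged sketch of the direction (structure and field names are
illustrative only):

    /* A poll group owns a set of qpairs; unlike a controller, it is
     * not tied to a single host and can eventually span cores. */
    struct spdk_nvmf_poll_group {
            struct spdk_nvmf_transport      *transport;
            TAILQ_HEAD(, spdk_nvmf_qpair)   qpairs;
    };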
Change-Id: Ia92a2934541ad336f462f73175d53aaaf021f67b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/371775
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I10857de922d5a17131910aca92c73995ea6ab8f6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/373828
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
There are now three simple functions on the transport:
listen(transport, trid)
stop_listen(transport, trid)
accept(transport)
This makes the code quite a bit simpler.
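Sketched as a function table (signatures illustrative):

    int  (*listen)(struct spdk_nvmf_transport *transport,
                   const struct spdk_nvme_transport_id *trid);
    int  (*stop_listen)(struct spdk_nvmf_transport *transport,
                        const struct spdk_nvme_transport_id *trid);
    void (*accept)(struct spdk_nvmf_transport *transport);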
Change-Id: I550343a084b5c095240703952c8c07ae535b5c16
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/371774
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Create one transport per nvmf target. Today, there is just
one global nvmf target, but this paves the way for multiple.
Change-Id: Iaa1f8c5e7b3c1e87621ef2a636c68c2dd8fd929e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/371748
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Match the terminology used in the NVMe-oF specification,
which is queue pair. For the RDMA transport, this maps to
an RDMA queue pair, but may map to other things for other
transports. It is still logically a "connection" in
the networking sense.
Change-Id: Ic43a5398e63ac85c93a8e0417e4b0d2905bf2dfc
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/371747
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is just a rename - the functionality hasn't changed.
Use the same terminology as the specification (which is controller)
so those familiar with the specification can more easily
approach the code base.
This is still conceptually equivalent to a "session" in the
networking sense.
Change-Id: I388b56df62d19560224c4adc2a03c71eae6fed0d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/371746
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
The NVMe-oF target was written before we defined
spdk_nvme_transport_id. Now that we have it, go back
and replace all of the locations where we individually
tracked traddr, trsvcid, trtype, etc. and use a trid.
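For example (sketch; the address values are placeholders):

    struct spdk_nvme_transport_id trid = {};

    trid.trtype = SPDK_NVME_TRANSPORT_RDMA;
    snprintf(trid.traddr, sizeof(trid.traddr), "192.168.0.1");
    snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");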
Change-Id: I84334a12c7581f414c1e84680f122fe885a3b9dd
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/370744
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Change-Id: Ie05f58e677107072fea6cc7702bab47a077cb595
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/370743
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
This allows the user to optionally specify the address family for
construct_nvmf_subsystem (default is IPv4).
Note that the RDMA transport still only supports IPv4 because of the way
it binds to the listen address; this will be fixed in a separate patch.
Change-Id: I534ed75f6f81e53559d1bebcd2f34f1a2b210a97
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/367429
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
- rename spdk_malloc_socket to spdk_dma_malloc_socket
- rename spdk_malloc to spdk_dma_malloc
- rename spdk_zmalloc to spdk_dma_zmalloc
- rename spdk_realloc to spdk_dma_realloc
- rename spdk_free to spdk_dma_free
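For example, a former call site changes like this:

    /* before */
    buf = spdk_zmalloc(4096, 4096, NULL);
    /* after */
    buf = spdk_dma_zmalloc(4096, 4096, NULL);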
Change-Id: I52a11b7a4243281f9c56f503e826fd7c4a1fd883
Signed-off-by: John Meneghini <johnm@netapp.com>
Reviewed-on: https://review.gerrithub.io/362604
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Replace it with a check of the returned req
via the spdk_unlikely macro.
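A sketch of the replacement check (the helper name is illustrative):

    req = get_req_from_pool(rqpair);        /* illustrative helper */
    if (spdk_unlikely(req == NULL)) {
            return -1;
    }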
Change-Id: I1202b3955af9a68496d8ced7cf66c20cf26f7fff
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
When data needs to be transferred from the controller
to the host, do a single ibv_post_send containing
both the data and the completion.
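A hedged sketch of the chained work request (verbs API; sg_list,
num_sge, wr_id and other fields omitted):

    struct ibv_send_wr data_wr = {}, rsp_wr = {}, *bad_wr = NULL;

    data_wr.opcode = IBV_WR_RDMA_WRITE;   /* data, controller to host */
    data_wr.next   = &rsp_wr;             /* chain the response send */
    rsp_wr.opcode  = IBV_WR_SEND;         /* completion capsule */

    rc = ibv_post_send(qp, &data_wr, &bad_wr);  /* one post, one doorbell */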
Change-Id: I072c545b31593e0e324c97ed700b42c6a4c358e1
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This call had been reduced to a simple wrapper
around the ibv call. Delete it.
Change-Id: I42926d123db262617119a9cff77bc0d0eb1e8f31
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These functions were only called from one place and
their functionality has been reduced to a wrapper
around the underlying ibv call. Remove them.
Change-Id: I65182012dbe6393b9d57f4191fd327bcd025a6c8
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This keeps all SGL handling in the prep_data function.
Change-Id: I9bfeed3748c1b329288350b85aa87bd604cfce4e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Now that all of the SGL mappings are static,
this function just called ibv_post_recv. Delete
the function and call ibv_post_recv directly.
Change-Id: I45216170a157709249b08c4cb0ebdb1adb906049
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
For an NVMe read, send the completion immediately
following the RDMA WRITE, without waiting for
the acknowledgement. RDMA is strictly ordered,
so the WRITE will arrive before the completion.
Change-Id: I7e4e01d7a02c2130b655ef90f5fdaec992d9361a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Except for a CONNECT capsule, always use the central data
pool for RDMA READ/WRITE operations. The in-capsule
data buffer is associated with the receive operation
while the pool data buffers are associated with the
completion, and using the in-capsule data buffer
causes a lifetime mismatch.
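A hedged sketch of the resulting buffer selection (buffer and pool
names are illustrative):

    if (cmd->opc == SPDK_NVME_OPC_FABRIC &&
        fabric_cmd->fctype == SPDK_NVMF_FABRIC_COMMAND_CONNECT) {
            req->data = in_capsule_buf;               /* lifetime: the recv */
    } else {
            req->data = spdk_mempool_get(data_pool);  /* lifetime: the completion */
    }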
Change-Id: Ieb45e521d78daa7c706078a3dd5c5a146f8dc1d6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
After commit b654e9b, this is no longer required.
Change-Id: I0cf1a7059d7fba0303aca5ad5a15afe3890b4172
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The RDMA protocol this module uses is strictly ordered,
which means messages are delivered in exactly the order
they are sent. However, we have detected a number of
cases where the acknowledgements for those messages
arrive out of order. This patch attempts to handle
that case.
Separate the data required to post a recv from the
data required to send a response. If a recv arrives
when no response object is available, queue the
recv.
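A sketch of the pairing logic (queue and field names illustrative):

    rsp = TAILQ_FIRST(&rconn->free_rsps);
    if (rsp == NULL) {
            /* no response object available; queue the recv for later */
            TAILQ_INSERT_TAIL(&rconn->pending_recvs, rdma_recv, link);
            return 0;
    }
    TAILQ_REMOVE(&rconn->free_rsps, rsp, link);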
Change-Id: I2d6f2f8636b820d0c746505e5a5e3d3442ce5ba4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This clarifies the relation between the values assigned to sg_list and
num_sge (no functional change).
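For example:

    wr.sg_list = &rdma_req->sgl[0];  /* the ibv_sge array... */
    wr.num_sge = 1;                  /* ...and the count of entries in it */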
Change-Id: I8e81d47dd97a033b17cd3b813b06e4887127146c
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The mappings are all static, so it isn't interesting
to print them out on each I/O.
Change-Id: I85301b4518d4523a7c031f6ca9ff678d91428504
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This allows pipelining of READ/WRITE with completion.
Change-Id: Ib3ab5bffb8e3e5de8cbae7a3b2fff7d9f6646d2d
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
This allows static initialization of the scatter
gather list as well as future optimizations
around pipelining commands with data.
Change-Id: I8af8f3e3425610bc720677c9bc84f163cfb6278a
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
The first version of the Linux kernel NVMe-oF initiator had
a bug when reporting queue size where it was off by 1. We
had a workaround to deal with this. Now that the kernel
has been fixed, remove the workaround.
Change-Id: I0ad4a5c6db68cfa9683ab93e6f5210772c713b55
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The sq_head handling is already done in
spdk_nvmf_rdma_request_send_completion, so it does not need to be
done again.
Change-Id: I527ff8adfcbdf43ac79794cb5c7777c0e8ef6973
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
It is no longer used now that AER handling holds the request until it is
triggered.
Change-Id: I71a75e86f82bc06f677cf26defa701e60b9aa1bd
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The wr_id should never be NULL - it will always correspond to a request
we previously posted. Convert the check to an assert() so we notice if
this ever happens (which would indicate a programming error somewhere
else).
While we're here, add a more robust check to make sure the request is
actually in the correct array of requests for the connection being
polled (also in an assert, since this should never fail in normal
execution).
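A sketch of the two checks (array and field names illustrative):

    req = (struct spdk_nvmf_rdma_request *)wc.wr_id;
    assert(req != NULL);   /* wr_id always comes from a posted request */
    assert(req >= &rconn->reqs[0] &&
           req <  &rconn->reqs[rconn->max_queue_depth]);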
Change-Id: I855763d7d827fb8cf00a775c7bc2ccb579db8d0f
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The kernel NVMe-oF host keeps trying to connect to the NVMe-oF
target when no nvme disconnect command has been issued. This
exposes an rdma_create_qp failure: we call rdma_listen too early,
so the event retrieved from rdma_get_cm_event arrives too late.
This patch fixes the issue.
Change-Id: I153a8aea7420a86a236301dad9bd54af97f60865
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Completing an spdk_nvmf_request without going through
spdk_nvmf_request_complete leaves some fields of the completion
queue entry set incorrectly. Calling spdk_nvmf_request_complete
fixes the problem.
Whether an NVMe command involves a data transfer cannot be
determined from the command opcode alone. For the Set Features
command, some features do not require a data transfer.
Change spdk_nvmf_request_prep_data to fix this issue.
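A hedged sketch of the adjusted logic (feature_has_data() is a
hypothetical helper for the per-feature check):

    xfer = spdk_nvme_opc_get_data_transfer(cmd->opc);
    if (cmd->opc == SPDK_NVME_OPC_SET_FEATURES && !feature_has_data(cmd)) {
            xfer = SPDK_NVME_DATA_NONE;  /* this feature carries no data */
    }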
Reason: the four fields of struct ibv_recv_wr are already
set in the following four lines.
Change-Id: I97437ee2e4c6e944154813bb48b1740b182220df
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
These were repeated a few different places, so pull them into a common
header file.
Change-Id: Id807fa2cfec0de2e0363aeb081510fb801781985
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The symptom is that we cannot shut down the nvmf tgt.
The solution is to adjust the shutdown order of the nvmf tgt
subsystem and the RDMA transport layer.
Change-Id: Ie39657370b1574960e0ee7cf604cc5872db0bed3
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
When the same subsystem is frequently added and deleted,
the add eventually fails. For example:
1 Add subsystem A
2 Delete subsystem A
3 Add subsystem A (fails at this step)
The reason is that we did not correctly free the listener
resources of the subsystem; this patch fixes that.
Change-Id: I6765a306a3f10c9a0f38c95dbba12e2a4073e705
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
This is not actually optional - it contains required
information for setting up the connection.
Change-Id: I21136de12794a0f4f5c14c5d3e2e3f2306c5c102
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The SPDK_TRACELOG macro depends on a CONFIG setting (DEBUG), so it
should not be part of the public API.
Create a new include/spdk_internal directory for headers that should
only be used within SPDK, not exported for public use.
Change-Id: I39b90ce57da3270e735ba32210c4b3a3468c460b
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Print an error message when the kernel RNIC driver has not loaded
properly, and fix the cleanup logic for the error exit path.
Change-Id: I97a45e73d830280b994818f3defc491bc2b6b020
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
If the RDMA transport failed to initialize, g_rdma.event_channel may be
NULL.
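So the cleanup path needs a guard along these lines (sketch):

    if (g_rdma.event_channel != NULL) {
            rdma_destroy_event_channel(g_rdma.event_channel);
    }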
Change-Id: I4510ee5893389f244f0fbaa1cd4a182868939b25
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
For iWARP devices, buffers that are intended to be the
target of an RDMA read initiated by the target must additionally
have IBV_ACCESS_REMOTE_WRITE permission. This is because iWARP's
RDMA read path essentially requests the remote side to do
an RDMA write.
This is unfortunate because there is no way to differentiate between
memory that the remote side can do an RDMA write to and memory
that will only be the target of RDMA reads initiated by the
target. There is nothing we can do about this serious deficiency in
the specification, however, so we have to live with it.
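A sketch of the registration this implies (standard verbs access flags):

    mr = ibv_reg_mr(pd, buf, len,
                    IBV_ACCESS_LOCAL_WRITE |
                    IBV_ACCESS_REMOTE_READ |
                    IBV_ACCESS_REMOTE_WRITE);  /* iWARP needs this for reads */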
Change-Id: I3d2f2814ce0cb1df4e5347296ef371db4d16be21
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
I believe NICs are required to report this, but handle
the case where it isn't reported anyway.
Change-Id: I38d10c3590d1df8bb902ab312af0f9e01b9e5032
Signed-off-by: Ben Walker <benjamin.walker@intel.com>