This variable really indicates when a qpair is
no longer connected. So NVME_QPAIR_DISCONNECTED is
actually much more accurate.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ia480d94f795bb0d8f5b4eff9f2857d6fe8ea1b34
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1850
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The rdma_disconnect call triggers an RDMA_CM_EVENT_DISCONNECTED
message on the target side. The hope is that the target side will
reply with the same message in a reasonable amount of time. If the
target doesn't have that mechanism implemented, print an error message
and continue with the process.
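A minimal sketch of the flow described above, using plain librdmacm calls
(the helper name, the timeout handling, and the non-blocking event channel
are assumptions for illustration, not the SPDK implementation):

#include <stdio.h>
#include <time.h>
#include <rdma/rdma_cma.h>

/* Call rdma_disconnect() and wait a bounded time for the peer to echo
 * RDMA_CM_EVENT_DISCONNECTED; if it never arrives, warn and continue. */
static int
disconnect_with_timeout(struct rdma_cm_id *cm_id,
			struct rdma_event_channel *ch, double timeout_sec)
{
	struct rdma_cm_event *event;
	time_t start = time(NULL);

	if (rdma_disconnect(cm_id) != 0) {
		perror("rdma_disconnect");
		return -1;
	}

	while (difftime(time(NULL), start) < timeout_sec) {
		/* assumes ch was made non-blocking, so this returns -1/EAGAIN
		 * when no event is pending */
		if (rdma_get_cm_event(ch, &event) == 0) {
			int type = event->event;

			rdma_ack_cm_event(event);
			if (type == RDMA_CM_EVENT_DISCONNECTED) {
				return 0;
			}
		}
	}

	fprintf(stderr, "timed out waiting for RDMA_CM_EVENT_DISCONNECTED, continuing\n");
	return 0;
}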
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I164a3538714fa3adfc306ea0c88220ea710e7c39
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1879
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
That is done to make sure that the scenario described in GitHub
issue #1292 won't happen.
Change-Id: Ie2ad001da701e25ef984ae57da850fb84d51b734
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1771
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
In some situations we may get a completion of RDMA_RECV before
completion of RDMA_SEND, and this can lead to the bug described in #1292.
To avoid such situations we must complete the nvme_request only when
we have received both the RDMA_RECV and RDMA_SEND completions.
Add a new field to spdk_nvme_rdma_req to store the response index -
it is used to complete the nvme request when RDMA_RECV completes
before RDMA_SEND.
Repost RDMA_RECV when both RDMA_SEND and RDMA_RECV are completed
Side changes: change type of spdk_nvme_rdma_req::id to uint16_t,
repack struct nvme_rdma_qpair
Fixes #1292
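A rough sketch of the ordering logic described above; the structure and
function names here are invented for illustration and do not match the
SPDK code:

#include <stdbool.h>
#include <stdint.h>

struct example_rdma_req {
	uint16_t id;          /* request index (uint16_t, as noted above) */
	uint16_t rsp_idx;     /* response slot recorded from the RDMA_RECV completion */
	bool send_completed;  /* RDMA_SEND completion seen */
	bool recv_completed;  /* RDMA_RECV completion seen */
};

/* Called from the CQ polling loop for each completion. Returns true only
 * when both completions have arrived, i.e. when it is safe to complete the
 * nvme_request and repost the RDMA_RECV. */
static bool
example_mark_completion(struct example_rdma_req *req, bool is_send, uint16_t rsp_idx)
{
	if (is_send) {
		req->send_completed = true;
	} else {
		req->recv_completed = true;
		req->rsp_idx = rsp_idx;
	}

	return req->send_completed && req->recv_completed;
}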
Change-Id: Ie51fbbba425acf37c306c5af031479bc9de08955
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1770
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This API will allow us to simplify the polling mechanism for qpairs on a single
thread. It will also pave the way for transport-specific aggregation of
qpair polling to increase performance.
The generic implementation is included. The transport-specific calls
have yet to be implemented.
Change-Id: If07b4170b2be61e4690847c993ec3bde9560b0f0
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/579
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Add a function, nvme_rdma_get_key, to get either the lkey
or the rkey, and use it in the request building functions.
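The idea can be sketched in a few lines (hypothetical helper, not the
exact SPDK function):

#include <stdint.h>
#include <infiniband/verbs.h>

enum example_key_type { EXAMPLE_KEY_LOCAL, EXAMPLE_KEY_REMOTE };

/* lkey is used for local SGEs; rkey goes into the NVMe-oF keyed SGL so
 * the target can access the buffer remotely. */
static uint32_t
example_get_key(const struct ibv_mr *mr, enum example_key_type type)
{
	return type == EXAMPLE_KEY_LOCAL ? mr->lkey : mr->rkey;
}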
Change-Id: Ic9e3429e07a10b2dddc133b553e437359532401d
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1462
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Cache payload type and in-capsule data transfer support
Change-Id: Id40a6e86d1f29235ca3e0189d7fbcf19baa30ffe
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1461
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Here controller destruction is consolidated into one function, and we can
remove the duplicated code using goto.
This saves several lines of code.
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: Ibf3cb9fe2ea4bfc65d42603a7b13aaf575854580
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1638
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If nvme_rdma_qpair_submit_sends() returns -ENOMEM,
nvme_rdma_qpair_process_completions() returns immediately.
In this case, nvme_rdma_qpair_process_completions() does not
poll the CQ.
However, nvme_rdma_qpair_process_completions() can poll the CQ even
when there is no free slot in the SQ.
Hence move nvme_rdma_qpair_submit_sends() and
nvme_rdma_qpair_submit_recvs() after the loop that polls the CQ.
nvme_rdma_qpair_submit_sends() and nvme_rdma_qpair_submit_recvs()
output an error log themselves, so checking their return codes is not
necessary and is removed in this patch.
This fixes part of GitHub issue #1271.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Icf22879c69c3f84e6b1d91dc061b6f44237eedd1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1342
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This gives us a more standard path in the create_io_qpair path. Eventually
this will allow us to bring the connection commands out to the generic layer
in alloc_io_qpair. Then we can split the calls to create and connect at the
generic level, making it possible to add rdma qpairs to a poll group in a
meaningful way.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ib1b125f834c3c39a2b5050ff4a9bc4a053b95c99
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1119
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This allows it to fit on three cachelines instead of four.
Change-Id: I2510b50ffcefb77fa570e738b2c6588749f30a00
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1143
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Align rdma and tcp to respect opts. Reduce the default number of entries
for the admin queue as a memory optimization.
The Linux driver by default creates the admin queue with a depth of 32;
there is no good reason to enlarge that queue by default within the SPDK
NVMe driver.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I97ceea8f350c52313021a63190fb0980f604c48e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1110
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This gets rid of some duplicate lines of code.
Change-Id: I24d4864921f6030672f3640b33f88f37a9e8175a
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1136
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Add a transport_ack_timeout parameter to the nvme controller opts.
This parameter allows configuring the RDMA ACK timeout according
to the formula 4.096 * 2^(transport_ack_timeout) usec.
The parameter should be in the range 0..31, where 0 means use the
driver-specific default value.
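A small worked example of the formula (a standalone helper written for
illustration, not part of the SPDK API):

#include <stdint.h>
#include <stdio.h>

static double
ack_timeout_usec(uint8_t transport_ack_timeout)
{
	/* 4.096 usec * 2^transport_ack_timeout */
	return 4.096 * (double)(1ULL << transport_ack_timeout);
}

int
main(void)
{
	/* e.g. 14 -> ~67 ms, 22 -> ~17 s */
	printf("%.3f usec\n", ack_timeout_usec(14));
	return 0;
}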
Change-Id: I0c8a5a636aa9d816bda5c1ba58f56a00a585b060
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/502
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Users may only set the transport type, but for the actual probe
process, the trstring field is mandatory, so set the trstring
based on the transport type first. Also remove the unnecessary
spdk_nvme_trid_populate_transport() call from each transport
module.
Fix #1228.
Change-Id: I2378065945cf725df4b1997293a737c101969e69
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1001
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This avoids calculating the ioccsz bytes on each request
and removes access to "cold" ctrlr structures in the data path.
Add a unit test to check the validity of the calculation.
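For reference, a sketch of the kind of value that can be cached once at
controller init (this follows the general NVMe-oF relationship that IOCCSZ
is reported in 16-byte units and covers the 64-byte SQE; it assumes ICDOFF
is zero and is not a quote of the SPDK code):

#include <stdint.h>

static uint32_t
in_capsule_data_bytes(uint32_t ioccsz)
{
	const uint32_t sqe_size = 64;          /* submission queue entry size */
	uint32_t capsule_bytes = ioccsz * 16;  /* IOCCSZ is in 16-byte units */

	return capsule_bytes > sqe_size ? capsule_bytes - sqe_size : 0;
}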
Change-Id: I55ceff99eb924156155e69a20f587a4f92b83f0b
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/519
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If the transport doesn't define one, don't call it.
Change-Id: I8b83132f9fc0accbd4faa8fa0fc17a6bd11e543e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/783
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
With the transport plugin system, this is no longer necessary.
Change-Id: Ia73878599658db84150603223ac811cb5a34ffba
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/713
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This allows configuring the desired retry_count instead of using a
hard-coded value.
Change-Id: I25c9601997ace916dfb735469a4b443c0cd2a96b
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482499
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In the event that we have more than one event outstanding for a qpair
at the time of destruction, we need to ack all of the events. Luckily,
the synchronization is already there in the form of the ctrlr lock.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ib297598f2e28d9b9bd83e904f950795a61fa883a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479171
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Not inlining all host-to-controller operations breaks the target within
the context of fused commands. This issue was discovered when enabling
the compare-and-write fused command. Only the write command buffer was
being inlined, which caused the write to jump ahead of the compare in the
transport-specific state machine on the target side before reaching our
fused command checks in the generic code.
Change-Id: I9e52ae6160e01ffd36d20429ffc8459491c729ef
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482001
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Now that we have a more flexible function table strategy for
transports, we can get rid of some of the wrapping we were doing
to match the macro definitions exactly.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I12c868babfa7bd27dc8ed5e86d35e179f8ec984f
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478874
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The trtype should be stored as both an enum and string. This is intended to
help pave the way for pluggable NVMe-oF transports.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I6af658d7a17c405e191ff401b80ab704c65497e7
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478744
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
nvme_rdma_register_rsps returned ENOMEM for all failure cases, but not
all of them are directly related to a shortage of memory. Every point of
failure now sets a relevant return code.
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ia340f6c6fd3a68d8c34acfefc2c9224ffcdcad3f
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/477302
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
RDMA work requests generated between two calls to the NVMe RDMA QP
processing function are chained into a list and then posted together
to a queue in the next call to the processing function.
Batching improves performance in scenarios with deep queues and heavy
load on the CPU, but it may increase latency under lighter loads.
Batching is configurable via RPC methods and the configuration file.
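A simplified illustration of the batching (the batch structure is invented
for illustration, not taken from the SPDK code): work requests accumulated
between polls are linked through the verbs ->next pointer and posted with
a single ibv_post_send() call, i.e. one doorbell for the whole chain.

#include <stddef.h>
#include <infiniband/verbs.h>

struct wr_batch {
	struct ibv_send_wr *first;
	struct ibv_send_wr *last;
};

static void
batch_add(struct wr_batch *batch, struct ibv_send_wr *wr)
{
	wr->next = NULL;
	if (batch->last != NULL) {
		batch->last->next = wr;  /* chain onto the pending list */
	} else {
		batch->first = wr;
	}
	batch->last = wr;
}

static int
batch_flush(struct wr_batch *batch, struct ibv_qp *qp)
{
	struct ibv_send_wr *bad_wr = NULL;
	int rc = 0;

	if (batch->first != NULL) {
		rc = ibv_post_send(qp, batch->first, &bad_wr);
		batch->first = batch->last = NULL;
	}
	return rc;
}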
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I600bce78427eb7e8ed819bbbe523ad318e2da32b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/462585
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
In some real data center deployments, 100ms is not enough. Increase
the timeout to 1 second.
Change-Id: I8195a1c1e987b7eff2d8541509f79381be32ed4b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478638
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: yidong0635 <dongx.yi@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
To address the error message:
SPDK_ERRLOG("Unable to resubmit as many requests as we completed.\n");
Reason: The "reaped" variable is used to caculate the free slots
of rdma_reqs after calling the nvme_transport_qpair_process_completions.
And we should correctly caculate the free slots when the rdma_req is
really put.
If we caculate the slots more than we will have, we will trigger
the error print described above.
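A hypothetical sketch of the accounting (names invented, not the SPDK
code): a slot is counted as reusable only where the rdma_req is actually
returned to the free list, so the caller never tries to resubmit more
requests than there are free slots.

#include <stdint.h>
#include <sys/queue.h>

struct rdma_req_slot {
	TAILQ_ENTRY(rdma_req_slot) link;
};

TAILQ_HEAD(free_list, rdma_req_slot);

static void
put_rdma_req(struct free_list *free_reqs, struct rdma_req_slot *req, uint32_t *reaped)
{
	TAILQ_INSERT_HEAD(free_reqs, req, link);
	(*reaped)++;  /* only now has a free slot really become available */
}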
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I269bdb63646eee6444d340b904882736c4cbca36
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/477913
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: qun wan <qun.wan@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
nvme_qpair_get_state fits more closely with the semantics in other
modules.
Change-Id: I6ea8e02abe27253d9b4d779a43ac1963be56356a
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/476920
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
The qpair state transport_qpair_is_failed is actually equivalent to
NVME_QPAIR_IS_CONNECTED in the qpair state machine.
There are a couple of places where we check against
transport_qpair_is_failed and then immediately check to see if we are in
the connected state. If we are failed, or we are not in the connected
state, we return the same value to the calling function.
Since the checks for transport_qpair_is_failed are not necessary, they
can be removed. As a result, there is no need to keep track of it and it
can be removed from the qpair structure.
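The simplification can be illustrated with a before/after pair (the
struct, state enum, and return values here are placeholders, not the SPDK
definitions):

#include <errno.h>
#include <stdbool.h>

enum example_state { EXAMPLE_QPAIR_CONNECTED, EXAMPLE_QPAIR_OTHER };

struct example_qpair {
	bool transport_qpair_is_failed;
	enum example_state state;
};

/* before: both conditions lead to the same early return */
static int
check_before(const struct example_qpair *q)
{
	if (q->transport_qpair_is_failed) {
		return -ENXIO;
	}
	if (q->state != EXAMPLE_QPAIR_CONNECTED) {
		return -ENXIO;
	}
	return 0;
}

/* after: the state machine alone carries the same information */
static int
check_after(const struct example_qpair *q)
{
	return q->state != EXAMPLE_QPAIR_CONNECTED ? -ENXIO : 0;
}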
Change-Id: I4aef5d20eb267bfd6118e5d1d088df05574d9ffd
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/475802
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
If the initiator dies without disconnecting a qpair, the target can
possibly retain the state of the connection. In this case, it will
inform us that the connection is stale, and we need to try again.
Change-Id: I4d349c634aee59ce9ea4af795b07dd8649db56b3
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473063
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This is the first step in properly reconnecting after a hard power off
event.
Change-Id: I9739bffacd66ec6d9f8f1d376bf42291c84f90f2
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473061
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This step is going to become more involved, so it's best to keep it in a
separate function entirely.
Change-Id: Iefa9860420edf28e858c4ed8aa932985c686cfd9
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473060
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If we disconnect qpairs without taking the lock, we run the risk of
trying to double free qpair resources before they have been marked as
NULL.
For example, polling and calling nvme_rdma_qpair_disconnect on one
thread while doing an nvme_ctrlr_reset on another thread.
nvme_ctrlr_reset will call down to
nvme_rdma_qpair_disconnect on the same qpair and without any locking it
can result in trying to destroy the qpair resources multiple times.
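A sketch of the race and the fix (a plain pthread mutex stands in for the
ctrlr lock, and the structure is invented for illustration): without
serialization, two threads can both see non-NULL resources and free them
twice.

#include <pthread.h>
#include <stdlib.h>

struct example_qpair_res {
	pthread_mutex_t lock;
	void *rdma_resources;  /* stands in for the cm_id, CQ, etc. */
};

static void
example_disconnect(struct example_qpair_res *q)
{
	pthread_mutex_lock(&q->lock);
	if (q->rdma_resources != NULL) {
		free(q->rdma_resources);   /* destroy once ...               */
		q->rdma_resources = NULL;  /* ... and mark it under the lock */
	}
	pthread_mutex_unlock(&q->lock);
}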
Change-Id: I9eef6f2f92961ef8e3f8ece0e4a3d54f3434cff8
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/472413
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Also, adds a field to the generic qpair for future use in other
transports.
Change-Id: Ie5a66e7f5ebfec1131155fc07e3c671be814fb9b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471414
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The way these two functions were separated previously represented a
pretty serious bug when doing a controller reset.
If there were any outstanding requests in the rqpair, they would get
overwritten during the call to nvme_rdma_qpair_register_reqs and the
application would never get a completion for the higher level requests.
The only thing that we need to do in this function is assign the proper
lkeys.
Change-Id: I304c70646daf9b563cd00badba7141e5e8653aad
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471659
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This function is identical between the two transports.
Change-Id: If50b781259f224eb2c21de7da14564e6ce487650
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471778
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This wasn't being done in the previous case which meant that I/O qpairs
were not being moved to the connecting state when connecting for the
first time. However, to prepare the way for a coherent state machine for
nvme qpairs, we need to ensure that all qpairs go through the same
states.
Change-Id: I3cfe799a003acd926b24c107ab1461a96239c1bb
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471753
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Leaving these on the stack outstanding list can cause unnecessary
buildup. If we fail to post the request to ibv, then the upper layer
request will be freed immediately for reuse, but we will keep that
request in the outstanding queue at the RDMA layer.
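A simplified sketch of the cleanup (hypothetical names): if the post to
ibv fails, the request is taken back off the driver's outstanding list so
it does not linger there after the upper layer frees and reuses it.

#include <sys/queue.h>
#include <infiniband/verbs.h>

struct outstanding_req {
	struct ibv_send_wr send_wr;
	TAILQ_ENTRY(outstanding_req) link;
};

TAILQ_HEAD(outstanding_list, outstanding_req);

static int
post_request(struct ibv_qp *qp, struct outstanding_list *outstanding,
	     struct outstanding_req *req)
{
	struct ibv_send_wr *bad_wr = NULL;
	int rc;

	TAILQ_INSERT_TAIL(outstanding, req, link);
	rc = ibv_post_send(qp, &req->send_wr, &bad_wr);
	if (rc != 0) {
		/* the upper layer frees the request immediately, so drop our reference */
		TAILQ_REMOVE(outstanding, req, link);
	}
	return rc;
}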
Change-Id: Ib422dc9fcb50344ce7c01749f3e20ea9310fd5cb
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470255
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We were already passing up from each transport the number of completions
done during the transport specific call. So just use that return code
and batch all of the submissions together at one time in the generic
code.
This change and subsequent moves of code from the transport layer to the
generic layer are aimed at making reset handling at the generic NVMe
layer simpler.
Change-Id: I028aea86d76352363ffffe661deec2215bc9c450
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469757
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The tailq and the requests all belong to the generic layer, so we might as
well put the queueing code there for better encapsulation.
Change-Id: Id5f08f798121b50a21044cfc61856999c50ca227
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469758
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Previously we would just sit forever, preventing us from properly
attempting reconnects and timing out.
Change-Id: Id7386ab95cf75fd9ac972b44afa2719aad412f49
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469021
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This enables us to create a single file descriptor and a single event
channel to poll for completions. With that accomplished, we can easily
poll for events on the admin qpair each time we check it for
completions.
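One way the shared channel can be sketched with plain librdmacm calls (the
use of rdma_migrate_id and the helper names are assumptions for
illustration, not necessarily how the SPDK code does it):

#include <fcntl.h>
#include <rdma/rdma_cma.h>

/* Create one event channel for the whole controller and make it
 * non-blocking, so the admin poll path can check it opportunistically. */
static struct rdma_event_channel *
create_shared_channel(void)
{
	struct rdma_event_channel *ch = rdma_create_event_channel();

	if (ch != NULL) {
		fcntl(ch->fd, F_SETFL, fcntl(ch->fd, F_GETFL) | O_NONBLOCK);
	}
	return ch;
}

/* Move a qpair's cm_id onto the shared channel so its events show up on
 * the single file descriptor. */
static int
attach_qpair(struct rdma_cm_id *cm_id, struct rdma_event_channel *shared)
{
	return rdma_migrate_id(cm_id, shared);
}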
Change-Id: I8b901252510744a956bef12594d1e045715e002e
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467549
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>