nvme_qpair_get_state fits more closely with the semantics in other
modules.
Change-Id: I6ea8e02abe27253d9b4d779a43ac1963be56356a
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/476920
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
The qpair state transport_qpair_is_failed is actually equivalent to
NVME_QPAIR_IS_CONNECTED in the qpair state machine.
There are a couple of places where we check against
transport_qpair_is_failed and then immediately check whether we are in
the connected state. If we are failed, or if we are not in the connected
state, we return the same value to the calling function.
Since the checks for transport_qpair_is_failed are not necessary, they
can be removed. As a result, there is no need to keep track of it and it
can be removed from the qpair structure.
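A minimal sketch of the redundant pattern, using stand-in types rather
than the actual SPDK structures:

    #include <errno.h>
    #include <stdbool.h>

    enum nvme_qpair_state { NVME_QPAIR_DISCONNECTED, NVME_QPAIR_CONNECTED };

    struct nvme_qpair {
        enum nvme_qpair_state state;
        bool transport_qpair_is_failed; /* the flag this patch removes */
    };

    static int process_completions(struct nvme_qpair *qpair)
    {
        /* Both checks return the same value to the caller, so the flag
         * check is redundant and the flag itself can be dropped. */
        if (qpair->transport_qpair_is_failed) {
            return -ENXIO;
        }
        if (qpair->state != NVME_QPAIR_CONNECTED) {
            return -ENXIO;
        }
        return 0; /* connected: poll the transport */
    }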
Change-Id: I4aef5d20eb267bfd6118e5d1d088df05574d9ffd
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/475802
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
If the initiator dies without disconnecting a qpair, the target can
possibly retain the state of the connection. In this case, it will
inform us that the connection is stale, and we need to try again.
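A hedged sketch of the retry loop; the helper names are hypothetical,
and the real code drives this from RDMA CM events:

    #include <errno.h>

    /* Hypothetical stand-ins: assume connect reports a stale
     * connection as -EAGAIN. */
    static int connect_qpair(void) { return -EAGAIN; }
    static void disconnect_qpair(void) { }

    static int connect_with_stale_retry(int max_retries)
    {
        int i, rc;

        for (i = 0; i < max_retries; i++) {
            rc = connect_qpair();
            if (rc != -EAGAIN) {
                return rc; /* connected, or a non-stale failure */
            }
            /* The target retained state from a dead initiator and told
             * us the connection is stale: tear it down and try again. */
            disconnect_qpair();
        }
        return -EAGAIN;
    }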
Change-Id: I4d349c634aee59ce9ea4af795b07dd8649db56b3
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473063
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This is the first step in properly reconnecting after a hard power off
event.
Change-Id: I9739bffacd66ec6d9f8f1d376bf42291c84f90f2
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473061
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This step is going to become more involved, so it's best to keep it in a
separate function entirely.
Change-Id: Iefa9860420edf28e858c4ed8aa932985c686cfd9
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473060
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If we disconnect qpairs without taking the lock, we run the risk of
double-freeing qpair resources before they have been marked as NULL.
For example, one thread may be polling a qpair and call
nvme_rdma_qpair_disconnect while another thread performs an
nvme_ctrlr_reset. nvme_ctrlr_reset will call down to
nvme_rdma_qpair_disconnect on the same qpair, and without any locking
this can result in trying to destroy the qpair resources multiple times.
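A minimal sketch of the guarded teardown with stand-in types; the lock
and resource fields here stand in for the real qpair lock and RDMA
resources:

    #include <pthread.h>
    #include <stdlib.h>

    struct rqpair {
        pthread_mutex_t lock; /* stand-in for the lock mentioned above */
        void *cm_id;          /* stand-in for the RDMA resources */
    };

    static void rqpair_disconnect(struct rqpair *q)
    {
        /* A poller and a resetting thread may race here; holding the
         * lock around the NULL check makes the teardown idempotent. */
        pthread_mutex_lock(&q->lock);
        if (q->cm_id != NULL) {
            free(q->cm_id);
            q->cm_id = NULL;
        }
        pthread_mutex_unlock(&q->lock);
    }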
Change-Id: I9eef6f2f92961ef8e3f8ece0e4a3d54f3434cff8
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/472413
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Also, add a field to the generic qpair for future use in other
transports.
Change-Id: Ie5a66e7f5ebfec1131155fc07e3c671be814fb9b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471414
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The way these two functions were separated previously represented a
pretty serious bug when doing a controller reset.
If there were any outstanding requests in the rqpair, they would get
overwritten during the call to nvme_rdma_qpair_register_reqs and the
application would never get a completion for the higher level requests.
The only thing that we need to do in this function is assign the proper
lkeys.
Change-Id: I304c70646daf9b563cd00badba7141e5e8653aad
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471659
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This function is identical between the two transports.
Change-Id: If50b781259f224eb2c21de7da14564e6ce487650
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471778
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This wasn't being done in the previous case, which meant that I/O qpairs
were not being moved to the connecting state when connecting for the
first time. However, to prepare the way for a coherent state machine for
nvme qpairs, we need to ensure that all qpairs go through the same
states.
Change-Id: I3cfe799a003acd926b24c107ab1461a96239c1bb
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471753
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Leaving these requests on the RDMA layer's outstanding list can cause unnecessary
buildup. If we fail to post the request to ibv, then the upper layer
request will be freed immediately for reuse, but we will keep that
request in the outstanding queue at the RDMA layer.
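A sketch of the fix with a stand-in post function (the real code posts
work requests via ibv):

    #include <sys/queue.h>

    struct rdma_req {
        TAILQ_ENTRY(rdma_req) link;
    };

    TAILQ_HEAD(req_list, rdma_req);

    /* Hypothetical stand-in for an ibv_post_send() failure. */
    static int post_send(struct rdma_req *req) { (void)req; return -1; }

    static int submit_request(struct req_list *outstanding, struct rdma_req *req)
    {
        int rc;

        TAILQ_INSERT_TAIL(outstanding, req, link);
        rc = post_send(req);
        if (rc != 0) {
            /* The upper-layer request is freed for reuse immediately,
             * so the RDMA-level request must not linger on this list. */
            TAILQ_REMOVE(outstanding, req, link);
        }
        return rc;
    }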
Change-Id: Ib422dc9fcb50344ce7c01749f3e20ea9310fd5cb
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470255
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Each transport was already returning the number of completions
processed during the transport-specific call. So just use that return
code and batch all of the submissions together at one time in the
generic code.
This change and subsequent moves of code from the transport layer to the
generic layer are aimed at making reset handling at the generic NVMe
layer simpler.
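A hedged sketch of the generic-layer flow; the function names are
illustrative, not the actual SPDK symbols:

    #include <stdint.h>

    struct nvme_qpair;

    /* Illustrative prototypes for the two layers. */
    int32_t transport_process_completions(struct nvme_qpair *qpair, uint32_t max);
    void resubmit_queued_requests(struct nvme_qpair *qpair, uint32_t count);

    int32_t generic_process_completions(struct nvme_qpair *qpair, uint32_t max)
    {
        int32_t num = transport_process_completions(qpair, max);

        /* Each completion frees transport resources, so up to `num`
         * queued requests can be resubmitted in a single batch here. */
        if (num > 0) {
            resubmit_queued_requests(qpair, (uint32_t)num);
        }
        return num;
    }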
Change-Id: I028aea86d76352363ffffe661deec2215bc9c450
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469757
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The tailq and the requests all belong to the generic layer, so we might as
well put the queueing code there for better encapsulation.
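A minimal sketch of the generic queueing path, with stand-in types:

    #include <sys/queue.h>

    struct nvme_request {
        STAILQ_ENTRY(nvme_request) stailq;
    };

    struct nvme_qpair {
        STAILQ_HEAD(, nvme_request) queued_req;
    };

    /* Called by the generic layer when the transport is temporarily out
     * of resources; the transport no longer keeps its own queue. */
    static void queue_request(struct nvme_qpair *qpair, struct nvme_request *req)
    {
        STAILQ_INSERT_TAIL(&qpair->queued_req, req, stailq);
    }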
Change-Id: Id5f08f798121b50a21044cfc61856999c50ca227
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469758
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Previously we would just sit forever, preventing us from properly
attempting reconnects and timing out.
Change-Id: Id7386ab95cf75fd9ac972b44afa2719aad412f49
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469021
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This enables us to create a single file descriptor and a single event
channel to poll for completions. With that accomplished, we can easily
poll for events on the admin qpair each time we check it for
completions.
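A hedged sketch of the polling arrangement using the real rdma_cm calls
(error handling simplified):

    #include <fcntl.h>
    #include <rdma/rdma_cma.h>

    /* One event channel for all qpairs; make its fd non-blocking so the
     * admin qpair's completion path can poll it opportunistically. */
    static struct rdma_event_channel *create_polled_channel(void)
    {
        struct rdma_event_channel *ch = rdma_create_event_channel();

        if (ch != NULL) {
            fcntl(ch->fd, F_SETFL, O_NONBLOCK);
        }
        return ch;
    }

    static void poll_cm_events(struct rdma_event_channel *ch)
    {
        struct rdma_cm_event *event;

        /* With a non-blocking fd this returns -1/EAGAIN when idle. */
        while (rdma_get_cm_event(ch, &event) == 0) {
            /* ... dispatch on event->event ... */
            rdma_ack_cm_event(event);
        }
    }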
Change-Id: I8b901252510744a956bef12594d1e045715e002e
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467549
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This prevents us from failing a reset and then trying to double-put the
rqpair->cq, which ends up causing segfaults.
Change-Id: If3e14a3d039b4b19cc587a7482157f4b23f8ee32
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469609
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
By splitting all cm_event handling into a single function, we can create
a single point of contact for cm_events, whether we want to process them
synchronously or asynchronously.
Change-Id: I053a850358605115362f424de55e66806a769320
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467546
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This is paving the way for additional changes to enable polling for
cm_events in the initiator.
For now, just present the same blocking API on top of the now polled
file descriptor. Later, we will change this API to be more useful.
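A sketch of presenting the old blocking behavior on top of the polled
descriptor, using poll(2):

    #include <poll.h>
    #include <rdma/rdma_cma.h>

    static int wait_for_cm_event(struct rdma_event_channel *ch,
                                 struct rdma_cm_event **event)
    {
        struct pollfd pfd = { .fd = ch->fd, .events = POLLIN };

        /* Block until the channel fd is readable, then fetch the event
         * exactly as the blocking API used to. */
        if (poll(&pfd, 1, -1) < 0) {
            return -1;
        }
        return rdma_get_cm_event(ch, event);
    }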
Change-Id: I174dac028720f95c30100f6dc2ed49b5bb2a7e40
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467545
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This reverts commit 6129e78d26.
When the initiator sends the discovery log page, if the log page
exceeds the size of its data buffer, it will break it up into
multiple log page commands with appropriate offsets. However,
supporting offsets in log pages is an optional feature in NVMe
and reported by the EDLP bit in the identify data.
This commit changed the discovery process to no longer send an
identify command prior to doing the discovery log page command,
so the values in the identify data are always 0. If the discovery
log page exceeds the size of the data buffer (4k), it will then
fail to send the second log page with an offset because it
believes the controller does not support the feature.
Revert this change to fix it. An identify should always be sent
as part of the discovery process. A test case is included in a
follow-up patch that demonstrates the bug.
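A hedged sketch of the dependency this revert restores; the structure
is a stand-in for the identify data, where EDLP is a bit in the LPA
field:

    #include <errno.h>
    #include <stdint.h>

    struct identify_data {
        uint8_t edlp; /* stand-in for cdata.lpa.edlp */
    };

    static int check_log_page_offset(const struct identify_data *cdata,
                                     uint64_t offset)
    {
        /* If no Identify was sent, cdata is all zeroes, so any log page
         * read past the first buffer (offset > 0) is wrongly rejected. */
        if (offset > 0 && !cdata->edlp) {
            return -EINVAL;
        }
        return 0;
    }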
Reported-by: Zahra Khatami <zahra.k.khatami@oracle.com>
Reported-by: Akshay Shah <akshay.shah@oracle.com>
Change-Id: Iefd512a7521e0fea90541b3eb547671cfa816ea6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466819
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The SPDK NVMe-oF initiator driver could not transfer I/O larger than
128KiB even if the NVMe-oF target allowed larger I/O, for both the
RDMA and TCP transports.
Some use cases need to transfer I/O larger than 128KiB.
For the RDMA transport, the max_mr_size reported by ibv_query_device
indicates the maximum size of a single memory region. It is independent
of the actual I/O size and is very likely to be larger than 2 MiB,
which is the granularity at which we currently register memory regions.
In fact, some RDMA NICs return UINT64_MAX for max_mr_size.
Hence use UINT32_MAX and let the generic layer use the controller data
to moderate this value.
For the TCP transport, there is no limit on the maximum I/O size, so
use UINT32_MAX there as well.
Additionally, for the RDMA transport, max_sges should be the minimum of
the max_sge obtained by querying RDMA devices and NVME_RDMA_MAX_SGL_DESCRIPTORS.
That change is included in this patch as well.
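A hedged sketch of the moderation in the generic layer (stand-in names;
per the NVMe spec, MDTS is a power-of-two multiple of the minimum page
size, with 0 meaning no limit):

    #include <stdint.h>

    static uint32_t moderated_max_xfer_size(uint32_t transport_max, /* UINT32_MAX */
                                            uint8_t mdts, uint32_t min_page_size)
    {
        uint64_t ctrlr_max;

        if (mdts == 0) {
            return transport_max; /* MDTS == 0: no controller limit */
        }
        ctrlr_max = (uint64_t)min_page_size << mdts;
        return ctrlr_max < transport_max ? (uint32_t)ctrlr_max : transport_max;
    }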
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idc813afd3e525bf5f370c0fcd2623f9c146a5528
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459218
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Previously, comparing the transport-supported value with the target's
value was done in the RDMA transport layer. However, this comparison
should be done in the generic layer, like the maximum I/O transfer
size, so this patch moves the comparison into the generic layer.
Additionally, for MSDBD the value 0 indicates no limit, but we had
mistakenly handled it as if the maximum number of SGL entries were 0.
This patch fixes that bug as well.
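A minimal sketch of the MSDBD fix, with stand-in names:

    #include <stdint.h>

    static uint16_t moderated_max_sges(uint16_t transport_max_sges, uint8_t msdbd)
    {
        /* MSDBD == 0 means the target imposes no limit; previously it
         * was misread as allowing zero SGL descriptors. */
        if (msdbd == 0) {
            return transport_max_sges;
        }
        return msdbd < transport_max_sges ? msdbd : transport_max_sges;
    }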
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I54365cf114169b10180ec2c659f9c7302672674c
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459574
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
In some cases we have the qpair already when calling
this function. So pass the qpair to avoid having
to get it from the request. This shows about a 3%
performance improvement for high IOPs single core
tests.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I22fcca560492f4e7cf5ffedd252e41a027d0dd79
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/455286
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The RDMA transport was the only one implementing this
function, and it only does a connect - not a disconnect
followed by a connect.
A later patch will add a matching disconnect function.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ib68eb0ff2f8e59f437d6d8831bb37dfddf83e9a4
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/453929
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This function returns a pointer to the PCIe I/O registers for a controller
or NULL if unsupported for this transport.
Used for PCIe only; other transports return NULL.
Use with caution.
Signed-off-by: James Bergsten <jamesx.bergsten@intel.com>
Change-Id: I849f9de9ad259a65b1eef9c1237345eb7195b9bf
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452927
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This transport function is a complete nop now, so
remove it.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5cc6ac75795a3cf5311f24e2ac293fb53d4b9f8c
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/453487
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This will allow us to move more of the reset-related
functionality to the common layer, as part of enabling
resets for fabrics controllers.
The transport qpair_enable and qpair_fail functions
acted similarly - so those are both removed now and
replaced with this new qpair_abort_reqs function.
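A hedged sketch of what the new transport callback does, with stand-in
types:

    #include <sys/queue.h>

    struct nvme_request {
        TAILQ_ENTRY(nvme_request) link;
    };

    struct nvme_qpair {
        TAILQ_HEAD(, nvme_request) outstanding;
    };

    /* Illustrative completion helper. */
    static void complete_as_aborted(struct nvme_request *req) { (void)req; }

    /* Replaces the old qpair_enable/qpair_fail pair: every outstanding
     * request is completed with an aborted status. */
    static void qpair_abort_reqs(struct nvme_qpair *qpair)
    {
        struct nvme_request *req;

        while (!TAILQ_EMPTY(&qpair->outstanding)) {
            req = TAILQ_FIRST(&qpair->outstanding);
            TAILQ_REMOVE(&qpair->outstanding, req, link);
            complete_as_aborted(req);
        }
    }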
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9486630ad5b807239b0b5bcde50e8cfd313695d3
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/453486
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
We submit AERs to all controllers - both PCIe and
fabrics. But currently we only manually abort the
AERs when disabling the qpair for PCIe. Make this
common instead by creating a new transport function
for aborting aers.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I1e926b61b8035488cdc6e8cb4336b373732f985e
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/453482
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This better explains what the function is doing,
and makes the name more general so we can use it
for the adminq as well.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I6b55761cb141a9a79cdef876be47995d8813b312
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/453480
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This moves us towards not freeing and reallocating
this memory if and when we reconnect the qpair.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic20d3c221442f6206d161760a8bfa7f9b8989d4c
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/453479
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This will simplify some upcoming changes to reconnect
a qpair. In these cases we only need to re-register
the memory - we shouldn't have to allocate it again.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Id8adff313f191fbf11d7502127a2b961f2ca2f6e
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/453478
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
In order to truly support multi-sgl inline requests in the RDMA
transport, we would need to increase the size of the
spdk_nvme_rdma_req object dramatically. This is because we would need
enough ibv_sge objects in it to support up to the maximum number of SGEs
supported by the target (for SPDK that is up to 16). Instead of doing
that or creating a new pool of shared ibv_sge objects to support that
case, just send split multi-sgl requests through the regular sgl path.
Change-Id: I78313bd88f3ed1cea3b772d9476a00087f49a4dd
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452266
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The upper levels of the stack allow for this, so we should follow that
pattern so I/Os don't break here.
Change-Id: Ia862f14975a551b0675bafd7709fb7897d0d567e
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450685
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This avoids a data dependent load to find which
callback to call in the completion path.
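A hedged sketch of the idea with stand-in types: copying the callback
into the tracker at submit time replaces a dependent load through the
request pointer with a direct load from the tracker:

    typedef void (*cpl_cb)(void *cb_arg);

    struct request {
        cpl_cb cb_fn;
        void *cb_arg;
    };

    struct tracker {
        struct request *req;
        cpl_cb cb_fn;  /* copied at submit time */
        void *cb_arg;
    };

    static void submit(struct tracker *tr, struct request *req)
    {
        tr->req = req;
        tr->cb_fn = req->cb_fn;
        tr->cb_arg = req->cb_arg;
    }

    static void complete(struct tracker *tr)
    {
        /* No load through tr->req is needed to find the callback. */
        tr->cb_fn(tr->cb_arg);
    }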
Change-Id: Ifa20790a7af3332a74bc45037e589668744af797
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450558
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The identify data is only valid if spdk_nvme_connect()
was used with a Discovery Controller, so move this code
into the section where it belongs.
Change-Id: I1897f38277eafc192552a09556a568e9152bb72d
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448500
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Move req->submit_tick assignments from specific transports to generic
qpair code.
Check whether submit_tick has already been assigned before doing the
actual assignment, because a request may be submitted several times and
the original submit_tick shouldn't be overwritten.
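A minimal sketch of the guarded assignment; spdk_get_ticks() is the
real SPDK tick source, while the surrounding type is a stand-in:

    #include <stdint.h>

    uint64_t spdk_get_ticks(void); /* real SPDK API */

    struct nvme_request {
        uint64_t submit_tick;
    };

    static void stamp_submit_tick(struct nvme_request *req)
    {
        /* A request may be submitted several times (e.g. after being
         * queued); keep the first tick so timeouts cover the full span. */
        if (req->submit_tick == 0) {
            req->submit_tick = spdk_get_ticks();
        }
    }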
Change-Id: I2de8018dc21763eb5a19bb9d48dfbdef764b036e
Signed-off-by: lorneli <lorneli@163.com>
Reviewed-on: https://review.gerrithub.io/c/444702
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The existing NVMe driver uses a global list, g_nvme_init_ctrlrs,
to track controllers during initialization, and an internal
function starts each controller in the list one by one
until the list is empty. We introduce a probe context
and move the global list into the context. With the context
we can enable an asynchronous probe API in the next patch, and
this also enables a parallel probe feature.
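A hedged sketch of the context's shape, with stand-in types (the real
context also carries the probe callbacks):

    #include <sys/queue.h>

    struct nvme_ctrlr {
        TAILQ_ENTRY(nvme_ctrlr) tailq;
    };

    /* Per-probe state replacing the global g_nvme_init_ctrlrs list;
     * independent contexts allow asynchronous and parallel probing. */
    struct nvme_probe_ctx {
        TAILQ_HEAD(, nvme_ctrlr) init_ctrlrs;
        void *cb_ctx;
    };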
Change-Id: I538537abe8c1a4a82fb168ca8055de42caa6e4f9
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/426304
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This prevents us from overrunning the send queue.
Change-Id: I6afbd9e2ba0ff266eb8fee2ae0361ac89fad7f81
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443476
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The RDMA transport reports the SPDK_NVME_SC_ABORTED_POWER_LOSS status
code when failing the admin queue. However, SPDK_NVME_SC_ABORTED_SQ_DELETION
makes more sense here, because we know we are going to shut down
the controller.
Fix issue #568.
Change-Id: I31da095ec92c06079511d89cc2743654ba2c001b
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440132
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Necessary to avoid erroring out in the edge case where we have an SGL
request sent with two buffers that fit within the in-capsule data size.
Change-Id: If51fb69c402482b564c737319584378cb03e7213
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/436062
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This helps us get rid of outstanding requests at the bdev layer.
Change-Id: I362c7c0c6641715fcd96e8eb465b308c368d34fc
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/431844
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
This is a failsafe for finding and reporting data buffers that span
multiple memory regions. These errors should never be triggered, but
finding and reporting them will help with debugging.
Change-Id: I3c61e3cc510f5a36039fc1815ff0de45fce794d5
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/436054
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is necessary to confirm that a buffer that spans a 2 MB boundary is
still in a single MR.
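A hedged sketch of the check, assuming the three-argument
spdk_mem_map_translate() that shrinks *size to the contiguously
translated length:

    #include <errno.h>
    #include <stdint.h>

    struct spdk_mem_map;

    uint64_t spdk_mem_map_translate(const struct spdk_mem_map *map,
                                    uint64_t vaddr, uint64_t *size);

    static int check_single_mr(struct spdk_mem_map *map, void *buf, uint64_t len)
    {
        uint64_t size = len;

        (void)spdk_mem_map_translate(map, (uint64_t)(uintptr_t)buf, &size);
        if (size < len) {
            return -EINVAL; /* the payload spans more than one MR */
        }
        return 0;
    }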
Change-Id: If0d14e514ab2197a0d2e3af4f565f56d50591210
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/435179
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
The max_send_sge and max_recv_sge values can be set to any value from
0...dev_attr->max_sge. When we actually set the attributes, we will
receive a qpair with values for max_sge greater than or equal to what we
initially set. We need to store the maximum number of SGEs for later use
when constructing work requests.
Previously we did not rely on these values, since we assumed that we
would always get at least as many SGEs as we asked for initially. This
may change as we try to allocate more SGEs to handle splitting buffers
across memory regions.
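A hedged sketch of capturing the granted values; per the verbs
documentation, ibv_create_qp() updates qp_init_attr->cap with the
actual capabilities, which are greater than or equal to what was
requested:

    #include <infiniband/verbs.h>
    #include <stdint.h>

    struct rqpair_caps {
        uint32_t max_send_sge;
        uint32_t max_recv_sge;
    };

    static void store_granted_sges(const struct ibv_qp_init_attr *attr,
                                   struct rqpair_caps *caps)
    {
        /* Record what the device actually granted, for use later when
         * constructing work requests. */
        caps->max_send_sge = attr->cap.max_send_sge;
        caps->max_recv_sge = attr->cap.max_recv_sge;
    }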
Change-Id: Ibbeae1908b86baa3a96d9c6cd2051401aaa2197b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/433307
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The RDMA memory map needs to be per-protection
domain, not per NVMe controller. Otherwise, when
an NVMe controller is removed, the memory map may
reference an invalid pointer to a detached
controller.
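A hedged sketch of keying the map by protection domain, with stand-in
types:

    #include <stddef.h>
    #include <sys/queue.h>

    struct ibv_pd;       /* from libibverbs */
    struct spdk_mem_map; /* SPDK memory map */

    struct mr_map {
        struct ibv_pd *pd; /* key: the protection domain */
        struct spdk_mem_map *map;
        int ref;
        LIST_ENTRY(mr_map) link;
    };

    LIST_HEAD(mr_map_list, mr_map);

    /* The map's lifetime follows the PD, not any one controller, so
     * detaching a controller cannot leave dangling references. */
    static struct mr_map *find_mr_map(struct mr_map_list *maps, struct ibv_pd *pd)
    {
        struct mr_map *m;

        LIST_FOREACH(m, maps, link) {
            if (m->pd == pd) {
                m->ref++;
                return m;
            }
        }
        return NULL; /* caller creates and registers a new map */
    }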
Change-Id: I0c5bd2172daee0c70efb40eab784839e0cde8bc4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/432590
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>