Inline this code into the places that called it. These two
spots will be combined into a single path in a later patch.
Change-Id: Ice2f009ad56b783dc28ebbf1abbb877ce6000293
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is an RDMA-specific operation, so hide it inside
the transport-specific layer.
Change-Id: Iaa097e8dde78d820547b3a39e9717c992581340b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These can be done at the same time now that the queue depth
is known ahead of time.
Change-Id: I7ecef30ebb4311e0a1c88f37461d34534f8600bf
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Calculate queue depth into a local variable without
touching the rdma_conn.
Change-Id: Ie804ed39ddecbf59015a4e4f7aa127f1381d9080
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Make sure the trace history that is exported via shared memory is always
the same size, regardless of DPDK configuration.
This also removes the need to include DPDK headers from spdk/trace.h
(so we have to fix up other files to include what they use).
Change-Id: I32f88921fd95c64a9d1f4ba768ae75e2ca5d91da
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Make sure no response fields are left over from the previous command in
the spdk_nvmf_request.
Change-Id: I42937e991d9dd6550fd4bc9b6d0dd66b44c6b83e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
If we connect a subsystem twice from the initiator, the second
connection will be rejected by the NVMf target; however, the previous
connection will also be impacted, because we destroy the connection ID
before acknowledging the disconnect event.
Change-Id: Ib597cc68a7823524460693053898f4d6e5499eb4
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
There is no need to allocate ibv_sge structures within the RDMA request;
we can just fill them out on the stack right before submitting each
request.
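As a rough sketch (the function and parameter names below are placeholders,
not the actual SPDK code), the verbs layer copies the work request when it is
posted, so the SGE only needs to live until ibv_post_send() returns:

    #include <infiniband/verbs.h>
    #include <stdint.h>

    /* Placeholder names; the WR and SGE are copied by ibv_post_send(),
     * so building them on the stack is sufficient. */
    static int
    post_send_sketch(struct ibv_qp *qp, void *buf, uint32_t length,
                     uint32_t lkey, uint64_t wr_id)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = length,
            .lkey   = lkey,
        };
        struct ibv_send_wr wr = {
            .wr_id      = wr_id,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_SEND,
            .send_flags = IBV_SEND_SIGNALED,
        };
        struct ibv_send_wr *bad_wr = NULL;

        return ibv_post_send(qp, &wr, &bad_wr);
    }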
Change-Id: I438ff0be2f6d07ffa933255c92c4ec964aa1b235
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Just return success or failure - the actual count was not used.
Change-Id: I26e7c4c6319af444d221d9b0f313fb7071733619
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
All of the WC events that we handle map back to a request, so look it up
before checking the opcode.
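Roughly (with placeholder request and handler names), the lookup-then-switch
pattern looks like this:

    #include <infiniband/verbs.h>
    #include <stdint.h>

    struct sketch_request;                  /* stands in for the real request type */

    static void sketch_handle(struct sketch_request *req) { (void)req; }
    static void sketch_close_conn(void) { }

    static void
    poll_cq_sketch(struct ibv_cq *cq)
    {
        struct ibv_wc wc;

        while (ibv_poll_cq(cq, 1, &wc) > 0) {
            /* wr_id was set to the request pointer when the WR was posted */
            struct sketch_request *req = (struct sketch_request *)(uintptr_t)wc.wr_id;

            if (wc.status != IBV_WC_SUCCESS) {
                /* any failed completion tears the connection down */
                sketch_close_conn();
                return;
            }

            switch (wc.opcode) {
            case IBV_WC_SEND:
            case IBV_WC_RDMA_READ:
            case IBV_WC_RECV:
                sketch_handle(req);
                break;
            default:
                break;
            }
        }
    }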
Change-Id: I1b70a773374f64387df0a21a4f7fd64b26534b14
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Make sure all tracelogs in rdma.c use SPDK_TRACE_RDMA.
Change-Id: Idc3d3b6654215b5ab3ee84a106e46ffd3019cc7a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
There was only one function and a structure declaration
left.
Change-Id: I63277b4182120e7a76a925ed0bf7378ec7c23f20
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These can be simplified and merged into the subsystem.
Remove the concept of mappings from subsystems and replace
it with a list of hosts and ports. The host is optional -
not specifying a host means any host can connect.
Change-Id: Ib3786acb40a34b7e10935af55f4b6756d40cc906
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Make the transport responsible for filling out the fabric-specific
details in the discovery log entry.
Change-Id: I41d871c605becd557dca18f8ef7e80da66950257
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Make the core NVMf to transport interface generic and allow for multiple
transport types to be registered.
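A purely illustrative sketch of the idea (not the real SPDK interface): each
transport supplies an ops table and registers it with the core, which then
dispatches through function pointers instead of calling RDMA code directly.

    /* Illustrative names only. */
    struct nvmf_transport_ops_sketch {
        const char *name;
        int  (*init)(void);
        void (*fini)(void);
        int  (*acceptor_poll)(void);
    };

    #define MAX_TRANSPORTS_SKETCH 4

    static const struct nvmf_transport_ops_sketch *g_transports[MAX_TRANSPORTS_SKETCH];
    static int g_num_transports;

    static int
    transport_register_sketch(const struct nvmf_transport_ops_sketch *ops)
    {
        if (g_num_transports >= MAX_TRANSPORTS_SKETCH) {
            return -1;
        }
        g_transports[g_num_transports++] = ops;
        return 0;
    }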
Change-Id: I0a2767a47d55999c45f788ae1318bb50af60ab4e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Instead of starting the connection poller immediately upon
the connect event, wait for the first connect capsule to
start the poller.
This builds toward associating all connections belonging to the same
session with the same lcore.
Change-Id: I7f08b2dd34585d093ad36a4ebca63c5f782dcf14
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Pull out the duplicated min checks against the ibdev_attr values.
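Something along these lines (the helper name is hypothetical) captures the
intent: cap the depth once against the queried device attributes instead of
repeating the comparisons at each call site.

    #include <infiniband/verbs.h>

    #define nvmf_min_sketch(a, b) ((a) < (b) ? (a) : (b))   /* hypothetical helper */

    static int
    cap_queue_depth_sketch(struct ibv_context *ctx, int requested_depth)
    {
        struct ibv_device_attr attr;

        if (ibv_query_device(ctx, &attr) != 0) {
            return -1;
        }

        /* cap once, in one place */
        requested_depth = nvmf_min_sketch(requested_depth, attr.max_qp_wr);
        requested_depth = nvmf_min_sketch(requested_depth, attr.max_qp_rd_atom);
        return requested_depth;
    }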
Change-Id: I774c355ba669486afde5c05c55a4ed653723db98
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Drop the debug print in conn.c that was the only user.
We still have the connect data structure when determining the connection
type, and after that point, the queue ID is not needed.
Change-Id: Ida9e170099f977ec6b84478874863c40d6f7d8a1
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Use the number of devices returned by ibv_get_device_list() instead of
stopping at 4.
While we're here, drop the unused MAX_SESSIONS_PER_DEVICE definition
too.
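A minimal sketch of the new behavior, using the device count reported by the
verbs library rather than a hard-coded limit:

    #include <infiniband/verbs.h>
    #include <stdio.h>

    static void
    enumerate_devices_sketch(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);

        if (devices == NULL) {
            return;
        }

        /* honor whatever count the library reports instead of stopping at 4 */
        for (int i = 0; i < num_devices; i++) {
            printf("found RDMA device %s\n", ibv_get_device_name(devices[i]));
        }

        ibv_free_device_list(devices);
    }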
Change-Id: I21ca6c6c95b7f2cccc1de4d0a34b95217a522bfc
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
These can be isolated in rdma.c rather than being part of the generic
transport API.
Change-Id: Idc2b969a2f7685420cda2f7c4aa12495ffc3fcbc
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Clean up everything that isn't strictly necessary in rdma.h.
Change-Id: Ied9acbed5f5b64860eae39816cdcb74620009a79
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This essentially turns the current nesting (of RDMA conn inside NVMf
conn) inside out. Now the transport owns the connection structure and
allocates it when necessary.
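Conceptually (with illustrative names, not the actual SPDK definitions), the
transport now embeds the generic connection inside its own structure and
recovers it with a container_of-style macro:

    #include <stddef.h>
    #include <stdlib.h>

    #define container_of_sketch(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct nvmf_conn_sketch {
        int qid;
    };

    struct rdma_conn_sketch {
        struct nvmf_conn_sketch nvmf_conn;  /* embedded generic connection */
        /* transport-private fields (cm_id, qp, request queues, ...) */
    };

    static struct nvmf_conn_sketch *
    rdma_conn_create_sketch(void)
    {
        struct rdma_conn_sketch *rconn = calloc(1, sizeof(*rconn));

        return rconn != NULL ? &rconn->nvmf_conn : NULL;
    }

    static struct rdma_conn_sketch *
    get_rdma_conn_sketch(struct nvmf_conn_sketch *conn)
    {
        return container_of_sketch(conn, struct rdma_conn_sketch, nvmf_conn);
    }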
Change-Id: Ib5ca84e2a57b16741d84943a5b858e9c3297d44b
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This sets up the RDMA layer to be able to embed the NVMf conn inside the
RDMA conn.
Change-Id: I5e3714ac8503826504d78d06fb5eaafabd025bb8
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The NVMf target now sets the maximum data transfer size (MDTS) to the
default value of 128KB. The initiator driver will read this value and
pass it to the block layer, so each command sent from the initiator
will not exceed 128KB.
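As a worked example (the helper name is illustrative): the MDTS field is
reported as a power of two in units of the minimum memory page size
(CAP.MPSMIN), so with a 4KB minimum page a 128KB limit is encoded as 5.

    #include <stdint.h>

    /* 4 KiB * 2^5 = 128 KiB, so the reported MDTS value is 5. */
    static uint8_t
    mdts_for_max_xfer_sketch(uint32_t max_xfer_bytes, uint32_t min_page_size)
    {
        uint8_t mdts = 0;

        while ((min_page_size << mdts) < max_xfer_bytes) {
            mdts++;
        }
        return mdts;    /* mdts_for_max_xfer_sketch(128 * 1024, 4096) == 5 */
    }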
Change-Id: I1d4f259e887b2fc70c7f1c5406c07c58f7fc9b8d
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
If any completion indicates an error, we need to close the connection.
Change-Id: I50b30aa692ae121932f1baec32f713422ff415ed
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The requirement that bb_sgl immediately follow recv_sgl makes the logic
obscure.
Change-Id: I8d47477986efd8f2d4ed964ab9373b7f157af274
Signed-off-by: Cunyin Chang <cunyin.chang@intel.com>
Admin commands technically don't allow inline data, but there is
nothing preventing us from posting a recv buffer that could handle
inline data. It just won't be used for incoming admin capsules.
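A rough sketch (placeholder names) of a recv work request whose scatter list
covers both the command and an in-capsule data buffer; the second SGE simply
goes unused when the incoming capsule carries no inline data:

    #include <infiniband/verbs.h>
    #include <stdint.h>

    static int
    post_recv_with_inline_buffer_sketch(struct ibv_qp *qp, uint64_t wr_id,
                                        void *cmd_buf, uint32_t cmd_len, uint32_t cmd_lkey,
                                        void *data_buf, uint32_t data_len, uint32_t data_lkey)
    {
        struct ibv_sge sgl[2] = {
            { .addr = (uintptr_t)cmd_buf,  .length = cmd_len,  .lkey = cmd_lkey  },
            { .addr = (uintptr_t)data_buf, .length = data_len, .lkey = data_lkey },
        };
        struct ibv_recv_wr wr = {
            .wr_id   = wr_id,
            .sg_list = sgl,
            .num_sge = 2,
        };
        struct ibv_recv_wr *bad_wr = NULL;

        return ibv_post_recv(qp, &wr, &bad_wr);
    }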
Change-Id: I3e7e4406e01ab870654a166d52221c11fc0ac683
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The queue type and queue depth are not known until
the connect capsule is processed. Delay allocating more
than 1 recv wqe until then.
Change-Id: I0e68c24bc3d6f37043946de6c2cbcb3198cd5d1b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Currently, the recv wqe is re-posted immediately. This
closes a small window where we could get more I/O
than we could handle.
Change-Id: I9b0b1f0cc526069033b9e04f170195c4fb130e37
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is going to be used elsewhere in the code, so
name it according to the public naming convention
and make it public.
Change-Id: Id5fd57e78e146f3235741a251bb30244d6530f2c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is going to be used elsewhere in the code, so
name it according to the public naming convention
and make it public.
Change-Id: I0dcb88e902c5e609fe6acd06ad06743203fcaa60
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Break out the code to allocate a single rdma request
to be used elsewhere.
Change-Id: I687ce5ec862831fed5300157bfb4bf980d22c782
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The queue depth allowed for incoming commands is set
such that we can do the maximum number of RDMA reads
necessary. There is no longer any case where a READ needs to be
queued.
Change-Id: I4f7e7f4a59f6358065b82f36a5e22744af210d07
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
There were 4 variables tracking queue depths. In reality,
only one is needed once the minimum is computed correctly.
Change-Id: I9bb890e92a33a3c7bd6e27cbd31d6bee7ca0cf3d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The spdk_nvmf_check_pools() call can be placed directly in nvmf.c.
Reason: this pool is created by the NVMf subsystem, so it should be
recycled by the same subsystem.
Change-Id: I49e49bcb56079fc25d26b1f5078a1808c2f8e189
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Drop the RDMA-specific fields from spdk_nvmf_request and get them
directly from the command SGL in the transport-specific read function.
Change-Id: Icd06a9018a8c341213fbc8d26d3d7cbf2fb32d30
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The connection will be closed in these cases anyway, so just let the
normal connection cleanup deal with the active tx_desc.
Change-Id: I96c68d5802e189bb82b180cc3c7d7c3f4135be1f
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
If spdk_nvmf_request_exec() fails, the connection will be closed anyway,
so just leave the tx_desc in the active array; it will be cleaned up in
the normal connection cleanup path.
Change-Id: Ie4f60bd6001658403dd7e1c6a47d40be756ef6f2
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
If an invalid SGL is specified, send a response with a status code
indicating what the error was rather than silently dropping the command.
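A minimal sketch of the idea (field and constant names are illustrative;
0x11 is the NVMe generic status code "SGL Descriptor Type Invalid"):

    #include <stdbool.h>
    #include <stdint.h>

    #define SKETCH_SC_SGL_DESCRIPTOR_TYPE_INVALID 0x11

    struct sketch_cpl_status {
        uint8_t sct;    /* status code type (0 = generic command status) */
        uint8_t sc;     /* status code */
    };

    static bool
    reject_bad_sgl_sketch(struct sketch_cpl_status *status)
    {
        status->sct = 0;
        status->sc = SKETCH_SC_SGL_DESCRIPTOR_TYPE_INVALID;
        return false;   /* caller completes the request with this status */
    }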
Change-Id: I12d1fd847d3bc0ea8de7698e934626c2586a7452
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The error case could only be reached with tx_desc != NULL in one case,
so move the cleanup code there and drop the goto.
Change-Id: I7aace6b40dd75ef8d86fb173f9d58110e929b082
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Also split the generic nvmf_trace_command() function out of
the RDMA-specific handler and move it to request.c
Change-Id: If29b89db33c5e080c9816977ae5b18b90884e775
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Also finish up the req_state -> req conversion.
Change-Id: I131dd52dcd36a790b942e06f0207a3274cc04ffc
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Move toward making request.c transport agnostic.
Change-Id: I25fbe74fff21a5c23138e1a6e2d40bc6a4a984ec
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Make nvmf_post_rdma_read() interface generic (don't require a tx_desc).
Change-Id: I331a93eed4bb1912a47a88bb904cf392fcc364c6
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This fixes an oversight that allowed in-capsule data block SGLs to
potentially refer to more than the received in-capsule data size.
It also makes spdk_nvmf_request_prep_data() less dependent on the
RDMA-specific rx_desc/tx_desc structures.
Change-Id: I34d61aca4cf5ba033849673116d16ec90488dcd4
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The RDMA read and write commands can determine the desired length based
on the nvmf_request length field.
Change-Id: I97b63289556e7de3c19c5a17ecbacbbbdfc10425
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Replace the generic "msg_buf" naming with command and response.
Change-Id: I19baff43b41a5eb7db9be9d7feec33d17112e320
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The mempool functionality was never used at runtime - all bounce buffers
were immediately assigned to an rx_desc.
Change-Id: Ie2195059858e34b30b07e104739f046c13abc335
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The RDMA tx_desc and rx_desc pools were only used at startup; all
descriptors are immediately allocated and put into a queue, and the
mempool functionality was never used at runtime.
Change-Id: I2882274962550191a555c8483b8f7be2854b32ec
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is an implementation detail of the RDMA layer.
Change-Id: Ib97d6fbd593789eed0b6e746972b8882a3320995
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This code is operating on a list owned by the RDMA
connection, so move it to rdma.c
Change-Id: I8b81f9d1ffc1df489c9b698969725ed0d1db6a06
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These are an implementation detail of RDMA, so move
them into the RDMA portion of the connection.
Change-Id: I68d146019c5d78fbf5e9968abfd7baed2a54a2ed
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Separate out the RDMA connection from the
NVMf connection. For now, the RDMA connection
is just embedded in the NVMf connection, but
eventually they will have different lifetimes.
Change-Id: I9407d94891e22090bff90b8415d88a4ac8c3e95e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This structure will be expanded in future patches.
Change-Id: Ibb04917134243560e09a2a255844739eb33fab65
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The nvmf_request structure holds the pair of pointers
for rx_desc and tx_desc.
Change-Id: I3e735979bbdcdc0e70ad78762e289849d41158ba
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The nvmf_request object is generic and is mapped 1:1 with rx_desc.
Change-Id: I397224a3859c3c93d6eca99f7ba7c53ce7963f57
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Instead of searching the global list of connections to find a matching
cm_id, we can just store the pointer back to the spdk_nvmf_conn in the
rdma_cm_id context field.
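Roughly (with a placeholder connection type), the pointer is stored when the
connection is set up and read straight back out of the event:

    #include <rdma/rdma_cma.h>

    struct nvmf_conn_sketch;                /* stands in for spdk_nvmf_conn */

    static void
    attach_conn_sketch(struct rdma_cm_id *cm_id, struct nvmf_conn_sketch *conn)
    {
        cm_id->context = conn;              /* stored at connect/accept time */
    }

    static struct nvmf_conn_sketch *
    conn_from_event_sketch(struct rdma_cm_event *event)
    {
        return event->id->context;          /* no global list walk needed */
    }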
Change-Id: I39ea16be6a633a1136d65743747b63b600f20e63
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
If we just pass NULL to rdma_create_qp, it will do
the right thing.
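A minimal sketch, assuming placeholder queue sizes:

    #include <rdma/rdma_cma.h>
    #include <string.h>

    static int
    create_qp_sketch(struct rdma_cm_id *cm_id, struct ibv_cq *cq)
    {
        struct ibv_qp_init_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.qp_type = IBV_QPT_RC;
        attr.send_cq = cq;
        attr.recv_cq = cq;
        attr.cap.max_send_wr = 128;         /* placeholder sizes */
        attr.cap.max_recv_wr = 128;
        attr.cap.max_send_sge = 1;
        attr.cap.max_recv_sge = 2;

        /* NULL PD: rdma_create_qp() uses the device's default protection domain */
        return rdma_create_qp(cm_id, NULL, &attr);
    }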
Change-Id: I9621a5110ace6237a1e47c6e5defb4cac3afc4ae
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The wrappers are much simpler to use than the low
level ib verbs calls.
Change-Id: I4b09a96a60020bc27df9396d40d955733f618837
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
They are only ever called in sequence and do related
operations.
Change-Id: I825abe08deba1dafb405757bb4f2d52062a801ca
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Most of the #include statements in nvmf.h aren't part of the public API.
Change-Id: I0d43dd542a28744a91a4fd0c4c806a991d1e194e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>