Set a status code in the response capsule for each possible error case.
Also enforce CC.EN == 1 before I/O connect.
The NVMf spec requires that the controller be enabled before any I/O
queue Connect commands are allowed.
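A minimal sketch of the CC.EN gate described above, using illustrative
names rather than the actual SPDK structures or status-code macros:

    #include <stdbool.h>
    #include <stdint.h>

    /* 0x0C is Command Sequence Error in the NVMe generic status codes;
     * the macro name here is a placeholder. */
    #define EXAMPLE_SC_COMMAND_SEQUENCE_ERROR 0x0C

    struct example_session {
        union {
            uint32_t raw;
            struct {
                uint32_t en       : 1;
                uint32_t reserved : 31;
            } bits;
        } cc; /* mirrors the Controller Configuration property */
    };

    /* Returns true if an I/O queue Connect may proceed; otherwise the
     * caller fills the response capsule with the returned status code. */
    static bool
    example_io_connect_allowed(const struct example_session *session, uint16_t *sc)
    {
        if (!session->cc.bits.en) {
            *sc = EXAMPLE_SC_COMMAND_SEQUENCE_ERROR;
            return false;
        }
        return true;
    }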
Change-Id: If56d6b4d6bedad00e9e845e77f05f715e3969f8b
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Drop the debug print in conn.c that was the only user.
We still have the connect data structure when determining the connection
type, and after that point, the queue ID is not needed.
Change-Id: Ida9e170099f977ec6b84478874863c40d6f7d8a1
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The NVMf target is being refactored to split the RDMA transport-specific
code into its own file. Once this is complete, we should be able to
plug in other transports and build the NVMf target without any RDMA
dependency if desired.
To enable this, change the CONFIG option to RDMA; it still controls
whether the whole NVMf target is built for now, but once the RDMA
dependency is actually made optional, we will be able to build the
generic NVMf target code without libibverbs installed.
Change-Id: I8cd90a9aaa85dcefcc9b0f8f2e7b6af21958b2a8
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Move the configuration file parsing for subsystems
into the configuration file parsing file.
Change-Id: Ie16e73cdc65fae7f2f3c3b22f9cba7f167024fa1
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The code for parsing the configuration file still
referred to a host as an init_grp, so fix it.
Change-Id: Ifa250b09de495dd7d393ccc3557fd6d56a54e790
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This never really made sense, so replace it with a list of
subsystems.
Change-Id: Ie7a9400083c091ac7142d01c23948200f515bdf7
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is just extra complication for no real benefit.
Change-Id: I528af98e799d0641e753390fe35ff561fa3d7d76
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This allows the custom UIO hotplug driver to be used as well as the
uio_pci_generic driver.
Change-Id: Ica3316ed716827ad305eb4a146d0864d61ff190f
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Linking this new library allows applications using
the framework to be killed via an RPC call.
This only works if the RPC subsystem is loaded.
Change-Id: Ifcf91c212add620fe410589eba5490337c635776
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Linking this library into an application makes the log
configurable via RPC. This only works if the RPC subsystem
is also loaded.
Change-Id: If1340cf2a845ef159290232c26f341150c98fb9a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Use the number of devices returned by ibv_get_device_list() instead of
stopping at 4.
While we're here, drop the unused MAX_SESSIONS_PER_DEVICE definition
too.
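As a rough illustration of iterating over every device returned by
ibv_get_device_list() instead of stopping at a fixed maximum (the loop
body and function name are made up for this sketch):

    #include <infiniband/verbs.h>
    #include <stdio.h>

    static int
    example_enumerate_rdma_devices(void)
    {
        int num_devices = 0;
        struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
        int i;

        if (dev_list == NULL) {
            return -1;
        }

        /* Use the count reported by the verbs library rather than a
         * hard-coded limit such as 4. */
        for (i = 0; i < num_devices; i++) {
            printf("found RDMA device %s\n", ibv_get_device_name(dev_list[i]));
        }

        ibv_free_device_list(dev_list);
        return num_devices;
    }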
Change-Id: I21ca6c6c95b7f2cccc1de4d0a34b95217a522bfc
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is the only file that calls it, so it can be static.
Change-Id: I47573b7b38b40ad37e758234245eedbe94ae0a12
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
These were internal-only APIs; initialize just checked that the pool was
initialized (which is already verified internally), and shutdown just
called spdk_nvmf_shutdown_nvme(), which we can call directly.
Change-Id: I95e1b912d61a38fa9934f58df7b1512678303452
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
These can be isolated in rdma.c rather than being part of the generic
transport API.
Change-Id: Idc2b969a2f7685420cda2f7c4aa12495ffc3fcbc
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Just calculate the required number of requests once and store it in a
global variable.
Change-Id: Iffeb637a3ac5f69ec89989b84f03699bac483b6e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
There can be only one session per subsystem.
Change-Id: I8ba85a5ebd11dd71fda2a4bafa97a0935609379f
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
It is just a duplicate of the NVMe library request_mempool.
Change-Id: I2a5484e5d515b965503b2cfcd8d85ccfcb0dee05
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Clean up everything that isn't strictly necessary in rdma.h.
Change-Id: Ied9acbed5f5b64860eae39816cdcb74620009a79
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This essentially turns the current nesting (of RDMA conn inside NVMf
conn) inside out. Now the transport owns the connection structure and
allocates it when necessary.
Change-Id: Ib5ca84e2a57b16741d84943a5b858e9c3297d44b
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This sets up the RDMA layer to be able to embed the NVMf conn inside the
RDMA conn.
Change-Id: I5e3714ac8503826504d78d06fb5eaafabd025bb8
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The whole cleanup process is now started by
spdk_shutdown_nvmf_subsystems(). Each subsystem will clean up its
session, if any, and each session will clean up its connections.
Change-Id: I9915d4547751ed4ffc4baa2c45c628698dd0b881
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The per-lcore connection counter was incremented and decremented, but it
is no longer actually read. The lcore allocation should happen at the
session level instead.
Change-Id: I7bdf1b521bfda4892304338d43fad3ed5123c494
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Nothing actually maps the shared memory region, so there is no need to
allocate the array of connections that way.
Change-Id: I3d5eca748f892e37fbb0ec52942f1c510e9f9dc8
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
There is only one controller per subsystem, so
there can be 0 or 1 sessions. Change the list of sessions
to a pointer that can be NULL if no session exists.
Change-Id: I2c0d042d9cecacae93da3e806093faf0155ddd6e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Subsystems only have one controller, so cntlid
is always 0.
Change-Id: I690a1793ad3a696adbaefca856e559dd0177b11a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This was intended to track the number of NVMe device
queues per session, but there is only one hardware
queue per session. It was conflated with the number
of RDMA queues in several places as well.
Change-Id: I74a1c56a5d395dea8bee4778882821e904cebcf9
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Everything can be done when the session is created.
Change-Id: I7cb38c093b2b1b69460cabba465828eed0cec432
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The cntlid is inside the session, so no need for
duplicate data.
Change-Id: I5669ee6393807959506dfec36a7583af77386fc4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Since we only allocate workers to the master lcore,
remove the logic that places I/O conns on the same
lcore as the admin conn.
The "right" logic would be to place the I/O conn
on the same lcore as the whole session, and this
patch builds toward that.
Change-Id: I8983b56de41062ec834b0a169ba0fa61326c466d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Temporarily, only run on the master lcore. This enables some
intermediate refactoring that is required to move to a truly
scalable threading model.
Change-Id: I13a2e03107a27f8ec18b023b15f653d374a137b5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
A connection function was initializing some session data, so
move that code to the function that initializes the session.
Change-Id: I5f2d4349585cb97985a7bbd9fb8d6c66eeaa7d4e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
There was an extra layer of indirection complicating
things for no reason. This removes it.
Change-Id: I8d4e654eb17f8f6ec028d775329794f0745fb0f7
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The NVMf target now sets the maximum data transfer size (MDTS) to the default
value of 128KB; the initiator driver will read this value and pass it to the
block layer, so commands sent from the initiator will not exceed 128KB.
Change-Id: I1d4f259e887b2fc70c7f1c5406c07c58f7fc9b8d
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
If any completion indicates an error, we need to close the connection.
Change-Id: I50b30aa692ae121932f1baec32f713422ff415ed
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
NVMf does not have the concept of subsystem groups; the (former)
subsystem_grp files really contain structures and functions related to
individual subsystems.
Change-Id: I4b3a64de799fffb29f8685ea4908d754516815cd
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
For a single poll of the completion queue, if the user
submits I/O from within their completion callback and that
callback is particularly slow to execute, the loop could
potentially continue forever. To guard against this, we
need to limit the number of completions we'll process
in one batch.
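A minimal sketch of the bounded poll loop, with placeholder names for the
completion-queue accessors and the batch-size constant:

    #include <stdint.h>

    #define EXAMPLE_MAX_COMPLETIONS_PER_POLL 64

    struct example_completion {
        uint16_t cid;
        uint16_t status;
    };

    /* Stand-ins for the real completion-queue pop and callback dispatch. */
    extern int example_cq_pop(struct example_completion *entry);
    extern void example_dispatch_completion(struct example_completion *entry);

    static uint32_t
    example_poll_completions(uint32_t max_completions)
    {
        struct example_completion entry;
        uint32_t num_completed = 0;

        if (max_completions == 0 || max_completions > EXAMPLE_MAX_COMPLETIONS_PER_POLL) {
            max_completions = EXAMPLE_MAX_COMPLETIONS_PER_POLL;
        }

        /* Without the bound, a callback that submits new I/O could keep
         * this loop running indefinitely. */
        while (num_completed < max_completions && example_cq_pop(&entry) == 0) {
            example_dispatch_completion(&entry); /* may submit more I/O */
            num_completed++;
        }

        return num_completed;
    }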
Change-Id: If6bae47e52b36347dbe5622ace68c866ee88a0b2
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Create a list of valid properties with get and set callbacks (set is
optional to allow read-only fields).
Remove handling for fields declared as "reserved" in the NVMe over
Fabrics 1.0 specification.
Also simplify the vcprop structure to only contain the required fields.
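A sketch of the property-table idea, with hypothetical names; a NULL set
callback marks a read-only property:

    #include <stdbool.h>
    #include <stdint.h>

    struct example_session;

    struct example_property {
        uint32_t ofst;  /* property offset from the Property Get/Set command */
        uint32_t size;  /* 4 or 8 bytes */
        uint64_t (*get_cb)(struct example_session *session);
        bool (*set_cb)(struct example_session *session, uint64_t value); /* NULL => read-only */
    };

    static uint64_t example_get_cap(struct example_session *s) { (void)s; return 0; }
    static uint64_t example_get_cc(struct example_session *s)  { (void)s; return 0; }
    static bool example_set_cc(struct example_session *s, uint64_t v) { (void)s; (void)v; return true; }

    /* 0x00 (CAP) and 0x14 (CC) are the standard NVMe property offsets. */
    static const struct example_property example_properties[] = {
        { 0x00, 8, example_get_cap, NULL },           /* CAP: read-only */
        { 0x14, 4, example_get_cc,  example_set_cc }, /* CC: read/write */
    };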
Change-Id: I14d3ddfd008c62b75fce8e64d193c87fb6f7b5ad
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is intended to be used for examples/nvme/identify and similar
diagnostic utilities.
Change-Id: Ib2f941e9af7a3fb7555865ef253742e30ccad2b5
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Multiple NVMe controllers within a subsystem do not work correctly,
since we would need to virtualize the controller data, namespace IDs,
and so on. For now, only allow pass-through mapping of a single NVMe
controller per subsystem.
Change-Id: Ib2d3576d2856c46a086f38eb6bec56f3e7a73575
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Previously, we used cap_lo and cap_hi to represent the 32-bit halves of
the full CAP register. However, it is simpler to keep them in a single
64-bit structure, and is no less efficient on 64-bit platforms.
Also name the NSSRS field from NVMe 1.2, which was previously reserved.
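A sketch of the combined 64-bit CAP layout (bit positions follow the NVMe
1.2 register definition; the union and field names here are illustrative):

    #include <stdint.h>

    union example_cap_register {
        uint64_t raw;
        struct {
            uint64_t mqes      : 16; /* Maximum Queue Entries Supported */
            uint64_t cqr       : 1;  /* Contiguous Queues Required */
            uint64_t ams       : 2;  /* Arbitration Mechanism Supported */
            uint64_t reserved1 : 5;
            uint64_t to        : 8;  /* Timeout, in 500 ms units */
            uint64_t dstrd     : 4;  /* Doorbell Stride */
            uint64_t nssrs     : 1;  /* NVM Subsystem Reset Supported (NVMe 1.2) */
            uint64_t css       : 8;  /* Command Sets Supported */
            uint64_t reserved2 : 3;
            uint64_t mpsmin    : 4;  /* Memory Page Size Minimum */
            uint64_t mpsmax    : 4;  /* Memory Page Size Maximum */
            uint64_t reserved3 : 8;
        } bits;
    };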
Change-Id: I1d5d9b0dccbb12373b4aed3db29c883881d43223
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Requiring bb_sgl to immediately follow recv_sgl makes the logic obscure.
Change-Id: I8d47477986efd8f2d4ed964ab9373b7f157af274
Signed-off-by: Cunyin Chang <cunyin.chang@intel.com>
Make sure the reactor mask in the profile takes effect.
Change-Id: Ia471b2b88a711f05738cf93068c4f3a8c9a3039d
Signed-off-by: Cunyin Chang <cunyin.chang@intel.com>
Admin commands technically don't allow inline data,
but there is nothing preventing us from posting
a recv buffer that could handle inline data. It just
won't be used for incoming admin capsules.
Change-Id: I3e7e4406e01ab870654a166d52221c11fc0ac683
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
We need to bind to each port declared in the config file; there is not a
single global port number.
Change-Id: I41c315588078d131c32cb145d22314047505c95c
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The access to the NVMf IOCCSZ (I/O Queue Command Capsule Supported Size)
field in the Identify Controller data was incorrect.
Change-Id: I23b0aa175de8e5d8a0220e9c35e0cb6868121cb5
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The maximum in-capsule data size is determined by the I/O queue bounce
buffer size, and there is no point in limiting it beyond that, so remove
the need to configure it.
Change-Id: I64806516b847e819f57ac9f62a162f7a04805b57
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
4420 is the officially assigned IP port from IANA for NVMe over Fabrics.
Change-Id: I433a5ed0780d1ffd7ca6512617759d59fa5e8def
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The queue type and queue depth are not known until
the connect capsule is processed. Delay allocating more
than 1 recv wqe until then.
Change-Id: I0e68c24bc3d6f37043946de6c2cbcb3198cd5d1b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Previously, the recv wqe was re-posted immediately. This
change closes a small window where we could receive more
I/O than we could handle.
Change-Id: I9b0b1f0cc526069033b9e04f170195c4fb130e37
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is going to be used elsewhere in the code, so
name it according to the public naming convention
and make it public.
Change-Id: Id5fd57e78e146f3235741a251bb30244d6530f2c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is going to be used elsewhere in the code, so
name it according to the public naming convention
and make it public.
Change-Id: I0dcb88e902c5e609fe6acd06ad06743203fcaa60
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Break out the code to allocate a single rdma request
to be used elsewhere.
Change-Id: I687ce5ec862831fed5300157bfb4bf980d22c782
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
When Debug is not defined, SPDK_TRACELOG does nothing,
so cmd_type becomes an unused variable and triggers a
compilation warning. This patch fixes that.
Change-Id: I821f7601a16c98e514227aee2e18fbfa61928bea
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
The queue depth allowed for incoming commands is set
such that we can do the maximum number of RDMA reads
necessary. There is no longer any case where an RDMA READ
needs to be queued.
Change-Id: I4f7e7f4a59f6358065b82f36a5e22744af210d07
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
There were 4 variables tracking queue depths. In reality,
only one is needed once the minimum is computed correctly.
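A sketch of collapsing the queue-depth bookkeeping into a single value
computed as the minimum of the relevant limits (the particular limits and
names here are illustrative):

    #include <stdint.h>

    #define example_min(a, b) (((a) < (b)) ? (a) : (b))

    static uint16_t
    example_compute_queue_depth(uint16_t host_requested, uint16_t device_max_qd,
                                uint16_t rdma_max_wr)
    {
        uint16_t depth = host_requested;

        depth = example_min(depth, device_max_qd); /* backing NVMe device limit */
        depth = example_min(depth, rdma_max_wr);   /* RDMA queue pair work-request limit */
        return depth;
    }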
Change-Id: I9bb890e92a33a3c7bd6e27cbd31d6bee7ca0cf3d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
NVMe over Fabrics defines its own NVMe Qualified Name (NQN) format; it
does not use iSCSI Qualified Names.
Also change the default node base for nvmf_tgt to "nqn.2016-06.io.spdk".
Change-Id: I2b73c1426ef1d8c83cc2df499d79228ea61257cd
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Fix the sizes of the UUID fields to match RFC 4122.
Change-Id: I1458a22579f455cde0a67ee3ce616e78d5c810c2
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This will allow removal notifications to be propagated to the library
user (e.g. for hotplug).
The callback is currently unused, but this at least prepares the API for
the future hotplug support.
Based on a patch by Dave Jiang <dave.jiang@intel.com>
Change-Id: I20b1c2dbf5e084e0b45a7e51205aba4514ee9a95
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Use the knowledge that both the source and destination of
nvme_copy_command() are aligned to emit the aligned variants of the
SSE2/AVX mov instructions.
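As a rough sketch, an aligned 64-byte copy using the SSE2 intrinsics whose
aligned forms (_mm_load_si128/_mm_store_si128) let the compiler emit movdqa
rather than movdqu; the function name is illustrative:

    #include <emmintrin.h>

    static inline void
    example_copy_64B_aligned(void *dst, const void *src)
    {
        /* Both pointers are assumed 16-byte aligned, e.g. a 64-byte
         * submission queue entry. */
        const __m128i *s = (const __m128i *)src;
        __m128i *d = (__m128i *)dst;

        _mm_store_si128(&d[0], _mm_load_si128(&s[0]));
        _mm_store_si128(&d[1], _mm_load_si128(&s[1]));
        _mm_store_si128(&d[2], _mm_load_si128(&s[2]));
        _mm_store_si128(&d[3], _mm_load_si128(&s[3]));
    }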
Change-Id: I0a7e32a3bb10b9a1920cd85691b79fa7172eecb3
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The call to spdk_nvmf_check_pools() can be moved
directly into nvmf.c.
Reason: this pool is created by the nvmf subsystem,
so it should be reclaimed by that subsystem as well.
Change-Id: I49e49bcb56079fc25d26b1f5078a1808c2f8e189
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Drop the RDMA-specific fields from spdk_nvmf_request and get them
directly from the command SGL in the transport-specific read function.
Change-Id: Icd06a9018a8c341213fbc8d26d3d7cbf2fb32d30
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The connection will be closed in these cases anyway, so just let the
normal connection cleanup deal with the active tx_desc.
Change-Id: I96c68d5802e189bb82b180cc3c7d7c3f4135be1f
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
If the transport poll routine fails, we need to close the connection.
Change-Id: Ie534b0f05e6642c31e0450865e309a784abbe744
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
If spdk_nvmf_request_exec() fails, the connection will be closed anyway,
so just leave the tx_desc in the active array; it will be cleaned up in
the normal connection cleanup path.
Change-Id: Ie4f60bd6001658403dd7e1c6a47d40be756ef6f2
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
If an invalid SGL is specified, send a response with a status code
indicating what the error was rather than silently dropping the command.
Change-Id: I12d1fd847d3bc0ea8de7698e934626c2586a7452
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Make all command processing functions return a bool to indicate
asynchronous (false) or synchronous (true) completion.
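A minimal sketch of the convention, with placeholder names rather than the
real SPDK signatures:

    #include <stdbool.h>

    struct example_request;

    extern void example_request_complete(struct example_request *req);
    extern bool example_process_admin_cmd(struct example_request *req); /* true: done now */
    extern bool example_process_io_cmd(struct example_request *req);    /* false: completes later */

    static void
    example_request_exec(struct example_request *req, bool is_admin)
    {
        bool done = is_admin ? example_process_admin_cmd(req)
                             : example_process_io_cmd(req);

        if (done) {
            /* Synchronous completion: send the response capsule now.
             * Asynchronous handlers invoke the completion path later. */
            example_request_complete(req);
        }
    }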
Change-Id: I7c2e4d28fa473b36ff26c902e4bb69f38b64d18d
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Only use SPDK_TRACE_RDMA within the RDMA transport code.
Change-Id: Ie15fd24bb142a68f3661929267ebe396b556c351
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The error case could only be reached with tx_desc != NULL in one case,
so move the cleanup code there and drop the goto.
Change-Id: I7aace6b40dd75ef8d86fb173f9d58110e929b082
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Also split the generic nvmf_trace_command() function out of
the RDMA-specific handler and move it to request.c
Change-Id: If29b89db33c5e080c9816977ae5b18b90884e775
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Also finish up the req_state -> req conversion.
Change-Id: I131dd52dcd36a790b942e06f0207a3274cc04ffc
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The overflow condition can't happen unless there is a programming error
in the nvmf_tgt library; we can only possibly receive command capsules
(sq entries from the point of view of the host) if we have posted an RDMA
Recv for the command capsule memory region.
This means that we also don't need to track sq_tail in the NVMf library.
Change-Id: I101509080c744528871e72fa46d188e2850c928a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The "immediate completion" cases in spdk_nvmf_request_exec() already
call spdk_nvmf_request_complete(), so the ret == 1 case in nvmf_recv()
is bogus.
Also fix a couple of spdk_nvmf_request_complete() calls in
nvmf_process_admin_cmd() that should be handled by its caller.
Change-Id: I41b865d5e6e7fec08087faf9c6f3da3b057a5fb2
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
These are not supported (and did not actually function) in NVMe over
Fabrics. Queue creation is handled automatically when new connections
are initiated.
Change-Id: If3a10e5df2f0625537b2c453cd8c835e570fa31e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Move toward making request.c transport agnostic.
Change-Id: I25fbe74fff21a5c23138e1a6e2d40bc6a4a984ec
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Make nvmf_post_rdma_read() interface generic (don't require a tx_desc).
Change-Id: I331a93eed4bb1912a47a88bb904cf392fcc364c6
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This fixes an oversight that allowed in-capsule data block SGLs to
potentially refer to more than the received in-capsule data size.
It also makes spdk_nvmf_request_prep_data() less dependent on the
RDMA-specific rx_desc/tx_desc structures.
Change-Id: I34d61aca4cf5ba033849673116d16ec90488dcd4
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
It is the same as spdk_nvme_cpl, aside from reserved fields.
Change-Id: I62b0718dd58c998b4d26a0d1b44ee16d37eff25d
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The RDMA read and write commands can determine the desired length based
on the nvmf_request length field.
Change-Id: I97b63289556e7de3c19c5a17ecbacbbbdfc10425
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Replace the generic "msg_buf" naming with command and response.
Change-Id: I19baff43b41a5eb7db9be9d7feec33d17112e320
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The mempool functionality was never used at runtime; all bounce buffers
were immediately assigned to an rx_desc.
Change-Id: Ie2195059858e34b30b07e104739f046c13abc335
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The RDMA tx_desc and rx_desc pools were only used at startup; all
descriptors are immediately allocated and put into a queue, and the
mempool functionality was never used at runtime.
Change-Id: I2882274962550191a555c8483b8f7be2854b32ec
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is an implementation detail of the RDMA layer.
Change-Id: Ib97d6fbd593789eed0b6e746972b8882a3320995
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This code is operating on a list owned by the RDMA
connection, so move it to rdma.c
Change-Id: I8b81f9d1ffc1df489c9b698969725ed0d1db6a06
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These are an implementation detail of RDMA, so move
them into the RDMA portion of the connection.
Change-Id: I68d146019c5d78fbf5e9968abfd7baed2a54a2ed
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Separate out the RDMA connection from the
NVMf connection. For now, the RDMA connection
is just embedded in the NVMf connection, but
eventually they will have different lifetimes.
Change-Id: I9407d94891e22090bff90b8415d88a4ac8c3e95e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This structure will be expanded in future patches.
Change-Id: Ibb04917134243560e09a2a255844739eb33fab65
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The request needs to know specifically which connection
it is associated with.
Change-Id: I492b9968b4d2e307b5af44edee0778478b32d2ba
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
They each only had 1 function left that belonged
in the session.c file.
Change-Id: I405902b02e9316d2dc02d3732d8bc085c2b84d31
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Only move nvmf_request definitions from nvmf_internal.h
for now. Subsequent patches will move more.
Change-Id: If47472542515fd050cc78d95540eb25beee59d2a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Fabric commands were skipping a step, so unify all
types of requests through the same completion path.
Change-Id: I5f38a7e1cdcdf33baf71486d5ddae9f5a6157fac
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The nvmf_request structure holds the pair of pointers
for rx_desc and tx_desc.
Change-Id: I3e735979bbdcdc0e70ad78762e289849d41158ba
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This moves some definitions from nvmf_spec.h to
nvme_spec.h based on the latest publication.
Change-Id: I51b0abd16f7d034696239894aea5089f8ac70c40
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Swap the order of checks in the failure check - if rc is not 0, addr may
be garbage.
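As an illustration of the reordered check (the lookup helper here is
hypothetical), the return code is tested before the output pointer, since
the pointer is only meaningful when the call succeeded:

    #include <stddef.h>
    #include <stdint.h>

    extern int example_lookup(uint64_t key, void **addr);

    static void *
    example_lookup_checked(uint64_t key)
    {
        void *addr = NULL;
        int rc = example_lookup(key, &addr);

        /* Check rc first; if the call failed, addr may be garbage. */
        if (rc != 0 || addr == NULL) {
            return NULL;
        }
        return addr;
    }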
Change-Id: I110710efd00397c777d59ac8b219ba3cc2156596
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The nvmf_request object is generic and is mapped 1:1 with rx_desc.
Change-Id: I397224a3859c3c93d6eca99f7ba7c53ce7963f57
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Instead of searching the global list of connections to find a matching
cm_id, we can just store the pointer back to the spdk_nvmf_conn in the
rdma_cm_id context field.
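A sketch of the back-pointer approach; rdma_cm_id and its context member
come from librdmacm, while the connection struct and helpers are
illustrative:

    #include <rdma/rdma_cma.h>

    struct example_nvmf_conn {
        struct rdma_cm_id *cm_id;
        /* ... transport and session state ... */
    };

    static void
    example_conn_bind_cm_id(struct example_nvmf_conn *conn, struct rdma_cm_id *cm_id)
    {
        conn->cm_id = cm_id;
        cm_id->context = conn; /* recovered directly on later CM events */
    }

    static struct example_nvmf_conn *
    example_conn_from_cm_id(struct rdma_cm_id *cm_id)
    {
        /* No global list walk needed. */
        return (struct example_nvmf_conn *)cm_id->context;
    }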
Change-Id: I39ea16be6a633a1136d65743747b63b600f20e63
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
All of the variables are private to conn.c, so they don't need global
visibility.
Change-Id: I7c24cfc6249a9f8164b162b4b8de0e24c452e0df
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
It is always set to nvmf_process_async_completion and is only used
within the library.
Also rename nvmf_process_async_completion to spdk_nvmf_request_complete
to clarify its purpose.
Change-Id: Ie737fb60688329bfe329a8553c4a40ff2e5f8f1d
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Create spdk_nvmf_request_prep_data(), which handles SGL processing and
data transfer for all command types, and spdk_nvmf_request_exec(), which
executes a command after data transfer has completed.
Change-Id: I51c2196260dd0686db8acca4d8f7c93e17618c2f
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The pending type can be determined based on the command opcode.
This also moves the "issue pending RDMA reads" case out of the I/O queue
handling into the generic continuation code; this should not make any
difference for the current case, since the Fabrics Connect command is
the only other continuation case currently, and there cannot be any
pending RDMA reads in that case.
Change-Id: Idddfa496b6e5b7e6da772aa3ab1b9d1a5344771f
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
nvmf_connect_continue() no longer needs the RDMA-specific tx_desc.
Change-Id: I95f6938063e9853aa7dcd419f488b91422ff9b60
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Let the calling function handle the tx_desc if nvmf_connect_continue()
fails.
Change-Id: I25a8cbc4c3be0608bcec8db2fb8c50e55fbe3e8c
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Everything necessary for processing an admin command is now stored in
nvmf_request.
Change-Id: I74e75a5b7bb3b406ad167c2b31cab1af7a1f270a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Everything necessary for processing an I/O is now stored in
nvmf_request.
Change-Id: I3f390707ebe83ea66a116dcfda4d0388a6823629
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Keep a pointer to the local bounce buffer in the transport-agnostic
struct nvmf_request rather than groveling in tx_desc/rx_desc to get it.
Change-Id: Ic328d8e2b3a15759ccb149a89fb3562e928ca500
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Now that we have xfer to track data direction, the length field can be
populated correctly for all transfers (including in-capsule data).
Change-Id: I7b2228f3fac80aab983a4103ba095c7bc38e0b21
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This field is used to decide whether data needs to be transferred back
to the host after a command is completed.
Previously, this was determined using the length field, and length was
cleared to 0 after a transfer was completed. However, length will be
used in future patches after a host-to-controller transfer completes, so
we need some other way to tell what kind of data transfer is required.
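A sketch of keeping the transfer direction separate from the length (the
enum values mirror the NVMe data-transfer directions; the struct and names
are illustrative):

    #include <stdint.h>

    enum example_data_dir {
        EXAMPLE_DATA_NONE = 0,
        EXAMPLE_DATA_HOST_TO_CONTROLLER, /* write-style commands */
        EXAMPLE_DATA_CONTROLLER_TO_HOST, /* read-style commands */
        EXAMPLE_DATA_BIDIRECTIONAL,
    };

    struct example_request {
        void *data;
        uint32_t length;            /* remains valid after the transfer completes */
        enum example_data_dir xfer; /* decides whether data flows back to the host */
    };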
Change-Id: I6b27cf7816908394735fc95c15bd5eb40a7c0157
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
vtophys() was checking for out-of-range addresses incorrectly: vfn_2mb
is already shifted to account for 2 MB hugepages, but it was being
compared with a mask that did not account for the shift. This would
allow out-of-bounds access to the 128tb map array for certain invalid
addresses (it had no effect on addresses within the valid userspace
range).
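A sketch of the idea behind the fix (constants and names are illustrative,
not copied from vtophys.c): once the address has been shifted down to a
2 MB frame number, the bound it is compared against must be shifted by the
same amount.

    #include <stdint.h>

    #define EXAMPLE_SHIFT_2MB  21ULL              /* log2(2 MB) */
    #define EXAMPLE_MASK_128TB ((1ULL << 47) - 1) /* 47-bit user address space */

    static int
    example_vaddr_in_range(uint64_t vaddr)
    {
        uint64_t vfn_2mb = vaddr >> EXAMPLE_SHIFT_2MB;

        /* Comparing vfn_2mb against ~EXAMPLE_MASK_128TB would mix a shifted
         * frame number with an unshifted address mask; shift the mask too. */
        return (vfn_2mb & ~(EXAMPLE_MASK_128TB >> EXAMPLE_SHIFT_2MB)) == 0;
    }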
Change-Id: Ida7455595e586494c9025f9ba65d050abb16b1b9
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Other subsystems can depend on this one and then use
SPDK_RPC_REGISTER to register RPC functions.
Change-Id: I557f774331ce7146d299d06b3f81426e2103a11f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Since we used a u64 to mask CPU cores, the available number of CPUs
is 64, but the default RTE_MAX_LCORE in DPDK is 128. In some
cases (e.g. when nr_io_queues > 4), we could get the wrong lcore ID.
Change-Id: Icc334b1bf5b068a310839118be341e61071cff65
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
A transfer using less than the total bounce buffer size is a normal
occurrence and not worthy of a tracelog. Also drop the pointless
conditional.
Change-Id: Ibcdcf693fea439d5034fa51b08b3fbd8fd7df8f2
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Avoid using the RDMA-specific tx_desc when the transport-agnostic
nvmf_request will suffice.
Change-Id: Id35bbdfb353cb72e0feb4f5af19e5bd5c86d3ff4
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>