Commit Graph

805 Commits

Daniel Verkamp
bd4ac74eaf scsi: import SCSI/blockdev translation layer
Change-Id: Ie96943f40ea8be4156d55bc5eeacc567743cf9d6
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-08-01 10:35:01 -07:00
Daniel Verkamp
8779605323 readme: update to DPDK 16.07
Change-Id: I928721f1908de6695c7f2bf05ea2ca032b59d29c
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-08-01 10:33:56 -07:00
Changpeng Liu
5627b6871e nvmf: add identify namespace list support to NVMf target
Virtualized NVMe devices are reported through the NVMe over Fabrics
specification with NVMe version 1.2.1. In direct mode, the underlying
NVMe device may implement an older version, such as 1.0, where the
Identify active namespace ID list is not supported, so add a helper
function to emulate that command on behalf of the initiator.

Change-Id: I226f4f34bf61017f538d2dd80332f1d054a501f1
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
2016-07-29 15:49:41 -07:00
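
For controllers that predate NVMe 1.1, Identify with CNS 02h (active namespace ID list) is not available, so the target has to synthesize the list itself. A minimal sketch of such emulation, assuming namespaces 1..NN are active (the helper name and parameters are illustrative, not the actual SPDK code):

```c
#include <stdint.h>
#include <string.h>

/* Identify CNS 02h returns up to 1024 active namespace IDs, each greater
 * than the NSID given in the command, in increasing order. */
#define NVME_NS_LIST_ENTRIES 1024

/* Hypothetical helper: build an active namespace ID list for a device that
 * only implements 1.0-era Identify. 'num_ns' is the NN value reported by
 * Identify Controller. */
static void
emulate_identify_active_ns_list(uint32_t num_ns, uint32_t start_nsid,
                                uint32_t *ns_list /* 1024 entries */)
{
	uint32_t nsid, count = 0;

	memset(ns_list, 0, NVME_NS_LIST_ENTRIES * sizeof(uint32_t));

	for (nsid = start_nsid + 1; nsid <= num_ns && count < NVME_NS_LIST_ENTRIES; nsid++) {
		ns_list[count++] = nsid;
	}
}
```
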
Changpeng Liu
ae20d784c2 nvmf: Print NVMe probe messages only on attach
Change-Id: I50f0cbf792f2d88316fbba9dd90ca1389961fecf
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
2016-07-29 15:49:34 -07:00
Ben Walker
f1a584a9f7 nvmf: Use a shared memory pool for large data buffers.
Change-Id: Iab66335cee2a1e6c1774edd34978735be6763ce1
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-29 15:36:10 -07:00
Ben Walker
b7b747eab1 nvmf: Correctly handle multiple wildcard NVMe directives.
Change-Id: Ie0c4a76734f1f0c4b87c7a752fe68627892a93b9
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-28 14:29:15 -07:00
Ben Walker
caf8860900 nvmf: Allow higher queue depths
Allow higher queue depths by permitting many more outstanding send/recv
operations than RDMA read/write operations.

Change-Id: I66c424a6463e5e09be6d5463667241ce9271404b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-28 13:55:43 -07:00
Ben Walker
eee64c69f7 nvmf: Re-post the capsule immediately upon sending a completion
The target can only provide updates to sq_head inside
of completions. Therefore, we must update sq_head prior
to sending the completion or we'll incorrectly get into
queue full scenarios.

Change-Id: If2925d39570bbc247801219f352e690d33132a2d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-28 10:27:31 -07:00
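
Since a fabrics transport has no doorbell registers, the only place the target can report submission queue head progress is the SQHD field of each completion, so the head must be advanced before the completion is built. A rough sketch of the ordering (struct and field names are illustrative, not the exact SPDK types):

```c
#include <stdint.h>

struct fabric_conn_sketch {
	uint16_t sq_head;   /* last consumed SQ slot, reported back to the host */
	uint16_t sq_size;
};

struct nvme_cpl_sketch {
	uint32_t cdw0;
	uint32_t rsvd1;
	uint16_t sqhd;      /* SQ head pointer field of the NVMe completion */
	uint16_t sqid;
	uint16_t cid;
	uint16_t status;
};

/* Advance sq_head first, then capture it in the completion about to be
 * sent; otherwise the host under-estimates its free SQ slots and the
 * queue looks full too early. */
static void
complete_request(struct fabric_conn_sketch *conn, struct nvme_cpl_sketch *cpl,
                 uint16_t cid)
{
	conn->sq_head = (uint16_t)((conn->sq_head + 1) % conn->sq_size);

	cpl->sqhd = conn->sq_head;
	cpl->cid = cid;
	/* ... post the completion via the transport ... */
}
```
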
Ben Walker
7e23841d28 nvmf: Separate the send and recv completion queues.
This allows the target to poll for internal completions
at higher priority.

Change-Id: I895c33a594a7d7c0545aa3a8405a296be3c106fb
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-28 09:00:48 -07:00
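
With two completion queues, send-side work (completions, RDMA transfers) can be drained at a different cadence than incoming command capsules. A hedged verbs-level sketch of the split (depths and error handling trimmed; this is not the SPDK code):

```c
#include <string.h>
#include <infiniband/verbs.h>
#include <rdma/rdma_cma.h>

/* Create separate CQs for send-side and receive-side completions, then
 * attach them to the QP so each can be polled independently. */
static int
create_split_cqs(struct ibv_context *ctx, struct ibv_pd *pd,
                 struct rdma_cm_id *cm_id, int send_depth, int recv_depth,
                 struct ibv_cq **send_cq, struct ibv_cq **recv_cq)
{
	struct ibv_qp_init_attr attr;

	*send_cq = ibv_create_cq(ctx, send_depth, NULL, NULL, 0);
	*recv_cq = ibv_create_cq(ctx, recv_depth, NULL, NULL, 0);
	if (*send_cq == NULL || *recv_cq == NULL) {
		return -1;
	}

	memset(&attr, 0, sizeof(attr));
	attr.qp_type = IBV_QPT_RC;
	attr.send_cq = *send_cq;            /* SEND/RDMA completions land here */
	attr.recv_cq = *recv_cq;            /* incoming command capsules land here */
	attr.cap.max_send_wr = send_depth;
	attr.cap.max_recv_wr = recv_depth;
	attr.cap.max_send_sge = 1;
	attr.cap.max_recv_sge = 1;

	return rdma_create_qp(cm_id, pd, &attr);
}
```
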
Ben Walker
04beb5661e nvmf: Send completions after RDMA write has completed.
This ensures that the data buffers are not in use
when we go to send the completion.

Change-Id: I30467b3e3964001150f81b21e5b695dcd0974b0c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-28 09:00:48 -07:00
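
For read-type commands the payload is RDMA WRITEd to the host; only once that write's work completion arrives is the data buffer guaranteed to be out of flight, so that is when the NVMe completion may be posted. A sketch of the send-side CQ dispatch (post_nvme_completion is a hypothetical hook, not an SPDK function):

```c
#include <stdint.h>
#include <infiniband/verbs.h>

/* Hypothetical per-request hook that builds and posts the NVMe completion. */
void post_nvme_completion(uint64_t req_id);

/* Poll the send-side CQ and only complete a request once its RDMA WRITE
 * has finished. */
static void
poll_send_cq(struct ibv_cq *cq)
{
	struct ibv_wc wc;

	while (ibv_poll_cq(cq, 1, &wc) > 0) {
		if (wc.status != IBV_WC_SUCCESS) {
			/* handle error / disconnect */
			continue;
		}

		switch (wc.opcode) {
		case IBV_WC_RDMA_WRITE:
			/* Data has landed in host memory; the completion capsule
			 * can be sent and the buffer reused. */
			post_nvme_completion(wc.wr_id);
			break;
		case IBV_WC_SEND:
			/* The completion capsule itself finished sending. */
			break;
		default:
			break;
		}
	}
}
```
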
Ben Walker
d7b8da3b81 nvmf: Add a transport specific session
This is useful for holding session-wide buffer pools.

Change-Id: I7024da24b210a2205bf1e159d5935e0093b81120
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-28 09:00:47 -07:00
Ben Walker
52a4a388fb nvmf: Make RDMA WRITE operations signalled
Change-Id: Iad9e216144d88c899b52220ae9b32c24e3cbb252
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-28 09:00:47 -07:00
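
Signalled WRITEs generate a work completion per operation, which is what lets the previous change wait for the WRITE before completing the command and also keeps send-queue slots from being reclaimed late. Roughly, as a sketch:

```c
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Build an RDMA WRITE work request that generates a completion. */
static void
prepare_signalled_write(struct ibv_send_wr *wr, struct ibv_sge *sge,
                        uint64_t remote_addr, uint32_t rkey, uint64_t req_id)
{
	memset(wr, 0, sizeof(*wr));
	wr->wr_id = req_id;
	wr->opcode = IBV_WR_RDMA_WRITE;
	wr->send_flags = IBV_SEND_SIGNALED;   /* request a work completion */
	wr->sg_list = sge;
	wr->num_sge = 1;
	wr->wr.rdma.remote_addr = remote_addr;
	wr->wr.rdma.rkey = rkey;
}
```
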
Changpeng Liu
0075135deb nvmf: fix the wrong calculation of Number of Queues for Get Features
Change-Id: I1aa388a85ebfba5a724ecde40d6ab6201ca8a410
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
2016-07-28 08:09:30 +08:00
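
For Get Features with the Number of Queues feature (FID 07h), Dword 0 of the completion carries 0-based counts: NCQA in bits 31:16 and NSQA in bits 15:0. A small sketch of the encoding (queue counts are illustrative):

```c
#include <stdint.h>

/* Number of Queues completion Dword 0:
 *   bits 15:0  = NSQA (submission queues allocated, 0-based)
 *   bits 31:16 = NCQA (completion queues allocated, 0-based) */
static uint32_t
encode_number_of_queues(uint16_t num_sq, uint16_t num_cq)
{
	uint32_t nsqa = (uint32_t)num_sq - 1;
	uint32_t ncqa = (uint32_t)num_cq - 1;

	return (ncqa << 16) | nsqa;
}

/* Example: a target offering 4 I/O SQ/CQ pairs reports 0x00030003. */
```
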
Ben Walker
8a701c3f8d nvmf: Use the inline SGL for keyed SGLs if the size is small enough
For small SGLs, even if they are keyed and not inline, use the
buffer we allocated for inline data.

Change-Id: I5051c43aabacb20a4247b2feaf2af801dba5f5a9
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 15:24:01 -07:00
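
When a keyed SGL describes a transfer that fits in the buffer already reserved for in-capsule data, that buffer can simply be reused instead of drawing from the large-buffer pool. A hedged sketch of the decision (struct, field, and helper names are illustrative):

```c
#include <stddef.h>

struct rdma_request_sketch {
	void   *inline_buf;       /* preallocated per-request in-capsule buffer */
	size_t  inline_buf_size;
	void   *data;             /* buffer actually used for this I/O */
	int     data_from_pool;   /* must be returned to the pool on completion */
};

/* Hypothetical pool allocator for large transfers. */
void *large_buf_pool_get(size_t length);

/* Pick a data buffer for a keyed SGL of 'length' bytes. */
static int
assign_data_buffer(struct rdma_request_sketch *req, size_t length)
{
	if (length <= req->inline_buf_size) {
		/* Small transfer: reuse the per-request inline buffer. */
		req->data = req->inline_buf;
		req->data_from_pool = 0;
		return 0;
	}

	req->data = large_buf_pool_get(length);
	if (req->data == NULL) {
		return -1;
	}
	req->data_from_pool = 1;
	return 0;
}
```
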
Ben Walker
ca0c13387a nvmf: Calculate queue_depth separately from rw_depth
Read/Write depth is much lower than Send/Recv depth.
Calculate them separately to prepare for supporting
a larger number of receives than read/writes.

Currently, the target still only exposes a queue depth
equal to the read/write depth.

Change-Id: I08a7434d4ace8d696ae7e1eee241047004de7cc5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 14:24:41 -07:00
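
Outstanding RDMA READ/WRITE operations are capped by the device's RD-atomic limits, which are typically far below the send/recv work-request limit, so the two depths deserve separate calculations. A sketch of the separation using verbs device attributes (the requested depth and min helper are illustrative):

```c
#include <stdint.h>
#include <infiniband/verbs.h>

#define MIN_U32(a, b) ((a) < (b) ? (a) : (b))

/* Derive both depths for a new connection:
 *  - rw_depth caps concurrent RDMA READ/WRITE (data transfers)
 *  - queue_depth caps outstanding send/recv (command/response capsules) */
static void
calc_depths(struct ibv_context *ctx, uint32_t requested_depth,
            uint32_t *queue_depth, uint32_t *rw_depth)
{
	struct ibv_device_attr attr;

	if (ibv_query_device(ctx, &attr) != 0) {
		*queue_depth = *rw_depth = 0;
		return;
	}

	/* READ/WRITE depth is limited by the responder/initiator RD-atomic caps. */
	*rw_depth = MIN_U32(requested_depth,
			    MIN_U32((uint32_t)attr.max_qp_rd_atom,
				    (uint32_t)attr.max_qp_init_rd_atom));

	/* Send/recv depth only needs to respect the work-request limit. */
	*queue_depth = MIN_U32(requested_depth, (uint32_t)attr.max_qp_wr);
}
```
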
Ben Walker
756df04485 nvmf: Remove g_nvmf_tgt global usage from transport layer
Change-Id: Id788312f597abf6ea937beb7d1d1bd5a168ae0f0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 14:24:39 -07:00
Ben Walker
296add8bb1 nvmf: Add config options for inline and max I/O size
These don't actually work quite yet, but pipe the
configuration file data through to where it will
be needed.

Change-Id: I95512d718d45b936fa85c03c0b80689ce3c866bc
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 14:23:25 -07:00
Ben Walker
3d52e57cd0 nvmf: Allocate rdma reqs as a single contiguous buffer
For each connection, allocate a single contiguous buffer each for
requests, inline data buffers, commands, and completions.

Change-Id: Ie235a3c0c37a3242831311fa595c8135813ae49e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 09:40:25 -07:00
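
One allocation per array (requests, inline data, commands, completions) replaces a malloc per request and keeps the memory contiguous, which simplifies registration and teardown. A simplified carve-up sketch (sizes and struct names are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

#define INLINE_BUF_SIZE 4096   /* illustrative in-capsule data size */

struct rdma_req_sketch {
	uint8_t *inline_buf;   /* points into the shared data array */
	void    *cmd;          /* points into the shared command array */
	void    *cpl;          /* points into the shared completion array */
};

struct rdma_conn_sketch {
	uint32_t                queue_depth;
	struct rdma_req_sketch *reqs;
	uint8_t                *data;   /* queue_depth * INLINE_BUF_SIZE */
	uint8_t                *cmds;   /* queue_depth * cmd_size */
	uint8_t                *cpls;   /* queue_depth * cpl_size */
};

static int
alloc_conn_resources(struct rdma_conn_sketch *conn, size_t cmd_size, size_t cpl_size)
{
	uint32_t i;

	conn->reqs = calloc(conn->queue_depth, sizeof(*conn->reqs));
	conn->data = calloc(conn->queue_depth, INLINE_BUF_SIZE);
	conn->cmds = calloc(conn->queue_depth, cmd_size);
	conn->cpls = calloc(conn->queue_depth, cpl_size);
	if (!conn->reqs || !conn->data || !conn->cmds || !conn->cpls) {
		return -1;   /* caller frees whatever was allocated */
	}

	/* Each request simply indexes into the shared arrays. */
	for (i = 0; i < conn->queue_depth; i++) {
		conn->reqs[i].inline_buf = conn->data + (size_t)i * INLINE_BUF_SIZE;
		conn->reqs[i].cmd = conn->cmds + (size_t)i * cmd_size;
		conn->reqs[i].cpl = conn->cpls + (size_t)i * cpl_size;
	}
	return 0;
}
```
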
Ben Walker
b43945830f nvmf: Simplify error handling in rdma conn create
Change-Id: I5380c7785a066f4414aaa1a27a467089d7b50031
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 09:40:25 -07:00
Ben Walker
5ade1c40f4 nvmf: Standardize names of rdma conn create/destroy
Change-Id: Id1b3328deceeeaa7da8ee2bda992a006286886b0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 09:40:25 -07:00
Ben Walker
a6135981e8 nvmf: Add a req_release callback to the transport layer
This can be used to release requests that don't
require a completion to be sent.

Change-Id: I8fb932ea8569bf3c45342d9fa4e270af5510c60c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 09:40:25 -07:00
Ben Walker
052be2f540 nvmf: Each listen addr gets its own PORT ID
PORT IDs indicate hardware failure domains according
to the NVMf specification, which means they should
indicate which transport addresses are on the same
NIC. Unfortunately, that doesn't really make sense for
IP-based fabrics because IP addresses can move. The
safest way to present this is to show all IP addresses
as part of different subsystem ports.

Change-Id: I056a50c69be70b4fbf1f896e684ce65bd792241e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 09:39:55 -07:00
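
In the discovery log this means every subsystem port entry carries its own Port ID, assigned per listen address rather than per NIC. A hedged sketch of filling entries that way (the struct below only mirrors a few fields of the NVMe-oF discovery log entry; everything else is illustrative):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Subset of an NVMe-oF discovery log page entry. */
struct discovery_entry_sketch {
	uint16_t portid;        /* one per listen address in this scheme */
	char     trsvcid[32];
	char     traddr[256];
	char     subnqn[256];
};

struct listen_addr_sketch {
	const char *traddr;
	const char *trsvcid;
};

/* Give every listen address of a subsystem a distinct Port ID, since IP
 * addresses do not map cleanly onto hardware failure domains. */
static void
fill_discovery_entries(const char *subnqn,
		       const struct listen_addr_sketch *addrs, uint16_t num_addrs,
		       struct discovery_entry_sketch *entries)
{
	uint16_t i;

	for (i = 0; i < num_addrs; i++) {
		memset(&entries[i], 0, sizeof(entries[i]));
		entries[i].portid = i;   /* unique per listen address */
		snprintf(entries[i].traddr, sizeof(entries[i].traddr), "%s", addrs[i].traddr);
		snprintf(entries[i].trsvcid, sizeof(entries[i].trsvcid), "%s", addrs[i].trsvcid);
		snprintf(entries[i].subnqn, sizeof(entries[i].subnqn), "%s", subnqn);
	}
}
```
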
Ben Walker
8e1a8b8752 nvmf: Change fio tests to write and verify
No need to do reads and writes plus a separate verify pass - the verify
is itself a read.

Change-Id: I28d08717e49b1327b04490810f8e6069504173c2
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 09:24:31 -07:00
Ben Walker
e660e677b8 nvmf: Delete partitions when tests end
Change-Id: Ia30e037e98df8b0c781ce769453bbf891091334d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 09:24:31 -07:00
Daniel Verkamp
201843a9eb nvmf: report virtualized NVMe version 1.2.1
The NVMe over Fabrics 1.0 spec corresponds to the NVMe base spec version
1.2.1, so we should pretend to be at least that new.

Change-Id: I36fc44c780de01d6c666e87b803cd47dba0e74c5
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-07-26 09:23:38 -07:00
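
The version shows up in the VS property and the Identify Controller VER field; 1.2.1 encodes as major 1, minor 2, tertiary 1. A minimal sketch of the encoding (the bit layout follows the NVMe spec; the helper is illustrative):

```c
#include <stdint.h>

/* NVMe Version (VS) layout: MJR in bits 31:16, MNR in 15:8, TER in 7:0. */
static uint32_t
nvme_version(uint16_t major, uint8_t minor, uint8_t tertiary)
{
	return ((uint32_t)major << 16) | ((uint32_t)minor << 8) | tertiary;
}

/* NVMe over Fabrics 1.0 is layered on NVMe 1.2.1, so a fabrics controller
 * should report at least 0x00010201. */
static const uint32_t NVME_VERSION_1_2_1 = (1u << 16) | (2u << 8) | 1u;
```
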
Ben Walker
5e15296025 nvmf: Reorder some static functions to avoid forward declarations
Also, clarify the name of nvmf_conn_cleanup

Change-Id: I632c1fc2dde7de03b2dc2f5e21c9f5be5465f5b3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-26 09:18:37 -07:00
Daniel Verkamp
0b27c6f649 nvmf: remove unused Read command structures
These belong in nvme_spec.h anyway and are not used.

Change-Id: I889dfebee523dc5ae503fd0370bb800f1d17fb5d
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-07-26 09:18:18 -07:00
Daniel Verkamp
d693613626 nvmf: remove unused g_nvmf_tgt.mutex
It isn't protecting anything any more.

Change-Id: Ife14809751dd6fb52b787489f87e9fd8be0cbdf6
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-07-26 09:18:18 -07:00
Daniel Verkamp
420dfa124d nvmf: remove unused #define NVMF_CNTLID_SUBS_SHIFT
This is a leftover from a previous controller numbering scheme that is
no longer used.

Change-Id: I3058802f0324b0e38708111634ee993c6e884087
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-07-26 09:18:18 -07:00
Ben Walker
0e1dcce24a nvmf: Disconnect the initiator on test exit.
Change-Id: I8e640a77dc4c17582ff5ec0a4daba4f1d509c30c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-25 14:12:58 -07:00
Cunyin Chang
8c094266ac nvmf: Adjust the data structure of spdk_nvmf_subsystem.
Move the ctrlr and io_qpair out of spdk_nvmf_subsystem and package them
as a new data structure. Place the direct and virtual mode namespaces in
a union.

Change-Id: I839aee3372c6c57aa03a0be76f8aaeb5045ecdaf
Signed-off-by: Cunyin Chang <cunyin.chang@intel.com>
2016-07-25 13:56:00 -07:00
Daniel Verkamp
594c19bf69 nvmf: set CAP.CQR as required by the spec
CAP.CQR indicates whether contiguous queues are required; this is
meaningless in NVMe over Fabrics, since queue creation is handled
implicitly for each connection, but the spec requires it to be set to 1.

Change-Id: I6b05954eefa6928beecd7a640bbbdbd835c6b69a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-07-25 09:55:39 -07:00
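
CAP.CQR is bit 16 of the Controller Capabilities property; even though a fabrics controller creates queues implicitly per connection, the spec still requires the bit to read as 1. As a sketch:

```c
#include <stdint.h>

/* Controller Capabilities (CAP): CQR (Contiguous Queues Required) is bit 16. */
#define NVME_CAP_CQR_SHIFT 16
#define NVME_CAP_CQR_MASK  (1ULL << NVME_CAP_CQR_SHIFT)

/* Force CQR = 1 in an otherwise-populated CAP value, as required for
 * NVMe over Fabrics controllers. */
static uint64_t
set_cap_cqr(uint64_t cap)
{
	return cap | NVME_CAP_CQR_MASK;
}
```
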
Daniel Verkamp
27c38d2c0c nvmf: merge NVMf ctrlr data into nvme_spec.h
Change-Id: I4c88986b5eebcb30b4b209240df813f91087e4de
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-07-25 09:55:34 -07:00
Daniel Verkamp
447cee868e nvmf: drop NVMF_{H2C,C2H}_MAX_MSG #defines
Use the size of the applicable structs directly.

Change-Id: I4a65de548d409c9962b11a75d3fde2bfe434a3ec
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-07-25 09:21:01 -07:00
Daniel Verkamp
d8efd71455 nvmf: remove pointless strdup() in discovery setup
nvmf_create_subsystem() already copies the name, so the strdup() in the
caller is unnecessary.

Change-Id: I225f0f077fee30051b197a4b1d7276b113ec6b01
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
2016-07-25 08:10:09 -07:00
Ben Walker
9b9e3253e6 nvmf: Remove unnecessary conn arguments in rdma.c
Change-Id: I7847f6427e73622e9beea3f69c1c4deb7f487ab4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 16:29:12 -07:00
Ben Walker
8ff733aeed nvmf: Remove nvmf_drain_cq
It isn't actually necessary to drain the cq before
destroying it.

Change-Id: I6f77ae578176a14b5de935274a14cfd165229ec5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 16:29:12 -07:00
Ben Walker
3a4c101b2d nvmf: Move detection of 0 connections on a session to session layer
This logically belongs inside the session handling code, not
in the transport-specific layer.

Change-Id: I93b2271f38dbfc742162c98c40acb153c7e9022a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 16:29:12 -07:00
Ben Walker
3e1a251ed7 nvmf: Track outstanding I/O for debug purposes
Track and print out the currently outstanding I/O in debug
mode with rdma tracing enabled.

Change-Id: I0a1f0cd6e22dbf21e18ca0ec7d0c2c6d194509e3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 16:29:12 -07:00
Ben Walker
c6a608769d nvmf: Move sq_head updates to nvmf_rdma_request_release
Change-Id: Iaba621d54ae600015c9d1dbec6485a730da11bf3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 16:28:33 -07:00
Ben Walker
130fec6636 nvmf: Add better tracing of RDMA operations
Change-Id: Icf5f39fad41d85bb6b325f9fc51b08a7e1055323
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 15:39:37 -07:00
Ben Walker
2d68928c3f nvmf: nvmf_rdma_accept now uses nvmf_rdma_poll
Instead of reimplementing handling for checking the
completion queue, nvmf_rdma_accept can now call
the general purpose poller.

Change-Id: Id2c899d1e500a8cb8491e51cc101a1bf0e167764
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 15:36:01 -07:00
Ben Walker
c9593cd17e nvmf: Add temporary special handling for AER
AER breaks our current model of requests/completion pairs.
Temporarily handle it by immediately re-posting the
capsule while we work on a real solution.

Change-Id: Ie7a4d88030b6fff5a11c4697eec0f024f9737f27
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 15:36:01 -07:00
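
An Async Event Request can sit in the controller indefinitely without producing a completion, so holding its receive slot would shrink the usable queue depth. The interim workaround is to recognize the AER opcode and re-post the recv right away. A rough sketch (repost_recv and the capsule struct are hypothetical, not SPDK code):

```c
#include <stdint.h>

#define NVME_OPC_ASYNC_EVENT_REQUEST 0x0c   /* admin opcode per the NVMe spec */

struct capsule_sketch {
	uint8_t opc;   /* command opcode */
	/* ... rest of the command capsule ... */
};

/* Hypothetical transport hook that re-arms the receive buffer. */
int repost_recv(void *rdma_conn, struct capsule_sketch *capsule);

/* After handing the command to the subsystem, immediately re-post the
 * capsule for AER, since its completion may not arrive for a long time. */
static void
handle_capsule(void *rdma_conn, struct capsule_sketch *capsule)
{
	/* ... normal request setup and execution ... */

	if (capsule->opc == NVME_OPC_ASYNC_EVENT_REQUEST) {
		repost_recv(rdma_conn, capsule);
	}
}
```
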
Ben Walker
04a0ac723c nvmf: conn_poll now returns a count of requests
Change-Id: Ic239bfa072905bbb65574e344d6a060cb4ce44e5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 15:35:53 -07:00
Ben Walker
6beb310cf3 nvmf: Remove nvmf_recv in RDMA layer.
Inline this code into the places that called it. These two
spots will be combined into a single path in a later patch.

Change-Id: Ice2f009ad56b783dc28ebbf1abbb877ce6000293
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 14:35:46 -07:00
Ben Walker
2625cf4261 nvmf: Move nvmf_request_prep_data into rdma.c
This is an RDMA-specific operation, so hide it inside
the transport-specific layer.

Change-Id: Iaa097e8dde78d820547b3a39e9717c992581340b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 14:35:46 -07:00
Ben Walker
9d9dc8452c nvmf: Allocate all rdma requests up front.
Change-Id: Ia7fdb6994b8c167840d7335a2dfcad3ce6171d3a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 14:35:45 -07:00
Ben Walker
989859bbe1 nvmf: Combine alloc_rdma_queue and nvmf_rdma_queue_init
These can be done at the same time now that the queue depth
is known ahead of time.

Change-Id: I7ecef30ebb4311e0a1c88f37461d34534f8600bf
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 14:35:45 -07:00
Ben Walker
a9f5ffbd1c nvmf: Change algorithm for calculating queue depth
Calculate queue depth into a local variable without
touching the rdma_conn.

Change-Id: Ie804ed39ddecbf59015a4e4f7aa127f1381d9080
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 14:35:34 -07:00
Ben Walker
6a61126f37 nvmf: Eliminate conn_id local variable
Change-Id: Iac2371f60914d43a14adadd8c4ecd7663726584f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
2016-07-22 14:18:56 -07:00