Record the user-provided asynchronous event configuration set via Set
Features, and return it in Get Features.
This value is not actually used, since AER is not implemented yet in the
virtual controller model, but it at least implements the mandatory
Set/Get Features.
This allows the hack in the NVMe host code that ignored the Set Features
failure to be reverted.
Change-Id: I2ac639eb8b069ef8e87230a21fa77225f32aedde
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The SPDK_TRACELOG macro depends on a CONFIG setting (DEBUG), so it
should not be part of the public API.
Create a new include/spdk_internal directory for headers that should
only be used within SPDK, not exported for public use.
Change-Id: I39b90ce57da3270e735ba32210c4b3a3468c460b
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The details of the structure were removed earlier, but
now remove all references even to a pointer to the
structure. The user can refer to transports by their
string name.
Change-Id: I273356f46329ea5372dcd951eda6f14767477d69
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is a step toward abstracting away the definition
of the subsystem.
Change-Id: I88b2aa107b27152620f51a1ca2a153792b4c85e9
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Print the error information when the kernel RNIC driver did not load
properly, and fix the cleanup logic for the exceptional exit.
Change-Id: I97a45e73d830280b994818f3defc491bc2b6b020
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Now that each Subsystem can support multiple sessions, the Host uses the
cntlid field when creating IO queues. With the current code logic, if 2
different Hosts connect to the same Subsystem, the IO queue creation
process would use a cntlid of 0 for both.
Change-Id: I6fd437892e8eb3146f62f4b211c0baadd70b505e
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
The NVMe over Fabrics target was storing the PCI device pointer for each
direct-mode controller, but it only really needs the PCI address, which
is exposed via the get_nvmf_subsystems RPC.
Also update the same code path to use the new spdk_pci_device_get_addr()
function for brevity.
Change-Id: I0708b3331b7c279c1a86f0d7459b5deb40dd7c89
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
If the RDMA transport failed to initialize, g_rdma.event_channel may be
NULL.
Change-Id: I4510ee5893389f244f0fbaa1cd4a182868939b25
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
For iWARP devices, buffers that are intended to be the
target of an RDMA read initiated by the target must additionally
have IBV_ACCESS_REMOTE_WRITE permission. This is because iWARP's
RDMA read path essentially requests the remote side to do
an RDMA write.
This is unfortunate because there is no way to differentiate between
memory that the remote side can do an RDMA write to and memory
that will only be the target of RDMA reads initiated by the
target. There is nothing we can do about this serious deficiency in
the specification, however, so we have to live with it.
Change-Id: I3d2f2814ce0cb1df4e5347296ef371db4d16be21
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
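A minimal sketch of the registration described above, assuming a plain
ibv_reg_mr() path (function name illustrative):

    #include <stddef.h>
    #include <infiniband/verbs.h>

    /*
     * Register a local buffer that will receive data from an RDMA read
     * initiated by this side. On iWARP the incoming read data is placed as
     * if the peer performed a remote write, so the registration must also
     * grant IBV_ACCESS_REMOTE_WRITE.
     */
    static struct ibv_mr *
    register_read_target_buffer(struct ibv_pd *pd, void *buf, size_t len)
    {
        int access = IBV_ACCESS_LOCAL_WRITE |
                     IBV_ACCESS_REMOTE_WRITE; /* required by iWARP read semantics */

        return ibv_reg_mr(pd, buf, len, access);
    }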
I believe NICs are required to report this, but handle
the case where it isn't reported.
Change-Id: I38d10c3590d1df8bb902ab312af0f9e01b9e5032
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This makes it consistent with the way connections and
requests work.
Change-Id: Ifb97499ba72f7dfd02ac54ba1b622726d266262c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The shared memory pool for a session is associated with
a particular RNIC via the protection domain. New connections
attempting to join a session that came in on a different RNIC
can't use that memory, so must be rejected.
Change-Id: Ibd79fe90566a231f76b7472e5e9b484c3e528454
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Rearrange the functions in rdma.c to match the order
of the function pointers in the transport. No other
code changes.
Change-Id: I9dbc68912ecd5dfdf53f20b4807d4116933a3c3a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Use the lower level registration functions. The RDMA-CM
examples use the ibv_* versions, so who knows if the
rdma_reg_* wrappers are even well tested.
Change-Id: I8e8250ab09a1401e636aebe2fc04a60806f7a827
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Previously, we mixed calls to free() and spdk_nvmf_rdma_conn_destroy() to
free the allocated spdk_nvmf_rdma_conn structure, which did not
necessarily release all of the resources.
Change-Id: I2917b442c34d63ba5c014add58f429ae4b831595
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
The RDMA API doesn't say whether the wr is copied, so be
safe and allocate it on the heap.
Change-Id: I091af50aa031e1861333f19d864eb52335d6b756
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This allows the entire transport structure definition
to become private.
Change-Id: I9ca19edbfc3cfb75b9b113a89bb2b90bc499ab16
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This changes as little code as possible while still creating
a single public API header. This enables future clean up
of the public API and clarification of the exposed
concepts.
Change-Id: I780e7a5a9afd27acf0276516bd71b896ad301c50
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Remove #includes for all DPDK headers that weren't
necessary.
Change-Id: Ib02522e0f04e64a1c98afceb7508cc0e8d931a9d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This converts some, but not all, usage of rte_mempool
to spdk_mempool. The remaining rte_mempools use features
we elected not to expose through spdk_mempool such as
constructors, so that will need to be revisited.
Change-Id: I6528809a864ab466b8d19431789bf0f976b648b6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Use the env library to perform all memory allocations
that previously called DPDK directly.
Change-Id: I6d33e85bde99796e0c85277d6d4880521c34f10d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
'virtual' is a keyword in C++, so avoid using it in variable
and structure names in case any files are eventually
included from a C++ project.
Change-Id: I2122750445def63038af68a3000758e33b937f9d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
All completion queues for the same listen address
now share a common completion queue channel.
Change-Id: I42c149fe7e221951e8a3826b1713482c37a265b8
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These 4 callbacks can be condensed into two callbacks, which
simplifies the API.
Change-Id: I069da00de34b252753cdc8961439e13a75d1cc68
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This allows users to swap out SPDK's third party
libraries for an implementation based on their own
framework.
Change-Id: Ia0b7384ce5e31acba5ad0d7002dec9e95b759c52
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The offset variable is used to store the result of a uint64_t * uint32_t
multiplication; a signed integer is not the correct type for the result.
Change-Id: If1fb22314ba7e3cec91808cc051678f809c9e58b
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
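An illustrative sketch of the type concern (helper name hypothetical);
the product must be computed and stored as uint64_t:

    #include <stdint.h>

    /*
     * Computing a byte offset from a 64-bit LBA and a 32-bit block size.
     * Storing the product in a signed (or 32-bit) type risks truncation or
     * overflow for large offsets; uint64_t is the correct result type.
     */
    static uint64_t
    lba_to_byte_offset(uint64_t lba, uint32_t block_size)
    {
        return lba * block_size; /* block_size is promoted to uint64_t */
    }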
This feature should only be used if clients are coordinating
with one another.
Change-Id: I89a437441a7e3fbcc1e5f6efa1c8e970ade7c2ec
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
We already require the assert header from the C standard library,
so use that instead of RTE_VERIFY to further isolate DPDK
dependencies.
Change-Id: I4a718af858c88aff6080e33e6c3dd533c077b8f4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The bdev and copy modules no longer have check_io functions -
all polling is done via pollers registered when
I/O channels are created.
Other default resources are also removed - for example,
a qpair is no longer allocated and assigned per bdev
exposed by the nvme driver - the qpairs are only allocated
via I/O channels. A similar principle also applies to the
aio driver.
ioat channels are no longer allocated and assigned to
lcores - they are dynamically allocated and assigned
to I/O channels when needed. If no ioat channel is
available for an I/O channel, the copy engine framework
will revert to using memcpy/memset instead.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I99435a75fe792a2b91ab08f25962dfd407d6402f
I/O channels are not actually used for I/O yet, however.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iaa3774ecacc7ec206c7c0c66e6b2f2d10c8fa785
Instead of polling for only 1 completion at a time,
poll for batches of 32.
Change-Id: I5ef99a270489e7b3d2a58cb765915f187775a93e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
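A minimal sketch of the batched polling pattern (batch size and dispatch
loop illustrative, not the actual SPDK code):

    #include <infiniband/verbs.h>

    #define CQ_BATCH_SIZE 32

    /*
     * Fetch up to 32 completions with a single ibv_poll_cq() call instead
     * of one call per completion.
     */
    static int
    poll_cq_batch(struct ibv_cq *cq)
    {
        struct ibv_wc wc[CQ_BATCH_SIZE];
        int n, i;

        n = ibv_poll_cq(cq, CQ_BATCH_SIZE, wc);
        if (n < 0) {
            return -1; /* polling error */
        }

        for (i = 0; i < n; i++) {
            /* dispatch wc[i].wr_id back to its request/connection */
        }

        return n;
    }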
Purpose: To make the function definition style consistent
Change-Id: I7ade943881aa5076fdd419958e386ae3c3661da6
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
1. In our nvmf tgt implementation, we use the async mode to delete nvmf
subsystems. However, when parsing of an nvmf subsystem fails, we need to
use the sync function to delete it: on error we call spdk_app_stop, so
the async functions would never get executed. Verified in my local test.
2. Add debug info in spdk_nvmf_delete_subsystem.
Change-Id: Ia8ecd6eee1bbd25cb3e1ceeb0e2146f3f03be228
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
ibv_poll_cq is actually an expensive call to make, so take
steps to begin to minimize the number of times it is called.
Change-Id: I6fc64979604220eb8cacd612b46e3a3b1bca0924
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This matches the general order (LBA start then LBA count) for
the NVMe API.
While here, fix a copy/paste error in a debug message (write
instead of writev).
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ice326af5d6025867dffed4d1f6c7b81fb9eba5eb
Set status code to invalid opcode when opcode is not supported
in nvmf_process_discovery_cmd.
Change-Id: Ibab8097e536f26f16c322d5f539277688906cfc3
Signed-off-by: Liang Yan <liang.z.yan@intel.com>
The spec does not define NQNs as case-insensitive, so replace the
strcasecmp() matching of NQNs with strcmp().
Change-Id: I5946d9ee8e1d0aa5966e9b1b3c6f14f3f5119aec
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
1. Rename this function and make it more meaningful, since we have
spdk_nvmf_session_connect, which is used to link a connection to the
session.
2. Split spdk_nvmf_session_destruct.
Change-Id: I150df7ccdf4de3428d8cecbb286d5f7944510a8c
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Fix copy-and-paste errors - when polling the recv CQ, we should print
"Recv" instead of "Send" in log messages.
Signed-off-by: Roland Dreier <roland@purestorage.com>
This can just directly assign the completion instead
of calling memcpy.
Change-Id: I07819c824eba45245b00fa3538a99bc81bcb9fcc
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This function always shows up as one of the hottest functions when
profiling. I believe it is the memset that is expensive, so instead
use default initialization when the wr is declared on the stack
and just set the members that need to be updated in the function.
Also make the function inline for good measure.
Change-Id: I29e24cdd375311fa033b5a6df772ff4f73e35302
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
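A sketch of the pattern described above (opcode, flags, and function
name illustrative):

    #include <stdint.h>
    #include <infiniband/verbs.h>

    /*
     * Default initialization zeroes the work request without an explicit
     * memset(); only the fields that actually change are then assigned.
     */
    static inline int
    post_signaled_send(struct ibv_qp *qp, struct ibv_sge *sge, uint64_t wr_id)
    {
        struct ibv_send_wr wr = {};
        struct ibv_send_wr *bad_wr = NULL;

        wr.wr_id = wr_id;
        wr.sg_list = sge;
        wr.num_sge = 1;
        wr.opcode = IBV_WR_SEND;
        wr.send_flags = IBV_SEND_SIGNALED;

        return ibv_post_send(qp, &wr, &bad_wr);
    }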
We need to free the session resources if there is an error
while creating a new session.
Change-Id: I7c4f3e779e0b30e213e02b8676d93bd2fe9bf851
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
The application is now entirely responsible for scheduling subsystem
pollers and sending events between threads.
Change-Id: I88da1f53b5e8852c7c4acd6f0a7a1e2219fbed41
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reason: In acceptor_poller_unregistered_event, we directly call
spdk_nvmf_check_pools and spdk_app_stop, which fails the memory check.
In addition, nvmf_delete_subsystem_poller_unreg will not be called,
since we have already called spdk_app_stop.
Change-Id: I3ffa30c87b149a66cee1d87d1bb81d4dc8cc96b9
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
The "+" is not correct, should be "-". Currently,
the issue doest not happen since the offset is 0,
then both + and - is OK. But if we adjust the location
of spdk_nvmf_conn or spdk_nvmf_request, we can find
this bug.
Change-Id: Ib358dc729da901a69442d0402a6089989f49b05c
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Check that the number of blocks/ranges in the command fits within the
length specified by the SGL.
Change-Id: I21aded797dc1f1e752fe0bc9cec27310a4fb106a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
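A sketch of the kind of bounds check meant here, assuming 16-byte
Dataset Management range entries and the 0's-based NR field (names
illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define DSM_RANGE_SIZE 16 /* size of one Dataset Management range entry */

    /*
     * NR in the command is a 0's-based count, so NR + 1 range entries must
     * fit within the buffer described by the SGL.
     */
    static bool
    dsm_ranges_fit_in_sgl(uint32_t nr_zero_based, size_t sgl_length)
    {
        uint64_t nranges = (uint64_t)nr_zero_based + 1;

        return nranges * DSM_RANGE_SIZE <= sgl_length;
    }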
The Dataset Management command allows several operations to be specified
at once; the virtual controller only supports deallocate for now, but it
should just ignore the other bits in order to be spec compliant: "If the
Dataset Management command is supported, all combinations of attributes
[...] may be set".
The spec also explicitly states that it is acceptable for controllers to
choose to take no action based on information provided, so not
implementing the other attributes is fine.
Change-Id: Ia989dc1faa9c852660bf1299ea18fa8e7bdf4053
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Also add a diagnostic message if the requested log page ID is not
supported.
Change-Id: I7551b5905d5ebc29356839f0f9153dc86f237106
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Rather than comparing the bdev name against "NVMe", use the new I/O type
supported API to query whether the unmap operation is supported.
Change-Id: I62c7a1ea5529366ff2ae4723b62f24ea78aa8193
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Move the NQN validation into the subsystem creation function, and fix the
allowed size to match the spec.
The spec is not clear about the allowed NQN size; for now, interpret it
as 223 bytes, including the null terminator (222 bytes of actual NQN
plus one terminator byte).
Change-Id: If9743ab2fe009d9d852e8b03317d9b38d8af18dc
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
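A minimal sketch of the length check under this interpretation (constant
and function names illustrative):

    #include <stdbool.h>
    #include <string.h>

    #define NQN_MAX_LEN 223 /* includes the null terminator under this reading */

    /*
     * Valid if a terminator appears within the first 223 bytes, i.e. at
     * most 222 bytes of actual NQN text.
     */
    static bool
    nqn_length_is_valid(const char *nqn)
    {
        return strnlen(nqn, NQN_MAX_LEN) < NQN_MAX_LEN;
    }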
SUBNQN is a UTF-8 null terminated string according to the NVMe base
spec, so pad it with zeroes using strncpy().
Change-Id: I486161b26d91f3ea1fd17428e220b9f20a874732
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
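A sketch of the zero-padding behavior being relied on (function name
illustrative; 256 bytes is the size of the Identify data SUBNQN field):

    #include <string.h>

    /*
     * strncpy() copies the NQN and fills the remainder of the fixed-size
     * field with zero bytes, which is exactly the padding wanted here.
     */
    static void
    fill_subnqn(char field[256], const char *subnqn)
    {
        strncpy(field, subnqn, 256);
        field[255] = '\0'; /* keep the field terminated even if truncated */
    }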
These are specified as "ASCII string", which means they should be
left-aligned and padded with spaces, according to the NVMe base
specification.
Change-Id: I25babe0ca417c2e16137b0bfc41fc7834277114e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
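A sketch of space-padded ASCII fields such as SN/MN/FR (helper name
hypothetical, not the actual SPDK helper):

    #include <stddef.h>
    #include <string.h>

    /*
     * Left-align the value and pad the remainder of the field with spaces;
     * no null terminator is written for these fixed-size ASCII fields.
     */
    static void
    copy_ascii_padded(char *dst, const char *src, size_t field_size)
    {
        size_t len = strnlen(src, field_size);

        memset(dst, ' ', field_size);
        memcpy(dst, src, len);
    }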
Clean up the poller and only then free the associated subsystem's
memory. This prepares for future dynamic subsystem creation/deletion.
Change-Id: I9e56cbf8822814930fdbb662095c51b6ad40fbc4
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Currently the NVMf target listens for new connections on any address.
Instead, listen only on the addresses specified by the user.
Change-Id: Idb6d37c422e442fc70a8673bd3fcfb9c27b57828
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Use the event framework's new delay parameter to allow
for idle cores to sleep for up to 1ms at a time.
Change-Id: I665f38e590c07338418892afe0e75b0b2c79706e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
It is no longer needed, since the nvmf_tgt app handles initialization
and shutdown.
Change-Id: I051afe2b4fcbd09b32998386c63f591a0ab343c2
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This will be used in future patches outside the library.
Change-Id: I1fcf5709944a884e161e5a6a9eaec033a995a812
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The NVMe over Fabrics target library now exposes a simple function call
that polls the acceptor once, and the application handles registration
of the poller.
Also rename the transport function pointers related to the acceptor so
they better reflect their purpose.
Change-Id: I5fa0d516586bf17e73afeb88ff3c2d5b0d46794d
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This will become more important when other transports are added.
For now, it is also useful to be able to start nvmf_tgt on systems
without RDMA hardware.
Change-Id: I6b9002cc7711f928c4e6b73adcd9b677349ebdd6
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
spdk_shutdown_nvmf_subsystems() was removing the subsystem from the
list, but nvmf_delete_subsystem() also wants to remove it, so drop the
extra removal.
Also rewrite the shutdown loop as a TAILQ_FOREACH_SAFE() to make the
static analyzer happy (and make it more obvious that the loop will
terminate).
Change-Id: Iccadafa77d9cd3e26be21c0f11e62cfc1ef0197c
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
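A self-contained sketch of the safe-iteration shape (types and the
removal step are illustrative):

    #include <stdlib.h>
    #include <sys/queue.h>

    #ifndef TAILQ_FOREACH_SAFE
    #define TAILQ_FOREACH_SAFE(var, head, field, tvar)          \
        for ((var) = TAILQ_FIRST((head));                       \
             (var) && ((tvar) = TAILQ_NEXT((var), field), 1);   \
             (var) = (tvar))
    #endif

    struct subsystem {
        TAILQ_ENTRY(subsystem) link;
    };

    TAILQ_HEAD(subsystem_list, subsystem);

    /*
     * The _SAFE variant caches the next pointer, so the current element can
     * be removed and freed inside the loop body without breaking iteration.
     */
    static void
    shutdown_all_subsystems(struct subsystem_list *subsystems)
    {
        struct subsystem *subsystem, *tmp;

        TAILQ_FOREACH_SAFE(subsystem, subsystems, link, tmp) {
            TAILQ_REMOVE(subsystems, subsystem, link);
            free(subsystem);
        }
    }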
Verify that the record format is the one we support (only 0 is defined
by the spec for now).
Change-Id: Iddf038b381e540134abf572e0545c97a0ef71d5f
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The spec requires that NQNs are null terminated and at most 223 bytes
long, despite the Connect command fields being larger (256 bytes), so
add checks for both subsystem NQN and host NQN before using them as null
terminated strings.
Change-Id: I343d9e44a09ab4d0f6654feba460b31e976c4e56
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Users can specify the core for each subsystem and for the acceptor listen
routine, so they can run on different cores for performance reasons.
Change-Id: I4bd1a96f39194c870863b4b778e6ea7cf8fc1a2d
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
This is causing issues during shutdown because the poller removal is not
synchronized with the rest of the cleanup path.
This reverts commit 7dfc5e922d.
Change-Id: If95c4b72c5d120f18bdc3db6d7d532ad1aada642
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This should enhance performance, since the hardware admin queue poll
function takes a mutex and should not be in the performance path.
Change-Id: I7e4acde0337aaf7079811612cba5348acf0a467d
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This leaves more flexibility for future changes to the poller
representation without requiring API changes (after this one).
It also prevents the user from accidentally using poller fields in a
non-thread-safe way, since they can't be accessed directly anymore.
Change-Id: I7677d5b93668665d29ae39c5e0ba74333ad3f878
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The NVMe submission queue head wraparound point can be determined in the
generic NVMe over Fabrics layer; it should not be using the RDMA
connection queue depth.
Change-Id: I9da8f09e4f057f8fdc1ff4c6cc5f48cea7123e11
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Report the maximum admin queue size correctly.
Change-Id: I52cad654bf59806e0abb8d869c22973647056617
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Use the max_queue_depth parameter rather than rdma_conn->max_queue_depth
so that we can start to eliminate rdma_conn->max_queue_depth.
Change-Id: I1670c634e6d12aa004fb5a10338b7624850fbc4a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
There were two unchecked allocations in the nvmf library. Check
for allocation failures.
Change-Id: Ic6b3104d825dba1ee6bd1748fa99e132702f300c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This fixes a static analysis warning for unsigned/signed
mismatch.
Change-Id: I49bd8d6d195f13b402e14a85503a5de6114f5b7f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The large buffer pool allocation was using the per-connection queue
depth, whereas the RDMA memory region registration was using the global
RDMA max queue depth. These sizes need to match, so use the global RDMA
max queue depth for both calls.
Change-Id: Iae161b719e09e19ca3e81df6593b68a4a2e86614
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Use the new timer-based poller functionality to replace rte_timer.
Change-Id: Ic40653306cc73b40139fe18e06bab29b35721a43
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Allow pollers to be scheduled to be run periodically every N
microseconds instead of every iteration of the reactor loop.
Change-Id: Iaea3e98965d81044e6dc5ce5f406bcb7a455289e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
We report virtualized NVMe devices through the NVMe over Fabrics
specification with NVMe version 1.2.1. In direct mode, the NVMe device
may have a lower version, such as 1.0, where the Identify namespace list
command is not supported, so we need to add a helper function here to
emulate such commands from the initiator.
Change-Id: I226f4f34bf61017f538d2dd80332f1d054a501f1
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Allow higher queue depths by allowing many more send/recv
operations than read/write.
Change-Id: I66c424a6463e5e09be6d5463667241ce9271404b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The target can only provide updates to sq_head inside
of completions. Therefore, we must update sq_head prior
to sending the completion or we'll incorrectly get into
queue full scenarios.
Change-Id: If2925d39570bbc247801219f352e690d33132a2d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
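A sketch of the ordering constraint (structure and names illustrative):
sq_head is advanced and wrapped first, so the completion carries the
updated value in its SQHD field.

    #include <stdint.h>

    struct cpl_sketch {
        /* ... other completion fields ... */
        uint16_t sqhd; /* submission queue head pointer reported to the host */
    };

    /*
     * Advance (and wrap) sq_head before building the completion, so the
     * host always sees the new head and does not think the queue is full.
     */
    static void
    build_completion(struct cpl_sketch *cpl, uint16_t *sq_head, uint16_t sq_depth)
    {
        if (++(*sq_head) == sq_depth) {
            *sq_head = 0;
        }
        cpl->sqhd = *sq_head;
    }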
This allows the target to poll for internal completions
at higher priority.
Change-Id: I895c33a594a7d7c0545aa3a8405a296be3c106fb
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This ensures that the data buffers are not in use
when we go to send the completion.
Change-Id: I30467b3e3964001150f81b21e5b695dcd0974b0c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is useful for holding session-wide buffer pools.
Change-Id: I7024da24b210a2205bf1e159d5935e0093b81120
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
For small SGLs, even if they are keyed and not inline, use the
buffer we allocated for inline data.
Change-Id: I5051c43aabacb20a4247b2feaf2af801dba5f5a9
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Read/Write depth is much lower than Send/Recv depth.
Calculate them separately to prepare for supporting
a larger number of receives than read/writes.
Currently, the target still only exposes a queue depth
equal to the read/write depth.
Change-Id: I08a7434d4ace8d696ae7e1eee241047004de7cc5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These don't actually work quite yet, but pipe the
configuration file data through to where it will
be needed.
Change-Id: I95512d718d45b936fa85c03c0b80689ce3c866bc
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
For each connection, allocate a single buffer each for requests,
inline data buffers, commands, and
completions.
Change-Id: Ie235a3c0c37a3242831311fa595c8135813ae49e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This can be used to release requests that don't
require a completion to be sent.
Change-Id: I8fb932ea8569bf3c45342d9fa4e270af5510c60c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
PORT IDs indicate hardware failure domains according
to the NVMf specification, which means they should
indicate which transport addresses are on the same
NIC. Unfortunately, that doesn't really make sense for
IP-based fabrics because IP addresses can move. The
safest way to present this is to show all IP addresses
as part of different subsystem ports.
Change-Id: I056a50c69be70b4fbf1f896e684ce65bd792241e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The NVMe over Fabrics 1.0 spec corresponds to the NVMe base spec version
1.2.1, so we should pretend to be at least that new.
Change-Id: I36fc44c780de01d6c666e87b803cd47dba0e74c5
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
These belong in nvme_spec.h anyway and are not used.
Change-Id: I889dfebee523dc5ae503fd0370bb800f1d17fb5d
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is a leftover from a previous controller numbering scheme that is
no longer used.
Change-Id: I3058802f0324b0e38708111634ee993c6e884087
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Move the ctrlr and io_qpair out of spdk_nvmf_subsystem and package them
as a new data structure. Union the direct and virtual mode namespaces.
Change-Id: I839aee3372c6c57aa03a0be76f8aaeb5045ecdaf
Signed-off-by: Cunyin Chang <cunyin.chang@intel.com>
CAP.CQR indicates whether contiguous queues are required; this is
meaningless in NVMe over Fabrics, since queue creation is handled
implicitly for each connection, but the spec requires it to be set to 1.
Change-Id: I6b05954eefa6928beecd7a640bbbdbd835c6b69a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Use the size of the applicable structs directly.
Change-Id: I4a65de548d409c9962b11a75d3fde2bfe434a3ec
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
nvmf_create_subsystem() already copies the name, so the strdup() in the
caller is unnecessary.
Change-Id: I225f0f077fee30051b197a4b1d7276b113ec6b01
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
It isn't actually necessary to drain the cq before
destroying it.
Change-Id: I6f77ae578176a14b5de935274a14cfd165229ec5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This logically belongs inside the session handling code, not
in the transport-specific layer.
Change-Id: I93b2271f38dbfc742162c98c40acb153c7e9022a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Track and print out the currently outstanding I/O in debug
mode with rdma tracing enabled.
Change-Id: I0a1f0cd6e22dbf21e18ca0ec7d0c2c6d194509e3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Instead of reimplementing handling for checking the
completion queue, nvmf_rdma_accept can now call
the general purpose poller.
Change-Id: Id2c899d1e500a8cb8491e51cc101a1bf0e167764
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
AER breaks our current model of requests/completion pairs.
Temporarily handle it by immediately re-posting the
capsule while we work on a real solution.
Change-Id: Ie7a4d88030b6fff5a11c4697eec0f024f9737f27
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Inline this code into the places that called it. These two
spots will be combined into a single path in a later patch.
Change-Id: Ice2f009ad56b783dc28ebbf1abbb877ce6000293
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is an RDMA-specific operation, so hide it inside
the transport-specific layer.
Change-Id: Iaa097e8dde78d820547b3a39e9717c992581340b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These can be done at the same time now that the queue depth
is known ahead of time.
Change-Id: I7ecef30ebb4311e0a1c88f37461d34534f8600bf
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Calculate queue depth into a local variable without
touching the rdma_conn.
Change-Id: Ie804ed39ddecbf59015a4e4f7aa127f1381d9080
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Make sure the trace history that is exported via shared memory is always
the same size, regardless of DPDK configuration.
This also removes the need to include DPDK headers from spdk/trace.h
(so we have to fix up other files to include what they use).
Change-Id: I32f88921fd95c64a9d1f4ba768ae75e2ca5d91da
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
It is not currently configurable, but this will allow us to make the
discovery subsystem have config options (e.g. which lcore to run on).
Change-Id: I788a64ba4462b023453191e509ce8de59fd90ae4
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is a much simpler approach and is only slightly
less efficient.
Change-Id: I909de376d576a74156c1be447e90e7dbc240f025
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Drop the redundant controller ready check.
nvmf_process_io_cmd() was checking CSTS.RDY, but this is not necessary,
since its only caller, spdk_nvmf_request_exec(), is already checking
CC.EN, which always matches RDY in our virtual controller
implementation.
The initialization of status is a dead store -
nvmf_complete_cmd() always writes the full response, and the only other
branch is the return immediately below the call, which also sets status.
Change-Id: I1ec2b8a225a91c4b2997d8ab4f45d050cc216de3
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>