Allow higher queue depths by permitting many more send/recv
operations than read/write operations.
Change-Id: I66c424a6463e5e09be6d5463667241ce9271404b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The target can only provide updates to sq_head inside
completions. Therefore, we must update sq_head prior to
sending the completion, or we'll incorrectly get into
queue-full scenarios.
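A minimal sketch of the required ordering (the struct and field
names here are illustrative, not SPDK's):

    #include <stdint.h>

    struct nvme_cpl { uint16_t sqhd; /* plus the other cpl fields */ };
    struct session { uint16_t sq_head; uint16_t sq_size; };

    static void
    stamp_sq_head(struct session *s, struct nvme_cpl *cpl)
    {
        /* Completions are the only vehicle for reporting consumed
         * submission queue slots, so advance sq_head first... */
        s->sq_head = (uint16_t)((s->sq_head + 1) % s->sq_size);
        /* ...then copy it into the completion, so the host never
         * sees a stale head and a spurious queue-full state. */
        cpl->sqhd = s->sq_head;
    }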
Change-Id: If2925d39570bbc247801219f352e690d33132a2d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This allows the target to poll for internal completions
at higher priority.
Change-Id: I895c33a594a7d7c0545aa3a8405a296be3c106fb
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This ensures that the data buffers are not in use
when we go to send the completion.
Change-Id: I30467b3e3964001150f81b21e5b695dcd0974b0c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is useful for holding session-wide buffer pools.
Change-Id: I7024da24b210a2205bf1e159d5935e0093b81120
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
For small SGLs, even if they are keyed and not inline, use the
buffer we allocated for inline data.
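A sketch of the idea, assuming a hypothetical fixed-size
per-request inline buffer:

    #include <stdbool.h>
    #include <stdint.h>

    #define INLINE_DATA_SIZE 4096 /* illustrative buffer size */

    struct request {
        uint8_t inline_buf[INLINE_DATA_SIZE]; /* always allocated */
        void *data;
        uint32_t length;
    };

    /* For a keyed SGL that fits, target the inline buffer with the
     * RDMA READ/WRITE instead of allocating from a pool. */
    static bool
    map_small_keyed_sgl(struct request *req, uint32_t sgl_length)
    {
        if (sgl_length > INLINE_DATA_SIZE) {
            return false; /* too big; fall back to a pooled buffer */
        }
        req->data = req->inline_buf;
        req->length = sgl_length;
        return true;
    }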
Change-Id: I5051c43aabacb20a4247b2feaf2af801dba5f5a9
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Read/Write depth is much lower than Send/Recv depth.
Calculate them separately to prepare for supporting
a larger number of receives than read/writes.
Currently, the target still only exposes a queue depth
equal to the read/write depth.
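A sketch of the separate calculation using libibverbs device
attributes (the requested depth parameter and the error fallback
are illustrative):

    #include <infiniband/verbs.h>

    static int
    calc_depths(struct ibv_context *ctx, int requested,
                int *send_recv_depth, int *read_write_depth)
    {
        struct ibv_device_attr attr;

        if (ibv_query_device(ctx, &attr) != 0) {
            return -1;
        }
        /* Send/recv slots are limited by work requests per QP... */
        *send_recv_depth = requested < attr.max_qp_wr
                           ? requested : attr.max_qp_wr;
        /* ...while outstanding RDMA reads are limited by the much
         * smaller per-QP read/atomic limit. */
        *read_write_depth = requested < attr.max_qp_rd_atom
                            ? requested : attr.max_qp_rd_atom;
        return 0;
    }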
Change-Id: I08a7434d4ace8d696ae7e1eee241047004de7cc5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These don't actually work yet, but pipe the
configuration file data through to where it will
be needed.
Change-Id: I95512d718d45b936fa85c03c0b80689ce3c866bc
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
For each connection, allocate one buffer apiece for requests,
inline data buffers, commands, and completions.
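A sketch of the layout, with illustrative types and sizes (real
code would also keep the cmds/cpls/bufs pointers for teardown):

    #include <stdlib.h>
    #include <stdint.h>

    #define INLINE_DATA_SIZE 4096 /* illustrative */

    struct cmd { uint8_t raw[64]; };
    struct cpl { uint8_t raw[16]; };
    struct request { struct cmd *cmd; struct cpl *cpl; void *buf; };

    /* One allocation per array, sized by the now-known queue depth,
     * instead of a separate allocation per request. */
    static int
    alloc_conn_arrays(uint16_t depth, struct request **out_reqs)
    {
        struct request *reqs = calloc(depth, sizeof(*reqs));
        struct cmd *cmds = calloc(depth, sizeof(*cmds));
        struct cpl *cpls = calloc(depth, sizeof(*cpls));
        uint8_t *bufs = calloc(depth, INLINE_DATA_SIZE);

        if (!reqs || !cmds || !cpls || !bufs) {
            free(reqs); free(cmds); free(cpls); free(bufs);
            return -1;
        }
        for (uint16_t i = 0; i < depth; i++) {
            reqs[i].cmd = &cmds[i];
            reqs[i].cpl = &cpls[i];
            reqs[i].buf = &bufs[(size_t)i * INLINE_DATA_SIZE];
        }
        *out_reqs = reqs;
        return 0;
    }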
Change-Id: Ie235a3c0c37a3242831311fa595c8135813ae49e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This can be used to release requests that don't
require a completion to be sent.
Change-Id: I8fb932ea8569bf3c45342d9fa4e270af5510c60c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Port IDs indicate hardware failure domains according to the
NVMe over Fabrics specification, which means they should
indicate which transport addresses are on the same NIC.
Unfortunately, that doesn't really make sense for IP-based
fabrics, because IP addresses can move. The safest
presentation is to expose each IP address as a separate
subsystem port.
Change-Id: I056a50c69be70b4fbf1f896e684ce65bd792241e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The NVMe over Fabrics 1.0 spec corresponds to the NVMe base spec version
1.2.1, so we should pretend to be at least that new.
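The version register layout is spec-defined (major in bits 31:16,
minor in bits 15:8, tertiary in bits 7:0), so the reported value
comes out as follows (macro name is illustrative):

    #include <stdint.h>

    static inline uint32_t
    nvme_vs(uint16_t mjr, uint8_t mnr, uint8_t ter)
    {
        return ((uint32_t)mjr << 16) | ((uint32_t)mnr << 8) | ter;
    }

    /* NVMe-oF 1.0 is built on NVMe 1.2.1, so report at least this. */
    #define CTRLR_VS nvme_vs(1, 2, 1) /* 0x00010201 */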
Change-Id: I36fc44c780de01d6c666e87b803cd47dba0e74c5
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
These belong in nvme_spec.h anyway and are not used.
Change-Id: I889dfebee523dc5ae503fd0370bb800f1d17fb5d
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is a leftover from a previous controller numbering scheme that is
no longer used.
Change-Id: I3058802f0324b0e38708111634ee993c6e884087
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Move the ctrlr and io_qpair out of spdk_nvmf_subsystem and
package them as a new data structure. Union the direct and
virtual mode namespaces.
Change-Id: I839aee3372c6c57aa03a0be76f8aaeb5045ecdaf
Signed-off-by: Cunyin Chang <cunyin.chang@intel.com>
CAP.CQR indicates whether contiguous queues are required; this is
meaningless in NVMe over Fabrics, since queue creation is handled
implicitly for each connection, but the spec requires it to be set to 1.
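CQR is bit 16 of the CAP register, so the change amounts to
(constant and function names are illustrative):

    #include <stdint.h>

    #define NVME_CAP_CQR (1ULL << 16) /* Contiguous Queues Required */

    /* Fabrics queues are created implicitly per connection, but the
     * spec still requires advertising CQR = 1. */
    static uint64_t
    build_cap(uint64_t cap)
    {
        return cap | NVME_CAP_CQR;
    }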
Change-Id: I6b05954eefa6928beecd7a640bbbdbd835c6b69a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Use the size of the applicable structs directly.
Change-Id: I4a65de548d409c9962b11a75d3fde2bfe434a3ec
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
nvmf_create_subsystem() already copies the name, so the strdup() in the
caller is unnecessary.
Change-Id: I225f0f077fee30051b197a4b1d7276b113ec6b01
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
It isn't actually necessary to drain the cq before
destroying it.
Change-Id: I6f77ae578176a14b5de935274a14cfd165229ec5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This logically belongs inside the session handling code, not
in the transport-specific layer.
Change-Id: I93b2271f38dbfc742162c98c40acb153c7e9022a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Track and print out the currently outstanding I/O in debug
mode with rdma tracing enabled.
Change-Id: I0a1f0cd6e22dbf21e18ca0ec7d0c2c6d194509e3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Instead of reimplementing completion queue handling,
nvmf_rdma_accept can now call the general-purpose poller.
Change-Id: Id2c899d1e500a8cb8491e51cc101a1bf0e167764
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
AER breaks our current model of request/completion pairs.
Temporarily handle it by immediately re-posting the
capsule while we work on a real solution.
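A sketch of the stopgap (names are illustrative; 0x0c is the
spec-defined Asynchronous Event Request opcode):

    #include <stdint.h>

    #define NVME_OPC_ASYNC_EVENT_REQUEST 0x0c

    struct conn;
    struct recv_slot;
    void repost_recv(struct conn *c, struct recv_slot *s); /* illustrative */

    static void
    handle_capsule(struct conn *c, struct recv_slot *s, uint8_t opc)
    {
        if (opc == NVME_OPC_ASYNC_EVENT_REQUEST) {
            /* AER produces no immediate completion, so re-post the
             * receive now or its slot stays consumed indefinitely. */
            repost_recv(c, s);
            return;
        }
        /* ... normal request processing ... */
    }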
Change-Id: Ie7a4d88030b6fff5a11c4697eec0f024f9737f27
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Inline this code into the places that called it. These two
spots will be combined into a single path in a later patch.
Change-Id: Ice2f009ad56b783dc28ebbf1abbb877ce6000293
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is an RDMA-specific operation, so hide it inside
the transport-specific layer.
Change-Id: Iaa097e8dde78d820547b3a39e9717c992581340b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
These can be done at the same time now that the queue depth
is known ahead of time.
Change-Id: I7ecef30ebb4311e0a1c88f37461d34534f8600bf
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Calculate queue depth into a local variable without
touching the rdma_conn.
Change-Id: Ie804ed39ddecbf59015a4e4f7aa127f1381d9080
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Make sure the trace history that is exported via shared memory is always
the same size, regardless of DPDK configuration.
This also removes the need to include DPDK headers from spdk/trace.h
(so other files have to be fixed up to include what they use).
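A sketch of a fixed layout (the constants here are illustrative,
not the real values):

    #include <stdint.h>

    #define TRACE_MAX_LCORE 128  /* fixed, not DPDK's RTE_MAX_LCORE */
    #define TRACE_ENTRIES   4096

    struct trace_entry {
        uint64_t tsc;
        uint16_t tpoint_id;
        /* ... */
    };

    /* Exported via shared memory; sizeof() no longer depends on how
     * DPDK was configured. */
    struct trace_history {
        uint64_t next_entry;
        struct trace_entry entries[TRACE_ENTRIES];
    };

    struct trace_histories {
        struct trace_history per_lcore[TRACE_MAX_LCORE];
    };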
Change-Id: I32f88921fd95c64a9d1f4ba768ae75e2ca5d91da
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
It is not currently configurable, but this will allow us to make the
discovery subsystem have config options (e.g. which lcore to run on).
Change-Id: I788a64ba4462b023453191e509ce8de59fd90ae4
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is a much simpler approach and is only slightly
less efficient.
Change-Id: I909de376d576a74156c1be447e90e7dbc240f025
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Drop the redundant controller ready check.
nvmf_process_io_cmd() was checking CSTS.RDY, but this is not necessary,
since its only caller, spdk_nvmf_request_exec(), is already checking
CC.EN, which always matches RDY in our virtual controller
implementation.
The initialization of status is a dead store:
nvmf_complete_cmd() always writes the full response, and the only other
branch is the return immediately below the call, which also sets status.
Change-Id: I1ec2b8a225a91c4b2997d8ab4f45d050cc216de3
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
No reason to use DPDK in this file just for an equivalent to assert().
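For example (the variable is illustrative; note that unlike
RTE_VERIFY(), assert() compiles out under NDEBUG, which is
acceptable here):

    #include <assert.h>
    #include <stddef.h>

    static void
    check_conn(void *conn)
    {
        /* Before: RTE_VERIFY(conn != NULL); pulled in rte_debug.h */
        assert(conn != NULL);
    }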
Change-Id: Ic6932a16d0a36cd1a3cb25c8cc5e295c59f3e2db
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>