The NVMe submission queue head wraparound point can be determined in the
generic NVMe over Fabrics layer; it should not be using the RDMA
connection queue depth.
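A minimal sketch of the idea (structure and field names are illustrative,
not the actual SPDK symbols): the generic layer wraps sq_head at the queue
depth it negotiated itself, not at the RDMA connection's queue depth.

    #include <stdint.h>

    struct nvmf_session {
            uint16_t sq_head;       /* next SQ slot the target will consume */
            uint16_t queue_depth;   /* depth negotiated at the fabrics level */
    };

    /* Advance the submission queue head in the generic NVMf layer,
     * wrapping at the session's own queue depth. */
    static void
    nvmf_advance_sq_head(struct nvmf_session *session)
    {
            session->sq_head++;
            if (session->sq_head == session->queue_depth) {
                    session->sq_head = 0;
            }
    }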
Change-Id: I9da8f09e4f057f8fdc1ff4c6cc5f48cea7123e11
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Report the maximum admin queue size correctly.
Change-Id: I52cad654bf59806e0abb8d869c22973647056617
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Use the max_queue_depth parameter rather than rdma_conn->max_queue_depth
so that we can start to eliminate rdma_conn->max_queue_depth.
Change-Id: I1670c634e6d12aa004fb5a10338b7624850fbc4a
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
There were two unchecked allocations in the nvmf library. Check
for allocation failures.
Change-Id: Ic6b3104d825dba1ee6bd1748fa99e132702f300c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This fixes a static analysis warning for unsigned/signed
mismatch.
Change-Id: I49bd8d6d195f13b402e14a85503a5de6114f5b7f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is the size of a logical block in bytes; 4 GB is more than enough.
Also allows cleaning up casts to uint32_t in the SCSI translation layer.
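For illustration only (the struct and the put_be32() helper are hypothetical
placeholders): with the block length stored as a 32-bit value, the SCSI
translation layer no longer needs a narrowing cast.

    struct example_bdev {
            uint64_t blockcnt;    /* number of logical blocks */
            uint32_t blocklen;    /* size of a logical block in bytes; 4 GB is ample */
    };

    /* SCSI READ CAPACITY(10): the block size already fits in 32 bits, no cast. */
    put_be32(&readcap->block_size, bdev->blocklen);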
Change-Id: I3ec2e2f41fd378f1a83f31aac25c46ef780f63e9
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The large buffer pool allocation was using the per-connection queue
depth, whereas the RDMA memory region registration was using the global
RDMA max queue depth. These sizes need to match, so use the global RDMA
max queue depth for both calls.
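A rough sketch of the intended sizing (variable and function names are
hypothetical): both the pool allocation and the ibv_reg_mr() call are derived
from the same global RDMA max queue depth, so every pooled buffer lands inside
the registered memory region.

    #include <errno.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    static int
    register_large_buf_pool(struct ibv_pd *pd, uint16_t rdma_max_queue_depth,
                            size_t large_buf_size, void **bufs_out,
                            struct ibv_mr **mr_out)
    {
            size_t region_len = (size_t)rdma_max_queue_depth * large_buf_size;
            void *large_bufs = calloc(rdma_max_queue_depth, large_buf_size);

            if (large_bufs == NULL) {
                    return -ENOMEM;
            }

            /* Register exactly the region backing the pool. */
            *mr_out = ibv_reg_mr(pd, large_bufs, region_len,
                                 IBV_ACCESS_LOCAL_WRITE |
                                 IBV_ACCESS_REMOTE_READ |
                                 IBV_ACCESS_REMOTE_WRITE);
            if (*mr_out == NULL) {
                    free(large_bufs);
                    return -1;
            }

            *bufs_out = large_bufs;
            return 0;
    }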
Change-Id: Iae161b719e09e19ca3e81df6593b68a4a2e86614
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is a step toward enabling shared access to an SPDK NVMe
device from multiple processes using DPDK's multi-process
framework.
Change-Id: I57d5eec158b42addc1036bd2583596471a467a95
Signed-off-by: GangCao <gang.cao@intel.com>
Similar to our NVMf target, this is an iSCSI target that
can interoperate with the standard Linux and Windows iSCSI
initiators.
Change-Id: I6961c5ef99f7b161c396330ed5b543ea29b0ca7b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is a useful abstraction when you want to plug in
a userspace networking layer instead of using the kernel.
Change-Id: I7039d2987e6abad9dcd1987fa105282b1598e2f5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The public header file was missing some required definitions.
Change-Id: Ic4f8028367b1e21ea00c02660ca36be28da54e37
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Use the new timer-based poller functionality to replace rte_timer.
Change-Id: Ic40653306cc73b40139fe18e06bab29b35721a43
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Allow pollers to be scheduled to run periodically every N
microseconds instead of on every iteration of the reactor loop.
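A minimal sketch of how a period can be honored inside the reactor loop,
using TSC-based timekeeping (the structure and names are illustrative, not
the actual SPDK poller API).

    #include <stdint.h>

    struct example_poller {
            int      (*fn)(void *ctx);
            void      *ctx;
            uint64_t   period_ticks;    /* 0 = run on every reactor iteration */
            uint64_t   next_run_tick;
    };

    /* period_ticks would be period_microseconds * tsc_hz / 1000000 */
    static void
    reactor_maybe_run_poller(struct example_poller *p, uint64_t now_tsc)
    {
            if (p->period_ticks != 0 && now_tsc < p->next_run_tick) {
                    return;     /* timer-based poller is not due yet */
            }
            p->fn(p->ctx);
            if (p->period_ticks != 0) {
                    p->next_run_tick = now_tsc + p->period_ticks;
            }
    }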
Change-Id: Iaea3e98965d81044e6dc5ce5f406bcb7a455289e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Just getting a reference to a bdev should not claim it.
Change-Id: I21e07160662490ec95b52fa31ea1d2ae93a21f09
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Combine the necessary functionality with the main bdev file.
Change-Id: I96d796bc87ac2a8688cdf1fd3c16d2a7c8aef730
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The rte_ring used for pollers is already single-producer and
single-consumer, so it is not providing any thread safety guarantees.
All modifications to the active_pollers ring are done from the core that
is running the reactor (via events). This means the rte_ring can be
replaced with a simpler intrusive linked list.
This simplifies the removal of pollers in the middle of the list and
avoids extra allocations for the ring.
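As a sketch (field and type names are hypothetical), the BSD sys/queue.h
TAILQ macros give an intrusive list: each poller embeds its own link, so
registration allocates nothing and a poller can be unlinked from the middle
of the list in O(1).

    #include <sys/queue.h>

    struct example_poller {
            int (*fn)(void *ctx);
            void *ctx;
            TAILQ_ENTRY(example_poller) link;    /* intrusive link, no ring slot */
    };

    static TAILQ_HEAD(, example_poller) g_active_pollers =
            TAILQ_HEAD_INITIALIZER(g_active_pollers);

    /* Only ever called on the reactor's own core (via events), so no locking. */
    static void
    poller_register(struct example_poller *poller)
    {
            TAILQ_INSERT_TAIL(&g_active_pollers, poller, link);
    }

    static void
    poller_unregister(struct example_poller *poller)
    {
            TAILQ_REMOVE(&g_active_pollers, poller, link);   /* works mid-list too */
    }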
Change-Id: Ica149b7a1668a8af1e6ca8f741c48f2217f6f9bf
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Virtualized NVMe devices are reported through the NVMe over Fabrics
specification with NVMe version 1.2.1. In direct mode, the underlying NVMe
device may have a lower version, such as 1.0, which does not support the
Identify Namespace List command, so add a helper function here to emulate
such commands on behalf of the initiator.
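A rough sketch of the emulation with placeholder helpers (ns_is_active() and
max_nsid are illustrative): Identify CNS 02h, the active namespace ID list,
was only added in NVMe 1.1, so for a 1.0 device the target fills in the
1024-entry list itself instead of forwarding the command.

    #include <stdint.h>
    #include <string.h>

    #define NVME_IDENTIFY_ACTIVE_NS_LIST 0x02   /* Identify CNS 02h, NVMe 1.1+ */

    static void
    emulate_identify_active_ns_list(uint32_t max_nsid, uint32_t ns_list[1024])
    {
            uint32_t nsid, count = 0;

            memset(ns_list, 0, 1024 * sizeof(uint32_t));
            for (nsid = 1; nsid <= max_nsid && count < 1024; nsid++) {
                    if (ns_is_active(nsid)) {
                            ns_list[count++] = nsid;
                    }
            }
    }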
Change-Id: I226f4f34bf61017f538d2dd80332f1d054a501f1
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Allow higher queue depths by permitting many more send/recv
operations than read/write operations.
Change-Id: I66c424a6463e5e09be6d5463667241ce9271404b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
The target can only provide updates to sq_head inside
completions. Therefore, we must update sq_head prior
to sending the completion, or we will incorrectly run into
queue-full scenarios.
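A minimal ordering sketch (names are illustrative, and post_completion() is a
hypothetical transport-specific send): consume the submission queue slot
first, then place the updated head in the completion's SQHD field before the
completion is posted, so the host's accounting of free slots never lags.

    /* 1. Consume the SQ entry: advance sq_head, wrapping at the queue depth. */
    session->sq_head = (session->sq_head + 1 == session->queue_depth) ?
                       0 : session->sq_head + 1;

    /* 2. The completion carries the already-updated head. */
    cpl->sqhd = session->sq_head;
    cpl->cid  = cmd->cid;

    /* 3. Only now hand the completion to the transport. */
    post_completion(conn, cpl);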
Change-Id: If2925d39570bbc247801219f352e690d33132a2d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This allows the target to poll for internal completions
at higher priority.
Change-Id: I895c33a594a7d7c0545aa3a8405a296be3c106fb
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This ensures that the data buffers are not in use
when we go to send the completion.
Change-Id: I30467b3e3964001150f81b21e5b695dcd0974b0c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
This is useful for holding session-wide buffer pools.
Change-Id: I7024da24b210a2205bf1e159d5935e0093b81120
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
For small SGLs, even if they are keyed and not inline, use the
buffer we allocated for inline data.
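A sketch of the decision with hypothetical field and helper names: when a
keyed SGL is small enough to fit in the per-request buffer already set aside
for in-capsule data, reuse that buffer instead of drawing from the large pool.

    if (sgl_length <= rdma_req->inline_buf_size) {
            /* Small keyed SGL: reuse the buffer allocated for inline data. */
            req->data = rdma_req->inline_buf;
    } else {
            /* Large transfer: take a buffer from the session-wide pool. */
            req->data = get_large_buffer(session);
            if (req->data == NULL) {
                    return -ENOMEM;    /* or defer until a buffer is returned */
            }
    }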
Change-Id: I5051c43aabacb20a4247b2feaf2af801dba5f5a9
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Read/Write depth is much lower than Send/Recv depth.
Calculate them separately to prepare for supporting
a larger number of receives than read/writes.
Currently, the target still only exposes a queue depth
equal to the read/write depth.
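A sketch of the separate calculation using the standard libibverbs device
attributes (the surrounding variable names are hypothetical): send/recv work
requests are bounded by max_qp_wr, while outstanding RDMA reads per QP are
bounded by the much smaller max_qp_rd_atom.

    #include <infiniband/verbs.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    static int
    calc_depths(struct ibv_context *context, int requested_depth,
                int *max_queue_depth, int *max_rw_depth)
    {
            struct ibv_device_attr attr;

            if (ibv_query_device(context, &attr) != 0) {
                    return -1;
            }

            /* Plenty of room for SEND/RECV work requests... */
            *max_queue_depth = MIN(requested_depth, attr.max_qp_wr);
            /* ...but far fewer outstanding RDMA READ operations per QP. */
            *max_rw_depth = MIN(requested_depth, attr.max_qp_rd_atom);

            return 0;
    }

For now, the queue depth advertised to the host would still be the
read/write depth.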
Change-Id: I08a7434d4ace8d696ae7e1eee241047004de7cc5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>