rdma: clean up the create_transport rpc docs

Just a couple minor clarifications.

Change-Id: I6c368d263296f742d5bfb0df431d3bf40c800c6c
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452270
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Author:    Seth Howell <seth.howell@intel.com>, 2019-04-26 13:40:35 -07:00
Committer: Ben Walker
Parent:    eb6006c242
Commit:    9d0e11d4a2

2 changed files with 3 additions and 2 deletions

@@ -3485,6 +3485,7 @@ io_unit_size | Optional | number | I/O unit size (bytes)
 max_aq_depth | Optional | number | Max number of admin cmds per AQ
 num_shared_buffers | Optional | number | The number of pooled data buffers available to the transport
 buf_cache_size | Optional | number | The number of shared buffers to reserve for each poll group
+max_srq_depth | Optional | number | The number of elements in a per-thread shared receive queue (RDMA only)
 ### Example:
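
The hunk above extends the create_transport parameter table in the JSON-RPC reference. As a purely illustrative aid that is not part of this patch, the sketch below assembles the kind of request that reference describes, exercising the newly documented RDMA-only max_srq_depth parameter; the chosen values and the print-only form are assumptions, not recommendations.

# Sketch only: assemble and print an nvmf_create_transport JSON-RPC request
# using parameters from the table above. All values are illustrative.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_create_transport",
    "params": {
        "trtype": "RDMA",
        "num_shared_buffers": 4096,  # pooled data buffers available to the transport
        "buf_cache_size": 32,        # shared buffers reserved for each poll group
        "max_srq_depth": 4096        # elements per per-thread shared receive queue (RDMA only)
    }
}
print(json.dumps(request, indent=2))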

@@ -57,8 +57,8 @@ def nvmf_create_transport(client,
 io_unit_size: I/O unit size in bytes (optional)
 max_aq_depth: Max size admin queue per controller (optional)
 num_shared_buffers: The number of pooled data buffers available to the transport (optional)
-buf_cache_size: The number of shared buffers to reserve for each poll group(optional)
-max_srq_depth: Max number of outstanding I/O per shared receive queue (optional)
+buf_cache_size: The number of shared buffers to reserve for each poll group (optional)
+max_srq_depth: Max number of outstanding I/O per shared receive queue - RDMA specific (optional)
 Returns:
 True or False
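
For completeness, a hypothetical usage sketch of the Python helper whose docstring is touched above. It assumes SPDK's scripts/ directory is on PYTHONPATH and that a target is listening on the default /var/tmp/spdk.sock RPC socket; the parameter values are illustrative assumptions, not recommendations.

# Hypothetical sketch: call the scripts/rpc nvmf_create_transport helper.
from rpc.client import JSONRPCClient
from rpc.nvmf import nvmf_create_transport

client = JSONRPCClient('/var/tmp/spdk.sock')  # assumed default SPDK RPC socket
nvmf_create_transport(client,
                      trtype='RDMA',
                      buf_cache_size=32,    # shared buffers reserved per poll group
                      max_srq_depth=4096)   # RDMA-specific per-thread SRQ depth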