Change the way we increase poll group reference counts
for round-robin scheduling.
So far we increased them whenever someone called
vhost_get_poll_group(). This worked fine for Vhost-Block,
which picks a new poll group for each session. Vhost-SCSI,
however, picks only one poll group for all sessions on
a vhost device. This means that some threads will have
multiple Vhost-SCSI pollers but will still appear to the
vhost scheduler as if they had only one.
To fix it, increase poll group refcnt only when sessions
are really being started - in vhost_session_start_done().
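A minimal sketch of the idea (the helper and field names here are
illustrative, not the exact SPDK symbols):

    /* Before: the refcount counted lookups. Vhost-SCSI looks up a poll
     * group once per device, so threads running multiple Vhost-SCSI
     * sessions were undercounted. */
    struct vhost_poll_group *
    vhost_get_poll_group(struct spdk_cpuset *cpumask)
    {
            struct vhost_poll_group *pg = choose_least_used(cpumask);

            pg->ref++;
            return pg;
    }

    /* After: the refcount counts sessions that actually started. */
    void
    vhost_session_start_done(struct spdk_vhost_session *vsession, int response)
    {
            if (response == 0) {
                    vsession->poll_group->ref++;
            }
    }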
Change-Id: I60f0d2101239e5a91138a5afd30c51dc1ccf7c2e
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466733
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Currently vhost_dev_foreach_session() accepts a single
callback function both for iterating through all active
sessions and for signaling the end of iteration (it is
called one last time with the vsession param == NULL).
Now that the final signal has completely different
semantics and is called on a specific thread, it makes
sense to put it in a separate function.
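Sketched signatures for the split (the exact SPDK prototypes may
differ slightly):

    /* Called once for each active session, on that session's thread. */
    typedef int (*spdk_vhost_session_fn)(struct spdk_vhost_dev *vdev,
                                         struct spdk_vhost_session *vsession,
                                         void *arg);

    /* Called once after the iteration completes, on a specific thread,
     * instead of invoking the session callback with vsession == NULL. */
    typedef void (*spdk_vhost_dev_fn)(struct spdk_vhost_dev *vdev, void *arg);

    void vhost_dev_foreach_session(struct spdk_vhost_dev *vdev,
                                   spdk_vhost_session_fn fn,
                                   spdk_vhost_dev_fn cpl_fn, void *arg);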
While here, remove the one-line description of the
spdk_vhost_session_fn typedef. It wasn't helpful anyway.
Change-Id: I56b97180110874a813e666f964bb51c39a8ce6bb
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466732
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Currently vhost_dev_foreach_session() accepts a single
callback function both for iterating through all active
sessions and for signaling the end of iteration (it is
called one last time with the vsession param == NULL).
Now that the final signal has completely different
semantics and is called on a specific thread, it makes
sense to put it in a separate function.
In this patch we prepare separate functions for the final
call, but still call them in the original callback. In
a separate patch we'll start passing both functions
directly to foreach_session().
Change-Id: I9f4338d9696f7bd15ca2d6655c6a3851569aff75
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466731
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The function could never fail, so make it return void
rather than int. This serves as cleanup.
Change-Id: I16a857ecee8d162f546fd097acaa2e66d51ebffa
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466730
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Historically the callbacks from vhost_dev_foreach_session()
could be called with the vdev argument == NULL, which meant
that the device was removed after enqueuing the event
and before consuming it. Now we keep track of pending
asynchronous operations on each vhost device and don't
allow removing it while there are any unconsumed events,
so the vdev == NULL checks are redundant. Remove them.
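The removed pattern looked roughly like this (the callback name is a
stand-in for the real ones):

    static int
    example_foreach_cb(struct spdk_vhost_dev *vdev,
                       struct spdk_vhost_session *vsession, void *arg)
    {
            /* Redundant now: pending async operations keep the device
             * alive, so vdev is never NULL here. */
            if (vdev == NULL) {
                    return 0;
            }

            /* ... actual per-session work ... */
            return 0;
    }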
Change-Id: I7aa3785080d20ed06e008c081d3f37a949228f5a
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466729
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Remove them all at once. The spdk_ prefix should only be
applied to publicly exported functions.
Change-Id: Ib6d2bd0954ec5cb7c8cf253d79b9d3cd8aa0eeef
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466728
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
As SSDs with namespace management become more prevalent,
improve the identify utility to clarify things like the
total number of namespaces, and that data is printed out
for active namespaces only.
While here, change an active namespace check to an
assert. The print_namespace() function is only
called for active namespaces, so the check and its
message were confusing: the message could never actually
be seen, even with SSDs that have inactive namespaces.
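The changed check, approximately (spdk_nvme_ns_is_active() is the
public API; the rest of the function is abbreviated):

    #include <assert.h>

    static void
    print_namespace(struct spdk_nvme_ns *ns)
    {
            /* Callers only pass active namespaces, so assert rather than
             * print an "Inactive namespace" message nobody would see. */
            assert(spdk_nvme_ns_is_active(ns));

            /* ... print namespace data ... */
    }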
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ib21772579b94c5bcd1c518adb9d3341f4bf824f6
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466818
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This unifies buffer management among transports further and is a
preparation to make buffer allocation asynchronous.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I8c588eeac4081f50fe32605feb7352f72c628d95
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466847
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The I/O buffer cache is per transport poll group now. Hence
it is reasonable to move pending_data_buf_queue from struct
spdk_nvmf_fc_conn to struct spdk_nvmf_fc_poll_group, and
this patch does that. This change follows the RDMA and TCP
transports.
Further unification among transports will be done in
subsequent patches.
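Roughly, the move looks like this (struct contents abbreviated):

    /* Before: one queue per FC connection. */
    struct spdk_nvmf_fc_conn {
            /* ... */
            STAILQ_HEAD(, spdk_nvmf_request) pending_data_buf_queue;
    };

    /* After: one queue per FC poll group, matching RDMA and TCP. */
    struct spdk_nvmf_fc_poll_group {
            /* ... */
            STAILQ_HEAD(, spdk_nvmf_request) pending_data_buf_queue;
    };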
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic857046be8da238cb3ff9e89b83cdac5f6349bcf
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466844
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The pointer to the transport is set in struct
spdk_nvmf_transport_poll_group by nvmf_transport_poll_group_create()
after nvmf_fc_poll_group_create() returns. Hence use it and remove
the ftransport pointer from struct nvmf_fc_poll_group.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9f2b2ade77afa18d0e97949fc0c2403eb000cdad
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467060
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The RDMA and TCP transports use rtransport and ttransport,
respectively. So the FC transport changes to use ftransport
instead of fc_transport.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I7d98eb2f6efbae7e2b4784f31b9de5e1a81bc2ac
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467059
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Both the RDMA and TCP transports use group for this case. Hence
the FC transport changes to use group instead of tp_poll_group.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic4b401179da506bb204c3ec48650db87f91fe72a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466843
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The pointer to the nvmf_poll_group is set in nvmf_transport_poll_group_create()
after nvmf_fc_poll_group_create() returns. Hence holding it in
struct spdk_nvmf_fc_poll_group is redundant, and it can be removed.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I7087c5cdb94b0b0c5f51b0b63b631c08266c90d0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466842
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The RDMA and TCP transports use rgroup and tgroup, respectively,
for this case. Hence the FC transport changes to use fgroup
instead of fc_poll_group.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I91b7ad6a1c6e45caf92801b0635b18d48b3c9810
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466841
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is just a little convenience thing for people attempting to do an
automated make of SPDK from outside of the spdk directory. make and git
have -C options for the same purpose.
Change-Id: Id4b3768a6aaeb0f7149bb78c2fd2467faecbda97
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466348
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Start processing ANM events only after the device is fully initialized.
Otherwise some of the structures are partially filled and can be
interpreted incorrectly.
Change-Id: Ia741730cf15d44d76ce8afa7955e6a5bf42ca42b
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466935
Reviewed-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Track the number of acquired but not yet submitted write buffer entries
to be able to correctly calculate the required number of entries to be
padded.
Change-Id: Ie201681937ad1d03ec125aa5912311c54a7e35c9
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466934
Reviewed-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
When recovering the data from the non-volatile cache, the data inside
the volatile cache needs to be flushed before flushing active bands.
Otherwise, if the number of blocks in a band is smaller than the number
of blocks inside the volatile cache, part of the data may not get
flushed.
Change-Id: I4e99709c8c2a526a928578870d7fbd5fef37db02
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466883
Reviewed-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Double quoting the string in "$IO_QUEUES" results in bash
expanding the variable to the single word '-i 8' instead of
the two words -i 8. This makes nvme-cli treat the string as
a single option rather than an option and its value, and
results in an "Invalid argument" error.
Change-Id: Ibc2f6324baa3c90aa7bf43128c5e7486e38c1fc1
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467112
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Keeping a global discovery log page was meant to be a time saving
mechanism, but in the current implementation, it doesn't work properly
and can cause undesirable behavior and potential crashes. There are two
main problems with keeping a global log page.
1. Admin qpairs can be assigned to any SPDK thread. This means that when
multiple initiators connect to the target and request the discovery log,
they can both be running through the spdk_nvmf_ctrlr_get_log_page
function at the same time. If the discovery generation counter is
incremented while these accesses are occurring, both threads can try
to update the log at the same time. This results in both of them trying
to free the old log page (a double free) and to install their own log
as the new one (a possible memory leak).
2. The second problem is that each host is supposed to get a unique
discovery log based on the subsystems to which it has access.
Currently the code relies on whether the discovery log page offset in
the request is equal to 0 to determine whether it should build a new
discovery log page or use the cached one. This is inherently faulty
because it relies on an initiator-provided value to determine what
information to serve from the log page. An initiator could easily send
a discovery request with an offset greater than 0 on purpose to procure
most of a log page provided to another host.
Overall, I think it's safest to not cache the log page at all anymore
and instead build a fresh log page on the local thread each time.
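A sketch of the per-call approach (the helper names below are
hypothetical, not SPDK functions):

    /* Build, copy, and free a private log page on the current thread.
     * There is no shared state to race on, and the requested offset can
     * only ever reveal this host's own log. */
    size_t log_len;
    void *log = nvmf_build_discovery_log(tgt, hostnqn, &log_len); /* hypothetical */

    if (log != NULL && offset < log_len) {
            size_t copy_len = spdk_min(length, log_len - offset);

            memcpy(req->data, (char *)log + offset, copy_len);
    }
    free(log);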
Reported-by: Curt Bruns <curt.e.bruns@intel.com>
Change-Id: Ib048e26f139927d888fed7019e0deec346359582
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466839
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The new name comes first, the old name comes last.
This eliminates warnings at app start time for the
set_bdev_nvme_options and set_bdev_nvme_hotplug
aliases.
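For example, in the bdev_nvme RPCs (the macro takes the new name
first, then the deprecated alias):

    SPDK_RPC_REGISTER_ALIAS_DEPRECATED(bdev_nvme_set_options, set_bdev_nvme_options)
    SPDK_RPC_REGISTER_ALIAS_DEPRECATED(bdev_nvme_set_hotplug, set_bdev_nvme_hotplug)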
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia5ff3012e97bf59a6ecd9282dfc5cc0796ca7630
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466967
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
By default nvme-cli will try to connect to a subsystem using
a number of I/O queues equal to the number of available cores.
With more powerful CPUs and HT enabled, this results in
more time needed to allocate resources.
Using the "-i" option for nvme connect allows us to control
the number of I/O queues and avoid timeout issues when testing.
Fixes issue #927
Change-Id: Ibb9b89b2d8be84fba29360db0f781cc99ae5c5b1
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466389
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
When building SPDK on aarch64, the build failed with errors about
the invalid target attributes 'bmi', 'arch=core2' and 'arch=atom'.
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Change-Id: I369e61005e6550045e24d84147a71b5dd25c4a52
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466848
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In the past, memory in SPDK could be unregistered in
different chunks than it was registered in, so to account
for that the vtophys code used to register each hugepage
(2MB chunk of memory) separately with the VFIO driver. This
kept the code simple.
Now that memory in SPDK can only be unregistered in the same
chunks it was registered in, we no longer have to register
each hugepage with VFIO separately. We can register the
entire memory region with just a single VFIO ioctl instead,
so that's what we'll do now.
This serves as an optimization, as we obviously send fewer
ioctls now, but most importantly it prevents SPDK from
hitting a VFIO registration limit that was introduced
in Linux 5.1. [1]
The default limit is 65535, which results in SPDK being able to
make only the first 128GB of memory DMA-able. This is most
problematic for vhost where we need to register the memory
of all the VMs.
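For reference, the underlying kernel interface is the standard VFIO
type1 DMA map ioctl; a sketch (container_fd, vaddr, iova and
region_size are assumed to be set up elsewhere):

    #include <linux/vfio.h>
    #include <sys/ioctl.h>

    /* One mapping for the entire registered region instead of one
     * VFIO_IOMMU_MAP_DMA call per 2MB hugepage. */
    struct vfio_iommu_type1_dma_map dma_map = {
            .argsz = sizeof(dma_map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)vaddr,
            .iova = iova,
            .size = region_size, /* whole region, not 0x200000 */
    };

    if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map) != 0) {
            /* handle error */
    }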
Fixes #915
[1] 492855939bdb59c6f947b0b5b44af9ad82b7e38c
("vfio/type1: Limit DMA mappings per container")
Change-Id: Ida40306b2684e20daa2fd8d12e0df2eef5a4bff1
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/432442
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
We are now able to check contiguity for regions larger
than 2MB.
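A usage sketch, assuming spdk_mem_map_translate()'s size parameter
now reports the contiguous length of the mapping:

    uint64_t size = 4 * 1024 * 1024; /* ask about a 4MB region */
    uint64_t paddr = spdk_mem_map_translate(map, vaddr, &size);

    /* On return, size holds how much of the region starting at vaddr
     * is contiguously mapped; the answer is no longer capped at 2MB. */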
Change-Id: I738ff451d534075c944972918d08e5e0cadea4f5
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466073
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This was never rebased after the shellcheck changes
went in. Fix it so that master can pass check_format.sh
again.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic124c8e77049f08ab29a5b4a902e7718d378dd22
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466951
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Scan-build is really pessimistic and assumes that
mempool functions can dequeue NULL buffers even if they
return success. This is obviously a false positive, but
the mempool dequeue is done in a DPDK inline function
that we can't decorate with the usual assert(buf != NULL).
Instead, under #ifdef __clang_analyzer__ we'll now
preinitialize the dequeued buffer array with some dummy
objects.
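The workaround looks roughly like this (array size and names are
illustrative):

    void *bufs[16];
    size_t i;

    #ifdef __clang_analyzer__
            /* Keep scan-build from assuming the buffers can be NULL;
             * spdk_mempool_get_bulk() never leaves them NULL on success. */
            for (i = 0; i < 16; i++) {
                    bufs[i] = (void *)0x1;
            }
    #endif

    if (spdk_mempool_get_bulk(pool, bufs, 16) != 0) {
            return -ENOMEM;
    }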
Change-Id: I070cfbfd39b6a66d25cd5f9a7c0dfbfadc4cb92a
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/463232
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Most variables related to the I/O buffer are in struct
spdk_nvmf_request now, so we can pass nvmf_request instead of
nvmf_rdma_request to nvmf_rdma_request_fill_buffers; this patch
does that.
Additionally, we use the cached pointer to the nvmf_request in
spdk_nvmf_rdma_request_fill_iovs, which is the caller of
nvmf_rdma_request_fill_buffers, in this patch.
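The change to the helper's signature, roughly (parameter lists are
approximate, not the exact SPDK prototypes):

    /* before: the transport-specific request is passed around */
    static int
    nvmf_rdma_request_fill_buffers(struct spdk_nvmf_rdma_transport *rtransport,
                                   struct spdk_nvmf_rdma_poll_group *rgroup,
                                   struct spdk_nvmf_rdma_request *rdma_req,
                                   uint32_t length);

    /* after: the generic request carries everything that is needed */
    static int
    nvmf_rdma_request_fill_buffers(struct spdk_nvmf_rdma_transport *rtransport,
                                   struct spdk_nvmf_rdma_poll_group *rgroup,
                                   struct spdk_nvmf_request *req,
                                   uint32_t length);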
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ia7664e9688bd9fa157504b4f5075f79759d0e489
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466212
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Most variables related to the I/O buffer are in struct
spdk_nvmf_request now, so we can pass nvmf_request instead of
nvmf_tcp_req to nvmf_tcp_req_fill_buffers; this patch does that.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I00eff578a98891e99fcb9a3aafa3d99126d6f1c1
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466089
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Most variables related to the I/O buffer are in struct
spdk_nvmf_request now, so we can pass nvmf_request instead of
nvmf_fc_request to nvmf_fc_request_fill_buffers; this patch does that.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ibe87e7641e5c364b20a6d877ce7928c612b0b83a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466088
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
This is a small performance optimization and an effort to unify I/O
buffer management further among transports.
It is ensured that the request is at the head of the STAILQ when
nvmf_fc_request_execute() completes successfully.
Hence change TAILQ_REMOVE to STAILQ_REMOVE_HEAD for this case.
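Since the request is known to sit at the head of the queue, the
removal becomes a constant-time head dequeue (macros from
sys/queue.h; the variable and entry field names are illustrative):

    /* before: general removal from a TAILQ */
    TAILQ_REMOVE(&fgroup->pending_data_buf_queue, req, link);

    /* after: pop the head of the STAILQ directly */
    STAILQ_REMOVE_HEAD(&fgroup->pending_data_buf_queue, link);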
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: If982842bf53ba00426a854a18eaadf8a1b8d642d
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466676
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
This is an effort to unify I/O buffer management further among
transports. The RDMA and TCP transports have named their pending
queue pending_data_buf_queue, so the FC transport follows them.
The next patch will change pending_data_buf_queue to use a STAILQ
instead of a TAILQ.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I57c3c678a1e92ec262eb8940418529a62b6768c3
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466675
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
This is a small performance optimization and an effort to unify
I/O buffer management further among transports.
It is ensured that the request is at the head of the STAILQ when
spdk_nvmf_tcp_send_c2h_data() is called or when the
TCP_REQUEST_STATE_NEED_BUFFER case is executed in spdk_nvmf_tcp_req_process().
Hence change TAILQ_REMOVE to STAILQ_REMOVE_HEAD for these two cases.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I0b195874ac22a8d5ecfb283a9865d2615b7d5912
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466637
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Use the old RPC names in conf.json to check
that the RPC aliases are handled properly.
Change-Id: I466e39dd6b4adf140225060e46752250ae30ddb4
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465666
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
spdk_bdev_part_construct() must now be called on the
same thread that called spdk_bdev_part_base_construct().
This was always the case so far, and I don't see any other
case where thread safety could be useful, so just remove
it. The doxygen comments don't say anything about it either.
Even in the GPT case, we create a base directly as a part of
examine and then create part bdevs in the spdk_bdev_read()
completion callback, but that callback is always
executed on the same thread that issued the read.
Change-Id: I752f2a7f08c9faf4231ed53a46b700b33fa13697
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466024
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Not any different from the test_env version in this patch, but
it will be in the upcoming series as the 2MB tests are added,
since we will need to manipulate the size param.
Change-Id: I13a7544546b5296421fe5b696c8777e402bacc35
Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466076
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>