This reverts commit 6d6052ac96.
This approach is no longer necessary given the patch immediately
preceding this one.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2512 (master)
(cherry picked from commit 76aed8e4ff)
Change-Id: I5aab14346fa5a14dbf33c94ffcf88b045cdb4999
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2601
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This backports the change below to the SPDK v20.01.2 LTS release:
6a29c6a906
Change-Id: I9b7ed97f2a376af71578ccb5556231832863b255
Signed-off-by: Liang Yan <liang.z.yan@intel.com>
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2262
Tested-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
In the autotest, calling the kill_stub() function produces an error log
like this: "Device 0000:83:00.0 is still attached at shutdown!", so it is
better to detach the controller when the stub process exits.
However, after calling spdk_nvme_detach() in the stub process, there is another issue:
1. The NVMe stub runs as the primary process and sends 4 AERs.
2. The NVMe reset tool runs as the secondary process.
When an NVMe reset is issued from the secondary process, it aborts all the
outstanding requests, so the 4 AER requests from the primary process are
added to the active_proc->active_reqs list.
When spdk_nvme_detach() is then called to detach the controller,
nvme_ctrlr_free_processes() asserts at the end that the active requests
list of the active process data structure is empty.
Poll the completion queue before destructing the controller so that the
active requests list is flushed, as sketched below.
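A minimal sketch of the idea, assuming a controller handle named ctrlr
(the helper name and its placement are illustrative, not the exact change):

    #include "spdk/nvme.h"

    /* Drain any outstanding admin completions (e.g. the aborted AERs) so
     * that active_proc->active_reqs is empty before the controller is
     * detached and nvme_ctrlr_free_processes() runs its assertion. */
    static void
    stub_detach(struct spdk_nvme_ctrlr *ctrlr)
    {
            while (spdk_nvme_ctrlr_process_admin_completions(ctrlr) > 0) {
                    ;
            }
            spdk_nvme_detach(ctrlr);
    }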
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/977 (master)
(cherry picked from commit bad2c8e86c)
Change-Id: I0c473e935333a28d16f4c9fb443341fc47c5c24f
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1298
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
On Fedora 30 we have noticed VPP 19.04 related issues:
1) Error values returned by vppctl in non-interactive mode
do not reflect whether the command succeeded or failed.
vppctl ALWAYS returns 0, so the bash "-e" option is unable
to detect any errors.
2) We see intermittent pipefail errors (error 141) returned
by vppctl on disconnect from vpp, even though the commands are
executed successfully.
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1214 (master)
(cherry picked from commit 7b7e97604b)
Change-Id: Ie22ea24f7e81017089b899111724d338eeb81113
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1287
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
GCC9 complains:
./db/version_edit.h:134:71: error: implicitly-declared "constexpr
rocksdb::FileDescriptor::FileDescriptor(const
rocksdb::FileDescriptor&)" is deprecated [-Werror=deprecated-copy]
From what I can see, this could be fixed by explicitly
defining some constructors and assignment operators,
even if only as `= default;`. I didn't dig into
this further, so just ignore the warning for now.
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1082 (master)
(cherry picked from commit a5bcbbefcb)
Change-Id: Ia0ee0cc5fc1dce36f7098959d383b08855a825df
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1286
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Github issue 1165 details some issues we have with soft-roce and these
tests. Right now we are disabling them for build stability.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/483300 (master)
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
(cherry picked from commit e0f63b969d)
Change-Id: I3a9e28ff3cc1c6ac7d9aa91d93541e295514bb7b
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/483407
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Extent table and extent page descriptors are now
the default way clusters are serialized on disk.
With this patch the UT are run both with and without
the extent table.
Two asserts in the test were changed, since the expected count
depends on which type of serialization is used.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ica58fce6a4effd014d7dd40ee26edd0fa3196d0f
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/481901
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
This will be used to add another run of the whole UT suite
with extent pages on/off.
The next patch in the series will enable both types of
extent serialization for all UT.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ia8b4b8822edefb90ffc13cf777885f9af95e4545
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482170
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
All the non-ext version of the call does is call the ext
version with NULL as opts, in which case the default opts are
used instead (see the sketch below).
This change is needed by the next patch in the series,
where all blob opts in the UT will be initialized
with the use_extent_table parameter set to either
true or false.
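A minimal sketch of the pattern described above (signatures as in the
public blobstore API; the in-tree implementation may route through an
internal helper rather than calling the ext function directly):

    #include "spdk/blob.h"

    void
    spdk_bs_create_blob(struct spdk_blob_store *bs,
                        spdk_blob_op_with_id_complete cb_fn, void *cb_arg)
    {
            /* NULL opts: the ext path falls back to default spdk_blob_opts. */
            spdk_bs_create_blob_ext(bs, NULL, cb_fn, cb_arg);
    }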
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I62b642c1808b38a5f7c94a5900f25f4978a4ec39
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482859
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
The network operations are now asynchronous, so wait for the kernel
to stop using the NVMe partition after unmounting the filesystem.
The kernel is presumably checking for partition tables or unmapping.
Change-Id: Ibefe8e072823a230a896ecfd0adcd9d5fff2723f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482926
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
A pointer to a stack variable is passed as an argument to the
nvme_completion_poll_cb function; this variable is later used
to track completion in the spdk_nvme_wait_for_completion() function.
In the normal scenario, a request submitted to the admin queue is completed
within the function which submitted the request.
However, spdk_nvme_wait_for_completion() calls nvme_transport_qpair_process_completions(),
which may return an error to the caller. The caller may then exit from the
function which submitted the request, and the pointer to the stack variable
is no longer valid. The request may not have completed at that time
and may complete later (e.g. when the controller/qpair are destroyed),
which leads to a call to nvme_completion_poll_cb with a pointer
to an invalid stack variable.
Fix - dynamically allocate the status structure used to track the completion,
and add a new field to the nvme_completion_poll_status structure to mark
status objects that need to be freed in the completion callback
(see the sketch below).
Fixes #1125
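A minimal sketch of the ownership rule (structure and field names here are
illustrative stand-ins, not the exact driver internals):

    #include <stdbool.h>
    #include <stdlib.h>

    /* Illustrative stand-in for the heap-allocated completion status. */
    struct poll_status {
            bool done;
            bool timed_out; /* set by the waiter when it gives up on the request */
    };

    /* Completion callback: if the waiter already timed out and returned,
     * the callback owns the status object and must free it; otherwise it
     * just marks the request as done for the still-polling waiter. */
    static void
    completion_poll_cb(void *arg)
    {
            struct poll_status *status = arg;

            if (status->timed_out) {
                    free(status);
                    return;
            }
            status->done = true;
    }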
Change-Id: Ie0cd8316e1284d42a67439b056c48ab89f23e0d0
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/481530
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
After the blobs are created in both tests, only clusters indexed from
0 to 10 are supposed to be used: index 0 for md and 1-10 for the data
of the single blob, since it was created thick provisioned.
Cluster allocations are done in order, so if a bug caused too many
clusters to be claimed, the first extra one would be
the cluster with index 11.
This patch adds asserts after each bs load for the first data cluster
that is supposed to be used and for the first data cluster that is
not (see the sketch below).
During the tests those should remain constant.
When creating/deleting snapshots, the blobs are affected
by changing their type to/from thin_provisioned.
Asserts were added to verify their state at every blob open.
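An illustrative assert pattern, assuming the blobstore's used_clusters
field is an spdk_bit_array visible to the unit test (indices per the
description above):

    /* Cluster 10 is the last data cluster of the thick-provisioned blob
     * and must be claimed; cluster 11 must still be free after every bs load. */
    CU_ASSERT(spdk_bit_array_get(bs->used_clusters, 10) == true);
    CU_ASSERT(spdk_bit_array_get(bs->used_clusters, 11) == false);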
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I38418da55850d5b8468e578b3c42c5b817ae8045
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482661
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
g_bserrno from blob deletion or snapshot creation
should not be checked. It is implementation
dependent whether the error (or success) from those
calls actually means that enough data was persisted
on disk.
This test case should work even if we set the threshold
high enough that no failed operations occur.
On the other hand, some parts of those calls perform cleanup
while there is already enough metadata on disk.
For example, cleaning up unused clusters or pages issues
writes, but at that point the blobs are already in the expected
state.
Thus the assert for g_bserrno was removed, since a failure there
does not mean that recovery is impossible.
While here, the spdk_bs_unload() was removed. These UT are for
testing power fail safety. It should never be the case that
enough writes occurred in create/delete, yet the blobs are not
in the expected state.
If such a bug were introduced, it could be covered up
by spdk_bs_unload() cleanly closing the blobstore.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ic69c3061f2cc1fe04bf895632cdb11efb2fe6912
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482660
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Extent Page claim and insertion can be asynchronous
when cluster allocation happens due to writing to a new cluster.
In that case the lowest free cluster and the lowest free md page
are claimed, and a message is passed to the md_thread,
where both are inserted into the arrays and md_sync happens.
This patch adds parameters to pass the Extent Page offset
in that case.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I46d8ace9cd5abc0bfe48174c2f2ec218145b9c75
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479849
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If available, automatically use MSG_ZEROCOPY when sending on sockets.
Storage workloads involve data transfer sizes large enough that this is
a performance improvement regardless of the specific workload.
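A minimal sketch of the mechanism (not the sock module's exact code); the
fallback defines cover older kernel headers:

    #include <sys/socket.h>

    #ifndef SO_ZEROCOPY
    #define SO_ZEROCOPY 60          /* Linux value; missing from old headers */
    #endif
    #ifndef MSG_ZEROCOPY
    #define MSG_ZEROCOPY 0x4000000
    #endif

    /* Opt the socket in to zero-copy, then send with MSG_ZEROCOPY; the
     * kernel pins the pages and reports completion of each send later on
     * the socket's error queue. */
    static ssize_t
    send_zcopy(int fd, const void *buf, size_t len)
    {
            int one = 1;

            if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) != 0) {
                    return send(fd, buf, len, 0); /* kernel too old: plain send */
            }
            return send(fd, buf, len, MSG_ZEROCOPY);
    }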
Change-Id: I14429d78c22ad3bc036aec13c9fce6453e899c92
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/471752
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Or Gerlitz <gerlitz.or@gmail.com>
With our target design, there's no advantage to sending
multiple R2T PDUs per NVMe command. This patch starts by
setting up the math so that at most 1 R2T PDU is required
per request (see the sketch below). This can be guaranteed because
the maximum data transfer size (MDTS) is pre-negotiated in NVMe-oF
to a reasonable size at startup.
It then proceeds to simplify all of the logic around mapping
requests to PDUs. It turns out that the mapping is now always
1:1. There are two additional cases where there is no request
object at all but a PDU is still needed - the connection response
and termination request. Put an extra PDU on the queue object
for that purpose.
This is a major simplification.
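A hypothetical helper illustrating the guarantee (the names and asserts
are illustrative, not the transport's actual code):

    #include <assert.h>
    #include <stdint.h>

    /* With the advertised MAXH2CDATA at least as large as the negotiated
     * MDTS, the single R2T computed below always covers the remaining data
     * for one command, so a second R2T is never required. */
    static uint32_t
    single_r2t_length(uint32_t transfer_len, uint32_t bytes_received,
                      uint32_t maxh2cdata, uint32_t mdts)
    {
            assert(transfer_len <= mdts);
            assert(maxh2cdata >= mdts);

            return transfer_len - bytes_received; /* <= mdts <= maxh2cdata */
    }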
Change-Id: I8d41f9bf95e70c354ece8fb786793624bec757ea
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479905
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
We can always accept up to the maximum I/O size in an H2C,
so eliminate the #define.
Change-Id: I349dab5f9b6ec482a7c580b1396e03c8d30a250b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482278
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The resources allocated to a queue pair do not need to be directly
correlated to the queue size requested by the initiator in NVMe-oF, as
long as enough resources are present. The RDMA transport, for instance,
does complex pooling of the resources behind the scenes when using a
shared receive queue.
Simplify the resource allocation for a TCP qpair to just always allocate
the max allowed queue size right away. This is a configurable parameter,
so system administrators can adjust for their needs. The initiator may
then request a queue size less than or equal to that, which will only be
enforced by queue depth counting and not impact the actual number of
resources allocated on the target.
This change relies on the MaxC2HSize being equal to the Maximum Data
Transfer Size (MDTS) reported. That is the default configuration, but
MDTS is configurable. Changing the MDTS with this patch to a value
larger than 128k will cause the target to break. This is addressed in
the next patch in this series.
Change-Id: Ibd4723785c6a4d8d444f9b7bbfa89f98de2320f5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479733
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
This patch adds a test for the compare-and-write fused
command. First it runs a successful compare-and-write
call, then it runs the same command again, and the
second run is expected to fail.
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: Iaf1151414c4dea16487e8c33630cbffb0c09ae3b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482606
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Add a call to the spdk_nvmf_bdev_ctrlr_compare_and_write_cmd
function in the spdk_nvmf_ctrlr_process_io_cmd function
when a fused command is detected.
This patch also removes redundant defines for the fused flags.
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: I61971a56577ab32b52e1fde1e572f718a9a2d9aa
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/476621
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This patch introduces the new spdk_nvmf_bdev_ctrlr_compare_cmd
function, which implements support for the compare operation.
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: Iadf402a6441a78ea0e6468f1066c6b0e10e63b9b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/477782
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Use a configuration file provided via the --disk-map option
instead of creating bdevs and VMs in an ordered sequence
(e.g. 0 to 10, etc.).
This allows us to:
- specify which NVMe device to use
  (PCI BDF identifier)
- choose its name in the SPDK nvme bdev configuration
- choose how many split or lvol bdevs to create on the device
- choose which VMs should use the created bdevs
Together with the CPU mask configuration file this gives better
control over resources when running the test (especially in
case of NUMA optimization, where sequential for/while
loops are not a good approach).
The vm_count and max_disks parameters are removed. They are no
longer needed, as the config file controls this.
Example of config file contents:
(BDF,Spdk NvmeBdev name,Split count,VM list)
0000:1b:00.0,Nvme1,2,2 3
0000:89:00.0,Nvme3,4,4 5 6 7
Change-Id: I9fc73458825d8072537aa04880765a048e034ce4
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/464565
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This test essentially confirms that the new RPC properly creates a test
file and that the script can parse that file and run through a basic set
of operations and exit without an error.
Change-Id: Idf0c831020696a3a62fcef13171eedf3fcf63f5b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/477867
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This reverts commit 4700ef0fa6.
This has merge conflicts with the iSCSI async write patch
series that was merged.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5a27460a369ef5f13bf490a287603e566071be40
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482482
Use new bdev aux buf feature. Huge performance benefit for writes.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I5a27460a369ef5f13bf490a287603e566071be41
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478384
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Enables us to test randomized data against the iSCSI target interface.
Change-Id: I9ff9a06c11bb16b315686156b27855664f21bd48
Signed-off-by: Hailiang Wang <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470925
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is in preparation for further callback usage.
Change-Id: Iccf304c87e67debfb4e7c330acc9cc233cc3ec48
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/481917
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch eliminates the flushing logic and simplifies
the writev logic. It can also improve performance.
This patch supports async write for PDUs other than the login response,
logout response, and text response. Async write for those will be
supported later in this patch series.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I243f598f297d594da0bb18466bc47dab918ed3ee
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/481686
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Initiator drivers (e.g. nvme/tcp) don't use poll groups but rather poll
the qpair directly. In this case we want to allow the polling function (e.g.
_qpair_process_completions()) to flush async writes pending on the socket.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Change-Id: Ibd8c73691213d58e287b7110d0f5a381a89a64d0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/475419
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This gives us an assurance that we know scan-build is being run against
all relevant files in the repository.
Change-Id: Ic0b871e98a9ea7acd2d6b2a99ab81955af29fc66
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479898
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Now we can actually know which tests we are running on which bdevs when
we call blockdev.sh. Removing the conditionals from the configuration
setup (or failing if they don't succeed) and then calling run_test from
the top level allows us to reliably track the bdev tests and fail if
they don't run on a desired class of bdev.
Change-Id: I14e6e3d993b4af4995adcbc5f138bac4ae9d63be
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478247
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Some bdevs don't support unmap so they may not be able to run the fio
unmap tests. In case none of the bdevs we are testing support this
operation, the test will be skipped.
Change-Id: I5063297f0378bc87abf50e5311d147251cfe0ad9
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478337
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
This allows us to run the NBD tests on any combination of bdevs
based on the flags passed to the script. Previously we ran the tests
on a specific grouping of bdevs and opted to only free one type (passthru)
over RPC after the test ran. Rather than free all bdev types in the new
implementation, I opted not to free any of them via RPC and instead let
the application close them at termination time.
Change-Id: I574db4f941bfbb073cceed4610e1f5d43028569f
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478246
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Just to make sure that we always test against a valid bdev.
Change-Id: I7c9786f32b8d5ff5f2bf9c86151fb7f2bdfdf5f3
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478245
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Previously, we only ran bdevperf tests against GPT partitioned NVMe
drives. These tests are generally applicable and should be run against
all of the bdev types we support.
So long story short this configuration file isn't needed and we can just
get rid of it.
Change-Id: Ia6ded22ce16fc1f76b7d99643b9d37e3ecbd1c60
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478244
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This will allow us to parameterize the blockdev.sh script and call it
individually from different jobs.
Change-Id: I8585c12dedfb1a32ea206a09a544440d2e794d4b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478242
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Previously, we conditionally ran one test per-patch and another in
nightly tests. However, the only difference between these tests was the
configuration options passed to FIO. All other tests in the build pool
follow a pattern where they simply change the options and run the same
test in nightly and per-patch runs.
Following that pattern makes it easier to set a specific list of jobs
that must be run before a test run is considered complete.
Change-Id: I6b2f49114155fdfa1d1e07faf45f03e64572eeb8
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/481908
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
This will allow us to run test cases under multiple suites without
clogging up the logs too much, but it will also preserve information
about which test suites were run or (more importantly) not run.
Change-Id: I2434a54a0877ae36b9f84bfab8a62653ac1172f8
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/477367
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>