test/lvol: start rewriting python tests to bash
There are multiple things wrong with the current python tests:
* they don't stop execution on error
* their output makes it difficult to understand what really
happened inside the test
* there is no easy way to reproduce a failure if one
occurs (besides running the same test script again)
* they currently suffer from intermittent failures and
there's no one around to fix them
* they stand out from the rest of the spdk tests, which are
written in bash
So we rewrite those tests in bash. They will use rpc.py in
daemon mode to send RPC commands, so they won't take any
more time to run than the python tests did.
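The daemon idea can be sketched roughly as below. This is a toy stand-in, not the real implementation: `cat -` plays the role of the long-lived rpc.py server process, and `rpc_cmd` is a simplified, hypothetical version of the helper from autotest_common.sh.

```shell
#!/usr/bin/env bash
# Toy sketch of the rpc.py-daemon idea: start one long-lived helper process
# and feed it commands over a pipe, instead of paying interpreter startup
# cost for every single RPC call. `cat -` stands in for the real server,
# so it simply echoes each command line back as the "response".
coproc RPC_SRV { cat -; }

rpc_cmd() {
	# send the command line to the daemon...
	echo "$*" >&"${RPC_SRV[1]}"
	# ...and read back one line of response into a global
	IFS= read -r RPC_REPLY <&"${RPC_SRV[0]}"
	echo "$RPC_REPLY"
}

rpc_cmd bdev_malloc_create 64 512
```

Because the helper process stays alive across calls, each `rpc_cmd` costs only a pipe round-trip.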
The tests are going to be split into a few different
categories:
* clones
* snapshots
* thin provisioning
* tasting
* renaming
* resizing
* all the dumb ones - construct, destruct, etc
Each file is a standalone test script, with common utility
functions located in test/lvol/common.sh. Each file tests
a single, specific feature, but under multiple conditions.
Each test case is implemented as a separate function, so
if you touch only one lvol feature, you can run only one
test script, and if e.g. only a later test case notoriously
breaks, you can comment out all the previous test case
invocations (up to ~10 lines) and focus only on that
failing one.
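The function-per-test-case layout amounts to a script skeleton like the following (the case names are illustrative, not real test cases):

```shell
#!/usr/bin/env bash
# Illustrative skeleton of a function-per-test-case script. To bisect a
# failure, comment out the earlier invocations at the bottom and rerun
# only the case you care about.
test_case_one() {
	echo "case one ran"
}

test_case_two() {
	echo "case two ran"
}

# each invocation below is a single line, so disabling a case is a
# one-line edit
test_case_one
test_case_two
```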
The new tests don't correspond 1:1 to the old python ones
- they now cover more. Wherever there was a negative test
checking that creating an lvs on a nonexistent bdev fails,
we'll now also create a dummy bdev beforehand, so that the
lvs has more opportunity to do something it should not.
Some other test cases were squashed. A few negative tests
required a lot of setup just to try doing something
illegal and see if spdk crashed. We'll now do those illegal
operations in a single test case, giving lvol lib more
opportunity to break. Even if an illegal operation did not
cause a segfault, is the lvolstore/lvol still usable?
E.g. if we try to create an lvol on an lvs that doesn't
have enough free clusters and it fails as expected, will
it still be possible to create a valid lvol afterwards?
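The "fail as expected, then verify things still work" pattern boils down to something like this. `NOT` here is a simplified stand-in for the real helper in autotest_common.sh, and `false`/`true` stand in for the illegal and legal RPC calls:

```shell
#!/usr/bin/env bash
# Simplified negative-test pattern: NOT inverts the exit status of the
# wrapped command, so an expected failure keeps a `set -e` script alive,
# while an unexpected success aborts it.
NOT() {
	if "$@"; then
		return 1
	fi
	return 0
}

set -e
# the illegal operation must fail...
NOT false
echo "illegal op failed as expected"
# ...and a legal operation must still succeed afterwards
true
echo "still usable afterwards"
```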
Besides sending various RPC commands and checking their
return code, we'll also parse and compare various fields
in JSON RPC output from get_lvol_stores or get_bdevs RPC.
We'll use inline jq calls for that. Whenever something's
off, it will be clear which RPC returned invalid values
and what the expected values were, even without detailed
error prints.
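For instance, checking fields of get_lvol_stores output looks like this. The JSON below is a hand-written stand-in for real RPC output (the field names match the actual RPC, but the values are made up):

```shell
#!/usr/bin/env bash
# Inline jq checks against (stand-in) get_lvol_stores output. Each failed
# [ ] test exits non-zero, which aborts the script under `set -e`, and
# running with `bash -x` shows the actual vs expected value in the trace.
set -e
lvs='[{"uuid": "abc-123", "name": "lvs_test", "base_bdev": "Malloc0",
       "cluster_size": 4194304, "total_data_clusters": 31, "free_clusters": 31}]'

[ "$(jq -r '.[0].name' <<< "$lvs")" = "lvs_test" ]
[ "$(jq -r '.[0].base_bdev' <<< "$lvs")" = "Malloc0" ]
# a fresh lvstore should have all of its clusters free
[ "$(jq -r '.[0].free_clusters' <<< "$lvs")" = "$(jq -r '.[0].total_data_clusters' <<< "$lvs")" ]
echo "all lvstore checks passed"
```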
The tests are designed to be as easy as possible to debug
whenever something goes wrong.
This patch removes one test case from python tests and
adds a corresponding test into the new test/lvol/lvol2.sh
file. The script will be renamed to just lvol.sh after
the existing lvol.sh (which starts all python tests) is
finally removed.
As for the bash script itself - each test case is run
through a run_test() function which verifies there were
no lvolstores, lvols, or bdevs left after the test case
has finished. Inside the particular tests we will still
check if the lvolstore removal at the end was successful,
but that's because we want to make sure it's gone, e.g.
even before we remove the underlying lvs' base bdev.
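A minimal sketch of that run_test() wrapper, with the leftover check reduced to a hypothetical counter (the real version queries the RPC daemon for remaining lvolstores/bdevs):

```shell
#!/usr/bin/env bash
# Sketch of the run_test() wrapper: run one test-case function, then verify
# it cleaned up after itself. count_leftover_devices is a hypothetical
# stand-in for querying get_bdevs/get_lvol_stores over RPC.
LEFTOVER=0
count_leftover_devices() { echo "$LEFTOVER"; }

run_test() {
	local name=$1
	"$name"
	# fail the whole run if the test case leaked anything
	if [ "$(count_leftover_devices)" -ne 0 ]; then
		echo "$name left devices behind" >&2
		return 1
	fi
	echo "$name: ok"
}

clean_case() { :; }
leaky_case() { LEFTOVER=1; }

run_test clean_case
run_test leaky_case || echo "leak detected"
```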
Change-Id: Iaa2bb656233e1c9f0c35093f190ac26c39e78623
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459517
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-06-10 13:22:11 +00:00

#!/usr/bin/env bash

testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
source $rootdir/test/lvol/common.sh
source "$rootdir/test/bdev/nbd_common.sh"
# create empty lvol store and verify its parameters
function test_construct_lvs() {
	# create a malloc bdev
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)

	# create a valid lvs
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
	lvs=$(rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid")

	# try to destroy a nonexistent lvs; this should obviously fail
	dummy_uuid="00000000-0000-0000-0000-000000000000"
	NOT rpc_cmd bdev_lvol_delete_lvstore -u "$dummy_uuid"
	# our lvs should not be impacted
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid"
	# verify it's there
	[ "$(jq -r '.[0].uuid' <<< "$lvs")" = "$lvs_uuid" ]
	[ "$(jq -r '.[0].name' <<< "$lvs")" = "lvs_test" ]
	[ "$(jq -r '.[0].base_bdev' <<< "$lvs")" = "$malloc_name" ]

	# verify some of its parameters
	cluster_size=$(jq -r '.[0].cluster_size' <<< "$lvs")
	[ "$cluster_size" = "$LVS_DEFAULT_CLUSTER_SIZE" ]
	total_clusters=$(jq -r '.[0].total_data_clusters' <<< "$lvs")
	[ "$(jq -r '.[0].free_clusters' <<< "$lvs")" = "$total_clusters" ]
	[ "$((total_clusters * cluster_size))" = "$LVS_DEFAULT_CAPACITY" ]

	# remove the lvs and verify it's gone
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
	NOT rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid"
	# make sure we can't delete the same lvs again
	NOT rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
test/lvol: start rewriting python tests to bash
There are multiple things wrong with current python tests:
* they don't stop the execution on error
* the output makes it difficult to understand what really
happened inside the test
* there is no easy way to reproduce a failure if there
is one (besides running the same test script again)
* they currently suffer from intermittent failures and
there's no-one there to fix them
* they stand out from the rest of spdk tests, which are
written in bash
So we rewrite those tests to bash. They will use rpc.py
daemon to send RPC commands, so they won't take any more
time to run than python tests.
The tests are going to be split them into a few different
categories:
* clones
* snapshots
* thin provisioning
* tasting
* renaming
* resizing
* all the dumb ones - construct, destruct, etc
Each file is a standalone test script, with common utility
functions located in test/lvol/common.sh. Each file tests
a single, specific feature, but under multiple conditions.
Each test case is implemented as a separate function, so
if you touch only one lvol feature, you can run only one
test script, and if e.g. only a later test case notoriously
breaks, you can comment out all the previous test case
invocations (up to ~10 lines) and focus only on that
failing one.
The new tests don't correspond 1:1 to the old python ones
- they now cover more. Whenever there was a negative test
to check if creating lvs on inexistent bdev failed, we'll
now also create a dummy bdev beforehand, so that lvs will
have more opportunity to do something it should not.
Some other test cases were squashed. A few negative tests
required a lot of setup just to try doing something
illegal and see if spdk crashed. We'll now do those illegal
operations in a single test case, giving lvol lib more
opportunity to break. Even if illegal operation did not
cause any segfault, is the lvolstore/lvol still usable?
E.g. if we try to create an lvol on an lvs that doesn't
have enough free clusters and it fails as expected, will
it be still possible to create a valid lvol afterwards?
Besides sending various RPC commands and checking their
return code, we'll also parse and compare various fields
in JSON RPC output from get_lvol_stores or get_bdevs RPC.
We'll use inline jq calls for that. Whenever something's
off, it will be clear which RPC returned invalid values
and what were the expected values even without having
detailed error prints.
The tests are designed to be as easy as possible to debug
whenever something goes wrong.
This patch removes one test case from python tests and
adds a corresponding test into the new test/lvol/lvol2.sh
file. The script will be renamed to just lvol.sh after
the existing lvol.sh (which starts all python tests) is
finally removed.
As for the bash script itself - each test case is run
through a run_test() function which verifies there were
no lvolstores, lvols, or bdevs left after the test case
has finished. Inside the particular tests we will still
check if the lvolstore removal at the end was successful,
but that's because we want to make sure it's gone e.g even
before we remove the underlying lvs' base bdev.
Change-Id: Iaa2bb656233e1c9f0c35093f190ac26c39e78623
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459517
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
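A rough sketch of the run_test() flow described above (not the actual common.sh code; `rpc_cmd` is stubbed out so the sketch runs standalone, and jq is assumed to be installed):

```shell
#!/usr/bin/env bash
set -e

# Stub rpc_cmd so this sketch is self-contained; the real helper talks to
# the rpc.py daemon. Here every "get" RPC reports an empty device list.
rpc_cmd() { echo "[]"; }

# Fail if any bdevs or lvolstores survived the test case - a sketch of the
# check_leftover_devices helper referenced throughout the tests.
check_leftover_devices() {
	leftover_bdevs=$(rpc_cmd bdev_get_bdevs)
	[ "$(jq length <<< "$leftover_bdevs")" = "0" ]
	leftover_lvs=$(rpc_cmd bdev_lvol_get_lvstores)
	[ "$(jq length <<< "$leftover_lvs")" = "0" ]
}

# Minimal wrapper: run the test case, then verify it cleaned up after itself.
run_test() {
	"$@"
	check_leftover_devices
}

run_test true
wrapped="ok"
echo "$wrapped"
```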
	rpc_cmd bdev_malloc_delete "$malloc_name"
	check_leftover_devices
}

# call bdev_lvol_create_lvstore with base bdev name which does not
# exist in configuration
function test_construct_lvs_nonexistent_bdev() {
	# make sure we can't create lvol store on nonexistent bdev
	rpc_cmd bdev_lvol_create_lvstore NotMalloc lvs_test && false
	return 0
}

# try to create two lvol stores on the same bdev
function test_construct_two_lvs_on_the_same_bdev() {
	# create an lvol store
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)

	# try to create another lvs on the same malloc bdev
	rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test2 && false

	# clean up
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc_name"
	rpc_cmd bdev_get_bdevs -b "$malloc_name" && false
	check_leftover_devices
}

# try to create two lvs with conflicting aliases
function test_construct_lvs_conflict_alias() {
	# create first bdev and lvs
	malloc1_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs1_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc1_name" lvs_test)

	# create second bdev and lvs with the same name as previously
	malloc2_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	rpc_cmd bdev_lvol_create_lvstore "$malloc2_name" lvs_test && false

	# clean up
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs1_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs1_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc1_name"
	rpc_cmd bdev_malloc_delete "$malloc2_name"
	check_leftover_devices
}

# call bdev_lvol_create_lvstore with cluster size equal to malloc bdev size + 1B
# call bdev_lvol_create_lvstore with cluster size smaller than minimal value of 8192
function test_construct_lvs_different_cluster_size() {
	# create the first lvs
	malloc1_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs1_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc1_name" lvs_test)

	# make sure we've got 1 lvs
	lvol_stores=$(rpc_cmd bdev_lvol_get_lvstores)
	[ "$(jq length <<< "$lvol_stores")" == "1" ]

	# use the second malloc for some more lvs creation negative tests
	malloc2_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	# capacity bigger than malloc's
	rpc_cmd bdev_lvol_create_lvstore "$malloc2_name" lvs2_test -c $((MALLOC_SIZE + 1)) && false
	# capacity equal to malloc's (no space left for metadata)
	rpc_cmd bdev_lvol_create_lvstore "$malloc2_name" lvs2_test -c $MALLOC_SIZE && false
	# capacity smaller than malloc's, but still no space left for metadata
	rpc_cmd bdev_lvol_create_lvstore "$malloc2_name" lvs2_test -c $((MALLOC_SIZE - 1)) && false
	# cluster size smaller than the minimum (8192)
	rpc_cmd bdev_lvol_create_lvstore "$malloc2_name" lvs2_test -c 8191 && false

	# no additional lvol stores should have been created
	lvol_stores=$(rpc_cmd bdev_lvol_get_lvstores)
	[ "$(jq length <<< "$lvol_stores")" == "1" ]

	# this one should be fine
	lvs2_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc2_name" lvs2_test -c 8192)
	# we should have one more lvs
	lvol_stores=$(rpc_cmd bdev_lvol_get_lvstores)
	[ "$(jq length <<< "$lvol_stores")" == "2" ]

	# clean up
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs1_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs1_uuid" && false

	# delete the second lvs (using its name only)
	rpc_cmd bdev_lvol_delete_lvstore -l lvs2_test
	rpc_cmd bdev_lvol_get_lvstores -l lvs2_test && false
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs2_uuid" && false

	rpc_cmd bdev_malloc_delete "$malloc1_name"
	rpc_cmd bdev_malloc_delete "$malloc2_name"
	check_leftover_devices
}

# test different methods of clearing the disk on lvolstore creation
function test_construct_lvs_clear_methods() {
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)

	# first try to provide invalid clear method
	rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs2_test --clear-method invalid123 && false

	# no lvs should be created
	lvol_stores=$(rpc_cmd bdev_lvol_get_lvstores)
	[ "$(jq length <<< "$lvol_stores")" == "0" ]

	methods="none unmap write_zeroes"
	for clear_method in $methods; do
		lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test --clear-method $clear_method)

		# create an lvol on top
		lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$LVS_DEFAULT_CAPACITY_MB")
		lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")
		[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
		[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
		[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test" ]
		[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
		[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$((LVS_DEFAULT_CAPACITY / MALLOC_BS))" ]

		# clean up
		rpc_cmd bdev_lvol_delete "$lvol_uuid"
		rpc_cmd bdev_get_bdevs -b "$lvol_uuid" && false
		rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
		rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
	done
	rpc_cmd bdev_malloc_delete "$malloc_name"
	check_leftover_devices
}

# Test for clear_method equal to none
function test_construct_lvol_fio_clear_method_none() {
	local nbd_name=/dev/nbd0
	local clear_method=none

	local lvstore_name=lvs_test lvstore_uuid
	local lvol_name=lvol_test lvol_uuid
	local malloc_dev

	malloc_dev=$(rpc_cmd bdev_malloc_create 256 "$MALLOC_BS")
	lvstore_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_dev" "$lvstore_name")

	get_lvs_jq bdev_lvol_get_lvstores -u "$lvstore_uuid"

	lvol_uuid=$(rpc_cmd bdev_lvol_create \
		-c "$clear_method" \
		-u "$lvstore_uuid" \
		"$lvol_name" \
		$((jq_out["cluster_size"] / 1024 ** 2)))

	nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" "$nbd_name"
	run_fio_test "$nbd_name" 0 "${jq_out["cluster_size"]}" write 0xdd
	nbd_stop_disks "$DEFAULT_RPC_ADDR" "$nbd_name"

	rpc_cmd bdev_lvol_delete "$lvol_uuid"
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvstore_uuid"
	nbd_start_disks "$DEFAULT_RPC_ADDR" "$malloc_dev" "$nbd_name"

	local metadata_pages
	local last_metadata_lba
	local offset_metadata_end
	local last_cluster_of_metadata
	local offset
	local size_metadata_end

	metadata_pages=$(calc "1 + ${jq_out["total_data_clusters"]} + ceil(5 + ceil(${jq_out["total_data_clusters"]} / 8) / 4096) * 3")

	last_metadata_lba=$((metadata_pages * 4096 / MALLOC_BS))
	offset_metadata_end=$((last_metadata_lba * MALLOC_BS))
	last_cluster_of_metadata=$(calc "ceil($metadata_pages / ${jq_out["cluster_size"]} / 4096)")
	last_cluster_of_metadata=$((last_cluster_of_metadata == 0 ? 1 : last_cluster_of_metadata))
	offset=$((last_cluster_of_metadata * jq_out["cluster_size"]))
	size_metadata_end=$((offset - offset_metadata_end))

	# Check if data on area between end of metadata and first cluster of lvol bdev remained unchanged.
	run_fio_test "$nbd_name" "$offset_metadata_end" "$size_metadata_end" "read" 0x00
	# Check if data on the first lvol bdev remained unchanged.
	run_fio_test "$nbd_name" "$offset" "${jq_out["cluster_size"]}" "read" 0xdd

	nbd_stop_disks "$DEFAULT_RPC_ADDR" "$nbd_name"
	rpc_cmd bdev_malloc_delete "$malloc_dev"

	check_leftover_devices
}

# Test for clear_method equal to unmap
function test_construct_lvol_fio_clear_method_unmap() {
	local nbd_name=/dev/nbd0
	local clear_method=unmap

	local lvstore_name=lvs_test lvstore_uuid
	local lvol_name=lvol_test lvol_uuid
	local malloc_dev

	malloc_dev=$(rpc_cmd bdev_malloc_create 256 "$MALLOC_BS")

	nbd_start_disks "$DEFAULT_RPC_ADDR" "$malloc_dev" "$nbd_name"
	run_fio_test "$nbd_name" 0 $((256 * 1024 ** 2)) write 0xdd
	nbd_stop_disks "$DEFAULT_RPC_ADDR" "$nbd_name"

	lvstore_uuid=$(rpc_cmd bdev_lvol_create_lvstore --clear-method none "$malloc_dev" "$lvstore_name")
	get_lvs_jq bdev_lvol_get_lvstores -u "$lvstore_uuid"

	lvol_uuid=$(rpc_cmd bdev_lvol_create \
		-c "$clear_method" \
		-u "$lvstore_uuid" \
		"$lvol_name" \
		$((jq_out["cluster_size"] / 1024 ** 2)))

	nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" "$nbd_name"
	run_fio_test "$nbd_name" 0 "${jq_out["cluster_size"]}" read 0xdd
	nbd_stop_disks "$DEFAULT_RPC_ADDR" "$nbd_name"

	rpc_cmd bdev_lvol_delete "$lvol_uuid"
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvstore_uuid"
	nbd_start_disks "$DEFAULT_RPC_ADDR" "$malloc_dev" "$nbd_name"

	local metadata_pages
	local last_metadata_lba
	local offset_metadata_end
	local last_cluster_of_metadata
	local offset
	local size_metadata_end

	metadata_pages=$(calc "1 + ${jq_out["total_data_clusters"]} + ceil(5 + ceil(${jq_out["total_data_clusters"]} / 8) / 4096) * 3")

	last_metadata_lba=$((metadata_pages * 4096 / MALLOC_BS))
	offset_metadata_end=$((last_metadata_lba * MALLOC_BS))
	last_cluster_of_metadata=$(calc "ceil($metadata_pages / ${jq_out["cluster_size"]} / 4096)")
	last_cluster_of_metadata=$((last_cluster_of_metadata == 0 ? 1 : last_cluster_of_metadata))
	offset=$((last_cluster_of_metadata * jq_out["cluster_size"]))
	size_metadata_end=$((offset - offset_metadata_end))

	# Check if data on area between end of metadata and first cluster of lvol bdev remained unchanged.
	run_fio_test "$nbd_name" "$offset_metadata_end" "$size_metadata_end" "read" 0xdd
	# Check if data on lvol bdev was zeroed. Malloc bdev should zero any data that is unmapped.
	run_fio_test "$nbd_name" "$offset" "${jq_out["cluster_size"]}" "read" 0x00

	nbd_stop_disks "$DEFAULT_RPC_ADDR" "$nbd_name"
	rpc_cmd bdev_malloc_delete "$malloc_dev"

	check_leftover_devices
}

# create lvs + lvol on top, verify lvol's parameters
function test_construct_lvol() {
	# create an lvol store
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)

	# create an lvol on top
	lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$LVS_DEFAULT_CAPACITY_MB")
	lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")

	[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
	[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
	[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test" ]
	[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
	[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$((LVS_DEFAULT_CAPACITY / MALLOC_BS))" ]
	[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$lvol")" = "$lvs_uuid" ]

	# clean up and create another lvol, this time use lvs alias instead of uuid
	rpc_cmd bdev_lvol_delete "$lvol_uuid"
	rpc_cmd bdev_get_bdevs -b "$lvol_uuid" && false
	lvol_uuid=$(rpc_cmd bdev_lvol_create -l lvs_test lvol_test "$LVS_DEFAULT_CAPACITY_MB")
	lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")

	[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
	[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
	[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test" ]
	[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
	[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$((LVS_DEFAULT_CAPACITY / MALLOC_BS))" ]
	[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$lvol")" = "$lvs_uuid" ]

	# clean up
	rpc_cmd bdev_lvol_delete "$lvol_uuid"
	rpc_cmd bdev_get_bdevs -b "$lvol_uuid" && false
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc_name"
	check_leftover_devices
}

# create lvs + multiple lvols, verify their params
function test_construct_multi_lvols() {
	# create an lvol store
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)

	# create 4 lvols
	lvol_size_mb=$((LVS_DEFAULT_CAPACITY_MB / 4))
	# round down lvol size to the nearest cluster size boundary
	lvol_size_mb=$((lvol_size_mb / LVS_DEFAULT_CLUSTER_SIZE_MB * LVS_DEFAULT_CLUSTER_SIZE_MB))
	lvol_size=$((lvol_size_mb * 1024 * 1024))
	for i in $(seq 1 4); do
		lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" "lvol_test${i}" "$lvol_size_mb")
		lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")

		[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
		[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
		[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test${i}" ]
		[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
		[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$((lvol_size / MALLOC_BS))" ]
	done

	lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
	[ "$(jq length <<< "$lvols")" == "4" ]

	# remove all lvols
	for i in $(seq 0 3); do
		lvol_uuid=$(jq -r ".[$i].name" <<< "$lvols")
		rpc_cmd bdev_lvol_delete "$lvol_uuid"
	done
	lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
	[ "$(jq length <<< "$lvols")" == "0" ]

	# create the same 4 lvols again and perform the same checks
	for i in $(seq 1 4); do
		lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" "lvol_test${i}" "$lvol_size_mb")
		lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")

		[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
		[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
		[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test${i}" ]
		[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
		[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$((lvol_size / MALLOC_BS))" ]
	done

	lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
	[ "$(jq length <<< "$lvols")" == "4" ]

	# clean up
	for i in $(seq 0 3); do
		lvol_uuid=$(jq -r ".[$i].name" <<< "$lvols")
		rpc_cmd bdev_lvol_delete "$lvol_uuid"
	done
	lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
	[ "$(jq length <<< "$lvols")" == "0" ]

	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc_name"
	check_leftover_devices
}

# create 2 lvolstores, each with a single lvol on top.
# use a single alias for both lvols, there should be no conflict
# since they're in different lvolstores
function test_construct_lvols_conflict_alias() {
	# create an lvol store 1
	malloc1_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs1_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc1_name" lvs_test1)

	# create an lvol on lvs1
	lvol1_uuid=$(rpc_cmd bdev_lvol_create -l lvs_test1 lvol_test "$LVS_DEFAULT_CAPACITY_MB")
	lvol1=$(rpc_cmd bdev_get_bdevs -b "$lvol1_uuid")

	# use a different size for second malloc to keep those differentiable
	malloc2_size_mb=$((MALLOC_SIZE_MB / 2))

	# create an lvol store 2
	malloc2_name=$(rpc_cmd bdev_malloc_create $malloc2_size_mb $MALLOC_BS)
	lvs2_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc2_name" lvs_test2)

	lvol2_size_mb=$(round_down $((LVS_DEFAULT_CAPACITY_MB / 2)))

	# create an lvol on lvs2
	lvol2_uuid=$(rpc_cmd bdev_lvol_create -l lvs_test2 lvol_test "$lvol2_size_mb")
	lvol2=$(rpc_cmd bdev_get_bdevs -b "$lvol2_uuid")

	[ "$(jq -r '.[0].name' <<< "$lvol1")" = "$lvol1_uuid" ]
	[ "$(jq -r '.[0].uuid' <<< "$lvol1")" = "$lvol1_uuid" ]
	[ "$(jq -r '.[0].aliases[0]' <<< "$lvol1")" = "lvs_test1/lvol_test" ]
	[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$lvol1")" = "$lvs1_uuid" ]

	[ "$(jq -r '.[0].name' <<< "$lvol2")" = "$lvol2_uuid" ]
	[ "$(jq -r '.[0].uuid' <<< "$lvol2")" = "$lvol2_uuid" ]
	[ "$(jq -r '.[0].aliases[0]' <<< "$lvol2")" = "lvs_test2/lvol_test" ]
	[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$lvol2")" = "$lvs2_uuid" ]

	# clean up
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs1_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs1_uuid" && false
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs2_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs2_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc1_name"
	rpc_cmd bdev_get_bdevs -b "$malloc1_name" && false
	rpc_cmd bdev_malloc_delete "$malloc2_name"
	check_leftover_devices
}

# try to create an lvol on inexistent lvs uuid
function test_construct_lvol_inexistent_lvs() {
	# create an lvol store
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)

	# try to create an lvol on inexistent lvs
	dummy_uuid="00000000-0000-0000-0000-000000000000"
	rpc_cmd bdev_lvol_create -u "$dummy_uuid" lvol_test "$LVS_DEFAULT_CAPACITY_MB" && false

	lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
	[ "$(jq length <<< "$lvols")" == "0" ]

	# clean up
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc_name"
	check_leftover_devices
}

# try to create lvol on full lvs
function test_construct_lvol_full_lvs() {
	# create an lvol store
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)

	# create valid lvol
	lvol1_uuid=$(rpc_cmd bdev_lvol_create -l lvs_test lvol_test1 "$LVS_DEFAULT_CAPACITY_MB")
	lvol1=$(rpc_cmd bdev_get_bdevs -b "$lvol1_uuid")

	# try to create an lvol on lvs without enough free clusters
	rpc_cmd bdev_lvol_create -l lvs_test lvol_test2 1 && false

	# clean up
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc_name"
	check_leftover_devices
}

# try to create two lvols with conflicting aliases
function test_construct_lvol_alias_conflict() {
	# create an lvol store
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)

	# create valid lvol
	lvol_size_mb=$(round_down $((LVS_DEFAULT_CAPACITY_MB / 2)))
	lvol1_uuid=$(rpc_cmd bdev_lvol_create -l lvs_test lvol_test "$lvol_size_mb")
	lvol1=$(rpc_cmd bdev_get_bdevs -b "$lvol1_uuid")

	# try to create another lvol with a name that's already taken
	rpc_cmd bdev_lvol_create -l lvs_test lvol_test "$lvol_size_mb" && false

	# clean up
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc_name"
	rpc_cmd bdev_get_bdevs -b "$malloc_name" && false
	check_leftover_devices
}

# create an lvs+lvol, create another lvs on lvol and then a nested lvol
function test_construct_nested_lvol() {
	# create an lvol store
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)

	# create an lvol on top
	lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$LVS_DEFAULT_CAPACITY_MB")
	# create a nested lvs
	nested_lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$lvol_uuid" nested_lvs)

	nested_lvol_size_mb=$((LVS_DEFAULT_CAPACITY_MB - LVS_DEFAULT_CLUSTER_SIZE_MB))
	nested_lvol_size=$((nested_lvol_size_mb * 1024 * 1024))

	# create a nested lvol
	nested_lvol1_uuid=$(rpc_cmd bdev_lvol_create -u "$nested_lvs_uuid" nested_lvol1 "$nested_lvol_size_mb")
	nested_lvol1=$(rpc_cmd bdev_get_bdevs -b "$nested_lvol1_uuid")

	[ "$(jq -r '.[0].name' <<< "$nested_lvol1")" = "$nested_lvol1_uuid" ]
	[ "$(jq -r '.[0].uuid' <<< "$nested_lvol1")" = "$nested_lvol1_uuid" ]
	[ "$(jq -r '.[0].aliases[0]' <<< "$nested_lvol1")" = "nested_lvs/nested_lvol1" ]
	[ "$(jq -r '.[0].block_size' <<< "$nested_lvol1")" = "$MALLOC_BS" ]
	[ "$(jq -r '.[0].num_blocks' <<< "$nested_lvol1")" = "$((nested_lvol_size / MALLOC_BS))" ]
	[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$nested_lvol1")" = "$nested_lvs_uuid" ]

	# try to create another nested lvol on a lvs that's already full
	rpc_cmd bdev_lvol_create -u "$nested_lvs_uuid" nested_lvol2 "$nested_lvol_size_mb" && false

	# clean up
	rpc_cmd bdev_lvol_delete "$nested_lvol1_uuid"
	rpc_cmd bdev_get_bdevs -b "$nested_lvol1_uuid" && false
	rpc_cmd bdev_lvol_delete_lvstore -u "$nested_lvs_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$nested_lvs_uuid" && false
	rpc_cmd bdev_lvol_delete "$lvol_uuid"
	rpc_cmd bdev_get_bdevs -b "$lvol_uuid" && false
	rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
	rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
	rpc_cmd bdev_malloc_delete "$malloc_name"
	check_leftover_devices
}

# Send SIGTERM after creating lvol store
function test_sigterm() {
	# create an lvol store
	malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
	lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)

	# Send SIGTERM signal to the application
	killprocess $spdk_pid
}

$SPDK_BIN_DIR/spdk_tgt &
test/lvol: start rewriting python tests to bash
There are multiple things wrong with current python tests:
* they don't stop the execution on error
* the output makes it difficult to understand what really
happened inside the test
* there is no easy way to reproduce a failure if there
is one (besides running the same test script again)
* they currently suffer from intermittent failures and
there's no-one there to fix them
* they stand out from the rest of spdk tests, which are
written in bash
So we rewrite those tests to bash. They will use rpc.py
daemon to send RPC commands, so they won't take any more
time to run than python tests.
The tests are going to be split them into a few different
categories:
* clones
* snapshots
* thin provisioning
* tasting
* renaming
* resizing
* all the dumb ones - construct, destruct, etc
Each file is a standalone test script, with common utility
functions located in test/lvol/common.sh. Each file tests
a single, specific feature, but under multiple conditions.
Each test case is implemented as a separate function, so
if you touch only one lvol feature, you can run only one
test script, and if e.g. only a later test case notoriously
breaks, you can comment out all the previous test case
invocations (up to ~10 lines) and focus only on that
failing one.
The new tests don't correspond 1:1 to the old python ones
- they now cover more. Whenever there was a negative test
to check if creating lvs on inexistent bdev failed, we'll
now also create a dummy bdev beforehand, so that lvs will
have more opportunity to do something it should not.
Some other test cases were squashed. A few negative tests
required a lot of setup just to try doing something
illegal and see if spdk crashed. We'll now do those illegal
operations in a single test case, giving lvol lib more
opportunity to break. Even if an illegal operation did not
cause a segfault, is the lvolstore/lvol still usable?
E.g. if we try to create an lvol on an lvs that doesn't
have enough free clusters and it fails as expected, will
it still be possible to create a valid lvol afterwards?
Besides sending various RPC commands and checking their
return code, we'll also parse and compare various fields
in JSON RPC output from get_lvol_stores or get_bdevs RPC.
We'll use inline jq calls for that. Whenever something's
off, it will be clear which RPC returned invalid values
and what were the expected values even without having
detailed error prints.
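For illustration, the inline-jq style looks roughly like this. The JSON here is a hand-written stand-in shaped like get_lvol_stores output from this era of SPDK; the field names are assumptions for the sketch, not a guarantee of the exact RPC schema.

```shell
# Hand-written stand-in for `rpc_cmd get_lvol_stores` output.
lvs_json='[{"uuid": "00000000-0000-0000-0000-000000000000",
            "name": "lvs_test",
            "cluster_size": 4194304,
            "free_clusters": 31,
            "total_data_clusters": 31}]'

# Pull individual fields out with inline jq calls...
cluster_size=$(echo "$lvs_json" | jq -r '.[0].cluster_size')
free_clusters=$(echo "$lvs_json" | jq -r '.[0].free_clusters')
total_data_clusters=$(echo "$lvs_json" | jq -r '.[0].total_data_clusters')

# ...and compare them directly. Under `set -ex` a mismatch stops the script
# on this exact line, with both values already printed by the xtrace.
[ "$cluster_size" = "4194304" ]
[ "$free_clusters" = "$total_data_clusters" ] # nothing written yet
```
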
The tests are designed to be as easy as possible to debug
whenever something goes wrong.
This patch removes one test case from python tests and
adds a corresponding test into the new test/lvol/lvol2.sh
file. The script will be renamed to just lvol.sh after
the existing lvol.sh (which starts all python tests) is
finally removed.
As for the bash script itself - each test case is run
through a run_test() function which verifies there were
no lvolstores, lvols, or bdevs left after the test case
has finished. Inside the particular tests we will still
check if the lvolstore removal at the end was successful,
but that's because we want to make sure it's gone, e.g. even
before we remove the underlying lvs' base bdev.
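A minimal sketch of such a wrapper is below. rpc_cmd is stubbed so the sketch runs standalone; the real run_test lives in the test scripts and differs in detail.

```shell
# Stand-in rpc_cmd that reports an empty object list, as if the test case
# cleaned up after itself.
rpc_cmd() { echo '[]'; }

run_test() {
	local test_name=$1
	shift
	echo "=== running $test_name ==="
	"$@"
	# After every test case, no lvolstores or bdevs may be left behind.
	[ "$(rpc_cmd get_lvol_stores)" = "[]" ]
	[ "$(rpc_cmd get_bdevs)" = "[]" ]
}

example_case() { echo "test body runs here"; }
run_test "example_case" example_case
echo "leftover check passed"
```
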
Change-Id: Iaa2bb656233e1c9f0c35093f190ac26c39e78623
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459517
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
spdk_pid=$!
trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
waitforlisten $spdk_pid
run_test "test_construct_lvs" test_construct_lvs
run_test "test_construct_lvs_nonexistent_bdev" test_construct_lvs_nonexistent_bdev
run_test "test_construct_two_lvs_on_the_same_bdev" test_construct_two_lvs_on_the_same_bdev
run_test "test_construct_lvs_conflict_alias" test_construct_lvs_conflict_alias
run_test "test_construct_lvs_different_cluster_size" test_construct_lvs_different_cluster_size
run_test "test_construct_lvs_clear_methods" test_construct_lvs_clear_methods
run_test "test_construct_lvol_fio_clear_method_none" test_construct_lvol_fio_clear_method_none
run_test "test_construct_lvol_fio_clear_method_unmap" test_construct_lvol_fio_clear_method_unmap
run_test "test_construct_lvol" test_construct_lvol
run_test "test_construct_multi_lvols" test_construct_multi_lvols
run_test "test_construct_lvols_conflict_alias" test_construct_lvols_conflict_alias
run_test "test_construct_lvol_inexistent_lvs" test_construct_lvol_inexistent_lvs
run_test "test_construct_lvol_full_lvs" test_construct_lvol_full_lvs
run_test "test_construct_lvol_alias_conflict" test_construct_lvol_alias_conflict
run_test "test_construct_nested_lvol" test_construct_nested_lvol
run_test "test_sigterm" test_sigterm
trap - SIGINT SIGTERM EXIT
if ps -p $spdk_pid; then
killprocess $spdk_pid
fi