#!/usr/bin/env bash
testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
source $rootdir/test/lvol/common.sh
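# the sourced files are expected to provide everything used below: rpc_cmd
# (a wrapper around scripts/rpc.py), run_test, waitforlisten, killprocess,
# check_leftover_devices, round_down, and the MALLOC_SIZE_MB, MALLOC_BS,
# LVS_DEFAULT_CLUSTER_SIZE(_MB) and LVS_DEFAULT_CAPACITY(_MB) constants;
# their exact values live in those files, not here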
# create empty lvol store and verify its parameters
function test_construct_lvs() {
# create an lvol store
malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
lvs=$(rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid")
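# bdev_lvol_get_lvstores returns a JSON array with one entry per lvstore;
# the fields checked below are assumed to look roughly like this
# (values illustrative, not literal):
# [ { "uuid": "...", "name": "lvs_test", "base_bdev": "<malloc bdev>",
#     "cluster_size": N, "total_data_clusters": M, "free_clusters": M } ]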
# verify it's there
[ "$(jq -r '.[0].uuid' <<< "$lvs")" = "$lvs_uuid" ]
[ "$(jq -r '.[0].name' <<< "$lvs")" = "lvs_test" ]
[ "$(jq -r '.[0].base_bdev' <<< "$lvs")" = "$malloc_name" ]
# verify some of its parameters
cluster_size=$(jq -r '.[0].cluster_size' <<< "$lvs")
[ "$cluster_size" = "$LVS_DEFAULT_CLUSTER_SIZE" ]
total_clusters=$(jq -r '.[0].total_data_clusters' <<< "$lvs")
[ "$(jq -r '.[0].free_clusters' <<< "$lvs")" = "$total_clusters" ]
[ "$(( total_clusters * cluster_size ))" = "$LVS_DEFAULT_CAPACITY" ]
# remove it and verify it's gone
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
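# negative-test idiom: the lookup above must fail now that the lvstore is gone;
# with errexit enabled by the sourced test harness, "cmd && false" only aborts
# the test if the command unexpectedly succeeds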
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
# create lvs + lvol on top, verify lvol's parameters
function test_construct_lvol() {
# create an lvol store
malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# create an lvol on top
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$LVS_DEFAULT_CAPACITY_MB")
lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")
[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test" ]
[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$(( LVS_DEFAULT_CAPACITY / MALLOC_BS ))" ]
[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$lvol")" = "$lvs_uuid" ]
# clean up and create another lvol, this time using the lvs alias instead of its uuid
rpc_cmd bdev_lvol_delete "$lvol_uuid"
rpc_cmd bdev_get_bdevs -b "$lvol_uuid" && false
lvol_uuid=$(rpc_cmd bdev_lvol_create -l lvs_test lvol_test "$LVS_DEFAULT_CAPACITY_MB")
lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")
[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test" ]
[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$(( LVS_DEFAULT_CAPACITY / MALLOC_BS ))" ]
[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$lvol")" = "$lvs_uuid" ]
# clean up
rpc_cmd bdev_lvol_delete "$lvol_uuid"
rpc_cmd bdev_get_bdevs -b "$lvol_uuid" && false
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
# create lvs + multiple lvols, verify their params
function test_construct_multi_lvols() {
# create an lvol store
malloc_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# create 4 lvols
lvol_size_mb=$(( LVS_DEFAULT_CAPACITY_MB / 4 ))
# round down lvol size to the nearest cluster size boundary
lvol_size_mb=$(( lvol_size_mb / LVS_DEFAULT_CLUSTER_SIZE_MB * LVS_DEFAULT_CLUSTER_SIZE_MB ))
lvol_size=$(( lvol_size_mb * 1024 * 1024 ))
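# worked example with assumed values: a 128 MiB lvstore capacity split 4 ways
# gives 32 MiB per lvol; with 4 MiB clusters, 32 / 4 * 4 = 32 MiB (already
# aligned), whereas e.g. 30 MiB would truncate to 28 MiB since the shell
# division is integer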
for i in $(seq 1 4); do
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" "lvol_test${i}" "$lvol_size_mb")
lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")
[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test${i}" ]
[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$(( lvol_size / MALLOC_BS ))" ]
done
lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
[ "$(jq length <<< "$lvols")" == "4" ]
# remove all lvols
for i in $(seq 0 3); do
lvol_uuid=$(jq -r ".[$i].name" <<< "$lvols")
rpc_cmd bdev_lvol_delete "$lvol_uuid"
done
lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
[ "$(jq length <<< "$lvols")" == "0" ]
# create the same 4 lvols again and perform the same checks
for i in $(seq 1 4); do
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" "lvol_test${i}" "$lvol_size_mb")
lvol=$(rpc_cmd bdev_get_bdevs -b "$lvol_uuid")
[ "$(jq -r '.[0].name' <<< "$lvol")" = "$lvol_uuid" ]
[ "$(jq -r '.[0].uuid' <<< "$lvol")" = "$lvol_uuid" ]
[ "$(jq -r '.[0].aliases[0]' <<< "$lvol")" = "lvs_test/lvol_test${i}" ]
[ "$(jq -r '.[0].block_size' <<< "$lvol")" = "$MALLOC_BS" ]
[ "$(jq -r '.[0].num_blocks' <<< "$lvol")" = "$(( lvol_size / MALLOC_BS ))" ]
done
lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
[ "$(jq length <<< "$lvols")" == "4" ]
# clean up
for i in $(seq 0 3); do
lvol_uuid=$(jq -r ".[$i].name" <<< "$lvols")
rpc_cmd bdev_lvol_delete "$lvol_uuid"
done
lvols=$(rpc_cmd bdev_get_bdevs | jq -r '[ .[] | select(.product_name == "Logical Volume") ]')
[ "$(jq length <<< "$lvols")" == "0" ]
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
# create 2 lvolstores, each with a single lvol on top.
# use the same alias for both lvols; there should be no conflict
# since they live in different lvolstores
function test_construct_lvols_conflict_alias() {
# create lvol store 1
malloc1_name=$(rpc_cmd bdev_malloc_create $MALLOC_SIZE_MB $MALLOC_BS)
lvs1_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc1_name" lvs_test1)
# create an lvol on lvs1
lvol1_uuid=$(rpc_cmd bdev_lvol_create -l lvs_test1 lvol_test "$LVS_DEFAULT_CAPACITY_MB")
lvol1=$(rpc_cmd bdev_get_bdevs -b "$lvol1_uuid")
# use a different size for the second malloc bdev so the two stay easily distinguishable
malloc2_size_mb=$(( MALLOC_SIZE_MB / 2 ))
# create lvol store 2
malloc2_name=$(rpc_cmd bdev_malloc_create $malloc2_size_mb $MALLOC_BS)
lvs2_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc2_name" lvs_test2)
lvol2_size_mb=$(round_down $(( LVS_DEFAULT_CAPACITY_MB / 2 )) )
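# round_down (from common.sh) is assumed to do the same thing as the inline
# arithmetic in test_construct_multi_lvols: truncate the size down to a
# multiple of the default cluster size so the requested lvol is cluster-aligned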
# create an lvol on lvs2
lvol2_uuid=$(rpc_cmd bdev_lvol_create -l lvs_test2 lvol_test "$lvol2_size_mb")
lvol2=$(rpc_cmd bdev_get_bdevs -b "$lvol2_uuid")
[ "$(jq -r '.[0].name' <<< "$lvol1")" = "$lvol1_uuid" ]
[ "$(jq -r '.[0].uuid' <<< "$lvol1")" = "$lvol1_uuid" ]
[ "$(jq -r '.[0].aliases[0]' <<< "$lvol1")" = "lvs_test1/lvol_test" ]
[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$lvol1")" = "$lvs1_uuid" ]
[ "$(jq -r '.[0].name' <<< "$lvol2")" = "$lvol2_uuid" ]
[ "$(jq -r '.[0].uuid' <<< "$lvol2")" = "$lvol2_uuid" ]
[ "$(jq -r '.[0].aliases[0]' <<< "$lvol2")" = "lvs_test2/lvol_test" ]
[ "$(jq -r '.[0].driver_specific.lvol.lvol_store_uuid' <<< "$lvol2")" = "$lvs2_uuid" ]
# clean up
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs1_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs1_uuid" && false
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs2_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs2_uuid" && false
rpc_cmd bdev_malloc_delete "$malloc1_name"
rpc_cmd bdev_get_bdevs -b "$malloc1_name" && false
rpc_cmd bdev_malloc_delete "$malloc2_name"
check_leftover_devices
}
$rootdir/app/spdk_tgt/spdk_tgt &
spdk_pid=$!
trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
waitforlisten $spdk_pid
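# standard scaffolding: start the target in the background, make sure it gets
# killed on any error via the trap, and block in waitforlisten until the target
# is ready to accept RPCs before running the first test case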
run_test "test_construct_lvs" test_construct_lvs
run_test "test_construct_lvol" test_construct_lvol
run_test "test_construct_multi_lvols" test_construct_multi_lvols
run_test "test_construct_lvols_conflict_alias" test_construct_lvols_conflict_alias
trap - SIGINT SIGTERM EXIT
killprocess $spdk_pid