test/blobfs: Allocate enough memory for db_bench tests
Currently, BlobFS-nightly-autotest jobs are failing due to this test not
having enough memory around. Change that by allocating some overhead
depending on what CACHE_SIZE was set to.

Change-Id: I9b122189d39a7a73ab8521f699324516d09ae1c3
Signed-off-by: Michal Berger <michalx.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3941
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
parent 150339cc59
commit 68740678e1
@@ -91,6 +91,15 @@ else
 	DURATION=20
 	NUM_KEYS=20000000
 fi
 
+# Make sure that there's enough memory available for the mempool. Unfortunately,
+# db_bench doesn't seem to allocate memory from all numa nodes since all of it
+# comes exclusively from node0. With that in mind, try to allocate CACHE_SIZE
+# + some_overhead (1G) of pages but only on node0 to make sure that we end up
+# with the right amount not allowing setup.sh to split it by using the global
+# nr_hugepages setting. Instead of bypassing it completely, we use it to also
+# get the right size of hugepages.
+HUGEMEM=$((CACHE_SIZE + 1024)) HUGENODE=0 \
+	"$rootdir/scripts/setup.sh"
 
 cd $RESULTS_DIR
 cp $testdir/common_flags.txt insert_flags.txt
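The arithmetic behind the `HUGEMEM=$((CACHE_SIZE + 1024))` line can be sketched as follows. This is a minimal illustration, not part of the commit: the `CACHE_SIZE` value and the `HUGEPAGE_SIZE_MB=2` default are assumptions for the example (in the actual test, `CACHE_SIZE` is set earlier in the script, and setup.sh determines the hugepage size for the platform).

```shell
# Hypothetical cache size for illustration; in the test it is set by the script.
CACHE_SIZE=4096                          # e.g. a 4 GiB db_bench cache, in MB

# Add 1 GiB (1024 MB) of overhead for the mempool, as the commit does.
HUGEMEM=$((CACHE_SIZE + 1024))

# Assuming the common 2 MiB hugepage size on x86_64, this many pages
# would need to be reserved on node0 to cover HUGEMEM megabytes.
HUGEPAGE_SIZE_MB=2
NR_HUGEPAGES=$((HUGEMEM / HUGEPAGE_SIZE_MB))

echo "$HUGEMEM MB -> $NR_HUGEPAGES hugepages on node0"
```

Pinning the allocation to node0 via `HUGENODE=0` matters because db_bench draws all of its memory from that node; splitting the same total across nodes with the global nr_hugepages setting could leave node0 short.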