Compare commits

...

11 Commits

Author SHA1 Message Date
Jim Harris
f49fa23d9a blob: always use uint64_t to represent page_idx
4KiB page size * UINT32_MAX = 16TiB, so we must use a
uint64_t page index for any blobstore on a backing device
of 16TiB or greater.
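
For illustration, a minimal sketch (not part of the patch; PAGE_SIZE and
the values are assumptions) of the truncation this guards against:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096ULL   /* 4KiB blobstore page */

    int main(void)
    {
        /* Hypothetical page index just past the 32-bit limit, i.e. a
         * byte offset beyond 16TiB. */
        uint64_t page_idx = (uint64_t)UINT32_MAX + 1;

        /* Truncated: a 32-bit index wraps to 0 before the multiply. */
        printf("truncated offset: %llu\n",
               (unsigned long long)((uint32_t)page_idx * PAGE_SIZE));

        /* Correct: the 64-bit index survives the byte-offset math. */
        printf("correct offset:   %llu\n",
               (unsigned long long)(page_idx * PAGE_SIZE));
        return 0;
    }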

Cherry-pick of commit f300130872 from master.

Change-Id: I917d772cbcf83d124f9957054cf5a182e803a3c8
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Reviewed-on: https://review.gerrithub.io/416448
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/416563
2018-06-22 21:15:55 +00:00
Daniel Verkamp
001edfaabf blob: change lba to uint64_t in serialize_extent
Make sure we don't truncate the LBA when using it to serialize the
cluster array into an extent list.

We also need to add an explicit cast in _spdk_bs_cluster_to_lba
to ensure the conversion doesn't get truncated.  While here, do
the same cast for _spdk_bs_cluster_to_page.
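
As a standalone sketch (the values are illustrative, not from the
patch): the cast matters because a uint32_t * uint32_t multiply is
performed in 32 bits even when the result is stored in a uint64_t.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t cluster = 3000000;       /* cluster index on a large device */
        uint32_t lba_per_cluster = 2048;  /* e.g. 1MiB cluster / 512B block */

        /* Both operands are 32-bit: the product wraps before widening. */
        uint64_t truncated = cluster * lba_per_cluster;

        /* Casting one operand first forces 64-bit arithmetic. */
        uint64_t correct = (uint64_t)cluster * lba_per_cluster;

        printf("truncated: %llu\ncorrect:   %llu\n",
               (unsigned long long)truncated, (unsigned long long)correct);
        return 0;
    }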

Cherry-pick of commit 89426e9bb5 from master.

Change-Id: I4fff3ea8248b012bacfda92765088c868efcb218
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Reviewed-on: https://review.gerrithub.io/416231
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/416562
2018-06-22 21:15:55 +00:00
Daniel Verkamp
5bf3a5f383 blob: fix load of non-multiple-of-8 masks
Previously, the blobstore load code was iterating over the masks (blob
IDs, clusters) byte by byte, then bit by bit in a nested loop, but it
was rounding incorrectly and skipping any bits set in the last byte if
the total size was not a multiple of 8.

Replace the nested loops with a single loop iterating over bits to
simplify the code and avoid the bug.
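
A minimal sketch of the fixed iteration, where set_bit() stands in for
spdk_bit_array_set() (the real hunks appear in the blobstore diff
below):

    #include <stdint.h>

    /* Walk the mask bit by bit up to its exact bit length, so bits in
     * a trailing partial byte are no longer skipped. */
    static void
    load_mask(const uint8_t *mask, uint64_t bit_len,
              void (*set_bit)(uint64_t))
    {
        uint64_t i;

        for (i = 0; i < bit_len; i++) {
            if (mask[i / 8] & (1U << (i % 8))) {
                set_bit(i);
            }
        }
    }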

Cherry-picked from commit 9d149a706b on master.

Change-Id: I539958bc8783b5fb1600bfe82a594a4ea17d34ab
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/416230
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-on: https://review.gerrithub.io/416429
2018-06-22 17:28:34 +00:00
Daniel Verkamp
7cb8df7a4e version: SPDK v18.04.2-pre
Change-Id: I581c5f4f01560ba9fa28945699cf4137c7d84140
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/416428
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2018-06-22 17:28:34 +00:00
Daniel Verkamp
727a80b328 SPDK 18.04.1
Change-Id: I5526837b75bb21f9c5d7fdd4363763f6ebfa0a2b
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/414070
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-06-06 22:17:48 +00:00
Andrey Kuzmin
94f3053de3 Fix fio_plugin build for FIO_IOOPS_VERSION >= 24.
Fio commit d3b07186b1d4c7c1d9adc1306407458ce41ad048 changed the return
type of the ioengine->queue method to a new enumeration type,
fio_q_status. Add preprocessor checks so that the SPDK fio_plugin
builds successfully with both the current and earlier fio ioengine API
versions.
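
A generic, self-contained illustration of the pattern; API_VERSION and
the q_status names are stand-ins for fio's FIO_IOOPS_VERSION and
fio_q_status, and the actual patch is in the fio_plugin diffs below:

    #include <stdio.h>

    #define API_VERSION 24  /* stand-in for FIO_IOOPS_VERSION */

    /* The return-code constants exist under both API versions; only
     * the declared return type of queue() changed. */
    enum q_status { Q_COMPLETED = 0, Q_QUEUED = 1, Q_BUSY = 2 };

    #if API_VERSION >= 24
    typedef enum q_status q_status_t;   /* new API: enum return type */
    #else
    typedef int q_status_t;             /* old API: plain int */
    #endif

    static q_status_t
    queue_stub(void)
    {
        return Q_QUEUED;  /* the value is valid under either typedef */
    }

    int main(void)
    {
        printf("queue_stub() = %d\n", (int)queue_stub());
        return 0;
    }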

This is a cherry-pick of commit 4e51a274de from master.

Change-Id: Ie7b696c9c410fa5b85d0487bbf99e8e5f1f4a886
Signed-off-by: Andrey Kuzmin <andrey.v.kuzmin@gmail.com>
Reviewed-on: https://review.gerrithub.io/413749
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-on: https://review.gerrithub.io/414057
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-06-06 19:36:37 +00:00
Dariusz Stojaczyk
16c8d766b5 event/app: fix setting "single-file-segments"
This was broken by commit 93cb4a31 [1].

[1] 93cb4a31: event/app: Refactor initialization of app environment
in spdk_app_start()
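
An illustrative reduction of the bug (the structs here are stubs
mirroring spdk_app_opts and spdk_env_opts): a field-by-field copy
silently drops any field the refactor forgot.

    #include <stdbool.h>

    struct app_opts { bool hugepage_single_segments; };
    struct env_opts { bool hugepage_single_segments; };

    static void
    setup_env(const struct app_opts *app, struct env_opts *env)
    {
        /* The fix is this one line: without it, the flag set on the
         * app options never reached the environment layer. */
        env->hugepage_single_segments = app->hugepage_single_segments;
    }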

This is a cherry-pick of commit f8387acaa5 from master.

Change-Id: Id146ab10a0011d5f87efe1243f9bd689201b576b
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/413168
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/414060
2018-06-06 19:36:23 +00:00
Dariusz Stojaczyk
494c365eca virtio: merge contiguous memory regions
Memory regions that are contiguous in virtual address space may still
appear as separate /proc/self/maps entries.

This patch brings support for DPDK 18.05.
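
A simplified model of the merge (struct region and try_merge() are
stand-ins; the real change lands in get_hugepage_file_info() below):
two maps entries that back the same file and abut in virtual address
space are folded into one region.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define REGION_PATH_MAX 256

    struct region {
        uintptr_t addr;
        size_t    size;
        char      path[REGION_PATH_MAX];
    };

    /* Returns 1 if the mapping [v_start, v_end) was merged into the
     * previous region, 0 if the caller must append a new entry. */
    static int
    try_merge(struct region *regions, int count,
              uintptr_t v_start, uintptr_t v_end, const char *path)
    {
        if (count > 0 &&
            strncmp(path, regions[count - 1].path, REGION_PATH_MAX) == 0 &&
            v_start == regions[count - 1].addr + regions[count - 1].size) {
            regions[count - 1].size += v_end - v_start;
            return 1;
        }
        return 0;
    }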

This is a cherry-pick of commit b5ad869cc1 from master.

Change-Id: I3029ac8b702e98133eb035f0b0cc3dd26656253a
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/413167
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/414059
2018-06-06 19:36:23 +00:00
Dariusz Stojaczyk
e110a0d1e6 env/dpdk: add support for DPDK 18.05 dynamic memory allocation
This brings DPDK 18.05 support and introduces
dynamic hugepage memory allocation.

The following is now possible:
    ./spdk_tgt -s 32
    rpc.py construct_malloc_bdev 128 512

or even:
    ./spdk_tgt -s 0

Note that if no -s param is given, DPDK will still
allocate all available hugepage memory.

This has been tested with DPDK 18.05-rc6.

Fixes #281

This is a cherry-pick of commit b6fce1912d from master.

Change-Id: I04e23cfcd8c0af913ed402a310fd596bc25d685c
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/410540
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/412868
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
2018-05-30 00:07:04 +00:00
Jim Harris
5f59d919a4 blob: don't try to claim cluster 0 in recovery code
Thin provisioned blobs mark unallocated clusters with
cluster ID 0.  During recovery from a dirty shutdown,
we must not try to claim cluster 0; such clusters should
be ignored instead.
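
A minimal sketch of the replay rule, where mark_used() stands in for
spdk_bit_array_set() on the used_clusters map (the real hunk is in the
blobstore diff below):

    #include <stdint.h>

    static void
    replay_extent(uint32_t cluster_idx, uint32_t extent_len,
                  void (*mark_used)(uint32_t))
    {
        uint32_t j;

        /* cluster_idx == 0 marks an unallocated cluster in a thin
         * provisioned blob, so recovery must not claim it. */
        if (cluster_idx == 0) {
            return;
        }
        for (j = 0; j < extent_len; j++) {
            mark_used(cluster_idx + j);
        }
    }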

Fixes issue #291.

This is a cherry-pick of commit e8ddb060f8 from master.

Change-Id: I83083bba3a2468d68874b485f1fc1e302e46fbd4
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Reviewed-on: https://review.gerrithub.io/410065
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-on: https://review.gerrithub.io/410699
2018-05-09 20:52:11 +00:00
Daniel Verkamp
f1b747f50f version: SPDK v18.04.1-pre
Change-Id: I42efa8e09c5b3bb38c1126b782b1fdc19b5de2d1
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/410069
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2018-05-07 16:49:34 +00:00
13 changed files with 141 additions and 59 deletions

View File

@@ -1,5 +1,18 @@
# Changelog
## v18.04.2: Maintenance Release
## v18.04.1: Maintenance Release
SPDK v18.04.1 is a bug fix and maintenance release.
A bug in the blobstore recovery code for thin provisioned blobs has been fixed
(GitHub issue #291).
The `env_dpdk` environment layer has been updated to work with DPDK 18.05.
The NVMe and bdev `fio_plugin` examples have been updated to work with FIO 3.7.
## v18.04: Logical Volume Snapshot/Clone, iSCSI Initiator, Bdev QoS, VPP Userspace TCP/IP
### vhost

View File

@@ -489,7 +489,13 @@ spdk_fio_completion_cb(struct spdk_bdev_io *bdev_io,
spdk_bdev_free_io(bdev_io);
}
static int
#if FIO_IOOPS_VERSION >= 24
typedef enum fio_q_status fio_q_status_t;
#else
typedef int fio_q_status_t;
#endif
static fio_q_status_t
spdk_fio_queue(struct thread_data *td, struct io_u *io_u)
{
int rc = 1;

View File

@@ -416,7 +416,14 @@ spdk_nvme_io_next_sge(void *ref, void **address, uint32_t *length)
return 0;
}
static int spdk_fio_queue(struct thread_data *td, struct io_u *io_u)
#if FIO_IOOPS_VERSION >= 24
typedef enum fio_q_status fio_q_status_t;
#else
typedef int fio_q_status_t;
#endif
static fio_q_status_t
spdk_fio_queue(struct thread_data *td, struct io_u *io_u)
{
int rc = 1;
struct spdk_fio_thread *fio_thread = td->io_ops_data;

View File

@@ -54,12 +54,12 @@
* Patch level is incremented on maintenance branch releases and reset to 0 for each
* new major.minor release.
*/
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_PATCH 2
/**
* Version string suffix.
*/
#define SPDK_VERSION_SUFFIX ""
#define SPDK_VERSION_SUFFIX "-pre"
/**
* Single numeric value representing a version number for compile-time comparisons.

View File

@@ -566,7 +566,7 @@ _spdk_blob_serialize_extent(const struct spdk_blob *blob,
struct spdk_blob_md_descriptor_extent *desc;
size_t cur_sz;
uint64_t i, extent_idx;
uint32_t lba, lba_per_cluster, lba_count;
uint64_t lba, lba_per_cluster, lba_count;
/* The buffer must have room for at least one extent */
cur_sz = sizeof(struct spdk_blob_md_descriptor) + sizeof(desc->extents[0]);
@@ -2479,7 +2479,7 @@ static void
_spdk_bs_load_used_blobids_cpl(spdk_bs_sequence_t *seq, void *cb_arg, int bserrno)
{
struct spdk_bs_load_ctx *ctx = cb_arg;
uint32_t i, j;
uint32_t i;
int rc;
/* The type must be correct */
@@ -2500,13 +2500,9 @@ _spdk_bs_load_used_blobids_cpl(spdk_bs_sequence_t *seq, void *cb_arg, int bserrn
return;
}
for (i = 0; i < ctx->mask->length / 8; i++) {
uint8_t segment = ctx->mask->mask[i];
for (j = 0; segment; j++) {
if (segment & 1U) {
spdk_bit_array_set(ctx->bs->used_blobids, (i * 8) + j);
}
segment >>= 1U;
for (i = 0; i < ctx->mask->length; i++) {
if (ctx->mask->mask[i / 8] & (1U << (i % 8))) {
spdk_bit_array_set(ctx->bs->used_blobids, i);
}
}
@@ -2518,7 +2514,7 @@ _spdk_bs_load_used_clusters_cpl(spdk_bs_sequence_t *seq, void *cb_arg, int bserr
{
struct spdk_bs_load_ctx *ctx = cb_arg;
uint64_t lba, lba_count, mask_size;
uint32_t i, j;
uint32_t i;
int rc;
/* The type must be correct */
@@ -2537,15 +2533,11 @@ _spdk_bs_load_used_clusters_cpl(spdk_bs_sequence_t *seq, void *cb_arg, int bserr
}
ctx->bs->num_free_clusters = ctx->bs->total_clusters;
for (i = 0; i < ctx->mask->length / 8; i++) {
uint8_t segment = ctx->mask->mask[i];
for (j = 0; segment && (j < 8); j++) {
if (segment & 1U) {
spdk_bit_array_set(ctx->bs->used_clusters, (i * 8) + j);
assert(ctx->bs->num_free_clusters > 0);
ctx->bs->num_free_clusters--;
}
segment >>= 1U;
for (i = 0; i < ctx->mask->length; i++) {
if (ctx->mask->mask[i / 8] & (1U << (i % 8))) {
spdk_bit_array_set(ctx->bs->used_clusters, i);
assert(ctx->bs->num_free_clusters > 0);
ctx->bs->num_free_clusters--;
}
}
@@ -2569,7 +2561,7 @@ _spdk_bs_load_used_pages_cpl(spdk_bs_sequence_t *seq, void *cb_arg, int bserrno)
{
struct spdk_bs_load_ctx *ctx = cb_arg;
uint64_t lba, lba_count, mask_size;
uint32_t i, j;
uint32_t i;
int rc;
/* The type must be correct */
@@ -2587,13 +2579,9 @@ _spdk_bs_load_used_pages_cpl(spdk_bs_sequence_t *seq, void *cb_arg, int bserrno)
return;
}
for (i = 0; i < ctx->mask->length / 8; i++) {
uint8_t segment = ctx->mask->mask[i];
for (j = 0; segment && (j < 8); j++) {
if (segment & 1U) {
spdk_bit_array_set(ctx->bs->used_md_pages, (i * 8) + j);
}
segment >>= 1U;
for (i = 0; i < ctx->mask->length; i++) {
if (ctx->mask->mask[i / 8] & (1U << (i % 8))) {
spdk_bit_array_set(ctx->bs->used_md_pages, i);
}
}
spdk_dma_free(ctx->mask);
@@ -2648,16 +2636,24 @@ _spdk_bs_load_replay_md_parse_page(const struct spdk_blob_md_page *page, struct
struct spdk_blob_md_descriptor_extent *desc_extent;
unsigned int i, j;
unsigned int cluster_count = 0;
uint32_t cluster_idx;
desc_extent = (struct spdk_blob_md_descriptor_extent *)desc;
for (i = 0; i < desc_extent->length / sizeof(desc_extent->extents[0]); i++) {
for (j = 0; j < desc_extent->extents[i].length; j++) {
spdk_bit_array_set(bs->used_clusters, desc_extent->extents[i].cluster_idx + j);
if (bs->num_free_clusters == 0) {
return -1;
cluster_idx = desc_extent->extents[i].cluster_idx;
/*
* cluster_idx = 0 means an unallocated cluster - don't mark that
* in the used cluster map.
*/
if (cluster_idx != 0) {
spdk_bit_array_set(bs->used_clusters, cluster_idx + j);
if (bs->num_free_clusters == 0) {
return -1;
}
bs->num_free_clusters--;
}
bs->num_free_clusters--;
cluster_count++;
}
}

View File

@@ -170,7 +170,7 @@ struct spdk_blob_store {
uint64_t total_clusters;
uint64_t total_data_clusters;
uint64_t num_free_clusters;
uint32_t pages_per_cluster;
uint64_t pages_per_cluster;
spdk_blob_id super_blob;
struct spdk_bs_type bstype;
@@ -399,7 +399,7 @@ _spdk_bs_dev_page_to_lba(struct spdk_bs_dev *bs_dev, uint64_t page)
return page * SPDK_BS_PAGE_SIZE / bs_dev->blocklen;
}
static inline uint32_t
static inline uint64_t
_spdk_bs_lba_to_page(struct spdk_blob_store *bs, uint64_t lba)
{
uint64_t lbas_per_page;
@@ -426,7 +426,7 @@ _spdk_bs_dev_lba_to_page(struct spdk_bs_dev *bs_dev, uint64_t lba)
static inline uint64_t
_spdk_bs_cluster_to_page(struct spdk_blob_store *bs, uint32_t cluster)
{
return cluster * bs->pages_per_cluster;
return (uint64_t)cluster * bs->pages_per_cluster;
}
static inline uint32_t
@@ -440,7 +440,7 @@ _spdk_bs_page_to_cluster(struct spdk_blob_store *bs, uint64_t page)
static inline uint64_t
_spdk_bs_cluster_to_lba(struct spdk_blob_store *bs, uint32_t cluster)
{
return cluster * (bs->cluster_sz / bs->dev->blocklen);
return (uint64_t)cluster * (bs->cluster_sz / bs->dev->blocklen);
}
static inline uint32_t
@@ -465,7 +465,7 @@ _spdk_bs_blob_lba_from_back_dev_lba(struct spdk_blob *blob, uint64_t lba)
/* End basic conversions */
static inline uint32_t
static inline uint64_t
_spdk_bs_blobid_to_page(spdk_blob_id id)
{
return id & 0xFFFFFFFF;
@@ -476,8 +476,11 @@ _spdk_bs_blobid_to_page(spdk_blob_id id)
* code assumes blob id == page_idx.
*/
static inline spdk_blob_id
_spdk_bs_page_to_blobid(uint32_t page_idx)
_spdk_bs_page_to_blobid(uint64_t page_idx)
{
if (page_idx > UINT32_MAX) {
return SPDK_BLOBID_INVALID;
}
return SPDK_BLOB_BLOBID_HIGH_BIT | page_idx;
}
@@ -485,10 +488,10 @@ _spdk_bs_page_to_blobid(uint32_t page_idx)
* start of that page.
*/
static inline uint64_t
_spdk_bs_blob_page_to_lba(struct spdk_blob *blob, uint32_t page)
_spdk_bs_blob_page_to_lba(struct spdk_blob *blob, uint64_t page)
{
uint64_t lba;
uint32_t pages_per_cluster;
uint64_t pages_per_cluster;
pages_per_cluster = blob->bs->pages_per_cluster;
@@ -504,9 +507,9 @@ _spdk_bs_blob_page_to_lba(struct spdk_blob *blob, uint32_t page)
* next cluster boundary.
*/
static inline uint32_t
_spdk_bs_num_pages_to_cluster_boundary(struct spdk_blob *blob, uint32_t page)
_spdk_bs_num_pages_to_cluster_boundary(struct spdk_blob *blob, uint64_t page)
{
uint32_t pages_per_cluster;
uint64_t pages_per_cluster;
pages_per_cluster = blob->bs->pages_per_cluster;
@@ -515,9 +518,9 @@ _spdk_bs_num_pages_to_cluster_boundary(struct spdk_blob *blob, uint32_t page)
/* Given a page offset into a blob, look up the number of pages into blob to beginning of current cluster */
static inline uint32_t
_spdk_bs_page_to_cluster_start(struct spdk_blob *blob, uint32_t page)
_spdk_bs_page_to_cluster_start(struct spdk_blob *blob, uint64_t page)
{
uint32_t pages_per_cluster;
uint64_t pages_per_cluster;
pages_per_cluster = blob->bs->pages_per_cluster;
@@ -526,10 +529,10 @@ _spdk_bs_page_to_cluster_start(struct spdk_blob *blob, uint32_t page)
/* Given a page offset into a blob, look up if it is from allocated cluster. */
static inline bool
_spdk_bs_page_is_allocated(struct spdk_blob *blob, uint32_t page)
_spdk_bs_page_is_allocated(struct spdk_blob *blob, uint64_t page)
{
uint64_t lba;
uint32_t pages_per_cluster;
uint64_t pages_per_cluster;
pages_per_cluster = blob->bs->pages_per_cluster;

View File

@@ -80,7 +80,8 @@ endif
DPDK_LIB = $(DPDK_LIB_LIST:%=$(DPDK_ABS_DIR)/lib/lib%$(DPDK_LIB_EXT))
ENV_CFLAGS = $(DPDK_INC)
# SPDK memory registration requires experimental (deprecated) rte_memory API for DPDK 18.05
ENV_CFLAGS = $(DPDK_INC) -Wno-deprecated-declarations
ENV_CXXFLAGS = $(ENV_CFLAGS)
ENV_DPDK_FILE = $(call spdk_lib_list_to_files,env_dpdk)
ENV_LIBS = $(ENV_DPDK_FILE) $(DPDK_LIB)

View File

@@ -498,12 +498,29 @@ spdk_mem_map_translate(const struct spdk_mem_map *map, uint64_t vaddr)
return map_2mb->translation_2mb;
}
#if RTE_VERSION >= RTE_VERSION_NUM(18, 05, 0, 0)
static void
memory_hotplug_cb(enum rte_mem_event event_type,
const void *addr, size_t len, void *arg)
{
if (event_type == RTE_MEM_EVENT_ALLOC) {
spdk_mem_register((void *)addr, len);
} else if (event_type == RTE_MEM_EVENT_FREE) {
spdk_mem_unregister((void *)addr, len);
}
}
static int
memory_iter_cb(const struct rte_memseg_list *msl,
const struct rte_memseg *ms, size_t len, void *arg)
{
return spdk_mem_register(ms->addr, len);
}
#endif
int
spdk_mem_map_init(void)
{
struct rte_mem_config *mcfg;
size_t seg_idx;
g_mem_reg_map = spdk_mem_map_alloc(0, NULL, NULL);
if (g_mem_reg_map == NULL) {
DEBUG_PRINT("memory registration map allocation failed\n");
@@ -514,8 +531,14 @@ spdk_mem_map_init(void)
* Walk all DPDK memory segments and register them
* with the master memory map
*/
mcfg = rte_eal_get_configuration()->mem_config;
#if RTE_VERSION >= RTE_VERSION_NUM(18, 05, 0, 0)
rte_mem_event_callback_register("spdk", memory_hotplug_cb, NULL);
rte_memseg_contig_walk(memory_iter_cb, NULL);
#else
struct rte_mem_config *mcfg;
size_t seg_idx;
mcfg = rte_eal_get_configuration()->mem_config;
for (seg_idx = 0; seg_idx < RTE_MAX_MEMSEG; seg_idx++) {
struct rte_memseg *seg = &mcfg->memseg[seg_idx];
@@ -525,5 +548,6 @@ spdk_mem_map_init(void)
spdk_mem_register(seg->addr, seg->len);
}
#endif
return 0;
}

View File

@@ -214,12 +214,23 @@ static uint64_t
vtophys_get_paddr_memseg(uint64_t vaddr)
{
uintptr_t paddr;
struct rte_mem_config *mcfg;
struct rte_memseg *seg;
#if RTE_VERSION >= RTE_VERSION_NUM(18, 05, 0, 0)
seg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
if (seg != NULL) {
paddr = seg->phys_addr;
if (paddr == RTE_BAD_IOVA) {
return SPDK_VTOPHYS_ERROR;
}
paddr += (vaddr - (uintptr_t)seg->addr);
return paddr;
}
#else
struct rte_mem_config *mcfg;
uint32_t seg_idx;
mcfg = rte_eal_get_configuration()->mem_config;
for (seg_idx = 0; seg_idx < RTE_MAX_MEMSEG; seg_idx++) {
seg = &mcfg->memseg[seg_idx];
if (seg->addr == NULL) {
@@ -240,6 +251,7 @@ vtophys_get_paddr_memseg(uint64_t vaddr)
return paddr;
}
}
#endif
return SPDK_VTOPHYS_ERROR;
}

View File

@@ -347,6 +347,7 @@ spdk_app_setup_env(struct spdk_app_opts *opts)
env_opts.mem_channel = opts->mem_channel;
env_opts.master_core = opts->master_core;
env_opts.mem_size = opts->mem_size;
env_opts.hugepage_single_segments = opts->hugepage_single_segments;
env_opts.no_pci = opts->no_pci;
rc = spdk_env_init(&env_opts);

View File

@@ -226,6 +226,14 @@ get_hugepage_file_info(struct hugepage_file_info huges[], int max)
SPDK_ERRLOG("Exceed maximum of %d\n", max);
goto error;
}
if (idx > 0 &&
strncmp(tmp, huges[idx - 1].path, PATH_MAX) == 0 &&
v_start == huges[idx - 1].addr + huges[idx - 1].size) {
huges[idx - 1].size += (v_end - v_start);
continue;
}
huges[idx].addr = v_start;
huges[idx].size = v_end - v_start;
snprintf(huges[idx].path, PATH_MAX, "%s", tmp);

View File

@@ -50,6 +50,16 @@ rte_eal_get_configuration(void)
return &g_cfg;
}
#if RTE_VERSION >= RTE_VERSION_NUM(18, 05, 0, 0)
typedef void (*rte_mem_event_callback_t)(enum rte_mem_event event_type,
const void *addr, size_t len, void *arg);
typedef int (*rte_memseg_contig_walk_t)(const struct rte_memseg_list *msl,
const struct rte_memseg *ms, size_t len, void *arg);
DEFINE_STUB(rte_mem_event_callback_register, int, (const char *name, rte_mem_event_callback_t clb,
void *arg), 0);
DEFINE_STUB(rte_memseg_contig_walk, int, (rte_memseg_contig_walk_t func, void *arg), 0);
#endif
#define PAGE_ARRAY_SIZE (100)
static struct spdk_bit_array *g_page_array;

View File

@@ -524,9 +524,10 @@ blob_thin_provision(void)
spdk_blob_close(blob, blob_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
spdk_bs_unload(g_bs, bs_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
g_bs = NULL;
/* Do not shut down cleanly. This makes sure that when we load again
* and try to recover a valid used_cluster map, that blobstore will
* ignore clusters with index 0 since these are unallocated clusters.
*/
/* Load an existing blob store and check if invalid_flags is set */
dev = init_dev();