062da7a08a
The following patch made sure that the CQ and SQ are allocated in
a physically contiguous manner:
(64db67) nvme/pcie: make sure sq and cq are physically contiguous
Using MAX_IO_QUEUE_ENTRIES is enough to make sure that neither
queue spans multiple hugepages.
However, that patch also made sure that a whole hugepage was occupied
only by the queue, which unnecessarily increases memory consumption
by up to two hugepages per qpair.
This patch changes the allocation so that each queue's alignment is
limited to its own size (a minimal sketch follows below).
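
Not the upstream diff itself, but a minimal sketch of the idea, assuming
the SPDK env API (spdk_zmalloc/spdk_free) and a hypothetical helper name
alloc_qpair_rings: align each ring to its own size rather than to a 2MiB
hugepage, which still keeps a power-of-two sized queue inside one hugepage.

    #include <stdint.h>
    #include <stddef.h>
    #include "spdk/env.h"
    #include "spdk/nvme_spec.h"

    /*
     * Hypothetical helper, not the upstream code: allocate the SQ and CQ
     * rings aligned to their own size instead of to a full 2MiB hugepage.
     * With power-of-two entry counts the rings still cannot straddle a
     * hugepage boundary.
     */
    static int
    alloc_qpair_rings(uint32_t num_entries, void **sq, void **cq)
    {
        size_t sq_size = num_entries * sizeof(struct spdk_nvme_cmd); /* 64B per command */
        size_t cq_size = num_entries * sizeof(struct spdk_nvme_cpl); /* 16B per completion */

        /* Before the patch: align = 2MiB hugepage; after: align = queue size. */
        *sq = spdk_zmalloc(sq_size, sq_size, NULL, SPDK_ENV_SOCKET_ID_ANY,
                           SPDK_MALLOC_DMA | SPDK_MALLOC_SHARE);
        *cq = spdk_zmalloc(cq_size, cq_size, NULL, SPDK_ENV_SOCKET_ID_ANY,
                           SPDK_MALLOC_DMA | SPDK_MALLOC_SHARE);
        if (*sq == NULL || *cq == NULL) {
            spdk_free(*sq);
            spdk_free(*cq);
            return -1;
        }
        return 0;
    }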
Changes in hugepages consumed when allocating an io_qpair in the
hello_world application:
io_queue_size   Without patch   With patch
256             8MiB            0MiB
1024            12MiB           4MiB
4096            24MiB           16MiB
Note: 0MiB means no new hugepages were required and the qpair fits into
previously allocated hugepages (see all steps before io_qpair
allocation in hello_world).
An interesting result of this patch: since alignment up to the hugepage
size was previously required, this resulted in reserving as many as two
2MiB hugepages to account for the DPDK internal malloc trailing element.
See alloc_sz in try_expand_heap_primary() within malloc_heap.c; a rough
arithmetic sketch follows below.
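
A back-of-the-envelope sketch of that effect, with placeholder names and an
assumed element-overhead constant (the real formula lives in DPDK's
malloc_heap.c and only approximately matches the one below): when the
requested alignment is a full 2MiB hugepage, the heap must grow by the
alignment slack plus the element overhead, which rounds up to two hugepages;
aligning to the queue size keeps it at one.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Rough illustration of why a hugepage-aligned request expands the heap
     * by two hugepages.  ELEM_OVERHEAD is a placeholder for the malloc
     * element header/trailer, and the formula only approximates the alloc_sz
     * computation in DPDK's try_expand_heap_primary().
     */
    #define HUGEPAGE_SZ   (2ull * 1024 * 1024)
    #define ELEM_OVERHEAD 128ull   /* hypothetical header + trailing element */

    static uint64_t align_ceil(uint64_t v, uint64_t a) { return (v + a - 1) / a * a; }

    int main(void)
    {
        uint64_t queue_sz  = 4096 * 64;   /* 4096-entry SQ of 64B commands = 256KiB */
        uint64_t old_alloc = align_ceil(HUGEPAGE_SZ + queue_sz + ELEM_OVERHEAD, HUGEPAGE_SZ);
        uint64_t new_alloc = align_ceil(queue_sz + queue_sz + ELEM_OVERHEAD, HUGEPAGE_SZ);

        printf("hugepage-aligned request grows the heap by %" PRIu64 " MiB\n", old_alloc >> 20);
        printf("size-aligned request grows the heap by     %" PRIu64 " MiB\n", new_alloc >> 20);
        return 0;
    }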
This patch not only reduces the overall memory reserved for the
queues, but also reduces the growth in heap consumption on the DPDK side.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2244 (master)
(cherry picked from commit