numam-spdk/etc/spdk/nvmf.conf.in
Changpeng Liu 8a23223e1b nvmf: Allow users to configure which lcore each subsystem runs on
Users can specify the core for each subsystem and for the acceptor listen
routine, so that they can run on different cores for performance.

Change-Id: I4bd1a96f39194c870863b4b778e6ea7cf8fc1a2d
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
2016-08-16 09:20:42 -07:00

# NVMf Target Configuration File
#
# Please write all parameters using ASCII.
# A parameter must be quoted if it includes whitespace.
#
# Configuration syntax:
# Leading whitespace is ignored.
# Lines starting with '#' are comments.
# Lines ending with '\' are concatenated with the next line.
# Bracketed ([]) names define sections.
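#
# For example, per the continuation rule above, these two lines are read
# as a single Listen directive (an illustrative sketch; the address is a
# placeholder):
#   Listen RDMA \
#     192.168.1.10:4420
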
[Global]
# Users can restrict work items to only run on certain cores by
# specifying a ReactorMask. The default ReactorMask is taken from the
# -c option in the 'ealargs' setting at the beginning of nvmf_tgt.c.
#ReactorMask 0x00FF
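# For example, the mask 0x00FF above has bits 0 through 7 set, so it
# would restrict work to lcores 0-7 (one bit per lcore, bit 0 = lcore 0).
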
# Tracepoint group mask for spdk trace buffers
# Default: 0x0 (all tracepoint groups disabled)
# Set to 0xFFFFFFFFFFFFFFFF to enable all tracepoint groups.
#TpointGroupMask 0x0
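# For example (a sketch; the bit-to-group numbering depends on the build),
# each bit in the mask enables one tracepoint group, so a mask of 0x1
# would enable only tracepoint group 0:
#TpointGroupMask 0x1
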
# syslog facility
LogFacility "local7"

# Define NVMf protocol global options
[Nvmf]
# Set the maximum number of submission and completion queues per session.
# Setting this to '8', for example, allows for 8 submission and 8 completion queues
# per session.
MaxQueuesPerSession 4
# Set the maximum number of outstanding I/O per queue.
#MaxQueueDepth 128
# Set the maximum in-capsule data size. Must be a multiple of 16.
#InCapsuleDataSize 4096
# Set the maximum I/O size. Must be a multiple of 4096.
#MaxIOSize 131072
# Set the global acceptor lcore ID. Lcores are numbered starting at 0.
#AcceptorCore 0
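# For example (illustrative; any enabled lcore is valid), uncommenting
# the following would run the acceptor listen routine on lcore 2:
#AcceptorCore 2
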
# Define an NVMf Subsystem.
# - NQN is required and must be unique.
# - Core may be set or not. If set, the subsystem will run on that
# core; otherwise, a core is allocated to each subsystem round-robin
# from the available cores. Lcores are numbered starting at 0.
# - Mode may be either "Direct" or "Virtual". Direct means that physical
# devices attached to the target will be presented to hosts as if they
# were directly attached to the host. No software emulation or command
# validation is performed. Virtual means that an NVMe controller is
# emulated in software and the namespaces it contains map to block devices
# on the target system. These block devices do not need to be NVMe devices.
# Only Direct mode is currently supported.
# - Between 1 and 255 Listen directives are allowed. This defines
# the addresses on which new connections may be accepted. The format
# is Listen <type> <address>, where <type> currently can only be RDMA.
# - Between 0 and 255 Host directives are allowed. This defines the
# NQNs of allowed hosts. If no Host directive is specified, all hosts
# are allowed to connect.
# - Exactly 1 NVMe directive is required, specifying an NVMe device by
# PCI BDF. The PCI domain:bus:device.function may be replaced by "*"
# to indicate any PCI device.
[Subsystem1]
NQN nqn.2016-06.io.spdk:cnode1
Core 0
Mode Direct
Listen RDMA 15.15.15.2:4420
Host nqn.2016-06.io.spdk:init
NVMe 0000:00:00.0

# Multiple subsystems are allowed.
[Subsystem2]
NQN nqn.2016-06.io.spdk:cnode2
Core 0
Mode Direct
Listen RDMA 192.168.2.21:4420
Host nqn.2016-06.io.spdk:init
NVMe 0000:01:00.0
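
# A subsystem may also omit the Core directive (a core is then chosen
# round-robin, as described above) and use "*" instead of a PCI BDF to
# claim any NVMe device. The following is an illustrative, commented-out
# sketch; the NQN and listen address are placeholders:
#[Subsystem3]
#NQN nqn.2016-06.io.spdk:cnode3
#Mode Direct
#Listen RDMA 192.168.2.22:4420
#Host nqn.2016-06.io.spdk:init
#NVMe *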