# iSCSI Target Getting Started Guide {#iscsi_getting_started}
The Storage Performance Development Kit iSCSI target application is named `iscsi_tgt`.

The following section describes how to run the iSCSI target from your cloned package.
## Prerequisites {#iscsi_prereqs}
This guide starts by assuming that you can already build the standard SPDK distribution on your
platform. Once built, the binary will be in `app/iscsi_tgt`.

If you want to kill the application with a signal, use SIGTERM so that it releases all of its
shared memory resources before exiting. SIGKILL gives the application no chance to release the
shared memory resources, and you may need to release them manually.
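For example, one way to stop a running target cleanly (assuming a single `iscsi_tgt` process is running):

~~~
kill -SIGTERM $(pidof iscsi_tgt)
~~~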
## Configuring iSCSI Target {#iscsi_config}
An `iscsi_tgt`-specific configuration file is used to configure the iSCSI target. A fully documented
example configuration file is located at `etc/spdk/iscsi.conf.in`. The configuration file defines the
following:

- TCP ports to use as iSCSI portals
- General iSCSI parameters
- Initiator names and addresses allowed to access iSCSI target nodes
- Number and types of storage backends to export over iSCSI LUNs
- iSCSI target node mappings between portal groups, initiator groups, and LUNs
You should make a copy of the example configuration file, modify it to suit your environment, and
then run the `iscsi_tgt` application, passing it the configuration file using the `-c` option. Right now,
the target requires elevated privileges (root) to run.
~~~
app/iscsi_tgt/iscsi_tgt -c /path/to/iscsi.conf
~~~
## Assigning CPU Cores to the iSCSI Target {#iscsi_config_lcore}
SPDK uses the [DPDK Environment Abstraction Layer](http://dpdk.org/doc/guides/prog_guide/env_abstraction_layer.html)
to gain access to hardware resources such as huge memory pages and CPU core(s). DPDK EAL provides
functions to assign threads to specific cores.
To ensure the SPDK iSCSI target has the best performance, place the NICs and the NVMe devices on the
same NUMA node and configure the target to run on CPU cores associated with that node. The following
parameter in the configuration file is used to configure the SPDK iSCSI target:

**ReactorMask:** A hexadecimal bit mask of the CPU cores that SPDK is allowed to execute work
items on. The ReactorMask is located in the [Global] section of the configuration file. For example,
to assign lcores 24, 25, 26, and 27 to iSCSI target work items, set the ReactorMask to:
~~~{.sh}
ReactorMask 0xF000000
~~~
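For reference, a minimal sketch of how this might appear inside the configuration file (only the
ReactorMask key is shown; see `etc/spdk/iscsi.conf.in` for the other [Global] options):

~~~
[Global]
  ReactorMask 0xF000000
~~~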
## Configuring a LUN in the iSCSI Target {#iscsi_lun}
Each LUN in an iSCSI target node is associated with an SPDK block device. See @ref bdev_getting_started
for details on configuring SPDK block devices. The block device to LUN mappings are specified in the
configuration file as:
~~~~
[TargetNodeX]
LUN0 Malloc0
LUN1 Nvme0n1
~~~~
This exports a malloc'd target. The disk is a RAM disk that is a chunk of memory allocated by the
iSCSI target in user space. It will use an offload engine to do the copy instead of memcpy if the
system has enough DMA channels.
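A fuller target node section ties the LUNs to a portal group and an initiator group. A minimal
sketch, with key names taken from the documented example file (consult `etc/spdk/iscsi.conf.in`
for the complete set of options and their meaning):

~~~
[TargetNode1]
  TargetName disk1
  TargetAlias "Data Disk1"
  Mapping PortalGroup1 InitiatorGroup1
  QueueDepth 64
  LUN0 Malloc0
  LUN1 Nvme0n1
~~~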
## Configuring iSCSI Target via RPC method {#iscsi_rpc}
In addition to the configuration file, the iSCSI target may also be configured via JSON-RPC calls. See
@ref jsonrpc for details.
### Add the portal group
~~~
python /path/to/spdk/scripts/rpc.py add_portal_group 1 127.0.0.1:3260
~~~
### Add the initiator group
~~~
python /path/to/spdk/scripts/rpc.py add_initiator_group 2 ANY 127.0.0.1/32
~~~
### Construct the backend block device
~~~
python /path/to/spdk/scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
~~~
### Construct the target node
~~~
python /path/to/spdk/scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64 0 0 0 1
~~~
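In this example the positional arguments pair the bdev with a LUN ID (`MyBdev:0`) and map portal
group 1 to initiator group 2 (`1:2`); run `rpc.py construct_target_node -h` for the full argument
list. Taken together, the calls above can be scripted into a one-shot setup. A minimal sketch,
assuming the target is already running and listening on its default RPC socket:

~~~
#!/usr/bin/env bash
# One-shot iSCSI target setup using the RPC calls shown above.
RPC=/path/to/spdk/scripts/rpc.py

python $RPC add_portal_group 1 127.0.0.1:3260
python $RPC add_initiator_group 2 ANY 127.0.0.1/32
python $RPC construct_malloc_bdev -b MyBdev 64 512
python $RPC construct_target_node Target3 Target3_alias MyBdev:0 1:2 64 0 0 0 1
~~~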
## Configuring iSCSI Initiator {#iscsi_initiator}
The Linux initiator is open-iscsi.

Install the open-iscsi package.

Fedora:
~~~
yum install -y iscsi-initiator-utils
~~~

Ubuntu:

~~~
apt-get install -y open-iscsi
~~~
### Setup

Edit /etc/iscsi/iscsid.conf

~~~
node.session.cmds_max = 4096
node.session.queue_depth = 128
~~~
iscsid must be restarted or receive SIGHUP for changes to take effect. To send SIGHUP, run:

~~~
killall -HUP iscsid
~~~
Recommended changes to /etc/sysctl.conf

~~~
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 0
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
~~~
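One way to load the new values without a reboot, after editing the file:

~~~
sysctl -p /etc/sysctl.conf
~~~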
### Discovery

Assume the target is at 192.168.1.5.

~~~
iscsiadm -m discovery -t sendtargets -p 192.168.1.5
~~~
### Connect to target

~~~
iscsiadm -m node --login
~~~
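To log in to a single discovered node rather than all of them, the target can be named explicitly.
A sketch, where the IQN is a placeholder for whatever the discovery step reported:

~~~
iscsiadm -m node -T iqn.2016-06.io.spdk:Target3 -p 192.168.1.5 --login
~~~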
At this point the iSCSI target's LUNs should show up as SCSI disks. Check dmesg to see what
they came up as.
### Disconnect from target

~~~
iscsiadm -m node --logout
~~~
### Deleting target node cache

~~~
iscsiadm -m node -o delete
~~~

This will cause the initiator to forget all previously discovered iSCSI target nodes.
### Finding /dev/sdX nodes for iSCSI LUNs

~~~
iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
~~~

This will show the /dev node name for each SCSI LUN in all logged in iSCSI sessions.
### Tuning

After the targets are connected, they can be tuned. For example, if /dev/sdc is
an iSCSI disk, then the following can be done:

Set the noop scheduler:

~~~
echo noop > /sys/block/sdc/queue/scheduler
~~~
Disable merging/coalescing (can be useful for precise workload measurements):

~~~
echo "2" > /sys/block/sdc/queue/nomerges
~~~
Increase the number of requests for the block queue:

~~~
echo "1024" > /sys/block/sdc/queue/nr_requests
~~~
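The same tuning can be applied to every attached iSCSI disk in one pass. A minimal sketch that
reuses the `iscsiadm` query shown above (it assumes every listed disk belongs to this target):

~~~
#!/usr/bin/env bash
# Apply scheduler, merge, and queue-depth tuning to every attached iSCSI disk.
for dev in $(iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'); do
    echo noop   > /sys/block/$dev/queue/scheduler
    echo 2      > /sys/block/$dev/queue/nomerges
    echo 1024   > /sys/block/$dev/queue/nr_requests
done
~~~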
# Vector Packet Processing {#vpp}
VPP (part of the [Fast Data - Input/Output](https://fd.io/) project) is an extensible
userspace framework providing networking functionality. It is built on the idea of a
packet processing graph (see [What is VPP?](https://wiki.fd.io/view/VPP/What_is_VPP?)).

Detailed instructions for the **simplified steps 1-3** below can be found in the
VPP [Quick Start Guide](https://wiki.fd.io/view/VPP).

*SPDK supports VPP version 18.01.1.*
## 1. Building VPP (optional) {#vpp_build}
*Please skip this step if using already built packages.*
Clone and checkout VPP
~~~
git clone https://gerrit.fd.io/r/vpp && cd vpp
git checkout v18.01.1
~~~
Install VPP build dependencies
~~~
make install-dep
~~~
Build and create .rpm packages
~~~
make pkg-rpm
~~~
Alternatively, build and create .deb packages
~~~
make pkg-deb
~~~
Packages can be found in the `vpp/build-root/` directory.

For more in-depth instructions, please see the Building section of the
[VPP documentation](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Building)
*Please note: VPP 18.01.1 does not support OpenSSL 1.1. It is suggested to install a compatibility
package for compilation.*
~~~
sudo dnf install -y --allowerasing compat-openssl10-devel
~~~
*Then reinstall the latest OpenSSL devel package:*
~~~
sudo dnf install -y --allowerasing openssl-devel
~~~
## 2. Installing VPP {#vpp_install}

Packages can be installed from a distribution repository or built in the previous step.
The minimal set of packages consists of `vpp`, `vpp-lib` and `vpp-devel`.

*Note: Please remove or modify the /etc/sysctl.d/80-vpp.conf file with values appropriate for
the number of hugepages that will be used on the system.*
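For example, a minimal sketch of /etc/sysctl.d/80-vpp.conf for a system reserving 1024 2 MB
hugepages (the values below are illustrative, not defaults that fit every system):

~~~
vm.nr_hugepages=1024
vm.max_map_count=3096
kernel.shmmax=2147483648
~~~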
## 3. Running VPP {#vpp_run}

VPP takes over any network interfaces that were bound to a userspace driver; for details please
see the DPDK guide on
[Binding and Unbinding Network Ports to/from the Kernel Modules](http://dpdk.org/doc/guides/linux_gsg/linux_drivers.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules).

VPP is installed as a service and is disabled by default. To start VPP with the default config:
~~~
sudo systemctl start vpp
~~~
Alternatively, use the `vpp` binary directly:
~~~
sudo vpp unix {cli-listen /run/vpp/cli.sock}
~~~
A useful tool is `vppctl`, which allows you to control a running VPP instance,
either by entering the VPP configuration prompt:
~~~
sudo vppctl
~~~
Or by sending a single command directly. For example, to display interfaces within VPP:
~~~
sudo vppctl show interface
~~~
### Example: Tap interfaces on a single host

For functional testing purposes a virtual tap interface can be created,
so no additional network hardware is required.
This allows network communication between the SPDK iSCSI target using the VPP end of the tap
and a kernel iSCSI initiator using the kernel end of the tap. A single host is used in this scenario.

Create a tap interface via VPP:
~~~
vppctl tap connect tap0
vppctl set interface state tapcli-0 up
vppctl set interface ip address tapcli-0 10.0.0.1/24
vppctl show int addr
~~~
Assign an address to the kernel side of the tap interface:
~~~
sudo ip addr add 10.0.0.2/24 dev tap0
sudo ip link set tap0 up
~~~
To verify connectivity
~~~
ping 10.0.0.1
~~~
## 4. Building SPDK with VPP {#vpp_built_into_spdk}
Support for VPP can be built into SPDK by using a configuration option:

~~~
./configure --with-vpp
~~~

Alternatively, a directory with the built libraries can be pointed to, and it will be used for
compilation instead of the installed packages:

~~~
./configure --with-vpp=/path/to/vpp/repo/build-root/vpp
~~~
## 5. Running SPDK with VPP {#vpp_running_with_spdk}

The VPP application has to be started before the SPDK iSCSI target in order to enable usage of
its network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured
within VPP will be available to be configured as portal addresses. Please refer to @ref iscsi_rpc.
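For example, continuing the tap interface scenario above, the VPP side of the tap could serve as
the portal address. A sketch reusing the RPC call from @ref iscsi_rpc (the portal group tag and
port are illustrative):

~~~
python /path/to/spdk/scripts/rpc.py add_portal_group 1 10.0.0.1:3260
~~~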
# iSCSI Hotplug {#iscsi_hotplug}
At the iSCSI level, we provide the following support for Hotplug:
1. bdev/nvme:
   At the bdev/nvme level, we start one hotplug monitor which calls
   spdk_nvme_probe() periodically to get hotplug events. We provide the
   private attach_cb and remove_cb for spdk_nvme_probe(). In the attach_cb,
   we create the block device based on the NVMe device attached, and in the
   remove_cb, we unregister the block device, which also notifies the
   upper level stack (for the iSCSI target, the upper level stack is scsi/lun) to
   handle the hot-remove event.

2. scsi/lun:
   When the LUN receives the hot-remove notification from the block device layer,
   the LUN is marked as removed, and all the I/Os submitted after this point
   return with check condition status. Then the LUN starts one poller which waits
   for all the commands that have already been submitted to the block device to
   complete; after all of those commands complete, the LUN is deleted.
## Known bugs and limitations {#iscsi_hotplug_bugs}

For write commands, testing hotplug with a write that triggers R2T (for example, a 1 MB I/O)
will crash the iSCSI target.

For read commands, testing hotplug with a large read (for example, a 1 MB I/O) will probably
crash the iSCSI target.
@sa spdk_nvme_probe