markdownlint: enable rule MD040
MD040 - Fenced code blocks should have a language specified

Fixed all errors

Signed-off-by: Maciej Wawryk <maciejx.wawryk@intel.com>
Change-Id: Iddd307068c1047ca9a0bb12c1b0d9c88f496765f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9272
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Parent: 1c81d1afa2
Commit: 63ee471b64
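For context, markdownlint rule MD040 requires every opening code fence to carry an info string naming the block's language. A minimal illustration (hypothetical snippet, not taken from this commit): the first fence below triggers MD040, the second passes.

~~~markdown
```
./configure --target-arch=broadwell
```

```bash
./configure --target-arch=broadwell
```
~~~

This is exactly the transformation applied throughout the diff below: each bare fence opener gains a language tag such as `bash`, `text`, `python`, or `c`.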
@@ -1882,7 +1882,7 @@ Preliminary support for cross compilation is now available. Targeting an older
 CPU on the same architecture using your native compiler can be accomplished by
 using the `--target-arch` option to `configure` as follows:

-~~~
+~~~bash
 ./configure --target-arch=broadwell
 ~~~

@@ -1890,7 +1890,7 @@ Additionally, some support for cross-compiling to other architectures has been
 added via the `--cross-prefix` argument to `configure`. To cross-compile, set CC
 and CXX to the cross compilers, then run configure as follows:

-~~~
+~~~bash
 ./configure --target-arch=aarm64 --cross-prefix=aarch64-linux-gnu
 ~~~

@@ -129,7 +129,9 @@ Boolean (on/off) options are configured with a 'y' (yes) or 'n' (no). For
 example, this line of `CONFIG` controls whether the optional RDMA (libibverbs)
 support is enabled:

-CONFIG_RDMA?=n
+~~~{.sh}
+CONFIG_RDMA?=n
+~~~

 To enable RDMA, this line may be added to `mk/config.mk` with a 'y' instead of
 'n'. For the majority of options this can be done using the `configure` script.
@@ -151,7 +151,7 @@ Whenever the `CPU mask` is mentioned it is a string in one of the following form

 The following CPU masks are equal and correspond to CPUs 0, 1, 2, 8, 9, 10, 11 and 12:

-~~~
+~~~bash
 0x1f07
 0x1F07
 1f07
@@ -236,7 +236,7 @@ Example command

 ### Creating a GPT partition table using NBD {#bdev_ug_gpt_create_part}

-~~~
+~~~bash
 # Expose bdev Nvme0n1 as kernel block device /dev/nbd0 by JSON-RPC
 rpc.py nbd_start_disk Nvme0n1 /dev/nbd0

@@ -88,7 +88,7 @@ In these examples, the value "X" will represent the special value (2^64-1) descr

 ### Initial Creation

-```
+```text
 +--------------------+
 Backing Device |                    |
 +--------------------+
@@ -123,7 +123,7 @@ In these examples, the value "X" will represent the special value (2^64-1) descr
 store the 16KB of data.
 * Write the chunk map index to entry 2 in the logical map.

-```
+```text
 +--------------------+
 Backing Device |01                  |
 +--------------------+
@@ -157,7 +157,7 @@ In these examples, the value "X" will represent the special value (2^64-1) descr
 * Write (2, X, X, X) to the chunk map.
 * Write the chunk map index to entry 0 in the logical map.

-```
+```text
 +--------------------+
 Backing Device |012                 |
 +--------------------+
@@ -205,7 +205,7 @@ In these examples, the value "X" will represent the special value (2^64-1) descr
 * Free chunk map 1 back to the free chunk map list.
 * Free backing IO unit 2 back to the free backing IO unit list.

-```
+```text
 +--------------------+
 Backing Device |01 34               |
 +--------------------+
|
@ -156,7 +156,7 @@ To verify that the drive is emulated correctly, one can check the output of the
|
||||
(assuming that `scripts/setup.sh` was called before and the driver has been changed for that
|
||||
device):
|
||||
|
||||
```
|
||||
```bash
|
||||
$ build/examples/identify
|
||||
=====================================================
|
||||
NVMe Controller at 0000:00:0a.0 [1d1d:1f1f]
|
||||
|
doc/iscsi.md: 54 lines changed
@@ -32,7 +32,7 @@ To ensure the SPDK iSCSI target has the best performance, place the NICs and the
 same NUMA node and configure the target to run on CPU cores associated with that node. The following
 command line option is used to configure the SPDK iSCSI target:

-~~~
+~~~bash
 -m 0xF000000
 ~~~

@@ -51,7 +51,7 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
 - iscsi_target_node_remove_pg_ig_maps -- Delete initiator group to portal group mappings from an existing iSCSI target node.
 - iscsi_get_portal_groups -- Show information about all available portal groups.

-~~~
+~~~bash
 /path/to/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
 ~~~

@@ -62,7 +62,7 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
 - iscsi_initiator_group_add_initiators -- Add initiators to an existing initiator group.
 - iscsi_get_initiator_groups -- Show information about all available initiator groups.

-~~~
+~~~bash
 /path/to/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
 ~~~

@@ -73,7 +73,7 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
 - iscsi_target_node_add_lun -- Add a LUN to an existing iSCSI target node.
 - iscsi_get_target_nodes -- Show information about all available iSCSI target nodes.

-~~~
+~~~bash
 /path/to/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -d
 ~~~

@@ -83,30 +83,30 @@ The Linux initiator is open-iscsi.

 Installing open-iscsi package
 Fedora:
-~~~
+~~~bash
 yum install -y iscsi-initiator-utils
 ~~~

 Ubuntu:
-~~~
+~~~bash
 apt-get install -y open-iscsi
 ~~~

 ### Setup

 Edit /etc/iscsi/iscsid.conf
-~~~
+~~~bash
 node.session.cmds_max = 4096
 node.session.queue_depth = 128
 ~~~

 iscsid must be restarted or receive SIGHUP for changes to take effect. To send SIGHUP, run:
-~~~
+~~~bash
 killall -HUP iscsid
 ~~~

 Recommended changes to /etc/sysctl.conf
-~~~
+~~~bash
 net.ipv4.tcp_timestamps = 1
 net.ipv4.tcp_sack = 0

@@ -124,13 +124,14 @@ net.core.netdev_max_backlog = 300000
 ### Discovery

 Assume target is at 10.0.0.1
-~~~
+
+~~~bash
 iscsiadm -m discovery -t sendtargets -p 10.0.0.1
 ~~~

 ### Connect to target

-~~~
+~~~bash
 iscsiadm -m node --login
 ~~~

@@ -139,13 +140,13 @@ they came up as.

 ### Disconnect from target

-~~~
+~~~bash
 iscsiadm -m node --logout
 ~~~

 ### Deleting target node cache

-~~~
+~~~bash
 iscsiadm -m node -o delete
 ~~~

@@ -153,7 +154,7 @@ This will cause the initiator to forget all previously discovered iSCSI target n

 ### Finding /dev/sdX nodes for iSCSI LUNs

-~~~
+~~~bash
 iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
 ~~~

@@ -165,19 +166,19 @@ After the targets are connected, they can be tuned. For example if /dev/sdc is
 an iSCSI disk then the following can be done:
 Set noop to scheduler

-~~~
+~~~bash
 echo noop > /sys/block/sdc/queue/scheduler
 ~~~

 Disable merging/coalescing (can be useful for precise workload measurements)

-~~~
+~~~bash
 echo "2" > /sys/block/sdc/queue/nomerges
 ~~~

 Increase requests for block queue

-~~~
+~~~bash
 echo "1024" > /sys/block/sdc/queue/nr_requests
 ~~~

@@ -191,33 +192,34 @@ Assuming we have one iSCSI Target server with portal at 10.0.0.1:3200, two LUNs
 #### Configure iSCSI Target

 Start iscsi_tgt application:
-```
+
+```bash
 ./build/bin/iscsi_tgt
 ```

 Construct two 64MB Malloc block devices with 512B sector size "Malloc0" and "Malloc1":

-```
+```bash
 ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
 ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
 ```

 Create new portal group with id 1, and address 10.0.0.1:3260:

-```
+```bash
 ./scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
 ```

 Create one initiator group with id 2 to accept any connection from 10.0.0.2/32:

-```
+```bash
 ./scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
 ```

 Finally construct one target using previously created bdevs as LUN0 (Malloc0) and LUN1 (Malloc1)
 with a name "disk1" and alias "Data Disk1" using portal group 1 and initiator group 2.

-```
+```bash
 ./scripts/rpc.py iscsi_create_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1" 1:2 64 -d
 ```

@@ -225,14 +227,14 @@ with a name "disk1" and alias "Data Disk1" using portal group 1 and initiator gr

 Discover target

-~~~
+~~~bash
 $ iscsiadm -m discovery -t sendtargets -p 10.0.0.1
 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1
 ~~~

 Connect to the target

-~~~
+~~~bash
 iscsiadm -m node --login
 ~~~

@@ -240,7 +242,7 @@ At this point the iSCSI target should show up as SCSI disks.

 Check dmesg to see what they came up as. In this example it can look like below:

-~~~
+~~~bash
 ...
 [630111.860078] scsi host68: iSCSI Initiator over TCP/IP
 [630112.124743] scsi 68:0:0:0: Direct-Access INTEL Malloc disk 0001 PQ: 0 ANSI: 5

@@ -263,7 +265,7 @@ Check dmesg to see what they came up as. In this example it can look like below:
 You may also use simple bash command to find /dev/sdX nodes for each iSCSI LUN
 in all logged iSCSI sessions:

-~~~
+~~~bash
 $ iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
 sdd
 sde
doc/jsonrpc.md: 786 lines changed (file diff suppressed because it is too large)
@@ -31,7 +31,7 @@ Status 200 with resultant JSON object included on success.
 Below is a sample python script acting as a client side. It sends `bdev_get_bdevs` method with optional `name`
 parameter and prints JSON object returned from remote_rpc script.

-~~~
+~~~python
 import json
 import requests

@@ -48,7 +48,7 @@ if __name__ == '__main__':

 Output:

-~~~
+~~~python
 python client.py
 [{u'num_blocks': 2621440, u'name': u'Malloc0', u'uuid': u'fb57e59c-599d-42f1-8b89-3e46dbe12641', u'claimed': True,
 u'driver_specific': {}, u'supported_io_types': {u'reset': True, u'nvme_admin': False, u'unmap': True, u'read': True,
@@ -97,7 +97,7 @@ logical volumes is kept on block devices.

 RPC regarding lvolstore:

-```
+```bash
 bdev_lvol_create_lvstore [-h] [-c CLUSTER_SZ] bdev_name lvs_name
     Constructs lvolstore on specified bdev with specified name. During
     construction bdev is unmapped at initialization and all data is

@@ -129,7 +129,7 @@ bdev_lvol_rename_lvstore [-h] old_name new_name

 RPC regarding lvol and spdk bdev:

-```
+```bash
 bdev_lvol_create [-h] [-u UUID] [-l LVS_NAME] [-t] [-c CLEAR_METHOD] lvol_name size
     Creates lvol with specified size and name on lvolstore specified by its uuid
     or name. Then constructs spdk bdev on top of that lvol and presents it as spdk bdev.
@@ -131,7 +131,7 @@ E.g. To send fused compare and write operation user must call spdk_nvme_ns_cmd_c
 followed with spdk_nvme_ns_cmd_write and make sure no other operations are submitted
 in between on the same queue, like in example below:

-~~~
+~~~c
 rc = spdk_nvme_ns_cmd_compare(ns, qpair, cmp_buf, 0, 1, nvme_fused_first_cpl_cb,
                               NULL, SPDK_NVME_CMD_FUSE_FIRST);
 if (rc != 0) {
@@ -17,14 +17,14 @@ Tracepoints are placed in groups. They are enabled and disabled as a group. To e
 the instrumentation of all the tracepoints group in an SPDK target application, start the
 target with -e parameter set to 0xFFFF:

-~~~
+~~~bash
 build/bin/nvmf_tgt -e 0xFFFF
 ~~~

 To enable the instrumentation of just the NVMe-oF RDMA tracepoints in an SPDK target
 application, start the target with the -e parameter set to 0x10:

-~~~
+~~~bash
 build/bin/nvmf_tgt -e 0x10
 ~~~

@@ -32,7 +32,7 @@ When the target starts, a message is logged with the information you need to vie
 the tracepoints in a human-readable format using the spdk_trace application. The target
 will also log information about the shared memory file.

-~~~{.sh}
+~~~bash
 app.c: 527:spdk_app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
 app.c: 531:spdk_app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -p 24147' to capture a snapshot of events at runtime.
 app.c: 533:spdk_app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.pid24147 for offline analysis/debug.

@@ -49,14 +49,14 @@ Send I/Os to the SPDK target application to generate events. The following is
 an example usage of perf to send I/Os to the NVMe-oF target over an RDMA network
 interface for 10 minutes.

-~~~
+~~~bash
 ./perf -q 128 -s 4096 -w randread -t 600 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.2 trsvcid:4420'
 ~~~

 The spdk_trace program can be found in the app/trace directory. To analyze the tracepoints on the same
 system running the NVMe-oF target, simply execute the command line shown in the log:

-~~~{.sh}
+~~~bash
 build/bin/spdk_trace -s nvmf -p 24147
 ~~~

@@ -64,13 +64,13 @@ To analyze the tracepoints on a different system, first prepare the tracepoint f
 tracepoint file can be large, but usually compresses very well. This step can also be used to prepare
 a tracepoint file to attach to a GitHub issue for debugging NVMe-oF application crashes.

-~~~{.sh}
+~~~bash
 bzip2 -c /dev/shm/nvmf_trace.pid24147 > /tmp/trace.bz2
 ~~~

 After transferring the /tmp/trace.bz2 tracepoint file to a different system:

-~~~{.sh}
+~~~bash
 bunzip2 /tmp/trace.bz2
 build/bin/spdk_trace -f /tmp/trace
 ~~~

@@ -79,7 +79,7 @@ The following is sample trace capture showing the cumulative time that each
 I/O spends at each RDMA state. All the trace captures with the same id are for
 the same I/O.

-~~~
+~~~bash
 28: 6026.658 ( 12656064) RDMA_REQ_NEED_BUFFER id: r3622 time: 0.019
 28: 6026.694 ( 12656140) RDMA_REQ_RDY_TO_EXECUTE id: r3622 time: 0.055
 28: 6026.820 ( 12656406) RDMA_REQ_EXECUTING id: r3622 time: 0.182

@@ -135,20 +135,20 @@ spdk_trace_record is used to poll the spdk tracepoint shared memory, record new
 and store all entries into specified output file at its shutdown on SIGINT or SIGTERM.
 After SPDK nvmf target is launched, simply execute the command line shown in the log:

-~~~{.sh}
+~~~bash
 build/bin/spdk_trace_record -q -s nvmf -p 24147 -f /tmp/spdk_nvmf_record.trace
 ~~~

 Also send I/Os to the SPDK target application to generate events by previous perf example for 10 minutes.

-~~~{.sh}
+~~~bash
 ./perf -q 128 -s 4096 -w randread -t 600 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.2 trsvcid:4420'
 ~~~

 After the completion of perf example, shut down spdk_trace_record by signal SIGINT (Ctrl + C).
 To analyze the tracepoints output file from spdk_trace_record, simply run spdk_trace program by:

-~~~{.sh}
+~~~bash
 build/bin/spdk_trace -f /tmp/spdk_nvmf_record.trace
 ~~~

@@ -159,7 +159,7 @@ tracepoints to the existing trace groups. For example, to add a new tracepoints
 to the SPDK RDMA library (lib/nvmf/rdma.c) trace group TRACE_GROUP_NVMF_RDMA,
 define the tracepoints and assigning them a unique ID using the SPDK_TPOINT_ID macro:

-~~~
+~~~c
 #define TRACE_GROUP_NVMF_RDMA 0x4
 #define TRACE_RDMA_REQUEST_STATE_NEW SPDK_TPOINT_ID(TRACE_GROUP_NVMF_RDMA, 0x0)
 ...

@@ -170,7 +170,7 @@ You also need to register the new trace points in the SPDK_TRACE_REGISTER_FN mac
 within the application/library using the spdk_trace_register_description function
 as shown below:

-~~~
+~~~c
 SPDK_TRACE_REGISTER_FN(nvmf_trace)
 {
     spdk_trace_register_object(OBJECT_NVMF_RDMA_IO, 'r');

@@ -191,7 +191,7 @@ application/library to record the current trace state for the new trace points.
 The following example shows the usage of the spdk_trace_record function to
 record the current trace state of several tracepoints.

-~~~
+~~~c
 case RDMA_REQUEST_STATE_NEW:
     spdk_trace_record(TRACE_RDMA_REQUEST_STATE_NEW, 0, 0, (uintptr_t)rdma_req, (uintptr_t)rqpair->cm_id);
 ...
@@ -8,21 +8,21 @@ when SPDK adds or modifies library dependencies.
 If your application is using the SPDK nvme library, you would use the following
 to get the list of required SPDK libraries:

-~~~
+~~~bash
 PKG_CONFIG_PATH=/path/to/spdk/build/lib/pkgconfig pkg-config --libs spdk_nvme
 ~~~

 To get the list of required SPDK and DPDK libraries to use the DPDK-based
 environment layer:

-~~~
+~~~bash
 PKG_CONFIG_PATH=/path/to/spdk/build/lib/pkgconfig pkg-config --libs spdk_env_dpdk
 ~~~

 When linking with static libraries, the dependent system libraries must also be
 specified. To get the list of required system libraries:

-~~~
+~~~bash
 PKG_CONFIG_PATH=/path/to/spdk/build/lib/pkgconfig pkg-config --libs spdk_syslibs
 ~~~

@@ -33,7 +33,7 @@ the `-Wl,--no-as-needed` parameters while with static libraries `-Wl,--whole-arc
 is used. Here is an example Makefile snippet that shows how to use pkg-config to link
 an application that uses the SPDK nvme shared library:

-~~~
+~~~bash
 PKG_CONFIG_PATH = $(SPDK_DIR)/build/lib/pkgconfig
 SPDK_LIB := $(shell PKG_CONFIG_PATH="$(PKG_CONFIG_PATH)" pkg-config --libs spdk_nvme)
 DPDK_LIB := $(shell PKG_CONFIG_PATH="$(PKG_CONFIG_PATH)" pkg-config --libs spdk_env_dpdk)

@@ -44,7 +44,7 @@ app:

 If using the SPDK nvme static library:

-~~~
+~~~bash
 PKG_CONFIG_PATH = $(SPDK_DIR)/build/lib/pkgconfig
 SPDK_LIB := $(shell PKG_CONFIG_PATH="$(PKG_CONFIG_PATH)" pkg-config --libs spdk_nvme)
 DPDK_LIB := $(shell PKG_CONFIG_PATH="$(PKG_CONFIG_PATH)" pkg-config --libs spdk_env_dpdk)
@@ -115,7 +115,7 @@ shared by its vhost clients as described in the

 Open the `/etc/security/limits.conf` file as root and append the following:

-```
+```bash
 spdk hard memlock unlimited
 spdk soft memlock unlimited
 ```

doc/usdt.md: 20 lines changed
@@ -28,7 +28,7 @@ flex
 We have found issues with the packaged bpftrace on both Ubuntu 20.04
 and Fedora 33. So bpftrace should be built and installed from source.

-```
+```bash
 git clone https://github.com/iovisor/bpftrace.git
 mkdir bpftrace/build
 cd bpftrace/build

@@ -42,7 +42,7 @@ sudo make install
 bpftrace.sh is a helper script that facilitates running bpftrace scripts
 against a running SPDK application. Here is a typical usage:

-```
+```bash
 scripts/bpftrace.sh `pidof spdk_tgt` scripts/bpf/nvmf.bt
 ```

@@ -58,7 +58,7 @@ that string with the PID provided to the script.

 ## Configuring SPDK Build

-```
+```bash
 ./configure --with-usdt
 ```

@@ -66,13 +66,13 @@ that string with the PID provided to the script.

 From first terminal:

-```
+```bash
 build/bin/spdk_tgt -m 0xC
 ```

 From second terminal:

-```
+```bash
 scripts/bpftrace.sh `pidof spdk_tgt` scripts/bpf/nvmf.bt
 ```

@@ -81,7 +81,7 @@ group info state transitions.

 From third terminal:

-```
+```bash
 scripts/rpc.py <<EOF
 nvmf_create_transport -t tcp
 nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

@@ -96,7 +96,7 @@ port, and a null bdev which is added as a namespace to the new nvmf subsystem.

 You will see output from the second terminal that looks like this:

-```
+```bash
 2110.935735: nvmf_tgt reached state NONE
 2110.954316: nvmf_tgt reached state CREATE_TARGET
 2110.967905: nvmf_tgt reached state CREATE_POLL_GROUPS

@@ -145,14 +145,14 @@ it again with the send_msg.bt script. This script keeps a count of
 functions executed as part of an spdk_for_each_channel or
 spdk_thread_send_msg function call.

-```
+```bash
 scripts/bpftrace.sh `pidof spdk_tgt` scripts/bpf/send_msg.bt
 ```

 From the third terminal, create another null bdev and add it as a
 namespace to the cnode1 subsystem.

-```
+```bash
 scripts/rpc.py <<EOF
 bdev_null_create null1 1000 512
 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null1

@@ -162,7 +162,7 @@ EOF
 Now Ctrl-C the bpftrace.sh in the second terminal, and it will
 print the final results of the maps.

-```
+```bash
 @for_each_channel[subsystem_state_change_on_pg]: 2

 @send_msg[_finish_unregister]: 1
@@ -18,12 +18,10 @@ reference.
 Reading from the
 [Virtio specification](http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html):

-```
-The purpose of virtio and [virtio] specification is that virtual environments
-and guests should have a straightforward, efficient, standard and extensible
-mechanism for virtual devices, rather than boutique per-environment or per-OS
-mechanisms.
-```
+> The purpose of virtio and [virtio] specification is that virtual environments
+> and guests should have a straightforward, efficient, standard and extensible
+> mechanism for virtual devices, rather than boutique per-environment or per-OS
+> mechanisms.

 Virtio devices use virtqueues to transport data efficiently. Virtqueue is a set
 of three different single-producer, single-consumer ring structures designed to

@@ -47,23 +45,21 @@ SPDK to expose a vhost device is Vhost-user protocol.
 The [Vhost-user specification](https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/interop/vhost-user.txt;hb=HEAD)
 describes the protocol as follows:

-```
-[Vhost-user protocol] is aiming to complement the ioctl interface used to
-control the vhost implementation in the Linux kernel. It implements the control
-plane needed to establish virtqueue sharing with a user space process on the
-same host. It uses communication over a Unix domain socket to share file
-descriptors in the ancillary data of the message.
-
-The protocol defines 2 sides of the communication, master and slave. Master is
-the application that shares its virtqueues, in our case QEMU. Slave is the
-consumer of the virtqueues.
-
-In the current implementation QEMU is the Master, and the Slave is intended to
-be a software Ethernet switch running in user space, such as Snabbswitch.
-
-Master and slave can be either a client (i.e. connecting) or server (listening)
-in the socket communication.
-```
+> [Vhost-user protocol] is aiming to complement the ioctl interface used to
+> control the vhost implementation in the Linux kernel. It implements the control
+> plane needed to establish virtqueue sharing with a user space process on the
+> same host. It uses communication over a Unix domain socket to share file
+> descriptors in the ancillary data of the message.
+>
+> The protocol defines 2 sides of the communication, master and slave. Master is
+> the application that shares its virtqueues, in our case QEMU. Slave is the
+> consumer of the virtqueues.
+>
+> In the current implementation QEMU is the Master, and the Slave is intended to
+> be a software Ethernet switch running in user space, such as Snabbswitch.
+>
+> Master and slave can be either a client (i.e. connecting) or server (listening)
+> in the socket communication.

 SPDK vhost is a Vhost-user slave server. It exposes Unix domain sockets and
 allows external applications to connect.

@@ -125,7 +121,7 @@ the request data, and putting guest addresses of those buffers into virtqueues.

 A Virtio-Block request looks as follows.

-```
+```c
 struct virtio_blk_req {
     uint32_t type; // READ, WRITE, FLUSH (read-only)
     uint64_t offset; // offset in the disk (read-only)

@@ -135,7 +131,7 @@ struct virtio_blk_req {
 ```
 And a Virtio-SCSI request as follows.

-```
+```c
 struct virtio_scsi_req_cmd {
     struct virtio_scsi_cmd_req *req; // request data (read-only)
     struct iovec read_only_buffers[]; // scatter-gatter list for WRITE I/Os

@@ -149,7 +145,7 @@ to be converted into a chain of such descriptors. A single descriptor can be
 either readable or writable, so each I/O request consists of at least two
 (request + response).

-```
+```c
 struct virtq_desc {
     /* Address (guest-physical). */
     le64 addr;
@@ -2,4 +2,6 @@

 To use perf test on FreeBSD over NVMe-oF, explicitly link userspace library of HBA. For example, on a setup with Mellanox HBA,

+```make
 LIBS += -lmlx5
+```
@@ -8,6 +8,5 @@ rule 'MD029', :style => "ordered"
 exclude_rule 'MD031'
 exclude_rule 'MD033'
 exclude_rule 'MD034'
-exclude_rule 'MD040'
 exclude_rule 'MD041'
 exclude_rule 'MD046'
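Removing `exclude_rule 'MD040'` from the style file above is what turns the rule on for SPDK's docs. As a rough, hypothetical sketch of the check MD040 performs (not the actual markdownlint implementation), one can scan a document for opening fences that lack an info string:

```python
import re

def find_unlabeled_fences(markdown: str):
    """Return 1-based line numbers of opening code fences with no language.

    Rough sketch of markdownlint rule MD040: an opening fence (backtick or
    tilde style) should carry an info string such as `bash` or `text`.
    """
    violations = []
    in_block = False
    fence_char = None
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        m = re.match(r'^(`{3,}|~{3,})(.*)$', line.strip())
        if not m:
            continue
        if in_block:
            # A bare fence of the same kind closes the open block.
            if m.group(1)[0] == fence_char and not m.group(2).strip():
                in_block = False
        else:
            in_block = True
            fence_char = m.group(1)[0]
            if not m.group(2).strip():
                violations.append(lineno)
    return violations

doc = """\
~~~
./configure --target-arch=broadwell
~~~

~~~bash
./configure --target-arch=broadwell
~~~
"""
print(find_unlabeled_fences(doc))  # prints [1]: only the first fence lacks a language
```

Run against the sample document above, it flags only the bare `~~~` opener, which mirrors the fix applied throughout this commit.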
@@ -22,7 +22,7 @@ Quick start instructions for OSX:
 * Note: The extension pack has different licensing than main VirtualBox, please
   review them carefully as the evaluation license is for personal use only.

-```
+```bash
 /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
 brew doctor
 brew update

@@ -69,7 +69,7 @@ If you are behind a corporate firewall, configure the following proxy settings.
 1. Set the http_proxy and https_proxy
 2. Install the proxyconf plugin

-```
+```bash
 $ export http_proxy=....
 $ export https_proxy=....
 $ vagrant plugin install vagrant-proxyconf

@@ -93,7 +93,7 @@ Use the `spdk/scripts/vagrant/create_vbox.sh` script to create a VM of your choi
 - fedora28
 - freebsd11

-```
+```bash
 $ spdk/scripts/vagrant/create_vbox.sh -h
 Usage: create_vbox.sh [-n <num-cpus>] [-s <ram-size>] [-x <http-proxy>] [-hvrld] <distro>

@@ -124,7 +124,7 @@ It is recommended that you call the `create_vbox.sh` script from outside of the
 Call this script from a parent directory. This will allow the creation of multiple VMs in separate
 <distro> directories, all using the same spdk repository. For example:

-```
+```bash
 $ spdk/scripts/vagrant/create_vbox.sh -s 2048 -n 2 fedora26
 ```

@@ -141,7 +141,7 @@ This script will:
 This arrangement allows the provisioning of multiple, different VMs within that same directory hierarchy using thesame
 spdk repository. Following the creation of the vm you'll need to ssh into your virtual box and finish the VM initialization.

-```
+```bash
 $ cd <distro>
 $ vagrant ssh
 ```

@@ -152,7 +152,7 @@ A copy of the `spdk` repository you cloned will exist in the `spdk_repo` directo
 account. After using `vagrant ssh` to enter your VM you must complete the initialization of your VM by running
 the `scripts/vagrant/update.sh` script. For example:

-```
+```bash
 $ script -c 'sudo spdk_repo/spdk/scripts/vagrant/update.sh' update.log
 ```

@@ -175,14 +175,14 @@ Following VM initialization you must:

 ### Verify you have an emulated NVMe device

-```
+```bash
 $ lspci | grep "Non-Volatile"
 00:0e.0 Non-Volatile memory controller: InnoTek Systemberatung GmbH Device 4e56
 ```

 ### Compile SPDK

-```
+```bash
 $ cd spdk_repo/spdk
 $ git submodule update --init
 $ ./configure --enable-debug

@@ -191,7 +191,7 @@ Following VM initialization you must:

 ### Run the hello_world example script

-```
+```bash
 $ sudo scripts/setup.sh
 $ sudo scripts/gen_nvme.sh --json-with-subsystems > ./build/examples/hello_bdev.json
 $ sudo ./build/examples/hello_bdev --json ./build/examples/hello_bdev.json -b Nvme0n1

@@ -202,7 +202,7 @@ Following VM initialization you must:
 After running vm_setup.sh the `run-autorun.sh` can be used to run `spdk/autorun.sh` on a Fedora vagrant machine.
 Note that the `spdk/scripts/vagrant/autorun-spdk.conf` should be copied to `~/autorun-spdk.conf` before starting your tests.

-```
+```bash
 $ cp spdk/scripts/vagrant/autorun-spdk.conf ~/
 $ spdk/scripts/vagrant/run-autorun.sh -h
 Usage: scripts/vagrant/run-autorun.sh -d <path_to_spdk_tree> [-h] | [-q] | [-n]

@@ -224,7 +224,7 @@ Note that the `spdk/scripts/vagrant/autorun-spdk.conf` should be copied to `~/au

 The following steps are done by the `update.sh` script. It is recommended that you capture the output of `update.sh` with a typescript. E.g.:

-```
+```bash
 $ script update.log sudo spdk_repo/spdk/scripts/vagrant/update.sh
 ```

@@ -232,7 +232,7 @@ The following steps are done by the `update.sh` script. It is recommended that y
 1. Installs the needed FreeBSD packages on the system by calling pkgdep.sh
 2. Installs the FreeBSD source in /usr/src

-```
+```bash
 $ sudo pkg upgrade -f
 $ sudo spdk_repo/spdk/scripts/pkgdep.sh --all
 $ sudo git clone --depth 10 -b releases/11.1.0 https://github.com/freebsd/freebsd.git /usr/src

@@ -240,7 +240,7 @@ The following steps are done by the `update.sh` script. It is recommended that y

 To build spdk on FreeBSD use `gmake MAKE=gmake`. E.g.:

-```
+```bash
 $ cd spdk_repo/spdk
 $ git submodule update --init
 $ ./configure --enable-debug
@@ -25,6 +25,6 @@ script for targeted debugging on a subsequent run.

 At the end of each test run, a summary is printed in the following format:

-~~~
+~~~bash
 device 0x11c3b90 stats: Sent 1543 valid opcode PDUs, 16215 invalid opcode PDUs.
 ~~~

@@ -26,7 +26,7 @@ This can be overridden with the -V flag. if -V is specified, each command will b
 it is completed in the JSON format specified above.
 At the end of each test run, a summary is printed for each namespace in the following format:

-~~~
+~~~bash
 NS: 0x200079262300 admin qp, Total commands completed: 462459, total successful commands: 1960, random_seed: 4276918833
 ~~~

@@ -38,7 +38,7 @@ submitted to the proper block devices.
 The vhost fuzzer differs from the NVMe fuzzer in that it expects devices to be configured via rpc. The fuzzer should
 always be started with the --wait-for-rpc argument. Please see below for an example of starting the fuzzer.

-~~~
+~~~bash
 ./test/app/fuzz/vhost_fuzz/vhost_fuzz -t 30 --wait-for-rpc &
 ./scripts/rpc.py fuzz_vhost_create_dev -s ./Vhost.1 -b -V
 ./scripts/rpc.py fuzz_vhost_create_dev -s ./naa.VhostScsi0.1 -l -V
@@ -8,7 +8,7 @@ This directory also contains a convenient test script, test_make.sh, which autom
 and testing all six of these linker options. It takes a single argument, the path to an SPDK
 repository and should be run as follows:

-~~~
+~~~bash
 sudo ./test_make.sh /path/to/spdk
 ~~~
