markdownlint: enable rule MD025

MD025 - multiple top-level headers in the same document.
Fix all resulting errors and update check_format.sh to match the new header
style in jsonrpc.md.

Signed-off-by: Maciej Wawryk <maciejx.wawryk@intel.com>
Change-Id: Ib5f832c549880771c99c15b89affe1e82acd3fa4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9045
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Author: wawryk, 2021-08-02 14:27:54 +02:00 (committed by Jim Harris)
parent a9704f6c21
commit 1e1fd9ac21
45 changed files with 939 additions and 938 deletions

@ -18,7 +18,7 @@ The development kit currently includes:
* [vhost target](http://www.spdk.io/doc/vhost.html)
* [Virtio-SCSI driver](http://www.spdk.io/doc/virtio.html)
# In this readme
## In this readme
* [Documentation](#documentation)
* [Prerequisites](#prerequisites)

@ -1,4 +1,6 @@
# ABI and API Deprecation {#deprecation}
# Deprecation
## ABI and API Deprecation {#deprecation}
This document details the policy for maintaining stability of SPDK ABI and API.
@ -10,9 +12,9 @@ Each entry must describe what will be removed and can suggest the future use or
Specific future SPDK release for the removal must be provided.
ABI cannot be removed without providing deprecation notice for at least single SPDK release.
# Deprecation Notices {#deprecation-notices}
## Deprecation Notices {#deprecation-notices}
## bdev
### bdev
Deprecated `spdk_bdev_module_finish_done()` API, which will be removed in SPDK 22.01.
Bdev modules should use `spdk_bdev_module_fini_done()` instead.

@ -1,11 +1,9 @@
SPDK Documentation
==================
# SPDK Documentation
The current version of the SPDK documentation can be found online at
http://www.spdk.io/doc/
Building the Documentation
==========================
## Building the Documentation
To convert the documentation into HTML run `make` in the `doc`
directory. The output will be located in `doc/output/html`. Before

@ -15,12 +15,12 @@ For the software module, all capabilities will be reported as supported. For the
accelerated by hardware will be reported however any function can still be called, it will just be backed by
software if it is not reported as a supported capability.
# Acceleration Framework Functions {#accel_functions}
## Acceleration Framework Functions {#accel_functions}
Functions implemented via the framework can be found in the DoxyGen documentation of the
framework public header file here [accel_engine.h](https://spdk.io/doc/accel__engine_8h.html)
# Acceleration Framework Design Considerations {#accel_dc}
## Acceleration Framework Design Considerations {#accel_dc}
The general interface is defined by `/include/accel_engine.h` and implemented
in `/lib/accel`. These functions may be called by an SPDK application and in
@ -35,7 +35,7 @@ optimized implementation. For example, IOAT does not support the dualcast funct
in hardware but if the IOAT module has been initialized and the public dualcast API
is called, it will actually be done via software behind the scenes.
# Acceleration Low Level Libraries {#accel_libs}
## Acceleration Low Level Libraries {#accel_libs}
Low level libraries provide only the most basic functions that are specific to
the hardware. Low level libraries are located in the '/lib' directory with the
@ -51,7 +51,7 @@ way needs to be certain that the underlying hardware exists everywhere that it r
The low level library for IOAT is located in `/lib/ioat`. The low level library
for DSA is in `/lib/idxd` (IDXD stands for Intel(R) Data Acceleration Driver).
# Acceleration Plug-In Modules {#accel_modules}
## Acceleration Plug-In Modules {#accel_modules}
Plug-in modules depend on low level libraries to interact with the hardware and
add additional functionality such as queueing during busy conditions or flow
@ -60,11 +60,11 @@ the complete implementation of the acceleration component. A module must be
selected via startup RPC when the application is started. Otherwise, if no startup
RPC is provided, the framework is available and will use the software plug-in module.
## IOAT Module {#accel_ioat}
### IOAT Module {#accel_ioat}
To use the IOAT engine, use the RPC [`ioat_scan_accel_engine`](https://spdk.io/doc/jsonrpc.html) before starting the application.
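For illustration, a minimal startup-RPC sequence might look as follows. This sketch assumes the application was started with the `--wait-for-rpc` option and that initialization is then resumed with the `framework_start_init` RPC; both are assumptions not shown in this section.

~~~
./scripts/rpc.py ioat_scan_accel_engine
./scripts/rpc.py framework_start_init
~~~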
## IDXD Module {#accel_idxd}
### IDXD Module {#accel_idxd}
To use the DSA engine, use the RPC [`idxd_scan_accel_engine`](https://spdk.io/doc/jsonrpc.html) with an optional parameter
of `-c` and provide a configuration number of either 0 or 1. These pre-defined configurations determine how the DSA engine
@ -86,14 +86,14 @@ of service parameters on the work queues that are not currently utilized by
the module. Specialized use of DSA may require different configurations that
can be added to the module as needed.
## Software Module {#accel_sw}
### Software Module {#accel_sw}
The software module is enabled by default. If no hardware engine is explicitly
enabled via startup RPC as discussed earlier, the software module will use ISA-L
if available for functions such as CRC32C. Otherwise, standard glibc calls are
used to back the framework API.
## Batching {#batching}
### Batching {#batching}
Batching is exposed by the acceleration framework and provides an interface to
batch sets of commands up and then submit them with a single command. The public

@ -1,11 +1,11 @@
# Block Device User Guide {#bdev}
# Target Audience {#bdev_ug_targetaudience}
## Target Audience {#bdev_ug_targetaudience}
This user guide is intended for software developers who have knowledge of block storage, storage drivers, issuing JSON-RPC
commands and storage services such as RAID, compression, crypto, and others.
# Introduction {#bdev_ug_introduction}
## Introduction {#bdev_ug_introduction}
The SPDK block device layer, often simply called *bdev*, is a C library
intended to be equivalent to the operating system block storage layer that
@ -27,7 +27,7 @@ device underneath (please refer to @ref bdev_module for details). SPDK
also provides vbdev modules, which create block devices on top of an existing bdev. For
example @ref bdev_ug_logical_volumes or @ref bdev_ug_gpt
# Prerequisites {#bdev_ug_prerequisites}
## Prerequisites {#bdev_ug_prerequisites}
This guide assumes that you can already build the standard SPDK distribution
on your platform. The block device layer is a C library with a single public
@ -40,14 +40,14 @@ directly from SPDK application by running `scripts/rpc.py rpc_get_methods`.
Detailed help for each command can be displayed by adding `-h` flag as a
command parameter.
# Configuring Block Device Modules {#bdev_ug_general_rpcs}
## Configuring Block Device Modules {#bdev_ug_general_rpcs}
Block devices can be configured using JSON RPCs. A complete list of available RPC commands
with detailed information can be found on the @ref jsonrpc_components_bdev page.
# Common Block Device Configuration Examples
## Common Block Device Configuration Examples
# Ceph RBD {#bdev_config_rbd}
## Ceph RBD {#bdev_config_rbd}
The SPDK RBD bdev driver provides SPDK block layer access to Ceph RADOS block
devices (RBD). Ceph RBD devices are accessed via librbd and librados libraries
@ -70,7 +70,7 @@ To resize a bdev use the bdev_rbd_resize command.
This command will resize the Rbd0 bdev to 4096 MiB.
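For example, a hedged invocation (assuming the command takes the bdev name followed by the new size in MiB):

`rpc.py bdev_rbd_resize Rbd0 4096`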
# Compression Virtual Bdev Module {#bdev_config_compress}
## Compression Virtual Bdev Module {#bdev_config_compress}
The compression bdev module can be configured to provide compression/decompression
services for an underlying thinly provisioned logical volume. Although the underlying
@ -134,7 +134,7 @@ all volumes, if used it will return the name or an error that the device does no
`rpc.py bdev_compress_get_orphans --name COMP_Nvme0n1`
# Crypto Virtual Bdev Module {#bdev_config_crypto}
## Crypto Virtual Bdev Module {#bdev_config_crypto}
The crypto virtual bdev module can be configured to provide at rest data encryption
for any underlying bdev. The module relies on the DPDK CryptoDev Framework to provide
@ -171,7 +171,7 @@ To remove the vbdev use the bdev_crypto_delete command.
`rpc.py bdev_crypto_delete CryNvmeA`
# Delay Bdev Module {#bdev_config_delay}
## Delay Bdev Module {#bdev_config_delay}
The delay vbdev module is intended to apply a predetermined additional latency on top of a lower
level bdev. This enables the simulation of the latency characteristics of a device during the functional
@ -202,13 +202,13 @@ Example command:
`rpc.py bdev_delay_delete delay0`
# GPT (GUID Partition Table) {#bdev_config_gpt}
## GPT (GUID Partition Table) {#bdev_config_gpt}
The GPT virtual bdev driver is enabled by default and does not require any configuration.
It will automatically detect @ref bdev_ug_gpt on any attached bdev and will create
possibly multiple virtual bdevs.
## SPDK GPT partition table {#bdev_ug_gpt}
### SPDK GPT partition table {#bdev_ug_gpt}
The SPDK partition type GUID is `7c5222bd-8f5d-4087-9c00-bf9843c7b58c`. Existing SPDK bdevs
can be exposed as Linux block devices via NBD and then can be partitioned with
@ -234,7 +234,7 @@ Example command
`rpc.py nbd_stop_disk -n /dev/nbd0`
## Creating a GPT partition table using NBD {#bdev_ug_gpt_create_part}
### Creating a GPT partition table using NBD {#bdev_ug_gpt_create_part}
~~~
# Expose bdev Nvme0n1 as kernel block device /dev/nbd0 by JSON-RPC
@ -258,7 +258,7 @@ rpc.py nbd_stop_disk /dev/nbd0
# Nvme0n1p1 in SPDK applications.
~~~
# iSCSI bdev {#bdev_config_iscsi}
## iSCSI bdev {#bdev_config_iscsi}
The SPDK iSCSI bdev driver depends on libiscsi and hence is not enabled by default.
In order to use it, build SPDK with an extra `--with-iscsi-initiator` configure option.
@ -271,7 +271,7 @@ with `iqn.2016-06.io.spdk:init` as the reported initiator IQN.
The URL is in the following format:
`iscsi://[<username>[%<password>]@]<host>[:<port>]/<target-iqn>/<lun>`
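As an illustrative sketch, an iSCSI bdev could be created from such a URL with the `bdev_iscsi_create` RPC (the `-b`/`-i`/`--url` options and the host and target IQN below are assumptions for illustration):

`rpc.py bdev_iscsi_create -b iSCSI0 -i iqn.2016-06.io.spdk:init --url iscsi://127.0.0.1/iqn.2016-06.io.spdk:disk1/0`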
# Linux AIO bdev {#bdev_config_aio}
## Linux AIO bdev {#bdev_config_aio}
The SPDK AIO bdev driver provides SPDK block layer access to Linux kernel block
devices or a file on a Linux filesystem via Linux AIO. Note that O_DIRECT is
@ -294,7 +294,7 @@ To delete an aio bdev use the bdev_aio_delete command.
`rpc.py bdev_aio_delete aio0`
# OCF Virtual bdev {#bdev_config_cas}
## OCF Virtual bdev {#bdev_config_cas}
OCF virtual bdev module is based on [Open CAS Framework](https://github.com/Open-CAS/ocf) - a
high performance block storage caching meta-library.
@ -321,7 +321,7 @@ During removal OCF-cache will be stopped and all cached data will be written to
Note that OCF has a per-device RAM requirement. More details can be found in the
[OCF documentation](https://open-cas.github.io/guide_system_requirements.html).
# Malloc bdev {#bdev_config_malloc}
## Malloc bdev {#bdev_config_malloc}
Malloc bdevs are ramdisks. By their nature they are volatile. They are created from hugepage memory given to the SPDK
application.
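A hedged creation example (assuming the `bdev_malloc_create` RPC takes the size in MiB and the block size in bytes; the 64 MiB size, 512-byte block size and name are illustrative):

`rpc.py bdev_malloc_create 64 512 -b Malloc0`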
@ -334,7 +334,7 @@ Example command for removing malloc bdev:
`rpc.py bdev_malloc_delete Malloc0`
# Null {#bdev_config_null}
## Null {#bdev_config_null}
The SPDK null bdev driver is a dummy block I/O target that discards all writes and returns undefined
data for reads. It is useful for benchmarking the rest of the bdev I/O stack with minimal block
@ -351,7 +351,7 @@ To delete a null bdev use the bdev_null_delete command.
`rpc.py bdev_null_delete Null0`
# NVMe bdev {#bdev_config_nvme}
## NVMe bdev {#bdev_config_nvme}
There are two ways to create a block device based on an NVMe device in SPDK. The first
way is to connect a local PCIe drive and the second one is to connect to an NVMe-oF device.
@ -373,7 +373,7 @@ To remove an NVMe controller use the bdev_nvme_detach_controller command.
This command will remove NVMe bdev named Nvme0.
## NVMe bdev character device {#bdev_config_nvme_cuse}
### NVMe bdev character device {#bdev_config_nvme_cuse}
This feature is considered experimental. You must configure SPDK with the --with-nvme-cuse
option to enable this RPC.
@ -397,14 +397,14 @@ with command:
`rpc.py bdev_nvme_cuse_unregister -n Nvme0`
# Logical volumes {#bdev_ug_logical_volumes}
## Logical volumes {#bdev_ug_logical_volumes}
The Logical Volumes library is a flexible storage space management system. It allows
creating and managing virtual block devices with variable size on top of other bdevs.
The SPDK Logical Volume library is built on top of @ref blob. For detailed description
please refer to @ref lvol.
## Logical volume store {#bdev_ug_lvol_store}
### Logical volume store {#bdev_ug_lvol_store}
Before creating any logical volumes (lvols), an lvol store has to be created first on
selected block device. Lvol store is lvols vessel responsible for managing underlying
@ -443,7 +443,7 @@ Example commands
`rpc.py bdev_lvol_delete_lvstore -l lvs`
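For context, a hedged example of creating an lvol store on top of an existing bdev first (assuming the `bdev_lvol_create_lvstore` RPC; the base bdev name is illustrative):

`rpc.py bdev_lvol_create_lvstore Malloc0 lvs`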
## Lvols {#bdev_ug_lvols}
### Lvols {#bdev_ug_lvols}
To create lvols on an existing lvol store, the user should use the `bdev_lvol_create` RPC command.
Each created lvol will be represented by new bdev.
@ -454,7 +454,7 @@ Example commands
`rpc.py bdev_lvol_create lvol2 25 -u 330a6ab2-f468-11e7-983e-001e67edf35d`
# Passthru {#bdev_config_passthru}
## Passthru {#bdev_config_passthru}
The SPDK Passthru virtual block device module serves as an example of how to write a
virtual block device module. It implements the required functionality of a vbdev module
@ -466,7 +466,7 @@ Example commands
`rpc.py bdev_passthru_delete pt`
# Pmem {#bdev_config_pmem}
## Pmem {#bdev_config_pmem}
The SPDK pmem bdev driver uses pmemblk pool as the target for block I/O operations. For
details on Pmem memory please refer to PMDK documentation on http://pmem.io website.
@ -503,7 +503,7 @@ To remove a block device representation use the bdev_pmem_delete command.
`rpc.py bdev_pmem_delete pmem`
# RAID {#bdev_ug_raid}
## RAID {#bdev_ug_raid}
RAID virtual bdev module provides functionality to combine any SPDK bdevs into
one RAID bdev. Currently SPDK supports only RAID 0. RAID functionality does not
@ -523,7 +523,7 @@ Example commands
`rpc.py bdev_raid_delete Raid0`
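A hedged creation example (assuming the `bdev_raid_create` RPC with `-n` name, `-z` strip size in KiB, `-r` RAID level and `-b` base bdevs; the base bdev names are illustrative):

`rpc.py bdev_raid_create -n Raid0 -z 64 -r 0 -b "Nvme0n1 Nvme1n1"`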
# Split {#bdev_ug_split}
## Split {#bdev_ug_split}
The split block device module takes an underlying block device and splits it into
several smaller equal-sized virtual block devices. This serves as an example to create
@ -545,7 +545,7 @@ To remove the split bdevs, use the `bdev_split_delete` command with th
`rpc.py bdev_split_delete bdev_b0`
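For completeness, a hedged example of creating the split bdevs in the first place (assuming the `bdev_split_create` RPC takes the base bdev name and the number of splits):

`rpc.py bdev_split_create bdev_b0 4`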
# Uring {#bdev_ug_uring}
## Uring {#bdev_ug_uring}
The uring bdev module issues I/O to kernel block devices using the io_uring Linux kernel API. This module requires liburing.
For more information on io_uring refer to the kernel [io_uring](https://kernel.dk/io_uring.pdf) documentation.
@ -562,7 +562,7 @@ To remove a uring bdev use the `bdev_uring_delete` RPC.
`rpc.py bdev_uring_delete bdev_u0`
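A hedged creation example (assuming the `bdev_uring_create` RPC takes a device or file path followed by the bdev name; the path is illustrative):

`rpc.py bdev_uring_create /dev/nvme0n1 bdev_u0`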
# Virtio Block {#bdev_config_virtio_blk}
## Virtio Block {#bdev_config_virtio_blk}
The Virtio-Block driver allows creating SPDK bdevs from Virtio-Block devices.
@ -583,7 +583,7 @@ Virtio-Block devices can be removed with the following command
`rpc.py bdev_virtio_detach_controller VirtioBlk0`
# Virtio SCSI {#bdev_config_virtio_scsi}
## Virtio SCSI {#bdev_config_virtio_scsi}
The Virtio-SCSI driver allows creating SPDK block devices from Virtio-SCSI LUNs.

@ -1,6 +1,6 @@
# Blobstore Programmer's Guide {#blob}
# In this document {#blob_pg_toc}
## In this document {#blob_pg_toc}
* @ref blob_pg_audience
* @ref blob_pg_intro

@ -1,8 +1,8 @@
# BlobFS (Blobstore Filesystem) {#blobfs}
# BlobFS Getting Started Guide {#blobfs_getting_started}
## BlobFS Getting Started Guide {#blobfs_getting_started}
# RocksDB Integration {#blobfs_rocksdb}
## RocksDB Integration {#blobfs_rocksdb}
Clone and build the SPDK repository as per https://github.com/spdk/spdk
@ -68,7 +68,7 @@ At this point, RocksDB is ready for testing with SPDK. Three `db_bench` paramet
SPDK has a set of scripts which will run `db_bench` against a variety of workloads and capture performance and profiling
data. The primary script is `test/blobfs/rocksdb/rocksdb.sh`.
# FUSE
## FUSE
BlobFS provides a FUSE plug-in to mount an SPDK BlobFS as a kernel filesystem for inspection or debug purposes.
The FUSE plug-in requires fuse3 and will be built automatically when fuse3 is detected on the system.
@ -79,7 +79,7 @@ test/blobfs/fuse/fuse /usr/local/etc/spdk/rocksdb.json Nvme0n1 /mnt/fuse
Note that the FUSE plug-in has some limitations - see the list below.
# Limitations
## Limitations
* BlobFS has primarily been tested with RocksDB so far, so any use cases different from how RocksDB uses a filesystem
may run into issues. BlobFS will be tested in a broader range of use cases after this initial release.

@ -1,6 +1,6 @@
# Message Passing and Concurrency {#concurrency}
# Theory
## Theory
One of the primary aims of SPDK is to scale linearly with the addition of
hardware. This can mean many things in practice. For instance, moving from one
@ -56,7 +56,7 @@ data isn't mutated very often, but is read very frequently, and is often
employed in the I/O path. This of course trades memory size for computational
efficiency, so it is used in only the most critical code paths.
# Message Passing Infrastructure
## Message Passing Infrastructure
SPDK provides several layers of message passing infrastructure. The most
fundamental libraries in SPDK, for instance, don't do any message passing on
@ -110,7 +110,7 @@ repeatedly call `spdk_thread_poll()` on each `spdk_thread()` that exists. This
makes SPDK very portable to a wide variety of asynchronous, event-based
frameworks such as [Seastar](https://www.seastar.io) or [libuv](https://libuv.org/).
# The event Framework
## The event Framework
The SPDK project didn't want to officially pick an asynchronous, event-based
framework for all of the example applications it shipped with, in the interest
@ -122,7 +122,7 @@ signal handlers to cleanly shutdown, and basic command line option parsing.
Only established applications should consider directly integrating the lower
level libraries.
# Limitations of the C Language
## Limitations of the C Language
Message passing is efficient, but it results in asynchronous code.
Unfortunately, asynchronous code is a challenge in C. It's often implemented by

@ -4,12 +4,12 @@ This is a living document as there are many ways to use containers with
SPDK. As new usages are identified and tested, they will be documented
here.
# In this document {#containers_toc}
## In this document {#containers_toc}
* @ref kata_containers_with_spdk_vhost
* @ref spdk_in_docker
# Using SPDK vhost target to provide volume service to Kata Containers and Docker {#kata_containers_with_spdk_vhost}
## Using SPDK vhost target to provide volume service to Kata Containers and Docker {#kata_containers_with_spdk_vhost}
[Kata Containers](https://katacontainers.io) can build a secure container
runtime with lightweight virtual machines that feel and perform like
@ -23,7 +23,7 @@ In addition, a container manager like Docker, can be configured easily to launch
a Kata container with an SPDK vhost-user block device. For operating details, visit
Kata containers use-case [Setup to run SPDK vhost-user devices with Kata Containers and Docker](https://github.com/kata-containers/documentation/blob/master/use-cases/using-SPDK-vhostuser-and-kata.md#host-setup-for-vhost-user-devices)
# Containerizing an SPDK Application for Docker {#spdk_in_docker}
## Containerizing an SPDK Application for Docker {#spdk_in_docker}
There are no SPDK specific changes needed to run an SPDK based application in
a docker container, however this quick start guide should help you as you

@ -14,7 +14,7 @@ concurrency.
The event framework public interface is defined in event.h.
# Event Framework Design Considerations {#event_design}
## Event Framework Design Considerations {#event_design}
Simple server applications can be written in a single-threaded fashion. This
allows for straightforward code that can maintain state without any locking or
@ -27,9 +27,9 @@ synchronization. Unfortunately, in many real-world cases, the connections are
not entirely independent and cross-thread shared state is necessary. SPDK
provides an event framework to help solve this problem.
# SPDK Event Framework Components {#event_components}
## SPDK Event Framework Components {#event_components}
## Events {#event_component_events}
### Events {#event_component_events}
To accomplish cross-thread communication while minimizing synchronization
overhead, the framework provides message passing in the form of events. The
@ -45,7 +45,7 @@ asynchronous operations to achieve concurrency. Asynchronous I/O may be issued
with a non-blocking function call, and completion is typically signaled using
a callback function.
## Reactors {#event_component_reactors}
### Reactors {#event_component_reactors}
Each reactor has a lock-free queue for incoming events to that core, and
threads from any core may insert events into the queue of any other core. The
@ -54,7 +54,7 @@ in first-in, first-out order as they are received. Event functions should
never block and should preferably execute very quickly, since they are called
directly from the event loop on the destination core.
## Pollers {#event_component_pollers}
### Pollers {#event_component_pollers}
The framework also defines another type of function called a poller. Pollers
may be registered with the spdk_poller_register() function. Pollers, like
@ -66,7 +66,7 @@ intended to poll hardware as a replacement for interrupts. Normally, pollers
are executed on every iteration of the main event loop. Pollers may also be
scheduled to execute periodically on a timer if low latency is not required.
## Application Framework {#event_component_app}
### Application Framework {#event_component_app}
The framework itself is bundled into a higher level abstraction called an "app". Once
spdk_app_start() is called, it will block the current thread until the application
@ -74,7 +74,7 @@ terminates by calling spdk_app_stop() or an error condition occurs during the
initialization code within spdk_app_start(), itself, before invoking the caller's
supplied function.
## Custom shutdown callback {#event_component_shutdown}
### Custom shutdown callback {#event_component_shutdown}
When creating an SPDK based application, the user may add a custom shutdown callback which
will be called before the application framework starts the shutdown process.

@ -5,9 +5,9 @@ implementing bdev_zone interface.
It handles the logical to physical address mapping, responds to the asynchronous
media management events, and manages the defragmentation process.
# Terminology {#ftl_terminology}
## Terminology {#ftl_terminology}
## Logical to physical address map
### Logical to physical address map
- Shorthand: L2P
@ -17,7 +17,7 @@ are calculated during device formation and are subtracted from the available add
spare blocks account for zones going offline throughout the lifespan of the device as well as
provide necessary buffer for data [defragmentation](#ftl_reloc).
## Band {#ftl_band}
### Band {#ftl_band}
A band describes a collection of zones, each belonging to a different parallel unit. All writes to
a band follow the same pattern - a batch of logical blocks is written to one zone, another batch
@ -64,7 +64,7 @@ is being written. Then the band moves to the `OPEN` state and actual user data c
band. Once the whole available space is filled, tail metadata is written and the band transitions to
`CLOSING` state. When that finishes the band becomes `CLOSED`.
## Ring write buffer {#ftl_rwb}
### Ring write buffer {#ftl_rwb}
- Shorthand: RWB
@ -97,7 +97,7 @@ After that operation is completed the whole batch can be freed. For the whole ti
the `rwb`, the L2P points at the buffer entry instead of a location on the SSD. This allows for
servicing read requests from the buffer.
## Defragmentation and relocation {#ftl_reloc}
### Defragmentation and relocation {#ftl_reloc}
- Shorthand: defrag, reloc
@ -133,14 +133,14 @@ index of its zones (3) (how many times the band was written to). The lower the r
higher its age (2) and the lower its write count (3), the higher the chance the band will be chosen
for defrag.
# Usage {#ftl_usage}
## Usage {#ftl_usage}
## Prerequisites {#ftl_prereq}
### Prerequisites {#ftl_prereq}
In order to use the FTL module, a device exposing a zoned interface is required, e.g. a `zone_block`
bdev or OCSSD `nvme` bdev.
## FTL bdev creation {#ftl_create}
### FTL bdev creation {#ftl_create}
Similar to other bdevs, the FTL bdevs can be created either based on JSON config files or via RPC.
Both interfaces require the same arguments which are described by the `--help` option of the
@ -150,7 +150,7 @@ Both interfaces require the same arguments which are described by the `--help` o
- base bdev's name (base bdev must implement bdev_zone API)
- UUID of the FTL device (if the FTL is to be restored from the SSD)
## FTL usage with OCSSD nvme bdev {#ftl_ocssd}
### FTL usage with OCSSD nvme bdev {#ftl_ocssd}
This option requires an Open Channel SSD, which can be emulated using QEMU.

@ -1,6 +1,6 @@
# GDB Macros User Guide {#gdb_macros}
# Introduction
## Introduction
When debugging an SPDK application using gdb we may need to view data structures
in lists, e.g. information about bdevs or threads.
@ -125,7 +125,7 @@ nqn "nqn.2016-06.io.spdk.umgmt:cnode1", '\000' <repeats 191 times>
ID 1
~~~
# Loading The gdb Macros
## Loading The gdb Macros
Copy the gdb macros to the host where you are about to debug.
It is best to copy the file either to somewhere within the PYTHONPATH, or to add
@ -146,7 +146,7 @@ the PYTHONPATH, so I had to manually add the directory to the path.
(gdb) spdk_load_macros
~~~
# Using the gdb Data Directory
## Using the gdb Data Directory
On most systems, the data directory is /usr/share/gdb. The python script should
be copied into the python/gdb/function (or python/gdb/command) directory under
@ -155,7 +155,7 @@ the data directory, e.g. /usr/share/gdb/python/gdb/function.
If the python script is in there, then the only thing you need to do when
starting gdb is type "spdk_load_macros".
# Using .gdbinit To Load The Macros
## Using .gdbinit To Load The Macros
.gdbinit can also be used in order to automatically run the manual steps
above prior to starting gdb.
@ -168,7 +168,7 @@ source /opt/km/install/tools/gdb_macros/gdb_macros.py
When starting gdb you still have to call spdk_load_macros.
# Why Do We Need to Explicitly Call spdk_load_macros
## Why Do We Need to Explicitly Call spdk_load_macros
The reason is that the macros need to use globals provided by spdk in order to
iterate the spdk lists and build iterable representations of the list objects.
@ -196,7 +196,7 @@ Error occurred in Python command: No symbol table is loaded. Use the "file"
command.
~~~
# Macros available
## Macros available
- spdk_load_macros: load the macros (use --reload in order to reload them)
- spdk_print_bdevs: information about bdevs
@ -205,7 +205,7 @@ command.
- spdk_print_nvmf_subsystems: information about nvmf subsystems
- spdk_print_threads: information about threads
# Adding New Macros
## Adding New Macros
The list iteration macros are usually built from 3 layers:

@ -1,6 +1,6 @@
# Getting Started {#getting_started}
# Getting the Source Code {#getting_started_source}
## Getting the Source Code {#getting_started_source}
~~~{.sh}
git clone https://github.com/spdk/spdk
@ -8,7 +8,7 @@ cd spdk
git submodule update --init
~~~
# Installing Prerequisites {#getting_started_prerequisites}
## Installing Prerequisites {#getting_started_prerequisites}
The `scripts/pkgdep.sh` script will automatically install the bare minimum
dependencies required to build SPDK.
@ -24,7 +24,7 @@ Option --all will install all dependencies needed by SPDK features.
sudo scripts/pkgdep.sh --all
~~~
# Building {#getting_started_building}
## Building {#getting_started_building}
Linux:
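A minimal sketch of the usual build steps (run from the repository root; see `./configure --help` for the available options):

~~~{.sh}
./configure
make
~~~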
@ -57,7 +57,7 @@ can enable it by doing the following:
make
~~~
# Running the Unit Tests {#getting_started_unittests}
## Running the Unit Tests {#getting_started_unittests}
It's always a good idea to confirm your build worked by running the
unit tests.
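A hedged example, assuming the repository's `test/unit/unittest.sh` helper script:

~~~{.sh}
./test/unit/unittest.sh
~~~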
@ -70,7 +70,7 @@ You will see several error messages when running the unit tests, but they are
part of the test suite. The final message at the end of the script indicates
success or failure.
# Running the Example Applications {#getting_started_examples}
## Running the Example Applications {#getting_started_examples}
Before running an SPDK application, some hugepages must be allocated and
any NVMe and I/OAT devices must be unbound from the native kernel drivers.
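A hedged example of doing both with the repository's setup script (assuming `scripts/setup.sh` handles hugepage allocation and device unbinding):

~~~{.sh}
sudo scripts/setup.sh
~~~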

@ -1,10 +1,10 @@
# IDXD Driver {#idxd}
# Public Interface {#idxd_interface}
## Public Interface {#idxd_interface}
- spdk/idxd.h
# Key Functions {#idxd_key_functions}
## Key Functions {#idxd_key_functions}
Function | Description
--------------------------------------- | -----------
@ -19,7 +19,7 @@ spdk_idxd_submit_crc32c() | @copybrief spdk_idxd_submit_crc32c()
spdk_idxd_submit_dualcast | @copybrief spdk_idxd_submit_dualcast()
spdk_idxd_submit_fill() | @copybrief spdk_idxd_submit_fill()
# Pre-defined configurations {#idxd_configs}
## Pre-defined configurations {#idxd_configs}
The RPC `idxd_scan_accel_engine` is used to both enable IDXD and set its
configuration to one of two pre-defined configs:

@ -1,41 +1,41 @@
# Storage Performance Development Kit {#mainpage}
# Introduction
## Introduction
@copydoc intro
# Concepts
## Concepts
@copydoc concepts
# User Guides
## User Guides
@copydoc user_guides
# Programmer Guides
## Programmer Guides
@copydoc prog_guides
# General Information
## General Information
@copydoc general
# Miscellaneous
## Miscellaneous
@copydoc misc
# Driver Modules
## Driver Modules
@copydoc driver_modules
# Tools
## Tools
@copydoc tools
# CI Tools
## CI Tools
@copydoc ci_tools
# Performance Reports
## Performance Reports
@copydoc performance_reports

@ -1,10 +1,10 @@
# I/OAT Driver {#ioat}
# Public Interface {#ioat_interface}
## Public Interface {#ioat_interface}
- spdk/ioat.h
# Key Functions {#ioat_key_functions}
## Key Functions {#ioat_key_functions}
Function | Description
--------------------------------------- | -----------

@ -1,6 +1,6 @@
# iSCSI Target {#iscsi}
# iSCSI Target Getting Started Guide {#iscsi_getting_started}
## iSCSI Target Getting Started Guide {#iscsi_getting_started}
The Storage Performance Development Kit iSCSI target application is named `iscsi_tgt`.
The following section describes how to run the iSCSI target from your cloned package.
@ -269,7 +269,7 @@ sdd
sde
~~~
# iSCSI Hotplug {#iscsi_hotplug}
## iSCSI Hotplug {#iscsi_hotplug}
At the iSCSI level, we provide the following support for Hotplug:
@ -293,7 +293,7 @@ return back; after all the commands return back, the LUN will be deleted.
@sa spdk_nvme_probe
# iSCSI Login Redirection {#iscsi_login_redirection}
## iSCSI Login Redirection {#iscsi_login_redirection}
The SPDK iSCSI target application supports the iSCSI login redirection feature.

File diff suppressed because it is too large.

@ -9,7 +9,7 @@ mixing of SPDK event framework dependent code and lower level libraries. This do
is aimed at explaining the structure, naming conventions, versioning scheme, and use cases
of the libraries contained in these two directories.
# Directory Structure {#structure}
## Directory Structure {#structure}
The SPDK libraries are divided into two directories. The `lib` directory contains the base libraries that
compose SPDK. Some of these base libraries define plug-in systems. Instances of those plug-ins are called
@ -17,7 +17,7 @@ modules and are located in the `module` directory. For example, the `spdk_sock`
`lib` directory while the implementations of socket abstractions, `sock_posix` and `sock_uring`
are contained in the `module` directory.
## lib {#lib}
### lib {#lib}
The libraries in the `lib` directory can be readily divided into four categories:
@ -48,7 +48,7 @@ Much like the `spdk_event` library, the `spdk_env_dpdk` library has been archite
can be readily replaced by an alternate environment shim. More information on replacing the `spdk_env_dpdk`
module and the underlying `dpdk` environment can be found in the [environment](#env_replacement) section.
## module {#module}
### module {#module}
The component libraries in the `module` directory represent specific implementations of the base libraries in
the `lib` directory. As with the `lib` directory, much care has been taken to avoid dependencies on the
@ -77,11 +77,11 @@ explicitly registered to that library via a constructor. The libraries in the `b
directories fall into this category. None of the libraries in this category depend explicitly on the
`spdk_event` library.
# Library Conventions {#conventions}
## Library Conventions {#conventions}
The SPDK libraries follow strict conventions for naming functions, logging, versioning, and header files.
## Headers {#headers}
### Headers {#headers}
All public SPDK header files exist in the `include` directory of the SPDK repository. These headers
are divided into two sub-directories.
@ -105,7 +105,7 @@ Other header files contained directly in the `lib` and `module` directories are
by source files of their corresponding library. Any symbols intended to be used across libraries need to be
included in a header in the `include/spdk_internal` directory.
## Naming Conventions {#naming}
### Naming Conventions {#naming}
All public types and functions in SPDK libraries begin with the prefix `spdk_`. They are also typically
further namespaced using the spdk library name. The rest of the function or type name describes its purpose.
@ -114,15 +114,15 @@ There are no internal library functions that begin with the `spdk_` prefix. This
enforced by the SPDK continuous Integration testing. Functions not intended for use outside of their home
library should be namespaced with the name of the library only.
## Map Files {#map}
### Map Files {#map}
SPDK libraries can be built as both static and shared object files. To facilitate building libraries as shared
objects, each one has a corresponding map file (e.g. `spdk_nvmf` relies on `spdk_nvmf.map`). SPDK libraries
not exporting any symbols rely on a blank map file located at `mk/spdk_blank.map`.
# SPDK Shared Objects {#shared_objects}
## SPDK Shared Objects {#shared_objects}
## Shared Object Versioning {#versioning}
### Shared Object Versioning {#versioning}
SPDK shared objects follow a semantic versioning pattern with a major and minor version. Any changes which
break backwards compatibility (symbol removal or change) will cause a shared object major increment and
@ -141,7 +141,7 @@ Shared objects are versioned independently of one another. This means that `libs
with the same suffix are not necessarily compatible with each other. It is important to source all of your
SPDK libraries from the same repository and version to ensure inter-library compatibility.
## Linking to Shared Objects {#so_linking}
### Linking to Shared Objects {#so_linking}
Shared objects in SPDK are created on a per-library basis. There is a top level `libspdk.so` object
which is a linker script. It simply contains references to all of the other spdk shared objects.
@ -172,7 +172,7 @@ itself need to be supplied to the linker. In the examples above, these are `spdk
respectively. This was intentional and allows one to easily swap out both the environment and the
environment shim.
## Replacing the env abstraction {#env_replacement}
### Replacing the env abstraction {#env_replacement}
SPDK depends on an environment abstraction that provides crucial pinned memory management and PCIe
bus management operations. The interface for this environment abstraction is defined in the
@ -193,7 +193,7 @@ shim/implementation library system.
gcc -o my_app ./my_app.c -lspdk -lcustom_env_shim -lcustom_env_implementation
~~~
# SPDK Static Objects {#static_objects}
## SPDK Static Objects {#static_objects}
SPDK static objects are compiled by default even when no parameters are supplied to the build system.
Unlike SPDK shared objects, the filename does not contain any versioning semantics. Linking against

@ -3,9 +3,9 @@
The Logical Volumes library is a flexible storage space management system. It allows creating and managing virtual
block devices with variable size. The SPDK Logical Volume library is built on top of @ref blob.
# Terminology {#lvol_terminology}
## Terminology {#lvol_terminology}
## Logical volume store {#lvs}
### Logical volume store {#lvs}
* Shorthand: lvolstore, lvs
* Type name: struct spdk_lvol_store
@ -16,7 +16,7 @@ creation, so that it can be uniquely identified from other lvolstores.
By default, when creating an lvol store, the data region is unmapped. An optional --clear-method parameter can be passed
on creation to change that behavior to writing zeroes or performing no operation.
## Logical volume {#lvol}
### Logical volume {#lvol}
* Shorthand: lvol
* Type name: struct spdk_lvol
@ -24,7 +24,7 @@ on creation to change that behavior to writing zeroes or performing no operation
A logical volume is implemented as an SPDK blob created from an lvolstore. An lvol is uniquely identified by
its UUID. Additionally, an lvol can have an alias name.
## Logical volume block device {#lvol_bdev}
### Logical volume block device {#lvol_bdev}
* Shorthand: lvol_bdev
* Type name: struct spdk_lvol_bdev
@ -39,7 +39,7 @@ option is enabled, no space is taken from lvol store until data is written to lv
By default, when deleting an lvol bdev or resizing down, allocated clusters are unmapped. An optional --clear-method
parameter can be passed on creation to change that behavior to writing zeroes or performing no operation.
## Thin provisioning {#lvol_thin_provisioning}
### Thin provisioning {#lvol_thin_provisioning}
Thin provisioned lvols rely on dynamic cluster allocation (e.g. when the first write operation on a cluster is performed), only space
required to store data is used and unallocated clusters are obtained from underlying device (e.g. zeroes_dev).
@ -52,7 +52,7 @@ Sample read operations and the structure of thin provisioned blob are shown on t
![Reading clusters from thin provisioned blob](lvol_thin_provisioning.svg)
## Snapshots and clone {#lvol_snapshots}
### Snapshots and clone {#lvol_snapshots}
Logical volumes support snapshot and clone functionality. The user may at any given time create a snapshot of an existing
logical volume to save a backup of the current volume state. When creating a snapshot, the original volume becomes thin provisioned
@ -74,26 +74,26 @@ A snapshot can be removed only if there is a single clone on top of it. The rela
The cluster map of clone and snapshot will be merged and entries for unallocated clusters in the clone will be updated with
addresses from the snapshot cluster map. The entire operation modifies metadata only - no data is copied during this process.
## Inflation {#lvol_inflation}
### Inflation {#lvol_inflation}
Blobs can be inflated to copy data from backing devices (e.g. snapshots) and allocate all remaining clusters. As a result of this
operation all dependencies for the blob are removed.
![Removing backing blob and bdevs relations using inflate call](lvol_inflate_clone_snapshot.svg)
## Decoupling {#lvol_decoupling}
### Decoupling {#lvol_decoupling}
Blobs can be decoupled from their parent blob by copying data from backing devices (e.g. snapshots) for all allocated clusters.
Remaining unallocated clusters are kept thin provisioned.
Note: When decouple is performed, only single dependency is removed. To remove all dependencies in a chain of blobs depending
on each other, multiple calls need to be issued.
# Configuring Logical Volumes
## Configuring Logical Volumes
There is no static configuration available for logical volumes. All configuration is done through RPC. Information about
logical volumes is kept on block devices.
# RPC overview {#lvol_rpc}
## RPC overview {#lvol_rpc}
RPC regarding lvolstore:

@ -92,7 +92,7 @@ SPDK must be allocated using spdk_dma_malloc() or its siblings. The buffers
must be allocated specifically so that they are pinned and so that physical
addresses are known.
# IOMMU Support
## IOMMU Support
Many platforms contain an extra piece of hardware called an I/O Memory
Management Unit (IOMMU). An IOMMU is much like a regular MMU, except it

@ -9,32 +9,32 @@ do not poll frequently enough, events may be lost. All events are identified by
monotonically increasing integer, so missing events may be detected, although
not recovered.
# Register event types {#notify_register}
## Register event types {#notify_register}
During initialization the sender library should register its own event types using
`spdk_notify_type_register(const char *type)`. The parameter 'type' is the name of the
notification type.
# Get info about events {#notify_get_info}
## Get info about events {#notify_get_info}
A consumer can get information about the available event types during runtime using
`spdk_notify_foreach_type`, which iterates over registered notification types and
calls a callback on each of them, so that user can produce detailed information
about notification.
# Get new events {#notify_listen}
## Get new events {#notify_listen}
A consumer can get events by calling function `spdk_notify_foreach_event`.
The caller should specify last received event and the maximum number of invocations.
There might be multiple consumers of each event. The event bus is implemented as a
circular buffer, so older events may be overwritten by newer ones.
# Send events {#notify_send}
## Send events {#notify_send}
When an event occurs, a library can invoke `spdk_notify_send` with two strings.
One contains the type of the event, like "spdk_bdev_register"; the second carries the context,
for example "Nvme0n1".
# RPC Calls {#rpc_calls}
## RPC Calls {#rpc_calls}
See [JSON-RPC documentation](jsonrpc.md/#rpc_notify_get_types)
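As an illustrative sketch, the corresponding rpc.py calls might look as follows (assuming the `notify_get_types` and `notify_get_notifications` RPCs described there):

~~~
./scripts/rpc.py notify_get_types
./scripts/rpc.py notify_get_notifications
~~~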

@ -1,6 +1,6 @@
# NVMe Driver {#nvme}
# In this document {#nvme_toc}
## In this document {#nvme_toc}
- @ref nvme_intro
- @ref nvme_examples
@ -11,7 +11,7 @@
- @ref nvme_hotplug
- @ref nvme_cuse
# Introduction {#nvme_intro}
## Introduction {#nvme_intro}
The NVMe driver is a C library that may be linked directly into an application
that provides direct, zero-copy data transfer to and from
@ -29,23 +29,23 @@ devices via NVMe over Fabrics. Users may now call spdk_nvme_probe() on both
local PCI busses and on remote NVMe over Fabrics discovery services. The API is
otherwise unchanged.
# Examples {#nvme_examples}
## Examples {#nvme_examples}
## Getting Start with Hello World {#nvme_helloworld}
### Getting Start with Hello World {#nvme_helloworld}
There are a number of examples provided that demonstrate how to use the NVMe
library. They are all in the [examples/nvme](https://github.com/spdk/spdk/tree/master/examples/nvme)
directory in the repository. The best place to start is
[hello_world](https://github.com/spdk/spdk/blob/master/examples/nvme/hello_world/hello_world.c).
## Running Benchmarks with Fio Plugin {#nvme_fioplugin}
### Running Benchmarks with Fio Plugin {#nvme_fioplugin}
SPDK provides a plugin to the very popular [fio](https://github.com/axboe/fio)
tool for running some basic benchmarks. See the fio start up
[guide](https://github.com/spdk/spdk/blob/master/examples/nvme/fio_plugin/)
for more details.
## Running Benchmarks with Perf Tool {#nvme_perf}
### Running Benchmarks with Perf Tool {#nvme_perf}
NVMe perf utility in the [examples/nvme/perf](https://github.com/spdk/spdk/tree/master/examples/nvme/perf)
is one of the examples which also can be used for performance tests. The fio
@ -79,7 +79,7 @@ perf -q 1 -o 4096 -w write -r 'trtype:PCIe traddr:0000:04:00.0' -t 300 -e 'PRACT
perf -q 1 -o 4096 -w read -r 'trtype:PCIe traddr:0000:04:00.0' -t 200 -e 'PRACT=0,PRCKH=GUARD'
~~~
# Public Interface {#nvme_interface}
## Public Interface {#nvme_interface}
- spdk/nvme.h
@ -103,9 +103,9 @@ spdk_nvme_ctrlr_process_admin_completions() | @copybrief spdk_nvme_ctrlr_process
spdk_nvme_ctrlr_cmd_io_raw() | @copybrief spdk_nvme_ctrlr_cmd_io_raw()
spdk_nvme_ctrlr_cmd_io_raw_with_md() | @copybrief spdk_nvme_ctrlr_cmd_io_raw_with_md()
# NVMe Driver Design {#nvme_design}
## NVMe Driver Design {#nvme_design}
## NVMe I/O Submission {#nvme_io_submission}
### NVMe I/O Submission {#nvme_io_submission}
I/O is submitted to an NVMe namespace using nvme_ns_cmd_xxx functions. The NVMe
driver submits the I/O request as an NVMe submission queue entry on the queue
@ -117,7 +117,7 @@ spdk_nvme_qpair_process_completions().
@sa spdk_nvme_ns_cmd_read, spdk_nvme_ns_cmd_write, spdk_nvme_ns_cmd_dataset_management,
spdk_nvme_ns_cmd_flush, spdk_nvme_qpair_process_completions
### Fused operations {#nvme_fuses}
#### Fused operations {#nvme_fuses}
To "fuse" two commands, the first command should have the SPDK_NVME_IO_FLAGS_FUSE_FIRST
io flag set, and the next one should have the SPDK_NVME_IO_FLAGS_FUSE_SECOND.
@ -149,7 +149,7 @@ The NVMe specification currently defines compare-and-write as a fused operation.
Support for compare-and-write is reported by the controller flag
SPDK_NVME_CTRLR_COMPARE_AND_WRITE_SUPPORTED.
### Scaling Performance {#nvme_scaling}
#### Scaling Performance {#nvme_scaling}
NVMe queue pairs (struct spdk_nvme_qpair) provide parallel submission paths for
I/O. I/O may be submitted on multiple queue pairs simultaneously from different
@ -182,7 +182,7 @@ require that data should be done by sending a request to the owning thread.
This results in a message passing architecture, as opposed to a locking
architecture, and will result in superior scaling across CPU cores.
## NVMe Driver Internal Memory Usage {#nvme_memory_usage}
### NVMe Driver Internal Memory Usage {#nvme_memory_usage}
The SPDK NVMe driver provides a zero-copy data transfer path, which means that
there are no data buffers for I/O commands. However, some Admin commands have
@ -202,12 +202,12 @@ Each submission queue entry (SQE) and completion queue entry (CQE) consumes 64 b
and 16 bytes respectively. Therefore, the maximum memory used for each I/O queue
pair is (MQES + 1) * (64 + 16) Bytes.
# NVMe over Fabrics Host Support {#nvme_fabrics_host}
## NVMe over Fabrics Host Support {#nvme_fabrics_host}
The NVMe driver supports connecting to remote NVMe-oF targets and
interacting with them in the same manner as local NVMe SSDs.
## Specifying Remote NVMe over Fabrics Targets {#nvme_fabrics_trid}
### Specifying Remote NVMe over Fabrics Targets {#nvme_fabrics_trid}
The method for connecting to a remote NVMe-oF target is very similar
to the normal enumeration process for local PCIe-attached NVMe devices.
@ -228,11 +228,11 @@ single NVM subsystem directly, the NVMe library will call `probe_cb`
for just that subsystem; this allows the user to skip the discovery step
and connect directly to a subsystem with a known address.
## RDMA Limitations
### RDMA Limitations
Please refer to NVMe-oF target's @ref nvmf_rdma_limitations
# NVMe Multi Process {#nvme_multi_process}
## NVMe Multi Process {#nvme_multi_process}
This capability enables the SPDK NVMe driver to support multiple processes accessing the
same NVMe device. The NVMe driver allocates critical structures from shared memory, so
@ -243,7 +243,7 @@ The primary motivation for this feature is to support management tools that can
to long running applications, perform some maintenance work or gather information, and
then detach.
## Configuration {#nvme_multi_process_configuration}
### Configuration {#nvme_multi_process_configuration}
DPDK EAL allows different types of processes to be spawned, each with different permissions
on the hugepage memory used by the applications.
@ -269,7 +269,7 @@ Example: identical shm_id and non-overlapping core masks
./perf -q 8 -o 131072 -w write -c 0x10 -t 60 -i 1
~~~
## Limitations {#nvme_multi_process_limitations}
### Limitations {#nvme_multi_process_limitations}
1. Two processes sharing memory may not share any cores in their core mask.
2. If a primary process exits while secondary processes are still running, those processes
@ -280,7 +280,7 @@ Example: identical shm_id and non-overlapping core masks
@sa spdk_nvme_probe, spdk_nvme_ctrlr_process_admin_completions
# NVMe Hotplug {#nvme_hotplug}
## NVMe Hotplug {#nvme_hotplug}
At the NVMe driver level, we provide the following support for Hotplug:
@ -300,11 +300,11 @@ At the NVMe driver level, we provide the following support for Hotplug:
@sa spdk_nvme_probe
# NVMe Character Devices {#nvme_cuse}
## NVMe Character Devices {#nvme_cuse}
This feature is considered experimental.
## Design
### Design
![NVMe character devices processing diagram](nvme_cuse.svg)
@ -326,9 +326,9 @@ immediate response, without passing them through the ring.
This interface reserves one additional qpair for sending down the I/O for each controller.
## Usage
### Usage
### Enabling cuse support for NVMe
#### Enabling cuse support for NVMe
Cuse support is disabled by default. To enable support for NVMe-CUSE devices first
install required dependencies
@ -337,7 +337,7 @@ sudo scripts/pkgdep.sh --fuse
~~~
Then compile SPDK with "./configure --with-nvme-cuse".
### Creating NVMe-CUSE device
#### Creating NVMe-CUSE device
First make sure to prepare the environment (see @ref getting_started).
This includes loading CUSE kernel module.
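A hedged sequence (the module name and the controller name are illustrative; `bdev_nvme_cuse_register` is assumed to be the counterpart of the `bdev_nvme_cuse_unregister` RPC mentioned in the bdev guide):

~~~
sudo modprobe cuse
./scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
~~~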
@ -367,7 +367,7 @@ $ ls /dev/spdk/
nvme0 nvme0n1
~~~
### Example of using nvme-cli
#### Example of using nvme-cli
Most nvme-cli commands can point to specific controller or namespace by providing a path to it.
This can be leveraged to issue commands to the SPDK NVMe-CUSE devices.
@ -381,7 +381,7 @@ sudo nvme id-ns /dev/spdk/nvme0n1
Note: `nvme list` command does not display SPDK NVMe-CUSE devices,
see nvme-cli [PR #773](https://github.com/linux-nvme/nvme-cli/pull/773).
### Examples of using smartctl
#### Examples of using smartctl
The smartctl tool recognizes the device type based on the device path. If none of the expected
patterns match, the SCSI translation layer is used to identify the device.
@ -395,7 +395,7 @@ the NVMe device.
...
~~~
## Limitations
### Limitations
NVMe namespaces are created as character devices and their use may be limited for
tools expecting block devices.

@ -3,7 +3,7 @@
@sa @ref nvme_fabrics_host
@sa @ref nvmf_tgt_tracepoints
# NVMe-oF Target Getting Started Guide {#nvmf_getting_started}
## NVMe-oF Target Getting Started Guide {#nvmf_getting_started}
The SPDK NVMe over Fabrics target is a user space application that presents block devices over fabrics
such as Ethernet, Infiniband or Fibre Channel. SPDK currently supports RDMA and TCP transports.

@ -1,6 +1,6 @@
# NVMe-oF Target Tracepoints {#nvmf_tgt_tracepoints}
# Introduction {#tracepoints_intro}
## Introduction {#tracepoints_intro}
SPDK has a tracing framework for capturing low-level event information at runtime.
Tracepoints provide a high-performance tracing mechanism that is accessible at runtime.
@ -9,7 +9,7 @@ processes. The NVMe-oF target is instrumented with tracepoints to enable analysi
both performance and application crashes. (Note: the SPDK tracing framework should still
be considered experimental. Work to formalize and document the framework is in progress.)
# Enabling Tracepoints {#enable_tracepoints}
## Enabling Tracepoints {#enable_tracepoints}
Tracepoints are placed in groups. They are enabled and disabled as a group. To enable
the instrumentation of all the tracepoint groups in an SPDK target application, start the
@ -41,7 +41,7 @@ exits. This ensures the file can be used for analysis after the application exi
shared memory files are in /dev/shm, and can be deleted manually to free shm space if needed. A system
reboot will also free all of the /dev/shm files.
# Capturing a snapshot of events {#capture_tracepoints}
## Capturing a snapshot of events {#capture_tracepoints}
Send I/Os to the SPDK target application to generate events. The following is
an example usage of perf to send I/Os to the NVMe-oF target over an RDMA network
@ -124,7 +124,7 @@ the same I/O.
28: 6033.056 ( 12669500) RDMA_REQ_COMPLETED id: r3564 time: 100.211
~~~
# Capturing sufficient trace events {#capture_trace_events}
## Capturing sufficient trace events {#capture_trace_events}
Since the tracepoint file generated directly by SPDK application is a circular buffer in shared memory,
the trace events captured by it may be insufficient for further analysis.
@ -150,7 +150,7 @@ To analyze the tracepoints output file from spdk_trace_record, simply run spdk_t
build/bin/spdk_trace -f /tmp/spdk_nvmf_record.trace
~~~
# Adding New Tracepoints {#add_tracepoints}
## Adding New Tracepoints {#add_tracepoints}
SPDK applications and libraries provide several trace points. You can add new
tracepoints to the existing trace groups. For example, to add a new tracepoints

@ -1,6 +1,6 @@
# SPDK Structural Overview {#overview}
# Overview {#dir_overview}
## Overview {#dir_overview}
SPDK is composed of a set of C libraries residing in `lib` with public interface
header files in `include/spdk`, plus a set of applications built out of those

@ -3,14 +3,14 @@
Please note that the functionality discussed in this document is
currently tagged as experimental.
# In this document {#p2p_toc}
## In this document {#p2p_toc}
* @ref p2p_overview
* @ref p2p_nvme_api
* @ref p2p_cmb_copy
* @ref p2p_issues
# Overview {#p2p_overview}
## Overview {#p2p_overview}
Peer-2-Peer (P2P) is the concept of DMAing data directly from one PCI
End Point (EP) to another without using a system memory buffer. The
@ -22,7 +22,7 @@ In this section of documentation we outline how to perform P2P
operations in SPDK and outline some of the issues that can occur when
performing P2P operations.
# The P2P API for NVMe {#p2p_nvme_api}
## The P2P API for NVMe {#p2p_nvme_api}
The functions that provide access to the NVMe CMBs for P2P
capabilities are given in the table below.
@ -33,7 +33,7 @@ spdk_nvme_ctrlr_map_cmb() | @copybrief spdk_nvme_ctrlr_map_cmb
spdk_nvme_ctrlr_unmap_cmb() | @copybrief spdk_nvme_ctrlr_unmap_cmb()
spdk_nvme_ctrlr_get_regs_cmbsz() | @copybrief spdk_nvme_ctrlr_get_regs_cmbsz()
# Determining device support {#p2p_support}
## Determining device support {#p2p_support}
SPDK's identify example application displays whether a device has a controller
memory buffer and which operations it supports. Run it as follows:
@ -42,7 +42,7 @@ memory buffer and which operations it supports. Run it as follows:
./build/examples/identify -r traddr:<pci id of ssd>
~~~
# cmb_copy: An example P2P Application {#p2p_cmb_copy}
## cmb_copy: An example P2P Application {#p2p_cmb_copy}
Run the cmb_copy example application.
@ -53,7 +53,7 @@ This should copy a single LBA (LBA 0) from namespace 1 on the read
NVMe SSD to LBA 0 on namespace 1 on the write SSD using the CMB as the
DMA buffer.
# Issues with P2P {#p2p_issues}
## Issues with P2P {#p2p_issues}
* In some systems when performing peer-2-peer DMAs between PCIe EPs
that are directly connected to the Root Complex (RC) the DMA may

@ -1,10 +1,10 @@
# RPMs {#rpms}
# In this document {#rpms_toc}
## In this document {#rpms_toc}
* @ref building_rpms
# Building SPDK RPMs {#building_rpms}
## Building SPDK RPMs {#building_rpms}
To build basic set of RPM packages out of the SPDK repo simply run:

@ -1,13 +1,13 @@
# shfmt {#shfmt}
# In this document {#shfmt_toc}
## In this document {#shfmt_toc}
* @ref shfmt_overview
* @ref shfmt_usage
* @ref shfmt_installation
* @ref shfmt_examples
# Overview {#shfmt_overview}
## Overview {#shfmt_overview}
The majority of tests (and scripts overall) in the SPDK repo are written
in Bash (with a quite significant emphasis on "Bashism"), thus a style
@ -15,7 +15,7 @@ formatter, shfmt, was introduced to help keep the .sh code consistent
across the entire repo. For more details on the tool itself, please see
[shfmt](https://github.com/mvdan/sh).
# Usage {#shfmt_usage}
## Usage {#shfmt_usage}
On the CI pool, the shfmt is run against all the updated .sh files that
have been committed but not merged yet. Additionally, shfmt will pick
@ -36,7 +36,7 @@ Please, see ./scripts/check_format.sh for all the arguments the shfmt
is run with. Additionally, @ref shfmt_examples has more details on how
each of the arguments behave.
# Installation {#shfmt_installation}
## Installation {#shfmt_installation}
The shfmt can be easily installed via pkgdep.sh:
@ -55,7 +55,7 @@ SHFMT_DIR_OUT=/and_link_it_here \
./scripts/pkgdep.sh -d
~~~
# Examples {#shfmt_examples}
## Examples {#shfmt_examples}
~~~{.sh}
#######################################

@ -15,7 +15,7 @@ spdk_top utility gets the fine grained metrics from the pollers, analyzes and re
This information enables users to identify CPU cores that are busy doing real work so that they can determine if the application
needs more or less CPU resources.
# Run spdk_top
## Run spdk_top
Before running spdk_top you need to run the SPDK application whose performance you want to analyze using spdk_top.
@ -25,7 +25,7 @@ Run the spdk_top application
./build/bin/spdk_top
~~~
# Bottom menu
## Bottom menu
Menu at the bottom of SPDK top window shows many options for changing displayed data. Each menu item has a key associated with it in square brackets.
@ -36,7 +36,7 @@ of all available pages.
* Item details - displays details pop-up window for highlighted data row. Selection is changed by pressing UP and DOWN arrow keys.
* Help - displays help pop-up window.
# Threads Tab
## Threads Tab
The threads tab displays a line item for each spdk thread. The information displayed shows:
@ -52,7 +52,7 @@ Pop-up then can be closed by pressing ESC key.
To learn more about spdk threads see @ref concurrency.
# Pollers Tab
## Pollers Tab
The pollers tab displays a line item for each poller. The information displayed shows:
@ -67,7 +67,7 @@ The pollers tab displays a line item for each poller. The information displayed
Poller pop-up window can be displayed by pressing ENTER on a selected data row and displays above information.
Pop-up can be closed by pressing ESC key.
# Cores Tab
## Cores Tab
The cores tab provides insights into how the application is using the CPU cores assigned to it. The information displayed for each core shows:
@ -81,6 +81,6 @@ The cores tab provides insights into how the application is using the CPU cores
Pressing ENTER key makes a pop-up window appear, showing above information, along with a list of threads running on selected core. Cores details
window allows selecting a thread and displaying a thread details pop-up on top of it. To close both pop-ups use ESC key.
# Help Window
## Help Window
Help window pop-up can be invoked by pressing H key inside any tab. It contains explanations for each key used inside the spdk_top application.


@ -2,7 +2,7 @@
This system configuration guide describes how to configure a system for use with SPDK.
# IOMMU configuration {#iommu_config}
## IOMMU configuration {#iommu_config}
An IOMMU may be present and enabled on many platforms. When an IOMMU is present and enabled, it is
recommended that SPDK applications are deployed with the `vfio-pci` kernel driver. SPDK's
@ -21,7 +21,7 @@ version they are using has a bug where `uio_pci_generic` [fails to bind to NVMe
In these cases, users can build the `igb_uio` kernel module which can be found in dpdk-kmods repository.
To ensure that the driver is properly bound, users should specify `DRIVER_OVERRIDE=/path/to/igb_uio.ko`.
# Running SPDK as non-privileged user {#system_configuration_nonroot}
## Running SPDK as non-privileged user {#system_configuration_nonroot}
One of the benefits of using the `VFIO` Linux kernel driver is the ability to
perform DMA operations with peripheral devices as unprivileged user. The
@ -29,7 +29,7 @@ permissions to access particular devices still need to be granted by the system
administrator, but only on a one-time basis. Note that this functionality
is supported with DPDK starting from version 18.11.
## Hugetlbfs access
### Hugetlbfs access
Make sure the target user has RW access to at least one hugepage mount.
A good idea is to create a new mount specifically for SPDK:
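A minimal sketch, assuming the target user is named `spdk` and the mount point is `/mnt/spdk_hugepages` (both names are illustrative):

~~~{.sh}
# Create a dedicated hugetlbfs mount owned by the target user
mkdir -p /mnt/spdk_hugepages
mount -t hugetlbfs -o uid=spdk,size=4G none /mnt/spdk_hugepages
~~~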
@ -44,7 +44,7 @@ Then start SPDK applications with an additional parameter `--huge-dir /mnt/spdk_
Full guide on configuring hugepage mounts is available in the
[Linux Hugetlbpage Documentation](https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt)
## Device access {#system_configuration_nonroot_device_access}
### Device access {#system_configuration_nonroot_device_access}
`VFIO` device access is protected with sysfs file permissions and can be
configured with chown/chmod.
@ -86,7 +86,7 @@ devices, use the following:
# chown spdk /dev/vfio/5
~~~
## Memory constraints {#system_configuration_nonroot_memory_constraints}
### Memory constraints {#system_configuration_nonroot_memory_constraints}
As soon as the first device is attached to SPDK, all of SPDK memory will be
mapped to the IOMMU through the VFIO APIs. VFIO will try to mlock that memory and
@ -111,7 +111,7 @@ try to map not only its reserved hugepages, but also all the memory that's
shared by its vhost clients as described in the
[Vhost processing guide](https://spdk.io/doc/vhost_processing.html#vhost_processing_init).
### Increasing the memlock limit permanently
#### Increasing the memlock limit permanently
Open the `/etc/security/limits.conf` file as root and append the following:
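Assuming the target user is named `spdk`, the appended entries could look like this:

~~~{.sh}
spdk     hard   memlock           unlimited
spdk     soft   memlock           unlimited
~~~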
@ -122,7 +122,7 @@ spdk soft memlock unlimited
Then logout from the target user account. The changes will take effect after the next login.
### Increasing the memlock for a specific process
#### Increasing the memlock for a specific process
Linux offers a `prlimit` utility that can override limits of any given process.
On Ubuntu, it is a part of the `util-linux` package.
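For example, to raise the memlock limit of an already running process (the PID below is illustrative):

~~~{.sh}
prlimit --pid 3456 --memlock=unlimited
~~~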


@ -1,6 +1,6 @@
# ComponentName Programmer's Guide {#componentname_pg}
# In this document {#componentname_pg_toc}
## In this document {#componentname_pg_toc}
@ref componentname_pg_audience
@ref componentname_pg_intro


@ -1,6 +1,6 @@
# User Space Drivers {#userspace}
# Controlling Hardware From User Space {#userspace_control}
## Controlling Hardware From User Space {#userspace_control}
Much of the documentation for SPDK talks about _user space drivers_, so it's
important to understand what that means at a technical level. First and
@ -53,7 +53,7 @@ with the
[NVMe Specification](http://nvmexpress.org/wp-content/uploads/NVM_Express_Revision_1.3.pdf)
to initialize the device, create queue pairs, and ultimately send I/O.
# Interrupts {#userspace_interrupts}
## Interrupts {#userspace_interrupts}
SPDK polls devices for completions instead of waiting for interrupts. There
are a number of reasons for doing this: 1) practically speaking, routing an
@ -69,7 +69,7 @@ technologies such as Intel's
will ensure that the host memory being checked is present in the CPU cache
after an update by the device.
# Threading {#userspace_threading}
## Threading {#userspace_threading}
NVMe devices expose multiple queues for submitting requests to the hardware.
Separate queues can be accessed without coordination, so software can send


@ -1,6 +1,6 @@
# Vagrant Development Environment {#vagrant}
# Introduction {#vagrant_intro}
## Introduction {#vagrant_intro}
[Vagrant](https://www.vagrantup.com/) provides a quick way to get a basic
NVMe enabled virtual machine sandbox running without the need for any
@ -22,7 +22,7 @@ vagrant plugin install vagrant-proxyconf
In case you want use kvm/libvirt you should also install `vagrant-libvirt`
# VM Configuration {#vagrant_config}
## VM Configuration {#vagrant_config}
To create a configured VM with vagrant you need to run `create_vbox.sh` script.
@ -47,7 +47,7 @@ world example application.
vagrant --help
~~~
# Running An Example {#vagrant_example}
## Running An Example {#vagrant_example}
The following shows sample output from starting up a Ubuntu18 VM,
compiling SPDK on it and running the NVMe sample application `hello_world`.


@ -1,6 +1,6 @@
# vhost Target {#vhost}
# Table of Contents {#vhost_toc}
## Table of Contents {#vhost_toc}
- @ref vhost_intro
- @ref vhost_prereqs
@ -11,7 +11,7 @@
- @ref vhost_advanced_topics
- @ref vhost_bugs
# Introduction {#vhost_intro}
## Introduction {#vhost_intro}
A vhost target provides a local storage service as a process running on a local machine.
It is capable of exposing virtualized block devices to QEMU instances or other arbitrary
@ -28,12 +28,12 @@ techniques as other components in SPDK. Since SPDK is polling for vhost submiss
it can signal the VM to skip notifications on submission. This avoids VMEXITs on I/O
submission and can significantly reduce CPU usage in the VM on heavy I/O workloads.
# Prerequisites {#vhost_prereqs}
## Prerequisites {#vhost_prereqs}
This guide assumes the SPDK has been built according to the instructions in @ref
getting_started. The SPDK vhost target is built with the default configure options.
## Vhost Command Line Parameters {#vhost_cmd_line_args}
### Vhost Command Line Parameters {#vhost_cmd_line_args}
Additional command line flags are available for Vhost target.
@ -41,7 +41,7 @@ Param | Type | Default | Description
-------- | -------- | ---------------------- | -----------
-S | string | $PWD | directory where UNIX domain sockets will be created
## Supported Guest Operating Systems
### Supported Guest Operating Systems
The guest OS must contain virtio-scsi or virtio-blk drivers. Most Linux and FreeBSD
distributions include virtio drivers.
@ -49,7 +49,7 @@ distributions include virtio drivers.
installed separately. The SPDK vhost target has been tested with recent versions of Ubuntu,
Fedora, and Windows.
## QEMU
### QEMU
Userspace vhost-scsi target support was added to upstream QEMU in v2.10.0. Run
the following command to confirm your QEMU supports userspace vhost-scsi.
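A check along these lines should list the device options if support is present:

~~~{.sh}
qemu-system-x86_64 -device vhost-user-scsi-pci,help
~~~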
@ -74,7 +74,7 @@ Run the following command to confirm your QEMU supports userspace vhost-nvme.
qemu-system-x86_64 -device vhost-user-nvme,help
~~~
# Starting SPDK vhost target {#vhost_start}
## Starting SPDK vhost target {#vhost_start}
First, run the SPDK setup.sh script to setup some hugepages for the SPDK vhost target
application. This will allocate 4096MiB (4GiB) of hugepages, enough for the SPDK
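A typical invocation, with the amount of hugepage memory passed via the `HUGEMEM` environment variable:

~~~{.sh}
HUGEMEM=4096 scripts/setup.sh
~~~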
@ -100,9 +100,9 @@ To list all available vhost options use the following command.
build/bin/vhost -h
~~~
# SPDK Configuration {#vhost_config}
## SPDK Configuration {#vhost_config}
## Create bdev (block device) {#vhost_bdev_create}
### Create bdev (block device) {#vhost_bdev_create}
SPDK bdevs are block devices which will be exposed to the guest OS.
For vhost-scsi, bdevs are exposed as SCSI LUNs on SCSI devices attached to the
@ -121,9 +121,9 @@ will create a 64MB malloc bdev with 512-byte block size.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
~~~
## Create a vhost device {#vhost_vdev_create}
### Create a vhost device {#vhost_vdev_create}
### Vhost-SCSI
#### Vhost-SCSI
The following RPC will create a vhost-scsi controller which can be accessed
by QEMU via /var/tmp/vhost.0. At the time of creation the controller will be
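A sketch of the controller creation and target attach steps (controller name, target number and cpumask are illustrative):

~~~{.sh}
scripts/rpc.py vhost_create_scsi_controller --cpumask 0x1 vhost.0
scripts/rpc.py vhost_scsi_controller_add_target vhost.0 0 Malloc0
~~~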
@ -152,7 +152,7 @@ To remove a bdev from a vhost-scsi controller use the following RPC:
scripts/rpc.py vhost_scsi_controller_remove_target vhost.0 0
~~~
### Vhost-BLK
#### Vhost-BLK
The following RPC will create a vhost-blk device exposing Malloc0 bdev.
The device will be accessible to QEMU via /var/tmp/vhost.1. All the I/O polling
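For example:

~~~{.sh}
scripts/rpc.py vhost_create_blk_controller --cpumask 0x1 vhost.1 Malloc0
~~~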
@ -171,7 +171,7 @@ extra `-r` or `--readonly` parameter.
scripts/rpc.py vhost_create_blk_controller --cpumask 0x1 -r vhost.1 Malloc0
~~~
## QEMU {#vhost_qemu_config}
### QEMU {#vhost_qemu_config}
Now the virtual machine can be started with QEMU. The following command-line
parameters must be added to connect the virtual machine to its vhost controller.
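The VM memory must be backed by hugepages and shared with the vhost target; this typically looks like the following (size and path are illustrative):

~~~{.sh}
-object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem0
~~~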
@ -195,21 +195,21 @@ SPDK malloc block device by specifying bootindex=0 for the boot image.
Finally, specify the SPDK vhost devices:
### Vhost-SCSI
#### Vhost-SCSI
~~~{.sh}
-chardev socket,id=char0,path=/var/tmp/vhost.0
-device vhost-user-scsi-pci,id=scsi0,chardev=char0
~~~
### Vhost-BLK
#### Vhost-BLK
~~~{.sh}
-chardev socket,id=char1,path=/var/tmp/vhost.1
-device vhost-user-blk-pci,id=blk0,chardev=char1
~~~
## Example output {#vhost_example}
### Example output {#vhost_example}
This example uses an NVMe bdev alongside Mallocs. SPDK vhost application is started
on CPU cores 0 and 1, QEMU on cores 2 and 3.
@ -310,9 +310,9 @@ vhost.c:1006:session_shutdown: *NOTICE*: Exiting
We can see that `sdb` and `sdc` are SPDK vhost-scsi LUNs, and `vda` is an
SPDK vhost-blk disk.
# Advanced Topics {#vhost_advanced_topics}
## Advanced Topics {#vhost_advanced_topics}
## Multi-Queue Block Layer (blk-mq) {#vhost_multiqueue}
### Multi-Queue Block Layer (blk-mq) {#vhost_multiqueue}
For best performance use the Linux kernel block multi-queue feature with vhost.
To enable it on Linux, it is required to modify kernel options inside the
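On older guest kernels this usually means adding a kernel boot parameter such as:

~~~{.sh}
scsi_mod.use_blk_mq=1
~~~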
@ -335,7 +335,7 @@ Some Linux distributions report a kernel panic when starting the VM if the numbe
specified via the `num-queues` parameter is greater than number of vCPUs. If you need to use
more I/O queues than vCPUs, check that your OS image supports that configuration.
## Hot-attach/hot-detach {#vhost_hotattach}
### Hot-attach/hot-detach {#vhost_hotattach}
Hotplug/hotremove within a vhost controller is called hot-attach/detach. This is to
distinguish it from SPDK bdev hotplug/hotremove. E.g. if an NVMe bdev is attached
@ -348,7 +348,7 @@ to hot-attach/detach the bdev from a Vhost-BLK device. If Vhost-BLK device expos
an NVMe bdev that is hotremoved, all the I/O traffic on that Vhost-BLK device will
be aborted - possibly flooding a VM with syslog warnings and errors.
### Hot-attach
#### Hot-attach
Hot-attach is done by simply attaching a bdev to a vhost controller with a QEMU VM
already started. No other extra action is necessary.
@ -357,7 +357,7 @@ already started. No other extra action is necessary.
scripts/rpc.py vhost_scsi_controller_add_target vhost.0 0 Malloc0
~~~
### Hot-detach
#### Hot-detach
Just like hot-attach, the hot-detach is done by simply removing bdev from a controller
when QEMU VM is already started.
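For a vhost-scsi controller this uses the same RPC as removing a target, for example:

~~~{.sh}
scripts/rpc.py vhost_scsi_controller_remove_target vhost.0 0
~~~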
@ -372,22 +372,22 @@ Removing an entire bdev will hot-detach it from a controller as well.
scripts/rpc.py bdev_malloc_delete Malloc0
~~~
# Known bugs and limitations {#vhost_bugs}
## Known bugs and limitations {#vhost_bugs}
## Vhost-NVMe (experimental) can only be supported with latest Linux kernel
### Vhost-NVMe (experimental) can only be supported with latest Linux kernel
Vhost-NVMe target was designed for one new feature of NVMe 1.3 specification, Doorbell
Buffer Config Admin command, which is used for emulated NVMe controller only. Linux 4.12
added this feature, so a new Guest kernel later than 4.12 is required to test this feature.
## Windows virtio-blk driver before version 0.1.130-1 only works with 512-byte sectors
### Windows virtio-blk driver before version 0.1.130-1 only works with 512-byte sectors
The Windows `viostor` driver before version 0.1.130-1 is buggy and does not
correctly support vhost-blk devices with non-512-byte block size.
See the [bug report](https://bugzilla.redhat.com/show_bug.cgi?id=1411092) for
more information.
## QEMU vhost-user-blk
### QEMU vhost-user-blk
QEMU [vhost-user-blk](https://git.qemu.org/?p=qemu.git;a=commit;h=00343e4b54ba) is
supported from version 2.12.


@ -1,6 +1,6 @@
# Virtualized I/O with Vhost-user {#vhost_processing}
# Table of Contents {#vhost_processing_toc}
## Table of Contents {#vhost_processing_toc}
- @ref vhost_processing_intro
- @ref vhost_processing_qemu
@ -8,7 +8,7 @@
- @ref vhost_processing_io_path
- @ref vhost_spdk_optimizations
# Introduction {#vhost_processing_intro}
## Introduction {#vhost_processing_intro}
This document is intended to provide an overview of how Vhost works behind the
scenes. Code snippets used in this document might have been simplified for the
@ -68,7 +68,7 @@ in the socket communication.
SPDK vhost is a Vhost-user slave server. It exposes Unix domain sockets and
allows external applications to connect.
# QEMU {#vhost_processing_qemu}
## QEMU {#vhost_processing_qemu}
One of major Vhost-user use cases is networking (DPDK) or storage (SPDK)
offload in QEMU. The following diagram presents how QEMU-based VM
@ -76,7 +76,7 @@ communicates with SPDK Vhost-SCSI device.
![QEMU/SPDK vhost data flow](img/qemu_vhost_data_flow.svg)
# Device initialization {#vhost_processing_init}
## Device initialization {#vhost_processing_init}
All initialization and management information is exchanged using Vhost-user
messages. The connection always starts with the feature negotiation. Both
@ -118,7 +118,7 @@ If multiqueue feature has been negotiated, the driver has to send a specific
*ENABLE* message for each extra queue it wants to be polled. Other queues are
polled as soon as they're initialized.
# I/O path {#vhost_processing_io_path}
## I/O path {#vhost_processing_io_path}
The Master sends I/O by allocating proper buffers in shared memory, filling
the request data, and putting guest addresses of those buffers into virtqueues.
@ -186,7 +186,7 @@ proper data and interrupts the guest by doing an eventfd_write on the call
descriptor for proper virtqueue. There are multiple interrupt coalescing
features involved, but they are not discussed in this document.
## SPDK optimizations {#vhost_spdk_optimizations}
### SPDK optimizations {#vhost_spdk_optimizations}
Due to its poll-mode nature, SPDK vhost removes the requirement for I/O submission
notifications, drastically increasing the vhost server throughput and decreasing


@ -1,6 +1,6 @@
# Virtio driver {#virtio}
# Introduction {#virtio_intro}
## Introduction {#virtio_intro}
SPDK Virtio driver is a C library that allows communicating with Virtio devices.
It allows any SPDK application to become an initiator for (SPDK) vhost targets.
@ -20,7 +20,7 @@ This Virtio library is currently used to implement two bdev modules:
@ref bdev_config_virtio_scsi and @ref bdev_config_virtio_blk.
These modules will export generic SPDK block devices usable by any SPDK application.
# 2MB hugepages {#virtio_2mb}
## 2MB hugepages {#virtio_2mb}
vhost-user specification puts a limitation on the number of "memory regions" used (8).
Each region corresponds to one file descriptor, and DPDK - as SPDK's memory allocator -


@ -1,6 +1,6 @@
# VMD driver {#vmd}
# In this document {#vmd_toc}
## In this document {#vmd_toc}
* @ref vmd_intro
* @ref vmd_interface
@ -10,7 +10,7 @@
* @ref vmd_app
* @ref vmd_led
# Introduction {#vmd_intro}
## Introduction {#vmd_intro}
Intel Volume Management Device is a hardware logic inside processor's Root Complex
responsible for management of PCIe NVMe SSDs. It provides robust Hot Plug support
@ -19,11 +19,11 @@ and Status LED management.
The driver is responsible for enumeration and hooking NVMe devices behind VMD
into SPDK PCIe subsystem. It also provides API for LED management and hot plug.
# Public Interface {#vmd_interface}
## Public Interface {#vmd_interface}
- spdk/vmd.h
# Key Functions {#vmd_key_functions}
## Key Functions {#vmd_key_functions}
Function | Description
--------------------------------------- | -----------
@ -33,7 +33,7 @@ spdk_vmd_set_led_state() | @copybrief spdk_vmd_set_led_state()
spdk_vmd_get_led_state() | @copybrief spdk_vmd_get_led_state()
spdk_vmd_hotplug_monitor() | @copybrief spdk_vmd_hotplug_monitor()
# Configuration {#vmd_config}
## Configuration {#vmd_config}
To enable VMD driver enumeration, the following steps are required:
@ -75,7 +75,7 @@ Example:
$ ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe1 -t PCIe -a 5d0505:01:00.0
```
# Application framework {#vmd_app_frame}
## Application framework {#vmd_app_frame}
When application framework is used, VMD section needs to be added to the configuration file:
@ -98,7 +98,7 @@ $ ./build/bin/spdk_tgt --wait_for_rpc
$ ./scripts/rpc.py enable_vmd
$ ./scripts/rpc.py framework_start_init
```
# Applications w/o application framework {#vmd_app}
## Applications w/o application framework {#vmd_app}
To enable VMD enumeration in SPDK applications that are not using the application framework,
e.g. nvme/perf or nvme/identify, the -V flag is required - please refer to the application's help output to check whether it supports VMD.
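For example, assuming the identify example in your build supports VMD (check its help output):

```
$ ./build/examples/identify -V
```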
@ -107,7 +107,7 @@ Applications need to call spdk_vmd_init() to enumerate NVMe devices behind the V
spdk_nvme_(probe|connect).
To support hot plugs spdk_vmd_hotplug_monitor() needs to be called periodically.
# LED management {#vmd_led}
## LED management {#vmd_led}
VMD LED utility in the [examples/vmd/led](https://github.com/spdk/spdk/tree/master/examples/vmd/led)
could be used to set LED states.


@ -4,7 +4,7 @@ This directory contains a plug-in module for fio to enable use
with SPDK. Fio is free software published under version 2 of
the GPL license.
# Compiling fio
## Compiling fio
Clone the fio source repository from https://github.com/axboe/fio
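For example:

~~~{.sh}
git clone https://github.com/axboe/fio
cd fio
~~~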
@ -16,7 +16,7 @@ Compile the fio code and install:
make
make install
# Compiling SPDK
## Compiling SPDK
Clone the SPDK source repository from https://github.com/spdk/spdk
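A sketch of the steps (the fio path passed to configure is illustrative):

~~~{.sh}
git clone https://github.com/spdk/spdk
cd spdk
git submodule update --init
./configure --with-fio=/path/to/fio/repo
make
~~~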
@ -39,7 +39,7 @@ with -fPIC by modifying your DPDK configuration file and adding the line:
EXTRA_CFLAGS=-fPIC
# Usage
## Usage
To use the SPDK fio plugin with fio, specify the plugin binary using LD_PRELOAD when running
fio and set ioengine=spdk_bdev in the fio configuration file (see example_config.fio in the same
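A sketch of such an invocation, with paths relative to the SPDK repository root (adjust to your build):

~~~{.sh}
LD_PRELOAD=./build/fio/spdk_bdev fio ./examples/bdev/fio_plugin/example_config.fio
~~~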
@ -72,7 +72,7 @@ When testing random workloads, it is recommended to set norandommap=1. fio's ra
processing consumes extra CPU cycles which will degrade performance over time with
the fio_plugin since all I/O are submitted and completed on a single CPU core.
# Zoned Block Devices
## Zoned Block Devices
SPDK has a zoned block device API (bdev_zone.h) which currently supports Open-channel SSDs,
NVMe Zoned Namespaces (ZNS), and the virtual zoned block device SPDK module.
@ -86,7 +86,7 @@ If using --numjobs=1, fio version >= 3.23 should suffice.
See zbd_example.fio in this directory for a zoned block device example config.
## Maximum Open Zones
### Maximum Open Zones
Most zoned block devices have a resource constraint on the number of zones which can be in an open
state at any point in time. It is very important not to exceed this limit.
@ -97,7 +97,7 @@ You can control how many zones fio will keep in an open state by using the
If you use a fio version newer than 3.26, fio will automatically detect and set the proper value.
If you use an old version of fio, make sure to provide the proper --max_open_zones value yourself.
## Maximum Active Zones
### Maximum Active Zones
Zoned block devices may also have a resource constraint on the number of zones that can be active at
any point in time. Unlike ``max_open_zones``, fio currently does not manage this constraint, and
@ -110,7 +110,7 @@ starts running its jobs by using the engine option:
--initial_zone_reset=1
## Zone Append
### Zone Append
When running fio against a zoned block device you need to specify --iodepth=1 to avoid
"Zone Invalid Write: The write to a zone was not at the write pointer." I/O errors.


@ -1,4 +1,6 @@
# Compiling fio
# FIO plugin
## Compiling fio
First, clone the fio source repository from https://github.com/axboe/fio
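For example:

~~~{.sh}
git clone https://github.com/axboe/fio
cd fio
~~~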
@ -8,7 +10,7 @@ Then check out the latest fio version and compile the code:
make
# Compiling SPDK
## Compiling SPDK
First, clone the SPDK source repository from https://github.com/spdk/spdk
@ -30,7 +32,7 @@ with -fPIC by modifying your DPDK configuration file and adding the line:
EXTRA_CFLAGS=-fPIC
# Usage
## Usage
To use the SPDK fio plugin with fio, specify the plugin binary using LD_PRELOAD when running
fio and set ioengine=spdk in the fio configuration file (see example_config.fio in the same
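A sketch of such an invocation (paths are relative to the SPDK repository root and may differ in your build):

~~~{.sh}
LD_PRELOAD=./build/fio/spdk_nvme fio ./examples/nvme/fio_plugin/example_config.fio
~~~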
@ -76,7 +78,7 @@ multiple jobs for FIO test, the performance of FIO is similar to SPDK perf. And we also
think that is caused by the FIO architecture. Mainly FIO can scale with multiple threads (i.e., using CPU cores),
but it is not good to use one thread against many I/O devices.
# End-to-end Data Protection (Optional)
## End-to-end Data Protection (Optional)
To run with PI settings, the following setup steps are required.
First, format the device namespace with the proper PI setting. For example:
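One possible way, using nvme-cli, might look like this (the LBA format index and PI type depend on the device - verify with `nvme id-ns` first):

~~~{.sh}
nvme format /dev/nvme0n1 --lbaf=1 --pi=1
~~~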
@ -102,13 +104,13 @@ Expose two options 'apptag' and 'apptag_mask', users can change them in the conf
application tag and application tag mask in end-to-end data protection. Application tag and application
tag mask are set to 0x1234 and 0xFFFF by default.
# VMD (Optional)
## VMD (Optional)
To enable VMD enumeration add enable_vmd flag in fio configuration file:
enable_vmd=1
# ZNS
## ZNS
To use Zoned Namespaces, build the io-engine against, and run using, a fio version >= 3.23 and add:
@ -119,7 +121,7 @@ To your fio-script, also have a look at script-examples provided with fio:
fio/examples/zbd-seq-read.fio
fio/examples/zbd-rand-write.fio
## Maximum Open Zones
### Maximum Open Zones
Zoned Namespaces has a resource constraint on the number of zones which can be in an open state at
any point in time. You can control how many zones fio will keep in an open state by using the
@ -128,7 +130,7 @@ any point in time. You can control how many zones fio will keep in an open state
If you use a fio version newer than 3.26, fio will automatically detect and set the proper value.
If you use an old version of fio, make sure to provide the proper --max_open_zones value yourself.
## Maximum Active Zones
### Maximum Active Zones
Zoned Namespaces has a resource constraint on the number of zones that can be active at any point in
time. Unlike ``max_open_zones``, fio currently does not manage this constraint, and there is thus
@ -140,7 +142,7 @@ then you can reset all zones before fio start running its jobs by using the engi
--initial_zone_reset=1
## Zone Append
### Zone Append
When running FIO against a Zoned Namespace you need to specify --iodepth=1 to avoid
"Zone Invalid Write: The write to a zone was not at the write pointer." I/O errors.
@ -151,7 +153,7 @@ However, if your controller supports Zone Append, you can use the engine option:
To send zone append commands instead of write commands to the controller.
When using zone append, you will be able to specify a --iodepth greater than 1.
## Shared Memory Increase
### Shared Memory Increase
If your device has a lot of zones, fio can give you errors such as:


@ -4,7 +4,6 @@ exclude_rule 'MD004'
exclude_rule 'MD010'
rule 'MD013', :line_length => 170
exclude_rule 'MD024'
exclude_rule 'MD025'
exclude_rule 'MD026'
exclude_rule 'MD027'
exclude_rule 'MD028'


@ -573,7 +573,7 @@ function check_json_rpc() {
local rc=1
while IFS='"' read -r _ rpc _; do
if ! grep -q "^## $rpc" doc/jsonrpc.md; then
if ! grep -q "^### $rpc" doc/jsonrpc.md; then
echo "Missing JSON-RPC documentation for ${rpc}"
rc=1
continue


@ -3,7 +3,7 @@
This application is intended to fuzz test the iSCSI target by submitting
randomized PDU commands through a simulated iSCSI initiator.
# Input
## Input
1. The iSCSI initiator sends a login request PDU to the iSCSI Target. Once a session is connected,
2. the iSCSI initiator sends a large number of random PDUs continuously to the iSCSI Target.
@ -12,7 +12,7 @@ Especially, iSCSI initiator need to build different bhs according to different b
The iSCSI initiator will then receive responses with all kinds of opcodes from the iSCSI Target.
The application will terminate when run time expires (see the -t flag).
# Output
## Output
By default, the fuzzer will print commands that:
1. Complete successfully back from the target, or


@ -12,7 +12,7 @@ application will terminate under three conditions:
2. One of the target controllers stops completing I/O operations back to the fuzzer i.e. controller timeout.
3. The user specified a json file containing operations to run and the fuzzer has received valid completions for all of them.
# Output
## Output
By default, the fuzzer will print commands that:
@ -30,7 +30,7 @@ At the end of each test run, a summary is printed for each namespace in the foll
NS: 0x200079262300 admin qp, Total commands completed: 462459, total successful commands: 1960, random_seed: 4276918833
~~~
# Debugging
## Debugging
If a controller hangs when processing I/O generated by the fuzzer, the fuzzer will stop
submitting I/O and dump out all outstanding I/O on the qpair that timed out. The I/O are
@ -42,7 +42,7 @@ structures.
Please note that you can also craft your own custom command values by using the output
from the fuzzer as a template.
# JSON Format
## JSON Format
Most of the variables in the spdk_nvme_cmd structure are represented as numbers in JSON.
The only exception to this rule is the dptr union. This is a 16 byte union structure that


@ -8,7 +8,7 @@ queue or the scsi admin queue. Please see the NVMe fuzzer readme for information
on how output is generated, debugging procedures, and the JSON format expected
when supplying preconstructed values to the fuzzer.
# Request Types
## Request Types
Like the NVMe fuzzer, there is an example json file showing the types of requests
that the application accepts. Since the vhost application accepts both vhost block
@ -33,7 +33,7 @@ the request will no longer point to a valid memory location.
It is possible to supply all three types of requests in a single array to the application. They will be parsed and
submitted to the proper block devices.
# RPC
## RPC
The vhost fuzzer differs from the NVMe fuzzer in that it expects devices to be configured via rpc. The fuzzer should
always be started with the --wait-for-rpc argument. Please see below for an example of starting the fuzzer.