Fix Markdown MD012 linter warnings - multiple blank lines

MD012 - Multiple consecutive blank lines

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: I3f48cdc54b1587c9ef2185b88f608ba8420f738b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/654
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Karol Latecki 2020-02-07 12:36:49 +01:00 committed by Tomasz Zawadzki
parent c344dac9cc
commit a41c031609
10 changed files with 2 additions and 16 deletions

View File

@@ -1908,7 +1908,6 @@ See the [Virtio SCSI](http://www.spdk.io/doc/virtio.html) documentation and [Get
The vhost target application now supports live migration between QEMU instances.
## v17.07: Build system improvements, userspace vhost-blk target, and GPT bdev
### Build System
@@ -1985,7 +1984,6 @@ This analysis provides:
See the VTune Amplifier documentation for more information.
## v17.03: Blobstore and userspace vhost-scsi target
### Blobstore and BlobFS
@@ -2225,7 +2223,6 @@ This release adds a user-space driver with support for the Intel I/O Acceleration
- Declarations and `nvme/identify` support for Intel SSD DC P3700 series vendor-specific log pages and features
- Updated to support DPDK 2.2.0
## v1.0.0: NVMe user-space driver
This is the initial open source release of the Storage Performance Development Kit (SPDK).

View File

@@ -47,7 +47,6 @@ Param | Long Param | Type | Default | Description
| | --huge-dir | string | the first discovered | allocate hugepages from a specific mount
-L | --logflag | string | | @ref cmd_arg_debug_log_flags
### Configuration file {#cmd_arg_config_file}
Historically, the SPDK applications were configured using a configuration file.
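As a hedged illustration of the parameters in the table above, an invocation might look like the following sketch; the binary path and values are assumptions, not taken from this diff:

~~~
# Illustrative only: allocate hugepages from a specific mount and enable a log flag.
./build/bin/spdk_tgt --huge-dir /mnt/hugepages -L nvme
~~~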

View File

@@ -87,7 +87,6 @@ The Blobstore defines a hierarchy of storage abstractions as follows.
        35,
        { alignment: 'center', fill: 'white' });
    for (var j = 0; j < 4; j++) {
        let pageWidth = 100;
        let pageHeight = canvasHeight;

View File

@@ -51,14 +51,12 @@ metadata is split in two parts:
sequence number, etc.), located at the beginning blocks of the band,
* the tail part, containing the address map and the valid map, located at the end of the band.
 head metadata                band's data                  tail metadata
+-------------------+-------------------------------+------------------------+
|zone 1 |...|zone n |...|...|zone 1 |...|   | ...   |zone m-1 |zone m        |
|block 1|   |block 1|   |   |block x|   |   |       |block y  |block y       |
+-------------------+-------------+-----------------+------------------------+
Bands are written sequentially (in a way that was described earlier). Before a band can be written
to, all of its zones need to be erased. During that time, the band is considered to be in a `PREP`
state. After that is done, the band transitions to the `OPENING` state, in which head metadata

View File

@@ -2227,7 +2227,6 @@ Example response:
}
~~~
## bdev_pmem_create_pool {#rpc_bdev_pmem_create_pool}
Create a @ref bdev_config_pmem blk pool file. It is equivalent to the following `pmempool create` command:
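The `pmempool create` invocation itself lies outside this hunk. As a hedged sketch, a JSON-RPC request for this method might look like the following; the parameter names and values are assumptions, not shown in this diff:

~~~
{
  "jsonrpc": "2.0",
  "method": "bdev_pmem_create_pool",
  "id": 1,
  "params": {
    "pmem_file": "/tmp/sample_pool",
    "num_blocks": 10000,
    "block_size": 512
  }
}
~~~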
@@ -4642,7 +4641,6 @@ ctrlr | Required | string | Controller name
io_queues | Required | number | Number between 1 and 31 of IO queues for the controller
cpumask | Optional | string | @ref cpu_mask for this controller
### Example
Example request:
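The example body is elided from this hunk. A plausible sketch, assuming the surrounding section describes the vhost NVMe controller creation RPC (the method name and all values are assumptions):

~~~
{
  "jsonrpc": "2.0",
  "method": "vhost_create_nvme_controller",
  "id": 1,
  "params": {
    "ctrlr": "VhostNvme0",
    "io_queues": 4,
    "cpumask": "0x1"
  }
}
~~~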
@@ -4724,7 +4722,6 @@ bdev_name | Required | string | Name of bdev to expose block
readonly | Optional | boolean | If true, this target will be read only (default: false)
cpumask | Optional | string | @ref cpu_mask for this controller
### Example
Example request:
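Again the example body is elided. A hedged sketch using the parameters from the table above, assuming this is the vhost block controller creation RPC (the method name, the `ctrlr` parameter, and all values are assumptions):

~~~
{
  "jsonrpc": "2.0",
  "method": "vhost_create_blk_controller",
  "id": 1,
  "params": {
    "ctrlr": "VhostBlk0",
    "bdev_name": "Malloc0",
    "readonly": false,
    "cpumask": "0x1"
  }
}
~~~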
@@ -4764,7 +4761,6 @@ Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
name | Optional | string | Vhost controller name
### Response {#rpc_vhost_get_controllers_response}
Response is an array of objects describing requested controller(s). Common fields are:
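The field table is elided here. For context, a hedged sketch of a request for this method; the controller name is an assumed example:

~~~
{
  "jsonrpc": "2.0",
  "method": "vhost_get_controllers",
  "id": 1,
  "params": {
    "name": "VhostBlk0"
  }
}
~~~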
@@ -5526,7 +5522,6 @@ strip_size_kb | Required | number | Strip size in KB
raid_level | Required | number | RAID level
base_bdevs | Required | string | Base bdevs name, whitespace separated list in quotes
### Example
Example request:
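A hedged request sketch built from the parameters above; the `name` parameter and all values are assumed examples, not taken from this diff:

~~~
{
  "jsonrpc": "2.0",
  "method": "bdev_raid_create",
  "id": 1,
  "params": {
    "name": "Raid0",
    "raid_level": 0,
    "strip_size_kb": 64,
    "base_bdevs": "Malloc0 Malloc1"
  }
}
~~~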

View File

@@ -280,7 +280,6 @@ Example: identical shm_id and non-overlapping core masks
@sa spdk_nvme_probe, spdk_nvme_ctrlr_process_admin_completions
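The example referenced in the hunk header is elided. A hedged sketch of two processes sharing devices via an identical shm_id with non-overlapping core masks (paths and option values are illustrative):

~~~
# Illustrative: both processes pass the same shm_id (-i) but disjoint core masks (-c).
./examples/nvme/perf/perf -q 1 -o 4096 -w randread -t 60 -c 0x1 -i 1
./examples/nvme/perf/perf -q 8 -o 131072 -w write -t 60 -c 0x2 -i 1
~~~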
# NVMe Hotplug {#nvme_hotplug}
At the NVMe driver level, we provide the following support for Hotplug:

View File

@@ -337,7 +337,6 @@ vhost.c:1006:session_shutdown: *NOTICE*: Exiting
We can see that `sdb` and `sdc` are SPDK vhost-scsi LUNs, and `vda` is an SPDK
vhost-blk disk.
# Advanced Topics {#vhost_advanced_topics}
## Multi-Queue Block Layer (blk-mq) {#vhost_multiqueue}

View File

@@ -55,7 +55,6 @@ And remote devices accessed via NVMe over Fabrics will look like this:
filename=trtype=RDMA adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1
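For comparison, a local PCIe device would be specified with dots in place of colons, along the lines of this sketch (the address is illustrative):

~~~
filename=trtype=PCIe traddr=0000.04.00.0 ns=1
~~~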
**Note**: The specification of the PCIe address should not use the normal ':'
and instead only use '.'. This is a limitation in fio - it splits filenames on
':'. Also, the NVMe namespaces start at 1, not 0, and the namespace must be

View File

@@ -1,10 +1,12 @@
# NVMe-oF target without SPDK event framework
## Overview
This example shows how to use the nvmf library. We want to encourage users to use
RPC commands, so only the RPC style of configuration is supported.
## Usage:
This example's usage is very similar to nvmf_tgt; the difference is that you must use
RPC commands to set up the NVMe-oF target.
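As a hedged illustration, one of the RPC calls used during setup might look like the following; the transport type and values are assumed examples:

~~~
{
  "jsonrpc": "2.0",
  "method": "nvmf_create_transport",
  "id": 1,
  "params": {
    "trtype": "TCP"
  }
}
~~~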

View File

@@ -8,7 +8,6 @@ for spinning up a VM capable of running the SPDK test suite.
There is no need for external hardware to run these tests. The Linux kernel comes with the drivers necessary
to emulate an RDMA-enabled NIC. NVMe controllers can also be virtualized in emulators such as QEMU.
## VM Environment Requirements (Host):
- 8 GiB of RAM (for DPDK)