markdownlint: enable rule MD005

MD005 - Inconsistent indentation for list items at the same level
Fixed all MD005 errors

Signed-off-by: wawryk <maciejx.wawryk@intel.com>
Change-Id: If6a12d6dab938094394a72c804f2a028f1c40f45
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8995
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
wawryk 2021-07-30 11:55:02 +02:00 committed by Tomasz Zawadzki
parent e5b5eabe82
commit 1df1583be5
7 changed files with 140 additions and 139 deletions

View File

@@ -1385,11 +1385,11 @@ of logical blocks for write operation.
New zone-related fields were added to the result of the `get_bdevs` RPC call:
- `zoned`: indicates whether the device is zoned or a regular
block device
- `zone_size`: number of blocks in a single zone
- `max_open_zones`: maximum number of open zones
- `optimal_open_zones`: optimal number of open zones
The `zoned` field is a boolean and is always present, while the remaining fields are only
available for zoned bdevs.
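
As a point of reference (not part of the changelog entry itself), the same values can be read
in-process through the bdev API. A minimal sketch, assuming the accessor names below from
`include/spdk/bdev.h` and `include/spdk/bdev_zone.h` and a `bdev` obtained elsewhere
(e.g. via `spdk_bdev_get_by_name()`):

~~~{.c}
#include <inttypes.h>
#include <stdio.h>

#include "spdk/bdev.h"
#include "spdk/bdev_zone.h"

/* Print the same zone-related values that the get_bdevs RPC reports. */
static void
print_zone_info(struct spdk_bdev *bdev)
{
	if (!spdk_bdev_is_zoned(bdev)) {	/* the `zoned` field */
		printf("%s: regular block device\n", spdk_bdev_get_name(bdev));
		return;
	}
	printf("zone_size: %" PRIu64 " blocks\n", spdk_bdev_get_zone_size(bdev));
	printf("max_open_zones: %" PRIu32 "\n", spdk_bdev_get_max_open_zones(bdev));
	printf("optimal_open_zones: %" PRIu32 "\n", spdk_bdev_get_optimal_open_zones(bdev));
}
~~~
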
@@ -2109,19 +2109,19 @@ for which the memory is contiguous in the physical memory address space.
The following functions were removed:
- spdk_pci_nvme_device_attach()
- spdk_pci_nvme_enumerate()
- spdk_pci_ioat_device_attach()
- spdk_pci_ioat_enumerate()
- spdk_pci_virtio_device_attach()
- spdk_pci_virtio_enumerate()
They were replaced with generic spdk_pci_device_attach() and spdk_pci_enumerate() which
require a new spdk_pci_driver object to be provided. It can be one of the following:
- spdk_pci_nvme_get_driver()
- spdk_pci_ioat_get_driver()
- spdk_pci_virtio_get_driver()
- spdk_pci_nvme_get_driver()
- spdk_pci_ioat_get_driver()
- spdk_pci_virtio_get_driver()
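
As a rough illustration (not from this commit), enumerating NVMe devices with the generic API
might look like the sketch below; see `include/spdk/env.h` for the exact callback semantics:

~~~{.c}
#include <stdio.h>

#include "spdk/env.h"

/* Called once for each PCI device matched by the driver given to spdk_pci_enumerate(). */
static int
enum_cb(void *ctx, struct spdk_pci_device *dev)
{
	printf("found NVMe device %04x:%04x\n",
	       spdk_pci_device_get_vendor_id(dev),
	       spdk_pci_device_get_device_id(dev));
	return 0;	/* check env.h for the exact meaning of the return value */
}

static void
enumerate_nvme_devices(void)
{
	/* Replaces the removed spdk_pci_nvme_enumerate(). */
	spdk_pci_enumerate(spdk_pci_nvme_get_driver(), enum_cb, NULL);
}
~~~
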
spdk_pci_hook_device() and spdk_pci_unhook_device() were added. Those allow adding a virtual
spdk_pci_device into the SPDK PCI subsystem. A virtual device calls provided callbacks for
@@ -2300,12 +2300,12 @@ Dropped support for DPDK 16.07 and earlier, which SPDK won't even compile with r
The following RPC commands deprecated in the previous release are now removed:
- construct_virtio_user_scsi_bdev
- construct_virtio_pci_scsi_bdev
- construct_virtio_user_blk_bdev
- construct_virtio_pci_blk_bdev
- remove_virtio_scsi_bdev
- construct_nvmf_subsystem
### Miscellaneous
@@ -2489,11 +2489,11 @@ respectively.
The following RPC commands have been deprecated:
- construct_virtio_user_scsi_bdev
- construct_virtio_pci_scsi_bdev
- construct_virtio_user_blk_bdev
- construct_virtio_pci_blk_bdev
- remove_virtio_scsi_bdev
The `construct_virtio_*` ones were replaced with a single `construct_virtio_dev`
command that can create any type of Virtio bdev(s). `remove_virtio_scsi_bdev`
@@ -2510,12 +2510,12 @@ Added jsonrpc-client C library intended for issuing RPC commands from applicatio
Added API enabling iteration over JSON object:
- spdk_json_find()
- spdk_json_find_string()
- spdk_json_find_array()
- spdk_json_object_first()
- spdk_json_array_first()
- spdk_json_next()
### Blobstore
@@ -3282,33 +3282,33 @@ in app/iscsi_tgt and a documented configuration file can be found at etc/spdk/sp
This release also significantly improves the existing NVMe over Fabrics target.
- The configuration file format was changed, which will require updates to
any existing nvmf.conf files (see `etc/spdk/nvmf.conf.in`):
- `SubsystemGroup` was renamed to `Subsystem`.
- `AuthFile` was removed (it was unimplemented).
- `nvmf_tgt` was updated to correctly recognize NQN (NVMe Qualified Names)
when naming subsystems. The default node name was changed to reflect this;
it is now "nqn.2016-06.io.spdk".
- `Port` and `Host` sections were merged into the `Subsystem` section
- Global options to control max queue depth, number of queues, max I/O
size, and max in-capsule data size were added.
- The Nvme section was removed. Now a list of devices is specified by
bus/device/function directly in the Subsystem section.
- Subsystems now have a Mode, which can be Direct or Virtual. This is an attempt
to future-proof the interface, so the only mode supported by this release
is "Direct".
- Many bug fixes and cleanups were applied to the `nvmf_tgt` app and library.
- The target now supports discovery.
This release also adds one new feature and provides some better examples and tools
for the NVMe driver.
- The Weighted Round Robin arbitration method is now supported. This allows
the user to specify different priorities on a per-I/O-queue basis. To
enable WRR, set the `arb_mechanism` field during `spdk_nvme_probe()`; a short sketch
follows this list.
- A simplified "Hello World" example was added to show the proper way to use
the NVMe library API; see `examples/nvme/hello_world/hello_world.c`.
- A test for measuring software overhead was added. See `test/lib/nvme/overhead`.
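
A hedged sketch of the WRR item above (the `arb_mechanism` field and `SPDK_NVME_CC_AMS_WRR`
value come from `include/spdk/nvme.h`; the probe callback signature shown matches current
releases and may differ in this older one):

~~~{.c}
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;	/* request WRR for this controller */
	return true;					/* attach to it */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	/* Per-queue priority is then selected through the qprio field of
	 * struct spdk_nvme_io_qpair_opts when allocating I/O queue pairs. */
}

int
main(void)
{
	struct spdk_env_opts env_opts;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "wrr_example";	/* illustrative application name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
~~~
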
## v16.06: NVMf userspace target
@@ -3336,13 +3336,13 @@ user code.
supported on any attached controller.
- Added support for the Write Zeroes command.
- `examples/nvme/perf` can now report I/O command latency from the
controller's viewpoint using the Intel vendor-specific read/write latency
log page.
- Added namespace reservation command support, which can be used to coordinate
sharing of a namespace between multiple hosts.
- Added hardware SGL support, which enables use of scattered buffers that
don't conform to the PRP list alignment and length requirements on supported
NVMe controllers.
- Added end-to-end data protection support, including the ability to write and
read metadata in extended LBA (metadata appended to each block of data in the
buffer) and separate metadata buffer modes.


@@ -45,11 +45,11 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
### Portal groups
- iscsi_create_portal_group -- Add a portal group.
- iscsi_delete_portal_group -- Delete an existing portal group.
- iscsi_target_node_add_pg_ig_maps -- Add initiator group to portal group mappings to an existing iSCSI target node.
- iscsi_target_node_remove_pg_ig_maps -- Delete initiator group to portal group mappings from an existing iSCSI target node.
- iscsi_get_portal_groups -- Show information about all available portal groups.
~~~
/path/to/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
@@ -57,10 +57,10 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
### Initiator groups
- iscsi_create_initiator_group -- Add an initiator group.
- iscsi_delete_initiator_group -- Delete an existing initiator group.
- iscsi_initiator_group_add_initiators -- Add initiators to an existing initiator group.
- iscsi_get_initiator_groups -- Show information about all available initiator groups.
~~~
/path/to/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
@@ -68,10 +68,10 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
### Target nodes
- iscsi_create_target_node -- Add an iSCSI target node.
- iscsi_delete_target_node -- Delete an iSCSI target node.
- iscsi_target_node_add_lun -- Add a LUN to an existing iSCSI target node.
- iscsi_get_target_nodes -- Show information about all available iSCSI target nodes.
~~~
/path/to/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -d
@@ -274,20 +274,22 @@ sde
At the iSCSI level, we provide the following support for Hotplug:
1. bdev/nvme:
At the bdev/nvme level, we start a hotplug monitor which calls
spdk_nvme_probe() periodically to detect hotplug events. We provide private
attach_cb and remove_cb callbacks to spdk_nvme_probe(). In the attach_cb we
create a block device based on the newly attached NVMe device, and in the
remove_cb we unregister the block device, which also notifies the upper level
stack (for the iSCSI target, that is scsi/lun) to handle the hot-remove event.
A minimal probe sketch is shown after this list.
2. scsi/lun:
When the LUN receives the hot-remove notification from the block device
layer, it is marked as removed and all I/Os submitted after that point return
with a check condition status. The LUN then starts a poller that waits for all
commands already submitted to the block device to complete; once they have
completed, the LUN is deleted.
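
A minimal sketch of the probe pattern from item 1 (callback prototypes follow
`include/spdk/nvme.h`; the `hotplug_*` names and the bdev bookkeeping are placeholders,
not the actual bdev/nvme module code):

~~~{.c}
#include "spdk/nvme.h"

/* Decide whether a newly discovered controller should be attached. */
static bool
hotplug_probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
		 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;
}

static void
hotplug_attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
		  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	/* A real monitor would register a block device for this controller here. */
}

static void
hotplug_remove_cb(void *cb_ctx, struct spdk_nvme_ctrlr *ctrlr)
{
	/* A real monitor would unregister the block device here, which is what
	 * propagates the hot-remove event up to scsi/lun. */
}

/* Run periodically, e.g. from a poller registered with spdk_poller_register(). */
static int
hotplug_monitor_poll(void *arg)
{
	spdk_nvme_probe(NULL, NULL, hotplug_probe_cb, hotplug_attach_cb, hotplug_remove_cb);
	return 0;
}
~~~
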
@sa spdk_nvme_probe


@@ -2,14 +2,14 @@
# In this document {#nvme_toc}
- @ref nvme_intro
- @ref nvme_examples
- @ref nvme_interface
- @ref nvme_design
- @ref nvme_fabrics_host
- @ref nvme_multi_process
- @ref nvme_hotplug
- @ref nvme_cuse
# Introduction {#nvme_intro}
@@ -124,8 +124,8 @@ io flag set, and the next one should have the SPDK_NVME_IO_FLAGS_FUSE_SECOND.
In addition, the following rules must be met to execute two commands as an atomic unit:
- The commands shall be inserted next to each other in the same submission queue.
- The LBA range should be the same for the two commands.
E.g. to send a fused compare and write operation, the user must call spdk_nvme_ns_cmd_compare
followed by spdk_nvme_ns_cmd_write, and make sure no other operations are submitted
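
A sketch of that sequence under the rules above, assuming `ns`, `qpair` and both data buffers
are already set up; the prototypes and fuse flags are from `include/spdk/nvme.h`:

~~~{.c}
#include "spdk/nvme.h"

static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* Completion handling omitted in this sketch. */
}

static int
fused_compare_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		    void *expected, void *new_data, uint64_t lba, uint32_t lba_count)
{
	int rc;

	/* First command of the fused pair: compare the current contents. */
	rc = spdk_nvme_ns_cmd_compare(ns, qpair, expected, lba, lba_count,
				      io_done, NULL, SPDK_NVME_IO_FLAGS_FUSE_FIRST);
	if (rc != 0) {
		return rc;
	}
	/* Second command, same LBA range, submitted immediately after on the same qpair. */
	return spdk_nvme_ns_cmd_write(ns, qpair, new_data, lba, lba_count,
				      io_done, NULL, SPDK_NVME_IO_FLAGS_FUSE_SECOND);
}
~~~
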


@@ -77,13 +77,13 @@ directory and include the headers by prefixing `spdk/` like this:
Most of the headers here correspond with a library in the `lib` directory. There
are a few headers that stand alone, however. They are:
- `assert.h`
- `barrier.h`
- `endian.h`
- `fd.h`
- `mmio.h`
- `queue.h` and `queue_extras.h`
- `string.h`
There is also an `spdk_internal` directory that contains header files widely included
by libraries within SPDK, but that are not part of the public API and would not be


@@ -90,13 +90,13 @@ physically-discontiguous regions and Vhost-user specification puts a limit on
their number - currently 8. The driver sends a single message for each region with
the following data:
- file descriptor - for mmap
- user address - for memory translations in Vhost-user messages (e.g.
translating vring addresses)
- guest address - for buffers addresses translations in vrings (for QEMU this
is a physical address inside the guest)
- user offset - positive offset for the mmap
- size
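
For illustration only, the per-region payload can be pictured as the struct below. The field
names are ours, loosely following the vhost-user specification; the file descriptor itself is
passed as SCM_RIGHTS ancillary data rather than in the message body:

~~~{.c}
#include <stdint.h>

struct vhost_user_mem_region {
	uint64_t guest_phys_addr;	/* guest address: translates buffer addresses in vrings */
	uint64_t memory_size;		/* size of the region */
	uint64_t userspace_addr;	/* user address: translates addresses in Vhost-user messages */
	uint64_t mmap_offset;		/* positive offset for the mmap */
};
~~~
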
The Master will send new memory regions after each memory change - usually
hotplug/hotremove. The previous mappings will be removed.
@@ -108,11 +108,11 @@ as they use common SCSI I/O to query the underlying disk(s).
Afterwards, the driver requests the maximum number of supported queues and
starts sending virtqueue data, which consists of:
- unique virtqueue id
- index of the last processed vring descriptor
- vring addresses (from user address space)
- call descriptor (for interrupting the driver after I/O completions)
- kick descriptor (to listen for I/O requests - unused by SPDK)
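
Purely as an illustration (these are not the exact Vhost-user message layouts), the
per-virtqueue state can be summarized as:

~~~{.c}
#include <stdint.h>

struct vhost_user_vring_state {
	uint32_t id;		/* unique virtqueue id */
	uint32_t last_used_idx;	/* index of the last processed vring descriptor */
	uint64_t desc_addr;	/* vring descriptor table (user address space) */
	uint64_t avail_addr;	/* vring available ring (user address space) */
	uint64_t used_addr;	/* vring used ring (user address space) */
	int call_fd;		/* eventfd used to interrupt the driver after I/O completions */
	int kick_fd;		/* eventfd used to listen for I/O requests (unused by SPDK) */
};
~~~
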
If the multiqueue feature has been negotiated, the driver has to send a specific
*ENABLE* message for each extra queue it wants to be polled. Other queues are


@@ -1,7 +1,6 @@
all
exclude_rule 'MD003'
exclude_rule 'MD004'
exclude_rule 'MD006'
exclude_rule 'MD007'
exclude_rule 'MD009'


@@ -49,34 +49,34 @@ To create the VM image manually, use the following steps:
1. Create an image file for the VM. It does not have to be large, about 3.5G should suffice.
2. Create an ssh keypair for host-guest communications (performed on the host):
- Generate an ssh keypair with the name spdk_vhost_id_rsa and save it in `/root/.ssh`.
- Make sure that only root has read access to the private key.
3. Install the OS in the VM image (performed on guest):
- Use the latest Fedora Cloud (currently Fedora 32).
- When partitioning the disk, make one partition that consumes the whole disk mounted at /. Do not encrypt the disk or enable LVM.
- Choose the OpenSSH server packages during install.
4. Post installation configuration (performed on guest):
- Run the following commands to enable all necessary dependencies:
~~~{.sh}
sudo dnf update
sudo dnf upgrade
sudo dnf -y install git sg3_utils bc wget libubsan libasan xfsprogs btrfs-progs ntfsprogs ntfs-3g
git clone https://github.com/spdk/spdk.git
./spdk/scripts/pkgdep.sh -p -f -r -u
~~~
- Enable the root user: "sudo passwd root -> root".
- Enable root login over ssh: vim `/etc/ssh/sshd_config` -> PermitRootLogin=yes.
- Change the grub boot options for the guest as follows:
- Add "console=ttyS0 earlyprintk=ttyS0" to the boot options in `/etc/default/grub` (for serial output redirect).
- Add "scsi_mod.use_blk_mq=1" to boot options in `/etc/default/grub`.
~~~{.sh}
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
~~~
- Reboot the VM.
- Remove any unnecessary packages (this is to make booting the VM faster):
~~~{.sh}
sudo dnf clean all
~~~
5. Install fio:
~~~
./spdk/test/common/config/vm_setup.sh -t 'fio'