Compare commits

...

20 Commits

Tomasz Zawadzki
06bba16f0a version: 20.07.1 pre
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I9b631a5c261d4c7381058844a0deea25d4117a68
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3601
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2020-07-31 07:13:02 +00:00
Tomasz Zawadzki
1a527e501f SPDK 20.07
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ic67fcdbf2c54b17355de3b2a04893f275a0c745a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3600
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2020-07-31 07:07:10 +00:00
Alexey Marchuk
9866a31b49 nvmf/rdma: Submit recv to SRQ when AER is released
Currently we don't resubmit the receive request associated with an AER
request to the SRQ. This reduces the number of SRQ elements and may
lead to a non-responsive NVMF target.

Fixes #1507

Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3558 (master)

(cherry picked from commit a3f89d62eb495cf5449ffcd926974da08ace8181)
Change-Id: Ie96f8c4be0202ae973e561ebe5ea28688a6a3b72
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3599
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-07-31 07:07:10 +00:00
Vitaliy Mysak
fda3aafd14 doc: add bdevperf doc section
Describe bdevperf tool usage and its config file.

Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3602 (master)

(cherry picked from commit 8e5d5b8ff6)
Change-Id: I3648e9fcf6eb9e332dadda0d73f52740a19d5ad8
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3608
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2020-07-31 07:07:10 +00:00
Tomasz Zawadzki
ab691e3d5a bdev/ocf: take additional reference for ocf_cache
Fixes #1498

When shutting down the application, it was possible to
reference a stale ocf_cache pointer. This was the case
when two or more vbdev_ocf devices were based on top
of a single cache bdev.

This issue did not occur outside of the shutdown case,
since RPC only allows deletion of the vbdev_ocf.
Deletion erases the on-disk metadata, so the next run of
the application would not detect such a vbdev_ocf.

Shutdown, meanwhile, works differently: it first stops
the instance of the running "ocf_mngt_cache" and later detaches
the "core" devices (the ones being cached). This prevents
erasing the on-disk metadata and allows a restarted
application to detect the vbdev_ocf.
See patch (1292ef2) for details.

Since references to ocf_cache are copied between vbdev_ocf
instances [see start_cache()], the reference count inside ocf_cache
was limited to the original ocf_mngt_cache_start() and
management queue creation. The first call into ocf_mngt_cache_stop()
released all references to ocf_cache, leaving other
vbdev_ocfs pointing to released memory.

This patch works around the issue by increasing the ref count
on ocf_cache for each vbdev based on top of it.
It allows calling into ocf_mngt_cache_stop(), but does not
release the memory for ocf_cache until the last vbdev does so.

Note:
A proper redesign is in order here, either:
- rearranging structures to be based around a single ocf_cache,
rather than multiple vbdev_ocf instances, or
- better use of the OCF API to reduce bookkeeping logic in the vbdev.

There are plans to implement detach/attach in RPC,
so this should be a focus during that effort.
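The workaround described above amounts to manual reference counting on the shared cache object. A minimal standalone sketch of the idea follows; the names (`cache_get`, `cache_put`, `cache_stop`) are hypothetical stand-ins, not the actual SPDK/OCF API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for ocf_cache: the backing memory is released
 * only when the last vbdev based on it drops its reference. */
struct cache {
    int refcnt;
    bool stopped;   /* the cache instance was stopped */
    bool freed;     /* backing memory released */
};

/* Each vbdev built on top of the cache takes one reference. */
static void cache_get(struct cache *c)
{
    c->refcnt++;
}

/* Dropping the last reference is what actually releases the memory,
 * so a stopped cache stays valid for the remaining vbdevs. */
static void cache_put(struct cache *c)
{
    assert(c->refcnt > 0);
    if (--c->refcnt == 0) {
        c->freed = true;
    }
}

/* The first vbdev to shut down stops the cache instance, but only
 * drops its own reference. */
static void cache_stop(struct cache *c)
{
    c->stopped = true;
    cache_put(c);
}
```

With two vbdevs holding references, stopping the cache through the first one leaves the memory intact for the second, which is exactly the behavior the patch restores.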

Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3574 (master)

(cherry picked from commit 1350922d09)
Change-Id: I560a7fbb1c052bf53970e655bdb60803c561a252
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3593
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-31 07:07:10 +00:00
Tomasz Zawadzki
71469b64ba bdev/ocf: simplify check for running cache instance
There are additional conditions, known before issuing a
call to OCF, which SPDK tracks.

The two main ones are:
- whether vbdev->ocf_cache was created yet [start_cache()]
- whether the cache bdev was opened [attach_base()]

Both happen once, for the first cache bdev. For
consecutive vbdev_ocf on the same cache bdev, the reference
is copied.

This call simplifies checking both conditions.
Calling into OCF with a NULL or stale ocf_cache pointer
will, rightly so, cause issues with ASAN.

Related #1498

Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3573 (master)

(cherry picked from commit 868ba17780)
Change-Id: Ib202c15bda4cbbffa1516c69168e8bfb80370047
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3592
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-31 07:07:10 +00:00
Ben Walker
fbf9098c0c Revert "thread: add spdk_env_get_primary_core"
This reverts commit 6194cb2e15.

It's unclear whether we need to add a new API for the env layer
for upcoming work. Nothing currently uses it. When we have a clear
need, we can add this back in.

Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3561 (master)

(cherry picked from commit e12a4f6ec8)
Change-Id: I174276799d650a1365b37a737271a54a796cd455
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3580
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-31 07:07:10 +00:00
Maciej Wawryk
fbe0a864b2 pkgdep/git: Fix QAT permission error during install
Signed-off-by: Maciej Wawryk <maciejx.wawryk@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3530 (master)

(cherry picked from commit 35599b776f)
Change-Id: I813d48674885c78369ec6a7d17c4e147e10f0c0b
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3579
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-31 07:07:10 +00:00
Ziye Yang
9d504f223a nvme/perf: Do not use IORING_SETUP_IOPOLL
This flag only works for local devices. If a kernel device
is backed by a remote target (e.g., /dev/nvme2n1 comes from an
NVMe-oF target), the IORING_SETUP_IOPOLL flag will not work
for those kernel devices.

Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3531 (master)

(cherry picked from commit d2a194c4f0)
Change-Id: Ide396de9f53b884c4d12af64693293d57fac9523
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3572
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-07-31 07:07:10 +00:00
Jin Yu
f758696fa6 vhost-blk: add resize bdev support
This will allow us to resize the backend bdev of vhost-blk
and notify the guest OS that the capacity of the virtio-blk
disk has been resized.

The SPDK API entry point is `spdk_bdev_notify_blockcnt_change`.
Any bdev used as a vhost-blk backend may need to implement
an RPC that calls this function.

The related DPDK patch has been merged and released in DPDK 20.02.
https://www.mail-archive.com/dev@dpdk.org/msg153365.html

Signed-off-by: Li Feng <fengli@smartx.com>
Signed-off-by: Jin Yu <jin.yu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1468 (master)

(cherry picked from commit b45f293d57)
Change-Id: I961c61de0fc03e210d776035a40f3a4adfa9b4f3
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3571
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-07-31 07:07:10 +00:00
Seth Howell
6d2247c2e2 nvme/transport: add assert for transport
Silences a KW error.

Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3553 (master)

(cherry picked from commit 0b1799cd98)
Change-Id: Ifd8d6088a22de7c230d48751be2b3991d0649778
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3570
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-07-31 07:07:10 +00:00
Seth Howell
97b095904a changelog: update with new 20.07 features.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3540 (master)

(cherry picked from commit 7177e94973)
Change-Id: If0db6036203f95efcf2e4e1776e6f6fc9a68d6bf
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3569
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-31 07:07:10 +00:00
Maciej Wawryk
89f55134f6 scripts/pkgdep: Add liburing to install_all_dependencies function
Signed-off-by: Maciej Wawryk <maciejx.wawryk@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3541 (master)

(cherry picked from commit 817d077a10)
Change-Id: I84cbd8c7a5563b704349c95df0c83513dcf91c45
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3568
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-30 06:58:50 +00:00
Seth Howell
84de31e494 changelog: alphabetize and consolidate 20.07 section
In preparation for the upcoming release.

Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3539 (master)

(cherry picked from commit 91bca45725)
Change-Id: I8f118289612365a8d2be7baecb20450ae31068c8
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3550
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-30 06:58:50 +00:00
paul luse
297290f4b5 accel: add API to cancel a batch sequence
Added to the framework as well as all 3 engines. This is needed
by apps in the event that they have to fail following the creation
of a batch; it allows them to tell the framework to forget about
the batch, as they have no intent to send it.

Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3389 (master)

(cherry picked from commit 8d059e7a18)
Change-Id: Id94754ab1350e5a969a5fd2306bd59c38f0a0120
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3549
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-07-30 06:58:50 +00:00
Jim Harris
b1deebd18c nvme: do not abort reqs in multi-process cleanup path
When a process cleans up IO qpairs from another crashed
process in a multi-process environment, we must not try to
abort reqs for that IO qpair.  Any reqs will contain callbacks
for the crashed process which we must not try to execute in
a different process.

Fixes issue #1509.

Signed-off-by: Jim Harris <james.r.harris@intel.com>

Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3536 (master)

(cherry picked from commit 751e2812bc)
Change-Id: I5e58cce7bdb86e3feb4084733815c086901f867e
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3548
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-30 06:58:50 +00:00
paul luse
d9a6e4e220 module/compress: add new parm to RPC for create compress vol
To specify the desired logical block size, which must be 4K or 512.
If no block size is provided, the default of 0 means the
underlying bdev block size is used. For cases where something other
than 4K or 512 is desired, format the underlying device
accordingly and don't specify a logical block size on creation
of the compress vol.

Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3177 (master)

(cherry picked from commit 62b3b171cb)
Change-Id: I58b71e210cfa77b3237c0c454585c734e2e22aea
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3547
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2020-07-30 06:58:50 +00:00
Anil Veerabhadrappa
d963a3313e nvmf/fc/ut: Add missing function stub
Fix the compilation error by adding a spdk_nvmf_request_complete()
function stub.

Signed-off-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3523 (master)

(cherry picked from commit 03fe6a77c8)
Change-Id: I564cdb16238d5c45c50895b8b2a512096789ba38
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3546
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-07-30 06:58:50 +00:00
Alexey Marchuk
4be65dbf65 nvmf/rdma: Send ibv async event to the correct thread
Since rqpair->qpair.group is set to NULL when we remove the
qpair from the poll group, we fail to send the event to the qpair's
thread. This patch adds a pointer to an io_channel to the
spdk_nvmf_rdma_qpair structure and a function to handle the
poll_group_remove transport operation. In this function we get
the io_channel from nvmf_tgt; this channel will be used to get
a thread for sending the async event notification. This also
guarantees that the thread will be alive while we are destroying
the qpair.

Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3475 (master)

(cherry picked from commit 3d1d4fcf54)
Change-Id: I1222be9f9004304ba0a90edf6d56d316d014efda
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3545
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-30 06:58:50 +00:00
Maciej Wawryk
459d3e8f07 test/bdevperf: test config file
Signed-off-by: Maciej Wawryk <maciejx.wawryk@intel.com>
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3291 (master)

(cherry picked from commit 8991773390)
Change-Id: I6a92345e1c3fae1f7f8b77bc68e0f715e7ac9ed9
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3544
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2020-07-30 06:58:50 +00:00
38 changed files with 598 additions and 120 deletions


@ -1,6 +1,26 @@
# Changelog
## v20.07: (Upcoming Release)
## v20.07.1: (Upcoming Release)
## v20.07:
### accel
A new API, `spdk_accel_get_capabilities`, was added that allows applications to
query the capabilities of the currently enabled accel engine back-end.
A new capability, CRC-32C, was added via `spdk_accel_submit_crc32c`.
The software accel engine implementation has added support for CRC-32C.
A new capability, compare, was added via `spdk_accel_submit_compare`.
The software accel engine implementation has added support for compare.
Several APIs were added to `accel_engine.h` to support batched submission
of operations.
Several APIs were added to `accel_engine.h` to support dualcast operations.
### accel_fw
@ -16,16 +36,54 @@ The accel_fw was updated to support compare, dualcast, crc32c.
The accel_fw introduced batching support for all commands in all plug-ins.
See docs for detailed information.
### bdev
A new API `spdk_bdev_abort` has been added to submit abort requests to abort all I/Os
whose callback context match to the bdev on the given channel.
### build
The fio plugins now compile to `build/fio` and are named `spdk_bdev` and `spdk_nvme`.
Existing fio configuration files will need to be updated.
### dpdk
Updated DPDK submodule to DPDK 20.05.
### env
Several new APIs have been added to provide greater flexibility in registering and
accessing polled mode PCI drivers. See `env.h` for more details.
### idxd
The idxd library and plug-in module for the accel_fw were updated to support
all accel_fw commands as well as batching. Batching is supported both
through the library and the plug-in module.
IDXD engine support for CRC-32C has been added.
### ioat
A new API `spdk_ioat_get_max_descriptors` was added.
### nvme
An `opts_size` element was added to the `spdk_nvme_ctrlr_opts` structure
to solve the ABI compatibility issue between different SPDK versions.
A new API `spdk_nvme_ctrlr_cmd_abort_ext` has been added to abort previously submitted
commands whose callback argument matches.
Convenience functions `spdk_nvme_print_command` and `spdk_nvme_print_completion` were added
to the public API.
A new function, `spdk_nvmf_cuse_update_namespaces`, updates the cuse representation of an NVMe
controller.
A new function `qpair_iterate_requests` has been added to the nvme transport interface. All
implementations of the transport interface will have to implement that function.
### nvmf
The NVMe-oF target no longer supports connecting scheduling configuration and instead
@ -40,51 +98,6 @@ takes a function pointer as an argument. Instead, transports should call
The NVMe-oF target now supports aborting any submitted NVM or Admin command. Previously,
the NVMe-oF target could abort only Asynchronous Event Request commands.
### nvme
Add `opts_size` in the `spdk_nvme_ctrlr_opts` structure in order to solve the compatibility
issue for different ABI versions.
A new API `spdk_nvme_ctrlr_cmd_abort_ext` has been added to abort previously submitted
commands whose callback argument match.
### bdev
A new API `spdk_bdev_abort` has been added to submit abort requests to abort all I/Os
whose callback context match to the bdev on the given channel.
### RPC
Command line parameters `-r` and `--rpc-socket` will no longer accept TCP ports. The RPC server
must now be started on a Unix domain socket. Exposing RPC on the network, as well as providing
proper authentication (if needed) is now a responsibility of the user.
### build
The fio plugins now compile to `build/fio` and are named `spdk_bdev` and `spdk_nvme`.
Existing fio configuration files will need to be updated.
### accel
A new API was added `spdk_accel_get_capabilities` that allows applications to
query the capabilities of the currently enabled accel engine back-end.
A new capability, CRC-32C, was added via `spdk_accel_submit_crc32c`.
The software accel engine implementation has added support for CRC-32C.
A new capability, compare, was added via `spdk_accel_submit_compare`.
The software accel engine implementation has added support for compare.
### dpdk
Updated DPDK submodule to DPDK 20.05.
### idxd
IDXD engine support for CRC-32C has been added.
### rdma
A new `rdma` library has been added. It is an abstraction layer over different RDMA providers.
@ -95,13 +108,20 @@ Using mlx5_dv requires libmlx5 installed on the system.
### rpc
Parameter `-p` or `--max-qpairs-per-ctrlr` of `nvmf_create_transport` RPC command accepted by the
rpc.py script is deprecated, new parameter `-m` or `--max-io-qpairs-per-ctrlr` is added.
Parameter `max_qpairs_per_ctrlr` of `nvmf_create_transport` RPC command accepted by the NVMF target
is deprecated, new parameter `max_io_qpairs_per_ctrlr` is added.
rpc.py script is deprecated, new parameter `-m` or `--max-io-qpairs-per-ctrlr` was added.
Added `sock_impl_get_options` and `sock_impl_set_options` RPC methods.
Command line parameters `-r` and `--rpc-socket` will no longer accept TCP ports. The RPC server
must now be started on a Unix domain socket. Exposing RPC on the network, as well as providing
proper authentication (if needed) is now a responsibility of the user.
The `bdev_set_options` RPC has a new option, `bdev_auto_examine` to control the auto examine function
of bdev modules.
New RPCs `sock_impl_get_options` and `sock_impl_set_options` have been added to expose new socket features.
See `sock` section for more details.
### sock
Added `spdk_sock_impl_get_opts` and `spdk_sock_impl_set_opts` functions to set/get socket layer configuration
@ -119,6 +139,11 @@ New option is used only in posix implementation.
Added `enable_zerocopy_send` socket layer option to allow disabling of zero copy flow on send.
New option is used only in posix implementation.
### util
Some previously exposed CRC32 functions have been removed from the public API -
`spdk_crc32_update`, `spdk_crc32_table_init`, and the `spdk_crc32_table` struct.
### vhost
The function `spdk_vhost_blk_get_dev` has been removed.


@ -173,6 +173,7 @@ if [ $SPDK_RUN_FUNCTIONAL_TEST -eq 1 ]; then
if [ $SPDK_TEST_BLOCKDEV -eq 1 ]; then
run_test "blockdev_general" test/bdev/blockdev.sh
run_test "bdev_raid" test/bdev/bdev_raid.sh
run_test "bdevperf_config" test/bdev/bdevperf/test_config.sh
if [[ $(uname -s) == Linux ]]; then
run_test "spdk_dd" test/dd/dd.sh
fi


@ -803,6 +803,7 @@ INPUT += \
accel_fw.md \
applications.md \
bdev.md \
bdevperf.md \
bdev_module.md \
bdev_pg.md \
blob.md \

doc/bdevperf.md (new file, 86 lines)

@ -0,0 +1,86 @@
# Using bdevperf application {#bdevperf}
## Introduction
bdevperf is an SPDK application that is used for performance testing
of block devices (bdevs) exposed by the SPDK bdev layer. It is an
alternative to the SPDK bdev fio plugin for benchmarking SPDK bdevs.
In some cases, bdevperf can provide much lower overhead than the fio
plugin, resulting in much better performance for tests using a limited
number of CPU cores.
bdevperf exposes a command line interface that allows specifying
SPDK framework options as well as testing options.
Since SPDK 20.07, bdevperf supports a configuration file similar
to FIO's. It allows the user to create jobs parameterized by
filename, cpumask, blocksize, queuesize, etc.
## Config file
Bdevperf's config file is similar to FIO's config file format.
Below is an example config file that uses all available parameters:
~~~{.ini}
[global]
filename=Malloc0:Malloc1
bs=1024
iosize=256
rw=randrw
rwmixread=90
[A]
cpumask=0xff
[B]
cpumask=[0-128]
filename=Malloc1
[global]
filename=Malloc0
rw=write
[C]
bs=4096
iosize=128
offset=1000000
length=1000000
~~~
Jobs `[A]`, `[B]`, and `[C]` inherit default values from the `[global]`
section residing above them. So in the example, job `[A]` inherits the
`filename` value and uses both the `Malloc0` and `Malloc1` bdevs as targets,
job `[B]` overrides its `filename` value and uses `Malloc1`, and
job `[C]` inherits the value `Malloc0` for its `filename`.
Interaction with CLI arguments is not the same as in FIO, however.
If bdevperf receives a CLI argument, it overrides the value
of the corresponding parameter for all `[global]` sections of the config file.
So if the example config is used, specifying the `-q` argument
will make jobs `[A]` and `[B]` use its value.
Below is a full list of supported parameters with descriptions.
Param | Default | Description
--------- | ----------------- | -----------
filename | | Bdevs to use, separated by ":"
cpumask | Maximum available | CPU mask. Format is defined at @ref cpu_mask
bs | | Block size (io size)
iodepth | | Queue depth
rwmixread | `50` | Percentage of a mixed workload that should be reads
offset | `0` | Start I/O at the provided offset on the bdev
length | 100% of bdev size | End I/O at `offset`+`length` on the bdev
rw | | Type of I/O pattern
Available rw types:
- read
- randread
- write
- randwrite
- verify
- reset
- unmap
- write_zeroes
- flush
- rw
- randrw


@ -2,3 +2,4 @@
- @subpage spdkcli
- @subpage nvme-cli
- @subpage bdevperf


@ -51,10 +51,6 @@
#ifdef SPDK_CONFIG_URING
#include <liburing.h>
#ifndef __NR_sys_io_uring_enter
#define __NR_sys_io_uring_enter 426
#endif
#endif
#if HAVE_LIBAIO
@ -310,25 +306,19 @@ uring_check_io(struct ns_worker_ctx *ns_ctx)
struct perf_task *task;
to_submit = ns_ctx->u.uring.io_pending;
to_complete = ns_ctx->u.uring.io_inflight;
if (to_submit > 0) {
/* If there are I/O to submit, use io_uring_submit here.
* It will automatically call spdk_io_uring_enter appropriately. */
ret = io_uring_submit(&ns_ctx->u.uring.ring);
if (ret < 0) {
return;
}
ns_ctx->u.uring.io_pending = 0;
ns_ctx->u.uring.io_inflight += to_submit;
} else if (to_complete > 0) {
/* If there are I/O in flight but none to submit, we need to
* call io_uring_enter ourselves. */
ret = syscall(__NR_sys_io_uring_enter, ns_ctx->u.uring.ring.ring_fd, 0,
0, IORING_ENTER_GETEVENTS, NULL, 0);
}
if (ret < 0) {
return;
}
to_complete = ns_ctx->u.uring.io_inflight;
if (to_complete > 0) {
count = io_uring_peek_batch_cqe(&ns_ctx->u.uring.ring, ns_ctx->u.uring.cqes, to_complete);
ns_ctx->u.uring.io_inflight -= count;
@ -353,7 +343,7 @@ uring_verify_io(struct perf_task *task, struct ns_entry *entry)
static int
uring_init_ns_worker_ctx(struct ns_worker_ctx *ns_ctx)
{
if (io_uring_queue_init(g_queue_depth, &ns_ctx->u.uring.ring, IORING_SETUP_IOPOLL) < 0) {
if (io_uring_queue_init(g_queue_depth, &ns_ctx->u.uring.ring, 0) < 0) {
SPDK_ERRLOG("uring I/O context setup failure\n");
return -1;
}


@ -166,6 +166,17 @@ struct spdk_accel_batch *spdk_accel_batch_create(struct spdk_io_channel *ch);
int spdk_accel_batch_submit(struct spdk_io_channel *ch, struct spdk_accel_batch *batch,
spdk_accel_completion_cb cb_fn, void *cb_arg);
/**
* Synchronous call to cancel a batch sequence. In some cases prepared commands will be
* processed if they cannot be cancelled.
*
* \param ch I/O channel associated with this call.
* \param batch Handle provided when the batch was started with spdk_accel_batch_create().
*
* \return 0 on success, negative errno on failure.
*/
int spdk_accel_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *batch);
/**
* Synchronous call to prepare a copy request into a previously initialized batch
* created with spdk_accel_batch_create(). The callback will be called when the copy


@ -468,13 +468,6 @@ uint32_t spdk_env_get_core_count(void);
*/
uint32_t spdk_env_get_current_core(void);
/**
* Get the index of the primary dedicated CPU core for this application.
*
* \return the index of the primary dedicated CPU core.
*/
uint32_t spdk_env_get_primary_core(void);
/**
* Get the index of the first dedicated CPU core for this application.
*


@ -172,6 +172,16 @@ struct idxd_batch *spdk_idxd_batch_create(struct spdk_idxd_io_channel *chan);
int spdk_idxd_batch_submit(struct spdk_idxd_io_channel *chan, struct idxd_batch *batch,
spdk_idxd_req_cb cb_fn, void *cb_arg);
/**
* Cancel a batch sequence.
*
* \param chan IDXD channel to submit request.
* \param batch Handle provided when the batch was started with spdk_idxd_batch_create().
*
* \return 0 on success, negative errno on failure.
*/
int spdk_idxd_batch_cancel(struct spdk_idxd_io_channel *chan, struct idxd_batch *batch);
/**
* Synchronous call to prepare a copy request into a previously initialized batch
* created with spdk_idxd_batch_create(). The callback will be called when the copy


@ -54,7 +54,7 @@
* Patch level is incremented on maintenance branch releases and reset to 0 for each
* new major.minor release.
*/
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_PATCH 1
/**
* Version string suffix.


@ -67,6 +67,7 @@ struct spdk_accel_engine {
spdk_accel_completion_cb cb_fn, void *cb_arg);
int (*batch_submit)(struct spdk_io_channel *ch, struct spdk_accel_batch *batch,
spdk_accel_completion_cb cb_fn, void *cb_arg);
int (*batch_cancel)(struct spdk_io_channel *ch, struct spdk_accel_batch *batch);
int (*compare)(struct spdk_io_channel *ch, void *src1, void *src2,
uint64_t nbytes, spdk_accel_completion_cb cb_fn, void *cb_arg);
int (*fill)(struct spdk_io_channel *ch, void *dst, uint8_t fill,


@ -231,6 +231,17 @@ spdk_accel_batch_get_max(struct spdk_io_channel *ch)
return accel_ch->engine->batch_get_max();
}
/* Accel framework public API for when an app is unable to complete a batch sequence;
 * it cancels the batch with this API.
 */
int
spdk_accel_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *batch)
{
struct accel_io_channel *accel_ch = spdk_io_channel_get_ctx(ch);
return accel_ch->engine->batch_cancel(accel_ch->ch, batch);
}
/* Accel framework public API for batch prep_copy function. All engines are
* required to implement this API.
*/
@ -791,6 +802,27 @@ sw_accel_batch_prep_crc32c(struct spdk_io_channel *ch, struct spdk_accel_batch *
return 0;
}
static int
sw_accel_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *batch)
{
struct sw_accel_op *op;
struct sw_accel_io_channel *sw_ch = spdk_io_channel_get_ctx(ch);
if ((struct spdk_accel_batch *)&sw_ch->batch != batch) {
SPDK_ERRLOG("Invalid batch\n");
return -EINVAL;
}
/* Cancel the batch items by moving them back to the op_pool. */
while ((op = TAILQ_FIRST(&sw_ch->batch))) {
TAILQ_REMOVE(&sw_ch->batch, op, link);
TAILQ_INSERT_TAIL(&sw_ch->op_pool, op, link);
}
return 0;
}
static int
sw_accel_batch_submit(struct spdk_io_channel *ch, struct spdk_accel_batch *batch,
spdk_accel_completion_cb cb_fn, void *cb_arg)
@ -927,6 +959,7 @@ static struct spdk_accel_engine sw_accel_engine = {
.dualcast = sw_accel_submit_dualcast,
.batch_get_max = sw_accel_batch_get_max,
.batch_create = sw_accel_batch_start,
.batch_cancel = sw_accel_batch_cancel,
.batch_prep_copy = sw_accel_batch_prep_copy,
.batch_prep_dualcast = sw_accel_batch_prep_dualcast,
.batch_prep_compare = sw_accel_batch_prep_compare,


@ -16,6 +16,7 @@
spdk_accel_batch_prep_fill;
spdk_accel_batch_prep_crc32c;
spdk_accel_batch_submit;
spdk_accel_batch_cancel;
spdk_accel_submit_copy;
spdk_accel_submit_dualcast;
spdk_accel_submit_compare;


@ -33,7 +33,6 @@
spdk_mempool_lookup;
spdk_env_get_core_count;
spdk_env_get_current_core;
spdk_env_get_primary_core;
spdk_env_get_first_core;
spdk_env_get_last_core;
spdk_env_get_next_core;

View File

@@ -48,12 +48,6 @@ spdk_env_get_current_core(void)
return rte_lcore_id();
}
uint32_t
spdk_env_get_primary_core(void)
{
return rte_get_master_lcore();
}
uint32_t
spdk_env_get_first_core(void)
{

View File

@@ -926,6 +926,26 @@ _does_batch_exist(struct idxd_batch *batch, struct spdk_idxd_io_channel *chan)
return found;
}
int
spdk_idxd_batch_cancel(struct spdk_idxd_io_channel *chan, struct idxd_batch *batch)
{
if (_does_batch_exist(batch, chan) == false) {
SPDK_ERRLOG("Attempt to cancel a batch that doesn't exist.\n");
return -EINVAL;
}
if (batch->remaining > 0) {
SPDK_ERRLOG("Cannot cancel batch, already submitted to HW.\n");
return -EINVAL;
}
TAILQ_REMOVE(&chan->batches, batch, link);
spdk_bit_array_clear(chan->ring_ctrl.user_ring_slots, batch->batch_num);
TAILQ_INSERT_TAIL(&chan->batch_pool, batch, link);
return 0;
}
int
spdk_idxd_batch_submit(struct spdk_idxd_io_channel *chan, struct idxd_batch *batch,
spdk_idxd_req_cb cb_fn, void *cb_arg)

View File

@@ -13,6 +13,7 @@
spdk_idxd_batch_prep_compare;
spdk_idxd_batch_submit;
spdk_idxd_batch_create;
spdk_idxd_batch_cancel;
spdk_idxd_batch_get_max;
spdk_idxd_set_config;
spdk_idxd_submit_compare;

View File

@@ -563,7 +563,16 @@ spdk_nvme_ctrlr_free_io_qpair(struct spdk_nvme_qpair *qpair)
/* Do not retry. */
nvme_qpair_set_state(qpair, NVME_QPAIR_DESTROYING);
nvme_qpair_abort_reqs(qpair, 1);
/* In the multi-process case, a process may call this function on a foreign
* I/O qpair (i.e. one that this process did not create) when that qpair's process
* exits unexpectedly. In that case, we must not try to abort any reqs associated
* with that qpair, since the callbacks will also be foreign to this process.
*/
if (qpair->active_proc == nvme_ctrlr_get_current_process(ctrlr)) {
nvme_qpair_abort_reqs(qpair, 1);
}
nvme_robust_mutex_lock(&ctrlr->ctrlr_lock);
nvme_ctrlr_proc_remove_io_qpair(qpair);

View File

@@ -278,7 +278,16 @@ nvme_transport_ctrlr_create_io_qpair(struct spdk_nvme_ctrlr *ctrlr, uint16_t qid
int
nvme_transport_ctrlr_delete_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
return qpair->transport->ops.ctrlr_delete_io_qpair(ctrlr, qpair);
const struct spdk_nvme_transport *transport = nvme_get_transport(ctrlr->trid.trstring);
assert(transport != NULL);
/* Do not rely on qpair->transport. For multi-process cases, a foreign process may delete
* the I/O qpair, in which case the transport object would be invalid (each process has its
* own unique transport objects since they contain function pointers). So we look up the
* transport object in the delete_io_qpair case.
*/
return transport->ops.ctrlr_delete_io_qpair(ctrlr, qpair);
}
int

View File

@@ -403,6 +403,11 @@ struct spdk_nvmf_rdma_qpair {
struct spdk_poller *destruct_poller;
/*
* io_channel which is used to destroy qpair when it is removed from poll group
*/
struct spdk_io_channel *destruct_channel;
/* List of ibv async events */
STAILQ_HEAD(, spdk_nvmf_rdma_ibv_event_ctx) ibv_events;
@@ -910,6 +915,11 @@ nvmf_rdma_qpair_destroy(struct spdk_nvmf_rdma_qpair *rqpair)
nvmf_rdma_qpair_clean_ibv_events(rqpair);
if (rqpair->destruct_channel) {
spdk_put_io_channel(rqpair->destruct_channel);
rqpair->destruct_channel = NULL;
}
free(rqpair);
}
@@ -3076,22 +3086,36 @@ nvmf_rdma_send_qpair_async_event(struct spdk_nvmf_rdma_qpair *rqpair,
spdk_nvmf_rdma_qpair_ibv_event fn)
{
struct spdk_nvmf_rdma_ibv_event_ctx *ctx;
struct spdk_thread *thr = NULL;
int rc;
if (!rqpair->qpair.group) {
return EINVAL;
if (rqpair->qpair.group) {
thr = rqpair->qpair.group->thread;
} else if (rqpair->destruct_channel) {
thr = spdk_io_channel_get_thread(rqpair->destruct_channel);
}
if (!thr) {
SPDK_DEBUGLOG(SPDK_LOG_RDMA, "rqpair %p has no thread\n", rqpair);
return -EINVAL;
}
ctx = calloc(1, sizeof(*ctx));
if (!ctx) {
return ENOMEM;
return -ENOMEM;
}
ctx->rqpair = rqpair;
ctx->cb_fn = fn;
STAILQ_INSERT_TAIL(&rqpair->ibv_events, ctx, link);
return spdk_thread_send_msg(rqpair->qpair.group->thread, nvmf_rdma_qpair_process_ibv_event,
ctx);
rc = spdk_thread_send_msg(thr, nvmf_rdma_qpair_process_ibv_event, ctx);
if (rc) {
STAILQ_REMOVE(&rqpair->ibv_events, ctx, spdk_nvmf_rdma_ibv_event_ctx, link);
free(ctx);
}
return rc;
}
static void
@@ -3115,8 +3139,9 @@ nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
SPDK_ERRLOG("Fatal event received for rqpair %p\n", rqpair);
spdk_trace_record(TRACE_RDMA_IBV_ASYNC_EVENT, 0, 0,
(uintptr_t)rqpair->cm_id, event.event_type);
if (nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_qp_fatal)) {
SPDK_ERRLOG("Failed to send QP_FATAL event for rqpair %p\n", rqpair);
rc = nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_qp_fatal);
if (rc) {
SPDK_WARNLOG("Failed to send QP_FATAL event. rqpair %p, err %d\n", rqpair, rc);
nvmf_rdma_handle_qp_fatal(rqpair);
}
break;
@@ -3124,8 +3149,9 @@ nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
/* This event only occurs for shared receive queues. */
rqpair = event.element.qp->qp_context;
SPDK_DEBUGLOG(SPDK_LOG_RDMA, "Last WQE reached event received for rqpair %p\n", rqpair);
if (nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_last_wqe_reached)) {
SPDK_ERRLOG("Failed to send LAST_WQE_REACHED event for rqpair %p\n", rqpair);
rc = nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_last_wqe_reached);
if (rc) {
SPDK_WARNLOG("Failed to send LAST_WQE_REACHED event. rqpair %p, err %d\n", rqpair, rc);
rqpair->last_wqe_reached = true;
}
break;
@@ -3137,8 +3163,9 @@ nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
spdk_trace_record(TRACE_RDMA_IBV_ASYNC_EVENT, 0, 0,
(uintptr_t)rqpair->cm_id, event.event_type);
if (nvmf_rdma_update_ibv_state(rqpair) == IBV_QPS_ERR) {
if (nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_sq_drained)) {
SPDK_ERRLOG("Failed to send SQ_DRAINED event for rqpair %p\n", rqpair);
rc = nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_sq_drained);
if (rc) {
SPDK_WARNLOG("Failed to send SQ_DRAINED event. rqpair %p, err %d\n", rqpair, rc);
nvmf_rdma_handle_sq_drained(rqpair);
}
}
@@ -3510,12 +3537,53 @@ nvmf_rdma_poll_group_add(struct spdk_nvmf_transport_poll_group *group,
return 0;
}
static int
nvmf_rdma_poll_group_remove(struct spdk_nvmf_transport_poll_group *group,
struct spdk_nvmf_qpair *qpair)
{
struct spdk_nvmf_rdma_qpair *rqpair;
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
assert(group->transport->tgt != NULL);
rqpair->destruct_channel = spdk_get_io_channel(group->transport->tgt);
if (!rqpair->destruct_channel) {
SPDK_WARNLOG("failed to get io_channel, qpair %p\n", qpair);
return 0;
}
/* Sanity check that we get io_channel on the correct thread */
if (qpair->group) {
assert(qpair->group->thread == spdk_io_channel_get_thread(rqpair->destruct_channel));
}
return 0;
}
static int
nvmf_rdma_request_free(struct spdk_nvmf_request *req)
{
struct spdk_nvmf_rdma_request *rdma_req = SPDK_CONTAINEROF(req, struct spdk_nvmf_rdma_request, req);
struct spdk_nvmf_rdma_transport *rtransport = SPDK_CONTAINEROF(req->qpair->transport,
struct spdk_nvmf_rdma_transport, transport);
struct spdk_nvmf_rdma_qpair *rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair,
struct spdk_nvmf_rdma_qpair, qpair);
/*
* AER requests are freed when a qpair is destroyed. The recv corresponding to that request
* needs to be returned to the shared receive queue or the poll group will eventually be
* starved of RECV structures.
*/
if (rqpair->srq && rdma_req->recv) {
int rc;
struct ibv_recv_wr *bad_recv_wr;
rc = ibv_post_srq_recv(rqpair->srq, &rdma_req->recv->wr, &bad_recv_wr);
if (rc) {
SPDK_ERRLOG("Unable to re-post rx descriptor\n");
}
}
_nvmf_rdma_request_free(rdma_req, rtransport);
return 0;
@@ -4225,6 +4293,7 @@ const struct spdk_nvmf_transport_ops spdk_nvmf_transport_rdma = {
.get_optimal_poll_group = nvmf_rdma_get_optimal_poll_group,
.poll_group_destroy = nvmf_rdma_poll_group_destroy,
.poll_group_add = nvmf_rdma_poll_group_add,
.poll_group_remove = nvmf_rdma_poll_group_remove,
.poll_group_poll = nvmf_rdma_poll_group_poll,
.req_free = nvmf_rdma_request_free,

View File

@@ -44,6 +44,7 @@
#include "spdk/vhost.h"
#include "vhost_internal.h"
#include <rte_version.h>
/* Minimal set of features supported by every SPDK VHOST-BLK device */
#define SPDK_VHOST_BLK_FEATURES_BASE (SPDK_VHOST_FEATURES | \
@@ -801,6 +802,32 @@ to_blk_dev(struct spdk_vhost_dev *vdev)
return SPDK_CONTAINEROF(vdev, struct spdk_vhost_blk_dev, vdev);
}
static int
vhost_session_bdev_resize_cb(struct spdk_vhost_dev *vdev,
struct spdk_vhost_session *vsession,
void *ctx)
{
#if RTE_VERSION >= RTE_VERSION_NUM(20, 02, 0, 0)
SPDK_NOTICELOG("bdev send slave msg to vid(%d)\n", vsession->vid);
rte_vhost_slave_config_change(vsession->vid, false);
#else
SPDK_NOTICELOG("bdev does not support resize until DPDK submodule version >= 20.02\n");
#endif
return 0;
}
static void
blk_resize_cb(void *resize_ctx)
{
struct spdk_vhost_blk_dev *bvdev = resize_ctx;
spdk_vhost_lock();
vhost_dev_foreach_session(&bvdev->vdev, vhost_session_bdev_resize_cb,
NULL, NULL);
spdk_vhost_unlock();
}
static void
vhost_dev_bdev_remove_cpl_cb(struct spdk_vhost_dev *vdev, void *ctx)
{
@@ -845,6 +872,29 @@ bdev_remove_cb(void *remove_ctx)
spdk_vhost_unlock();
}
static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
void *event_ctx)
{
SPDK_DEBUGLOG(SPDK_LOG_VHOST_BLK, "Bdev event: type %d, name %s\n",
type,
bdev->name);
switch (type) {
case SPDK_BDEV_EVENT_REMOVE:
SPDK_NOTICELOG("bdev name (%s) received event(SPDK_BDEV_EVENT_REMOVE)\n", bdev->name);
bdev_remove_cb(event_ctx);
break;
case SPDK_BDEV_EVENT_RESIZE:
SPDK_NOTICELOG("bdev name (%s) received event(SPDK_BDEV_EVENT_RESIZE)\n", bdev->name);
blk_resize_cb(event_ctx);
break;
default:
SPDK_NOTICELOG("Unsupported bdev event: type %d\n", type);
break;
}
}
static void
free_task_pool(struct spdk_vhost_blk_session *bvsession)
{
@@ -1234,7 +1284,7 @@ spdk_vhost_blk_construct(const char *name, const char *cpumask, const char *dev_
vdev->virtio_features |= (1ULL << VIRTIO_BLK_F_FLUSH);
}
ret = spdk_bdev_open(bdev, true, bdev_remove_cb, bvdev, &bvdev->bdev_desc);
ret = spdk_bdev_open_ext(dev_name, true, bdev_event_cb, bvdev, &bvdev->bdev_desc);
if (ret != 0) {
SPDK_ERRLOG("%s: could not open bdev '%s', error=%d\n",
name, dev_name, ret);

View File

@@ -443,6 +443,15 @@ idxd_batch_start(struct spdk_io_channel *ch)
return (struct spdk_accel_batch *)spdk_idxd_batch_create(chan->chan);
}
static int
idxd_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *_batch)
{
struct idxd_io_channel *chan = spdk_io_channel_get_ctx(ch);
struct idxd_batch *batch = (struct idxd_batch *)_batch;
return spdk_idxd_batch_cancel(chan->chan, batch);
}
static int
idxd_batch_submit(struct spdk_io_channel *ch, struct spdk_accel_batch *_batch,
spdk_accel_completion_cb cb_fn, void *cb_arg)
@@ -561,6 +570,7 @@ static struct spdk_accel_engine idxd_accel_engine = {
.copy = idxd_submit_copy,
.batch_get_max = idxd_batch_get_max,
.batch_create = idxd_batch_start,
.batch_cancel = idxd_batch_cancel,
.batch_prep_copy = idxd_batch_prep_copy,
.batch_prep_fill = idxd_batch_prep_fill,
.batch_prep_dualcast = idxd_batch_prep_dualcast,

View File

@@ -390,6 +390,30 @@ ioat_batch_prep_crc32c(struct spdk_io_channel *ch,
return 0;
}
static int
ioat_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *batch)
{
struct ioat_accel_op *op;
struct ioat_io_channel *ioat_ch = spdk_io_channel_get_ctx(ch);
if ((struct spdk_accel_batch *)&ioat_ch->hw_batch != batch) {
SPDK_ERRLOG("Invalid batch\n");
return -EINVAL;
}
/* Flush the batched HW items, there's no way to cancel these without resetting. */
spdk_ioat_flush(ioat_ch->ioat_ch);
ioat_ch->hw_batch = false;
/* Return batched software items to the pool. */
while ((op = TAILQ_FIRST(&ioat_ch->sw_batch))) {
TAILQ_REMOVE(&ioat_ch->sw_batch, op, link);
TAILQ_INSERT_TAIL(&ioat_ch->op_pool, op, link);
}
return 0;
}
static int
ioat_batch_submit(struct spdk_io_channel *ch, struct spdk_accel_batch *batch,
spdk_accel_completion_cb cb_fn, void *cb_arg)
@@ -449,6 +473,7 @@ static struct spdk_accel_engine ioat_accel_engine = {
.fill = ioat_submit_fill,
.batch_get_max = ioat_batch_get_max,
.batch_create = ioat_batch_create,
.batch_cancel = ioat_batch_cancel,
.batch_prep_copy = ioat_batch_prep_copy,
.batch_prep_dualcast = ioat_batch_prep_dualcast,
.batch_prep_compare = ioat_batch_prep_compare,

View File

@@ -188,7 +188,7 @@ static struct rte_comp_xform g_decomp_xform = {
static void vbdev_compress_examine(struct spdk_bdev *bdev);
static void vbdev_compress_claim(struct vbdev_compress *comp_bdev);
static void vbdev_compress_queue_io(struct spdk_bdev_io *bdev_io);
struct vbdev_compress *_prepare_for_load_init(struct spdk_bdev *bdev);
struct vbdev_compress *_prepare_for_load_init(struct spdk_bdev *bdev, uint32_t lb_size);
static void vbdev_compress_submit_request(struct spdk_io_channel *ch, struct spdk_bdev_io *bdev_io);
static void comp_bdev_ch_destroy_cb(void *io_device, void *ctx_buf);
static void vbdev_compress_delete_done(void *cb_arg, int bdeverrno);
@@ -1284,7 +1284,7 @@ vbdev_compress_base_bdev_hotremove_cb(void *ctx)
* information for reducelib to init or load.
*/
struct vbdev_compress *
_prepare_for_load_init(struct spdk_bdev *bdev)
_prepare_for_load_init(struct spdk_bdev *bdev, uint32_t lb_size)
{
struct vbdev_compress *meta_ctx;
@@ -1306,7 +1306,12 @@ _prepare_for_load_init(struct spdk_bdev *bdev)
meta_ctx->backing_dev.blockcnt = bdev->blockcnt;
meta_ctx->params.chunk_size = CHUNK_SIZE;
meta_ctx->params.logical_block_size = bdev->blocklen;
if (lb_size == 0) {
meta_ctx->params.logical_block_size = bdev->blocklen;
} else {
meta_ctx->params.logical_block_size = lb_size;
}
meta_ctx->params.backing_io_unit_size = BACKING_IO_SZ;
return meta_ctx;
}
@@ -1334,12 +1339,12 @@ _set_pmd(struct vbdev_compress *comp_dev)
/* Call reducelib to initialize a new volume */
static int
vbdev_init_reduce(struct spdk_bdev *bdev, const char *pm_path)
vbdev_init_reduce(struct spdk_bdev *bdev, const char *pm_path, uint32_t lb_size)
{
struct vbdev_compress *meta_ctx;
int rc;
meta_ctx = _prepare_for_load_init(bdev);
meta_ctx = _prepare_for_load_init(bdev, lb_size);
if (meta_ctx == NULL) {
return -EINVAL;
}
@@ -1471,7 +1476,7 @@ comp_bdev_ch_destroy_cb(void *io_device, void *ctx_buf)
/* RPC entry point for compression vbdev creation. */
int
create_compress_bdev(const char *bdev_name, const char *pm_path)
create_compress_bdev(const char *bdev_name, const char *pm_path, uint32_t lb_size)
{
struct spdk_bdev *bdev;
@@ -1480,7 +1485,12 @@ create_compress_bdev(const char *bdev_name, const char *pm_path)
return -ENODEV;
}
return vbdev_init_reduce(bdev, pm_path);;
if ((lb_size != 0) && (lb_size != LB_SIZE_4K) && (lb_size != LB_SIZE_512B)) {
SPDK_ERRLOG("Logical block size must be 512 or 4096\n");
return -EINVAL;
}
return vbdev_init_reduce(bdev, pm_path, lb_size);
}
/* On init, just init the compress drivers. All metadata is stored on disk. */
@@ -1822,7 +1832,7 @@ vbdev_compress_examine(struct spdk_bdev *bdev)
return;
}
meta_ctx = _prepare_for_load_init(bdev);
meta_ctx = _prepare_for_load_init(bdev, 0);
if (meta_ctx == NULL) {
spdk_bdev_module_examine_done(&compress_if);
return;

View File

@@ -38,6 +38,9 @@
#include "spdk/bdev.h"
#define LB_SIZE_4K 0x1000UL
#define LB_SIZE_512B 0x200UL
/**
* Get the first compression bdev.
*
@@ -85,9 +88,10 @@ typedef void (*spdk_delete_compress_complete)(void *cb_arg, int bdeverrno);
*
* \param bdev_name Bdev on which compression bdev will be created.
* \param pm_path Path to persistent memory.
* \param lb_size Logical block size for the compressed volume in bytes. Must be 4K or 512.
* \return 0 on success, other on failure.
*/
int create_compress_bdev(const char *bdev_name, const char *pm_path);
int create_compress_bdev(const char *bdev_name, const char *pm_path, uint32_t lb_size);
/**
* Delete compress bdev.

View File

@@ -149,6 +149,7 @@ SPDK_RPC_REGISTER_ALIAS_DEPRECATED(compress_set_pmd, set_compress_pmd)
struct rpc_construct_compress {
char *base_bdev_name;
char *pm_path;
uint32_t lb_size;
};
/* Free the allocated memory resource after the RPC handling. */
@@ -163,6 +164,7 @@ free_rpc_construct_compress(struct rpc_construct_compress *r)
static const struct spdk_json_object_decoder rpc_construct_compress_decoders[] = {
{"base_bdev_name", offsetof(struct rpc_construct_compress, base_bdev_name), spdk_json_decode_string},
{"pm_path", offsetof(struct rpc_construct_compress, pm_path), spdk_json_decode_string},
{"lb_size", offsetof(struct rpc_construct_compress, lb_size), spdk_json_decode_uint32},
};
/* Decode the parameters for this RPC method and properly construct the compress
@@ -181,12 +183,12 @@ rpc_bdev_compress_create(struct spdk_jsonrpc_request *request,
SPDK_COUNTOF(rpc_construct_compress_decoders),
&req)) {
SPDK_DEBUGLOG(SPDK_LOG_VBDEV_COMPRESS, "spdk_json_decode_object failed\n");
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_PARSE_ERROR,
"spdk_json_decode_object failed");
goto cleanup;
}
rc = create_compress_bdev(req.base_bdev_name, req.pm_path);
rc = create_compress_bdev(req.base_bdev_name, req.pm_path, req.lb_size);
if (rc != 0) {
spdk_jsonrpc_send_error_response(request, rc, spdk_strerror(-rc));
goto cleanup;

View File

@@ -135,6 +135,15 @@ get_other_cache_base(struct vbdev_ocf_base *base)
return NULL;
}
static bool
is_ocf_cache_running(struct vbdev_ocf *vbdev)
{
if (vbdev->cache.attached && vbdev->ocf_cache) {
return ocf_cache_is_running(vbdev->ocf_cache);
}
return false;
}
/* Get existing OCF cache instance
* that is started by other vbdev */
static ocf_cache_t
@@ -149,7 +158,7 @@ get_other_cache_instance(struct vbdev_ocf *vbdev)
if (strcmp(cmp->cache.name, vbdev->cache.name)) {
continue;
}
if (cmp->ocf_cache) {
if (is_ocf_cache_running(cmp)) {
return cmp->ocf_cache;
}
}
@@ -190,6 +199,7 @@ static void
unregister_finish(struct vbdev_ocf *vbdev)
{
spdk_bdev_destruct_done(&vbdev->exp_bdev, vbdev->state.stop_status);
ocf_mngt_cache_put(vbdev->ocf_cache);
vbdev_ocf_cache_ctx_put(vbdev->cache_ctx);
vbdev_ocf_mngt_continue(vbdev, 0);
}
@@ -230,7 +240,7 @@ remove_core_cache_lock_cmpl(ocf_cache_t cache, void *priv, int error)
static void
detach_core(struct vbdev_ocf *vbdev)
{
if (vbdev->ocf_cache && ocf_cache_is_running(vbdev->ocf_cache)) {
if (is_ocf_cache_running(vbdev)) {
ocf_mngt_cache_lock(vbdev->ocf_cache, remove_core_cache_lock_cmpl, vbdev);
} else {
vbdev_ocf_mngt_continue(vbdev, 0);
@@ -291,7 +301,7 @@ stop_vbdev_cache_lock_cmpl(ocf_cache_t cache, void *priv, int error)
static void
stop_vbdev(struct vbdev_ocf *vbdev)
{
if (!ocf_cache_is_running(vbdev->ocf_cache)) {
if (!is_ocf_cache_running(vbdev)) {
vbdev_ocf_mngt_continue(vbdev, 0);
return;
}
@@ -334,7 +344,7 @@ flush_vbdev_cache_lock_cmpl(ocf_cache_t cache, void *priv, int error)
static void
flush_vbdev(struct vbdev_ocf *vbdev)
{
if (!ocf_cache_is_running(vbdev->ocf_cache)) {
if (!is_ocf_cache_running(vbdev)) {
vbdev_ocf_mngt_continue(vbdev, -EINVAL);
return;
}
@@ -1040,7 +1050,7 @@ start_cache(struct vbdev_ocf *vbdev)
ocf_cache_t existing;
int rc;
if (vbdev->ocf_cache) {
if (is_ocf_cache_running(vbdev)) {
vbdev_ocf_mngt_stop(vbdev, NULL, -EALREADY);
return;
}
@@ -1050,6 +1060,7 @@ start_cache(struct vbdev_ocf *vbdev)
SPDK_NOTICELOG("OCF bdev %s connects to existing cache device %s\n",
vbdev->name, vbdev->cache.name);
vbdev->ocf_cache = existing;
ocf_mngt_cache_get(vbdev->ocf_cache);
vbdev->cache_ctx = ocf_cache_get_priv(existing);
vbdev_ocf_cache_ctx_get(vbdev->cache_ctx);
vbdev_ocf_mngt_continue(vbdev, 0);
@@ -1070,6 +1081,7 @@ start_cache(struct vbdev_ocf *vbdev)
vbdev_ocf_mngt_exit(vbdev, unregister_path_dirty, rc);
return;
}
ocf_mngt_cache_get(vbdev->ocf_cache);
ocf_cache_set_priv(vbdev->ocf_cache, vbdev->cache_ctx);

View File

@@ -2,12 +2,12 @@
%bcond_with doc
Name: spdk
Version: master
Version: 20.07.x
Release: 0%{?dist}
Epoch: 0
URL: http://spdk.io
Source: https://github.com/spdk/spdk/archive/master.tar.gz
Source: https://github.com/spdk/spdk/archive/v20.07.x.tar.gz
Summary: Set of libraries and utilities for high performance user-mode storage
%define package_version %{epoch}:%{version}-%{release}

View File

@@ -27,6 +27,7 @@ function install_all_dependencies() {
INSTALL_FUSE=true
INSTALL_RDMA=true
INSTALL_DOCS=true
INSTALL_LIBURING=true
}
function install_liburing() {

View File

@@ -177,12 +177,14 @@ if __name__ == "__main__":
def bdev_compress_create(args):
print_json(rpc.bdev.bdev_compress_create(args.client,
base_bdev_name=args.base_bdev_name,
pm_path=args.pm_path))
pm_path=args.pm_path,
lb_size=args.lb_size))
p = subparsers.add_parser('bdev_compress_create', aliases=['construct_compress_bdev'],
help='Add a compress vbdev')
p.add_argument('-b', '--base_bdev_name', help="Name of the base bdev")
p.add_argument('-p', '--pm_path', help="Path to persistent memory")
p.add_argument('-l', '--lb_size', help="Compressed vol logical block size (optional, if used must be 512 or 4096)", type=int, default=0)
p.set_defaults(func=bdev_compress_create)
def bdev_compress_delete(args):

View File

@@ -23,17 +23,18 @@ def bdev_set_options(client, bdev_io_pool_size=None, bdev_io_cache_size=None, bd
@deprecated_alias('construct_compress_bdev')
def bdev_compress_create(client, base_bdev_name, pm_path):
def bdev_compress_create(client, base_bdev_name, pm_path, lb_size):
"""Construct a compress virtual block device.
Args:
base_bdev_name: name of the underlying base bdev
pm_path: path to persistent memory
lb_size: logical block size for the compressed vol in bytes. Must be 4K or 512.
Returns:
Name of created virtual block device.
"""
params = {'base_bdev_name': base_bdev_name, 'pm_path': pm_path}
params = {'base_bdev_name': base_bdev_name, 'pm_path': pm_path, 'lb_size': lb_size}
return client.call('bdev_compress_create', params)

View File

@@ -0,0 +1,33 @@
bdevperf=$rootdir/test/bdev/bdevperf/bdevperf
function create_job() {
local job_section=$1
local rw=$2
local filename=$3
if [[ $job_section == "global" ]]; then
cat <<- EOF >> "$testdir"/test.conf
[global]
filename=${filename}
EOF
fi
job="[${job_section}]"
echo $global
cat <<- EOF >> "$testdir"/test.conf
${job}
filename=${filename}
bs=1024
rwmixread=70
rw=${rw}
iodepth=256
cpumask=0xff
EOF
}
function get_num_jobs() {
echo "$1" | grep -oE "Using job config with [0-9]+ jobs" | grep -oE "[0-9]+"
}
function cleanup() {
rm -f $testdir/test.conf
}

View File

@@ -0,0 +1,25 @@
{
"subsystems": [
{
"subsystem": "bdev",
"config": [
{
"method": "bdev_malloc_create",
"params": {
"name": "Malloc0",
"num_blocks": 102400,
"block_size": 512
}
},
{
"method": "bdev_malloc_create",
"params": {
"name": "Malloc1",
"num_blocks": 102400,
"block_size": 512
}
}
]
}
]
}

View File

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../../..)
source $rootdir/test/common/autotest_common.sh
source $testdir/common.sh
jsonconf=$testdir/conf.json
testconf=$testdir/test.conf
trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
# Test inheriting filename and rw_mode parameters from global section.
create_job "global" "read" "Malloc0"
create_job "job0"
create_job "job1"
create_job "job2"
create_job "job3"
bdevperf_output=$($bdevperf -t 2 --json $jsonconf -j $testconf 2>&1)
[[ $(get_num_jobs "$bdevperf_output") == "4" ]]
bdevperf_output=$($bdevperf -C -t 2 --json $jsonconf -j $testconf)
cleanup
# Test missing global section.
create_job "job0" "write" "Malloc0"
create_job "job1" "write" "Malloc0"
create_job "job2" "write" "Malloc0"
bdevperf_output=$($bdevperf -t 2 --json $jsonconf -j $testconf 2>&1)
[[ $(get_num_jobs "$bdevperf_output") == "3" ]]
cleanup
# Test inheriting multiple filenames and rw_mode parameters from global section.
create_job "global" "rw" "Malloc0:Malloc1"
create_job "job0"
create_job "job1"
create_job "job2"
create_job "job3"
bdevperf_output=$($bdevperf -t 2 --json $jsonconf -j $testconf 2>&1)
[[ $(get_num_jobs "$bdevperf_output") == "4" ]]
cleanup
trap - SIGINT SIGTERM EXIT

View File

@@ -84,7 +84,7 @@ function install_qat() {
sudo rm -rf "$GIT_REPOS/QAT"
fi
sudo mkdir "$GIT_REPOS/QAT"
mkdir "$GIT_REPOS/QAT"
tar -C "$GIT_REPOS/QAT" -xzof - < <(wget -O- "$DRIVER_LOCATION_QAT")

View File

@@ -33,7 +33,11 @@ function create_vols() {
waitforbdev lvs0/lv0
$rpc_py compress_set_pmd -p "$pmd"
$rpc_py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
if [ -z "$1" ]; then
$rpc_py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
else
$rpc_py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l $1
fi
waitforbdev COMP_lvs0/lv0
}
@@ -54,7 +58,7 @@ function run_bdevperf() {
bdevperf_pid=$!
trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
waitforlisten $bdevperf_pid
create_vols
create_vols $4
$rootdir/test/bdev/bdevperf/bdevperf.py perform_tests
destroy_vols
trap - SIGINT SIGTERM EXIT
@@ -78,7 +82,10 @@ esac
mkdir -p /tmp/pmem
# per patch bdevperf uses slightly different params than nightly
# logical block size same as underlying device, then 512 then 4096
run_bdevperf 32 4096 3
run_bdevperf 32 4096 3 512
run_bdevperf 32 4096 3 4096
if [ $RUN_NIGHTLY -eq 1 ]; then
run_bdevio

View File

@@ -10,6 +10,5 @@ run_test "ocf_bdevperf_iotypes" "$testdir/integrity/bdevperf-iotypes.sh"
run_test "ocf_stats" "$testdir/integrity/stats.sh"
run_test "ocf_create_destruct" "$testdir/management/create-destruct.sh"
run_test "ocf_multicore" "$testdir/management/multicore.sh"
# Disabled due to issue #1498
# run_test "ocf_persistent_metadata" "$testdir/management/persistent-metadata.sh"
run_test "ocf_persistent_metadata" "$testdir/management/persistent-metadata.sh"
run_test "ocf_remove" "$testdir/management/remove.sh"

View File

@@ -127,6 +127,8 @@ DEFINE_STUB_V(spdk_nvme_trid_populate_transport, (struct spdk_nvme_transport_id
enum spdk_nvme_transport_type trtype));
DEFINE_STUB_V(spdk_nvmf_ctrlr_data_init, (struct spdk_nvmf_transport_opts *opts,
struct spdk_nvmf_ctrlr_data *cdata));
DEFINE_STUB(spdk_nvmf_request_complete, int, (struct spdk_nvmf_request *req),
-ENOSPC);
const char *
spdk_nvme_transport_id_trtype_str(enum spdk_nvme_transport_type trtype)