Compare commits

...

38 Commits

Author SHA1 Message Date
Tomasz Zawadzki
df70a66ff4 SPDK 18.10.1
Change-Id: I86b9307d11cfe80f240bc302f1af74594adc6695
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/437196
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Lance Hartmann <lance.hartmann@oracle.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-20 09:16:29 +00:00
Ziye Yang
86dbefdda6 nvmf: check the qpair->ctrlr
The ctrlr may be NULL, so we need to add a check here
to prevent a segmentation fault.
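A minimal sketch of the kind of guard this adds (the real call sites are
in the lib/nvmf diff further down this page; the surrounding loop is
illustrative):

	struct spdk_nvmf_qpair *qpair;

	TAILQ_FOREACH(qpair, &group->qpairs, link) {
		/* qpair->ctrlr can legitimately be NULL (e.g. before CONNECT
		 * has completed), so test it before dereferencing. */
		if ((qpair->ctrlr != NULL) && (qpair->ctrlr->subsys == subsystem)) {
			break;
		}
	}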

Change-Id: I6c5361cc829af065082a95df0b8cc2f8d49a6002
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/436950 (master)
Reviewed-on: https://review.gerrithub.io/437916
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
2018-12-20 09:11:26 +00:00
Evgeniy Kochetov
2f1ac3644a nvmf/rdma: Fix refcnt check on RDMA QP destroy
The check of the QP reference counter in the RDMA QP destroy function
was wrong, and QP resources were never released.
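A condensed sketch of the corrected logic (the full hunk appears in the
rdma.c diff below; teardown details are elided here):

	static void
	spdk_nvmf_rdma_qpair_destroy(struct spdk_nvmf_rdma_qpair *rqpair)
	{
		if (rqpair->refcnt > 0) {
			/* Other threads still hold references; destroy is
			 * retried when the last reference is dropped. */
			return;
		}

		/* refcnt reached zero: actually destroy the QP and free its
		 * resources instead of returning early forever. */
	}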

Change-Id: I6ab0ce39452e8263f89589d138c90f749516ebb1
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-on: https://review.gerrithub.io/436974 (master)
Reviewed-on: https://review.gerrithub.io/437348
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-19 21:33:06 +00:00
Tomasz Zawadzki
51fc40e2d0 changelog: sum up changes going into 18.10.1
Change-Id: Iad911ede120654edbeff4ceafb54197534604b5f
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/437195
Reviewed-by: Lance Hartmann <lance.hartmann@oracle.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2018-12-19 16:42:54 +00:00
Lance Hartmann
e9e2d6fd23 build: Install targets for nvme perf and identify
Introduces a new macro, INSTALL_EXAMPLE, and two example apps,
examples/nvme/perf and examples/nvme/identify, that make
use of it to install them while at the same time
renaming them in the target directory based on the
source directory path relative to examples.

Change-Id: I2d850458bb2589f80e0af6fb7a9d00aa3bbc6907
Signed-off-by: Lance Hartmann <lance.hartmann@oracle.com>
Reviewed-on: https://review.gerrithub.io/429963 (master)
Reviewed-on: https://review.gerrithub.io/435696
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2018-12-19 16:42:54 +00:00
Pawel Wodkowski
b8718ca652 pkg: add spec file for RPM package build
This spec file builds the following packages:

x86_64/spdk
  Base SPDK package - all apps installed by `make install`
  - spdk_* applications
  - shared libs in libs64 dir

x86_64/spdk-devel
  SPDK development package:
  - header files
  - static/shared libs

x86_64/spdk-debuginfo
x86_64/spdk-debugsource
  These two are autogenerated by rpmbuild

no_arch/spdk-tools:
  SPDK tools package:
    scripts/rpc.py -> /usr/sbin/spdk-rpc
    scripts/spdkcli.py -> /usr/sbin/spdk-cli

no_arch/spdk-doc: optional, generated when adding '--with doc'
  SPDK html doc package:
    doc/output/html/ -> /usr/share/doc/spdk/html

Change-Id: Ice107a7d014a6b63b2946a557e3ca7be5e486c03
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/424973 (master)
Reviewed-on: https://review.gerrithub.io/435695
Reviewed-by: Lance Hartmann <lance.hartmann@oracle.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2018-12-19 16:42:54 +00:00
Evgeniy Kochetov
59c5be6231 nvmf: Improve error handling in spdk_nvmf_transport_poll_group_create
At least in the case of the RDMA transport, poll_group_create
(spdk_nvmf_rdma_poll_group_create) can return an error (NULL).
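A hedged sketch of the caller-side check this implies (function names
follow the SPDK nvmf code; the error path shown is illustrative):

	struct spdk_nvmf_transport_poll_group *tgroup;

	tgroup = spdk_nvmf_transport_poll_group_create(transport);
	if (tgroup == NULL) {
		SPDK_ERRLOG("Unable to create poll group for transport\n");
		return NULL;	/* propagate the failure instead of using a NULL group */
	}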

Change-Id: If1576b3515e7f9ede76af08bfa6b1c8399dcda09
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Reviewed-on: https://review.gerrithub.io/436887 (master)
Reviewed-on: https://review.gerrithub.io/437349
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-19 16:39:27 +00:00
Maciej Szwed
8ca38e04b1 blob: Remove snapshot from the list only on blob deletion
When the last clone of a snapshot is deleted,
we remove that snapshot from the snapshots list.
We should not do that, as it still works as a
snapshot and is read-only; it just no longer lists
as a snapshot in get_bdevs. Instead, remove the
snapshot entry from the list when the blob that
represents that snapshot is removed.

Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: I8d76229567fb0d9f15d29bad3fd94b9813249604
Reviewed-on: https://review.gerrithub.io/436971 (master)
Reviewed-on: https://review.gerrithub.io/437194
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2018-12-14 17:59:20 +00:00
Seth Howell
34d47ecc35 memory: return first translation from mem_map_translate
This should have always been the case with spdk_mem_map_translate. For
some memory maps (like RDMA) this doesn't matter, but for others like
our virtual to physical map, this is critical for retrieving valid
translations.

This behavior change will only affect maps that have a registered
contiguous memory callback.
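A small usage sketch of the corrected semantics (buf, buf_len and map
are assumed to exist; the signature matches spdk_mem_map_translate as
shown in the memory.c diff below):

	uint64_t size = buf_len;
	uint64_t translation;

	/* With this fix, the return value is the translation of the FIRST
	 * 2MB page of the range, and *size is trimmed to the contiguous
	 * length starting at vaddr - not the translation of the last page
	 * the walk happened to visit. */
	translation = spdk_mem_map_translate(map, (uint64_t)buf, &size);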

Change-Id: I67517667f01d974702d7daa7c81238281aae0cf6
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/436562 (master)
Reviewed-on: https://review.gerrithub.io/437202
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-14 17:59:03 +00:00
Ben Walker
ffa07029b1 fio_plugin: Perform initialization and teardown on consistent thread
Change-Id: I2798f2b0c5a7fe210f03b4e1477fd04f480febb3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/431843 (master)
Reviewed-on: https://review.gerrithub.io/437200
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-14 14:53:56 +00:00
Ben Walker
b180363dc2 fio_plugin: Move spdk_fio_module_finish_done up
This is going to be used in another function later in this
patch series, so move it up to avoid forward declaring.

Change-Id: I05227ed38b0d98b95f6a7126e9db1e3c31dc21c5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/432087 (master)
Reviewed-on: https://review.gerrithub.io/437199
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-14 14:53:56 +00:00
Ben Walker
4860351f5c fio_plugin: Move spdk_fio_cleanup_thread higher up
We're going to use this in another function later in this
patch series, so move it up now so we don't have to
forward declare it later.

Change-Id: I95244f062c6e75904ec2458cbad7a18a0923a5b0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/432086 (master)
Reviewed-on: https://review.gerrithub.io/437198
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-14 14:53:56 +00:00
Liang Yan
8c3856bb6b test/nvme: replace PAGE_SIZE with OCSSD_SECTOR_SIZE
Same as 29be88f (test/blob: always use SPDK_BS_PAGE_SIZE instead of
PAGE_SIZE): ARM systems could not find the definition of PAGE_SIZE when
building nvme_ns_ocssd_cmd_ut.c. We use OCSSD_SECTOR_SIZE instead of
PAGE_SIZE here, and also use OCSSD_SECTOR_SIZE instead of "0x1000" for
the const uint32_t sector_size.

Change-Id: Ib3062232e44b0be26ade7c64340918f2f23ada03
Signed-off-by: Liang Yan <lyan@suse.com>
Reviewed-on: https://review.gerrithub.io/430802 (master)
Reviewed-on: https://review.gerrithub.io/437054
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-13 20:12:00 +00:00
Seth Howell
36a9358315 NVMe-oF: Add explicit reports for MR-split buffers
This is a failsafe for finding and reporting data buffers that span
multiple Memory Regions. These errors should never be triggered, but
finding and reporting them will help with any debugging.

Change-Id: I3c61e3cc510f5a36039fc1815ff0de45fce794d5
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/436054 (master)
Reviewed-on: https://review.gerrithub.io/437016
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-12 23:44:29 +00:00
Evgeniy Kochetov
567b1006a9 nvmf/rdma: Fix QP shutdown procedure implementation
This patch implements the following QP shutdown flow:
1. Move the QP to ERR state
2. Post dummy work requests to send and receive queues
3. Poll CQ until it returns dummy work requests (with WR Flush Error status)
4. Call ibv_destroy_qp and release resources

To differentiate dummy and normal WRs, a new spdk_nvmf_rdma_wr
structure was introduced which contains the type of the WR. It is now
expected that the wr_id field in ibv_recv/send_wr and ibv_wc always
points to this structure. Based on the WR type, wr_id can be safely
cast to the correct container structure. For unsuccessful work
completions, 'opcode' cannot be used for this purpose because it may be
invalid (see "IB Architecture Specification Volume 1", ch. 11.4.2.1
"Poll for completion").

Change-Id: Ifb791e36114c619c71ad4d831f2c7972fe7cf13d
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-on: https://review.gerrithub.io/430754 (master)
Reviewed-on: https://review.gerrithub.io/436859
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2018-12-12 15:26:21 +00:00
yidong0635
addb8110f2 nvmf: change the return code when calloc fails
1. nvmf: change the return code when calloc fails to -ENOMEM, and
keep consistency in this file.
2. thread: revise the rc condition to (rc != 0), to handle
all abnormal returns.

Change-Id: I7cccb548f30448eaa1bac1a5904c3edcad9c1208
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Reviewed-on: https://review.gerrithub.io/431459 (master)
Reviewed-on: https://review.gerrithub.io/436858
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-12 15:26:21 +00:00
Ben Walker
bb8aaef404 nvmf/rdma: Simplify code that casts wr_id field
We were previously doing lots of checks in debug mode
to verify the validity of this field. Now we understand
how it works, so these checks are never going to hit
and are just making the code harder to read.

Change-Id: Ic82d479ae34a8c7db06db62aee1cdf6e8bec126e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/430866 (master)
Reviewed-on: https://review.gerrithub.io/436857
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2018-12-12 15:26:21 +00:00
Ben Walker
e0108d9d1f nvmf: Simplify qpair states
When we thought we could do error recovery, we differentiated between
inactive and error states. However, that's not possible, so collapse
them back into one.

Change-Id: I57622c400378f2d4c518efbc12fb52e665a9ba4c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/430627 (master)
Reviewed-on: https://review.gerrithub.io/436856
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-12 15:26:21 +00:00
Ben Walker
e64897fdb0 nvmf/rdma: No longer rely on wr.opcode being valid on error
The specification states that opcode is not valid when the status
is not success. Instead, keep track of the operation type ourselves.

Change-Id: I60af4b35e761c46f5f296a61cedfca198836197f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Co-authored-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Reviewed-on: https://review.gerrithub.io/430865 (master)
Reviewed-on: https://review.gerrithub.io/436855
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-12 15:26:21 +00:00
Ben Walker
958065377b nvmf/rdma: Remove error recovery of RDMA qps
After some lengthy discussions with the RDMA experts, the only
way forward on an RDMA qp error is to disconnect it. The initiator
can create a new qp if it wants to later on.

Remove all of the error recovery code and disconnect the qp
any time an error is encountered.

Change-Id: I11e1df5aaeb8592a47ca01529cfd6a069828bd7f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/430389 (master)
Reviewed-on: https://review.gerrithub.io/436854
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2018-12-12 15:26:21 +00:00
Tomasz Zawadzki
f7dcb01028 app/trace: fix app_name dereference on init
In the case where only file_name was provided,
app_name was still dereferenced.

Change-Id: Ica948e072ef02a8daadf303b3e2a004640d19000
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/433609 (master)
Reviewed-on: https://review.gerrithub.io/435683
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-10 19:39:00 +00:00
Ben Walker
a08acfbfe4 fio_plugin: exit immediately if spdk_fio_init_env fails
This is going to be moved to a separate thread later in
the patch series and it won't be able to return an error
code. The entire program must exit.

Change-Id: I718d2a82346f78596f805492392a843e7a79359e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/432088 (master)
Reviewed-on: https://review.gerrithub.io/435680
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2018-12-10 19:38:46 +00:00
Darek Stojaczyk
6bf1af641f pci: fix config access return codes on BSD
The BSD implementation of config access in DPDK seems to
return 0 on success, while the Linux implementation returns 0
only on failure. The env wrapper always treated 0 as
an error and caused some of our PCI initialization code
to fail prematurely.

At one point DPDK harmonized this BSD behavior with Linux,
but only for config reads.

Fixes #484

Change-Id: I4ea850ea50f5e667fad28e8125209b21c377a2a3
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/432401 (master)
Reviewed-on: https://review.gerrithub.io/435682
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2018-12-07 21:28:01 +00:00
Pawel Wodkowski
fbe6a4a3b0 mk: set executable bit only for real libraries
Change-Id: I9f68f83472844cabe6a36f70a00f2d2cd319fc62
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/431169 (master)
Reviewed-on: https://review.gerrithub.io/435694
Reviewed-by: Lance Hartmann <lance.hartmann@oracle.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2018-12-06 17:21:13 +00:00
Jim Harris
7a58660763 env_dpdk: tell DPDK to not free dynamically allocated memory
This keeps us from having to deal with ALLOC and FREE events
for mismatching regions - which necessitated splitting new
regions into individual pages.  This caused all kinds of
problems with NVMe-oF - for example, buffers that spanned
memory regions, or bumping up against MR limits on RDMA
NICs.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I18dcdae148436b55d4481bb9fb8799f4832c7de1
Reviewed-on: https://review.gerrithub.io/434895 (master)
Reviewed-on: https://review.gerrithub.io/435692
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-06 17:09:54 +00:00
Seth Howell
2f7de0751b nvme_rdma/nvmf: add cb_fns to check mr contiguity
This is necessary to confirm that a buffer that spans a 2MB boundary
is still in a single MR.

Change-Id: If0d14e514ab2197a0d2e3af4f565f56d50591210
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/435179 (master)
Reviewed-on: https://review.gerrithub.io/435689
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-06 17:09:20 +00:00
Darek Stojaczyk
c93b418730 memory: fix contiguous memory calculation for unaligned buffers
We assumed spdk_mem_map_translate() translates only 2MB-aligned
addresses, but that's not true. Both vtophys and NVMf can use it
with any user-provided address and that breaks our contiguous memory
length calculations. Until now, each buffer appeared to have its
first n * 2MB of memory always contiguous.

This is a bugfix for NVMf, which does check the mapping length
internally. It will also come in handy when adding similar
functionality to spdk_vtophys().
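A worked example of the corrected arithmetic (VALUE_2MB is 0x200000;
the actual one-line change is in the memory.c diff below):

	/* A buffer starting 4KB into a 2MB page: */
	uint64_t vaddr = 10 * 0x200000ULL + 0x1000;

	/* Old: cur_size = VALUE_2MB;                      -> 0x200000 (wrong,
	 *      claims a full 2MB contiguous from an unaligned start)
	 * New: cur_size = VALUE_2MB - _2MB_OFFSET(vaddr); -> 0x1ff000 (only
	 *      the remainder of the current 2MB page) */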

Change-Id: I3bc8e0b2b8d203cb90320a79264effb7ea7037a7
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/433076 (master)
Reviewed-on: https://review.gerrithub.io/435691
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-06 17:08:38 +00:00
Andrey Kuzmin
77f89cbc31 bdev: unregister bdevs top-down during shutdown.
There are some use cases such as multipath and RAID expansion where a
vbdev could have been registered before one of its base bdevs.

Currently we unregister bdevs at shutdown in reverse order of their
registration.  Continue to do that in general, but skip any bdev that
is still claimed.  Any bdevs skipped in this way will eventually be
unregistered once any bdevs that have claimed it have completed
unregistration.

Change-Id: Iafde9558430bc5ce56e8608ef50bcb2b5fbfbf71
Signed-off-by: Andrey Kuzmin <akuzmin@jetstreamsoft.com>
Reviewed-on: https://review.gerrithub.io/432136 (master)
Reviewed-on: https://review.gerrithub.io/435688
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2018-12-05 17:50:50 +00:00
Paul Luse
126c22020a bdev/crypto: unregister io_device on failure in examine callback
In vbdev_crypto_examine() we were failing to unregister the io_device
in the event that the spdk_vbdev_register() call failed. Found via
inspection.

Change-Id: I73c6c0c5693777b93c1ea02dcf2e2e65d46fe27d
Signed-off-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/432933 (master)
Reviewed-on: https://review.gerrithub.io/435677
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-05 14:40:12 +00:00
Paul Luse
421dd2ea28 bdev/crypto: unregister io_device on failure in RPC create
In create_crypto_disk() we were failing to unregister the io_device
in the event that the spdk_vbdev_register() call failed. Found via
inspection while looking into a CI failure; this could potentially
have caused that failure as well (I don't think so, though, since
there were no prints in the log showing that it followed this path).

Change-Id: I7085c4e25665a5a15def38d6726d519731b7e44e
Signed-off-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/432932 (master)
Reviewed-on: https://review.gerrithub.io/435676
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
2018-12-05 14:40:12 +00:00
Paul Luse
1c2d2e753a bdev/crypto: respect return value of vbdev_crypto_claim()
Found via inspection while investigating a CI failure. In
vbdev_crypto_examine() we were not checking the rc from
vbdev_crypto_claim().

Change-Id: I8be09b5844e18e35b95f19e378fe280323d183fa
Signed-off-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/432930 (master)
Reviewed-on: https://review.gerrithub.io/435675
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2018-12-05 14:40:12 +00:00
paul luse
abea6a196b bdev/crypto: unregister IO device on vbdev delete
The io device was previously not being unregistered; this looks
like the root cause of several recent CI failures in the bdev
JSON tests that use RPC to create/save/clear/load configs.

Change-Id: Ia77ed9fe230c79188d8d862e98b17581ae81b31f
Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/433194 (master)
Reviewed-on: https://review.gerrithub.io/435674
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2018-12-05 14:40:12 +00:00
Paul Luse
9469b3ef20 bdev/crypto: prevent duplicates from being added to global name list
Picked up from recent update to the passthru module.

Change-Id: Ie7404e5aadda6691b15020adbbc1e1f4c23d0a12
Signed-off-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/433384 (master)
Reviewed-on: https://review.gerrithub.io/435673
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2018-12-05 14:40:12 +00:00
Tomasz Zawadzki
b177b5fcb8 changelog: add section for 18.10.x release
Change-Id: I3d2b32cc143d076a8b66669713bccb1b3c039481
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/435687
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-05 00:40:14 +00:00
Ziye Yang
3cbfbbf62f iscsi: do not calculate the CRC multiple times
Since a PDU can be sent out over multiple writes, this
function may be called multiple times, so move the digest
calculation into spdk_iscsi_conn_write_pdu.

Change-Id: Ib4da4a74c17709d98e4b01c2e76428021afea947
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/429931 (master)
Reviewed-on: https://review.gerrithub.io/435681
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2018-12-05 00:36:40 +00:00
Ben Walker
e2cdfd0ce6 nvmf/rdma: Move cm event processing down near where it is referenced
Code movement only. No other changes.

Change-Id: I04cf179ecd57154172a9369926cbeaaa37e11a52
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/430505
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-on: https://review.gerrithub.io/435671
2018-12-03 15:11:37 +00:00
Ben Walker
6c758f0a50 nvmf/rdma: Remove handling for LAST_WQE_REACHED
This event only occurs when using shared receive queues, which
the target does not currently support.

Change-Id: If155843610cf0e961b9783d4afd64b969b4316f4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/430388
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-on: https://review.gerrithub.io/435670
2018-12-03 15:11:37 +00:00
Kefu Chai
2aa7396d82 barrier.h: fix load fence on armv8
on armv8, with its weak memory ordering, the load fence can be implemented using

dsb ld

see
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0802b/DMB.html

Change-Id: I4db34b87fa659967109adc688cad784018cedaae
Signed-off-by: Kefu Chai <tchaikov@gmail.com>
Reviewed-on: https://review.gerrithub.io/430767
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/435672
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2018-12-03 15:08:43 +00:00
28 changed files with 899 additions and 618 deletions

View File

@ -1,5 +1,56 @@
# Changelog
## v18.10.1:
### Packaging
A spec file for creating SPDK RPM packages is now available:
x86_64/spdk
Base SPDK package - all apps installed by `make install`
- spdk_* applications
- shared libs in libs64 dir
x86_64/spdk-devel
SPDK development package:
- header files
- static/shared libs
x86_64/spdk-debuginfo
x86_64/spdk-debugsource
These two are autogenerated by rpmbuild
no_arch/spdk-tools:
SPDK tools package:
scripts/rpc.py -> /usr/sbin/spdk-rpc
scripts/spdkcli.py -> /usr/sbin/spdk-cli
no_arch/spdk-doc: optional, generated when adding '--with doc'
SPDK html doc package:
doc/output/html/ -> /usr/share/doc/spdk/html
### bdev
On shutdown, bdev unregister now proceeds in top-down fashion, with
claimed bdevs skipped (these will be unregistered later, when the virtual
bdev built on top of the respective base bdev unclaims it). This
allows virtual bdevs to be shut down cleanly, as opposed to the
previous behavior that didn't differentiate between hotremove and
planned shutdown.
### Bug fixes
- nvmf: improve error handling during disconnect and QP shutdown
- blobstore: remove snapshot from the list only on blob deletion
- memory: return first translation from mem_map_translate
- iSCSI: prevent calculating the CRC multiple times
- bdev/crypto: improve error handling when creating and unregistering
- env_dpdk/memory: fix contiguous memory calculation for unaligned buffers
- env_dpdk/memory: prevent freeing dynamically allocated memory back to OS
- fio_plugin: Perform initialization and teardown on consistent thread
- fio_plugin: exit immediately if spdk_fio_init_env fails
- app/trace: fix app_name dereference on init
## v18.10:
### nvme

View File

@ -341,15 +341,14 @@ int main(int argc, char **argv)
exit(1);
}
if (file_name) {
fd = open(file_name, O_RDONLY);
} else {
if (shm_id >= 0) {
snprintf(shm_name, sizeof(shm_name), "/%s_trace.%d", app_name, shm_id);
} else {
snprintf(shm_name, sizeof(shm_name), "/%s_trace.pid%d", app_name, shm_pid);
}
if (file_name) {
fd = open(file_name, O_RDONLY);
} else {
fd = shm_open(shm_name, O_RDONLY, 0600);
}
if (fd < 0) {

View File

@ -94,8 +94,6 @@ struct spdk_fio_thread {
unsigned int iocq_size; // number of iocq entries allocated
};
static struct spdk_fio_thread *g_init_thread = NULL;
static pthread_t g_init_thread_id = 0;
static bool g_spdk_env_initialized = false;
static int spdk_fio_init(struct thread_data *td);
@ -208,70 +206,79 @@ spdk_fio_init_thread(struct thread_data *td)
return 0;
}
static void
spdk_fio_cleanup_thread(struct spdk_fio_thread *fio_thread)
{
struct spdk_fio_target *target, *tmp;
TAILQ_FOREACH_SAFE(target, &fio_thread->targets, link, tmp) {
TAILQ_REMOVE(&fio_thread->targets, target, link);
spdk_put_io_channel(target->ch);
spdk_bdev_close(target->desc);
free(target);
}
while (spdk_fio_poll_thread(fio_thread) > 0) {}
spdk_free_thread();
spdk_ring_free(fio_thread->ring);
free(fio_thread->iocq);
free(fio_thread);
}
static void
spdk_fio_module_finish_done(void *cb_arg)
{
*(bool *)cb_arg = true;
}
static pthread_t g_init_thread_id = 0;
static pthread_mutex_t g_init_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t g_init_cond = PTHREAD_COND_INITIALIZER;
static void *
spdk_init_thread_poll(void *arg)
{
struct spdk_fio_thread *thread = arg;
int oldstate;
int rc;
/* Loop until the thread is cancelled */
while (true) {
rc = pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
if (rc != 0) {
SPDK_ERRLOG("Unable to set cancel state disabled on g_init_thread (%d): %s\n",
rc, spdk_strerror(rc));
}
spdk_fio_poll_thread(thread);
rc = pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &oldstate);
if (rc != 0) {
SPDK_ERRLOG("Unable to set cancel state enabled on g_init_thread (%d): %s\n",
rc, spdk_strerror(rc));
}
/* This is a pthread cancellation point and cannot be removed. */
sleep(1);
}
return NULL;
}
static int
spdk_fio_init_env(struct thread_data *td)
{
struct spdk_fio_options *eo = arg;
struct spdk_fio_thread *fio_thread;
struct spdk_fio_options *eo;
bool done = false;
int rc;
struct spdk_conf *config;
struct spdk_env_opts opts;
bool done;
int rc;
size_t count;
struct timespec ts;
struct thread_data td = {};
/* Create a dummy thread data for use on the initialization thread. */
td.o.iodepth = 32;
td.eo = eo;
/* Parse the SPDK configuration file */
eo = td->eo;
eo = arg;
if (!eo->conf || !strlen(eo->conf)) {
SPDK_ERRLOG("No configuration file provided\n");
return -1;
rc = EINVAL;
goto err_exit;
}
config = spdk_conf_allocate();
if (!config) {
SPDK_ERRLOG("Unable to allocate configuration file\n");
return -1;
rc = ENOMEM;
goto err_exit;
}
rc = spdk_conf_read(config, eo->conf);
if (rc != 0) {
SPDK_ERRLOG("Invalid configuration file format\n");
spdk_conf_free(config);
return -1;
goto err_exit;
}
if (spdk_conf_first_section(config) == NULL) {
SPDK_ERRLOG("Invalid configuration file format\n");
spdk_conf_free(config);
return -1;
rc = EINVAL;
goto err_exit;
}
spdk_conf_set_as_default(config);
@ -287,23 +294,25 @@ spdk_fio_init_env(struct thread_data *td)
if (spdk_env_init(&opts) < 0) {
SPDK_ERRLOG("Unable to initialize SPDK env\n");
spdk_conf_free(config);
return -1;
rc = EINVAL;
goto err_exit;
}
spdk_unaffinitize_thread();
/* Create an SPDK thread temporarily */
rc = spdk_fio_init_thread(td);
rc = spdk_fio_init_thread(&td);
if (rc < 0) {
SPDK_ERRLOG("Failed to create initialization thread\n");
return -1;
goto err_exit;
}
g_init_thread = fio_thread = td->io_ops_data;
fio_thread = td.io_ops_data;
/* Initialize the copy engine */
spdk_copy_engine_initialize();
/* Initialize the bdev layer */
done = false;
spdk_bdev_initialize(spdk_fio_bdev_init_done, &done);
/* First, poll until initialization is done. */
@ -319,16 +328,75 @@ spdk_fio_init_env(struct thread_data *td)
count = spdk_fio_poll_thread(fio_thread);
} while (count > 0);
/* Set condition variable */
pthread_mutex_lock(&g_init_mtx);
pthread_cond_signal(&g_init_cond);
while (true) {
spdk_fio_poll_thread(fio_thread);
clock_gettime(CLOCK_REALTIME, &ts);
ts.tv_sec += 1;
rc = pthread_cond_timedwait(&g_init_cond, &g_init_mtx, &ts);
if (rc != ETIMEDOUT) {
break;
}
}
pthread_mutex_unlock(&g_init_mtx);
done = false;
spdk_bdev_finish(spdk_fio_module_finish_done, &done);
do {
spdk_fio_poll_thread(fio_thread);
} while (!done);
do {
count = spdk_fio_poll_thread(fio_thread);
} while (count > 0);
done = false;
spdk_copy_engine_finish(spdk_fio_module_finish_done, &done);
do {
spdk_fio_poll_thread(fio_thread);
} while (!done);
do {
count = spdk_fio_poll_thread(fio_thread);
} while (count > 0);
spdk_fio_cleanup_thread(fio_thread);
pthread_exit(NULL);
err_exit:
exit(rc);
return NULL;
}
static int
spdk_fio_init_env(struct thread_data *td)
{
int rc;
/*
* Spawn a thread to continue polling this thread
* occasionally.
* Spawn a thread to handle initialization operations and to poll things
* like the admin queues periodically.
*/
rc = pthread_create(&g_init_thread_id, NULL, &spdk_init_thread_poll, fio_thread);
rc = pthread_create(&g_init_thread_id, NULL, &spdk_init_thread_poll, td->eo);
if (rc != 0) {
SPDK_ERRLOG("Unable to spawn thread to poll admin queue. It won't be polled.\n");
}
/* Wait for background thread to advance past the initialization */
pthread_mutex_lock(&g_init_mtx);
pthread_cond_wait(&g_init_cond, &g_init_mtx);
pthread_mutex_unlock(&g_init_mtx);
return 0;
}
@ -430,26 +498,6 @@ spdk_fio_init(struct thread_data *td)
return 0;
}
static void
spdk_fio_cleanup_thread(struct spdk_fio_thread *fio_thread)
{
struct spdk_fio_target *target, *tmp;
TAILQ_FOREACH_SAFE(target, &fio_thread->targets, link, tmp) {
TAILQ_REMOVE(&fio_thread->targets, target, link);
spdk_put_io_channel(target->ch);
spdk_bdev_close(target->desc);
free(target);
}
while (spdk_fio_poll_thread(fio_thread) > 0) {}
spdk_free_thread();
spdk_ring_free(fio_thread->ring);
free(fio_thread->iocq);
free(fio_thread);
}
static void
spdk_fio_cleanup(struct thread_data *td)
{
@ -725,50 +773,15 @@ static void fio_init spdk_fio_register(void)
register_ioengine(&ioengine);
}
static void
spdk_fio_module_finish_done(void *cb_arg)
{
*(bool *)cb_arg = true;
}
static void
spdk_fio_finish_env(void)
{
struct spdk_fio_thread *fio_thread;
bool done = false;
size_t count;
/* the same thread that called spdk_fio_init_env */
fio_thread = g_init_thread;
if (pthread_cancel(g_init_thread_id) == 0) {
pthread_mutex_lock(&g_init_mtx);
pthread_cond_signal(&g_init_cond);
pthread_mutex_unlock(&g_init_mtx);
pthread_join(g_init_thread_id, NULL);
}
spdk_bdev_finish(spdk_fio_module_finish_done, &done);
do {
spdk_fio_poll_thread(fio_thread);
} while (!done);
do {
count = spdk_fio_poll_thread(fio_thread);
} while (count > 0);
done = false;
spdk_copy_engine_finish(spdk_fio_module_finish_done, &done);
do {
spdk_fio_poll_thread(fio_thread);
} while (!done);
do {
count = spdk_fio_poll_thread(fio_thread);
} while (count > 0);
spdk_fio_cleanup_thread(fio_thread);
}
static void fio_exit spdk_fio_unregister(void)
{
if (g_spdk_env_initialized) {

View File

@ -36,4 +36,7 @@ include $(SPDK_ROOT_DIR)/mk/spdk.common.mk
APP = identify
install: $(APP)
$(INSTALL_EXAMPLE)
include $(SPDK_ROOT_DIR)/mk/nvme.libtest.mk

View File

@ -41,4 +41,7 @@ SYS_LIBS += -laio
CFLAGS += -DHAVE_LIBAIO
endif
install: $(APP)
$(INSTALL_EXAMPLE)
include $(SPDK_ROOT_DIR)/mk/nvme.libtest.mk

View File

@ -64,7 +64,7 @@ extern "C" {
#ifdef __PPC64__
#define spdk_rmb() __asm volatile("sync" ::: "memory")
#elif defined(__aarch64__)
#define spdk_rmb() __asm volatile("dsb lt" ::: "memory")
#define spdk_rmb() __asm volatile("dsb ld" ::: "memory")
#elif defined(__i386__) || defined(__x86_64__)
#define spdk_rmb() __asm volatile("lfence" ::: "memory")
#else

View File

@ -54,7 +54,7 @@
* Patch level is incremented on maintenance branch releases and reset to 0 for each
* new major.minor release.
*/
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_PATCH 1
/**
* Version string suffix.

View File

@ -966,7 +966,7 @@ _spdk_bdev_finish_unregister_bdevs_iter(void *cb_arg, int bdeverrno)
if (TAILQ_EMPTY(&g_bdev_mgr.bdevs)) {
SPDK_DEBUGLOG(SPDK_LOG_BDEV, "Done unregistering bdevs\n");
/*
* Bdev module finish need to be deffered as we might be in the middle of some context
* Bdev module finish need to be deferred as we might be in the middle of some context
* (like bdev part free) that will use this bdev (or private bdev driver ctx data)
* after returning.
*/
@ -975,12 +975,38 @@ _spdk_bdev_finish_unregister_bdevs_iter(void *cb_arg, int bdeverrno)
}
/*
* Unregister the last bdev in the list. The last bdev in the list should be a bdev
* that has no bdevs that depend on it.
* Unregister last unclaimed bdev in the list, to ensure that bdev subsystem
* shutdown proceeds top-down. The goal is to give virtual bdevs an opportunity
* to detect clean shutdown as opposed to run-time hot removal of the underlying
* base bdevs.
*
* Also, walk the list in the reverse order.
*/
bdev = TAILQ_LAST(&g_bdev_mgr.bdevs, spdk_bdev_list);
for (bdev = TAILQ_LAST(&g_bdev_mgr.bdevs, spdk_bdev_list);
bdev; bdev = TAILQ_PREV(bdev, spdk_bdev_list, internal.link)) {
if (bdev->internal.claim_module != NULL) {
SPDK_DEBUGLOG(SPDK_LOG_BDEV, "Skipping claimed bdev '%s'(<-'%s').\n",
bdev->name, bdev->internal.claim_module->name);
continue;
}
SPDK_DEBUGLOG(SPDK_LOG_BDEV, "Unregistering bdev '%s'\n", bdev->name);
spdk_bdev_unregister(bdev, _spdk_bdev_finish_unregister_bdevs_iter, bdev);
return;
}
/*
* If any bdev fails to unclaim underlying bdev properly, we may face the
* case of bdev list consisting of claimed bdevs only (if claims are managed
* correctly, this would mean there's a loop in the claims graph which is
* clearly impossible). Warn and unregister last bdev on the list then.
*/
for (bdev = TAILQ_LAST(&g_bdev_mgr.bdevs, spdk_bdev_list);
bdev; bdev = TAILQ_PREV(bdev, spdk_bdev_list, internal.link)) {
SPDK_ERRLOG("Unregistering claimed bdev '%s'!\n", bdev->name);
spdk_bdev_unregister(bdev, _spdk_bdev_finish_unregister_bdevs_iter, bdev);
return;
}
}
void

View File

@ -974,6 +974,19 @@ vbdev_crypto_io_type_supported(void *ctx, enum spdk_bdev_io_type io_type)
}
}
/* Callback for unregistering the IO device. */
static void
_device_unregister_cb(void *io_device)
{
struct vbdev_crypto *crypto_bdev = io_device;
/* Done with this crypto_bdev. */
free(crypto_bdev->drv_name);
free(crypto_bdev->key);
free(crypto_bdev->crypto_bdev.name);
free(crypto_bdev);
}
/* Called after we've unregistered following a hot remove callback.
* Our finish entry point will be called next.
*/
@ -982,18 +995,18 @@ vbdev_crypto_destruct(void *ctx)
{
struct vbdev_crypto *crypto_bdev = (struct vbdev_crypto *)ctx;
/* Remove this device from the internal list */
TAILQ_REMOVE(&g_vbdev_crypto, crypto_bdev, link);
/* Unclaim the underlying bdev. */
spdk_bdev_module_release_bdev(crypto_bdev->base_bdev);
/* Close the underlying bdev. */
spdk_bdev_close(crypto_bdev->base_desc);
/* Done with this crypto_bdev. */
TAILQ_REMOVE(&g_vbdev_crypto, crypto_bdev, link);
free(crypto_bdev->drv_name);
free(crypto_bdev->key);
free(crypto_bdev->crypto_bdev.name);
free(crypto_bdev);
/* Unregister the io_device. */
spdk_io_device_unregister(crypto_bdev, _device_unregister_cb);
return 0;
}
@ -1116,6 +1129,13 @@ vbdev_crypto_insert_name(const char *bdev_name, const char *vbdev_name,
int rc, j;
bool found = false;
TAILQ_FOREACH(name, &g_bdev_names, link) {
if (strcmp(vbdev_name, name->vbdev_name) == 0) {
SPDK_ERRLOG("crypto bdev %s already exists\n", vbdev_name);
return -EEXIST;
}
}
name = calloc(1, sizeof(struct bdev_names));
if (!name) {
SPDK_ERRLOG("could not allocate bdev_names\n");
@ -1222,6 +1242,7 @@ create_crypto_disk(const char *bdev_name, const char *vbdev_name,
SPDK_ERRLOG("could not register crypto_bdev\n");
spdk_bdev_close(crypto_bdev->base_desc);
TAILQ_REMOVE(&g_vbdev_crypto, crypto_bdev, link);
spdk_io_device_unregister(crypto_bdev, NULL);
free(crypto_bdev->crypto_bdev.name);
free(crypto_bdev->key);
free(crypto_bdev);
@ -1487,7 +1508,7 @@ vbdev_crypto_claim(struct spdk_bdev *bdev)
goto error_claim;
}
SPDK_NOTICELOG("registered crypto_bdev for: %s\n", name->vbdev_name);
SPDK_NOTICELOG("registered io_device for: %s\n", name->vbdev_name);
}
return rc;
@ -1537,6 +1558,7 @@ delete_crypto_disk(struct spdk_bdev *bdev, spdk_delete_crypto_complete cb_fn,
}
}
/* Additional cleanup happens in the destruct callback. */
spdk_bdev_unregister(bdev, cb_fn, cb_arg);
}
@ -1553,7 +1575,11 @@ vbdev_crypto_examine(struct spdk_bdev *bdev)
struct vbdev_crypto *crypto_bdev, *tmp;
int rc;
vbdev_crypto_claim(bdev);
rc = vbdev_crypto_claim(bdev);
if (rc) {
spdk_bdev_module_examine_done(&crypto_if);
return;
}
TAILQ_FOREACH_SAFE(crypto_bdev, &g_vbdev_crypto, link, tmp) {
if (strcmp(crypto_bdev->base_bdev->name, bdev->name) == 0) {
@ -1563,6 +1589,7 @@ vbdev_crypto_examine(struct spdk_bdev *bdev)
SPDK_ERRLOG("could not register crypto_bdev\n");
spdk_bdev_close(crypto_bdev->base_desc);
TAILQ_REMOVE(&g_vbdev_crypto, crypto_bdev, link);
spdk_io_device_unregister(crypto_bdev, NULL);
free(crypto_bdev->crypto_bdev.name);
free(crypto_bdev->key);
free(crypto_bdev);

View File

@ -2361,11 +2361,6 @@ _spdk_bs_blob_list_remove(struct spdk_blob *blob)
free(clone_entry);
snapshot_entry->clone_count--;
if (snapshot_entry->clone_count == 0) {
/* Snapshot have no more clones */
TAILQ_REMOVE(&blob->bs->snapshots, snapshot_entry, link);
free(snapshot_entry);
}
return 0;
}
@ -4956,6 +4951,7 @@ static void
_spdk_bs_delete_open_cpl(void *cb_arg, struct spdk_blob *blob, int bserrno)
{
spdk_bs_sequence_t *seq = cb_arg;
struct spdk_blob_list *snapshot = NULL;
uint32_t page_num;
if (bserrno != 0) {
@ -4987,6 +4983,16 @@ _spdk_bs_delete_open_cpl(void *cb_arg, struct spdk_blob *blob, int bserrno)
* get returned after this point by _spdk_blob_lookup().
*/
TAILQ_REMOVE(&blob->bs->blobs, blob, link);
/* If blob is a snapshot then remove it from the list */
TAILQ_FOREACH(snapshot, &blob->bs->snapshots, link) {
if (snapshot->id == blob->id) {
TAILQ_REMOVE(&blob->bs->snapshots, snapshot, link);
free(snapshot);
break;
}
}
page_num = _spdk_bs_blobid_to_page(blob->id);
spdk_bit_array_clear(blob->bs->used_blobids, page_num);
blob->state = SPDK_BLOB_STATE_DIRTY;

View File

@ -68,6 +68,7 @@ extern struct rte_pci_bus rte_pci_bus;
#define SHIFT_4KB 12 /* (1 << 12) == 4KB */
#define MASK_4KB ((1ULL << SHIFT_4KB) - 1)
#define VALUE_4KB (1 << SHIFT_4KB)
struct spdk_pci_enum_ctx {
struct rte_pci_driver driver;

View File

@ -57,6 +57,8 @@
#define MAP_256TB_IDX(vfn_2mb) ((vfn_2mb) >> (SHIFT_1GB - SHIFT_2MB))
#define MAP_1GB_IDX(vfn_2mb) ((vfn_2mb) & ((1ULL << (SHIFT_1GB - SHIFT_2MB)) - 1))
#define _2MB_OFFSET(ptr) (((uintptr_t)(ptr)) & (VALUE_2MB - 1))
/* Page is registered */
#define REG_MAP_REGISTERED (1ULL << 62)
@ -592,6 +594,7 @@ spdk_mem_map_translate(const struct spdk_mem_map *map, uint64_t vaddr, uint64_t
uint64_t total_size = 0;
uint64_t cur_size;
uint64_t prev_translation;
uint64_t orig_translation;
if (size != NULL) {
total_size = *size;
@ -612,9 +615,9 @@ spdk_mem_map_translate(const struct spdk_mem_map *map, uint64_t vaddr, uint64_t
return map->default_translation;
}
cur_size = VALUE_2MB;
cur_size = VALUE_2MB - _2MB_OFFSET(vaddr);
if (size != NULL) {
*size = VALUE_2MB;
*size = cur_size;
}
map_2mb = &map_1gb->map[idx_1gb];
@ -623,7 +626,8 @@ spdk_mem_map_translate(const struct spdk_mem_map *map, uint64_t vaddr, uint64_t
return map_2mb->translation_2mb;
}
prev_translation = map_2mb->translation_2mb;;
orig_translation = map_2mb->translation_2mb;
prev_translation = orig_translation;
while (cur_size < total_size) {
vfn_2mb++;
idx_256tb = MAP_256TB_IDX(vfn_2mb);
@ -644,7 +648,7 @@ spdk_mem_map_translate(const struct spdk_mem_map *map, uint64_t vaddr, uint64_t
}
*size = cur_size;
return prev_translation;
return orig_translation;
}
#if RTE_VERSION >= RTE_VERSION_NUM(18, 05, 0, 0)
@ -653,14 +657,18 @@ memory_hotplug_cb(enum rte_mem_event event_type,
const void *addr, size_t len, void *arg)
{
if (event_type == RTE_MEM_EVENT_ALLOC) {
spdk_mem_register((void *)addr, len);
/* Now mark each segment so that DPDK won't later free it.
* This ensures we don't have to deal with the memory
* getting freed in different units than it was allocated.
*/
while (len > 0) {
struct rte_memseg *seg;
seg = rte_mem_virt2memseg(addr, NULL);
assert(seg != NULL);
assert(len >= seg->hugepage_sz);
spdk_mem_register((void *)seg->addr, seg->hugepage_sz);
seg->flags |= RTE_MEMSEG_FLAG_DO_NOT_FREE;
addr = (void *)((uintptr_t)addr + seg->hugepage_sz);
len -= seg->hugepage_sz;
}

View File

@ -299,6 +299,11 @@ spdk_pci_device_cfg_read(struct spdk_pci_device *dev, void *value, uint32_t len,
#else
rc = rte_eal_pci_read_config(dev, value, len, offset);
#endif
#if defined(__FreeBSD__) && RTE_VERSION < RTE_VERSION_NUM(18, 11, 0, 0)
/* Older DPDKs return 0 on success and -1 on failure */
return rc;
#endif
return (rc > 0 && (uint32_t) rc == len) ? 0 : -1;
}
@ -312,6 +317,11 @@ spdk_pci_device_cfg_write(struct spdk_pci_device *dev, void *value, uint32_t len
#else
rc = rte_eal_pci_write_config(dev, value, len, offset);
#endif
#ifdef __FreeBSD__
/* DPDK returns 0 on success and -1 on failure */
return rc;
#endif
return (rc > 0 && (uint32_t) rc == len) ? 0 : -1;
}

View File

@ -51,6 +51,12 @@
#include "iscsi/tgt_node.h"
#include "iscsi/portal_grp.h"
#define MAKE_DIGEST_WORD(BUF, CRC32C) \
( ((*((uint8_t *)(BUF)+0)) = (uint8_t)((uint32_t)(CRC32C) >> 0)), \
((*((uint8_t *)(BUF)+1)) = (uint8_t)((uint32_t)(CRC32C) >> 8)), \
((*((uint8_t *)(BUF)+2)) = (uint8_t)((uint32_t)(CRC32C) >> 16)), \
((*((uint8_t *)(BUF)+3)) = (uint8_t)((uint32_t)(CRC32C) >> 24)))
#define SPDK_ISCSI_CONNECTION_MEMSET(conn) \
memset(&(conn)->portal, 0, sizeof(*(conn)) - \
offsetof(struct spdk_iscsi_conn, portal));
@ -1261,6 +1267,22 @@ spdk_iscsi_conn_flush_pdus(void *_conn)
void
spdk_iscsi_conn_write_pdu(struct spdk_iscsi_conn *conn, struct spdk_iscsi_pdu *pdu)
{
uint32_t crc32c;
if (pdu->bhs.opcode != ISCSI_OP_LOGIN_RSP) {
/* Header Digest */
if (conn->header_digest) {
crc32c = spdk_iscsi_pdu_calc_header_digest(pdu);
MAKE_DIGEST_WORD(pdu->header_digest, crc32c);
}
/* Data Digest */
if (conn->data_digest && DGET24(pdu->bhs.data_segment_len) != 0) {
crc32c = spdk_iscsi_pdu_calc_data_digest(pdu);
MAKE_DIGEST_WORD(pdu->data_digest, crc32c);
}
}
TAILQ_INSERT_TAIL(&conn->write_pdu_list, pdu, tailq);
spdk_iscsi_conn_flush_pdus(conn);
}

View File

@ -117,12 +117,6 @@ static int spdk_iscsi_reject(struct spdk_iscsi_conn *conn, struct spdk_iscsi_pdu
| (((uint32_t) *((uint8_t *)(BUF)+3)) << 24)) \
== (CRC32C))
#define MAKE_DIGEST_WORD(BUF, CRC32C) \
( ((*((uint8_t *)(BUF)+0)) = (uint8_t)((uint32_t)(CRC32C) >> 0)), \
((*((uint8_t *)(BUF)+1)) = (uint8_t)((uint32_t)(CRC32C) >> 8)), \
((*((uint8_t *)(BUF)+2)) = (uint8_t)((uint32_t)(CRC32C) >> 16)), \
((*((uint8_t *)(BUF)+3)) = (uint8_t)((uint32_t)(CRC32C) >> 24)))
#if 0
static int
spdk_match_digest_word(const uint8_t *buf, uint32_t crc32c)
@ -307,7 +301,7 @@ spdk_islun2lun(uint64_t islun)
return lun_i;
}
static uint32_t
uint32_t
spdk_iscsi_pdu_calc_header_digest(struct spdk_iscsi_pdu *pdu)
{
uint32_t crc32c;
@ -325,7 +319,7 @@ spdk_iscsi_pdu_calc_header_digest(struct spdk_iscsi_pdu *pdu)
return crc32c;
}
static uint32_t
uint32_t
spdk_iscsi_pdu_calc_data_digest(struct spdk_iscsi_pdu *pdu)
{
uint32_t data_len = DGET24(pdu->bhs.data_segment_len);
@ -573,7 +567,6 @@ spdk_iscsi_build_iovecs(struct spdk_iscsi_conn *conn, struct iovec *iovec,
struct spdk_iscsi_pdu *pdu)
{
int iovec_cnt = 0;
uint32_t crc32c;
int enable_digest;
int total_ahs_len;
int data_len;
@ -601,9 +594,6 @@ spdk_iscsi_build_iovecs(struct spdk_iscsi_conn *conn, struct iovec *iovec,
/* Header Digest */
if (enable_digest && conn->header_digest) {
crc32c = spdk_iscsi_pdu_calc_header_digest(pdu);
MAKE_DIGEST_WORD(pdu->header_digest, crc32c);
iovec[iovec_cnt].iov_base = pdu->header_digest;
iovec[iovec_cnt].iov_len = ISCSI_DIGEST_LEN;
iovec_cnt++;
@ -618,9 +608,6 @@ spdk_iscsi_build_iovecs(struct spdk_iscsi_conn *conn, struct iovec *iovec,
/* Data Digest */
if (enable_digest && conn->data_digest && data_len != 0) {
crc32c = spdk_iscsi_pdu_calc_data_digest(pdu);
MAKE_DIGEST_WORD(pdu->data_digest, crc32c);
iovec[iovec_cnt].iov_base = pdu->data_digest;
iovec[iovec_cnt].iov_len = ISCSI_DIGEST_LEN;
iovec_cnt++;

View File

@ -435,6 +435,8 @@ int spdk_iscsi_copy_param2var(struct spdk_iscsi_conn *conn);
void spdk_iscsi_task_cpl(struct spdk_scsi_task *scsi_task);
void spdk_iscsi_task_mgmt_cpl(struct spdk_scsi_task *scsi_task);
uint32_t spdk_iscsi_pdu_calc_header_digest(struct spdk_iscsi_pdu *pdu);
uint32_t spdk_iscsi_pdu_calc_data_digest(struct spdk_iscsi_pdu *pdu);
/* Memory management */
void spdk_put_pdu(struct spdk_iscsi_pdu *pdu);

View File

@ -642,6 +642,13 @@ nvme_rdma_mr_map_notify(void *cb_ctx, struct spdk_mem_map *map,
return rc;
}
static int
nvme_rdma_check_contiguous_entries(uint64_t addr_1, uint64_t addr_2)
{
/* Two contiguous mappings will point to the same address which is the start of the RDMA MR. */
return addr_1 == addr_2;
}
static int
nvme_rdma_register_mem(struct nvme_rdma_qpair *rqpair)
{
@ -649,7 +656,7 @@ nvme_rdma_register_mem(struct nvme_rdma_qpair *rqpair)
struct spdk_nvme_rdma_mr_map *mr_map;
const struct spdk_mem_map_ops nvme_rdma_map_ops = {
.notify_cb = nvme_rdma_mr_map_notify,
.are_contiguous = NULL
.are_contiguous = nvme_rdma_check_contiguous_entries
};
pthread_mutex_lock(&g_rdma_mr_maps_mutex);
@ -873,6 +880,9 @@ nvme_rdma_build_contig_inline_request(struct nvme_rdma_qpair *rqpair,
(uint64_t)payload, &requested_size);
if (mr == NULL || requested_size < req->payload_size) {
if (mr) {
SPDK_ERRLOG("Data buffer split over multiple RDMA Memory Regions\n");
}
return -EINVAL;
}
@ -920,7 +930,11 @@ nvme_rdma_build_contig_request(struct nvme_rdma_qpair *rqpair,
requested_size = req->payload_size;
mr = (struct ibv_mr *)spdk_mem_map_translate(rqpair->mr_map->map, (uint64_t)payload,
&requested_size);
if (mr == NULL || requested_size < req->payload_size) {
if (mr) {
SPDK_ERRLOG("Data buffer split over multiple RDMA Memory Regions\n");
}
return -1;
}
@ -981,6 +995,9 @@ nvme_rdma_build_sgl_request(struct nvme_rdma_qpair *rqpair,
&mr_length);
if (mr == NULL || mr_length < sge_length) {
if (mr) {
SPDK_ERRLOG("Data buffer split over multiple RDMA Memory Regions\n");
}
return -1;
}
@ -1074,6 +1091,9 @@ nvme_rdma_build_sgl_inline_request(struct nvme_rdma_qpair *rqpair,
mr = (struct ibv_mr *)spdk_mem_map_translate(rqpair->mr_map->map, (uint64_t)virt_addr,
&requested_size);
if (mr == NULL || requested_size < req->payload_size) {
if (mr) {
SPDK_ERRLOG("Data buffer split over multiple RDMA Memory Regions\n");
}
return -1;
}

View File

@ -139,7 +139,7 @@ spdk_nvmf_tgt_create_poll_group(void *io_device, void *ctx_buf)
group->num_sgroups = tgt->opts.max_subsystems;
group->sgroups = calloc(tgt->opts.max_subsystems, sizeof(struct spdk_nvmf_subsystem_poll_group));
if (!group->sgroups) {
return -1;
return -ENOMEM;
}
for (sid = 0; sid < tgt->opts.max_subsystems; sid++) {
@ -700,7 +700,7 @@ spdk_nvmf_poll_group_add(struct spdk_nvmf_poll_group *group,
if (rc == 0) {
spdk_nvmf_qpair_set_state(qpair, SPDK_NVMF_QPAIR_ACTIVE);
} else {
spdk_nvmf_qpair_set_state(qpair, SPDK_NVMF_QPAIR_INACTIVE);
spdk_nvmf_qpair_set_state(qpair, SPDK_NVMF_QPAIR_ERROR);
}
return rc;
@ -743,7 +743,7 @@ _spdk_nvmf_qpair_destroy(void *ctx, int status)
struct spdk_nvmf_ctrlr *ctrlr = qpair->ctrlr;
assert(qpair->state == SPDK_NVMF_QPAIR_DEACTIVATING);
spdk_nvmf_qpair_set_state(qpair, SPDK_NVMF_QPAIR_INACTIVE);
spdk_nvmf_qpair_set_state(qpair, SPDK_NVMF_QPAIR_ERROR);
qpair_ctx->qid = qpair->qid;
TAILQ_REMOVE(&qpair->group->qpairs, qpair, link);
@ -781,8 +781,7 @@ spdk_nvmf_qpair_disconnect(struct spdk_nvmf_qpair *qpair, nvmf_qpair_disconnect_
/* The queue pair must be disconnected from the thread that owns it */
assert(qpair->group->thread == spdk_get_thread());
if (qpair->state == SPDK_NVMF_QPAIR_DEACTIVATING ||
qpair->state == SPDK_NVMF_QPAIR_INACTIVE) {
if (qpair->state != SPDK_NVMF_QPAIR_ACTIVE) {
/* This can occur if the connection is killed by the target,
* which results in a notification that the connection
* died. Send a message to defer the processing of this
@ -1043,7 +1042,7 @@ _nvmf_subsystem_disconnect_next_qpair(void *ctx)
subsystem = qpair_ctx->subsystem;
TAILQ_FOREACH(qpair, &group->qpairs, link) {
if (qpair->ctrlr->subsys == subsystem) {
if ((qpair->ctrlr != NULL) && (qpair->ctrlr->subsys == subsystem)) {
break;
}
}
@ -1084,7 +1083,7 @@ spdk_nvmf_poll_group_remove_subsystem(struct spdk_nvmf_poll_group *group,
sgroup->state = SPDK_NVMF_SUBSYSTEM_INACTIVE;
TAILQ_FOREACH(qpair, &group->qpairs, link) {
if (qpair->ctrlr->subsys == subsystem) {
if ((qpair->ctrlr != NULL) && (qpair->ctrlr->subsys == subsystem)) {
break;
}
}

View File

@ -59,7 +59,6 @@ enum spdk_nvmf_subsystem_state {
enum spdk_nvmf_qpair_state {
SPDK_NVMF_QPAIR_UNINITIALIZED = 0,
SPDK_NVMF_QPAIR_INACTIVE,
SPDK_NVMF_QPAIR_ACTIVATING,
SPDK_NVMF_QPAIR_ACTIVE,
SPDK_NVMF_QPAIR_DEACTIVATING,

View File

@ -1,8 +1,8 @@
/*-
* BSD LICENSE
*
* Copyright (c) Intel Corporation.
* All rights reserved.
* Copyright (c) Intel Corporation. All rights reserved.
* Copyright (c) 2018 Mellanox Technologies LTD. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@ -185,6 +185,18 @@ SPDK_TRACE_REGISTER_FN(nvmf_trace)
OWNER_NONE, OBJECT_NONE, 0, 0, "");
}
enum spdk_nvmf_rdma_wr_type {
RDMA_WR_TYPE_RECV,
RDMA_WR_TYPE_SEND,
RDMA_WR_TYPE_DATA,
RDMA_WR_TYPE_DRAIN_SEND,
RDMA_WR_TYPE_DRAIN_RECV
};
struct spdk_nvmf_rdma_wr {
enum spdk_nvmf_rdma_wr_type type;
};
/* This structure holds commands as they are received off the wire.
* It must be dynamically paired with a full request object
* (spdk_nvmf_rdma_request) to service a request. It is separate
@ -201,6 +213,8 @@ struct spdk_nvmf_rdma_recv {
/* In-capsule data buffer */
uint8_t *buf;
struct spdk_nvmf_rdma_wr rdma_wr;
TAILQ_ENTRY(spdk_nvmf_rdma_recv) link;
};
@ -213,20 +227,30 @@ struct spdk_nvmf_rdma_request {
struct spdk_nvmf_rdma_recv *recv;
struct {
struct spdk_nvmf_rdma_wr rdma_wr;
struct ibv_send_wr wr;
struct ibv_sge sgl[NVMF_DEFAULT_TX_SGE];
} rsp;
struct {
struct spdk_nvmf_rdma_wr rdma_wr;
struct ibv_send_wr wr;
struct ibv_sge sgl[SPDK_NVMF_MAX_SGL_ENTRIES];
void *buffers[SPDK_NVMF_MAX_SGL_ENTRIES];
} data;
struct spdk_nvmf_rdma_wr rdma_wr;
TAILQ_ENTRY(spdk_nvmf_rdma_request) link;
TAILQ_ENTRY(spdk_nvmf_rdma_request) state_link;
};
enum spdk_nvmf_rdma_qpair_disconnect_flags {
RDMA_QP_DISCONNECTING = 1,
RDMA_QP_RECV_DRAINED = 1 << 1,
RDMA_QP_SEND_DRAINED = 1 << 2
};
struct spdk_nvmf_rdma_qpair {
struct spdk_nvmf_qpair qpair;
@ -289,7 +313,9 @@ struct spdk_nvmf_rdma_qpair {
struct ibv_qp_init_attr ibv_init_attr;
struct ibv_qp_attr ibv_attr;
bool qpair_disconnected;
uint32_t disconnect_flags;
struct spdk_nvmf_rdma_wr drain_send_wr;
struct spdk_nvmf_rdma_wr drain_recv_wr;
/* Reference counter for how many unprocessed messages
* from other threads are currently outstanding. The
@ -379,17 +405,6 @@ spdk_nvmf_rdma_qpair_dec_refcnt(struct spdk_nvmf_rdma_qpair *rqpair)
return new_refcnt;
}
/* API to IBV QueuePair */
static const char *str_ibv_qp_state[] = {
"IBV_QPS_RESET",
"IBV_QPS_INIT",
"IBV_QPS_RTR",
"IBV_QPS_RTS",
"IBV_QPS_SQD",
"IBV_QPS_SQE",
"IBV_QPS_ERR"
};
static enum ibv_qp_state
spdk_nvmf_rdma_update_ibv_state(struct spdk_nvmf_rdma_qpair *rqpair) {
enum ibv_qp_state old_state, new_state;
@@ -432,6 +447,16 @@ spdk_nvmf_rdma_update_ibv_state(struct spdk_nvmf_rdma_qpair *rqpair) {
return new_state;
}
static const char *str_ibv_qp_state[] = {
"IBV_QPS_RESET",
"IBV_QPS_INIT",
"IBV_QPS_RTR",
"IBV_QPS_RTS",
"IBV_QPS_SQD",
"IBV_QPS_SQE",
"IBV_QPS_ERR"
};
static int
spdk_nvmf_rdma_set_ibv_state(struct spdk_nvmf_rdma_qpair *rqpair,
enum ibv_qp_state new_state)
@@ -557,15 +582,17 @@ spdk_nvmf_rdma_cur_queue_depth(struct spdk_nvmf_rdma_qpair *rqpair)
static void
spdk_nvmf_rdma_qpair_destroy(struct spdk_nvmf_rdma_qpair *rqpair)
{
spdk_trace_record(TRACE_RDMA_QP_DESTROY, 0, 0, (uintptr_t)rqpair->cm_id, 0);
int qd;
if (spdk_nvmf_rdma_cur_queue_depth(rqpair)) {
rqpair->qpair_disconnected = true;
if (rqpair->refcnt != 0) {
return;
}
if (rqpair->refcnt > 0) {
return;
spdk_trace_record(TRACE_RDMA_QP_DESTROY, 0, 0, (uintptr_t)rqpair->cm_id, 0);
qd = spdk_nvmf_rdma_cur_queue_depth(rqpair);
if (qd != 0) {
SPDK_WARNLOG("Destroying qpair when queue depth is %d\n", qd);
}
if (rqpair->poller) {
@@ -622,8 +649,9 @@ spdk_nvmf_rdma_qpair_initialize(struct spdk_nvmf_qpair *qpair)
rqpair->ibv_init_attr.send_cq = rqpair->poller->cq;
rqpair->ibv_init_attr.recv_cq = rqpair->poller->cq;
rqpair->ibv_init_attr.cap.max_send_wr = rqpair->max_queue_depth *
2; /* SEND, READ, and WRITE operations */
rqpair->ibv_init_attr.cap.max_recv_wr = rqpair->max_queue_depth; /* RECV operations */
2 + 1; /* SEND, READ, and WRITE operations + dummy drain WR */
rqpair->ibv_init_attr.cap.max_recv_wr = rqpair->max_queue_depth +
1; /* RECV operations + dummy drain WR */
rqpair->ibv_init_attr.cap.max_send_sge = rqpair->max_sge;
rqpair->ibv_init_attr.cap.max_recv_sge = NVMF_DEFAULT_RX_SGE;
@@ -708,6 +736,8 @@ spdk_nvmf_rdma_qpair_initialize(struct spdk_nvmf_qpair *qpair)
transport->opts.in_capsule_data_size));
}
rdma_recv->rdma_wr.type = RDMA_WR_TYPE_RECV;
rdma_recv->sgl[0].addr = (uintptr_t)&rqpair->cmds[i];
rdma_recv->sgl[0].length = sizeof(rqpair->cmds[i]);
rdma_recv->sgl[0].lkey = rqpair->cmds_mr->lkey;
@@ -720,7 +750,7 @@ spdk_nvmf_rdma_qpair_initialize(struct spdk_nvmf_qpair *qpair)
rdma_recv->wr.num_sge++;
}
rdma_recv->wr.wr_id = (uintptr_t)rdma_recv;
rdma_recv->wr.wr_id = (uintptr_t)&rdma_recv->rdma_wr;
rdma_recv->wr.sg_list = rdma_recv->sgl;
rc = ibv_post_recv(rqpair->cm_id->qp, &rdma_recv->wr, &bad_wr);
@@ -744,7 +774,8 @@ spdk_nvmf_rdma_qpair_initialize(struct spdk_nvmf_qpair *qpair)
rdma_req->rsp.sgl[0].length = sizeof(rqpair->cpls[i]);
rdma_req->rsp.sgl[0].lkey = rqpair->cpls_mr->lkey;
rdma_req->rsp.wr.wr_id = (uintptr_t)rdma_req;
rdma_req->rsp.rdma_wr.type = RDMA_WR_TYPE_SEND;
rdma_req->rsp.wr.wr_id = (uintptr_t)&rdma_req->rsp.rdma_wr;
rdma_req->rsp.wr.next = NULL;
rdma_req->rsp.wr.opcode = IBV_WR_SEND;
rdma_req->rsp.wr.send_flags = IBV_SEND_SIGNALED;
@@ -752,7 +783,8 @@ spdk_nvmf_rdma_qpair_initialize(struct spdk_nvmf_qpair *qpair)
rdma_req->rsp.wr.num_sge = SPDK_COUNTOF(rdma_req->rsp.sgl);
/* Set up memory for data buffers */
rdma_req->data.wr.wr_id = (uint64_t)rdma_req;
rdma_req->data.rdma_wr.type = RDMA_WR_TYPE_DATA;
rdma_req->data.wr.wr_id = (uintptr_t)&rdma_req->data.rdma_wr;
rdma_req->data.wr.next = NULL;
rdma_req->data.wr.send_flags = IBV_SEND_SIGNALED;
rdma_req->data.wr.sg_list = rdma_req->data.sgl;
@@ -997,172 +1029,6 @@ nvmf_rdma_connect(struct spdk_nvmf_transport *transport, struct rdma_cm_event *e
return 0;
}
static void
_nvmf_rdma_disconnect(void *ctx)
{
struct spdk_nvmf_qpair *qpair = ctx;
struct spdk_nvmf_rdma_qpair *rqpair;
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
spdk_nvmf_rdma_qpair_dec_refcnt(rqpair);
spdk_nvmf_qpair_disconnect(qpair, NULL, NULL);
}
static void
_nvmf_rdma_disconnect_retry(void *ctx)
{
struct spdk_nvmf_qpair *qpair = ctx;
struct spdk_nvmf_poll_group *group;
/* Read the group out of the qpair. This is normally set and accessed only from
* the thread that created the group. Here, we're not on that thread necessarily.
* The data member qpair->group begins its life as NULL and then is assigned to
* a pointer and never changes. So fortunately reading this and checking for
* non-NULL is thread safe in the x86_64 memory model. */
group = qpair->group;
if (group == NULL) {
/* The qpair hasn't been assigned to a group yet, so we can't
* process a disconnect. Send a message to ourself and try again. */
spdk_thread_send_msg(spdk_get_thread(), _nvmf_rdma_disconnect_retry, qpair);
return;
}
spdk_thread_send_msg(group->thread, _nvmf_rdma_disconnect, qpair);
}
static int
nvmf_rdma_disconnect(struct rdma_cm_event *evt)
{
struct spdk_nvmf_qpair *qpair;
struct spdk_nvmf_rdma_qpair *rqpair;
if (evt->id == NULL) {
SPDK_ERRLOG("disconnect request: missing cm_id\n");
return -1;
}
qpair = evt->id->context;
if (qpair == NULL) {
SPDK_ERRLOG("disconnect request: no active connection\n");
return -1;
}
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
spdk_trace_record(TRACE_RDMA_QP_DISCONNECT, 0, 0, (uintptr_t)rqpair->cm_id, 0);
spdk_nvmf_rdma_update_ibv_state(rqpair);
spdk_nvmf_rdma_qpair_inc_refcnt(rqpair);
_nvmf_rdma_disconnect_retry(qpair);
return 0;
}
#ifdef DEBUG
static const char *CM_EVENT_STR[] = {
"RDMA_CM_EVENT_ADDR_RESOLVED",
"RDMA_CM_EVENT_ADDR_ERROR",
"RDMA_CM_EVENT_ROUTE_RESOLVED",
"RDMA_CM_EVENT_ROUTE_ERROR",
"RDMA_CM_EVENT_CONNECT_REQUEST",
"RDMA_CM_EVENT_CONNECT_RESPONSE",
"RDMA_CM_EVENT_CONNECT_ERROR",
"RDMA_CM_EVENT_UNREACHABLE",
"RDMA_CM_EVENT_REJECTED",
"RDMA_CM_EVENT_ESTABLISHED",
"RDMA_CM_EVENT_DISCONNECTED",
"RDMA_CM_EVENT_DEVICE_REMOVAL",
"RDMA_CM_EVENT_MULTICAST_JOIN",
"RDMA_CM_EVENT_MULTICAST_ERROR",
"RDMA_CM_EVENT_ADDR_CHANGE",
"RDMA_CM_EVENT_TIMEWAIT_EXIT"
};
#endif /* DEBUG */
static void
spdk_nvmf_process_cm_event(struct spdk_nvmf_transport *transport, new_qpair_fn cb_fn)
{
struct spdk_nvmf_rdma_transport *rtransport;
struct rdma_cm_event *event;
int rc;
rtransport = SPDK_CONTAINEROF(transport, struct spdk_nvmf_rdma_transport, transport);
if (rtransport->event_channel == NULL) {
return;
}
while (1) {
rc = rdma_get_cm_event(rtransport->event_channel, &event);
if (rc == 0) {
SPDK_DEBUGLOG(SPDK_LOG_RDMA, "Acceptor Event: %s\n", CM_EVENT_STR[event->event]);
spdk_trace_record(TRACE_RDMA_CM_ASYNC_EVENT, 0, 0, 0, event->event);
switch (event->event) {
case RDMA_CM_EVENT_ADDR_RESOLVED:
case RDMA_CM_EVENT_ADDR_ERROR:
case RDMA_CM_EVENT_ROUTE_RESOLVED:
case RDMA_CM_EVENT_ROUTE_ERROR:
/* No action required. The target never attempts to resolve routes. */
break;
case RDMA_CM_EVENT_CONNECT_REQUEST:
rc = nvmf_rdma_connect(transport, event, cb_fn);
if (rc < 0) {
SPDK_ERRLOG("Unable to process connect event. rc: %d\n", rc);
break;
}
break;
case RDMA_CM_EVENT_CONNECT_RESPONSE:
/* The target never initiates a new connection. So this will not occur. */
break;
case RDMA_CM_EVENT_CONNECT_ERROR:
/* Can this happen? The docs say it can, but not sure what causes it. */
break;
case RDMA_CM_EVENT_UNREACHABLE:
case RDMA_CM_EVENT_REJECTED:
/* These only occur on the client side. */
break;
case RDMA_CM_EVENT_ESTABLISHED:
/* TODO: Should we be waiting for this event anywhere? */
break;
case RDMA_CM_EVENT_DISCONNECTED:
case RDMA_CM_EVENT_DEVICE_REMOVAL:
rc = nvmf_rdma_disconnect(event);
if (rc < 0) {
SPDK_ERRLOG("Unable to process disconnect event. rc: %d\n", rc);
break;
}
break;
case RDMA_CM_EVENT_MULTICAST_JOIN:
case RDMA_CM_EVENT_MULTICAST_ERROR:
/* Multicast is not used */
break;
case RDMA_CM_EVENT_ADDR_CHANGE:
/* Not utilizing this event */
break;
case RDMA_CM_EVENT_TIMEWAIT_EXIT:
/* For now, do nothing. The target never re-uses queue pairs. */
break;
default:
SPDK_ERRLOG("Unexpected Acceptor Event [%d]\n", event->event);
break;
}
rdma_ack_cm_event(event);
} else {
if (errno != EAGAIN && errno != EWOULDBLOCK) {
SPDK_ERRLOG("Acceptor Event Error: %s\n", spdk_strerror(errno));
}
break;
}
}
}
static int
spdk_nvmf_rdma_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
enum spdk_mem_map_notify_action action,
@@ -1197,6 +1063,13 @@ spdk_nvmf_rdma_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
return 0;
}
static int
spdk_nvmf_rdma_check_contiguous_entries(uint64_t addr_1, uint64_t addr_2)
{
/* Two contiguous mappings will point to the same address which is the start of the RDMA MR. */
return addr_1 == addr_2;
}
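Registering this callback (see the nvmf_rdma_map_ops change below) is what lets a translation extend past a single 2 MB chunk: the notify callback stores the same value, the start of the RDMA MR, for every chunk of a registered region, and the map keeps growing the contiguous run for as long as are_contiguous() reports equal translations. The reported length then covers the whole MR instead of one chunk.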
typedef enum spdk_nvme_data_transfer spdk_nvme_data_transfer_t;
static spdk_nvme_data_transfer_t
@@ -1266,13 +1139,16 @@ spdk_nvmf_rdma_request_fill_iovs(struct spdk_nvmf_rdma_transport *rtransport,
{
void *buf = NULL;
uint32_t length = rdma_req->req.length;
uint64_t translation_len;
uint32_t i = 0;
int rc = 0;
rdma_req->req.iovcnt = 0;
while (length) {
buf = spdk_mempool_get(rtransport->data_buf_pool);
if (!buf) {
goto nomem;
rc = -ENOMEM;
goto err_exit;
}
rdma_req->req.iov[i].iov_base = (void *)((uintptr_t)(buf + NVMF_DATA_BUFFER_MASK) &
@@ -1282,18 +1158,24 @@ spdk_nvmf_rdma_request_fill_iovs(struct spdk_nvmf_rdma_transport *rtransport,
rdma_req->data.buffers[i] = buf;
rdma_req->data.wr.sg_list[i].addr = (uintptr_t)(rdma_req->req.iov[i].iov_base);
rdma_req->data.wr.sg_list[i].length = rdma_req->req.iov[i].iov_len;
translation_len = rdma_req->req.iov[i].iov_len;
rdma_req->data.wr.sg_list[i].lkey = ((struct ibv_mr *)spdk_mem_map_translate(device->map,
(uint64_t)buf, NULL))->lkey;
(uint64_t)buf, &translation_len))->lkey;
length -= rdma_req->req.iov[i].iov_len;
if (translation_len < rdma_req->req.iov[i].iov_len) {
SPDK_ERRLOG("Data buffer split over multiple RDMA Memory Regions\n");
rc = -EINVAL;
goto err_exit;
}
i++;
}
rdma_req->data_from_pool = true;
return 0;
return rc;
nomem:
err_exit:
while (i) {
i--;
spdk_mempool_put(rtransport->data_buf_pool, rdma_req->req.iov[i].iov_base);
@@ -1305,7 +1187,7 @@ nomem:
rdma_req->data.wr.sg_list[i].lkey = 0;
}
rdma_req->req.iovcnt = 0;
return -ENOMEM;
return rc;
}
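The translation_len argument added above is what makes the split-buffer detection work: spdk_mem_map_translate() clamps it, on return, to the number of bytes that share one contiguous translation starting at the queried address. The same validation in isolation, as a sketch (buf_in_single_region() is a hypothetical helper, assuming a map registered the way this file registers MRs):

#include <stdbool.h>
#include "spdk/env.h"

/* Hypothetical helper, sketch only: true when [buf, buf + len) is
 * covered by a single registered region of the map. */
static bool
buf_in_single_region(struct spdk_mem_map *map, void *buf, uint64_t len)
{
	uint64_t translation_len = len;

	/* On return, translation_len holds the contiguous extent
	 * beginning at buf, possibly smaller than requested. */
	spdk_mem_map_translate(map, (uint64_t)buf, &translation_len);

	return translation_len >= len;
}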
static int
@@ -1661,7 +1543,7 @@ spdk_nvmf_rdma_create(struct spdk_nvmf_transport_opts *opts)
const struct spdk_mem_map_ops nvmf_rdma_map_ops = {
.notify_cb = spdk_nvmf_rdma_mem_notify,
.are_contiguous = NULL
.are_contiguous = spdk_nvmf_rdma_check_contiguous_entries
};
rtransport = calloc(1, sizeof(*rtransport));
@@ -2093,18 +1975,6 @@ spdk_nvmf_rdma_qpair_process_pending(struct spdk_nvmf_rdma_transport *rtransport
}
}
if (rqpair->qpair_disconnected) {
spdk_nvmf_rdma_qpair_destroy(rqpair);
return;
}
/* Do not process newly received commands if qp is in ERROR state,
* wait till the recovery is complete.
*/
if (rqpair->ibv_attr.qp_state == IBV_QPS_ERR) {
return;
}
/* The lowest priority is processing newly received commands */
TAILQ_FOREACH_SAFE(rdma_recv, &rqpair->incoming_queue, link, recv_tmp) {
if (TAILQ_EMPTY(&rqpair->state_queue[RDMA_REQUEST_STATE_FREE])) {
@@ -2121,177 +1991,169 @@ spdk_nvmf_rdma_qpair_process_pending(struct spdk_nvmf_rdma_transport *rtransport
}
static void
spdk_nvmf_rdma_drain_state_queue(struct spdk_nvmf_rdma_qpair *rqpair,
enum spdk_nvmf_rdma_request_state state)
_nvmf_rdma_qpair_disconnect(void *ctx)
{
struct spdk_nvmf_rdma_request *rdma_req, *req_tmp;
struct spdk_nvmf_rdma_transport *rtransport;
struct spdk_nvmf_qpair *qpair = ctx;
struct spdk_nvmf_rdma_qpair *rqpair;
TAILQ_FOREACH_SAFE(rdma_req, &rqpair->state_queue[state], state_link, req_tmp) {
rtransport = SPDK_CONTAINEROF(rdma_req->req.qpair->transport,
struct spdk_nvmf_rdma_transport, transport);
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
spdk_nvmf_rdma_request_process(rtransport, rdma_req);
}
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
spdk_nvmf_rdma_qpair_dec_refcnt(rqpair);
spdk_nvmf_qpair_disconnect(qpair, NULL, NULL);
}
static void
spdk_nvmf_rdma_qpair_recover(struct spdk_nvmf_rdma_qpair *rqpair)
_nvmf_rdma_disconnect_retry(void *ctx)
{
struct spdk_nvmf_qpair *qpair = ctx;
struct spdk_nvmf_poll_group *group;
/* Read the group out of the qpair. This is normally set and accessed only from
* the thread that created the group. Here, we're not on that thread necessarily.
* The data member qpair->group begins its life as NULL and then is assigned to
* a pointer and never changes. So fortunately reading this and checking for
* non-NULL is thread safe in the x86_64 memory model. */
group = qpair->group;
if (group == NULL) {
/* The qpair hasn't been assigned to a group yet, so we can't
* process a disconnect. Send a message to ourself and try again. */
spdk_thread_send_msg(spdk_get_thread(), _nvmf_rdma_disconnect_retry, qpair);
return;
}
spdk_thread_send_msg(group->thread, _nvmf_rdma_qpair_disconnect, qpair);
}
static int
nvmf_rdma_disconnect(struct rdma_cm_event *evt)
{
struct spdk_nvmf_qpair *qpair;
struct spdk_nvmf_rdma_qpair *rqpair;
if (evt->id == NULL) {
SPDK_ERRLOG("disconnect request: missing cm_id\n");
return -1;
}
qpair = evt->id->context;
if (qpair == NULL) {
SPDK_ERRLOG("disconnect request: no active connection\n");
return -1;
}
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
spdk_trace_record(TRACE_RDMA_QP_DISCONNECT, 0, 0, (uintptr_t)rqpair->cm_id, 0);
spdk_nvmf_rdma_update_ibv_state(rqpair);
spdk_nvmf_rdma_qpair_inc_refcnt(rqpair);
_nvmf_rdma_disconnect_retry(qpair);
return 0;
}
#ifdef DEBUG
static const char *CM_EVENT_STR[] = {
"RDMA_CM_EVENT_ADDR_RESOLVED",
"RDMA_CM_EVENT_ADDR_ERROR",
"RDMA_CM_EVENT_ROUTE_RESOLVED",
"RDMA_CM_EVENT_ROUTE_ERROR",
"RDMA_CM_EVENT_CONNECT_REQUEST",
"RDMA_CM_EVENT_CONNECT_RESPONSE",
"RDMA_CM_EVENT_CONNECT_ERROR",
"RDMA_CM_EVENT_UNREACHABLE",
"RDMA_CM_EVENT_REJECTED",
"RDMA_CM_EVENT_ESTABLISHED",
"RDMA_CM_EVENT_DISCONNECTED",
"RDMA_CM_EVENT_DEVICE_REMOVAL",
"RDMA_CM_EVENT_MULTICAST_JOIN",
"RDMA_CM_EVENT_MULTICAST_ERROR",
"RDMA_CM_EVENT_ADDR_CHANGE",
"RDMA_CM_EVENT_TIMEWAIT_EXIT"
};
#endif /* DEBUG */
static void
spdk_nvmf_process_cm_event(struct spdk_nvmf_transport *transport, new_qpair_fn cb_fn)
{
enum ibv_qp_state state, next_state;
int recovered;
struct spdk_nvmf_rdma_transport *rtransport;
struct rdma_cm_event *event;
int rc;
if (!spdk_nvmf_rdma_qpair_is_idle(&rqpair->qpair)) {
/* There must be outstanding requests down to media.
* If so, wait till they're complete.
*/
assert(!TAILQ_EMPTY(&rqpair->qpair.outstanding));
rtransport = SPDK_CONTAINEROF(transport, struct spdk_nvmf_rdma_transport, transport);
if (rtransport->event_channel == NULL) {
return;
}
state = rqpair->ibv_attr.qp_state;
next_state = state;
while (1) {
rc = rdma_get_cm_event(rtransport->event_channel, &event);
if (rc == 0) {
SPDK_DEBUGLOG(SPDK_LOG_RDMA, "Acceptor Event: %s\n", CM_EVENT_STR[event->event]);
SPDK_NOTICELOG("RDMA qpair %u is in state: %s\n",
rqpair->qpair.qid,
str_ibv_qp_state[state]);
spdk_trace_record(TRACE_RDMA_CM_ASYNC_EVENT, 0, 0, 0, event->event);
if (!(state == IBV_QPS_ERR || state == IBV_QPS_RESET)) {
SPDK_ERRLOG("Can't recover RDMA qpair %u from the state: %s\n",
rqpair->qpair.qid,
str_ibv_qp_state[state]);
spdk_nvmf_qpair_disconnect(&rqpair->qpair, NULL, NULL);
return;
switch (event->event) {
case RDMA_CM_EVENT_ADDR_RESOLVED:
case RDMA_CM_EVENT_ADDR_ERROR:
case RDMA_CM_EVENT_ROUTE_RESOLVED:
case RDMA_CM_EVENT_ROUTE_ERROR:
/* No action required. The target never attempts to resolve routes. */
break;
case RDMA_CM_EVENT_CONNECT_REQUEST:
rc = nvmf_rdma_connect(transport, event, cb_fn);
if (rc < 0) {
SPDK_ERRLOG("Unable to process connect event. rc: %d\n", rc);
break;
}
recovered = 0;
while (!recovered) {
switch (state) {
case IBV_QPS_ERR:
next_state = IBV_QPS_RESET;
break;
case IBV_QPS_RESET:
next_state = IBV_QPS_INIT;
case RDMA_CM_EVENT_CONNECT_RESPONSE:
/* The target never initiates a new connection. So this will not occur. */
break;
case IBV_QPS_INIT:
next_state = IBV_QPS_RTR;
case RDMA_CM_EVENT_CONNECT_ERROR:
/* Can this happen? The docs say it can, but not sure what causes it. */
break;
case IBV_QPS_RTR:
next_state = IBV_QPS_RTS;
case RDMA_CM_EVENT_UNREACHABLE:
case RDMA_CM_EVENT_REJECTED:
/* These only occur on the client side. */
break;
case IBV_QPS_RTS:
recovered = 1;
case RDMA_CM_EVENT_ESTABLISHED:
/* TODO: Should we be waiting for this event anywhere? */
break;
case RDMA_CM_EVENT_DISCONNECTED:
case RDMA_CM_EVENT_DEVICE_REMOVAL:
rc = nvmf_rdma_disconnect(event);
if (rc < 0) {
SPDK_ERRLOG("Unable to process disconnect event. rc: %d\n", rc);
break;
}
break;
case RDMA_CM_EVENT_MULTICAST_JOIN:
case RDMA_CM_EVENT_MULTICAST_ERROR:
/* Multicast is not used */
break;
case RDMA_CM_EVENT_ADDR_CHANGE:
/* Not utilizing this event */
break;
case RDMA_CM_EVENT_TIMEWAIT_EXIT:
/* For now, do nothing. The target never re-uses queue pairs. */
break;
default:
SPDK_ERRLOG("RDMA qpair %u unexpected state for recovery: %u\n",
rqpair->qpair.qid, state);
goto error;
}
/* Do not transition into same state */
if (next_state == state) {
SPDK_ERRLOG("Unexpected Acceptor Event [%d]\n", event->event);
break;
}
if (spdk_nvmf_rdma_set_ibv_state(rqpair, next_state)) {
goto error;
rdma_ack_cm_event(event);
} else {
if (errno != EAGAIN && errno != EWOULDBLOCK) {
SPDK_ERRLOG("Acceptor Event Error: %s\n", spdk_strerror(errno));
}
state = next_state;
break;
}
rtransport = SPDK_CONTAINEROF(rqpair->qpair.transport,
struct spdk_nvmf_rdma_transport,
transport);
spdk_nvmf_rdma_qpair_process_pending(rtransport, rqpair);
return;
error:
SPDK_NOTICELOG("RDMA qpair %u: recovery failed, disconnecting...\n",
rqpair->qpair.qid);
spdk_nvmf_qpair_disconnect(&rqpair->qpair, NULL, NULL);
}
/* Clean up only the states that can be aborted at any time */
static void
_spdk_nvmf_rdma_qp_cleanup_safe_states(struct spdk_nvmf_rdma_qpair *rqpair)
{
struct spdk_nvmf_rdma_request *rdma_req, *req_tmp;
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_NEW);
TAILQ_FOREACH_SAFE(rdma_req, &rqpair->state_queue[RDMA_REQUEST_STATE_NEED_BUFFER], link, req_tmp) {
TAILQ_REMOVE(&rqpair->ch->pending_data_buf_queue, rdma_req, link);
}
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_NEED_BUFFER);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_DATA_TRANSFER_PENDING);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_READY_TO_EXECUTE);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_EXECUTED);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_READY_TO_COMPLETE);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_COMPLETED);
}
/* This cleans up all memory. It is only safe to use if the rest of the software stack
* has been shut down */
static void
_spdk_nvmf_rdma_qp_cleanup_all_states(struct spdk_nvmf_rdma_qpair *rqpair)
{
_spdk_nvmf_rdma_qp_cleanup_safe_states(rqpair);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_EXECUTING);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_TRANSFERRING_HOST_TO_CONTROLLER);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_COMPLETING);
}
static void
_spdk_nvmf_rdma_qp_error(void *arg)
{
struct spdk_nvmf_rdma_qpair *rqpair = arg;
enum ibv_qp_state state;
spdk_nvmf_rdma_qpair_dec_refcnt(rqpair);
state = rqpair->ibv_attr.qp_state;
if (state != IBV_QPS_ERR) {
/* Error was already recovered */
return;
}
if (spdk_nvmf_qpair_is_admin_queue(&rqpair->qpair)) {
spdk_nvmf_ctrlr_abort_aer(rqpair->qpair.ctrlr);
}
_spdk_nvmf_rdma_qp_cleanup_safe_states(rqpair);
/* Attempt recovery. This will exit without recovering if I/O requests
* are still outstanding */
spdk_nvmf_rdma_qpair_recover(rqpair);
}
static void
_spdk_nvmf_rdma_qp_last_wqe(void *arg)
{
struct spdk_nvmf_rdma_qpair *rqpair = arg;
enum ibv_qp_state state;
spdk_nvmf_rdma_qpair_dec_refcnt(rqpair);
state = rqpair->ibv_attr.qp_state;
if (state != IBV_QPS_ERR) {
/* Error was already recovered */
return;
}
/* Clear out the states that are safe to clear any time, plus the
* RDMA data transfer states. */
_spdk_nvmf_rdma_qp_cleanup_safe_states(rqpair);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_TRANSFERRING_HOST_TO_CONTROLLER);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST);
spdk_nvmf_rdma_drain_state_queue(rqpair, RDMA_REQUEST_STATE_COMPLETING);
spdk_nvmf_rdma_qpair_recover(rqpair);
}
static void
@@ -2320,15 +2182,10 @@ spdk_nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
(uintptr_t)rqpair->cm_id, event.event_type);
spdk_nvmf_rdma_update_ibv_state(rqpair);
spdk_nvmf_rdma_qpair_inc_refcnt(rqpair);
spdk_thread_send_msg(rqpair->qpair.group->thread, _spdk_nvmf_rdma_qp_error, rqpair);
spdk_thread_send_msg(rqpair->qpair.group->thread, _nvmf_rdma_qpair_disconnect, &rqpair->qpair);
break;
case IBV_EVENT_QP_LAST_WQE_REACHED:
rqpair = event.element.qp->qp_context;
spdk_trace_record(TRACE_RDMA_IBV_ASYNC_EVENT, 0, 0,
(uintptr_t)rqpair->cm_id, event.event_type);
spdk_nvmf_rdma_update_ibv_state(rqpair);
spdk_nvmf_rdma_qpair_inc_refcnt(rqpair);
spdk_thread_send_msg(rqpair->qpair.group->thread, _spdk_nvmf_rdma_qp_last_wqe, rqpair);
/* This event only occurs for shared receive queues, which are not currently supported. */
break;
case IBV_EVENT_SQ_DRAINED:
/* This event occurs frequently in both error and non-error states.
@@ -2342,7 +2199,7 @@ spdk_nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
state = spdk_nvmf_rdma_update_ibv_state(rqpair);
if (state == IBV_QPS_ERR) {
spdk_nvmf_rdma_qpair_inc_refcnt(rqpair);
spdk_thread_send_msg(rqpair->qpair.group->thread, _spdk_nvmf_rdma_qp_error, rqpair);
spdk_thread_send_msg(rqpair->qpair.group->thread, _nvmf_rdma_qpair_disconnect, &rqpair->qpair);
}
break;
case IBV_EVENT_QP_REQ_ERR:
@@ -2493,7 +2350,6 @@ spdk_nvmf_rdma_poll_group_destroy(struct spdk_nvmf_transport_poll_group *group)
ibv_destroy_cq(poller->cq);
}
TAILQ_FOREACH_SAFE(qpair, &poller->qpairs, link, tmp_qpair) {
_spdk_nvmf_rdma_qp_cleanup_all_states(qpair);
spdk_nvmf_rdma_qpair_destroy(qpair);
}
@@ -2606,12 +2462,6 @@ spdk_nvmf_rdma_request_complete(struct spdk_nvmf_request *req)
spdk_nvmf_rdma_request_process(rtransport, rdma_req);
if (rqpair->qpair.state == SPDK_NVMF_QPAIR_ACTIVE && rqpair->ibv_attr.qp_state == IBV_QPS_ERR) {
/* If the NVMe-oF layer thinks the connection is active, but the RDMA layer thinks
* the connection is dead, perform error recovery. */
spdk_nvmf_rdma_qpair_recover(rqpair);
}
return 0;
}
@@ -2619,47 +2469,40 @@ static void
spdk_nvmf_rdma_close_qpair(struct spdk_nvmf_qpair *qpair)
{
struct spdk_nvmf_rdma_qpair *rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
struct ibv_recv_wr recv_wr = {};
struct ibv_recv_wr *bad_recv_wr;
struct ibv_send_wr send_wr = {};
struct ibv_send_wr *bad_send_wr;
int rc;
spdk_nvmf_rdma_qpair_destroy(rqpair);
if (rqpair->disconnect_flags & RDMA_QP_DISCONNECTING) {
return;
}
static struct spdk_nvmf_rdma_request *
get_rdma_req_from_wc(struct ibv_wc *wc)
{
struct spdk_nvmf_rdma_request *rdma_req;
rqpair->disconnect_flags |= RDMA_QP_DISCONNECTING;
rdma_req = (struct spdk_nvmf_rdma_request *)wc->wr_id;
assert(rdma_req != NULL);
#ifdef DEBUG
struct spdk_nvmf_rdma_qpair *rqpair;
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
assert(rdma_req - rqpair->reqs >= 0);
assert(rdma_req - rqpair->reqs < (ptrdiff_t)rqpair->max_queue_depth);
#endif
return rdma_req;
if (rqpair->ibv_attr.qp_state != IBV_QPS_ERR) {
spdk_nvmf_rdma_set_ibv_state(rqpair, IBV_QPS_ERR);
}
static struct spdk_nvmf_rdma_recv *
get_rdma_recv_from_wc(struct ibv_wc *wc)
{
struct spdk_nvmf_rdma_recv *rdma_recv;
rqpair->drain_recv_wr.type = RDMA_WR_TYPE_DRAIN_RECV;
recv_wr.wr_id = (uintptr_t)&rqpair->drain_recv_wr;
rc = ibv_post_recv(rqpair->cm_id->qp, &recv_wr, &bad_recv_wr);
if (rc) {
SPDK_ERRLOG("Failed to post dummy receive WR, errno %d\n", errno);
assert(false);
return;
}
assert(wc->byte_len >= sizeof(struct spdk_nvmf_capsule_cmd));
rdma_recv = (struct spdk_nvmf_rdma_recv *)wc->wr_id;
assert(rdma_recv != NULL);
#ifdef DEBUG
struct spdk_nvmf_rdma_qpair *rqpair = rdma_recv->qpair;
assert(rdma_recv - rqpair->recvs >= 0);
assert(rdma_recv - rqpair->recvs < (ptrdiff_t)rqpair->max_queue_depth);
#endif
return rdma_recv;
rqpair->drain_send_wr.type = RDMA_WR_TYPE_DRAIN_SEND;
send_wr.wr_id = (uintptr_t)&rqpair->drain_send_wr;
send_wr.opcode = IBV_WR_SEND;
rc = ibv_post_send(rqpair->cm_id->qp, &send_wr, &bad_send_wr);
if (rc) {
SPDK_ERRLOG("Failed to post dummy send WR, errno %d\n", errno);
assert(false);
return;
}
}
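The drain sequence above rests on a verbs guarantee: once a QP has been forced into IBV_QPS_ERR, WRs complete in posting order with flush errors, so the two dummy WRs are necessarily the last completions reaped on the send and receive queues. Destruction is therefore deferred to the poller, which calls spdk_nvmf_rdma_qpair_destroy() only once both RDMA_QP_SEND_DRAINED and RDMA_QP_RECV_DRAINED have been observed (the RDMA_WR_TYPE_DRAIN_* cases below), and the '+ 1' added to max_send_wr and max_recv_wr earlier in this file reserves queue slots for these markers.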
#ifdef DEBUG
@@ -2676,6 +2519,7 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
struct spdk_nvmf_rdma_poller *rpoller)
{
struct ibv_wc wc[32];
struct spdk_nvmf_rdma_wr *rdma_wr;
struct spdk_nvmf_rdma_request *rdma_req;
struct spdk_nvmf_rdma_recv *rdma_recv;
struct spdk_nvmf_rdma_qpair *rqpair;
@@ -2692,15 +2536,19 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
}
for (i = 0; i < reaped; i++) {
rdma_wr = (struct spdk_nvmf_rdma_wr *)wc[i].wr_id;
/* Handle error conditions */
if (wc[i].status) {
SPDK_WARNLOG("CQ error on CQ %p, Request 0x%lu (%d): %s\n",
SPDK_DEBUGLOG(SPDK_LOG_RDMA, "CQ error on CQ %p, Request 0x%lu (%d): %s\n",
rpoller->cq, wc[i].wr_id, wc[i].status, ibv_wc_status_str(wc[i].status));
error = true;
switch (wc[i].opcode) {
case IBV_WC_SEND:
rdma_req = get_rdma_req_from_wc(&wc[i]);
switch (rdma_wr->type) {
case RDMA_WR_TYPE_SEND:
rdma_req = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_request, rsp.rdma_wr);
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
/* We're going to attempt an error recovery, so force the request into
@@ -2708,35 +2556,57 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
spdk_nvmf_rdma_request_process(rtransport, rdma_req);
break;
case IBV_WC_RECV:
rdma_recv = get_rdma_recv_from_wc(&wc[i]);
case RDMA_WR_TYPE_RECV:
rdma_recv = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_recv, rdma_wr);
rqpair = rdma_recv->qpair;
/* Dump this into the incoming queue. This gets cleaned up when
* the queue pair disconnects or recovers. */
TAILQ_INSERT_TAIL(&rqpair->incoming_queue, rdma_recv, link);
break;
case IBV_WC_RDMA_WRITE:
case IBV_WC_RDMA_READ:
case RDMA_WR_TYPE_DATA:
/* If the data transfer fails still force the queue into the error state,
* but the rdma_req objects should only be manipulated in response to
* SEND and RECV operations. */
rdma_req = get_rdma_req_from_wc(&wc[i]);
rdma_req = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_request, data.rdma_wr);
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
break;
case RDMA_WR_TYPE_DRAIN_RECV:
rqpair = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_qpair, drain_recv_wr);
assert(rqpair->disconnect_flags & RDMA_QP_DISCONNECTING);
SPDK_DEBUGLOG(SPDK_LOG_RDMA, "Drained QP RECV %u (%p)\n", rqpair->qpair.qid, rqpair);
rqpair->disconnect_flags |= RDMA_QP_RECV_DRAINED;
if (rqpair->disconnect_flags & RDMA_QP_SEND_DRAINED) {
spdk_nvmf_rdma_qpair_destroy(rqpair);
}
/* Continue so that this does not trigger the disconnect path below. */
continue;
case RDMA_WR_TYPE_DRAIN_SEND:
rqpair = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_qpair, drain_send_wr);
assert(rqpair->disconnect_flags & RDMA_QP_DISCONNECTING);
SPDK_DEBUGLOG(SPDK_LOG_RDMA, "Drained QP SEND %u (%p)\n", rqpair->qpair.qid, rqpair);
rqpair->disconnect_flags |= RDMA_QP_SEND_DRAINED;
if (rqpair->disconnect_flags & RDMA_QP_RECV_DRAINED) {
spdk_nvmf_rdma_qpair_destroy(rqpair);
}
/* Continue so that this does not trigger the disconnect path below. */
continue;
default:
SPDK_ERRLOG("Received an unknown opcode on the CQ: %d\n", wc[i].opcode);
continue;
}
/* Set the qpair to the error state. This will initiate a recovery. */
spdk_nvmf_rdma_set_ibv_state(rqpair, IBV_QPS_ERR);
if (rqpair->qpair.state == SPDK_NVMF_QPAIR_ACTIVE) {
/* Disconnect the connection. */
spdk_nvmf_qpair_disconnect(&rqpair->qpair, NULL, NULL);
}
continue;
}
switch (wc[i].opcode) {
case IBV_WC_SEND:
rdma_req = get_rdma_req_from_wc(&wc[i]);
assert(rdma_wr->type == RDMA_WR_TYPE_SEND);
rdma_req = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_request, rsp.rdma_wr);
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
assert(spdk_nvmf_rdma_req_is_completing(rdma_req));
@@ -2751,7 +2621,8 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
break;
case IBV_WC_RDMA_WRITE:
rdma_req = get_rdma_req_from_wc(&wc[i]);
assert(rdma_wr->type == RDMA_WR_TYPE_DATA);
rdma_req = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_request, data.rdma_wr);
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
/* Try to process other queued requests */
@@ -2759,7 +2630,8 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
break;
case IBV_WC_RDMA_READ:
rdma_req = get_rdma_req_from_wc(&wc[i]);
assert(rdma_wr->type == RDMA_WR_TYPE_DATA);
rdma_req = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_request, data.rdma_wr);
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
assert(rdma_req->state == RDMA_REQUEST_STATE_TRANSFERRING_HOST_TO_CONTROLLER);
@@ -2771,7 +2643,8 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
break;
case IBV_WC_RECV:
rdma_recv = get_rdma_recv_from_wc(&wc[i]);
assert(rdma_wr->type == RDMA_WR_TYPE_RECV);
rdma_recv = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_recv, rdma_wr);
rqpair = rdma_recv->qpair;
TAILQ_INSERT_TAIL(&rqpair->incoming_queue, rdma_recv, link);

View File

@@ -1,8 +1,8 @@
/*-
* BSD LICENSE
*
* Copyright (c) Intel Corporation.
* All rights reserved.
* Copyright (c) Intel Corporation. All rights reserved.
* Copyright (c) 2018 Mellanox Technologies LTD. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -140,6 +140,9 @@ spdk_nvmf_transport_poll_group_create(struct spdk_nvmf_transport *transport)
struct spdk_nvmf_transport_poll_group *group;
group = transport->ops->poll_group_create(transport);
if (!group) {
return NULL;
}
group->transport = transport;
return group;

View File

@@ -530,7 +530,7 @@ spdk_get_io_channel(void *io_device)
pthread_mutex_unlock(&g_devlist_mutex);
rc = dev->create_cb(io_device, (uint8_t *)ch + sizeof(*ch));
if (rc == -1) {
if (rc != 0) {
pthread_mutex_lock(&g_devlist_mutex);
TAILQ_REMOVE(&ch->thread->io_channels, ch, tailq);
dev->refcnt--;
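A create_cb is free to fail with any nonzero code, so the previous '== -1' comparison treated other negative errno values as success and left a half-initialized channel on the thread's list; testing against zero closes that gap.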

View File

@@ -273,7 +273,12 @@ endef
INSTALL_SHARED_LIB=\
$(Q)echo " INSTALL $(DESTDIR)$(libdir)/$(notdir $(SHARED_LINKED_LIB))"; \
install -d -m 755 "$(DESTDIR)$(libdir)"; \
install -m 755 "$(SHARED_REALNAME_LIB)" "$(DESTDIR)$(libdir)/"; \
if file --mime-type $(SHARED_REALNAME_LIB) | grep -q 'application/x-sharedlib'; then \
perm_mode=755; \
else \
perm_mode=644; \
fi; \
install -m $$perm_mode "$(SHARED_REALNAME_LIB)" "$(DESTDIR)$(libdir)/"; \
$(call spdk_install_lib_symlink,$(notdir $(SHARED_REALNAME_LIB)),$(notdir $(SHARED_LINKED_LIB)));
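The mime-type probe appears intended to install a real ELF shared object as executable (mode 755) while letting anything else that ends up under $(SHARED_REALNAME_LIB), such as a non-ELF artifact, land as plain data (mode 644).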
# Install an app binary
@@ -282,6 +287,11 @@ INSTALL_APP=\
install -d -m 755 "$(DESTDIR)$(bindir)"; \
install -m 755 "$(APP)" "$(DESTDIR)$(bindir)/"
INSTALL_EXAMPLE=\
$(Q)echo " INSTALL $(DESTDIR)$(bindir)/spdk_$(strip $(subst /,_,$(subst $(SPDK_ROOT_DIR)/examples/, ,$(CURDIR))))"; \
install -d -m 755 "$(DESTDIR)$(bindir)"; \
install -m 755 "$(APP)" "$(DESTDIR)$(bindir)/spdk_$(strip $(subst /,_,$(subst $(SPDK_ROOT_DIR)/examples/, ,$(CURDIR))))"
# Install a header
INSTALL_HEADER=\
$(Q)echo " INSTALL $@"; \

171
pkg/spdk.spec Normal file
View File

@@ -0,0 +1,171 @@
# Build documentation package
%bcond_with doc
Name: spdk
Version: 18.10.1
Release: 0%{?dist}
Epoch: 0
URL: http://spdk.io
Source: https://github.com/spdk/spdk/archive/v18.10.1.tar.gz#/%{name}-%{version}.tar.gz
Summary: Set of libraries and utilities for high performance user-mode storage
%define spdk_build_dir %{name}-%{version}
%define package_version %{epoch}:%{version}-%{release}
%define install_datadir %{buildroot}/%{_datadir}/%{name}
%define install_sbindir %{buildroot}/%{_sbindir}
%define install_docdir %{buildroot}/%{_docdir}/%{name}
# Distros that don't support python3 will use python2
%if "%{dist}" == ".el7"
%define use_python2 1
%else
%define use_python2 0
%endif
License: BSD
# Only x86_64 is supported
ExclusiveArch: x86_64
BuildRequires: gcc gcc-c++ make
BuildRequires: dpdk-devel, numactl-devel
BuildRequires: libiscsi-devel, libaio-devel, openssl-devel, libuuid-devel
BuildRequires: libibverbs-devel, librdmacm-devel
%if %{with doc}
BuildRequires: doxygen mscgen graphviz
%endif
# Install dependencies
Requires: dpdk >= 17.11, numactl-libs, openssl-libs
Requires: libiscsi, libaio, libuuid
# NVMe over Fabrics
Requires: libibverbs, librdmacm
Requires(post): /sbin/ldconfig
Requires(postun): /sbin/ldconfig
%description
The Storage Performance Development Kit provides a set of tools
and libraries for writing high performance, scalable, user-mode storage
applications.
%package devel
Summary: Storage Performance Development Kit development files
Requires: %{name}%{?_isa} = %{package_version}
Provides: %{name}-static%{?_isa} = %{package_version}
%description devel
This package contains the headers and other files needed for
developing applications with the Storage Performance Development Kit.
%package tools
Summary: Storage Performance Development Kit tools files
%if "%{use_python2}" == "0"
Requires: %{name}%{?_isa} = %{package_version} python3 python3-configshell python3-pexpect
%else
Requires: %{name}%{?_isa} = %{package_version} python python-configshell pexpect
%endif
BuildArch: noarch
%description tools
%{summary}
%if %{with doc}
%package doc
Summary: Storage Performance Development Kit documentation
BuildArch: noarch
%description doc
%{summary}
%endif
%prep
# add -q
%autosetup -n %{spdk_build_dir}
%build
./configure --prefix=%{_usr} \
--disable-tests \
--without-crypto \
--with-dpdk=/usr/share/dpdk/x86_64-default-linuxapp-gcc \
--without-fio \
--with-vhost \
--without-pmdk \
--without-vpp \
--without-rbd \
--with-rdma \
--with-shared \
--with-iscsi-initiator \
--without-vtune
make -j`nproc` all
%if %{with doc}
make -C doc
%endif
%install
%make_install -j`nproc` prefix=%{_usr} libdir=%{_libdir} datadir=%{_datadir}
# Install tools
mkdir -p %{install_datadir}
find scripts -type f -regextype egrep -regex '.*(spdkcli|rpc).*[.]py' \
-exec cp --parents -t %{install_datadir} {} ";"
# env is banned - replace '/usr/bin/env anything' with '/usr/bin/anything'
find %{install_datadir}/scripts -type f -regextype egrep -regex '.*([.]py|[.]sh)' \
-exec sed -i -E '1s@#!/usr/bin/env (.*)@#!/usr/bin/\1@' {} +
%if "%{use_python2}" == "1"
find %{install_datadir}/scripts -type f -regextype egrep -regex '.*([.]py)' \
-exec sed -i -E '1s@#!/usr/bin/python3@#!/usr/bin/python2@' {} +
%endif
# symlinks to tools
mkdir -p %{install_sbindir}
ln -sf -r %{install_datadir}/scripts/rpc.py %{install_sbindir}/%{name}-rpc
ln -sf -r %{install_datadir}/scripts/spdkcli.py %{install_sbindir}/%{name}-cli
%if %{with doc}
# Install doc
mkdir -p %{install_docdir}
mv doc/output/html/ %{install_docdir}
%endif
%post -p /sbin/ldconfig
%postun -p /sbin/ldconfig
%files
%{_bindir}/spdk_*
%{_libdir}/*.so.*
%files devel
%{_includedir}/%{name}
%{_libdir}/*.a
%{_libdir}/*.so
%files tools
%{_datadir}/%{name}/scripts
%{_sbindir}/%{name}-rpc
%{_sbindir}/%{name}-cli
%if %{with doc}
%files doc
%{_docdir}/%{name}
%endif
%changelog
* Tue Sep 18 2018 Pawel Wodkowski <pawelx.wodkowski@intel.com> - 0:18.07-3
- Initial RPM release

View File

@@ -257,6 +257,12 @@ test_mem_map_translation(void)
CU_ASSERT(addr == 0);
CU_ASSERT(mapping_length == VALUE_2MB * 3)
/* Translate an unaligned address */
mapping_length = VALUE_2MB * 3;
addr = spdk_mem_map_translate(map, VALUE_4KB, &mapping_length);
CU_ASSERT(addr == 0);
CU_ASSERT(mapping_length == VALUE_2MB * 3 - VALUE_4KB);
/* Clear translation for the middle page of the larger region. */
rc = spdk_mem_map_clear_translation(map, VALUE_2MB, VALUE_2MB);
CU_ASSERT(rc == 0);
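The unaligned case pins down the contract the RDMA transport change above relies on: the reported mapping length is measured from the queried address to the end of the contiguous region, hence VALUE_2MB * 3 - VALUE_4KB rather than the full region size.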

View File

@@ -109,7 +109,18 @@ _get_xattr_value_null(void *arg, const char *name,
*value = NULL;
}
static int
_get_snapshots_count(struct spdk_blob_store *bs)
{
struct spdk_blob_list *snapshot = NULL;
int count = 0;
TAILQ_FOREACH(snapshot, &bs->snapshots, link) {
count += 1;
}
return count;
}
static void
bs_op_complete(void *cb_arg, int bserrno)
@@ -524,6 +535,7 @@ blob_snapshot(void)
struct spdk_blob_xattr_opts xattrs;
spdk_blob_id blobid;
spdk_blob_id snapshotid;
spdk_blob_id snapshotid2;
const void *value;
size_t value_len;
int rc;
@@ -551,9 +563,11 @@
CU_ASSERT(spdk_blob_get_num_clusters(blob) == 10)
/* Create snapshot from blob */
CU_ASSERT_EQUAL(_get_snapshots_count(bs), 0);
spdk_bs_create_snapshot(bs, blobid, NULL, blob_op_with_id_complete, NULL);
CU_ASSERT(g_bserrno == 0);
CU_ASSERT(g_blobid != SPDK_BLOBID_INVALID);
CU_ASSERT_EQUAL(_get_snapshots_count(bs), 1);
snapshotid = g_blobid;
spdk_bs_open_blob(bs, snapshotid, blob_op_with_handle_complete, NULL);
@@ -577,9 +591,10 @@
spdk_bs_create_snapshot(bs, blobid, &xattrs, blob_op_with_id_complete, NULL);
CU_ASSERT(g_bserrno == 0);
CU_ASSERT(g_blobid != SPDK_BLOBID_INVALID);
blobid = g_blobid;
CU_ASSERT_EQUAL(_get_snapshots_count(bs), 2);
snapshotid2 = g_blobid;
spdk_bs_open_blob(bs, blobid, blob_op_with_handle_complete, NULL);
spdk_bs_open_blob(bs, snapshotid2, blob_op_with_handle_complete, NULL);
CU_ASSERT(g_bserrno == 0);
SPDK_CU_ASSERT_FATAL(g_blob != NULL);
snapshot2 = g_blob;
@@ -620,16 +635,29 @@
spdk_bs_create_snapshot(bs, snapshotid, NULL, blob_op_with_id_complete, NULL);
CU_ASSERT(g_bserrno == -EINVAL);
CU_ASSERT(g_blobid == SPDK_BLOBID_INVALID);
CU_ASSERT_EQUAL(_get_snapshots_count(bs), 2);
spdk_blob_close(blob, blob_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
spdk_blob_close(snapshot, blob_op_complete, NULL);
spdk_bs_delete_blob(bs, blobid, blob_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
CU_ASSERT_EQUAL(_get_snapshots_count(bs), 2);
spdk_blob_close(snapshot2, blob_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
spdk_bs_delete_blob(bs, snapshotid2, blob_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
CU_ASSERT_EQUAL(_get_snapshots_count(bs), 1);
spdk_blob_close(snapshot, blob_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
spdk_bs_delete_blob(bs, snapshotid, blob_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
CU_ASSERT_EQUAL(_get_snapshots_count(bs), 0);
spdk_bs_unload(g_bs, bs_op_complete, NULL);
CU_ASSERT(g_bserrno == 0);
g_bs = NULL;

View File

@@ -279,6 +279,18 @@ spdk_iscsi_tgt_node_cleanup_luns(struct spdk_iscsi_conn *conn,
return 0;
}
uint32_t
spdk_iscsi_pdu_calc_header_digest(struct spdk_iscsi_pdu *pdu)
{
return 0;
}
uint32_t
spdk_iscsi_pdu_calc_data_digest(struct spdk_iscsi_pdu *pdu)
{
return 0;
}
void
spdk_shutdown_iscsi_conns_done(void)
{

View File

@@ -39,6 +39,8 @@
#include "common/lib/test_env.c"
#define OCSSD_SECTOR_SIZE 0x1000
DEFINE_STUB(spdk_nvme_qpair_process_completions, int32_t,
(struct spdk_nvme_qpair *qpair,
uint32_t max_completions), 0);
@@ -176,7 +178,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_reset_single_entry(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
struct spdk_nvme_ns ns;
struct spdk_nvme_ctrlr ctrlr;
@@ -206,7 +208,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_reset(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
const uint32_t vector_size = 0x10;
struct spdk_nvme_ns ns;
@@ -236,7 +238,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
const uint32_t md_size = 0x80;
struct spdk_nvme_ns ns;
@@ -261,7 +263,7 @@ test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry(void)
SPDK_CU_ASSERT_FATAL(g_request->num_children == 0);
CU_ASSERT(g_request->payload.md == metadata);
CU_ASSERT(g_request->payload_size == PAGE_SIZE);
CU_ASSERT(g_request->payload_size == OCSSD_SECTOR_SIZE);
CU_ASSERT(g_request->payload.contig_or_cb_arg == buffer);
CU_ASSERT(g_request->cmd.opc == SPDK_OCSSD_OPC_VECTOR_READ);
CU_ASSERT(g_request->cmd.nsid == ns.id);
@@ -279,7 +281,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_read_with_md(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
const uint32_t md_size = 0x80;
const uint32_t vector_size = 0x10;
@@ -323,7 +325,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_read_single_entry(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
struct spdk_nvme_ns ns;
struct spdk_nvme_ctrlr ctrlr;
@@ -344,7 +346,7 @@ test_nvme_ocssd_ns_cmd_vector_read_single_entry(void)
SPDK_CU_ASSERT_FATAL(g_request != NULL);
SPDK_CU_ASSERT_FATAL(g_request->num_children == 0);
CU_ASSERT(g_request->payload_size == PAGE_SIZE);
CU_ASSERT(g_request->payload_size == OCSSD_SECTOR_SIZE);
CU_ASSERT(g_request->payload.contig_or_cb_arg == buffer);
CU_ASSERT(g_request->cmd.opc == SPDK_OCSSD_OPC_VECTOR_READ);
CU_ASSERT(g_request->cmd.nsid == ns.id);
@@ -360,7 +362,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_read(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
const uint32_t vector_size = 0x10;
struct spdk_nvme_ns ns;
@@ -397,7 +399,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
const uint32_t md_size = 0x80;
struct spdk_nvme_ns ns;
@@ -422,7 +424,7 @@ test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry(void)
SPDK_CU_ASSERT_FATAL(g_request->num_children == 0);
CU_ASSERT(g_request->payload.md == metadata);
CU_ASSERT(g_request->payload_size == PAGE_SIZE);
CU_ASSERT(g_request->payload_size == OCSSD_SECTOR_SIZE);
CU_ASSERT(g_request->payload.contig_or_cb_arg == buffer);
CU_ASSERT(g_request->cmd.opc == SPDK_OCSSD_OPC_VECTOR_WRITE);
CU_ASSERT(g_request->cmd.nsid == ns.id);
@@ -441,7 +443,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_write_with_md(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
const uint32_t md_size = 0x80;
const uint32_t vector_size = 0x10;
@@ -485,7 +487,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_write_single_entry(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
struct spdk_nvme_ns ns;
struct spdk_nvme_ctrlr ctrlr;
@@ -506,7 +508,7 @@ test_nvme_ocssd_ns_cmd_vector_write_single_entry(void)
SPDK_CU_ASSERT_FATAL(g_request != NULL);
SPDK_CU_ASSERT_FATAL(g_request->num_children == 0);
CU_ASSERT(g_request->payload_size == PAGE_SIZE);
CU_ASSERT(g_request->payload_size == OCSSD_SECTOR_SIZE);
CU_ASSERT(g_request->payload.contig_or_cb_arg == buffer);
CU_ASSERT(g_request->cmd.opc == SPDK_OCSSD_OPC_VECTOR_WRITE);
CU_ASSERT(g_request->cmd.nsid == ns.id);
@@ -523,7 +525,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_write(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
const uint32_t vector_size = 0x10;
struct spdk_nvme_ns ns;
@@ -562,7 +564,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_copy_single_entry(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
struct spdk_nvme_ns ns;
struct spdk_nvme_ctrlr ctrlr;
@@ -594,7 +596,7 @@ static void
test_nvme_ocssd_ns_cmd_vector_copy(void)
{
const uint32_t max_xfer_size = 0x10000;
const uint32_t sector_size = 0x1000;
const uint32_t sector_size = OCSSD_SECTOR_SIZE;
const uint32_t vector_size = 0x10;
struct spdk_nvme_ns ns;