Compare commits

...

71 Commits

Author SHA1 Message Date
Jim Harris
3f5e32adca test/rocksdb: add rocksdb_commit_id file
This signals which RocksDB commit should be checked
out for the SPDK RocksDB tests.

Also point the rocksdb.sh test script to where
this version of RocksDB will be cloned.

(Note: this is a modified version of what was merged to
master.)

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I1ba0be00747a2642b359b1e0e0c8c2c6d99cc4f0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/451772 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452482
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-05-01 23:09:45 +00:00
Jim Harris
089585c8d7 rocksdb: use C++ constructor for global channel
This is similar to what's been done on master, but
19.01 doesn't have the fs_thread_ctx changes, so this
looks a bit different.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9578bf0f17953b4a7a120de6718cb97258719447

Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452784
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-05-01 23:09:45 +00:00
Jim Harris
fe3a2c4dcd test/rocksdb: suppress leak reports on thread local ctx
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I77b9f640d75c12ec083bec791506bed921e26292

Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452733
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-05-01 23:09:45 +00:00
Wojciech Malikowski
13cfc610d0 lib/ftl: Free IO in case band's relocation was interrupted by shutdown
This leak could be detected by ASAN in FTL CI tests.

Change-Id: I3ab7317dd5288b9fc808fb476627213b00860eb8
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448566 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448828
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-27 17:41:14 +00:00
Changpeng Liu
6d8f66269d ftl: free allocated IO queue pair before releasing the controller
Intermittent FTL test failure (ASAN) #717 reported an error: in the
ftl_halt_poller() call, ftl_anm_unregister_device() releases the
controller first, while in ftl_dev_free_sync() the IO queue pair
is released again.

Change-Id: Ifac2aa68e66ee5f41eba80c11c61d9dc91ec3408
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448524 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448827
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-03-22 18:31:05 +00:00
Pawel Niedzwiecki
d1a00ccd13 test/ftl: Change fio_plugin from basic test to bdevperf
FTL tests won't pass with fio_plugin when ASAN is enabled.

Change-Id: I6f07f661c19ecab302e291bbd76a7aad964000c7
Signed-off-by: Pawel Niedzwiecki <pawelx.niedzwiecki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447318 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448716
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-03-22 18:31:05 +00:00
Wojciech Malikowski
9de25dc80d lib/ftl: Fix memory leak in restore module
Change-Id: I39c89ef935eeac56fd860b11e1fafd5047072f7e
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448023 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448715
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
2019-03-22 18:31:05 +00:00
Tomasz Zawadzki
4d21fba0c5 version: 19.01.2-pre
Change-Id: I5a1c0419f9510c15270b3c510f5438bc7932b62f
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448714
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
2019-03-22 18:31:05 +00:00
Tomasz Zawadzki
e1c4f011e1 SPDK 19.01.1
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ib2e7508109add1a8125cafc64cf13d31216ac6a6
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447680
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-15 09:32:51 +00:00
Tomasz Zawadzki
c58f0e9117 CHANGELOG: updated for v19.01.1
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I5e9f021370d2b46b0da0199b1915f593842c37e8
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447679
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-15 09:32:51 +00:00
Pawel Wodkowski
05b978da0c bdev: don't allow multiple unregister calls
Unregister calls are not guarded. Fix this by checking the status before
doing the unregister.
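For illustration, a minimal sketch of the guard idea, assuming a hypothetical
bdev structure with an unregister flag (the names below are illustrative, not
the actual SPDK members):

```
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical bdev with an unregister-status flag. */
struct my_bdev {
	pthread_mutex_t mutex;
	bool unregistering;
};

static void do_unregister(struct my_bdev *bdev) { (void)bdev; /* teardown work */ }

/* Only the first caller proceeds; later callers see the flag and bail out. */
static int
my_bdev_unregister(struct my_bdev *bdev)
{
	pthread_mutex_lock(&bdev->mutex);
	if (bdev->unregistering) {
		pthread_mutex_unlock(&bdev->mutex);
		return -EBUSY;
	}
	bdev->unregistering = true;
	pthread_mutex_unlock(&bdev->mutex);

	do_unregister(bdev);
	return 0;
}
```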

Change-Id: I593e27efdae17f6d89362fd8e4edccf2af2b7281
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/445894 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447943
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
2019-03-15 06:54:43 +00:00
Tomasz Zawadzki
e5c6a69ed5 lvol: add option to change clear method for lvol store creation
The default 'unmap' option stays as it was.

'write_zeroes' is useful when one wants to make sure
that the data presented from lvol bdevs on initial creation reads as zeroes.

'none' will be used for performance tests,
where the whole device is preconditioned before creating the lvol store,
instead of performing preconditioning on each lvol bdev after its creation.

Change-Id: Ic5a5985e42a84f038a882bbe6f881624ae96242c
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442881 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447460
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-13 17:08:57 +00:00
Ziye Yang
010e9a7338 nvme/tcp: fix the lvol creation failure issue
This patch fixes issue:
https://github.com/spdk/spdk/issues/638

Reason: for SGL support, the implementation of the
nvme_tcp_pdu_set_data_buf function is not correct.
The translation is wrong for in-capsule data
when using SGL. In order not to redo the translation
by calling the SGL function again, we use a variable
to store the buffer.

Change-Id: I580d266d85a1a805b5f168271acac25e5fd60190
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/444066 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447584
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-13 17:08:57 +00:00
Seth Howell
b335ab4765 rdma: change the logic of rdma_qpair_process_pending
I think this simplifies the process a little bit.

Change-Id: Icc87a59c9f6fd965ef35531975b7036d85c4bc95
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445916 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447622
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-13 17:08:57 +00:00
Seth Howell
d145d67c6b rdma: use an stailq for incoming_queue
Change-Id: Ib1e59db4c5dffc9bc21f26461dabeff0d171ad22
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445344 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447621
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-13 17:08:57 +00:00
Seth Howell
e11c4afaad rdma: remove the state_cntr variable.
We were only using one value from this array to tell us if the qpair was
idle or not. Remove this array and all of the functions that are no
longer needed after it is removed.

This series is aimed at reverting
fdec444aa8 which has been tied to
performance decreases on master.

Change-Id: Ia3627c1abd15baee8b16d07e436923d222e17ffe
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445336 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447620
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-13 17:08:57 +00:00
Seth Howell
dc3f8f8c58 rdma: remove reqs from read/write queues in error
Not doing so can cause us to hit asserts during the shutdown path. This
should fix an intermittent failure we are seeing on the test pool where
we hit the assert rdma_req->state != RDMA_REQUEST_STATE_FREE in
spdk_nvmf_rdma_request_process.

Note that this problem doesn't cause any data corruption when debug is
not enabled; it just causes us to process a subset of commands through
the state machine one extra time during qpair shutdown.

Change-Id: Ibc36bfea87ec4089b8e2c7a915f48714fddb0b09
Signed-off-by: Seth Howell <seth.howell@intel.com>
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447852
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-13 17:08:57 +00:00
Seth Howell
80c98d80b6 rdma.c: Create a single point of entry for qpair disconnect
Since there are multiple events/conditions that can trigger a qpair
disconnection, we need to funnel them to a single point of entry. If
more than one of these events occurs, we can ignore all but the first
since once a disconnect starts, it can't be stopped.

Change-Id: I749c9087a25779fcd5e3fe6685583a610ad983d3
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443305 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447619
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-13 08:17:19 +00:00
Ben Walker
40b4273a14 nvmf/rdma: Eliminate management channel
This is a holdover from before poll groups were introduced.
We just need a per-thread context for a set of connections,
so now that a poll group exists we can use that instead.

Change-Id: I1a91abf52dac6e77ea8505741519332548595c57
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442430 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447618
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-03-13 08:17:19 +00:00
Jim Harris
cf0d953044 blob: pass NULL or SPDK_BLOBID_INVALID when bserrno != 0
When an operation fails, we shouldn't pass a handle or
a 'valid' blob ID to the caller's completion function.
The caller *should* ignore it when bserrno != 0, but
it's best to not take that chance.

Fixes #685.

Note: #685 seems to have a broader issue related to
a possibly locked NVMe SSD in the submitter's system.
This only fixes the assert() that was hit.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3fb3368ccfe0580f0c505285d4b1e9aca797b6a6
Reviewed-on: https://review.gerrithub.io/c/445941 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447449
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 19:39:30 +00:00
Pawel Kaminski
85d6682dd4 spdkcli: Exit with 1 when rpc throws JSONRPCException
Fixes #593

Change-Id: Ib9eebdc1c74b82e8d193708b57afea7fefa7aa98
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443887 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447605
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-03-12 19:36:58 +00:00
Pawel Kaminski
83bbb8bb4b spdkcli: Add try-except section to delete_all commands
Call delete method for all objects in delete_all commands

Change-Id: Ib7eb05334b88aba214f1d28897e7e107f14c7cb8
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/444293 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447604
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
2019-03-12 19:36:58 +00:00
Pawel Kaminski
c2ed724e3b spdkcli: Refresh spdkcli tree after loading config
Change-Id: Id68c3914aab3800ccbf283daaada8c8de7bd6f93
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445687 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447601
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-03-12 19:36:58 +00:00
Piotr Pelplinski
a6f10a33cb fio_plugin: fix hang in FIO
This is a fix for https://github.com/spdk/spdk/issues/523

Fio hangs on pthread_exit(NULL) from an SPDK thread.
This happens because pthread_exit tries to dlopen glibc and hangs on
__lll_lock_wait. This patch prevents unmapping of glibc in the fio_plugin,
so pthread_exit does not need to dlopen it again.

Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: I5078cc55e24841675d6ef4ecba43879dc3f73a4f
Reviewed-on: https://review.gerrithub.io/c/443912 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447586
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
2019-03-12 19:28:16 +00:00
Jim Harris
c828d09d3a nvme: add SHST_COMPLETE quirk for VMWare emulated SSDs
VMWare Workstation NVMe emulation does not seem to write the
SHST_COMPLETE bit within 10 seconds, resulting in an ERRLOG
during detach/shutdown.  So add a quirk to cover these VMWare
SSDs.  But rather than squashing the ERRLOG completely for
these SSDs, just add a message instead indicating this is
somewhat expected on these VMWare emulated SSDs.

Fixes issue #676.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3dfcb631feda639926fd712f1f41abb66cbf2096
Reviewed-on: https://review.gerrithub.io/c/445942 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447591
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:17:11 +00:00
gila
90c60fc372 configure: update how CPU arch is determined
The -i option for uname is not portable; -m is a better choice.
Fixes #648

Signed-off-by: gila <jeffry.molanus@gmail.com>
Change-Id: I2287e652e8d3243df2bf101c1cfbdc6aedf643f1
Reviewed-on: https://review.gerrithub.io/c/443315 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447596
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:16:47 +00:00
heyang
a5879f56f4 nvme: add memory barrier in completion path for arm64
Add a memory barrier for arm64 to prevent possible reordering
of tracker and cpl access,
because arm64 has less strict memory ordering behavior than x86.
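For illustration only, a sketch of the kind of ordering issue involved; the
structure and function names are hypothetical and not SPDK's actual
completion path:

```
#include <stdatomic.h>
#include <stdint.h>

struct cpl { uint16_t status; };	/* illustrative completion entry */

static int
completion_ready(volatile struct cpl *cpl, uint16_t expected_phase)
{
	uint16_t status = cpl->status;

	if ((status & 0x1) != expected_phase) {
		return 0;
	}
	/* x86's strong memory model already orders the dependent loads that
	 * follow; on arm64 an explicit read/acquire fence is required so the
	 * tracker and cpl fields are not read before the phase bit check. */
	atomic_thread_fence(memory_order_acquire);
	return 1;
}
```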

Change-Id: I0a8716f7bfeffb0bbce27ee3174e214c8e4566b4
Signed-off-by: heyang <heyang18@huawei.com>
Reviewed-on: https://review.gerrithub.io/c/442964 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447592
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:16:29 +00:00
Pawel Wodkowski
9f6a6b1942 configure: detect IBV_WR_SEND_WITH_INV instead checking version
Checking the version of libibverbs is error-prone, as a custom version might be
installed that implements the needed features but whose version number is not
incremented. Instead, test whether we can compile with the needed features.

Fixes #524

Change-Id: I18e9ca923eea92b124e95a5f660955a01afad5c4
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443387 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447587
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:16:11 +00:00
Jim Harris
40e461cbb7 build: fix duplicated clean target in shared_lib/Makefile
Add a CLEAN_FILES macro that shared_lib/Makefile can use
to add to the list of files to be cleaned.

Fixes #663.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I12982e0989e02a69aaea4e470777301280090096
Reviewed-on: https://review.gerrithub.io/c/444427 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447583
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:15:50 +00:00
Darek Stojaczyk
6a365c0811 env/dpdk: fix potential memleak on init failure
When we were trying to push a newly allocated string
into the arg array and the array realloc() failed,
the string we were about to insert was leaked.

Change-Id: I31ccd5a09956d5407b2938792ecc9b482b2419d1
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445149 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447580
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:15:31 +00:00
Pawel Kaminski
cad2095077 spdkcli: Skip refreshing node if spdkcli is run noninteractive
Change-Id: I38662ce05acbf02092b1f02c72800aaf8f448136
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445012
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447600
2019-03-12 05:13:20 +00:00
Pawel Kaminski
bcbf6e8483 spdkcli: Catch JSONRPCException in execute_command
Move try-catch sections from create and delete commands to
execute_command method. Move refresh methods
to execute_command.

Change-Id: Idfa1cacd8a1a1c8ac738a84595610f4e57cace44
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442395
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447599
2019-03-12 05:13:20 +00:00
paul luse
8d31df3061 bdev/crypto: fix error path memory leak in driver init
This patch refactors driver init and in doing so eliminates the mem
leak described in the GitHub issue.  Also it is now consistent with
how the pending compression driver does init.

Fixes #633

Change-Id: Ia2d55d9e98fb9470ff8f9b34aeb4ee9f3d0478f5
Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442896 (master)
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447607
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
2019-03-12 05:12:18 +00:00
Xiaodong Liu
a629b17d51 nbd: avoid unlimited wait for device busy
The ioctl NBD_SET_SOCK can return EBUSY not only when the
kernel module hasn't loaded entirely yet, but also when the
nbd device is set up by another process, which leads to
infinite polling by the poller.
This patch waits only 1 second if the device is busy.
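A hedged sketch of the bounded-retry idea (not the actual SPDK poller code):
retry NBD_SET_SOCK on EBUSY for roughly one second before giving up:

```
#include <errno.h>
#include <linux/nbd.h>
#include <sys/ioctl.h>
#include <time.h>

/* Illustrative only: bound the EBUSY retry to ~1 second instead of polling
 * forever, since EBUSY may mean another process already owns the device. */
static int
nbd_set_sock_bounded(int nbd_fd, int sock_fd)
{
	struct timespec delay = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };
	int i, rc;

	for (i = 0; i < 10; i++) {	/* 10 x 100 ms = ~1 s */
		rc = ioctl(nbd_fd, NBD_SET_SOCK, sock_fd);
		if (rc == 0 || errno != EBUSY) {
			return rc;
		}
		nanosleep(&delay, NULL);
	}
	return -1;	/* still busy after ~1 second */
}
```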

Change-Id: I8b1cfab725cba180f774a57ced3fa4ba81da2037
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/444804 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447598
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:10:52 +00:00
Xiaodong Liu
d2e533c642 nbd: avoid impact to device setup by other task
Use NBD_SET_SOCK to check whether the nbd device is set up
by another process, or whether the nbd kernel module is ready,
before other nbd ioctl operations. This avoids interfering
with an nbd device set up by another process.

Change-Id: Ic12acbfddb8c4388e25731c39159b1ce559b8f23
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/444805 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447597
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:10:22 +00:00
Changpeng Liu
10cb21522a bdev/nvme: don't attach user deleted controllers automatically
When the hotplug feature is enabled by the NVMe driver, users may
call the delete_nvme_controller() RPC to delete one controller;
however, the hotplug monitor will probe this controller
automatically and attach it back to the NVMe driver.  We added
a skip list for user-deleted controllers so that the
NVMe driver will not attach them again.

Fixes issue #602.

Change-Id: Ibbe21ff8a021f968305271acdae86207e6228e20
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/444323 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447595
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:10:03 +00:00
yidong0635
4d4c3fe813 vagrant: add SPDK_TEST_OCF=0 in autorun-spdk.conf
A new module switch that was missed here.

Change-Id: If1784ace13657756d8034cd04e594af5b1799381
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Reviewed-on: https://review.gerrithub.io/c/444820 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447594
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:09:34 +00:00
Ziye Yang
e0c1093936 event: Change the base to 0 when calling strtol
Previously, we could pass -p with a hex value (e.g., 0x1) to assign the master core
and start the NVMe-oF or iSCSI target app.

However, now it is not supported and prints an error. I checked
the code; it only supports conversion in decimal format,
so change the base to 0 to make it support other formats as well.
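A small standalone illustration of the difference (not SPDK code): with base 0,
strtol() auto-detects the 0x/0 prefix, so decimal, hex, and octal all parse:

```
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Base 10 stops at the 'x'; base 0 auto-detects the prefix. */
	printf("%ld\n", strtol("0x1", NULL, 10));	/* prints 0 */
	printf("%ld\n", strtol("0x1", NULL, 0));	/* prints 1 */
	printf("%ld\n", strtol("8", NULL, 0));		/* prints 8 */
	return 0;
}
```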

Change-Id: I82510ba0cef47b5593484b4fd3490f85c93cf6a5
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/444830 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447593
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:09:16 +00:00
Darek Stojaczyk
b0cacd460d vm_setup.sh: add iptables dependency
We started to use iptables in patch 21bd94275
(libsock: add functional tests) but never added
the package dependency.

Change-Id: I651f2545a11f546f8b47f9759fbaed3a141f0928
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443597 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447590
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:08:06 +00:00
Wojciech Malikowski
ecad1d2cbc lib/ftl: fix IO metadata pointer initialization
Change-Id: I2bad16b6649c279448a3c662ab7b035dbe0a4bfb
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443251 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447589
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:07:47 +00:00
Vitaliy Mysak
17660fa741 bdev/ocf: synchronize env_allocator creation
Make modification of the global allocator index thread safe
  by using an atomic operation.

This patch also changes the mempool size to 2^n - 1,
  which makes it more efficient.

Change-Id: I5b7426f2feef31471d3a4e6c6d2c7f7474200d68
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442695 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447588
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:07:34 +00:00
Wojciech Malikowski
e13c1ffbc3 lib/ftl: Fix band's metadata inconsistency with L2P
Added a check before write submission to indicate whether the
LBA was updated in the meantime. In such a case, don't set the band's
metadata and rwb entry cache bit. The previous implementation
invalidated such an address during write completion and could
cause an inconsistent LBA map to be stored to disk.

Change-Id: I4353d9f96c53132ca384aeca43caef8d11f07fa4
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/444403 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447582
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:07:21 +00:00
Vitaliy Mysak
0395d29bf4 scripts: vm_setup.sh fix OCF github repo path
Fix the wrong URL for the OCF git repo.

This patch is connected to issue #670

Change-Id: I030889089a4b0433517dd909246a3bc16b67c71b
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445249 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447581
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-12 05:07:08 +00:00
Seth Howell
792b36e898 test/rdma_ut: fix valgrind issue.
Recently, we started setting the list of RDMA WRs in the parse_sgl
function. This meant that we started using a variable we hadn't used before,
which was uninitialized in the unit tests and caused a valgrind error.

Change-Id: I3f76ce1dcf95d1d41fe8b3f96e878859036a5031
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443791 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447450
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-08 22:00:07 +00:00
Liang Yan
6a2de254d6 test/spdkcli: update match file to cover larger volume NVMe SSD
The match file is hardcoded to $(FP)G. When using an xxTB-capacity NVMe
SSD, this test case will fail, so use $(S) to cover larger-capacity
NVMe SSDs.

Change-Id: Id046cadfbc5236cd8f480981fa337d2ee9a68bf4
Signed-off-by: Liang Yan <liang.z.yan@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447130 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447472
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-03-08 20:56:45 +00:00
zkhatami88
a9533f4083 nvme: remaining changes related to nvme hooks
Change-Id: I07f3f403bef26a7c3e41b3c9f74e7ba4e378b2cc
Signed-off-by: zkhatami88 <z.khatami88@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/443650 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447452
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:27:19 +00:00
Wojciech Malikowski
f882a577d4 lib/ftl: Fix band picking for write pointer
Removing the band from the "free list" is moved from FTL_BAND_STATE_OPENING
to the FTL_BAND_STATE_PREP state's change actions.
This fixes a race condition when one band is being prepared (erased)
and the write pointer is trying to get the next active band.

Change-Id: I9e4fe9482a01ee732271736e4a0e6fcedf2582d8
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445118 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447461
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:07:18 +00:00
Jim Harris
e22df3fbcf vhost: use mmap_size to check for 2MB hugepage multiple
Older versions of QEMU (<= 2.11) expose the VGA BIOS
hole (0xA0000-0xBFFFF) by specifying two separate memory
regions - one before and one after the hole.  This results
in the "size" not being a 2MB multiple.  But the underlying
memory is still mmaped at a 2MB multiple - so that's what
we should be checking to ensure the memory is hugepage backed.
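A minimal illustration of the check being described (the real vhost
register-memory code is more involved): test that the mmapped size, rather
than the region size, is a 2MB multiple:

```
#include <stdbool.h>
#include <stdint.h>

#define HUGEPAGE_2MB (2ULL * 1024 * 1024)

/* Illustrative only: the region size may exclude the VGA BIOS hole, but the
 * underlying mmap is still rounded to 2MB, so test mmap_size instead. */
static bool
is_2mb_multiple(uint64_t mmap_size)
{
	return (mmap_size & (HUGEPAGE_2MB - 1)) == 0;
}
```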

Fixes #673.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I1644bb6d8a8fb1fd51a548ae7a17da061c18c669
Reviewed-on: https://review.gerrithub.io/c/445764 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447457
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:05:07 +00:00
Ziye Yang
d07cc7d35d event/subsystem: solve the subsystem init and destroy conflict
We have a conflict when handling NVMf subsystem shutdown.
The situation is:

If there is a shutdown request (e.g., Ctrl+C),
subsystem finalization and subsystem
initialization may conflict (e.g., NVMf subsystem fini and
initialization running together), and we will hit a coredump
issue like #682.

If we interrupt the initialization of the subsystems,
the following work should be done:

1. Do not initialize the next subsystem.
2. Recycle the resources in each subsystem via the
spdk_subsystem_fini related functions. This patch does
the general work, but does not consider the detailed
interrupt policy in each subsystem.

Change-Id: I2438b4a2462acb05d8c8e06dfff3da3d388d4b70
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/446189 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447459
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:04:41 +00:00
Zhu Lingshan
6481d80514 scripts/pkgdep: update SUSE distros recognition
OpenSUSE releases (OpenSUSE Leap and Tumbleweed) now use
/etc/SUSE-brand rather than /etc/SuSE-release as the SUSE identification.
Following this change, this commit updates
scripts/pkgdep so that it can install packages for OpenSUSE.

Tested on OpenSUSE Leap 15.0 and the latest Tumbleweed.

Change-Id: I878b6671753084ef718e1f7630a42520a72ea151
Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/446504 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447458
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:04:06 +00:00
Tomasz Kulasek
8befeab1b4 test/unit/app_ut: fix potential leak of memory
This patch fixes a potential memory leak in spdk_app_parse_args() when
a whitelist or blacklist of devices is defined.

Change-Id: Ia586d77c67dbe6c664447f8431e1a7a30d624ae1
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440982 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447456
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:03:31 +00:00
GangCao
d45c6e54ae QoS: enable rate limit when opening the bdev
There are some cases where a virtual bdev opens and closes
the device, and QoS is disabled at the last close.
In this case, when a new bdev open operation comes again,
QoS needs to be enabled again.

Change-Id: I792e610f4592bad1cac55c6c55261d4946c6b3e2
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442953 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447455
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:03:02 +00:00
Zahra Khatami
bf881b09a7 nvmf: remaining changes related to nvmf hooks
Change-Id: I6780fa43cebd9f48d1ae0ea6fbeb92a95c4dfa15
Signed-off-by: zkhatami88 <z.khatami88@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/443653 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447454
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:01:57 +00:00
Liang Yan
1e0e636351 test:increase the json_config.sh shutdown app timeout value
In some situations, the script needs more attempts to kill
spdk_tgt, so increase the loop count.

Change-Id: I5c3596b0bae8ee965bb0b3532ba100dfd0aec82d
Signed-off-by: Liang Yan <liang.z.yan@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445436 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447453
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:01:36 +00:00
Seth Howell
0640d3fca5 RDMA: Remove the state_queues
Since we no longer rely on the state queues for draining qpairs, we can
get rid of most of them. We can keep just a few, and since we never
remove arbitrary elements, we can use STAILQs to perform those
operations. Operations on STAILQs carry about half the overhead of
operations on TAILQs.

Change-Id: I8f184e6269db853619a3581d387d97a795034798
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445332 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447466
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-03-08 19:00:44 +00:00
Seth Howell
46dd96c2f0 rdma: update default number of shared buffers.
When the decision was made to uncouple the number of shared buffers from
the queue depth and allow the user to decide for themselves, the default
was also significantly lowered, which caused some issues when trying
to run performance tests (see https://github.com/spdk/spdk/issues/699).
While this is a user-modifiable variable, it is still best to keep the
higher default value.

The original value was equivalent to max_queue_depth *
SPDK_NVMF_MAX_SGL_ENTRIES * 2, with the defaults for max_queue_depth and
max_sgl_entries being 128 and 16 respectively. Hence 4096.

fixes: 0b20f2e552

Change-Id: I809e97a10973093a2b485b85bca7160091166f70
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/446525 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447465
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:00:44 +00:00
Seth Howell
8529ceadfa rdma: adjust I/O unit based on device SGL support
For devices that support fewer SGE elements than our default values, we
need to adjust the I/O unit size so that we don't ever try to submit
more SGLs than we are allowed to.

Change-Id: I316d88459380f28009cc8a3d9357e9c67b08e871
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442776 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447464
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:00:44 +00:00
Seth Howell
6dcace0744 rdma: Fix misordered assert and decrement.
In the error path, we were first decrementing a variable and then
asserting that it must be >0. These operations should occur in the
opposite order.

Change-Id: I6cec544faf17bb75cbfca3d3a3c173dc5db14f99
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/446440 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447463
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:00:44 +00:00
Seth Howell
37ad7fd3b8 rdma: properly account num_outstanding_data_wr
This value was not being decremented when we got SEND completions for
write operations because we were using the recv send to indicate when we
had completed all writes associated with the request. I also erroneously
made the assumption that spdk_nvmf_rdma_request_parse_sgl would properly
reset this value to zero for all requests. However, for requests that
return SPDK_NVME_DATA_NONE from spdk_nvmf_rdma_request_get_xfer, this
function is skipped and the value is never reset. This can cause a
coherency issue on admin queues when we request multiple log files. When
the keep_alive request is resent, it can pick up an old rdma_req which
reports the wrong number of outstanding_wrs and will permanently
increment the qpair's curr_send_depth.

This change decrements num_outstanding_data_wrs on writes, and also
resets that value when the request is freed to ensure that this problem
doesn't occur again.

Change-Id: I5866af97c946a0a58c30507499b43359fb6d0f64
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443811 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447462
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 19:00:44 +00:00
Sasha Kotchubievsky
a8dd54792c perf: Fix integer overflow
The perf application can't generate IO for an NVMe namespace
 larger than 4G in size.

 Example of error:
 "Attached to NVMe over Fabrics controller at 1.1.75.1:1023:
 nqn.2016-06.io.spdk.r-dcs75:rd0
 WARNING: controller SPDK bdev Controller (SPDK000DEADBEAF00   ) ns 1 has
 invalid ns size 0 / block size 4096 for I/O size 4096
 WARNING: Some requested NVMe devices were skipped
 No valid NVMe controllers or AIO devices found"

 The ns_size variable is uint32_t, while the spdk_nvme_ns_get_size function
 returns uint64_t. The result can exceed the maximum value of
 uint32_t, leaving ns_size at 0.

 The issue introduced by commit: f2462909
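The truncation is easy to reproduce in isolation; a hedged illustration
(variable names follow the description above, not necessarily the perf
source):

```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t real_size = 8ULL * 1024 * 1024 * 1024;	/* 8 GiB namespace */
	uint32_t ns_size = (uint32_t)real_size;		/* silently truncated */

	/* 8 GiB is a multiple of 4 GiB, so the low 32 bits are all zero. */
	printf("64-bit: %" PRIu64 ", 32-bit: %" PRIu32 "\n", real_size, ns_size);
	return 0;
}
```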

Change-Id: Idc6dd8688d5d6268bda1a1d6b06a611643af6155
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-on: https://review.gerrithub.io/c/443996 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447451
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 18:56:27 +00:00
Pawel Wodkowski
14d4c7f06d test/ftl: use OCSSD instead of the first NVMe-like device
Change-Id: I175bebb68ea1752fda6fe80932cd27c30cf3dcff
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443737 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447183
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
2019-03-08 09:49:13 +00:00
Pawel Wodkowski
5c50e8e1b5 autotest: blacklist OCSSD devices
Detect and blacklist OCSSD devices by unbinding the driver.

Change-Id: I7ba6cefd083a7d3ead6db27fa27a765f8ee52402
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442978 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447150
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
2019-03-08 08:55:02 +00:00
Pawel Wodkowski
18b8ef97ac test/ftl: limit total IO size to 256M
On a VM these tests take ages.

Change-Id: Id4799e2d226e59b430e899983a6470080b5c37dc
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443795 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447149
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
2019-03-08 08:55:02 +00:00
Pawel Wodkowski
1bf4f98311 scripts/common.sh: use PCI blacklist and whitelist
The iter_pci_dev_id functions should
not return a BDF for devices that are not meant to be used
in tests.

Note that not all tests are ready for this change, as they
discover functions on their own. Let's change that in a
separate patch.

Change-Id: I45a59ec121aa81e9f981acae7ec0379ff68e520a
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443767 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447148
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
2019-03-08 08:55:02 +00:00
Pawel Wodkowski
29ae45877a setup.sh: move pci_can_bind function to common.sh
Change-Id: I1c3ba13c39ef0d06d70e6e262bdc08c76a7614e0
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443766 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447147
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
2019-03-08 08:55:02 +00:00
Pawel Wodkowski
0168d9bc9d setup.sh: try harder to find out if driver is loaded
Change-Id: I098285ff42271a7577a260cd864c015b235833b5
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443765 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447146
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
2019-03-08 08:55:02 +00:00
Pawel Wodkowski
7eda85292a setup.sh: add PCI_BLACKLIST
Add PCI blacklist so we can skip only some devices.

Change-Id: I8600307dd53f32acb4dfeb3f57845e0b9d29fdb9
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442977 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447145
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
2019-03-08 08:55:02 +00:00
Pawel Wodkowski
b1be663bfb setup.sh: enhance output from setup, reset and status
Unify output of setup driver binding. Each line will print PCI BDF,
vendor and device id.

  $export PCI_BLACKLIST="0000:00:04.0 0000:00:04.1"
  $./scripts/setup.sh
  0000:0b:00.0 (8086 0953): nvme -> vfio-pci
  0000:00:04.1 (8086 0e20): Skipping un-whitelisted I/OAT device
  ...
  0000:00:04.1 (8086 0e21): Skipping un-whitelisted I/OAT device
  ...

Print log when desired driver is already bound:

  $./scripts/setup.sh
  0000:0b:00.0 (8086 0953): Already using the vfio-pci driver
  ...

'status' command prints vendor and device:

  ./scripts/setup.sh status
  ...
  NVMe devices
  BDF		Vendor	Device	NUMA	Driver		Device name
  0000:0b:00.0	8086	0953	0	vfio-pci		-

  I/OAT DMA
  BDF		Vendor	Device	NUMA	Driver
  0000:00:04.0	8086	0e20	0	ioatdma
  0000:80:04.0	8086	0e20	1	vfio-pci
  0000:00:04.1	8086	0e21	0	ioatdma
  0000:80:04.1	8086	0e21	1	vfio-pci
  0000:00:04.2	8086	0e22	0	vfio-pci
  0000:80:04.2	8086	0e22	1	vfio-pci
  ...

As we are here replace legacy Bash subshell invocation ` ` with $( ) in
some places.

Change-Id: I76b533c7580dadeb3d592c084778b8f9869c6d17
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443218 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447144
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
2019-03-08 08:55:02 +00:00
Pawel Wodkowski
d3dbb9c7cf setup.sh: remove useless '= "0"' part from if statements
Bash interprets everything after the command as additional
function arguments. To avoid confusing the user, just remove this part
and replace it with '!'.

Change-Id: I44228003a1f96324271e726df4f5033f3258523c
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442976 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447143
Tested-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 08:55:02 +00:00
Darek Stojaczyk
898bad7d0c autotest: introduce SPDK_RUN_FUNCTIONAL_TEST
Introduced a new variable to run functional tests.
It's enabled by default, and can be manually disabled
on systems where e.g. only unit tests are run.

SPDK_RUN_FUNCTIONAL_TEST is a supplement to SPDK_UNITTEST.
The two are completely independent - both can be enabled,
disabled, or run in any combination.

The new variable is prefixed SPDK_RUN_ as it aligns nicely
with SPDK_RUN_CHECK_FORMAT, SPDK_RUN_VALGRIND, and
SPDK_RUN_ASAN, all of which control how much is tested.
SPDK_UNITTEST should eventually follow the same pattern
as well.

This gives us 2 layers of configuration:
SPDK_TEST_* <- what is tested
SPDK_RUN_* <- how it is tested

The following would run UT+ASAN for FTL and BlobFS, without
running their functional tests:

```
SPDK_RUN_FUNCTIONAL_TEST=0
SPDK_RUN_ASAN=1
SPDK_TEST_UNITTEST=1
SPDK_TEST_FTL=1
SPDK_TEST_BLOBFS=1
```

Change-Id: I9e592fa41aa2df8e246eca2bb9161b6da6832130
Signed-off-by: Seth Howell <seth.howell@intel.com>
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442327 (master)
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447261
Tested-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-03-08 08:55:02 +00:00
Darek Stojaczyk
2f87aada01 version: 19.01.1-pre
Change-Id: I0741ecdf02461dbaf1b04d78ec0c67843c8c0f39
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443512
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-02-25 12:11:50 +00:00
72 changed files with 1412 additions and 1240 deletions


@@ -1,5 +1,41 @@
# Changelog
## v19.01.2: (Upcoming Release)
## v19.01.1:
### logical volumes
Added option to change method for erasing data region on lvol store creation.
Default of unmapping can now be changed to writing zeroes or no operation.
### autotest
Introduce SPDK_RUN_FUNCTIONAL_TEST variable enabled by default, that can be
manually disabled on systems where e.g. only unit tests are run.
### FTL
Add detection for OpenChannel devices, so that NVMe tests and FTL bdev tests
are run on appropriate devices.
### QoS
Enabled rate limit when opening the bdev, as there were some cases where
previously closed bdev would have QoS disabled.
### GitHub issues
#523: (fio_plugin) Fixed hang on pthread_exit(NULL).
#633: (crypto bdev) Fixed memory leak in driver init path.
#602: (nvme bdev) Will not attach user deleted controllers automatically.
#676: (nvme_bdev) Added SHST_COMPLETE quirk for VMWare emulated SSDs.
#638: (nvmf) Fixed the lvol creation failure issue for NVMe-oF TCP.
#699: (nvmf) Updated default number of shared buffers for RDMA.
#673: (vhost) Will use mmap_size to check for 2MB hugepage multiple.
#663: (building) Fixed duplicated clean target in shared_lib/Makefile
#593: (spdkcli) Will exit with 1 when rpc throws JSONRPCException.
## v19.01:
### ocf bdev


@@ -63,8 +63,44 @@ rm -f /var/tmp/spdk*.sock
# Let the kernel discover any filesystems or partitions
sleep 10
if [ $(uname -s) = Linux ]; then
# OCSSD devices drivers don't support IO issues by kernel so
# detect OCSSD devices and blacklist them (unbind from any driver).
# If test scripts want to use this device it needs to do this explicitly.
#
# If some OCSSD device is bound to other driver than nvme we won't be able to
# discover if it is OCSSD or not so load the kernel driver first.
for dev in $(find /dev -maxdepth 1 -regex '/dev/nvme[0-9]+'); do
# Send Open Channel 2.0 Geometry opcode "0xe2" - not supported by NVMe device.
if nvme admin-passthru $dev --namespace-id=1 --data-len=4096 --opcode=0xe2 --read >/dev/null; then
bdf="$(basename $(readlink -e /sys/class/nvme/${dev#/dev/}/device))"
echo "INFO: blacklisting OCSSD device: $dev ($bdf)"
PCI_BLACKLIST+=" $bdf"
OCSSD_PCI_DEVICES+=" $bdf"
fi
done
export OCSSD_PCI_DEVICES
# Now, bind blacklisted devices to pci-stub module. This will prevent
# automatic grabbing these devices when we add device/vendor ID to
# proper driver.
if [[ -n "$PCI_BLACKLIST" ]]; then
PCI_WHITELIST="$PCI_BLACKLIST" \
PCI_BLACKLIST="" \
DRIVER_OVERRIDE="pci-stub" \
./scripts/setup.sh
# Export our blacklist so it will take effect during next setup.sh
export PCI_BLACKLIST
fi
fi
# Delete all leftover lvols and gpt partitions
# Matches both /dev/nvmeXnY on Linux and /dev/nvmeXnsY on BSD
# Filter out nvme with partitions - the "p*" suffix
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
dd if=/dev/zero of="$dev" bs=1M count=1
done
@@ -105,104 +141,106 @@ if [ $SPDK_TEST_UNITTEST -eq 1 ]; then
timing_exit unittest
fi
timing_enter lib
if [ $SPDK_TEST_BLOCKDEV -eq 1 ]; then
run_test suite test/bdev/blockdev.sh
fi
if [ $SPDK_RUN_FUNCTIONAL_TEST -eq 1 ]; then
timing_enter lib
if [ $SPDK_TEST_JSON -eq 1 ]; then
run_test suite test/config_converter/test_converter.sh
fi
run_test suite test/env/env.sh
run_test suite test/rpc_client/rpc_client.sh
run_test suite ./test/json_config/json_config.sh
if [ $SPDK_TEST_EVENT -eq 1 ]; then
run_test suite test/event/event.sh
fi
if [ $SPDK_TEST_NVME -eq 1 ]; then
run_test suite test/nvme/nvme.sh
if [ $SPDK_TEST_NVME_CLI -eq 1 ]; then
run_test suite test/nvme/spdk_nvme_cli.sh
if [ $SPDK_TEST_BLOCKDEV -eq 1 ]; then
run_test suite test/bdev/blockdev.sh
fi
if [ $SPDK_TEST_JSON -eq 1 ]; then
run_test suite test/config_converter/test_converter.sh
fi
if [ $SPDK_TEST_EVENT -eq 1 ]; then
run_test suite test/event/event.sh
fi
if [ $SPDK_TEST_NVME -eq 1 ]; then
run_test suite test/nvme/nvme.sh
if [ $SPDK_TEST_NVME_CLI -eq 1 ]; then
run_test suite test/nvme/spdk_nvme_cli.sh
fi
# Only test hotplug without ASAN enabled. Since if it is
# enabled, it catches SEGV earlier than our handler which
# breaks the hotplug logic.
# Temporary workaround for issue #542, annotated for no VM image.
#if [ $SPDK_RUN_ASAN -eq 0 ]; then
# run_test suite test/nvme/hotplug.sh intel
#fi
fi
if [ $SPDK_TEST_IOAT -eq 1 ]; then
run_test suite test/ioat/ioat.sh
fi
timing_exit lib
if [ $SPDK_TEST_ISCSI -eq 1 ]; then
run_test suite ./test/iscsi_tgt/iscsi_tgt.sh posix
run_test suite ./test/spdkcli/iscsi.sh
fi
if [ $SPDK_TEST_BLOBFS -eq 1 ]; then
run_test suite ./test/blobfs/rocksdb/rocksdb.sh
run_test suite ./test/blobstore/blobstore.sh
fi
if [ $SPDK_TEST_NVMF -eq 1 ]; then
run_test suite ./test/nvmf/nvmf.sh
run_test suite ./test/spdkcli/nvmf.sh
fi
if [ $SPDK_TEST_VHOST -eq 1 ]; then
run_test suite ./test/vhost/vhost.sh
report_test_completion "vhost"
fi
if [ $SPDK_TEST_LVOL -eq 1 ]; then
timing_enter lvol
test_cases="1,50,51,52,53,100,101,102,150,200,201,250,251,252,253,254,255,"
test_cases+="300,301,450,451,452,550,551,552,553,"
test_cases+="600,601,602,650,651,652,654,655,"
test_cases+="700,701,702,750,751,752,753,754,755,756,757,758,759,760,"
test_cases+="800,801,802,803,804,10000"
run_test suite ./test/lvol/lvol.sh --test-cases=$test_cases
run_test suite ./test/blobstore/blob_io_wait/blob_io_wait.sh
report_test_completion "lvol"
timing_exit lvol
fi
if [ $SPDK_TEST_VHOST_INIT -eq 1 ]; then
timing_enter vhost_initiator
run_test suite ./test/vhost/initiator/blockdev.sh
run_test suite ./test/spdkcli/virtio.sh
run_test suite ./test/vhost/shared/shared.sh
report_test_completion "vhost_initiator"
timing_exit vhost_initiator
fi
if [ $SPDK_TEST_PMDK -eq 1 ]; then
run_test suite ./test/pmem/pmem.sh -x
run_test suite ./test/spdkcli/pmem.sh
fi
if [ $SPDK_TEST_RBD -eq 1 ]; then
run_test suite ./test/spdkcli/rbd.sh
fi
if [ $SPDK_TEST_OCF -eq 1 ]; then
run_test suite ./test/ocf/ocf.sh
fi
if [ $SPDK_TEST_BDEV_FTL -eq 1 ]; then
run_test suite ./test/ftl/ftl.sh
fi
# Only test hotplug without ASAN enabled. Since if it is
# enabled, it catches SEGV earlier than our handler which
# breaks the hotplug logic.
# Temporary workaround for issue #542, annotated for no VM image.
#if [ $SPDK_RUN_ASAN -eq 0 ]; then
# run_test suite test/nvme/hotplug.sh intel
#fi
fi
run_test suite test/env/env.sh
run_test suite test/rpc_client/rpc_client.sh
if [ $SPDK_TEST_IOAT -eq 1 ]; then
run_test suite test/ioat/ioat.sh
fi
timing_exit lib
if [ $SPDK_TEST_ISCSI -eq 1 ]; then
run_test suite ./test/iscsi_tgt/iscsi_tgt.sh posix
run_test suite ./test/spdkcli/iscsi.sh
fi
if [ $SPDK_TEST_BLOBFS -eq 1 ]; then
run_test suite ./test/blobfs/rocksdb/rocksdb.sh
run_test suite ./test/blobstore/blobstore.sh
fi
if [ $SPDK_TEST_NVMF -eq 1 ]; then
run_test suite ./test/nvmf/nvmf.sh
run_test suite ./test/spdkcli/nvmf.sh
fi
if [ $SPDK_TEST_VHOST -eq 1 ]; then
run_test suite ./test/vhost/vhost.sh
report_test_completion "vhost"
fi
if [ $SPDK_TEST_LVOL -eq 1 ]; then
timing_enter lvol
test_cases="1,50,51,52,53,100,101,102,150,200,201,250,251,252,253,254,255,"
test_cases+="300,301,450,451,452,550,551,552,553,"
test_cases+="600,601,650,651,652,654,655,"
test_cases+="700,701,702,750,751,752,753,754,755,756,757,758,759,760,"
test_cases+="800,801,802,803,804,10000"
run_test suite ./test/lvol/lvol.sh --test-cases=$test_cases
run_test suite ./test/blobstore/blob_io_wait/blob_io_wait.sh
report_test_completion "lvol"
timing_exit lvol
fi
if [ $SPDK_TEST_VHOST_INIT -eq 1 ]; then
timing_enter vhost_initiator
run_test suite ./test/vhost/initiator/blockdev.sh
run_test suite ./test/spdkcli/virtio.sh
run_test suite ./test/vhost/shared/shared.sh
report_test_completion "vhost_initiator"
timing_exit vhost_initiator
fi
if [ $SPDK_TEST_PMDK -eq 1 ]; then
run_test suite ./test/pmem/pmem.sh -x
run_test suite ./test/spdkcli/pmem.sh
fi
if [ $SPDK_TEST_RBD -eq 1 ]; then
run_test suite ./test/spdkcli/rbd.sh
fi
if [ $SPDK_TEST_OCF -eq 1 ]; then
run_test suite ./test/ocf/ocf.sh
fi
if [ $SPDK_TEST_BDEV_FTL -eq 1 ]; then
run_test suite ./test/ftl/ftl.sh
fi
run_test suite ./test/json_config/json_config.sh
timing_enter cleanup
autotest_cleanup
timing_exit cleanup

configure
View File

@ -294,7 +294,7 @@ for i in "$@"; do
done
# Detect architecture and force no isal if non x86 archtecture
arch=$(uname -i)
arch=$(uname -m)
if [[ $arch != x86_64* ]]; then
echo "Notice: ISAL auto-disabled due to CPU incompatiblity."
CONFIG[ISAL]=n
@ -359,20 +359,9 @@ fi
if [ "${CONFIG[RDMA]}" = "y" ]; then
if [ "$OSTYPE" != "FreeBSD"* ]; then
ibv_lib_file="$(ldconfig -p | grep 'libibverbs.so ' || true)"
if [ ! -z "$ibv_lib_file" ]; then
ibv_lib_file="${ibv_lib_file##*=> }"
ibv_lib_file="$(readlink -f $ibv_lib_file)" || true
fi
if [ -z $ibv_lib_file ]; then
ibv_lib_file="libibverbs.so.0.0.0"
fi
ibv_ver_str="$(basename $ibv_lib_file)"
ibv_maj_ver=`echo $ibv_ver_str | cut -d. -f3`
ibv_min_ver=`echo $ibv_ver_str | cut -d. -f4`
if [[ "$ibv_maj_var" -gt 1 || ("$ibv_maj_ver" -eq 1 && "$ibv_min_ver" -ge 1) ]]; then
if echo -e '#include <infiniband/verbs.h>\n \
int main(void){ return !!IBV_WR_SEND_WITH_INV; }\n' \
| ${CC:-cc} ${CFLAGS} -x c -c -o /dev/null - 2>/dev/null; then
CONFIG[RDMA_SEND_WITH_INVAL]="y"
else
CONFIG[RDMA_SEND_WITH_INVAL]="n"

View File

@ -4499,6 +4499,7 @@ Name | Optional | Type | Description
bdev_name | Required | string | Bdev on which to construct logical volume store
lvs_name | Required | string | Name of the logical volume store to create
cluster_sz | Optional | number | Cluster size of the logical volume store in bytes
clear_method | Optional | string | Change clear method for data region. Available: none, unmap (default), write_zeroes
### Response
@ -4515,6 +4516,7 @@ Example request:
"params": {
"lvs_name": "LVS0",
"bdev_name": "Malloc0"
"clear_method": "write_zeroes"
}
}
~~~

View File

@ -10,6 +10,7 @@ The Logical Volumes library is a flexible storage space management system. It pr
* Type name: struct spdk_lvol_store
A logical volume store uses the super blob feature of blobstore to hold uuid (and in future other metadata). Blobstore types are implemented in blobstore itself, and saved on disk. An lvolstore will generate a UUID on creation, so that it can be uniquely identified from other lvolstores.
By default, when an lvol store is created, its data region is unmapped. The optional --clear-method parameter can be passed on creation to change that behavior to writing zeroes or performing no operation.
## Logical volume {#lvol}
@ -84,6 +85,7 @@ construct_lvol_store [-h] [-c CLUSTER_SZ] bdev_name lvs_name
Optional parameters:
-h show help
-c CLUSTER_SZ Specifies the cluster size. By default it is 4MiB.
--clear-method specify the data region clear method: "none", "unmap" (default) or "write_zeroes"
destroy_lvol_store [-h] [-u UUID] [-l LVS_NAME]
Destroy lvolstore on specified bdev. Removes lvolstore along with lvols on
it. User can identify lvol store by UUID or its name. Note that destroying

View File

@ -40,7 +40,7 @@ APP := fio_plugin
C_SRCS = fio_plugin.c
CFLAGS += -I$(CONFIG_FIO_SOURCE_DIR)
LDFLAGS += -shared -rdynamic
LDFLAGS += -shared -rdynamic -Wl,-z,nodelete
SPDK_LIB_LIST = $(ALL_MODULES_LIST)
SPDK_LIB_LIST += thread util bdev conf copy rpc jsonrpc json log sock trace

View File

@ -4,6 +4,8 @@
* Copyright (c) Intel Corporation.
* All rights reserved.
*
* Copyright (c) 2019 Mellanox Technologies LTD. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
@ -588,7 +590,8 @@ register_ns(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
struct ns_entry *entry;
const struct spdk_nvme_ctrlr_data *cdata;
uint32_t max_xfer_size, entries, ns_size, sector_size;
uint32_t max_xfer_size, entries, sector_size;
uint64_t ns_size;
struct spdk_nvme_io_qpair_opts opts;
cdata = spdk_nvme_ctrlr_get_data(ctrlr);
@ -606,7 +609,7 @@ register_ns(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
if (ns_size < g_io_size_bytes || sector_size > g_io_size_bytes) {
printf("WARNING: controller %-20.20s (%-20.20s) ns %u has invalid "
"ns size %u / block size %u for I/O size %u\n",
"ns size %" PRIu64 " / block size %u for I/O size %u\n",
cdata->mn, cdata->sn, spdk_nvme_ns_get_id(ns),
ns_size, spdk_nvme_ns_get_sector_size(ns), g_io_size_bytes);
g_warn = true;

View File

@ -77,6 +77,12 @@ enum blob_clear_method {
BLOB_CLEAR_WITH_WRITE_ZEROES,
};
enum bs_clear_method {
BS_CLEAR_WITH_UNMAP,
BS_CLEAR_WITH_WRITE_ZEROES,
BS_CLEAR_WITH_NONE,
};
struct spdk_blob_store;
struct spdk_io_channel;
struct spdk_blob;
@ -206,6 +212,9 @@ struct spdk_bs_opts {
/** Maximum simultaneous operations per channel */
uint32_t max_channel_ops;
/** Clear method */
enum bs_clear_method clear_method;
/** Blobstore type */
struct spdk_bs_type bstype;

View File

@ -39,6 +39,7 @@
#define SPDK_LVOL_H
#include "spdk/stdinc.h"
#include "spdk/blob.h"
#ifdef __cplusplus
extern "C" {
@ -55,6 +56,11 @@ enum lvol_clear_method {
LVOL_CLEAR_WITH_WRITE_ZEROES,
};
enum lvs_clear_method {
LVS_CLEAR_WITH_UNMAP = BS_CLEAR_WITH_UNMAP,
LVS_CLEAR_WITH_WRITE_ZEROES = BS_CLEAR_WITH_WRITE_ZEROES,
LVS_CLEAR_WITH_NONE = BS_CLEAR_WITH_NONE,
};
/* Must include null terminator. */
#define SPDK_LVS_NAME_MAX 64
@ -64,8 +70,9 @@ enum lvol_clear_method {
* Parameters for lvolstore initialization.
*/
struct spdk_lvs_opts {
uint32_t cluster_sz;
char name[SPDK_LVS_NAME_MAX];
uint32_t cluster_sz;
enum lvs_clear_method clear_method;
char name[SPDK_LVS_NAME_MAX];
};
/**
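For reference, a minimal sketch of how a caller could pick the new clear method when initializing an lvolstore through the lvol library directly. Only spdk_lvs_opts_init(), spdk_lvs_init() and the lvs_clear_method values come from the headers shown above; the bs_dev argument and the callback body are illustrative assumptions.

#include "spdk/lvol.h"

static void
lvs_init_done(void *cb_arg, struct spdk_lvol_store *lvs, int lvserrno)
{
	/* lvs is only valid when lvserrno == 0 */
}

static int
create_lvs_no_clear(struct spdk_bs_dev *bs_dev)
{
	struct spdk_lvs_opts opts;

	spdk_lvs_opts_init(&opts);
	snprintf(opts.name, sizeof(opts.name), "lvs0");
	/* Skip clearing the data region entirely when the lvolstore is created. */
	opts.clear_method = LVS_CLEAR_WITH_NONE;

	return spdk_lvs_init(bs_dev, &opts, lvs_init_done, NULL);
}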

View File

@ -51,6 +51,7 @@ extern "C" {
#define SPDK_PCI_VID_VIRTUALBOX 0x80ee
#define SPDK_PCI_VID_VIRTIO 0x1af4
#define SPDK_PCI_VID_CNEXLABS 0x1d1d
#define SPDK_PCI_VID_VMWARE 0x15ad
/**
* PCI class code for NVMe devices.

View File

@ -54,12 +54,12 @@
* Patch level is incremented on maintenance branch releases and reset to 0 for each
* new major.minor release.
*/
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_PATCH 2
/**
* Version string suffix.
*/
#define SPDK_VERSION_SUFFIX ""
#define SPDK_VERSION_SUFFIX "-pre"
/**
* Single numeric value representing a version number for compile-time comparisons.

View File

@ -282,6 +282,12 @@ struct spdk_bdev_iostat_ctx {
void *cb_arg;
};
struct set_qos_limit_ctx {
void (*cb_fn)(void *cb_arg, int status);
void *cb_arg;
struct spdk_bdev *bdev;
};
#define __bdev_to_io_dev(bdev) (((char *)bdev) + 1)
#define __bdev_from_io_dev(io_dev) ((struct spdk_bdev *)(((char *)io_dev) - 1))
@ -289,6 +295,9 @@ static void _spdk_bdev_write_zero_buffer_done(struct spdk_bdev_io *bdev_io, bool
void *cb_arg);
static void _spdk_bdev_write_zero_buffer_next(void *_bdev_io);
static void _spdk_bdev_enable_qos_msg(struct spdk_io_channel_iter *i);
static void _spdk_bdev_enable_qos_done(struct spdk_io_channel_iter *i, int status);
void
spdk_bdev_get_opts(struct spdk_bdev_opts *opts)
{
@ -3686,33 +3695,18 @@ _remove_notify(void *arg)
}
}
void
spdk_bdev_unregister(struct spdk_bdev *bdev, spdk_bdev_unregister_cb cb_fn, void *cb_arg)
/* Must be called while holding bdev->internal.mutex.
* returns: 0 - bdev removed and ready to be destructed.
* -EBUSY - bdev can't be destructed yet. */
static int
spdk_bdev_unregister_unsafe(struct spdk_bdev *bdev)
{
struct spdk_bdev_desc *desc, *tmp;
bool do_destruct = true;
struct spdk_thread *thread;
SPDK_DEBUGLOG(SPDK_LOG_BDEV, "Removing bdev %s from list\n", bdev->name);
thread = spdk_get_thread();
if (!thread) {
/* The user called this from a non-SPDK thread. */
if (cb_fn != NULL) {
cb_fn(cb_arg, -ENOTSUP);
}
return;
}
pthread_mutex_lock(&bdev->internal.mutex);
bdev->internal.status = SPDK_BDEV_STATUS_REMOVING;
bdev->internal.unregister_cb = cb_fn;
bdev->internal.unregister_ctx = cb_arg;
int rc = 0;
TAILQ_FOREACH_SAFE(desc, &bdev->internal.open_descs, link, tmp) {
if (desc->remove_cb) {
do_destruct = false;
rc = -EBUSY;
/*
* Defer invocation of the remove_cb to a separate message that will
* run later on its thread. This ensures this context unwinds and
@ -3727,15 +3721,51 @@ spdk_bdev_unregister(struct spdk_bdev *bdev, spdk_bdev_unregister_cb cb_fn, void
}
}
if (!do_destruct) {
pthread_mutex_unlock(&bdev->internal.mutex);
if (rc == 0) {
TAILQ_REMOVE(&g_bdev_mgr.bdevs, bdev, internal.link);
SPDK_DEBUGLOG(SPDK_LOG_BDEV, "Removing bdev %s from list done\n", bdev->name);
}
return rc;
}
void
spdk_bdev_unregister(struct spdk_bdev *bdev, spdk_bdev_unregister_cb cb_fn, void *cb_arg)
{
struct spdk_thread *thread;
int rc;
SPDK_DEBUGLOG(SPDK_LOG_BDEV, "Removing bdev %s from list\n", bdev->name);
thread = spdk_get_thread();
if (!thread) {
/* The user called this from a non-SPDK thread. */
if (cb_fn != NULL) {
cb_fn(cb_arg, -ENOTSUP);
}
return;
}
TAILQ_REMOVE(&g_bdev_mgr.bdevs, bdev, internal.link);
pthread_mutex_lock(&bdev->internal.mutex);
if (bdev->internal.status == SPDK_BDEV_STATUS_REMOVING) {
pthread_mutex_unlock(&bdev->internal.mutex);
if (cb_fn) {
cb_fn(cb_arg, -EBUSY);
}
return;
}
bdev->internal.status = SPDK_BDEV_STATUS_REMOVING;
bdev->internal.unregister_cb = cb_fn;
bdev->internal.unregister_ctx = cb_arg;
/* Call under lock. */
rc = spdk_bdev_unregister_unsafe(bdev);
pthread_mutex_unlock(&bdev->internal.mutex);
spdk_bdev_fini(bdev);
if (rc == 0) {
spdk_bdev_fini(bdev);
}
}
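A minimal sketch of the caller-visible behavior after this refactor, assuming the standard public bdev API (spdk_bdev_get_by_name(), spdk_bdev_unregister()); the lookup-by-name flow and callback body are illustrative, not taken from this change:

#include "spdk/bdev.h"

static void
unregister_done(void *cb_arg, int rc)
{
	/* Called once every open descriptor is closed and the bdev is destructed,
	 * or immediately with -ENOTSUP / -EBUSY when the call is rejected. */
}

static void
remove_bdev_by_name(const char *name)
{
	struct spdk_bdev *bdev = spdk_bdev_get_by_name(name);

	if (bdev == NULL) {
		return;
	}

	/* Must be called from an SPDK thread. With the change above, a second
	 * unregister while the bdev is already SPDK_BDEV_STATUS_REMOVING now
	 * completes right away with -EBUSY instead of touching the bdev twice. */
	spdk_bdev_unregister(bdev, unregister_done, NULL);
}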
int
@ -3744,6 +3774,7 @@ spdk_bdev_open(struct spdk_bdev *bdev, bool write, spdk_bdev_remove_cb_t remove_
{
struct spdk_bdev_desc *desc;
struct spdk_thread *thread;
struct set_qos_limit_ctx *ctx;
thread = spdk_get_thread();
if (!thread) {
@ -3778,6 +3809,22 @@ spdk_bdev_open(struct spdk_bdev *bdev, bool write, spdk_bdev_remove_cb_t remove_
return -EPERM;
}
/* Enable QoS */
if (bdev->internal.qos && bdev->internal.qos->thread == NULL) {
ctx = calloc(1, sizeof(*ctx));
if (ctx == NULL) {
SPDK_ERRLOG("Failed to allocate memory for QoS context\n");
pthread_mutex_unlock(&bdev->internal.mutex);
free(desc);
*_desc = NULL;
return -ENOMEM;
}
ctx->bdev = bdev;
spdk_for_each_channel(__bdev_to_io_dev(bdev),
_spdk_bdev_enable_qos_msg, ctx,
_spdk_bdev_enable_qos_done);
}
TAILQ_INSERT_TAIL(&bdev->internal.open_descs, desc, link);
pthread_mutex_unlock(&bdev->internal.mutex);
@ -3789,7 +3836,7 @@ void
spdk_bdev_close(struct spdk_bdev_desc *desc)
{
struct spdk_bdev *bdev = desc->bdev;
bool do_unregister = false;
int rc;
SPDK_DEBUGLOG(SPDK_LOG_BDEV, "Closing descriptor %p for bdev %s on thread %p\n", desc, bdev->name,
spdk_get_thread());
@ -3822,12 +3869,14 @@ spdk_bdev_close(struct spdk_bdev_desc *desc)
spdk_bdev_set_qd_sampling_period(bdev, 0);
if (bdev->internal.status == SPDK_BDEV_STATUS_REMOVING && TAILQ_EMPTY(&bdev->internal.open_descs)) {
do_unregister = true;
}
pthread_mutex_unlock(&bdev->internal.mutex);
rc = spdk_bdev_unregister_unsafe(bdev);
pthread_mutex_unlock(&bdev->internal.mutex);
if (do_unregister == true) {
spdk_bdev_unregister(bdev, bdev->internal.unregister_cb, bdev->internal.unregister_ctx);
if (rc == 0) {
spdk_bdev_fini(bdev);
}
} else {
pthread_mutex_unlock(&bdev->internal.mutex);
}
}
@ -3984,12 +4033,6 @@ _spdk_bdev_write_zero_buffer_done(struct spdk_bdev_io *bdev_io, bool success, vo
_spdk_bdev_write_zero_buffer_next(parent_io);
}
struct set_qos_limit_ctx {
void (*cb_fn)(void *cb_arg, int status);
void *cb_arg;
struct spdk_bdev *bdev;
};
static void
_spdk_bdev_set_qos_limit_done(struct set_qos_limit_ctx *ctx, int status)
{
@ -3997,7 +4040,9 @@ _spdk_bdev_set_qos_limit_done(struct set_qos_limit_ctx *ctx, int status)
ctx->bdev->internal.qos_mod_in_progress = false;
pthread_mutex_unlock(&ctx->bdev->internal.mutex);
ctx->cb_fn(ctx->cb_arg, status);
if (ctx->cb_fn) {
ctx->cb_fn(ctx->cb_arg, status);
}
free(ctx);
}

View File

@ -192,6 +192,109 @@ struct crypto_bdev_io {
struct iovec cry_iov; /* iov representing contig write buffer */
};
/* Called by vbdev_crypto_init_crypto_drivers() to init each discovered crypto device */
static int
create_vbdev_dev(uint8_t index, uint16_t num_lcores)
{
struct vbdev_dev *device;
uint8_t j, cdev_id, cdrv_id;
struct device_qp *dev_qp;
struct device_qp *tmp_qp;
int rc;
device = calloc(1, sizeof(struct vbdev_dev));
if (!device) {
return -ENOMEM;
}
/* Get details about this device. */
rte_cryptodev_info_get(index, &device->cdev_info);
cdrv_id = device->cdev_info.driver_id;
cdev_id = device->cdev_id = index;
/* Before going any further, make sure we have enough resources for this
* device type to function. We need a unique queue pair per core across each
* device type to remain lockless.
*/
if ((rte_cryptodev_device_count_by_driver(cdrv_id) *
device->cdev_info.max_nb_queue_pairs) < num_lcores) {
SPDK_ERRLOG("Insufficient unique queue pairs available for %s\n",
device->cdev_info.driver_name);
SPDK_ERRLOG("Either add more crypto devices or decrease core count\n");
rc = -EINVAL;
goto err;
}
/* Setup queue pairs. */
struct rte_cryptodev_config conf = {
.nb_queue_pairs = device->cdev_info.max_nb_queue_pairs,
.socket_id = SPDK_ENV_SOCKET_ID_ANY
};
rc = rte_cryptodev_configure(cdev_id, &conf);
if (rc < 0) {
SPDK_ERRLOG("Failed to configure cryptodev %u\n", cdev_id);
rc = -EINVAL;
goto err;
}
struct rte_cryptodev_qp_conf qp_conf = {
.nb_descriptors = CRYPTO_QP_DESCRIPTORS
};
/* Pre-setup all potential qpairs now and assign them in the channel
* callback. If we were to create them there, we'd have to stop the
* entire device, affecting all other threads that might be using it
* even on other queue pairs.
*/
for (j = 0; j < device->cdev_info.max_nb_queue_pairs; j++) {
rc = rte_cryptodev_queue_pair_setup(cdev_id, j, &qp_conf, SOCKET_ID_ANY,
(struct rte_mempool *)g_session_mp);
if (rc < 0) {
SPDK_ERRLOG("Failed to setup queue pair %u on "
"cryptodev %u\n", j, cdev_id);
rc = -EINVAL;
goto err;
}
}
rc = rte_cryptodev_start(cdev_id);
if (rc < 0) {
SPDK_ERRLOG("Failed to start device %u: error %d\n",
cdev_id, rc);
rc = -EINVAL;
goto err;
}
/* Build up list of device/qp combinations */
for (j = 0; j < device->cdev_info.max_nb_queue_pairs; j++) {
dev_qp = calloc(1, sizeof(struct device_qp));
if (!dev_qp) {
rc = -ENOMEM;
goto err;
}
dev_qp->device = device;
dev_qp->qp = j;
dev_qp->in_use = false;
TAILQ_INSERT_TAIL(&g_device_qp, dev_qp, link);
}
/* Add to our list of available crypto devices. */
TAILQ_INSERT_TAIL(&g_vbdev_devs, device, link);
return 0;
err:
TAILQ_FOREACH_SAFE(dev_qp, &g_device_qp, link, tmp_qp) {
TAILQ_REMOVE(&g_device_qp, dev_qp, link);
free(dev_qp);
}
free(device);
return rc;
}
/* This is called from the module's init function. We setup all crypto devices early on as we are unable
* to easily dynamically configure queue pairs after the drivers are up and running. So, here, we
* configure the max capabilities of each device and assign threads to queue pairs as channels are
@ -201,10 +304,10 @@ static int
vbdev_crypto_init_crypto_drivers(void)
{
uint8_t cdev_count;
uint8_t cdrv_id, cdev_id, i, j;
uint8_t cdev_id, i;
int rc = 0;
struct vbdev_dev *device = NULL;
struct device_qp *dev_qp = NULL;
struct vbdev_dev *device;
struct vbdev_dev *tmp_dev;
unsigned int max_sess_size = 0, sess_size;
uint16_t num_lcores = rte_lcore_count();
@ -269,106 +372,21 @@ vbdev_crypto_init_crypto_drivers(void)
goto error_create_op;
}
/*
* Now lets configure each device.
*/
/* Init all devices */
for (i = 0; i < cdev_count; i++) {
device = calloc(1, sizeof(struct vbdev_dev));
if (!device) {
rc = -ENOMEM;
goto error_create_device;
}
/* Get details about this device. */
rte_cryptodev_info_get(i, &device->cdev_info);
cdrv_id = device->cdev_info.driver_id;
cdev_id = device->cdev_id = i;
/* Before going any further, make sure we have enough resources for this
* device type to function. We need a unique queue pair per core accross each
* device type to remain lockless....
*/
if ((rte_cryptodev_device_count_by_driver(cdrv_id) *
device->cdev_info.max_nb_queue_pairs) < num_lcores) {
SPDK_ERRLOG("Insufficient unique queue pairs available for %s\n",
device->cdev_info.driver_name);
SPDK_ERRLOG("Either add more crypto devices or decrease core count\n");
rc = -EINVAL;
goto error_qp;
}
/* Setup queue pairs. */
struct rte_cryptodev_config conf = {
.nb_queue_pairs = device->cdev_info.max_nb_queue_pairs,
.socket_id = SPDK_ENV_SOCKET_ID_ANY
};
rc = rte_cryptodev_configure(cdev_id, &conf);
if (rc < 0) {
SPDK_ERRLOG("Failed to configure cryptodev %u\n", cdev_id);
rc = -EINVAL;
goto error_dev_config;
}
struct rte_cryptodev_qp_conf qp_conf = {
.nb_descriptors = CRYPTO_QP_DESCRIPTORS
};
/* Pre-setup all pottential qpairs now and assign them in the channel
* callback. If we were to create them there, we'd have to stop the
* entire device affecting all other threads that might be using it
* even on other queue pairs.
*/
for (j = 0; j < device->cdev_info.max_nb_queue_pairs; j++) {
rc = rte_cryptodev_queue_pair_setup(cdev_id, j, &qp_conf, SOCKET_ID_ANY,
(struct rte_mempool *)g_session_mp);
if (rc < 0) {
SPDK_ERRLOG("Failed to setup queue pair %u on "
"cryptodev %u\n", j, cdev_id);
rc = -EINVAL;
goto error_qp_setup;
}
}
rc = rte_cryptodev_start(cdev_id);
if (rc < 0) {
SPDK_ERRLOG("Failed to start device %u: error %d\n",
cdev_id, rc);
rc = -EINVAL;
goto error_device_start;
}
/* Add to our list of available crypto devices. */
TAILQ_INSERT_TAIL(&g_vbdev_devs, device, link);
/* Build up list of device/qp combinations */
for (j = 0; j < device->cdev_info.max_nb_queue_pairs; j++) {
dev_qp = calloc(1, sizeof(struct device_qp));
if (!dev_qp) {
rc = -ENOMEM;
goto error_create_devqp;
}
dev_qp->device = device;
dev_qp->qp = j;
dev_qp->in_use = false;
TAILQ_INSERT_TAIL(&g_device_qp, dev_qp, link);
rc = create_vbdev_dev(i, num_lcores);
if (rc) {
goto err;
}
}
return 0;
/* Error cleanup paths. */
error_create_devqp:
while ((dev_qp = TAILQ_FIRST(&g_device_qp))) {
TAILQ_REMOVE(&g_device_qp, dev_qp, link);
free(dev_qp);
err:
TAILQ_FOREACH_SAFE(device, &g_vbdev_devs, link, tmp_dev) {
TAILQ_REMOVE(&g_vbdev_devs, device, link);
free(device);
}
error_device_start:
error_qp_setup:
error_dev_config:
error_qp:
free(device);
error_create_device:
rte_mempool_free(g_crypto_op_mp);
g_crypto_op_mp = NULL;
error_create_op:

View File

@ -205,7 +205,7 @@ end:
int
vbdev_lvs_create(struct spdk_bdev *base_bdev, const char *name, uint32_t cluster_sz,
spdk_lvs_op_with_handle_complete cb_fn, void *cb_arg)
enum lvs_clear_method clear_method, spdk_lvs_op_with_handle_complete cb_fn, void *cb_arg)
{
struct spdk_bs_dev *bs_dev;
struct spdk_lvs_with_handle_req *lvs_req;
@ -223,6 +223,10 @@ vbdev_lvs_create(struct spdk_bdev *base_bdev, const char *name, uint32_t cluster
opts.cluster_sz = cluster_sz;
}
if (clear_method != 0) {
opts.clear_method = clear_method;
}
if (name == NULL) {
SPDK_ERRLOG("missing name param\n");
return -EINVAL;

View File

@ -48,7 +48,7 @@ struct lvol_store_bdev {
};
int vbdev_lvs_create(struct spdk_bdev *base_bdev, const char *name, uint32_t cluster_sz,
spdk_lvs_op_with_handle_complete cb_fn, void *cb_arg);
enum lvs_clear_method clear_method, spdk_lvs_op_with_handle_complete cb_fn, void *cb_arg);
void vbdev_lvs_destruct(struct spdk_lvol_store *lvs, spdk_lvs_op_complete cb_fn, void *cb_arg);
void vbdev_lvs_unload(struct spdk_lvol_store *lvs, spdk_lvs_op_complete cb_fn, void *cb_arg);

View File

@ -44,6 +44,7 @@ struct rpc_construct_lvol_store {
char *lvs_name;
char *bdev_name;
uint32_t cluster_sz;
char *clear_method;
};
static int
@ -81,12 +82,14 @@ free_rpc_construct_lvol_store(struct rpc_construct_lvol_store *req)
{
free(req->bdev_name);
free(req->lvs_name);
free(req->clear_method);
}
static const struct spdk_json_object_decoder rpc_construct_lvol_store_decoders[] = {
{"bdev_name", offsetof(struct rpc_construct_lvol_store, bdev_name), spdk_json_decode_string},
{"cluster_sz", offsetof(struct rpc_construct_lvol_store, cluster_sz), spdk_json_decode_uint32, true},
{"lvs_name", offsetof(struct rpc_construct_lvol_store, lvs_name), spdk_json_decode_string},
{"clear_method", offsetof(struct rpc_construct_lvol_store, clear_method), spdk_json_decode_string, true},
};
static void
@ -123,6 +126,7 @@ spdk_rpc_construct_lvol_store(struct spdk_jsonrpc_request *request,
struct rpc_construct_lvol_store req = {};
struct spdk_bdev *bdev;
int rc;
enum lvs_clear_method clear_method;
if (spdk_json_decode_object(params, rpc_construct_lvol_store_decoders,
SPDK_COUNTOF(rpc_construct_lvol_store_decoders),
@ -150,8 +154,23 @@ spdk_rpc_construct_lvol_store(struct spdk_jsonrpc_request *request,
goto invalid;
}
rc = vbdev_lvs_create(bdev, req.lvs_name, req.cluster_sz, _spdk_rpc_lvol_store_construct_cb,
request);
if (req.clear_method != NULL) {
if (!strcasecmp(req.clear_method, "none")) {
clear_method = LVS_CLEAR_WITH_NONE;
} else if (!strcasecmp(req.clear_method, "unmap")) {
clear_method = LVS_CLEAR_WITH_UNMAP;
} else if (!strcasecmp(req.clear_method, "write_zeroes")) {
clear_method = LVS_CLEAR_WITH_WRITE_ZEROES;
} else {
rc = -EINVAL;
goto invalid;
}
} else {
clear_method = LVS_CLEAR_WITH_UNMAP;
}
rc = vbdev_lvs_create(bdev, req.lvs_name, req.cluster_sz, clear_method,
_spdk_rpc_lvol_store_construct_cb, request);
if (rc < 0) {
goto invalid;
}

View File

@ -95,6 +95,14 @@ struct nvme_probe_ctx {
const char *hostnqn;
};
struct nvme_probe_skip_entry {
struct spdk_nvme_transport_id trid;
TAILQ_ENTRY(nvme_probe_skip_entry) tailq;
};
/* All the controllers deleted by users via RPC are skipped by hotplug monitor */
static TAILQ_HEAD(, nvme_probe_skip_entry) g_skipped_nvme_ctrlrs = TAILQ_HEAD_INITIALIZER(
g_skipped_nvme_ctrlrs);
static struct spdk_bdev_nvme_opts g_opts = {
.action_on_timeout = SPDK_BDEV_NVME_TIMEOUT_ACTION_NONE,
.timeout_us = 0,
@ -809,6 +817,14 @@ static bool
hotplug_probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
struct spdk_nvme_ctrlr_opts *opts)
{
struct nvme_probe_skip_entry *entry;
TAILQ_FOREACH(entry, &g_skipped_nvme_ctrlrs, tailq) {
if (spdk_nvme_transport_id_compare(trid, &entry->trid) == 0) {
return false;
}
}
SPDK_DEBUGLOG(SPDK_LOG_BDEV_NVME, "Attaching to %s\n", trid->traddr);
return true;
@ -1203,6 +1219,7 @@ spdk_bdev_nvme_create(struct spdk_nvme_transport_id *trid,
struct nvme_bdev *nvme_bdev;
uint32_t i, nsid;
size_t j;
struct nvme_probe_skip_entry *entry, *tmp;
if (nvme_ctrlr_get(trid) != NULL) {
SPDK_ERRLOG("A controller with the provided trid (traddr: %s) already exists.\n", trid->traddr);
@ -1214,6 +1231,16 @@ spdk_bdev_nvme_create(struct spdk_nvme_transport_id *trid,
return -1;
}
if (trid->trtype == SPDK_NVME_TRANSPORT_PCIE) {
TAILQ_FOREACH_SAFE(entry, &g_skipped_nvme_ctrlrs, tailq, tmp) {
if (spdk_nvme_transport_id_compare(trid, &entry->trid) == 0) {
TAILQ_REMOVE(&g_skipped_nvme_ctrlrs, entry, tailq);
free(entry);
break;
}
}
}
spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
if (hostnqn) {
@ -1276,6 +1303,7 @@ int
spdk_bdev_nvme_delete(const char *name)
{
struct nvme_ctrlr *nvme_ctrlr = NULL;
struct nvme_probe_skip_entry *entry;
if (name == NULL) {
return -EINVAL;
@ -1287,6 +1315,15 @@ spdk_bdev_nvme_delete(const char *name)
return -ENODEV;
}
if (nvme_ctrlr->trid.trtype == SPDK_NVME_TRANSPORT_PCIE) {
entry = calloc(1, sizeof(*entry));
if (!entry) {
return -ENOMEM;
}
entry->trid = nvme_ctrlr->trid;
TAILQ_INSERT_TAIL(&g_skipped_nvme_ctrlrs, entry, tailq);
}
remove_cb(NULL, nvme_ctrlr->ctrlr);
return 0;
}
@ -1491,9 +1528,15 @@ static void
bdev_nvme_library_fini(void)
{
struct nvme_ctrlr *nvme_ctrlr, *tmp;
struct nvme_probe_skip_entry *entry, *entry_tmp;
spdk_poller_unregister(&g_hotplug_poller);
TAILQ_FOREACH_SAFE(entry, &g_skipped_nvme_ctrlrs, tailq, entry_tmp) {
TAILQ_REMOVE(&g_skipped_nvme_ctrlrs, entry, tailq);
free(entry);
}
pthread_mutex_lock(&g_bdev_nvme_mutex);
TAILQ_FOREACH_SAFE(nvme_ctrlr, &g_nvme_ctrlrs, tailq, tmp) {
if (nvme_ctrlr->ref > 0) {

View File

@ -38,16 +38,16 @@
#include "spdk_internal/log.h"
/* Number of buffers for mempool
* Need to be power of two
* Need to be power of two - 1 for better memory utilization
* It depends on memory usage of OCF which
* in itself depends on the workload
* It is a big number because OCF uses allocators
* for every request it sends and receives
*/
#define ENV_ALLOCATOR_NBUFS 32768
#define ENV_ALLOCATOR_NBUFS 32767
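/* 32767 = 2^15 - 1; per the DPDK mempool guidance, a mempool is most
 * memory-efficient when its element count is a power of two minus one. */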
/* Use unique index for env allocators */
static int g_env_allocator_index = 0;
static env_atomic g_env_allocator_index = 0;
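/* Atomic increment keeps the generated mempool names unique even if OCF creates
 * allocators from more than one thread at a time (assumption inferred from the
 * switch away from a plain int). */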
void *
env_allocator_new(env_allocator *allocator)
@ -61,7 +61,7 @@ env_allocator_create(uint32_t size, const char *name)
env_allocator *allocator;
char qualified_name[128] = {0};
snprintf(qualified_name, 128, "ocf_env_%d", g_env_allocator_index++);
snprintf(qualified_name, 128, "ocf_env_%d", env_atomic_inc_return(&g_env_allocator_index));
allocator = spdk_mempool_create(qualified_name,
ENV_ALLOCATOR_NBUFS, size,

View File

@ -2421,6 +2421,7 @@ spdk_bs_opts_init(struct spdk_bs_opts *opts)
opts->num_md_pages = SPDK_BLOB_OPTS_NUM_MD_PAGES;
opts->max_md_ops = SPDK_BLOB_OPTS_MAX_MD_OPS;
opts->max_channel_ops = SPDK_BLOB_OPTS_DEFAULT_CHANNEL_OPS;
opts->clear_method = BS_CLEAR_WITH_UNMAP;
memset(&opts->bstype, 0, sizeof(opts->bstype));
opts->iter_cb_fn = NULL;
opts->iter_cb_arg = NULL;
@ -3694,8 +3695,14 @@ spdk_bs_init(struct spdk_bs_dev *dev, struct spdk_bs_opts *o,
/* Clear metadata space */
spdk_bs_batch_write_zeroes_dev(batch, 0, num_md_lba);
/* Trim data clusters */
spdk_bs_batch_unmap_dev(batch, num_md_lba, ctx->bs->dev->blockcnt - num_md_lba);
if (opts.clear_method == BS_CLEAR_WITH_UNMAP) {
/* Trim data clusters */
spdk_bs_batch_unmap_dev(batch, num_md_lba, ctx->bs->dev->blockcnt - num_md_lba);
} else if (opts.clear_method == BS_CLEAR_WITH_WRITE_ZEROES) {
/* Write_zeroes to data clusters */
spdk_bs_batch_write_zeroes_dev(batch, num_md_lba, ctx->bs->dev->blockcnt - num_md_lba);
}
spdk_bs_batch_close(batch);
}
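A minimal sketch of how an application could opt into the write-zeroes behavior at the blobstore level; spdk_bs_opts_init() and spdk_bs_init() are the existing blobstore API, while the bs_dev setup and the callback body are assumed:

#include "spdk/blob.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	/* bs is NULL when bserrno != 0 (see the bs_request change below). */
}

static void
init_bs_write_zeroes(struct spdk_bs_dev *bs_dev)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts);
	/* Write zeroes over the data clusters instead of unmapping them;
	 * BS_CLEAR_WITH_UNMAP stays the default set in spdk_bs_opts_init(). */
	opts.clear_method = BS_CLEAR_WITH_WRITE_ZEROES;

	spdk_bs_init(bs_dev, &opts, bs_init_done, NULL);
}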

View File

@ -51,7 +51,7 @@ spdk_bs_call_cpl(struct spdk_bs_cpl *cpl, int bserrno)
break;
case SPDK_BS_CPL_TYPE_BS_HANDLE:
cpl->u.bs_handle.cb_fn(cpl->u.bs_handle.cb_arg,
cpl->u.bs_handle.bs,
bserrno == 0 ? cpl->u.bs_handle.bs : NULL,
bserrno);
break;
case SPDK_BS_CPL_TYPE_BLOB_BASIC:
@ -60,12 +60,12 @@ spdk_bs_call_cpl(struct spdk_bs_cpl *cpl, int bserrno)
break;
case SPDK_BS_CPL_TYPE_BLOBID:
cpl->u.blobid.cb_fn(cpl->u.blobid.cb_arg,
cpl->u.blobid.blobid,
bserrno == 0 ? cpl->u.blobid.blobid : SPDK_BLOBID_INVALID,
bserrno);
break;
case SPDK_BS_CPL_TYPE_BLOB_HANDLE:
cpl->u.blob_handle.cb_fn(cpl->u.blob_handle.cb_arg,
cpl->u.blob_handle.blob,
bserrno == 0 ? cpl->u.blob_handle.blob : NULL,
bserrno);
break;
case SPDK_BS_CPL_TYPE_NESTED_SEQUENCE:

View File

@ -162,6 +162,7 @@ spdk_push_arg(char *args[], int *argcount, char *arg)
tmp = realloc(args, sizeof(char *) * (*argcount + 1));
if (tmp == NULL) {
free(arg);
spdk_free_args(args, *argcount);
return NULL;
}

View File

@ -857,7 +857,7 @@ spdk_app_parse_args(int argc, char **argv, struct spdk_app_opts *opts,
retval = SPDK_APP_PARSE_ARGS_HELP;
goto out;
case SHM_ID_OPT_IDX:
opts->shm_id = spdk_strtol(optarg, 10);
opts->shm_id = spdk_strtol(optarg, 0);
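/* Base 0 lets the underlying strtol() auto-detect the radix, so decimal,
 * 0x-prefixed hex and 0-prefixed octal shared memory IDs are all accepted. */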
if (opts->shm_id < 0) {
fprintf(stderr, "Invalid shared memory ID %s\n", optarg);
goto out;
@ -867,14 +867,14 @@ spdk_app_parse_args(int argc, char **argv, struct spdk_app_opts *opts,
opts->reactor_mask = optarg;
break;
case MEM_CHANNELS_OPT_IDX:
opts->mem_channel = spdk_strtol(optarg, 10);
opts->mem_channel = spdk_strtol(optarg, 0);
if (opts->mem_channel < 0) {
fprintf(stderr, "Invalid memory channel %s\n", optarg);
goto out;
}
break;
case MASTER_CORE_OPT_IDX:
opts->master_core = spdk_strtol(optarg, 10);
opts->master_core = spdk_strtol(optarg, 0);
if (opts->master_core < 0) {
fprintf(stderr, "Invalid master core %s\n", optarg);
goto out;

View File

@ -42,6 +42,7 @@ struct spdk_subsystem_list g_subsystems = TAILQ_HEAD_INITIALIZER(g_subsystems);
struct spdk_subsystem_depend_list g_subsystems_deps = TAILQ_HEAD_INITIALIZER(g_subsystems_deps);
static struct spdk_subsystem *g_next_subsystem;
static bool g_subsystems_initialized = false;
static bool g_subsystems_init_interrupted = false;
static struct spdk_event *g_app_start_event;
static struct spdk_event *g_app_stop_event;
static uint32_t g_fini_core;
@ -116,6 +117,11 @@ subsystem_sort(void)
void
spdk_subsystem_init_next(int rc)
{
/* Initialization was interrupted by spdk_subsystem_fini, so just return */
if (g_subsystems_init_interrupted) {
return;
}
if (rc) {
SPDK_ERRLOG("Init subsystem %s failed\n", g_next_subsystem->name);
spdk_app_stop(rc);
@ -190,11 +196,11 @@ _spdk_subsystem_fini_next(void *arg1, void *arg2)
g_next_subsystem = TAILQ_LAST(&g_subsystems, spdk_subsystem_list);
}
} else {
/* We rewind the g_next_subsystem unconditionally - even when some subsystem failed
* to initialize. It is assumed that subsystem which failed to initialize does not
* need to be deinitialized.
*/
g_next_subsystem = TAILQ_PREV(g_next_subsystem, spdk_subsystem_list, tailq);
if (g_subsystems_initialized || g_subsystems_init_interrupted) {
g_next_subsystem = TAILQ_PREV(g_next_subsystem, spdk_subsystem_list, tailq);
} else {
g_subsystems_init_interrupted = true;
}
}
while (g_next_subsystem) {

View File

@ -224,13 +224,14 @@ _ftl_band_set_free(struct ftl_band *band)
}
static void
_ftl_band_set_opening(struct ftl_band *band)
_ftl_band_set_preparing(struct ftl_band *band)
{
struct spdk_ftl_dev *dev = band->dev;
struct ftl_md *md = &band->md;
/* Verify band's previous state */
assert(band->state == FTL_BAND_STATE_PREP);
assert(band->state == FTL_BAND_STATE_FREE);
/* Remove band from free list */
LIST_REMOVE(band, list_entry);
md->wr_cnt++;
@ -467,8 +468,8 @@ ftl_band_set_state(struct ftl_band *band, enum ftl_band_state state)
_ftl_band_set_free(band);
break;
case FTL_BAND_STATE_OPENING:
_ftl_band_set_opening(band);
case FTL_BAND_STATE_PREP:
_ftl_band_set_preparing(band);
break;
case FTL_BAND_STATE_CLOSED:

View File

@ -507,23 +507,22 @@ ftl_get_limit(const struct spdk_ftl_dev *dev, int type)
return &dev->conf.defrag.limits[type];
}
static int
ftl_update_md_entry(struct spdk_ftl_dev *dev, struct ftl_rwb_entry *entry)
static bool
ftl_cache_lba_valid(struct spdk_ftl_dev *dev, struct ftl_rwb_entry *entry)
{
struct ftl_ppa ppa;
/* If the LBA is invalid don't bother checking the md and l2p */
if (spdk_unlikely(entry->lba == FTL_LBA_INVALID)) {
return 1;
return false;
}
ppa = ftl_l2p_get(dev, entry->lba);
if (!(ftl_ppa_cached(ppa) && ppa.offset == entry->pos)) {
ftl_invalidate_addr(dev, entry->ppa);
return 1;
return false;
}
return 0;
return true;
}
static void
@ -535,13 +534,10 @@ ftl_evict_cache_entry(struct spdk_ftl_dev *dev, struct ftl_rwb_entry *entry)
goto unlock;
}
/* Make sure the metadata is in sync with l2p. If the l2p still contains */
/* the entry, fill it with the on-disk PPA and clear the cache status */
/* bit. Otherwise, skip the l2p update and just clear the cache status. */
/* This can happen, when a write comes during the time that l2p contains */
/* the entry, but the entry doesn't have a PPA assigned (and therefore */
/* does not have the cache bit set). */
if (ftl_update_md_entry(dev, entry)) {
/* If the l2p wasn't updated and still points at the entry, fill it with the */
/* on-disk PPA and clear the cache status bit. Otherwise, skip the l2p update */
/* and just clear the cache status. */
if (!ftl_cache_lba_valid(dev, entry)) {
goto clear;
}
@ -891,10 +887,6 @@ ftl_write_cb(void *arg, int status)
SPDK_DEBUGLOG(SPDK_LOG_FTL_CORE, "Write ppa:%lu, lba:%lu\n",
entry->ppa.ppa, entry->lba);
if (ftl_update_md_entry(dev, entry)) {
ftl_rwb_entry_invalidate(entry);
}
}
ftl_process_flush(dev, batch);
@ -1039,7 +1031,7 @@ ftl_wptr_process_writes(struct ftl_wptr *wptr)
struct ftl_rwb_batch *batch;
struct ftl_rwb_entry *entry;
struct ftl_io *io;
struct ftl_ppa ppa;
struct ftl_ppa ppa, prev_ppa;
/* Make sure the band is prepared for writing */
if (!ftl_wptr_ready(wptr)) {
@ -1069,14 +1061,21 @@ ftl_wptr_process_writes(struct ftl_wptr *wptr)
ppa = wptr->ppa;
ftl_rwb_foreach(entry, batch) {
entry->ppa = ppa;
/* Setting entry's cache bit needs to be done after metadata */
/* within the band is updated to make sure that writes */
/* invalidating the entry clear the metadata as well */
if (entry->lba != FTL_LBA_INVALID) {
ftl_band_set_addr(wptr->band, entry->lba, entry->ppa);
}
ftl_rwb_entry_set_valid(entry);
if (entry->lba != FTL_LBA_INVALID) {
pthread_spin_lock(&entry->lock);
prev_ppa = ftl_l2p_get(dev, entry->lba);
/* If the l2p was updated in the meantime, don't update band's metadata */
if (ftl_ppa_cached(prev_ppa) && prev_ppa.offset == entry->pos) {
/* Setting entry's cache bit needs to be done after metadata */
/* within the band is updated to make sure that writes */
/* invalidating the entry clear the metadata as well */
ftl_band_set_addr(wptr->band, entry->lba, entry->ppa);
ftl_rwb_entry_set_valid(entry);
}
pthread_spin_unlock(&entry->lock);
}
ftl_trace_rwb_pop(dev, entry);
ftl_update_rwb_stats(dev, entry);

View File

@ -964,9 +964,6 @@ ftl_dev_free_sync(struct spdk_ftl_dev *dev)
}
pthread_mutex_unlock(&g_ftl_queue_lock);
ftl_dev_free_thread(dev, &dev->read_thread);
ftl_dev_free_thread(dev, &dev->core_thread);
assert(LIST_EMPTY(&dev->wptr_list));
ftl_dev_dump_bands(dev);
@ -1004,6 +1001,9 @@ ftl_halt_poller(void *ctx)
if (!dev->core_thread.poller && !dev->read_thread.poller) {
spdk_poller_unregister(&dev->halt_poller);
ftl_dev_free_thread(dev, &dev->read_thread);
ftl_dev_free_thread(dev, &dev->core_thread);
ftl_anm_unregister_device(dev);
ftl_dev_free_sync(dev);

View File

@ -200,7 +200,7 @@ ftl_io_init_internal(const struct ftl_io_init_opts *opts)
io->lbk_cnt = opts->iov_cnt * opts->req_size;
io->rwb_batch = opts->rwb_batch;
io->band = opts->band;
io->md = io->md;
io->md = opts->md;
if (ftl_io_init_iovec(io, opts->data, opts->iov_cnt, opts->req_size)) {
if (!opts->io) {

View File

@ -622,10 +622,26 @@ ftl_band_reloc_init(struct ftl_reloc *reloc, struct ftl_band_reloc *breloc,
static void
ftl_band_reloc_free(struct ftl_band_reloc *breloc)
{
struct ftl_reloc *reloc = breloc->parent;
struct ftl_io *io;
size_t i, num_ios;
if (!breloc) {
return;
}
if (breloc->active) {
num_ios = spdk_ring_dequeue(breloc->write_queue, (void **)reloc->io, reloc->max_qdepth);
for (i = 0; i < num_ios; ++i) {
io = reloc->io[i];
if (io->flags & FTL_IO_INITIALIZED) {
ftl_reloc_free_io(breloc, io);
}
}
ftl_reloc_release_io(breloc);
}
spdk_ring_free(breloc->free_queue);
spdk_ring_free(breloc->write_queue);
spdk_bit_array_free(&breloc->reloc_map);

View File

@ -62,6 +62,8 @@ struct ftl_restore {
void *md_buf;
void *lba_map;
bool l2p_phase;
};
static int
@ -94,6 +96,7 @@ ftl_restore_init(struct spdk_ftl_dev *dev, ftl_restore_fn cb)
restore->dev = dev;
restore->cb = cb;
restore->l2p_phase = false;
restore->bands = calloc(ftl_dev_num_bands(dev), sizeof(*restore->bands));
if (!restore->bands) {
@ -131,9 +134,10 @@ static void
ftl_restore_complete(struct ftl_restore *restore, int status)
{
struct ftl_restore *ctx = status ? NULL : restore;
bool l2p_phase = restore->l2p_phase;
restore->cb(restore->dev, ctx, status);
if (status) {
if (status || l2p_phase) {
ftl_restore_free(restore);
}
}
@ -409,6 +413,7 @@ ftl_restore_device(struct ftl_restore *restore, ftl_restore_fn cb)
{
struct ftl_restore_band *rband;
restore->l2p_phase = true;
restore->current = 0;
restore->cb = cb;

View File

@ -556,6 +556,7 @@ void
spdk_lvs_opts_init(struct spdk_lvs_opts *o)
{
o->cluster_sz = SPDK_LVS_OPTS_CLUSTER_SZ;
o->clear_method = LVS_CLEAR_WITH_UNMAP;
memset(o->name, 0, sizeof(o->name));
}
@ -565,6 +566,7 @@ _spdk_setup_lvs_opts(struct spdk_bs_opts *bs_opts, struct spdk_lvs_opts *o)
assert(o != NULL);
spdk_lvs_bs_opts_init(bs_opts);
bs_opts->cluster_sz = o->cluster_sz;
bs_opts->clear_method = (enum bs_clear_method)o->clear_method;
}
int

View File

@ -49,7 +49,9 @@
#include "spdk_internal/log.h"
#include "spdk/queue.h"
#define GET_IO_LOOP_COUNT 16
#define GET_IO_LOOP_COUNT 16
#define NBD_BUSY_WAITING_MS 1000
#define NBD_BUSY_POLLING_INTERVAL_US 20000
enum nbd_io_state_t {
/* Receiving or ready to receive nbd request header */
@ -353,10 +355,6 @@ _nbd_stop(struct spdk_nbd_disk *nbd)
spdk_bdev_close(nbd->bdev_desc);
}
if (nbd->nbd_path) {
free(nbd->nbd_path);
}
if (nbd->spdk_sp_fd >= 0) {
close(nbd->spdk_sp_fd);
}
@ -366,11 +364,18 @@ _nbd_stop(struct spdk_nbd_disk *nbd)
}
if (nbd->dev_fd >= 0) {
ioctl(nbd->dev_fd, NBD_CLEAR_QUE);
ioctl(nbd->dev_fd, NBD_CLEAR_SOCK);
/* Clear nbd device only if it is occupied by SPDK app */
if (nbd->nbd_path && spdk_nbd_disk_find_by_nbd_path(nbd->nbd_path)) {
ioctl(nbd->dev_fd, NBD_CLEAR_QUE);
ioctl(nbd->dev_fd, NBD_CLEAR_SOCK);
}
close(nbd->dev_fd);
}
if (nbd->nbd_path) {
free(nbd->nbd_path);
}
if (nbd->nbd_poller) {
spdk_poller_unregister(&nbd->nbd_poller);
}
@ -842,6 +847,7 @@ struct spdk_nbd_start_ctx {
spdk_nbd_start_cb cb_fn;
void *cb_arg;
struct spdk_poller *poller;
int polling_count;
};
static void
@ -851,6 +857,28 @@ spdk_nbd_start_complete(struct spdk_nbd_start_ctx *ctx)
pthread_t tid;
int flag;
/* Add nbd_disk to the end of disk list */
rc = spdk_nbd_disk_register(ctx->nbd);
if (rc != 0) {
SPDK_ERRLOG("Failed to register %s, it should not happen.\n", ctx->nbd->nbd_path);
assert(false);
goto err;
}
rc = ioctl(ctx->nbd->dev_fd, NBD_SET_BLKSIZE, spdk_bdev_get_block_size(ctx->nbd->bdev));
if (rc == -1) {
SPDK_ERRLOG("ioctl(NBD_SET_BLKSIZE) failed: %s\n", spdk_strerror(errno));
rc = -errno;
goto err;
}
rc = ioctl(ctx->nbd->dev_fd, NBD_SET_SIZE_BLOCKS, spdk_bdev_get_num_blocks(ctx->nbd->bdev));
if (rc == -1) {
SPDK_ERRLOG("ioctl(NBD_SET_SIZE_BLOCKS) failed: %s\n", spdk_strerror(errno));
rc = -errno;
goto err;
}
#ifdef NBD_FLAG_SEND_TRIM
rc = ioctl(ctx->nbd->dev_fd, NBD_SET_FLAGS, NBD_FLAG_SEND_TRIM);
if (rc == -1) {
@ -905,17 +933,18 @@ spdk_nbd_enable_kernel(void *arg)
struct spdk_nbd_start_ctx *ctx = arg;
int rc;
/* Declare device setup by this process */
rc = ioctl(ctx->nbd->dev_fd, NBD_SET_SOCK, ctx->nbd->kernel_sp_fd);
if (rc == -1) {
if (errno == EBUSY) {
if (errno == EBUSY && ctx->polling_count-- > 0) {
if (ctx->poller == NULL) {
ctx->poller = spdk_poller_register(spdk_nbd_enable_kernel, ctx, 20000);
ctx->poller = spdk_poller_register(spdk_nbd_enable_kernel, ctx,
NBD_BUSY_POLLING_INTERVAL_US);
}
/* If the kernel is busy, check back later */
return 0;
}
SPDK_ERRLOG("ioctl(NBD_SET_SOCK) failed: %s\n", spdk_strerror(errno));
if (ctx->poller) {
spdk_poller_unregister(&ctx->poller);
@ -976,6 +1005,7 @@ spdk_nbd_start(const char *bdev_name, const char *nbd_path,
ctx->nbd = nbd;
ctx->cb_fn = cb_fn;
ctx->cb_arg = cb_arg;
ctx->polling_count = NBD_BUSY_WAITING_MS * 1000ULL / NBD_BUSY_POLLING_INTERVAL_US;
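/* 1000 ms of budget at one NBD_SET_SOCK retry every 20000 us works out to 50 attempts. */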
rc = spdk_bdev_open(bdev, true, spdk_nbd_bdev_hot_remove, nbd, &nbd->bdev_desc);
if (rc != 0) {
@ -1007,9 +1037,10 @@ spdk_nbd_start(const char *bdev_name, const char *nbd_path,
TAILQ_INIT(&nbd->received_io_list);
TAILQ_INIT(&nbd->executed_io_list);
/* Add nbd_disk to the end of disk list */
rc = spdk_nbd_disk_register(nbd);
if (rc != 0) {
/* Make sure nbd_path is not used in this SPDK app */
if (spdk_nbd_disk_find_by_nbd_path(nbd->nbd_path)) {
SPDK_NOTICELOG("%s is already exported\n", nbd->nbd_path);
rc = -EBUSY;
goto err;
}
@ -1020,27 +1051,6 @@ spdk_nbd_start(const char *bdev_name, const char *nbd_path,
goto err;
}
rc = ioctl(nbd->dev_fd, NBD_SET_BLKSIZE, spdk_bdev_get_block_size(bdev));
if (rc == -1) {
SPDK_ERRLOG("ioctl(NBD_SET_BLKSIZE) failed: %s\n", spdk_strerror(errno));
rc = -errno;
goto err;
}
rc = ioctl(nbd->dev_fd, NBD_SET_SIZE_BLOCKS, spdk_bdev_get_num_blocks(bdev));
if (rc == -1) {
SPDK_ERRLOG("ioctl(NBD_SET_SIZE_BLOCKS) failed: %s\n", spdk_strerror(errno));
rc = -errno;
goto err;
}
rc = ioctl(nbd->dev_fd, NBD_CLEAR_SOCK);
if (rc == -1) {
SPDK_ERRLOG("ioctl(NBD_CLEAR_SOCK) failed: %s\n", spdk_strerror(errno));
rc = -errno;
goto err;
}
SPDK_INFOLOG(SPDK_LOG_NBD, "Enabling kernel access to bdev %s via %s\n",
spdk_bdev_get_name(bdev), nbd_path);

View File

@ -544,6 +544,9 @@ nvme_ctrlr_shutdown(struct spdk_nvme_ctrlr *ctrlr)
} while (ms_waited < shutdown_timeout_ms);
SPDK_ERRLOG("did not shutdown within %u milliseconds\n", shutdown_timeout_ms);
if (ctrlr->quirks & NVME_QUIRK_SHST_COMPLETE) {
SPDK_ERRLOG("likely due to shutdown handling in the VMWare emulated NVMe SSD\n");
}
}
static int

View File

@ -116,6 +116,13 @@ extern pid_t g_spdk_nvme_pid;
*/
#define NVME_INTEL_QUIRK_NO_LOG_PAGES 0x100
/*
* The controller does not set SHST_COMPLETE in a reasonable amount of time. This
* is primarily seen in virtual VMWare NVMe SSDs. This quirk merely adds an additional
* error message noting that, on VMWare NVMe SSDs, hitting the shutdown timeout may be expected.
*/
#define NVME_QUIRK_SHST_COMPLETE 0x200
#define NVME_MAX_ASYNC_EVENTS (8)
#define NVME_MIN_TIMEOUT_PERIOD (5)

View File

@ -2084,7 +2084,7 @@ nvme_pcie_qpair_process_completions(struct spdk_nvme_qpair *qpair, uint32_t max_
if (cpl->status.p != pqpair->phase) {
break;
}
#ifdef __PPC64__
#if defined(__PPC64__) || defined(__aarch64__)
/*
* This memory barrier prevents reordering of:
* - load after store from/to tr

View File

@ -83,6 +83,9 @@ static const struct nvme_quirk nvme_quirks[] = {
NVME_QUIRK_IDENTIFY_CNS |
NVME_QUIRK_OCSSD
},
{ {SPDK_PCI_VID_VMWARE, 0x07f0, SPDK_PCI_ANY_ID, SPDK_PCI_ANY_ID},
NVME_QUIRK_SHST_COMPLETE
},
{ {0x0000, 0x0000, 0x0000, 0x0000}, 0}
};

View File

@ -914,14 +914,23 @@ nvme_rdma_build_contig_inline_request(struct nvme_rdma_qpair *rqpair,
assert(nvme_payload_type(&req->payload) == NVME_PAYLOAD_TYPE_CONTIG);
requested_size = req->payload_size;
mr = (struct ibv_mr *)spdk_mem_map_translate(rqpair->mr_map->map,
(uint64_t)payload, &requested_size);
if (mr == NULL || requested_size < req->payload_size) {
if (mr) {
SPDK_ERRLOG("Data buffer split over multiple RDMA Memory Regions\n");
if (!g_nvme_hooks.get_rkey) {
mr = (struct ibv_mr *)spdk_mem_map_translate(rqpair->mr_map->map,
(uint64_t)payload, &requested_size);
if (mr == NULL || requested_size < req->payload_size) {
if (mr) {
SPDK_ERRLOG("Data buffer split over multiple RDMA Memory Regions\n");
}
return -EINVAL;
}
return -EINVAL;
rdma_req->send_sgl[1].lkey = mr->lkey;
} else {
rdma_req->send_sgl[1].lkey = spdk_mem_map_translate(rqpair->mr_map->map,
(uint64_t)payload,
&requested_size);
}
/* The first element of this SGL is pointing at an
@ -932,7 +941,6 @@ nvme_rdma_build_contig_inline_request(struct nvme_rdma_qpair *rqpair,
rdma_req->send_sgl[1].addr = (uint64_t)payload;
rdma_req->send_sgl[1].length = (uint32_t)req->payload_size;
rdma_req->send_sgl[1].lkey = mr->lkey;
/* The RDMA SGL contains two elements. The first describes
* the NVMe command and the second describes the data

View File

@ -117,6 +117,7 @@ struct nvme_tcp_req {
uint32_t r2tl_remain;
bool in_capsule_data;
struct nvme_tcp_pdu send_pdu;
void *buf;
TAILQ_ENTRY(nvme_tcp_req) link;
TAILQ_ENTRY(nvme_tcp_req) active_r2t_link;
};
@ -154,6 +155,7 @@ nvme_tcp_req_get(struct nvme_tcp_qpair *tqpair)
tcp_req->req = NULL;
tcp_req->in_capsule_data = false;
tcp_req->r2tl_remain = 0;
tcp_req->buf = NULL;
memset(&tcp_req->send_pdu, 0, sizeof(tcp_req->send_pdu));
TAILQ_INSERT_TAIL(&tqpair->outstanding_reqs, tcp_req, link);
@ -509,14 +511,14 @@ nvme_tcp_qpair_write_pdu(struct nvme_tcp_qpair *tqpair,
* Build SGL describing contiguous payload buffer.
*/
static int
nvme_tcp_build_contig_request(struct nvme_tcp_qpair *tqpair, struct nvme_request *req)
nvme_tcp_build_contig_request(struct nvme_tcp_qpair *tqpair, struct nvme_tcp_req *tcp_req)
{
void *payload = req->payload.contig_or_cb_arg + req->payload_offset;
struct nvme_request *req = tcp_req->req;
tcp_req->buf = req->payload.contig_or_cb_arg + req->payload_offset;
SPDK_DEBUGLOG(SPDK_LOG_NVME, "enter\n");
assert(nvme_payload_type(&req->payload) == NVME_PAYLOAD_TYPE_CONTIG);
req->cmd.dptr.sgl1.address = (uint64_t)payload;
return 0;
}
@ -525,11 +527,11 @@ nvme_tcp_build_contig_request(struct nvme_tcp_qpair *tqpair, struct nvme_request
* Build SGL describing scattered payload buffer.
*/
static int
nvme_tcp_build_sgl_request(struct nvme_tcp_qpair *tqpair, struct nvme_request *req)
nvme_tcp_build_sgl_request(struct nvme_tcp_qpair *tqpair, struct nvme_tcp_req *tcp_req)
{
int rc;
void *virt_addr;
uint32_t length;
struct nvme_request *req = tcp_req->req;
SPDK_DEBUGLOG(SPDK_LOG_NVME, "enter\n");
@ -540,7 +542,8 @@ nvme_tcp_build_sgl_request(struct nvme_tcp_qpair *tqpair, struct nvme_request *r
req->payload.reset_sgl_fn(req->payload.contig_or_cb_arg, req->payload_offset);
/* TODO: for now, we only support a single SGL entry */
rc = req->payload.next_sge_fn(req->payload.contig_or_cb_arg, &virt_addr, &length);
rc = req->payload.next_sge_fn(req->payload.contig_or_cb_arg, &tcp_req->buf, &length);
if (rc) {
return -1;
}
@ -550,8 +553,6 @@ nvme_tcp_build_sgl_request(struct nvme_tcp_qpair *tqpair, struct nvme_request *r
return -1;
}
req->cmd.dptr.sgl1.address = (uint64_t)virt_addr;
return 0;
}
@ -578,9 +579,9 @@ nvme_tcp_req_init(struct nvme_tcp_qpair *tqpair, struct nvme_request *req,
req->cmd.dptr.sgl1.unkeyed.length = req->payload_size;
if (nvme_payload_type(&req->payload) == NVME_PAYLOAD_TYPE_CONTIG) {
rc = nvme_tcp_build_contig_request(tqpair, req);
rc = nvme_tcp_build_contig_request(tqpair, tcp_req);
} else if (nvme_payload_type(&req->payload) == NVME_PAYLOAD_TYPE_SGL) {
rc = nvme_tcp_build_sgl_request(tqpair, req);
rc = nvme_tcp_build_sgl_request(tqpair, tcp_req);
} else {
rc = -1;
}
@ -622,14 +623,7 @@ static void
nvme_tcp_pdu_set_data_buf(struct nvme_tcp_pdu *pdu,
struct nvme_tcp_req *tcp_req)
{
/* Here is the tricky, we should consider different NVME data command type: SGL with continue or
scatter data, now we only consider continous data, which is not exactly correct, shoud be fixed */
if (spdk_unlikely(!tcp_req->req->cmd.dptr.sgl1.address)) {
pdu->data = (void *)tcp_req->req->payload.contig_or_cb_arg + tcp_req->datao;
} else {
pdu->data = (void *)tcp_req->req->cmd.dptr.sgl1.address + tcp_req->datao;
}
pdu->data = (void *)((uint64_t)tcp_req->buf + tcp_req->datao);
}
static int

View File

@ -210,17 +210,17 @@ struct spdk_nvmf_rdma_wr {
* command when there aren't any free request objects.
*/
struct spdk_nvmf_rdma_recv {
struct ibv_recv_wr wr;
struct ibv_sge sgl[NVMF_DEFAULT_RX_SGE];
struct ibv_recv_wr wr;
struct ibv_sge sgl[NVMF_DEFAULT_RX_SGE];
struct spdk_nvmf_rdma_qpair *qpair;
struct spdk_nvmf_rdma_qpair *qpair;
/* In-capsule data buffer */
uint8_t *buf;
uint8_t *buf;
struct spdk_nvmf_rdma_wr rdma_wr;
struct spdk_nvmf_rdma_wr rdma_wr;
TAILQ_ENTRY(spdk_nvmf_rdma_recv) link;
STAILQ_ENTRY(spdk_nvmf_rdma_recv) link;
};
struct spdk_nvmf_rdma_request_data {
@ -251,7 +251,7 @@ struct spdk_nvmf_rdma_request {
struct spdk_nvmf_rdma_wr rdma_wr;
TAILQ_ENTRY(spdk_nvmf_rdma_request) link;
TAILQ_ENTRY(spdk_nvmf_rdma_request) state_link;
STAILQ_ENTRY(spdk_nvmf_rdma_request) state_link;
};
enum spdk_nvmf_rdma_qpair_disconnect_flags {
@ -298,13 +298,17 @@ struct spdk_nvmf_rdma_qpair {
uint32_t max_recv_sge;
/* Receives that are waiting for a request object */
TAILQ_HEAD(, spdk_nvmf_rdma_recv) incoming_queue;
STAILQ_HEAD(, spdk_nvmf_rdma_recv) incoming_queue;
/* Queues to track the requests in all states */
TAILQ_HEAD(, spdk_nvmf_rdma_request) state_queue[RDMA_REQUEST_NUM_STATES];
/* Queues to track requests in critical states */
STAILQ_HEAD(, spdk_nvmf_rdma_request) free_queue;
/* Number of requests in each state */
uint32_t state_cntr[RDMA_REQUEST_NUM_STATES];
STAILQ_HEAD(, spdk_nvmf_rdma_request) pending_rdma_read_queue;
STAILQ_HEAD(, spdk_nvmf_rdma_request) pending_rdma_write_queue;
/* Number of requests not in the free state */
uint32_t qd;
/* Array of size "max_queue_depth" containing RDMA requests. */
struct spdk_nvmf_rdma_request *reqs;
@ -332,10 +336,6 @@ struct spdk_nvmf_rdma_qpair {
TAILQ_ENTRY(spdk_nvmf_rdma_qpair) link;
/* Mgmt channel */
struct spdk_io_channel *mgmt_channel;
struct spdk_nvmf_rdma_mgmt_channel *ch;
/* IBV queue pair attributes: they are used to manage
* qp state and recover from errors.
*/
@ -345,14 +345,11 @@ struct spdk_nvmf_rdma_qpair {
struct spdk_nvmf_rdma_wr drain_send_wr;
struct spdk_nvmf_rdma_wr drain_recv_wr;
/* Reference counter for how many unprocessed messages
* from other threads are currently outstanding. The
* qpair cannot be destroyed until this is 0. This is
* atomically incremented from any thread, but only
* decremented and read from the thread that owns this
* qpair.
/* There are several ways a disconnect can start on a qpair
* and they are not all mutually exclusive. It is important
* that we only initialize one of these paths.
*/
uint32_t refcnt;
bool disconnect_started;
};
struct spdk_nvmf_rdma_poller {
@ -371,6 +368,9 @@ struct spdk_nvmf_rdma_poller {
struct spdk_nvmf_rdma_poll_group {
struct spdk_nvmf_transport_poll_group group;
/* Requests that are waiting to obtain a data buffer */
TAILQ_HEAD(, spdk_nvmf_rdma_request) pending_data_buf_queue;
TAILQ_HEAD(, spdk_nvmf_rdma_poller) pollers;
};
@ -410,31 +410,6 @@ struct spdk_nvmf_rdma_transport {
TAILQ_HEAD(, spdk_nvmf_rdma_port) ports;
};
struct spdk_nvmf_rdma_mgmt_channel {
/* Requests that are waiting to obtain a data buffer */
TAILQ_HEAD(, spdk_nvmf_rdma_request) pending_data_buf_queue;
};
static inline void
spdk_nvmf_rdma_qpair_inc_refcnt(struct spdk_nvmf_rdma_qpair *rqpair)
{
__sync_fetch_and_add(&rqpair->refcnt, 1);
}
static inline uint32_t
spdk_nvmf_rdma_qpair_dec_refcnt(struct spdk_nvmf_rdma_qpair *rqpair)
{
uint32_t old_refcnt, new_refcnt;
do {
old_refcnt = rqpair->refcnt;
assert(old_refcnt > 0);
new_refcnt = old_refcnt - 1;
} while (__sync_bool_compare_and_swap(&rqpair->refcnt, old_refcnt, new_refcnt) == false);
return new_refcnt;
}
static inline int
spdk_nvmf_rdma_check_ibv_state(enum ibv_qp_state state)
{
@ -581,51 +556,6 @@ spdk_nvmf_rdma_set_ibv_state(struct spdk_nvmf_rdma_qpair *rqpair,
return 0;
}
static void
spdk_nvmf_rdma_request_set_state(struct spdk_nvmf_rdma_request *rdma_req,
enum spdk_nvmf_rdma_request_state state)
{
struct spdk_nvmf_qpair *qpair;
struct spdk_nvmf_rdma_qpair *rqpair;
qpair = rdma_req->req.qpair;
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
TAILQ_REMOVE(&rqpair->state_queue[rdma_req->state], rdma_req, state_link);
rqpair->state_cntr[rdma_req->state]--;
rdma_req->state = state;
TAILQ_INSERT_TAIL(&rqpair->state_queue[rdma_req->state], rdma_req, state_link);
rqpair->state_cntr[rdma_req->state]++;
}
static int
spdk_nvmf_rdma_mgmt_channel_create(void *io_device, void *ctx_buf)
{
struct spdk_nvmf_rdma_mgmt_channel *ch = ctx_buf;
TAILQ_INIT(&ch->pending_data_buf_queue);
return 0;
}
static void
spdk_nvmf_rdma_mgmt_channel_destroy(void *io_device, void *ctx_buf)
{
struct spdk_nvmf_rdma_mgmt_channel *ch = ctx_buf;
if (!TAILQ_EMPTY(&ch->pending_data_buf_queue)) {
SPDK_ERRLOG("Pending I/O list wasn't empty on channel destruction\n");
}
}
static int
spdk_nvmf_rdma_cur_queue_depth(struct spdk_nvmf_rdma_qpair *rqpair)
{
return rqpair->max_queue_depth -
rqpair->state_cntr[RDMA_REQUEST_STATE_FREE];
}
static void
nvmf_rdma_dump_request(struct spdk_nvmf_rdma_request *req)
{
@ -638,12 +568,11 @@ static void
nvmf_rdma_dump_qpair_contents(struct spdk_nvmf_rdma_qpair *rqpair)
{
int i;
struct spdk_nvmf_rdma_request *req;
SPDK_ERRLOG("Dumping contents of queue pair (QID %d)\n", rqpair->qpair.qid);
for (i = 1; i < RDMA_REQUEST_NUM_STATES; i++) {
SPDK_ERRLOG("\tdumping requests in state %d\n", i);
TAILQ_FOREACH(req, &rqpair->state_queue[i], state_link) {
nvmf_rdma_dump_request(req);
for (i = 0; i < rqpair->max_queue_depth; i++) {
if (rqpair->reqs[i].state != RDMA_REQUEST_STATE_FREE) {
nvmf_rdma_dump_request(&rqpair->reqs[i]);
}
}
}
@ -651,18 +580,11 @@ nvmf_rdma_dump_qpair_contents(struct spdk_nvmf_rdma_qpair *rqpair)
static void
spdk_nvmf_rdma_qpair_destroy(struct spdk_nvmf_rdma_qpair *rqpair)
{
int qd;
if (rqpair->refcnt != 0) {
return;
}
spdk_trace_record(TRACE_RDMA_QP_DESTROY, 0, 0, (uintptr_t)rqpair->cm_id, 0);
qd = spdk_nvmf_rdma_cur_queue_depth(rqpair);
if (qd != 0) {
if (rqpair->qd != 0) {
nvmf_rdma_dump_qpair_contents(rqpair);
SPDK_WARNLOG("Destroying qpair when queue depth is %d\n", qd);
SPDK_WARNLOG("Destroying qpair when queue depth is %d\n", rqpair->qd);
}
if (rqpair->poller) {
@ -690,10 +612,6 @@ spdk_nvmf_rdma_qpair_destroy(struct spdk_nvmf_rdma_qpair *rqpair)
}
}
if (rqpair->mgmt_channel) {
spdk_put_io_channel(rqpair->mgmt_channel);
}
/* Free all memory */
spdk_dma_free(rqpair->cmds);
spdk_dma_free(rqpair->cpls);
@ -834,11 +752,9 @@ spdk_nvmf_rdma_qpair_initialize(struct spdk_nvmf_qpair *qpair)
transport->opts.in_capsule_data_size, rqpair->bufs_mr->lkey);
}
/* Initialise request state queues and counters of the queue pair */
for (i = RDMA_REQUEST_STATE_FREE; i < RDMA_REQUEST_NUM_STATES; i++) {
TAILQ_INIT(&rqpair->state_queue[i]);
rqpair->state_cntr[i] = 0;
}
STAILQ_INIT(&rqpair->free_queue);
STAILQ_INIT(&rqpair->pending_rdma_read_queue);
STAILQ_INIT(&rqpair->pending_rdma_write_queue);
rqpair->current_recv_depth = rqpair->max_queue_depth;
for (i = 0; i < rqpair->max_queue_depth; i++) {
@ -912,8 +828,7 @@ spdk_nvmf_rdma_qpair_initialize(struct spdk_nvmf_qpair *qpair)
/* Initialize request state to FREE */
rdma_req->state = RDMA_REQUEST_STATE_FREE;
TAILQ_INSERT_TAIL(&rqpair->state_queue[rdma_req->state], rdma_req, state_link);
rqpair->state_cntr[rdma_req->state]++;
STAILQ_INSERT_HEAD(&rqpair->free_queue, rdma_req, state_link);
}
return 0;
@ -1142,7 +1057,7 @@ nvmf_rdma_connect(struct spdk_nvmf_transport *transport, struct rdma_cm_event *e
rqpair->cm_id = event->id;
rqpair->listen_id = event->listen_id;
rqpair->qpair.transport = transport;
TAILQ_INIT(&rqpair->incoming_queue);
STAILQ_INIT(&rqpair->incoming_queue);
event->id->context = &rqpair->qpair;
cb_fn(&rqpair->qpair);
@ -1322,8 +1237,8 @@ spdk_nvmf_rdma_request_fill_iovs(struct spdk_nvmf_rdma_transport *rtransport,
rdma_req->data.wr.sg_list[i].lkey = ((struct ibv_mr *)spdk_mem_map_translate(device->map,
(uint64_t)buf, &translation_len))->lkey;
} else {
rdma_req->data.wr.sg_list[i].lkey = *((uint64_t *)spdk_mem_map_translate(device->map,
(uint64_t)buf, &translation_len));
rdma_req->data.wr.sg_list[i].lkey = spdk_mem_map_translate(device->map,
(uint64_t)buf, &translation_len);
}
length -= rdma_req->req.iov[i].iov_len;
@ -1462,16 +1377,19 @@ nvmf_rdma_request_free(struct spdk_nvmf_rdma_request *rdma_req,
struct spdk_nvmf_rdma_qpair *rqpair;
struct spdk_nvmf_rdma_poll_group *rgroup;
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
if (rdma_req->data_from_pool) {
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
rgroup = rqpair->poller->group;
spdk_nvmf_rdma_request_free_buffers(rdma_req, &rgroup->group, &rtransport->transport);
}
rdma_req->num_outstanding_data_wr = 0;
rdma_req->req.length = 0;
rdma_req->req.iovcnt = 0;
rdma_req->req.data = NULL;
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_FREE);
rqpair->qd--;
STAILQ_INSERT_HEAD(&rqpair->free_queue, rdma_req, state_link);
rdma_req->state = RDMA_REQUEST_STATE_FREE;
}
static bool
@ -1480,6 +1398,7 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
{
struct spdk_nvmf_rdma_qpair *rqpair;
struct spdk_nvmf_rdma_device *device;
struct spdk_nvmf_rdma_poll_group *rgroup;
struct spdk_nvme_cpl *rsp = &rdma_req->req.rsp->nvme_cpl;
int rc;
struct spdk_nvmf_rdma_recv *rdma_recv;
@ -1489,6 +1408,7 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
device = rqpair->port->device;
rgroup = rqpair->poller->group;
assert(rdma_req->state != RDMA_REQUEST_STATE_FREE);
@ -1496,9 +1416,13 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
* to release resources. */
if (rqpair->ibv_attr.qp_state == IBV_QPS_ERR || rqpair->qpair.state != SPDK_NVMF_QPAIR_ACTIVE) {
if (rdma_req->state == RDMA_REQUEST_STATE_NEED_BUFFER) {
TAILQ_REMOVE(&rqpair->ch->pending_data_buf_queue, rdma_req, link);
TAILQ_REMOVE(&rgroup->pending_data_buf_queue, rdma_req, link);
} else if (rdma_req->state == RDMA_REQUEST_STATE_DATA_TRANSFER_TO_CONTROLLER_PENDING) {
STAILQ_REMOVE(&rqpair->pending_rdma_read_queue, rdma_req, spdk_nvmf_rdma_request, state_link);
} else if (rdma_req->state == RDMA_REQUEST_STATE_DATA_TRANSFER_TO_HOST_PENDING) {
STAILQ_REMOVE(&rqpair->pending_rdma_write_queue, rdma_req, spdk_nvmf_rdma_request, state_link);
}
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
rdma_req->state = RDMA_REQUEST_STATE_COMPLETED;
}
/* The loop here is to allow for several back-to-back state changes. */
@ -1521,10 +1445,8 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
rdma_req->req.cmd = (union nvmf_h2c_msg *)rdma_recv->sgl[0].addr;
memset(rdma_req->req.rsp, 0, sizeof(*rdma_req->req.rsp));
TAILQ_REMOVE(&rqpair->incoming_queue, rdma_recv, link);
if (rqpair->ibv_attr.qp_state == IBV_QPS_ERR || rqpair->qpair.state != SPDK_NVMF_QPAIR_ACTIVE) {
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
rdma_req->state = RDMA_REQUEST_STATE_COMPLETED;
break;
}
@ -1533,12 +1455,12 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
/* If no data to transfer, ready to execute. */
if (rdma_req->req.xfer == SPDK_NVME_DATA_NONE) {
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_READY_TO_EXECUTE);
rdma_req->state = RDMA_REQUEST_STATE_READY_TO_EXECUTE;
break;
}
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_NEED_BUFFER);
TAILQ_INSERT_TAIL(&rqpair->ch->pending_data_buf_queue, rdma_req, link);
rdma_req->state = RDMA_REQUEST_STATE_NEED_BUFFER;
TAILQ_INSERT_TAIL(&rgroup->pending_data_buf_queue, rdma_req, link);
break;
case RDMA_REQUEST_STATE_NEED_BUFFER:
spdk_trace_record(TRACE_RDMA_REQUEST_STATE_NEED_BUFFER, 0, 0,
@ -1546,7 +1468,7 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
assert(rdma_req->req.xfer != SPDK_NVME_DATA_NONE);
if (rdma_req != TAILQ_FIRST(&rqpair->ch->pending_data_buf_queue)) {
if (rdma_req != TAILQ_FIRST(&rgroup->pending_data_buf_queue)) {
/* This request needs to wait in line to obtain a buffer */
break;
}
@ -1554,9 +1476,9 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
/* Try to get a data buffer */
rc = spdk_nvmf_rdma_request_parse_sgl(rtransport, device, rdma_req);
if (rc < 0) {
TAILQ_REMOVE(&rqpair->ch->pending_data_buf_queue, rdma_req, link);
TAILQ_REMOVE(&rgroup->pending_data_buf_queue, rdma_req, link);
rsp->status.sc = SPDK_NVME_SC_INTERNAL_DEVICE_ERROR;
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_READY_TO_COMPLETE);
rdma_req->state = RDMA_REQUEST_STATE_READY_TO_COMPLETE;
break;
}
@ -1565,24 +1487,24 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
break;
}
TAILQ_REMOVE(&rqpair->ch->pending_data_buf_queue, rdma_req, link);
TAILQ_REMOVE(&rgroup->pending_data_buf_queue, rdma_req, link);
/* If data is transferring from host to controller and the data didn't
* arrive using in capsule data, we need to do a transfer from the host.
*/
if (rdma_req->req.xfer == SPDK_NVME_DATA_HOST_TO_CONTROLLER && rdma_req->data_from_pool) {
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_DATA_TRANSFER_TO_CONTROLLER_PENDING);
STAILQ_INSERT_TAIL(&rqpair->pending_rdma_read_queue, rdma_req, state_link);
rdma_req->state = RDMA_REQUEST_STATE_DATA_TRANSFER_TO_CONTROLLER_PENDING;
break;
}
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_READY_TO_EXECUTE);
rdma_req->state = RDMA_REQUEST_STATE_READY_TO_EXECUTE;
break;
case RDMA_REQUEST_STATE_DATA_TRANSFER_TO_CONTROLLER_PENDING:
spdk_trace_record(TRACE_RDMA_REQUEST_STATE_DATA_TRANSFER_TO_CONTROLLER_PENDING, 0, 0,
(uintptr_t)rdma_req, (uintptr_t)rqpair->cm_id);
if (rdma_req != TAILQ_FIRST(
&rqpair->state_queue[RDMA_REQUEST_STATE_DATA_TRANSFER_TO_CONTROLLER_PENDING])) {
if (rdma_req != STAILQ_FIRST(&rqpair->pending_rdma_read_queue)) {
/* This request needs to wait in line to perform RDMA */
break;
}
@ -1591,14 +1513,16 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
/* We can only have so many WRs outstanding. we have to wait until some finish. */
break;
}
/* We have already verified that this request is the head of the queue. */
STAILQ_REMOVE_HEAD(&rqpair->pending_rdma_read_queue, state_link);
rc = request_transfer_in(&rdma_req->req);
if (!rc) {
spdk_nvmf_rdma_request_set_state(rdma_req,
RDMA_REQUEST_STATE_TRANSFERRING_HOST_TO_CONTROLLER);
rdma_req->state = RDMA_REQUEST_STATE_TRANSFERRING_HOST_TO_CONTROLLER;
} else {
rsp->status.sc = SPDK_NVME_SC_INTERNAL_DEVICE_ERROR;
spdk_nvmf_rdma_request_set_state(rdma_req,
RDMA_REQUEST_STATE_READY_TO_COMPLETE);
rdma_req->state = RDMA_REQUEST_STATE_READY_TO_COMPLETE;
}
break;
case RDMA_REQUEST_STATE_TRANSFERRING_HOST_TO_CONTROLLER:
@ -1610,7 +1534,7 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
case RDMA_REQUEST_STATE_READY_TO_EXECUTE:
spdk_trace_record(TRACE_RDMA_REQUEST_STATE_READY_TO_EXECUTE, 0, 0,
(uintptr_t)rdma_req, (uintptr_t)rqpair->cm_id);
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_EXECUTING);
rdma_req->state = RDMA_REQUEST_STATE_EXECUTING;
spdk_nvmf_request_exec(&rdma_req->req);
break;
case RDMA_REQUEST_STATE_EXECUTING:
@ -1623,17 +1547,17 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
spdk_trace_record(TRACE_RDMA_REQUEST_STATE_EXECUTED, 0, 0,
(uintptr_t)rdma_req, (uintptr_t)rqpair->cm_id);
if (rdma_req->req.xfer == SPDK_NVME_DATA_CONTROLLER_TO_HOST) {
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_DATA_TRANSFER_TO_HOST_PENDING);
STAILQ_INSERT_TAIL(&rqpair->pending_rdma_write_queue, rdma_req, state_link);
rdma_req->state = RDMA_REQUEST_STATE_DATA_TRANSFER_TO_HOST_PENDING;
} else {
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_READY_TO_COMPLETE);
rdma_req->state = RDMA_REQUEST_STATE_READY_TO_COMPLETE;
}
break;
case RDMA_REQUEST_STATE_DATA_TRANSFER_TO_HOST_PENDING:
spdk_trace_record(TRACE_RDMA_REQUEST_STATE_DATA_TRANSFER_TO_HOST_PENDING, 0, 0,
(uintptr_t)rdma_req, (uintptr_t)rqpair->cm_id);
if (rdma_req != TAILQ_FIRST(
&rqpair->state_queue[RDMA_REQUEST_STATE_DATA_TRANSFER_TO_HOST_PENDING])) {
if (rdma_req != STAILQ_FIRST(&rqpair->pending_rdma_write_queue)) {
/* This request needs to wait in line to perform RDMA */
break;
}
@ -1643,10 +1567,14 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
* +1 since each request has an additional wr in the resp. */
break;
}
/* We have already verified that this request is the head of the queue. */
STAILQ_REMOVE_HEAD(&rqpair->pending_rdma_write_queue, state_link);
/* The data transfer will be kicked off from
* RDMA_REQUEST_STATE_READY_TO_COMPLETE state.
*/
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_READY_TO_COMPLETE);
rdma_req->state = RDMA_REQUEST_STATE_READY_TO_COMPLETE;
break;
case RDMA_REQUEST_STATE_READY_TO_COMPLETE:
spdk_trace_record(TRACE_RDMA_REQUEST_STATE_READY_TO_COMPLETE, 0, 0,
@ -1654,12 +1582,10 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
rc = request_transfer_out(&rdma_req->req, &data_posted);
assert(rc == 0); /* No good way to handle this currently */
if (rc) {
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
rdma_req->state = RDMA_REQUEST_STATE_COMPLETED;
} else {
spdk_nvmf_rdma_request_set_state(rdma_req,
data_posted ?
RDMA_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST :
RDMA_REQUEST_STATE_COMPLETING);
rdma_req->state = data_posted ? RDMA_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST :
RDMA_REQUEST_STATE_COMPLETING;
}
break;
case RDMA_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST:
@ -1701,10 +1627,9 @@ spdk_nvmf_rdma_request_process(struct spdk_nvmf_rdma_transport *rtransport,
#define SPDK_NVMF_RDMA_DEFAULT_MAX_QPAIRS_PER_CTRLR 64
#define SPDK_NVMF_RDMA_DEFAULT_IN_CAPSULE_DATA_SIZE 4096
#define SPDK_NVMF_RDMA_DEFAULT_MAX_IO_SIZE 131072
#define SPDK_NVMF_RDMA_MIN_IO_BUFFER_SIZE 4096
#define SPDK_NVMF_RDMA_DEFAULT_NUM_SHARED_BUFFERS 512
#define SPDK_NVMF_RDMA_MIN_IO_BUFFER_SIZE (SPDK_NVMF_RDMA_DEFAULT_MAX_IO_SIZE / SPDK_NVMF_MAX_SGL_ENTRIES)
#define SPDK_NVMF_RDMA_DEFAULT_NUM_SHARED_BUFFERS 4096
#define SPDK_NVMF_RDMA_DEFAULT_BUFFER_CACHE_SIZE 32
#define SPDK_NVMF_RDMA_DEFAULT_IO_BUFFER_SIZE (SPDK_NVMF_RDMA_DEFAULT_MAX_IO_SIZE / SPDK_NVMF_MAX_SGL_ENTRIES)
static void
spdk_nvmf_rdma_opts_init(struct spdk_nvmf_transport_opts *opts)
@ -1713,8 +1638,7 @@ spdk_nvmf_rdma_opts_init(struct spdk_nvmf_transport_opts *opts)
opts->max_qpairs_per_ctrlr = SPDK_NVMF_RDMA_DEFAULT_MAX_QPAIRS_PER_CTRLR;
opts->in_capsule_data_size = SPDK_NVMF_RDMA_DEFAULT_IN_CAPSULE_DATA_SIZE;
opts->max_io_size = SPDK_NVMF_RDMA_DEFAULT_MAX_IO_SIZE;
opts->io_unit_size = spdk_max(SPDK_NVMF_RDMA_DEFAULT_IO_BUFFER_SIZE,
SPDK_NVMF_RDMA_MIN_IO_BUFFER_SIZE);
opts->io_unit_size = SPDK_NVMF_RDMA_MIN_IO_BUFFER_SIZE;
opts->max_aq_depth = SPDK_NVMF_RDMA_DEFAULT_AQ_DEPTH;
opts->num_shared_buffers = SPDK_NVMF_RDMA_DEFAULT_NUM_SHARED_BUFFERS;
opts->buf_cache_size = SPDK_NVMF_RDMA_DEFAULT_BUFFER_CACHE_SIZE;
@ -1733,6 +1657,7 @@ spdk_nvmf_rdma_create(struct spdk_nvmf_transport_opts *opts)
int flag;
uint32_t sge_count;
uint32_t min_shared_buffers;
int max_device_sge = SPDK_NVMF_MAX_SGL_ENTRIES;
rtransport = calloc(1, sizeof(*rtransport));
if (!rtransport) {
@ -1745,11 +1670,6 @@ spdk_nvmf_rdma_create(struct spdk_nvmf_transport_opts *opts)
return NULL;
}
spdk_io_device_register(rtransport, spdk_nvmf_rdma_mgmt_channel_create,
spdk_nvmf_rdma_mgmt_channel_destroy,
sizeof(struct spdk_nvmf_rdma_mgmt_channel),
"rdma_transport");
TAILQ_INIT(&rtransport->devices);
TAILQ_INIT(&rtransport->ports);
@ -1849,6 +1769,8 @@ spdk_nvmf_rdma_create(struct spdk_nvmf_transport_opts *opts)
}
max_device_sge = spdk_min(max_device_sge, device->attr.max_sge);
#ifdef SPDK_CONFIG_RDMA_SEND_WITH_INVAL
if ((device->attr.device_cap_flags & IBV_DEVICE_MEM_MGT_EXTENSIONS) == 0) {
SPDK_WARNLOG("The libibverbs on this system supports SEND_WITH_INVALIDATE,");
@ -1883,6 +1805,18 @@ spdk_nvmf_rdma_create(struct spdk_nvmf_transport_opts *opts)
}
rdma_free_devices(contexts);
if (opts->io_unit_size * max_device_sge < opts->max_io_size) {
/* divide and round up. */
opts->io_unit_size = (opts->max_io_size + max_device_sge - 1) / max_device_sge;
/* round up to the nearest 4k. */
opts->io_unit_size = (opts->io_unit_size + NVMF_DATA_BUFFER_ALIGNMENT - 1) & ~NVMF_DATA_BUFFER_MASK;
opts->io_unit_size = spdk_max(opts->io_unit_size, SPDK_NVMF_RDMA_MIN_IO_BUFFER_SIZE);
SPDK_NOTICELOG("Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size %u\n",
opts->io_unit_size);
}
if (rc < 0) {
spdk_nvmf_rdma_destroy(&rtransport->transport);
return NULL;
@ -1957,7 +1891,6 @@ spdk_nvmf_rdma_destroy(struct spdk_nvmf_transport *transport)
}
spdk_mempool_free(rtransport->data_wr_pool);
spdk_io_device_unregister(rtransport, NULL);
pthread_mutex_destroy(&rtransport->lock);
free(rtransport);
@ -2203,7 +2136,7 @@ spdk_nvmf_rdma_qpair_is_idle(struct spdk_nvmf_qpair *qpair)
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
if (spdk_nvmf_rdma_cur_queue_depth(rqpair) == 0) {
if (rqpair->qd == 0) {
return true;
}
return false;
@ -2213,43 +2146,39 @@ static void
spdk_nvmf_rdma_qpair_process_pending(struct spdk_nvmf_rdma_transport *rtransport,
struct spdk_nvmf_rdma_qpair *rqpair, bool drain)
{
struct spdk_nvmf_rdma_recv *rdma_recv, *recv_tmp;
struct spdk_nvmf_rdma_request *rdma_req, *req_tmp;
/* We process I/O in the data transfer pending queue at the highest priority. RDMA reads first */
TAILQ_FOREACH_SAFE(rdma_req,
&rqpair->state_queue[RDMA_REQUEST_STATE_DATA_TRANSFER_TO_CONTROLLER_PENDING],
state_link, req_tmp) {
STAILQ_FOREACH_SAFE(rdma_req, &rqpair->pending_rdma_read_queue, state_link, req_tmp) {
if (spdk_nvmf_rdma_request_process(rtransport, rdma_req) == false && drain == false) {
break;
}
}
/* Then RDMA writes sincereads have stronger restrictions than writes */
TAILQ_FOREACH_SAFE(rdma_req, &rqpair->state_queue[RDMA_REQUEST_STATE_DATA_TRANSFER_TO_HOST_PENDING],
state_link, req_tmp) {
/* Then RDMA writes since reads have stronger restrictions than writes */
STAILQ_FOREACH_SAFE(rdma_req, &rqpair->pending_rdma_write_queue, state_link, req_tmp) {
if (spdk_nvmf_rdma_request_process(rtransport, rdma_req) == false && drain == false) {
break;
}
}
/* The second highest priority is I/O waiting on memory buffers. */
TAILQ_FOREACH_SAFE(rdma_req, &rqpair->ch->pending_data_buf_queue, link,
TAILQ_FOREACH_SAFE(rdma_req, &rqpair->poller->group->pending_data_buf_queue, link,
req_tmp) {
if (spdk_nvmf_rdma_request_process(rtransport, rdma_req) == false && drain == false) {
break;
}
}
/* The lowest priority is processing newly received commands */
TAILQ_FOREACH_SAFE(rdma_recv, &rqpair->incoming_queue, link, recv_tmp) {
if (TAILQ_EMPTY(&rqpair->state_queue[RDMA_REQUEST_STATE_FREE])) {
break;
}
while (!STAILQ_EMPTY(&rqpair->free_queue) && !STAILQ_EMPTY(&rqpair->incoming_queue)) {
rdma_req = TAILQ_FIRST(&rqpair->state_queue[RDMA_REQUEST_STATE_FREE]);
rdma_req->recv = rdma_recv;
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_NEW);
rdma_req = STAILQ_FIRST(&rqpair->free_queue);
STAILQ_REMOVE_HEAD(&rqpair->free_queue, state_link);
rdma_req->recv = STAILQ_FIRST(&rqpair->incoming_queue);
STAILQ_REMOVE_HEAD(&rqpair->incoming_queue, link);
rqpair->qd++;
rdma_req->state = RDMA_REQUEST_STATE_NEW;
if (spdk_nvmf_rdma_request_process(rtransport, rdma_req) == false) {
break;
}
@ -2260,17 +2189,12 @@ static void
_nvmf_rdma_qpair_disconnect(void *ctx)
{
struct spdk_nvmf_qpair *qpair = ctx;
struct spdk_nvmf_rdma_qpair *rqpair;
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
spdk_nvmf_rdma_qpair_dec_refcnt(rqpair);
spdk_nvmf_qpair_disconnect(qpair, NULL, NULL);
}
static void
_nvmf_rdma_disconnect_retry(void *ctx)
_nvmf_rdma_try_disconnect(void *ctx)
{
struct spdk_nvmf_qpair *qpair = ctx;
struct spdk_nvmf_poll_group *group;
@ -2285,13 +2209,22 @@ _nvmf_rdma_disconnect_retry(void *ctx)
if (group == NULL) {
/* The qpair hasn't been assigned to a group yet, so we can't
* process a disconnect. Send a message to ourself and try again. */
spdk_thread_send_msg(spdk_get_thread(), _nvmf_rdma_disconnect_retry, qpair);
spdk_thread_send_msg(spdk_get_thread(), _nvmf_rdma_try_disconnect, qpair);
return;
}
spdk_thread_send_msg(group->thread, _nvmf_rdma_qpair_disconnect, qpair);
}
static inline void
spdk_nvmf_rdma_start_disconnect(struct spdk_nvmf_rdma_qpair *rqpair)
{
if (__sync_bool_compare_and_swap(&rqpair->disconnect_started, false, true)) {
_nvmf_rdma_try_disconnect(&rqpair->qpair);
}
}
static int
nvmf_rdma_disconnect(struct rdma_cm_event *evt)
{
@ -2314,9 +2247,8 @@ nvmf_rdma_disconnect(struct rdma_cm_event *evt)
spdk_trace_record(TRACE_RDMA_QP_DISCONNECT, 0, 0, (uintptr_t)rqpair->cm_id, 0);
spdk_nvmf_rdma_update_ibv_state(rqpair);
spdk_nvmf_rdma_qpair_inc_refcnt(rqpair);
_nvmf_rdma_disconnect_retry(qpair);
spdk_nvmf_rdma_start_disconnect(rqpair);
return 0;
}
@ -2447,8 +2379,7 @@ spdk_nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
spdk_trace_record(TRACE_RDMA_IBV_ASYNC_EVENT, 0, 0,
(uintptr_t)rqpair->cm_id, event.event_type);
spdk_nvmf_rdma_update_ibv_state(rqpair);
spdk_nvmf_rdma_qpair_inc_refcnt(rqpair);
_nvmf_rdma_disconnect_retry(&rqpair->qpair);
spdk_nvmf_rdma_start_disconnect(rqpair);
break;
case IBV_EVENT_QP_LAST_WQE_REACHED:
/* This event only occurs for shared receive queues, which are not currently supported. */
@ -2464,8 +2395,7 @@ spdk_nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
(uintptr_t)rqpair->cm_id, event.event_type);
state = spdk_nvmf_rdma_update_ibv_state(rqpair);
if (state == IBV_QPS_ERR) {
spdk_nvmf_rdma_qpair_inc_refcnt(rqpair);
_nvmf_rdma_disconnect_retry(&rqpair->qpair);
spdk_nvmf_rdma_start_disconnect(rqpair);
}
break;
case IBV_EVENT_QP_REQ_ERR:
@ -2564,6 +2494,7 @@ spdk_nvmf_rdma_poll_group_create(struct spdk_nvmf_transport *transport)
}
TAILQ_INIT(&rgroup->pollers);
TAILQ_INIT(&rgroup->pending_data_buf_queue);
pthread_mutex_lock(&rtransport->lock);
TAILQ_FOREACH(device, &rtransport->devices, link) {
@ -2632,6 +2563,10 @@ spdk_nvmf_rdma_poll_group_destroy(struct spdk_nvmf_transport_poll_group *group)
free(poller);
}
if (!TAILQ_EMPTY(&rgroup->pending_data_buf_queue)) {
SPDK_ERRLOG("Pending I/O list wasn't empty on poll group destruction\n");
}
free(rgroup);
}
@ -2639,14 +2574,12 @@ static int
spdk_nvmf_rdma_poll_group_add(struct spdk_nvmf_transport_poll_group *group,
struct spdk_nvmf_qpair *qpair)
{
struct spdk_nvmf_rdma_transport *rtransport;
struct spdk_nvmf_rdma_poll_group *rgroup;
struct spdk_nvmf_rdma_qpair *rqpair;
struct spdk_nvmf_rdma_device *device;
struct spdk_nvmf_rdma_poller *poller;
int rc;
rtransport = SPDK_CONTAINEROF(qpair->transport, struct spdk_nvmf_rdma_transport, transport);
rgroup = SPDK_CONTAINEROF(group, struct spdk_nvmf_rdma_poll_group, group);
rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
@ -2672,16 +2605,6 @@ spdk_nvmf_rdma_poll_group_add(struct spdk_nvmf_transport_poll_group *group,
return -1;
}
rqpair->mgmt_channel = spdk_get_io_channel(rtransport);
if (!rqpair->mgmt_channel) {
spdk_nvmf_rdma_event_reject(rqpair->cm_id, SPDK_NVMF_RDMA_ERROR_NO_RESOURCES);
spdk_nvmf_rdma_qpair_destroy(rqpair);
return -1;
}
rqpair->ch = spdk_io_channel_get_ctx(rqpair->mgmt_channel);
assert(rqpair->ch != NULL);
rc = spdk_nvmf_rdma_event_accept(rqpair->cm_id, rqpair);
if (rc) {
/* Try to reject, but we probably can't */
@ -2718,10 +2641,10 @@ spdk_nvmf_rdma_request_complete(struct spdk_nvmf_request *req)
if (rqpair->ibv_attr.qp_state != IBV_QPS_ERR) {
/* The connection is alive, so process the request as normal */
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_EXECUTED);
rdma_req->state = RDMA_REQUEST_STATE_EXECUTED;
} else {
/* The connection is dead. Move the request directly to the completed state. */
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
rdma_req->state = RDMA_REQUEST_STATE_COMPLETED;
}
spdk_nvmf_rdma_request_process(rtransport, rdma_req);
@ -2819,8 +2742,10 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
SPDK_ERRLOG("data=%p length=%u\n", rdma_req->req.data, rdma_req->req.length);
/* We're going to attempt an error recovery, so force the request into
* the completed state. */
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
rdma_req->state = RDMA_REQUEST_STATE_COMPLETED;
rqpair->current_send_depth--;
assert(rdma_req->num_outstanding_data_wr == 0);
spdk_nvmf_rdma_request_process(rtransport, rdma_req);
break;
case RDMA_WR_TYPE_RECV:
@ -2829,7 +2754,7 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
/* Dump this into the incoming queue. This gets cleaned up when
* the queue pair disconnects or recovers. */
TAILQ_INSERT_TAIL(&rqpair->incoming_queue, rdma_recv, link);
STAILQ_INSERT_TAIL(&rqpair->incoming_queue, rdma_recv, link);
rqpair->current_recv_depth++;
/* Don't worry about responding to recv overflow, we are disconnecting anyways */
@ -2843,12 +2768,12 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
SPDK_ERRLOG("data=%p length=%u\n", rdma_req->req.data, rdma_req->req.length);
assert(rdma_req->num_outstanding_data_wr > 0);
rdma_req->num_outstanding_data_wr--;
if (rdma_req->data.wr.opcode == IBV_WR_RDMA_READ) {
assert(rdma_req->num_outstanding_data_wr > 0);
rqpair->current_read_depth--;
rdma_req->num_outstanding_data_wr--;
if (rdma_req->num_outstanding_data_wr == 0) {
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
rdma_req->state = RDMA_REQUEST_STATE_COMPLETED;
}
}
rqpair->current_send_depth--;
@ -2885,7 +2810,7 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
if (rqpair->qpair.state == SPDK_NVMF_QPAIR_ACTIVE) {
/* Disconnect the connection. */
spdk_nvmf_qpair_disconnect(&rqpair->qpair, NULL, NULL);
spdk_nvmf_rdma_start_disconnect(rqpair);
}
continue;
}
@ -2898,12 +2823,13 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
assert(spdk_nvmf_rdma_req_is_completing(rdma_req));
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_COMPLETED);
rdma_req->state = RDMA_REQUEST_STATE_COMPLETED;
rqpair->current_send_depth--;
spdk_nvmf_rdma_request_process(rtransport, rdma_req);
count++;
assert(rdma_req->num_outstanding_data_wr == 0);
/* Try to process other queued requests */
spdk_nvmf_rdma_qpair_process_pending(rtransport, rqpair, false);
break;
@ -2913,6 +2839,7 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
rdma_req = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_request, data.rdma_wr);
rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair, struct spdk_nvmf_rdma_qpair, qpair);
rqpair->current_send_depth--;
rdma_req->num_outstanding_data_wr--;
/* Try to process other queued requests */
spdk_nvmf_rdma_qpair_process_pending(rtransport, rqpair, false);
@ -2930,7 +2857,7 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
rqpair->current_read_depth--;
rdma_req->num_outstanding_data_wr--;
if (rdma_req->num_outstanding_data_wr == 0) {
spdk_nvmf_rdma_request_set_state(rdma_req, RDMA_REQUEST_STATE_READY_TO_EXECUTE);
rdma_req->state = RDMA_REQUEST_STATE_READY_TO_EXECUTE;
spdk_nvmf_rdma_request_process(rtransport, rdma_req);
}
@ -2944,11 +2871,11 @@ spdk_nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
rqpair = rdma_recv->qpair;
/* The qpair should not send more requests than are allowed per qpair. */
if (rqpair->current_recv_depth >= rqpair->max_queue_depth) {
spdk_nvmf_qpair_disconnect(&rqpair->qpair, NULL, NULL);
spdk_nvmf_rdma_start_disconnect(rqpair);
} else {
rqpair->current_recv_depth++;
}
TAILQ_INSERT_TAIL(&rqpair->incoming_queue, rdma_recv, link);
STAILQ_INSERT_TAIL(&rqpair->incoming_queue, rdma_recv, link);
/* Try to process other queued requests */
spdk_nvmf_rdma_qpair_process_pending(rtransport, rqpair, false);
break;


@ -58,11 +58,29 @@ uint32_t g_lcore = 0;
std::string g_bdev_name;
volatile bool g_spdk_ready = false;
volatile bool g_spdk_start_failure = false;
struct sync_args {
void SpdkInitializeThread(void);
class SpdkThreadCtx
{
public:
struct spdk_io_channel *channel;
SpdkThreadCtx(void) : channel(NULL)
{
SpdkInitializeThread();
}
~SpdkThreadCtx(void)
{
}
private:
SpdkThreadCtx(const SpdkThreadCtx &);
SpdkThreadCtx &operator=(const SpdkThreadCtx &);
};
__thread struct sync_args g_sync_args;
thread_local SpdkThreadCtx g_sync_args;
static void
__call_fn(void *arg1, void *arg2)
@ -510,7 +528,6 @@ public:
}
return Status::OK();
}
virtual void StartThread(void (*function)(void *arg), void *arg) override;
virtual Status LockFile(const std::string &fname, FileLock **lock) override
{
std::string name = sanitize_path(fname, mDirectory);
@ -583,35 +600,13 @@ void SpdkInitializeThread(void)
{
struct spdk_thread *thread;
if (g_fs != NULL) {
if (g_fs != NULL && g_sync_args.channel == NULL) {
thread = spdk_thread_create("spdk_rocksdb");
spdk_set_thread(thread);
g_sync_args.channel = spdk_fs_alloc_io_channel_sync(g_fs);
}
}
struct SpdkThreadState {
void (*user_function)(void *);
void *arg;
};
static void SpdkStartThreadWrapper(void *arg)
{
SpdkThreadState *state = reinterpret_cast<SpdkThreadState *>(arg);
SpdkInitializeThread();
state->user_function(state->arg);
delete state;
}
void SpdkEnv::StartThread(void (*function)(void *arg), void *arg)
{
SpdkThreadState *state = new SpdkThreadState;
state->user_function = function;
state->arg = arg;
EnvWrapper::StartThread(SpdkStartThreadWrapper, state);
}
static void
fs_load_cb(__attribute__((unused)) void *ctx,
struct spdk_filesystem *fs, int fserrno)


@ -1085,8 +1085,11 @@ start_device(int vid)
}
for (i = 0; i < vsession->mem->nregions; i++) {
if (vsession->mem->regions[i].size & MASK_2MB) {
SPDK_ERRLOG("vhost device %d: Guest memory size is not a 2MB multiple\n", vid);
uint64_t mmap_size = vsession->mem->regions[i].mmap_size;
if (mmap_size & MASK_2MB) {
SPDK_ERRLOG("vhost device %d: Guest mmaped memory size %" PRIx64
" is not a 2MB multiple\n", vid, mmap_size);
free(vsession->mem);
goto out;
}


@ -35,6 +35,8 @@ include $(SPDK_ROOT_DIR)/mk/spdk.app_vars.mk
LIBS += $(SPDK_LIB_LINKER_ARGS)
CLEAN_FILES = $(APP)
all : $(APP)
@:
@ -44,6 +46,6 @@ $(APP) : $(OBJS) $(SPDK_LIB_FILES) $(ENV_LIBS)
$(LINK_C)
clean :
$(CLEAN_C) $(APP)
$(CLEAN_C) $(CLEAN_FILES)
include $(SPDK_ROOT_DIR)/mk/spdk.deps.mk


@ -2,12 +2,12 @@
%bcond_with doc
Name: spdk
Version: 19.01
Version: 19.01.x
Release: 0%{?dist}
Epoch: 0
URL: http://spdk.io
Source: https://github.com/spdk/spdk/archive/v19.01.tar.gz
Source: https://github.com/spdk/spdk/archive/v19.01.x.tar.gz
Summary: Set of libraries and utilities for high performance user-mode storage
%define package_version %{epoch}:%{version}-%{release}


@ -1,6 +1,35 @@
# Common shell utility functions
function iter_pci_class_code() {
# Check if PCI device is on PCI_WHITELIST and not on PCI_BLACKLIST
# Env:
# if PCI_WHITELIST is empty assume device is whitelisted
# if PCI_BLACKLIST is empty assume device is NOT blacklisted
# Params:
# $1 - PCI BDF
function pci_can_use() {
local i
# The '\ ' part is important
if [[ " $PCI_BLACKLIST " =~ \ $1\ ]] ; then
return 1
fi
if [[ -z "$PCI_WHITELIST" ]]; then
#no whitelist specified, bind all devices
return 0
fi
for i in $PCI_WHITELIST; do
if [ "$i" == "$1" ] ; then
return 0
fi
done
return 1
}
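A minimal sketch of how the new helper behaves, assuming the functions above live in scripts/common.sh and using hypothetical BDF addresses; the blacklist check runs first, so PCI_BLACKLIST wins when a device appears in both lists:
source scripts/common.sh
# 0000:02:00.0 is skipped even though it is also whitelisted,
# because PCI_BLACKLIST takes precedence.
PCI_WHITELIST="0000:01:00.0 0000:02:00.0"
PCI_BLACKLIST="0000:02:00.0"
pci_can_use 0000:01:00.0 && echo "0000:01:00.0 can be bound"
pci_can_use 0000:02:00.0 || echo "0000:02:00.0 is skipped"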
# This function will ignore PCI_WHITELIST and PCI_BLACKLIST
function iter_all_pci_class_code() {
local class="$(printf %02x $((0x$1)))"
local subclass="$(printf %02x $((0x$2)))"
local progif="$(printf %02x $((0x$3)))"
@ -17,7 +46,25 @@ function iter_pci_class_code() {
'{if (cc ~ $2) print $1}' | tr -d '"'
fi
elif hash pciconf &>/dev/null; then
addr=($(pciconf -l | grep -i "class=0x${class}${subclass}${progif}" | \
local addr=($(pciconf -l | grep -i "class=0x${class}${subclass}${progif}" | \
cut -d$'\t' -f1 | sed -e 's/^[a-zA-Z0-9_]*@pci//g' | tr ':' ' '))
printf "%04x:%02x:%02x:%x\n" ${addr[0]} ${addr[1]} ${addr[2]} ${addr[3]}
else
echo "Missing PCI enumeration utility"
exit 1
fi
}
# This function will ignore PCI_WHITELIST and PCI_BLACKLIST
function iter_all_pci_dev_id() {
local ven_id="$(printf %04x $((0x$1)))"
local dev_id="$(printf %04x $((0x$2)))"
if hash lspci &>/dev/null; then
lspci -mm -n -D | awk -v ven="\"$ven_id\"" -v dev="\"${dev_id}\"" -F " " \
'{if (ven ~ $3 && dev ~ $4) print $1}' | tr -d '"'
elif hash pciconf &>/dev/null; then
local addr=($(pciconf -l | grep -i "chip=0x${dev_id}${ven_id}" | \
cut -d$'\t' -f1 | sed -e 's/^[a-zA-Z0-9_]*@pci//g' | tr ':' ' '))
printf "%04x:%02x:%02x:%x\n" ${addr[0]} ${addr[1]} ${addr[2]} ${addr[3]}
else
@ -27,18 +74,23 @@ function iter_pci_class_code() {
}
function iter_pci_dev_id() {
local ven_id="$(printf %04x $((0x$1)))"
local dev_id="$(printf %04x $((0x$2)))"
local bdf=""
if hash lspci &>/dev/null; then
lspci -mm -n -D | awk -v ven="\"$ven_id\"" -v dev="\"${dev_id}\"" -F " " \
'{if (ven ~ $3 && dev ~ $4) print $1}' | tr -d '"'
elif hash pciconf &>/dev/null; then
addr=($(pciconf -l | grep -i "chip=0x${dev_id}${ven_id}" | \
cut -d$'\t' -f1 | sed -e 's/^[a-zA-Z0-9_]*@pci//g' | tr ':' ' '))
printf "%04x:%02x:%02x:%x\n" ${addr[0]} ${addr[1]} ${addr[2]} ${addr[3]}
else
echo "Missing PCI enumeration utility"
exit 1
fi
for bdf in $(iter_all_pci_dev_id "$@"); do
if pci_can_use "$bdf"; then
echo "$bdf"
fi
done
}
# This function will filter out PCI devices using PCI_WHITELIST and PCI_BLACKLIST
# See function pci_can_use()
function iter_pci_class_code() {
local bdf=""
for bdf in $(iter_all_pci_class_code "$@"); do
if pci_can_use "$bdf"; then
echo "$bdf"
fi
done
}
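For illustration, the split between the unfiltered and filtered iterators looks like this (01 08 02 is the NVMe class/subclass/prog-if triple used later in setup.sh; output depends on the machine):
# every NVMe controller in the system, ignoring PCI_WHITELIST/PCI_BLACKLIST
iter_all_pci_class_code 01 08 02
# only the controllers that pci_can_use accepts
iter_pci_class_code 01 08 02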


@ -90,7 +90,7 @@ elif [ -f /etc/debian_version ]; then
"Note: Some SPDK CLI dependencies could not be installed."
# Additional dependencies for ISA-L used in compression
apt-get install -y autoconf automake libtool
elif [ -f /etc/SuSE-release ]; then
elif [ -f /etc/SuSE-release ] || [ -f /etc/SUSE-brand ]; then
zypper install -y gcc gcc-c++ make cunit-devel libaio-devel libopenssl-devel \
git-core lcov python-base python-pep8 libuuid-devel sg3_utils pciutils
# Additional (optional) dependencies for showing backtrace in logs


@ -1063,12 +1063,15 @@ Format: 'user:u1 secret:s1 muser:mu1 msecret:ms1,user:u2 secret:s2 muser:mu2 mse
print(rpc.lvol.construct_lvol_store(args.client,
bdev_name=args.bdev_name,
lvs_name=args.lvs_name,
cluster_sz=args.cluster_sz))
cluster_sz=args.cluster_sz,
clear_method=args.clear_method))
p = subparsers.add_parser('construct_lvol_store', help='Add logical volume store on base bdev')
p.add_argument('bdev_name', help='base bdev name')
p.add_argument('lvs_name', help='name for lvol store')
p.add_argument('-c', '--cluster-sz', help='size of cluster (in bytes)', type=int, required=False)
p.add_argument('--clear-method', help="""Change clear method for data region.
Available: none, unmap, write_zeroes""", required=False)
p.set_defaults(func=construct_lvol_store)
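With the option wired into the parser, the clear method can be picked from the command line. A hedged example, assuming the standard scripts/rpc.py entry point, a running SPDK target, and a hypothetical Malloc0 bdev:
# create a logical volume store on Malloc0 with a 4 MiB cluster size,
# unmapping the data region instead of leaving it untouched
scripts/rpc.py construct_lvol_store Malloc0 lvs0 -c 4194304 --clear-method unmap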
def rename_lvol_store(args):


@ -1,10 +1,11 @@
def construct_lvol_store(client, bdev_name, lvs_name, cluster_sz=None):
def construct_lvol_store(client, bdev_name, lvs_name, cluster_sz=None, clear_method=None):
"""Construct a logical volume store.
Args:
bdev_name: bdev on which to construct logical volume store
lvs_name: name of the logical volume store to create
cluster_sz: cluster size of the logical volume store in bytes (optional)
clear_method: Change clear method for data region. Available: none, unmap, write_zeroes (optional)
Returns:
UUID of created logical volume store.
@ -12,6 +13,8 @@ def construct_lvol_store(client, bdev_name, lvs_name, cluster_sz=None):
params = {'bdev_name': bdev_name, 'lvs_name': lvs_name}
if cluster_sz:
params['cluster_sz'] = cluster_sz
if clear_method:
params['clear_method'] = clear_method
return client.call('construct_lvol_store', params)


@ -41,12 +41,16 @@ function usage()
echo "HUGENODE Specific NUMA node to allocate hugepages on. To allocate"
echo " hugepages on multiple nodes run this script multiple times -"
echo " once for each node."
echo "PCI_WHITELIST Whitespace separated list of PCI devices (NVMe, I/OAT, Virtio) to bind."
echo "PCI_WHITELIST"
echo "PCI_BLACKLIST Whitespace separated list of PCI devices (NVMe, I/OAT, Virtio)."
echo " Each device must be specified as a full PCI address."
echo " E.g. PCI_WHITELIST=\"0000:01:00.0 0000:02:00.0\""
echo " To blacklist all PCI devices use a non-valid address."
echo " E.g. PCI_WHITELIST=\"none\""
echo " If empty or unset, all PCI devices will be bound."
echo " If PCI_WHITELIST and PCI_BLACKLIST are empty or unset, all PCI devices"
echo " will be bound."
echo " Each device in PCI_BLACKLIST will be ignored (driver won't be changed)."
echo " PCI_BLACKLIST has precedence over PCI_WHITELIST."
echo "TARGET_USER User that will own hugepage mountpoint directory and vfio groups."
echo " By default the current user will be used."
echo "DRIVER_OVERRIDE Disable automatic vfio-pci/uio_pci_generic selection and forcefully"
@ -56,35 +60,29 @@ function usage()
}
# In monolithic kernels the lsmod won't work. So
# back that with a /sys/modules check. Return a different code for
# built-in vs module just in case we want that down the road.
# back that with a /sys/modules check. We also check
# /sys/bus/pci/drivers/ as neither lsmod nor /sys/modules might
# contain needed info (like in Fedora-like OS).
function check_for_driver {
$(lsmod | grep $1 > /dev/null)
if [ $? -eq 0 ]; then
if lsmod | grep -q ${1//-/_}; then
return 1
else
if [[ -d /sys/module/$1 ]]; then
return 2
else
return 0
fi
fi
if [[ -d /sys/module/${1} || \
-d /sys/module/${1//-/_} || \
-d /sys/bus/pci/drivers/${1} || \
-d /sys/bus/pci/drivers/${1//-/_} ]]; then
return 2
fi
return 0
}
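The return codes keep their old meaning: 0 means the driver is not present, 1 means lsmod found it loaded as a module, and 2 means it was found under /sys (built in, or only visible through /sys/bus/pci/drivers). A hypothetical caller could branch on them like this:
check_for_driver vfio-pci   # dashes are normalized to underscores for the lsmod check
case $? in
0) echo "vfio-pci not loaded" ;;
1) echo "vfio-pci loaded as a module" ;;
2) echo "vfio-pci built in or registered under /sys" ;;
esac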
function pci_can_bind() {
if [[ ${#PCI_WHITELIST[@]} == 0 ]]; then
#no whitelist specified, bind all devices
return 1
fi
for i in ${PCI_WHITELIST[@]}
do
if [ "$i" == "$1" ] ; then
return 1
fi
done
return 0
function pci_dev_echo() {
local bdf="$1"
local vendor="$(cat /sys/bus/pci/devices/$bdf/vendor)"
local device="$(cat /sys/bus/pci/devices/$bdf/device)"
shift
echo "$bdf (${vendor#0x} ${device#0x}): $@"
}
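pci_dev_echo prefixes every message with the BDF plus the vendor and device IDs read from sysfs, so callers only pass the message text. A hypothetical invocation (the 8086 0953 IDs are illustrative):
pci_dev_echo 0000:01:00.0 "Skipping un-whitelisted NVMe controller"
# prints something like:
# 0000:01:00.0 (8086 0953): Skipping un-whitelisted NVMe controller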
function linux_bind_driver() {
@ -97,6 +95,7 @@ function linux_bind_driver() {
old_driver_name=$(basename $(readlink /sys/bus/pci/devices/$bdf/driver))
if [ "$driver_name" = "$old_driver_name" ]; then
pci_dev_echo "$bdf" "Already using the $old_driver_name driver"
return 0
fi
@ -104,7 +103,7 @@ function linux_bind_driver() {
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
fi
echo "$bdf ($ven_dev_id): $old_driver_name -> $driver_name"
pci_dev_echo "$bdf" "$old_driver_name -> $driver_name"
echo "$ven_dev_id" > "/sys/bus/pci/drivers/$driver_name/new_id" 2> /dev/null || true
echo "$bdf" > "/sys/bus/pci/drivers/$driver_name/bind" 2> /dev/null || true
@ -179,11 +178,11 @@ function configure_linux_pci {
# NVMe
modprobe $driver_name
for bdf in $(iter_pci_class_code 01 08 02); do
for bdf in $(iter_all_pci_class_code 01 08 02); do
blkname=''
get_nvme_name_from_bdf "$bdf" blkname
if pci_can_bind $bdf == "0" ; then
echo "Skipping un-whitelisted NVMe controller $blkname ($bdf)"
if ! pci_can_use $bdf; then
pci_dev_echo "$bdf" "Skipping un-whitelisted NVMe controller $blkname"
continue
fi
if [ "$blkname" != "" ]; then
@ -194,7 +193,7 @@ function configure_linux_pci {
if [ "$mountpoints" = "0" ]; then
linux_bind_driver "$bdf" "$driver_name"
else
echo Active mountpoints on /dev/$blkname, so not binding PCI dev $bdf
pci_dev_echo "$bdf" "Active mountpoints on /dev/$blkname, so not binding PCI dev"
fi
done
@ -205,9 +204,9 @@ function configure_linux_pci {
| awk -F"x" '{print $2}' > $TMP
for dev_id in `cat $TMP`; do
for bdf in $(iter_pci_dev_id 8086 $dev_id); do
if pci_can_bind $bdf == "0" ; then
echo "Skipping un-whitelisted I/OAT device at $bdf"
for bdf in $(iter_all_pci_dev_id 8086 $dev_id); do
if ! pci_can_use $bdf; then
pci_dev_echo "$bdf" "Skipping un-whitelisted I/OAT device"
continue
fi
@ -223,16 +222,16 @@ function configure_linux_pci {
| awk -F"x" '{print $2}' > $TMP
for dev_id in `cat $TMP`; do
for bdf in $(iter_pci_dev_id 1af4 $dev_id); do
if pci_can_bind $bdf == "0" ; then
echo "Skipping un-whitelisted Virtio device at $bdf"
for bdf in $(iter_all_pci_dev_id 1af4 $dev_id); do
if ! pci_can_use $bdf; then
pci_dev_echo "$bdf" "Skipping un-whitelisted Virtio device at $bdf"
continue
fi
blknames=''
get_virtio_names_from_bdf "$bdf" blknames
for blkname in $blknames; do
if [ "$(lsblk /dev/$blkname --output MOUNTPOINT -n | wc -w)" != "0" ]; then
echo Active mountpoints on /dev/$blkname, so not binding PCI dev $bdf
pci_dev_echo "$bdf" "Active mountpoints on /dev/$blkname, so not binding"
continue 2
fi
done
@ -361,9 +360,9 @@ function reset_linux_pci {
check_for_driver nvme
driver_loaded=$?
set -e
for bdf in $(iter_pci_class_code 01 08 02); do
if pci_can_bind $bdf == "0" ; then
echo "Skipping un-whitelisted NVMe controller $blkname ($bdf)"
for bdf in $(iter_all_pci_class_code 01 08 02); do
if ! pci_can_use $bdf; then
pci_dev_echo "$bdf" "Skipping un-whitelisted NVMe controller $blkname"
continue
fi
if [ $driver_loaded -ne 0 ]; then
@ -384,9 +383,9 @@ function reset_linux_pci {
driver_loaded=$?
set -e
for dev_id in `cat $TMP`; do
for bdf in $(iter_pci_dev_id 8086 $dev_id); do
if pci_can_bind $bdf == "0" ; then
echo "Skipping un-whitelisted I/OAT device at $bdf"
for bdf in $(iter_all_pci_dev_id 8086 $dev_id); do
if ! pci_can_use $bdf; then
pci_dev_echo "$bdf" "Skipping un-whitelisted I/OAT device"
continue
fi
if [ $driver_loaded -ne 0 ]; then
@ -410,9 +409,9 @@ function reset_linux_pci {
# underscore vs. dash right in the virtio_scsi name.
modprobe virtio-pci || true
for dev_id in `cat $TMP`; do
for bdf in $(iter_pci_dev_id 1af4 $dev_id); do
if pci_can_bind $bdf == "0" ; then
echo "Skipping un-whitelisted Virtio device at $bdf"
for bdf in $(iter_all_pci_dev_id 1af4 $dev_id); do
if ! pci_can_use $bdf; then
pci_dev_echo "$bdf" "Skipping un-whitelisted Virtio device at"
continue
fi
linux_bind_driver "$bdf" virtio-pci
@ -461,47 +460,56 @@ function status_linux {
printf "%-6s %10s %8s / %6s\n" $node $huge_size $free_pages $all_pages
fi
echo ""
echo "NVMe devices"
echo -e "BDF\t\tNuma Node\tDriver name\t\tDevice name"
for bdf in $(iter_pci_class_code 01 08 02); do
driver=`grep DRIVER /sys/bus/pci/devices/$bdf/uevent |awk -F"=" '{print $2}'`
node=`cat /sys/bus/pci/devices/$bdf/numa_node`;
echo -e "BDF\t\tVendor\tDevice\tNUMA\tDriver\t\tDevice name"
for bdf in $(iter_all_pci_class_code 01 08 02); do
driver=$(grep DRIVER /sys/bus/pci/devices/$bdf/uevent |awk -F"=" '{print $2}')
node=$(cat /sys/bus/pci/devices/$bdf/numa_node)
device=$(cat /sys/bus/pci/devices/$bdf/device)
vendor=$(cat /sys/bus/pci/devices/$bdf/vendor)
if [ "$driver" = "nvme" -a -d /sys/bus/pci/devices/$bdf/nvme ]; then
name="\t"`ls /sys/bus/pci/devices/$bdf/nvme`;
else
name="-";
fi
echo -e "$bdf\t$node\t\t$driver\t\t$name";
echo -e "$bdf\t${vendor#0x}\t${device#0x}\t$node\t$driver\t\t$name";
done
echo ""
echo "I/OAT DMA"
#collect all the device_id info of ioat devices.
TMP=`grep "PCI_DEVICE_ID_INTEL_IOAT" $rootdir/include/spdk/pci_ids.h \
| awk -F"x" '{print $2}'`
echo -e "BDF\t\tNuma Node\tDriver Name"
echo -e "BDF\t\tVendor\tDevice\tNUMA\tDriver"
for dev_id in $TMP; do
for bdf in $(iter_pci_dev_id 8086 $dev_id); do
driver=`grep DRIVER /sys/bus/pci/devices/$bdf/uevent |awk -F"=" '{print $2}'`
node=`cat /sys/bus/pci/devices/$bdf/numa_node`;
echo -e "$bdf\t$node\t\t$driver"
for bdf in $(iter_all_pci_dev_id 8086 $dev_id); do
driver=$(grep DRIVER /sys/bus/pci/devices/$bdf/uevent |awk -F"=" '{print $2}')
node=$(cat /sys/bus/pci/devices/$bdf/numa_node)
device=$(cat /sys/bus/pci/devices/$bdf/device)
vendor=$(cat /sys/bus/pci/devices/$bdf/vendor)
echo -e "$bdf\t${vendor#0x}\t${device#0x}\t$node\t$driver"
done
done
echo ""
echo "virtio"
#collect all the device_id info of virtio devices.
TMP=`grep "PCI_DEVICE_ID_VIRTIO" $rootdir/include/spdk/pci_ids.h \
| awk -F"x" '{print $2}'`
echo -e "BDF\t\tNuma Node\tDriver Name\t\tDevice Name"
echo -e "BDF\t\tVendor\tDevice\tNUMA\tDriver\t\tDevice name"
for dev_id in $TMP; do
for bdf in $(iter_pci_dev_id 1af4 $dev_id); do
driver=`grep DRIVER /sys/bus/pci/devices/$bdf/uevent |awk -F"=" '{print $2}'`
node=`cat /sys/bus/pci/devices/$bdf/numa_node`;
for bdf in $(iter_all_pci_dev_id 1af4 $dev_id); do
driver=$(grep DRIVER /sys/bus/pci/devices/$bdf/uevent |awk -F"=" '{print $2}')
node=$(cat /sys/bus/pci/devices/$bdf/numa_node)
device=$(cat /sys/bus/pci/devices/$bdf/device)
vendor=$(cat /sys/bus/pci/devices/$bdf/vendor)
blknames=''
get_virtio_names_from_bdf "$bdf" blknames
echo -e "$bdf\t$node\t\t$driver\t\t$blknames"
echo -e "$bdf\t${vendor#0x}\t${device#0x}\t$node\t\t$driver\t\t$blknames"
done
done
}
@ -559,6 +567,7 @@ fi
: ${HUGEMEM:=2048}
: ${PCI_WHITELIST:=""}
: ${PCI_BLACKLIST:=""}
if [ -n "$NVME_WHITELIST" ]; then
PCI_WHITELIST="$PCI_WHITELIST $NVME_WHITELIST"
@ -568,8 +577,6 @@ if [ -n "$SKIP_PCI" ]; then
PCI_WHITELIST="none"
fi
declare -a PCI_WHITELIST=(${PCI_WHITELIST})
if [ -z "$TARGET_USER" ]; then
TARGET_USER="$SUDO_USER"
if [ -z "$TARGET_USER" ]; then


@ -3,6 +3,7 @@ import sys
import argparse
import configshell_fb
from os import getuid
from rpc.client import JSONRPCException
from configshell_fb import ConfigShell, shell, ExecutionError
from spdkcli import UIRoot
from pyparsing import (alphanums, Optional, Suppress, Word, Regex,
@ -31,6 +32,7 @@ def main():
:return:
"""
spdk_shell = ConfigShell("~/.scripts")
spdk_shell.interactive = True
add_quotes_to_shell(spdk_shell)
parser = argparse.ArgumentParser(description="SPDK command line interface")
@ -50,6 +52,7 @@ def main():
if len(args.commands) > 0:
try:
spdk_shell.interactive = False
spdk_shell.run_cmdline(" ".join(args.commands))
except Exception as e:
sys.stderr.write("%s\n" % e)
@ -61,7 +64,7 @@ def main():
while not spdk_shell._exit:
try:
spdk_shell.run_interactive()
except ExecutionError as e:
except (JSONRPCException, ExecutionError) as e:
spdk_shell.log.error("%s" % e)


@ -21,6 +21,9 @@ class UINode(ConfigNode):
for child in self.children:
child.refresh()
def refresh_node(self):
self.refresh()
def ui_command_refresh(self):
self.refresh()
@ -34,12 +37,23 @@ class UINode(ConfigNode):
try:
result = ConfigNode.execute_command(self, command,
pparams, kparams)
except Exception as msg:
self.shell.log.error(str(msg))
pass
except Exception as e:
raise e
else:
self.shell.log.debug("Command %s succeeded." % command)
return result
finally:
if self.shell.interactive and\
command in ["create", "delete", "delete_all", "add_initiator",
"allow_any_host", "split_bdev", "add_lun",
"add_pg_ig_maps", "remove_target", "add_secret",
"destruct_split_bdev", "delete_pmem_pool",
"create_pmem_pool", "delete_secret_all",
"delete_initiator", "set_auth", "delete_secret",
"delete_pg_ig_maps", "load_config",
"load_subsystem_config"]:
self.get_root().refresh()
self.refresh_node()
class UIBdevs(UINode):
@ -90,14 +104,7 @@ class UILvolStores(UINode):
"""
cluster_size = self.ui_eval_param(cluster_size, "number", None)
try:
self.get_root().create_lvol_store(lvs_name=name, bdev_name=bdev_name, cluster_sz=cluster_size)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
self.get_root().create_lvol_store(lvs_name=name, bdev_name=bdev_name, cluster_sz=cluster_size)
def ui_command_delete(self, name=None, uuid=None):
"""
@ -109,14 +116,16 @@ class UILvolStores(UINode):
uuid - UUID number of the logical volume store to be deleted.
"""
self.delete(name, uuid)
self.get_root().refresh()
self.refresh()
def ui_command_delete_all(self):
rpc_messages = ""
for lvs in self._children:
self.delete(None, lvs.lvs.uuid)
self.get_root().refresh()
self.refresh()
try:
self.delete(None, lvs.lvs.uuid)
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def summary(self):
return "Lvol stores: %s" % len(self.children), None
@ -133,18 +142,19 @@ class UIBdev(UINode):
UIBdevObj(bdev, self)
def ui_command_get_bdev_iostat(self, name=None):
try:
ret = self.get_root().get_bdevs_iostat(name=name)
self.shell.log.info(json.dumps(ret, indent=2))
except JSONRPCException as e:
self.shell.log.error(e.message)
ret = self.get_root().get_bdevs_iostat(name=name)
self.shell.log.info(json.dumps(ret, indent=2))
def ui_command_delete_all(self):
"""Delete all bdevs from this tree node."""
rpc_messages = ""
for bdev in self._children:
self.delete(bdev.name)
self.get_root().refresh()
self.refresh()
try:
self.delete(bdev.name)
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def summary(self):
return "Bdevs: %d" % len(self.children), None
@ -155,10 +165,7 @@ class UIMallocBdev(UIBdev):
UIBdev.__init__(self, "malloc", parent)
def delete(self, name):
try:
self.get_root().delete_malloc_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_malloc_bdev(name=name)
def ui_command_create(self, size, block_size, name=None, uuid=None):
"""
@ -175,17 +182,10 @@ class UIMallocBdev(UIBdev):
size = self.ui_eval_param(size, "number", None)
block_size = self.ui_eval_param(block_size, "number", None)
try:
ret_name = self.get_root().create_malloc_bdev(num_blocks=size * 1024 * 1024 // block_size,
block_size=block_size,
name=name, uuid=uuid)
self.shell.log.info(ret_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
ret_name = self.get_root().create_malloc_bdev(num_blocks=size * 1024 * 1024 // block_size,
block_size=block_size,
name=name, uuid=uuid)
self.shell.log.info(ret_name)
def ui_command_delete(self, name):
"""
@ -195,8 +195,6 @@ class UIMallocBdev(UIBdev):
name - Is a unique identifier of the malloc bdev to be deleted - UUID number or name alias.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UIAIOBdev(UIBdev):
@ -204,10 +202,7 @@ class UIAIOBdev(UIBdev):
UIBdev.__init__(self, "aio", parent)
def delete(self, name):
try:
self.get_root().delete_aio_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_aio_bdev(name=name)
def ui_command_create(self, name, filename, block_size):
"""
@ -222,17 +217,10 @@ class UIAIOBdev(UIBdev):
"""
block_size = self.ui_eval_param(block_size, "number", None)
try:
ret_name = self.get_root().create_aio_bdev(name=name,
block_size=int(block_size),
filename=filename)
self.shell.log.info(ret_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
ret_name = self.get_root().create_aio_bdev(name=name,
block_size=int(block_size),
filename=filename)
self.shell.log.info(ret_name)
def ui_command_delete(self, name):
"""
@ -242,8 +230,6 @@ class UIAIOBdev(UIBdev):
name - Is a unique identifier of the aio bdev to be deleted - UUID number or name alias.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UILvolBdev(UIBdev):
@ -251,10 +237,7 @@ class UILvolBdev(UIBdev):
UIBdev.__init__(self, "logical_volume", parent)
def delete(self, name):
try:
self.get_root().destroy_lvol_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().destroy_lvol_bdev(name=name)
def ui_command_create(self, name, size, lvs, thin_provision=None):
"""
@ -280,16 +263,10 @@ class UILvolBdev(UIBdev):
size *= (1024 * 1024)
thin_provision = self.ui_eval_param(thin_provision, "bool", False)
try:
ret_uuid = self.get_root().create_lvol_bdev(lvol_name=name, size=size,
lvs_name=lvs_name, uuid=uuid,
thin_provision=thin_provision)
self.shell.log.info(ret_uuid)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
ret_uuid = self.get_root().create_lvol_bdev(lvol_name=name, size=size,
lvs_name=lvs_name, uuid=uuid,
thin_provision=thin_provision)
self.shell.log.info(ret_uuid)
def ui_command_delete(self, name):
"""
@ -299,8 +276,6 @@ class UILvolBdev(UIBdev):
name - Is a unique identifier of the lvol bdev to be deleted - UUID number or name alias.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UINvmeBdev(UIBdev):
@ -308,37 +283,30 @@ class UINvmeBdev(UIBdev):
UIBdev.__init__(self, "nvme", parent)
def delete(self, name):
try:
self.get_root().delete_nvme_controller(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_nvme_controller(name=name)
def ui_command_create(self, name, trtype, traddr,
adrfam=None, trsvcid=None, subnqn=None):
if "rdma" in trtype and None in [adrfam, trsvcid, subnqn]:
self.shell.log.error("Using RDMA transport type."
"Please provide arguments for adrfam, trsvcid and subnqn.")
try:
ret_name = self.get_root().create_nvme_bdev(name=name, trtype=trtype,
traddr=traddr, adrfam=adrfam,
trsvcid=trsvcid, subnqn=subnqn)
self.shell.log.info(ret_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
ret_name = self.get_root().create_nvme_bdev(name=name, trtype=trtype,
traddr=traddr, adrfam=adrfam,
trsvcid=trsvcid, subnqn=subnqn)
self.shell.log.info(ret_name)
def ui_command_delete_all(self):
rpc_messages = ""
ctrlrs = [x.name for x in self._children]
ctrlrs = [x.rsplit("n", 1)[0] for x in ctrlrs]
ctrlrs = set(ctrlrs)
for ctrlr in ctrlrs:
self.delete(ctrlr)
self.get_root().refresh()
self.refresh()
try:
self.delete(ctrlr)
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def ui_command_delete(self, name):
"""
@ -348,8 +316,6 @@ class UINvmeBdev(UIBdev):
name - Is a unique identifier of the NVMe controller to be deleted.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UINullBdev(UIBdev):
@ -357,10 +323,7 @@ class UINullBdev(UIBdev):
UIBdev.__init__(self, "null", parent)
def delete(self, name):
try:
self.get_root().delete_null_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_null_bdev(name=name)
def ui_command_create(self, name, size, block_size, uuid=None):
"""
@ -377,17 +340,10 @@ class UINullBdev(UIBdev):
size = self.ui_eval_param(size, "number", None)
block_size = self.ui_eval_param(block_size, "number", None)
num_blocks = size * 1024 * 1024 // block_size
try:
ret_name = self.get_root().create_null_bdev(num_blocks=num_blocks,
block_size=block_size,
name=name, uuid=uuid)
self.shell.log.info(ret_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
ret_name = self.get_root().create_null_bdev(num_blocks=num_blocks,
block_size=block_size,
name=name, uuid=uuid)
self.shell.log.info(ret_name)
def ui_command_delete(self, name):
"""
@ -397,8 +353,6 @@ class UINullBdev(UIBdev):
name - Is a unique identifier of the null bdev to be deleted - UUID number or name alias.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UIErrorBdev(UIBdev):
@ -406,10 +360,7 @@ class UIErrorBdev(UIBdev):
UIBdev.__init__(self, "error", parent)
def delete(self, name):
try:
self.get_root().delete_error_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_error_bdev(name=name)
def ui_command_create(self, base_name):
"""
@ -419,13 +370,7 @@ class UIErrorBdev(UIBdev):
base_name - base bdev name on top of which error bdev will be created.
"""
try:
self.get_root().create_error_bdev(base_name=base_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
self.get_root().create_error_bdev(base_name=base_name)
def ui_command_delete(self, name):
"""
@ -435,8 +380,6 @@ class UIErrorBdev(UIBdev):
name - Is a unique identifier of the error bdev to be deleted - UUID number or name alias.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UISplitBdev(UIBdev):
@ -459,16 +402,10 @@ class UISplitBdev(UIBdev):
split_count = self.ui_eval_param(split_count, "number", None)
split_size_mb = self.ui_eval_param(split_size_mb, "number", None)
try:
ret_name = self.get_root().split_bdev(base_bdev=base_bdev,
split_count=split_count,
split_size_mb=split_size_mb)
self.shell.log.info(ret_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.parent.refresh()
self.refresh()
ret_name = self.get_root().split_bdev(base_bdev=base_bdev,
split_count=split_count,
split_size_mb=split_size_mb)
self.shell.log.info(ret_name)
def ui_command_destruct_split_bdev(self, base_bdev):
"""Destroy split block devices associated with base bdev.
@ -477,13 +414,7 @@ class UISplitBdev(UIBdev):
base_bdev: name of previously split bdev
"""
try:
self.get_root().destruct_split_bdev(base_bdev=base_bdev)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.parent.refresh()
self.refresh()
self.get_root().destruct_split_bdev(base_bdev=base_bdev)
class UIPmemBdev(UIBdev):
@ -491,46 +422,28 @@ class UIPmemBdev(UIBdev):
UIBdev.__init__(self, "pmemblk", parent)
def delete(self, name):
try:
self.get_root().delete_pmem_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_pmem_bdev(name=name)
def ui_command_create_pmem_pool(self, pmem_file, total_size, block_size):
total_size = self.ui_eval_param(total_size, "number", None)
block_size = self.ui_eval_param(block_size, "number", None)
num_blocks = int((total_size * 1024 * 1024) / block_size)
try:
self.get_root().create_pmem_pool(pmem_file=pmem_file,
num_blocks=num_blocks,
block_size=block_size)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().create_pmem_pool(pmem_file=pmem_file,
num_blocks=num_blocks,
block_size=block_size)
def ui_command_delete_pmem_pool(self, pmem_file):
try:
self.get_root().delete_pmem_pool(pmem_file=pmem_file)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_pmem_pool(pmem_file=pmem_file)
def ui_command_info_pmem_pool(self, pmem_file):
try:
ret = self.get_root().delete_pmem_pool(pmem_file=pmem_file)
self.shell.log.info(ret)
except JSONRPCException as e:
self.shell.log.error(e.message)
ret = self.get_root().delete_pmem_pool(pmem_file=pmem_file)
self.shell.log.info(ret)
def ui_command_create(self, pmem_file, name):
try:
ret_name = self.get_root().create_pmem_bdev(pmem_file=pmem_file,
name=name)
self.shell.log.info(ret_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
ret_name = self.get_root().create_pmem_bdev(pmem_file=pmem_file,
name=name)
self.shell.log.info(ret_name)
def ui_command_delete(self, name):
"""
@ -540,8 +453,6 @@ class UIPmemBdev(UIBdev):
name - Is a unique identifier of the pmem bdev to be deleted - UUID number or name alias.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UIRbdBdev(UIBdev):
@ -549,25 +460,16 @@ class UIRbdBdev(UIBdev):
UIBdev.__init__(self, "rbd", parent)
def delete(self, name):
try:
self.get_root().delete_rbd_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_rbd_bdev(name=name)
def ui_command_create(self, pool_name, rbd_name, block_size, name=None):
block_size = self.ui_eval_param(block_size, "number", None)
try:
ret_name = self.get_root().create_rbd_bdev(pool_name=pool_name,
rbd_name=rbd_name,
block_size=block_size,
name=name)
self.shell.log.info(ret_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
ret_name = self.get_root().create_rbd_bdev(pool_name=pool_name,
rbd_name=rbd_name,
block_size=block_size,
name=name)
self.shell.log.info(ret_name)
def ui_command_delete(self, name):
"""
@ -577,8 +479,6 @@ class UIRbdBdev(UIBdev):
name - Is a unique identifier of the rbd bdev to be deleted - UUID number or name alias.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UIiSCSIBdev(UIBdev):
@ -586,10 +486,7 @@ class UIiSCSIBdev(UIBdev):
UIBdev.__init__(self, "iscsi", parent)
def delete(self, name):
try:
self.get_root().delete_iscsi_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_iscsi_bdev(name=name)
def ui_command_create(self, name, url, initiator_iqn):
"""
@ -602,16 +499,10 @@ class UIiSCSIBdev(UIBdev):
Example: iscsi://127.0.0.1:3260/iqn.2018-06.org.spdk/0.
initiator_iqn - IQN to use for initiating connection with the target.
"""
try:
ret_name = self.get_root().create_iscsi_bdev(name=name,
url=url,
initiator_iqn=initiator_iqn)
self.shell.log.info(ret_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
ret_name = self.get_root().create_iscsi_bdev(name=name,
url=url,
initiator_iqn=initiator_iqn)
self.shell.log.info(ret_name)
def ui_command_delete(self, name):
"""
@ -621,8 +512,6 @@ class UIiSCSIBdev(UIBdev):
name - name of the iscsi bdev to be deleted.
"""
self.delete(name)
self.get_root().refresh()
self.refresh()
class UIVirtioBlkBdev(UIBdev):
@ -635,20 +524,14 @@ class UIVirtioBlkBdev(UIBdev):
vq_count = self.ui_eval_param(vq_count, "number", None)
vq_size = self.ui_eval_param(vq_size, "number", None)
try:
ret = self.get_root().create_virtio_dev(name=name,
trtype=trtype,
traddr=traddr,
dev_type="blk",
vq_count=vq_count,
vq_size=vq_size)
ret = self.get_root().create_virtio_dev(name=name,
trtype=trtype,
traddr=traddr,
dev_type="blk",
vq_count=vq_count,
vq_size=vq_size)
self.shell.log.info(ret)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
self.shell.log.info(ret)
def ui_command_delete(self, name):
"""
@ -657,12 +540,7 @@ class UIVirtioBlkBdev(UIBdev):
Arguments:
name - Is a unique identifier of the virtio scsi bdev to be deleted - UUID number or name alias.
"""
try:
self.get_root().remove_virtio_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
self.get_root().remove_virtio_bdev(name=name)
class UIVirtioScsiBdev(UIBdev):
@ -680,29 +558,17 @@ class UIVirtioScsiBdev(UIBdev):
vq_count = self.ui_eval_param(vq_count, "number", None)
vq_size = self.ui_eval_param(vq_size, "number", None)
try:
ret = self.get_root().create_virtio_dev(name=name,
trtype=trtype,
traddr=traddr,
dev_type="scsi",
vq_count=vq_count,
vq_size=vq_size)
ret = self.get_root().create_virtio_dev(name=name,
trtype=trtype,
traddr=traddr,
dev_type="scsi",
vq_count=vq_count,
vq_size=vq_size)
self.shell.log.info(ret)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
self.shell.log.info(ret)
def ui_command_delete(self, name):
try:
self.get_root().remove_virtio_bdev(name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
self.get_root().remove_virtio_bdev(name=name)
class UIBdevObj(UINode):
@ -802,8 +668,6 @@ class UIVhost(UINode):
name - Controller name.
"""
self.get_root().remove_vhost_controller(ctrlr=name)
self.get_root().refresh()
self.refresh()
class UIVhostBlk(UIVhost):
@ -828,16 +692,10 @@ class UIVhostBlk(UIVhost):
readonly - Whether controller should be read only or not.
Default: False.
"""
try:
self.get_root().create_vhost_blk_controller(ctrlr=name,
dev_name=bdev,
cpumask=cpumask,
readonly=bool(readonly))
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
self.get_root().create_vhost_blk_controller(ctrlr=name,
dev_name=bdev,
cpumask=cpumask,
readonly=bool(readonly))
class UIVhostScsi(UIVhost):
@ -859,14 +717,8 @@ class UIVhostScsi(UIVhost):
cpumask - Optional. Integer to specify mask of CPUs to use.
Default: 1.
"""
try:
self.get_root().create_vhost_scsi_controller(ctrlr=name,
cpumask=cpumask)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh()
self.get_root().create_vhost_scsi_controller(ctrlr=name,
cpumask=cpumask)
class UIVhostCtrl(UINode):
@ -883,12 +735,9 @@ class UIVhostCtrl(UINode):
delay_base_us = self.ui_eval_param(delay_base_us, "number", None)
iops_threshold = self.ui_eval_param(iops_threshold, "number", None)
try:
self.get_root().set_vhost_controller_coalescing(ctrlr=self.ctrlr.ctrlr,
delay_base_us=delay_base_us,
iops_threshold=iops_threshold)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().set_vhost_controller_coalescing(ctrlr=self.ctrlr.ctrlr,
delay_base_us=delay_base_us,
iops_threshold=iops_threshold)
class UIVhostScsiCtrlObj(UIVhostCtrl):
@ -904,17 +753,11 @@ class UIVhostScsiCtrlObj(UIVhostCtrl):
Arguments:
target_num - Integer identifier of target node to delete.
"""
try:
self.get_root().remove_vhost_scsi_target(ctrlr=self.ctrlr.ctrlr,
scsi_target_num=int(target_num))
for ctrlr in self.get_root().get_vhost_controllers(ctrlr_type="scsi"):
if ctrlr.ctrlr == self.ctrlr.ctrlr:
self.ctrlr = ctrlr
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().refresh()
self.get_root().remove_vhost_scsi_target(ctrlr=self.ctrlr.ctrlr,
scsi_target_num=int(target_num))
for ctrlr in self.get_root().get_vhost_controllers(ctrlr_type="scsi"):
if ctrlr.ctrlr == self.ctrlr.ctrlr:
self.ctrlr = ctrlr
def ui_command_add_lun(self, target_num, bdev_name):
"""
@ -926,17 +769,12 @@ class UIVhostScsiCtrlObj(UIVhostCtrl):
target_num - Integer identifier of target node to modify.
bdev_name - Which bdev to add as LUN.
"""
try:
self.get_root().add_vhost_scsi_lun(ctrlr=self.ctrlr.ctrlr,
scsi_target_num=int(target_num),
bdev_name=bdev_name)
for ctrlr in self.get_root().get_vhost_controllers(ctrlr_type="scsi"):
if ctrlr.ctrlr == self.ctrlr.ctrlr:
self.ctrlr = ctrlr
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().add_vhost_scsi_lun(ctrlr=self.ctrlr.ctrlr,
scsi_target_num=int(target_num),
bdev_name=bdev_name)
for ctrlr in self.get_root().get_vhost_controllers(ctrlr_type="scsi"):
if ctrlr.ctrlr == self.ctrlr.ctrlr:
self.ctrlr = ctrlr
def summary(self):
info = self.ctrlr.socket
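The hunks above drop the per-command try/except JSONRPCException blocks and the follow-up refresh() calls from the individual ui_command_* methods. A minimal sketch of one way such handling can be centralized, assuming a configshell-style execute_command() hook on the shared UINode base class (the hook name, signature, and import paths are assumptions, not taken from this patch):

from configshell_fb import ConfigNode  # assumed base class
from rpc.client import JSONRPCException  # assumed import path

class UINode(ConfigNode):
    def execute_command(self, command, pparams=[], kparams={}):
        # Run the requested ui_command_* method and report any JSON-RPC
        # failure in one place instead of in every command body.
        try:
            return ConfigNode.execute_command(self, command,
                                              pparams, kparams)
        except JSONRPCException as e:
            self.shell.log.error(e.message)
            return None

With a hook like this, a failing create or delete still surfaces its error in the shell log, which is presumably why the per-command handlers could be removed.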

View File

@ -43,13 +43,9 @@ class UIISCSIGlobalParams(UINode):
disable_chap = self.ui_eval_param(d, "bool", None)
require_chap = self.ui_eval_param(r, "bool", None)
mutual_chap = self.ui_eval_param(m, "bool", None)
try:
self.get_root().set_iscsi_discovery_auth(
chap_group=chap_group, disable_chap=disable_chap,
require_chap=require_chap, mutual_chap=mutual_chap)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().set_iscsi_discovery_auth(
chap_group=chap_group, disable_chap=disable_chap,
require_chap=require_chap, mutual_chap=mutual_chap)
class UIISCSIGlobalParam(UINode):
@ -74,10 +70,7 @@ class UIISCSIDevices(UINode):
UIISCSIDevice(device, node, self)
def delete(self, name):
try:
self.get_root().delete_target_node(target_node_name=name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_target_node(target_node_name=name)
def ui_command_create(self, name, alias_name, bdev_name_id_pairs,
pg_ig_mappings, queue_depth, g=None, d=None, r=None,
@ -115,17 +108,12 @@ class UIISCSIDevices(UINode):
mutual_chap = self.ui_eval_param(m, "bool", None)
header_digest = self.ui_eval_param(h, "bool", None)
data_digest = self.ui_eval_param(t, "bool", None)
try:
self.get_root().construct_target_node(
name=name, alias_name=alias_name, luns=luns,
pg_ig_maps=pg_ig_maps, queue_depth=queue_depth,
chap_group=chap_group, disable_chap=disable_chap,
require_chap=require_chap, mutual_chap=mutual_chap,
header_digest=header_digest, data_digest=data_digest)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().construct_target_node(
name=name, alias_name=alias_name, luns=luns,
pg_ig_maps=pg_ig_maps, queue_depth=queue_depth,
chap_group=chap_group, disable_chap=disable_chap,
require_chap=require_chap, mutual_chap=mutual_chap,
header_digest=header_digest, data_digest=data_digest)
def ui_command_delete(self, name=None):
"""Delete a target node. If name is not specified delete all target nodes.
@ -134,13 +122,17 @@ class UIISCSIDevices(UINode):
name - Target node name.
"""
self.delete(name)
self.refresh()
def ui_command_delete_all(self):
"""Delete all target nodes"""
rpc_messages = ""
for device in self.scsi_devices:
self.delete(device.device_name)
self.refresh()
try:
self.delete(device.device_name)
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def ui_command_add_lun(self, name, bdev_name, lun_id=None):
"""Add lun to the target node.
@ -153,12 +145,8 @@ class UIISCSIDevices(UINode):
"""
if lun_id:
lun_id = self.ui_eval_param(lun_id, "number", None)
try:
self.get_root().target_node_add_lun(
name=name, bdev_name=bdev_name, lun_id=lun_id)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.parent.refresh()
self.get_root().target_node_add_lun(
name=name, bdev_name=bdev_name, lun_id=lun_id)
def summary(self):
count = 0
@ -190,14 +178,10 @@ class UIISCSIDevice(UINode):
disable_chap = self.ui_eval_param(d, "bool", None)
require_chap = self.ui_eval_param(r, "bool", None)
mutual_chap = self.ui_eval_param(m, "bool", None)
try:
self.get_root().set_iscsi_target_node_auth(
name=self.device.device_name, chap_group=chap_group,
disable_chap=disable_chap,
require_chap=require_chap, mutual_chap=mutual_chap)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.parent.refresh()
self.get_root().set_iscsi_target_node_auth(
name=self.device.device_name, chap_group=chap_group,
disable_chap=disable_chap,
require_chap=require_chap, mutual_chap=mutual_chap)
def ui_command_add_pg_ig_maps(self, pg_ig_mappings):
"""Add PG-IG maps to the target node.
@ -209,12 +193,8 @@ class UIISCSIDevice(UINode):
for u in pg_ig_mappings.strip().split(" "):
pg, ig = u.split(":")
pg_ig_maps.append({"pg_tag": int(pg), "ig_tag": int(ig)})
try:
self.get_root().add_pg_ig_maps(
pg_ig_maps=pg_ig_maps, name=self.device.device_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.parent.refresh()
self.get_root().add_pg_ig_maps(
pg_ig_maps=pg_ig_maps, name=self.device.device_name)
def ui_command_delete_pg_ig_maps(self, pg_ig_mappings):
"""Add PG-IG maps to the target node.
@ -226,12 +206,8 @@ class UIISCSIDevice(UINode):
for u in pg_ig_mappings.strip().split(" "):
pg, ig = u.split(":")
pg_ig_maps.append({"pg_tag": int(pg), "ig_tag": int(ig)})
try:
self.get_root().delete_pg_ig_maps(
pg_ig_maps=pg_ig_maps, name=self.device.device_name)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.parent.refresh()
self.get_root().delete_pg_ig_maps(
pg_ig_maps=pg_ig_maps, name=self.device.device_name)
def refresh(self):
self._children = set([])
@ -315,10 +291,7 @@ class UIPortalGroups(UINode):
self.refresh()
def delete(self, tag):
try:
self.get_root().delete_portal_group(tag=tag)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_portal_group(tag=tag)
def ui_command_create(self, tag, portal_list):
"""Add a portal group.
@ -338,30 +311,32 @@ class UIPortalGroups(UINode):
if cpumask:
portals[-1]['cpumask'] = cpumask
tag = self.ui_eval_param(tag, "number", None)
try:
self.get_root().construct_portal_group(tag=tag, portals=portals)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().construct_portal_group(tag=tag, portals=portals)
def ui_command_delete(self, tag):
"""Delete a portal group with given tag (unique, integer > 0))"""
tag = self.ui_eval_param(tag, "number", None)
self.delete(tag)
self.refresh()
def ui_command_delete_all(self):
"""Delete all portal groups"""
rpc_messages = ""
for pg in self.pgs:
self.delete(pg.tag)
self.refresh()
try:
self.delete(pg.tag)
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def refresh(self):
self._children = set([])
self.pgs = list(self.get_root().get_portal_groups())
for pg in self.pgs:
UIPortalGroup(pg, self)
try:
UIPortalGroup(pg, self)
except JSONRPCException as e:
self.shell.log.error(e.message)
def summary(self):
return "Portal groups: %d" % len(self.pgs), None
@ -395,10 +370,7 @@ class UIInitiatorGroups(UINode):
self.refresh()
def delete(self, tag):
try:
self.get_root().delete_initiator_group(tag=tag)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_initiator_group(tag=tag)
def ui_command_create(self, tag, initiator_list, netmask_list):
"""Add an initiator group.
@ -411,14 +383,9 @@ class UIInitiatorGroups(UINode):
e.g. 255.255.0.0 255.248.0.0
"""
tag = self.ui_eval_param(tag, "number", None)
try:
self.get_root().construct_initiator_group(
tag=tag, initiators=initiator_list.split(" "),
netmasks=netmask_list.split(" "))
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().construct_initiator_group(
tag=tag, initiators=initiator_list.split(" "),
netmasks=netmask_list.split(" "))
def ui_command_delete(self, tag):
"""Delete an initiator group.
@ -428,13 +395,17 @@ class UIInitiatorGroups(UINode):
"""
tag = self.ui_eval_param(tag, "number", None)
self.delete(tag)
self.refresh()
def ui_command_delete_all(self):
"""Delete all initiator groups"""
rpc_messages = ""
for ig in self.igs:
self.delete(ig.tag)
self.refresh()
try:
self.delete(ig.tag)
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def ui_command_add_initiator(self, tag, initiators, netmasks):
"""Add initiators to an existing initiator group.
@ -447,14 +418,9 @@ class UIInitiatorGroups(UINode):
e.g. 255.255.0.0 255.248.0.0
"""
tag = self.ui_eval_param(tag, "number", None)
try:
self.get_root().add_initiators_to_initiator_group(
tag=tag, initiators=initiators.split(" "),
netmasks=netmasks.split(" "))
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().add_initiators_to_initiator_group(
tag=tag, initiators=initiators.split(" "),
netmasks=netmasks.split(" "))
def ui_command_delete_initiator(self, tag, initiators=None, netmasks=None):
"""Delete initiators from an existing initiator group.
@ -469,14 +435,9 @@ class UIInitiatorGroups(UINode):
initiators = initiators.split(" ")
if netmasks:
netmasks = netmasks.split(" ")
try:
self.get_root().delete_initiators_from_initiator_group(
tag=tag, initiators=initiators,
netmasks=netmasks)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().delete_initiators_from_initiator_group(
tag=tag, initiators=initiators,
netmasks=netmasks)
def refresh(self):
self._children = set([])
@ -558,17 +519,11 @@ class UIISCSIAuthGroups(UINode):
UIISCSIAuthGroup(ag, self)
def delete(self, tag):
try:
self.get_root().delete_iscsi_auth_group(tag=tag)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_iscsi_auth_group(tag=tag)
def delete_secret(self, tag, user):
try:
self.get_root().delete_secret_from_iscsi_auth_group(
tag=tag, user=user)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_secret_from_iscsi_auth_group(
tag=tag, user=user)
def ui_command_create(self, tag, secrets=None):
"""Add authentication group for CHAP authentication.
@ -583,12 +538,7 @@ class UIISCSIAuthGroups(UINode):
if secrets:
secrets = [dict(u.split(":") for u in a.split(" "))
for a in secrets.split(",")]
try:
self.get_root().add_iscsi_auth_group(tag=tag, secrets=secrets)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().add_iscsi_auth_group(tag=tag, secrets=secrets)
def ui_command_delete(self, tag):
"""Delete an authentication group.
@ -598,13 +548,17 @@ class UIISCSIAuthGroups(UINode):
"""
tag = self.ui_eval_param(tag, "number", None)
self.delete(tag)
self.refresh()
def ui_command_delete_all(self):
"""Delete all authentication groups."""
rpc_messages = ""
for iscsi_auth_group in self.iscsi_auth_groups:
self.delete(iscsi_auth_group['tag'])
self.refresh()
try:
self.delete(iscsi_auth_group['tag'])
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def ui_command_add_secret(self, tag, user, secret,
muser=None, msecret=None):
@ -619,13 +573,9 @@ class UIISCSIAuthGroups(UINode):
msecret: Secret for mutual CHAP authentication
"""
tag = self.ui_eval_param(tag, "number", None)
try:
self.get_root().add_secret_to_iscsi_auth_group(
tag=tag, user=user, secret=secret,
muser=muser, msecret=msecret)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().add_secret_to_iscsi_auth_group(
tag=tag, user=user, secret=secret,
muser=muser, msecret=msecret)
def ui_command_delete_secret(self, tag, user):
"""Delete a secret from an authentication group.
@ -636,7 +586,6 @@ class UIISCSIAuthGroups(UINode):
"""
tag = self.ui_eval_param(tag, "number", None)
self.delete_secret(tag, user)
self.refresh()
def ui_command_delete_secret_all(self, tag):
"""Delete all secrets from an authentication group.
@ -644,12 +593,17 @@ class UIISCSIAuthGroups(UINode):
Args:
tag: Authentication group tag (unique, integer > 0)
"""
rpc_messages = ""
tag = self.ui_eval_param(tag, "number", None)
for ag in self.iscsi_auth_groups:
if ag['tag'] == tag:
for secret in ag['secrets']:
self.delete_secret(tag, secret['user'])
self.refresh()
try:
self.delete_secret(tag, secret['user'])
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def summary(self):
return "Groups: %s" % len(self.iscsi_auth_groups), None

View File

@ -42,17 +42,14 @@ class UINVMfTransports(UINode):
max_io_size = self.ui_eval_param(max_io_size, "number", None)
io_unit_size = self.ui_eval_param(io_unit_size, "number", None)
max_aq_depth = self.ui_eval_param(max_aq_depth, "number", None)
try:
self.get_root().create_nvmf_transport(trtype=trtype,
max_queue_depth=max_queue_depth,
max_qpairs_per_ctrlr=max_qpairs_per_ctrlr,
in_capsule_data_size=in_capsule_data_size,
max_io_size=max_io_size,
io_unit_size=io_unit_size,
max_aq_depth=max_aq_depth)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().create_nvmf_transport(trtype=trtype,
max_queue_depth=max_queue_depth,
max_qpairs_per_ctrlr=max_qpairs_per_ctrlr,
in_capsule_data_size=in_capsule_data_size,
max_io_size=max_io_size,
io_unit_size=io_unit_size,
max_aq_depth=max_aq_depth)
def summary(self):
return "Transports: %s" % len(self.children), None
@ -75,10 +72,7 @@ class UINVMfSubsystems(UINode):
UINVMfSubsystem(subsystem, self)
def delete(self, subsystem_nqn):
try:
self.get_root().delete_nvmf_subsystem(nqn=subsystem_nqn)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().delete_nvmf_subsystem(nqn=subsystem_nqn)
def ui_command_create(self, nqn, serial_number=None,
max_namespaces=None, allow_any_host="false"):
@ -94,13 +88,9 @@ class UINVMfSubsystems(UINode):
"""
allow_any_host = self.ui_eval_param(allow_any_host, "bool", False)
max_namespaces = self.ui_eval_param(max_namespaces, "number", 0)
try:
self.get_root().create_nvmf_subsystem(nqn=nqn, serial_number=serial_number,
allow_any_host=allow_any_host,
max_namespaces=max_namespaces)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.refresh()
self.get_root().create_nvmf_subsystem(nqn=nqn, serial_number=serial_number,
allow_any_host=allow_any_host,
max_namespaces=max_namespaces)
def ui_command_delete(self, subsystem_nqn):
"""Delete subsystem with given nqn.
@ -109,13 +99,17 @@ class UINVMfSubsystems(UINode):
subsystem_nqn - NQN of the subsystem to delete
"""
self.delete(subsystem_nqn)
self.refresh()
def ui_command_delete_all(self):
"""Delete all subsystems"""
rpc_messages = ""
for child in self._children:
self.delete(child.subsystem.nqn)
self.refresh()
try:
self.delete(child.subsystem.nqn)
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def summary(self):
return "Subsystems: %s" % len(self.children), None
@ -150,13 +144,8 @@ class UINVMfSubsystem(UINode):
disable - Optional parameter. If false then enable, if true disable
"""
disable = self.ui_eval_param(disable, "bool", None)
try:
self.get_root().nvmf_subsystem_allow_any_host(
nqn=self.subsystem.nqn, disable=disable)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh_node()
self.get_root().nvmf_subsystem_allow_any_host(
nqn=self.subsystem.nqn, disable=disable)
def summary(self):
sn = None
@ -190,12 +179,9 @@ class UINVMfSubsystemListeners(UINode):
self.refresh()
def delete(self, trtype, traddr, trsvcid, adrfam=None):
try:
self.get_root().nvmf_subsystem_remove_listener(
nqn=self.parent.subsystem.nqn, trtype=trtype,
traddr=traddr, trsvcid=trsvcid, adrfam=adrfam)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().nvmf_subsystem_remove_listener(
nqn=self.parent.subsystem.nqn, trtype=trtype,
traddr=traddr, trsvcid=trsvcid, adrfam=adrfam)
def ui_command_create(self, trtype, traddr, trsvcid, adrfam):
"""Create address listener for subsystem.
@ -206,14 +192,9 @@ class UINVMfSubsystemListeners(UINode):
trsvcid - NVMe-oF transport service id: e.g., a port number.
adrfam - NVMe-oF transport adrfam: e.g., ipv4, ipv6, ib, fc.
"""
try:
self.get_root().nvmf_subsystem_add_listener(
nqn=self.parent.subsystem.nqn, trtype=trtype, traddr=traddr,
trsvcid=trsvcid, adrfam=adrfam)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh_node()
self.get_root().nvmf_subsystem_add_listener(
nqn=self.parent.subsystem.nqn, trtype=trtype, traddr=traddr,
trsvcid=trsvcid, adrfam=adrfam)
def ui_command_delete(self, trtype, traddr, trsvcid, adrfam=None):
"""Remove address listener for subsystem.
@ -225,15 +206,17 @@ class UINVMfSubsystemListeners(UINode):
adrfam - Optional argument. Address family ("IPv4", "IPv6", "IB" or "FC").
"""
self.delete(trtype, traddr, trsvcid, adrfam)
self.get_root().refresh()
self.refresh_node()
def ui_command_delete_all(self):
"""Remove all address listeners from subsystem."""
rpc_messages = ""
for la in self.listen_addresses:
self.delete(la['trtype'], la['traddr'], la['trsvcid'], la['adrfam'])
self.get_root().refresh()
self.refresh_node()
try:
self.delete(la['trtype'], la['traddr'], la['trsvcid'], la['adrfam'])
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def summary(self):
return "Addresses: %s" % len(self.listen_addresses), None
@ -267,11 +250,8 @@ class UINVMfSubsystemHosts(UINode):
self.refresh()
def delete(self, host):
try:
self.get_root().nvmf_subsystem_remove_host(
nqn=self.parent.subsystem.nqn, host=host)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().nvmf_subsystem_remove_host(
nqn=self.parent.subsystem.nqn, host=host)
def ui_command_create(self, host):
"""Add a host NQN to the whitelist of allowed hosts.
@ -279,13 +259,8 @@ class UINVMfSubsystemHosts(UINode):
Args:
host: Host NQN to add to the list of allowed host NQNs
"""
try:
self.get_root().nvmf_subsystem_add_host(
nqn=self.parent.subsystem.nqn, host=host)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh_node()
self.get_root().nvmf_subsystem_add_host(
nqn=self.parent.subsystem.nqn, host=host)
def ui_command_delete(self, host):
"""Delete host from subsystem.
@ -294,15 +269,17 @@ class UINVMfSubsystemHosts(UINode):
host - NQN of host to remove.
"""
self.delete(host)
self.get_root().refresh()
self.refresh_node()
def ui_command_delete_all(self):
"""Delete host from subsystem"""
rpc_messages = ""
for host in self.hosts:
self.delete(host['nqn'])
self.get_root().refresh()
self.refresh_node()
try:
self.delete(host['nqn'])
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def summary(self):
return "Hosts: %s" % len(self.hosts), None
@ -332,11 +309,8 @@ class UINVMfSubsystemNamespaces(UINode):
self.refresh()
def delete(self, nsid):
try:
self.get_root().nvmf_subsystem_remove_ns(
nqn=self.parent.subsystem.nqn, nsid=nsid)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().nvmf_subsystem_remove_ns(
nqn=self.parent.subsystem.nqn, nsid=nsid)
def ui_command_create(self, bdev_name, nsid=None,
nguid=None, eui64=None, uuid=None):
@ -351,14 +325,9 @@ class UINVMfSubsystemNamespaces(UINode):
uuid: Namespace UUID.
"""
nsid = self.ui_eval_param(nsid, "number", None)
try:
self.get_root().nvmf_subsystem_add_ns(
nqn=self.parent.subsystem.nqn, bdev_name=bdev_name,
nsid=nsid, nguid=nguid, eui64=eui64, uuid=uuid)
except JSONRPCException as e:
self.shell.log.error(e.message)
self.get_root().refresh()
self.refresh_node()
self.get_root().nvmf_subsystem_add_ns(
nqn=self.parent.subsystem.nqn, bdev_name=bdev_name,
nsid=nsid, nguid=nguid, eui64=eui64, uuid=uuid)
def ui_command_delete(self, nsid):
"""Delete namespace from subsystem.
@ -368,15 +337,17 @@ class UINVMfSubsystemNamespaces(UINode):
"""
nsid = self.ui_eval_param(nsid, "number", None)
self.delete(nsid)
self.get_root().refresh()
self.refresh_node()
def ui_command_delete_all(self):
"""Delete all namespaces from subsystem."""
rpc_messages = ""
for namespace in self.namespaces:
self.delete(namespace['nsid'])
self.get_root().refresh()
self.refresh_node()
try:
self.delete(namespace['nsid'])
except JSONRPCException as e:
rpc_messages += e.message
if rpc_messages:
raise JSONRPCException(rpc_messages)
def summary(self):
return "Namespaces: %s" % len(self.namespaces), None

View File

@ -3,6 +3,7 @@ SPDK_BUILD_DOC=1
SPDK_RUN_CHECK_FORMAT=1
SPDK_RUN_SCANBUILD=1
SPDK_RUN_VALGRIND=1
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_UNITTEST=1
SPDK_TEST_ISAL=1
SPDK_TEST_ISCSI=0
@ -12,6 +13,7 @@ SPDK_TEST_NVME_CLI=0
SPDK_TEST_NVMF=1
SPDK_TEST_RBD=0
SPDK_TEST_CRYPTO=0
SPDK_TEST_OCF=0
# requires some extra configuration. see TEST_ENV_SETUP_README
SPDK_TEST_VHOST=0
SPDK_TEST_VHOST_INIT=0

View File

@ -56,8 +56,7 @@ $(SHARED_LINKED_LIB) : $(SHARED_REALNAME_LIB)
all: $(SHARED_LINKED_LIB)
clean:
$(CLEAN_C) $(SHARED_REALNAME_LIB) $(SHARED_LINKED_LIB)
CLEAN_FILES += $(SHARED_REALNAME_LIB) $(SHARED_LINKED_LIB)
install:
$(INSTALL_SHARED_LIB)

View File

@ -0,0 +1 @@
leak:spdk_fs_alloc_io_channel_sync

View File

@ -13,6 +13,11 @@ run_step() {
echo "--spdk_cache_size=$CACHE_SIZE" >> "$1"_flags.txt
echo -n Start $1 test phase...
# ASAN has some bugs around thread_local variables. We have a destructor in place
# to free the thread contexts, but ASAN complains about the leak before those
# destructors have a chance to run. So suppress this one specific leak using
# LSAN_OPTIONS.
export LSAN_OPTIONS="suppressions=$testdir/lsan_suppressions.txt"
/usr/bin/time taskset 0xFF $DB_BENCH --flagfile="$1"_flags.txt &> "$1"_db_bench.txt
echo done.
}
@ -25,7 +30,8 @@ testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../../..)
source $rootdir/test/common/autotest_common.sh
DB_BENCH_DIR=/usr/src/rocksdb
# In the autotest job, we copy the rocksdb source to just outside the spdk directory.
DB_BENCH_DIR="$rootdir/../rocksdb"
DB_BENCH=$DB_BENCH_DIR/db_bench
ROCKSDB_CONF=$testdir/rocksdb.conf

View File

@ -0,0 +1 @@
526c73bd94150cc8fbd651f736e1ca95f50d8e13

View File

@ -39,6 +39,7 @@ fi
: ${SPDK_RUN_CHECK_FORMAT=1}; export SPDK_RUN_CHECK_FORMAT
: ${SPDK_RUN_SCANBUILD=1}; export SPDK_RUN_SCANBUILD
: ${SPDK_RUN_VALGRIND=1}; export SPDK_RUN_VALGRIND
: ${SPDK_RUN_FUNCTIONAL_TEST=1}; export SPDK_RUN_FUNCTIONAL_TEST
: ${SPDK_TEST_UNITTEST=1}; export SPDK_TEST_UNITTEST
: ${SPDK_TEST_ISAL=1}; export SPDK_TEST_ISAL
: ${SPDK_TEST_ISCSI=1}; export SPDK_TEST_ISCSI

View File

@ -360,7 +360,7 @@ cd ~
: ${GIT_REPO_SPDK_NVME_CLI=https://github.com/spdk/nvme-cli}; export GIT_REPO_SPDK_NVME_CLI
: ${GIT_REPO_INTEL_IPSEC_MB=https://github.com/spdk/intel-ipsec-mb.git}; export GIT_REPO_INTEL_IPSEC_MB
: ${DRIVER_LOCATION_QAT=https://01.org/sites/default/files/downloads/intelr-quickassist-technology/qat1.7.l.4.3.0-00033.tar.gz}; export DRIVER_LOCATION_QAT
: ${GIT_REPO_OCF=https://github.com/Open-OCF/ocf}; export GIT_REPO_OCF
: ${GIT_REPO_OCF=https://github.com/Open-CAS/ocf}; export GIT_REPO_OCF
jobs=$(($(nproc)*2))
@ -426,7 +426,8 @@ if $INSTALL; then
sshfs \
sshpass \
python3-pandas \
btrfs-progs
btrfs-progs \
iptables
fi
sudo mkdir -p /usr/src
@ -459,6 +460,7 @@ SPDK_RUN_CHECK_FORMAT=1
SPDK_RUN_SCANBUILD=1
SPDK_RUN_VALGRIND=1
SPDK_TEST_CRYPTO=1
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_UNITTEST=1
SPDK_TEST_ISCSI=1
SPDK_TEST_ISCSI_INITIATOR=1

test/ftl/bdevperf.sh Executable file
View File

@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -e
testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
tests=('-q 1 -w randwrite -t 4 -o 69632' '-q 128 -w randwrite -t 4 -o 4096' '-q 128 -w verify -t 4 -o 4096')
device=$1
ftl_bdev_conf=$testdir/config/ftl.conf
$rootdir/scripts/gen_ftl.sh -a $device -n nvme0 -l 0-3 > $ftl_bdev_conf
for (( i=0; i<${#tests[@]}; i++ )) do
timing_enter "${tests[$i]}"
$rootdir/test/bdev/bdevperf/bdevperf -c $ftl_bdev_conf ${tests[$i]}
timing_exit "${tests[$i]}"
done
report_test_completion ftl_bdevperf

View File

@ -14,7 +14,7 @@ bs=4k
filename=FTL_BDEV_NAME
random_distribution=normal
serialize_overlap=1
io_size=5G
io_size=256M
[test]
numjobs=1

View File

@ -15,7 +15,7 @@ bs=4k
filename=FTL_BDEV_NAME
random_distribution=normal
serialize_overlap=1
io_size=1G
io_size=256M
[first_half]
offset=0%

View File

@ -5,7 +5,7 @@ thread=1
direct=1
iodepth=1
rw=randwrite
size=4G
size=256M
verify=crc32c
do_verify=1
verify_dump=0

View File

@ -7,27 +7,34 @@ rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
function ftl_kill() {
rm -f $testdir/.testfile_*
function at_ftl_exit() {
# restore original driver
PCI_WHITELIST="$device" PCI_BLACKLIST="" DRIVER_OVERRIDE="$ocssd_original_dirver" ./scripts/setup.sh
}
vendor_id='0x1d1d'
device_id='0x1f1f'
device=$(lspci -d ${vendor_id}:${device_id} | cut -d' ' -f 1)
read device _ <<< "$OCSSD_PCI_DEVICES"
if [ -z "$device" ]; then
echo "Could not find FTL device. Tests skipped."
exit 0
if [[ -z "$device" ]]; then
echo "OCSSD device list is empty."
echo "This test require that OCSSD_PCI_DEVICES environment variable to be set"
echo "and point to OCSSD devices PCI BDF. You can specify multiple space"
echo "separated BDFs in this case first one will be used."
exit 1
fi
trap "ftl_kill; exit 1" SIGINT SIGTERM EXIT
ocssd_original_dirver="$(basename $(readlink /sys/bus/pci/devices/$device/driver))"
trap "at_ftl_exit" SIGINT SIGTERM EXIT
# OCSSD is blacklisted so bind it to vfio/uio driver before testing
PCI_WHITELIST="$device" PCI_BLACKLIST="" DRIVER_OVERRIDE="" ./scripts/setup.sh
timing_enter ftl
timing_enter fio
timing_enter bdevperf
run_test suite $testdir/fio.sh $device
run_test suite $testdir/bdevperf.sh $device
timing_exit fio
timing_exit bdevperf
timing_enter restore
run_test suite $testdir/restore.sh $device $uuid
@ -36,4 +43,4 @@ timing_exit restore
timing_exit ftl
trap - SIGINT SIGTERM EXIT
ftl_kill
at_ftl_exit

View File

@ -80,7 +80,7 @@ function json_config_test_shutdown_app() {
# kill_instance RPC will trigger ASAN
kill -SIGINT ${app_pid[$app]}
for (( i=0; i<10; i++ )); do
for (( i=0; i<30; i++ )); do
if ! kill -0 ${app_pid[$app]} 2>/dev/null; then
app_pid[$app]=
break

View File

@ -112,12 +112,16 @@ class Commands_Rpc(object):
output = self.rpc.construct_malloc_bdev(total_size, block_size)[0]
return output.rstrip('\n')
def construct_lvol_store(self, base_name, lvs_name, cluster_size=None):
def construct_lvol_store(self, base_name, lvs_name, cluster_size=None, clear_method=None):
print("INFO: RPC COMMAND construct_lvol_store")
if cluster_size:
output = self.rpc.construct_lvol_store(base_name,
lvs_name,
"-c {cluster_sz}".format(cluster_sz=cluster_size))[0]
elif clear_method:
output = self.rpc.construct_lvol_store(base_name,
lvs_name,
"--clear-method {clear_m}".format(clear_m=clear_method))[0]
else:
output = self.rpc.construct_lvol_store(base_name, lvs_name)[0]
return output.rstrip('\n')
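Commands_Rpc.construct_lvol_store above now forwards --clear-method when asked to; note that as written it passes either -c (cluster size) or --clear-method, not both. A hedged sketch of exercising each clear method through this helper, where c is a Commands_Rpc instance and the sizes and lvs name are placeholders (check_get_lvol_stores and delete_malloc_bdev are the same helpers used by the lvol test later in this diff):

def create_lvs_with_each_clear_method(c, total_size, block_size, lvs_name):
    # For every supported clear method: build a malloc bdev, create an
    # lvol store on it, verify the store is reported, then clean up.
    fail_count = 0
    for clear_method in ("none", "unmap", "write_zeroes"):
        base_name = c.construct_malloc_bdev(total_size, block_size)
        lvs_uuid = c.construct_lvol_store(base_name, lvs_name,
                                          clear_method=clear_method)
        fail_count += c.check_get_lvol_stores(base_name, lvs_uuid)
        fail_count += c.delete_malloc_bdev(base_name)
    return fail_count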

View File

@ -121,6 +121,7 @@ def case_message(func):
553: 'unregister_lvol_bdev',
600: 'construct_lvol_store_with_cluster_size_max',
601: 'construct_lvol_store_with_cluster_size_min',
602: 'construct_lvol_store_with_all_clear_methods',
650: 'thin_provisioning_check_space',
651: 'thin_provisioning_read_empty_bdev',
652: 'thin_provisionind_data_integrity_test',
@ -1023,6 +1024,43 @@ class TestCases(object):
# - Error code response printed to stdout
return fail_count
@case_message
def test_case602(self):
"""
construct_lvol_store_with_all_clear_methods
Call construct_lvol_store with all options for clear methods.
"""
fail_count = 0
# Create malloc bdev
base_name = self.c.construct_malloc_bdev(self.total_size,
self.block_size)
# Construct lvol store with clear method 'none'
lvol_uuid = self.c.construct_lvol_store(base_name, self.lvs_name, clear_method="none")
fail_count += self.c.check_get_lvol_stores(base_name, lvol_uuid)
fail_count += self.c.delete_malloc_bdev(base_name)
# Create malloc bdev
base_name = self.c.construct_malloc_bdev(self.total_size,
self.block_size)
# Construct lvol store with clear method 'unmap'
lvol_uuid = self.c.construct_lvol_store(base_name, self.lvs_name, clear_method="unmap")
fail_count += self.c.check_get_lvol_stores(base_name, lvol_uuid)
fail_count += self.c.delete_malloc_bdev(base_name)
# Create malloc bdev
base_name = self.c.construct_malloc_bdev(self.total_size,
self.block_size)
# Construct lvol store with clear method 'write_zeroes'
lvol_uuid = self.c.construct_lvol_store(base_name, self.lvs_name, clear_method="write_zeroes")
fail_count += self.c.check_get_lvol_stores(base_name, lvol_uuid)
fail_count += self.c.delete_malloc_bdev(base_name)
# Expected result:
# - construct lvol store return code == 0 for each clear method
# - no errors occur
return fail_count
@case_message
def test_case650(self):
"""

View File

@ -21,7 +21,7 @@ o- / ...........................................................................
| | o- null_bdev0 ...................................................................................... [Size=$(FP)M, Not claimed]
| | o- null_bdev1 ...................................................................................... [Size=$(FP)M, Not claimed]
| o- nvme ............................................................................................................. [Bdevs: 1]
| | o- Nvme0n1 $(S) [Size=$(FP)G, Claimed]
| | o- Nvme0n1 $(S) [Size=$(S), Claimed]
| o- pmemblk .......................................................................................................... [Bdevs: 0]
| o- rbd .............................................................................................................. [Bdevs: 0]
| o- split_disk ....................................................................................................... [Bdevs: 4]

View File

@ -731,7 +731,7 @@ test_initdrivers(void)
CU_ASSERT(g_session_mp == NULL);
MOCK_SET(rte_crypto_op_pool_create, (struct rte_mempool *)1);
/* Check resources are sufficient failure. */
/* Check resources are not sufficient */
MOCK_CLEARED_ASSERT(spdk_mempool_create);
rc = vbdev_crypto_init_crypto_drivers();
CU_ASSERT(rc == -EINVAL);

View File

@ -671,6 +671,22 @@ basic_qos(void)
poll_threads();
CU_ASSERT(status == SPDK_BDEV_IO_STATUS_SUCCESS);
/*
* Close only the descriptor, which should stop the qos channel since
* the last descriptor is being removed.
*/
spdk_bdev_close(g_desc);
poll_threads();
CU_ASSERT(bdev->internal.qos->ch == NULL);
/*
* Open the bdev again, which shall set up the qos channel since the
* channels are still valid.
*/
spdk_bdev_open(bdev, true, NULL, NULL, &g_desc);
poll_threads();
CU_ASSERT(bdev->internal.qos->ch != NULL);
/* Tear down the channels */
set_thread(0);
spdk_put_io_channel(io_ch[0]);
@ -684,7 +700,10 @@ basic_qos(void)
poll_threads();
CU_ASSERT(bdev->internal.qos->ch == NULL);
/* Open the bdev again, no qos channel setup without valid channels. */
spdk_bdev_open(bdev, true, NULL, NULL, &g_desc);
poll_threads();
CU_ASSERT(bdev->internal.qos->ch == NULL);
/* Create the channels in reverse order. */
set_thread(1);

View File

@ -705,7 +705,8 @@ ut_lvs_destroy(void)
struct spdk_lvol_store *lvs;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -738,7 +739,8 @@ ut_lvol_init(void)
int rc;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -772,7 +774,8 @@ ut_lvol_snapshot(void)
struct spdk_lvol *lvol = NULL;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -821,7 +824,8 @@ ut_lvol_clone(void)
struct spdk_lvol *clone = NULL;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -886,7 +890,8 @@ ut_lvol_hotremove(void)
lvol_already_opened = false;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -979,7 +984,8 @@ ut_lvol_rename(void)
int rc;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -1043,7 +1049,8 @@ ut_lvol_destroy(void)
int rc;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -1097,7 +1104,8 @@ ut_lvol_resize(void)
int rc = 0;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -1142,7 +1150,8 @@ ut_lvol_set_read_only(void)
int rc = 0;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -1181,7 +1190,8 @@ ut_lvs_unload(void)
struct spdk_lvol_store *lvs;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -1216,7 +1226,8 @@ ut_lvs_init(void)
/* spdk_lvs_init() fails */
lvol_store_initialize_fail = true;
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc != 0);
CU_ASSERT(g_lvserrno == 0);
CU_ASSERT(g_lvol_store == NULL);
@ -1226,7 +1237,8 @@ ut_lvs_init(void)
/* spdk_lvs_init_cb() fails */
lvol_store_initialize_cb_fail = true;
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno != 0);
CU_ASSERT(g_lvol_store == NULL);
@ -1234,7 +1246,8 @@ ut_lvs_init(void)
lvol_store_initialize_cb_fail = false;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
@ -1244,7 +1257,8 @@ ut_lvs_init(void)
g_lvol_store = NULL;
/* Bdev with lvol store already claimed */
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "lvs", 0, LVS_CLEAR_WITH_UNMAP, lvol_store_op_with_handle_complete,
NULL);
CU_ASSERT(rc != 0);
CU_ASSERT(g_lvserrno == 0);
CU_ASSERT(g_lvol_store == NULL);
@ -1379,7 +1393,8 @@ ut_lvs_rename(void)
struct spdk_lvol_store *lvs;
/* Lvol store is successfully created */
rc = vbdev_lvs_create(&g_bdev, "old_lvs_name", 0, lvol_store_op_with_handle_complete, NULL);
rc = vbdev_lvs_create(&g_bdev, "old_lvs_name", 0, LVS_CLEAR_WITH_UNMAP,
lvol_store_op_with_handle_complete, NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);

View File

@ -62,6 +62,16 @@ unittest_parse_args(int ch, char *arg)
return 0;
}
static void
clean_opts(struct spdk_app_opts *opts)
{
free(opts->pci_whitelist);
opts->pci_whitelist = NULL;
free(opts->pci_blacklist);
opts->pci_blacklist = NULL;
memset(opts, 0, sizeof(struct spdk_app_opts));
}
static void
test_spdk_app_parse_args(void)
{
@ -109,24 +119,28 @@ test_spdk_app_parse_args(void)
rc = spdk_app_parse_args(test_argc, valid_argv, &opts, "", NULL, unittest_parse_args, NULL);
CU_ASSERT_EQUAL(rc, SPDK_APP_PARSE_ARGS_SUCCESS);
optind = 1;
clean_opts(&opts);
/* Test invalid short option Expected result: FAIL */
rc = spdk_app_parse_args(test_argc, argv_added_short_opt, &opts, "", NULL, unittest_parse_args,
NULL);
CU_ASSERT_EQUAL(rc, SPDK_APP_PARSE_ARGS_FAIL);
optind = 1;
clean_opts(&opts);
/* Test valid global and local options. Expected result: PASS */
rc = spdk_app_parse_args(test_argc, argv_added_short_opt, &opts, "z", NULL, unittest_parse_args,
unittest_usage);
CU_ASSERT_EQUAL(rc, SPDK_APP_PARSE_ARGS_SUCCESS);
optind = 1;
clean_opts(&opts);
/* Test invalid long option Expected result: FAIL */
rc = spdk_app_parse_args(test_argc, argv_added_long_opt, &opts, "", NULL, unittest_parse_args,
NULL);
CU_ASSERT_EQUAL(rc, SPDK_APP_PARSE_ARGS_FAIL);
optind = 1;
clean_opts(&opts);
/* Test valid global and local options. Expected result: PASS */
my_options[0].name = "test-long-opt";
@ -134,23 +148,27 @@ test_spdk_app_parse_args(void)
unittest_usage);
CU_ASSERT_EQUAL(rc, SPDK_APP_PARSE_ARGS_SUCCESS);
optind = 1;
clean_opts(&opts);
/* Test overlapping global and local options. Expected result: FAIL */
rc = spdk_app_parse_args(test_argc, valid_argv, &opts, SPDK_APP_GETOPT_STRING, NULL,
unittest_parse_args, NULL);
CU_ASSERT_EQUAL(rc, SPDK_APP_PARSE_ARGS_FAIL);
optind = 1;
clean_opts(&opts);
/* Specify -B and -W options at the same time. Expected result: FAIL */
rc = spdk_app_parse_args(test_argc, invalid_argv_BW, &opts, "", NULL, unittest_parse_args, NULL);
SPDK_CU_ASSERT_FATAL(rc == SPDK_APP_PARSE_ARGS_FAIL);
optind = 1;
clean_opts(&opts);
/* Omit necessary argument to option */
rc = spdk_app_parse_args(test_argc, invalid_argv_missing_option, &opts, "", NULL,
unittest_parse_args, NULL);
CU_ASSERT_EQUAL(rc, SPDK_APP_PARSE_ARGS_FAIL);
optind = 1;
clean_opts(&opts);
}
int

View File

@ -45,8 +45,8 @@ struct spdk_nvmf_transport_opts g_rdma_ut_transport_opts = {
.max_queue_depth = SPDK_NVMF_RDMA_DEFAULT_MAX_QUEUE_DEPTH,
.max_qpairs_per_ctrlr = SPDK_NVMF_RDMA_DEFAULT_MAX_QPAIRS_PER_CTRLR,
.in_capsule_data_size = SPDK_NVMF_RDMA_DEFAULT_IN_CAPSULE_DATA_SIZE,
.max_io_size = (SPDK_NVMF_RDMA_DEFAULT_IO_BUFFER_SIZE * RDMA_UT_UNITS_IN_MAX_IO),
.io_unit_size = SPDK_NVMF_RDMA_DEFAULT_IO_BUFFER_SIZE,
.max_io_size = (SPDK_NVMF_RDMA_MIN_IO_BUFFER_SIZE * RDMA_UT_UNITS_IN_MAX_IO),
.io_unit_size = SPDK_NVMF_RDMA_MIN_IO_BUFFER_SIZE,
.max_aq_depth = SPDK_NVMF_RDMA_DEFAULT_AQ_DEPTH,
.num_shared_buffers = SPDK_NVMF_RDMA_DEFAULT_NUM_SHARED_BUFFERS,
};
@ -135,6 +135,7 @@ test_spdk_nvmf_rdma_request_parse_sgl(void)
rdma_req.req.rsp = &cpl;
rdma_req.data.wr.sg_list = rdma_req.data.sgl;
rdma_req.req.qpair = &rqpair.qpair;
rdma_req.req.xfer = SPDK_NVME_DATA_CONTROLLER_TO_HOST;
rtransport.transport.opts = g_rdma_ut_transport_opts;