This is part of future changes to block blob operations
that may cause race conditions with each other.
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: Ia728d1fc207375ddcb3b70b5081ddcffa9f99027
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449789
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
There are several places where we have the tracker
pointer, yet we go find the tracker again by getting
the tr->cid and using that index to find the tracker
again in the qpair's array. That's really silly.
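A minimal illustration of the pattern being removed, using simplified
stand-in types rather than the actual NVMe driver structures: the
completion path already holds the tracker pointer, so re-deriving it
from tr->cid is redundant.
```
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the driver's tracker/qpair types. */
struct nvme_tracker {
	uint16_t cid;
};

struct nvme_qpair {
	struct nvme_tracker tr[4];
};

static void
complete_tracker(struct nvme_qpair *qpair, struct nvme_tracker *tr)
{
	/* Old pattern: look the tracker up again by its cid ... */
	struct nvme_tracker *again = &qpair->tr[tr->cid];

	/* ... even though it is always the tracker we already hold. */
	printf("redundant lookup returns the same tracker: %s\n",
	       again == tr ? "yes" : "no");
}

int
main(void)
{
	struct nvme_qpair qpair = {
		.tr = { { .cid = 0 }, { .cid = 1 }, { .cid = 2 }, { .cid = 3 } }
	};

	complete_tracker(&qpair, &qpair.tr[2]);
	return 0;
}
```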
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I54acd642a2c9821f2b95e17563904b859495081a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450308
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Currently the idea is:
convert the multiple SGLs into a single SGL and send it out.
Change-Id: I8e571704e9d7c7b583f889837eead7cac1982fcd
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448262
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
First submission. Implemented part of the Opal library
and the "scan" function, which can be invoked by nvme_manage.
Change-Id: Iba86d86dd3af06a06b6805120ee5005af8183459
Signed-off-by: Chunyang Hui <chunyang.hui@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/439335
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: yidong0635 <dongx.yi@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
By removing the passthru stuff from the conf file, we create a more
dramatic effect when we add in the passthru bdev during the labs.
Change-Id: I0a0ed101dd4135d8b2774cdd0850ca3bf2e43d01
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450187
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
I think it's simpler to just have people pass in the path of the conf
file during the lab. Right now in the lab, if people try to run this
script from any other directory but hello_world, it throws errors and
can throw people off during labs.
Change-Id: I74fc9bd311d4fdc9e8676cfa938d32b48fdece39
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450186
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: John Kariuki <John.K.Kariuki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Limit the thread scheduler to put spdk_threads on
lcores <= last_lcore instead of lcores < lcore_count,
which was probably the original intent.
When the cpumask was not a contiguous cpu range, the
thread scheduler failed to schedule any spdk_threads on
the last cores. There was one hardcoded thread created
for each reactor, but the scheduler could squash some
of those into a single reactor. This broke the legacy
lcore-based messages as those expect there to be at least
one spdk_thread per lcore. Any spdk_poller_register()
or spdk_get_io_channel() called from such a legacy
message would fail an assertion, as spdk_get_thread()
returned NULL.
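A hedged illustration of the bound change (not the actual reactor
code): with a non-contiguous cpumask such as {0, 1, 3}, lcore_count is
3, so an "lcore < lcore_count" bound can never reach lcore 3, while
"lcore <= last_lcore" covers it.
```
#include <stdbool.h>
#include <stdio.h>

int
main(void)
{
	/* cpumask {0, 1, 3}: lcore 2 is excluded, lcore 3 is the last one. */
	bool cpumask[8] = { [0] = true, [1] = true, [3] = true };
	unsigned lcore_count = 3, last_lcore = 3;

	printf("old bound (lcore < lcore_count) schedules on:");
	for (unsigned lcore = 0; lcore < lcore_count; lcore++) {
		if (cpumask[lcore]) {
			printf(" %u", lcore);
		}
	}

	printf("\nnew bound (lcore <= last_lcore) schedules on:");
	for (unsigned lcore = 0; lcore <= last_lcore; lcore++) {
		if (cpumask[lcore]) {
			printf(" %u", lcore);
		}
	}
	printf("\n");
	return 0;
}
```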
Fixes #743
Change-Id: I81a3f76d9c4788596c697df6ff51b264b99ce10b
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450353
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It's possible that we get a request to abort I/O just after
a queued_datain_task has completed, but before we've had a
chance to remove it from the queued_datain_tasks TAILQ.
_iscsi_conn_abort_queued_datain_task wasn't accounting for
that, which would result in an infinite loop.
Fixes issue #725.
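A minimal sketch of the idea with simplified stand-in types (not the
actual iSCSI connection code): while draining the queued list for an
abort, entries that already completed are simply dropped instead of
being treated as abortable work, so the loop always makes progress.
```
#include <stdbool.h>
#include <stdio.h>
#include <sys/queue.h>

struct datain_task {
	bool completed;	/* set when the task finished but was not yet removed */
	TAILQ_ENTRY(datain_task) link;
};

TAILQ_HEAD(datain_list, datain_task);

static void
abort_queued_datain_tasks(struct datain_list *tasks)
{
	struct datain_task *task;

	while ((task = TAILQ_FIRST(tasks)) != NULL) {
		TAILQ_REMOVE(tasks, task, link);
		if (task->completed) {
			/* Nothing to abort; drop it so we keep making progress. */
			continue;
		}
		printf("aborting task %p\n", (void *)task);
	}
}

int
main(void)
{
	struct datain_list tasks = TAILQ_HEAD_INITIALIZER(tasks);
	struct datain_task a = { .completed = false };
	struct datain_task b = { .completed = true };

	TAILQ_INSERT_TAIL(&tasks, &a, link);
	TAILQ_INSERT_TAIL(&tasks, &b, link);
	abort_queued_datain_tasks(&tasks);
	return 0;
}
```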
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Tested-by: Jane Lusby
Change-Id: I494ee78763d527d83dcb65f46563ee69bb975576
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450301
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Unregister the OCF bdev using a poller that checks whether
the cache has pending requests.
This prevents ocf_mngt_cache_stop from causing a
deadlock on reads in WT mode.
The submodule commit was updated to get the new function
ocf_cache_has_pending_requests().
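A hedged sketch of the approach (local names are illustrative, not the
actual vbdev_ocf code): register a poller that waits for
ocf_cache_has_pending_requests() to report an idle cache before
stopping it.
```
#include "spdk/thread.h"
#include "ocf/ocf.h"

struct cache_stop_ctx {
	ocf_cache_t cache;
	struct spdk_poller *poller;
};

/* Poller body: keep polling until OCF reports no pending requests. */
static int
stop_cache_when_idle(void *arg)
{
	struct cache_stop_ctx *ctx = arg;

	if (ocf_cache_has_pending_requests(ctx->cache)) {
		/* WT-mode reads may still be in flight; check again later. */
		return 0;
	}

	spdk_poller_unregister(&ctx->poller);
	/* Safe to stop the cache now, e.g. via ocf_mngt_cache_stop(). */
	return 1;
}

/* Registration, e.g. from the unregister path:
 * ctx->poller = spdk_poller_register(stop_cache_when_idle, ctx, 1000);
 */
```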
Change-Id: Iee4cb09bc2bb859a6dcce89994c686f64924c942
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/446604
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add the structure vbdev_ocf_mngt_ctx, which is going to be used
for asynchronous management operations.
Using such a context, dynamic memory allocations are no longer
required (except for the one-time poller registration).
This patch also adds functions that modify the context in a
predictable way. They are very important for the complex flow of
management operations in the OCF bdev, especially after the whole
OCF API gets changed to asynchronous.
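A hedged sketch of the kind of context this describes; the field names
are illustrative, not the actual vbdev_ocf definitions. The idea is
that the whole operation state is embedded in one structure, so only
the poller registration needs a dynamic resource.
```
#include "spdk/thread.h"

struct vbdev_ocf;	/* OCF bdev this operation acts on */

/* One asynchronous management step; illustrative signature. */
typedef void (*mngt_step_fn)(struct vbdev_ocf *vbdev);

struct vbdev_ocf_mngt_ctx {
	struct spdk_poller *poller;	/* the only dynamically registered resource */
	mngt_step_fn *current_step;	/* next step in a NULL-terminated chain */
	int status;			/* result passed to the completion callback */
	void (*cb)(int status, void *cb_arg);
	void *cb_arg;
};
```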
Change-Id: Ia402d51665330a553f7f3d74000e3636f0d6a598
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449556
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
We didn't call teardown_test() in "unregister_and_close"
test case, causing the subsequent test case to fail
to register the same io_channel. This didn't cause any
issues, as spdk_io_device_register() silently returned
if the same io_device was already registered. However,
there was an extra error message printed and this patch
gets rid of it.
```
Test: unregister_and_close ...passed
Test: basic_qos ...thread.c: 850:spdk_io_device_register: *ERROR*:
io_device 0x55555576e4e0 already registered (old:0x555555770ab0
new:0x55555d7a14d0)
passed
```
Change-Id: Ib554612df8985c9d99b46b71bb76020f52565362
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450111
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Now we need to add configuration in these files to run the test cases.
Add some settings in the conf files for the user to choose freely.
A RAID bdev can be composed of AIO/NVMe/Malloc bdevs.
Change-Id: Ifdab8539c89d2cf4fcd88ca7f26e8563e12dd585
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449772
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This should speed up operations, and allows us to remove the 16-byte
link object from the request structure.
Change-Id: Ie62df1f44d22580a7a7ae41c498295841d1e3064
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448080
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
spdk_dma_*malloc() is about to be deprecated.
Change-Id: I683ea366da7bb186f16a8084a9c43276ed4fce04
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449798
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Purpose: to support multiple SGLs later.
Change-Id: I133a451100b736353cf98a6aaca879d290ff5b67
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448259
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Enhance the RPC method start_nbd_disk to take nbd_device as an
optional parameter. If it is not assigned, automatically choose an
available nbd device path from /dev/nbd0 to /dev/nbdN.
For github issue #324:
https://github.com/spdk/spdk/issues/324
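One plausible way to probe for a free device, shown purely for
illustration and not necessarily what the patch implements: an nbd
device that is in use exposes a pid attribute in sysfs, so the first
index without one can be picked.
```
#include <stdio.h>
#include <unistd.h>

/* Return the first /dev/nbdN that looks unused, or -1 if none was found. */
static int
find_free_nbd_index(int max_index)
{
	char path[64];

	for (int i = 0; i <= max_index; i++) {
		snprintf(path, sizeof(path), "/sys/block/nbd%d/pid", i);
		if (access(path, F_OK) != 0) {
			/* No pid attribute: no nbd client is attached here. */
			return i;
		}
	}
	return -1;
}

int
main(void)
{
	int idx = find_free_nbd_index(15);

	if (idx >= 0) {
		printf("/dev/nbd%d appears to be available\n", idx);
	}
	return 0;
}
```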
Change-Id: I72c064d8bd476df342f5aa0af4d6120eb021c7ed
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/440453
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
This API had good intentions, but as more complicated
use cases came up where base bdevs could come and go,
we've realized that the bdev layer will need another
mechanism to query bdev modules on these types of
relationships between a virtual bdev and its base
bdevs. We removed all code related to tracking
the array of base bdevs a long time ago.
Change all existing callers to use spdk_bdev_register.
Document spdk_vbdev_register as deprecated for now,
and change its implementation to just call
spdk_bdev_register for simplicity sake.
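A sketch of the simplified implementation described above, assuming
the existing prototypes in the bdev module API: the base bdev
arguments are ignored and registration is forwarded.
```
#include "spdk/bdev_module.h"

int
spdk_vbdev_register(struct spdk_bdev *vbdev, struct spdk_bdev **base_bdevs,
		    int base_bdev_count)
{
	/* Base bdev tracking was removed long ago; just forward to the
	 * regular registration path. */
	(void)base_bdevs;
	(void)base_bdev_count;
	return spdk_bdev_register(vbdev);
}
```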
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3b40ed96480c0fa7184db42953a9f4e4c167fed1
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450076
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This function will be extended later for multiple SGL
support.
Change-Id: I1f6962ec03c72e335efaa311a12d3891312fcc53
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449968
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The LUN ID is not converted from integer to structure form, and the
raw integer is set in R2T PDUs.
Popular iSCSI initiators don't check this value and work correctly
anyway.
This patch uses the public helper function of the SCSI library
to fix the issue.
Additionally, the private helper function that converts the LUN ID
structure to an integer is replaced by the public helper function
of the SCSI library.
Change-Id: I9218c5ef7a8bfec43326c6584db7c6929fdd11a8
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449963
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The SPDK iSCSI target didn't convert the LUN ID from integer to
structure form when sending R2T PDUs. The next patch will fix that
issue. Introducing helper functions into the SCSI library and using
them will keep that fix clean, so this patch adds two helper functions
to convert the LUN ID between structure and integer forms.
The logic of the helper functions is derived directly from the current
implementation in SPDK.
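A self-contained sketch of the conversion logic being factored out,
assuming the simple single-level (peripheral device addressing) case
with LUN IDs below 256; the actual helpers in the SCSI library may
cover additional addressing methods.
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Peripheral device addressing (method 0): the LUN ID sits in the upper
 * 16 bits of the 8-byte LUN field, and the remaining bytes are zero. */
static uint64_t
lun_id_int_to_fmt(int lun_id)
{
	return ((uint64_t)lun_id & 0xffU) << 48;
}

static int
lun_id_fmt_to_int(uint64_t fmt_lun)
{
	return (int)((fmt_lun >> 48) & 0xffU);
}

int
main(void)
{
	uint64_t fmt = lun_id_int_to_fmt(5);

	printf("fmt_lun=0x%016" PRIx64 " -> lun_id=%d\n",
	       fmt, lun_id_fmt_to_int(fmt));
	return 0;
}
```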
Change-Id: I114b546cfcb44109d6cd131a1fa972f4d6bfea38
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449962
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We were only using one enum from this whole struct, so there is no need
to store it. Plus, the queries we use to update it are infrequent and
only occur during connect and disconnect, so I think we can save quite a
bit of space by removing it without compromising performance.
Change-Id: Icf29977a3c10cb289564fa2760a0059f07a0f8cb
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448072
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
There was a lot of duplicated code here between states. I'm trying to
minimize the duplicated code without making it confusing.
Change-Id: I13183431e554c8a9f501b3385bbd7b59e2c83161
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448066
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Catch an edge case where a multi-SGL request is longer than the
allowed transfer size.
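An illustrative version of the check (simplified types, not the actual
target code): sum the lengths of all SGL descriptors and reject the
request if the total exceeds the transport's maximum I/O size.
```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct sgl_descriptor {
	uint32_t length;
};

/* Return true if the combined SGL length fits within max_io_size. */
static bool
sgl_total_length_ok(const struct sgl_descriptor *sgls, int num_sgls,
		    uint32_t max_io_size)
{
	uint64_t total = 0;

	for (int i = 0; i < num_sgls; i++) {
		total += sgls[i].length;
	}
	return total <= max_io_size;
}

int
main(void)
{
	struct sgl_descriptor sgls[2] = {
		{ .length = 128 * 1024 }, { .length = 16 * 1024 }
	};

	printf("request %s\n",
	       sgl_total_length_ok(sgls, 2, 131072) ?
	       "accepted" : "rejected: exceeds max transfer size");
	return 0;
}
```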
Change-Id: I79779050fe951d16f1240e2c3d8cf5037e576ea2
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/440766
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This is important to avoid thrash when we don't have enough buffers to
satisfy a request.
Change-Id: Id35fd492078b8e628c2118317f674f07e95d4dba
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449109
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Looks like a leftover from the hotplug test, from when it was
done using an NVMe bdev instead of malloc.
Change-Id: Ia71d167b403d7f6d8ee5a621653f4062fce4ba6a
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450035
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
While useful in NVMf target, this is really annoying
in the consolidated spdk target, where NVMe-oF doesn't
even have to be used. OCF tests currently use iscsi_tgt
just because spdk_tgt requires the additional [Transport]
section in the cfg file. Let's remove that requirement.
Change-Id: I418b47d62dcc06b9513f9f0496dc1e39b9d5a554
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450056
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
For a trace file generated by the spdk trace_record tool, if one
lcore recorded no entries, there is no space for it to hold idle
entries to refer to.
Change-Id: I1cd89ec934407fc805bda66c8c87f76cb181679e
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449840
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Previously, the trace file was copied from disk or page cache to
memory, aiming to accelerate the disk read with one large
sequential read.
But this shows no value, since the trace data is retrieved
sequentially anyway, and the page cache can handle it well.
Change-Id: I1c328286686ed05e83f3b7a1bef10dc353c5b653
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449839
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Right now they can only use memory from the socket
that performed iSCSI static initialization, which
doesn't make sense. Those mempools can be used from
a different socket as well, so don't restrict them
to any specific one.
Change-Id: I0def1c554ed6227ab0f0a7be107c4c1c61f40c96
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450055
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Blobstore now supports 512B reads/writes to blobs, if the
backing device is formatted for 512B LBAs. The hello_blob
example app was never updated to account for this - so when
running against a backing device with 512B LBAs, it would
fail since it was only reading/writing 1 blob io_unit (512B)
but was comparing a full page (4KB) of data.
Clean up a typo too while we're here.
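A hedged sketch of the pattern the fix implies (variable and function
names are illustrative): size the buffers from the blobstore's
reported io_unit size instead of assuming a 4KB page, and issue
single-io_unit reads/writes.
```
#include <errno.h>
#include <string.h>

#include "spdk/blob.h"
#include "spdk/env.h"

/* Called once the blobstore and blob are open; callbacks not shown. */
static void
write_one_io_unit(struct spdk_blob_store *bs, struct spdk_blob *blob,
		  struct spdk_io_channel *channel,
		  spdk_blob_op_complete cb_fn, void *cb_arg)
{
	/* 512 bytes on a 512B-formatted device, 4KB otherwise. */
	uint64_t io_unit_size = spdk_bs_get_io_unit_size(bs);
	uint8_t *buf;

	/* Buffer is freed in the completion callback (not shown). */
	buf = spdk_malloc(io_unit_size, 0x1000, NULL,
			  SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (buf == NULL) {
		cb_fn(cb_arg, -ENOMEM);
		return;
	}
	memset(buf, 0x5a, io_unit_size);

	/* Offset and length are in io_units, so this writes exactly one. */
	spdk_blob_io_write(blob, channel, buf, 0, 1, cb_fn, cb_arg);
}
```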
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I6cfeeff1c160a24d4c10b68b9dd93717ed79f212
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450069
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Relaunch the queue poller in the put_io_channel callback of the OCF
bdev to delay thread shutdown. The new poller will finish all pending
OCF requests, as there might be some left even after all SPDK I/Os
have completed.
This solves the issue of OCF not being able to complete all its work
because the queue poller got unregistered in the put_io_channel
callback.
This patch also changes the unregister procedure: we call
ocf_cache_stop in the io_device_unregister callback instead of in the
bdev_unregister callback.
Change-Id: Ib7e41fc25e71029a73bb76a62e39e6bf4b8189ce
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/444276
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The vbdev_ocf_delete function now accepts a callback.
The delete_ocf_bdev RPC is also updated to adopt this change.
Change-Id: I1d357a5e37015268e28c07fd81dc35f48ec80ab8
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449777
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Piotr Pelpliński <piotr.pelplinski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Not sure how it ended up this way, but there are a bunch of
extra # symbols that make the header look really weird.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3ecf04c7a97b0cb22fdb2bf5dbc4b64ca554704b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449936
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The buffers are really specific to the request and not the wr or data
object. In the case of multiple-wr requests, the maximum number of
buffers per request is equal to the number of SGEs in the NVMe-oF
request times two.
Change-Id: Ic59498bfed461d180adb2fb9a481ac5b11fa9252
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449108
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
FIO is going to always present a contiguous buffer to us. But we can
fake out the nvme driver with a couple of global variables.
Change-Id: I038e70582043e1d7c1800ed065fe126aa091c290
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/439608
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Enable parsing an nvmf request that contains an inline
nvme_sgl_last_segment_descriptor element. This is the next step
towards NVMe-oF SGL support in the NVMe-oF target.
Change-Id: Ia2f1f7054e0de8a9e2bfe4dabe6af4085e3f12c4
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/428745
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This also means that the RocksDB threads don't need
to be SPDK threads any more.
Change-Id: Icadd6dd446958ebf470ad8ab8239f5390942eb87
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449465
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
In the next step, this won't be a channel at all.
Change-Id: Ia8fe4da5b0b283e8dfc5c6477b84cfdd346d89a0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449464
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Rely on the available threading abstractions to make this
a bit simpler.
Change-Id: If8f028092c057637ff167d2ec7faa3dce009af61
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449463
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The unit tests to this point used an algorithm that
just copied the data as-is with no compression.
Change it now to use the actual unit test compression
algorithm.
A few unit tests need to be modified to account
for the compressible buffers - primarily those that
write a single 512 byte LBA in a 16KB chunk. These
will now be compressed down to a single I/O unit,
so some of the asserts need to be changed to adjust
for this.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I11b2c92b1d6f710a301c8a3b0961f76c9f4886d4
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449567
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>