Automatically detect more whitespace errors.
All existing cases are fixed; these are whitespace-only changes (verify
with diff -w), except for one comment style fixup in include/spdk/nvme.h.
Change-Id: If750e54b9c8e3421ea6feda5f20184a31431631e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/402360
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This change wasn't correctly rebased and needs to be updated to compile
against the current blobstore.
This reverts commit c1174e6895.
Change-Id: I529608bee7323cb626d8c36dff15adc9ba24ad26
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/402352
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Unit tests are implemented in the following patches.
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: Ib18c9060f527bd22bfdbed74e96871a6e0551ead
Reviewed-on: https://review.gerrithub.io/396648
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
blobfs and lvol can now use this to automatically iterate
all existing blobs during spdk_bs_load. Changes to blobfs
and lvol will come in future patches.
This will also be used in some upcoming patches which need
to iterate through blobs during load to determine
snapshot/clone relationships.
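A minimal sketch of how a consumer like blobfs or lvol might hook into this,
assuming the iterate callback is supplied through iter_cb_fn/iter_cb_arg
fields in spdk_bs_opts (check spdk/blob.h for the exact names):

#include "spdk/blob.h"

/* Called once for every existing blob while spdk_bs_load() runs. */
static void
load_iter_cb(void *cb_arg, struct spdk_blob *blob, int bserrno)
{
    if (bserrno != 0) {
        /* Iteration is done (or failed); nothing more to inspect. */
        return;
    }
    /* Examine the blob here, e.g. record its ID or xattrs for later. */
    (void)spdk_blob_get_id(blob);
}

static void
load_with_iteration(struct spdk_bs_dev *bs_dev,
                    spdk_bs_op_with_handle_complete load_done_cb, void *cb_arg)
{
    struct spdk_bs_opts opts;

    spdk_bs_opts_init(&opts);
    /* Assumed fields added by this patch: every blob found during load is
     * passed to iter_cb_fn before the load completion callback fires. */
    opts.iter_cb_fn = load_iter_cb;
    opts.iter_cb_arg = NULL;
    spdk_bs_load(bs_dev, &opts, load_done_cb, cb_arg);
}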
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic7c5fac4535ceaa926217a105dda532517e3e251
Reviewed-on: https://review.gerrithub.io/400177
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Finish the sequence first, before calling _spdk_bs_free().
Otherwise synchronous bs_devs (like we use in the unit
tests) cause the sequence memory to get freed via
_spdk_bs_free() and then we try to finish the sequence.
This eliminates the need for g_scheduler_delay and
_bs_flush_scheduler() in the blob unit tests. But don't
remove them - they will be useful in upcoming unit tests
for queued persist operations.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I09aac3ae4d3a56ff8e04a5b822fcd6746f13afc3
Reviewed-on: https://review.gerrithub.io/401267
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
No functional change - this just separates out the
code that creates the persist ctx from the code that
actually performs the persist operation.
Part of series to enable queuing persist operations -
this will be useful for starting a previously queued
persist operation.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie1966ff2a477f3075c36f90560010d036658f803
Reviewed-on: https://review.gerrithub.io/401255
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iaef32731b05a53ac0707524d78086eedc89d6af6
Reviewed-on: https://review.gerrithub.io/401254
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I096fb24dd2fe2fc4dd97d80c957c328d960fb867
Reviewed-on: https://review.gerrithub.io/401073
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
spdk_blob_close() and spdk_blob_sync_md() currently do their
own CLEAN state check. To consolidate the state checking code,
have both functions rely on the check in _spdk_blob_persist()
instead.
This will reduce code but more importantly is needed for
some upcoming changes for queuing persist operations.
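A toy sketch (not SPDK's actual types) of the consolidation: the persist path
completes immediately for a CLEAN blob, so close and sync no longer need
their own check:

#include <stddef.h>

enum toy_blob_state { TOY_BLOB_CLEAN, TOY_BLOB_DIRTY };

struct toy_blob { enum toy_blob_state state; };

typedef void (*toy_persist_cpl)(void *cb_arg, int bserrno);

static void
toy_blob_persist(struct toy_blob *blob, toy_persist_cpl cb_fn, void *cb_arg)
{
    if (blob->state == TOY_BLOB_CLEAN) {
        cb_fn(cb_arg, 0);   /* nothing dirty - complete immediately */
        return;
    }
    /* ... serialize and write out the metadata, then mark clean ... */
}

static void
toy_blob_sync_md(struct toy_blob *blob, toy_persist_cpl cb_fn, void *cb_arg)
{
    /* No CLEAN check here anymore - persist handles it. */
    toy_blob_persist(blob, cb_fn, cb_arg);
}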
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I38118624b4fad6f18c4b7466d9ddfa0915c3fce0
Reviewed-on: https://review.gerrithub.io/401065
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
These are common functions that can be called from
any function that reads or modifies a blob's metadata to
perform necessary asserts.
This will also fix several places where blob metadata
functions were asserting the calling thread context,
but not the current state of the blob.
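A hedged sketch of such a shared helper; the name and the exact set of checks
are assumptions, and the header layout follows current SPDK (spdk/thread.h
provides spdk_get_thread()):

#include <assert.h>

#include "spdk/blob.h"
#include "spdk/thread.h"

/* Assumed shape of a shared verification helper: every function that reads
 * or modifies blob metadata calls this first, so the thread context and the
 * blob's state are both checked in one place. */
static void
example_verify_md_op(struct spdk_blob *blob, struct spdk_thread *md_thread)
{
    assert(blob != NULL);
    /* Metadata may only be touched from the thread that loaded the blobstore. */
    assert(spdk_get_thread() == md_thread);
    /* The real helper would also assert on blob state, e.g. that the blob
     * is not still loading. */
    (void)blob;
    (void)md_thread;
}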
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9e16c082a27c439311f8ff214335adadfa715497
Reviewed-on: https://review.gerrithub.io/401053
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
All metadata operations are now done on the metadata
thread, so we no longer have to worry about one thread
updating in-memory metadata structures while another
thread is transferring the in-memory structures to
on-disk structures.
This does not protect against multiple sync operations
outstanding at once - that will be coming in an
upcoming patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ibf33edf4d41d867c96a38df017737e9ceb87fa58
Reviewed-on: https://review.gerrithub.io/401056
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The md (metadata) thread is always the thread that
initialized/loaded the blobstore. Metadata operations may
only be performed from this thread. This patch adds some
more asserts in metadata functions that were previously
missed.
While here, also update some of the blobstore documentation
related to this.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5cafdb3ba402ceb6c3ccb6fdd9d36e7768f59f39
Reviewed-on: https://review.gerrithub.io/400885
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
These new names are much more clear and are aligned with other
functions such as spdk_blob_close.
Keep the old names around for now but deprecate them. We will
remove them in the next release.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Idc60fd0b19fa2a8b0247a1f5835774d342e721f9
Reviewed-on: https://review.gerrithub.io/400884
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
This is needed for an upcoming change which will
prevent metadata functions from being called on
threads other than the metadata thread. Without
this change, there was no way for this function
to return an error if it was called from the wrong
thread.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I67e591140194ff6ad250878168f6b166a1ff2282
Reviewed-on: https://review.gerrithub.io/400883
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
There was some thinking that we would need to allocate
I/O channels on a per-blob basis to handle dynamic
resizing during I/O. Making spdk_blob an opaque handle,
with the existing spdk_blob structure renamed to
spdk_blob_data was a first step towards making that
happen. But more recent work on blobstore has
simplified the resizing approach, so spdk_blob_data
is no longer needed. Revert it.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I22e07008faceb70649ee560176ebe5e014d5f1a3
Reviewed-on: https://review.gerrithub.io/400881
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
This reduces some code duplication and ensures all
successful load operations (whether or not they included
recovery after power failure) go through the same function.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia463ce6ebe976ab2420525d86962e8fe54ad04d7
Reviewed-on: https://review.gerrithub.io/400164
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This patch adds internal versions of the xattr functions to allow
operations on internal xattrs, which are not visible to
upper layers.
When there is at least one internal xattr set, the
SPDK_BLOB_INTERNAL_XATTR flag is also set in invalid_flags to prevent
previous SPDK versions from loading this blob.
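A toy illustration of the flag behavior described above (the structure, macro,
and function names here are made up, not the patch's internals):

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_BLOB_INTERNAL_XATTR (1u << 0)

struct example_blob_md {
    uint64_t invalid_flags; /* unknown bits here make older code refuse to load the blob */
};

static void
example_set_xattr(struct example_blob_md *md, const char *name,
                  const void *value, uint16_t value_len, bool internal)
{
    /* ... store (name, value) in the internal or external xattr list ... */
    (void)name; (void)value; (void)value_len;

    if (internal) {
        /* Once any internal xattr exists, mark the blob so that older SPDK
         * versions (which do not understand internal xattrs) refuse to load
         * it rather than silently dropping data. */
        md->invalid_flags |= EXAMPLE_BLOB_INTERNAL_XATTR;
    }
}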
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: Iec918ec858f069f7cd9f36d5e8f0495ffa4a42d8
Reviewed-on: https://review.gerrithub.io/395122
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: I8b570802b05b8e03802d3c2b68a1e7644ea548ac
Reviewed-on: https://review.gerrithub.io/396572
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: I277f7288427788e7a107b143331753fd5b23f16f
Reviewed-on: https://review.gerrithub.io/396571
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This change will allow reusing this structure for both internal
and external xattrs, as well as in functions that take optional xattrs
but lack other options (e.g. the snapshot and clone functions implemented in later patches).
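The extracted structure is essentially a name list plus a value-lookup
callback. A self-contained approximation is below; field names are given from
memory and should be checked against spdk/blob.h:

#include <stddef.h>
#include <string.h>

/* Approximate shape of the extracted xattr options structure. */
struct example_xattr_opts {
    size_t count;
    char **names;
    void *ctx;
    void (*get_value)(void *ctx, const char *name,
                      const void **value, size_t *value_len);
};

static void
get_value_cb(void *ctx, const char *name, const void **value, size_t *value_len)
{
    /* Look up "name" in caller-provided context; here we return a constant. */
    (void)ctx; (void)name;
    *value = "example";
    *value_len = strlen("example");
}

static void
fill_xattr_opts(struct example_xattr_opts *opts, char **names, size_t count, void *ctx)
{
    opts->count = count;
    opts->names = names;
    opts->ctx = ctx;
    opts->get_value = get_value_cb;
}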
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: Ia6619a75efa0a100168a6f8317be274823af04ab
Reviewed-on: https://review.gerrithub.io/396417
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: Icba20351c9ca76397393064b41013c527084853e
Reviewed-on: https://review.gerrithub.io/396385
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
For thin provisioned blobs we allocate DMA memory
required for copying a cluster from the backing device.
When the cluster size is too big, the DMA allocation may fail
silently (surfacing only as an I/O error).
Use PRIu32 to print out the cluster size, and while
here, fix two other places that were using %d to print
cluster size instead of PRIu32.
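The formatting fix follows the standard <inttypes.h> pattern. A standalone
illustration (the real code uses SPDK's logging macros rather than printf):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint32_t cluster_sz = 4u * 1024 * 1024;

    /* "%d" is wrong for uint32_t; PRIu32 expands to the correct
     * conversion specifier for the platform. */
    printf("Failed to allocate buffer for cluster size %" PRIu32 "\n", cluster_sz);
    return 0;
}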
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I098b1a58aee2f0d3f4ead7aa326ecdb63a5b53d8
Reviewed-on: https://review.gerrithub.io/397563
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: I3c7d2096f549a88b4a9884c0026d15d3bcd8dc67
Reviewed-on: https://review.gerrithub.io/396387
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: Ibc9609ad36188006e9454e5c799bccd8a92d7991
Reviewed-on: https://review.gerrithub.io/391422
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This will be needed for thin provisioning, since a write
I/O may result in needing to insert a cluster into the
blob and that write I/O may not have been performed
on the metadata thread.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I84b0cb6e7af87b1f9c6cab4e2c24fa26b12e2c06
Reviewed-on: https://review.gerrithub.io/396737
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This enables some code reuse for future patches.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I296a6c5c0915da4a77a1ab43e8f10a335b7d16d0
Reviewed-on: https://review.gerrithub.io/396736
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
This will be used in the upcoming thin provisioning
patches. A thread that wishes to insert a newly
allocated cluster into the blob will send a message
to the metadata thread to call this
function and, if it succeeds, sync the blob's
metadata.
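A hedged sketch of that calling pattern using spdk_thread_send_msg(); the
context structure and function names are illustrative, and the header layout
follows current SPDK:

#include "spdk/blob.h"
#include "spdk/thread.h"

/* Hypothetical context for asking the metadata thread to record a newly
 * allocated cluster. */
struct insert_cluster_msg {
    struct spdk_blob *blob;
    uint32_t cluster_num;   /* logical cluster index within the blob */
    uint64_t cluster;       /* physical cluster that was just allocated */
};

static void
insert_cluster_on_md_thread(void *arg)
{
    struct insert_cluster_msg *msg = arg;

    /* Runs on the metadata thread: insert the cluster into the blob's
     * in-memory map, then sync the blob's metadata.  On failure the
     * originating thread is told to release the cluster and retry. */
    (void)msg;
}

static void
request_cluster_insert(struct spdk_thread *md_thread, struct insert_cluster_msg *msg)
{
    /* Any thread may call this; the update itself happens on md_thread. */
    spdk_thread_send_msg(md_thread, insert_cluster_on_md_thread, msg);
}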
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I26fcca235ea0d7a187b9fe559851290b6db13649
Reviewed-on: https://review.gerrithub.io/396711
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
For now, use this to add some assert() calls to ensure
per-blob metadata operations are only called from the
thread that initialized/loaded the blobstore.
Upcoming patches will utilize this for metadata updates
required due to cluster allocations on thin provisioned
blobs. In that case, the cluster allocations may not
always be done on the metadata thread - but we want
the metadata thread to actually do the metadata sync
operation to guard against races from allocations on
multiple threads in parallel.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ifa0adfe8b7e61ba770449d1e076126ecb9d7a556
Reviewed-on: https://review.gerrithub.io/396712
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Currently, there is no way to persist a read-only blob to disk.
This patch modifies the behavior so that read-only flags are applied after syncing the blob.
This is analogous to the resize, set xattr, and remove xattr operations.
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: Iffed601c78cb83231bb20e7ef05b73847dc3c95a
Reviewed-on: https://review.gerrithub.io/394243
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
This allows a channel's request_set resources to be
used for queuing I/O requests. This is needed
for upcoming thin provisioning functionality,
where we must queue I/O requests that need to
allocate a cluster, if another cluster allocation
is in progress.
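A rough sketch of the per-channel queueing idea, with hypothetical structure
and field names (the real code reuses the channel's existing request_set
objects):

#include <sys/queue.h>

struct example_request_set {
    TAILQ_ENTRY(example_request_set) link;
    /* ... payload, offset, length, completion callback ... */
};

struct example_blob_channel {
    /* Free request_sets available for new I/O. */
    TAILQ_HEAD(, example_request_set) reqs;
    /* I/O waiting for a cluster allocation that is already in progress. */
    TAILQ_HEAD(, example_request_set) need_cluster_alloc;
};

static void
example_queue_io(struct example_blob_channel *ch, struct example_request_set *set)
{
    /* Another allocation is in flight for this cluster: park the request
     * on the channel until that allocation completes. */
    TAILQ_INSERT_TAIL(&ch->need_cluster_alloc, set, link);
}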
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie8d3e799afc0b56bc95ba5ecab11253d8bc8608f
Reviewed-on: https://review.gerrithub.io/395037
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Now all operations (both single buffer and iov-based
payloads) which span a cluster boundary get split into
separate blob calls for each cluster.
This will simplify upcoming patches that need to do
special operations if a cluster needs to be allocated.
This code can now be added to just the single cluster
operations and not have to worry about splits. It
will also simplify the code that will eventually queue
requests which require a cluster allocation if an
allocation is already in progress.
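A compact sketch of the split arithmetic: each sub-operation is capped at the
length remaining before the next cluster boundary (helper names are
hypothetical, lengths are in pages):

#include <stdint.h>

static uint64_t
example_len_to_cluster_boundary(uint64_t offset, uint64_t length,
                                uint64_t pages_per_cluster)
{
    uint64_t remaining = pages_per_cluster - (offset % pages_per_cluster);

    return length < remaining ? length : remaining;
}

static void
example_split_op(uint64_t offset, uint64_t length, uint64_t pages_per_cluster)
{
    while (length > 0) {
        uint64_t op_len = example_len_to_cluster_boundary(offset, length,
                                                          pages_per_cluster);

        /* submit_single_cluster_op(offset, op_len); -- one blob call per cluster */
        offset += op_len;
        length -= op_len;
    }
}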
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I22a850e6e23e3b3be31183b28d01d57c163b25b1
Reviewed-on: https://review.gerrithub.io/395035
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
A future patch will queue an operation in
_spdk_blob_request_submit_op_single if the cluster is not allocated
and another allocation is already in progress. If the queueing fails
because of no channel resources, we want to fail the callback and
need to do this before creating the batch.
This causes a bit of code duplication, but in the end should make the
code easier to read.
While here, pass spdk_blob instead of spdk_blob_data to
_spdk_blob_request_submit_op_single. This simplifies a future patch
which will need the spdk_blob when queueing an operation. It also
incidentally makes it consistent with _spdk_blob_request_submit_op_split.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ib0ec0e5138eac5bf208fcde6676cd2a77a1a663f
Reviewed-on: https://review.gerrithub.io/395196
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
For I/O that does not span a cluster boundary, just issue
a single batch command to the underlying block device.
For I/O that does span a cluster boundary, issue a batch
command against the blob (not the block device)
for each cluster accessed by the I/O.
This is all in preparation for upcoming patches which
enable thin provisioning and hence cluster allocation
in the I/O path. It will simplify implementation of
the cluster allocation path since now that code only
needs to be concerned with a single allocation at once.
Splitting for readv/writev will be handled in a
later patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia2341abbda599dace3357c4eec06ab6602ef81a8
Reviewed-on: https://review.gerrithub.io/395027
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
This breaks out the logic for building a batch for
non-iov operations to a separate function. Future
patches will do further modifications on this
new function - separating it out into two separate
functions, one for operations that span a cluster boundary
and one for those that do not.
No functional change here - this is just moving code
around to reduce size of upcoming patches.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Id1e67e6305d7ba2317700e1477e12c749ebf664c
Reviewed-on: https://review.gerrithub.io/395026
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
This clarifies that the read/write/etc. operation is
being performed on the block device. This
clarification will be important in some upcoming
patches which will batch similar operations on a
blob (as part of splitting a user request into smaller
operations if it spans a cluster boundary).
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic574804c15f769ee80c1b6f68ca3b77ec910f0f6
Reviewed-on: https://review.gerrithub.io/395017
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: I35779dc547e0c084086ec6d9bf44f86850cb7f05
Reviewed-on: https://review.gerrithub.io/393780
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: Iadb29a8ce8dcebfea68d4feeb5f3de1bb3124f16
Reviewed-on: https://review.gerrithub.io/392286
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: Ib3470fbac49e92308ed14e20ccde6655354f2580
Reviewed-on: https://review.gerrithub.io/389577
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This patch provides logic for returning errors instead of
asserting when the requested size is larger than the blobstore size.
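A minimal sketch of the check being described, with hypothetical names (the
real resize path operates on clusters tracked by the blobstore):

#include <errno.h>
#include <stdint.h>

/* Hypothetical: reject a resize that asks for more clusters than the
 * blobstore has, instead of tripping an assert. */
static int
example_resize(uint64_t requested_clusters, uint64_t total_clusters)
{
    if (requested_clusters > total_clusters) {
        return -ENOSPC; /* caller sees an error rather than an abort */
    }
    /* ... allocate clusters and update the blob's extent table ... */
    return 0;
}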
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: I16d12338e2b682c39bd33d507d57ea126501a0d7
Reviewed-on: https://review.gerrithub.io/392749
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Recovery code did not claim clusters taken by metadata.
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: If6726eddd22f4e1a3f9814b2348243155fb0fdb9
Reviewed-on: https://review.gerrithub.io/394173
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
This only adds the option and metadata flags.
Actual functionality will be added in an upcoming commit.
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: I66015f48f34d4c7c64fce1831ebaed134098407c
Reviewed-on: https://review.gerrithub.io/390196
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This patch fixes an issue where the blobstore doesn't serialize flags
when there is also at least one extent or xattr.
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: I85d5031dc45df510cebe1acf4694ab62bca2e720
Reviewed-on: https://review.gerrithub.io/393770
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
We need to make the number of channel operations configurable for the blobstore.
Reason: in the iSCSI tests, if there is only one CPU core there will be only
one channel, so the read stress tests would
fail because the blob channel needs more operations.
Select a value equal to the small buffer size (8192) of the
bdev layer, so we can solve the iSCSI read issue
correctly. Since bdev reads currently only
allow 8192 active bdev I/O requests, this solution should
work.
PS: The current solution is still not perfect. I think the more
precise fix would be to restrict sending I/O
to the blob when no channel operations are available. The
current code retries I/O in the bdev layer, but it still fails
the iSCSI high pressure test.
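A sketch of how a consumer would raise the channel operation count at init
time, assuming a max_channel_ops field in spdk_bs_opts (check spdk/blob.h for
the exact field name):

#include "spdk/blob.h"

static void
init_with_more_channel_ops(struct spdk_bs_dev *bs_dev,
                           spdk_bs_op_with_handle_complete cb_fn, void *cb_arg)
{
    struct spdk_bs_opts opts;

    spdk_bs_opts_init(&opts);
    /* Assumed field: number of simultaneous operations per channel.
     * Matching the bdev layer's small buffer count avoids starving reads
     * under the iSCSI stress test described above. */
    opts.max_channel_ops = 8192;
    spdk_bs_init(bs_dev, &opts, cb_fn, cb_arg);
}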
Change-Id: I211f7a89d144af2c96ad4cc1bd7ac8e94adc72e7
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/393115
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: Ibffb43e39b44e5f443d3dfbfa5b5d7dcac3243ef
Reviewed-on: https://review.gerrithub.io/391182
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Change-Id: Ic2c23d16360b26359c2a32920b89f2f3a21a2a9a
Reviewed-on: https://review.gerrithub.io/391191
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This can be used for two purposes:
1) more quickly iterate the blob list, avoiding
metadata pages that are valid but not the first
page in the blob's metadata list;
2) close races between delete and open operations -
now we can clear the bit in the blobid bit array
when the delete operation is in progress, ensuring
no one else can try to open the blob (see the sketch below).
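A hedged sketch of the bit-array bookkeeping described in 2), using SPDK's
bit array helpers; the surrounding structure and the point at which the mask
is consulted are simplified:

#include <stdbool.h>

#include "spdk/bit_array.h"

/* Simplified illustration of the open/delete race fix: delete clears the
 * blob's bit first, so a concurrent open sees the blob as gone. */
static bool
example_can_open(struct spdk_bit_array *used_blobids, uint32_t blob_idx)
{
    return spdk_bit_array_get(used_blobids, blob_idx);
}

static void
example_start_delete(struct spdk_bit_array *used_blobids, uint32_t blob_idx)
{
    /* Clear the bit before the on-disk metadata is released; any open
     * attempt issued from now on will fail instead of racing. */
    spdk_bit_array_clear(used_blobids, blob_idx);
}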
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3904648fd6fa656cb98c9e17ea763ed5a84ef537
Reviewed-on: https://review.gerrithub.io/391695
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
This prepares for a future change where we need to use the
recovery path when loading pre-v3 on-disk formats, since the
older disk formats do not save a blobid mask.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia94d56450202f81373c3de94237eca2dfd96526c
Reviewed-on: https://review.gerrithub.io/391694
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
This eliminates a bunch of code duplication. This also
fixes a couple of places where the ctx->bs was not being
freed in the load fail path.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie6b0a4a653b5c80edf14086801b75457852a4736
Reviewed-on: https://review.gerrithub.io/391693
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: Iba33c55f129c60fad2d58f5254dec5c54ed56805
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Reviewed-on: https://review.gerrithub.io/388217
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This ensures we do not end up with a racing close vs.
delete. If we decrement the ref up front, we could
start the close process (which may include persisting
metadata) and then also allow a delete operation to
start. It is safer to wait until the close operation
is done before decrementing the ref count, because then
it will eliminate this race condition (the delete op
would immediately fail).
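A toy sketch of the ordering change (types and names here are not SPDK's):
the reference count is dropped only in the close completion, after the
metadata persist has finished:

#include <stdint.h>

struct toy_blob {
    uint32_t open_ref;
};

static void
toy_close_done(struct toy_blob *blob, int bserrno)
{
    if (bserrno == 0) {
        /* Only now is it safe to let a delete proceed. */
        blob->open_ref--;
    }
}

static void
toy_blob_close(struct toy_blob *blob)
{
    /* Do NOT decrement open_ref here.  Persist metadata first; a delete
     * started in the meantime still sees open_ref > 0 and fails. */
    /* persist_metadata(blob, toy_close_done); */
    toy_close_done(blob, 0);    /* stand-in for the async completion */
}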
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iad7fd8320d2c9b56f3c4fce054bcb6271e19ad38
Reviewed-on: https://review.gerrithub.io/391493
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>