The vring notification mechanism is transport-specific. At present, the
vhost dataplane code in `lib/vhost/vhost.c` triggers guest notifications
with an `eventfd_write()` call, but that is an AF_UNIX-specific
notification mechanism. This patch replaces `eventfd_write()` with the
existing generic `rte_vhost_vring_call()` function that is part of
DPDK's librte_vhost public API.
`rte_vhost_vring_call()` takes a vring_idx as an argument to associate
the `struct spdk_vhost_virtqueue` instance with the relevant `struct
vhost_virtqueue` instance. We introduce a new `vring_idx` field in
`struct spdk_vhost_virtqueue` to enable this association. This field is
initialized in `start_device()`. In addition, a stub for
`rte_vhost_vring_call()` is added in the vhost unit test file.
SPDK's internal `rte_vhost` copy will not be updated to support the
virtio-vhost-user transport. However, an `rte_vhost_vring_call()`
function is introduced in SPDK's `rte_vhost` so that the API stays
consistent. There it is just a thin wrapper around `eventfd_write()`.
Change-Id: Ic93e25cd3f06e92f04766521bc850f1ee80b8ec8
Signed-off-by: Nikos Dragazis <ndragazis@arrikto.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/454373
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
We used to rely on lcore >= 0 to mark sessions that are
started (i.e. have their pollers running), and to prevent
data races that lcore field had to be set from the same
thread that runs the pollers, directly after
registering/unregistering them. The lcore was always
set to spdk_env_get_current_core(), but we won't be able
to use an equivalent get_current_poll_group() function
after we switch to poll groups. We will have a poll group
object only inside spdk_vhost_session_send_event() that's
called from the DPDK rte_vhost thread.
In order to change the lcore field (or a poll group one)
from spdk_vhost_session_send_event(), we'll need a separate
field to maintain the started/stopped status that's only
going to be modified from the session's thread.
Change-Id: Idb09cae3c4715eebb20282aad203987b26be707b
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452394
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The previous patches described as optimizations also
fixed some issues. They seem sufficient to cover all
the error cases, but the real source of the problem
lies in foreach_session() initiated by the device backend,
which can use sessions that were never seen by the
backend.
The backends are only notified when a session is
*started*, but foreach_session() iterates through
all the sessions - even those that were never started.
Vhost SCSI, for example, in the foreach_session() callbacks
used to expect svsession->svdev to be always set, but
that field is only set when the session gets started.
A perfect solution would be to introduce a new backend
callback to be called on each new connection. Vhost SCSI
could set e.g. svsession->svdev inside it. For now we go
with a much simpler solution that prevents sessions from
being used in foreach_session() unless they were
started at least once (...and hence e.g. got their ->svdev set).
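A minimal sketch of that guard (the list layout and the `started_once`
flag name are assumptions for illustration):
```
/* Sketch: foreach_session() skips sessions the backend has never seen. */
static void
foreach_started_session(struct spdk_vhost_dev *vdev,
			int (*fn)(struct spdk_vhost_dev *, struct spdk_vhost_session *, void *),
			void *arg)
{
	struct spdk_vhost_session *vsession;

	TAILQ_FOREACH(vsession, &vdev->vsessions, tailq) {
		if (!vsession->started_once) {
			/* never started -> e.g. svsession->svdev is still unset; skip */
			continue;
		}
		if (fn(vdev, vsession, arg) != 0) {
			break;
		}
	}
}
```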
Change-Id: Ida30a1f27f99977360d08a71a64fc92931b25b75
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449394
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
There is currently a small window after we stop a
session's pollers and before we mark the session
as stopped (by setting vsession->lcore to -1). If
spdk_vhost_dev_foreach_session() is called within
this window, its callback could assume the session
is still running and, for example in the vhost-scsi
target hotremove case, could destroy an io_channel
for the second time - as it had already done when the
session was stopped. That's a bug.
A similar case exists for session start.
We fix the above by setting vsession->lcore directly
after starting or stopping the session, hence
eliminating the possible window for data races.
This has a few implications (a sketch of the affected prototypes follows this list):
* spdk_vhost_session_send_event() called before
session start can't operate on vsession->lcore,
so it needs to be provided with the lcore as
an additional parameter now.
* the vsession->lcore can't be accessed until
spdk_vhost_session_start_done() is called, so
its existing usages were replaced with
spdk_env_get_current_core()
* active_session_num is decremented right after
spdk_vhost_session_stop_done() is called and
before spdk_vhost_session_send_event() returns,
so active_session_num == 1 checks meaning
"the last session is being stopped now" had to be
changed to check against == 0, i.e. "the last
session has just been stopped"
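A sketch of the affected prototypes; the exact signatures are
assumptions based on the description above:
```
/* Sketch: the caller now passes the target lcore explicitly... */
int spdk_vhost_session_send_event(int32_t lcore, struct spdk_vhost_session *vsession,
				  spdk_vhost_session_fn cb_fn, unsigned timeout_sec,
				  const char *errmsg);

/* ...and the session's thread reports completion itself, directly after its
 * pollers are registered/unregistered - the only place where
 * vsession->lcore gets set or cleared. */
void spdk_vhost_session_start_done(struct spdk_vhost_session *vsession, int response);
void spdk_vhost_session_stop_done(struct spdk_vhost_session *vsession, int response);
```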
Change-Id: I5781bb0ce247425130c9672e0df27d06b6234317
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448229
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Split spdk_vhost_session_event_done() into two separate
functions. This is just a preparation for the next patch.
Change-Id: I05e046e4b963387f058d2b822d7493c761eebbbb
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448228
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
In the next patch we will put much more responsibility
on spdk_vhost_session_event_done(), so here we make
sure it's always called under the global vhost mutex.
Specifically, spdk_vhost_session_event_done() will set
vsession->lcore, which any other thread might try to
concurrently access via spdk_vhost_dev_foreach_session().
Change-Id: I7a5fde4be4e8bdfdbbb24ac955af964f516bdb68
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448227
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
First of all, this struct was used when stopping
a session and wasn't directly related to any vhost
device despite its name.
Second, the struct contained just a single poller.
Instead of renaming it, we remove it. We can use
that poller pointer directly.
Change-Id: I66ad0826f7e809365c07662e59979b1942243c2e
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448225
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The context previously had to be carried around by
each vhost backend; now it is embedded inside the
generic vsession struct. This serves mostly
as a cleanup.
Change-Id: I7b6ac2c3cb5d60a035d56affbf42fe5d4697f0f6
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448223
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
rte_vhost has rejected a patch with this feature, so
we implement it using the external rte_vhost msg handling
hooks directly in SPDK.
Change-Id: Ib072fc19b921fe0fa01c7f4892e60430232e3a1c
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447025
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
rte_vhost requires all queues to be fully initialized
in order to start I/O processing. This behavior is not
compliant with the vhost-user specification and doesn't
work with QEMU 2.12+, which will only initialize 1 I/O
queue for the SeaBIOS boot. Theoretically, we should
start polling each virtqueue individually after
receiving its SET_VRING_KICK message, but rte_vhost is
not designed to poll individual queues. So we use
a workaround to detect when a vhost session could be
potentially at that SeaBIOS stage and we mark it to
start polling as soon as its first virtqueue gets
initialized. This doesn't hurt any non-QEMU vhost slaves
and allows QEMU 2.12+ to boot correctly. SET_FEATURES
could be sent at any time, but QEMU will send it at
least once on SeaBIOS initialization - whenever
powered-up or rebooted.
Vhost sessions are still mostly started/stopped from
within rte_vhost callbacks, but now there's an additional
concept of "forced" polling, in which SPDK starts
sessions manually while rte_vhost still thinks the
sessions are stopped. This can potentially lead to cases
where a session is "started" twice, or gets destroyed
while it's still being polled (by force). Those cases
also need to be handled within this patch.
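A rough sketch of the detection (the hook placement and the flag names
are assumptions; only the vhost-user message names come from the spec):
```
/* Sketch: mark the session on SET_FEATURES, then force-start polling on the
 * first SET_VRING_KICK even though rte_vhost still reports the device as
 * not fully initialized. */
static void
vhost_session_check_seabios_stage(struct spdk_vhost_session *vsession, uint32_t msg_type)
{
	switch (msg_type) {
	case VHOST_USER_SET_FEATURES:
		/* QEMU sends this at least once per SeaBIOS (re)boot. */
		vsession->maybe_seabios_boot = true;
		break;
	case VHOST_USER_SET_VRING_KICK:
		if (vsession->maybe_seabios_boot && !vsession->started) {
			/* only one I/O queue may ever show up - poll by force */
			vsession->forced_polling = true;
		}
		break;
	default:
		break;
	}
}
```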
Change-Id: I70636d63e27914906ddece59cec34f1dd37ec5cd
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/446086
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
DPDK 19.05+ gives us the ability to pre- or post-process
any single vhost-user message. The user can either perform
additional actions upon some generic events, or can
implement handling for brand new message types that
rte_vhost doesn't even know about.
In order to smoothly switch to the upstream rte_vhost
and drop our internal copy, we introduce an SPDK wrapper
function to register SPDK-specific message handlers. For
DPDK 19.05+ this will use the new rte_vhost API to
register those message handlers, and for older DPDKs
this function simply won't do anything - as we assume the
internal rte_vhost copy already contains all the necessary
changes and does not need any "external" hooks.
For now we use the message handlers to stop the vhost
device and wait for any pending DMA ops before letting
rte_vhost process the SET_MEM_TABLE message and unmap
the current shared memory.
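A sketch of such a wrapper under those assumptions (the SPDK-side name
is made up; `rte_vhost_extern_callback_register()` is the DPDK 19.05+
API):
```
#include <rte_version.h>
#include <rte_vhost.h>

#if RTE_VERSION >= RTE_VERSION_NUM(19, 5, 0, 0)
/* Sketch: with DPDK 19.05+ simply forward to the new extern-ops API. */
static int
vhost_register_extern_msg_ops(int vid, const struct rte_vhost_user_extern_ops *ops, void *ctx)
{
	return rte_vhost_extern_callback_register(vid, ops, ctx);
}
#else
/* Older DPDKs: the internal rte_vhost copy already carries the hooks,
 * so the wrapper is a no-op. */
static int
vhost_register_extern_msg_ops(int vid, const void *ops, void *ctx)
{
	(void)vid; (void)ops; (void)ctx;
	return 0;
}
#endif
```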
Change-Id: Ic0fefa9174254627cb3fc0ed30ab1e54be4dd654
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/446085
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
It's disabled by default, so no functionality is changed yet.
The intention is to use the upstream rte_vhost from DPDK,
which - starting from DPDK 19.05 - is finally capable of
running with storage device backends.
SPDK still requires a lot of changes in order to support
that upstream version, but the most fundamental change is
dropping vhost-nvme support. It'll remain usable only with
the internal rte_vhost copy and with the upstream rte_vhost
it simply won't be compiled. This allows us at least to
compile with that upstream rte_vhost, where we can pursue
adding the full integration.
Change-Id: Ic8bc5497c4d77bfef77c57f3d5a1f8681ffb6d1f
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/446082
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Adapted our custom rte_vhost APIs to the upstream DPDK
version which has independently added similar APIs.
This will potentially allow us to remove our internal
rte_vhost copy.
rte_vhost_set_vhost_vring_last_idx() was renamed to
rte_vhost_set_vring_base() and the last vring indices
have to be acquired with a newly introduced rte_vhost_get_vring_base()
rather than rte_vhost_get_vhost_vring().
This is only a refactor, no functionality is changed.
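A sketch of how the calls map (the function names are the upstream DPDK
ones; the surrounding helper is made up):
```
#include <rte_vhost.h>

/* Sketch: read the last vring indices with the upstream getter and write
 * them back with the upstream setter - these replace the internal-copy
 * rte_vhost_set_vhost_vring_last_idx() and the index fields previously
 * obtained through rte_vhost_get_vhost_vring(). */
static int
save_and_restore_vring_base(int vid, uint16_t vring_idx)
{
	uint16_t last_avail_idx, last_used_idx;
	int rc;

	rc = rte_vhost_get_vring_base(vid, vring_idx, &last_avail_idx, &last_used_idx);
	if (rc != 0) {
		return rc;
	}

	return rte_vhost_set_vring_base(vid, vring_idx, last_avail_idx, last_used_idx);
}
```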
Change-Id: I1ca2c1216635c117832c9d9c784d5661145c04cd
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/446081
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Vhost external events no longer do any asynchronous
calls, they only lock the vhost mutex and directly
call the provided function. The mutex encapsulation
isn't worth the additional complexity of splitting
each vdev-handling code into multiple functions, so
we expose low-level APIs that should eventually
replace external events entirely.
Instead of:
```
static int do_something_cb(struct spdk_vhost_dev *vdev, void *arg)
{
	struct my_data *ctx = arg;

	/* access the vdev and ctx */
	free(ctx);
	return 0;
}
struct my_data *ctx = calloc(...);
rc = spdk_vhost_call_external_event("my_vdev", do_something_cb, ctx);
if (rc != 0) { /* err handling */ }
```
We can now do just:
```
spdk_vhost_lock();
vdev = spdk_vhost_dev_find("my_vdev");
if (vdev == NULL) { /* err handling */ }
/* access the vdev and any context data */
spdk_vhost_unlock();
```
Change-Id: I06e1e149d6dd006720b021d3bef8d9b7bfaeceaa
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440377
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Although Vhost SCSI code is technically capable
of polling different sessions on different lcores,
the underlying SCSI API won't allow allocating
io_channels on more than one lcore.
That's why we will now let device backends assign
lcores by themselves.
The first Vhost SCSI session will now choose one
core from the available ones, and any subsequent
sessions will stick to the same one.
Change-Id: I616cd195a919960dff68508473cea236abf8d6a3
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441581
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
With all the patches in place, we can finally
enable having more than one simultaneous session
to a single vhost device.
This patch adds a unique id to the session structure,
similar to the one in a vhost device and also fills in
the implementation holes in foreach_session().
Vhost-NVMe can support only one session per device
and now has an additional check that prevents it from
starting more than one at a time.
Vhost-SCSI also has the same check now since it needs
additional work on the lcore assignment policy. The
check will be removed once the required work is done.
Change-Id: I13a32c7a0eae808e9bec63a7b8c15ec0bc2e36ed
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439324
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Particular backends will now be responsible for sending
events to vsession->lcore. This was previously done by
the generic vhost layer, but since some backends will
need different lcore assignment policies soon, we need
to give them more power now.
Change-Id: I72cbbccb9d5a5b2358acca6d4b6bb882131937af
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441580
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
It's sessions that are tied to lcores now.
This makes the vhost devices accessible by any
thread, as long as it holds the global vhost mutex.
The mechanism used for external device events was
refactored to serve for foreach_session() API.
Additionally, since we don't want to handle cases
where the entire vhost device gets removed while
an asynchronous foreach_session chain is pending,
a new per-vdev counter of pending async operations
was added. We'll fail the device removal request
if there are any pending operations. Eventually
we would like the device removal to be asynchronous,
but that's a todo for later.
The external events are still there, although
they only lock the mutex and call the provided
function now.
Change-Id: I20618f9420a9bc04270373469deaad8fb2049c7c
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439323
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Before we implement the support for multiple sessions
per device, we still need to make a few intermediate
changes that will require a counter of currently polled
sessions. So here it is.
Change-Id: I0a1d928eafa75efa1b5c2e6670a5ceb282c87fa4
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441734
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The backend struct will get some new dependencies
soon, so move its definition further down in the
header file.
Change-Id: I39c25027312777c7e570b12511dc9c5e9b9023d4
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439322
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Sessions are allocated internally by the core vhost
library whenever DPDK accepts a new connection, so
the only reasonable way to store additional per-session
data is to tell the core vhost library how much extra
memory it needs to allocate. Hence, we add a new field
to the vhost device backend struct.
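A sketch of the idea (field names and struct layout are assumptions,
not the exact SPDK definition):
```
/* Sketch: the backend tells the core vhost library how much extra
 * per-session memory to allocate whenever a new connection is accepted. */
struct spdk_vhost_dev_backend {
	uint64_t virtio_features;
	uint64_t disabled_features;
	size_t session_ctx_size;	/* extra bytes appended to each spdk_vhost_session */
	/* ...start/stop/remove callbacks... */
};

/* A backend like vhost-scsi could then register with e.g.
 *   .session_ctx_size = sizeof(struct spdk_vhost_scsi_session)
 *                       - sizeof(struct spdk_vhost_session),
 * and upcast the generic session pointer to its own type.
 */
```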
Change-Id: Id6c8285505b2e610e28e5d985aceb271ed232555
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/437778
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Instead of calculating those settings once and storing
them in the device struct, we'll now recalculate them
whenever a device session is created. This lets us
remove 2 fields from the device struct.
Change-Id: I2cb2bdbc570a41ae78c0666490fb1462a00d0b6f
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439081
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Keep all coalescing variables inside the session struct.
Interrupt coalescing is still configured with the device-
specific APIs, but those will now transparently propagate
the change to all active connections.
This is the last piece that tied struct spdk_vhost_dev
to the session's lcore. Now that device
settings aren't actively polled by any sessions, they
only need to be synchronized with the global vhost lock.
This will potentially let us get rid of the vhost external
events API, allowing the user to lock the mutex directly,
set coalescing params directly, and transparently let
the internal spdk_vhost_dev_foreach_session() do the
tricky synchronization.
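A sketch of the propagation path (field and function names are
assumptions; the point is that the device setter only touches
per-session state through foreach_session()):
```
/* Sketch: per-session callback copying the device-level coalescing settings. */
static int
vhost_session_set_coalescing(struct spdk_vhost_dev *vdev,
			     struct spdk_vhost_session *vsession, void *ctx)
{
	vsession->coalescing_delay_us = vdev->coalescing_delay_us;
	vsession->coalescing_iops_threshold = vdev->coalescing_iops_threshold;
	return 0;
}

/* Device-specific API: store the new values under the global vhost lock
 * and push them to all active connections. */
int
spdk_vhost_set_coalescing(struct spdk_vhost_dev *vdev, uint32_t delay_base_us,
			  uint32_t iops_threshold)
{
	vdev->coalescing_delay_us = delay_base_us;
	vdev->coalescing_iops_threshold = iops_threshold;

	spdk_vhost_dev_foreach_session(vdev, vhost_session_set_coalescing, NULL);
	return 0;
}
```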
Change-Id: Ifba96d241c736d33376861fa894c738e7d9b5b40
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/437777
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
When a device is changed, e.g. the underlying bdev is
hot-removed, all sessions need to be notified. For
instance, Vhost-SCSI would send an additional hotremove
eventq message. That's why we introduce a helper
function to iterate through all active sessions.
Eventually, we may want to poll different sessions
from different lcores, so there will be some kind of
internal cross-lcore message management required
- just like there is one for spdk_vhost_call_external_event_foreach().
For now, though, we can get away with the simplest
possible implementation.
We still want to keep this API internal for the time
being. The end-user (RPC) should only modify the
device, and the whole concept of sessions should be
completely encapsulated.
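A sketch of how a backend might use the helper (the callback signature
and the eventq helper are assumptions for illustration):
```
/* Sketch: notify every active session that a SCSI target's bdev is gone. */
static int
vhost_scsi_session_bdev_remove_cb(struct spdk_vhost_dev *vdev,
				  struct spdk_vhost_session *vsession, void *ctx)
{
	unsigned scsi_tgt_num = (unsigned)(uintptr_t)ctx;

	/* enqueue a hotremove message on this session's eventq
	 * (eventq_enqueue() stands in for the backend's own helper) */
	eventq_enqueue(vsession, scsi_tgt_num,
		       VIRTIO_SCSI_T_TRANSPORT_RESET, VIRTIO_SCSI_EVT_RESET_REMOVED);
	return 0;
}

/* ...and from the device backend, under the global vhost lock:
 *   spdk_vhost_dev_foreach_session(vdev, vhost_scsi_session_bdev_remove_cb,
 *                                  (void *)(uintptr_t)scsi_tgt_num);
 */
```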
Change-Id: I2e142632c07a23daeac15cabea4cffecf984e455
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/418736
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The session struct will now be allocated inside the
`new_connection` rte_vhost callback. There can still
be only one connection per device, but this
change brings us one step closer to supporting more.
Besides the obvious pointer changes, we'll now also
use the session pointer to check whether the connection
actually exists. We used to set the device vid to -1
when there was no connection, but we no longer have
to do that.
Change-Id: I4d062c0b5f093fef132a6a2c9cc29458cbaad414
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/437776
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Each connection is created with the `new_connection`
rte_vhost callback with a unique vid parameter. Storing
the vid inside the device struct was sufficient until
we wanted to have multiple connections per device.
Change-Id: Ic730d3377e1410499bdc163ce961863c530b880d
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/437775
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Grouped a few spdk_vhost_dev struct fields into a new
struct spdk_vhost_session. A session will represent the
connection between SPDK vhost device (vhost-user slave)
and QEMU (vhost-user master).
This essentially serves two purposes. The first is to
allow multiple simultaneous connections to a single
vhost device. Each connection (session) will have access
to the same storage, but will use separate virtqueues,
separate features and possibly different memory. For
Vhost-SCSI, this could be used together with the upcoming
SCSI reservations feature.
The other purpose is to untie devices from lcores and tie
sessions instead. This will potentially allow us to modify
the device struct from any thread, meaning we'll be able
to get rid of the external events API and simplify a lot
of the code that manages vhost - vhost RPC for instance.
Device backends themselves would be responsible for
propagating all device events to each session, but it could
be completely transparent to the upper layers.
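An illustrative sketch of what the new struct groups together (the
field set here is an assumption, not the exact SPDK definition):
```
/* Sketch: per-connection state pulled out of struct spdk_vhost_dev. */
struct spdk_vhost_session {
	struct spdk_vhost_dev *vdev;		/* the device this connection belongs to */
	int vid;				/* rte_vhost connection id */
	int32_t lcore;				/* -1 while the session isn't started */
	uint64_t negotiated_features;		/* per-connection feature set */
	struct rte_vhost_memory *mem;		/* guest memory map of this connection */
	uint16_t max_queues;
	struct spdk_vhost_virtqueue virtqueue[SPDK_VHOST_MAX_VQUEUES];
};
```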
Change-Id: I39984cc0a3ae2e76e0817d48fdaa5f43d3339607
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/437774
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
For some old Linux guest kernels, the new NVMe 1.3 shadow doorbell
buffer feature is not enabled. To handle this, make a dummy BAR region
inside the slave target: when the guest submits a new request, the
doorbell value is written to the shared memory between the guest and
the vhost target, so that the existing vhost target can support both
new Linux guest kernels (newer than 4.12) and old guest kernels.
Also, the shared BAR space can be used in the future to move ADMIN
queue processing into the SPDK vhost target; with this feature, the
QEMU driver will become very small and easy to upstream.
Change-Id: I9463e9f13421368f43bfe4076facddd119f4552e
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/419157
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
With this we don't need to do an extra memory allocation when stopping
a vhost device. This makes the code cleaner.
Change-Id: I27a1b446621ce4f452fee62acd634737b4ffe174
Signed-off-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-on: https://review.gerrithub.io/427336
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
DPDK will deprecate the old API soon.
Change-Id: I0522d47d9cc0b80fb0e2ceb9cc47c45ff51a5077
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/408722
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Moved it to the DPDK thread, so that we don't stress
SPDK I/O reactors on device start/stop. This is mandatory
if we want to maintain hundreds of simultaneous connections.
This patch also fixes various memory registration leaks
in cases where subsequent device initialization fails.
Change-Id: I435062108fe96d7e67e2a078a3547acb1f73ad11
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/406960
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Similar to the existing vhost-scsi/blk targets, this commit introduces
a new target: a vhost-nvme I/O slave target. QEMU will present an
emulated NVMe controller to the VM, and the SPDK I/O slave target will
process the I/Os sent from the guest VM.
Users can follow the example configuration file to evaluate this
feature; refer to etc/spdk/vhost.conf.in [VhostNvme].
Change-Id: Ia2a8a3f719573f3268177234812bd28ed0082d5c
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/384213
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
spdk_vhost_dev_has_feature() is an internal function, so we can move its
declaration to a header file. This removes the function call overhead.
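Presumably the function ends up visible to callers roughly like this (a
sketch; the negotiated_features field name is an assumption):
```
/* Sketch: with the body in the internal header, the compiler can inline
 * the feature check at every call site instead of emitting a call. */
static inline bool
spdk_vhost_dev_has_feature(struct spdk_vhost_dev *vdev, unsigned feature_id)
{
	return vdev->negotiated_features & (1ULL << feature_id);
}
```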
Change-Id: I1704e8279cd6720177047a1ae8818f68982998db
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/400241
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Patch 808a2b05 [1] fixed an overflow of the avail entries count,
also fixing indirect descriptors support. Without indirect
descriptors, we usually didn't manage to fill the entire queue
- there were always a couple of descriptors left. A couple,
but not enough to form a vhost-scsi descriptor. Indirect
descriptors, however, are one descriptor long. They have a higher
chance of filling the entire avail queue, and that's why they
used to fail more often. But we can safely re-enable them now.
[1] 808a2b05 ("vhost: fix overflow of avail entries count")
Change-Id: Idec8568cdfc1255fb578e5d18f5c476a4c034d2d
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/396805
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This feature is not particularly useful for storage.
It forces us to read additional vq memory on each
I/O completion and that's quite expensive.
Quoting VIRTIO 1.0:
```
VIRTIO_F_NOTIFY_ON_EMPTY (24)
If this feature has been negotiated by driver, the device MUST issue an
interrupt if the device runs out of available descriptors on a
virtqueue, even though interrupts are suppressed using the
VIRTQ_AVAIL_F_NO_INTERRUPT flag or the used_event field.
```
Later on:
```
Note: An example of a driver using this feature is the legacy networking
driver: it doesn’t need to know every time a packet is transmitted, but
it does need to free the transmitted packets a finite time after they
are transmitted. It can avoid using a timer if the device interrupts it
when all the packets are transmitted.
```
Change-Id: I7f53293bf811a4cd5ae8e42e18f35042ea6f4ba8
Suggested-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/398325
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Drop the SPDK limit on the maximum number of vhost initiators (64).
We're still limited by rte_vhost limits, but
they're set to 1024 at the moment.
Change-Id: Ia1ad25665d6e798bc22709cdd43b72d60f1f4cf0
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/389811
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Added a vdev->id field and refactored the
vhost external event code operating on the
g_spdk_vhost_devices array, so that we will
be able to switch from that array to a linked
list with unlimited capacity more easily.
This patch does not change any functionality
on its own.
We would gladly reuse vdev->vid here,
but it's only available on devices with active
connections.
Change-Id: Ife911a6ca1531aade374b99ef4502cf77f691fa2
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/389071
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This field was only required to check
whether we can safely upcast the vdev object.
We can just as well check vdev->backend
instead, so vdev->type is not needed here.
Change-Id: I525350957406d4299151e0557b9025ca7bea5371
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/396584
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Instead of:
* spdk_vhost_scsi_dev_remove(vdev)
* spdk_vhost_blk_dev_remove(vdev)
we now have
* spdk_vhost_dev_remove(vdev)
All the logic is already handled internally. This patch only
changes the API. Also, the previous vhost_dev_construct()/remove()
functions have been renamed to vhost_dev_register()/unregister(),
because that's what they really do.
Change-Id: I7dd0d77bc5b633bec075e0a71345ddbed62697b4
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/396574
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: <shuhei.matsumoto.xt@hitachi.com>
This patch adds support for live migration for vhost-scsi and vhost-blk
backends.
Change-Id: Ibfc8a713dbba14ba8cb38377a71e28fd340b1487
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/394203
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Fixes github issue #218.
This patch introduces the spdk_cpuset object to store and manipulate
a set of individual CPUs. The main objective of this object is
to replace the cpumask declared as uint64_t and to raise the limit
of supported CPUs (lcores) above 64.
spdk_cpuset is always allocated dynamically and accessed via an opaque
pointer, which makes it easier to extend in the future without
breaking API/ABI.
This patch also extends the parsing function to allow setting the cpumask
with a list of CPUs, e.g. "[0-4,10,12]" sets a mask of 0,1,2,3,4,10,12,
as well as with a hexadecimal string with or without the "0x" prefix.
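A short usage sketch of the new API (spdk_cpuset_alloc/parse/free as
described above; the surrounding function is made up):
```
#include "spdk/cpuset.h"

/* Sketch: allocate a cpuset and parse the accepted mask formats. */
static struct spdk_cpuset *
parse_cpumask_example(void)
{
	struct spdk_cpuset *set = spdk_cpuset_alloc();

	if (set == NULL) {
		return NULL;
	}
	/* list form: sets cpus 0,1,2,3,4,10 and 12 */
	if (spdk_cpuset_parse(set, "[0-4,10,12]") != 0) {
		spdk_cpuset_free(set);
		return NULL;
	}
	/* hex form (with or without the "0x" prefix) is accepted as well,
	 * e.g. spdk_cpuset_parse(set, "0x5") for cpus 0 and 2. */
	return set;
}
```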
Change-Id: I475c3ba7fab629021a22e03176e57e400dd24a49
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Reviewed-on: https://review.gerrithub.io/390794
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The newly added vhost-user messages GET_CONFIG/SET_CONFIG are
used to get/set a virtio device's configuration space; this
commit enables these new vhost messages.
Change-Id: I5c3e3f8fb6ed55e99299323c39658765b1724bb8
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/386545
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Currently we test up to 512KB I/O sizes, and this I/O
size could fragment into 129 IOVs when split on 4KB
boundaries if the buffer start is not 4KB aligned.
This is likely the cause of a recent slew of failures
in the test pool which coincide with updating to
fio 3.3.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I189157af578e75e025ca8e3420712739e604fca7
Reviewed-on: https://review.gerrithub.io/395872
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The Virtio spec says that any IRQ requests are only hints, so try to limit
the number of interrupts generated by vhost by defining a minimum interval
between IRQs. Coalescing is disabled by default and can be enabled
using the RPC command 'set_vhost_controller_coalescing'.
Change-Id: I9b96014d004ea0ea022b4498c6b47d30d867091a
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/378130
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Casting pointers without checking their length could
potentially lead to a crash.
Change-Id: I7c61e5818ecfbf32bb363858965503341353c51e
Signed-off-by: Pawel Niedzwiecki <pawelx.niedzwiecki@intel.com>
Reviewed-on: https://review.gerrithub.io/382420
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>