# Virtualized I/O with Vhost-user {#vhost_processing}

## Table of Contents {#vhost_processing_toc}

- @ref vhost_processing_intro
- @ref vhost_processing_qemu
- @ref vhost_processing_init
- @ref vhost_processing_io_path
- @ref vhost_spdk_optimizations

## Introduction {#vhost_processing_intro}

This document is intended to provide an overview of how Vhost works behind the
scenes. Code snippets used in this document might have been simplified for the
sake of readability and should not be used as an API or implementation
reference.

Reading from the
[Virtio specification](http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html):

> The purpose of virtio and [virtio] specification is that virtual environments
> and guests should have a straightforward, efficient, standard and extensible
> mechanism for virtual devices, rather than boutique per-environment or per-OS
> mechanisms.

Virtio devices use virtqueues to transport data efficiently. A virtqueue is a set
of three different single-producer, single-consumer ring structures designed to
store generic scatter-gather I/O. Virtio is most commonly used in QEMU VMs,
where QEMU itself exposes a virtual PCI device and the guest OS communicates
with it using a specific Virtio PCI driver. With only Virtio involved, it's
always the QEMU process that handles all I/O traffic.
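
To make that layout concrete, below is a minimal sketch of two of those rings as
defined by the Virtio 1.0 specification - the available ring (driver to device)
and the used ring (device to driver). The descriptor table itself is shown in
@ref vhost_processing_io_path. `QUEUE_SIZE` is a placeholder assumption (the real
queue size is negotiated per device), and the spec's little-endian `le16`/`le32`
fields are rendered as plain fixed-width integers for brevity.

```c
#include <stdint.h>

#define QUEUE_SIZE 128  /* example queue depth; the actual size is negotiated per device */

/* Driver -> device ring: indices of descriptor chain heads to be processed. */
struct virtq_avail {
    uint16_t flags;
    uint16_t idx;                 /* free-running counter of entries added by the driver */
    uint16_t ring[QUEUE_SIZE];
};

/* One completion record: which chain finished and how many bytes were written. */
struct virtq_used_elem {
    uint32_t id;                  /* head index of the completed descriptor chain */
    uint32_t len;                 /* total bytes written into the chain by the device */
};

/* Device -> driver ring: descriptor chains the device has completed. */
struct virtq_used {
    uint16_t flags;
    uint16_t idx;                 /* free-running counter of entries added by the device */
    struct virtq_used_elem ring[QUEUE_SIZE];
};
```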

Vhost is a protocol for devices accessible via inter-process communication.
It uses the same virtqueue layout as Virtio to allow Vhost devices to be mapped
directly to Virtio devices. This allows a Vhost device, exposed by an SPDK
application, to be accessed directly by a guest OS inside a QEMU process with
an existing Virtio (PCI) driver. Only the configuration, I/O submission
notification, and I/O completion interruption are piped through QEMU.
See also @ref vhost_spdk_optimizations.

The initial vhost implementation is part of the Linux kernel and uses an ioctl
interface to communicate with userspace applications. What makes it possible for
SPDK to expose a vhost device is the Vhost-user protocol.

The [Vhost-user specification](https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/interop/vhost-user.txt;hb=HEAD)
describes the protocol as follows:

> [Vhost-user protocol] is aiming to complement the ioctl interface used to
> control the vhost implementation in the Linux kernel. It implements the control
> plane needed to establish virtqueue sharing with a user space process on the
> same host. It uses communication over a Unix domain socket to share file
> descriptors in the ancillary data of the message.
>
> The protocol defines 2 sides of the communication, master and slave. Master is
> the application that shares its virtqueues, in our case QEMU. Slave is the
> consumer of the virtqueues.
>
> In the current implementation QEMU is the Master, and the Slave is intended to
> be a software Ethernet switch running in user space, such as Snabbswitch.
>
> Master and slave can be either a client (i.e. connecting) or server (listening)
> in the socket communication.

SPDK vhost is a Vhost-user slave server. It exposes Unix domain sockets and
allows external applications to connect.
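
To make the transport concrete, below is a minimal, self-contained sketch of the
slave side of such a connection: it listens on a Unix domain socket and receives
one message whose ancillary data may carry a file descriptor - the mechanism
Vhost-user uses to share memory regions and eventfds. The socket path and buffer
sizes are arbitrary placeholders, Vhost-user message framing is omitted, and
error handling is skipped for brevity.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    strncpy(addr.sun_path, "/tmp/vhost.example.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    int conn = accept(srv, NULL, NULL);   /* the master (e.g. QEMU) connects here */

    /* Receive one message; SCM_RIGHTS ancillary data carries a shared fd, if any. */
    char payload[256];
    char cmsg_buf[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cmsg_buf, .msg_controllen = sizeof(cmsg_buf),
    };

    if (recvmsg(conn, &msg, 0) > 0) {
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        if (cmsg && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS) {
            int shared_fd;

            memcpy(&shared_fd, CMSG_DATA(cmsg), sizeof(shared_fd));
            printf("received shared fd %d\n", shared_fd);   /* e.g. a memory region to mmap */
        }
    }
    return 0;
}
```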

## QEMU {#vhost_processing_qemu}

One of the major Vhost-user use cases is networking (DPDK) or storage (SPDK)
offload in QEMU. The following diagram presents how a QEMU-based VM
communicates with an SPDK Vhost-SCSI device.

![QEMU/SPDK vhost data flow](img/qemu_vhost_data_flow.svg)

## Device initialization {#vhost_processing_init}

All initialization and management information is exchanged using Vhost-user
messages. The connection always starts with the feature negotiation. Both
the Master and the Slave expose a list of their implemented features and
upon negotiation they choose a common set of those. Most of these features are
implementation-related, but some also concern e.g. multiqueue support or live migration.
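
In code terms, the negotiation boils down to a bitwise intersection of the two
feature masks. A minimal sketch is shown below; `EXAMPLE_F_MULTIQUEUE` is a
made-up bit standing in for a real feature flag.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical feature bit, used only for this illustration. */
#define EXAMPLE_F_MULTIQUEUE (1ULL << 12)

/* The negotiated set is simply the intersection of what both sides support. */
static uint64_t negotiate_features(uint64_t master_features, uint64_t slave_features)
{
    return master_features & slave_features;
}

static bool multiqueue_negotiated(uint64_t negotiated_features)
{
    return (negotiated_features & EXAMPLE_F_MULTIQUEUE) != 0;
}
```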

After the negotiation, the Vhost-user driver shares its memory, so that the vhost
device (SPDK) can access it directly. The memory can be fragmented into multiple
physically discontiguous regions, and the Vhost-user specification puts a limit on
their number - currently 8. The driver sends a single message for each region with
the following data (a simplified region descriptor is sketched just after this list):

- file descriptor - for mmap
- user address - for memory translations in Vhost-user messages (e.g.
translating vring addresses)
- guest address - for buffer address translations in vrings (for QEMU this
is a physical address inside the guest)
- user offset - positive offset for the mmap
- size
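
A minimal sketch of how a slave could record such a region is shown below. The
struct and function names are illustrative assumptions rather than SPDK's (or
DPDK's) internal types, but the mmap of the received file descriptor and the
stored addresses follow the message fields listed above.

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Illustrative per-region bookkeeping built from one memory-region message. */
struct example_mem_region {
    uint64_t guest_phys_addr;   /* guest address from the message */
    uint64_t user_addr;         /* master (QEMU) virtual address from the message */
    uint64_t size;              /* region size from the message */
    uint64_t mmap_offset;       /* user offset from the message */
    void    *mmap_base;         /* where the slave mapped the region locally */
};

/* Map the shared file descriptor so the slave can access guest memory directly. */
static int example_map_region(struct example_mem_region *r, int fd)
{
    void *base = mmap(NULL, r->size + r->mmap_offset, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);

    if (base == MAP_FAILED) {
        return -1;
    }
    r->mmap_base = (uint8_t *)base + r->mmap_offset;
    return 0;
}
```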

The Master will send new memory regions after each memory change - usually
hotplug/hotremove. The previous mappings will be removed.

Drivers may also request a device config, consisting of e.g. disk geometry.
Vhost-SCSI drivers, however, don't need to implement this functionality
as they use common SCSI I/O to query the underlying disk(s).

Afterwards, the driver requests the maximum number of supported queues and
starts sending virtqueue data, which consists of the following (a per-queue
state sketch follows the list):

- unique virtqueue id
- index of the last processed vring descriptor
- vring addresses (from user address space)
- call descriptor (for interrupting the driver after I/O completions)
- kick descriptor (to listen for I/O requests - unused by SPDK)
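
A minimal sketch of the per-queue state a slave could assemble from those
messages is shown below; the struct name and fields are illustrative assumptions,
not SPDK's actual internal layout.

```c
#include <stdint.h>

/* Illustrative per-virtqueue state built from the virtqueue setup messages. */
struct example_virtqueue_state {
    uint16_t id;              /* unique virtqueue id */
    uint16_t last_used_idx;   /* index of the last processed vring descriptor */

    /* Vring addresses, already translated from the master's user address
     * space into pointers the slave can dereference. */
    void *desc_table;
    void *avail_ring;
    void *used_ring;

    int callfd;               /* eventfd used to interrupt the guest after I/O completion */
    int kickfd;               /* eventfd for submission notifications (unused by a polling slave) */
};
```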

If the multiqueue feature has been negotiated, the driver has to send a specific
*ENABLE* message for each extra queue it wants to be polled. Other queues are
polled as soon as they're initialized.

## I/O path {#vhost_processing_io_path}

The Master sends I/O by allocating proper buffers in shared memory, filling
the request data, and putting guest addresses of those buffers into virtqueues.

A Virtio-Block request looks as follows.

```c
struct virtio_blk_req {
    uint32_t type;          // READ, WRITE, FLUSH (read-only)
    uint64_t offset;        // offset in the disk (read-only)
    struct iovec buffers[]; // scatter-gather list (read/write)
    uint8_t status;         // I/O completion status (write-only)
};
```

And a Virtio-SCSI request as follows.

```c
struct virtio_scsi_req_cmd {
    struct virtio_scsi_cmd_req *req;   // request data (read-only)
    struct iovec read_only_buffers[];  // scatter-gather list for WRITE I/Os
    struct virtio_scsi_cmd_resp *resp; // response data (write-only)
    struct iovec write_only_buffers[]; // scatter-gather list for READ I/Os
};
```

A virtqueue generally consists of an array of descriptors, and each I/O needs
to be converted into a chain of such descriptors. A single descriptor can be
either readable or writable, so each I/O request consists of at least two
descriptors (request + response).

```c
struct virtq_desc {
    /* Address (guest-physical). */
    le64 addr;
    /* Length. */
    le32 len;

/* This marks a buffer as continuing via the next field. */
#define VIRTQ_DESC_F_NEXT   1
/* This marks a buffer as device write-only (otherwise device read-only). */
#define VIRTQ_DESC_F_WRITE  2
    /* The flags as indicated above. */
    le16 flags;
    /* Next field if flags & NEXT */
    le16 next;
};
```
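
To tie the two structures together, below is a schematic example of how a single
Virtio-Block READ could be laid out as a three-descriptor chain using the layout
above; the guest-physical addresses and buffer sizes are made-up example values.

```c
/* Hypothetical chain for one READ: request header (device read-only),
 * data buffer (device write-only), completion status byte (device write-only). */
struct virtq_desc read_chain[3] = {
    /* descriptor 0: the virtio_blk request header */
    { .addr = 0x10000, .len = 16,   .flags = VIRTQ_DESC_F_NEXT,                      .next = 1 },
    /* descriptor 1: 4 KiB data buffer the device will fill */
    { .addr = 0x20000, .len = 4096, .flags = VIRTQ_DESC_F_NEXT | VIRTQ_DESC_F_WRITE, .next = 2 },
    /* descriptor 2: one-byte status written by the device on completion */
    { .addr = 0x30000, .len = 1,    .flags = VIRTQ_DESC_F_WRITE,                     .next = 0 },
};
```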

Legacy Virtio implementations used the name vring alongside virtqueue, and the
name vring is still used in virtio data structures inside the code. Instead of
`struct virtq_desc`, the `struct vring_desc` is much more likely to be found.

After polling this descriptor chain, the device needs to translate and transform
it back into the original request struct. It needs to know the request layout
up-front, so each device backend (Vhost-Block/SCSI) has its own implementation
for polling virtqueues. For each descriptor, the device performs a lookup in
the Vhost-user memory region table and goes through a gpa_to_vva translation
(guest physical address to vhost virtual address). SPDK requires the request
and response data to be contained within a single memory region. I/O buffers
do not have such limitations and SPDK may automatically perform additional
iovec splitting and gpa_to_vva translations if required. After forming the request
structs, SPDK forwards such I/O to the underlying drive and polls for the
completion. Once I/O completes, SPDK vhost fills the response buffer with
proper data and interrupts the guest by doing an eventfd_write on the call
descriptor for the proper virtqueue. There are multiple interrupt coalescing
features involved, but they are not discussed in this document.
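
For illustration, a simplified version of that translation and of the completion
interrupt is sketched below; it reuses the hypothetical `example_mem_region` type
from @ref vhost_processing_init and is not SPDK's actual API.

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/eventfd.h>

/* Translate a guest physical address to a vhost (slave) virtual address by
 * scanning the region table built during device initialization. */
static void *example_gpa_to_vva(struct example_mem_region *regions, int nregions,
                                uint64_t gpa, uint64_t len)
{
    for (int i = 0; i < nregions; i++) {
        struct example_mem_region *r = &regions[i];

        /* The whole [gpa, gpa + len) range must fall within a single region. */
        if (gpa >= r->guest_phys_addr && gpa + len <= r->guest_phys_addr + r->size) {
            return (uint8_t *)r->mmap_base + (gpa - r->guest_phys_addr);
        }
    }
    return NULL;   /* not covered by any shared memory region */
}

/* After filling the response buffer, interrupt the guest through the call eventfd. */
static void example_complete_io(int callfd)
{
    eventfd_write(callfd, 1);
}
```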

### SPDK optimizations {#vhost_spdk_optimizations}

Due to its poll-mode nature, SPDK vhost removes the requirement for I/O submission
notifications, drastically increasing the vhost server throughput and decreasing
the guest overhead of submitting an I/O. A couple of different solutions exist
to mitigate the I/O completion interrupt overhead (irqfd, vDPA), but those won't
be discussed in this document. For the highest performance, a poll-mode @ref virtio
can be used, as it suppresses all I/O completion interrupts, allowing the I/O
path to fully bypass the QEMU/KVM overhead.
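
As a rough sketch of the mechanism, the split-virtqueue notification-suppression
flags defined by the Virtio specification can be applied to the ring structures
sketched in @ref vhost_processing_intro as follows; the helper functions are
illustrative assumptions, not SPDK or guest driver APIs.

```c
#include <stdint.h>

/* Flag values from the Virtio specification (split virtqueues). */
#define VIRTQ_USED_F_NO_NOTIFY      1   /* device -> driver: submission notifications not needed */
#define VIRTQ_AVAIL_F_NO_INTERRUPT  1   /* driver -> device: completion interrupts not needed */

/* A polling device (e.g. SPDK vhost) sets this in the used ring it owns. */
static void example_device_suppress_kicks(uint16_t *used_ring_flags)
{
    *used_ring_flags |= VIRTQ_USED_F_NO_NOTIFY;
}

/* A poll-mode driver inside the guest sets this in the available ring it owns. */
static void example_driver_suppress_interrupts(uint16_t *avail_ring_flags)
{
    *avail_ring_flags |= VIRTQ_AVAIL_F_NO_INTERRUPT;
}
```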
|