/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2010-2016 Intel Corporation
 */
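
/*
 * This file implements the vhost-user socket layer: Unix socket creation
 * for server mode, connection and reconnection handling for client mode,
 * and passing of file descriptors over the socket via SCM_RIGHTS
 * ancillary data.
 */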

#include <stdint.h>
#include <stdio.h>
#include <limits.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/queue.h>
#include <errno.h>
#include <fcntl.h>
#include <pthread.h>

#include <rte_log.h>

#include "fd_man.h"
#include "vhost.h"
#include "vhost_user.h"

TAILQ_HEAD(vhost_user_connection_list, vhost_user_connection);

/*
 * Every time rte_vhost_driver_register() is invoked, an associated
 * vhost_user_socket struct will be created.
 */
struct vhost_user_socket {
	struct vhost_user_connection_list conn_list;
	pthread_mutex_t conn_mutex;
	char *path;
	int socket_fd;
	struct sockaddr_un un;
	bool is_server;
	bool reconnect;
	bool dequeue_zero_copy;
	bool iommu_support;
	bool use_builtin_virtio_net;
	bool extbuf;
	bool linearbuf;

	/*
	 * The "supported_features" indicates the feature bits the
	 * vhost driver supports. The "features" indicates the feature
	 * bits after the rte_vhost_driver_features_disable/enable().
	 * It is also the final feature bits used for vhost-user
	 * features negotiation.
	 */
	uint64_t supported_features;
	uint64_t features;

	uint64_t protocol_features;

	/*
	 * Device id to identify a specific backend device.
	 * It's set to -1 for the default software implementation.
	 * If valid, one socket can have 1 connection only.
	 */
	int vdpa_dev_id;

	struct vhost_device_ops const *notify_ops;
};
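
/*
 * Illustrative application-side call sequence that creates and starts one
 * vhost_user_socket (a sketch only, not invoked from this file; the socket
 * path, the features value and the ops structure are placeholders):
 *
 *	rte_vhost_driver_register("/tmp/vhost-user.sock", 0);
 *	rte_vhost_driver_set_features("/tmp/vhost-user.sock", features);
 *	rte_vhost_driver_callback_register("/tmp/vhost-user.sock", &ops);
 *	rte_vhost_driver_start("/tmp/vhost-user.sock");
 */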

struct vhost_user_connection {
	struct vhost_user_socket *vsocket;
	int connfd;
	int vid;

	TAILQ_ENTRY(vhost_user_connection) next;
};

#define MAX_VHOST_SOCKET 1024
struct vhost_user {
	struct vhost_user_socket *vsockets[MAX_VHOST_SOCKET];
	struct fdset fdset;
	int vsocket_cnt;
	pthread_mutex_t mutex;
};

#define MAX_VIRTIO_BACKLOG 128

static void vhost_user_server_new_connection(int fd, void *data, int *remove);
static void vhost_user_read_cb(int fd, void *dat, int *remove);
static int create_unix_socket(struct vhost_user_socket *vsocket);
static int vhost_user_start_client(struct vhost_user_socket *vsocket);

static struct vhost_user vhost_user = {
	.fdset = {
		.fd = { [0 ... MAX_FDS - 1] = {-1, NULL, NULL, NULL, 0} },
		.fd_mutex = PTHREAD_MUTEX_INITIALIZER,
		.fd_pooling_mutex = PTHREAD_MUTEX_INITIALIZER,
		.num = 0
	},
	.vsocket_cnt = 0,
	.mutex = PTHREAD_MUTEX_INITIALIZER,
};
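
/*
 * The fdset event-dispatch loop runs in a separate thread and invokes the
 * callbacks registered via fdset_add() (vhost_user_server_new_connection()
 * and vhost_user_read_cb() below).  The fdset is protected by a mutex plus
 * a per-entry busy flag, so an entry's data context is not freed while its
 * callback is still running; a callback requests removal of its own fd
 * through the *remove out-parameter.
 */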

/*
 * Return the number of bytes read on success, or a negative value on
 * failure.  *fd_num is updated with the number of fds received.
 */
int
read_fd_message(int sockfd, char *buf, int buflen, int *fds, int max_fds,
		int *fd_num)
{
	struct iovec iov;
	struct msghdr msgh;
	char control[CMSG_SPACE(max_fds * sizeof(int))];
	struct cmsghdr *cmsg;
	int got_fds = 0;
	int ret;

	*fd_num = 0;

	memset(&msgh, 0, sizeof(msgh));
	iov.iov_base = buf;
	iov.iov_len  = buflen;

	msgh.msg_iov = &iov;
	msgh.msg_iovlen = 1;
	msgh.msg_control = control;
	msgh.msg_controllen = sizeof(control);

	ret = recvmsg(sockfd, &msgh, 0);
	if (ret <= 0) {
		VHOST_LOG_CONFIG(ERR, "recvmsg failed\n");
		return ret;
	}

	if (msgh.msg_flags & (MSG_TRUNC | MSG_CTRUNC)) {
		VHOST_LOG_CONFIG(ERR, "truncated msg\n");
		return -1;
	}

	for (cmsg = CMSG_FIRSTHDR(&msgh); cmsg != NULL;
		cmsg = CMSG_NXTHDR(&msgh, cmsg)) {
		if ((cmsg->cmsg_level == SOL_SOCKET) &&
			(cmsg->cmsg_type == SCM_RIGHTS)) {
			got_fds = (cmsg->cmsg_len - CMSG_LEN(0)) / sizeof(int);
			*fd_num = got_fds;
			memcpy(fds, CMSG_DATA(cmsg), got_fds * sizeof(int));
			break;
		}
	}

	/* Clear out unused file descriptors */
	while (got_fds < max_fds)
		fds[got_fds++] = -1;

	return ret;
}
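
/*
 * Usage sketch (hypothetical caller; buffer and fd-array sizes are purely
 * illustrative -- the real callers live in vhost_user.c):
 *
 *	char buf[256];
 *	int fds[8], nr_fds;
 *	int n = read_fd_message(connfd, buf, sizeof(buf), fds, 8, &nr_fds);
 *
 * On success n is the number of message bytes read, fds[0..nr_fds-1] hold
 * the descriptors received via SCM_RIGHTS, and the unused slots are -1.
 */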

int
send_fd_message(int sockfd, char *buf, int buflen, int *fds, int fd_num)
{
	struct iovec iov;
	struct msghdr msgh;
	size_t fdsize = fd_num * sizeof(int);
	char control[CMSG_SPACE(fdsize)];
	struct cmsghdr *cmsg;
	int ret;

	memset(&msgh, 0, sizeof(msgh));
	iov.iov_base = buf;
	iov.iov_len = buflen;

	msgh.msg_iov = &iov;
	msgh.msg_iovlen = 1;

	if (fds && fd_num > 0) {
		msgh.msg_control = control;
		msgh.msg_controllen = sizeof(control);
		cmsg = CMSG_FIRSTHDR(&msgh);
		if (cmsg == NULL) {
			VHOST_LOG_CONFIG(ERR, "cmsg == NULL\n");
			errno = EINVAL;
			return -1;
		}
		cmsg->cmsg_len = CMSG_LEN(fdsize);
		cmsg->cmsg_level = SOL_SOCKET;
		cmsg->cmsg_type = SCM_RIGHTS;
		memcpy(CMSG_DATA(cmsg), fds, fdsize);
	} else {
		msgh.msg_control = NULL;
		msgh.msg_controllen = 0;
	}

	do {
		ret = sendmsg(sockfd, &msgh, MSG_NOSIGNAL);
	} while (ret < 0 && errno == EINTR);

	if (ret < 0) {
		VHOST_LOG_CONFIG(ERR, "sendmsg error\n");
		return ret;
	}

	return ret;
}
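
/*
 * Create a new vhost device for an accepted (server mode) or connected
 * (client mode) socket fd, apply the per-socket options, and register the
 * fd with the event-dispatch fdset so vhost_user_read_cb() handles it.
 */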
static void
vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
{
	int vid;
	size_t size;
	struct vhost_user_connection *conn;
	int ret;

	if (vsocket == NULL)
		return;

	conn = malloc(sizeof(*conn));
	if (conn == NULL) {
		close(fd);
		return;
	}

	vid = vhost_new_device();
	if (vid == -1) {
		goto err;
	}

	size = strnlen(vsocket->path, PATH_MAX);
	vhost_set_ifname(vid, vsocket->path, size);

	vhost_set_builtin_virtio_net(vid, vsocket->use_builtin_virtio_net);

	vhost_attach_vdpa_device(vid, vsocket->vdpa_dev_id);

	if (vsocket->dequeue_zero_copy)
		vhost_enable_dequeue_zero_copy(vid);

	if (vsocket->extbuf)
		vhost_enable_extbuf(vid);

	if (vsocket->linearbuf)
		vhost_enable_linearbuf(vid);

	VHOST_LOG_CONFIG(INFO, "new device, handle is %d\n", vid);

	if (vsocket->notify_ops->new_connection) {
		ret = vsocket->notify_ops->new_connection(vid);
		if (ret < 0) {
			VHOST_LOG_CONFIG(ERR,
				"failed to add vhost user connection with fd %d\n",
				fd);
			goto err_cleanup;
		}
	}

	conn->connfd = fd;
	conn->vsocket = vsocket;
	conn->vid = vid;
	ret = fdset_add(&vhost_user.fdset, fd, vhost_user_read_cb,
			NULL, conn);
	if (ret < 0) {
		VHOST_LOG_CONFIG(ERR,
			"failed to add fd %d into vhost server fdset\n",
			fd);

		if (vsocket->notify_ops->destroy_connection)
			vsocket->notify_ops->destroy_connection(conn->vid);

		goto err_cleanup;
	}

	pthread_mutex_lock(&vsocket->conn_mutex);
	TAILQ_INSERT_TAIL(&vsocket->conn_list, conn, next);
	pthread_mutex_unlock(&vsocket->conn_mutex);

	fdset_pipe_notify(&vhost_user.fdset);
	return;

err_cleanup:
	vhost_destroy_device(vid);
err:
	free(conn);
	close(fd);
}

/* Callback invoked when there is a new vhost-user connection from a client. */
static void
vhost_user_server_new_connection(int fd, void *dat, int *remove __rte_unused)
{
	struct vhost_user_socket *vsocket = dat;

	fd = accept(fd, NULL, NULL);
	if (fd < 0)
		return;

	VHOST_LOG_CONFIG(INFO, "new vhost user connection is %d\n", fd);
	vhost_user_add_connection(fd, vsocket);
}
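
/*
 * Callback invoked by the fdset dispatch loop when a connected socket
 * becomes readable: hand the message to vhost_user_msg_handler().  On
 * failure, tear the connection and device down and, if reconnect is
 * enabled for this socket, re-arm the client connection.
 */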
static void
vhost_user_read_cb(int connfd, void *dat, int *remove)
{
	struct vhost_user_connection *conn = dat;
	struct vhost_user_socket *vsocket = conn->vsocket;
	int ret;

	ret = vhost_user_msg_handler(conn->vid, connfd);
	if (ret < 0) {
		struct virtio_net *dev = get_device(conn->vid);

		close(connfd);
		*remove = 1;

		if (dev)
			vhost_destroy_device_notify(dev);

		if (vsocket->notify_ops->destroy_connection)
			vsocket->notify_ops->destroy_connection(conn->vid);

		vhost_destroy_device(conn->vid);

		pthread_mutex_lock(&vsocket->conn_mutex);
		TAILQ_REMOVE(&vsocket->conn_list, conn, next);
		pthread_mutex_unlock(&vsocket->conn_mutex);

		free(conn);

		if (vsocket->reconnect) {
			create_unix_socket(vsocket);
			vhost_user_start_client(vsocket);
		}
	}
}

static int
create_unix_socket(struct vhost_user_socket *vsocket)
{
	int fd;
	struct sockaddr_un *un = &vsocket->un;

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;
	VHOST_LOG_CONFIG(INFO, "vhost-user %s: socket created, fd: %d\n",
		vsocket->is_server ? "server" : "client", fd);

	if (!vsocket->is_server && fcntl(fd, F_SETFL, O_NONBLOCK)) {
		VHOST_LOG_CONFIG(ERR,
			"vhost-user: can't set nonblocking mode for socket, fd: "
			"%d (%s)\n", fd, strerror(errno));
		close(fd);
		return -1;
	}

	memset(un, 0, sizeof(*un));
	un->sun_family = AF_UNIX;
	strncpy(un->sun_path, vsocket->path, sizeof(un->sun_path));
	un->sun_path[sizeof(un->sun_path) - 1] = '\0';

	vsocket->socket_fd = fd;
	return 0;
}
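
/*
 * Server mode: bind the already created socket to vsocket->path, start
 * listening, and add the listen fd to the dispatch fdset so new connections
 * are accepted by vhost_user_server_new_connection().
 */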
static int
vhost_user_start_server(struct vhost_user_socket *vsocket)
{
	int ret;
	int fd = vsocket->socket_fd;
	const char *path = vsocket->path;

	/*
	 * bind () may fail if the socket file with the same name already
	 * exists. But the library obviously should not delete the file
	 * provided by the user, since we can not be sure that it is not
	 * being used by other applications. Moreover, many applications form
	 * socket names based on user input, which is prone to errors.
	 *
	 * The user must ensure that the socket does not exist before
	 * registering the vhost driver in server mode.
	 */
	ret = bind(fd, (struct sockaddr *)&vsocket->un, sizeof(vsocket->un));
	if (ret < 0) {
		VHOST_LOG_CONFIG(ERR,
			"failed to bind to %s: %s; remove it and try again\n",
			path, strerror(errno));
		goto err;
	}
	VHOST_LOG_CONFIG(INFO, "bind to %s\n", path);

	ret = listen(fd, MAX_VIRTIO_BACKLOG);
	if (ret < 0)
		goto err;

	ret = fdset_add(&vhost_user.fdset, fd, vhost_user_server_new_connection,
		  NULL, vsocket);
	if (ret < 0) {
		VHOST_LOG_CONFIG(ERR,
			"failed to add listen fd %d to vhost server fdset\n",
			fd);
		goto err;
	}

	return 0;

err:
	close(fd);
	return -1;
}

struct vhost_user_reconnect {
	struct sockaddr_un un;
	int fd;
	struct vhost_user_socket *vsocket;

	TAILQ_ENTRY(vhost_user_reconnect) next;
};

TAILQ_HEAD(vhost_user_reconnect_tailq_list, vhost_user_reconnect);
struct vhost_user_reconnect_list {
	struct vhost_user_reconnect_tailq_list head;
	pthread_mutex_t mutex;
};

static struct vhost_user_reconnect_list reconn_list;
static pthread_t reconn_tid;
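
/*
 * Try to finish connecting a non-blocking client socket.  Returns 0 once
 * connected (the fd is switched back to blocking mode), -1 if not yet
 * connected so the caller should retry later, and -2 on an unrecoverable
 * fcntl() error.
 */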
static int
vhost_user_connect_nonblock(int fd, struct sockaddr *un, size_t sz)
{
	int ret, flags;

	ret = connect(fd, un, sz);
	if (ret < 0 && errno != EISCONN)
		return -1;

	flags = fcntl(fd, F_GETFL, 0);
	if (flags < 0) {
		VHOST_LOG_CONFIG(ERR,
			"can't get flags for connfd %d\n", fd);
		return -2;
	}
	if ((flags & O_NONBLOCK) && fcntl(fd, F_SETFL, flags & ~O_NONBLOCK)) {
		VHOST_LOG_CONFIG(ERR,
				"can't disable nonblocking on fd %d\n", fd);
		return -2;
	}
	return 0;
}
|
|
|
|
|
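
/*
 * Return-value convention assumed by the callers below: 0 means the socket
 * is connected (EISCONN from a previously started non-blocking attempt also
 * counts), -1 means the peer is not there yet and the attempt may be
 * retried, and -2 means the fd is unusable and must be closed. A
 * hypothetical caller would typically dispatch on it like this:
 *
 *	switch (vhost_user_connect_nonblock(fd, sa, sz)) {
 *	case 0:			// connected, hand the fd over
 *		break;
 *	case -1:		// transient failure, retry later
 *		break;
 *	default:		// -2: give up and close(fd)
 *		break;
 *	}
 */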

static void *
vhost_user_client_reconnect(void *arg __rte_unused)
{
	int ret;
	struct vhost_user_reconnect *reconn, *next;

	while (1) {
		pthread_mutex_lock(&reconn_list.mutex);

		/*
		 * An equivalent implementation of TAILQ_FOREACH_SAFE,
		 * which does not exist on all platforms.
		 */
		for (reconn = TAILQ_FIRST(&reconn_list.head);
		     reconn != NULL; reconn = next) {
			next = TAILQ_NEXT(reconn, next);

			ret = vhost_user_connect_nonblock(reconn->fd,
						(struct sockaddr *)&reconn->un,
						sizeof(reconn->un));
			if (ret == -2) {
				close(reconn->fd);
				VHOST_LOG_CONFIG(ERR,
					"reconnection for fd %d failed\n",
					reconn->fd);
				goto remove_fd;
			}
			if (ret == -1)
				continue;

			VHOST_LOG_CONFIG(INFO,
				"%s: connected\n", reconn->vsocket->path);
			vhost_user_add_connection(reconn->fd, reconn->vsocket);
remove_fd:
			TAILQ_REMOVE(&reconn_list.head, reconn, next);
			free(reconn);
		}

		pthread_mutex_unlock(&reconn_list.mutex);
		sleep(1);
	}

	return NULL;
}
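
/*
 * The loop above open-codes the "safe" TAILQ iteration idiom: the next
 * pointer is cached before the current node may be unlinked and freed. A
 * minimal standalone sketch of the same pattern, using a hypothetical node
 * type and an arbitrary removal predicate:
 *
 *	struct node {
 *		int key;
 *		TAILQ_ENTRY(node) entries;
 *	};
 *	TAILQ_HEAD(node_list, node);
 *
 *	static void
 *	prune(struct node_list *list, int key)
 *	{
 *		struct node *n, *next;
 *
 *		for (n = TAILQ_FIRST(list); n != NULL; n = next) {
 *			next = TAILQ_NEXT(n, entries);	// cache before unlink
 *			if (n->key == key) {
 *				TAILQ_REMOVE(list, n, entries);
 *				free(n);
 *			}
 *		}
 *	}
 */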

static int
vhost_user_reconnect_init(void)
{
	int ret;

	ret = pthread_mutex_init(&reconn_list.mutex, NULL);
	if (ret < 0) {
		VHOST_LOG_CONFIG(ERR, "failed to initialize mutex");
		return ret;
	}
	TAILQ_INIT(&reconn_list.head);

	ret = rte_ctrl_thread_create(&reconn_tid, "vhost_reconn", NULL,
			     vhost_user_client_reconnect, NULL);
	if (ret != 0) {
		VHOST_LOG_CONFIG(ERR, "failed to create reconnect thread");
		if (pthread_mutex_destroy(&reconn_list.mutex)) {
			VHOST_LOG_CONFIG(ERR,
				"failed to destroy reconnect mutex");
		}
	}

	return ret;
}

static int
vhost_user_start_client(struct vhost_user_socket *vsocket)
{
	int ret;
	int fd = vsocket->socket_fd;
	const char *path = vsocket->path;
	struct vhost_user_reconnect *reconn;

	ret = vhost_user_connect_nonblock(fd, (struct sockaddr *)&vsocket->un,
					  sizeof(vsocket->un));
	if (ret == 0) {
		vhost_user_add_connection(fd, vsocket);
		return 0;
	}

	VHOST_LOG_CONFIG(WARNING,
		"failed to connect to %s: %s\n",
		path, strerror(errno));

	if (ret == -2 || !vsocket->reconnect) {
		close(fd);
		return -1;
	}

	VHOST_LOG_CONFIG(INFO, "%s: reconnecting...\n", path);
	reconn = malloc(sizeof(*reconn));
	if (reconn == NULL) {
		VHOST_LOG_CONFIG(ERR,
			"failed to allocate memory for reconnect\n");
		close(fd);
		return -1;
	}
	reconn->un = vsocket->un;
	reconn->fd = fd;
	reconn->vsocket = vsocket;
	pthread_mutex_lock(&reconn_list.mutex);
	TAILQ_INSERT_TAIL(&reconn_list.head, reconn, next);
	pthread_mutex_unlock(&reconn_list.mutex);

	return 0;
}
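
/*
 * Usage sketch (not part of this file): in client mode the library connects
 * to a socket created by the master (e.g. QEMU) and, unless
 * RTE_VHOST_USER_NO_RECONNECT is given, keeps retrying in the background
 * thread above until the peer shows up. Assuming the example path
 * "/tmp/vhost-user.sock":
 *
 *	uint64_t flags = RTE_VHOST_USER_CLIENT;	// reconnect enabled by default
 *
 *	if (rte_vhost_driver_register("/tmp/vhost-user.sock", flags) < 0)
 *		rte_exit(EXIT_FAILURE, "vhost registration failed\n");
 */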

static struct vhost_user_socket *
find_vhost_user_socket(const char *path)
{
	int i;

	if (path == NULL)
		return NULL;

	for (i = 0; i < vhost_user.vsocket_cnt; i++) {
		struct vhost_user_socket *vsocket = vhost_user.vsockets[i];

		if (!strcmp(vsocket->path, path))
			return vsocket;
	}

	return NULL;
}

int
rte_vhost_driver_attach_vdpa_device(const char *path, int did)
{
	struct vhost_user_socket *vsocket;

	if (rte_vdpa_get_device(did) == NULL || path == NULL)
		return -1;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (vsocket)
		vsocket->vdpa_dev_id = did;
	pthread_mutex_unlock(&vhost_user.mutex);

	return vsocket ? 0 : -1;
}

int
rte_vhost_driver_detach_vdpa_device(const char *path)
{
	struct vhost_user_socket *vsocket;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (vsocket)
		vsocket->vdpa_dev_id = -1;
	pthread_mutex_unlock(&vhost_user.mutex);

	return vsocket ? 0 : -1;
}

int
rte_vhost_driver_get_vdpa_device_id(const char *path)
{
	struct vhost_user_socket *vsocket;
	int did = -1;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (vsocket)
		did = vsocket->vdpa_dev_id;
	pthread_mutex_unlock(&vhost_user.mutex);

	return did;
}
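
/*
 * Usage sketch (not part of this file): binding a registered socket to a
 * vDPA device simply records the device id in the vsocket, so the feature
 * and queue queries below can be narrowed to what the hardware supports.
 * Assuming the application already obtained a valid vDPA device id `did`
 * from the vDPA framework, and an example path:
 *
 *	const char *path = "/tmp/vhost-user.sock";
 *
 *	if (rte_vhost_driver_attach_vdpa_device(path, did) < 0)
 *		printf("no vhost socket registered for %s\n", path);
 *	// ... later, rte_vhost_driver_detach_vdpa_device(path) undoes it.
 */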

int
rte_vhost_driver_disable_features(const char *path, uint64_t features)
{
	struct vhost_user_socket *vsocket;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);

	/* Note that use_builtin_virtio_net is not affected by this function
	 * since callers may want to selectively disable features of the
	 * built-in vhost net device backend.
	 */

	if (vsocket)
		vsocket->features &= ~features;
	pthread_mutex_unlock(&vhost_user.mutex);

	return vsocket ? 0 : -1;
}

int
rte_vhost_driver_enable_features(const char *path, uint64_t features)
{
	struct vhost_user_socket *vsocket;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (vsocket) {
		if ((vsocket->supported_features & features) != features) {
			/*
			 * trying to enable features the driver doesn't
			 * support.
			 */
			pthread_mutex_unlock(&vhost_user.mutex);
			return -1;
		}
		vsocket->features |= features;
	}
	pthread_mutex_unlock(&vhost_user.mutex);

	return vsocket ? 0 : -1;
}

int
rte_vhost_driver_set_features(const char *path, uint64_t features)
{
	struct vhost_user_socket *vsocket;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (vsocket) {
		vsocket->supported_features = features;
		vsocket->features = features;

		/* Anyone setting feature bits is implementing their own vhost
		 * device backend.
		 */
		vsocket->use_builtin_virtio_net = false;
	}
	pthread_mutex_unlock(&vhost_user.mutex);

	return vsocket ? 0 : -1;
}

int
rte_vhost_driver_get_features(const char *path, uint64_t *features)
{
	struct vhost_user_socket *vsocket;
	uint64_t vdpa_features;
	struct rte_vdpa_device *vdpa_dev;
	int did = -1;
	int ret = 0;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (!vsocket) {
		VHOST_LOG_CONFIG(ERR,
			"socket file %s is not registered yet.\n", path);
		ret = -1;
		goto unlock_exit;
	}

	did = vsocket->vdpa_dev_id;
	vdpa_dev = rte_vdpa_get_device(did);
	if (!vdpa_dev || !vdpa_dev->ops->get_features) {
		*features = vsocket->features;
		goto unlock_exit;
	}

	if (vdpa_dev->ops->get_features(did, &vdpa_features) < 0) {
		VHOST_LOG_CONFIG(ERR,
			"failed to get vdpa features "
			"for socket file %s.\n", path);
		ret = -1;
		goto unlock_exit;
	}

	*features = vsocket->features & vdpa_features;

unlock_exit:
	pthread_mutex_unlock(&vhost_user.mutex);
	return ret;
}
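
/*
 * Usage sketch (not part of this file): applications usually tweak the
 * default feature set of the built-in net backend between registration and
 * rte_vhost_driver_start(). For example, to turn off mergeable RX buffers
 * and then read back what is effectively offered (example path, and
 * assuming VIRTIO_NET_F_MRG_RXBUF is visible to the application):
 *
 *	uint64_t features;
 *	const char *path = "/tmp/vhost-user.sock";
 *
 *	rte_vhost_driver_disable_features(path,
 *			1ULL << VIRTIO_NET_F_MRG_RXBUF);
 *	if (rte_vhost_driver_get_features(path, &features) == 0)
 *		printf("offered features: 0x%llx\n",
 *			(unsigned long long)features);
 */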

int
rte_vhost_driver_set_protocol_features(const char *path,
		uint64_t protocol_features)
{
	struct vhost_user_socket *vsocket;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (vsocket)
		vsocket->protocol_features = protocol_features;
	pthread_mutex_unlock(&vhost_user.mutex);
	return vsocket ? 0 : -1;
}

int
rte_vhost_driver_get_protocol_features(const char *path,
		uint64_t *protocol_features)
{
	struct vhost_user_socket *vsocket;
	uint64_t vdpa_protocol_features;
	struct rte_vdpa_device *vdpa_dev;
	int did = -1;
	int ret = 0;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (!vsocket) {
		VHOST_LOG_CONFIG(ERR,
			"socket file %s is not registered yet.\n", path);
		ret = -1;
		goto unlock_exit;
	}

	did = vsocket->vdpa_dev_id;
	vdpa_dev = rte_vdpa_get_device(did);
	if (!vdpa_dev || !vdpa_dev->ops->get_protocol_features) {
		*protocol_features = vsocket->protocol_features;
		goto unlock_exit;
	}

	if (vdpa_dev->ops->get_protocol_features(did,
				&vdpa_protocol_features) < 0) {
		VHOST_LOG_CONFIG(ERR,
			"failed to get vdpa protocol features "
			"for socket file %s.\n", path);
		ret = -1;
		goto unlock_exit;
	}

	*protocol_features = vsocket->protocol_features
		& vdpa_protocol_features;

unlock_exit:
	pthread_mutex_unlock(&vhost_user.mutex);
	return ret;
}

int
rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num)
{
	struct vhost_user_socket *vsocket;
	uint32_t vdpa_queue_num;
	struct rte_vdpa_device *vdpa_dev;
	int did = -1;
	int ret = 0;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (!vsocket) {
		VHOST_LOG_CONFIG(ERR,
			"socket file %s is not registered yet.\n", path);
		ret = -1;
		goto unlock_exit;
	}

	did = vsocket->vdpa_dev_id;
	vdpa_dev = rte_vdpa_get_device(did);
	if (!vdpa_dev || !vdpa_dev->ops->get_queue_num) {
		*queue_num = VHOST_MAX_QUEUE_PAIRS;
		goto unlock_exit;
	}

	if (vdpa_dev->ops->get_queue_num(did, &vdpa_queue_num) < 0) {
		VHOST_LOG_CONFIG(ERR,
			"failed to get vdpa queue number "
			"for socket file %s.\n", path);
		ret = -1;
		goto unlock_exit;
	}

	*queue_num = RTE_MIN((uint32_t)VHOST_MAX_QUEUE_PAIRS, vdpa_queue_num);

unlock_exit:
	pthread_mutex_unlock(&vhost_user.mutex);
	return ret;
}

static void
vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
{
	if (vsocket && vsocket->path) {
		free(vsocket->path);
		vsocket->path = NULL;
	}

	if (vsocket) {
		free(vsocket);
		vsocket = NULL;
	}
}

/*
 * Register a new vhost-user socket; here we could act as server
 * (the default case), or client (when the RTE_VHOST_USER_CLIENT flag
 * is set).
 */
int
rte_vhost_driver_register(const char *path, uint64_t flags)
{
	int ret = -1;
	struct vhost_user_socket *vsocket;

	if (!path)
		return -1;

	pthread_mutex_lock(&vhost_user.mutex);

	if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) {
		VHOST_LOG_CONFIG(ERR,
			"error: the number of vhost sockets reaches maximum\n");
		goto out;
	}

	vsocket = malloc(sizeof(struct vhost_user_socket));
	if (!vsocket)
		goto out;
	memset(vsocket, 0, sizeof(struct vhost_user_socket));
	vsocket->path = strdup(path);
	if (vsocket->path == NULL) {
		VHOST_LOG_CONFIG(ERR,
			"error: failed to copy socket path string\n");
		vhost_user_socket_mem_free(vsocket);
		goto out;
	}
	TAILQ_INIT(&vsocket->conn_list);
	ret = pthread_mutex_init(&vsocket->conn_mutex, NULL);
	if (ret) {
		VHOST_LOG_CONFIG(ERR,
			"error: failed to init connection mutex\n");
		goto out_free;
	}
	vsocket->vdpa_dev_id = -1;
	vsocket->dequeue_zero_copy = flags & RTE_VHOST_USER_DEQUEUE_ZERO_COPY;
	vsocket->extbuf = flags & RTE_VHOST_USER_EXTBUF_SUPPORT;
	vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;

	if (vsocket->dequeue_zero_copy &&
	    (flags & RTE_VHOST_USER_IOMMU_SUPPORT)) {
		VHOST_LOG_CONFIG(ERR,
			"error: enabling dequeue zero copy and IOMMU features "
			"simultaneously is not supported\n");
		goto out_mutex;
	}

	/*
	 * Set the supported features correctly for the builtin vhost-user
	 * net driver.
	 *
	 * Applications know nothing about features the builtin virtio net
	 * driver (virtio_net.c) supports, thus it's not possible for them
	 * to invoke rte_vhost_driver_set_features(). To work around it, here
	 * we set it unconditionally. If the application wants to implement
	 * another vhost-user driver (say SCSI), it should call
	 * rte_vhost_driver_set_features(), which will overwrite the following
	 * two values.
	 */
	vsocket->use_builtin_virtio_net = true;
	vsocket->supported_features = VIRTIO_NET_SUPPORTED_FEATURES;
	vsocket->features = VIRTIO_NET_SUPPORTED_FEATURES;
	vsocket->protocol_features = VHOST_USER_PROTOCOL_FEATURES;

	/*
	 * Dequeue zero copy can't assure descriptors returned in order.
	 * Also, it requires that the guest memory is populated, which is
	 * not compatible with postcopy.
	 */
	if (vsocket->dequeue_zero_copy) {
		if (vsocket->extbuf) {
			VHOST_LOG_CONFIG(ERR,
			"error: zero copy is incompatible with external buffers\n");
			ret = -1;
			goto out_mutex;
		}
		if (vsocket->linearbuf) {
			VHOST_LOG_CONFIG(ERR,
			"error: zero copy is incompatible with linear buffers\n");
			ret = -1;
			goto out_mutex;
		}
		vsocket->supported_features &= ~(1ULL << VIRTIO_F_IN_ORDER);
		vsocket->features &= ~(1ULL << VIRTIO_F_IN_ORDER);

		VHOST_LOG_CONFIG(INFO,
			"Dequeue zero copy requested, disabling postcopy support\n");
		vsocket->protocol_features &=
			~(1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT);
	}

	/*
	 * We'll not be able to receive a buffer from the guest in linear mode
	 * without an external buffer if it does not fit in a single mbuf,
	 * which is likely if segmentation offloading is enabled.
	 */
	if (vsocket->linearbuf && !vsocket->extbuf) {
		uint64_t seg_offload_features =
				(1ULL << VIRTIO_NET_F_HOST_TSO4) |
				(1ULL << VIRTIO_NET_F_HOST_TSO6) |
				(1ULL << VIRTIO_NET_F_HOST_UFO);

		VHOST_LOG_CONFIG(INFO,
			"Linear buffers requested without external buffers, "
			"disabling host segmentation offloading support\n");
		vsocket->supported_features &= ~seg_offload_features;
		vsocket->features &= ~seg_offload_features;
	}

	if (!(flags & RTE_VHOST_USER_IOMMU_SUPPORT)) {
		vsocket->supported_features &= ~(1ULL << VIRTIO_F_IOMMU_PLATFORM);
		vsocket->features &= ~(1ULL << VIRTIO_F_IOMMU_PLATFORM);
	}

	if (!(flags & RTE_VHOST_USER_POSTCOPY_SUPPORT)) {
		vsocket->protocol_features &=
			~(1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT);
	} else {
#ifndef RTE_LIBRTE_VHOST_POSTCOPY
		VHOST_LOG_CONFIG(ERR,
			"Postcopy requested but not compiled\n");
		ret = -1;
		goto out_mutex;
#endif
	}

	if ((flags & RTE_VHOST_USER_CLIENT) != 0) {
		vsocket->reconnect = !(flags & RTE_VHOST_USER_NO_RECONNECT);
		if (vsocket->reconnect && reconn_tid == 0) {
			if (vhost_user_reconnect_init() != 0)
				goto out_mutex;
		}
	} else {
		vsocket->is_server = true;
	}
	ret = create_unix_socket(vsocket);
	if (ret < 0) {
		goto out_mutex;
	}

	vhost_user.vsockets[vhost_user.vsocket_cnt++] = vsocket;

	pthread_mutex_unlock(&vhost_user.mutex);
	return ret;

out_mutex:
	if (pthread_mutex_destroy(&vsocket->conn_mutex)) {
		VHOST_LOG_CONFIG(ERR,
			"error: failed to destroy connection mutex\n");
	}
out_free:
	vhost_user_socket_mem_free(vsocket);
out:
	pthread_mutex_unlock(&vhost_user.mutex);

	return ret;
}
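
/*
 * Usage sketch (not part of this file): the flags above are OR-ed together,
 * and some combinations are rejected here at registration time (e.g.
 * dequeue zero copy together with IOMMU, external or linear buffers). A
 * plausible client-mode registration with IOMMU support, for an example
 * path:
 *
 *	uint64_t flags = RTE_VHOST_USER_CLIENT | RTE_VHOST_USER_IOMMU_SUPPORT;
 *
 *	if (rte_vhost_driver_register("/tmp/vhost-user.sock", flags) < 0)
 *		rte_exit(EXIT_FAILURE, "vhost registration failed\n");
 */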

static bool
vhost_user_remove_reconnect(struct vhost_user_socket *vsocket)
{
	int found = false;
	struct vhost_user_reconnect *reconn, *next;

	pthread_mutex_lock(&reconn_list.mutex);

	for (reconn = TAILQ_FIRST(&reconn_list.head);
	     reconn != NULL; reconn = next) {
		next = TAILQ_NEXT(reconn, next);

		if (reconn->vsocket == vsocket) {
			TAILQ_REMOVE(&reconn_list.head, reconn, next);
			close(reconn->fd);
			free(reconn);
			found = true;
			break;
		}
	}
	pthread_mutex_unlock(&reconn_list.mutex);
	return found;
}

/**
 * Unregister the specified vhost socket
 */
int
rte_vhost_driver_unregister(const char *path)
{
	int i;
	int count;
	struct vhost_user_connection *conn, *next;

	if (path == NULL)
		return -1;

again:
	pthread_mutex_lock(&vhost_user.mutex);

	for (i = 0; i < vhost_user.vsocket_cnt; i++) {
		struct vhost_user_socket *vsocket = vhost_user.vsockets[i];

		if (!strcmp(vsocket->path, path)) {
			pthread_mutex_lock(&vsocket->conn_mutex);
			for (conn = TAILQ_FIRST(&vsocket->conn_list);
			     conn != NULL;
			     conn = next) {
				next = TAILQ_NEXT(conn, next);

				/*
				 * If r/wcb is executing, release vsocket's
				 * conn_mutex and vhost_user's mutex locks, and
				 * try again since the r/wcb may use the
				 * conn_mutex and mutex locks.
				 */
				if (fdset_try_del(&vhost_user.fdset,
						  conn->connfd) == -1) {
					pthread_mutex_unlock(
							&vsocket->conn_mutex);
					pthread_mutex_unlock(&vhost_user.mutex);
					goto again;
				}

				VHOST_LOG_CONFIG(INFO,
					"free connfd = %d for device '%s'\n",
					conn->connfd, path);
				close(conn->connfd);
				vhost_destroy_device(conn->vid);
				TAILQ_REMOVE(&vsocket->conn_list, conn, next);
				free(conn);
			}
			pthread_mutex_unlock(&vsocket->conn_mutex);

			if (vsocket->is_server) {
				/*
				 * If r/wcb is executing, release vhost_user's
				 * mutex lock, and try again since the r/wcb
				 * may use the mutex lock.
				 */
				if (fdset_try_del(&vhost_user.fdset,
						vsocket->socket_fd) == -1) {
					pthread_mutex_unlock(&vhost_user.mutex);
					goto again;
				}

				close(vsocket->socket_fd);
				unlink(path);
			} else if (vsocket->reconnect) {
				vhost_user_remove_reconnect(vsocket);
			}

			pthread_mutex_destroy(&vsocket->conn_mutex);
			vhost_user_socket_mem_free(vsocket);

			count = --vhost_user.vsocket_cnt;
			vhost_user.vsockets[i] = vhost_user.vsockets[count];
			vhost_user.vsockets[count] = NULL;
			pthread_mutex_unlock(&vhost_user.mutex);

			return 0;
		}
	}
	pthread_mutex_unlock(&vhost_user.mutex);

	return -1;
}
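
/*
 * Usage sketch (not part of this file): on shutdown the application
 * unregisters each path it registered; for a server-mode socket this also
 * closes the listen fd and unlinks the socket file, as seen above:
 *
 *	if (rte_vhost_driver_unregister("/tmp/vhost-user.sock") < 0)
 *		printf("socket was not registered\n");
 */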

/*
 * Register ops so that we can add/remove a device to/from a data core.
 */
int
rte_vhost_driver_callback_register(const char *path,
	struct vhost_device_ops const * const ops)
{
	struct vhost_user_socket *vsocket;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	if (vsocket)
		vsocket->notify_ops = ops;
	pthread_mutex_unlock(&vhost_user.mutex);

	return vsocket ? 0 : -1;
}

struct vhost_device_ops const *
vhost_driver_callback_get(const char *path)
{
	struct vhost_user_socket *vsocket;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	pthread_mutex_unlock(&vhost_user.mutex);

	return vsocket ? vsocket->notify_ops : NULL;
}
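
/*
 * Usage sketch (not part of this file): the ops table registered above is
 * how the application learns about device lifecycle events. A minimal table
 * wiring up the two basic callbacks (member names as defined in rte_vhost.h;
 * the handlers themselves are hypothetical):
 *
 *	static int
 *	new_device_cb(int vid)
 *	{
 *		printf("vhost device %d is ready\n", vid);
 *		return 0;
 *	}
 *
 *	static void
 *	destroy_device_cb(int vid)
 *	{
 *		printf("vhost device %d is gone\n", vid);
 *	}
 *
 *	static const struct vhost_device_ops ops = {
 *		.new_device = new_device_cb,
 *		.destroy_device = destroy_device_cb,
 *	};
 *
 *	rte_vhost_driver_callback_register("/tmp/vhost-user.sock", &ops);
 */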

int
rte_vhost_driver_start(const char *path)
{
	struct vhost_user_socket *vsocket;
	static pthread_t fdset_tid;

	pthread_mutex_lock(&vhost_user.mutex);
	vsocket = find_vhost_user_socket(path);
	pthread_mutex_unlock(&vhost_user.mutex);

	if (!vsocket)
		return -1;

	if (fdset_tid == 0) {
		/**
		 * Create a pipe that is polled together with the other fds
		 * and written to whenever the poll wait list has to be
		 * rebuilt.
		 */
		if (fdset_pipe_init(&vhost_user.fdset) < 0) {
			VHOST_LOG_CONFIG(ERR,
				"failed to create pipe for vhost fdset\n");
			return -1;
		}

		int ret = rte_ctrl_thread_create(&fdset_tid,
			"vhost-events", NULL, fdset_event_dispatch,
			&vhost_user.fdset);
		if (ret != 0) {
			VHOST_LOG_CONFIG(ERR,
				"failed to create fdset handling thread");

			fdset_pipe_uninit(&vhost_user.fdset);
			return -1;
		}
	}

	if (vsocket->is_server)
		return vhost_user_start_server(vsocket);
	else
		return vhost_user_start_client(vsocket);
}
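
/*
 * Usage sketch (not part of this file): the intended call order for a
 * custom backend is register -> adjust (protocol) features -> register
 * callbacks -> start; with the built-in net backend the feature calls can
 * be skipped. The path is an example and `ops` reuses the table from the
 * sketch above:
 *
 *	const char *path = "/tmp/vhost-user.sock";
 *
 *	if (rte_vhost_driver_register(path, 0) < 0)
 *		rte_exit(EXIT_FAILURE, "failed to register %s\n", path);
 *
 *	rte_vhost_driver_disable_features(path,
 *			1ULL << VIRTIO_NET_F_MRG_RXBUF);	// optional tuning
 *
 *	if (rte_vhost_driver_callback_register(path, &ops) < 0)
 *		rte_exit(EXIT_FAILURE, "failed to register callbacks\n");
 *
 *	if (rte_vhost_driver_start(path) < 0)
 *		rte_exit(EXIT_FAILURE, "failed to start vhost driver\n");
 */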