/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2010-2014 Intel Corporation
 */

#ifndef _FD_MAN_H_
#define _FD_MAN_H_
#include <stdint.h>
#include <pthread.h>
#include <poll.h>
#define MAX_FDS 1024
typedef void (*fd_cb)(int fd, void *dat, int *remove);
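/*
 * Illustrative sketch (not part of this header): a read callback matching
 * fd_cb. The dispatcher passes a flag pointer through "remove"; setting it
 * to a non-zero value asks the dispatcher to drop the fd after the
 * callback returns. The name "my_rcb" is hypothetical.
 *
 *	static void my_rcb(int fd, void *dat, int *remove)
 *	{
 *		char buf[128];
 *
 *		if (read(fd, buf, sizeof(buf)) <= 0)
 *			*remove = 1;	// request deletion; do not call fdset_del here
 *	}
 */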
struct fdentry {
	int fd;		/* -1 indicates this entry is empty */
	fd_cb rcb;	/* callback when this fd is readable. */
	fd_cb wcb;	/* callback when this fd is writeable. */
	void *dat;	/* fd context */
	int busy;	/* whether this entry is being used in cb. */
};
struct fdset {
	struct pollfd rwfds[MAX_FDS];
	struct fdentry fd[MAX_FDS];
	pthread_mutex_t fd_mutex;
	int num;	/* current fd number of this fdset */

	union pipefds {
		struct {
			int pipefd[2];
		};
		struct {
			int readfd;
			int writefd;
		};
	} u;
};
/* Initialize the fdset: mark every entry empty (fd = -1) and set up the mutex. */
void fdset_init(struct fdset *pfdset);
/*
 * Register fd with its read/write callbacks and context pointer.
 * Returns 0 on success, a negative value on bad arguments or a full set.
 */
int fdset_add(struct fdset *pfdset, int fd,
	fd_cb rcb, fd_cb wcb, void *dat);
/*
 * Unregister fd and return its context pointer. Waits until no callback
 * is running on the entry, so it must not be called from within a
 * callback (set the *remove flag there instead).
 */
void *fdset_del(struct fdset *pfdset, int fd);
/* Non-blocking variant of fdset_del(): returns -1 instead of waiting if a
 * callback is currently running on the entry, 0 on success. */
int fdset_try_del(struct fdset *pfdset, int fd);
/*
 * Infinite poll loop dispatching read/write callbacks for all fds in the
 * set; meant to run in its own thread, with the struct fdset as arg.
 */
void *fdset_event_dispatch(void *arg);
/* Create the internal self-notification pipe and add its read end to the set. */
int fdset_pipe_init(struct fdset *fdset);
/* Remove the notification pipe from the set and close both of its ends. */
void fdset_pipe_uninit(struct fdset *fdset);
/* Write a byte to the pipe to wake a dispatch thread blocked in poll(). */
void fdset_pipe_notify(struct fdset *fdset);
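/*
 * Typical lifecycle (illustrative sketch only; error handling omitted and
 * the variable and callback names are hypothetical):
 *
 *	static struct fdset my_fdset;
 *	pthread_t tid;
 *
 *	fdset_init(&my_fdset);
 *	fdset_pipe_init(&my_fdset);
 *	pthread_create(&tid, NULL, fdset_event_dispatch, &my_fdset);
 *
 *	fdset_add(&my_fdset, conn_fd, my_rcb, NULL, ctx);
 *	fdset_pipe_notify(&my_fdset);	// wake poll() so the new fd is seen
 *
 *	ctx = fdset_del(&my_fdset, conn_fd);	// later: unregister, reclaim ctx
 */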
#endif /* _FD_MAN_H_ */