Some libraries were missing their dependency on eal, mbuf, mempool,
ring and kvargs.
This was revealed by the linker option "-z defs".
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Add a new parameter (flags) to rte_vhost_driver_register(). DPDK
vhost-user acts in client mode when the RTE_VHOST_USER_CLIENT flag
is set. The flags argument also allows future extensions without
breaking the API (again).
The rest is then straightforward: allocate a unix socket, and
bind/listen for the server, connect for the client.
This extension is for vhost-user only; we therefore simply quit
and report an error when any flags are given for vhost-cuse.
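A minimal usage sketch of the new flag (the socket path here is
hypothetical):

    #include <rte_virtio_net.h>

    static int
    register_vhost_client(void)
    {
        /* With RTE_VHOST_USER_CLIENT the library connect()s to an
         * existing socket (e.g. one created by QEMU); passing 0
         * keeps the default server mode, which binds and listens. */
        return rte_vhost_driver_register("/tmp/sock0",
                                         RTE_VHOST_USER_CLIENT);
    }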
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
With all the previous preparation work, we are just one step away from
the final ABI refactoring. That is, to change the current APIs to
stick to the vid instead of the old virtio_net dev pointer.
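A hedged sketch of the new call style, assuming the application got
vid from its new_device() callback:

    #include <rte_mbuf.h>
    #include <rte_virtio_net.h>

    static uint16_t
    tx_to_guest(int vid, struct rte_mbuf **pkts, uint16_t count)
    {
        /* APIs now key on the opaque integer vid rather than a
         * struct virtio_net pointer; VIRTIO_RXQ is queue 0. */
        return rte_vhost_enqueue_burst(vid, VIRTIO_RXQ, pkts, count);
    }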
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
This change lets us avoid the dependency on the "virtio_net"
struct, to prepare for the ABI refactoring.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_ifname() to export the ifname to
the application.
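A minimal usage sketch:

    #include <limits.h>
    #include <stdio.h>
    #include <rte_virtio_net.h>

    static void
    print_ifname(int vid)
    {
        char ifname[PATH_MAX];

        /* Copies the interface name (the socket path, for
         * vhost-user) into the caller-supplied buffer. */
        if (rte_vhost_get_ifname(vid, ifname, sizeof(ifname)) == 0)
            printf("vid %d: %s\n", vid, ifname);
    }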
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_queue_num() to export the number of
queues.
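A minimal usage sketch:

    #include <stdio.h>
    #include <rte_virtio_net.h>

    static void
    print_queue_num(int vid)
    {
        /* Returns the number of queues the device supports,
         * or 0 when the vid is not valid. */
        uint32_t nr = rte_vhost_get_queue_num(vid);

        printf("vid %d: %u queues\n", vid, nr);
    }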
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_numa_node() to get the numa node
from which the virtio_net struct is allocated.
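A minimal usage sketch; a negative return indicates failure (e.g. an
invalid vid, or NUMA support not compiled in):

    #include <stdio.h>
    #include <rte_virtio_net.h>

    static void
    print_numa_node(int vid)
    {
        int node = rte_vhost_get_numa_node(vid);

        if (node < 0)
            printf("vid %d: numa node unknown\n", vid);
        else
            printf("vid %d: numa node %d\n", vid, node);
    }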
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
It does not make sense to ask the application to set/unset the flag
VIRTIO_DEV_RUNNING (which is used internally only) in the new_device()/
destroy_device() callbacks.
Instead, it should be set after new_device() succeeds and reset before
destroy_device() is invoked, inside the vhost lib. This patch fixes that.
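A hedged, self-contained sketch of the new flow (internal names
simplified and hypothetical):

    #include <stdint.h>

    #define VIRTIO_DEV_RUNNING 1           /* internal-only flag */

    struct dev { int vid; uint32_t flags; };

    /* Stand-ins for the application's notify callbacks. */
    static int  new_device(int vid)     { (void)vid; return 0; }
    static void destroy_device(int vid) { (void)vid; }

    static void
    lib_start_device(struct dev *dev)
    {
        if (new_device(dev->vid) == 0)
            dev->flags |= VIRTIO_DEV_RUNNING;  /* set after success */
    }

    static void
    lib_stop_device(struct dev *dev)
    {
        dev->flags &= ~VIRTIO_DEV_RUNNING;     /* reset before callback */
        destroy_device(dev->vid);
    }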
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
This patch adds missing DEPDIRS entries to avoid any library referring
to symbols it is not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Add the missing external dependency on pthread to avoid referring to
symbols the library is not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Up to now, dependencies between DPDK internal libraries have been
untracked at the shared library level, requiring applications to know
about library-internal dependencies and often, as a consequence, to
overlink.
Since the dependencies are already recorded for build ordering in the
makefiles with DEPDIRS-y we can use that information to generate LDLIBS
entries for internal libraries automatically.
Also revert commit 8180554d82 ("vhost: fix linkage of driver with
library") which is made redundant by this change.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Currently, the vhost PMD doesn't have linkage for librte_vhost, even
though it depends on librte_vhost APIs. This causes a linkage error if
the conditions below are all fulfilled.
- DPDK libraries are compiled as shared libraries.
- The DPDK application doesn't link librte_vhost.
- The application tries to load the vhost PMD using the '-d' DPDK option.
The patch adds linkage for librte_vhost to the vhost PMD so that the
above error no longer occurs.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
If the vhost PMD was configured with more queues than the guest
supports, the old code would segfault in
rte_vhost_enable_guest_notification() due to a NULL virtqueue pointer.
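A hedged, self-contained sketch of the guard (structures simplified,
names hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    struct vhost_virtqueue { int notify_enabled; };

    struct virtio_net {
        struct vhost_virtqueue *virtqueue[8];
    };

    static void
    enable_notifications(struct virtio_net *dev, uint32_t nb_queues)
    {
        uint32_t i;

        for (i = 0; i < nb_queues; i++) {
            /* The guest may expose fewer queues than the PMD was
             * configured with; those slots remain NULL. */
            if (dev->virtqueue[i] == NULL)
                continue;
            dev->virtqueue[i]->notify_enabled = 1;
        }
    }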
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
After some testing, it was found that retrieving numa information
about a vhost device via a call to get_mempolicy is more
accurate when performed during the new_device callback versus
the vring_state_changed callback, in particular upon initial boot
of the VM. Performing this check during new_device is also
potentially more efficient as this callback is only triggered once
during device initialisation, compared with vring_state_changed
which may be called multiple times depending on the number of
queues assigned to the device.
Reorganise the code to perform this check and assign the correct
socket_id to the device during the new_device callback.
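For reference, a hedged sketch of the underlying query described
above; get_mempolicy() may require linking with -lnuma:

    #include <numaif.h>

    static int
    node_of_addr(void *addr)
    {
        int node = -1;

        /* MPOL_F_NODE | MPOL_F_ADDR makes get_mempolicy() report
         * the NUMA node backing `addr` rather than the policy. */
        if (get_mempolicy(&node, NULL, 0, addr,
                          MPOL_F_NODE | MPOL_F_ADDR) < 0)
            return -1;
        return node;
    }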
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Define and use ETH_LINK_UP and ETH_LINK_DOWN where appropriate.
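A minimal illustration of the change:

    #include <rte_ethdev.h>

    static void
    set_link_status(struct rte_eth_dev *dev, int up)
    {
        /* Named constants instead of bare 0/1. */
        dev->data->dev_link.link_status =
            up ? ETH_LINK_UP : ETH_LINK_DOWN;
    }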
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Currently, the maximum number of rx/tx queues is kept by the EAL. But
the value is used with two different meanings in the vhost PMD:
- The maximum number of currently enabled queues.
- The maximum number of currently supported queues.
This wrongful double meaning causes an issue with the steps below.
* Invoke the application with the below option.
  --vdev 'eth_vhost0,iface=<socket path>,queues=4'
* Configure queues like below.
  rte_eth_dev_configure(portid, 2, 2, ...);
* Configure queues again like below.
  rte_eth_dev_configure(portid, 4, 4, ...);
The second rte_eth_dev_configure() will fail because both the maximum
number of currently enabled queues and of currently supported queues
will be '2' after the first rte_eth_dev_configure() call.
To fix the issue, the patch adds another variable to keep the maximum
number of supported queues in the vhost PMD, as sketched below.
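A hedged, self-contained sketch of the fix (field names simplified
and hypothetical):

    #include <stdint.h>

    struct pmd_internal {
        /* Supported maximum, fixed at probe time from the
         * 'queues' vdev argument. */
        uint16_t max_queues;
    };

    struct dev_info { uint16_t max_rx_queues, max_tx_queues; };

    static void
    eth_dev_info(struct pmd_internal *internal, struct dev_info *info)
    {
        /* Always report the probe-time maximum, no matter how many
         * queues the last rte_eth_dev_configure() call enabled. */
        info->max_rx_queues = internal->max_queues;
        info->max_tx_queues = internal->max_queues;
    }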
Fixes: 23981fb0d78b ("vhost: Add vhost PMD")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
The patch introduces a new PMD. This PMD is implemented as a thin
wrapper around librte_vhost. This means librte_vhost is also needed to
compile the PMD. The vhost messages will be handled only when a port
is started, so start a port first, then invoke QEMU.
The PMD has 2 parameters.
- iface: The parameter is used to specify a path to connect to a
  virtio-net device.
- queues: The parameter is used to specify the number of queues the
  virtio-net device has. (Default: 1)
Here is an example.
$ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i
To connect to the above testpmd instance, here is a QEMU command example.
$ qemu-system-x86_64 \
<snip>
-chardev socket,id=chr0,path=/tmp/sock0 \
-netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
-device virtio-net-pci,netdev=net0,mq=on
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Update for the queue state event name.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>