Poll Mode Driver for Paravirtual VMXNET3 NIC.
As a PMD, the VMXNET3 driver provides the packet reception and transmission
callbacks, vmxnet3_recv_pkts and vmxnet3_xmit_pkts. Scattered packet reception
is not supported, either in vmxnet3_recv_pkts or among the device operations
the driver exposes.
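For context, a burst callback pair like this is normally handed to the generic
ethdev layer when the device is initialised. Below is a minimal sketch of that
wiring via the rx_pkt_burst/tx_pkt_burst fields of struct rte_eth_dev; the init
function name is illustrative, not a copy of the vmxnet3 sources.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Burst callbacks implemented elsewhere in the driver. */
    uint16_t vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
                               uint16_t nb_pkts);
    uint16_t vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
                               uint16_t nb_pkts);

    /* Sketch of a per-device init hook (name illustrative): hand the burst
     * callbacks to the generic ethdev layer.  A real init function would
     * also fill in the dev_ops table, queue setup routines, and so on. */
    static int
    vmxnet3_dev_init_sketch(struct rte_eth_dev *eth_dev)
    {
        eth_dev->rx_pkt_burst = vmxnet3_recv_pkts;   /* RX burst entry point */
        eth_dev->tx_pkt_burst = vmxnet3_xmit_pkts;   /* TX burst entry point */
        return 0;
    }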
The VMXNET3 PMD handles all packet buffer memory allocation; the buffers reside
in guest address space, and the PMD is solely responsible for freeing that memory
when it is no longer needed. The packet buffers and the features to be supported
are made available to the hypervisor via the VMXNET3 PCI configuration space BARs.
During RX/TX, packet buffers are exchanged by their guest physical addresses (GPAs):
the hypervisor loads the buffers with packets in the RX case and sends packets to
the vSwitch in the TX case.
The VMXNET3 PMD is compiled with the vmxnet3 device headers. The interface is
similar to that of the other PMDs available in the Intel(R) DPDK API. The driver
pre-allocates the packet buffers and loads the command ring descriptors in advance.
The hypervisor fills those packet buffers on packet arrival and writes completion
ring descriptors, which are eventually pulled by the PMD. After reception, the
Intel(R) DPDK application frees the descriptors and loads new packet buffers for
the incoming packets. Interrupts are disabled and no notification is required,
which keeps performance up on the RX side even though the device provides a
notification feature.
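From the application side, this poll-based RX path boils down to calling the burst
receive API in a loop and returning the mbufs to their mempool, from which the PMD
refills the command ring. A minimal sketch, with the port and queue numbers purely
illustrative:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Poll RX queue 0 of port_id forever (queue number illustrative). */
    static void
    rx_poll_loop(uint16_t port_id)
    {
        struct rte_mbuf *pkts[BURST_SIZE];

        for (;;) {
            /* Pull up to BURST_SIZE packets that the hypervisor has
             * placed into the pre-posted buffers. */
            uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);

            for (uint16_t i = 0; i < nb_rx; i++) {
                /* ... process pkts[i] ... */

                /* Freeing the mbuf returns it to the mempool, from which
                 * the PMD posts fresh buffers to the command ring. */
                rte_pktmbuf_free(pkts[i]);
            }
        }
    }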
In the transmit routine, the Intel(R) DPDK application fills packet buffer pointers
into the descriptors of the command ring and notifies the hypervisor. In response,
the hypervisor takes the packets, passes them to the vSwitch, and writes into the
completion descriptor ring. That ring is read by the PMD on the next call to the
transmit routine, and the buffers and descriptors are then freed.
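The transmit side is the mirror image: the application hands mbuf pointers to the
burst send call, and buffers from earlier calls are reclaimed on later calls as
described above. A hedged sketch (again, queue 0 is illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Send a batch of already-built packets on queue 0 of port_id.
     * Packets the descriptor ring cannot accept are dropped here;
     * a real application might retry instead. */
    static void
    tx_send_burst(uint16_t port_id, struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_pkts);

        /* Free whatever the ring could not take this time. */
        for (uint16_t i = nb_tx; i < nb_pkts; i++)
            rte_pktmbuf_free(pkts[i]);
    }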
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This provides a para-virtualization packet switching solution based on the
Xen hypervisor's Grant Table, enabling simple and fast packet switching
between guest domains and the host domain based on MAC address or VLAN tag.
The solution comprises two components: a Poll Mode Driver (PMD) acting as the
front end in the guest domain, and a switching back end in the host domain.
XenStore is used to exchange configuration information between the PMD front
end and the switching back end, including grant reference IDs for the shared
Virtio RX/TX rings, the MAC address, device state, and so on.
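As an illustration of that exchange, the back end could read a value the front end
has published in XenStore using the standard libxenstore calls. The node path below
is hypothetical; the actual layout is defined by the xenvirt PMD and the vhost_xen
example.

    #include <stddef.h>
    #include <xenstore.h>

    /* Read a value published by the guest; "path" would be a XenStore node
     * such as a grant-reference entry (path layout is hypothetical here).
     * The returned buffer must be free()d by the caller. */
    static char *
    read_xenstore_value(const char *path)
    {
        struct xs_handle *xs = xs_open(0);
        unsigned int len = 0;
        char *value = NULL;

        if (xs == NULL)
            return NULL;

        value = xs_read(xs, XBT_NULL, path, &len);
        xs_close(xs);
        return value;
    }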
The front end PMD can be found in the Intel DPDK directory
lib/librte_pmd_xenvirt, and the back end example in examples/vhost_xen.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
These library changes provide a new Intel DPDK feature for communicating
with virtual machines using QEMU's IVSHMEM mechanism.
The feature works by providing a command line for QEMU to map several hugepages
into a single IVSHMEM device. For the guest to know what is inside any given IVSHMEM
device (and to distinguish between Intel(R) DPDK and non-Intel(R) DPDK IVSHMEM
devices), a metadata file is also mapped into the IVSHMEM segment. No work needs to
be done by the guest application to map IVSHMEM devices into memory; they are
automatically recognized by the Intel(R) DPDK Environment Abstraction Layer (EAL).
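On the host side, the new library is used roughly as follows: create a metadata
object, register the DPDK objects to be shared, and generate the IVSHMEM
command-line fragment to pass to QEMU. This is a sketch only; object names and
buffer sizes are illustrative, and the exact signatures should be checked against
rte_ivshmem.h.

    #include <stdio.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>
    #include <rte_ivshmem.h>

    #define METADATA_NAME "guest_meta"   /* illustrative metadata name */

    /* Host-side sketch: share a ring with a guest through IVSHMEM. */
    static int
    export_ring_to_guest(void)
    {
        char cmdline[1024];
        struct rte_ring *r;

        /* Ring the guest will later attach to by name. */
        r = rte_ring_create("shared_ring", 1024, rte_socket_id(), 0);
        if (r == NULL)
            return -1;

        /* Create a metadata file and register the ring in it. */
        if (rte_ivshmem_metadata_create(METADATA_NAME) < 0)
            return -1;
        if (rte_ivshmem_metadata_add_ring(r, METADATA_NAME) < 0)
            return -1;

        /* Produce the IVSHMEM device fragment to append to the QEMU
         * command line for the guest. */
        if (rte_ivshmem_metadata_cmdline_generate(cmdline, sizeof(cmdline),
                                                  METADATA_NAME) < 0)
            return -1;

        printf("QEMU cmdline fragment: %s\n", cmdline);
        return 0;
    }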
Changes in this patch:
* Changes to EAL to allow mapping of all hugepages in a memseg into a single file
* Changes to EAL to allow ivshmem devices to be transparently mapped into
the process running on the guest.
* New ivshmem library to create and manage metadata exported to guest VMs
* New ivshmem compilation targets
* Mempool and ring changes to allow export of structures to a VM and to allow
a VM to attach to those structures (see the guest-side sketch after this list).
* New autotests to unit test this functionality.
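On the guest side, because the EAL maps the IVSHMEM device automatically, attaching
to an exported structure reduces to a lookup by name. A minimal sketch, assuming the
ring was exported as "shared_ring" as in the host-side example above:

    #include <stddef.h>
    #include <rte_ring.h>

    /* Guest-side sketch: attach to a ring the host exported via IVSHMEM.
     * The EAL has already mapped the device, so the ring is simply found
     * by the name used on the host ("shared_ring" is illustrative). */
    static struct rte_ring *
    attach_shared_ring(void)
    {
        return rte_ring_lookup("shared_ring");   /* NULL if not present */
    }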
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>