Convert the pcap driver to use the PMD_REGISTER_DRIVER macro and fix up the
Makefile so that the pcap library is only linked in when building static libraries.
This means that the test applications no longer reference the pcap library when
building DSOs and must specify its use on the command line with the -d
option. Static linking will still initialize the driver automatically.
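For reference, a minimal sketch of what registration via PMD_REGISTER_DRIVER looks
like is shown below. The rte_driver fields and the init callback signature are an
assumption based on the rte_dev.h interface of this DPDK generation, not text taken
from the patch itself.

    /* Hypothetical sketch: the pcap PMD registering itself as a virtual
     * device driver. When linked statically, the constructor emitted by
     * PMD_REGISTER_DRIVER runs at program start; when built as a DSO, the
     * same happens once the object is loaded with the -d option. */
    #include <rte_dev.h>

    static int
    rte_pmd_pcap_devinit(const char *name, const char *params)
    {
            /* parse params and create the eth_pcap ethdev here ... */
            return 0;
    }

    static struct rte_driver pmd_pcap_drv = {
            .name = "eth_pcap",
            .type = PMD_VDEV,
            .init = rte_pmd_pcap_devinit,
    };

    PMD_REGISTER_DRIVER(pmd_pcap_drv);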
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
If some DPDK libraries are installed on the system, the linker would try to
use them before searching the -L path.
The obscure reason is that -L was prefixed with -Wl, to pass it
directly to the linker.
But -L is also a gcc option, and letting gcc process this option fixes
the issue.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
To fully support dpdk extensions (loading of .so), all symbols provided
by dpdk libraries must be available in the binaries: before this patch,
unused functions/variables from dpdk static libraries could be stripped
by the linker because nothing in the binary referenced them. These symbols
may still be needed by a dpdk extension loaded at runtime with the -d option.
Adding --whole-archive when linking a binary solves this issue.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
To clearly distinguish this implementation from the vmxnet3-usermap
extension, it is renamed to reflect its use of the uio framework.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Thomas Graf <tgraf@redhat.com>
To clearly distinguish this implementation from the virtio-net-pmd
extension, it is renamed to reflect its use of the uio framework.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Chris Wright <chrisw@redhat.com>
This reverts commits
a0cdfcf9 (use pcap-config to guess compilation flags),
ef5b2363 (fix build with empty LIBPCAP_CFLAGS) and
60191b89 (fix build when pcap_sendpacket is unavailable).
These patches created more problems than they solved; the initial problem was
a build error with pcap libraries that are too old.
Since such old pcap libraries are not that common, just revert these patches.
Reported-by: Meir Tseitlin <mirots@gmail.com>
Reported-by: Mats Liljegren <mats.liljegren@enea.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Use pcap-config to populate CFLAGS and LDFLAGS.
LIBPCAP_CFLAGS and LIBPCAP_LDFLAGS can be used to override this (useful when
cross-compiling).
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The GCC prefix -Wl was ignored because the command line value has higher priority.
This made it impossible for GCC to pass parameters to LD.
The prefixed value must override the command line one.
Signed-off-by: Julien Courtat <julien.courtat@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Poll Mode Driver for Paravirtual VMXNET3 NIC.
As a PMD, the VMXNET3 driver provides the packet reception and transmission
callbacks, vmxnet3_recv_pkts and vmxnet3_xmit_pkts. It does not support
scattered packet reception, either in vmxnet3_recv_pkts or as part of
the device operations it reports as supported.
The VMXNET3 PMD handles all packet buffer memory allocation, which resides in
guest address space, and it is solely responsible for freeing that memory when it
is no longer needed. The packet buffers and supported features are made available
to the hypervisor via the VMXNET3 PCI configuration space BARs. During RX/TX, the
packet buffers are exchanged by their GPAs: the hypervisor loads the buffers with
packets in the RX case and sends packets to the vSwitch in the TX case.
The VMXNET3 PMD is compiled with vmxnet3 device headers. The interface is similar
to that of the other PMDs available in the Intel(R) DPDK API. The driver pre-allocates
the packet buffers and loads the command ring descriptors in advance. The hypervisor
fills those packet buffers on packet arrival and writes completion ring descriptors,
which are eventually pulled by the PMD. After reception, the Intel(R) DPDK application
frees the descriptors and loads new packet buffers for the next incoming packets.
Interrupts are disabled and no notification is required, which keeps performance up
on the RX side even though the device provides a notification feature.
In the transmit routine, the Intel(R) DPDK application fills packet buffer pointers
in the descriptors of the command ring and notifies the hypervisor. In response, the
hypervisor takes the packets, passes them to the vSwitch, and writes into the
completion descriptor ring. The PMD reads the rings in the next call of the transmit
routine and frees the buffers and descriptors.
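From the application's point of view this flow is driven through the usual ethdev
burst API; the short sketch below (EAL and port setup omitted, burst size arbitrary)
assumes only the generic rte_eth_rx_burst/rte_eth_tx_burst calls, which dispatch to
vmxnet3_recv_pkts and vmxnet3_xmit_pkts when the port is a VMXNET3 device.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32 /* arbitrary burst size for this sketch */

    /* Poll queue 0 of the given port and send every received packet back
     * out of the same port, freeing whatever the TX ring cannot accept. */
    static void
    forward_loop(uint8_t port_id)
    {
            struct rte_mbuf *pkts[BURST_SIZE];
            uint16_t nb_rx, nb_tx;

            for (;;) {
                    nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
                    if (nb_rx == 0)
                            continue;
                    nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
                    while (nb_tx < nb_rx)
                            rte_pktmbuf_free(pkts[nb_tx++]);
            }
    }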
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This provides a para-virtualization packet switching solution, based on the
Xen hypervisor’s Grant Table, which provides simple and fast packet
switching between guest domains and the host domain based on
MAC address or VLAN tag.
This solution comprises two components: a Poll Mode Driver (PMD)
as the front end in the guest domain and a switching back end in the
host domain. XenStore is used to exchange configuration information
between the PMD front end and the switching back end,
including grant reference IDs for shared Virtio RX/TX rings, MAC
address, device state, and so on.
The front end PMD can be found in the Intel DPDK directory
lib/librte_pmd_xenvirt and the back end example in examples/vhost_xen.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
These library changes provide a new Intel DPDK feature for communicating
with virtual machines using QEMU's IVSHMEM mechanism.
The feature works by providing a command line for QEMU to map several hugepages
into a single IVSHMEM device. For the guest to know what is inside any given IVSHMEM
device (and to distinguish between Intel(R) DPDK and non-Intel(R) DPDK IVSHMEM
devices), a metadata file is also mapped into the IVSHMEM segment. No work needs to
be done by the guest application to map IVSHMEM devices into memory; they are
automatically recognized by the Intel(R) DPDK Environment Abstraction Layer (EAL).
Changes in this patch:
* Changes to EAL to allow mapping of all hugepages in a memseg into a single file
* Changes to EAL to allow ivshmem devices to be transparently mapped into
the process running on the guest.
* New ivshmem library to create and manage metadata exported to guest VMs
* New ivshmem compilation targets
* Mempool and ring changes to allow export of structures to a VM and allow
a VM to attach to those structures.
* New autotests to unit test this functionality.
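On the host side, exporting DPDK objects to a guest is expected to look roughly like
the sketch below. The rte_ivshmem_metadata_* names and signatures are an assumption
about the new library's API, not taken verbatim from this patch.

    #include <stdio.h>
    #include <rte_ring.h>
    #include <rte_mempool.h>
    #include <rte_ivshmem.h>

    /* Hypothetical host-side setup: group a ring and a mempool into one
     * metadata file and generate the QEMU command line fragment that maps
     * the backing hugepages into the guest as a single IVSHMEM device. */
    static int
    export_objects_to_guest(struct rte_ring *ring, struct rte_mempool *mp)
    {
            char cmdline[1024];

            if (rte_ivshmem_metadata_create("guest_md") < 0)
                    return -1;
            if (rte_ivshmem_metadata_add_ring(ring, "guest_md") < 0)
                    return -1;
            if (rte_ivshmem_metadata_add_mempool(mp, "guest_md") < 0)
                    return -1;
            /* Assumed helper that emits the -device option to pass to QEMU. */
            if (rte_ivshmem_metadata_cmdline_generate(cmdline, sizeof(cmdline),
                            "guest_md") < 0)
                    return -1;

            printf("QEMU option: %s\n", cmdline);
            return 0;
    }

Inside the guest, the exported ring and mempool would then be found with the usual
rte_ring_lookup()/rte_mempool_lookup() calls, since the EAL maps the device
automatically.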
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>