doc: nics guide

Create nics guide by moving chapters about Intel and Mellanox NICs.

Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Siobhan Butler <siobhan.a.butler@intel.com>
Thomas Monjalon 2015-01-31 23:06:06 +01:00
parent 08558763b7
commit 972e365bfe
23 changed files with 281 additions and 242 deletions


@ -213,15 +213,20 @@ F: lib/librte_pmd_enic/
Intel e1000
F: lib/librte_pmd_e1000/
F: doc/guides/nics/e1000em.rst
F: doc/guides/nics/intel_vf.rst
Intel ixgbe
M: Helin Zhang <helin.zhang@intel.com>
M: Konstantin Ananyev <konstantin.ananyev@intel.com>
F: lib/librte_pmd_ixgbe/
F: doc/guides/nics/ixgbe.rst
F: doc/guides/nics/intel_vf.rst
Intel i40e
M: Helin Zhang <helin.zhang@intel.com>
F: lib/librte_pmd_i40e/
F: doc/guides/nics/intel_vf.rst
Intel fm10k
M: Jing Chen <jing.d.chen@intel.com>
@ -230,12 +235,12 @@ F: lib/librte_pmd_fm10k/
Mellanox mlx4
M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
F: lib/librte_pmd_mlx4/
F: doc/guides/prog_guide/mlx4_poll_mode_drv.rst
F: doc/guides/nics/mlx4.rst
RedHat virtio
M: Changchun Ouyang <changchun.ouyang@intel.com>
F: lib/librte_pmd_virtio/
F: doc/guides/prog_guide/poll_mode_drv_emulated_virtio_nic.rst
F: doc/guides/nics/virtio.rst
F: lib/librte_vhost/
F: doc/guides/prog_guide/vhost_lib.rst
F: examples/vhost/
@ -244,18 +249,18 @@ F: doc/guides/sample_app_ug/vhost.rst
VMware vmxnet3
M: Yong Wang <yongwang@vmware.com>
F: lib/librte_pmd_vmxnet3/
F: doc/guides/prog_guide/poll_mode_drv_paravirtual_vmxnets_nic.rst
F: doc/guides/nics/vmxnet3.rst
PCAP PMD
M: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
M: John McNamara <john.mcnamara@intel.com>
F: lib/librte_pmd_pcap/
F: doc/guides/prog_guide/libpcap_ring_based_poll_mode_drv.rst
F: doc/guides/nics/pcap_ring.rst
Ring PMD
M: Bruce Richardson <bruce.richardson@intel.com>
F: lib/librte_pmd_ring/
F: doc/guides/prog_guide/ring_lib.rst
F: doc/guides/nics/pcap_ring.rst
F: app/test/test_pmd_ring.c
Null PMD


@ -41,6 +41,7 @@ Contents:
freebsd_gsg/index
xen/index
prog_guide/index
nics/index
sample_app_ug/index
testpmd_app_ug/index
rel_notes/index

(Eleven binary image files were renamed with identical before and after sizes; the image content is unchanged.)
doc/guides/nics/index.rst Normal file

@ -0,0 +1,63 @@
.. BSD LICENSE
Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* Neither the name of Intel Corporation nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Network Interface Controller Drivers
====================================
|today|
**Contents**
.. toctree::
:maxdepth: 3
:numbered:
e1000em
ixgbe
intel_vf
mlx4
virtio
vmxnet3
pcap_ring
**Figures**
:ref:`Figure 1. Virtualization for a Single Port NIC in SR-IOV Mode <nic_figure_1>`
:ref:`Figure 2. SR-IOV Performance Benchmark Setup <nic_figure_2>`
:ref:`Figure 3. Fast Host-based Packet Processing <nic_figure_3>`
:ref:`Figure 4. SR-IOV Inter-VM Communication <nic_figure_4>`
:ref:`Figure 5. Virtio Host2VM Communication Example Using KNI vhost Back End <nic_figure_5>`
:ref:`Figure 6. Virtio Host2VM Communication Example Using Qemu vhost Back End <nic_figure_6>`


@ -72,13 +72,11 @@ For more detail on SR-IOV, please refer to the following documents:
* `Scalable I/O Virtualized Servers <http://www.intel.com/content/www/us/en/virtualization/server-virtualization/scalable-i-o-virtualized-servers-paper.html>`_
.. _pg_figure_10:
.. _nic_figure_1:
**Figure 10. Virtualization for a Single Port NIC in SR-IOV Mode**
**Figure 1. Virtualization for a Single Port NIC in SR-IOV Mode**
.. image24_png has been renamed
|single_port_nic|
.. image:: img/single_port_nic.*
Physical and Virtual Function Infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -514,13 +512,11 @@ The setup procedure is as follows:
* The Virtual Machine Monitor (see Figure 11) is equivalent to a Host OS with KVM installed as described in the instructions.
.. _pg_figure_11:
.. _nic_figure_2:
**Figure 11. Performance Benchmark Setup**
**Figure 2. Performance Benchmark Setup**
.. image25_png has been renamed
|perf_benchmark|
.. image:: img/perf_benchmark.*
DPDK SR-IOV PMD PF/VF Driver Usage Model
----------------------------------------
@ -538,13 +534,11 @@ DPI can be offloaded on the host fast path.
Figure 12 shows the scenario where some VMs directly communicate externally via VFs,
while others connect to a virtual switch and share the same uplink bandwidth.
.. _pg_figure_12:
.. _nic_figure_3:
**Figure 12. Fast Host-based Packet Processing**
**Figure 3. Fast Host-based Packet Processing**
.. image26_png has been renamed
|fast_pkt_proc|
.. image:: img/fast_pkt_proc.*
SR-IOV (PF/VF) Approach for Inter-VM Communication
--------------------------------------------------
@ -566,18 +560,8 @@ that is, the packet is forwarded to the correct PF pool.
The SR-IOV NIC switch forwards the packet to a specific VM according to the MAC destination address
which belongs to the destination VF on the VM.
.. _pg_figure_13:
.. _nic_figure_4:
**Figure 13. Inter-VM Communication**
**Figure 4. Inter-VM Communication**
.. image27_png has been renamed
|inter_vm_comms|
.. |perf_benchmark| image:: img/perf_benchmark.*
.. |single_port_nic| image:: img/single_port_nic.*
.. |inter_vm_comms| image:: img/inter_vm_comms.*
.. |fast_pkt_proc| image:: img/fast_pkt_proc.*
.. image:: img/inter_vm_comms.*

doc/guides/nics/ixgbe.rst Normal file

@ -0,0 +1,184 @@
.. BSD LICENSE
Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* Neither the name of Intel Corporation nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
IXGBE Driver
============
Vector PMD for IXGBE
--------------------
Vector PMD uses Intel® SIMD instructions to optimize packet I/O.
It improves the load/store bandwidth efficiency of the L1 data cache by using wider SSE/AVX registers (1).
The wider registers hold multiple packet buffers, which reduces the instruction count when processing packets in bulk.
There is no change to the PMD API. The RX/TX handlers are the only two entry points for vPMD packet I/O.
They are transparently registered at runtime if all condition checks pass.
1. To date, only an SSE version of the IXGBE vPMD is available.
To include vPMD in the binary, set the option CONFIG_RTE_IXGBE_INC_VECTOR=y in the configuration file.
Some constraints apply as pre-conditions for specific optimizations on bulk packet transfers.
The following sections explain RX and TX constraints in the vPMD.
RX Constraints
~~~~~~~~~~~~~~
Prerequisites and Pre-conditions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following prerequisites apply:
* To enable vPMD to work for RX, bulk allocation for Rx must be allowed.
* The RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y configuration MACRO must be set before compiling the code.
Ensure that the following pre-conditions are satisfied:
* rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST
* rxq->rx_free_thresh < rxq->nb_rx_desc
* (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
* rxq->nb_rx_desc < (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST)
These conditions are checked in the code.
Scattered packets are not supported in this mode.
If an incoming packet is larger than the data size of a single "mbuf" (2 KB by default),
RX vPMD is disabled.
By default, IXGBE_MAX_RING_DESC is set to 4096 and RTE_PMD_IXGBE_RX_MAX_BURST is set to 32.
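For readers who prefer code, the four pre-conditions above can be summarised as a single predicate.
The sketch below is illustrative only: the structure and function names are invented, not taken from the driver source, and only the two queue fields referenced above are modelled.

.. code-block:: c

    #include <stdbool.h>
    #include <stdint.h>

    /* Default values, as documented above. */
    #define IXGBE_MAX_RING_DESC        4096
    #define RTE_PMD_IXGBE_RX_MAX_BURST 32

    /* Minimal stand-in for the PMD RX queue state (illustrative only). */
    struct rx_queue {
        uint16_t rx_free_thresh;
        uint16_t nb_rx_desc;
    };

    /* Returns true when all four pre-conditions listed above hold. */
    static bool
    rx_queue_vec_allowed(const struct rx_queue *rxq)
    {
        return rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST &&
               rxq->rx_free_thresh < rxq->nb_rx_desc &&
               (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0 &&
               rxq->nb_rx_desc < (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST);
    }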
Features not Supported by RX Vector PMD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To maximize throughput, the vPMD does not support the following features:
* IEEE1588
* FDIR
* Header split
* RX checksum offload
Other features are supported using optional MACRO configuration. They include:
* HW VLAN strip
* HW extended dual VLAN
These are enabled with RX_OLFLAGS (RTE_IXGBE_RX_OLFLAGS_DISABLE=n).
To guarantee these constraints, the following configuration flags in dev_conf.rxmode are checked:
* hw_vlan_strip
* hw_vlan_extend
* hw_ip_checksum
* header_split
In addition, fdir_conf->mode in dev_conf will also be checked.
RX Burst Size
^^^^^^^^^^^^^
As vPMD is focused on high throughput, it assumes that the RX burst size is equal to or greater than 32.
The receive handler returns zero if it is asked for fewer than 32 packets (nb_pkts < 32).
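As a usage sketch (not taken from any sample application), a receive loop that satisfies this requirement could look as follows; the port and queue numbers are illustrative and the received packets are simply freed.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define VPMD_RX_BURST 32  /* must be >= 32 for the RX vector path */

    /* Drain one burst from queue 0 of the given port; a sketch only. */
    static uint16_t
    poll_port(uint8_t port_id)
    {
        struct rte_mbuf *pkts[VPMD_RX_BURST];
        uint16_t nb_rx, i;

        /* Requesting fewer than 32 packets would make the vector RX
         * handler return zero, so always ask for a full burst. */
        nb_rx = rte_eth_rx_burst(port_id, 0, pkts, VPMD_RX_BURST);

        for (i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]); /* process/forward in a real application */

        return nb_rx;
    }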
TX Constraint
~~~~~~~~~~~~~
Prerequisite
^^^^^^^^^^^^
The only prerequisite is related to tx_rs_thresh.
The tx_rs_thresh value must be greater than or equal to RTE_PMD_IXGBE_TX_MAX_BURST,
but less than or equal to RTE_IXGBE_TX_MAX_FREE_BUF_SZ.
Consequently, by default the tx_rs_thresh value is in the range 32 to 64.
Features not Supported by TX Vector PMD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TX vPMD only works when txq_flags is set to IXGBE_SIMPLE_FLAGS.
This means that it does not support multi-segment TX, VLAN offload or TX checksum offload.
The following MACROs are used for these three features (see the sketch after this list):
* ETH_TXQ_FLAGS_NOMULTSEGS
* ETH_TXQ_FLAGS_NOVLANOFFL
* ETH_TXQ_FLAGS_NOXSUMSCTP
* ETH_TXQ_FLAGS_NOXSUMUDP
* ETH_TXQ_FLAGS_NOXSUMTCP
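For illustration, an application could request this simple TX path at queue-setup time as in the sketch below.
The function is hypothetical; the flag macros are the ones listed above, and this flag combination corresponds to the --txqflags=0xf01 value used in the testpmd examples below.

.. code-block:: c

    #include <rte_ethdev.h>

    /* Illustrative sketch: clear multi-segment, VLAN and checksum offloads
     * so that the PMD can select its simple (vector) TX path.
     * Descriptor count and queue id are example values. */
    static int
    setup_simple_txq(uint8_t port_id, uint16_t nb_txd)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;

        rte_eth_dev_info_get(port_id, &dev_info);
        txconf = dev_info.default_txconf;           /* keep the PMD's defaults */
        txconf.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
                           ETH_TXQ_FLAGS_NOVLANOFFL |
                           ETH_TXQ_FLAGS_NOXSUMSCTP |
                           ETH_TXQ_FLAGS_NOXSUMUDP  |
                           ETH_TXQ_FLAGS_NOXSUMTCP;

        return rte_eth_tx_queue_setup(port_id, 0, nb_txd,
                                      rte_eth_dev_socket_id(port_id), &txconf);
    }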
Sample Application Notes
~~~~~~~~~~~~~~~~~~~~~~~~
testpmd
^^^^^^^
By default, using CONFIG_RTE_IXGBE_RX_OLFLAGS_DISABLE=n:
.. code-block:: console
./x86_64-native-linuxapp-gcc/app/testpmd -c 300 -n 4 -- -i --burst=32 --rxfreet=32 --mbcache=250 --txpt=32 --rxht=8 --rxwt=0 --txfreet=32 --txrst=32 --txqflags=0xf01
When CONFIG_RTE_IXGBE_RX_OLFLAGS_DISABLE=y, better performance can be achieved:
.. code-block:: console
./x86_64-native-linuxapp-gcc/app/testpmd -c 300 -n 4 -- -i --burst=32 --rxfreet=32 --mbcache=250 --txpt=32 --rxht=8 --rxwt=0 --txfreet=32 --txrst=32 --txqflags=0xf01 --disable-hw-vlan
If scatter-gather lists are not required, set CONFIG_RTE_MBUF_SCATTER_GATHER=n for better throughput.
l3fwd
^^^^^
When running l3fwd with vPMD, ensure that port_conf.rxmode.hw_ip_checksum=0 in the configuration;
otherwise RX vPMD is disabled.
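As a hedged illustration (only the fields mentioned here are shown; the remaining rxmode fields are application specific), the relevant part of such a port configuration could look like:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Sketch of a port configuration that keeps RX vPMD enabled:
     * hardware IP checksum offload must stay disabled. */
    static const struct rte_eth_conf port_conf = {
        .rxmode = {
            .hw_ip_checksum = 0, /* 1 would disable the RX vector path */
            .header_split   = 0, /* header split is also unsupported by vPMD */
        },
    };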
load_balancer
^^^^^^^^^^^^^
As with l3fwd, configure port_conf.rxmode.hw_ip_checksum=0 to enable vPMD.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.


@ -199,9 +199,7 @@ Multiple devices may be specified, separated by commas.
Telling cores to stop...
Waiting for lcores to finish...
.. image38_png has been renamed
|forward_stats|
.. image:: img/forward_stats.*
.. code-block:: console
@ -267,5 +265,3 @@ while the rte_ring specific functions are direct function calls in the code and
by calling rte_eth_dev_configure() to set the number of receive and transmit queues,
then calling rte_eth_rx_queue_setup() / tx_queue_setup() for each of those queues and
finally calling rte_eth_dev_start() to allow transmission and reception of packets to begin.
.. |forward_stats| image:: img/forward_stats.*
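A minimal sketch of that call sequence is shown below; error handling is reduced, and the function name, queue counts and descriptor ring sizes are illustrative rather than taken from the PMD documentation.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Bring up one RX and one TX queue on a port; values are illustrative. */
    static int
    start_port(uint8_t port_id, struct rte_mempool *mbuf_pool)
    {
        static const struct rte_eth_conf port_conf; /* all-defaults config */

        if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0)
            return -1;
        if (rte_eth_rx_queue_setup(port_id, 0, 128,
                                   rte_eth_dev_socket_id(port_id),
                                   NULL, mbuf_pool) < 0)
            return -1;
        if (rte_eth_tx_queue_setup(port_id, 0, 512,
                                   rte_eth_dev_socket_id(port_id), NULL) < 0)
            return -1;
        return rte_eth_dev_start(port_id);
    }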


@ -106,13 +106,11 @@ Virtio with kni vhost Back End
This section demonstrates an example setup of the kni vhost back end for Phy-VM communication.
.. _pg_figure_14:
.. _nic_figure_5:
**Figure 14. Host2VM Communication Example Using kni vhost Back End**
**Figure 5. Host2VM Communication Example Using kni vhost Back End**
.. image29_png has been renamed
|host_vm_comms|
.. image:: img/host_vm_comms.*
Host2VM communication example
@ -176,9 +174,7 @@ Host2VM communication example
We use testpmd as the forwarding application in this example.
.. image30_png has been renamed
|console|
.. image:: img/console.*
#. Use IXIA packet generator to inject a packet stream into the KNI physical port.
@ -189,13 +185,11 @@ Host2VM communication example
Virtio with qemu virtio Back End
--------------------------------
.. _pg_figure_15:
.. _nic_figure_6:
**Figure 15. Host2VM Communication Example Using qemu vhost Back End**
**Figure 6. Host2VM Communication Example Using qemu vhost Back End**
.. image31_png has been renamed
|host_vm_comms_qemu|
.. image:: img/host_vm_comms_qemu.*
.. code-block:: console
@ -213,9 +207,3 @@ In this example, the packet reception flow path is:
The packet transmission flow is:
IXIA packet generator-> Guest VM 82599 VF port1 rx burst-> Guest VM virtio port 0 tx burst-> tap -> Linux Bridge->82599 PF-> IXIA packet generator
.. |host_vm_comms| image:: img/host_vm_comms.*
.. |console| image:: img/console.*
.. |host_vm_comms_qemu| image:: img/host_vm_comms_qemu.*


@ -121,9 +121,7 @@ The following prerequisites apply:
* Before starting a VM, a VMXNET3 interface must be assigned to the VM through the VMware vSphere Client.
This is shown in the figure below.
.. image32_png has been renamed
|vmxnet3_int|
.. image:: img/vmxnet3_int.*
.. note::
@ -144,9 +142,7 @@ VMXNET3 with a Native NIC Connected to a vSwitch
This section describes an example setup for Phy-vSwitch-VM-Phy communication.
.. image33_png has been renamed
|vswitch_vm|
.. image:: img/vswitch_vm.*
.. note::
@ -163,9 +159,7 @@ VMXNET3 Chaining VMs Connected to a vSwitch
The following figure shows an example VM-to-VM communication over a Phy-VM-vSwitch-VM-Phy communication channel.
.. image34_png has been renamed
|vm_vm_comms|
.. image:: img/vm_vm_comms.*
.. note::
@ -176,9 +170,3 @@ In this example, the packet flow path is:
Packet generator -> 82599 VF -> Guest VM 82599 port 0 rx burst -> Guest VM VMXNET3 port 1 tx burst -> VMXNET3
device -> VMware ESXi vSwitch -> VMXNET3 device -> Guest VM VMXNET3 port 0 rx burst -> Guest VM 82599 VF port 1 tx burst -> 82599 VF -> Packet generator
.. |vm_vm_comms| image:: img/vm_vm_comms.*
.. |vmxnet3_int| image:: img/vmxnet3_int.*
.. |vswitch_vm| image:: img/vswitch_vm.*


@ -48,14 +48,8 @@ Programmer's Guide
mempool_lib
mbuf_lib
poll_mode_drv
i40e_ixgbe_igb_virt_func_drv
driver_vm_emul_dev
ivshmem_lib
poll_mode_drv_emulated_virtio_nic
poll_mode_drv_paravirtual_vmxnets_nic
libpcap_ring_based_poll_mode_drv
link_bonding_poll_mode_drv_lib
mlx4_poll_mode_drv
timer_lib
hash_lib
lpm_lib
@ -104,18 +98,6 @@ Programmer's Guide
:ref:`Figure 9. An mbuf with Three Segments <pg_figure_9>`
:ref:`Figure 10. Virtualization for a Single Port NIC in SR-IOV Mode <pg_figure_10>`
:ref:`Figure 11. Performance Benchmark Setup <pg_figure_11>`
:ref:`Figure 12. Fast Host-based Packet Processing <pg_figure_12>`
:ref:`Figure 13. Inter-VM Communication <pg_figure_13>`
:ref:`Figure 14. Host2VM Communication Example Using kni vhost Back End <pg_figure_14>`
:ref:`Figure 15. Host2VM Communication Example Using qemu vhost Back End <pg_figure_15>`
:ref:`Figure 16. Memory Sharing in the Intel® DPDK Multi-process Sample Application <pg_figure_16>`
:ref:`Figure 17. Components of an Intel® DPDK KNI Application <pg_figure_17>`


@ -288,155 +288,3 @@ Ethernet Device API
~~~~~~~~~~~~~~~~~~~
The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK API Reference*.
(The "Vector PMD for IXGBE" section removed here is identical, line for line, to the content added in doc/guides/nics/ixgbe.rst above.)