..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Intel Corporation.

I40E Poll Mode Driver
======================

The i40e PMD (**librte_net_i40e**) provides poll mode driver support for
10/25/40 Gbps Intel® Ethernet 700 Series Network Adapters based on
the Intel Ethernet Controller X710/XL710/XXV710 and the Intel Ethernet
Connection X722 (which supports only a subset of the features).

Features
--------

Features of the i40e PMD are:

- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
- MAC/VLAN filtering
- Packet type information
- Flow director
- Cloud filter
- Checksum offload
- VLAN/QinQ stripping and inserting
- TSO offload
- Promiscuous mode
- Multicast mode
- Port hardware statistics
- Jumbo frames
- Link state information
- Link flow control
- Mirror on port, VLAN and VSI
- Interrupt mode for RX
- Scatter and gather for TX and RX
- Vector Poll mode driver
- DCB
- VMDQ
- SR-IOV VF
- Hot plug
- IEEE 1588/802.1AS timestamping
- VF Daemon (VFD) - EXPERIMENTAL
- Dynamic Device Personalization (DDP)
- Queue region configuration
- Virtual Function Port Representors
- Malicious Device Driver event catch and notify
- Generic flow API

Linux Prerequisites
-------------------

- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
  <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ and `Intel® Ethernet NVM Update Tool: Quick Usage Guide for EFI <https://www.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-efi-usage-guide.html>`_ if needed.

- For information about supported media, please refer to this document: `Intel® Ethernet Controller X710/XXV710/XL710 Feature Support Matrix
  <http://www.intel.com/content/dam/www/public/us/en/documents/release-notes/xl710-ethernet-controller-feature-matrix.pdf>`_.

.. note::

   * Some adapters based on the Intel(R) Ethernet Controller 700 Series only
     support Intel Ethernet Optics modules. On these adapters, other modules are not
     supported and will not function.

   * For connections based on Intel(R) Ethernet Controller 700 Series,
     support is dependent on your system board. Please see your vendor for details.

   * In all cases Intel recommends using Intel Ethernet Optics; other modules
     may function but are not validated by Intel. Contact Intel for supported media types.

Windows Prerequisites
---------------------

- Follow the :doc:`guide for Windows <../windows_gsg/run_apps>`
  to setup the basic DPDK environment.

- Identify the Intel® Ethernet adapter and get the latest NVM/FW version.

- To access any Intel® Ethernet hardware, load the NetUIO driver in place of the existing built-in (inbox) driver.

- To load the NetUIO driver, follow the steps mentioned in the `dpdk-kmods repository
  <https://git.dpdk.org/dpdk-kmods/tree/windows/netuio/README.rst>`_.

Recommended Matching List
-------------------------

It is highly recommended to upgrade the i40e kernel driver and firmware
to avoid compatibility issues with the i40e PMD.
The tables below list the suggested matching versions, which have been tested and verified.
For more detail, refer to the Tested Platforms/Tested NICs chapter in the release notes.

For X710/XL710/XXV710,

   +--------------+-----------------------+------------------+
   | DPDK version | Kernel driver version | Firmware version |
   +==============+=======================+==================+
   |    22.11     |        2.20.12        |       9.01       |
   +--------------+-----------------------+------------------+
   |    22.07     |        2.19.3         |       8.70       |
   +--------------+-----------------------+------------------+
   |    22.03     |        2.17.15        |       8.30       |
   +--------------+-----------------------+------------------+
   |    21.11     |        2.17.4         |       8.30       |
   +--------------+-----------------------+------------------+
   |    21.08     |        2.15.9         |       8.30       |
   +--------------+-----------------------+------------------+
   |    21.05     |        2.15.9         |       8.30       |
   +--------------+-----------------------+------------------+
   |    21.02     |        2.14.13        |       8.00       |
   +--------------+-----------------------+------------------+
   |    20.11     |        2.14.13        |       8.00       |
   +--------------+-----------------------+------------------+
   |    20.08     |        2.12.6         |       7.30       |
   +--------------+-----------------------+------------------+
   |    20.05     |        2.11.27        |       7.30       |
   +--------------+-----------------------+------------------+
   |    20.02     |        2.10.19        |       7.20       |
   +--------------+-----------------------+------------------+
   |    19.11     |        2.9.21         |       7.00       |
   +--------------+-----------------------+------------------+
   |    19.08     |        2.8.43         |       7.00       |
   +--------------+-----------------------+------------------+
   |    19.05     |        2.7.29         |       6.80       |
   +--------------+-----------------------+------------------+
   |    19.02     |        2.7.26         |       6.80       |
   +--------------+-----------------------+------------------+
   |    18.11     |         2.4.6         |       6.01       |
   +--------------+-----------------------+------------------+
   |    18.08     |         2.4.6         |       6.01       |
   +--------------+-----------------------+------------------+
   |    18.05     |         2.4.6         |       6.01       |
   +--------------+-----------------------+------------------+
   |    18.02     |         2.4.3         |       6.01       |
   +--------------+-----------------------+------------------+
   |    17.11     |        2.1.26         |       6.01       |
   +--------------+-----------------------+------------------+
   |    17.08     |        2.0.19         |       6.01       |
   +--------------+-----------------------+------------------+
   |    17.05     |        1.5.23         |       5.05       |
   +--------------+-----------------------+------------------+
   |    17.02     |        1.5.23         |       5.05       |
   +--------------+-----------------------+------------------+
   |    16.11     |        1.5.23         |       5.05       |
   +--------------+-----------------------+------------------+
   |    16.07     |        1.4.25         |       5.04       |
   +--------------+-----------------------+------------------+
   |    16.04     |        1.4.25         |       5.02       |
   +--------------+-----------------------+------------------+

For X722,

   +--------------+-----------------------+------------------+
   | DPDK version | Kernel driver version | Firmware version |
   +==============+=======================+==================+
   |    22.11     |        2.20.12        |       6.00       |
   +--------------+-----------------------+------------------+
   |    22.07     |        2.19.3         |       5.60       |
   +--------------+-----------------------+------------------+
   |    22.03     |        2.17.15        |       5.50       |
   +--------------+-----------------------+------------------+
   |    21.11     |        2.17.4         |       5.30       |
   +--------------+-----------------------+------------------+
   |    21.08     |        2.15.9         |       5.30       |
   +--------------+-----------------------+------------------+
   |    21.05     |        2.15.9         |       5.30       |
   +--------------+-----------------------+------------------+
   |    21.02     |        2.14.13        |       5.00       |
   +--------------+-----------------------+------------------+
   |    20.11     |        2.13.10        |       5.00       |
   +--------------+-----------------------+------------------+
   |    20.08     |        2.12.6         |       4.11       |
   +--------------+-----------------------+------------------+
   |    20.05     |        2.11.27        |       4.11       |
   +--------------+-----------------------+------------------+
   |    20.02     |        2.10.19        |       4.11       |
   +--------------+-----------------------+------------------+
   |    19.11     |        2.9.21         |       4.10       |
   +--------------+-----------------------+------------------+
   |    19.08     |        2.9.21         |       4.10       |
   +--------------+-----------------------+------------------+
   |    19.05     |        2.7.29         |       3.33       |
   +--------------+-----------------------+------------------+
   |    19.02     |        2.7.26         |       3.33       |
   +--------------+-----------------------+------------------+
   |    18.11     |         2.4.6         |       3.33       |
   +--------------+-----------------------+------------------+

Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config/rte_config.h`` file.

- ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)

  Number of queues reserved for PF.

- ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)

  Number of queues reserved for each VMDQ Pool.

Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~

- ``Reserved number of Queues per VF`` (default ``4``)

  The number of reserved queues per VF is determined by its host PF. If the
  PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
  VF can be configured with an EAL parameter like ``-a aaaa:bb.cc,queue-num-per-vf=n``.
  The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
  number of reserved queues per VF is 4 by default. If a VF requests more than
  the reserved number of queues, the PF is able to allocate at most 16 queues
  to it after a VF reset.
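
  For example, to reserve 16 queues per VF (the PF PCI address ``aaaa:bb.cc``
  is illustrative only)::

    -a aaaa:bb.cc,queue-num-per-vf=16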

- ``Support multiple driver`` (default ``disable``)

  There was a multiple driver support issue when using a 700 Series Ethernet
  Adapter with both the Linux kernel and the DPDK PMD. To fix this issue, the
  ``devargs`` parameter ``support-multi-driver`` was introduced, for example::

    -a 84:00.0,support-multi-driver=1

  With the above configuration, the DPDK PMD will not change global registers, and
  will switch the PF interrupt from IntN to Int0 to avoid interrupt conflicts between
  DPDK and the Linux kernel.

- ``Support VF Port Representor`` (default ``not enabled``)

  The i40e PF PMD supports the creation of VF port representors for the control
  and monitoring of i40e virtual function devices. Each port representor
  corresponds to a single virtual function of that device. Using the ``devargs``
  option ``representor`` the user can specify which virtual functions to create
  port representors for on initialization of the PF PMD by passing the VF IDs of
  the VFs which are required, for example::

    -a DBDF,representor=[0,1,4]

  Currently hot-plugging of representor ports is not supported so all required
  representors must be specified at the creation of the PF.

- ``Enable validation for VF message`` (default ``not enabled``)

  The PF counts messages from each VF. If, within any period of seconds, the
  number of messages from a VF exceeds the maximum limit, the PF will ignore
  any new message from that VF for a number of seconds.
  The format is ``maximal-message@period-seconds:ignore-seconds``,
  for example::

    -a 84:00.0,vf_msg_cfg=80@120:180

Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~

For Vector RX it is assumed that the number of descriptor rings will be a power
of 2. With this precondition, the ring pointer can easily wrap back to the
head after hitting the tail without a conditional check. In addition, Vector RX
can use this assumption to mask the ring index with ``ring_size - 1``.
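
As an illustrative sketch (not the driver's actual code), the power-of-2
assumption lets the index advance with a bit mask instead of a conditional
wrap check:

.. code-block:: c

   #include <stdint.h>

   /* ring_size must be a power of 2 for the mask trick to be valid. */
   static inline uint16_t
   ring_next(uint16_t idx, uint16_t ring_size)
   {
       /* Equivalent to (idx + 1) % ring_size, but without a branch or a
        * division: works because ring_size - 1 is an all-ones bit mask.
        */
       return (idx + 1) & (ring_size - 1);
   }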

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

SR-IOV: Prerequisites and sample Application Notes
--------------------------------------------------

#. Load the kernel module:

   .. code-block:: console

      modprobe i40e

   Check the output in dmesg:

   .. code-block:: console

      i40e 0000:83:00.1 ens802f0: renamed from eth0

#. Bring up the PF ports:

   .. code-block:: console

      ifconfig ens802f0 up

#. Create VF device(s):

   Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry
   of the parent PF.

   Example:

   .. code-block:: console

      echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs

#. Assign VF MAC address:

   Assign a MAC address to the VF using the iproute2 utility. The syntax is:

   .. code-block:: console

      ip link set <PF netdev id> vf <VF id> mac <macaddr>

   Example:

   .. code-block:: console

      ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0

#. Assign VF to VM, and bring up the VM.
   Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.

#. Running testpmd:

   Follow the instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ...
      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL: probe driver: 8086:1572 rte_i40e_pmd
      EAL: PCI memory mapped at 0x7f7f80000000
      EAL: PCI memory mapped at 0x7f7f80800000
      PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      ...

      PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
      satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.

      ...
      Port 0: 68:05:CA:26:85:84
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Done

      testpmd>

Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

The VLAN filter only works when promiscuous mode is off.

Start ``testpmd``, and add VLAN 10 to port 0:

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-15 -n 4 -- -i --forward-mode=mac
   ...

   testpmd> set promisc 0 off
   testpmd> rx_vlan add 10 0

Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
The Flow Director filters can match different fields of different packet types: the flow type, a specific input set per flow type, and the flexible payload.

The default input set of each flow type is::

   ipv4-other : src_ip_address, dst_ip_address
   ipv4-frag  : src_ip_address, dst_ip_address
   ipv4-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   ipv6-other : src_ip_address, dst_ip_address
   ipv6-frag  : src_ip_address, dst_ip_address
   ipv6-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   l2_payload : ether_type

The flex payload is selected from offset 0 to 15 of the packet's payload by default, while it is masked out from matching.

Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-15 -n 4 -- -i --disable-rss \
                  --pkt-filter-mode=perfect --rxq=8 --txq=8 --nb-cores=8 \
                  --nb-ports=1

Add a rule to direct an ``ipv4-udp`` packet whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.3 \
            dst is 2.2.2.5 / udp src is 32 dst is 32 / end \
            actions mark id 1 / queue index 1 / end
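
The equivalent rule can also be created from an application through the
generic rte_flow API. A minimal sketch (the helper name, port ID and error
handling are simplified for illustration):

.. code-block:: c

   #include <rte_flow.h>
   #include <rte_ip.h>
   #include <rte_byteorder.h>

   /* Direct ipv4-udp 2.2.2.3:32 -> 2.2.2.5:32 to queue 1, mark id 1. */
   static int
   add_fdir_rule(uint16_t port_id)
   {
       struct rte_flow_error err;
       struct rte_flow_attr attr = { .ingress = 1 };

       struct rte_flow_item_ipv4 ip_spec = {
           .hdr.src_addr = RTE_BE32(RTE_IPV4(2, 2, 2, 3)),
           .hdr.dst_addr = RTE_BE32(RTE_IPV4(2, 2, 2, 5)),
       };
       struct rte_flow_item_udp udp_spec = {
           .hdr.src_port = RTE_BE16(32),
           .hdr.dst_port = RTE_BE16(32),
       };
       /* Default masks apply, i.e. full addresses/ports are matched. */
       struct rte_flow_item pattern[] = {
           { .type = RTE_FLOW_ITEM_TYPE_ETH },
           { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec },
           { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_spec },
           { .type = RTE_FLOW_ITEM_TYPE_END },
       };

       struct rte_flow_action_mark mark = { .id = 1 };
       struct rte_flow_action_queue queue = { .index = 1 };
       struct rte_flow_action actions[] = {
           { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
           { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
           { .type = RTE_FLOW_ACTION_TYPE_END },
       };

       struct rte_flow *flow =
           rte_flow_create(port_id, &attr, pattern, actions, &err);
       return flow == NULL ? -1 : 0;
   }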

Check the flow director status:

.. code-block:: console

   testpmd> show port fdir 0

   ######################## FDIR infos for port 0 ####################
     MODE:   PERFECT
     SUPPORTED FLOW TYPE:  ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
                           ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
                           l2_payload
     FLEX PAYLOAD INFO:
     max_len:       16          payload_limit: 480
     payload_unit:  2           payload_seg:   3
     bitmask_unit:  2           bitmask_num:   2
     MASK:
       vlan_tci: 0x0000,
       src_ipv4: 0x00000000,
       dst_ipv4: 0x00000000,
       src_port: 0x0000,
       dst_port: 0x0000
       src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
       dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
     FLEX PAYLOAD SRC OFFSET:
       L2_PAYLOAD:    0      1      2      3      4      5      6  ...
       L3_PAYLOAD:    0      1      2      3      4      5      6  ...
       L4_PAYLOAD:    0      1      2      3      4      5      6  ...
     FLEX MASK CFG:
       ipv4-udp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-tcp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-sctp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-other:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-frag:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-udp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-tcp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-sctp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-other:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-frag:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       l2_payload:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     guarant_count: 1           best_count:    0
     guarant_space: 512         best_space:    7168
     collision:     0           free:          0
     maxhash:       0           maxlen:        0
     add:           0           remove:        0
     f_add:         0           f_remove:      0

Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet 700 Series supports a feature called "Floating VEB".

A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
for functionality that allows local switching between virtual endpoints within
a physical endpoint and also with an external bridge/network.

A "Floating" VEB doesn't have an uplink connection to the outside world, so all
switching is done internally and remains within the host. As such, this
feature provides security benefits.

In addition, a Floating VEB overcomes a limitation of normal VEBs where they
cannot forward packets when the physical link is down. Floating VEBs don't need
to connect to the NIC port so they can still forward traffic from VF to VF
even when the physical link is down.

Therefore, with this feature enabled, VFs can be limited to communicating with
each other but not an outside network, and they can do so even when there is
no physical uplink on the associated NIC port.

To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::

    -a 84:00.0,enable_floating_veb=1

In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.

Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::

    -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4

In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.

The current implementation only supports one floating VEB and one regular
VEB. VFs can connect to a floating VEB or a regular VEB according to the
configuration passed on the EAL command line.

The floating VEB functionality requires a NIC firmware version of 5.0
or greater.

Dynamic Device Personalization (DDP)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel® Ethernet 700 Series, except for the Intel Ethernet Connection
X722, supports a feature called "Dynamic Device Personalization (DDP)",
which is used to configure hardware by downloading a profile to support
protocols/filters which are not supported by default. The DDP
functionality requires a NIC firmware version of 6.0 or greater.

The current implementation supports GTP-C/GTP-U/PPPoE/PPPoL2TP/ESP;
steering can be used with the rte_flow API.

The GTPv1 package is released, and it can be downloaded from
https://downloadcenter.intel.com/download/27587.

The PPPoE package is released, and it can be downloaded from
https://downloadcenter.intel.com/download/28040.

The ESP-AH package is released, and it can be downloaded from
https://downloadcenter.intel.com/download/29446.

Load a profile which supports GTP and store a backup profile:

.. code-block:: console

   testpmd> ddp add 0 ./gtp.pkgo,./backup.pkgo

Delete a GTP profile and restore the backup profile:

.. code-block:: console

   testpmd> ddp del 0 ./backup.pkgo

Get the loaded DDP package info list:

.. code-block:: console

   testpmd> ddp get list 0

Display information about a GTP profile:

.. code-block:: console

   testpmd> ddp get info ./gtp.pkgo
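
An application can also download a profile directly through the i40e private
API. A minimal sketch, assuming the ``.pkgo`` file has already been read into
a buffer by the application (the helper name is illustrative):

.. code-block:: c

   #include <rte_pmd_i40e.h>

   /* Download a DDP profile image to the port; buff/size hold the
    * contents of a .pkgo file read by the application beforehand.
    */
   static int
   load_ddp_profile(uint16_t port_id, uint8_t *buff, uint32_t size)
   {
       /* RTE_PMD_I40E_PKG_OP_WR_ADD writes the profile to the NIC and
        * registers it; RTE_PMD_I40E_PKG_OP_WR_DEL would remove it.
        */
       return rte_pmd_i40e_process_ddp_package(port_id, buff, size,
                                               RTE_PMD_I40E_PKG_OP_WR_ADD);
   }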

Input set configuration
~~~~~~~~~~~~~~~~~~~~~~~

The input set for any PCTYPE can be configured with a user-defined configuration.
For example, to use only the 48-bit prefix of the IPv6 source address for IPv6 TCP RSS:

.. code-block:: console

   testpmd> port config 0 pctype 43 hash_inset clear all
   testpmd> port config 0 pctype 43 hash_inset set field 13
   testpmd> port config 0 pctype 43 hash_inset set field 14
   testpmd> port config 0 pctype 43 hash_inset set field 15

Queue region configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel® Ethernet 700 Series supports a feature of queue region
configuration for RSS in the PF, so that different traffic classes or
different packet classification types can be separated into different
queues in different queue regions. There is an API for configuring
queue regions in RSS from the command line. It can parse the parameters
for the region index, queue number, queue start index, user priority, traffic
classes and so on. Depending on commands from the command line, it will call
i40e private APIs and start the process of setting or flushing the queue
region configuration. As this feature is specific to i40e, only private
APIs are used.

.. code-block:: console

   testpmd> set port (port_id) queue-region region_id (value) \
            queue_start_index (value) queue_num (value)
   testpmd> set port (port_id) queue-region region_id (value) flowtype (value)
   testpmd> set port (port_id) queue-region UP (value) region_id (value)
   testpmd> set port (port_id) queue-region flush (on|off)
   testpmd> show port (port_id) queue-region

Generic flow API
~~~~~~~~~~~~~~~~

- ``RSS Flow``

  RSS Flow supports setting the hash input set, the hash function, enabling
  hash and configuring queues.
  For example:
  Configure queues as queue 0, 1, 2, 3.

  .. code-block:: console

     testpmd> flow create 0 ingress pattern end actions rss types end \
              queues 0 1 2 3 end / end

  Enable hash and set the input set for ipv4-tcp.

  .. code-block:: console

     testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
              actions rss types ipv4-tcp l3-src-only end queues end / end

  Set symmetric hash enable for flow type ipv4-tcp.

  .. code-block:: console

     testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
              actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end

  Set the hash function as simple xor.

  .. code-block:: console

     testpmd> flow create 0 ingress pattern end actions rss types end \
              queues end func simple_xor / end

Limitations or Known issues
---------------------------

MPLS packet classification
~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like::

   testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
            0x8847 flexbytes () fwd pf queue <N> fd_id <M>

With NIC firmware version 5.0 or greater, some limited MPLS support
is added: native MPLS (MPLS in Ethernet) skip is implemented, while no
new packet type, classification or offload is possible. With this change,
the L2 Payload flow type in flow director cannot be used to classify MPLS packets
as with previous firmware versions. Meanwhile, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like::

   testpmd> flow create 0 ingress pattern eth type is 0x8847 / end \
            actions queue index <M> / end

Receive packets with Ethertype 0x88A8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to a FW limitation, the PF can receive packets with Ethertype 0x88A8
only when floating VEB is disabled.

Incorrect Rx statistics when packet is oversize
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a packet is over the maximum frame size, the packet is dropped.
However, the Rx statistics returned by ``rte_eth_stats_get`` incorrectly
show it as received.
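
For reference, these are the counters read through the standard ethdev API;
a minimal sketch of querying them from an application:

.. code-block:: c

   #include <inttypes.h>
   #include <stdio.h>
   #include <rte_ethdev.h>

   /* Print basic Rx counters for a port. Note that oversize packets
    * dropped by the hardware may still appear in ipackets/ibytes here.
    */
   static void
   print_rx_stats(uint16_t port_id)
   {
       struct rte_eth_stats stats;

       if (rte_eth_stats_get(port_id, &stats) == 0)
           printf("port %u: ipackets=%" PRIu64 " ibytes=%" PRIu64
                  " ierrors=%" PRIu64 "\n",
                  port_id, stats.ipackets, stats.ibytes, stats.ierrors);
   }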

RX/TX statistics may be incorrect when register overflowed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The rx_bytes/tx_bytes statistics registers are 48 bits wide.
Although the counters are widened to 64 bits on the software side,
there is no way to detect whether an overflow occurred more than once.
So the rx_bytes/tx_bytes statistics data is correct only when the statistics
are updated at least once between two overflows.
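
As a sketch of the widening scheme (not the driver's actual code), the
software accumulates the difference between consecutive 48-bit register
reads, which is correct as long as at most one wraparound happens between
reads:

.. code-block:: c

   #include <stdint.h>

   #define STAT_48_BIT_MASK ((1ULL << 48) - 1)

   /* Extend a 48-bit hardware counter into a 64-bit software counter.
    * 'prev' holds the previous raw register value, 'sw' the 64-bit total.
    */
   static void
   stat_update_48(uint64_t hw_reg, uint64_t *prev, uint64_t *sw)
   {
       /* The difference modulo 2^48 absorbs a single wraparound; a
        * second wraparound between reads is undetectable and loses counts.
        */
       *sw += (hw_reg - *prev) & STAT_48_BIT_MASK;
       *prev = hw_reg;
   }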

VF & TC max bandwidth setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The per-VF max bandwidth and per-TC max bandwidth cannot be enabled in parallel.
The behavior differs when handling per-VF and per-TC max bandwidth settings.
When enabling per-VF max bandwidth, SW will check if per-TC max bandwidth is
enabled. If so, the setting fails.
When enabling per-TC max bandwidth, SW will check if per-VF max bandwidth
is enabled. If so, per-VF max bandwidth is disabled and the per-TC max
bandwidth setting continues.

TC TX scheduling mode setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 2 TX scheduling modes for TCs: round robin mode and strict priority mode.
If a TC is set to strict priority mode, it can consume unlimited bandwidth.
This means that if the application has set a max bandwidth for that TC, the
setting has no effect.
It is suggested to set strict priority mode for a TC that is latency
sensitive but does not consume much bandwidth.

DCB function
~~~~~~~~~~~~

DCB works only when RSS is enabled.

Global configuration warning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The i40e PMD will set some global registers to enable certain functions or
apply certain configuration. Consequently, when different ports of the same
NIC are used with the Linux kernel and DPDK, the port with the Linux kernel
will be impacted by the port with DPDK.
For example, the register I40E_GL_SWT_L2TAGCTRL is used to control the L2 tag,
and the i40e PMD uses I40E_GL_SWT_L2TAGCTRL to set the VLAN TPID. If the TPID
is set on port A with DPDK, the configuration will also impact port B on the
same NIC with the kernel driver, which may not want that TPID.
So the PMD reports a warning to clarify what is changed by writing a global register.

Cloud Filter
~~~~~~~~~~~~

When programming cloud filters for IPv4/6_UDP/TCP/SCTP with SRC port only or DST port only,
any cloud filter using inner_vlan or a tunnel key becomes invalid. The default configuration
will be recovered only by a NIC core reset.

Mirror rule limitation for X722
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to a firmware restriction of X722, the same VSI cannot have more than one mirror rule.

.. _net_i40e_testpmd_commands:

Testpmd driver specific commands
--------------------------------

Some i40e driver specific features are integrated in testpmd.

RSS queue region
~~~~~~~~~~~~~~~~

Set the RSS queue region span on a port::

   testpmd> set port (port_id) queue-region region_id (value) \
            queue_start_index (value) queue_num (value)

Set the flowtype mapping on a RSS queue region on a port::

   testpmd> set port (port_id) queue-region region_id (value) flowtype (value)

where:

* For the flowtype (pctype) of a packet, the specific index for each type has
  been defined in file i40e_type.h as enum i40e_filter_pctype.

Set the user priority mapping on a RSS queue region on a port::

   testpmd> set port (port_id) queue-region UP (value) region_id (value)

Flush all queue region related configuration on a port::

   testpmd> set port (port_id) queue-region flush (on|off)

where:

* ``on``: enables the configuration. All queue region configuration from the
  upper layer is at first only stored in the driver in DPDK software; only
  after "flush on" is it all committed to the hardware.

* ``off``: clears all queue region configuration made so far and restores the
  DPDK i40e driver defaults from start-up.

Show all queue region related configuration info on a port::

   testpmd> show port (port_id) queue-region

.. note::

  Queue regions are currently supported only on the PF, so these commands
  only configure queue regions on PF ports.

set promisc (for VF)
~~~~~~~~~~~~~~~~~~~~

Set the unicast promiscuous mode for a VF from the PF.
It's supported by Intel i40e NICs now.
In promiscuous mode, packets are not dropped if they aren't for the specified MAC address::

   testpmd> set vf promisc (port_id) (vf_id) (on|off)

set allmulticast (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~

Set the multicast promiscuous mode for a VF from the PF.
It's supported by Intel i40e NICs now.
In promiscuous mode, packets are not dropped if they aren't for the specified MAC address::

   testpmd> set vf allmulti (port_id) (vf_id) (on|off)
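
These two testpmd commands are built on the i40e private API, which an
application can also call directly. A minimal sketch (the helper name and
the choice of enabling both modes are illustrative):

.. code-block:: c

   #include <rte_pmd_i40e.h>

   /* Enable unicast and multicast promiscuous mode on one VF of a PF port. */
   static int
   enable_vf_promisc(uint16_t port_id, uint16_t vf_id)
   {
       int ret;

       ret = rte_pmd_i40e_set_vf_unicast_promisc(port_id, vf_id, 1);
       if (ret != 0)
           return ret;
       return rte_pmd_i40e_set_vf_multicast_promisc(port_id, vf_id, 1);
   }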

set broadcast mode (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set broadcast mode for a VF from the PF::

   testpmd> set vf broadcast (port_id) (vf_id) (on|off)

vlan set tag (for VF)
~~~~~~~~~~~~~~~~~~~~~

Set the VLAN tag for a VF from the PF::

   testpmd> set vf vlan tag (port_id) (vf_id) (on|off)

set tx max bandwidth (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set the TX max absolute bandwidth (Mbps) for a VF from the PF::

   testpmd> set vf tx max-bandwidth (port_id) (vf_id) (max_bandwidth)

set tc tx min bandwidth (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set all TCs' TX min relative bandwidth (%) for a VF from the PF::

   testpmd> set vf tc tx min-bandwidth (port_id) (vf_id) (bw1, bw2, ...)

set tc tx max bandwidth (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set a TC's TX max absolute bandwidth (Mbps) for a VF from the PF::

   testpmd> set vf tc tx max-bandwidth (port_id) (vf_id) (tc_no) (max_bandwidth)

set tc strict link priority mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set some TCs' strict link priority mode on a physical port::

   testpmd> set tx strict-link-priority (port_id) (tc_bitmap)

ddp add
~~~~~~~

Load a dynamic device personalization (DDP) profile and store a backup profile::

   testpmd> ddp add (port_id) (profile_path[,backup_profile_path])

ddp del
~~~~~~~

Delete a dynamic device personalization profile and restore the backup profile::

   testpmd> ddp del (port_id) (backup_profile_path)

ddp get list
~~~~~~~~~~~~

Get the loaded dynamic device personalization (DDP) package info list::

   testpmd> ddp get list (port_id)

ddp get info
~~~~~~~~~~~~

Display information about a dynamic device personalization (DDP) profile::

   testpmd> ddp get info (profile_path)

ptype mapping
~~~~~~~~~~~~~

List all items from the ptype mapping table::

   testpmd> ptype mapping get (port_id) (valid_only)

where:

* ``valid_only``: A flag indicating whether to list only valid items (=1) or all items (=0).

Replace a specific or a group of software defined ptypes with a new one::

   testpmd> ptype mapping replace (port_id) (target) (mask) (pkt_type)

where:

* ``target``: A specific software ptype or a mask to represent a group of software ptypes.

* ``mask``: A flag indicating whether "target" is a specific software ptype (=0) or a ptype mask (=1).

* ``pkt_type``: The new software ptype to replace the old ones.

Update the hardware defined ptype to software defined packet type mapping table::

   testpmd> ptype mapping update (port_id) (hw_ptype) (sw_ptype)

where:

* ``hw_ptype``: hardware ptype as the index of the ptype mapping table.

* ``sw_ptype``: software ptype as the value of the ptype mapping table.

Reset the ptype mapping table::

   testpmd> ptype mapping reset (port_id)
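
The same table can be updated from an application through the i40e private
API. A minimal sketch (the chosen hardware index and software ptype are
illustrative only):

.. code-block:: c

   #include <rte_pmd_i40e.h>
   #include <rte_mbuf_ptype.h>

   /* Map one hardware ptype index to a software L2/L3/L4 packet type. */
   static int
   update_ptype(uint16_t port_id)
   {
       struct rte_pmd_i40e_ptype_mapping item = {
           .hw_ptype = 38,                    /* illustrative index */
           .sw_ptype = RTE_PTYPE_L2_ETHER |
                       RTE_PTYPE_L3_IPV4 |
                       RTE_PTYPE_L4_UDP,
       };

       /* 'exclusive' = 0 keeps the existing entries for other indexes. */
       return rte_pmd_i40e_ptype_mapping_update(port_id, &item, 1, 0);
   }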

show port pctype mapping
~~~~~~~~~~~~~~~~~~~~~~~~

List all items from the pctype mapping table::

   testpmd> show port (port_id) pctype mapping

High Performance of Small Packets on 40GbE NIC
----------------------------------------------

As there might be firmware fixes for performance enhancement in the latest
firmware image, a firmware update might be needed for getting high performance.
Check the Intel support website for the latest firmware updates.
Users should consult the release notes specific to a DPDK release to identify
the validated firmware version for a NIC using the i40e driver.

Use 16 Bytes RX Descriptor Size
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The i40e PMD supports both 16 and 32 byte RX descriptor sizes, and the 16 byte
size can help improve performance for small packets.
In ``config/rte_config.h`` set the following to use 16 byte RX descriptors::

   #define RTE_LIBRTE_I40E_16BYTE_RX_DESC 1

Input set requirement of each pctype for FDIR
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each PCTYPE can only have one specific FDIR input set at a time.
For example, if creating 2 rte_flow rules with different input sets for one PCTYPE,
the second will fail and return the message "Conflict with the first rule's input set",
which means the current rule's input set conflicts with the first rule's.
Remove the first rule if you want to change the input set of the PCTYPE.

VLAN related features missing when FW >= 8.4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the FW version >= 8.4, there will be some VLAN related issues:

#. The TCI input set for QinQ is invalid.
#. Configuring the TPID for QinQ fails.
#. QinQ must be enabled before enabling VLAN filtering.
#. Stripping the outer VLAN fails.

Example of getting best performance with l3fwd example
------------------------------------------------------

The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with a
server with Intel Xeon processors and Intel Ethernet CNA XL710.

The example scenario is to get the best performance with two Intel Ethernet CNA XL710 40GbE ports.
See :numref:`figure_intel_perf_test_setup` for the performance test setup.

.. _figure_intel_perf_test_setup:

.. figure:: img/intel_perf_test_setup.*

   Performance Test Setup


1. Add two Intel Ethernet CNA XL710 to the platform, and use one port per card to get the best performance.
   The reason for using two NICs is to overcome a PCIe v3.0 limitation since it cannot provide 80GbE bandwidth
   for two 40GbE ports, but two different PCIe v3.0 x8 slots can.
   Referring to sample ``lspci`` NICs output like the following, we can select ``82:00.0`` and ``85:00.0`` as test ports::

      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]

2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.

3. Check the PCI devices' numa node (socket id) and get the core numbers on the exact socket id.
   In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, and the cores on socket 1 in the referenced platform
   are 18-35 and 54-71.
   Note: Don't use 2 logical cores on the same physical core (e.g. core18 has 2 logical cores, core18 and core54); instead,
   use 2 logical cores from different physical cores (e.g. core18 and core19).

4. Bind these two ports to igb_uio.

5. For an Intel Ethernet CNA XL710 40GbE port, we need at least two queue pairs to achieve the best performance, so two queues per port
   will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.

6. The DPDK sample application ``l3fwd`` will be used for performance testing, using the two ports for bi-directional forwarding.
   Compile the ``l3fwd`` sample with the default LPM mode.

7. The command line for running l3fwd would be something like the following::

      ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'

   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.

8. Configure the traffic at a traffic generator.

   * Start creating a stream on the packet generator.

   * Set the Ethernet II type to 0x0800.

Tx bytes affected by the link status change
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 6.01 for the X710 series and 3.33 for the X722 series, the tx_bytes statistics data is affected by
the link down event. Each time the link status changes to down, tx_bytes decreases by 110 bytes.