.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2015 6WIND S.A.
   Copyright 2015 Mellanox Technologies, Ltd

.. include:: <isonum.txt>

MLX5 Ethernet Poll Mode Driver
==============================

The mlx5 Ethernet poll mode driver library (**librte_net_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx**, **Mellanox ConnectX-5**,
**Mellanox ConnectX-6**, **Mellanox ConnectX-6 Dx**, **Mellanox ConnectX-6 Lx**,
**Mellanox BlueField** and **Mellanox BlueField-2** families of
10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
in SR-IOV context.

Design
------

Besides its dependency on libibverbs (that implies libmlx5 and associated
kernel support), librte_net_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.
This means legacy Linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces that are owned by the DPDK
application.

See :doc:`../../platform/mlx5` guide for more design details.

Features
--------

- Multi arch support: x86_64, POWER8, ARMv8, i686.
- Multiple TX and RX queues.
- Shared Rx queue.
- Rx queue delay drop.
- Support steering for external Rx queue created outside the PMD.
- Support for scattered TX frames.
- Advanced support for scattered Rx frames with tunable buffer attributes.
- IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
- RSS using different combinations of fields: L3 only, L4 only or both,
  and source only, destination only or both.
- Several RSS hash keys, one for each flow type.
- Default RSS operation with no hash key specification.
- Configurable RETA table.
- Link flow control (pause frame).
- Support for multiple MAC addresses.
- VLAN filtering.
- RX VLAN stripping.
- TX VLAN insertion.
- RX CRC stripping configuration.
- TX mbuf fast free offload.
- Promiscuous mode on PF and VF.
- Multicast promiscuous mode on PF and VF.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
- Flow API, including :ref:`flow_isolated_mode`.
- Multiple process.
- KVM and VMware ESX SR-IOV modes are supported.
- RSS hash result is supported.
- Hardware TSO for generic IP or UDP tunnel, including VXLAN and GRE.
- Hardware checksum Tx offload for generic IP or UDP tunnel, including VXLAN and GRE.
- RX interrupts.
- Statistics query including Basic, Extended and per queue.
- Rx HW timestamp.
- Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP, IP-in-IP, Geneve, GTP.
- Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
- NIC HW offloads: encapsulation (vxlan, gre, mplsoudp, mplsogre), NAT, routing, TTL
  increment/decrement, count, drop, mark. For details please see :ref:`mlx5_offloads_support`.
- Flow insertion rate of more than a million flows per second, when using Direct Rules.
- Support for multiple rte_flow groups.
- Per packet no-inline hint flag to disable packet data copying into Tx descriptors.
- Hardware LRO.
- Hairpin.
- Multiple-thread flow insertion.
- Matching on IPv4 Internet Header Length (IHL).
- Matching on GTP extension header with raw encap/decap action.
- Matching on Geneve TLV option header with raw encap/decap action.
- Matching on ESP header SPI field.
- RSS support in sample action.
- E-Switch mirroring and jump.
- E-Switch mirroring and modify.
- 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
  flow group.
- Flow metering, including meter policy API.
- Flow meter hierarchy.
- Flow integrity offload API.
- Connection tracking.
- Sub-Function representors.
- Sub-Function.
- Matching on represented port.

Limitations
-----------

- Windows support:

  On Windows, the features are limited:

  - Promiscuous mode is not supported
  - The following rules are supported:

    - IPv4/UDP with CVLAN filtering
    - Unicast MAC filtering

  - Additional rules are supported from WinOF2 version 2.70:

    - IPv4/TCP with CVLAN filtering
    - L4 steering rules for port RSS of UDP, TCP and IP

- For secondary process:

  - Forked secondary process not supported.
  - MPRQ is not supported. Callback to free externally attached MPRQ buffer is set
    in a primary process, but has a different virtual address in a secondary process.
    Calling a function at the wrong address leads to a segmentation fault.
  - External memory unregistered in EAL memseg list cannot be used for DMA
    unless such memory has been registered by ``mlx5_mr_update_ext_mp()`` in
    primary process and remapped to the same virtual address in secondary
    process. If the external memory is registered by primary process but has
    different virtual address in secondary process, unexpected error may happen.

- Shared Rx queue:

  - Counters of received packets and bytes are the same for all devices
    in the same share group.
  - Counters of received packets and bytes are the same for all queues
    sharing the same group and queue ID.

- When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
  specific VLAN will match for VLAN packets as well:

  When VLAN spec is not specified in the pattern, the matching rule will be created with VLAN as a wild card.
  Meaning, the flow rule::

        flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...

  Will only match VLAN packets with vid=3, and the flow rule::

        flow create 0 ingress pattern eth / ipv4 / end ...

  Will match any IPv4 packet (VLAN included).

- When using Verbs flow engine (``dv_flow_en`` = 0), multi-tagged (QinQ) match is not supported.

- When using DV flow engine (``dv_flow_en`` = 1), flow pattern with any VLAN specification will match only single-tagged packets unless the ETH item ``type`` field is 0x88A8 or the VLAN item ``has_more_vlan`` field is 1.
  The flow rule::

        flow create 0 ingress pattern eth / ipv4 / end ...

  Will match any IPv4 packet.
  The flow rules::

        flow create 0 ingress pattern eth / vlan / end ...
        flow create 0 ingress pattern eth has_vlan is 1 / end ...
        flow create 0 ingress pattern eth type is 0x8100 / end ...

  Will match single-tagged packets only, with any VLAN ID value.
  The flow rules::

        flow create 0 ingress pattern eth type is 0x88A8 / end ...
        flow create 0 ingress pattern eth / vlan has_more_vlan is 1 / end ...

  Will match multi-tagged packets only, with any VLAN ID value.

- A flow pattern with 2 sequential VLAN items is not supported.

- VLAN pop offload command:

  - Flow rules having a VLAN pop offload command as one of their actions and
    are lacking a match on VLAN as one of their items are not supported.
  - The command is not supported on egress traffic in NIC mode.

- VLAN push offload is not supported on ingress traffic in NIC mode.

- VLAN set PCP offload is not supported on existing headers.

- A multi segment packet must have not more segments than reported by dev_infos_get()
  in tx_desc_lim.nb_seg_max field. This value depends on maximal supported Tx descriptor
  size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal
  inline settings) to 58.
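
  A minimal sketch of querying this limit at runtime, assuming port 0::

      #include <stdio.h>
      #include <rte_ethdev.h>

      struct rte_eth_dev_info dev_info;

      /* nb_seg_max reports the maximal number of segments per packet. */
      if (rte_eth_dev_info_get(0 /* port_id */, &dev_info) == 0)
          printf("max Tx segments per packet: %u\n",
                 dev_info.tx_desc_lim.nb_seg_max);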

- Match on VXLAN supports the following fields only:

  - VNI
  - Last reserved 8-bits

  Last reserved 8-bits matching is only supported when using DV flow
  engine (``dv_flow_en`` = 1).
  For ConnectX-5, the UDP destination port must be the standard one (4789).
  Group zero's behavior may differ which depends on FW.
  Matching value equals 0 (value & mask) is not supported.

- L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.

- Match on Geneve header supports the following fields only:

  - VNI
  - OAM
  - protocol type
  - options length

- Match on Geneve TLV option is supported on the following fields:

  - Class
  - Type
  - Length
  - Data

  Only one Class/Type/Length Geneve TLV option is supported per shared device.
  Class/Type/Length fields must be specified as well as masks.
  Class/Type/Length specified masks must be full.
  Matching Geneve TLV option without specifying data is not supported.
  Matching Geneve TLV option with ``data & mask == 0`` is not supported.

- VF: flow rules created on VF devices can only match traffic targeted at the
  configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).

- Match on GTP tunnel header item supports the following fields only:

  - v_pt_rsv_flags: E flag, S flag, PN flag
  - msg_type
  - teid

- Match on GTP extension header only for GTP PDU session container (next
  extension header type = 0x85).
- Match on GTP extension header is not supported in group 0.

- Flex item:

  - Hardware support: BlueField-2.
  - Flex item is supported on PF only.
  - Hardware limits ``header_length_mask_width`` up to 6 bits.
  - Firmware supports 8 global sample fields.
    Each flex item allocates non-shared sample fields from that pool.
  - Supported flex item can have 1 input link - ``eth`` or ``udp``
    and up to 2 output links - ``ipv4`` or ``ipv6``.
  - Flex item fields (``next_header``, ``next_protocol``, ``samples``)
    do not participate in RSS hash functions.
  - In flex item configuration, ``next_header.field_base`` value
    must be byte aligned (multiple of 8).

- No Tx metadata go to the E-Switch steering domain for the Flow group 0.
  The flows within group 0 and set metadata action are rejected by hardware.

.. note::

   MAC addresses not already present in the bridge table of the associated
   kernel network device will be added and cleaned up by the PMD when closing
   the device. In case of ungraceful program termination, some entries may
   remain present and should be removed manually by other means.

- Buffer split offload is supported with regular Rx burst routine only,
  no MPRQ feature or vectorized code can be engaged.

- When Multi-Packet Rx queue is configured (``mprq_en``), a Rx packet can be
  externally attached to a user-provided mbuf, having RTE_MBUF_F_EXTERNAL set in
  ol_flags. As the mempool for the external buffer is managed by PMD, all the
  Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
  the external buffers will be freed by PMD and the application which still
  holds the external buffers may be corrupted.

  User-managed mempools with external pinned data buffers
  cannot be used in conjunction with MPRQ
  since packets may be already attached to PMD-managed external buffers.

- If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
  enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
  supported. Some Rx packets may not have RTE_MBUF_F_RX_RSS_HASH.

- IPv6 multicast messages are not supported on VM, when promiscuous mode
  and allmulticast mode are both set to off.
  To receive IPv6 multicast messages on VM, explicitly set the relevant
  MAC address using the ``rte_eth_dev_mac_addr_add()`` API, as in the
  sketch below.
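
  A minimal sketch of this workaround, assuming port 0 and the all-nodes group
  ff02::1 (per RFC 2464, an IPv6 multicast address maps to the MAC address
  33:33 followed by its low 32 bits)::

      #include <stdio.h>
      #include <rte_ethdev.h>
      #include <rte_ether.h>

      /* MAC address corresponding to the IPv6 group ff02::1. */
      struct rte_ether_addr mc_mac = {
          .addr_bytes = { 0x33, 0x33, 0x00, 0x00, 0x00, 0x01 },
      };

      if (rte_eth_dev_mac_addr_add(0 /* port_id */, &mc_mac, 0 /* pool */) != 0)
          printf("failed to add multicast MAC address\n");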

- To support a mixed traffic pattern (some buffers from local host memory, some
  buffers from other devices) with high bandwidth, a mbuf flag is used.

  An application hints the PMD whether or not it should try to inline the
  given mbuf data buffer. PMD should do the best effort to act upon this request.

  The hint flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE`` is dynamic,
  registered by application with rte_mbuf_dynflag_register(). This flag is
  purely driver-specific and declared in PMD specific header ``rte_pmd_mlx5.h``,
  which is intended to be used by the application.
  To query the supported specific flags in runtime,
  the function ``rte_pmd_mlx5_get_dyn_flag_names`` returns the array of
  currently (over present hardware and configuration) supported specific flags.

  The "not inline hint" feature operating flow is the following one
  (see the registration sketch after this list):

  - application starts
  - probe the devices, ports are created
  - query the port capabilities
  - if port supporting the feature is found
  - register dynamic flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE``
  - application starts the ports
  - on ``dev_start()`` PMD checks whether the feature flag is registered and
    enables the feature support in datapath
  - application might set the registered flag bit in ``ol_flags`` field
    of mbuf being sent and PMD will handle ones appropriately.
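
  A minimal registration sketch (the flag name macro and
  ``rte_pmd_mlx5_get_dyn_flag_names()`` come from ``rte_pmd_mlx5.h``)::

      #include <stdint.h>
      #include <rte_mbuf_dyn.h>
      #include <rte_pmd_mlx5.h>

      static uint64_t no_inline_flag;

      /* Register the driver-specific "no-inline" hint flag. */
      static int
      register_no_inline_hint(void)
      {
          const struct rte_mbuf_dynflag desc = {
              .name = RTE_PMD_MLX5_FINE_GRANULARITY_INLINE,
          };
          int bitnum = rte_mbuf_dynflag_register(&desc);

          if (bitnum < 0)
              return bitnum; /* feature not available */
          no_inline_flag = UINT64_C(1) << bitnum;
          return 0;
      }

  Once registered, the application may set ``ol_flags |= no_inline_flag`` on an
  mbuf whose data buffer should not be copied into the Tx descriptor.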

- The amount of descriptors in Tx queue may be limited by data inline settings.
  Inline data require more descriptor building blocks and the overall block
  amount may exceed the hardware supported limits. The application should
  reduce the requested Tx size or adjust the data inline settings with
  ``txq_inline_max`` and ``txq_inline_mpw`` devargs keys.

- To provide the packet send scheduling on mbuf timestamps the ``tx_pp``
  parameter should be specified.
  When PMD sees the RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME set on the packet
  being sent it tries to synchronize the time of packet appearing on
  the wire with the specified packet timestamp. If the specified one
  is in the past it should be ignored, if one is in the distant future
  it should be capped with some reasonable value (in range of seconds).
  These specific cases ("too late" and "distant future") can be optionally
  reported via device xstats to assist applications to detect the
  time-related problems.

  The timestamp upper "too-distant-future" limit
  at the moment of invoking the Tx burst routine
  can be estimated as ``tx_pp`` option (in nanoseconds) multiplied by 2^23.
  Please note, for the testpmd txonly mode,
  the limit is deduced from the expression::

        (n_tx_descriptors / burst_size + 1) * inter_burst_gap

  No packet reordering according to timestamps is supposed,
  neither within packet burst, nor between packets; it is entirely the
  application responsibility to generate packets and their timestamps
  in desired order. The timestamps can be put only in the first packet
  in the burst providing the entire burst scheduling.
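
  A minimal sketch of stamping the first packet of a burst, assuming ``tx_pp``
  was specified in devargs and ``wire_time_ns`` (a hypothetical value computed
  by the application) holds the desired send time in nanoseconds::

      #include <stdint.h>
      #include <rte_mbuf.h>
      #include <rte_mbuf_dyn.h>

      static int ts_offset;
      static uint64_t ts_flag;

      /* Once, after EAL initialization: resolve the Tx timestamp
       * dynamic field and its companion flag. */
      static int
      init_tx_scheduling(void)
      {
          return rte_mbuf_dyn_tx_timestamp_register(&ts_offset, &ts_flag);
      }

      /* Per burst: stamp the first packet, the rest follow it. */
      static void
      schedule_burst(struct rte_mbuf **pkts, uint64_t wire_time_ns)
      {
          *RTE_MBUF_DYNFIELD(pkts[0], ts_offset, uint64_t *) = wire_time_ns;
          pkts[0]->ol_flags |= ts_flag;
      }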

- E-Switch decapsulation Flow:

  - can be applied to PF port only.
  - must specify VF port action (packet redirection from PF to VF).
  - optionally may specify tunnel inner source and destination MAC addresses.

- E-Switch encapsulation Flow:

  - can be applied to VF ports only.
  - must specify PF port action (packet redirection from VF to PF).

- Raw encapsulation:

  - The input buffer, used as outer header, is not validated.

- Raw decapsulation:

  - The decapsulation is always done up to the outermost tunnel detected by the HW.
  - The input buffer, providing the removal size, is not validated.
  - The buffer size must match the length of the headers to be removed.

- ICMP (code/type/identifier/sequence number) / ICMP6 (code/type) matching, IP-in-IP and MPLS flow matching are all
  mutually exclusive features which cannot be supported together
  (see :ref:`mlx5_firmware_config`).

- LRO:

  - Requires DevX and DV flow to be enabled.
  - KEEP_CRC offload cannot be supported with LRO.
  - The first mbuf length, without head-room, must be big enough to include the
    TCP header (122B).
  - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
    it with size limited to max LRO size, not to max RX packet length.
  - LRO can be used with outer header of TCP packets of the standard format::

        eth (with or without vlan) / ipv4 or ipv6 / tcp / payload

    Other TCP packets (e.g. with MPLS label) received on Rx queue with LRO enabled, will be received with bad checksum.
  - LRO packet aggregation is performed by HW only for packet size larger than
    ``lro_min_mss_size``. This value is reported on device start, when debug
    mode is enabled.

- CRC:

  - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
    for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
    The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.

- TX mbuf fast free:

  - fast free offload assumes all mbufs being sent are originated from the
    same memory pool and there are no extra references to the mbufs (the
    reference counter for each mbuf is equal to 1 on tx_burst call). The latter
    means there should be no externally attached buffers in mbufs. It is
    an application responsibility to provide the correct mbufs if the fast
    free offload is engaged. The mlx5 PMD implicitly produces the mbufs with
    externally attached buffers if MPRQ option is enabled, hence, the fast
    free offload is neither supported nor advertised if MPRQ is enabled.

- Sample flow:

  - Supports ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action only within NIC Rx and
    E-Switch steering domain.
  - For E-Switch Sampling flow with sample ratio > 1, additional actions are not
    supported in the sample actions list.
  - For ConnectX-5, the ``RTE_FLOW_ACTION_TYPE_SAMPLE`` is typically used as
    first action in the E-Switch egress flow when combined with header modify or
    encapsulation actions.
  - For NIC Rx flow, supports ``MARK``, ``COUNT``, ``QUEUE``, ``RSS`` in the
    sample actions list.
  - For E-Switch mirroring flow, supports ``RAW ENCAP``, ``Port ID``,
    ``VXLAN ENCAP``, ``NVGRE ENCAP`` in the sample actions list.
  - For ConnectX-5 trusted device, the application metadata with SET_TAG index 0
    is not supported before ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action.

- Modify Field flow:

  - Supports the 'set' operation only for ``RTE_FLOW_ACTION_TYPE_MODIFY_FIELD`` action.
  - Modification of an arbitrary place in a packet via the special ``RTE_FLOW_FIELD_START`` Field ID is not supported.
  - Modification of the 802.1Q Tag, VXLAN Network or GENEVE Network IDs is not supported.
  - Encapsulation levels are not supported, can modify outermost header fields only.
  - Offsets must be 32-bits aligned, cannot skip past the boundary of a field.
  - If the field type is ``RTE_FLOW_FIELD_MAC_TYPE``
    and packet contains one or more VLAN headers,
    the meaningful type field following the last VLAN header
    is used as modify field operation argument.
    The modify field action is not intended to modify VLAN headers type field,
    dedicated VLAN push and pop actions should be used instead.

- IPv6 header item 'proto' field, indicating the next header protocol, should
  not be set as extension header.
  In case the next header is an extension header, it should not be specified in
  the IPv6 header item 'proto' field.
  The last extension header item 'next header' field can specify the following
  header protocol type.

- Hairpin:

  - Hairpin between two ports supports only manual binding and explicit Tx flow mode.
    For single port hairpin, all the combinations of auto/manual binding
    and explicit/implicit Tx flow mode are supported.
  - Hairpin in switchdev SR-IOV mode is not supported till now.

- Meter:

  - All the meter colors with drop action will be counted only by the global drop statistics.
  - Yellow detection is only supported with ASO metering.
  - Red color must be with drop action.
  - Meter statistics are supported only for drop case.
  - A meter action created with pre-defined policy must be the last action in the flow except single case where the policy actions are:

    - green: NULL or END.
    - yellow: NULL or END.
    - RED: DROP / END.

  - The only supported meter policy actions:

    - green: QUEUE, RSS, PORT_ID, REPRESENTED_PORT, JUMP, DROP, MARK, METER and SET_TAG.
    - yellow: QUEUE, RSS, PORT_ID, REPRESENTED_PORT, JUMP, DROP, MARK, METER and SET_TAG.
    - RED: must be DROP.

  - Policy actions of RSS for green and yellow should have the same configuration except queues.
  - Policy with RSS/queue action is not supported when ``dv_xmeta_en`` enabled.
  - If green action is METER, yellow action must be the same METER action or NULL.
  - meter profile packet mode is supported.
  - meter profiles of RFC2697, RFC2698 and RFC4115 are supported.
  - RFC4115 implementation is following MEF, meaning yellow traffic may reclaim unused green bandwidth when green token bucket is full.

- Integrity:

  - Integrity offload is enabled for **ConnectX-6** family.
  - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
  - ``level`` value 0 references outer headers.
  - Multiple integrity items not supported in a single flow rule.
  - Flow rule items supplied by application must explicitly specify network headers referred by integrity item.
    For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits, reference to L4 network header,
    TCP or UDP, must be in the rule pattern as well::

      flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …

    or::

      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …

- Connection tracking:

  - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
  - Flow rules insertion rate and memory consumption need more optimization.
  - 256 ports maximum.
  - 4M connections maximum.

- Multi-thread flow insertion:

  - In order to achieve best insertion rate, application should manage the flows per lcore.
  - Better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate the flow object allocation and release with cache.

- HW hashed bonding

  - TXQ affinity subjects to HW hash once enabled.

- Bonding under socket direct mode

  - Needs OFED 5.4+.

- Timestamps:

  - CQE timestamp field width is limited by hardware to 63 bits, MSB is zero.
  - In the free-running mode the timestamp counter is reset on power on
    and 63-bit value provides over 1800 years of uptime till overflow.
  - In the real-time mode
    (configurable with ``REAL_TIME_CLOCK_ENABLE`` firmware settings),
    the timestamp presents the nanoseconds elapsed since 01-Jan-1970,
    hardware timestamp overflow will happen on 19-Jan-2038
    (0x80000000 seconds since 01-Jan-1970).
  - The send scheduling is based on timestamps
    from the reference "Clock Queue" completions,
    the scheduled send timestamps should not be specified with non-zero MSB.

- HW steering:

  - WQE based high scaling and safer flow insertion/destruction.
  - Set ``dv_flow_en`` to 2 in order to enable HW steering.
  - Async queue-based ``rte_flow_q`` APIs supported only.

- Match on GRE header supports the following fields:

  - c_rsvd0_v: C bit, K bit, S bit
  - protocol type
  - checksum
  - key
  - sequence

  Matching on checksum and sequence needs OFED 5.6+.

- The NIC egress flow rules on representor port are not supported.

Statistics
----------

MLX5 supports various methods to report statistics:

Port statistics can be queried using ``rte_eth_stats_get()``. The received and
sent statistics are through SW only and count the number of packets received or
sent successfully by the PMD. The imissed counter is the amount of packets that
could not be delivered to SW because a queue was full. Packets not received due
to congestion in the bus or on the NIC can be queried via the rx_discards_phy
xstats counter.

Extended statistics can be queried using ``rte_eth_xstats_get()``. The extended
statistics expose a wider set of counters counted by the device. The extended
port statistics count the number of packets received or sent successfully by
the port. As Mellanox NICs are using the :ref:`Bifurcated Linux Driver
<linux_gsg_linux_drivers>` those counters also count packets received or sent
by the Linux kernel. The counters with ``_phy`` suffix count the total events
on the physical port, therefore not valid for VF.

Finally per-flow statistics can be queried using ``rte_flow_query`` when
attaching a count action for specific flow. The flow counter counts the number
of packets received successfully by the port and matching the specific flow.
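
A minimal sketch of the per-flow counter query, assuming ``flow`` was created
on port 0 with a ``COUNT`` action in its action list::

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_flow.h>

    struct rte_flow_query_count count = { .reset = 0 };
    const struct rte_flow_action count_action = {
        .type = RTE_FLOW_ACTION_TYPE_COUNT,
    };
    struct rte_flow_error error;

    /* Read the hit/byte counters accumulated for this flow rule. */
    if (rte_flow_query(0 /* port_id */, flow, &count_action, &count, &error) == 0)
        printf("hits: %" PRIu64 ", bytes: %" PRIu64 "\n",
               count.hits, count.bytes);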

Compilation
-----------

See :ref:`mlx5 common compilation <mlx5_common_compilation>`.

Configuration
-------------

Environment Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~

See :ref:`mlx5 common configuration <mlx5_common_env>`.

Firmware configuration
~~~~~~~~~~~~~~~~~~~~~~

See :ref:`mlx5_firmware_config` guide.

Driver options
~~~~~~~~~~~~~~

Please refer to :ref:`mlx5 common options <mlx5_common_driver_options>`
for an additional list of options shared with other mlx5 drivers.

- ``rxq_cqe_comp_en`` parameter [int]

  A nonzero value enables the compression of CQE on RX side. This feature
  allows saving PCI bandwidth and improves performance. Enabled by default.
  Different compression formats are supported in order to achieve the best
  performance for different traffic patterns. Default format depends on
  Multi-Packet Rx queue configuration: Hash RSS format is used in case
  MPRQ is disabled, Checksum format is used in case MPRQ is enabled.

  Specifying 2 as a ``rxq_cqe_comp_en`` value selects Flow Tag format for
  better compression rate in case of RTE Flow Mark traffic.
  Specifying 3 as a ``rxq_cqe_comp_en`` value selects Checksum format.
  Specifying 4 as a ``rxq_cqe_comp_en`` value selects L3/L4 Header format for
  better compression rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
  CQE compression format selection requires DevX to be enabled. If there is
  no DevX enabled/supported the value is reset to 1 by default.

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, BlueField and BlueField-2.
  - POWER9 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, BlueField and BlueField-2.
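
  For illustration, a sketch of selecting the L3/L4 Header format from within
  an application by passing the device argument through EAL (the PCI address
  below is an assumption)::

      #include <rte_eal.h>

      /* Equivalent to the command line:
       *   dpdk-app -a 0000:03:00.0,rxq_cqe_comp_en=4 */
      char *eal_argv[] = {
          "dpdk-app", "-a", "0000:03:00.0,rxq_cqe_comp_en=4", NULL
      };
      int ret = rte_eal_init(3, eal_argv);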

- ``rxq_pkt_pad_en`` parameter [int]

  A nonzero value enables padding Rx packet to the size of cacheline on PCI
  transaction. This feature would waste PCI bandwidth but could improve
  performance by avoiding partial cacheline write which may cause costly
  read-modify-copy in memory transaction on some architectures. Disabled by
  default.

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, BlueField and BlueField-2.
  - POWER8 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, BlueField and BlueField-2.

- ``delay_drop`` parameter [int]

  Bitmask value for the Rx queue delay drop attribute. Bit 0 is used for the
  standard Rx queue and bit 1 is used for the hairpin Rx queue. By default, the
  delay drop is disabled for all Rx queues. It will be ignored if the port does
  not support the attribute even if it is enabled explicitly.

  The packets being received will not be dropped immediately when the WQEs are
  exhausted in a Rx queue with delay drop enabled.

  A timeout value is set in the driver to control the waiting time before
  dropping a packet. Once the timer is expired, the delay drop will be
  deactivated for all the Rx queues with this feature enabled. To re-activate
  it, a rearming is needed and it is part of the kernel driver starting from
  OFED 5.5.

  To enable / disable the delay drop rearming, the private flag ``dropless_rq``
  can be set and queried via ethtool:

  - ethtool --set-priv-flags <netdev> dropless_rq on (/ off)
  - ethtool --show-priv-flags <netdev>

  The configuration flag is global per PF and can only be set on the PF, once
  it is on, all the VFs', SFs' and representors' Rx queues will share the timer
  and rearming.

- ``mprq_en`` parameter [int]

  A nonzero value enables configuring Multi-Packet Rx queues. Rx queue is
  configured as Multi-Packet RQ if the total number of Rx queues is
  ``rxqs_min_mprq`` or more. Disabled by default.

  Multi-Packet Rx Queue (MPRQ a.k.a Striding RQ) can further save PCIe bandwidth
  by posting a single large buffer for multiple packets. Instead of posting a
  buffer per packet, one large buffer is posted in order to receive multiple
  packets on the buffer. A MPRQ buffer consists of multiple fixed-size strides
  and each stride receives one packet. MPRQ can improve throughput for
  small-packet traffic.

  When MPRQ is enabled, MTU can be larger than the size of
  user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
  configure large stride size enough to accommodate MTU as long as
  device allows. Note that this can waste system memory compared to enabling Rx
  scatter and multi-segment packet.

- ``mprq_log_stride_num`` parameter [int]

  Log 2 of the number of strides for Multi-Packet Rx queue. Configuring more
  strides can reduce PCIe traffic further. If configured value is not in the
  range of device capability, the default value will be set with a warning
  message. The default value is 4 which is 16 strides per a buffer, valid only
  if ``mprq_en`` is set.

  The size of Rx queue should be bigger than the number of strides.

- ``mprq_log_stride_size`` parameter [int]

  Log 2 of the size of a stride for Multi-Packet Rx queue. Configuring a smaller
  stride size can save some memory and reduce probability of a depletion of all
  available strides due to unreleased packets by an application. If configured
  value is not in the range of device capability, the default value will be set
  with a warning message. The default value is 11 which is 2048 bytes per a
  stride, valid only if ``mprq_en`` is set. With ``mprq_log_stride_size`` set
  it is possible for a packet to span across multiple strides. This mode allows
  support of jumbo frames (9K) with MPRQ. The memcopy of some packets (or part
  of a packet if Rx scatter is configured) may be required in case there is no
  space left for a head room at the end of a stride which incurs some
  performance penalty.

- ``mprq_max_memcpy_len`` parameter [int]

  The maximum length of packet to memcpy in case of Multi-Packet Rx queue. Rx
  packet is mem-copied to a user-provided mbuf if the size of Rx packet is less
  than or equal to this parameter. Otherwise, PMD will attach the Rx packet to
  the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
  A mempool for external buffers will be allocated and managed by PMD. If Rx
  packet is externally attached, ol_flags field of the mbuf will have
  RTE_MBUF_F_EXTERNAL and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
  checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
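
  A minimal sketch of how an application can recognize such externally attached
  mbufs in its Rx path::

      #include <rte_mbuf.h>

      /* "m" comes from rte_eth_rx_burst(). */
      static void
      handle_rx_mbuf(struct rte_mbuf *m)
      {
          if (RTE_MBUF_HAS_EXTBUF(m)) {
              /* Data resides in a PMD-managed external buffer:
               * RTE_MBUF_F_EXTERNAL is set and must be preserved,
               * and the mbuf must be freed before device close. */
          }
          /* ... application processing ... */
          rte_pktmbuf_free(m);
      }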

- ``rxqs_min_mprq`` parameter [int]

  Configure Rx queues as Multi-Packet RQ if the total number of Rx queues is
  greater or equal to this value. The default value is 12, valid only if
  ``mprq_en`` is set.

- ``txq_inline`` parameter [int]

  Amount of data to be inlined during TX operations. This parameter is
  deprecated and converted to the new parameter ``txq_inline_max`` providing
  partial compatibility.

- ``txqs_min_inline`` parameter [int]

  Enable inline data send only when the number of TX queues is greater or equal
  to this value.

  This option should be used in combination with ``txq_inline_max`` and
  ``txq_inline_mpw`` below and does not affect ``txq_inline_min`` settings above.

  If this option is not specified the default value 16 is used for BlueField
  and 8 for other platforms.

  The data inlining consumes the CPU cycles, so this option is intended to
  auto enable inline data if we have enough Tx queues, which means we have
  enough CPU cores and PCI bandwidth is getting more critical and CPU
  is not supposed to be bottleneck anymore.

  The copying data into WQE improves latency and can improve PPS performance
  when PCI back pressure is detected and may be useful for scenarios involving
  heavy traffic on many queues.

  Because additional software logic is necessary to handle this mode, this
  option should be used with care, as it may lower performance when back
  pressure is not expected.

  If inline data are enabled it may affect the maximal size of Tx queue in
  descriptors because the inline data increase the descriptor size and
  queue size limits supported by hardware may be exceeded.

- ``txq_inline_min`` parameter [int]

  Minimal amount of data to be inlined into WQE during Tx operations. NICs
  may require this minimal data amount to operate correctly. The exact value
  may depend on NIC operation mode, requested offloads, etc. It is strongly
  recommended to omit this parameter and use the default values. Anyway,
  applications using this parameter should take into consideration that
  specifying an inconsistent value may prevent the NIC from sending packets.

  If ``txq_inline_min`` key is present the specified value (may be aligned
  by the driver in order not to exceed the limits and provide better descriptor
  space utilization) will be used by the driver and it is guaranteed that
  requested amount of data bytes are inlined into the WQE beside other inline
  settings. This key also may update ``txq_inline_max`` value (default
  or specified explicitly in devargs) to reserve the space for inline data.

  If ``txq_inline_min`` key is not present, the value may be queried by the
  driver from the NIC via DevX if this feature is available. If there is no DevX
  enabled/supported the value 18 (supposing L2 header including VLAN) is set
  for ConnectX-4 and ConnectX-4 Lx, and 0 is set by default for ConnectX-5
  and newer NICs. If packet is shorter than the ``txq_inline_min`` value, the
  entire packet is inlined.

  For ConnectX-4 NIC, driver does not allow specifying value below 18
  (minimal L2 header, including VLAN), error will be raised.

  For ConnectX-4 Lx NIC, it is allowed to specify values below 18, but
  it is not recommended and may prevent NIC from sending packets over
  some configurations.

  For ConnectX-4 and ConnectX-4 Lx NICs, automatically configured value
  is insufficient for some traffic, because they require at least all L2 headers
  to be inlined. For example, Q-in-Q adds 4 bytes to default 18 bytes
  of Ethernet and VLAN, thus ``txq_inline_min`` must be set to 22.
  MPLS would add 4 bytes per label. Final value must account for all possible
  L2 encapsulation headers used in particular environment.

  Please note, this minimal data inlining disengages eMPW feature (Enhanced
  Multi-Packet Write), because the latter does not support partial packet inlining.
  This is not very critical because minimal data inlining is mostly required
  by ConnectX-4 and ConnectX-4 Lx, and these NICs do not support eMPW feature.

- ``txq_inline_max`` parameter [int]

  Specifies the maximal packet length to be completely inlined into WQE
  Ethernet Segment for ordinary SEND method. If packet is larger than specified
  value, the packet data won't be copied by the driver at all, data buffer
  is addressed with a pointer. If packet length is less or equal all packet
  data will be copied into WQE. This may improve PCI bandwidth utilization for
  short packets significantly but requires the extra CPU cycles.

  The data inline feature is controlled by number of Tx queues, if number of Tx
  queues is larger than ``txqs_min_inline`` key parameter, the inline feature
  is engaged, if there are not enough Tx queues (which means not enough CPU cores
  and CPU resources are scarce), data inline is not performed by the driver.
  Assigning ``txqs_min_inline`` with zero always enables the data inline.

  The default ``txq_inline_max`` value is 290. The specified value may be adjusted
  by the driver in order not to exceed the limit (930 bytes) and to provide better
  WQE space filling without gaps, the adjustment is reflected in the debug log.
  Also, the default value (290) may be decreased in run-time if the large transmit
  queue size is requested and hardware does not support enough descriptor
  amount, in this case warning is emitted. If ``txq_inline_max`` key is
  specified and requested inline settings can not be satisfied then error
  will be raised.

- ``txq_inline_mpw`` parameter [int]

  Specifies the maximal packet length to be completely inlined into WQE for
  Enhanced MPW method. If packet is larger than the specified value, the packet
  data won't be copied, and data buffer is addressed with pointer. If packet length
  is less or equal, all packet data will be copied into WQE. This may improve PCI
  bandwidth utilization for short packets significantly but requires the extra
  CPU cycles.

  The data inline feature is controlled by number of TX queues, if number of Tx
  queues is larger than ``txqs_min_inline`` key parameter, the inline feature
  is engaged, if there are not enough Tx queues (which means not enough CPU cores
  and CPU resources are scarce), data inline is not performed by the driver.
  Assigning ``txqs_min_inline`` with zero always enables the data inline.

  The default ``txq_inline_mpw`` value is 268. The specified value may be adjusted
  by the driver in order not to exceed the limit (930 bytes) and to provide better
  WQE space filling without gaps, the adjustment is reflected in the debug log.
  Since multiple packets may be included in the same WQE with the Enhanced Multi-Packet
  Write method and the overall WQE size is limited, it is not recommended to
  specify large values for the ``txq_inline_mpw``. Also, the default value (268)
  may be decreased in run-time if the large transmit queue size is requested
  and hardware does not support enough descriptor amount, in this case warning
  is emitted. If ``txq_inline_mpw`` key is specified and requested inline
  settings can not be satisfied then error will be raised.

- ``txqs_max_vec`` parameter [int]

  Enable vectorized Tx only when the number of TX queues is less than or
  equal to this value. This parameter is deprecated and ignored, kept
  for compatibility issue to not prevent driver from probing.

- ``txq_mpw_hdr_dseg_en`` parameter [int]

  A nonzero value enables including two pointers in the first block of TX
  descriptor. The parameter is deprecated and ignored, kept for compatibility
  issue.

- ``txq_max_inline_len`` parameter [int]

  Maximum size of packet to be inlined. This limits the size of packet to
  be inlined. If the size of a packet is larger than configured value, the
  packet isn't inlined even though there's enough space remained in the
  descriptor. Instead, the packet is included with pointer. This parameter
  is deprecated and converted directly to ``txq_inline_mpw`` providing full
  compatibility. Valid only if eMPW feature is engaged.

- ``txq_mpw_en`` parameter [int]

  A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
  ConnectX-6, ConnectX-6 Dx, ConnectX-6 Lx, BlueField, BlueField-2.
  eMPW allows the Tx burst function to pack up multiple packets
  in a single descriptor session in order to save PCI bandwidth
  and improve performance at the cost of a slightly higher CPU usage.
  When ``txq_inline_mpw`` is set along with ``txq_mpw_en``,
  Tx burst function copies entire packet data on to Tx descriptor
  instead of including pointer of packet.

  The Enhanced Multi-Packet Write feature is enabled by default if NIC supports
  it, can be disabled by explicit specifying 0 value for ``txq_mpw_en`` option.
  Also, if minimal data inlining is requested by non-zero ``txq_inline_min``
  option or reported by the NIC, the eMPW feature is disengaged.

- ``tx_db_nc`` parameter [int]

  This parameter name is deprecated and ignored.
  The new name for this parameter is ``sq_db_nc``.
  See :ref:`common driver options <mlx5_common_driver_options>`.

- ``tx_pp`` parameter [int]

  If a nonzero value is specified the driver creates all necessary internal
  objects to provide accurate packet send scheduling on mbuf timestamps.
  The positive value specifies the scheduling granularity in nanoseconds,
  the packet send will be accurate up to specified digits. The allowed range is
  from 500 to 1 million nanoseconds. The negative value specifies the module
  of granularity and engages the special test mode to check the schedule rate.
  By default (if the ``tx_pp`` is not specified) send scheduling on timestamps
  feature is disabled.

  Starting with ConnectX-7 the capability to schedule traffic directly
  on timestamp specified in descriptor is provided,
  no extra objects are needed anymore and scheduling capability
  is advertised and handled regardless of ``tx_pp`` parameter presence.

- ``tx_skew`` parameter [int]

  The parameter adjusts the send packet scheduling on timestamps and represents
  the average delay between beginning of the transmitting descriptor processing
  by the hardware and appearance of actual packet data on the wire. The value
  should be provided in nanoseconds and is valid only if ``tx_pp`` parameter is
  specified. The default value is zero.

- ``tx_vec_en`` parameter [int]

  A nonzero value enables Tx vector on ConnectX-5, ConnectX-6, ConnectX-6 Dx,
  ConnectX-6 Lx, BlueField and BlueField-2 NICs
  if the number of global Tx queues on the port is less than ``txqs_max_vec``.
  The parameter is deprecated and ignored.

- ``rx_vec_en`` parameter [int]

  A nonzero value enables Rx vector if the port is not configured in
  multi-segment otherwise this parameter is ignored.
  Enabled by default.

- ``vf_nl_en`` parameter [int]

  A nonzero value enables Netlink requests from the VF to add/remove MAC
  addresses and/or enable/disable promiscuous/all multicast on the Netdevice.
  Otherwise the relevant configuration must be run with Linux iproute2 tools.
  This is a prerequisite to receive this kind of traffic.

  Enabled by default, valid only on VF devices ignored otherwise.

- ``l3_vxlan_en`` parameter [int]

  A nonzero value allows L3 VXLAN and VXLAN-GPE flow creation. To enable
  L3 VXLAN or VXLAN-GPE, users have to configure firmware and enable this
  parameter. This is a prerequisite to receive this kind of traffic.

  Disabled by default.
- `` dv_xmeta_en `` parameter [int]
A nonzero value enables extensive flow metadata support if the device is
capable and the driver supports it. This can enable extensive support of
the `` MARK `` and `` META `` items of `` rte_flow `` . The newly introduced
`` SET_TAG `` and `` SET_META `` actions do not depend on `` dv_xmeta_en `` .
There are some possible configurations, depending on the parameter value:
- 0, this is the default value, it defines the legacy mode, the `` MARK ``
and `` META `` related actions and items operate only within the NIC Tx and
NIC Rx steering domains, no `` MARK `` and `` META `` information crosses
the domain boundaries. The `` MARK `` item is 24 bits wide, the `` META ``
item is 32 bits wide and match is supported on egress only.
- 1, this engages extensive metadata mode, the `` MARK `` and `` META ``
related actions and items operate within all supported steering domains,
including FDB, `` MARK `` and `` META `` information may cross the domain
boundaries. The `` MARK `` item is 24 bits wide, the `` META `` item width
depends on kernel and firmware configurations and might be 0, 16 or
32 bits. Within the NIC Tx domain `` META `` data width is 32 bits for
compatibility, the actual width of data transferred to the FDB domain
depends on kernel configuration and may vary. The actual supported
width can be retrieved at runtime by a series of rte_flow_validate()
trials.
- 2, this engages extensive metadata mode, the `` MARK `` and `` META ``
related actions and items operate within all supported steering domains,
including FDB, `` MARK `` and `` META `` information may cross the domain
boundaries. The `` META `` item is 32 bits wide, the `` MARK `` item width
depends on kernel and firmware configurations and might be 0, 16 or
24 bits. The actual supported width can be retrieved at runtime by a
series of rte_flow_validate() trials.
- 3, this engages tunnel offload mode. In E-Switch configuration, that
mode implicitly activates `` dv_xmeta_en=1 `` .
+------+-----------+-----------+-------------+-------------+
| Mode | ``MARK``  | ``META``  | ``META`` Tx | FDB/Through |
+======+===========+===========+=============+=============+
| 0    | 24 bits   | 32 bits   | 32 bits     | no          |
+------+-----------+-----------+-------------+-------------+
| 1    | 24 bits   | vary 0-32 | 32 bits     | yes         |
+------+-----------+-----------+-------------+-------------+
| 2    | vary 0-24 | 32 bits   | 32 bits     | yes         |
+------+-----------+-----------+-------------+-------------+
If there is no E-Switch configuration, the `` dv_xmeta_en `` parameter is
ignored and the device is configured to operate in legacy mode (0).
Disabled by default (set to 0).
The Direct Verbs/Rules path (engaged with `` dv_flow_en `` = 1) supports all
of the extensive metadata features. The legacy Verbs path supports only FLAG
and MARK metadata actions over the NIC Rx steering domain.
Setting the META value to zero in a flow action means no item is provided
and the receiving datapath will not report the metadata as present in mbufs.
Setting the MARK value to zero in a flow action means the zero FDIR ID value
will be reported on packet reception.
For the MARK action the last 16 values in the full range are reserved for
internal PMD purposes (to emulate the FLAG action). The valid range for the
MARK action values is 0-0xFFEF for the 16-bit mode and 0-0xFFFFEF
for the 24-bit mode; flows with a MARK action value outside
the specified range will be rejected.
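
For instance, a sketch of a testpmd invocation engaging extensive metadata
mode 1 through this devarg (the PCI address is hypothetical)::

   dpdk-testpmd -a 0000:03:00.0,dv_xmeta_en=1 -- -i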
- `` dv_flow_en `` parameter [int]
Value 0 means legacy Verbs flow offloading.
Value 1 enables the DV flow steering assuming it is supported by the
driver (requires rdma-core 24 or higher).
Value 2 enables the WQE based hardware steering.
In this mode, only queue-based flow management is supported.
It is configured by default to 1 (DV flow steering) if supported.
Otherwise, the value is 0 which indicates legacy Verbs flow offloading.
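
For example, a sketch of selecting the WQE based hardware steering through
this devarg (the PCI address is hypothetical)::

   dpdk-testpmd -a 0000:03:00.0,dv_flow_en=2 -- -i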
- `` dv_esw_en `` parameter [int]
A nonzero value enables E-Switch using Direct Rules.
Enabled by default if supported.
- `` lacp_by_user `` parameter [int]
A nonzero value enables the control of LACP traffic by the user application.
When a bond exists in the driver, by default it should be managed by the
kernel and therefore LACP traffic should be steered to the kernel.
If this devarg is set to 1, the application can manage the bond by itself and
LACP traffic will not be steered to the kernel.
Disabled by default (set to 0).
- `` representor `` parameter [list]
This parameter can be used to instantiate DPDK Ethernet devices from
existing port (PF, VF or SF) representors configured on the device.
It is a standard parameter whose format is described in
:ref: `ethernet_device_standard_device_arguments` .
For instance, to probe VF port representors 0 through 2::
<PCI_BDF>,representor=vf[0-2]
To probe SF port representors 0 through 2::
<PCI_BDF>,representor=sf[0-2]
2021-03-28 13:48:11 +00:00
To probe VF port representors 0 through 2 on both PFs of bonding device::
<Primary_PCI_BDF>,representor=pf[0,1]vf[0-2]
- `` max_dump_files_num `` parameter [int]
The maximum number of files per PMD entity that may be created for debug information.
The files will be created in the /var/log directory or in the current directory.
Set to 128 by default.
- `` lro_timeout_usec `` parameter [int]
The maximum allowed duration of an LRO session, in microseconds.
The PMD will set the nearest value supported by the hardware that does not
exceed the input `` lro_timeout_usec `` value.
If this parameter is not specified, by default the PMD will set
the smallest value supported by the hardware.
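
For example, a sketch requesting LRO sessions of at most 512 microseconds
(the PCI address is hypothetical and `` --enable-lro `` is the testpmd option
requesting the LRO offload)::

   dpdk-testpmd -a 0000:03:00.0,lro_timeout_usec=512 -- --enable-lro -i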
- `` hp_buf_log_sz `` parameter [int]
The total data buffer size of a hairpin queue (logarithmic form), in bytes.
The PMD will set the data buffer size to 2 ** `` hp_buf_log_sz `` , both for Rx and Tx.
The allowed range of the value is specified by the firmware and initialization
will fail if the value is out of that range.
The range of the value is currently from 11 to 19, so the supported frame
size of a single hairpin packet is from 512B to 128KB. This might change if a
different firmware release is being used. A small value reduces memory
consumption but cannot accommodate a large frame. If the value is too large,
the memory consumption will be high and some potential performance degradation
will be introduced.
By default, the PMD will set this value to 16, which means that 9KB jumbo
frames will be supported.
- `` reclaim_mem_mode `` parameter [int]
Caching some resources on flow destroy helps make flow re-creation more
efficient, while some systems may require that all the resources be
reclaimed once the flows are destroyed.
The parameter `` reclaim_mem_mode `` provides the option for the user to
configure whether the resource cache is needed or not.
There are three options to choose from:
- 0. The flow resources will be cached as usual. The cache is helpful for
the flow insertion rate.
- 1. Only the DPDK PMD level resources reclaim is enabled.
- 2. Both the DPDK PMD level and the rdma-core low level are configured as
reclaimed mode.
By default, the PMD will set this value to 0.
- `` decap_en `` parameter [int]
Some devices do not support FCS (frame checksum) scattering for
tunnel-decapsulated packets.
If set to 0, this option forces the FCS feature and rejects tunnel
decapsulation in the flow engine for such devices.
By default, the PMD will set this value to 1.
- `` allow_duplicate_pattern `` parameter [int]
There are two options to choose from:
- 0. Prevent insertion of rules with the same pattern items on a non-root table.
In this case, only the first rule is inserted and the following rules are
rejected with error code EEXIST returned.
- 1. Allow insertion of rules with the same pattern items.
In this case, all rules are inserted but only the first rule takes effect;
a subsequent rule takes effect only after the previous rules are deleted.
By default, the PMD will set this value to 1.
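
All of the parameters above are passed as a comma-separated device argument
list attached to the PCI address. A hedged example combining several of them
(the PCI address and values are illustrative only)::

   dpdk-testpmd -a 0000:03:00.0,reclaim_mem_mode=1,decap_en=0,allow_duplicate_pattern=0,hp_buf_log_sz=16 -- -i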
2017-02-09 08:32:01 +00:00
Supported NICs
--------------
The following Mellanox device families are supported by the same mlx5 driver:
- ConnectX-4
- ConnectX-4 Lx
- ConnectX-5
- ConnectX-5 Ex
- ConnectX-6
- ConnectX-6 Dx
- ConnectX-6 Lx
- BlueField
- BlueField-2
Below are detailed device names:
* Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX4111A-XCAT (1x10G)
* Mellanox\ |reg| ConnectX\ |reg|-4 10G MCX412A-XCAT (2x10G)
* Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX4111A-ACAT (1x25G)
* Mellanox\ |reg| ConnectX\ |reg|-4 25G MCX412A-ACAT (2x25G)
* Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX413A-BCAT (1x40G)
* Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX4131A-BCAT (1x40G)
* Mellanox\ |reg| ConnectX\ |reg|-4 40G MCX415A-BCAT (1x40G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX413A-GCAT (1x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX4131A-GCAT (1x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX414A-BCAT (2x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX415A-GCAT (1x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-BCAT (2x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 50G MCX416A-GCAT (2x50G)
* Mellanox\ |reg| ConnectX\ |reg|-4 100G MCX415A-CCAT (1x100G)
* Mellanox\ |reg| ConnectX\ |reg|-4 100G MCX416A-CCAT (2x100G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4111A-XCAT (1x10G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4121A-XCAT (2x10G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4111A-ACAT (1x25G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)
* Mellanox\ |reg| ConnectX\ |reg|-4 Lx 40G MCX4131A-BCAT (1x40G)
* Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
* Mellanox\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)
* Mellanox\ |reg| ConnectX\ |reg|-6 200G MCX654106A-HCAT (2x200G)
* Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
* Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 200G MCX623105AN-VDAT (1x200G)
* Mellanox\ |reg| ConnectX\ |reg|-6 Lx EN 25G MCX631102AN-ADAT (2x25G)
Sub-Function
------------
See :ref: `mlx5_sub_function` .
Sub-Function representor support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An SF netdev supports E-Switch representation offload
similar to PF and VF representors.
Use <sfnum> to probe an SF representor::
testpmd> port attach <PCI_BDF>,representor=sf<sfnum>,dv_flow_en=1
Performance tuning
------------------
1. Configure aggressive CQE Zipping for maximum performance::
mlxconfig -d <mst device> s CQE_COMPRESSION=1
To set it back to the default CQE Zipping mode use::
mlxconfig -d <mst device> s CQE_COMPRESSION=0
2. In case of virtualization:
- Make sure that hypervisor kernel is 3.16 or newer.
- Configure boot with `` iommu=pt `` .
- Use 1G huge pages.
- Make sure to allocate a VM on huge pages.
- Make sure to set CPU pinning.
3. Use a CPU near the local NUMA node to which the PCIe adapter is connected,
for better performance. For VMs, verify that the right CPU
and NUMA node are pinned according to the above. Run::
lstopo-no-graphics --merge
to identify the NUMA node to which the PCIe adapter is connected.
4. If more than one adapter is used, and root complex capabilities allow
putting both adapters on the same NUMA node without PCI bandwidth degradation,
it is recommended to locate both adapters on the same NUMA node.
This allows forwarding packets from one to the other without
a NUMA performance penalty.
5. Disable pause frames::
ethtool -A <netdev> rx off tx off
6. Verify that IO non-posted prefetch is disabled by default. This can be checked
via the BIOS configuration. Please contact your server provider for more
information about the settings.
.. note ::
On some machines, depending on the machine integrator, it is beneficial
to set the PCI max read request parameter to 1K. This can be
done in the following way:
To query the read request size use::
setpci -s <NIC PCI address> 68.w
If the output is different than 3XXX, set it by::
setpci -s <NIC PCI address> 68.w=3XXX
The XXX can be different on different systems. Make sure to configure
according to the setpci output.
7. To minimize the overhead of searching Memory Regions (see the example below):
- The EAL '--socket-mem' option is recommended to pin memory by a predictable amount.
- Configure a per-lcore cache when creating Mempools for packet buffers.
- Refrain from dynamically allocating/freeing memory at run-time.
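
A hedged sketch combining these recommendations (the amounts are
illustrative; `` --socket-mem `` is the EAL option reserving pinned memory and
`` --mbcache `` is the testpmd option setting the per-lcore mempool cache)::

   dpdk-testpmd --socket-mem=2048,0 -a 0000:03:00.0 -- --mbcache=512 -i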
Rx burst functions
------------------
There are multiple Rx burst functions with different advantages and limitations.
.. table :: Rx burst functions
+-------------------+------------------------+---------+-----------------+------+-------+
| Function Name     | Enabler                | Scatter | Error Recovery  | CQE  | Large |
|                   |                        |         |                 | comp | MTU   |
+===================+========================+=========+=================+======+=======+
| rx_burst          | rx_vec_en=0            | Yes     | Yes             | Yes  | Yes   |
+-------------------+------------------------+---------+-----------------+------+-------+
| rx_burst_vec      | rx_vec_en=1 (default)  | No      | if CQE comp off | Yes  | No    |
+-------------------+------------------------+---------+-----------------+------+-------+
| rx_burst_mprq     | | mprq_en=1            | No      | Yes             | Yes  | Yes   |
|                   | | RxQs >= rxqs_min_mprq|         |                 |      |       |
+-------------------+------------------------+---------+-----------------+------+-------+
| rx_burst_mprq_vec | | rx_vec_en=1 (default)| No      | if CQE comp off | Yes  | Yes   |
|                   | | mprq_en=1            |         |                 |      |       |
|                   | | RxQs >= rxqs_min_mprq|         |                 |      |       |
+-------------------+------------------------+---------+-----------------+------+-------+
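
For instance, a sketch engaging the vectorized MPRQ function from the table
above, assuming the configured Rx queue count satisfies `` rxqs_min_mprq ``
(the PCI address and queue counts are hypothetical)::

   dpdk-testpmd -a 0000:03:00.0,rx_vec_en=1,mprq_en=1 -- --rxq=4 --txq=4 -i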
.. _mlx5_offloads_support:
Supported hardware offloads
---------------------------
.. table :: Minimal SW/HW versions for queue offloads
============== ===== ===== ========= ===== ========== =============
Offload        DPDK  Linux rdma-core OFED  firmware   hardware
============== ===== ===== ========= ===== ========== =============
common base    17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
checksums      17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
Rx timestamp   17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
TSO            17.11 4.14  16        4.2-1 12.21.1000 ConnectX-4
LRO            19.08 N/A   N/A       4.6-4 16.25.6406 ConnectX-5
Tx scheduling  20.08 N/A   N/A       5.1-2 22.28.2006 ConnectX-6 Dx
Buffer Split   20.11 N/A   N/A       5.1-2 16.28.2006 ConnectX-5
============== ===== ===== ========= ===== ========== =============
2019-08-05 15:32:22 +00:00
.. table :: Minimal SW/HW versions for rte_flow offloads
+-----------------------+-----------------+-----------------+
| Offload               | with E-Switch   | with NIC        |
+=======================+=================+=================+
| Count                 | | DPDK 19.05    | | DPDK 19.02    |
|                       | | OFED 4.6      | | OFED 4.6      |
|                       | | rdma-core 24  | | rdma-core 23  |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Drop                  | | DPDK 19.05    | | DPDK 18.11    |
|                       | | OFED 4.6      | | OFED 4.5      |
|                       | | rdma-core 24  | | rdma-core 23  |
|                       | | ConnectX-5    | | ConnectX-4    |
+-----------------------+-----------------+-----------------+
| Queue / RSS           | | N/A           | | DPDK 18.11    |
|                       |                 | | OFED 4.5      |
|                       |                 | | rdma-core 23  |
|                       |                 | | ConnectX-4    |
+-----------------------+-----------------+-----------------+
| Shared action         | | :numref:`sact`| | :numref:`sact`|
+-----------------------+-----------------+-----------------+
| | VLAN                | | DPDK 19.11    | | DPDK 19.11    |
| | (of_pop_vlan /      | | OFED 4.7-1    | | OFED 4.7-1    |
| | of_push_vlan /      | | ConnectX-5    | | ConnectX-5    |
| | of_set_vlan_pcp /   |                 |                 |
| | of_set_vlan_vid)    |                 |                 |
+-----------------------+-----------------+-----------------+
| | VLAN                | | DPDK 21.05    |                 |
| | ingress and /       | | OFED 5.3      | | N/A           |
| | of_push_vlan /      | | ConnectX-6 Dx |                 |
+-----------------------+-----------------+-----------------+
| | VLAN                | | DPDK 21.05    |                 |
| | egress and /        | | OFED 5.3      | | N/A           |
| | of_pop_vlan /       | | ConnectX-6 Dx |                 |
+-----------------------+-----------------+-----------------+
| Encapsulation         | | DPDK 19.05    | | DPDK 19.02    |
| (VXLAN / NVGRE / RAW) | | OFED 4.7-1    | | OFED 4.6      |
|                       | | rdma-core 24  | | rdma-core 23  |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Encapsulation         | | DPDK 19.11    | | DPDK 19.11    |
| GENEVE                | | OFED 4.7-3    | | OFED 4.7-3    |
|                       | | rdma-core 27  | | rdma-core 27  |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Tunnel Offload        | | DPDK 20.11    | | DPDK 20.11    |
|                       | | OFED 5.1-2    | | OFED 5.1-2    |
|                       | | rdma-core 32  | | N/A           |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| | Header rewrite      | | DPDK 19.05    | | DPDK 19.02    |
| | (set_ipv4_src /     | | OFED 4.7-1    | | OFED 4.7-1    |
| | set_ipv4_dst /      | | rdma-core 24  | | rdma-core 24  |
| | set_ipv6_src /      | | ConnectX-5    | | ConnectX-5    |
| | set_ipv6_dst /      |                 |                 |
| | set_tp_src /        |                 |                 |
| | set_tp_dst /        |                 |                 |
| | dec_ttl /           |                 |                 |
| | set_ttl /           |                 |                 |
| | set_mac_src /       |                 |                 |
| | set_mac_dst)        |                 |                 |
+-----------------------+-----------------+-----------------+
| | Header rewrite      | | DPDK 20.02    | | DPDK 20.02    |
| | (set_dscp)          | | OFED 5.0      | | OFED 5.0      |
|                       | | rdma-core 24  | | rdma-core 24  |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Jump                  | | DPDK 19.05    | | DPDK 19.02    |
|                       | | OFED 4.7-1    | | OFED 4.7-1    |
|                       | | rdma-core 24  | | N/A           |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Mark / Flag           | | DPDK 19.05    | | DPDK 18.11    |
|                       | | OFED 4.6      | | OFED 4.5      |
|                       | | rdma-core 24  | | rdma-core 23  |
|                       | | ConnectX-5    | | ConnectX-4    |
+-----------------------+-----------------+-----------------+
| Meta data             | | DPDK 19.11    | | DPDK 19.11    |
|                       | | OFED 4.7-3    | | OFED 4.7-3    |
|                       | | rdma-core 26  | | rdma-core 26  |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Port ID               | | DPDK 19.05    | | N/A           |
|                       | | OFED 4.7-1    | | N/A           |
|                       | | rdma-core 24  | | N/A           |
|                       | | ConnectX-5    | | N/A           |
+-----------------------+-----------------+-----------------+
| Hairpin               | | N/A           | | DPDK 19.11    |
|                       |                 | | OFED 4.7-3    |
|                       |                 | | rdma-core 26  |
|                       |                 | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| 2-port Hairpin        | | N/A           | | DPDK 20.11    |
|                       |                 | | OFED 5.1-2    |
|                       |                 | | N/A           |
|                       |                 | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Metering              | | DPDK 19.11    | | DPDK 19.11    |
|                       | | OFED 4.7-3    | | OFED 4.7-3    |
|                       | | rdma-core 26  | | rdma-core 26  |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| ASO Metering          | | DPDK 21.05    | | DPDK 21.05    |
|                       | | OFED 5.3      | | OFED 5.3      |
|                       | | rdma-core 33  | | rdma-core 33  |
|                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
+-----------------------+-----------------+-----------------+
| Metering Hierarchy    | | DPDK 21.08    | | DPDK 21.08    |
|                       | | OFED 5.3      | | OFED 5.3      |
|                       | | N/A           | | N/A           |
|                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
+-----------------------+-----------------+-----------------+
| Sampling              | | DPDK 20.11    | | DPDK 20.11    |
|                       | | OFED 5.1-2    | | OFED 5.1-2    |
|                       | | rdma-core 32  | | N/A           |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Encapsulation         | | DPDK 21.02    | | DPDK 21.02    |
| GTP PSC               | | OFED 5.2      | | OFED 5.2      |
|                       | | rdma-core 35  | | rdma-core 35  |
|                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
+-----------------------+-----------------+-----------------+
| Encapsulation         | | DPDK 21.02    | | DPDK 21.02    |
| GENEVE TLV option     | | OFED 5.2      | | OFED 5.2      |
|                       | | rdma-core 34  | | rdma-core 34  |
|                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
+-----------------------+-----------------+-----------------+
| Modify Field          | | DPDK 21.02    | | DPDK 21.02    |
|                       | | OFED 5.2      | | OFED 5.2      |
|                       | | rdma-core 35  | | rdma-core 35  |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Connection tracking   | | N/A           | | DPDK 21.05    |
|                       |                 | | OFED 5.3      |
|                       |                 | | rdma-core 35  |
|                       |                 | | ConnectX-6 Dx |
+-----------------------+-----------------+-----------------+
2021-02-02 12:23:51 +00:00
.. table :: Minimal SW/HW versions for shared action offload
:name: sact
+-----------------------+-----------------+-----------------+
| Shared Action         | with E-Switch   | with NIC        |
+=======================+=================+=================+
| RSS                   | | N/A           | | DPDK 20.11    |
|                       |                 | | OFED 5.2      |
|                       |                 | | rdma-core 33  |
|                       |                 | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
| Age                   | | DPDK 20.11    | | DPDK 20.11    |
|                       | | OFED 5.2      | | OFED 5.2      |
|                       | | rdma-core 32  | | rdma-core 32  |
|                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
+-----------------------+-----------------+-----------------+
| Count                 | | DPDK 21.05    | | DPDK 21.05    |
|                       | | OFED 4.6      | | OFED 4.6      |
|                       | | rdma-core 24  | | rdma-core 23  |
|                       | | ConnectX-5    | | ConnectX-5    |
+-----------------------+-----------------+-----------------+
Notes for metadata
------------------
MARK and META items are interrelated with the datapath - they might move to/from
the application in mbuf fields. Hence, the zero value for these items has a
special meaning - it indicates that no metadata is provided, while non-zero
values are treated by the application and the PMD as valid ones.
Moreover, in the flow engine domain the value zero is acceptable to match and
set, so zero values may be specified as rte_flow parameters for the META and
MARK items and actions. At the same time, a zero mask has no meaning and
should be rejected at the validation stage.
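
As a sketch in testpmd flow syntax (port and queue numbers are hypothetical
and extensive metadata support is assumed to be enabled), the first rule sets
a valid non-zero MARK value and the second matches a non-zero META value;
the same rules with a zero mask would fail validation::

   testpmd> flow create 0 ingress pattern eth / end actions mark id 1 / queue index 0 / end
   testpmd> flow create 0 ingress pattern eth / meta data is 1 / end actions queue index 0 / end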
Notes for rte_flow
------------------
Flows are not cached in the driver.
When stopping a device port, all the flows created on this port from the
application will be flushed automatically in the background.
After stopping the device port, all flows on this port become invalid and
not represented in the system.
All references to these flows held by the application should be discarded
directly but neither destroyed nor flushed.
The application should re-create the flows as required after the port restart.
Notes for testpmd
-----------------
Compared to librte_net_mlx4 that implements a single RSS configuration per
port, librte_net_mlx5 supports per-protocol RSS configuration.
Since `` testpmd `` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_net_mlx4::
> port stop all
> port config all rss all
> port start all
Usage example
-------------
This section demonstrates how to launch **testpmd** with Mellanox
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
2015-10-30 18:52:42 +00:00
#. Load the kernel modules::
modprobe -a ib_uverbs mlx5_core mlx5_ib
Alternatively, if MLNX_OFED/MLNX_EN is fully installed, the following script
can be run::
/etc/init.d/openibd restart
.. note ::
User space I/O kernel modules (uio and igb_uio) are not used and do
not have to be loaded.
#. Make sure Ethernet interfaces are in working order and linked to kernel
verbs. Related sysfs entries should be present::
ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
Example output::
eth30
eth31
eth32
eth33
#. Optionally, retrieve their PCI bus addresses to be used with the allow list::
{
for intf in eth2 eth3 eth4 eth5;
do
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
sed -n 's,.*/\(.*\),-a \1,p'
Example output::
-a 0000:05:00.1
-a 0000:06:00.0
-a 0000:06:00.1
-a 0000:05:00.0
#. Request huge pages::
dpdk-hugepages.py --setup 2G
#. Start testpmd with basic parameters::
dpdk-testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
[...]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1013 librte_net_mlx5
PMD: librte_net_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
PMD: librte_net_mlx5: 1 port(s) detected
PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL: probe driver: 15b3:1013 librte_net_mlx5
PMD: librte_net_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
PMD: librte_net_mlx5: 1 port(s) detected
PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1013 librte_net_mlx5
PMD: librte_net_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
PMD: librte_net_mlx5: 1 port(s) detected
PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
EAL: PCI device 0000:06:00.1 on NUMA socket 0
EAL: probe driver: 15b3:1013 librte_net_mlx5
PMD: librte_net_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
PMD: librte_net_mlx5: 1 port(s) detected
PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
Interactive-mode selected
Configuring Port 0 (socket 0)
PMD: librte_net_mlx5: 0x8cba80: TX queues number update: 0 -> 2
PMD: librte_net_mlx5: 0x8cba80: RX queues number update: 0 -> 2
Port 0: E4:1D:2D:E7:0C:FE
Configuring Port 1 (socket 0)
PMD: librte_net_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
PMD: librte_net_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
Port 1: E4:1D:2D:E7:0C:FF
Configuring Port 2 (socket 0)
PMD: librte_net_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
PMD: librte_net_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
Port 2: E4:1D:2D:E7:0C:FA
Configuring Port 3 (socket 0)
PMD: librte_net_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
PMD: librte_net_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
Port 3: E4:1D:2D:E7:0C:FB
Checking link statuses...
Port 0 Link Up - speed 40000 Mbps - full-duplex
Port 1 Link Up - speed 40000 Mbps - full-duplex
Port 2 Link Up - speed 10000 Mbps - full-duplex
Port 3 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>
How to dump flows
-----------------
This section demonstrates how to dump flows. Currently, it is possible to dump
all flows with the assistance of external tools.
#. Two ways to get the flow raw file:
- Using testpmd CLI:
.. code-block :: console
To dump all flows:
testpmd> flow dump <port> all <output_file>
and dump one flow:
testpmd> flow dump <port> rule <rule_id> <output_file>
- Calling the rte_flow_dev_dump API:
.. code-block :: console
rte_flow_dev_dump(port, flow, file, NULL);
#. Dump human-readable flows from raw file:
Get flow parsing tool from: https://github.com/Mellanox/mlx_steering_dump
.. code-block :: console
mlx_steering_dump.py -f <output_file> -flowptr <flow_ptr>
How to share a meter between ports in the same switch domain
------------------------------------------------------------
This section demonstrates how to use the shared meter. A meter M can be created
on port X and shared with port Y on the same switch domain in the following way:
.. code-block :: console
flow create X ingress transfer pattern eth / port_id id is Y / end actions meter mtr_id M / end
How to use meter hierarchy
--------------------------
This section demonstrates how to create and use a meter hierarchy.
A termination meter M can be the policy green action of another termination meter N.
The two meters are thus chained together. Using meter N in a flow will apply
both meters of the hierarchy to that flow.
.. code-block :: console
add port meter policy 0 1 g_actions queue index 0 / end y_actions end r_actions drop / end
create port meter 0 M 1 1 yes 0xffff 1 0
add port meter policy 0 2 g_actions meter mtr_id M / end y_actions end r_actions drop / end
create port meter 0 N 2 2 yes 0xffff 1 0
flow create 0 ingress group 1 pattern eth / end actions meter mtr_id N / end
How to configure a VF as trusted
--------------------------------
This section demonstrates how to configure a virtual function (VF) interface as trusted.
A trusted VF is needed to offload rte_flow rules to a group greater than 0.
The configuration is done in two parts: driver and FW.
The procedure below is an example of using a ConnectX-5 adapter card (pf0) with 2 VFs:
#. Create 2 VFs on the PF pf0 when in Legacy SR-IOV mode::
$ echo 2 > /sys/class/net/pf0/device/mlx5_num_vfs
#. Verify the VFs are created:
.. code-block :: console
$ lspci | grep Mellanox
82:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
82:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
82:00.2 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
82:00.3 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
#. Unbind all VFs. For each VF PCIe address, use the following command to unbind the driver::
$ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/unbind
#. Set the VFs to be trusted for the kernel by using one of the methods below:
- Using sysfs file::
$ echo ON | tee /sys/class/net/pf0/device/sriov/0/trust
$ echo ON | tee /sys/class/net/pf0/device/sriov/1/trust
- Using “ip link” command::
$ ip link set p0 vf 0 trust on
$ ip link set p0 vf 1 trust on
#. Configure all VFs using mlxreg::
$ mlxreg -d /dev/mst/mt4121_pciconf0 --reg_name VHCA_TRUST_LEVEL --yes --set "all_vhca=0x1,trust_level=0x1"
.. note ::
Firmware version used must be >= xx.29.1016 and MFT >= 4.18
#. For each VF PCIe address, use the following command to bind the driver::
$ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/bind