.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2017 Cavium, Inc

OCTEON TX Poll Mode driver
==========================

The OCTEON TX ETHDEV PMD (**librte_net_octeontx**) provides poll mode ethdev
driver support for the inbuilt network device found in the **Cavium OCTEON TX**
SoC family, as well as for its virtual functions (VF) in SR-IOV context.

More information can be found at `Cavium, Inc Official Website
<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.

Features
--------

Features of the OCTEON TX Ethdev PMD are:

- Packet type information
- Promiscuous mode
- Port hardware statistics
- Jumbo frames
- Scatter-Gather IO support
- Link state information
- MAC/VLAN filtering
- MTU update
- SR-IOV VF
- Multiple queues for TX
- Lock-free Tx queue
- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection

Supported OCTEON TX SoCs
------------------------

- CN83xx

Unsupported features
--------------------

The features supported by the device and not yet supported by this PMD include:

- Receive Side Scaling (RSS)
- Scatter and gather for TX and RX
- Ingress classification support
- Egress hierarchical scheduling, traffic shaping, and marking

Prerequisites
-------------

See :doc:`../platform/octeontx` for setup information.

Pre-Installation Configuration
------------------------------

Driver compilation and testing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./<build_dir>/app/dpdk-testpmd -c 700 \
         --base-virtaddr=0x100000000000 \
         --mbuf-pool-ops-name="octeontx_fpavf" \
         --vdev='event_octeontx' \
         --vdev='eth_octeontx,nr_port=2' \
         -- --rxq=1 --txq=1 --nb-core=2 \
         --total-num-mbufs=16384 -i
      .....
      EAL: Detected 24 lcore(s)
      EAL: Probing VFIO support...
      EAL: VFIO support initialized
      .....
      EAL: PCI device 0000:07:00.1 on NUMA socket 0
      EAL: probe driver: 177d:a04b octeontx_ssovf
      .....
      EAL: PCI device 0001:02:00.7 on NUMA socket 0
      EAL: probe driver: 177d:a0dd octeontx_pkivf
      .....
      EAL: PCI device 0001:03:01.0 on NUMA socket 0
      EAL: probe driver: 177d:a049 octeontx_pkovf
      .....
      PMD: octeontx_probe(): created ethdev eth_octeontx for port 0
      PMD: octeontx_probe(): created ethdev eth_octeontx for port 1
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:0F:B7:11:94:46
      Configuring Port 1 (socket 0)
      Port 1: 00:0F:B7:11:94:47
      .....
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Done
      testpmd>

Initialization
--------------

The OCTEON TX ethdev PMD is exposed as a vdev device which consists of a set
of PKI and PKO PCIe VF devices. On EAL initialization, the PKI/PKO PCIe VF
devices are probed, and then the vdev device can be created from the
application code or from the EAL command line, based on the number of
PKI/PKO PCIe VF devices probed and bound to DPDK, by either:

* Invoking ``rte_vdev_init("eth_octeontx")`` from the application code
  (a minimal sketch follows this list)

* Using ``--vdev="eth_octeontx"`` in the EAL options, which will call
  ``rte_vdev_init()`` internally
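
A minimal sketch of the application-code path, assuming the ``event_octeontx``
vdev and the ``octeontx_fpavf`` mempool ops are supplied through the EAL
arguments as in the Dependency example below; the ``nr_port=2`` device argument
and the error handling are illustrative only:

.. code-block:: c

   #include <stdio.h>
   #include <stdlib.h>

   #include <rte_eal.h>
   #include <rte_bus_vdev.h>
   #include <rte_ethdev.h>

   int
   main(int argc, char **argv)
   {
       int ret;

       /* Probes the PKI/PKO (and SSO) PCIe VF devices bound to DPDK. */
       ret = rte_eal_init(argc, argv);
       if (ret < 0)
           rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

       /* Create the OCTEON TX ethdev vdev from application code. */
       ret = rte_vdev_init("eth_octeontx", "nr_port=2");
       if (ret != 0)
           rte_exit(EXIT_FAILURE, "Cannot create eth_octeontx vdev\n");

       printf("%u ethdev port(s) available\n",
              (unsigned int)rte_eth_dev_count_avail());

       /* ... configure, start and use the ports ... */

       return 0;
   }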

Device arguments
~~~~~~~~~~~~~~~~

Each ethdev port is mapped to a physical port (LMAC). The application can
specify the number of ports of interest with the ``nr_port`` device argument.

Dependency
~~~~~~~~~~

The ``eth_octeontx`` PMD depends on the ``event_octeontx`` eventdev device and
the ``octeontx_fpavf`` external mempool handler.

Example:

.. code-block:: console

   ./your_dpdk_application --mbuf-pool-ops-name="octeontx_fpavf" \
      --vdev='event_octeontx' \
      --vdev="eth_octeontx,nr_port=2"

Limitations
-----------

``octeontx_fpavf`` external mempool handler dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OCTEON TX SoC family NIC has an inbuilt hardware-assisted external mempool
manager. This driver will only work with the ``octeontx_fpavf`` external
mempool handler, as it is the most performance-effective way for packet
allocation and Tx buffer recycling on the OCTEON TX SoC platform.
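
As an illustration (not taken from the driver sources), an application can bind
its packet mbuf pool to the ``octeontx_fpavf`` handler explicitly with
``rte_pktmbuf_pool_create_by_ops()``, instead of relying on the
``--mbuf-pool-ops-name="octeontx_fpavf"`` EAL option; the pool name and sizes
below are placeholders:

.. code-block:: c

   #include <rte_lcore.h>
   #include <rte_mbuf.h>
   #include <rte_mempool.h>

   static struct rte_mempool *
   octeontx_pktmbuf_pool_create(void)
   {
       /* Keep the element count below 128K (see "Maximum mempool size"). */
       const unsigned int nb_mbufs = 16384;

       return rte_pktmbuf_pool_create_by_ops("mbuf_pool", nb_mbufs,
               256,                       /* per-lcore cache size */
               0,                         /* application private area */
               RTE_MBUF_DEFAULT_BUF_SIZE, /* data room size */
               rte_socket_id(),
               "octeontx_fpavf");
   }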

CRC stripping
~~~~~~~~~~~~~

The OCTEON TX SoC family NICs strip the CRC for every packet coming into the
host interface irrespective of the offload configuration.

Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The OCTEON TX SoC family NICs support a maximum frame length of 32K bytes. The
value is fixed and cannot be changed. So, even when the ``rxmode.mtu`` member
of ``struct rte_eth_conf`` is set to a value lower than 32K, frames of up to
32K bytes can still reach the host interface.
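
For example, a hypothetical helper that requests a jumbo MTU through
``rte_eth_dev_configure()`` could look as follows; the helper name, the queue
counts and the 9000-byte value are illustrative:

.. code-block:: c

   #include <rte_ethdev.h>

   static int
   configure_port_with_jumbo_mtu(uint16_t port_id)
   {
       struct rte_eth_conf conf = {
           .rxmode = {
               .mtu = 9000, /* requested MTU, in bytes */
           },
       };

       /*
        * One Rx and one Tx queue for brevity. Note that on OCTEON TX the
        * 32K hardware limit applies regardless of the MTU requested here.
        */
       return rte_eth_dev_configure(port_id, 1, 1, &conf);
   }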

Maximum mempool size
~~~~~~~~~~~~~~~~~~~~

The maximum mempool size supplied to Rx queue setup should be less than 128K.
When running testpmd on OCTEON TX, the application can limit the number of
mbufs by using the option ``--total-num-mbufs=131072``.