.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2017 Cavium, Inc

OCTEON TX Poll Mode Driver
==========================

The OCTEON TX ETHDEV PMD (**librte_net_octeontx**) provides poll mode ethdev
driver support for the inbuilt network device found in the **Cavium OCTEON TX**
SoC family, as well as for their virtual functions (VFs) in an SR-IOV context.

More information can be found at the `Cavium, Inc Official Website
<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.

Features
--------

Features of the OCTEON TX Ethdev PMD are:

- Packet type information
- Promiscuous mode
- Port hardware statistics
- Jumbo frames
- Scatter-Gather IO support
- Link state information
- MAC/VLAN filtering
- MTU update
- SR-IOV VF
- Multiple queues for TX
- Lock-free Tx queue
- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection

Supported OCTEON TX SoCs
------------------------

- CN83xx

Unsupported features
--------------------

The features supported by the device but not yet supported by this PMD include:

- Receive Side Scaling (RSS)
- Scatter and gather for TX and RX
- Ingress classification support
- Egress hierarchical scheduling, traffic shaping, and marking

Prerequisites
-------------

See :doc:`../platform/octeontx` for setup information.

Pre-Installation Configuration
------------------------------

Driver compilation and testing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow the instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./<build_dir>/app/dpdk-testpmd -c 700 \
                --base-virtaddr=0x100000000000 \
                --mbuf-pool-ops-name="octeontx_fpavf" \
                --vdev='event_octeontx' \
                --vdev='eth_octeontx,nr_port=2' \
                -- --rxq=1 --txq=1 --nb-core=2 \
                --total-num-mbufs=16384 -i
      .....
      EAL: Detected 24 lcore(s)
      EAL: Probing VFIO support...
      EAL: VFIO support initialized
      .....
      EAL: PCI device 0000:07:00.1 on NUMA socket 0
      EAL: probe driver: 177d:a04b octeontx_ssovf
      .....
      EAL: PCI device 0001:02:00.7 on NUMA socket 0
      EAL: probe driver: 177d:a0dd octeontx_pkivf
      .....
      EAL: PCI device 0001:03:01.0 on NUMA socket 0
      EAL: probe driver: 177d:a049 octeontx_pkovf
      .....
      PMD: octeontx_probe(): created ethdev eth_octeontx for port 0
      PMD: octeontx_probe(): created ethdev eth_octeontx for port 1
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:0F:B7:11:94:46
      Configuring Port 1 (socket 0)
      Port 1: 00:0F:B7:11:94:47
      .....
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Done
      testpmd>

Initialization
--------------

The OCTEON TX ethdev PMD is exposed as a vdev device which consists of a set
of PKI and PKO PCIe VF devices. On EAL initialization, the PKI/PKO PCIe VF
devices are probed; the vdev device can then be created from the application
code or from the EAL command line, based on the number of PKI/PKO PCIe VF
devices bound to and probed by DPDK, by:

* Invoking ``rte_vdev_init("eth_octeontx")`` from the application,
  as shown in the sketch below

* Using ``--vdev="eth_octeontx"`` in the EAL options, which will call
  ``rte_vdev_init()`` internally
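
A minimal sketch of the programmatic path, assuming EAL has already been
initialized and the PKI/PKO VF devices probed (the helper name and error
handling here are illustrative, not part of the PMD API):

.. code-block:: c

   #include <stdio.h>
   #include <rte_bus_vdev.h>

   /* Create the OCTEON TX ethdev vdev after rte_eal_init(); this is
    * equivalent to passing --vdev="eth_octeontx" in the EAL options. */
   static int
   create_octeontx_vdev(void)
   {
       int ret = rte_vdev_init("eth_octeontx", NULL);
       if (ret != 0)
           printf("eth_octeontx vdev creation failed: %d\n", ret);
       return ret;
   }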

Device arguments
~~~~~~~~~~~~~~~~

Each ethdev port is mapped to a physical port (LMAC). The application can
specify the number of ports of interest with the ``nr_port`` argument.
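
The same argument can be supplied when the vdev is created from application
code; a short sketch (the helper name and port count are examples):

.. code-block:: c

   /* Create two OCTEON TX ethdev ports by passing the nr_port
    * device argument described above. */
   static int
   create_two_octeontx_ports(void)
   {
       return rte_vdev_init("eth_octeontx", "nr_port=2");
   }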

Dependency
~~~~~~~~~~

The ``eth_octeontx`` PMD depends on the ``event_octeontx`` eventdev device and
the ``octeontx_fpavf`` external mempool handler.

Example:

.. code-block:: console

    ./your_dpdk_application --mbuf-pool-ops-name="octeontx_fpavf" \
                --vdev='event_octeontx' \
                --vdev="eth_octeontx,nr_port=2"

Limitations
-----------

``octeontx_fpavf`` external mempool handler dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OCTEON TX SoC family NIC has an inbuilt HW-assisted external mempool
manager. This driver only works with the ``octeontx_fpavf`` external mempool
handler, as it is the most performance-effective way of handling packet
allocation and Tx buffer recycling on the OCTEON TX SoC platform.
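
For example, instead of setting the global ``--mbuf-pool-ops-name`` EAL
option, an application may request the handler per pool; a sketch using
``rte_pktmbuf_pool_create_by_ops`` (pool name, element count, and cache size
are illustrative):

.. code-block:: c

   #include <rte_lcore.h>
   #include <rte_mbuf.h>

   /* Create a pktmbuf pool explicitly backed by the octeontx_fpavf
    * external mempool handler. */
   static struct rte_mempool *
   create_fpavf_pool(void)
   {
       return rte_pktmbuf_pool_create_by_ops(
               "octeontx_pool", 16384, 256, 0,
               RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
               "octeontx_fpavf");
   }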

CRC stripping
~~~~~~~~~~~~~

The OCTEON TX SoC family NICs strip the CRC of every packet coming into the
host interface, irrespective of the offload configuration.

Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The OCTEON TX SoC family NICs support jumbo frames of up to 32K bytes. This
value is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 32K, frames
up to 32K bytes can still reach the host interface.
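
A short configuration sketch, assuming a DPDK release in which ``rxmode.mtu``
replaced ``max_rx_pkt_len`` (the helper name, queue counts, and MTU value are
examples only):

.. code-block:: c

   #include <rte_ethdev.h>

   /* Request a 9000-byte MTU; frames up to 32K bytes can still reach
    * the host interface regardless of this setting, as noted above. */
   static int
   configure_port_mtu(uint16_t port_id)
   {
       struct rte_eth_conf conf = {
           .rxmode = { .mtu = 9000 },
       };
       return rte_eth_dev_configure(port_id, 1, 1, &conf);
   }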

Maximum mempool size
~~~~~~~~~~~~~~~~~~~~

The maximum mempool size supplied to Rx queue setup should be less than 128K
elements. When running testpmd on OCTEON TX, the application can limit the
number of mbufs by using the option ``--total-num-mbufs=131072``.