..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2017 Intel Corporation.

KNI Poll Mode Driver
====================

The KNI PMD is a wrapper to the :ref:`librte_kni <kni>` library.
This PMD enables using KNI without a KNI-specific application:
any forwarding application can use the PMD interface for KNI.

Sending packets to any DPDK controlled interface or sending to the
Linux networking stack will be transparent to the DPDK application.

To create a KNI device, the ``net_kni#`` device name should be used; this
will create a ``kni#`` Linux virtual network interface.

There is no physical device backend for the virtual KNI device.

Packets sent to the KNI Linux interface will be received by the DPDK
application, and the DPDK application may forward packets to a physical NIC
or to a virtual device (like another KNI interface or a PCAP interface).

To forward any traffic from a physical NIC to the Linux networking stack,
an application should control a physical port, create one virtual KNI port,
and forward between the two, as the sketch below illustrates.
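
As an illustration of that pattern, the core of such an application could be
a simple burst forwarding loop between the two ports. This is a minimal
sketch, assuming both ports are already configured and started with one RX
and one TX queue each; the port IDs and burst size are placeholders:

.. code-block:: c

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Forward packets between a physical port and a KNI virtual port,
     * in both directions, freeing any mbufs the TX side did not accept. */
    static void
    forward_loop(uint16_t phys_port, uint16_t kni_port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
            /* Physical NIC -> Linux networking stack (via KNI). */
            nb_rx = rte_eth_rx_burst(phys_port, 0, bufs, BURST_SIZE);
            nb_tx = rte_eth_tx_burst(kni_port, 0, bufs, nb_rx);
            for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);

            /* Linux networking stack (via KNI) -> physical NIC. */
            nb_rx = rte_eth_rx_burst(kni_port, 0, bufs, BURST_SIZE);
            nb_tx = rte_eth_tx_burst(phys_port, 0, bufs, nb_rx);
            for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }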

Using this PMD requires the KNI kernel module to be inserted.

Usage
-----

The EAL ``--vdev`` argument can be used to create a KNI device instance,
for example::

    dpdk-testpmd --vdev=net_kni0 --vdev=net_kni1 -- -i

The above command will create ``kni0`` and ``kni1`` Linux network interfaces;
those interfaces can be controlled by standard Linux tools.

When testpmd forwarding starts, any packet sent to the ``kni0`` interface is
forwarded to the ``kni1`` interface, and vice versa.

There is no hard limit on the number of interfaces that can be created.
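
In addition to the EAL command-line option, a KNI device can also be created
at runtime through the generic device hotplug API. A minimal sketch, assuming
the device name ``net_kni0``; error handling is reduced to return codes:

.. code-block:: c

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_dev.h>
    #include <rte_ethdev.h>

    /* Attach a KNI virtual device at runtime; equivalent to passing
     * "--vdev=net_kni0" on the EAL command line. */
    static int
    create_kni_vdev(void)
    {
        uint16_t port_id;

        if (rte_dev_probe("net_kni0") != 0)
            return -1;

        /* Find the ethdev port created for the new virtual device. */
        if (rte_eth_dev_get_port_by_name("net_kni0", &port_id) != 0)
            return -1;

        printf("net_kni0 attached as port %u\n", port_id);
        return 0;
    }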

Default interface configuration
-------------------------------

``librte_kni`` can create Linux network interfaces with different features;
the feature set is controlled by a configuration struct, and the KNI PMD
uses a fixed configuration:

.. code-block:: console

    Interface name: kni#
    force bind kernel thread to a core : NO
    mbuf size: (rte_pktmbuf_data_room_size(pktmbuf_pool) - RTE_PKTMBUF_HEADROOM)
    mtu: (conf.mbuf_size - RTE_ETHER_HDR_LEN)

The KNI control path is not supported with the PMD, since there is no
physical backend device by default.
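
For reference, the fixed configuration above corresponds roughly to the
following initialization of ``struct rte_kni_conf`` from ``rte_kni.h``.
This is a sketch rather than the PMD's actual code; the helper name and its
parameters are illustrative:

.. code-block:: c

    #include <stdio.h>
    #include <string.h>
    #include <rte_kni.h>
    #include <rte_mbuf.h>
    #include <rte_ether.h>

    /* Fill a KNI configuration matching the fixed PMD defaults:
     * interface named "kni<port_id>", no forced core binding, mbuf size
     * derived from the mempool, MTU derived from the mbuf size. */
    static void
    fill_kni_conf(struct rte_kni_conf *conf, unsigned int port_id,
                  struct rte_mempool *pktmbuf_pool)
    {
        memset(conf, 0, sizeof(*conf));
        snprintf(conf->name, RTE_KNI_NAMESIZE, "kni%u", port_id);
        conf->force_bind = 0;  /* do not bind kernel thread to a core */
        conf->mbuf_size = rte_pktmbuf_data_room_size(pktmbuf_pool) -
                          RTE_PKTMBUF_HEADROOM;
        conf->mtu = conf->mbuf_size - RTE_ETHER_HDR_LEN;
    }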

PMD arguments
-------------

``no_request_thread``: by default, the PMD creates a pthread for each KNI
interface to handle Linux network interface control commands, like
``ifconfig kni0 up``. With the ``no_request_thread`` option, the pthread is
not created and control commands are not handled by the PMD.

By default the request thread is enabled, and this argument should not be
used most of the time, unless this PMD is used with a customized DPDK
application that handles requests itself (see the sketch after the usage
example below).

Argument usage::

    dpdk-testpmd --vdev "net_kni0,no_request_thread=1" -- -i
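
When the request thread is disabled, the application itself must service
control-path requests using the ``librte_kni`` API. A minimal sketch,
assuming the Linux interface was created as ``kni0``; in a real application
the call would be interleaved with datapath work rather than run in its own
loop:

.. code-block:: c

    #include <stddef.h>
    #include <rte_kni.h>

    /* With no_request_thread=1 the PMD does not spawn a request thread,
     * so requests such as "ifconfig kni0 up" must be polled manually. */
    static void
    poll_kni_requests(void)
    {
        /* Look up the KNI context by its Linux interface name. */
        struct rte_kni *kni = rte_kni_get("kni0");

        if (kni == NULL)
            return;

        for (;;) {
            /* Handle pending ifup/ifdown, MTU and MAC change requests. */
            rte_kni_handle_request(kni);
        }
    }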

PMD log messages
----------------

If the KNI kernel module (rte_kni.ko) is not inserted, the following error
log is printed::

    "KNI: KNI subsystem has not been initialized. Invoke rte_kni_init() first"

PMD testing
-----------

It is possible to test the PMD quickly using the KNI kernel module loopback
feature:

* Insert the KNI kernel module with loopback support:

  .. code-block:: console

     insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo_skb

* Start testpmd with no physical device but two KNI virtual devices:

  .. code-block:: console

     ./dpdk-testpmd --vdev net_kni0 --vdev net_kni1 -- -i

  .. code-block:: console

     ...
     Configuring Port 0 (socket 0)
     KNI: pci: 00:00:00 c580:b8
     Port 0: 1A:4A:5B:7C:A2:8C
     Configuring Port 1 (socket 0)
     KNI: pci: 00:00:00 600:b9
     Port 1: AE:95:21:07:93:DD
     Checking link statuses...
     Port 0 Link Up - speed 10000 Mbps - full-duplex
     Port 1 Link Up - speed 10000 Mbps - full-duplex
     Done
     testpmd>

* Observe Linux interfaces:

  .. code-block:: console

     $ ifconfig kni0 && ifconfig kni1
     kni0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
           ether ae:8e:79:8e:9b:c8  txqueuelen 1000  (Ethernet)
           RX packets 0  bytes 0 (0.0 B)
           RX errors 0  dropped 0  overruns 0  frame 0
           TX packets 0  bytes 0 (0.0 B)
           TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     kni1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
           ether 9e:76:43:53:3e:9b  txqueuelen 1000  (Ethernet)
           RX packets 0  bytes 0 (0.0 B)
           RX errors 0  dropped 0  overruns 0  frame 0
           TX packets 0  bytes 0 (0.0 B)
           TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

* Start forwarding with tx_first:

  .. code-block:: console

     testpmd> start tx_first

* Quit and check forwarding stats:

  .. code-block:: console

     testpmd> quit
     Telling cores to stop...
     Waiting for lcores to finish...

     ---------------------- Forward statistics for port 0  ----------------------
     RX-packets: 35637905       RX-dropped: 0             RX-total: 35637905
     TX-packets: 35637947       TX-dropped: 0             TX-total: 35637947
     ----------------------------------------------------------------------------

     ---------------------- Forward statistics for port 1  ----------------------
     RX-packets: 35637915       RX-dropped: 0             RX-total: 35637915
     TX-packets: 35637937       TX-dropped: 0             TX-total: 35637937
     ----------------------------------------------------------------------------

     +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
     RX-packets: 71275820       RX-dropped: 0             RX-total: 71275820
     TX-packets: 71275884       TX-dropped: 0             TX-total: 71275884
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++