doc: fix typos

Fix some typos in the i40e and poll_mode_drv sections of the programmer's guide.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Bernard Iremonger 2014-12-18 21:11:17 +00:00 committed by Thomas Monjalon
parent d5af8bafd7
commit 5eb379550f
2 changed files with 8 additions and 10 deletions

doc/guides/prog_guide/i40e_ixgbe_igb_virt_func_drv.rst Normal file → Executable file

@@ -36,7 +36,7 @@ support the following modes of operation in a virtualized environment:
* **SR-IOV mode**: Involves direct assignment of part of the port resources to different guest operating systems
using the PCI-SIG Single Root I/O Virtualization (SR IOV) standard,
also known as "native mode" or"pass-through" mode.
also known as "native mode" or "pass-through" mode.
In this chapter, this mode is referred to as IOV mode.
* **VMDq mode**: Involves central management of the networking resources by an IO Virtual Machine (IOVM) or
@@ -329,7 +329,7 @@ The setup procedure is as follows:
.. code-block:: console
rmmod ixgbe
"modprobe ixgbe max_vfs=2,2"
modprobe ixgbe max_vfs=2,2
When using DPDK PMD PF driver, insert DPDK kernel module igb_uio and set the number of VF by sysfs max_vfs:
@@ -364,18 +364,16 @@ The setup procedure is as follows:
.. code-block:: console
ls -alrt /sys/bus/pci/devices/0000\:02\:00.0/virt*
-lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/ virtfn1 -> ../0000:02:10.2
-lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/ virtfn0 -> ../0000:02:10.0
+lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn1 -> ../0000:02:10.2
+lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn0 -> ../0000:02:10.0
It also creates two vfs for device 0000:02:00.1:
.. code-block:: console
ls -alrt /sys/bus/pci/devices/0000\:02\:00.1/virt*
-lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/
-virtfn1 -> ../0000:02:10.3
-lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/
-virtfn0 -> ../0000:02:10.1
+lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn1 -> ../0000:02:10.3
+lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn0 -> ../0000:02:10.1
#. List the PCI devices connected and notice that the Host OS shows two Physical Functions (traditional ports)
and four Virtual Functions (two for each port).
@@ -505,7 +503,7 @@ the DPDK VF PMD driver performs the same throughput result as a non-VT native en
With such host instance fast packet processing, lots of services such as filtering, QoS,
DPI can be offloaded on the host fast path.
-shows the scenario where some VMs directly communicate externally via a VFs,
+Figure 12 shows the scenario where some VMs directly communicate externally via a VFs,
while others connect to a virtual switch and share the same uplink bandwidth.
.. _pg_figure_12:

doc/guides/prog_guide/poll_mode_drv.rst Normal file → Executable file

@@ -115,7 +115,7 @@ to dynamically adapt its overall behavior through different global loop policies
To achieve optimal performance, overall software design choices and pure software optimization techniques must be considered and
balanced against available low-level hardware-based optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on).
-The case of packet transmission is an example of this software/ hardware tradeoff issue when optimizing burst-oriented network packet processing engines.
+The case of packet transmission is an example of this software/hardware tradeoff issue when optimizing burst-oriented network packet processing engines.
In the initial case, the PMD could export only an rte_eth_tx_one function to transmit one packet at a time on a given queue.
On top of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one function to transmit several packets at a time.
However, an rte_eth_tx_burst function is effectively implemented by the PMD to minimize the driver-level transmit cost per packet through the following optimizations:
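The paragraph in this last hunk contrasts a hypothetical per-packet rte_eth_tx_one call with the burst-oriented rte_eth_tx_burst API. A minimal caller-side sketch of the burst pattern follows; the send_burst helper name is an illustrative assumption, not part of the guide, and the signatures assume current DPDK headers (older releases used a uint8_t port identifier).

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Illustrative helper: hand a whole burst of mbufs to the PMD in one
     * call, amortizing the per-packet driver cost described in the text. */
    static void
    send_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        /* rte_eth_tx_burst() returns how many packets the driver accepted. */
        uint16_t nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

        /* A per-packet API (the hypothetical rte_eth_tx_one) would instead
         * pay the driver-level transmit cost once per packet:
         *     for (i = 0; i < nb_pkts; i++)
         *         rte_eth_tx_one(port_id, queue_id, pkts[i]);
         */

        /* Free whatever the hardware TX ring could not accept. */
        while (nb_tx < nb_pkts)
            rte_pktmbuf_free(pkts[nb_tx++]);
    }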