doc: fix spelling

Spell checked and corrected documentation.
If there are any errors, or if I have changed something that wasn't an error,
please reach out to me so I can update the dictionary.

Cc: stable@dpdk.org

Signed-off-by: Henry Nadeau <hnadeau@iol.unh.edu>
This commit is contained in:
Henry Nadeau 2021-07-29 12:48:05 -04:00 committed by Thomas Monjalon
parent 9a212dc06c
commit 9c30a6f3c9
32 changed files with 43 additions and 43 deletions

@@ -55,7 +55,7 @@ License Header
~~~~~~~~~~~~~~
Each file must begin with a special comment containing the
-`Software Package Data Exchange (SPDX) License Identfier <https://spdx.org/using-spdx-license-identifier>`_.
+`Software Package Data Exchange (SPDX) License Identifier <https://spdx.org/using-spdx-license-identifier>`_.
Generally this is the BSD License, except for code granted special exceptions.
The SPDX licences identifier is sufficient, a file should not contain

@@ -99,7 +99,7 @@ The mlxreg dedicated tool should be used as follows:
The "wrapped_crypto_operational" value will be "0x00000001" if the mode was
successfully changed to operational mode.
-The mlx5 crypto PMD can be verfied by running the test application::
+The mlx5 crypto PMD can be verified by running the test application::
dpdk-test -c 1 -n 1 -w <dev>,class=crypto,wcs_file=<file_path>
RTE>>cryptodev_mlx5_autotest
@@ -130,7 +130,7 @@ Driver options
- ``keytag`` parameter [int]
-The plaintext of the keytag appanded to the AES-XTS keys, default value is 0.
+The plaintext of the keytag appended to the AES-XTS keys, default value is 0.
- ``max_segs_num`` parameter [int]

@@ -118,7 +118,7 @@ operation:
than the designated threshold, otherwise it will be handled by the secondary
worker.
-A typical usecase in this mode is with the QAT cryptodev as the primary and
+A typical use case in this mode is with the QAT cryptodev as the primary and
a software cryptodev as the secondary worker. This may help applications to
process additional crypto workload than what the QAT cryptodev can handle on
its own, by making use of the available CPU cycles to deal with smaller

@@ -26,7 +26,7 @@ Setup overview
PVP setup using 2 NICs
-In this diagram, each red arrow represents one logical core. This use-case
+In this diagram, each red arrow represents one logical core. This use case
requires 6 dedicated logical cores. A forwarding configuration with a single
NIC is also possible, requiring 3 logical cores.

@@ -105,7 +105,7 @@ Jumbo: Limitation
-----------------
Rx descriptor limit for number of segments per MTU is set to 1.
-PMD doesn't support Jumbo Rx scatter gather. Some applciations can
+PMD doesn't support Jumbo Rx scatter gather. Some applications can
adjust mbuf_size based on this param and max_pkt_len.
For others, PMD detects the condition where Rx packet length cannot

@@ -297,7 +297,7 @@ FMC - FMAN Configuration Tool
The details can be found in FMC Doc at:
-`Frame Mnager Configuration Tool <https://www.nxp.com/docs/en/application-note/AN4760.pdf>`_.
+`Frame Manager Configuration Tool <https://www.nxp.com/docs/en/application-note/AN4760.pdf>`_.
FMLIB
~~~~~
@@ -307,7 +307,7 @@ FMLIB
This is an alternate to the FMC based configuration. This library provides
direct ioctl based interfaces for FMAN configuration as used by the FMC tool
-as well. This helps in overcoming the main limitaiton of FMC - i.e. lack
+as well. This helps in overcoming the main limitation of FMC - i.e. lack
of dynamic configuration.
The location for the fmd driver as used by FMLIB and FMC is as follows:
@@ -319,7 +319,7 @@ VSP (Virtual Storage Profile)
The storage profiled are means to provide virtualized interface. A ranges of
storage profiles cab be associated to Ethernet ports.
They are selected during classification. Specify how the frame should be
-written to memory and which buffer pool to select for packet storange in
+written to memory and which buffer pool to select for packet storage in
queues. Start and End margin of buffer can also be configured.
Limitations

@@ -1476,7 +1476,7 @@ the DPDK application.
echo -n "<device pci address" > /sys/bus/pci/drivers/mlx5_core/unbind
-5. Enbale switchdev mode::
+5. Enable switchdev mode::
echo switchdev > /sys/class/net/<net device>/compat/devlink/mode

@@ -153,7 +153,7 @@ Runtime Config Options
-a 0002:02:00.0,max_sqb_count=64
-With the above configuration, each send queue's decscriptor buffer count is
+With the above configuration, each send queue's descriptor buffer count is
limited to a maximum of 64 buffers.
- ``Switch header enable`` (default ``none``)
@@ -242,7 +242,7 @@ configure the following features:
#. Hierarchical scheduling
#. Single rate - Two color, Two rate - Three color shaping
-Both DWRR and Static Priority(SP) hierarchial scheduling is supported.
+Both DWRR and Static Priority(SP) hierarchical scheduling is supported.
Every parent can have atmost 10 SP Children and unlimited DWRR children.

@@ -86,7 +86,7 @@ TXGBE PMD provides the following log types available for control:
- ``pmd.net.txgbe.bp`` (default level is **notice**)
-Extra logging of auto-negtiation process for backplane NICs.
+Extra logging of auto-negotiation process for backplane NICs.
Supply ``--log-level=pmd.net.txgbe.bp:debug`` to view messages.
Runtime Options
@@ -156,7 +156,7 @@ ingress or egress traffic, alter its fate and query related counters according
to any number of user-defined rules.
A flow rule is the combination of attributes with a matching pattern and a list of
-actions. Theorically one rule can match more than one filters, which named for
+actions. Theoretically one rule can match more than one filters, which named for
different patterns and actions. Like ethertype filter defines a rule in pattern:
the first not void item can be ETH, and the next not void item must be END.

@@ -509,7 +509,7 @@ are shown in below table:
Split virtqueue in-order non-mergeable path virtio_recv_pkts_inorder virtio_xmit_pkts_inorder
Split virtqueue vectorized Rx path virtio_recv_pkts_vec virtio_xmit_pkts
Packed virtqueue mergeable path virtio_recv_mergeable_pkts_packed virtio_xmit_pkts_packed
-Packed virtqueue non-meregable path virtio_recv_pkts_packed virtio_xmit_pkts_packed
+Packed virtqueue non-mergeable path virtio_recv_pkts_packed virtio_xmit_pkts_packed
Packed virtqueue in-order mergeable path virtio_recv_mergeable_pkts_packed virtio_xmit_pkts_packed
Packed virtqueue in-order non-mergeable path virtio_recv_pkts_packed virtio_xmit_pkts_packed
Packed virtqueue vectorized Rx path virtio_recv_pkts_packed_vec virtio_xmit_pkts_packed

@@ -358,7 +358,7 @@ RPM example usage:
Packets received with FrameCheckSequenceErrors: 0
Packets received with VLAN header: 0
Error packets: 0
-Packets recievd with unicast DMAC: 0
+Packets received with unicast DMAC: 0
Packets received with multicast DMAC: 0
Packets received with broadcast DMAC: 0
Dropped packets: 0

@@ -78,7 +78,7 @@ compatible board:
based config (if /tmp/fmc.bin is present). DPAA FMD will be used only if no
previous fmc config is existing.
-Note that fmlib based integratin rely on underlying fmd driver in kernel,
+Note that fmlib based integration rely on underlying fmd driver in kernel,
which is available as part of NXP kernel or NXP SDK.
The following dependencies are not part of DPDK and must be installed

@@ -639,7 +639,7 @@ optionally the ``soft_output`` mbuf data pointers.
"soft output","soft LLR output buffer (optional)"
"op_flags","bitmask of all active operation capabilities"
"rv_index","redundancy version index [0..3]"
-"iter_max","maximum number of iterations to perofrm in decode all CBs"
+"iter_max","maximum number of iterations to perform in decode all CBs"
"iter_min","minimum number of iterations to perform in decoding all CBs"
"iter_count","number of iterations to performed in decoding all CBs"
"ext_scale","scale factor on extrinsic info (5 bits)"

@@ -465,7 +465,7 @@ devices would fail anyway.
- By default, the mempool, first asks for IOVA-contiguous memory using
``RTE_MEMZONE_IOVA_CONTIG``. This is slow in RTE_IOVA_PA mode and it may
affect the application boot time.
-- It is easy to enable large amount of IOVA-contiguous memory use-cases
+- It is easy to enable large amount of IOVA-contiguous memory use cases
with IOVA in VA mode.
It is expected that all PCI drivers work in both RTE_IOVA_PA and

@@ -152,7 +152,7 @@ Ports
~~~~~
Ports are the points of contact between worker cores and the eventdev. The
-general use-case will see one CPU core using one port to enqueue and dequeue
+general use case will see one CPU core using one port to enqueue and dequeue
events from an eventdev. Ports are linked to queues in order to retrieve events
from those queues (more details in `Linking Queues and Ports`_ below).

@@ -325,7 +325,7 @@ supported. However, since sending messages (not requests) does not involve an
IPC thread, sending messages while processing another message or request is
supported.
-Since the memory sybsystem uses IPC internally, memory allocations and IPC must
+Since the memory subsystem uses IPC internally, memory allocations and IPC must
not be mixed: it is not safe to use IPC inside a memory-related callback, nor is
it safe to allocate/free memory inside IPC callbacks. Attempting to do so may
lead to a deadlock.

@@ -737,7 +737,7 @@ Strict priority scheduling of traffic classes within the same pipe is implemente
which selects the queues in ascending order.
Therefore, queue 0 (associated with TC 0, highest priority TC) is handled before
queue 1 (TC 1, lower priority than TC 0),
-which is handled before queue 2 (TC 2, lower priority than TC 1) and it conitnues until queues of all TCs except the
+which is handled before queue 2 (TC 2, lower priority than TC 1) and it continues until queues of all TCs except the
lowest priority TC are handled. At last, queues 12..15 (best effort TC, lowest priority TC) are handled.
Upper Limit Enforcement

@@ -124,7 +124,7 @@ The configuration mode is depended on the PMD capabilities.
Online rule configuration is done using the following API functions:
``rte_regexdev_rule_db_update`` which add / remove rules from the rules
-precomplied list, and ``rte_regexdev_rule_db_compile_activate``
+precompiled list, and ``rte_regexdev_rule_db_compile_activate``
which compile the rules and loads them to the RegEx HW.
Offline rule configuration can be done by adding a pointer to the compiled

@@ -41,7 +41,7 @@ but it expects ``rss_key`` to be converted to the host byte order.
Predictable RSS
---------------
-In some usecases it is useful to have a way to find partial collisions of the
+In some use cases it is useful to have a way to find partial collisions of the
Toeplitz hash function. In figure :numref:`figure_rss_queue_assign` only a few
of the least significant bits (LSB) of the hash value are used to indicate an
entry in the RSS Redirection Table (ReTa) and thus the index of the queue. So,
@@ -178,10 +178,10 @@ It expects:
tuple, if the callback function returns an error.
-Usecase example
----------------
+Use case example
+----------------
-There could be a number of different usecases, such as NAT, TCP stack, MPLS
+There could be a number of different use cases, such as NAT, TCP stack, MPLS
tag allocation, etc. In the following we will consider a SNAT application.
Packets of a single bidirectional flow belonging to different directions can

@@ -65,7 +65,7 @@ To assign an engine to a group::
$ accel-config config-engine dsa0/engine0.1 --group-id=1
To assign work queues to groups for passing descriptors to the engines a similar accel-config command can be used.
-However, the work queues also need to be configured depending on the use-case.
+However, the work queues also need to be configured depending on the use case.
Some configuration options include:
* mode (Dedicated/Shared): Indicates whether a WQ may accept jobs from multiple queues simultaneously.

@@ -17,7 +17,7 @@ some information by using scratchpad registers.
BIOS setting on Intel Xeon
--------------------------
-Intel Non-transparent Bridge needs special BIOS setting. The referencce for
+Intel Non-transparent Bridge needs special BIOS setting. The reference for
Skylake is https://www.intel.com/content/dam/support/us/en/documents/server-products/Intel_Xeon_Processor_Scalable_Family_BIOS_User_Guide.pdf
- Set the needed PCIe port as NTB to NTB mode on both hosts.

@@ -16,7 +16,7 @@ PCRE atomic grouping
Support PCRE atomic grouping.
PCRE back reference
-Support PCRE back regerence.
+Support PCRE back reference.
PCRE back tracking ctrl
Support PCRE back tracking ctrl.

@@ -77,7 +77,7 @@ New Features
the current version, even 64 bytes packets take two slots with Virtio PMD on guest
side.
-The main impact is better performance for 0% packet loss use-cases, as it
+The main impact is better performance for 0% packet loss use cases, as it
behaves as if the virtqueue size was enlarged, so more packets can be buffered
in the case of system perturbations. On the downside, small performance degradations
were measured when running micro-benchmarks.

@@ -151,7 +151,7 @@ New Features
* Added multi-queue support to allow one af_xdp vdev with multiple netdev
queues.
* Enabled "need_wakeup" feature which can provide efficient support for the
-usecase where the application and driver executing on the same core.
+use case where the application and driver executing on the same core.
* **Enabled infinite Rx in the PCAP PMD.**

@@ -152,7 +152,7 @@ New Features
Added Baseband PHY PMD which allows to configure BPHY hardware block
comprising accelerators and DSPs specifically tailored for 5G/LTE inline
-usecases. Configuration happens via standard rawdev enq/deq operations. See
+use cases. Configuration happens via standard rawdev enq/deq operations. See
the :doc:`../rawdevs/cnxk_bphy` rawdev guide for more details on this driver.
* **Added support for Marvell CN10K, CN9K, event Rx/Tx adapter.**

@@ -322,7 +322,7 @@ Drivers
Several customers have reported a link flap issue on 82579. The symptoms
are random and intermittent link losses when 82579 is connected to specific
-switches. the Issue was root caused as an inter-operability problem between
+switches. the Issue was root caused as an interoperability problem between
the NIC and at least some Broadcom PHYs in the Energy Efficient Ethernet
wake mechanism.

@@ -113,7 +113,7 @@ where,
* mbuf-dataroom: By default the application creates mbuf pool with maximum
possible data room (65535 bytes). If the user wants to test scatter-gather
list feature of the PMD he or she may set this value to reduce the dataroom
-size so that the input data may be dividied into multiple chained mbufs.
+size so that the input data may be divided into multiple chained mbufs.
To run the application in linux environment to test one AES FIPS test data

@@ -93,7 +93,7 @@ Additionally the event mode introduces two submodes of processing packets:
protocol use case, the worker thread resembles l2fwd worker thread as the IPsec
processing is done entirely in HW. This mode can be used to benchmark the raw
performance of the HW. The driver submode is selected with --single-sa option
-(used also by poll mode). When --single-sa option is used in conjution with event
+(used also by poll mode). When --single-sa option is used in conjunction with event
mode then index passed to --single-sa is ignored.
* App submode: This submode has all the features currently implemented with the

@@ -1176,7 +1176,7 @@ Tracing of events can be individually masked, and the mask may be programmed
at run time. An unmasked event results in a callback that provides information
about the event. The default callback simply prints trace information. The
default mask is 0 (all events off) the mask can be modified by calling the
-function ``lthread_diagniostic_set_mask()``.
+function ``lthread_diagnostic_set_mask()``.
It is possible register a user callback function to implement more
sophisticated diagnostic functions.

@@ -112,7 +112,7 @@ The command line options are:
Set the data size of the mbufs used to N bytes, where N < 65536.
The default value is 2048. If multiple mbuf-size values are specified the
extra memory pools will be created for allocating mbufs to receive packets
-with buffer splittling features.
+with buffer splitting features.
* ``--total-num-mbufs=N``

@@ -1742,7 +1742,7 @@ List all items from the ptype mapping table::
Where:
-* ``valid_only``: A flag indicates if only list valid items(=1) or all itemss(=0).
+* ``valid_only``: A flag indicates if only list valid items(=1) or all items(=0).
Replace a specific or a group of software defined ptype with a new one::
@@ -4842,7 +4842,7 @@ Sample Raw encapsulation rule
Raw encapsulation configuration can be set by the following commands
-Eecapsulating VxLAN::
+Encapsulating VxLAN::
testpmd> set raw_encap 4 eth src is 10:11:22:33:44:55 / vlan tci is 1
inner_type is 0x0800 / ipv4 / udp dst is 4789 / vxlan vni

@@ -62,7 +62,7 @@ Options
.. warning::
-While any user can run the ``dpdk-hugpages.py`` script to view the
+While any user can run the ``dpdk-hugepages.py`` script to view the
status of huge pages, modifying the setup requires root privileges.
@@ -71,8 +71,8 @@ Examples
To display current huge page settings::
-dpdk-hugpages.py -s
+dpdk-hugepages.py -s
To do a complete setup with 2 Gigabytes of 1G huge pages::
-dpdk-hugpages.py -p 1G --setup 2G
+dpdk-hugepages.py -p 1G --setup 2G