doc: fix references in guides

Replace some hard-coded section numbers by dynamic links.

Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Thomas Monjalon 2016-04-11 23:21:35 +02:00
parent edbeb7d962
commit 29e30cbcc1
12 changed files with 21 additions and 12 deletions


@@ -140,7 +140,7 @@ Host2VM communication example
 For each physical port, kni also creates a kernel thread that retrieves packets from the kni receive queue,
 place them onto kni's raw socket's queue and wake up the vhost kernel thread to exchange packets with the virtio virt queue.
-For more details about kni, please refer to Chapter 24 "Kernel NIC Interface".
+For more details about kni, please refer to :ref:`kni`.
 #. Enable the kni raw socket functionality for the specified physical NIC port,
 get the generated file descriptor and set it in the qemu command line parameter.


@@ -54,7 +54,7 @@ Finally 'direct' and 'indirect' mbufs for each fragment are linked together via
 The caller has an ability to explicitly specify which mempools should be used to allocate 'direct' and 'indirect' mbufs from.
-For more information about direct and indirect mbufs, refer to the *DPDK Programmers guide 7.7 Direct and Indirect Buffers.*
+For more information about direct and indirect mbufs, refer to :ref:`direct_indirect_buffer`.
 Packet reassembly
 -----------------
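The per-call choice of 'direct' and 'indirect' mempools mentioned in the excerpt above is made when fragmenting a packet. A minimal sketch, assuming the long-standing rte_ipv4_fragment_packet() prototype from rte_ip_frag.h and two pools created elsewhere:

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_ip_frag.h>

    /*
     * Minimal sketch: split one packet into MTU-sized fragments.
     * direct_pool supplies the fragment headers (direct mbufs) and
     * indirect_pool the descriptors that point back into the original
     * packet's data (indirect mbufs).
     */
    static int
    fragment_one(struct rte_mbuf *pkt,
                 struct rte_mbuf **frags, uint16_t max_frags, uint16_t mtu,
                 struct rte_mempool *direct_pool,
                 struct rte_mempool *indirect_pool)
    {
        int32_t n = rte_ipv4_fragment_packet(pkt, frags, max_frags, mtu,
                                             direct_pool, indirect_pool);

        return n < 0 ? -1 : (int)n;  /* negative return signals an error */
    }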


@@ -28,6 +28,8 @@
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+.. _kni:
 Kernel NIC Interface
 ====================


@@ -75,7 +75,7 @@ The main methods exported for the LPM component are:
 Implementation Details
 ~~~~~~~~~~~~~~~~~~~~~~
-This is a modification of the algorithm used for IPv4 (see Section 19.2 "Implementation Details").
+This is a modification of the algorithm used for IPv4 (see :ref:`lpm4_details`).
 In this case, instead of using two levels, one with a tbl24 and a second with a tbl8, 14 levels are used.
 The implementation can be seen as a multi-bit trie where the *stride*


@@ -62,6 +62,8 @@ The main methods exported by the LPM component are:
 the algorithm picks the rule with the highest depth as the best match rule,
 which means that the rule has the highest number of most significant bits matching between the input key and the rule key.
+.. _lpm4_details:
 Implementation Details
 ----------------------


@@ -235,6 +235,8 @@ The list of flags and their precise meaning is described in the mbuf API
 documentation (rte_mbuf.h). Also refer to the testpmd source code
 (specifically the csumonly.c file) for details.
+.. _direct_indirect_buffer:
 Direct and Indirect Buffers
 ---------------------------
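For a concrete picture of the relationship the new label points to, here is a minimal sketch, assuming an already-created mempool reserved for the indirect descriptors:

.. code-block:: c

    #include <rte_mbuf.h>

    /*
     * Minimal sketch: make an indirect mbuf that references the data of a
     * direct one. rte_pktmbuf_attach() raises the reference count of the
     * underlying buffer, so both mbufs must eventually be freed.
     */
    static struct rte_mbuf *
    make_indirect(struct rte_mempool *indirect_pool, struct rte_mbuf *direct)
    {
        struct rte_mbuf *mi = rte_pktmbuf_alloc(indirect_pool);

        if (mi == NULL)
            return NULL;

        rte_pktmbuf_attach(mi, direct);
        return mi;
    }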


@@ -98,6 +98,8 @@ no padding is required between objects (except for objects whose size are n x 3
 When creating a new pool, the user can specify to use this feature or not.
+.. _mempool_local_cache:
 Local Cache
 -----------
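The local cache the new label refers to is sized when the pool is created. A minimal sketch, assuming the classic rte_mempool_create() prototype (element count and sizes are illustrative only):

.. code-block:: c

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    static struct rte_mempool *
    create_cached_pool(void)
    {
        /* A non-zero cache_size argument enables the per-lcore cache. */
        return rte_mempool_create("example_pool",
                                  8191,        /* number of elements */
                                  256,         /* element size in bytes */
                                  256,         /* per-lcore cache size */
                                  0,           /* private data size */
                                  NULL, NULL,  /* pool constructor + arg */
                                  NULL, NULL,  /* object constructor + arg */
                                  rte_socket_id(),
                                  0);          /* flags */
    }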


@@ -80,7 +80,7 @@ and point to the same objects, in both processes.
 .. note::
-Refer to Section 23.3 "Multi-process Limitations" for details of
+Refer to `Multi-process Limitations`_ for details of
 how Linux kernel Address-Space Layout Randomization (ASLR) can affect memory sharing.
 .. _figure_multi_process_memory:
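As an illustration of the shared addressing discussed above, a minimal sketch in which the primary process reserves a named memzone and a secondary process looks up the same zone (the zone name is hypothetical):

.. code-block:: c

    #include <rte_eal.h>
    #include <rte_memzone.h>
    #include <rte_lcore.h>

    /*
     * Minimal sketch: both processes end up with pointers into the same
     * hugepage memory, subject to the ASLR caveat noted above.
     */
    static const struct rte_memzone *
    get_shared_zone(void)
    {
        if (rte_eal_process_type() == RTE_PROC_PRIMARY)
            return rte_memzone_reserve("example_zone", 4096,
                                       rte_socket_id(), 0);

        return rte_memzone_lookup("example_zone");
    }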


@@ -147,7 +147,7 @@ these packets are later on removed and handed over to the NIC TX with the packet
 The hierarchical scheduler is optimized for a large number of packet queues.
 When only a small number of queues are needed, message passing queues should be used instead of this block.
-See Section 26.2.5 "Worst Case Scenarios for Performance" for a more detailed discussion.
+See `Worst Case Scenarios for Performance`_ for a more detailed discussion.
 Scheduling Hierarchy
 ~~~~~~~~~~~~~~~~~~~~
@@ -712,7 +712,7 @@ where, r = port line rate (in bytes per second).
 | | | of the grinders), update the credits for the pipe and its subport. |
 | | | |
 | | | The current implementation is using option 3. According to Section |
-| | | 26.2.4.4 "Dequeue State Machine", the pipe and subport credits are |
+| | | `Dequeue State Machine`_, the pipe and subport credits are |
 | | | updated every time a pipe is selected by the dequeue process before the |
 | | | pipe and subport credits are actually used. |
 | | | |
@@ -783,7 +783,7 @@ as described in :numref:`table_qos_10` and :numref:`table_qos_11`.
 | 1 | tc_time | Bytes | Time of the next update (upper limit refill) for the 4 TCs of the |
 | | | | current subport / pipe. |
 | | | | |
-| | | | See Section 26.2.4.5.1, "Internal Time Reference" for the |
+| | | | See Section `Internal Time Reference`_ for the |
 | | | | explanation of why the time is maintained in byte units. |
 | | | | |
 +---+-----------------------+-------+-----------------------------------------------------------------------+
@@ -1334,7 +1334,7 @@ Where:
 The time reference is in units of bytes,
 where a byte signifies the time duration required by the physical interface to send out a byte on the transmission medium
-(see Section 26.2.4.5.1 "Internal Time Reference").
+(see Section `Internal Time Reference`_).
 The parameter s is defined in the dropper module as a constant with the value: s=2^22.
 This corresponds to the time required by every leaf node in a hierarchy with 64K leaf nodes
 to transmit one 64-byte packet onto the wire and represents the worst case scenario.
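The constant quoted above follows directly from the stated hierarchy dimensions, since 64K leaf nodes each transmitting one 64-byte packet amount to

    s = 65536 * 64 = 2^16 * 2^6 = 2^22 byte-times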


@@ -113,7 +113,7 @@ it is advised to use the DPDK ring API, which provides a lockless ring implement
 The ring supports bulk and burst access,
 meaning that it is possible to read several elements from the ring with only one costly atomic operation
-(see Chapter 5 "Ring Library").
+(see :doc:`ring_lib`).
 Performance is greatly improved when using bulk access operations.
 The code algorithm that dequeues messages may be something similar to the following:
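A minimal sketch of such a dequeue loop, assuming a hypothetical handle_messages() consumer and the three-argument rte_ring_dequeue_burst() form used by releases of that era (later releases add an extra "available" pointer argument):

.. code-block:: c

    #include <rte_ring.h>
    #include <rte_branch_prediction.h>

    #define MAX_BULK 32

    /* Hypothetical application handler for a batch of dequeued messages. */
    static void
    handle_messages(void **objs, unsigned int n)
    {
        (void)objs;
        (void)n;    /* application-specific processing goes here */
    }

    static void
    drain_ring(struct rte_ring *r)
    {
        void *objs[MAX_BULK];
        unsigned int count;

        for (;;) {
            /* One atomic operation fetches up to MAX_BULK messages. */
            count = rte_ring_dequeue_burst(r, objs, MAX_BULK);
            if (unlikely(count == 0))
                continue;   /* a real loop would back off or sleep here */
            handle_messages(objs, count);
        }
    }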


@@ -154,7 +154,8 @@ Command Line Arguments
 ~~~~~~~~~~~~~~~~~~~~~~
 The L2 Forwarding sample application takes specific parameters,
-in addition to Environment Abstraction Layer (EAL) arguments (see Section 9.3).
+in addition to Environment Abstraction Layer (EAL) arguments
+(see `Running the Application`_).
 The preferred way to parse parameters is to use the getopt() function,
 since it is part of a well-defined and portable library.
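A minimal sketch of such getopt()-based parsing, using illustrative option letters rather than the exact l2fwd set, applied to whatever arguments remain after rte_eal_init() has consumed the EAL ones:

.. code-block:: c

    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative application options: -p <portmask> and -q <queues>. */
    static int
    parse_app_args(int argc, char **argv)
    {
        int opt;

        while ((opt = getopt(argc, argv, "p:q:")) != -1) {
            switch (opt) {
            case 'p':
                printf("port mask: %s\n", optarg);
                break;
            case 'q':
                printf("queues per lcore: %s\n", optarg);
                break;
            default:
                fprintf(stderr, "unknown option\n");
                return -1;
            }
        }
        return 0;
    }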
@@ -344,7 +345,7 @@ The list of queues that must be polled for a given lcore is stored in a private
 Values of struct lcore_queue_conf:
 * n_rx_port and rx_port_list[] are used in the main packet processing loop
-(see Section 9.4.6 "Receive, Process and Transmit Packets" later in this chapter).
+(see Section `Receive, Process and Transmit Packets`_ later in this chapter).
 * rx_timers and flush_timer are used to ensure forced TX on low packet rate.


@@ -495,7 +495,7 @@ For threads/processes not created in that way, either pinned to a core or not, t
 rte_lcore_id() function will not work in the correct way.
 However, sometimes these threads/processes still need the unique ID mechanism to do easy access on structures or resources.
 For example, the DPDK mempool library provides a local cache mechanism
-(refer to *DPDK Programmer's Guide* , Section 6.4, "Local Cache")
+(refer to :ref:`mempool_local_cache`)
 for fast element allocation and freeing.
 If using a non-unique ID or a fake one,
 a race condition occurs if two or more threads/ processes with the same core ID try to use the local cache.