doc: convert image extensions to wildcard

Changed all image.svg and image.png extensions to image.* so that
Sphinx can select the appropriate image format from the available
options.

In the case of PDF output, SVG images are converted and Sphinx must
pick the converted version.
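
For example, a minimal sketch (|example_fig| is a placeholder
substitution name, not one of the files touched by this commit):

    .. example only, not part of this patch
    .. |example_fig| image:: img/example_fig.svg

becomes:

    .. |example_fig| image:: img/example_fig.*

so the HTML build can keep using the SVG while the PDF build picks up
the converted copy of the same figure.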

Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>

Authored by John McNamara on 2015-02-03 14:11:18 +00:00; committed by Thomas Monjalon
parent 1e7055ac8a
commit ba9e05cb6b
34 changed files with 106 additions and 106 deletions

@@ -369,4 +369,4 @@ We expect only 50% of CPU spend on packet IO.
 echo 50000 > pkt_io/cpu.cfs_quota_us
-.. |linuxapp_launch| image:: img/linuxapp_launch.svg
+.. |linuxapp_launch| image:: img/linuxapp_launch.*

@@ -574,10 +574,10 @@ which belongs to the destination VF on the VM.
 |inter_vm_comms|
-.. |perf_benchmark| image:: img/perf_benchmark.png
+.. |perf_benchmark| image:: img/perf_benchmark.*
-.. |single_port_nic| image:: img/single_port_nic.png
+.. |single_port_nic| image:: img/single_port_nic.*
-.. |inter_vm_comms| image:: img/inter_vm_comms.png
+.. |inter_vm_comms| image:: img/inter_vm_comms.*
-.. |fast_pkt_proc| image:: img/fast_pkt_proc.png
+.. |fast_pkt_proc| image:: img/fast_pkt_proc.*

@@ -457,8 +457,8 @@ The packet flow is:
 packet generator->Virtio in guest VM1->switching backend->Virtio in guest VM2->switching backend->wire
-.. |grant_table| image:: img/grant_table.png
+.. |grant_table| image:: img/grant_table.*
-.. |grant_refs| image:: img/grant_refs.png
+.. |grant_refs| image:: img/grant_refs.*
-.. |dpdk_xen_pkt_switch| image:: img/dpdk_xen_pkt_switch.png
+.. |dpdk_xen_pkt_switch| image:: img/dpdk_xen_pkt_switch.*

@@ -155,4 +155,4 @@ As a result, if the user wishes to shut down or restart the IVSHMEM host applica
 it is not enough to simply shut the application down.
 The virtual machine must also be shut down (if not, it will hold onto outdated host data).
-.. |ivshmem| image:: img/ivshmem.png
+.. |ivshmem| image:: img/ivshmem.*

@@ -281,10 +281,10 @@ Even if the option is turned on, kni-vhost will ignore the information that the
 When working with legacy virtio on the guest, it is better to turn off unsupported offload features using ethtool -K.
 Otherwise, there may be problems such as an incorrect L4 checksum error.
-.. |kni_traffic_flow| image:: img/kni_traffic_flow.png
+.. |kni_traffic_flow| image:: img/kni_traffic_flow.*
-.. |vhost_net_arch| image:: img/vhost_net_arch.png
+.. |vhost_net_arch| image:: img/vhost_net_arch.*
-.. |pkt_flow_kni| image:: img/pkt_flow_kni.png
+.. |pkt_flow_kni| image:: img/pkt_flow_kni.*
-.. |kernel_nic_intf| image:: img/kernel_nic_intf.png
+.. |kernel_nic_intf| image:: img/kernel_nic_intf.*

@@ -268,4 +268,4 @@ while the rte_ring specific functions are direct function calls in the code and
 then calling rte_eth_rx_queue_setup() / tx_queue_setup() for each of those queues and
 finally calling rte_eth_dev_start() to allow transmission and reception of packets to begin.
-.. |forward_stats| image:: img/forward_stats.png
+.. |forward_stats| image:: img/forward_stats.*

@@ -434,10 +434,10 @@ Create a bonded device in balance mode with two slaves specified by their PCI ad
 $RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=2, slave=0000:00a:00.01,slave=0000:004:00.00,xmit_policy=l34' -- --port-topology=chained
-.. |bond-overview| image:: img/bond-overview.svg
-.. |bond-mode-0| image:: img/bond-mode-0.svg
-.. |bond-mode-1| image:: img/bond-mode-1.svg
-.. |bond-mode-2| image:: img/bond-mode-2.svg
-.. |bond-mode-3| image:: img/bond-mode-3.svg
-.. |bond-mode-4| image:: img/bond-mode-4.svg
-.. |bond-mode-5| image:: img/bond-mode-5.svg
+.. |bond-overview| image:: img/bond-overview.*
+.. |bond-mode-0| image:: img/bond-mode-0.*
+.. |bond-mode-1| image:: img/bond-mode-1.*
+.. |bond-mode-2| image:: img/bond-mode-2.*
+.. |bond-mode-3| image:: img/bond-mode-3.*
+.. |bond-mode-4| image:: img/bond-mode-4.*
+.. |bond-mode-5| image:: img/bond-mode-5.*

@@ -232,4 +232,4 @@ Use Case: IPv6 Forwarding
 The LPM algorithm is used to implement the Classless Inter-Domain Routing (CIDR) strategy used by routers implementing IP forwarding.
-.. |tbl24_tbl8_tbl8| image:: img/tbl24_tbl8_tbl8.png
+.. |tbl24_tbl8_tbl8| image:: img/tbl24_tbl8_tbl8.*

@@ -220,4 +220,4 @@ References
 * Pankaj Gupta, Algorithms for Routing Lookups and Packet Classification, PhD Thesis, Stanford University,
 2000 (`http://klamath.stanford.edu/~pankaj/thesis/ thesis_1sided.pdf <http://klamath.stanford.edu/~pankaj/thesis/%20thesis_1sided.pdf>`_ )
-.. |tbl24_tbl8| image:: img/tbl24_tbl8.png
+.. |tbl24_tbl8| image:: img/tbl24_tbl8.*

@@ -233,4 +233,4 @@ and if so, they are merged with the current elements.
 This means that we can never have two free memory blocks adjacent to one another,
 they are always merged into a single block.
-.. |malloc_heap| image:: img/malloc_heap.png
+.. |malloc_heap| image:: img/malloc_heap.*

@@ -189,6 +189,6 @@ Use Cases
 All networking application should use mbufs to transport network packets.
-.. |mbuf1| image:: img/mbuf1.svg
+.. |mbuf1| image:: img/mbuf1.*
-.. |mbuf2| image:: img/mbuf2.svg
+.. |mbuf2| image:: img/mbuf2.*

@@ -141,8 +141,8 @@ Below are some examples:
 * Any application that needs to allocate fixed-sized objects in the data plane and that will be continuously utilized by the system.
-.. |memory-management| image:: img/memory-management.svg
+.. |memory-management| image:: img/memory-management.*
-.. |memory-management2| image:: img/memory-management2.svg
+.. |memory-management2| image:: img/memory-management2.*
-.. |mempool| image:: img/mempool.svg
+.. |mempool| image:: img/mempool.*

@@ -200,4 +200,4 @@ instead of the functions which do the hashing internally, such as rte_hash_add()
 If the number of required DPDK processes exceeds that of the number of available HPET comparators,
 the TSC (which is the default timer in this release) must be used as a time source across all processes instead of the HPET.
-.. |multi_process_memory| image:: img/multi_process_memory.svg
+.. |multi_process_memory| image:: img/multi_process_memory.*

@@ -204,4 +204,4 @@ The librte_net library is a collection of IP protocol definitions and convenienc
 It is based on code from the FreeBSD* IP stack and contains protocol numbers (for use in IP headers),
 IP-related macros, IPv4/IPv6 header structures and TCP, UDP and SCTP header structures.
-.. |architecture-overview| image:: img/architecture-overview.svg
+.. |architecture-overview| image:: img/architecture-overview.*

@@ -111,6 +111,6 @@ i.e. to save power at times of lighter load,
 it is possible to have a worker stop processing packets by calling "rte_distributor_return_pkt()" to indicate that
 it has finished the current packet and does not want a new one.
-.. |packet_distributor1| image:: img/packet_distributor1.png
+.. |packet_distributor1| image:: img/packet_distributor1.*
-.. |packet_distributor2| image:: img/packet_distributor2.png
+.. |packet_distributor2| image:: img/packet_distributor2.*

@@ -1168,16 +1168,16 @@ with all the implementations sharing the same API: pure SW implementation (no ac
 The selection between these implementations could be done at build time or at run-time (recommended), based on which accelerators are present in the system,
 with no application changes required.
-.. |figure33| image:: img/figure33.png
+.. |figure33| image:: img/figure33.*
-.. |figure35| image:: img/figure35.png
+.. |figure35| image:: img/figure35.*
-.. |figure39| image:: img/figure39.png
+.. |figure39| image:: img/figure39.*
-.. |figure34| image:: img/figure34.png
+.. |figure34| image:: img/figure34.*
-.. |figure32| image:: img/figure32.png
+.. |figure32| image:: img/figure32.*
-.. |figure37| image:: img/figure37.png
+.. |figure37| image:: img/figure37.*
-.. |figure38| image:: img/figure38.png
+.. |figure38| image:: img/figure38.*

@@ -214,8 +214,8 @@ The packet transmission flow is:
 IXIA packet generator-> Guest VM 82599 VF port1 rx burst-> Guest VM virtio port 0 tx burst-> tap -> Linux Bridge->82599 PF-> IXIA packet generator
-.. |host_vm_comms| image:: img/host_vm_comms.png
+.. |host_vm_comms| image:: img/host_vm_comms.*
-.. |console| image:: img/console.png
+.. |console| image:: img/console.*
-.. |host_vm_comms_qemu| image:: img/host_vm_comms_qemu.png
+.. |host_vm_comms_qemu| image:: img/host_vm_comms_qemu.*

@@ -177,8 +177,8 @@ In this example, the packet flow path is:
 Packet generator -> 82599 VF -> Guest VM 82599 port 0 rx burst -> Guest VM VMXNET3 port 1 tx burst -> VMXNET3
 device -> VMware ESXi vSwitch -> VMXNET3 device -> Guest VM VMXNET3 port 0 rx burst -> Guest VM 82599 VF port 1 tx burst -> 82599 VF -> Packet generator
-.. |vm_vm_comms| image:: img/vm_vm_comms.png
+.. |vm_vm_comms| image:: img/vm_vm_comms.*
-.. |vmxnet3_int| image:: img/vmxnet3_int.png
+.. |vmxnet3_int| image:: img/vmxnet3_int.*
-.. |vswitch_vm| image:: img/vswitch_vm.png
+.. |vswitch_vm| image:: img/vswitch_vm.*

@@ -1728,38 +1728,38 @@ For each input packet, the steps for the srTCM / trTCM algorithms are:
 When the output color is not red, a number of tokens equal to the length of the IP packet are
 subtracted from the C or E /P or both buckets, depending on the algorithm and the output color of the packet.
-.. |flow_tru_droppper| image:: img/flow_tru_droppper.png
+.. |flow_tru_droppper| image:: img/flow_tru_droppper.*
-.. |drop_probability_graph| image:: img/drop_probability_graph.png
+.. |drop_probability_graph| image:: img/drop_probability_graph.*
-.. |drop_probability_eq3| image:: img/drop_probability_eq3.png
+.. |drop_probability_eq3| image:: img/drop_probability_eq3.*
-.. |eq2_expression| image:: img/eq2_expression.png
+.. |eq2_expression| image:: img/eq2_expression.*
-.. |drop_probability_eq4| image:: img/drop_probability_eq4.png
+.. |drop_probability_eq4| image:: img/drop_probability_eq4.*
-.. |pkt_drop_probability| image:: img/pkt_drop_probability.png
+.. |pkt_drop_probability| image:: img/pkt_drop_probability.*
-.. |pkt_proc_pipeline_qos| image:: img/pkt_proc_pipeline_qos.png
+.. |pkt_proc_pipeline_qos| image:: img/pkt_proc_pipeline_qos.*
-.. |ex_data_flow_tru_dropper| image:: img/ex_data_flow_tru_dropper.png
+.. |ex_data_flow_tru_dropper| image:: img/ex_data_flow_tru_dropper.*
-.. |ewma_filter_eq_1| image:: img/ewma_filter_eq_1.png
+.. |ewma_filter_eq_1| image:: img/ewma_filter_eq_1.*
-.. |ewma_filter_eq_2| image:: img/ewma_filter_eq_2.png
+.. |ewma_filter_eq_2| image:: img/ewma_filter_eq_2.*
-.. |data_struct_per_port| image:: img/data_struct_per_port.png
+.. |data_struct_per_port| image:: img/data_struct_per_port.*
-.. |prefetch_pipeline| image:: img/prefetch_pipeline.png
+.. |prefetch_pipeline| image:: img/prefetch_pipeline.*
-.. |pipe_prefetch_sm| image:: img/pipe_prefetch_sm.png
+.. |pipe_prefetch_sm| image:: img/pipe_prefetch_sm.*
-.. |blk_diag_dropper| image:: img/blk_diag_dropper.png
+.. |blk_diag_dropper| image:: img/blk_diag_dropper.*
-.. |m_definition| image:: img/m_definition.png
+.. |m_definition| image:: img/m_definition.*
-.. |eq2_factor| image:: img/eq2_factor.png
+.. |eq2_factor| image:: img/eq2_factor.*
-.. |sched_hier_per_port| image:: img/sched_hier_per_port.png
+.. |sched_hier_per_port| image:: img/sched_hier_per_port.*
-.. |hier_sched_blk| image:: img/hier_sched_blk.png
+.. |hier_sched_blk| image:: img/hier_sched_blk.*

@@ -347,30 +347,30 @@ References
 * `Linux Lockless Ring Buffer Design <http://lwn.net/Articles/340400/>`_
-.. |ring1| image:: img/ring1.svg
+.. |ring1| image:: img/ring1.*
-.. |ring-enqueue1| image:: img/ring-enqueue1.svg
+.. |ring-enqueue1| image:: img/ring-enqueue1.*
-.. |ring-enqueue2| image:: img/ring-enqueue2.svg
+.. |ring-enqueue2| image:: img/ring-enqueue2.*
-.. |ring-enqueue3| image:: img/ring-enqueue3.svg
+.. |ring-enqueue3| image:: img/ring-enqueue3.*
-.. |ring-dequeue1| image:: img/ring-dequeue1.svg
+.. |ring-dequeue1| image:: img/ring-dequeue1.*
-.. |ring-dequeue2| image:: img/ring-dequeue2.svg
+.. |ring-dequeue2| image:: img/ring-dequeue2.*
-.. |ring-dequeue3| image:: img/ring-dequeue3.svg
+.. |ring-dequeue3| image:: img/ring-dequeue3.*
-.. |ring-mp-enqueue1| image:: img/ring-mp-enqueue1.svg
+.. |ring-mp-enqueue1| image:: img/ring-mp-enqueue1.*
-.. |ring-mp-enqueue2| image:: img/ring-mp-enqueue2.svg
+.. |ring-mp-enqueue2| image:: img/ring-mp-enqueue2.*
-.. |ring-mp-enqueue3| image:: img/ring-mp-enqueue3.svg
+.. |ring-mp-enqueue3| image:: img/ring-mp-enqueue3.*
-.. |ring-mp-enqueue4| image:: img/ring-mp-enqueue4.svg
+.. |ring-mp-enqueue4| image:: img/ring-mp-enqueue4.*
-.. |ring-mp-enqueue5| image:: img/ring-mp-enqueue5.svg
+.. |ring-mp-enqueue5| image:: img/ring-mp-enqueue5.*
-.. |ring-modulo1| image:: img/ring-modulo1.svg
+.. |ring-modulo1| image:: img/ring-modulo1.*
-.. |ring-modulo2| image:: img/ring-modulo2.svg
+.. |ring-modulo2| image:: img/ring-modulo2.*

@@ -172,6 +172,6 @@ Sample Application. See Section 9.4.4, "RX Queue Initialization".
 TX queue initialization is done in the same way as it is done in the L2 Forwarding
 Sample Application. See Section 9.4.5, "TX Queue Initialization".
-.. |dist_perf| image:: img/dist_perf.svg
+.. |dist_perf| image:: img/dist_perf.*
-.. |dist_app| image:: img/dist_app.svg
+.. |dist_app| image:: img/dist_app.*

@@ -327,4 +327,4 @@ To remove bridges and persistent TAP interfaces, the following commands are used
 brctl delbr br0
 openvpn --rmtun --dev tap_dpdk_00
-.. |exception_path_example| image:: img/exception_path_example.svg
+.. |exception_path_example| image:: img/exception_path_example.*

@@ -221,4 +221,4 @@ performing AES-CBC-128 encryption with AES-XCBC-MAC-96 hash, the following setti
 Refer to the *DPDK Test Report* for more examples of traffic generator setup and the application startup command lines.
 If no errors are generated in response to the startup commands, the application is running correctly.
-.. |quickassist_block_diagram| image:: img/quickassist_block_diagram.png
+.. |quickassist_block_diagram| image:: img/quickassist_block_diagram.*

@@ -75,7 +75,7 @@ The packet flow through the Kernel NIC Interface application is as shown in the
 **Figure 2. Kernel NIC Application Packet Flow**
-.. image3_png has been renamed to kernel_nic.png
+.. image3_png has been renamed to kernel_nic.*
 |kernel_nic|
@@ -617,4 +617,4 @@ Currently, setting a new MTU and configuring the network interface (up/ down) ar
 return ret;
 }
-.. |kernel_nic| image:: img/kernel_nic.png
+.. |kernel_nic| image:: img/kernel_nic.*

@@ -527,6 +527,6 @@ however it improves performance:
 prev_tsc = cur_tsc;
 }
-.. |l2_fwd_benchmark_setup| image:: img/l2_fwd_benchmark_setup.svg
+.. |l2_fwd_benchmark_setup| image:: img/l2_fwd_benchmark_setup.*
-.. |l2_fwd_virtenv_benchmark_setup| image:: img/l2_fwd_virtenv_benchmark_setup.png
+.. |l2_fwd_virtenv_benchmark_setup| image:: img/l2_fwd_virtenv_benchmark_setup.*

@@ -398,6 +398,6 @@ adds rules parsed from the file into the database and build an ACL trie.
 It is important to note that the application creates an independent copy of each database for each socket CPU
 involved in the task to reduce the time for remote memory access.
-.. |ipv4_acl_rule| image:: img/ipv4_acl_rule.png
+.. |ipv4_acl_rule| image:: img/ipv4_acl_rule.*
-.. |example_rules| image:: img/example_rules.png
+.. |example_rules| image:: img/example_rules.*

@@ -242,4 +242,4 @@ are on the same or different CPU sockets, the following run-time scenarios are p
 then it has to be transmitted out by a NIC connected to socket C.
 The performance price for crossing the CPU socket boundary is paid twice for this packet.
-.. |load_bal_app_arch| image:: img/load_bal_app_arch.png
+.. |load_bal_app_arch| image:: img/load_bal_app_arch.*

@@ -775,10 +775,10 @@ so it remaps the resource to the new core ID slot.
 return 0;
 }
-.. |sym_multi_proc_app| image:: img/sym_multi_proc_app.png
+.. |sym_multi_proc_app| image:: img/sym_multi_proc_app.*
-.. |client_svr_sym_multi_proc_app| image:: img/client_svr_sym_multi_proc_app.png
+.. |client_svr_sym_multi_proc_app| image:: img/client_svr_sym_multi_proc_app.*
-.. |master_slave_proc| image:: img/master_slave_proc.png
+.. |master_slave_proc| image:: img/master_slave_proc.*
-.. |slave_proc_recov| image:: img/slave_proc_recov.png
+.. |slave_proc_recov| image:: img/slave_proc_recov.*

@@ -348,4 +348,4 @@ This application classifies based on the QinQ double VLAN tags and the IP destin
 Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.
-.. |qos_sched_app_arch| image:: img/qos_sched_app_arch.png
+.. |qos_sched_app_arch| image:: img/qos_sched_app_arch.*

@@ -499,8 +499,8 @@ low_watermark from the rte_memzone previously created by qw.
 low_watermark = (unsigned int *) qw_memzone->addr + sizeof(int);
 }
-.. |pipeline_overview| image:: img/pipeline_overview.png
+.. |pipeline_overview| image:: img/pipeline_overview.*
-.. |ring_pipeline_perf_setup| image:: img/ring_pipeline_perf_setup.png
+.. |ring_pipeline_perf_setup| image:: img/ring_pipeline_perf_setup.*
-.. |threads_pipelines| image:: img/threads_pipelines.png
+.. |threads_pipelines| image:: img/threads_pipelines.*

@@ -282,4 +282,4 @@ The profile for input traffic is TCP/IPv4 packets with:
 * source TCP port fixed to 0
-.. |test_pipeline_app| image:: img/test_pipeline_app.png
+.. |test_pipeline_app| image:: img/test_pipeline_app.*

@@ -747,12 +747,12 @@ The above message indicates that device 0 has been registered with MAC address c
 Any packets received on the NIC with these values is placed on the devices receive queue.
 When a virtio-net device transmits packets, the VLAN tag is added to the packet by the DPDK vhost sample code.
-.. |vhost_net_arch| image:: img/vhost_net_arch.png
+.. |vhost_net_arch| image:: img/vhost_net_arch.*
-.. |qemu_virtio_net| image:: img/qemu_virtio_net.png
+.. |qemu_virtio_net| image:: img/qemu_virtio_net.*
-.. |tx_dpdk_testpmd| image:: img/tx_dpdk_testpmd.png
+.. |tx_dpdk_testpmd| image:: img/tx_dpdk_testpmd.*
-.. |vhost_net_sample_app| image:: img/vhost_net_sample_app.png
+.. |vhost_net_sample_app| image:: img/vhost_net_sample_app.*
-.. |virtio_linux_vhost| image:: img/virtio_linux_vhost.png
+.. |virtio_linux_vhost| image:: img/virtio_linux_vhost.*

@@ -356,6 +356,6 @@ Where {core_num} is the lcore and channel to change frequency by scaling up/down
 set_cpu_freq {core_num} up|down|min|max
-.. |vm_power_mgr_highlevel| image:: img/vm_power_mgr_highlevel.svg
+.. |vm_power_mgr_highlevel| image:: img/vm_power_mgr_highlevel.*
-.. |vm_power_mgr_vm_request_seq| image:: img/vm_power_mgr_vm_request_seq.svg
+.. |vm_power_mgr_vm_request_seq| image:: img/vm_power_mgr_vm_request_seq.*

@@ -248,4 +248,4 @@ To generate the statistics output, use the following command:
 Please note that the statistics output will appear on the terminal where the vmdq_dcb_app is running,
 rather than the terminal from which the HUP signal was sent.
-.. |vmdq_dcb_example| image:: img/vmdq_dcb_example.svg
+.. |vmdq_dcb_example| image:: img/vmdq_dcb_example.*