For each forward engine, there may be some special conditions that
must be met before forwarding can run.
Adding checks for these conditions at configuration time is not
suitable, because one condition may rely on multiple configuration
items, and the conditions required by each forward engine are not
generic.
The best solution is for each forward engine to provide a callback
that checks whether these conditions are met; testpmd can then call
the callback to determine whether forwarding can be started.
The forward engine already has a void callback, 'port_fwd_begin',
which performs some initialization for forwarding. This patch updates
its return value so that checks can be added to it to confirm whether
forwarding can be started. In addition, this patch calls the callback
before the forwarding stats are reset, and only then launches the
forwarding engine.
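A minimal sketch of the idea, in the context of app/test-pmd/testpmd.c;
run_port_fwd_begin is a hypothetical helper name and the exact diff may
differ:

    /* testpmd.h: the begin callback now returns a status code
     * (was: typedef void (*port_fwd_begin_t)(portid_t pi);) */
    typedef int (*port_fwd_begin_t)(portid_t pi);

    /* Run the begin callback for every forwarding port; abort the
     * start if any port reports that it is not ready. */
    static int
    run_port_fwd_begin(void)
    {
            port_fwd_begin_t begin = cur_fwd_config.fwd_eng->port_fwd_begin;
            portid_t i;

            if (begin == NULL)
                    return 0;
            for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
                    if (begin(fwd_ports_ids[i]) != 0) {
                            fprintf(stderr,
                                    "Packet forwarding is not ready\n");
                            return -1;
                    }
            }
            return 0;
    }

start_packet_forwarding() would call this helper before resetting the
forwarding stats and launching the engine.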
Bugzilla ID: 797
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
This adds a new forwarding mode to testpmd to simulate the more
realistic behavior of a guest machine engaged in receiving and
sending packets while performing a Virtual Network Function (VNF).
The goal is to enable a simple way of measuring the performance
impact on cache and memory footprint utilization from various VNFs
co-located on the same host machine. To do this, the mode does the
following:
* Buffer packets in a FIFO:
Create a FIFO to buffer received packets. Once it overflows, put
those packets into the actual TX queue. The FIFO is created per TX
queue and its size can be set with the --noisy-tx-sw-buffer-size
commandline parameter.
A second commandline parameter is used to set a timeout in
milliseconds after which the FIFO is flushed (a sketch of this
buffering follows the two parameter descriptions below).
--noisy-tx-sw-buffer-size [packet numbers]
Keep the mbufs in a FIFO and forward the overflowing packets from the
FIFO. This queue is per TX queue (after all other packet processing).
--noisy-tx-sw-buffer-flushtime [delay]
Flush the packet queue if no packets have been seen within [delay]
milliseconds. As long as packets are seen, the timer is reset.
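A minimal sketch of the per-TX-queue FIFO logic, assuming an rte_ring
backs the buffer; the names noisy_fifo and fifo_put_get are
hypothetical and only illustrate the overflow path:

    #include <stdint.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>

    struct noisy_fifo {
            struct rte_ring *r;      /* holds struct rte_mbuf pointers */
            uint64_t last_flush_tsc; /* for the flushtime timeout */
    };

    /* Enqueue freshly received mbufs; if the FIFO would overflow,
     * first dequeue the oldest mbufs into the TX array so they get
     * sent on the actual TX queue. Returns the number to transmit. */
    static uint16_t
    fifo_put_get(struct noisy_fifo *f, struct rte_mbuf **rx, uint16_t nb_rx,
                 struct rte_mbuf **tx, uint16_t tx_room)
    {
            unsigned int free_cnt = rte_ring_free_count(f->r);
            uint16_t nb_tx = 0;

            if (nb_rx > free_cnt) {
                    unsigned int need = nb_rx - free_cnt;

                    if (need > tx_room)
                            need = tx_room;
                    nb_tx = rte_ring_dequeue_burst(f->r, (void **)tx,
                                                   need, NULL);
            }
            /* mbufs that still do not fit must be freed by the
             * caller; that error handling is omitted in this sketch */
            rte_ring_enqueue_burst(f->r, (void **)rx, nb_rx, NULL);
            return nb_tx;
    }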
Add several options to simulate route lookups (memory reads) in
tables that can be quite large, as well as route hit statistics
updates. These options simulate the traversal of the whole stack and
will thrash the cache. Memory access is random.
* Simulate route lookups:
Allocate a buffer and perform reads and writes on it as specified by
the commandline options below (a sketch follows the list):
--noisy-lkup-memory [size]
Size of the VNF internal memory (MB), in which the random reads and
writes will be done, allocated by rte_malloc (hugepages).
--noisy-lkup-num-writes [num]
Number of random writes in memory per packet that should be
performed, simulating hit-flag updates. 64 bits per write, all
writes in different cache lines.
--noisy-lkup-num-reads [num]
Number of random reads in memory per packet that should be
performed, simulating FIB/table lookups. 64 bits per read, all
reads in different cache lines.
--noisy-lkup-num-reads-writes [num]
Number of random reads and writes in memory per packet that should
be performed, simulating stats updates. 64 bits per read-write, all
reads and writes in different cache lines.
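A minimal sketch of the simulated lookups, assuming the buffer comes
from rte_zmalloc (hugepages) and the random indices from rte_rand();
noisy_mem_init and sim_memory_lookups are hypothetical helper names:

    #include <stdint.h>
    #include <rte_malloc.h>
    #include <rte_random.h>

    static uint64_t *vnf_mem; /* scratch buffer, --noisy-lkup-memory MB */
    static uint64_t nb_words; /* number of 64-bit words in the buffer */

    /* One-time setup: allocate the VNF scratch memory from hugepages. */
    static int
    noisy_mem_init(uint64_t mem_mb)
    {
            nb_words = mem_mb * 1024 * 1024 / sizeof(uint64_t);
            vnf_mem = rte_zmalloc("noisy-lkup",
                                  nb_words * sizeof(uint64_t), 0);
            return vnf_mem == NULL ? -1 : 0;
    }

    /* Per packet: touch random 64-bit words; with a large buffer,
     * each access falls in a different, likely cold, cache line. */
    static void
    sim_memory_lookups(unsigned int nb_reads, unsigned int nb_writes)
    {
            uint64_t vsum = 0;
            unsigned int i;

            for (i = 0; i < nb_reads; i++)   /* --noisy-lkup-num-reads */
                    vsum += vnf_mem[rte_rand() % nb_words];
            for (i = 0; i < nb_writes; i++)  /* --noisy-lkup-num-writes */
                    vnf_mem[rte_rand() % nb_words] = vsum;
    }

With these options the mode could be exercised, for example, as
follows (the forwarding mode name 'noisy' is an assumption here):

    dpdk-testpmd -l 0-3 -- --forward-mode=noisy \
            --noisy-tx-sw-buffer-size=512 \
            --noisy-tx-sw-buffer-flushtime=10 \
            --noisy-lkup-memory=128 \
            --noisy-lkup-num-reads=16 --noisy-lkup-num-writes=4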
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>