app/testpmd: handle all Rx queues in RSS setup

This patch removes the constraint in Rx queue handling when multiqueue is
enabled, so that all Rx queues are handled.

Currently testpmd requires a dedicated core for each Rx queue, so when there
are fewer cores than Rx queues, some queues are silently ignored, which causes
confusion and inconvenience.

One example: an engineer was running a multiqueue test with 2 ports in the
guest, each with 4 queues, and testpmd as the forwarding engine in the guest.
As usual he used 1 core for forwarding, and as a result he only saw traffic
from port 0 queue 0 to port 1 queue 0. A lot of emails and quite some time
were then spent to root-cause it, and of course it was caused by this
unreasonable testpmd behavior.

Moreover, even knowing this behavior, testing the above case still requires
8 cores in a single guest just to poll all the Rx queues, which is obviously
too expensive. A guest-side invocation illustrating the scenario is sketched
below.
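A hedged sketch of the guest-side setup described above (the EAL core mask,
memory channels, and port binding depend on the environment; only the
--rxq/--txq/--nb-cores values matter for this discussion):

    testpmd -c 0x3 -n 4 -- -i --rxq=4 --txq=4 --nb-cores=1
    testpmd> start

With 2 ports and 4 queues per port this creates 8 forwarding streams, but
with --nb-cores=1 the old code polls only one of them.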

We have met quite a lot of cases like this; one recent example:
http://openvswitch.org/pipermail/dev/2016-June/072110.html

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Author: Zhihong Wang
Date: 2016-06-14 19:08:05 -04:00
Committer: Thomas Monjalon
Commit: f2bb7ae1d2 (parent: 0e10698030)

@@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
 		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
-	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
-		cur_fwd_config.nb_fwd_streams =
-			(streamid_t)cur_fwd_config.nb_fwd_lcores;
-	else
-		cur_fwd_config.nb_fwd_lcores =
-			(lcoreid_t)cur_fwd_config.nb_fwd_streams;
 
 	/* reinitialize forwarding streams */
 	init_fwd_streams();
 
 	setup_fwd_config_of_each_lcore(&cur_fwd_config);
 	rxp = 0; rxq = 0;
-	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
+	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
 		struct fwd_stream *fs;
 
 		fs = fwd_streams[lc_id];
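
For reference, the effect of the removed clamp can be sketched in isolation.
The following is a hypothetical standalone C program, not testpmd code; the
variable names merely mirror the cur_fwd_config fields, and the
2-port/4-queue/1-core values come from the example above:

    #include <stdio.h>

    int main(void)
    {
    	unsigned int nb_q = 4;          /* Rx queues per port (--rxq=4) */
    	unsigned int nb_fwd_ports = 2;  /* forwarding ports */
    	unsigned int nb_fwd_lcores = 1; /* forwarding cores (--nb-cores=1) */
    	unsigned int nb_fwd_streams = nb_q * nb_fwd_ports;

    	/* Old behavior: streams were clamped to the lcore count. */
    	unsigned int old_streams = nb_fwd_streams > nb_fwd_lcores ?
    			nb_fwd_lcores : nb_fwd_streams;

    	/* New behavior: all streams are kept and shared among the lcores. */
    	unsigned int new_streams = nb_fwd_streams;

    	printf("old: %u stream(s) polled, %u Rx queue(s) ignored\n",
    			old_streams, nb_fwd_streams - old_streams);
    	printf("new: %u stream(s) polled by %u lcore(s)\n",
    			new_streams, nb_fwd_lcores);
    	return 0;
    }

With these values the old clamp leaves 1 stream polled and 7 Rx queues
ignored, while the new code keeps all 8 streams on the single forwarding
lcore.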