Ziye Yang cdc0170c1b nvmf/tcp: Add a maximal PDU loop number
Previously, we would keep handling PDUs on a connection until there was
no more incoming data from the network, as long as the loop could
continue. However, this is not fair when handling multiple connections
in a polling group.

This change sets a maximum number of NVMe/TCP PDUs that we handle per
connection in each poll, which improves performance. After some
tuning, 32 turned out to be a good loop count; our iSCSI target uses
16.
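
As a rough illustration (a minimal sketch only; the constant, struct,
and function names below are hypothetical and do not match the actual
SPDK code), the idea is to cap how many PDUs are processed per
connection before returning control to the poller:

#include <stdio.h>

/* Hypothetical cap on PDUs handled per connection per poll. */
#define MAX_PDUS_PER_POLL 32

struct tcp_conn {
	int pending_pdus;	/* stand-in for data readable from the socket */
};

/* Stand-in for handling a single PDU; returns 1 if one was processed,
 * 0 if there was no more data to read. */
static int
handle_one_pdu(struct tcp_conn *conn)
{
	if (conn->pending_pdus == 0) {
		return 0;
	}
	conn->pending_pdus--;
	return 1;
}

/* Process at most MAX_PDUS_PER_POLL PDUs on this connection, then
 * return so the poller can move on to the next connection in the
 * polling group. */
static int
poll_one_connection(struct tcp_conn *conn)
{
	int handled = 0;

	while (handled < MAX_PDUS_PER_POLL && handle_one_pdu(conn)) {
		handled++;
	}
	return handled;
}

int
main(void)
{
	struct tcp_conn conns[2] = { { .pending_pdus = 100 }, { .pending_pdus = 5 } };
	int i;

	/* Without the cap, the first connection would monopolize the
	 * poller for all 100 PDUs before the second one is serviced. */
	for (i = 0; i < 2; i++) {
		printf("conn %d: handled %d PDUs this poll\n", i,
		       poll_one_connection(&conns[i]));
	}
	return 0;
}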

The following shows some performance data:

Configuration:
1. Command used on the initiator side:
./examples/nvme/perf/perf -r 'trtype:TCP adrfam:IPv4 traddr:192.168.4.11 trsvcid:4420'
-q 128 -o 4096 -w randrw -M 50 -t 10

2. On the target side, export 4 malloc bdevs in the same subsystem.

Result:

Before patch:

Starting thread on core 0
========================================================
                                                                                                           Latency(us)
Device Information                                                    :       IOPS      MiB/s    Average        min        max
TCP  (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0:   51554.20     201.38    2483.07     462.31    4158.45
TCP  (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0:   51533.00     201.30    2484.12     508.06    4464.07
TCP  (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0:   51630.20     201.68    2479.30     481.19    4120.83
TCP  (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0:   51700.70     201.96    2475.85     442.61    4018.67
========================================================
Total                                                                 :  206418.10     806.32    2480.58     442.61    4464.07

After patch:
Starting thread on core 0
========================================================
                                                                                                           Latency(us)
Device Information                                                    :       IOPS      MiB/s    Average        min        max
TCP  (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0:   57445.30     224.40    2228.46     450.03    4231.23
TCP  (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0:   57529.50     224.72    2225.17     676.07    4251.76
TCP  (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0:   57524.80     224.71    2225.29     627.08    4193.28
TCP  (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0:   57476.50     224.52    2227.17     663.14    4205.12
========================================================
Total                                                                 :  229976.10     898.34    2226.52     450.03    4251.76

Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I86b7af1b669169eee2225de2d28c2cc313e7d905
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459572
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-06-28 12:28:54 +00:00