5499c1fc9b
How to reproduce:

1. Start vhost-switch:

   ./examples/vhost/build/vhost-switch -c 0x3 -n 4 -- -p 1 --stat 0

2. Start a VM with a virtio port:

   $QEMU -smp cores=2,sockets=1 -m 4G -cpu host -enable-kvm \
     -chardev socket,id=char1,path=<path to vhost-user socket> \
     -device virtio-net-pci,netdev=vhostuser1 \
     -netdev vhost-user,id=vhostuser1,chardev=char1 \
     -object memory-backend-file,id=mem,size=4G,mem-path=<hugetlbfs path>,share=on \
     -numa node,memdev=mem -mem-prealloc \
     -hda <path to VM img>

3. Start l2fwd inside the VM:

   ./examples/l2fwd/build/l2fwd -c 0x1 -n 4 -m 1024 -- -p 0x1

4. Use IXIA to inject packets at a low bit rate.

Error: vhost-switch keeps printing the error message
"failed to allocate memory for mbuf".

Root cause: the number of mbufs allocated per port is calculated by
the formula below.

  NUM_MBUFS_PER_PORT = ((MAX_QUEUES * RTE_TEST_RX_DESC_DEFAULT) + \
                        (num_switching_cores * MAX_PKT_BURST) + \
                        (num_switching_cores * RTE_TEST_TX_DESC_DEFAULT) + \
                        (num_switching_cores * MBUF_CACHE_SIZE))

Suppose num_switching_cores is 1 and MBUF_CACHE_SIZE is 128. When
initializing the port, the master core fills its mbuf mempool cache,
so some mbufs (for example, 121) are left stranded in that cache. The
total number of mbufs that remain usable is then:

  (MAX_PKT_BURST + MBUF_CACHE_SIZE - 121) = (32 + 128 - 121) = 39

What makes it worse is that a buffer holds mbufs waiting to be
tx_burst to the physical port; when it occupies some of them, fewer
than 32 mbufs may be left, so the vhost dequeue path prints the
message above.

In short, the formula fails to account for the master core's mbuf
mempool cache.

Reported-by: Qian Xu <qian.q.xu@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>