f7a58af502
Occasionally, the number of packets to free from the work queue ends
exactly on a boundary such that nb_free = 0 and pool = 0. This causes
a segfault as follows:

  (gdb) bt
  #0  rte_mempool_default_cache
  #1  rte_mempool_put_bulk (n=0, obj_table=0x7f10deff2530, mp=0x0)
  #2  enic_free_wq_bufs (wq=wq@entry=0x7efabffcd5b0,
      completed_index=completed_index@entry=33)
  #3  0x00007f11e9c86e17 in enic_cleanup_wq (enic=<optimized out>,
      wq=wq@entry=0x7efabffcd5b0)
      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:442
  #4  0x00007f11e9c86e5f in enic_xmit_pkts (tx_queue=0x7efabffcd5b0,
      tx_pkts=0x7f10deffb1a8, nb_pkts=<optimized out>)
      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:470
  #5  0x00007f11e9e147ad in rte_eth_tx_burst (nb_pkts=<optimized out>,
      tx_pkts=0x7f10deffb1a8, queue_id=0, port_id=<optimized out>)

This commit makes the enic wq driver match the other drivers that call
the bulk free, by checking that there are actual packets to free before
doing so.

Fixes: 36935afbc53c ("net/enic: refactor Tx mbuf recycling")
CC: stable@dpdk.org

Reported-by: Vincent S. Cojot <vcojot@redhat.com>
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1468631
Signed-off-by: Aaron Conole <aconole@redhat.com>
Reviewed-by: John Daley <johndale@cisco.com>
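
The guard described above amounts to skipping the bulk free when the
batch is empty. The following snippet is an illustrative sketch, not
the verbatim patch; the names nb_free, free[] and pool follow the
naming used in the enic Tx cleanup path (enic_free_wq_bufs):

	/* Illustrative sketch: previously the bulk free ran
	 * unconditionally, so an empty batch passed nb_free = 0 and a
	 * NULL pool to rte_mempool_put_bulk(), which crashed while
	 * looking up the per-lcore mempool cache. Only free when there
	 * is actually something to return to the mempool.
	 */
	if (nb_free > 0)
		rte_mempool_put_bulk(pool, (void **)free, nb_free);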