Now that we correctly enable rx interrupts on all cores, performance has gotten
quite awful: e.g. 4 packets will come in and get processed on 4 different cores
at the same time, contending painfully with the TCP stack.  For now, just run
one task at a time.
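
As a rough sketch of that gate (not the driver code: it uses C11 atomics in
place of the kernel's atomic_cmpset_int(9) and taskqueue(9), and rx_interrupt,
enqueue_rx_task and rx_task_done are names made up for the example):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int rx_active;	/* 0 = idle, 1 = an RX task is queued or running */

/* Stand-in for taskqueue_enqueue(); here it just reports the action. */
static void
enqueue_rx_task(void)
{
	printf("rx task enqueued\n");
}

/*
 * Interrupt-side gate: only the caller that flips the flag from 0 to 1
 * schedules the task, so at most one RX task is ever outstanding.
 */
static void
rx_interrupt(void)
{
	int expected = 0;

	if (atomic_compare_exchange_strong(&rx_active, &expected, 1))
		enqueue_rx_task();
	/* else: a task is already queued or running and will drain the work */
}

/*
 * Task-side release: clear the flag once all packets are drained so the
 * next interrupt can schedule a fresh task.
 */
static void
rx_task_done(void)
{
	int expected = 1;

	if (!atomic_compare_exchange_strong(&rx_active, &expected, 0))
		fprintf(stderr, "inconsistent rx active state\n");
}

int
main(void)
{
	rx_interrupt();		/* schedules the task */
	rx_interrupt();		/* gated out: a task is already outstanding */
	rx_task_done();		/* task finished; gate reopens */
	rx_interrupt();		/* schedules again */
	return (0);
}

The second rx_interrupt() call does nothing because the flag is still set; that
is the whole serialization.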

Running a single task gets performance back up, in most cases, to where it was
before the correctness fixes that made interrupts run on all cores (except in
high-load TCP transmit cases where the only receive traffic is ACKs), and in
some cases it is better now.  Ideally we would use a more advanced interrupt
mitigation strategy, and perhaps separate workqueue groups per port on
multi-port systems, but this is a fine stopgap.
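
The task side of the change, visible in the last hunk below, can be sketched
the same way; again this is only an illustration with C11 atomics, an arbitrary
MAX_RX_PACKETS value, a fake pending_packets backlog, and the driver's
INTERRUPT_LIMIT check left out:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_RX_PACKETS	4		/* arbitrary per-pass budget for the sketch */

static atomic_int rx_active = 1;	/* the interrupt-side gate already fired */
static int pending_packets = 10;	/* fake backlog of received packets */
static bool task_requeued;

/* Stand-in for taskqueue_enqueue(): just remember another pass is wanted. */
static void
enqueue_rx_task(void)
{
	task_requeued = true;
}

/*
 * One pass of the RX task: drain at most MAX_RX_PACKETS, then either
 * reschedule ourselves (budget exhausted) or clear the active flag
 * (backlog empty) so the next interrupt can schedule a fresh task.
 */
static void
rx_task_pass(void)
{
	int rx_count = 0;
	int expected = 1;

	while (pending_packets > 0 && rx_count < MAX_RX_PACKETS) {
		pending_packets--;	/* "process" one packet */
		rx_count++;
	}

	if (rx_count == MAX_RX_PACKETS) {
		/* Budget hit: leave the flag set and run another pass. */
		enqueue_rx_task();
	} else if (!atomic_compare_exchange_strong(&rx_active, &expected, 0)) {
		/* No more packets; clearing the flag must always succeed. */
		fprintf(stderr, "inconsistent rx active state\n");
	}
}

int
main(void)
{
	/* Emulate the taskqueue re-running the task until it stops requeueing. */
	do {
		task_requeued = false;
		rx_task_pass();
	} while (task_requeued);

	printf("backlog drained, rx_active=%d\n", atomic_load(&rx_active));
	return (0);
}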
Juli Mallett 2011-01-09 23:46:24 +00:00
parent 529fb1406b
commit 987da28eb7

@@ -54,6 +54,8 @@ extern struct ifnet *cvm_oct_device[];
 static struct task cvm_oct_task;
 static struct taskqueue *cvm_oct_taskq;
 
+static int cvm_oct_rx_active;
+
 /**
  * Interrupt handler. The interrupt occurs whenever the POW
  * transitions from 0->1 packets in our group.
@@ -70,7 +72,13 @@ int cvm_oct_do_interrupt(void *dev_id)
 		cvmx_write_csr(CVMX_POW_WQ_INT, 1<<pow_receive_group);
 	else
 		cvmx_write_csr(CVMX_POW_WQ_INT, 0x10001<<pow_receive_group);
-	taskqueue_enqueue(cvm_oct_taskq, &cvm_oct_task);
+
+	/*
+	 * Schedule task if there isn't one running.
+	 */
+	if (atomic_cmpset_int(&cvm_oct_rx_active, 0, 1))
+		taskqueue_enqueue(cvm_oct_taskq, &cvm_oct_task);
+
 	return FILTER_HANDLED;
 }
@@ -353,6 +361,19 @@ void cvm_oct_tasklet_rx(void *context, int pending)
 		cvm_oct_free_work(work);
 	}
 
+	/*
+	 * If we hit our limit, schedule another task while we clean up.
+	 */
+	if (INTERRUPT_LIMIT != 0 && rx_count == MAX_RX_PACKETS) {
+		taskqueue_enqueue(cvm_oct_taskq, &cvm_oct_task);
+	} else {
+		/*
+		 * No more packets, all done.
+		 */
+		if (!atomic_cmpset_int(&cvm_oct_rx_active, 1, 0))
+			panic("%s: inconsistent rx active state.", __func__);
+	}
+
 	/* Restore the original POW group mask */
 	cvmx_write_csr(CVMX_POW_PP_GRP_MSKX(coreid), old_group_mask);
 	if (USE_ASYNC_IOBDMA) {