Neel Natu f49fde7faf Fix a race between clock_intr() and tick_ticker() when updating
'counter_upper' and 'counter_lower_last'. The race exists because a
critical section does not disable interrupts, so clock_intr() can still
fire and modify the cached values while tick_ticker() is updating them
from within its critical section.
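
To illustrate the race (a sketch, not the committed code): it assumes
the 32-bit COUNT register is read with mips_rd_count() and uses the two
cached variables named above. The critical section only blocks
preemption; it does not stop clock_intr() from firing in the middle.

	/* Shared with clock_intr(), which advances both values. */
	extern uint32_t counter_upper;		/* cached high 32 bits of the tick count */
	extern uint32_t counter_lower_last;	/* COUNT value seen at the last update */

	static uint64_t
	tick_ticker_sketch(void)
	{
		uint32_t upper, count;

		critical_enter();	/* blocks preemption, but NOT interrupts */
		upper = counter_upper;
		count = mips_rd_count();
		/*
		 * clock_intr() may fire between the loads above and the test
		 * below, advancing both cached values, so the 64-bit value
		 * assembled here can end up off by 2^32.
		 */
		if (count < counter_lower_last)
			upper++;	/* COUNT wrapped since the last clock_intr() */
		critical_exit();

		return (((uint64_t)upper << 32) | count);
	}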

Fix a bug in clock_intr() in how it updates the cached values of
'counter_upper' and 'counter_lower_last'. They are updated only when
the COUNT register rolls over and, worse, they are *never* updated if
'counter_lower_last' happens to be zero.
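
The pre-fix code is not reproduced here, but the bug described above
has roughly this shape (a reconstruction using the same names as the
sketch above):

	/*
	 * Buggy shape: the cache is touched only when a rollover is seen,
	 * and if counter_lower_last is zero the test below is never true
	 * for an unsigned COUNT, so the cache is never updated again.
	 */
	count = mips_rd_count();
	if (count < counter_lower_last) {
		counter_upper++;
		counter_lower_last = count;
	}

	/*
	 * Fixed shape: refresh counter_lower_last on every clock interrupt
	 * and bump counter_upper whenever COUNT has wrapped since the
	 * last one.
	 */
	count = mips_rd_count();
	if (count < counter_lower_last)
		counter_upper++;
	counter_lower_last = count;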

Get rid of the superfluous critical section in clock_intr(). It serves
no purpose because clock_intr() already executes in hard interrupt
context.

Switch back to using 'tick_ticker()' as the cpu ticker for Sibyte.

Reviewed by:	jmallett, mav
2010-08-05 04:59:54 +00:00