'counter_upper' and 'counter_lower_last'. The race exists because
interrupts remain enabled even though tick_ticker() executes inside a
critical section: a critical section only prevents preemption, so
clock_intr() can still fire and modify the cached values mid-read.
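
A minimal sketch of the race-free read path, assuming per-CPU cached
halves named as in the description and the standard FreeBSD MIPS
helpers mips_rd_count()/intr_disable()/intr_restore(); this illustrates
the fix and is not the committed code:

    static volatile uint32_t counter_upper;      /* cached high 32 bits */
    static volatile uint32_t counter_lower_last; /* last COUNT snapshot */

    static uint64_t
    tick_ticker(void)
    {
            register_t s;
            uint32_t count, upper, lower_last;

            /*
             * A critical section only prevents preemption; it does not
             * keep clock_intr() from running.  Disable interrupts so
             * the cached values and the COUNT register are sampled as
             * one consistent snapshot.
             */
            s = intr_disable();
            upper = counter_upper;
            lower_last = counter_lower_last;
            count = mips_rd_count();
            intr_restore(s);

            /* COUNT wrapped since clock_intr() last refreshed the cache. */
            if (count < lower_last)
                    upper++;

            return (((uint64_t)upper << 32) | count);
    }
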
Fix a bug in clock_intr() in how it updates the cached values of
'counter_upper' and 'counter_lower_last'. They are updated only
when the COUNT register rolls over. Worse, it will *never* update
the cached values if 'counter_lower_last' happens to be zero, because
the unsigned rollover test 'count < counter_lower_last' can never be
true against zero.
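
The corrected update logic, sketched under the same assumed names: the
rollover test is kept, but 'counter_lower_last' is now refreshed on
every clock interrupt instead of only inside the rollover branch, so a
snapshot of zero can no longer freeze the cache:

    /*
     * In clock_intr(): refresh the snapshot unconditionally.  The old
     * code updated both values only inside the rollover branch, and
     * with counter_lower_last == 0 that branch could never be taken,
     * so the cache never advanced.
     */
    count = mips_rd_count();
    if (count < counter_lower_last)
            counter_upper++;        /* COUNT wrapped past 2^32 */
    counter_lower_last = count;     /* always record the latest read */
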
Get rid of a superfluous critical section in clock_intr(). It serves
no purpose because clock_intr() already executes in hard interrupt
context.
Switch back to using 'tick_ticker()' as the cpu ticker for Sibyte.
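
For reference, re-registering the ticker amounts to a call like the
one below; set_cputicker() is the kernel's interface for selecting the
cpu ticker, while the 'counter_freq' variable name is an assumption:

    /*
     * Register tick_ticker() as the cpu ticker again; the last
     * argument flags whether the tick frequency can vary.
     */
    set_cputicker(tick_ticker, counter_freq, 0);
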
Reviewed by: jmallett, mav