our feet when we look inside timecounter structures.
Make the "sync_other" code more robust by never overwriting the
tc_next field.
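A minimal sketch of the intended pattern (sync_other and tc_next are
from the message above; the surrounding ring bookkeeping is elided and
illustrative):

    static void
    sync_other(struct timecounter *tc)
    {
            struct timecounter *tcn, *save;

            tcn = tc->tc_next;
            save = tcn->tc_next;    /* remember the ring linkage... */
            *tcn = *tc;             /* ...copy the state wholesale... */
            tcn->tc_next = save;    /* ...so tc_next is never clobbered */
    }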
Add counters for the bin[up]time functions.
Call tc_windup() in tc_init() and switch_timecounter() to make sure
all the fields are set correctly.
The binary format "bintime" is a 32.64 format; it will go to 64.64
when time_t does.
The bintime format is available to consumers of time in the kernel,
and is preferable where time intervals need to be accumulated.
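A minimal sketch of the format and of the carry-propagating addition
that makes interval accumulation cheap (the helper name follows the
bintime_add() convention):

    struct bintime {
            time_t          sec;    /* 32 bits today, 64 when time_t grows */
            uint64_t        frac;   /* binary fraction of a second */
    };

    static __inline void
    bintime_add(struct bintime *bt, const struct bintime *bt2)
    {
            uint64_t u;

            u = bt->frac;
            bt->frac += bt2->frac;
            if (u > bt->frac)       /* fraction wrapped: carry a second */
                    bt->sec++;
            bt->sec += bt2->sec;
    }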
This change simplifies much of the magic math inside the timecounters
and improves the frequency and time precision by a couple of bits.
I have not been able to measure a performance difference which was not
a tiny fraction of the standard deviation on the measurements.
HZ=BIGNUM will strain the assumptions behind timecounters to the
point where they break.
This may or may not help people seeing "microuptime() went backwards"
messages.
Make the global timecounter variable volatile; it makes no difference in
the code GCC generates, but it represents the intent correctly.
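That is, the pointer itself is what gets qualified, not the structure
it points to:

    static struct timecounter * volatile timecounter;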
Thanks to: jdp
MFC after: 2 weeks
include:
* Mutual exclusion is used instead of spl*(). See mutex(9) and the
  sketch after this list. (Note: The alpha port is still in transition
  and currently uses both.)
* Per-CPU idle processes.
* Interrupts are run in their own separate kernel threads and can be
preempted (i386 only).
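A minimal sketch of the spl(9)-to-mutex(9) conversion pattern referred
to in the first item (foo_mtx and foo_count are illustrative names,
not from the tree):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx foo_mtx;
    static int foo_count;

    MTX_SYSINIT(foo_mtx, &foo_mtx, "foo counter", MTX_DEF);

    static void
    foo_bump(void)
    {
            mtx_lock(&foo_mtx);     /* was: s = splhigh(); */
            foo_count++;
            mtx_unlock(&foo_mtx);   /* was: splx(s); */
    }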
Partially contributed by: BSDi (BSD/OS)
Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh
Make the public interface more systematically named.
Remove the alternate method; it doesn't do any good, only ruins performance.
Add counters to profile the usage of the 8 access functions.
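The counters follow the usual read-only sysctl pattern; a sketch, with
nmicrotime standing in for any one of the eight:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    SYSCTL_DECL(_kern_timecounter);

    static u_int nmicrotime;
    SYSCTL_UINT(_kern_timecounter, OID_AUTO, nmicrotime, CTLFLAG_RD,
        &nmicrotime, 0, "Number of calls to microtime()");

Each access function then bumps its counter on entry (nmicrotime++ at
the top of microtime(), and so on).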
Apply the beer-ware to my code.
The weird +/- counts are caused by two repocopies behind the scenes:
kern/kern_clock.c -> kern/kern_tc.c
sys/time.h -> sys/timetc.h
(thanks peter!)
and extend. The new function containing the code is named schedclock()
as in NetBSD, but it has slightly different semantics (it already handles
incrementation of p->p_cpticks, and it should handle any calling frequency).
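A rough sketch of its shape (the estcpu arithmetic follows the
NetBSD-derived scheduler code and is illustrative here):

    void
    schedclock(struct proc *p)
    {
            p->p_cpticks++;         /* moved in from the old callers */
            p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);
            if ((p->p_estcpu % INVERSE_ESTCPU_WEIGHT) == 0)
                    resetpriority(p);
    }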
Agreed with in principle by: dufault
NOTE: This will break building ntpd until ntpd has been upgraded to also
support draft 05. People that want to build ntpd in the meantime can
get patches from me.
used for timecounting. The possible values are the names of the
physically present hardware timecounters ("i8254" and "TSC" on i386's).
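For example: sysctl -w kern.timecounter.hardware=TSC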
Fixed some nearby bitrot in comments in <sys/time.h>.
Reviewed by: phk
This code is backwards compatible with the older "microkernel" PLL, but
allows ntpd v4 to use nanosecond resolution. Many other improvements.
PPS_SYNC and hardpps() are NOT supported yet.
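A hedged userland sketch of what nanosecond mode looks like through
ntp_adjtime(2); MOD_NANO and STA_NANO are the draft API's names:

    #include <sys/timex.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct timex tx = { 0 };

            tx.modes = MOD_NANO;    /* request nanosecond resolution */
            if (ntp_adjtime(&tx) < 0) {
                    perror("ntp_adjtime");
                    return (1);
            }
            printf("offset = %ld (%s)\n", (long)tx.offset,
                (tx.status & STA_NANO) ? "ns" : "us");
            return (0);
    }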
is the preparation step for moving pmap storage out of vmspace proper.
Reviewed by: Alan Cox <alc@cs.rice.edu>
Matthew Dillon <dillon@apollo.backplane.com>
can set if your hw/sw produces the "calcru negative..." message.
Setting the alternate method (sysctl -w kern.timecounter.method=1)
makes the get{nano|micro}*() functions call the real thing on every
invocation, resulting in a measurable but minor overhead.
I decided to NOT have the "calcru" change the method automatically
because you should be aware of this problem if you have it.
The problems currently seen, related to usleep and a few other corners,
are fixed for both methods.
out interrupts for too long. If you still see the "calcru: negative
time..." message you can increase NTIMECOUNTER (see LINT).
A side effect is that a timecounter is now required not to wrap around
in less than (1 + delta) seconds, instead of the (1/hz + delta) required
until now.
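For a sense of scale: a 32-bit counter at 100 MHz wraps every
2^32 / 10^8 ~= 42.9 seconds and meets the new bound comfortably,
whereas the old (1/hz + delta) rule with hz=100 asked for only about
10 ms, which even a 16-bit counter at 1.19 MHz (wrapping every ~55 ms)
could satisfy.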
Many thanks to: msmith, wpaul, wosch & bde
If you have problems with the "calcru" messages and processes being
killed for excessive cpu time, try to increase the NTIMECOUNTER
#define and report your findings.
Clean up (or if antipodic: down) some of the msgbuf stuff.
Use an inline function rather than a macro for timecounter delta.
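The inline looks roughly like this (field names per the timecounter
structure of that era):

    static __inline unsigned
    tco_delta(struct timecounter *tc)
    {
            return ((tc->tc_get_timecount(tc) - tc->tc_offset_count) &
                tc->tc_counter_mask);
    }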
Maintain process "on-cpu" time as 64 bits of microseconds to avoid
needless second rollover overhead.
Avoid calling microuptime() a second time in mi_switch() if we do
not pass through _idle in cpu_switch().
This should reduce our context-switch overhead a bit, in particular
on pre-P5 and SMP systems.
WARNING: Programs which muck about with struct proc in userland
will have to be fixed.
Reviewed, but found imperfect by: bde