/*-
 * SPDX-License-Identifier: Beerware
 *
 * ----------------------------------------------------------------------------
 * "THE BEER-WARE LICENSE" (Revision 42):
 * <phk@FreeBSD.ORG> wrote this file. As long as you retain this notice you
 * can do whatever you want with this stuff. If we meet some day, and you think
 * this stuff is worth it, you can buy me a beer in return. Poul-Henning Kamp
 * ----------------------------------------------------------------------------
 *
 * Copyright (c) 2011, 2015, 2016 The FreeBSD Foundation
 * All rights reserved.
 *
 * Portions of this software were developed by Julien Ridoux at the University
 * of Melbourne under sponsorship from the FreeBSD Foundation.
 *
 * Portions of this software were developed by Konstantin Belousov
 * under sponsorship from the FreeBSD Foundation.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_ntp.h"
#include "opt_ffclock.h"

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/limits.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/sbuf.h>
#include <sys/sleepqueue.h>
#include <sys/sysctl.h>
#include <sys/syslog.h>
#include <sys/systm.h>
#include <sys/timeffc.h>
#include <sys/timepps.h>
#include <sys/timetc.h>
#include <sys/timex.h>
#include <sys/vdso.h>

/*
 * A large step happens on boot. This constant detects such steps.
 * It is relatively small so that ntp_update_second gets called enough
 * in the typical 'missed a couple of seconds' case, but doesn't loop
 * forever when the time step is large.
 */
#define LARGE_STEP	200

/*
 * Implement a dummy timecounter which we can use until we get a real one
 * in the air. This allows the console and other early stuff to use
 * time services.
 */

static u_int
dummy_get_timecount(struct timecounter *tc)
{
	static u_int now;

	return (++now);
}

static struct timecounter dummy_timecounter = {
	dummy_get_timecount, 0, ~0u, 1000000, "dummy", -1000000
};
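
/*
 * The dummy counter simply increments a static variable on every read, so
 * "time" advances only when somebody asks for it.  Its strongly negative
 * quality (-1000000) keeps it from ever being chosen automatically once a
 * real timecounter registers; until then the nominal 1 MHz frequency is
 * good enough for early console output.
 */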

struct timehands {
	/* These fields must be initialized by the driver. */
	struct timecounter	*th_counter;
	int64_t			th_adjustment;
	uint64_t		th_scale;
	u_int			th_offset_count;
	struct bintime		th_offset;
	struct bintime		th_bintime;
	struct timeval		th_microtime;
	struct timespec		th_nanotime;
	struct bintime		th_boottime;
	/* Fields not to be copied in tc_windup start with th_generation. */
	u_int			th_generation;
	struct timehands	*th_next;
};

static struct timehands th0;
static struct timehands th1 = {
	.th_next = &th0
};
static struct timehands th0 = {
	.th_counter = &dummy_timecounter,
	.th_scale = (uint64_t)-1 / 1000000,
	.th_offset = { .sec = 1 },
	.th_generation = 1,
	.th_next = &th1
};

static struct timehands *volatile timehands = &th0;
struct timecounter *timecounter = &dummy_timecounter;
static struct timecounter *timecounters = &dummy_timecounter;

int tc_min_ticktock_freq = 1;

volatile time_t time_second = 1;
volatile time_t time_uptime = 1;

static int sysctl_kern_boottime(SYSCTL_HANDLER_ARGS);
SYSCTL_PROC(_kern, KERN_BOOTTIME, boottime, CTLTYPE_STRUCT|CTLFLAG_RD,
    NULL, 0, sysctl_kern_boottime, "S,timeval", "System boottime");

SYSCTL_NODE(_kern, OID_AUTO, timecounter, CTLFLAG_RW, 0, "");
static SYSCTL_NODE(_kern_timecounter, OID_AUTO, tc, CTLFLAG_RW, 0, "");

static int timestepwarnings;
SYSCTL_INT(_kern_timecounter, OID_AUTO, stepwarnings, CTLFLAG_RW,
    &timestepwarnings, 0, "Log time steps");

struct bintime bt_timethreshold;
struct bintime bt_tickthreshold;
sbintime_t sbt_timethreshold;
sbintime_t sbt_tickthreshold;
struct bintime tc_tick_bt;
sbintime_t tc_tick_sbt;
int tc_precexp;
int tc_timepercentage = TC_DEFAULTPERC;
static int sysctl_kern_timecounter_adjprecision(SYSCTL_HANDLER_ARGS);
SYSCTL_PROC(_kern_timecounter, OID_AUTO, alloweddeviation,
    CTLTYPE_INT | CTLFLAG_RWTUN | CTLFLAG_MPSAFE, 0, 0,
    sysctl_kern_timecounter_adjprecision, "I",
    "Allowed time interval deviation in percents");

volatile int rtc_generation = 1;

static int tc_chosen;	/* Non-zero if a specific tc was chosen via sysctl. */

static void tc_windup(struct bintime *new_boottimebin);
static void cpu_tick_calibrate(int);

void dtrace_getnanotime(struct timespec *tsp);

static int
sysctl_kern_boottime(SYSCTL_HANDLER_ARGS)
{
	struct timeval boottime;

	getboottime(&boottime);

#ifndef __mips__
#ifdef SCTL_MASK32
	int tv[2];

	if (req->flags & SCTL_MASK32) {
		tv[0] = boottime.tv_sec;
		tv[1] = boottime.tv_usec;
		return (SYSCTL_OUT(req, tv, sizeof(tv)));
	}
#endif
#endif
	return (SYSCTL_OUT(req, &boottime, sizeof(boottime)));
}

static int
sysctl_kern_timecounter_get(SYSCTL_HANDLER_ARGS)
{
	u_int ncount;
	struct timecounter *tc = arg1;

	ncount = tc->tc_get_timecount(tc);
	return (sysctl_handle_int(oidp, &ncount, 0, req));
}

static int
sysctl_kern_timecounter_freq(SYSCTL_HANDLER_ARGS)
{
	uint64_t freq;
	struct timecounter *tc = arg1;

	freq = tc->tc_frequency;
	return (sysctl_handle_64(oidp, &freq, 0, req));
}

/*
 * Return the difference between the timehands' counter value now and what
 * was when we copied it to the timehands' offset_count.
 */
static __inline u_int
tc_delta(struct timehands *th)
{
	struct timecounter *tc;

	tc = th->th_counter;
	return ((tc->tc_get_timecount(tc) - th->th_offset_count) &
	    tc->tc_counter_mask);
}
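
/*
 * The subtraction is done in unsigned arithmetic and masked with
 * tc_counter_mask, so a hardware counter that rolls over still yields the
 * correct (small) delta.  For example, with a 16-bit counter (mask 0xffff),
 * an offset_count of 0xfff0 and a current reading of 0x0010 give
 * (0x0010 - 0xfff0) & 0xffff = 0x0020.  This only works as long as
 * tc_windup() runs often enough that the counter cannot wrap more than once
 * between updates.
 */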

/*
 * Functions for reading the time.  We have to loop until we are sure that
 * the timehands that we operated on was not updated under our feet.  See
 * the comment in <sys/time.h> for a description of these 12 functions.
 */
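
/*
 * The loop pairs with tc_windup(): the updater zeroes th_generation, updates
 * the other fields and only then stores the new non-zero generation, so a
 * reader that observes the same non-zero generation before and after copying
 * (with acquire ordering on both sides) is guaranteed a consistent snapshot
 * and never needs a lock.
 */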

#ifdef FFCLOCK
void
fbclock_binuptime(struct bintime *bt)
{
	struct timehands *th;
	unsigned int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*bt = th->th_offset;
		bintime_addx(bt, th->th_scale * tc_delta(th));
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
fbclock_nanouptime(struct timespec *tsp)
{
	struct bintime bt;

	fbclock_binuptime(&bt);
	bintime2timespec(&bt, tsp);
}

void
fbclock_microuptime(struct timeval *tvp)
{
	struct bintime bt;

	fbclock_binuptime(&bt);
	bintime2timeval(&bt, tvp);
}

void
fbclock_bintime(struct bintime *bt)
{
	struct timehands *th;
	unsigned int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*bt = th->th_bintime;
		bintime_addx(bt, th->th_scale * tc_delta(th));
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
fbclock_nanotime(struct timespec *tsp)
{
	struct bintime bt;

	fbclock_bintime(&bt);
	bintime2timespec(&bt, tsp);
}

void
fbclock_microtime(struct timeval *tvp)
{
	struct bintime bt;

	fbclock_bintime(&bt);
	bintime2timeval(&bt, tvp);
}

void
fbclock_getbinuptime(struct bintime *bt)
{
	struct timehands *th;
	unsigned int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*bt = th->th_offset;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
fbclock_getnanouptime(struct timespec *tsp)
{
	struct timehands *th;
	unsigned int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		bintime2timespec(&th->th_offset, tsp);
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
fbclock_getmicrouptime(struct timeval *tvp)
{
	struct timehands *th;
	unsigned int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		bintime2timeval(&th->th_offset, tvp);
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
fbclock_getbintime(struct bintime *bt)
{
	struct timehands *th;
	unsigned int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*bt = th->th_bintime;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
fbclock_getnanotime(struct timespec *tsp)
{
	struct timehands *th;
	unsigned int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*tsp = th->th_nanotime;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
fbclock_getmicrotime(struct timeval *tvp)
{
	struct timehands *th;
	unsigned int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*tvp = th->th_microtime;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}
#else /* !FFCLOCK */
void
binuptime(struct bintime *bt)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*bt = th->th_offset;
		bintime_addx(bt, th->th_scale * tc_delta(th));
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
nanouptime(struct timespec *tsp)
{
	struct bintime bt;

	binuptime(&bt);
	bintime2timespec(&bt, tsp);
}

void
microuptime(struct timeval *tvp)
{
	struct bintime bt;

	binuptime(&bt);
	bintime2timeval(&bt, tvp);
}

void
bintime(struct bintime *bt)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*bt = th->th_bintime;
		bintime_addx(bt, th->th_scale * tc_delta(th));
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
nanotime(struct timespec *tsp)
{
	struct bintime bt;

	bintime(&bt);
	bintime2timespec(&bt, tsp);
}

void
microtime(struct timeval *tvp)
{
	struct bintime bt;

	bintime(&bt);
	bintime2timeval(&bt, tvp);
}

void
getbinuptime(struct bintime *bt)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*bt = th->th_offset;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
getnanouptime(struct timespec *tsp)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		bintime2timespec(&th->th_offset, tsp);
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
getmicrouptime(struct timeval *tvp)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		bintime2timeval(&th->th_offset, tvp);
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
getbintime(struct bintime *bt)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*bt = th->th_bintime;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
getnanotime(struct timespec *tsp)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*tsp = th->th_nanotime;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

void
getmicrotime(struct timeval *tvp)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*tvp = th->th_microtime;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}
#endif /* FFCLOCK */

void
getboottime(struct timeval *boottime)
{
	struct bintime boottimebin;

	getboottimebin(&boottimebin);
	bintime2timeval(&boottimebin, boottime);
}

void
getboottimebin(struct bintime *boottimebin)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*boottimebin = th->th_boottime;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

#ifdef FFCLOCK
/*
 * Support for feed-forward synchronization algorithms. This is heavily inspired
 * by the timehands mechanism but kept independent from it. *_windup() functions
 * have some connection to avoid accessing the timecounter hardware more than
 * necessary.
 */

/* Feed-forward clock estimates kept updated by the synchronization daemon. */
struct ffclock_estimate ffclock_estimate;
struct bintime ffclock_boottime;	/* Feed-forward boot time estimate. */
uint32_t ffclock_status;		/* Feed-forward clock status. */
int8_t ffclock_updated;			/* New estimates are available. */
struct mtx ffclock_mtx;			/* Mutex on ffclock_estimate. */

struct fftimehands {
	struct ffclock_estimate	cest;
	struct bintime		tick_time;
	struct bintime		tick_time_lerp;
	ffcounter		tick_ffcount;
	uint64_t		period_lerp;
	volatile uint8_t	gen;
	struct fftimehands	*next;
};

#define	NUM_ELEMENTS(x) (sizeof(x) / sizeof(*x))

static struct fftimehands ffth[10];
static struct fftimehands *volatile fftimehands = ffth;
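
/*
 * As with the feedback timehands, a small ring of fftimehands lets lock-free
 * readers keep copying from a slightly stale element while ffclock_windup()
 * publishes a new one; the per-element generation number tells a reader when
 * its copy raced with an update and must be retried.
 */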

static void
ffclock_init(void)
{
	struct fftimehands *cur;
	struct fftimehands *last;

	memset(ffth, 0, sizeof(ffth));

	last = ffth + NUM_ELEMENTS(ffth) - 1;
	for (cur = ffth; cur < last; cur++)
		cur->next = cur + 1;
	last->next = ffth;

	ffclock_updated = 0;
	ffclock_status = FFCLOCK_STA_UNSYNC;
	mtx_init(&ffclock_mtx, "ffclock lock", NULL, MTX_DEF);
}

/*
 * Reset the feed-forward clock estimates. Called from inittodr() to get things
 * kick started and uses the timecounter nominal frequency as a first period
 * estimate. Note: this function may be called several times just after boot.
 * Note: this is the only function that sets the value of boot time for the
 * monotonic (i.e. uptime) version of the feed-forward clock.
 */
void
ffclock_reset_clock(struct timespec *ts)
{
	struct timecounter *tc;
	struct ffclock_estimate cest;

	tc = timehands->th_counter;
	memset(&cest, 0, sizeof(struct ffclock_estimate));

	timespec2bintime(ts, &ffclock_boottime);
	timespec2bintime(ts, &(cest.update_time));
	ffclock_read_counter(&cest.update_ffcount);
	cest.leapsec_next = 0;
	cest.period = ((1ULL << 63) / tc->tc_frequency) << 1;
	cest.errb_abs = 0;
	cest.errb_rate = 0;
	cest.status = FFCLOCK_STA_UNSYNC;
	cest.leapsec_total = 0;
	cest.leapsec = 0;

	mtx_lock(&ffclock_mtx);
	bcopy(&cest, &ffclock_estimate, sizeof(struct ffclock_estimate));
	ffclock_updated = INT8_MAX;
	mtx_unlock(&ffclock_mtx);

	printf("ffclock reset: %s (%llu Hz), time = %ld.%09lu\n", tc->tc_name,
	    (unsigned long long)tc->tc_frequency, (long)ts->tv_sec,
	    (unsigned long)ts->tv_nsec);
}

/*
 * Sub-routine to convert a time interval measured in RAW counter units to time
 * in seconds stored in bintime format.
 * NOTE: bintime_mul requires u_int, but the value of the ffcounter may be
 * larger than the max value of u_int (on 32 bit architecture). Loop to consume
 * extra cycles.
 */
static void
ffclock_convert_delta(ffcounter ffdelta, uint64_t period, struct bintime *bt)
{
	struct bintime bt2;
	ffcounter delta, delta_max;

	delta_max = (1ULL << (8 * sizeof(unsigned int))) - 1;
	bintime_clear(bt);
	do {
		if (ffdelta > delta_max)
			delta = delta_max;
		else
			delta = ffdelta;
		bt2.sec = 0;
		bt2.frac = period;
		bintime_mul(&bt2, (unsigned int)delta);
		bintime_add(bt, &bt2);
		ffdelta -= delta;
	} while (ffdelta > 0);
}
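
/*
 * The period argument is the estimated duration of one counter increment
 * expressed as a 64-bit binary fraction of a second (units of 2^-64 s), which
 * is why it can be loaded straight into bt2.frac above.  For instance, a
 * 1 GHz counter has period ~= 2^64 / 10^9, so multiplying by a delta of 10^9
 * ticks yields very nearly 1.0 s.
 */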

/*
 * Update the fftimehands.
 * Push the tick ffcount and time(s) forward based on current clock estimate.
 * The conversion from ffcounter to bintime relies on the difference clock
 * principle, whose accuracy relies on computing small time intervals. If a new
 * clock estimate has been passed by the synchronisation daemon, make it
 * current, and compute the linear interpolation for monotonic time if needed.
 */
static void
ffclock_windup(unsigned int delta)
{
	struct ffclock_estimate *cest;
	struct fftimehands *ffth;
	struct bintime bt, gap_lerp;
	ffcounter ffdelta;
	uint64_t frac;
	unsigned int polling;
	uint8_t forward_jump, ogen;

	/*
	 * Pick the next timehand, copy current ffclock estimates and move tick
	 * times and counter forward.
	 */
	forward_jump = 0;
	ffth = fftimehands->next;
	ogen = ffth->gen;
	ffth->gen = 0;
	cest = &ffth->cest;
	bcopy(&fftimehands->cest, cest, sizeof(struct ffclock_estimate));
	ffdelta = (ffcounter)delta;
	ffth->period_lerp = fftimehands->period_lerp;

	ffth->tick_time = fftimehands->tick_time;
	ffclock_convert_delta(ffdelta, cest->period, &bt);
	bintime_add(&ffth->tick_time, &bt);

	ffth->tick_time_lerp = fftimehands->tick_time_lerp;
	ffclock_convert_delta(ffdelta, ffth->period_lerp, &bt);
	bintime_add(&ffth->tick_time_lerp, &bt);

	ffth->tick_ffcount = fftimehands->tick_ffcount + ffdelta;

	/*
	 * Assess the status of the clock, if the last update is too old, it is
	 * likely the synchronisation daemon is dead and the clock is free
	 * running.
	 */
	if (ffclock_updated == 0) {
		ffdelta = ffth->tick_ffcount - cest->update_ffcount;
		ffclock_convert_delta(ffdelta, cest->period, &bt);
		if (bt.sec > 2 * FFCLOCK_SKM_SCALE)
			ffclock_status |= FFCLOCK_STA_UNSYNC;
	}

	/*
	 * If available, grab updated clock estimates and make them current.
	 * Recompute time at this tick using the updated estimates. The clock
	 * estimates passed by the feed-forward synchronisation daemon may
	 * result in time conversion that is not monotonically increasing (just
	 * after the update). time_lerp is a particular linear interpolation
	 * over the synchronisation algo polling period that ensures
	 * monotonicity for the clock ids requesting it.
	 */
	if (ffclock_updated > 0) {
		bcopy(&ffclock_estimate, cest, sizeof(struct ffclock_estimate));
		ffdelta = ffth->tick_ffcount - cest->update_ffcount;
		ffth->tick_time = cest->update_time;
		ffclock_convert_delta(ffdelta, cest->period, &bt);
		bintime_add(&ffth->tick_time, &bt);

		/* ffclock_reset sets ffclock_updated to INT8_MAX */
		if (ffclock_updated == INT8_MAX)
			ffth->tick_time_lerp = ffth->tick_time;

		if (bintime_cmp(&ffth->tick_time, &ffth->tick_time_lerp, >))
			forward_jump = 1;
		else
			forward_jump = 0;

		bintime_clear(&gap_lerp);
		if (forward_jump) {
			gap_lerp = ffth->tick_time;
			bintime_sub(&gap_lerp, &ffth->tick_time_lerp);
		} else {
			gap_lerp = ffth->tick_time_lerp;
			bintime_sub(&gap_lerp, &ffth->tick_time);
		}

		/*
		 * The reset from the RTC clock may be far from accurate, and
		 * reducing the gap between real time and interpolated time
		 * could take a very long time if the interpolated clock insists
		 * on strict monotonicity. The clock is reset under very strict
		 * conditions (kernel time is known to be wrong and the
		 * synchronization daemon has been restarted recently).
		 * ffclock_boottime absorbs the jump to ensure boot time is
		 * correct and uptime functions stay consistent.
		 */
		if (((ffclock_status & FFCLOCK_STA_UNSYNC) == FFCLOCK_STA_UNSYNC) &&
		    ((cest->status & FFCLOCK_STA_UNSYNC) == 0) &&
		    ((cest->status & FFCLOCK_STA_WARMUP) == FFCLOCK_STA_WARMUP)) {
			if (forward_jump)
				bintime_add(&ffclock_boottime, &gap_lerp);
			else
				bintime_sub(&ffclock_boottime, &gap_lerp);
			ffth->tick_time_lerp = ffth->tick_time;
			bintime_clear(&gap_lerp);
		}

		ffclock_status = cest->status;
		ffth->period_lerp = cest->period;

		/*
		 * Compute corrected period used for the linear interpolation of
		 * time. The rate of linear interpolation is capped to 5000PPM
		 * (5ms/s).
		 */
		if (bintime_isset(&gap_lerp)) {
			ffdelta = cest->update_ffcount;
			ffdelta -= fftimehands->cest.update_ffcount;
			ffclock_convert_delta(ffdelta, cest->period, &bt);
			polling = bt.sec;
			bt.sec = 0;
			bt.frac = 5000000 * (uint64_t)18446744073LL;
			bintime_mul(&bt, polling);
			if (bintime_cmp(&gap_lerp, &bt, >))
				gap_lerp = bt;

			/* Approximate 1 sec by 1-(1/2^64) to ease arithmetic */
			frac = 0;
			if (gap_lerp.sec > 0) {
				frac -= 1;
				frac /= ffdelta / gap_lerp.sec;
			}
			frac += gap_lerp.frac / ffdelta;

			if (forward_jump)
				ffth->period_lerp += frac;
			else
				ffth->period_lerp -= frac;
		}

		ffclock_updated = 0;
	}
	if (++ogen == 0)
		ogen = 1;
	ffth->gen = ogen;
	fftimehands = ffth;
}

/*
 * Adjust the fftimehands when the timecounter is changed. Stating the obvious,
 * the old and new hardware counter cannot be read simultaneously. tc_windup()
 * does read the two counters 'back to back', but a few cycles are effectively
 * lost, and not accumulated in tick_ffcount. This is a fairly radical
 * operation for a feed-forward synchronization daemon, and it is the daemon's
 * job not to push irrelevant data to the kernel. Because there is no locking
 * here, simply force to ignore pending or next update to give the daemon a
 * chance to realize the counter has changed.
 */
static void
ffclock_change_tc(struct timehands *th)
{
	struct fftimehands *ffth;
	struct ffclock_estimate *cest;
	struct timecounter *tc;
	uint8_t ogen;

	tc = th->th_counter;
	ffth = fftimehands->next;
	ogen = ffth->gen;
	ffth->gen = 0;

	cest = &ffth->cest;
	bcopy(&(fftimehands->cest), cest, sizeof(struct ffclock_estimate));
	cest->period = ((1ULL << 63) / tc->tc_frequency) << 1;
	cest->errb_abs = 0;
	cest->errb_rate = 0;
	cest->status |= FFCLOCK_STA_UNSYNC;

	ffth->tick_ffcount = fftimehands->tick_ffcount;
	ffth->tick_time_lerp = fftimehands->tick_time_lerp;
	ffth->tick_time = fftimehands->tick_time;
	ffth->period_lerp = cest->period;

	/* Do not lock but ignore next update from synchronization daemon. */
	ffclock_updated--;

	if (++ogen == 0)
		ogen = 1;
	ffth->gen = ogen;
	fftimehands = ffth;
}

/*
 * Retrieve feed-forward counter and time of last kernel tick.
 */
void
ffclock_last_tick(ffcounter *ffcount, struct bintime *bt, uint32_t flags)
{
	struct fftimehands *ffth;
	uint8_t gen;

	/*
	 * No locking but check generation has not changed. Also need to make
	 * sure ffdelta is positive, i.e. ffcount > tick_ffcount.
	 */
	do {
		ffth = fftimehands;
		gen = ffth->gen;
		if ((flags & FFCLOCK_LERP) == FFCLOCK_LERP)
			*bt = ffth->tick_time_lerp;
		else
			*bt = ffth->tick_time;
		*ffcount = ffth->tick_ffcount;
	} while (gen == 0 || gen != ffth->gen);
}

/*
 * Absolute clock conversion. Low level function to convert ffcounter to
 * bintime. The ffcounter is converted using the current ffclock period estimate
 * or the "interpolated period" to ensure monotonicity.
 * NOTE: this conversion may have been deferred, and the clock updated since the
 * hardware counter has been read.
 */
void
ffclock_convert_abs(ffcounter ffcount, struct bintime *bt, uint32_t flags)
{
	struct fftimehands *ffth;
	struct bintime bt2;
	ffcounter ffdelta;
	uint8_t gen;

	/*
	 * No locking but check generation has not changed. Also need to make
	 * sure ffdelta is positive, i.e. ffcount > tick_ffcount.
	 */
	do {
		ffth = fftimehands;
		gen = ffth->gen;
		if (ffcount > ffth->tick_ffcount)
			ffdelta = ffcount - ffth->tick_ffcount;
		else
			ffdelta = ffth->tick_ffcount - ffcount;

		if ((flags & FFCLOCK_LERP) == FFCLOCK_LERP) {
			*bt = ffth->tick_time_lerp;
			ffclock_convert_delta(ffdelta, ffth->period_lerp, &bt2);
		} else {
			*bt = ffth->tick_time;
			ffclock_convert_delta(ffdelta, ffth->cest.period, &bt2);
		}

		if (ffcount > ffth->tick_ffcount)
			bintime_add(bt, &bt2);
		else
			bintime_sub(bt, &bt2);
	} while (gen == 0 || gen != ffth->gen);
}

/*
 * Difference clock conversion.
 * Low level function to convert a time interval measured in RAW counter units
 * into bintime. The difference clock allows measuring small intervals much more
 * reliably than the absolute clock.
 */
void
ffclock_convert_diff(ffcounter ffdelta, struct bintime *bt)
{
	struct fftimehands *ffth;
	uint8_t gen;

	/* No locking but check generation has not changed. */
	do {
		ffth = fftimehands;
		gen = ffth->gen;
		ffclock_convert_delta(ffdelta, ffth->cest.period, bt);
	} while (gen == 0 || gen != ffth->gen);
}

/*
 * Access to current ffcounter value.
 */
void
ffclock_read_counter(ffcounter *ffcount)
{
	struct timehands *th;
	struct fftimehands *ffth;
	unsigned int gen, delta;

	/*
	 * ffclock_windup() called from tc_windup(), safe to rely on
	 * th->th_generation only, for correct delta and ffcounter.
	 */
	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		ffth = fftimehands;
		delta = tc_delta(th);
		*ffcount = ffth->tick_ffcount;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);

	*ffcount += delta;
}

void
binuptime(struct bintime *bt)
{

	binuptime_fromclock(bt, sysclock_active);
}

void
nanouptime(struct timespec *tsp)
{

	nanouptime_fromclock(tsp, sysclock_active);
}

void
microuptime(struct timeval *tvp)
{

	microuptime_fromclock(tvp, sysclock_active);
}

void
bintime(struct bintime *bt)
{

	bintime_fromclock(bt, sysclock_active);
}

void
nanotime(struct timespec *tsp)
{

	nanotime_fromclock(tsp, sysclock_active);
}

void
microtime(struct timeval *tvp)
{

	microtime_fromclock(tvp, sysclock_active);
}

void
getbinuptime(struct bintime *bt)
{

	getbinuptime_fromclock(bt, sysclock_active);
}

void
getnanouptime(struct timespec *tsp)
{

	getnanouptime_fromclock(tsp, sysclock_active);
}

void
getmicrouptime(struct timeval *tvp)
{

	getmicrouptime_fromclock(tvp, sysclock_active);
}

void
getbintime(struct bintime *bt)
{

	getbintime_fromclock(bt, sysclock_active);
}

void
getnanotime(struct timespec *tsp)
{

	getnanotime_fromclock(tsp, sysclock_active);
}

void
getmicrotime(struct timeval *tvp)
{

	getmicrotime_fromclock(tvp, sysclock_active);
}

#endif /* FFCLOCK */

/*
 * This is a clone of getnanotime and used for walltimestamps.
 * The dtrace_ prefix prevents fbt from creating probes for
 * it so walltimestamp can be safely used in all fbt probes.
 */
void
dtrace_getnanotime(struct timespec *tsp)
{
	struct timehands *th;
	u_int gen;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		*tsp = th->th_nanotime;
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);
}

/*
 * System clock currently providing time to the system. Modifiable via sysctl
 * when the FFCLOCK option is defined.
 */
int sysclock_active = SYSCLOCK_FBCK;
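
/*
 * SYSCLOCK_FBCK selects the traditional feedback-disciplined clock (e.g.
 * driven by ntpd); SYSCLOCK_FFWD selects the feed-forward clock maintained
 * above.  The unified binuptime()/nanotime()/... wrappers defined under
 * FFCLOCK dispatch through the *_fromclock() helpers based on this value,
 * so switching it changes the time source for every consumer at once.
 */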

/* Internal NTP status and error estimates. */
extern int time_status;
extern long time_esterror;

/*
 * Take a snapshot of sysclock data which can be used to compare system clocks
 * and generate timestamps after the fact.
 */
void
sysclock_getsnapshot(struct sysclock_snap *clock_snap, int fast)
{
	struct fbclock_info *fbi;
	struct timehands *th;
	struct bintime bt;
	unsigned int delta, gen;
#ifdef FFCLOCK
	ffcounter ffcount;
	struct fftimehands *ffth;
	struct ffclock_info *ffi;
	struct ffclock_estimate cest;

	ffi = &clock_snap->ff_info;
#endif

	fbi = &clock_snap->fb_info;
	delta = 0;

	do {
		th = timehands;
		gen = atomic_load_acq_int(&th->th_generation);
		fbi->th_scale = th->th_scale;
		fbi->tick_time = th->th_offset;
#ifdef FFCLOCK
		ffth = fftimehands;
		ffi->tick_time = ffth->tick_time_lerp;
		ffi->tick_time_lerp = ffth->tick_time_lerp;
		ffi->period = ffth->cest.period;
		ffi->period_lerp = ffth->period_lerp;
		clock_snap->ffcount = ffth->tick_ffcount;
		cest = ffth->cest;
#endif
		if (!fast)
			delta = tc_delta(th);
		atomic_thread_fence_acq();
	} while (gen == 0 || gen != th->th_generation);

	clock_snap->delta = delta;
	clock_snap->sysclock_active = sysclock_active;

	/* Record feedback clock status and error. */
	clock_snap->fb_info.status = time_status;
	/* XXX: Very crude estimate of feedback clock error. */
	bt.sec = time_esterror / 1000000;
	bt.frac = ((time_esterror - bt.sec) * 1000000) *
	    (uint64_t)18446744073709ULL;
	clock_snap->fb_info.error = bt;

#ifdef FFCLOCK
	if (!fast)
		clock_snap->ffcount += delta;

	/* Record feed-forward clock leap second adjustment. */
	ffi->leapsec_adjustment = cest.leapsec_total;
	if (clock_snap->ffcount > cest.leapsec_next)
		ffi->leapsec_adjustment -= cest.leapsec;

	/* Record feed-forward clock status and error. */
	clock_snap->ff_info.status = cest.status;
	ffcount = clock_snap->ffcount - cest.update_ffcount;
	ffclock_convert_delta(ffcount, cest.period, &bt);
	/* 18446744073709 = int(2^64/1e12), err_bound_rate in [ps/s]. */
	bintime_mul(&bt, cest.errb_rate * (uint64_t)18446744073709ULL);
	/* 18446744073 = int(2^64 / 1e9), since err_abs in [ns]. */
	bintime_addx(&bt, cest.errb_abs * (uint64_t)18446744073ULL);
	clock_snap->ff_info.error = bt;
#endif
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Convert a sysclock snapshot into a struct bintime based on the specified
|
|
|
|
* clock source and flags.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
sysclock_snap2bintime(struct sysclock_snap *cs, struct bintime *bt,
|
|
|
|
int whichclock, uint32_t flags)
|
|
|
|
{
|
2016-07-27 11:08:59 +00:00
|
|
|
struct bintime boottimebin;
|
2011-12-24 01:32:01 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
struct bintime bt2;
|
|
|
|
uint64_t period;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
switch (whichclock) {
|
|
|
|
case SYSCLOCK_FBCK:
|
|
|
|
*bt = cs->fb_info.tick_time;
|
|
|
|
|
|
|
|
/* If snapshot was created with !fast, delta will be >0. */
|
|
|
|
if (cs->delta > 0)
|
|
|
|
bintime_addx(bt, cs->fb_info.th_scale * cs->delta);
|
|
|
|
|
2016-07-27 11:08:59 +00:00
|
|
|
if ((flags & FBCLOCK_UPTIME) == 0) {
|
|
|
|
getboottimebin(&boottimebin);
|
2011-12-24 01:32:01 +00:00
|
|
|
bintime_add(bt, &boottimebin);
|
2016-07-27 11:08:59 +00:00
|
|
|
}
|
2011-12-24 01:32:01 +00:00
|
|
|
break;
|
|
|
|
#ifdef FFCLOCK
|
|
|
|
case SYSCLOCK_FFWD:
|
|
|
|
if (flags & FFCLOCK_LERP) {
|
|
|
|
*bt = cs->ff_info.tick_time_lerp;
|
|
|
|
period = cs->ff_info.period_lerp;
|
|
|
|
} else {
|
|
|
|
*bt = cs->ff_info.tick_time;
|
|
|
|
period = cs->ff_info.period;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If snapshot was created with !fast, delta will be >0. */
|
|
|
|
if (cs->delta > 0) {
|
|
|
|
ffclock_convert_delta(cs->delta, period, &bt2);
|
|
|
|
bintime_add(bt, &bt2);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Leap second adjustment. */
|
|
|
|
if (flags & FFCLOCK_LEAPSEC)
|
|
|
|
bt->sec -= cs->ff_info.leapsec_adjustment;
|
|
|
|
|
|
|
|
/* Boot time adjustment, for uptime/monotonic clocks. */
|
|
|
|
if (flags & FFCLOCK_UPTIME)
|
|
|
|
bintime_sub(bt, &ffclock_boottime);
|
2012-02-10 06:30:52 +00:00
|
|
|
break;
|
2011-12-24 01:32:01 +00:00
|
|
|
#endif
|
|
|
|
default:
|
|
|
|
return (EINVAL);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
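/*
 * Illustrative sketch (not part of the original file): the intended
 * pairing of sysclock_getsnapshot() and sysclock_snap2bintime().  A
 * caller timestamps an event once, cheaply, and can later render the
 * same instant as several clock flavours.  The helper name
 * example_stamp_event() is hypothetical.
 *
 *	static void
 *	example_stamp_event(struct bintime *utc, struct bintime *uptime)
 *	{
 *		struct sysclock_snap snap;
 *
 *		sysclock_getsnapshot(&snap, 0);
 *		(void)sysclock_snap2bintime(&snap, utc, SYSCLOCK_FBCK, 0);
 *		(void)sysclock_snap2bintime(&snap, uptime, SYSCLOCK_FBCK,
 *		    FBCLOCK_UPTIME);
 *	}
 *
 * With FFCLOCK compiled in, the same snapshot can also be converted with
 * SYSCLOCK_FFWD and the FFCLOCK_LERP/FFCLOCK_LEAPSEC/FFCLOCK_UPTIME flags
 * handled above, which is what makes after-the-fact comparison of the two
 * system clocks possible.
 */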
|
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2003-08-16 08:23:53 +00:00
|
|
|
* Initialize a new timecounter and possibly use it.
|
2002-04-28 18:24:21 +00:00
|
|
|
*/
|
1998-02-20 16:36:17 +00:00
|
|
|
void
|
2000-03-20 14:09:06 +00:00
|
|
|
tc_init(struct timecounter *tc)
|
1998-02-20 16:36:17 +00:00
|
|
|
{
|
2003-11-13 10:03:58 +00:00
|
|
|
u_int u;
|
2006-06-16 20:29:05 +00:00
|
|
|
struct sysctl_oid *tc_root;
|
1998-10-23 10:44:52 +00:00
|
|
|
|
2003-09-03 08:14:16 +00:00
|
|
|
u = tc->tc_frequency / tc->tc_counter_mask;
|
2003-11-13 10:03:58 +00:00
|
|
|
/* XXX: We need some margin here, 10% is a guess */
|
|
|
|
u *= 11;
|
|
|
|
u /= 10;
|
2003-09-03 08:14:16 +00:00
|
|
|
if (u > hz && tc->tc_quality >= 0) {
|
|
|
|
tc->tc_quality = -2000;
|
|
|
|
if (bootverbose) {
|
|
|
|
printf("Timecounter \"%s\" frequency %ju Hz",
|
2003-11-13 10:03:58 +00:00
|
|
|
tc->tc_name, (uintmax_t)tc->tc_frequency);
|
2003-09-03 08:14:16 +00:00
|
|
|
printf(" -- Insufficient hz, needs at least %u\n", u);
|
|
|
|
}
|
|
|
|
} else if (tc->tc_quality >= 0 || bootverbose) {
|
2003-11-13 10:03:58 +00:00
|
|
|
printf("Timecounter \"%s\" frequency %ju Hz quality %d\n",
|
|
|
|
tc->tc_name, (uintmax_t)tc->tc_frequency,
|
2003-08-16 08:23:53 +00:00
|
|
|
tc->tc_quality);
|
2002-09-04 19:32:18 +00:00
|
|
|
}
|
2003-09-03 08:14:16 +00:00
|
|
|
|
2002-04-26 21:51:08 +00:00
|
|
|
tc->tc_next = timecounters;
|
|
|
|
timecounters = tc;
|
2006-06-16 20:29:05 +00:00
|
|
|
/*
|
|
|
|
* Set up sysctl tree for this counter.
|
|
|
|
*/
|
2016-12-14 12:56:58 +00:00
|
|
|
tc_root = SYSCTL_ADD_NODE_WITH_LABEL(NULL,
|
2006-06-16 20:29:05 +00:00
|
|
|
SYSCTL_STATIC_CHILDREN(_kern_timecounter_tc), OID_AUTO, tc->tc_name,
|
2016-12-14 12:56:58 +00:00
|
|
|
CTLFLAG_RW, 0, "timecounter description", "timecounter");
|
2006-06-16 20:29:05 +00:00
|
|
|
SYSCTL_ADD_UINT(NULL, SYSCTL_CHILDREN(tc_root), OID_AUTO,
|
|
|
|
"mask", CTLFLAG_RD, &(tc->tc_counter_mask), 0,
|
|
|
|
"mask for implemented bits");
|
|
|
|
SYSCTL_ADD_PROC(NULL, SYSCTL_CHILDREN(tc_root), OID_AUTO,
|
|
|
|
"counter", CTLTYPE_UINT | CTLFLAG_RD, tc, sizeof(*tc),
|
|
|
|
sysctl_kern_timecounter_get, "IU", "current timecounter value");
|
|
|
|
SYSCTL_ADD_PROC(NULL, SYSCTL_CHILDREN(tc_root), OID_AUTO,
|
2011-01-19 23:00:25 +00:00
|
|
|
"frequency", CTLTYPE_U64 | CTLFLAG_RD, tc, sizeof(*tc),
|
2007-06-04 18:25:08 +00:00
|
|
|
sysctl_kern_timecounter_freq, "QU", "timecounter frequency");
|
2006-06-16 20:29:05 +00:00
|
|
|
SYSCTL_ADD_INT(NULL, SYSCTL_CHILDREN(tc_root), OID_AUTO,
|
|
|
|
"quality", CTLFLAG_RD, &(tc->tc_quality), 0,
|
|
|
|
"goodness of time counter");
|
2003-11-13 10:03:58 +00:00
|
|
|
/*
|
2015-08-12 20:50:20 +00:00
|
|
|
* Do not automatically switch if the current tc was specifically
|
|
|
|
* chosen. Never automatically use a timecounter with negative quality.
|
2003-11-13 10:03:58 +00:00
|
|
|
* Even though we run on the dummy counter, switching here may be
|
2015-08-12 20:50:20 +00:00
|
|
|
* worse since this timecounter may not be monotonic.
|
2003-11-13 10:03:58 +00:00
|
|
|
*/
|
2015-08-12 20:50:20 +00:00
|
|
|
if (tc_chosen)
|
|
|
|
return;
|
2003-08-16 08:23:53 +00:00
|
|
|
if (tc->tc_quality < 0)
|
|
|
|
return;
|
|
|
|
if (tc->tc_quality < timecounter->tc_quality)
|
|
|
|
return;
|
2003-11-13 10:03:58 +00:00
|
|
|
if (tc->tc_quality == timecounter->tc_quality &&
|
|
|
|
tc->tc_frequency < timecounter->tc_frequency)
|
|
|
|
return;
|
|
|
|
(void)tc->tc_get_timecount(tc);
|
|
|
|
(void)tc->tc_get_timecount(tc);
|
1998-02-20 16:36:17 +00:00
|
|
|
timecounter = tc;
|
2002-04-26 21:51:08 +00:00
|
|
|
}
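/*
 * Illustrative sketch (not from the original sources): the shape of a
 * hardware driver registering a counter with tc_init().  All example_*
 * names are hypothetical; real drivers follow the same pattern with their
 * own read routine, mask, frequency and quality.
 *
 *	static u_int
 *	example_get_timecount(struct timecounter *tc)
 *	{
 *
 *		return (example_read_hardware_counter());
 *	}
 *
 *	static struct timecounter example_timecounter = {
 *		.tc_get_timecount = example_get_timecount,
 *		.tc_counter_mask = ~0u,
 *		.tc_frequency = 0,
 *		.tc_name = "EXAMPLE",
 *		.tc_quality = 900,
 *	};
 *
 *	example_timecounter.tc_frequency = example_measured_frequency;
 *	tc_init(&example_timecounter);
 *
 * tc_init() links the counter into the list, creates its sysctl nodes
 * and, unless the administrator already pinned a counter (tc_chosen) or
 * the quality is negative or worse than the current one, makes it the
 * active timecounter.
 */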
|
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/* Report the frequency of the current timecounter. */
|
2010-06-21 09:55:56 +00:00
|
|
|
uint64_t
|
2002-04-26 21:51:08 +00:00
|
|
|
tc_getfrequency(void)
|
|
|
|
{
|
|
|
|
|
2002-04-28 18:24:21 +00:00
|
|
|
return (timehands->th_counter->tc_frequency);
|
1998-02-20 16:36:17 +00:00
|
|
|
}
|
|
|
|
|
When the RTC is adjusted, reevaluate absolute sleep times based on the RTC
POSIX 2008 says this about clock_settime(2):
If the value of the CLOCK_REALTIME clock is set via clock_settime(),
the new value of the clock shall be used to determine the time
of expiration for absolute time services based upon the
CLOCK_REALTIME clock. This applies to the time at which armed
absolute timers expire. If the absolute time requested at the
invocation of such a time service is before the new value of
the clock, the time service shall expire immediately as if the
clock had reached the requested time normally.
Setting the value of the CLOCK_REALTIME clock via clock_settime()
shall have no effect on threads that are blocked waiting for
a relative time service based upon this clock, including the
nanosleep() function; nor on the expiration of relative timers
based upon this clock. Consequently, these time services shall
expire when the requested relative interval elapses, independently
of the new or old value of the clock.
When the real-time clock is adjusted, such as by clock_settime(3),
wake any threads sleeping until an absolute real-clock time.
Such a sleep is indicated by a non-zero td_rtcgen. The sleep functions
will set that field to zero and return zero to tell the caller
to reevaluate its sleep duration based on the new value of the clock.
At present, this affects the following functions:
pthread_cond_timedwait(3)
pthread_mutex_timedlock(3)
pthread_rwlock_timedrdlock(3)
pthread_rwlock_timedwrlock(3)
sem_timedwait(3)
sem_clockwait_np(3)
I'm working on adding clock_nanosleep(2), which will also be affected.
Reported by: Sebastian Huber <sebastian.huber@embedded-brains.de>
Reviewed by: jhb, kib
MFC after: 2 weeks
Relnotes: yes
Sponsored by: Dell EMC
Differential Revision: https://reviews.freebsd.org/D9791
2017-03-14 19:06:44 +00:00
|
|
|
static bool
|
|
|
|
sleeping_on_old_rtc(struct thread *td)
|
|
|
|
{
|
|
|
|
|
2017-03-14 22:02:02 +00:00
|
|
|
/*
|
|
|
|
* td_rtcgen is modified by curthread when it is running,
|
|
|
|
* and by other threads in this function. By finding the thread
|
|
|
|
* on a sleepqueue and holding the lock on the sleepqueue
|
|
|
|
* chain, we guarantee that the thread is not running and that
|
|
|
|
* modifying td_rtcgen is safe. Setting td_rtcgen to zero informs
|
|
|
|
* the thread that it was woken due to a real-time clock adjustment.
|
|
|
|
* (The declaration of td_rtcgen refers to this comment.)
|
|
|
|
*/
|
2017-03-14 19:06:44 +00:00
|
|
|
if (td->td_rtcgen != 0 && td->td_rtcgen != rtc_generation) {
|
|
|
|
td->td_rtcgen = 0;
|
|
|
|
return (true);
|
|
|
|
}
|
|
|
|
return (false);
|
|
|
|
}
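/*
 * Illustrative sketch (an assumption, not code from this file): the
 * caller-side contract the commit message above describes.  A path that
 * sleeps until an absolute CLOCK_REALTIME deadline tags itself with the
 * current rtc_generation and, whenever the sleep returns zero because
 * td_rtcgen was cleared by a clock step, recomputes the remaining time
 * before sleeping again.  example_sleep() and the deadline helper are
 * hypothetical.
 *
 *	td->td_rtcgen = atomic_load_acq_int(&rtc_generation);
 *	do {
 *		timeout = example_deadline_minus_now(abstime);
 *		if (timeout <= 0)
 *			break;
 *		error = example_sleep(timeout);
 *	} while (error == 0);
 *	td->td_rtcgen = 0;
 */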
|
|
|
|
|
2016-07-27 11:49:41 +00:00
|
|
|
static struct mtx tc_setclock_mtx;
|
|
|
|
MTX_SYSINIT(tc_setclock_init, &tc_setclock_mtx, "tcsetc", MTX_SPIN);
|
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2003-06-23 20:14:08 +00:00
|
|
|
* Step our concept of UTC. This is done by modifying our estimate of
|
2004-01-21 21:05:40 +00:00
|
|
|
* when we booted.
|
2002-04-28 18:24:21 +00:00
|
|
|
*/
|
1998-02-20 16:36:17 +00:00
|
|
|
void
|
2000-03-20 14:09:06 +00:00
|
|
|
tc_setclock(struct timespec *ts)
|
1998-02-15 13:55:06 +00:00
|
|
|
{
|
2006-03-04 06:06:43 +00:00
|
|
|
struct timespec tbef, taft;
|
2004-01-21 21:05:40 +00:00
|
|
|
struct bintime bt, bt2;
|
1998-04-04 13:26:20 +00:00
|
|
|
|
2004-01-21 21:05:40 +00:00
|
|
|
timespec2bintime(ts, &bt);
|
2016-07-27 11:49:41 +00:00
|
|
|
nanotime(&tbef);
|
|
|
|
mtx_lock_spin(&tc_setclock_mtx);
|
|
|
|
cpu_tick_calibrate(1);
|
2006-03-04 06:06:43 +00:00
|
|
|
binuptime(&bt2);
|
2004-01-21 21:05:40 +00:00
|
|
|
bintime_sub(&bt, &bt2);
|
2002-04-30 20:42:06 +00:00
|
|
|
|
|
|
|
/* XXX fiddle all the little crinkly bits around the fiords... */
|
2016-07-27 11:49:41 +00:00
|
|
|
tc_windup(&bt);
|
|
|
|
mtx_unlock_spin(&tc_setclock_mtx);
|
2017-03-14 22:02:02 +00:00
|
|
|
|
2017-03-14 19:06:44 +00:00
|
|
|
/* Avoid rtc_generation == 0, since td_rtcgen == 0 is special. */
|
|
|
|
atomic_add_rel_int(&rtc_generation, 2);
|
|
|
|
sleepq_chains_remove_matching(sleeping_on_old_rtc);
|
2004-01-21 21:05:40 +00:00
|
|
|
if (timestepwarnings) {
|
2016-07-27 11:49:41 +00:00
|
|
|
nanotime(&taft);
|
2006-03-04 06:06:43 +00:00
|
|
|
log(LOG_INFO,
|
|
|
|
"Time stepped from %jd.%09ld to %jd.%09ld (%jd.%09ld)\n",
|
|
|
|
(intmax_t)tbef.tv_sec, tbef.tv_nsec,
|
|
|
|
(intmax_t)taft.tv_sec, taft.tv_nsec,
|
2004-01-22 19:50:06 +00:00
|
|
|
(intmax_t)ts->tv_sec, ts->tv_nsec);
|
2004-01-21 21:05:40 +00:00
|
|
|
}
|
1998-02-15 13:55:06 +00:00
|
|
|
}
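/*
 * Illustrative usage (not from the original sources): tc_setclock() is
 * the "step" entry point for anything that has obtained a wall-clock
 * reading, for instance the real-time clock at boot or resume.  The value
 * below is a placeholder.
 *
 *	struct timespec ts;
 *
 *	ts.tv_sec = seconds_read_from_battery_backed_rtc;
 *	ts.tv_nsec = 0;
 *	tc_setclock(&ts);
 *
 * The spin mutex serializes the step against tc_windup(), and the
 * rtc_generation bump plus sleepq_chains_remove_matching() wake any
 * thread sleeping until an absolute real-time deadline.
 */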
|
1998-02-20 16:36:17 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
|
|
|
* Initialize the next struct timehands in the ring and make
|
2002-04-28 18:24:21 +00:00
|
|
|
* it the active timehands. Along the way we might switch to a different
|
|
|
|
* timecounter and/or do seconds processing in NTP. Slightly magic.
|
|
|
|
*/
|
2002-04-26 12:37:36 +00:00
|
|
|
static void
|
2016-07-27 11:49:41 +00:00
|
|
|
tc_windup(struct bintime *new_boottimebin)
|
1998-02-20 16:36:17 +00:00
|
|
|
{
|
2002-02-07 21:21:55 +00:00
|
|
|
struct bintime bt;
|
2002-04-30 20:42:06 +00:00
|
|
|
struct timehands *th, *tho;
|
2010-06-21 09:55:56 +00:00
|
|
|
uint64_t scale;
|
2002-04-30 20:42:06 +00:00
|
|
|
u_int delta, ncount, ogen;
|
|
|
|
int i;
|
Fix leap second processing by the kernel time keeping routines.
Before, we would add/subtract the leap second when the system had been
up for an even multiple of days, rather than at the end of the day, as
a leap second is defined (at least wrt ntp). We do this by
calculating the notion of UTC earlier in the loop, and passing that to
get it adjusted. Any adjustments that ntp_update_second makes to this
time are then transferred to boot time. We can't pass it either the
boot time or the uptime because their sum is what determines when a
leap second is needed. This code adds an extra assignment and two
extra compares in the typical case, which is as cheap as I could make
it.
I have confirmed with this code the kernel time does the correct thing
for both positive and negative leap seconds. Since the ntp interface
doesn't allow for +2 or -2, those cases can't be tested (and the folks
in the know here say there will never be a +2s or -2s leap event, but
rather two +1s or -1s leap events).
There will very likely be no leap seconds for a while, given how the
earth is speeding up and slowing down, so there will be plenty of time
for this fix to propagate. UT1-UTC is currently at "about -0.4s" and
decrementing by .1s every 8 months or so. 6 * 8 is 48 months, or 4
years.
-stable has different code, but a similar bug that was introduced
about the time of the last leap second, which is why nobody has
noticed until now.
MFC After: 3 weeks
Reviewed by: phk
"Furthermore, leap seconds must die." -- Cato the Elder
2003-06-25 21:23:51 +00:00
|
|
|
time_t t;
|
1998-02-20 16:36:17 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2015-07-08 18:42:08 +00:00
|
|
|
* Make the next timehands a copy of the current one, but do
|
|
|
|
* not overwrite the generation or next pointer. While we
|
|
|
|
* update the contents, the generation must be zero. We need
|
|
|
|
* to ensure that the zero generation is visible before the
|
|
|
|
* data updates become visible, which requires a release fence.
|
|
|
|
* For similar reasons, re-reading of the generation after the
|
|
|
|
* data is read should use an acquire fence.
|
2002-04-28 18:24:21 +00:00
|
|
|
*/
|
|
|
|
tho = timehands;
|
|
|
|
th = tho->th_next;
|
|
|
|
ogen = th->th_generation;
|
2015-07-08 18:42:08 +00:00
|
|
|
th->th_generation = 0;
|
|
|
|
atomic_thread_fence_rel();
|
2018-05-04 22:48:10 +00:00
|
|
|
memcpy(th, tho, offsetof(struct timehands, th_generation));
|
2016-07-27 11:49:41 +00:00
|
|
|
if (new_boottimebin != NULL)
|
|
|
|
th->th_boottime = *new_boottimebin;
|
2002-04-28 18:24:21 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2002-04-28 18:24:21 +00:00
|
|
|
* Capture a timecounter delta on the current timecounter and if
|
|
|
|
* changing timecounters, a counter value from the new timecounter.
|
|
|
|
* Update the offset fields accordingly.
|
|
|
|
*/
|
|
|
|
delta = tc_delta(th);
|
|
|
|
if (th->th_counter != timecounter)
|
2002-04-26 21:51:08 +00:00
|
|
|
ncount = timecounter->tc_get_timecount(timecounter);
|
2002-04-30 20:42:06 +00:00
|
|
|
else
|
|
|
|
ncount = 0;
|
2011-11-19 14:10:16 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
ffclock_windup(delta);
|
|
|
|
#endif
|
2002-04-28 18:24:21 +00:00
|
|
|
th->th_offset_count += delta;
|
|
|
|
th->th_offset_count &= th->th_counter->tc_counter_mask;
|
2010-11-22 09:13:25 +00:00
|
|
|
while (delta > th->th_counter->tc_frequency) {
|
|
|
|
/* Eat complete unadjusted seconds. */
|
|
|
|
delta -= th->th_counter->tc_frequency;
|
|
|
|
th->th_offset.sec++;
|
|
|
|
}
|
|
|
|
if ((delta > th->th_counter->tc_frequency / 2) &&
|
2010-11-23 04:50:01 +00:00
|
|
|
(th->th_scale * delta < ((uint64_t)1 << 63))) {
|
2010-11-22 09:13:25 +00:00
|
|
|
/* The product th_scale * delta just barely overflows. */
|
|
|
|
th->th_offset.sec++;
|
|
|
|
}
|
2002-04-28 18:24:21 +00:00
|
|
|
bintime_addx(&th->th_offset, th->th_scale * delta);
|
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2002-04-28 18:24:21 +00:00
|
|
|
* Hardware latching timecounters may not generate interrupts on
|
|
|
|
* PPS events, so instead we poll them. There is a finite risk that
|
|
|
|
* the hardware might capture a count which is later than the one we
|
|
|
|
* got above, and therefore possibly in the next NTP second which might
|
|
|
|
* have a different rate than the current NTP second. It doesn't
|
|
|
|
* matter in practice.
|
1998-07-04 19:12:21 +00:00
|
|
|
*/
|
2002-04-28 18:24:21 +00:00
|
|
|
if (tho->th_counter->tc_poll_pps)
|
|
|
|
tho->th_counter->tc_poll_pps(tho->th_counter);
|
|
|
|
|
2003-08-20 19:12:46 +00:00
|
|
|
/*
|
|
|
|
* Deal with NTP second processing. The for loop normally
|
|
|
|
* iterates at most once, but in extreme situations it might
|
|
|
|
* keep NTP sane if timeouts are not run for several seconds.
|
|
|
|
* At boot, the time step can be large when the TOD hardware
|
|
|
|
* has been read, so on really large steps, we call
|
|
|
|
* ntp_update_second only twice. We need to call it twice in
|
|
|
|
* case we missed a leap second.
|
2003-06-25 21:23:51 +00:00
|
|
|
*/
|
|
|
|
bt = th->th_offset;
|
2016-07-27 11:49:41 +00:00
|
|
|
bintime_add(&bt, &th->th_boottime);
|
2003-08-20 05:34:27 +00:00
|
|
|
i = bt.sec - tho->th_microtime.tv_sec;
|
|
|
|
if (i > LARGE_STEP)
|
|
|
|
i = 2;
|
|
|
|
for (; i > 0; i--) {
|
2003-06-25 21:23:51 +00:00
|
|
|
t = bt.sec;
|
|
|
|
ntp_update_second(&th->th_adjustment, &bt.sec);
|
|
|
|
if (bt.sec != t)
|
2016-07-27 11:49:41 +00:00
|
|
|
th->th_boottime.sec += bt.sec - t;
|
2003-06-25 21:23:51 +00:00
|
|
|
}
|
2003-08-20 19:12:46 +00:00
|
|
|
/* Update the UTC timestamps used by the get*() functions. */
|
2017-10-11 11:03:11 +00:00
|
|
|
th->th_bintime = bt;
|
2003-08-20 19:12:46 +00:00
|
|
|
bintime2timeval(&bt, &th->th_microtime);
|
|
|
|
bintime2timespec(&bt, &th->th_nanotime);
|
2002-04-28 18:24:21 +00:00
|
|
|
|
|
|
|
/* Now is a good time to change timecounters. */
|
|
|
|
if (th->th_counter != timecounter) {
|
2011-07-14 21:00:26 +00:00
|
|
|
#ifndef __arm__
|
2015-01-05 20:44:44 +00:00
|
|
|
if ((timecounter->tc_flags & TC_FLAGS_C2STOP) != 0)
|
|
|
|
cpu_disable_c2_sleep++;
|
|
|
|
if ((th->th_counter->tc_flags & TC_FLAGS_C2STOP) != 0)
|
|
|
|
cpu_disable_c2_sleep--;
|
2011-07-14 21:00:26 +00:00
|
|
|
#endif
|
2002-04-28 18:24:21 +00:00
|
|
|
th->th_counter = timecounter;
|
|
|
|
th->th_offset_count = ncount;
|
2010-09-14 08:48:06 +00:00
|
|
|
tc_min_ticktock_freq = max(1, timecounter->tc_frequency /
|
|
|
|
(((uint64_t)timecounter->tc_counter_mask + 1) / 3));
|
2011-11-19 14:10:16 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
ffclock_change_tc(th);
|
|
|
|
#endif
|
2002-04-27 07:28:54 +00:00
|
|
|
}
|
1998-03-16 10:19:12 +00:00
|
|
|
|
2010-07-18 20:57:53 +00:00
|
|
|
/*-
|
2002-04-28 18:24:21 +00:00
|
|
|
* Recalculate the scaling factor. We want the number of 1/2^64
|
|
|
|
* fractions of a second per period of the hardware counter, taking
|
|
|
|
* into account the th_adjustment factor which the NTP PLL/adjtime(2)
|
|
|
|
* processing provides us with.
|
|
|
|
*
|
|
|
|
* The th_adjustment is nanoseconds per second with 32 bit binary
|
2003-07-02 08:01:52 +00:00
|
|
|
* fraction and we want 64 bit binary fraction of second:
|
2002-04-28 18:24:21 +00:00
|
|
|
*
|
|
|
|
* x = a * 2^32 / 10^9 = a * 4.294967296
|
|
|
|
*
|
|
|
|
* The range of th_adjustment is +/- 5000PPM so inside a 64bit int
|
2006-02-11 09:33:07 +00:00
|
|
|
* we can only multiply by about 850 without overflowing; that
|
|
|
|
* leaves no suitably precise fractions for multiply before divide.
|
2002-04-28 18:24:21 +00:00
|
|
|
*
|
|
|
|
* Divide before multiply with a fraction of 2199/512 results in a
|
|
|
|
* systematic undercompensation of 10PPM of th_adjustment. On a
|
|
|
|
* 5000PPM adjustment this is a 0.05PPM error. This is acceptable.
|
|
|
|
*
|
|
|
|
* We happily sacrifice the lowest of the 64 bits of our result
|
|
|
|
* to the goddess of code clarity.
|
2002-04-30 20:42:06 +00:00
|
|
|
*
|
2002-04-28 18:24:21 +00:00
|
|
|
*/
|
2010-06-21 09:55:56 +00:00
|
|
|
scale = (uint64_t)1 << 63;
|
2002-04-28 18:24:21 +00:00
|
|
|
scale += (th->th_adjustment / 1024) * 2199;
|
|
|
|
scale /= th->th_counter->tc_frequency;
|
|
|
|
th->th_scale = scale * 2;
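/*
 * Worked numbers for the comment above (added for clarity): the exact
 * factor is 2^32 / 10^9 = 4.294967296.  After the final "scale * 2" the
 * code has effectively added th_adjustment * 2199 / 512 =
 * th_adjustment * 4.294921875.  The shortfall,
 * 1 - 4.294921875 / 4.294967296 ~= 1.06e-5, is the ~10PPM systematic
 * undercompensation mentioned above; at the maximum 5000PPM adjustment
 * that is roughly 0.05PPM of absolute frequency error.
 */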
|
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
|
|
|
* Now that the struct timehands is again consistent, set the new
|
2002-04-28 18:24:21 +00:00
|
|
|
* generation number, making sure to not make it zero.
|
|
|
|
*/
|
|
|
|
if (++ogen == 0)
|
2002-04-30 20:42:06 +00:00
|
|
|
ogen = 1;
|
2015-07-08 18:42:08 +00:00
|
|
|
atomic_store_rel_int(&th->th_generation, ogen);
|
2002-04-28 18:24:21 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/* Go live with the new struct timehands. */
|
2011-11-20 05:32:12 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
switch (sysclock_active) {
|
|
|
|
case SYSCLOCK_FBCK:
|
|
|
|
#endif
|
|
|
|
time_second = th->th_microtime.tv_sec;
|
|
|
|
time_uptime = th->th_offset.sec;
|
|
|
|
#ifdef FFCLOCK
|
|
|
|
break;
|
|
|
|
case SYSCLOCK_FFWD:
|
|
|
|
time_second = fftimehands->tick_time_lerp.sec;
|
|
|
|
time_uptime = fftimehands->tick_time_lerp.sec - ffclock_boottime.sec;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2002-04-28 18:24:21 +00:00
|
|
|
timehands = th;
|
2012-06-23 09:33:06 +00:00
|
|
|
timekeep_push_vdso();
|
1998-02-20 16:36:17 +00:00
|
|
|
}
|
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/* Report or change the active timecounter hardware. */
|
1999-07-18 15:07:20 +00:00
|
|
|
static int
|
2000-07-04 11:25:35 +00:00
|
|
|
sysctl_kern_timecounter_hardware(SYSCTL_HANDLER_ARGS)
|
1999-07-18 15:07:20 +00:00
|
|
|
{
|
|
|
|
char newname[32];
|
|
|
|
struct timecounter *newtc, *tc;
|
|
|
|
int error;
|
|
|
|
|
2002-04-26 21:51:08 +00:00
|
|
|
tc = timecounter;
|
2002-10-17 20:03:38 +00:00
|
|
|
strlcpy(newname, tc->tc_name, sizeof(newname));
|
|
|
|
|
1999-07-18 15:07:20 +00:00
|
|
|
error = sysctl_handle_string(oidp, &newname[0], sizeof(newname), req);
|
2015-08-12 20:50:20 +00:00
|
|
|
if (error != 0 || req->newptr == NULL)
|
2002-04-30 20:42:06 +00:00
|
|
|
return (error);
|
2015-08-12 20:50:20 +00:00
|
|
|
/* Record that the tc in use now was specifically chosen. */
|
|
|
|
tc_chosen = 1;
|
|
|
|
if (strcmp(newname, tc->tc_name) == 0)
|
|
|
|
return (0);
|
2002-04-26 21:51:08 +00:00
|
|
|
for (newtc = timecounters; newtc != NULL; newtc = newtc->tc_next) {
|
2002-04-30 20:42:06 +00:00
|
|
|
if (strcmp(newname, newtc->tc_name) != 0)
|
2002-04-26 21:51:08 +00:00
|
|
|
continue;
|
2002-04-30 20:42:06 +00:00
|
|
|
|
2002-04-26 21:51:08 +00:00
|
|
|
/* Warm up new timecounter. */
|
|
|
|
(void)newtc->tc_get_timecount(newtc);
|
|
|
|
(void)newtc->tc_get_timecount(newtc);
|
2002-04-30 20:42:06 +00:00
|
|
|
|
2002-04-26 21:51:08 +00:00
|
|
|
timecounter = newtc;
|
2015-01-20 03:54:30 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The vdso timehands update is deferred until the next
|
|
|
|
* 'tc_windup()'.
|
|
|
|
*
|
|
|
|
* This is prudent given that 'timekeep_push_vdso()' does not
|
|
|
|
* use any locking and that it can be called in hard interrupt
|
|
|
|
* context via 'tc_windup()'.
|
|
|
|
*/
|
2002-04-26 21:51:08 +00:00
|
|
|
return (0);
|
1999-07-18 15:07:20 +00:00
|
|
|
}
|
2002-04-26 21:51:08 +00:00
|
|
|
return (EINVAL);
|
1999-07-18 15:07:20 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
SYSCTL_PROC(_kern_timecounter, OID_AUTO, hardware, CTLTYPE_STRING | CTLFLAG_RW,
|
2010-11-14 08:06:29 +00:00
|
|
|
0, 0, sysctl_kern_timecounter_hardware, "A",
|
|
|
|
"Timecounter hardware selected");
|
1999-07-18 15:07:20 +00:00
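/*
 * Illustrative usage (not part of the original file): the handler above
 * is reached from userland via sysctl(8), e.g.
 *
 *	sysctl kern.timecounter.hardware
 *	sysctl kern.timecounter.hardware=HPET
 *
 * The first form reports the current counter, the second switches to the
 * named one and sets tc_chosen so tc_init() stops second-guessing the
 * administrator.  "HPET" is just an example; valid names are whatever
 * tc_name strings the registered timecounters advertise.
 */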
|
|
|
|
2003-08-16 08:23:53 +00:00
|
|
|
|
2015-08-12 20:50:20 +00:00
|
|
|
/* Report the available timecounter hardware. */
|
2003-08-16 08:23:53 +00:00
|
|
|
static int
|
|
|
|
sysctl_kern_timecounter_choice(SYSCTL_HANDLER_ARGS)
|
|
|
|
{
|
2015-03-14 23:16:12 +00:00
|
|
|
struct sbuf sb;
|
2003-08-16 08:23:53 +00:00
|
|
|
struct timecounter *tc;
|
|
|
|
int error;
|
|
|
|
|
2015-03-14 23:16:12 +00:00
|
|
|
sbuf_new_for_sysctl(&sb, NULL, 0, req);
|
|
|
|
for (tc = timecounters; tc != NULL; tc = tc->tc_next) {
|
|
|
|
if (tc != timecounters)
|
|
|
|
sbuf_putc(&sb, ' ');
|
|
|
|
sbuf_printf(&sb, "%s(%d)", tc->tc_name, tc->tc_quality);
|
2003-08-16 08:23:53 +00:00
|
|
|
}
|
2015-03-14 23:16:12 +00:00
|
|
|
error = sbuf_finish(&sb);
|
|
|
|
sbuf_delete(&sb);
|
2003-08-16 08:23:53 +00:00
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
|
|
|
|
SYSCTL_PROC(_kern_timecounter, OID_AUTO, choice, CTLTYPE_STRING | CTLFLAG_RD,
|
2010-11-14 06:09:50 +00:00
|
|
|
0, 0, sysctl_kern_timecounter_choice, "A", "Timecounter hardware detected");
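/*
 * Illustrative output (an invented example, not captured from a real
 * system): the handler above produces one "name(quality)" pair per
 * registered counter, e.g.
 *
 *	$ sysctl kern.timecounter.choice
 *	kern.timecounter.choice: TSC-low(1000) HPET(950) ACPI-fast(900) i8254(0) dummy(-1000000)
 *
 * Entries with negative quality are listed here but are never selected
 * automatically by tc_init().
 */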
|
2003-08-16 08:23:53 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2002-04-28 18:24:21 +00:00
|
|
|
* RFC 2783 PPS-API implementation.
|
|
|
|
*/
|
1999-03-11 15:09:51 +00:00
|
|
|
|
2015-05-04 17:59:39 +00:00
|
|
|
/*
|
|
|
|
* Return true if the driver is aware of the abi version extensions in the
|
|
|
|
* pps_state structure, and it supports at least the given abi version number.
|
|
|
|
*/
|
|
|
|
static inline int
|
|
|
|
abi_aware(struct pps_state *pps, int vers)
|
|
|
|
{
|
|
|
|
|
|
|
|
return ((pps->kcmode & KCMODE_ABIFLAG) && pps->driver_abi >= vers);
|
|
|
|
}
|
|
|
|
|
2013-02-15 18:30:32 +00:00
|
|
|
static int
|
|
|
|
pps_fetch(struct pps_fetch_args *fapi, struct pps_state *pps)
|
|
|
|
{
|
|
|
|
int err, timo;
|
|
|
|
pps_seq_t aseq, cseq;
|
|
|
|
struct timeval tv;
|
|
|
|
|
|
|
|
if (fapi->tsformat && fapi->tsformat != PPS_TSFMT_TSPEC)
|
|
|
|
return (EINVAL);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If no timeout is requested, immediately return whatever values were
|
|
|
|
* most recently captured. If timeout seconds is -1, that's a request
|
|
|
|
* to block without a timeout. WITNESS won't let us sleep forever
|
|
|
|
* without a lock (we really don't need a lock), so just repeatedly
|
|
|
|
* sleep a long time.
|
|
|
|
*/
|
|
|
|
if (fapi->timeout.tv_sec || fapi->timeout.tv_nsec) {
|
|
|
|
if (fapi->timeout.tv_sec == -1)
|
|
|
|
timo = 0x7fffffff;
|
|
|
|
else {
|
|
|
|
tv.tv_sec = fapi->timeout.tv_sec;
|
|
|
|
tv.tv_usec = fapi->timeout.tv_nsec / 1000;
|
|
|
|
timo = tvtohz(&tv);
|
|
|
|
}
|
2017-12-19 10:05:45 +00:00
|
|
|
aseq = atomic_load_int(&pps->ppsinfo.assert_sequence);
|
|
|
|
cseq = atomic_load_int(&pps->ppsinfo.clear_sequence);
|
|
|
|
while (aseq == atomic_load_int(&pps->ppsinfo.assert_sequence) &&
|
|
|
|
cseq == atomic_load_int(&pps->ppsinfo.clear_sequence)) {
|
2015-05-04 17:59:39 +00:00
|
|
|
if (abi_aware(pps, 1) && pps->driver_mtx != NULL) {
|
|
|
|
if (pps->flags & PPSFLAG_MTX_SPIN) {
|
|
|
|
err = msleep_spin(pps, pps->driver_mtx,
|
|
|
|
"ppsfch", timo);
|
|
|
|
} else {
|
|
|
|
err = msleep(pps, pps->driver_mtx, PCATCH,
|
|
|
|
"ppsfch", timo);
|
|
|
|
}
|
|
|
|
} else {
|
2015-03-07 18:23:32 +00:00
|
|
|
err = tsleep(pps, PCATCH, "ppsfch", timo);
|
2015-05-04 17:59:39 +00:00
|
|
|
}
|
2015-08-07 21:14:19 +00:00
|
|
|
if (err == EWOULDBLOCK) {
|
|
|
|
if (fapi->timeout.tv_sec == -1) {
|
|
|
|
continue;
|
|
|
|
} else {
|
|
|
|
return (ETIMEDOUT);
|
|
|
|
}
|
2013-02-15 18:30:32 +00:00
|
|
|
} else if (err != 0) {
|
|
|
|
return (err);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
pps->ppsinfo.current_mode = pps->ppsparam.mode;
|
|
|
|
fapi->pps_info_buf = pps->ppsinfo;
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
1999-03-11 15:09:51 +00:00
|
|
|
int
|
|
|
|
pps_ioctl(u_long cmd, caddr_t data, struct pps_state *pps)
|
1998-02-20 16:36:17 +00:00
|
|
|
{
|
1999-10-09 14:49:56 +00:00
|
|
|
pps_params_t *app;
|
|
|
|
struct pps_fetch_args *fapi;
|
2011-11-21 13:34:29 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
struct pps_fetch_ffc_args *fapi_ffc;
|
|
|
|
#endif
|
1999-10-10 16:18:36 +00:00
|
|
|
#ifdef PPS_SYNC
|
1999-10-09 14:49:56 +00:00
|
|
|
struct pps_kcbind_args *kapi;
|
1999-10-10 16:18:36 +00:00
|
|
|
#endif
|
1999-10-09 14:49:56 +00:00
|
|
|
|
2004-08-14 08:33:49 +00:00
|
|
|
KASSERT(pps != NULL, ("NULL pps pointer in pps_ioctl"));
|
1999-10-09 14:49:56 +00:00
|
|
|
switch (cmd) {
|
|
|
|
case PPS_IOC_CREATE:
|
|
|
|
return (0);
|
|
|
|
case PPS_IOC_DESTROY:
|
|
|
|
return (0);
|
|
|
|
case PPS_IOC_SETPARAMS:
|
|
|
|
app = (pps_params_t *)data;
|
|
|
|
if (app->mode & ~pps->ppscap)
|
|
|
|
return (EINVAL);
|
2011-11-21 13:34:29 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
/* Ensure only a single clock is selected for ffc timestamp. */
|
|
|
|
if ((app->mode & PPS_TSCLK_MASK) == PPS_TSCLK_MASK)
|
|
|
|
return (EINVAL);
|
|
|
|
#endif
|
2002-04-28 18:24:21 +00:00
|
|
|
pps->ppsparam = *app;
|
1999-10-09 14:49:56 +00:00
|
|
|
return (0);
|
|
|
|
case PPS_IOC_GETPARAMS:
|
|
|
|
app = (pps_params_t *)data;
|
|
|
|
*app = pps->ppsparam;
|
|
|
|
app->api_version = PPS_API_VERS_1;
|
|
|
|
return (0);
|
|
|
|
case PPS_IOC_GETCAP:
|
|
|
|
*(int*)data = pps->ppscap;
|
|
|
|
return (0);
|
|
|
|
case PPS_IOC_FETCH:
|
|
|
|
fapi = (struct pps_fetch_args *)data;
|
2013-02-15 18:30:32 +00:00
|
|
|
return (pps_fetch(fapi, pps));
|
2011-11-21 13:34:29 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
case PPS_IOC_FETCH_FFCOUNTER:
|
|
|
|
fapi_ffc = (struct pps_fetch_ffc_args *)data;
|
|
|
|
if (fapi_ffc->tsformat && fapi_ffc->tsformat !=
|
|
|
|
PPS_TSFMT_TSPEC)
|
|
|
|
return (EINVAL);
|
|
|
|
if (fapi_ffc->timeout.tv_sec || fapi_ffc->timeout.tv_nsec)
|
|
|
|
return (EOPNOTSUPP);
|
|
|
|
pps->ppsinfo_ffc.current_mode = pps->ppsparam.mode;
|
|
|
|
fapi_ffc->pps_info_buf_ffc = pps->ppsinfo_ffc;
|
|
|
|
/* Overwrite timestamps if feedback clock selected. */
|
|
|
|
switch (pps->ppsparam.mode & PPS_TSCLK_MASK) {
|
|
|
|
case PPS_TSCLK_FBCK:
|
|
|
|
fapi_ffc->pps_info_buf_ffc.assert_timestamp =
|
|
|
|
pps->ppsinfo.assert_timestamp;
|
|
|
|
fapi_ffc->pps_info_buf_ffc.clear_timestamp =
|
|
|
|
pps->ppsinfo.clear_timestamp;
|
|
|
|
break;
|
|
|
|
case PPS_TSCLK_FFWD:
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return (0);
|
|
|
|
#endif /* FFCLOCK */
|
1999-10-09 14:49:56 +00:00
|
|
|
case PPS_IOC_KCBIND:
|
|
|
|
#ifdef PPS_SYNC
|
|
|
|
kapi = (struct pps_kcbind_args *)data;
|
|
|
|
/* XXX Only root should be able to do this */
|
|
|
|
if (kapi->tsformat && kapi->tsformat != PPS_TSFMT_TSPEC)
|
|
|
|
return (EINVAL);
|
|
|
|
if (kapi->kernel_consumer != PPS_KC_HARDPPS)
|
|
|
|
return (EINVAL);
|
|
|
|
if (kapi->edge & ~pps->ppscap)
|
|
|
|
return (EINVAL);
|
2015-05-04 17:59:39 +00:00
|
|
|
pps->kcmode = (kapi->edge & KCMODE_EDGEMASK) |
|
|
|
|
(pps->kcmode & KCMODE_ABIFLAG);
|
1999-10-09 14:49:56 +00:00
|
|
|
return (0);
|
|
|
|
#else
|
|
|
|
return (EOPNOTSUPP);
|
|
|
|
#endif
|
|
|
|
default:
|
2005-03-26 20:04:28 +00:00
|
|
|
return (ENOIOCTL);
|
1999-10-09 14:49:56 +00:00
|
|
|
}
|
1999-03-11 15:09:51 +00:00
|
|
|
}
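/*
 * Illustrative sketch (not from the original sources): the ioctls above
 * are normally reached through the RFC 2783 wrappers in <sys/timepps.h>.
 * The device path is only an example.
 *
 *	pps_handle_t handle;
 *	pps_params_t params;
 *	pps_info_t info;
 *	struct timespec timeout = { 3, 0 };
 *	int fd = open("/dev/cuau0", O_RDWR);
 *
 *	time_pps_create(fd, &handle);
 *	time_pps_getparams(handle, &params);
 *	params.mode |= PPS_CAPTUREASSERT;
 *	time_pps_setparams(handle, &params);
 *	time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout);
 *
 * time_pps_fetch() lands in pps_fetch() above: a zero timeout returns the
 * most recently captured edges immediately, tv_sec == -1 blocks until the
 * next event, and any other timeout is converted with tvtohz() and may
 * end in ETIMEDOUT.
 */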
|
1998-03-16 10:19:12 +00:00
|
|
|
|
1999-03-11 15:09:51 +00:00
|
|
|
void
|
|
|
|
pps_init(struct pps_state *pps)
|
|
|
|
{
|
2013-02-15 18:30:32 +00:00
|
|
|
pps->ppscap |= PPS_TSFMT_TSPEC | PPS_CANWAIT;
|
1999-03-11 15:09:51 +00:00
|
|
|
if (pps->ppscap & PPS_CAPTUREASSERT)
|
|
|
|
pps->ppscap |= PPS_OFFSETASSERT;
|
|
|
|
if (pps->ppscap & PPS_CAPTURECLEAR)
|
|
|
|
pps->ppscap |= PPS_OFFSETCLEAR;
|
2011-11-21 13:34:29 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
pps->ppscap |= PPS_TSCLK_MASK;
|
|
|
|
#endif
|
2015-05-04 17:59:39 +00:00
|
|
|
pps->kcmode &= ~KCMODE_ABIFLAG;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
pps_init_abi(struct pps_state *pps)
|
|
|
|
{
|
|
|
|
|
|
|
|
pps_init(pps);
|
|
|
|
if (pps->driver_abi > 0) {
|
|
|
|
pps->kcmode |= KCMODE_ABIFLAG;
|
|
|
|
pps->kernel_abi = PPS_ABI_VERSION;
|
|
|
|
}
|
1998-02-20 16:36:17 +00:00
|
|
|
}
|
|
|
|
|
1999-03-11 15:09:51 +00:00
|
|
|
void
|
2002-04-26 20:24:28 +00:00
|
|
|
pps_capture(struct pps_state *pps)
|
|
|
|
{
|
2002-04-28 18:24:21 +00:00
|
|
|
struct timehands *th;
|
|
|
|
|
2004-08-14 08:33:49 +00:00
|
|
|
KASSERT(pps != NULL, ("NULL pps pointer in pps_capture"));
|
2002-04-28 18:24:21 +00:00
|
|
|
th = timehands;
|
2015-07-08 18:42:08 +00:00
|
|
|
pps->capgen = atomic_load_acq_int(&th->th_generation);
|
2002-04-28 18:24:21 +00:00
|
|
|
pps->capth = th;
|
2011-11-21 13:34:29 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
pps->capffth = fftimehands;
|
|
|
|
#endif
|
2002-04-28 18:24:21 +00:00
|
|
|
pps->capcount = th->th_counter->tc_get_timecount(th->th_counter);
|
2015-07-08 18:42:08 +00:00
|
|
|
atomic_thread_fence_acq();
|
|
|
|
if (pps->capgen != th->th_generation)
|
2002-04-28 18:24:21 +00:00
|
|
|
pps->capgen = 0;
|
2002-04-26 20:24:28 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
pps_event(struct pps_state *pps, int event)
|
1998-02-20 16:36:17 +00:00
|
|
|
{
|
2002-04-30 20:42:06 +00:00
|
|
|
struct bintime bt;
|
1999-03-11 15:09:51 +00:00
|
|
|
struct timespec ts, *tsp, *osp;
|
2002-04-28 18:24:21 +00:00
|
|
|
u_int tcount, *pcount;
|
2015-11-02 03:14:37 +00:00
|
|
|
int foff;
|
2002-04-30 20:42:06 +00:00
|
|
|
pps_seq_t *pseq;
|
2011-11-21 13:34:29 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
struct timespec *tsp_ffc;
|
|
|
|
pps_seq_t *pseq_ffc;
|
|
|
|
ffcounter *ffcount;
|
|
|
|
#endif
|
2015-11-02 03:14:37 +00:00
|
|
|
#ifdef PPS_SYNC
|
|
|
|
int fhard;
|
|
|
|
#endif
|
1999-03-11 15:09:51 +00:00
|
|
|
|
2004-08-14 08:33:49 +00:00
|
|
|
KASSERT(pps != NULL, ("NULL pps pointer in pps_event"));
|
2015-08-07 23:31:31 +00:00
|
|
|
/* Nothing to do if not currently set to capture this event type. */
|
|
|
|
if ((event & pps->ppsparam.mode) == 0)
|
|
|
|
return;
|
2002-04-30 20:42:06 +00:00
|
|
|
/* If the timecounter was wound up underneath us, bail out. */
|
2015-07-08 18:42:08 +00:00
|
|
|
if (pps->capgen == 0 || pps->capgen !=
|
|
|
|
atomic_load_acq_int(&pps->capth->th_generation))
|
2002-04-26 20:24:28 +00:00
|
|
|
return;
|
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/* Things would be easier with arrays. */
|
1999-03-11 15:09:51 +00:00
|
|
|
if (event == PPS_CAPTUREASSERT) {
|
|
|
|
tsp = &pps->ppsinfo.assert_timestamp;
|
|
|
|
osp = &pps->ppsparam.assert_offset;
|
|
|
|
foff = pps->ppsparam.mode & PPS_OFFSETASSERT;
|
2015-11-02 03:14:37 +00:00
|
|
|
#ifdef PPS_SYNC
|
1999-10-09 14:49:56 +00:00
|
|
|
fhard = pps->kcmode & PPS_CAPTUREASSERT;
|
2015-11-02 03:14:37 +00:00
|
|
|
#endif
|
1999-03-11 15:09:51 +00:00
|
|
|
pcount = &pps->ppscount[0];
|
|
|
|
pseq = &pps->ppsinfo.assert_sequence;
|
2011-11-21 13:34:29 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
ffcount = &pps->ppsinfo_ffc.assert_ffcount;
|
|
|
|
tsp_ffc = &pps->ppsinfo_ffc.assert_timestamp;
|
|
|
|
pseq_ffc = &pps->ppsinfo_ffc.assert_sequence;
|
|
|
|
#endif
|
1999-03-11 15:09:51 +00:00
|
|
|
} else {
|
|
|
|
tsp = &pps->ppsinfo.clear_timestamp;
|
|
|
|
osp = &pps->ppsparam.clear_offset;
|
|
|
|
foff = pps->ppsparam.mode & PPS_OFFSETCLEAR;
|
2015-11-02 03:14:37 +00:00
|
|
|
#ifdef PPS_SYNC
|
1999-10-09 14:49:56 +00:00
|
|
|
fhard = pps->kcmode & PPS_CAPTURECLEAR;
|
2015-11-02 03:14:37 +00:00
|
|
|
#endif
|
1999-03-11 15:09:51 +00:00
|
|
|
pcount = &pps->ppscount[1];
|
|
|
|
pseq = &pps->ppsinfo.clear_sequence;
|
2011-11-21 13:34:29 +00:00
|
|
|
#ifdef FFCLOCK
|
|
|
|
ffcount = &pps->ppsinfo_ffc.clear_ffcount;
|
|
|
|
tsp_ffc = &pps->ppsinfo_ffc.clear_timestamp;
|
|
|
|
pseq_ffc = &pps->ppsinfo_ffc.clear_sequence;
|
|
|
|
#endif
|
1999-03-11 15:09:51 +00:00
|
|
|
}
|
1998-03-16 10:19:12 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2002-04-28 18:24:21 +00:00
|
|
|
* If the timecounter changed, we cannot compare the count values, so
|
|
|
|
* we have to drop the rest of the PPS-stuff until the next event.
|
|
|
|
*/
|
|
|
|
if (pps->ppstc != pps->capth->th_counter) {
|
|
|
|
pps->ppstc = pps->capth->th_counter;
|
2002-04-26 20:24:28 +00:00
|
|
|
*pcount = pps->capcount;
|
|
|
|
pps->ppscount[2] = pps->capcount;
|
1999-03-11 15:09:51 +00:00
|
|
|
return;
|
|
|
|
}
|
1998-02-20 16:36:17 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/* Convert the count to a timespec. */
|
2002-04-28 18:24:21 +00:00
|
|
|
tcount = pps->capcount - pps->capth->th_offset_count;
|
|
|
|
tcount &= pps->capth->th_counter->tc_counter_mask;
|
2016-07-30 09:25:57 +00:00
|
|
|
bt = pps->capth->th_bintime;
|
2002-04-28 18:24:21 +00:00
|
|
|
bintime_addx(&bt, pps->capth->th_scale * tcount);
|
2002-02-07 21:21:55 +00:00
|
|
|
bintime2timespec(&bt, &ts);
|
1999-03-11 15:09:51 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/* If the timecounter was wound up underneath us, bail out. */
|
2015-07-08 18:42:08 +00:00
|
|
|
atomic_thread_fence_acq();
|
|
|
|
if (pps->capgen != pps->capth->th_generation)
|
2002-04-26 20:24:28 +00:00
|
|
|
return;
|
|
|
|
|
|
|
|
*pcount = pps->capcount;
|
1999-03-11 15:09:51 +00:00
|
|
|
(*pseq)++;
|
|
|
|
*tsp = ts;
|
1999-10-09 14:49:56 +00:00
|
|
|
|
1999-03-11 15:09:51 +00:00
|
|
|
if (foff) {
|
Make timespecadd(3) and friends public
The timespecadd(3) family of macros were imported from NetBSD back in
r35029. However, they were initially guarded by #ifdef _KERNEL. In the
meantime, we have grown at least 28 syscalls that use timespecs in some
way, leading many programs both inside and outside of the base system to
redefine those macros. It's better just to make the definitions public.
Our kernel currently defines two-argument versions of timespecadd and
timespecsub. NetBSD, OpenBSD, and FreeDesktop.org's libbsd, however, define
three-argument versions. Solaris also defines a three-argument version, but
only in its kernel. This revision changes our definition to match the
common three-argument version.
Bump _FreeBSD_version due to the breaking KPI change.
Discussed with: cem, jilles, ian, bde
Differential Revision: https://reviews.freebsd.org/D14725
2018-07-30 15:46:40 +00:00
|
|
|
timespecadd(tsp, osp, tsp);
|
1999-03-11 15:09:51 +00:00
|
|
|
if (tsp->tv_nsec < 0) {
|
|
|
|
tsp->tv_nsec += 1000000000;
|
|
|
|
tsp->tv_sec -= 1;
|
|
|
|
}
|
|
|
|
}
|
2011-11-21 13:34:29 +00:00
|
|
|
|
|
|
|
#ifdef FFCLOCK
|
|
|
|
*ffcount = pps->capffth->tick_ffcount + tcount;
|
|
|
|
bt = pps->capffth->tick_time;
|
|
|
|
ffclock_convert_delta(tcount, pps->capffth->cest.period, &bt);
|
|
|
|
bintime_add(&bt, &pps->capffth->tick_time);
|
|
|
|
bintime2timespec(&bt, &ts);
|
|
|
|
(*pseq_ffc)++;
|
|
|
|
*tsp_ffc = ts;
|
|
|
|
#endif
|
|
|
|
|
1999-03-11 15:09:51 +00:00
|
|
|
#ifdef PPS_SYNC
|
|
|
|
if (fhard) {
|
2010-06-21 09:55:56 +00:00
|
|
|
uint64_t scale;
|
2003-01-16 20:06:45 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2002-04-28 18:24:21 +00:00
|
|
|
* Feed the NTP PLL/FLL.
|
2003-01-16 19:22:13 +00:00
|
|
|
* The FLL wants to know how many (hardware) nanoseconds
|
|
|
|
* elapsed since the previous event.
|
2002-04-28 18:24:21 +00:00
|
|
|
*/
|
2002-04-26 20:24:28 +00:00
|
|
|
tcount = pps->capcount - pps->ppscount[2];
|
|
|
|
pps->ppscount[2] = pps->capcount;
|
2002-04-28 18:24:21 +00:00
|
|
|
tcount &= pps->capth->th_counter->tc_counter_mask;
|
2010-06-21 09:55:56 +00:00
|
|
|
scale = (uint64_t)1 << 63;
|
2003-01-16 19:22:13 +00:00
|
|
|
scale /= pps->capth->th_counter->tc_frequency;
|
|
|
|
scale *= 2;
|
2002-02-07 21:21:55 +00:00
|
|
|
bt.sec = 0;
|
|
|
|
bt.frac = 0;
|
2003-01-16 19:22:13 +00:00
|
|
|
bintime_addx(&bt, scale * tcount);
|
2002-02-07 21:21:55 +00:00
|
|
|
bintime2timespec(&bt, &ts);
|
|
|
|
hardpps(tsp, ts.tv_nsec + 1000000000 * ts.tv_sec);
|
1999-03-11 15:09:51 +00:00
|
|
|
}
|
|
|
|
#endif
|
2013-02-15 18:30:32 +00:00
|
|
|
|
|
|
|
/* Wake up anyone sleeping in pps_fetch(). */
|
|
|
|
wakeup(pps);
|
1999-03-11 15:09:51 +00:00
|
|
|
}
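/*
 * Illustrative sketch (not from the original sources): the expected
 * driver-side calling sequence for the routines above.  "sc->sc_pps" is a
 * hypothetical softc member holding the driver's struct pps_state.
 *
 *	At attach time:
 *		sc->sc_pps.ppscap = PPS_CAPTUREASSERT | PPS_CAPTURECLEAR;
 *		pps_init_abi(&sc->sc_pps);
 *
 *	In the interrupt handler, as early as possible:
 *		pps_capture(&sc->sc_pps);
 *		... acknowledge the interrupt, read the signal level ...
 *		pps_event(&sc->sc_pps, PPS_CAPTUREASSERT);
 *
 * pps_capture() only latches the raw counter and the timehands
 * generation; the conversion work happens later in pps_event(), which
 * silently drops the sample if the generation changed in between.
 */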
|
2002-04-26 12:37:36 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
2002-04-26 12:37:36 +00:00
|
|
|
* Timecounters need to be updated every so often to prevent the hardware
|
|
|
|
* counter from overflowing. Updating also recalculates the cached values
|
|
|
|
* used by the get*() family of functions, so their precision depends on
|
|
|
|
* the update frequency.
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int tc_tick;
|
2010-11-14 08:06:29 +00:00
|
|
|
SYSCTL_INT(_kern_timecounter, OID_AUTO, tick, CTLFLAG_RD, &tc_tick, 0,
|
2010-11-14 16:10:15 +00:00
|
|
|
"Approximate number of hardclock ticks in a millisecond");
|
2002-04-26 12:37:36 +00:00
|
|
|
|
2002-09-04 10:15:19 +00:00
|
|
|
void
|
2010-09-14 08:48:06 +00:00
|
|
|
tc_ticktock(int cnt)
|
2002-04-26 12:37:36 +00:00
|
|
|
{
|
2002-09-04 10:15:19 +00:00
|
|
|
static int count;
|
2002-04-26 12:37:36 +00:00
|
|
|
|
2016-07-27 11:49:41 +00:00
|
|
|
if (mtx_trylock_spin(&tc_setclock_mtx)) {
|
|
|
|
count += cnt;
|
|
|
|
if (count >= tc_tick) {
|
|
|
|
count = 0;
|
|
|
|
tc_windup(NULL);
|
|
|
|
}
|
|
|
|
mtx_unlock_spin(&tc_setclock_mtx);
|
|
|
|
}
|
2002-04-26 12:37:36 +00:00
|
|
|
}
|
|
|
|
|
- Make callout(9) tickless, relying on eventtimers(4) as backend for
precise time event generation. This greatly improves granularity of
callouts, which are no longer constrained to wait for the next tick to be
scheduled.
- Extend the callout KPI introducing a set of callout_reset_sbt* functions,
which take a sbintime_t as timeout argument. The new KPI also offers a
way for consumers to specify precision tolerance they allow, so that
callout can coalesce events and reduce number of interrupts as well as
potentially avoid scheduling a SWI thread.
- Introduce support for dispatching callouts directly from hardware
interrupt context, specifying an additional flag. This feature should be
used carefully, since interrupt context has some limitations
(e.g. no sleeping locks can be held).
- Enhance mechanisms to gather information about the callwheel, introducing
a new sysctl to obtain stats.
This change breaks the KBI. struct callout fields have been changed, in
particular 'int ticks' (4 bytes) has been replaced with 'sbintime_t'
(8 bytes) and another 'sbintime_t' field was added for precision.
Together with: mav
Reviewed by: attilio, bde, luigi, phk
Sponsored by: Google Summer of Code 2012, iXsystems inc.
Tested by: flo (amd64, sparc64), marius (sparc64), ian (arm),
markj (amd64), mav, Fabian Keil
2013-03-04 11:09:56 +00:00
|
|
|
static void __inline
|
|
|
|
tc_adjprecision(void)
|
|
|
|
{
|
|
|
|
int t;
|
|
|
|
|
|
|
|
if (tc_timepercentage > 0) {
|
|
|
|
t = (99 + tc_timepercentage) / tc_timepercentage;
|
|
|
|
tc_precexp = fls(t + (t >> 1)) - 1;
|
|
|
|
FREQ2BT(hz / tc_tick, &bt_timethreshold);
|
|
|
|
FREQ2BT(hz, &bt_tickthreshold);
|
|
|
|
bintime_shift(&bt_timethreshold, tc_precexp);
|
|
|
|
bintime_shift(&bt_tickthreshold, tc_precexp);
|
|
|
|
} else {
|
|
|
|
tc_precexp = 31;
|
|
|
|
bt_timethreshold.sec = INT_MAX;
|
|
|
|
bt_timethreshold.frac = ~(uint64_t)0;
|
|
|
|
bt_tickthreshold = bt_timethreshold;
|
|
|
|
}
|
|
|
|
sbt_timethreshold = bttosbt(bt_timethreshold);
|
|
|
|
sbt_tickthreshold = bttosbt(bt_tickthreshold);
|
|
|
|
}
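/*
 * Worked example (added for clarity, not in the original): with hz = 1000,
 * tc_tick = 1 and, say, tc_timepercentage = 5, the code above computes
 * t = (99 + 5) / 5 = 20 and tc_precexp = fls(20 + 10) - 1 = 4, so
 * bt_timethreshold becomes (1s / (hz / tc_tick)) << 4 = 16 ms and
 * bt_tickthreshold becomes (1s / hz) << 4 = 16 ms as well.  Roughly
 * speaking, timestamp requests whose stated tolerance is looser than
 * these thresholds can be satisfied from the cached tick time instead of
 * reading the hardware counter.
 */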
|
|
|
|
|
|
|
|
static int
|
|
|
|
sysctl_kern_timecounter_adjprecision(SYSCTL_HANDLER_ARGS)
|
|
|
|
{
|
|
|
|
int error, val;
|
|
|
|
|
|
|
|
val = tc_timepercentage;
|
|
|
|
error = sysctl_handle_int(oidp, &val, 0, req);
|
|
|
|
if (error != 0 || req->newptr == NULL)
|
|
|
|
return (error);
|
|
|
|
tc_timepercentage = val;
|
2014-06-28 03:56:17 +00:00
|
|
|
if (cold)
|
|
|
|
goto done;
|
2013-03-04 11:09:56 +00:00
|
|
|
tc_adjprecision();
|
2014-06-28 03:56:17 +00:00
|
|
|
done:
|
2013-03-04 11:09:56 +00:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
2002-04-28 18:24:21 +00:00
|
|
|
static void
|
2002-04-26 12:37:36 +00:00
|
|
|
inittimecounter(void *dummy)
|
|
|
|
{
|
|
|
|
u_int p;
|
2013-03-04 11:09:56 +00:00
|
|
|
int tick_rate;
|
2002-04-26 12:37:36 +00:00
|
|
|
|
2002-04-30 20:42:06 +00:00
|
|
|
/*
|
|
|
|
* Set the initial timeout to
|
|
|
|
* max(1, <approx. number of hardclock ticks in a millisecond>).
|
|
|
|
* People should probably not use the sysctl to set the timeout
|
2016-04-29 22:15:33 +00:00
|
|
|
* to smaller than its initial value, since that value is the
|
2002-04-30 20:42:06 +00:00
|
|
|
* smallest reasonable one. If they want better timestamps they
|
|
|
|
* should use the non-"get"* functions.
|
|
|
|
*/
|
2002-04-26 12:37:36 +00:00
|
|
|
if (hz > 1000)
|
|
|
|
tc_tick = (hz + 500) / 1000;
|
|
|
|
else
|
|
|
|
tc_tick = 1;
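
        /*
         * Adjust the configured timecounter precision and precompute the
         * sbintime_t lengths of one hz tick and of one tc_tick period.
         */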
        tc_adjprecision();
        FREQ2BT(hz, &tick_bt);
        tick_sbt = bttosbt(tick_bt);
        tick_rate = hz / tc_tick;
        FREQ2BT(tick_rate, &tc_tick_bt);
        tc_tick_sbt = bttosbt(tc_tick_bt);
        p = (tc_tick * 1000000) / hz;
        printf("Timecounters tick every %d.%03u msec\n", p / 1000, p % 1000);

#ifdef FFCLOCK
        ffclock_init();
#endif

        /* warm up new timecounter (again) and get rolling. */
        (void)timecounter->tc_get_timecount(timecounter);
        (void)timecounter->tc_get_timecount(timecounter);
        mtx_lock_spin(&tc_setclock_mtx);
        tc_windup(NULL);
        mtx_unlock_spin(&tc_setclock_mtx);
}

SYSINIT(timecounter, SI_SUB_CLOCKS, SI_ORDER_SECOND, inittimecounter, NULL);

/* Cpu tick handling -------------------------------------------------*/

static int cpu_tick_variable;
static uint64_t cpu_tick_frequency;

DPCPU_DEFINE_STATIC(uint64_t, tc_cpu_ticks_base);
DPCPU_DEFINE_STATIC(unsigned, tc_cpu_ticks_last);
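
/*
 * Default cpu_ticks() implementation: derive a 64-bit, monotonically
 * increasing tick count from the active timecounter, extending it across
 * hardware counter wrap-arounds with a per-CPU base.
 */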
static uint64_t
tc_cpu_ticks(void)
{
        struct timecounter *tc;
        uint64_t res, *base;
        unsigned u, *last;

        critical_enter();
        base = DPCPU_PTR(tc_cpu_ticks_base);
        last = DPCPU_PTR(tc_cpu_ticks_last);
        tc = timehands->th_counter;
        u = tc->tc_get_timecount(tc) & tc->tc_counter_mask;
        if (u < *last)
                *base += (uint64_t)tc->tc_counter_mask + 1;
        *last = u;
        res = u + *base;
        critical_exit();
        return (res);
}
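
/*
 * Rate-limiting wrapper around cpu_tick_calibrate(): called from
 * hardclock(), it triggers a re-calibration at most once every sixteen
 * seconds of uptime.
 */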
void
cpu_tick_calibration(void)
{
        static time_t last_calib;

        if (time_uptime != last_calib && !(time_uptime & 0xf)) {
                cpu_tick_calibrate(0);
                last_calib = time_uptime;
        }
}

/*
 * This function gets called every 16 seconds on only one designated
 * CPU in the system from hardclock() via cpu_tick_calibration().
 *
 * Whenever the real time clock is stepped we get called with reset=1
 * to make sure we handle suspend/resume and similar events correctly.
 */
static void
cpu_tick_calibrate(int reset)
{
        static uint64_t c_last;
        uint64_t c_this, c_delta;
        static struct bintime t_last;
        struct bintime t_this, t_delta;
        uint32_t divi;

        if (reset) {
                /* The clock was stepped, abort & reset */
                t_last.sec = 0;
                return;
        }

        /* we don't calibrate fixed rate cputicks */
        if (!cpu_tick_variable)
                return;

        getbinuptime(&t_this);
        c_this = cpu_ticks();
        if (t_last.sec != 0) {
                c_delta = c_this - c_last;
                t_delta = t_this;
                bintime_sub(&t_delta, &t_last);
                /*
                 * Headroom:
                 *      2^(64-20) / 16[s] =
                 *      2^(44) / 16[s] =
                 *      17.592.186.044.416 / 16 =
                 *      1.099.511.627.776 [Hz]
                 */
                divi = t_delta.sec << 20;
                divi |= t_delta.frac >> (64 - 20);
                c_delta <<= 20;
                c_delta /= divi;
                if (c_delta > cpu_tick_frequency) {
                        if (0 && bootverbose)
                                printf("cpu_tick increased to %ju Hz\n",
                                    c_delta);
                        cpu_tick_frequency = c_delta;
                }
        }
        c_last = c_this;
        t_last = t_this;
}
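
/*
 * Register an alternative CPU tick source (for example a hardware
 * counter such as the TSC) together with its frequency and whether that
 * frequency may vary; passing a NULL func reverts to the
 * timecounter-based tc_cpu_ticks().
 */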
void
set_cputicker(cpu_tick_f *func, uint64_t freq, unsigned var)
{

        if (func == NULL) {
                cpu_ticks = tc_cpu_ticks;
        } else {
                cpu_tick_frequency = freq;
                cpu_tick_variable = var;
                cpu_ticks = func;
        }
}
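
/*
 * Report the frequency of the current cpu_ticks() source in ticks per
 * second, falling back to the timecounter frequency while the default
 * tc_cpu_ticks() is in use.
 */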
uint64_t
cpu_tickrate(void)
{

        if (cpu_ticks == tc_cpu_ticks)
                return (tc_getfrequency());
        return (cpu_tick_frequency);
}

/*
 * We need to be slightly careful converting cputicks to microseconds.
 * There is plenty of margin in 64 bits of microseconds (half a million
 * years) and in 64 bits at 4 GHz (146 years), but if we do a multiply
 * before divide conversion (to retain precision) we find that the
 * margin shrinks to 1.5 hours (one millionth of 146y).
 * With a three prong approach we never lose significant bits, no
 * matter what the cputick rate and length of timeinterval is.
 */
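
/*
 * Worked example (assuming a 2 GHz tick rate): a tick count of 10^14,
 * about 14 hours, lies between floor(2^64 / 10^6) and floor(2^64 / 10^3),
 * so the middle form below applies:
 *
 *      (10^14 * 1000) / (2 * 10^9 / 1000) = 5 * 10^10 usec,
 *
 * which matches 10^14 / (2 * 10^9) = 5 * 10^4 seconds.
 */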

uint64_t
cputick2usec(uint64_t tick)
{

        if (tick > 18446744073709551LL)         /* floor(2^64 / 1000) */
                return (tick / (cpu_tickrate() / 1000000LL));
        else if (tick > 18446744073709LL)       /* floor(2^64 / 1000000) */
                return ((tick * 1000LL) / (cpu_tickrate() / 1000LL));
        else
                return ((tick * 1000000LL) / cpu_tickrate());
}

cpu_tick_f *cpu_ticks = tc_cpu_ticks;
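
/*
 * The kern.timecounter.fast_gettime sysctl controls whether the exported
 * timehands are marked usable by userspace; when it is disabled,
 * tc_fill_vdso_timehands() reports the data as unusable and libc falls
 * back to the system call.
 */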
static int vdso_th_enable = 1;
static int
sysctl_fast_gettime(SYSCTL_HANDLER_ARGS)
{
        int old_vdso_th_enable, error;

        old_vdso_th_enable = vdso_th_enable;
        error = sysctl_handle_int(oidp, &old_vdso_th_enable, 0, req);
        if (error != 0)
                return (error);
        vdso_th_enable = old_vdso_th_enable;
        return (0);
}
SYSCTL_PROC(_kern_timecounter, OID_AUTO, fast_gettime,
    CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_MPSAFE,
    NULL, 0, sysctl_fast_gettime, "I", "Enable fast time of day");
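
/*
 * Snapshot the current timehands into the vdso_timehands structure
 * exported to userspace.  The active timecounter decides, through its
 * tc_fill_vdso_timehands method, whether it can be read from userspace;
 * the result is non-zero only if both the timecounter and the
 * vdso_th_enable knob allow it.
 */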
uint32_t
tc_fill_vdso_timehands(struct vdso_timehands *vdso_th)
{
        struct timehands *th;
        uint32_t enabled;

        th = timehands;
        vdso_th->th_scale = th->th_scale;
        vdso_th->th_offset_count = th->th_offset_count;
        vdso_th->th_counter_mask = th->th_counter->tc_counter_mask;
        vdso_th->th_offset = th->th_offset;
        vdso_th->th_boottime = th->th_boottime;
        if (th->th_counter->tc_fill_vdso_timehands != NULL) {
                enabled = th->th_counter->tc_fill_vdso_timehands(vdso_th,
                    th->th_counter);
        } else
                enabled = 0;
        if (!vdso_th_enable)
                enabled = 0;
        return (enabled);
}

#ifdef COMPAT_FREEBSD32
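/*
 * 32-bit compat variant of tc_fill_vdso_timehands(): the same snapshot,
 * with 64-bit fields stored as pairs of 32-bit words.
 */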
uint32_t
tc_fill_vdso_timehands32(struct vdso_timehands32 *vdso_th32)
{
        struct timehands *th;
        uint32_t enabled;

        th = timehands;
        *(uint64_t *)&vdso_th32->th_scale[0] = th->th_scale;
        vdso_th32->th_offset_count = th->th_offset_count;
        vdso_th32->th_counter_mask = th->th_counter->tc_counter_mask;
        vdso_th32->th_offset.sec = th->th_offset.sec;
        *(uint64_t *)&vdso_th32->th_offset.frac[0] = th->th_offset.frac;
        vdso_th32->th_boottime.sec = th->th_boottime.sec;
        *(uint64_t *)&vdso_th32->th_boottime.frac[0] = th->th_boottime.frac;
        if (th->th_counter->tc_fill_vdso_timehands32 != NULL) {
                enabled = th->th_counter->tc_fill_vdso_timehands32(vdso_th32,
                    th->th_counter);
        } else
                enabled = 0;
        if (!vdso_th_enable)
                enabled = 0;
        return (enabled);
}
#endif