Commit Graph

116 Commits

peter
b5b26198a7 After a machine has been up for a bit more than 20 days with HZ=1000,
"ticks" goes negative.  This breaks the signed comparison in softclock,
which causes sleep() to never wake up, TCP to stop, etc.  This is
bad(TM).  Use the SEQ_LT() method from TCP's sequence number comparisons.
2008-10-28 03:26:25 +00:00
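
A minimal sketch of the wraparound-safe comparison this commit adopts;
the macro name here is illustrative (TCP's own version is SEQ_LT() in
netinet/tcp_seq.h):

    /*
     * Sketch: TCP-style wraparound-safe tick comparison.  Unsigned
     * subtraction followed by a signed cast stays correct across the
     * wrap, as long as the two values are within 2^31 ticks of each
     * other.  A plain (a < b) breaks once 'ticks' goes negative,
     * which with HZ=1000 happens after roughly 25 days of uptime.
     */
    #define TICK_LT(a, b)   ((int)((a) - (b)) < 0)
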
sam
f28149353a Add callout_schedule(); besides being useful, it also improves
compatibility with other systems.

Reviewed by:	ed, battlez
2008-08-02 17:42:38 +00:00
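
For context, callout_schedule(c, to_ticks) re-arms a callout with the
function and argument recorded by an earlier callout_reset().  A hedged
usage sketch with hypothetical driver names:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/callout.h>

    struct my_softc {                       /* hypothetical softc */
            struct callout sc_callout;
    };

    static void
    my_tick(void *arg)
    {
            struct my_softc *sc = arg;

            /* ... periodic work on sc ... */

            /* Re-arm one second out, reusing the function and
             * argument from the original callout_reset(). */
            callout_schedule(&sc->sc_callout, hz);
    }
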
jeff
7ff6e9903f Fix a race which could result in some timeout buckets being skipped.
- When a tick occurs on a cpu, iterate from cc_softticks until ticks.
   The per-cpu tick processing happens asynchronously with the actual
   adjustment of the 'ticks' variable.  Sometimes the results may
   be visible before the local call and sometimes after.  Previously this
   could cause a one-tick window where we didn't evaluate the bucket.
 - In softclock, fetch curticks before incrementing cc_softticks so we
   don't skip insertions which were made for the current time.

Sponsored by:	Nokia
2008-07-19 05:18:29 +00:00
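
A sketch of the fixed scan; the names follow the commit text and are
not the actual kern_timeout.c code:

    /* Process every tick between the last tick this cpu handled and
     * the current 'ticks', so no bucket is skipped even though
     * 'ticks' is adjusted asynchronously. */
    struct callout_cpu_sketch {
            int cc_softticks;       /* next tick for this cpu to scan */
    };
    extern int ticks;               /* global tick counter */

    static void
    callout_tick_sketch(struct callout_cpu_sketch *cc)
    {
            int curticks;

            curticks = ticks;       /* fetch once, before advancing */
            while (cc->cc_softticks != curticks + 1) {
                    /* scan the callwheel bucket for this tick,
                     * i.e. (cc->cc_softticks & callwheelmask) ... */
                    cc->cc_softticks++;
            }
    }
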
jeff
d1c199f415 - Correct a major error introduced in the per-cpu timeout commit. Sleep
and wakeup require the same wait channel to function properly.

Found by:	kris
Pointy hat:	me
2008-04-06 11:08:49 +00:00
jeff
b065517935 Implement per-cpu callout threads, wheels, and locks.
- Move callout thread creation from kern_intr.c to kern_timeout.c
 - Call callout_tick() on every processor via hardclock_cpu() rather than
   inspecting callout internal details in kern_clock.c.
 - Remove callout implementation details from callout.h
 - Package up all of the global variables into a per-cpu callout structure.
 - Start one thread per-cpu.  Threads are not strictly bound.  They prefer
   to execute on the native cpu but may migrate temporarily if interrupts
   are starving callout processing.
 - Run all callouts by default in the thread for cpu0 to maintain current
   ordering and concurrency guarantees.  Many consumers may not properly
   handle concurrent execution.
 - The new callout_reset_on() API allows specifying a particular cpu to
   execute the callout on.  This may migrate a callout to a new cpu.
   callout_reset() schedules on the last assigned cpu while
   callout_reset_curcpu() schedules on the current cpu.

Reviewed by:	phk
Sponsored by:	Nokia
2008-04-02 11:20:30 +00:00
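
A usage sketch of the three scheduling variants named above (callout
and handler names are hypothetical):

    static struct callout my_co;
    static void my_fn(void *arg);

    static void
    arm_examples(void)
    {
            /* Run on the cpu the callout was last assigned to. */
            callout_reset(&my_co, hz, my_fn, NULL);
            /* Run on the cpu we are executing on right now. */
            callout_reset_curcpu(&my_co, hz, my_fn, NULL);
            /* Migrate the callout to, and run it on, cpu 2. */
            callout_reset_on(&my_co, hz, my_fn, NULL, 2);
    }
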
alfred
b283b3e59a Fix a race where timeout/untimeout could cause crashes for Giant-locked
code.

The bug:

There exists a race condition for timeout/untimeout(9) due to the
way that the softclock thread dequeues timeouts.

The softclock thread sets the c_func and c_arg of the callout to
NULL while holding the callout lock but not Giant.  It then drops
the callout lock and acquires Giant.

It is at this point where untimeout(9) on another cpu/thread could
be called.

Since c_arg and c_func are cleared, untimeout(9) does not touch the
callout and returns as if the callout is canceled.

The softclock then tries to acquire Giant and likely blocks due to
the other cpu/thread holding it.

The other cpu/thread then likely deallocates the backing store that
c_arg points to, finishes its work, and drops Giant.

Softclock resumes, acquires Giant, and calls the function with
the now-freed c_arg, and we have corruption/crash.

The fix:

We need to track curr_callout even for timeout(9) (LOCAL_ALLOC)
callouts, and free the callout only after the softclock processes
it, to close the race described here.

Obtained from: Juniper Networks, iedowse
Reviewed by: jhb, iedowse
MFC after: 2 weeks
2008-03-22 07:29:45 +00:00
jeff
3b1acbdce2 - Pass the priority argument from *sleep() into sleepq and down into
sched_sleep().  This removes extra thread_lock() acquisition and
   allows the scheduler to decide what to do with the static boost.
 - Change the priority arguments to cv_* to match sleepq/msleep/etc.
   where 0 means no priority change.  Catch -1 in cv_broadcastpri() and
   convert it to 0 for now.
 - Set a flag when sleeping in a way that is compatible with swapping
   since direct priority comparisons are meaningless now.
 - Add a sysctl to ule, kern.sched.static_boost, that defaults to on which
   controls the boost behavior.  Turning it off gives better performance
   in some workloads but needs more investigation.
 - While we're modifying sleepq, change signal and broadcast to both
   return with the lock held as the lock was held on enter.

Reviewed by:	jhb, peter
2008-03-12 06:31:06 +00:00
attilio
acc2f89a7f Really, no explicit checks against lock_class_* objects should be
done in consumer code: using lock properties is much more appropriate.
Fix current code doing these bogus checks.

Note: callouts are not really usable with all !(LC_SPINLOCK | LC_SLEEPABLE)
primitives; rmlocks, for example, don't implement the generic lock layer
functions.  They can be equipped for this, though, so the check is still
valid.

Tested by: matteo, kris (earlier version)
Reviewed by: jhb
2008-02-06 00:04:09 +00:00
attilio
5ed5cf01be Cache the value of c_lock, as it can change in the struct while the
global callout spinlock is not held, which can lead to a page fault (PF#).

Reported by: dougb, Mark Atkinson <atkin901 at yahoo dot com>
Tested by: dougb
Diagnosed by: jhb
2007-11-22 12:15:54 +00:00
attilio
6c4cd05be5 Add the function callout_init_rw() to the callout facility in order
to use rwlocks in conjunction with callouts.  The function does basically
what callout_init_mtx() already does, with the difference of taking a
rwlock as the extra argument.
The CALLOUT_SHAREDLOCK flag can now be used in order to acquire the lock
only in read mode when running the callout handler.  It has no effect
when used in conjunction with a mutex.

In order to implement this, the underlying callout functions have been
made completely lock type-unaware; accordingly, the sysctl
debug.to_avg_mtxcalls has been renamed to the generic
debug.to_avg_lockcalls.

Note: currently the allowed lock classes are mutexes and rwlocks because
callout handlers run in softclock swi, so they cannot sleep and they
cannot acquire sleepable locks like sx or lockmgr.

Requested by: kmacy, pjd, rwatson
Reviewed by: jhb
2007-11-20 00:37:45 +00:00
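
A hedged sketch of the new initializer; the lock and callout names are
hypothetical:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>
    #include <sys/callout.h>

    static struct rwlock my_rw;
    static struct callout my_co;

    static void
    init_sketch(void)
    {
            rw_init(&my_rw, "my_rw");
            /* With CALLOUT_SHAREDLOCK, the handler runs with my_rw
             * held in read (shared) rather than write mode. */
            callout_init_rw(&my_co, &my_rw, CALLOUT_SHAREDLOCK);
    }
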
rwatson
950792fe22 Remove the definition and implementation of 'CALLOUT_NETGIANT', a now- (and
possibly always-) unused define.

Reported by:	kmacy
Approved by:	re (kensmith)
2007-09-15 12:33:24 +00:00
jhb
c54b68ab60 Close a race that snuck in with the recent changes to fix a LOR between
the callout_lock spin lock and the sleepqueue spin locks.  In the fix,
callout_drain() has to drop the callout_lock so it can acquire the
sleepqueue lock.  The state of the callout can change while the
callout_lock is held however (for example, it can be rescheduled via
callout_reset()).  The previous code assumed that the only state change
that could happen is that the callout could finish executing.  This change
alters callout_drain() to effectively restart and recheck everything
after it acquires the sleepqueue lock thus handling all the possible
states that the callout could be in after any changes while callout_lock
was dropped.

Approved by:	re (kensmith)
Tested by:	kris
2007-08-31 19:01:30 +00:00
attilio
6b276781fa Fix a long-standing LOR between callout_lock and the sleepqueue chain
locks (which could lead to a deadlock):
- sleepq_set_timeout() acquires callout_lock (via callout_reset()) only
  with the sleepq chain lock held
- msleep_spin() in _callout_stop_safe() locks the sleepqueue chain with
  callout_lock held

In order to solve this, don't use msleep_spin() in _callout_stop_safe();
instead use the sleepqueues directly, as inlined msleep_spin() code.
Rearrange the wakeup path to keep it consistent too.

Reported by: kris (via stress2 test suite)
Tested by: Timothy Redaelli <drizzt@gufi.org>
Reviewed by: jhb
Approved by: jeff (mentor)
Approved by: re
2007-06-26 21:42:01 +00:00
andre
14d6d7db44 Make the TCP timer callout obtain Giant if the network stack is marked
as non-mpsafe.

This change is to be removed when all protocols are mp-safe.
2007-05-11 20:52:47 +00:00
glebius
7fc0346961 Improve ktr(4) logging for callout(9) subsystem. Log all inserts and
removals, including failures, into the callwheel.

XXX: Most of the CTR() macros are called with the callout_lock spin
mutex held, thus they won't be logged into a file if KTR_ALQ is used.
Moving the CTR() macros out of the spinlocked code would require
copying all of the arguments.  I'm too lazy to do this.
2006-10-11 14:57:03 +00:00
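
The records in question are of roughly this shape; a sketch only, the
exact format strings in kern_timeout.c differ:

    /* Inside e.g. callout_reset(): log the insertion under the
     * KTR_CALLOUT class.  CTR3() takes a format and three arguments. */
    CTR3(KTR_CALLOUT, "callout %p func %p arg %p inserted",
        c, c->c_func, c->c_arg);
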
jhb
c65ca5575a Use the recently added msleep_spin() function to simplify the
callout_drain() logic.  We no longer need a separate non-spin mutex
to do sleep/wakeup with; instead we can now just use the one spin
mutex to manage all of the callout functionality.
2006-02-23 19:13:12 +00:00
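
msleep_spin() sleeps on a wait channel while atomically releasing a
spin mutex, which is what lets a single spin mutex cover both the
callout state and the drain sleep.  A minimal sketch with hypothetical
names:

    static struct mtx my_spin;      /* initialized with MTX_SPIN */
    static int my_done;             /* flag, also the wait channel */

    /* Waiter: */
    mtx_lock_spin(&my_spin);
    while (my_done == 0)
            msleep_spin(&my_done, &my_spin, "codrn", 0);
    mtx_unlock_spin(&my_spin);

    /* Waker: */
    mtx_lock_spin(&my_spin);
    my_done = 1;
    wakeup(&my_done);
    mtx_unlock_spin(&my_spin);
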
jhb
b16fb05c6c Oops, missed adding the required include.
Pointy hat to:	jhb
2005-09-15 20:20:36 +00:00
jhb
95df5b283f Replace the dont_sleep_in_callout mutex hack (similar to g_x{up,down})
with the disallow sleeping facility.
2005-09-15 20:09:08 +00:00
glebius
8f3eb2a425 Make callout_reset() return a non-zero value if a pending callout
was rescheduled. If there was no pending callout, then return 0.

Reviewed by:	iedowse, cperciva
2005-09-08 14:20:39 +00:00
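
A sketch of the new return-value contract (names hypothetical):

    /* Nonzero: a pending invocation was cancelled and re-armed.
     * Zero: nothing was pending; the callout was simply armed. */
    if (callout_reset(&my_co, hz, my_fn, NULL) != 0)
            rescheduled++;          /* hypothetical bookkeeping */
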
iedowse
6df119b425 When processing a timeout() callout and returning it to the free
list, set `curr_callout' to NULL. This ensures that we won't attempt
to cancel the current callout if the original callout structure
gets recycled while we wait to acquire Giant.

This is reported to fix an intermittent syscons problem that was
introduced by revision 1.96.
2005-02-11 00:14:00 +00:00
iedowse
885a9694bc Add a mechanism for associating a mutex with a callout when the
callout is first initialised, using a new function callout_init_mtx().
The callout system will acquire this mutex before calling the callout
function and release it on return.

In addition, the callout system uses the mutex to avoid most of the
complications and race conditions inherent in asynchronous timer
facilities, so mutex-protected callouts have much simpler semantics.
As long as the mutex is held when invoking callout_stop() or
callout_reset(), then these functions will guarantee that the callout
will be stopped, even if softclock() had already begun to process
the callout.

Existing Giant-locked callouts will automatically pick up the new
race-free semantics. This should close a number of race conditions
in the USB code and probably other areas of the kernel too.

There should be no change in behaviour for "MP-safe" callouts; these
still need to use the techniques mentioned in timeout(9) to avoid
race conditions.
2005-02-07 02:47:33 +00:00
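
A hedged sketch of the mutex-associated usage this introduces (softc
and handler names are hypothetical):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/callout.h>

    struct my_softc {
            struct mtx      sc_mtx;
            struct callout  sc_co;
    };

    static void my_timer(void *arg);        /* runs with sc_mtx held */

    static void
    start_sketch(struct my_softc *sc)
    {
            mtx_init(&sc->sc_mtx, "my_softc", NULL, MTX_DEF);
            callout_init_mtx(&sc->sc_co, &sc->sc_mtx, 0);
            callout_reset(&sc->sc_co, hz, my_timer, sc);
    }

    static void
    stop_sketch(struct my_softc *sc)
    {
            mtx_lock(&sc->sc_mtx);
            /* With sc_mtx held, callout_stop() guarantees my_timer
             * is neither running nor about to run. */
            callout_stop(&sc->sc_co);
            mtx_unlock(&sc->sc_mtx);
    }
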
cperciva
933b3f52b0 Make "c->c_func = NULL" conditional on CALLOUT_LOCAL_ALLOC in both
places where it occurs, not just one. :-)

Pointed out by:	glebius
Pointy hat to:	cperciva
2005-01-19 21:15:58 +00:00
cperciva
958e0cd9a0 Make "c->c_func = NULL" conditional on the CALLOUT_LOCAL_ALLOC flag,
i.e., only clear c->c_func if the callout c is being used via the old
timeout(9) interface.

Requested by:	glebius
2005-01-19 20:34:46 +00:00
cperciva
8526221cef Clarify the description of the callout_active() macro: It is cleared by
callout_stop, callout_drain, and callout_deactivate, but is not
automatically cleared when a callout returns.
2005-01-19 19:46:35 +00:00
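
The handler-side pattern these macros support, as documented in
timeout(9); a sketch reusing the hypothetical my_softc from the
callout_init_mtx() sketch above:

    static void
    my_timer(void *arg)
    {
            struct my_softc *sc = arg;

            mtx_lock(&sc->sc_mtx);
            if (callout_pending(&sc->sc_co)) {
                    /* We were rescheduled while waiting for the
                     * lock; a newer invocation will do the work. */
                    mtx_unlock(&sc->sc_mtx);
                    return;
            }
            if (!callout_active(&sc->sc_co)) {
                    /* callout_stop() raced with us; do nothing. */
                    mtx_unlock(&sc->sc_mtx);
                    return;
            }
            callout_deactivate(&sc->sc_co);
            /* ... timer work ... */
            mtx_unlock(&sc->sc_mtx);
    }
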
cperciva
30e2899111 Adjust two of my comments to the new world order: Indent protection in
the first column is performed using /*-, not /**.
2005-01-07 03:25:45 +00:00
rwatson
36a8fef8a8 Cut a KTR record whenever a callout is invoked. Mark whether it runs
with Giant or not, and include the function pointer so it can be looked
up against the kernel symbol table during trace analysis.
2004-08-06 21:49:00 +00:00
cperciva
b4bae139fd When resetting a pending callout, perform the deregistration in
callout_reset rather than calling callout_stop.  This results in a few
lines of code duplication, but it provides a significant performance
improvement because it avoids recursing on callout_lock.

Requested by:	rwatson
2004-08-06 02:44:58 +00:00
hmp
fdb8f55130 The paper "Hashed Timers and Hierarchical Wheels: Data Structures for the
Efficient Implementation of a Timer Facility" was co-authored by T. Lauk,
not A. Lauk.

Adjust nearby whitespace.
2004-04-25 04:10:17 +00:00
cperciva
9e96265f37 1. Remove callout_stop binary compatibility.
2. Document that this means that kernel modules must be rebuilt.
3. While I'm here, fix my sorting error in callout.h

Requested by:	many [1], scottl [2], bde [3]
2004-04-20 15:49:31 +00:00
cperciva
9c466edcbc Add whitespace before comment blocks. (reported by njl)
Remove spurious whitespace, add indent protection, fix punctuation,
remove initialization of static variables to zero, put wakeup_ctr
and wakeup_needed in the correct order. (reported by bde)

This doesn't fix all the style bugs I introduced, but the remaining
style bugs make it easier for me to understand what I did here.
2004-04-08 02:03:49 +00:00
cperciva
e0793884f3 Introduce a callout_drain() function. This acts in the same manner as
callout_stop(), except that if the callout being stopped is currently
in progress, it blocks attempts to reset the callout and waits until the
callout is completed before it returns.

This makes it possible to clean up callout-using code safely, e.g.,
without potentially freeing memory which is still being used by a callout.

Reviewed by:	mux, gallatin, rwatson, jhb
2004-04-06 23:08:49 +00:00
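
A sketch of the teardown this enables; names are hypothetical, and
callout_drain() may sleep, so it must not be called from the handler
itself or with locks the handler needs:

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <sys/callout.h>

    static void
    my_detach(struct my_softc *sc)
    {
            /* Blocks until any in-progress invocation finishes and
             * keeps the callout from being reset in the meantime. */
            callout_drain(&sc->sc_co);
            free(sc, M_DEVBUF);     /* now safe to free the softc */
    }
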
imp
74cf37bd00 Remove advertising clause from University of California Regent's license,
per letter dated July 22, 1999.

Approved by: core
2004-04-05 21:03:37 +00:00
phk
239117c33f Make the DIAGNOSTIC code which complains about long {call|time}out(9)
functions less noisy:  We printf if a new function took longer than
the previous record holder, or if the previous record holder took
more than twice as long as the current record.
2003-12-07 20:03:28 +00:00
phk
1cf4c96919 Rename the debugging mutex "callout_no_sleep" to "dont_sleep_in_callout".
2003-11-15 18:33:54 +00:00
mckusick
b2ca74654c At the request of several developers, restore the DIAGNOSTIC code
deleted in 1.81. Increase the initial timeout limit to 2ms to
eliminate spurious messages of excessive timeouts in the NFS
client code.

Requested by:	Poul-Henning Kamp <phk@phk.freebsd.dk>
Requested by:	Mike Silbersack <silby@silby.com>
Requested by:	Sam Leffler <sam@errno.com>
2003-11-12 22:28:27 +00:00
mckusick
609a7ca3c1 Get rid of DIAGNOSTIC that gives false positives on slow CPUs.
2003-11-04 08:03:11 +00:00
marcel
0663329f6f On ia64 time_t is 64 bit. Explicitly cast tv_sec to long and change
the corresponding format specifier to %ld in a call to printf() in
function softclock(). The printf() is conditional upon DIAGNOSTIC.

Found by: LINT
2003-08-23 08:31:32 +00:00
phk
e5ce0c046f Don't put callout_lock under #ifdef DIAGNOSTIC despite the fact that it
works anyway.
2003-06-20 08:39:04 +00:00
phk
6689b404af Crude but efficient:
Under #ifdef DIAGNOSTIC, hold a mutex while calling callouts so that
we hear about it if they sleep.
2003-06-20 08:07:15 +00:00
obrien
3b8fff9e4c Use __FBSDID().
2003-06-11 00:56:59 +00:00
phk
a9b8284f73 Add instrumentation which tells us how much work softclock() does
per invocation.
2003-06-04 05:25:58 +00:00
phk
2f5b4a19f3 Under DIAGNOSTIC, only report expensive timeouts if they are more expensive
than the last one we reported.
2003-02-01 10:06:40 +00:00
phk
d5001c9818 Fix a format buglet.
Spotted by:	iedowse
2002-09-05 11:42:03 +00:00
phk
b1f33fc74e Under DIAGNOSTIC, complain if a timeout(9) routine took more than 1msec.
2002-09-04 20:05:00 +00:00
jhb
db9aa81e23 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
alfred
357e37e023 Remove __P.
2002-03-19 21:25:46 +00:00
dillon
abe30f58d8 Move most of the kernel submap initialization code, including the
timeout callwheel and buffer cache, out of the platform specific areas
and into the machine independent area.  i386 and alpha adjusted here.
Other cpus can be fixed piecemeal.

Reviewed by:    freebsd-smp, jake
2001-08-22 04:07:27 +00:00
jhb
22915435df Change callout_stop() to return an integer. If callout_stop() succeeds in
removing the callout entry, return 1.  If callout_stop() fails to remove
the callout entry because it is currently executing or has already been
executed, then the function returns 0.  The idea was obtained from BSD/OS,
however, BSD/OS changed untimeout(), and I've just changed callout_stop()
to be more conservative.

Obtained from:	BSD/OS
2001-08-10 21:06:59 +00:00
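
A sketch of checking the new return value (names hypothetical):

    /* 1: the pending callout was removed before it could run.
     * 0: it was already running, had already run, or was never set. */
    if (callout_stop(&sc->sc_co) == 0) {
            /* The handler may still be executing; don't tear down
             * state it uses until it is known to have finished. */
    }
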
jhb
89b722b37f Axe spl's obsoleted by the callout mutex.
2001-08-10 01:36:25 +00:00
jhb
b47bfbe544 Catch up to header include changes:
- <sys/mutex.h> now requires <sys/systm.h>
- <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
2001-03-28 09:17:56 +00:00