Matthew D Fleming
13434232a6 Remove the uio_yield prototype and symbol. This function has been
misnamed since it was introduced and should not be globally exposed
with this name.  The equivalent functionality is now available using
kern_yield(curthread->td_user_pri).  The function remains
undocumented.
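
A minimal sketch of the replacement, assuming a hypothetical drain loop
(drain_work(), more_work(), and do_one_unit() are illustrative names, not
part of this change):

	#include <sys/param.h>
	#include <sys/proc.h>
	#include <sys/systm.h>

	static int more_work(void);	/* hypothetical predicate */
	static void do_one_unit(void);	/* hypothetical work item */

	static void
	drain_work(void)
	{
		while (more_work()) {
			do_one_unit();
			/* was: uio_yield(); */
			kern_yield(curthread->td_user_pri);
		}
	}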

Bump __FreeBSD_version.
2011-02-08 00:36:46 +00:00
Matthew D Fleming
e7ceb1e99b Based on discussions on the svn-src mailing list, rework r218195:
- entirely eliminate some calls to uio_yield() as being unnecessary,
   such as in a sysctl handler.

 - move should_yield() and maybe_yield() to kern_synch.c and move the
   prototypes from sys/uio.h to sys/proc.h

 - add a slightly more generic kern_yield() that can replace the
   functionality of uio_yield().

 - replace source uses of uio_yield() with the functional equivalent,
   or in some cases do not change the thread priority when switching.

 - fix a logic inversion bug in vlrureclaim(), pointed out by bde@.

 - instead of using the per-cpu last switched ticks, use a per thread
   variable for should_yield().  With PREEMPTION, the only reasonable
   use of this is to determine if a lock has been held a long time and
   relinquish it.  Without PREEMPTION, this is essentially the same as
   the per-cpu variable.
2011-02-08 00:16:36 +00:00
Konstantin Belousov
6f9ec5aab0 Clear the padding when returning a context to usermode, for the
MI ucontext_t and x86 MD parts.
The kernel allocates the structures on the stack, and not clearing
reserved fields and padding causes kernel stack data to leak.
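
An illustrative sketch of the idea, not the committed diff
(export_context() is a hypothetical helper): zero the stack-allocated
structure before filling it, so padding never carries stale stack bytes:

	#include <sys/param.h>
	#include <sys/proc.h>
	#include <sys/systm.h>
	#include <sys/ucontext.h>

	static int
	export_context(struct thread *td, ucontext_t *ucp_user)
	{
		ucontext_t uc;

		bzero(&uc, sizeof(uc));		/* clears padding too */
		get_mcontext(td, &uc.uc_mcontext, 0);
		/* ... fill uc_sigmask, uc_stack, etc. ... */
		return (copyout(&uc, ucp_user, sizeof(uc)));
	}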

Noted and discussed with:	bde
MFC after:	2 weeks
2011-02-05 15:10:27 +00:00
John Baldwin
f7488600c0 Always assert that the turnstile chain lock is held in turnstile_wait()
and remove a duplicate hash lookup.

MFC after:	1 week
2011-02-04 14:16:41 +00:00
Alan Cox
8189ac85e9 Eliminate unnecessary page hold_count checks. These checks predate
r90944, which introduced a general mechanism for handling the freeing
of held pages.

Reviewed by:	kib@
2011-02-03 14:42:46 +00:00
Matthew D Fleming
08b163fa51 Put the general logic for detecting a CPU hog into a new
function, should_yield().  Use it in various places.  Encapsulate the
common check-and-yield case in a new function, maybe_yield().

Change several checks for a magic number of iterations to use
should_yield() instead.
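
A sketch of the new pattern, assuming a hypothetical table scan
(scan_table(), struct entry, and examine() are illustrative; at this
point the prototypes lived in sys/uio.h, moving to sys/proc.h later):

	#include <sys/param.h>
	#include <sys/uio.h>

	struct entry {			/* hypothetical table entry */
		int	e_state;
	};

	static void
	examine(struct entry *e)	/* hypothetical per-entry work */
	{
		e->e_state = 0;
	}

	static void
	scan_table(struct entry *tbl, int n)
	{
		int i;

		for (i = 0; i < n; i++) {
			examine(&tbl[i]);
			/* was: if ((i % 1024) == 0) uio_yield(); */
			maybe_yield();
		}
	}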

MFC after:	1 week
2011-02-02 16:35:10 +00:00
Konstantin Belousov
f7780c61e7 The unp_gc() function drops and reacquires the lock between the
scan and collect phases.  The unp_discard() function executes
unp_externalize_fp(), which might make the socket eligible for gc-ing,
and then, later, taskqueue will close the socket.  Since unp_gc()
dropped the list lock to do the malloc, close might happen after the
mark step but before the collection step, causing collection to not
find the socket and miss one array element.

I believe that the race was there before r216158, but the stated
revision made the window much wider by postponing the close to
taskqueue sometimes.

Only process as many array elements as sockets we find during the
second phase of the gc [1].  Take the linkage lock and recheck the
eligibility of the socket for gc, and call fhold() under the linkage lock.

Reported and tested by:	jmallett
Submitted by:   jmallett [1]
Reviewed by:	rwatson, jeff (possibly)
MFC after:	1 week
2011-02-01 13:33:49 +00:00
Konstantin Belousov
9ca9fc5380 If more than one thread has allocated sf buffers for sendfile(2),
and each of the threads needs more while the current pool of buffers is
exhausted, then neither thread can make progress.

Switch to nowait allocations once the first buffer has been obtained.
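
A hedged sketch of the strategy (get_one_sf_buf() and nbufs_held are
hypothetical): sleep only while no buffer is held, otherwise allocate
with SFB_NOWAIT so two threads exhausting the pool cannot deadlock
each other:

	#include <sys/param.h>
	#include <sys/sf_buf.h>
	#include <vm/vm.h>
	#include <vm/vm_page.h>

	static struct sf_buf *
	get_one_sf_buf(vm_page_t pg, int nbufs_held)
	{
		int flags;

		flags = (nbufs_held > 0) ? SFB_NOWAIT : SFB_CATCH;
		/* NULL here means: send what we have, then retry. */
		return (sf_buf_alloc(pg, flags));
	}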

Reported by:	az
Reviewed by:	alc (previous version)
Tested by:	pho
MFC after:	1 week
2011-01-28 17:37:09 +00:00
Jilles Tjoelker
90750179ec Do not trip a KASSERT if /dev/null cannot be opened for a setuid program.
The fdcheckstd() function makes sure fds 0, 1 and 2 are open by opening
/dev/null. If this fails (e.g. missing devfs or wrong permissions),
fdcheckstd() will return failure and the process will exit as if it received
SIGABRT. The KASSERT is only to check that kern_open() returns the expected
fd, given that it succeeded.

Tripping the KASSERT is most likely if fd 0 is open but fd 1 or 2 are not.
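
A sketch of the intended invariant, not the exact fdcheckstd() code
(open_null_at() and expected_fd are hypothetical): assert on the
returned fd only after kern_open() has succeeded:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/fcntl.h>
	#include <sys/proc.h>
	#include <sys/syscallsubr.h>

	static int
	open_null_at(struct thread *td, int expected_fd)
	{
		int error;

		error = kern_open(td, "/dev/null", UIO_SYSSPACE, O_RDWR, 0);
		if (error != 0)
			return (error);	/* process exits as if on SIGABRT */
		KASSERT(td->td_retval[0] == expected_fd,
		    ("kern_open() returned unexpected fd"));
		return (0);
	}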

MFC after:	2 weeks
2011-01-28 15:29:35 +00:00
Matthew D Fleming
00f0e671ff Explicitly wire the user buffer rather than doing it implicitly in
sbuf_new_for_sysctl(9).  This allows using an sbuf with a SYSCTL_OUT
drain for extremely large amounts of data where the caller knows that
appropriate references are held, and sleeping is not an issue.
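
A sketch of a handler written against the now-explicit API (the OID and
payload are hypothetical): wire the user buffer first, then drain
through the sbuf:

	#include <sys/param.h>
	#include <sys/kernel.h>
	#include <sys/sbuf.h>
	#include <sys/sysctl.h>

	static int
	sysctl_dump_state(SYSCTL_HANDLER_ARGS)
	{
		struct sbuf sb;
		int error;

		error = sysctl_wire_old_buffer(req, 0); /* explicit now */
		if (error != 0)
			return (error);
		sbuf_new_for_sysctl(&sb, NULL, 128, req);
		sbuf_printf(&sb, "example payload\n"); /* hypothetical */
		error = sbuf_finish(&sb);
		sbuf_delete(&sb);
		return (error);
	}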

Inspired by:	rwatson
2011-01-27 00:34:12 +00:00
Matthew D Fleming
73d6f8516d Remove the CTLFLAG_NOLOCK as it seems to be both unused and
non-functional.  Wiring the user buffer has only been done explicitly
since r101422.

Mark the kern.disks sysctl as MPSAFE since it is and it seems to have
been mis-using the NOLOCK flag.

Partially break the KPI (but not the KBI) for the sysctl_req 'lock'
field since this member should be private and the "REQ_LOCKED" state
seems meaningless now.
2011-01-26 22:48:09 +00:00
Dmitry Chagin
a5c1afadeb Add a macro to test the sv_flags of any process. Change some
places to test the flags instead of explicitly comparing against the
addresses of known sysentvec structures.
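
A sketch of the kind of macro this adds and how a call site changes
(treat the names and placement as illustrative):

	#include <sys/param.h>
	#include <sys/proc.h>
	#include <sys/sysent.h>

	/* Illustrative, after the style of sys/sysent.h: */
	#define	SV_PROC_FLAG(p, x)	((p)->p_sysent->sv_flags & (x))
	#define	SV_CURPROC_FLAG(x)	SV_PROC_FLAG(curproc, x)

	static int
	proc_is_32bit(struct proc *p)
	{
		/* Old style: p->p_sysent == &some_known_sysvec. */
		return (SV_PROC_FLAG(p, SV_ILP32) != 0);
	}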

MFC after:	1 month
2011-01-26 20:03:58 +00:00
Konstantin Belousov
dbccdf7684 When vtruncbuf() iterates over the vnode buffer list, lock buffer object
before checking the validity of the next buffer pointer. Otherwise, the
buffer might be reclaimed after the check, causing the iteration to run
into the wrong buffer.

Reported and tested by:	pho
MFC after:	1 week
2011-01-25 14:04:02 +00:00
Konstantin Belousov
6fa39a7327 Allow a debugger to specify that children of the traced process
should be automatically traced. Extend ptrace(PT_LWPINFO) to report that a
child has just been forked.
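
A userland sketch of how a debugger might use this (the stop-handling
function is hypothetical; PT_FOLLOW_FORK would normally be set once
after attach):

	#include <sys/types.h>
	#include <sys/ptrace.h>

	static void
	handle_stop(pid_t pid)
	{
		struct ptrace_lwpinfo pl;

		ptrace(PT_FOLLOW_FORK, pid, (caddr_t)0, 1);
		ptrace(PT_LWPINFO, pid, (caddr_t)&pl, sizeof(pl));
		if (pl.pl_flags & PL_FLAG_FORKED) {
			/* pl.pl_child_pid: the new, already-traced child */
		}
	}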

Reviewed by:	davidxu, jhb
MFC after:	2 weeks
2011-01-25 10:59:21 +00:00
Jaakko Heinonen
1a4fbae871 Replace spaces with tabs. 2011-01-24 17:08:26 +00:00
Sergey Kandaurov
4053b05b91 Make the MSGBUF_SIZE kernel option a loader tunable, kern.msgbufsize.
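For example, in /boot/loader.conf (the value below is illustrative, not
a recommendation):

	# size the kernel message buffer at boot
	kern.msgbufsize="131072"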
Submitted by:	perryh pluto.rain.com (previous version)
Reviewed by:	jhb
Approved by:	kib (mentor)
Tested by:	universe
2011-01-21 10:26:26 +00:00
Matthew D Fleming
cbc134ad03 Introduce signed and unsigned versions of CTLTYPE_QUAD, renaming
existing uses.  Rename sysctl_handle_quad() to sysctl_handle_64().
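
A sketch of a 64-bit node after the rename (the OID and counter are
hypothetical): CTLTYPE_U64/CTLTYPE_S64 replace CTLTYPE_QUAD, handled by
sysctl_handle_64():

	#include <sys/param.h>
	#include <sys/kernel.h>
	#include <sys/sysctl.h>

	static uint64_t example_count;

	SYSCTL_PROC(_kern, OID_AUTO, example_count,
	    CTLTYPE_U64 | CTLFLAG_RD, &example_count, 0,
	    sysctl_handle_64, "QU", "Hypothetical 64-bit counter");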
2011-01-19 23:00:25 +00:00
Matthew D Fleming
2fee06f087 Specify a CTLTYPE_FOO so that a future sysctl(8) change does not need
to rely on the format string.
2011-01-18 21:14:18 +00:00
John Baldwin
2dc29adb9f Rework realtime priority support:
- Move the realtime priority range up above kernel sleep priorities and
  just below interrupt thread priorities.
- Contract the interrupt and kernel sleep priority ranges a bit so that
  the timesharing priority band can be increased.  The new timeshare range
  is now slightly larger than the old realtime + timeshare ranges.
- Change the ULE scheduler to no longer use realtime priorities for
  interactive threads.  Instead, the larger timeshare range is now split
  into separate subranges for interactive and non-interactive ("batch")
  threads.  The end result is that interactive threads and non-interactive
  threads still use the same priority ranges as before, but realtime
  threads now have a separate, dedicated priority range.
- Do not modify the priority of non-timeshare threads in sched_sleep()
  or via cv_broadcastpri().  Realtime and idle priority threads will
  no longer have their priorities affected by sleeping in the kernel.

Reviewed by:	jeff
2011-01-14 17:06:54 +00:00
Matthew D Fleming
52c0b557cc One more sysctl(9) type-safety fix that I missed before. 2011-01-13 18:20:37 +00:00
Matthew D Fleming
240577c2a7 Fix up a few more sysctl(9) mis-typings found in various LINT builds. 2011-01-13 18:20:27 +00:00
John Baldwin
12d56c0f63 Introduce two new helper macros to define the priority ranges used for
interactive timeshare threads (PRI_*_INTERACTIVE) and non-interactive
timeshare threads (PRI_*_BATCH) and use these instead of PRI_*_REALTIME
and PRI_*_TIMESHARE.  No functional change.
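
An illustrative sketch of the split; the real definitions and the size
of the interactive subrange live in the scheduler sources, and the
range constant below is an assumption:

	#include <sys/priority.h>

	#define	PRI_INTERACT_RANGE	28	/* assumed size */
	#define	PRI_MIN_INTERACTIVE	(PRI_MIN_TIMESHARE)
	#define	PRI_MAX_INTERACTIVE	\
		(PRI_MIN_INTERACTIVE + PRI_INTERACT_RANGE - 1)
	#define	PRI_MIN_BATCH		\
		(PRI_MIN_INTERACTIVE + PRI_INTERACT_RANGE)
	#define	PRI_MAX_BATCH		(PRI_MAX_TIMESHARE)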

Reviewed by:	jeff
2011-01-13 14:22:27 +00:00
Matthew D Fleming
fbbb13f962 sysctl(9) cleanup checkpoint: amd64 GENERIC builds cleanly.
Commit the kernel changes.
2011-01-12 19:54:19 +00:00
John Baldwin
d330520523 - Retire some unused ithread priorities: PI_TTYHIGH, PI_TAPE, and
PI_DISKLOW.  While here, rename PI_TTYLOW to PI_TTY.
- Add a macro PI_SWI() that takes a SWI_* constant as an argument and
  returns the suitable thread priority.
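
A sketch of the macro's shape (the exact definition in <sys/priority.h>
may differ): each SWI_* level maps onto one run-queue slot above
PI_SOFT:

	#include <sys/priority.h>
	#include <sys/runq.h>
	#include <sys/interrupt.h>

	#define	PI_SWI(x)	(PI_SOFT + (x) * RQ_PPQ)

	/* e.g., a network swi thread would run at PI_SWI(SWI_NET). */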
2011-01-11 22:15:30 +00:00
John Baldwin
c9a8cba456 Always use PRI_BASE() when checking the base type of a thread's priority
class.
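
A minimal sketch of the idiom (maybe_adjust() is a hypothetical
caller); PRI_BASE() strips modifier bits, leaving the scheduling class:

	#include <sys/param.h>
	#include <sys/proc.h>
	#include <sys/sched.h>

	static void
	maybe_adjust(struct thread *td, u_char prio)
	{
		if (PRI_BASE(td->td_pri_class) == PRI_TIMESHARE)
			sched_prio(td, prio);
	}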

MFC after:	2 weeks
2011-01-11 22:13:19 +00:00
John Baldwin
58ccf5b41c Remove unneeded includes of <sys/linker_set.h>. Headers that use
it internally already pull it in via nested includes.

Reviewed by:	bde
2011-01-11 13:59:06 +00:00
Lawrence Stewart
5a29e4d24c Fix hhook_head_is_virtualised() so that "ret" can't be used uninitialised.
Sponsored by:	FreeBSD Foundation
Submitted by:	pjd
MFC after:	9 weeks
X-MFC with:	r216615
2011-01-11 01:11:07 +00:00
Lawrence Stewart
188d9a4947 Fix some minor style/readability nits in hhook.
Sponsored by:	FreeBSD Foundation
Submitted by:	pjd
MFC after:	9 weeks
X-MFC with:	r216615
2011-01-11 00:29:17 +00:00
John Baldwin
789200082c Fix two harmless off-by-one errors.
Reviewed by:	jeff
MFC after:	2 weeks
2011-01-10 20:48:10 +00:00
Bjoern A. Zeeb
8d12fab9ae Improve style and wording of comments and sysctl descriptions [1].
Move machdep.ct_debug to debug.clocktime as there was no reason to
actually put it under machdep in r216340.

Submitted by:	bde [1]
MFC after:	3 days
2011-01-09 14:34:56 +00:00
Nathan Whitehorn
083cfea1ee Make RB_CDROM work. This should probably check for a disc in cd1 and acd1
as well.
2011-01-08 19:50:13 +00:00
Attilio Rao
08e4ac8ad6 Revert r216805.
That revision introduces a bug which is more visible than the problems
it tries to fix.

As my time is very limited in this period, I will commit this patch
again once it is fully fixed.

Reported by:	dim, Nicholas Esborn
2011-01-08 18:51:15 +00:00
Konstantin Belousov
26d8f3e11d Use the same expression to report stack protection mode for AT_STACKEXEC
as the expression used by exec_new_vmspace().
2011-01-08 18:41:19 +00:00
Konstantin Belousov
291c06a127 In the elf image activator, read and apply the stack protection
mode from the PT_GNU_STACK program header, if present and enabled. Two new
sysctls are provided, kern.elf32.nxstack and kern.elf64.nxstack, that
allow enabling PT_GNU_STACK handling for ABIs of the specified bitsize,
provided the ABI supports the shared page.

Inform rtld about the access mode of the initial stack mapping via the
AT_STACKPROT aux vector.

At the moment the default is disabled, waiting for the usermode
support bits.
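
Hypothetical usage once the usermode bits land:

	# enable PT_GNU_STACK handling per ABI bitsize
	sysctl kern.elf64.nxstack=1
	sysctl kern.elf32.nxstack=1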
2011-01-08 16:30:59 +00:00
Konstantin Belousov
6297a3d843 Create a shared (read-only) page. Each ABI may specify the use of
the page by setting the SV_SHP flag and providing a pointer to the vm
object and the mapping address. Provide a simple allocator to carve space
out of the page, tailored to placing code with alignment restrictions.

Enable shared page use for amd64, both native and 32-bit FreeBSD
binaries.  The page is privately mapped at the top of the user address
space, moving the start of the stack one page down. Move the signal
trampoline code from the top of the stack to the shared page.

Reviewed by:	 alc
2011-01-08 16:13:44 +00:00
Konstantin Belousov
ed167eaa80 Collect code to translate between vm_prot_t and p_flags into helper
functions.
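
A sketch of one direction of the translation (the helper name is
illustrative; p_flags here are the ELF segment flags):

	#include <sys/types.h>
	#include <sys/elf_common.h>
	#include <vm/vm.h>

	static vm_prot_t
	elf_flags_to_prot(u_int flags)
	{
		vm_prot_t prot = 0;

		if (flags & PF_X)
			prot |= VM_PROT_EXECUTE;
		if (flags & PF_W)
			prot |= VM_PROT_WRITE;
		if (flags & PF_R)
			prot |= VM_PROT_READ;
		return (prot);
	}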

MFC after:	1 week
2011-01-08 16:02:14 +00:00
John Baldwin
fd05807822 - Properly initialize the base priority (td_base_pri) of thread0 to PVM
to match the desired priority in td_priority.  Otherwise the first time
  thread0 used a borrowed priority it would drop down to PUSER instead of
  PVM.
- Explicitly initialize the starting priority of new kprocs to PVM to
  avoid inheriting some random priority from thread0.

MFC after:	2 weeks
2011-01-06 22:26:00 +00:00
John Baldwin
22d19207e9 - Move sched_fork() later in fork() after the various sections of the new
thread and proc have been copied and zeroed from the old thread and
  proc.  Otherwise attempts to modify thread or process data in sched_fork()
  could be undone.
- Don't copy td_{base,}_user_pri from the old thread to the new thread in
  sched_fork_thread() in ULE.  This is already done courtesy of the bcopy()
  of the thread copy region.
- Always initialize the real priority (td_priority) of new threads to the
  new thread's base priority (td_base_pri) to avoid bogusly inheriting a
  borrowed priority from the parent thread.

MFC after:	2 weeks
2011-01-06 22:24:00 +00:00
John Baldwin
177499ebcc Only change the priority of timeshare threads to PRI_MAX_TIMESHARE
when yield() is called.  Specifically, leave the priority of real time
and idle threads unchanged.

MFC after:	2 weeks
2011-01-06 22:19:15 +00:00
John Baldwin
a8f4344f08 - Restore dropping the priority of syncer down to PPAUSE when it is idle.
This was lost when it was converted to using a condition variable instead
  of lbolt.
- Drop the priority of flowtable down to PPAUSE when it is idle as well
  since it is a similar background task.

MFC after:	2 weeks
2011-01-06 22:17:07 +00:00
John Baldwin
6226ec3ef8 Retire PCONFIG and leave the priority of thread0 alone when waiting for
interrupt config hooks to execute.
2011-01-06 22:09:37 +00:00
Edward Tomasz Napierala
7b956487e9 Fix a page fault that occurred when trying to initialize a
preloaded kernel module whose dependency was preloaded but failed to
initialize.  Previously, the kernel dereferenced the NULL pointer returned
by modlist_lookup2(); now, when this happens, we unload the dependent
module.  Since the depended_files list is sorted in dependency order, this
properly propagates, unloading modules that depend on failed ones.

From the user's point of view, this prevents the kernel from panicking
when trying to boot a kernel compiled without KDTRACE_HOOKS with
dtraceall_load="YES" in /boot/loader.conf.

Reviewed by:	kib
2011-01-05 09:58:41 +00:00
John Baldwin
a5a07ded82 kproc_exit() is already marked __dead2 so a NOTREACHED comment here isn't
needed for lint.

Submitted by:	bde
2011-01-04 13:16:28 +00:00
Konstantin Belousov
23b70c1ae2 Finish r210923, 210926. Mark some devices as eternal.
MFC after:	2 weeks
2011-01-04 10:59:38 +00:00
John Baldwin
547ffb85d9 Fix small whitespace nits and re-add a comment, lost earlier,
explaining why kthread_exit() can call kproc_exit().
2011-01-03 16:29:00 +00:00
Edward Tomasz Napierala
3e73ff1e94 Finishing touches to fork1(): ANSIfy a missed function definition,
style(9) fixes, removal of a few comments that didn't really make sense,
and addition of fork_findpid() locking requirements.
2011-01-02 12:16:57 +00:00
Bjoern A. Zeeb
5cc703974c Mfp4 CH177924:
Add and export constants for the array sizes of jail parameters as
compiled into the kernel.
This is the least intrusive way to allow kvm to read the (sparse)
arrays independently of the options the kernel was compiled with.

Reviewed by:	jhb (originally)
MFC after:	1 week
Sponsored by:	The FreeBSD Foundation
Sponsored by:	CK Software GmbH
2010-12-31 22:49:13 +00:00
Konstantin Belousov
50cfe7fa50 Remove the OBJ_CLEANING flag.  The vfs_setdirty_locked_object()
is the only consumer of the flag, and it used the flag because
OBJ_MIGHTBEDIRTY was cleared early in vm_object_page_clean(), before the
cleaning pass was done.  This is no longer true after r216799.

Moreover, since OBJ_CLEANING is a flag and not a counter, it could be
reset prematurely when parallel vm_object_page_clean() calls are
performed.

Reviewed by:	alc (as a part of the bigger patch)
MFC after:	1 month (after r216799 is merged)
2010-12-29 22:26:49 +00:00
Attilio Rao
3d7acbbabf Fix several callout migration races:
- Problem1:
   Hypothesis: thread1 is doing a callout_reset_on() within its
   callout handler, willing to implicitly or explicitly migrate the
   callout.  thread2 is draining the callout.

   Thesis:
   * thread1 calls callout_lock() and locks the old callout cpu
   * thread1 performs the checks in the first path of the
     callout_reset_on()
   * thread1 hits this codepiece:
       /*
        * If the lock must migrate we have to check the state again as
        * we can't hold both the new and old locks simultaneously.
        */
       if (c->c_cpu != cpu) {
               c->c_cpu = cpu;
               CC_UNLOCK(cc);
               goto retry;
       }

     which means it will drop the lock and 'retry'
   * thread2 calls callout_lock() and locks the new callout cpu.
     thread1 spins on the new lock and makes no progress for the
     moment.
   * thread2 checks that the callout is not pending (as the callout is
     currently running) and that it is not on cc->cc_curr (because cc
     now refers to the new callout cpu while the callout is running on
     the old one), thus it thinks it is done and returns.
   * thread1 now acquires the lock and adds the callout to the new
     callout cpu queue

   This is an obvious race: callout_stop() falsely reports the callout
   stopped, or worse, callout_drain() falsely returns while the callout
   is still in use.
 - Solution1:
   Fixing this problem would require, in general, locking both
   callout cpus at once while switching the c_cpu field and avoiding
   cyclic deadlocks between the callout cpu locks.
   The concept of CPUBLOCK is then introduced (working more or less
   like blocked_lock for the thread_lock() function), meaning:
   "in callout_lock(), spin until c->c_cpu is no longer CPUBLOCK".
   That way the "original" callout cpu, referred to in the code snippet
   above, remains blocked until the lock handover is complete and the
   critical path remains covered (see the sketch after this list).

 - Problem2:
   Having the callout currently executing on a specific callout cpu
   while simultaneously pending on another callout cpu (as can happen
   with the current code) breaks, at least, the assumption that
   callout_drain() returns only once the callout can no longer be
   referenced.
 - Solution2:
   Callout migration is deferred if the current callout is already
   under execution.
   The best place to do that is in softclock(), and new members are
   added to the callout cpu structure in order to record that a
   migration is pending.  That is necessary because the callout cannot
   be trusted not to have been freed after the execution of the callout
   handler.
   CPUBLOCK prevents, in the "deferred migration" case, the callout
   from being freed, stopping any callout_stop() and callout_drain()
   activity until the migration is actually performed.

 - Problem3:
   There is a further race in callout_drain().
   In order to avoid a race between the sleepqueue lock and the callout
   cpu spinlock, in _callout_stop_safe() the callout cpu lock is
   dropped, the sleepqueue lock is acquired and a new callout cpu
   lookup is performed.  Note that the channel used for locking the
   sleepqueue is obtained from the "current" callout cpu
   (&cc->cc_waiting).
   If the callout migrated in the meantime, callout_drain() ends up
   using the wrong wchan for the sleepqueue (the locked one is the old
   one, while the new one is not actually locked), leading to a lock
   leak and racy access to the sleepqueue.
 - Solution3:
   It is enough to check whether a migration happened between acquiring
   the sleepqueue lock and the new callout cpu lock, and if so, unwind
   both and try again.
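
A sketch of the CPUBLOCK handover described in Solution1, modeled on
the kern_timeout.c internals (CC_CPU()/CC_LOCK() and struct callout_cpu
are private to that file; treat this as illustrative, not the exact
diff):

	#include <sys/param.h>
	#include <sys/callout.h>

	#define	CPUBLOCK	MAXCPU	/* sentinel: c_cpu is mid-handover */

	static struct callout_cpu *
	callout_lock(struct callout *c)
	{
		struct callout_cpu *cc;
		int cpu;

		for (;;) {
			cpu = c->c_cpu;
			if (cpu == CPUBLOCK) {
				/* Migration in progress; wait it out. */
				while (c->c_cpu == CPUBLOCK)
					cpu_spinwait();
				continue;
			}
			cc = CC_CPU(cpu);
			CC_LOCK(cc);
			if (cpu == c->c_cpu)
				break;	/* c_cpu stable under its lock */
			CC_UNLOCK(cc);	/* migrated meanwhile; retry */
		}
		return (cc);
	}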

These problems can lead to deadly races on moderate (4-way) SMP
environments, leading to easy panics or deadlocks.
The reporter's 24-way machine could panic almost daily under a
completely normal workload.
gianni@ kindly wrote the following proof-of-concept, which can
panic a FreeBSD machine in less than one hour on smaller SMP systems:
http://www.freebsd.org/~attilio/callout/test.c

Reported by:	Nicholas Esborn <nick at desert dot net>, DesertNet
In collaboration with:	gianni, pho, Nicholas Esborn
Reviewed by:	jhb
MFC after:	1 week (*)

* Usually, I would aim for a larger MFC timeout, but I really want this
  in before 8.2-RELEASE, thus re@ accepted a shorter timeout as a special
  case for this patch
2010-12-29 18:17:36 +00:00
David Xu
c8e368a933 - Following r216313, sched_unlend_user_prio() is no longer needed;
always use sched_lend_user_prio() to set the lent priority.
- Improve pthread priority-inherit mutexes: when a contender's priority is
  lowered, repropagate priorities; this may cause the mutex owner's priority
  to be lowered.  In the old code, the mutex owner's priority was raise-only.
2010-12-29 09:26:46 +00:00