KERN_PROC_SESSION option which had been previously defined but
never implemented.
PR: bin/65803 (a very tiny piece of the PR)
Submitted by: Cyrille Lefevre
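As an illustration of what the now-implemented option allows, here is a minimal
userland sketch; the list_session() name and the two-pass sizing idiom are
assumptions, and error handling is kept to a minimum:

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/user.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* List the processes belonging to session `sid' via kern.proc. */
    static void
    list_session(pid_t sid)
    {
        int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_SESSION, (int)sid };
        struct kinfo_proc *kp;
        size_t len, i;

        if (sysctl(mib, 4, NULL, &len, NULL, 0) == -1)
            return;
        if ((kp = malloc(len)) == NULL)
            return;
        if (sysctl(mib, 4, kp, &len, NULL, 0) == 0)
            for (i = 0; i < len / sizeof(*kp); i++)
                printf("%ld %s\n", (long)kp[i].ki_pid, kp[i].ki_comm);
        free(kp);
    }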
of not clearing the flags for the execv() syscall will result in a new
program running in KSE thread mode without enabling it.
Submitted by: tjr
Modified by: davidxu
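A minimal sketch of the idea; P_KSE_MODE is a placeholder for the real flag,
which this log does not name, and the function is illustrative:

    /*
     * Illustrative only: clear the threading mode on exec so the new
     * image starts as an ordinary single-threaded program.  P_KSE_MODE
     * is a placeholder, not the real kernel flag.
     */
    static void
    example_exec_clear_thread_mode(struct proc *p)
    {
        PROC_LOCK(p);
        p->p_flag &= ~P_KSE_MODE;
        PROC_UNLOCK(p);
    }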
threatened in rev.1.10 of usr.sbin/kgmon/kgmon.c more than 2 years ago.
kgmon has been recovering from the missing initialization for too
long, but the fixup there is ifdefed for i386's and shouldn't be
needed for other arches.
high-resolution kernel profiling (options GUPROF; the "U" in GUPROF stands
for microseconds resolution, but the resolution is now smaller than 1
nanosecond on multi-GHz machines and the accuracy is heading towards
1 nanosecond too). Arches that support GUPROF must now provide certain
macros for the calibration. GUPROF is now only supported for i386's,
so the absence of the new macros for other arches doesn't break anything
that wasn't already broken. amd64's have uncommitted support for
GUPROF, and sparc64's have support that seems to be complete except
here (there was an #error for non-i386 cases; now there are undefined
macros).
Changed the asms a little (see the sketch after this list):
- declare them as __volatile. They must not be moved, and exporting a
label across asms is technically incorrect, so try harder to stop gcc
moving them.
- don't put the non-clobbered register "bx" in the clobber list. The
clobber lists are still more conservative than necessary.
- drop the non-support for gcc-1. It just gave a better error message,
and this is not useful since compiling with gcc-1 would cause thousands
of worse error messages.
- drop the support for aout.
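A hedged illustration of the first two points; the label and function names
are made up and this is not the committed code:

    /*
     * The first pair of asms exports a label from one statement and
     * consumes it in another; both are __volatile so gcc may not move,
     * drop, or duplicate them.  The second example keeps the clobber
     * list minimal: rdtsc only writes %eax and %edx, so %ebx is
     * deliberately not listed.
     */
    static u_long
    label_address(void)
    {
        u_long addr;

        __asm __volatile(".globl __example_label\n__example_label:");
        __asm __volatile("movl $__example_label, %0" : "=r" (addr));
        return (addr);
    }

    static __inline u_long
    read_tsc_low(void)
    {
        u_long lo;

        __asm __volatile("rdtsc" : "=a" (lo) : : "edx");
        return (lo);
    }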
of kmupetext(). The declaration is misplaced in <machine/profile.h>
since it is not MD and not related to the lowest level of profiling.
It will be moved, but getting it via <sys/gmon.h> already works.
and cannot handle it going away, add an explicit reference to the kobj
class inside each linker class. Without this, a class with no modules
loaded will sit with an idle refcount of 0. Loading and unloading
a module with it causes a 0->1->0 transition which frees the ops table
and causes subsequent loads using that class to explode. Normally, the
"kernel" module will remain forever loaded and prevent this happening, but
if you have more than one linker class active, only one owns the "kernel".
This finishes making modules work for kldload(8) on amd64.
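A rough sketch of the failure mode and the fix, using made-up structure and
function names rather than the real kobj interfaces:

    /*
     * Without the extra reference, a single load/unload drives refs
     * 0 -> 1 -> 0 and the compiled ops table is freed, breaking the
     * next load that uses the class.  Holding one long-lived reference
     * from the linker class keeps the count from ever reaching 0.
     */
    struct example_class {
        int     refs;
        void    *ops;       /* compiled ops table, freed at refs == 0 */
    };

    static void
    example_linker_register_class(struct example_class *cls)
    {
        cls->refs++;        /* permanent reference held by the linker class */
    }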
(nobits) tables to simplify some code. Try to shorten some of the very
wide lines. Somewhere along the way, I think I fixed the memory
corruption that caused panics after going multiuser.
elf_reloc() backends for two reasons. First, to support the possibility
of there being two elf linkers in the kernel (eg: amd64), and second, to
pass the relocbase explicitly (for relocating .o format kld files).
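A sketch of the sort of backend interface this implies; the name and exact
parameter list here are assumptions, not the committed prototype:

    int example_elf_reloc(linker_file_t lf, Elf_Addr relocbase,
            const void *data, int type);

For a shared-object kld the caller would pass the file's load address as
relocbase; for a .o-format kld each section supplies its own base, which is
why the backend can no longer derive it by itself.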
is "void *" (it isn't) or that the default promotion of pid_t is int.
Instead, assume that casting "struct foo *" to "void *" and printing the
result with %p is useful, and that all pid_t's are representable as longs.
Fixed some minor style bugs (mainly spelling errors in comments).
removes a specific thread from a sleep queue. sleepq_resume_thread()
resumes scheduling of a thread that has been previously removed from a
sleep queue.
- sleepq_catch_signals() removes a thread from the queue it was just
added to when a pending signal is found.
- sleepq_signal() and sleepq_broadcast() remove threads from a queue,
drop the queue lock, and then resume all the previously removed threads.
This doesn't completely fix the sched_lock <-> sleepq chain LOR, but it
makes it a little better as we no longer call setrunnable() with a sleep
queue lock held, meaning that if setrunnable() tries to wake up the swapper
we don't try to lock two sleep queue chains at the same time (see the
sketch below).
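A hedged sketch of that pattern, using made-up structure and field names
rather than the real sleep queue internals:

    /*
     * Threads are unlinked while the sleep queue lock is held, the lock
     * is dropped, and only then is setrunnable() called, so sched_lock
     * and the queue lock are never held at the same time here.
     */
    static void
    example_sleepq_broadcast(struct example_sleepq *sq)
    {
        TAILQ_HEAD(, thread) awake = TAILQ_HEAD_INITIALIZER(awake);
        struct thread *td;

        mtx_lock(&sq->sq_lock);
        while ((td = TAILQ_FIRST(&sq->sq_blocked)) != NULL) {
            TAILQ_REMOVE(&sq->sq_blocked, td, td_slpq);
            TAILQ_INSERT_TAIL(&awake, td, td_slpq);
        }
        mtx_unlock(&sq->sq_lock);

        mtx_lock_spin(&sched_lock);
        TAILQ_FOREACH(td, &awake, td_slpq)
            setrunnable(td);    /* no sleep queue lock held here */
        mtx_unlock_spin(&sched_lock);
    }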
It's not quite correct from a POSIX point of view, but it is a lot better
than what was there before. This will be revisited later
when we decide what form our priority extensions will take. POSIX doesn't
specify how a system scope thread can change its priority, so you need to
add non-standard extensions to be able to do it.
For now, make this slightly non-standard to allow it to be done.
Submitted by: Dan Eischen originally, changed by myself.
allocation and deallocation. This flag's principal use is shortly after
allocation. For such cases, clearing the flag is pointless. The only
unusual use of PG_ZERO is in vfs_bio_clrbuf(). However, allocbuf() never
requests a prezeroed page. So, vfs_bio_clrbuf() never sees a prezeroed
page.
Reviewed by: tegge@
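The consumer pattern in question, sketched with a hypothetical function name:

    /*
     * A page is only zeroed by hand when the allocator did not already
     * return a prezeroed one, so clearing PG_ZERO after this point buys
     * nothing.
     */
    static void
    example_zero_if_needed(vm_page_t m)
    {
        if ((m->flags & PG_ZERO) == 0)
            pmap_zero_page(m);
    }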
synchronization protecting against dynamic load and unload of MAC
policies, and instead simply blocks load and unload. In a static
configuration, this allows you to avoid the synchronization costs
associated with introducing dynamicism.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, McAfee Research
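A sketch of the trade-off, with placeholder option and macro names standing
in for the real ones:

    /*
     * When the static configuration is chosen, policy load/unload is
     * simply refused, so the busy/unbusy protection on every entry
     * point can compile down to nothing.
     */
    #ifdef EXAMPLE_MAC_STATIC
    #define EXAMPLE_POLICY_BUSY()    do { } while (0)
    #define EXAMPLE_POLICY_UNBUSY()  do { } while (0)
    #else
    #define EXAMPLE_POLICY_BUSY()    mtx_lock(&example_mac_policy_mtx)
    #define EXAMPLE_POLICY_UNBUSY()  mtx_unlock(&example_mac_policy_mtx)
    #endif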
as dependent on binutils features/quirks as the current one. This one
loads plain .o files without having to mess with shared object mode.
This happens to be essential on amd64, because binutils hasn't implemented
all the quirks/features that we need for producing the hack non-PIC shared
objects. As it turned out, .o format isn't all that inconvenient after
all. It looks like the ability to use the same .o files for linking
directly into a static kernel or loading as a module might be worth it.
It is still very much a work-in-progress, but it is almost usable. Other
changes are still needed in order to use it though, these have not been
committed yet. There is still a memory corruption/overrun bug somewhere.
For example, test modules load and work, but the machine explodes a few
minutes later in vm_forkproc() or the like. Notable missing things
include kldxref support, and loader(8) support. I wanted to figure out
a working baseline set of code first.
condition where kse_wakeup() doesn't yet see them in (interruptible)
sleep queues. Also add an upcall check to sleepq_catch_signals()
suggested by jhb.
This commit should fix recent mysql hangs.
Reviewed by: jhb, davidxu
Mysql'd by: Robin P. Blanchard <robin.blanchard at gactr uga edu>
jail, which is less restrictive but allows for more flexible
jail usage (for those who are willing to make the sacrifice).
The default is off, but allowing raw sockets within jails can
now be accomplished by tuning security.jail.allow_raw_sockets
to 1.
Turning this on will allow you to use things like ping(8)
or traceroute(8) from within a jail.
The patch being committed is not identical to the patch
in the PR. The committed version is more friendly to
APIs which pjd is working on, so it should integrate
into his work quite nicely. This change has also been
presented and addressed on the freebsd-hackers mailing
list.
Submitted by: Christian S.J. Peron <maneo@bsdpro.com>
PR: kern/65800
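A rough sketch of the knob and its check; the sysctl name matches the one
above, but the function and variable here are illustrative, not the
committed code:

    /*
     * With the default of 0, jailed processes are refused raw sockets;
     * setting security.jail.allow_raw_sockets=1 lets the check fall
     * through.
     */
    static int jail_allow_raw_sockets = 0;
    SYSCTL_DECL(_security_jail);
    SYSCTL_INT(_security_jail, OID_AUTO, allow_raw_sockets, CTLFLAG_RW,
        &jail_allow_raw_sockets, 0, "Prison root can create raw sockets");

    static int
    example_prison_check_raw(struct ucred *cred)
    {
        if (jailed(cred) && !jail_allow_raw_sockets)
            return (EPERM);
        return (0);
    }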
sched_ule, in January 2004. Looking at this, "pagezero" is (one of) the
culprit(s). We had no provision for processes with P_NOLOAD set. With
pagezero not running at PRI_ITHD, kseq_load_{add,rem} count pagezero as
another-normal-process, thus the "expected-plus-one" load reported in
the above thread.
Submitted by: Nikos Ntarmos <ntarmos@ceid.upatras.gr>
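A hedged sketch of the accounting change this implies, with made-up structure
and field names:

    /*
     * Threads of P_NOLOAD processes such as pagezero still run, but are
     * excluded from the load figure that userland eventually sees.
     */
    static void
    example_load_add(struct example_kseq *kseq, struct thread *td)
    {
        kseq->ksq_load++;                       /* scheduler-internal load */
        if ((td->td_proc->p_flag & P_NOLOAD) == 0)
            kseq->ksq_reported_load++;          /* feeds the load average */
    }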
2. Document that this means that kernel modules must be rebuilt.
3. While I'm here, fix my sorting error in callout.h
Requested by: many [1], scottl [2], bde [3]
the same process as the current thread it makes absolutely
no sense to lock the parent process through the pointer in
said thread.
Submitted by: pho (with minor correction)
Pointy Hat To: mtm
Specifically, we used to enable the source after locking sched_lock
and just before we had already decided to do a context switch.
This meant that an ithread could never process more than one interrupt
per context switch. Enabling earlier in the loop before sched_lock is
acquired allows an ithread to handle multiple interrupts per context
switch if interrupts fire very rapidly. For the case of heavy interrupt
load this can reduce the number of context switches (and thus overhead)
as well as reduce interrupt latency.
- Now that we can handle multiple interrupts per context switch, add simple
interrupt storm protection to threaded interrupts. If X number of
consecutive interrupts are triggered before the ithread voluntarily
yields to another thread, then the interrupt thread will sleep with the
associated interrupt source disabled (masked) for 1/10th of a second.
The default value of X is 500, but it can be tweaked via the tunable/
sysctl hw.intr_storm_threshold. If an interrupt storm is detected, then
a message is output to the kernel console on the first occurrence per
interrupt thread. Interrupt storm protection can be disabled completely
by setting this value to 0. There is no scientific reasoning for the
1/10th of a second or 500 interrupts values, so they may require tweaking
at some point in the future (the loop is sketched after this entry).
Tested by: rwatson (an earlier version w/o the storm protection)
Tested by: mux (reportedly made a machine with two PCI interrupts
storming usable rather than hard locked)
Reviewed by: imp
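A hedged sketch of the storm protection loop; the source structure and
helper functions are assumptions, not the actual ithread code:

    /*
     * Consecutive interrupts handled without a voluntary yield are
     * counted; once the threshold is crossed the source is left masked
     * and the thread sleeps for hz/10 before trying again.
     */
    static int intr_storm_threshold = 500;      /* hw.intr_storm_threshold */

    static void
    example_ithread_loop(struct example_intr_source *is)
    {
        int count = 0;

        for (;;) {
            example_source_enable(is);          /* unmask before handling */
            example_run_handlers(is);
            if (intr_storm_threshold != 0 &&
                ++count >= intr_storm_threshold) {
                if (!is->is_warned) {
                    printf("interrupt storm detected; throttling\n");
                    is->is_warned = 1;
                }
                example_source_disable(is);     /* stay masked while asleep */
                tsleep(is, 0, "istorm", hz / 10);
                count = 0;
                continue;
            }
            if (!example_more_pending(is)) {
                count = 0;                      /* voluntary yield resets it */
                example_sleep_until_interrupt(is);
            }
        }
    }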
unconditionally initialize the mbuf header even if cluster allocation
failed, which could result in a NULL pointer dereference in low-memory
conditions.
PR: kern/65548
Submitted by: Stephan Uphoff <ups@tree.com>
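The defensive ordering the fix implies, sketched with the standard mbuf
macros and a made-up function name:

    static struct mbuf *
    example_get_cluster_mbuf(void)
    {
        struct mbuf *m;

        MGETHDR(m, M_DONTWAIT, MT_DATA);
        if (m == NULL)
            return (NULL);
        MCLGET(m, M_DONTWAIT);
        if ((m->m_flags & M_EXT) == 0) {
            m_freem(m);                         /* cluster allocation failed */
            return (NULL);
        }
        m->m_len = m->m_pkthdr.len = MCLBYTES;  /* safe: cluster attached */
        return (m);
    }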
a LOR against sleepq. Fix the comment, and fix ptracestop() to pick up
sched_lock after stop() rather than before.
Reported by: Scott Sipe <cscotts@mindspring.com>
Reviewed by: rwatson, jhb