lookup vop so that it defaulted to using vop_eopnotsupp for strange
lookups like the ones for open("/dev/null/", ...) and stat("/dev/null/",
...). This mainly caused the wrong errno to be returned by vfs syscalls
(EOPNOTSUPP is not in POSIX, and is not documented in connection with
specfs in open.2 and is not documented in stat.2 at all). Also, lookup
vops are apparently required to set *ap->a_vpp to NULL on error, but
vop_eopnotsupp is too broken to do this.
allocation, as required.
If m_getm() receives NULL as a first argument, then it allocates `len'
(second argument) bytes worth of mbufs + clusters and returns the chain
only if it was able to allocate everything.
If the first argument is non-NULL, then it should be an existing mbuf
chain (e.g. pre-allocated mbuf sitting on a ring, on some list, etc.) and
so it will allocate `len' bytes worth of clusters and mbufs, as needed,
and append them to the tail of the passed in chain, only if it was able
to allocate everything requested.
If allocation fails, only what was allocated by the routine will be freed,
and NULL will be returned.
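A rough usage sketch of the two cases (the wait/type arguments and the error
handling here are illustrative, not the exact prototype):

    struct mbuf *top;

    /* NULL first argument: build a brand new chain covering len bytes. */
    top = m_getm(NULL, len, M_DONTWAIT, MT_DATA);
    if (top == NULL)
        return (ENOBUFS);       /* nothing at all was allocated */

    /* Non-NULL first argument: grow an existing chain by len bytes. */
    if (m_getm(top, len, M_DONTWAIT, MT_DATA) == NULL) {
        /*
         * top is untouched and still owned by the caller; only the
         * routine's own new allocations were freed.
         */
        m_freem(top);
        return (ENOBUFS);
    }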
Also, get rid of the existing m_getm() in the netncp code and replace calls
to it with calls to this new generic code.
Heavily Reviewed by: bp
one the number of variables needed for top and other setgid kmem
utilities that could only be accessed via /dev/kmem previously.
Submitted by: Thomas Moestl <tmoestl@gmx.net>
Reviewed by: freebsd-audit
- All processes go into the same array of queues, with different
scheduling classes using different portions of the array. This
allows user processes to have their priorities propagated up into
interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
32. We used to have 4 separate arrays of 32 queues each, so this
may not be optimal. The new run queue code was written with this
in mind; changing the number of run queues only requires changing
constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter. This
is intended to be used to create per-cpu run queues. Implement
wrappers for compatibility with the old interface which pass in
the global run queue structure; a compatibility-wrapper sketch
follows this list.
- Group the priority level, user priority, native priority (before
propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
This was used to detect when a process' priority had lowered and
it should yield. We now effectively yield on every interrupt.
- Activate propagate_priority(). It should now have the desired
effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
idle loop. It interfered with propagate_priority() because
the idle process needed to do a non-blocking acquire of Giant
and then other processes would try to propagate their priority
onto it. The idle process should not do anything except idle.
vm_page_zero_idle() will return in the form of an idle priority
kernel thread which is woken up at appropriate times by the vm
system.
- Update struct kinfo_proc to the new priority interface. Deliberately
change its size by adjusting the spare fields. It nonetheless remained the
same size, but the layout has changed, so userland processes that use it
would parse the data incorrectly. The size constraint should really
be changed to an arbitrary version number. Also add a debug.sizeof
sysctl node for struct kinfo_proc.
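A minimal sketch of the compatibility wrapper mentioned above (the struct and
function names are assumptions for illustration):

    /* The single global run queue used by the old interface. */
    static struct runq runq_global;

    /*
     * Old-style entry point: callers keep passing just the process, and
     * the wrapper supplies the global run queue to the new parameterized
     * code.  Per-cpu run queues can later be substituted here without
     * touching any callers.
     */
    void
    setrunqueue(struct proc *p)
    {

        runq_add(&runq_global, p);
    }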
Some things needed bits of <i386/include/lock.h> - cy.c now has its
own (only) copy of the COM_(UN)LOCK() macros, and IMASK_(UN)LOCK()
has been moved to <i386/include/apic.h> (AKA <machine/apic.h>).
Reviewed by: jhb
and function argument declarations. Make sure that functions that are
supposed to return a pointer return NULL in case of failure. Don't cast
NULL. Finally, get rid of annoying `register' uses.
tracing in order to avoid duplication.
- Insert some tracepoints back into the mutex acq/rel code, thus ensuring
that we can trace all lock acq/rel's again.
- All CURPROC != NULL checks are MPASS()es (under MUTEX_DEBUG) because they
signify a serious mutex corruption.
- Change up some KASSERT()s to MPASS()es, and vice-versa, depending on the
type of problem we're debugging (INVARIANTS is used here to check that
the API is being used properly whereas MUTEX_DEBUG is used to ensure that
something general isn't happening that will have bad impact on mutex
locks).
Reminded by: jhb, jake, asmodai
attributes. This is needed for AST's to be properly posted in a preemptive
kernel. They are backed by two new flags in p_sflag: PS_ASTPENDING and
PS_NEEDRESCHED. They are still accessed by their old macros:
aston(), astoff(), etc. For completeness, an astpending() macro has been
added to check for a pending AST, and clear_resched() has been added to
clear need_resched().
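A minimal sketch of the intended use, assuming the macros take a struct proc
pointer as an explicit argument (the real definitions may differ):

    aston(p);                   /* post an AST: sets PS_ASTPENDING in p_sflag */

    /* ... later, on the way back out to user mode ... */
    if (astpending(p)) {        /* new: is an AST pending for this process? */
        astoff(p);              /* clear PS_ASTPENDING */
        /* deliver signals, handle a requested reschedule, ... */
        clear_resched(p);       /* new: clear the PS_NEEDRESCHED request */
    }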
- Rename syscall2() on the x86 back to syscall() to be consistent with
other architectures.
- I can't seem to reproduce the warning I got from WITNESS anymore.
- The fix was wrong. Since a uidinfo struct is a member of proc, it
makes sense for the locking order to be such that you are allowed to
hold proc and then grab the uidinfo lock.
- Use swi_* function names.
- Use void * to hold cookies to handlers instead of struct intrhand *.
- In sio.c, use 'driver_name' instead of "sio" as the name of the driver
lock to minimize diffs with cy(4).
- Add a set of MI helper functions for interrupt threads:
- ithread_create() creates a new interrupt thread
- ithread_destroy() destroys an interrupt thread
- ithread_add_handler() attaches a new handler to an interrupt thread
- ithread_remove_handler() detaches a handler from an interrupt thread
- Rename sinthand_add() and sched_swi() to swi_add() and swi_sched()
respectively so that they live in a consistent namespace.
- struct intrhand is no longer a public type. It would be private to
kern_intr.c but the current implementation of fast interrupts on the
alpha requires the type to be exported. However, all handlers should
be treated as void * cookies in the way that new-bus treats them. This
includes references to software interrupt handlers.
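For illustration only, a hypothetical use of the new names that treats the
handler strictly as an opaque cookie (the argument lists below are
assumptions, not the exact prototypes from sys/interrupt.h):

    static struct ithd *foo_ithd;   /* software interrupt thread */
    static void *foo_cookie;        /* opaque cookie, not a struct intrhand * */

    /* Register a soft interrupt handler; the cookie comes back via the last arg. */
    error = swi_add(&foo_ithd, "foo", foo_swi_handler, sc, SWI_TTY, 0,
        &foo_cookie);
    if (error)
        return (error);

    swi_sched(foo_cookie, 0);               /* schedule the soft interrupt */

    ithread_remove_handler(foo_cookie);     /* detach purely by cookie */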
will only display sleep mutexes held by the current process.
- Clean up some nits in the witness_display() function and add a ddb
command 'show witness' that dumps the hierarchy and order lists to the
console.
- Use queue(3) macros where appropriate.
- Resort the spin lock order list so that "com" is before "sched_lock".
Also, add appropriate #ifdef's around SMP and i386-specific mutexes.
- Add two new mutexes used to protect the ithread lists and tables to the
order list.
Requested by: bde (1)
follows:
- show ktr_first    display the first entry
- show ktr_next     display the next entry
- show ktr          display the entire buffer
The /v modifiers continue to work as described previously.
Requested by: bde
only the boot processor should be running in the comments.
- Initialize curproc to point to each CPU's respective idleproc if their
curproc is NULL.
- Keep track of the number of context switches performed by idleproc.
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
similarly, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
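In code the conversion looks roughly like this (foo_mtx is a placeholder
sleep mutex):

    /* Before: */
    mtx_enter(&foo_mtx, MTX_DEF);
    /* ... critical section ... */
    mtx_exit(&foo_mtx, MTX_DEF);

    mtx_enter(&sched_lock, MTX_SPIN);
    /* ... critical section ... */
    mtx_exit(&sched_lock, MTX_SPIN);

    /* After: */
    mtx_lock(&foo_mtx);
    /* ... critical section ... */
    mtx_unlock(&foo_mtx);

    mtx_lock_spin(&sched_lock);
    /* ... critical section ... */
    mtx_unlock_spin(&sched_lock);

    /* Flags, when still needed, go through the _flags wrappers: */
    mtx_lock_flags(&foo_mtx, MTX_QUIET);
    mtx_unlock_flags(&foo_mtx, MTX_QUIET);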
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
kmem_free() for now. Kmem_malloc() and kmem_free() now have appropriate
assertions in place, and these checks aren't feasible until more of the
networking code is locked down. Also, should mutex operations on Giant be
reintroduced here later, the problems that the extra assertions here would
catch should already be caught by the WITNESS code as lock order violations.
the index of the pollfd array to the number of fd's currently open, not
the maximum number of fd's. ie: if you had 0,1,2 open, you could not
use pollfd slots higher than 20. The specs say we only have to support
OPEN_MAX [64] entries but we allow way more than that.
by myself. It solves a serious vm_map corruption problem that can occur
with the buffer cache when block sizes > 64K are used. This code has been
heavily tested in -stable but only tested somewhat on -current. An MFC
will occur in a few days. My additions include the vm_map_simplify_entry()
and a minor buffer cache boundary case fix.
Make the buffer cache use a system map for buffer cache KVM rather than a
normal map.
Ensure that VM objects are not allocated for system maps. There were cases
where a buffer map could wind up with a backing VM object -- normally
harmless, but this could also result in the buffer cache blocking in places
where it assumes no blocking will occur, possibly resulting in corrupted
maps.
Fix a minor boundary case when the buffer cache size limit is reached that
could result in non-optimal code.
Add vm_map_simplify_entry() calls to prevent 'creeping proliferation'
of vm_map_entry's in the buffer cache's vm_map. Previously only a simple
linear optimization was made. (The buffer vm_map typically has only a
handful of vm_map_entry's. This stabilizes it at that level permanently).
PR: 20609
Submitted by: (Tor Egge) tegge
With this flag set malloc() will panic if the memory allocation fails.
This is usable only in critical places where a failed allocation is fatal.
Reviewed by: peter
machines (duh!). This was one reason why this script broke on
i386. The other being that on i386 sections did not have the
proper alignment. This has been fixed in sys/sys/linker_set.h.
o Use objdump instead of gensetdefs(1) to build the linker sets.
o Allow overriding of nm and objdump in genassym.sh and gensetdefs.pl
respectively, for non-native toolchains.
Reviewed by: arch
Perl improvements: Jos Backus <josb@cncdsl.com>, benno
problem is that a mutex lock, prior to this change, is acquired before
the curproc is set to idleproc, so we mess ourselves up by calling
the mutex lock routine with curproc == NULL.
Moving it up after the aps_ready spin-wait has us hopefully setting it
after idleproc is setup.
Solved by: jake (the almighty) :-)
Move the helper macros from sbuf.h to sbuf.c
Use ints instead of size_ts.
Relax the requirements for sbuf_finish(): it is now possible to finish an
overflowed buffer.
Make sbuf_len() return -1 instead of 0 if the sbuf overflowed.
Requested by: gibbs
instead of a trapframe directly. (Requested by bde.)
- Convert the alpha switch_trampoline to call fork_exit() and use the MI
fork_return() instead of child_return().
- Axe child_return().
- Update stopevent() to assert that the proc lock is held when it is
called and is not recursed. Note that the STOPEVENT() macro obtains
the proc lock when calling this function.
mtx right now as it makes debugging harder. When we are in optimizing
mode, we can revisit this.
- Fix the KTR trace messages to use %p rather than 0x%p to avoid duplicate
0x's in KTR output.
- During witness_fixup, release Giant so that witness doesn't get confused.
Also, grab all_mtx while walking the list of mutexes.
- Remove w_sleep and w_recurse. Instead, perform checks on mutexes using
the mutex's mtx_flags field.
- Allow debug.witness_ddb and debug.witness_skipspin to be set from the
loader.
- Add Giant to the front of existing order_list entries to help ensure
Giant is always first.
- Add an order entry for the various proc locks. Note that this only
helps keep proc in order mostly as the allproc and proctree mutexes are
only obtained during a lockmgr operation on the specified mutex.
that name as a variable. Use mtx_owned(&Giant) where appropriate
instead.
- Proc locking.
- P_FOO -> PS_FOO.
- Update comments about enabling interrupts during trap and why this may be
bad if we trap while holding a spin mutex.
- Don't bother resetting p to curproc in syscall() in case we are the child
returning from fork. The child hasn't returned from fork through syscall
in a while.
- Remove fork_return() as it has been superseded by the MI version.
the alpha mp_machdep.c.
- Proc locking.
- Catch up to the P_FOO -> PS_FOO proc flags changes.
- Stick ap_init()'s prototype with the other prototypes.
- Remove the Xforwardirq IPI.
- Remove unused simplelocks.
- Don't try to psignal() from forward_statclock(), but set the appropriate
signal pending flag in p_sflag instead.
- Add in KTR_SMP tracepoints for various SMP functions. (Brought over
from the alpha port)
calls returning EACCES instead of EPERM. This patch modifies vaccess()
to return EPERM instead of EACCES if VADMIN is among the requested
rights. This affects functions normally limited to the owners of
a file, such as chmod(), as EPERM is the error indicating that
privilege would allow the operation, rather than a change in mandatory
or discretionary rights.
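The gist of the change in vaccess(), sketched (the variable name is an
assumption):

    /*
     * The request cannot be granted without privilege.  If owner-style
     * rights (VADMIN, e.g. for chmod()) were requested, privilege is what
     * is missing, so report EPERM; otherwise report EACCES as before.
     */
    return ((acc_mode & VADMIN) ? EPERM : EACCES);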
Reported by: bde
inline functions non-inlined. Hide parts of the mutex implementation that
should not be exposed.
Make sure that WITNESS code is not executed during boot until the mutexes
are fully initialized by SI_SUB_MUTEX (the original motivation for this
commit).
Submitted by: peter
interrupt threads to run with it always >= 1, so that malloc can
detect M_WAITOK from "interrupt" context. This is also necessary
in order to context switch from sched_ithd() directly.
Reviewed By: peter
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.
This change is necessary in order to avoid some circular bootstrapping
dependencies.
for SMP; just use the same ones as UP. These weren't used without
holding Giant anyway, and the routines that use them would have to
be protected from pre-emption to avoid migrating cpus.
calls it rather than obtaining and releasing it a lot in proc_compare.
- Collect all of the data gathering and stick it just after the
proc_compare loop. This way, we only have to grab sched_lock once now
when handling SIGINFO. All the printf's are done after the values are
calculated.
Submitted mostly by: bde
The counters are incremented when a thread goes to sleep and decremented
either when a thread is woken up by another thread or when the sleep
times out. There existed a race where the sleep count could be decremented
twice resulting in an eventual underflow.
Move the decrementing of the "counters" to the thread initiating the sleep
and thus remedy the problem.
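Sketched with illustrative names, the fix is simply to do the decrement in
the thread that slept, once, regardless of how the sleep ended:

    atomic_add_int(&wait_count, 1);             /* about to sleep */
    error = tsleep(chan, PVM, "allocwt", timeo);
    atomic_subtract_int(&wait_count, 1);        /* exactly one decrement,
                                                   whether woken or timed out */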
SIGXCPU signal, and killing of processes that exceed their allowed run
time until they can play nice with sched_lock. Right now they are just
potential panics waiting to happen. The printf() has bitten several
people.
not hold sched_lock while calling ttyprintf(). If we are on a serial
console, then ttyprintf() will end up getting the sio lock, resulting in
a lock order violation.
Noticed by: des
This is an odd one. This patch appears to fix a panic related to background
bitmap writes (for FFS), though neither Kirk, Ian, nor I can figure out how
B_CLUSTEROK could possibly be set on a bitmap block to cause the clustering
code to improperly cluster with a buffer undergoing a background write.
In any case, the clustering code is very fragile and this patch helps with
that, as well as possibly fixing a bug Andre was having.
Suggested by: Ian Dowse <iedowse@maths.tcd.ie>
Testing by: Andre Albsmeier <andre.albsmeier@mchp.siemens.de>
All calls to mtx_init() for mutexes that recurse must now include
the MTX_RECURSE bit in the flag argument variable. This change is in
preparation for an upcoming (further) mutex API cleanup.
The witness code will call panic() if a lock is found to recurse but
the MTX_RECURSE bit was not set during the lock's initialization.
The old MTX_RECURSE "state" bit (in mtx_lock) has been renamed to
MTX_RECURSED, which is more appropriate given its meaning.
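For example, a mutex that may legitimately recurse must now be initialized
along these lines (foo_mtx is a placeholder):

    struct mtx foo_mtx;

    mtx_init(&foo_mtx, "foo", MTX_DEF | MTX_RECURSE);

    /*
     * Without MTX_RECURSE in the init flags, the witness code will panic
     * the first time this lock is acquired recursively.  The run-time
     * "currently recursed" state bit is now spelled MTX_RECURSED.
     */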
The following locks have been made "recursive," thus far:
eventhandler, Giant, callout, sched_lock, possibly some others declared
in the architecture-specific code, all of the network card driver locks
in pci/, as well as some other locks in dev/ stuff that I've found to
be recursive.
Reviewed by: jhb
allocation routines are being called safely. Since we drop our relevant
mbuf mutex and acquire Giant before we call kmem_malloc(), we have
to make sure that this does not pave the way for a fatal lock order
reversal. Check that either Giant is already held (in which case it's safe
to grab it again and recurse on it) or, if Giant is not held, that no
other locks are held before we try to acquire Giant.
Similarly, add a KASSERT, valid in the WITNESS case, in m_reclaim() to
nail callers who end up in m_reclaim() and hold a lock.
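One way such a check can be expressed, using witness_list()'s held-lock count
purely for illustration (the actual assertion may differ):

    /*
     * We are about to drop the mbuf mutex and acquire Giant around
     * kmem_malloc().  That is only safe if Giant is already held (we
     * then merely recurse on it) or if no other locks are held at all.
     */
    KASSERT(mtx_owned(&Giant) || witness_list(curproc) == 0,
        ("mbuf allocation: other locks held while acquiring Giant"));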
Pointed out by: jhb
while we are copying it to the kinfo_proc structure.
- Test against p_stat to see if we are blocked on a mutex.
- Terminate ki_mtxname with a null char rather than ki_wmesg.
m_reclaim() and re-acquire it when m_reclaim() returns. This means that
we now call the drain routines without holding the mutex lock and
recursing into it. This was done for mainly two reasons:
(i) Avoid the long recursion; long recursions are typically bad and this
is the case here because we block all other code from freeing mbufs
if they need to. Doing that is kind of counter-productive, since we're
really hoping that someone will free.
(ii) More importantly, avoid a potential lock order reversal. Right now,
not all the locks have been added to our networking code; but
without this change, we're introducing the possibility for deadlock.
Consider for example ip_drain(). We will likely eventually introduce
a lock for ipq there, and so ip_freef() will be called with ipq lock
held. But, ip_freef() calls m_freem() which in turn acquires the
mmbfree lock. Since we were previously calling ip_drain() with mmbfree
held, our lock order would be: mmbfree->ipq->mmbfree. Some other code
may very well lock ipq first and then call ip_freef(). This would
result in the regular lock order, ipq->mmbfree. Clearly, we have
deadlock if one thread acquires the ipq lock and sits waiting for
mmbfree while another thread calling m_reclaim() acquires mmbfree
and sits waiting for the ipq lock.
Also, make sure to add a comment above m_reclaim()'s definition briefly
explaining this. Also document this above the call to m_reclaim() in
m_mballoc_wait().
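Sketched (the mutex name here is an assumption), the wait path now does:

    mtx_unlock(&mmbfree_mtx);   /* drop the free-list lock ...               */
    m_reclaim();                /* ... so the drain routines run unlocked    */
    mtx_lock(&mmbfree_mtx);     /* re-acquire before retrying the allocation */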
Suggested and reviewed by: alfred
the pipe, then we were corrupting the pipe_zone free list by calling
pipeclose on rpipe twice. NULL out rpipe to avoid this.
Reviewed by: dillon
Reviewed by: iedowse
- Provide TUNABLE_INT() hooks for ktr_cpumask, ktr_mask, and ktr_verbose
so that they can be set from the loader by their respective sysctl names.
For example, to turn on KTR_INTR and KTR_PROC in ktr_mask, one could
stick 'debug.ktr.mask="0x1200"' in /boot/loader.conf.
o Use 8 space hard tabs
o Eliminate trailing white space (while I'm here, just in a couple of places)
o wrap mostly at 80 columns (printf literal strings being the notable
exception)
o use return (foo) consistently
o use 0 vs NULL more consistently
o use queue(3) xxx_FOREACH macros where appropriate (some places used it
before, others didn't).
o use BSD line continuation parameters
Pedants will likely notice minor style(9) violations, but for the
most part the file now looks much much closer to style(9) and is
mostly self-consistent.
Approved in principle by: dfr
Reviewed by: md5 (no changes to the .o)
to the SYSCTL_ADD_FOO() macros is a constant that should be turned into
a string via the pre-processor. Instead, require it to be an explicit
string so that names can be generated on the fly.
- Make some of the char * arguments to sysctl_add_oid() const to quiet
warnings.
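This allows OID names to be built at run time, for example (a hypothetical
sketch; check sys/sysctl.h for the exact macro arguments):

    char name[16];

    snprintf(name, sizeof(name), "unit%d", device_get_unit(dev));
    SYSCTL_ADD_INT(ctx, SYSCTL_CHILDREN(parent), OID_AUTO, name,
        CTLFLAG_RD, &sc->sc_count, 0, "per-unit counter");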
file.
While there, fix the layout of function headers (noticeable in long headers).
Fix up some style nits. It's Perl and should be written in that style.
out: label in psignal() did not grab sched_lock before trying to release
it. Also, the previous version had several cases where it grabbed
sched_lock before jumping to out: unnecessarily, so rework this a bit.
The runfast: and out: labels must be called with sched_lock released, and
the run: label must be called with it held. Appropriate mtx_assert()'s
have been added that should catch any bugs that may still be in this
code.
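The assertions are of this flavor:

    run:
        /* must be entered with sched_lock held */
        mtx_assert(&sched_lock, MA_OWNED);
        /* ... make the process runnable ... */

    out:
        /* must be entered with sched_lock released */
        mtx_assert(&sched_lock, MA_NOTOWNED);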
Noticed by: bde
attaching to running processes, it completely breaks normal debugging.
A better fix is in the works, but cannot be properly tested until
the problem with gdb hanging the system in -current is solved.
process. This fixes a problem when attaching to a process in gdb
and the process staying in the STOP'd state after quitting gdb.
This whole process seems a bit suspect, but this seems to work.
Reviewed by: peter
in 4.2-REL which I ripped out in -stable and -current when implementing the
low-memory handling solution. However, maxlaunder turns out to be the saving
grace in certain very heavily loaded systems (e.g. newsreader box). The new
algorithm limits the number of pages laundered in the first pageout daemon
pass. If that is not sufficient then successive passes will be run without
any limit.
Write I/O is now pipelined using two sysctls, vfs.lorunningspace and
vfs.hirunningspace. This prevents excessive buffered writes in the
disk queues which cause long (multi-second) delays for reads. It leads
to more stable (less jerky) and generally faster I/O streaming to disk
by allowing required read ops (e.g. for indirect blocks and such) to occur
without interrupting the write stream, among other things.
NOTE: eventually, filesystem write I/O pipelining needs to be done on a
per-device basis. At the moment it is globalized.
This is because calls with M_WAIT (now M_TRYWAIT) may not wait
forever when nothing is available for allocation, and may end up
returning NULL. Hopefully we now communicate more of the right thing
to developers and make it very clear that it's necessary to check whether
calls with M_(TRY)WAIT also resulted in a failed allocation.
M_TRYWAIT basically means "try harder, block if necessary, but don't
necessarily wait forever." The time spent blocking is tunable with
the kern.ipc.mbuf_wait sysctl.
M_WAIT is now deprecated but still defined for the next little while.
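In other words, even the blocking form must be checked; something along
these lines:

    struct mbuf *m;

    MGET(m, M_TRYWAIT, MT_DATA);
    if (m == NULL)              /* M_TRYWAIT may still fail after blocking */
        return (ENOBUFS);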
* Fix a typo in a comment in mbuf.h
* Fix some code that was actually passing the mbuf subsystem's M_WAIT to
malloc(). Made it pass M_WAITOK instead. If we were ever to redefine the
value of the M_WAIT flag, this could have become a big problem.
point in calling a function just to set a flag.
Keep better track of the syslog FAC/PRI code and try to DTRT if
they mingle.
Log all writes to /dev/console to syslog with <console.info>
priority. The formatting is not preserved; there is no robust
way of doing it. (Ideas with patches welcome).
the kernel console. Instead, change logwakeup() to set a flag in the
softc. A callout then wakes up every so often and wakes up any processes
selecting on /dev/log (such as syslogd) if the flag is set. By default
this callout fires 5 times a second, but that can be adjusted by the
sysctl kern.log_wakeups_per_second.
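A rough sketch of the mechanism with placeholder names (not the actual
msglog softc layout):

    struct logsoftc {
        int             sc_pending;     /* set by logwakeup() instead of waking up */
        struct selinfo  sc_selp;        /* readers selecting on /dev/log */
        struct callout  sc_callout;
    };

    static void
    logtimeout(void *arg)
    {
        struct logsoftc *sc = arg;

        if (sc->sc_pending) {
            sc->sc_pending = 0;
            selwakeup(&sc->sc_selp);    /* wake syslogd et al. */
            wakeup(sc);
        }
        /* rearm: log_wakeups_per_second times per second */
        callout_reset(&sc->sc_callout, hz / log_wakeups_per_second,
            logtimeout, sc);
    }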
Reviewed by: phk
commands have also been slightly updated as follows:
- Use ktr_idx to find the newest entry rather than walking the buffer
comparing timespecs. Timespecs are not always unique after the change
to use getnanotime(9).
- Add a new verbose setting. When the verbose setting is on, then the
timestamp is printed with each message. If KTR_EXTEND is on, then the
filename and line number are output as well. By default this option is
off. It can be turned on with the 'v' modifier passed to the 'tbuf'
and 'tall' commands. For the 'tnext' command, the 'v' modifier toggles
the verbose mode.
- Only display the cpu number for each message on SMP systems.
- Don't display anything for an empty entry that hasn't been used yet.
functions. If this flag is set, then no KTR log messages are issued.
This is useful for blocking excessive logging, such as with the internal
mutex used by the witness code.
- Use MTX_QUIET on all of the mtx_enter/exit operations on the internal
mutex used by the witness code.
- If we are in a panic, don't do witness checks in witness_enter(),
witness_exit(), and witness_try_enter(), just return.
vm86_trap() to return to the calling program directly. vm86_trap()
doesn't return, thus it was never returning to trap() to release
Giant. Thus, release Giant before calling vm86_trap().
of explicit calls to lockmgr. Also provides macros for the flags
passed to specify shared, exclusive or release which map to the
lockmgr flags. This is so that the use of lockmgr can be easily
replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
no longer contains kernel specific data structures, but rather
only scalar values and structures that are already part of the
kernel/user interface, specifically rusage and rtprio. It no
longer contains proc, session, pcred, ucred, procsig, vmspace,
pstats, mtx, sigiolst, klist, callout, pasleep, or mdproc. If
any of these changed in size, ps, w, fstat, gcore, systat, and
top would all stop working. The new structure has over 200 bytes
of unassigned space for future values to be added, yet is nearly
100 bytes smaller per entry than the structure that it replaced.
be safely held across an eventhandler function call.
- Fix an instance of the head of an eventhandler list being read without
the lock being held.
- Break down and use a SYSINIT at the new SI_SUB_EVENTHANDLER to initialize
the eventhandler global mutex and the eventhandler list of lists rather
than using a non-MP safe initialization during the first call to
eventhandler_register().
- Add in a KASSERT() to eventhandler_register() to ensure that we don't try
to register an eventhandler before things have been initialized.
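Conceptually the initialization now looks like this (the function and list
names are placeholders):

    static void
    eventhandler_init(void *dummy)
    {

        mtx_init(&eventhandler_mutex, "eventhandler", MTX_DEF);
        TAILQ_INIT(&eventhandler_lists);
    }
    SYSINIT(eventhandler, SI_SUB_EVENTHANDLER, SI_ORDER_FIRST,
        eventhandler_init, NULL);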
held and panic if so (conditional on witness).
- Change witness_list to return the number of locks held so this is easier.
- Add kern/syscalls.c to the kernel build if witness is defined so that the
panic message can contain the name of the offending system call.
- Add assertions that Giant and sched_lock are not held when returning from
a system call, which were missing for alpha and ia64.
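The return-path checks look roughly like this (the exact panic wording is
illustrative):

    /* Returning to user mode from a system call: no mutexes may be held. */
    if (witness_list(p) != 0)
        panic("system call %s returning with mutex(es) held",
            syscallnames[code]);
    mtx_assert(&sched_lock, MA_NOTOWNED);
    mtx_assert(&Giant, MA_NOTOWNED);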
can lead to further panics.
- Call getnanotime() instead of nanotime() for the timestamp. nanotime()
is more precise, but it also calls into the timer code, which results
in mutex operations on the i386 arch. If KTR_LOCK is turned on, then
ktr_tracepoint() recurses on itself until it exhausts the kernel stack.
Eventually this should change to use get_cyclecount() instead, but that
can't happen if get_cyclecount() is calling nanotime() instead of
getnanotime().
Deal with excessive dirty buffers when msync() syncs non-contiguous
dirty buffers by checking for the case in UFS *before* checking for
clusterability.
__P() prototypes when an ANSI-style static inline is a prototype already.
Since vnode_if.[ch] are generated on the fly, there are no CVS diffs to
mess up.
SMP problem. Compaq, in their infinite wisdom, forgot to put the IO apic
intpin #0 connection to the 8259 PIC into the mptable. This hack is to
look and see if intpin #0 has *no* table entry and add a fake ExtInt
entry for the remap routines to use. isa/clock.c will still test the
interrupts. This entry is only ever used on an already broken system.
where fork1() could put the process on the run queue where it could be
snatched up by another CPU before kthread_create() had set the proper
fork handler. Instead, we put the new kthread on the runqueue after its
fork handler has been set.
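Conceptually the fix reorders kthread creation like this (simplified; the
flag and call sequence are approximate):

    /* Create the process but do not put it on the run queue yet. */
    error = fork1(&proc0, RFSTOPPED | flags, &p2);
    if (error)
        return (error);

    /* Point the child at its fork handler before any CPU can run it. */
    cpu_set_fork_handler(p2, func, arg);

    /* Only now make it runnable. */
    mtx_lock_spin(&sched_lock);
    p2->p_stat = SRUN;
    setrunqueue(p2);
    mtx_unlock_spin(&sched_lock);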
Noticed by: jake
Looked over by: peter
spending, which was unused now that all software interrupts have
their own thread. Make the legacy schednetisr use an atomic op
for setting bits in the netisr mask.
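The bit-setting part of the legacy macro reduces to roughly:

    #define schednetisr(anisr)  atomic_set_int(&netisr, 1 << (anisr))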
Reviewed by: jhb
Also, while here, run up to 32 interrupt sources on APIC systems.
Normalize INTREN/INTRDIS so they are the same on both UP and SMP systems
rather than sometimes a macro, and sometimes a function.
Reviewed by: jhb, jakeb
and numvnodes are longs in the kernel. They should remain longs in systat,
what really needs to change is that they should be using SYSCTL_LONG rather
than SYSCTL_INT. I also changed wantfreevnodes to SYSCTL_LONG because I
happened to notice it.
I wish there was a way to find all of these automatically..
Pointed out by: bde
from struct proc, which are now unused (p_nthread already was).
Remove process flag P_KTHREADP which was untested and only set
in vfs_aio.c (it should use kthread_create). Move the yield
system call to kern_synch.c as kern_threads.c has been removed
completely.
moral support from: alfred, jhb