lock assertion to it.
- SIGPENDING() no longer needs sched_lock, so only grab sched_lock to set
the TDF_NEEDSIGCHK and TDF_ASTPENDING flags in signotify().
- Add a proc lock assertion to tdsigwakeup().
- Since we always set TDF_OLDMASK while holding the proc lock, the proc
lock is sufficient protection to check its state in postsig() and we only
need sched_lock when clearing the actual flag.
a process group.
- Call pgadjustjobc() twice in fixjobc() to avoid code duplication and
improve readability.
- Use the proc lock to protect P_SHOULDSTOP() instead of sched_lock.
- Check to see if a process is PRS_NEW with sched_lock before trying to
lock its proc lock since the lock may not be constructed yet.
since they are going on the current cpu and not their previously assigned
cpu.
- sched_runnable() should only return true in the SMP case if the other
processor has more than one thread that is runnable. We cannot steal
curthread.
- Change kseq_print() to accept the cpuid instead of a kseq pointer. This
makes it much easier to use this function from ddb.
the process and session. Instead, cache a true reference to the session
when we do the hold and release our reference on that session. This avoids
the need for the proc lock when dropping the reference.
protects, so don't bother locking it while we assign it to a ucred's
cr_prison.
- Fully construct the new credential for a process before assigning it to
p_ucred.
- Set p_acflag earlier while already holding the proc lock in fork1().
- Mark the realitexpire() callout MPSAFE for new processes. It was already
marked safe for proc0 a long while ago.
generation number back to a long (sizeof(u_int) != sizeof(long) on
sparc64). The alternative would have been to heavily change the libdevstat API.
Discussed with: phk, ken
don't try to convert the argument flags to malloc flags, or we risk
implicitly requesting blocking and generating witness warnings.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
When enabled, this causes m_defrag to randomly return NULL (following
its normal failure case so that extra memory leaks are not introduced).
Code similar to this was used to find and fix a few bugs last week.
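A minimal sketch of such an injection point, assuming a one-in-N knob
(the names here are illustrative, not the committed ones):

    static int m_defrag_randfail = 0;   /* 0 disables coerced failures */

    struct mbuf *
    m_defrag(struct mbuf *m0, int how)
    {
            struct mbuf *m_new;

            /* Fail early, through the normal failure path, so nothing
             * allocated below can be leaked. */
            if (m_defrag_randfail != 0 &&
                arc4random() % m_defrag_randfail == 0)
                    goto nospace;
            m_new = m_getcl(how, MT_DATA, M_PKTHDR);
            if (m_new == NULL)
                    goto nospace;
            /* ... compact m0 into m_new as usual ... */
            return (m_new);
    nospace:
            return (NULL);
    }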
returning some additional room in the first mbuf in a chain, and
avoiding feature-specific contents in the mbuf header. To do this:
- Modify mbuf_to_label() to extract the tag, returning NULL if not
found (see the sketch after this list).
- Introduce mac_init_mbuf_tag() which does most of the work
mac_init_mbuf() used to do, except on an m_tag rather than an
mbuf.
- Scale back mac_init_mbuf() to perform m_tag allocation and invoke
mac_init_mbuf_tag().
- Replace mac_destroy_mbuf() with mac_destroy_mbuf_tag(), since
m_tags are now GC'd deep in the m_tag/mbuf code rather than
at a higher level when mbufs are directly free()'d.
- Add mac_copy_mbuf_tag() to support m_copy_pkthdr() and related
notions.
- Generally change all references to mbuf labels so that they use
mbuf_to_label() rather than &mbuf->m_pkthdr.label. This
required no changes in the MAC policies (yay!).
- Tweak mbuf release routines to not call mac_destroy_mbuf();
tag destruction takes care of it for us now.
- Remove MAC magic from m_copy_pkthdr() and m_move_pkthdr() --
the existing m_tag support does all this for us. Note that
we can no longer just zero the m_tag list on the target mbuf;
rather, we have to delete the chain because m_tags will
already be hung off freshly allocated mbufs.
- Tweak m_tag copying routines so that if we're copying a MAC
m_tag, we don't do a binary copy; rather, we initialize the
new storage and do a deep copy of the label.
- Remove use of MAC_FLAG_INITIALIZED in a few bizarre places
having to do with mbuf header copies previously.
- When an mbuf is copied in ip_input(), we no longer need to
explicitly copy the label because it will get handled by the
m_tag code now.
- No longer any weird handling of MAC labels in if_loop.c during
header copies.
- Add MPC_LOADTIME_FLAG_LABELMBUFS flag to Biba, MLS, mac_test.
In mac_test, handle the label==NULL case, since it can be
dynamically loaded.
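As referenced above, a minimal sketch of what the mbuf_to_label()
accessor can look like, assuming the label is registered under a tag
type such as PACKET_TAG_MACLABEL and stored directly after the m_tag
header:

    struct label *
    mbuf_to_label(struct mbuf *m)
    {
            struct m_tag *tag;

            tag = m_tag_find(m, PACKET_TAG_MACLABEL, NULL);
            if (tag == NULL)
                    return (NULL);
            /* Label storage immediately follows the tag header. */
            return ((struct label *)(tag + 1));
    }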
In order to improve performance with this change, introduce the notion
of "lazy MAC label allocation" -- only allocate m_tag storage for MAC
labels if we're running with a policy that uses MAC labels on mbufs.
Policies declare this intent by setting the MPC_LOADTIME_FLAG_LABELMBUFS
flag in their load-time flags field during declaration. Note: this
opens up the possibility of post-boot policy modules getting back NULL
slot entries even though they have policy invariants of non-NULL slot
entries, as the policy might have been loaded after the mbuf was
allocated, leaving the mbuf without label storage. Policies that cannot
handle this case must be declared as NOTLATE, or must be modified.
- mac_labelmbufs holds the current cumulative status as to whether
any policies require mbuf labeling or not. This is updated by
mac_policy_updateflags() whenever the active policy set changes. The
function iterates the list and checks whether any have the flag set.
Write access to this variable is protected by the policy list lock;
read access is currently not protected for performance reasons.
This might change if it causes problems.
- Add MAC_POLICY_LIST_ASSERT_EXCLUSIVE() to permit the flags update
function to assert appropriate locks.
- Make allocation in mac_init_mbuf() conditional on the flag.
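A sketch of how the conditional allocation can look in mac_init_mbuf();
the helper prototypes are assumptions based on the description above
and may differ from the committed code:

    int
    mac_init_mbuf(struct mbuf *m, int flag)
    {
            struct m_tag *tag;
            int error;

            M_ASSERTPKTHDR(m);

            /* Lazy label allocation: skip the m_tag entirely unless
             * some loaded policy set MPC_LOADTIME_FLAG_LABELMBUFS. */
            if (!mac_labelmbufs)
                    return (0);

            tag = m_tag_get(PACKET_TAG_MACLABEL, sizeof(struct label),
                flag);
            if (tag == NULL)
                    return (ENOMEM);
            error = mac_init_mbuf_tag(tag, flag);
            if (error != 0) {
                    m_tag_free(tag);
                    return (error);
            }
            m_tag_prepend(m, tag);
            return (0);
    }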
Reviewed by: sam
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
mbuf_to_label(). This permits the vast majority of entry point code
to be unaware that labels are stored in m->m_pkthdr.label, such that
we can experiment with storing labels elsewhere (such as in m_tags).
Reviewed by: sam
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
the current queue if its priority is really elevated. This needs more work
as there are cases where a next queue kse could be holding up what would
be a curr queue kse, and thus hurting interactivity. Also, when a thread
with an elevated priority has its priority lowered it should be placed
back on the next queue.
the target process exiting which causes attempts to register the kevent
to randomly fail depending on whether the target runs to completion before
the parent can call kevent(2). The bug actually affects EVFILT_PROC
events on any zombie process, but the most common manifestation is with
parents trying to monitor child processes.
MFC after: 2 weeks
Sponsored by: NTT Multimedia Communications Labs
the second kseq's run queue so that it is referenced by the kse when
it is switched out.
- Spell ksq_rslices properly.
Reported by: Ian Freislich <ianf@za.uu.net>
- Allow user adjustable min and max time slices (suggested by hiten).
- Change the SLP_RUN_MAX to 100ms from 2 seconds so that we learn whether a
process is interactive or not much more quickly.
- Place a process on the current run queue if it is interactive or if it is
running at an interrupt thread priority due to priority propagation.
- Use the 'current' timeshare queue for interrupt threads, realtime threads,
and idle threads that are running at higher priority due to priority
propagation.
This fixes problems where priorities would have been elevated but we would
not check the timeshare run queue until other lower priority tasks were
no longer runnable.
- Keep an array of loads indexed by the priority class as well as a global
load.
- Keep a bucket of nice values with a count of the number of kses currently
runnable with that nice value.
- Keep track of the minimum nice value of any running thread.
- Remove the unused short term sleep accounting. I was attempting to use
this for load balancing but it didn't work out.
- Define a kseq_print() for use with debugging.
- Add KTR debugging at useful places so we can easily debug slice and
priority assignment.
- Decouple the runq assignment from the kseq assignment. kseq_add now keeps
track of statistics. This is done so that the nice and load is still
tracked for the currently running process. Previously, if a niced process
was added while a non-niced process was running, the niced process would
still get a slice, since it was not aware of the non-niced process.
- Make adjustments for the sched api changes.
of ksegs since they primarily operate on processes.
- KSEs take ticks, so pass the kse through sched_clock().
- Add a sched_class() routine that adjusts a ksegrp pri class.
- Define a sched_fork_{kse,thread,ksegrp} and sched_exit_{kse,thread,ksegrp}
that will be used to tell the scheduler about new instances of these
structures within the same process. These will be used by THR and KSE.
- Change sched_4bsd to reflect this API update.
by allprison_mtx), a unique prison/jail identifier field, two path
fields (pr_path for reporting and pr_root vnode instance) to store
the chroot() point of each jail.
o Add jail_attach(2) to allow a process to bind to an existing jail
(see the example below).
o Add change_root() to perform the chroot operation on a specified
vnode.
o Generalize change_dir() to accept a vnode, and move namei() calls
to callers of change_dir().
o Add a new sysctl (security.jail.list) which is a group of
struct xprison instances that represent a snapshot of active jails.
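A minimal userland sketch of the attach-then-exec pattern enabled by
jail_attach(2); in practice the jail id would come from the new
security.jail.list sysctl:

    #include <sys/param.h>
    #include <sys/jail.h>

    #include <err.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(int argc, char *argv[])
    {
            int jid;

            if (argc != 2)
                    errx(1, "usage: jattach <jid>");
            jid = atoi(argv[1]);
            /* Binds the process to the jail and chroots to pr_root. */
            if (jail_attach(jid) == -1)
                    err(1, "jail_attach(%d)", jid);
            if (chdir("/") == -1)
                    err(1, "chdir");
            execl("/bin/sh", "sh", (char *)NULL);
            err(1, "execl");
    }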
Reviewed by: rwatson, tjr
of asserting that an mbuf has a packet header. Use it instead of hand-
rolled versions wherever applicable.
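The conversion is mechanical; a hand-rolled check such as the first
line below becomes the macro (the panic string is illustrative):

    /* Before: hand-rolled assertion. */
    KASSERT(m->m_flags & M_PKTHDR, ("%s: no packet header", __func__));

    /* After: the centralized macro. */
    M_ASSERTPKTHDR(m);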
Submitted by: Hiten Pandya <hiten@unixdaemons.com>
- Treat each class specially in kseq_{choose,add,rem}. Let the rest of the
code be less aware of scheduling classes.
- Skip the interactivity calculation for non-TIMESHARE ksegrps.
- Move slice and runq selection into kseq_add(). Uninline it now that it's
big.
as it could be and can do with some more cleanup. Currently it's under
options LAZY_SWITCH. What this does is avoid %cr3 reloads for short
context switches that do not involve another user process; i.e., we can
take an interrupt, switch to a kthread and return to the user without
explicitly flushing the tlb. However, this isn't as exciting as it could
be: the interrupt overhead is still high, and too much still blocks on
Giant. There are some debug sysctls, for stats and for an on/off switch.
The main problem with doing this has been "what if the process that you're
running on exits while we're borrowing its address space?" - in this case
we use an IPI to give it a kick when we're about to reclaim the pmap.
It's not compiled in unless you add the LAZY_SWITCH option. I want to fix a
few more things and get some more feedback before turning it on by default.
This is NOT a replacement for Bosko's lazy interrupt stuff. This was more
meant for the kthread case, while his was for interrupts. Mine helps a
little for interrupts, but his helps a lot more.
The stats are enabled with options SWTCH_OPTIM_STATS - this has been a
pseudo-option for years, I just added a bunch of stuff to it.
One non-trivial change was to select a new thread before calling
cpu_switch() in the first place. This allows us to catch the silly
case of doing a cpu_switch() to the current process. This happens
uncomfortably often. This simplifies a bit of the asm code in cpu_switch
(we no longer have to call choosethread() in the middle). This has been
implemented on i386 and (thanks to jake) sparc64. The others will come
soon. This is actually separate from the lazy switch stuff.
Glanced at by: jake, jhb
to select a KSE with a slice of 0 we will update its slice and insert it
onto the next queue.
- Pass the KSE instead of the ksegrp into sched_slice(). This more
accurately reflects the behavior of the code. Slices are granted to kses.
- Add a function kseq_nice_min() which finds the smallest nice value
assigned to the kseg of any KSE on the queue.
- Rewrite the logic in sched_slice(). Add a large comment describing the
new slice selection scheme (sketched below). To summarize, slices are
assigned based on the nice value. Priorities are still calculated based
on the nice and interactivity of a process. Slice sizes of 0 may be
granted for KSEs whose nice is 20 or further away from the lowest nice
on the run queue. Other nice values are scaled across the range
[min, min+20]. This fixes ULE's bad behavior with positively niced
processes.
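A sketch of the scaling described above; the tunables and the helper
name sched_slice_sketch() are illustrative stand-ins:

    static int slice_min = 10;      /* illustrative tunables (ticks) */
    static int slice_max = 140;

    /*
     * Grant a slice based on how far this kse's nice value sits above
     * the lowest nice currently on the run queue.  At a distance of
     * 20 or more the slice is 0, so the kse is cycled through the
     * next queue instead of being rerun immediately.
     */
    static int
    sched_slice_sketch(struct kseq *kseq, struct kse *ke)
    {
            int dist;

            dist = ke->ke_ksegrp->kg_nice - kseq_nice_min(kseq);
            if (dist >= 20)
                    return (0);
            /* Scale linearly from slice_max (dist 0) to slice_min. */
            return (slice_max - ((slice_max - slice_min) * dist) / 20);
    }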
if (p->p_numthreads > 1) and not a flag because action is only necessary
if there are other threads. The rest of the system has no need to
identify thr threaded processes.
- In kern_thread.c use thr_exit1() instead of thread_exit() if P_THREADED
is not set.
- umtx_lock() is defined as an inline in umtx.h. It tries to do an
uncontested acquire of a lock, falling back to the _umtx_lock()
system call if that fails (both inlines are sketched after this list).
- umtx_unlock() is also an inline which falls back to _umtx_unlock() if the
uncontested unlock fails.
- Locks are keyed off of the thr_id_t of the currently running thread which
is currently just the pointer to the 'struct thread' in kernel.
- _umtx_lock() uses the proc pointer to synchronize access to blocked thread
queues which are stored in the first blocked thread.
- sys/thr.h contains the user space visible api that is intended only for
use in threading library packages.
- kern/kern_thr.c contains thr system calls and other thr specific code.
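A sketch of the two inlines, assuming the lock word holds the owner's
thr_id_t and a distinguished UMTX_UNOWNED value when free:

    static __inline int
    umtx_lock(struct umtx *umtx, thr_id_t id)
    {
            /* Uncontested acquire; enter the kernel only on failure. */
            if (atomic_cmpset_acq_ptr(&umtx->u_owner,
                (void *)UMTX_UNOWNED, (void *)id) == 0)
                    return (_umtx_lock(umtx));
            return (0);
    }

    static __inline int
    umtx_unlock(struct umtx *umtx, thr_id_t id)
    {
            /* Uncontested release; the syscall wakes queued waiters. */
            if (atomic_cmpset_rel_ptr(&umtx->u_owner,
                (void *)id, (void *)UMTX_UNOWNED) == 0)
                    return (_umtx_unlock(umtx));
            return (0);
    }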
kern_sigtimedwait() which is capable of supporting all of their semantics.
- These should be POSIX compliant but more careful review is needed before
we announce this.
a follow on commit to kern_sig.c
- signotify() now operates on a thread since unmasked pending signals are
stored in the thread.
- PS_NEEDSIGCHK moves to TDF_NEEDSIGCHK.
- Change all consumers to pass in a thread.
Right now this does not cause any functional changes but it will be important
later when signals can be delivered to specific threads.
of sf_buf_alloc() instead of expecting sf_buf_alloc()'s caller to map it.
The ultimate reason for this change is to enable two optimizations:
(1) that there never be more than one sf_buf mapping a vm_page at a time
and (2) 64-bit architectures can transparently use their 1-1 virtual
to physical mapping (e.g., "K0SEG") avoiding the overhead of pmap_qenter()
and pmap_qremove().
time a character is written. Use this at boot time to reject the
existing buffer contents if they are corrupt. This fixes a problem
seen on some hardware (especially laptops) where the message buffer
gets partially corrupted during a short power cycle or reset, but
the msgbuf structure is left intact so it gets reused, resulting
in random junk and control characters appearing in dmesg and
/var/log/messages.
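A sketch of the idea with illustrative field names: the checksum is
maintained incrementally on every write and verified once at boot.

    static void
    msgbuf_addchar(struct msgbuf *mbp, int c)
    {
            u_int pos = mbp->msg_bufx;

            /* Swap the old byte's contribution for the new one. */
            mbp->msg_cksum += (u_char)c - (u_char)mbp->msg_ptr[pos];
            mbp->msg_ptr[pos] = c;
            mbp->msg_bufx = (pos + 1) % mbp->msg_size;
    }

    /* At boot: recompute the sum and reject the buffer on mismatch. */
    static int
    msgbuf_valid(struct msgbuf *mbp)
    {
            u_int i, sum;

            for (sum = 0, i = 0; i < mbp->msg_size; i++)
                    sum += (u_char)mbp->msg_ptr[i];
            return (sum == mbp->msg_cksum);
    }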
PR: kern/28497
that the feature can be enabled during the boot process. Note the
continued limitation that FreeBSD fails so rapidly with this setting
enabled that it's hard to narrow down particular failures for
correction; we really need per-malloc type failure rates.
in a debugging feature causing M_NOWAIT allocations to fail at
a specified rate. This can be useful for detecting poor
handling of M_NOWAIT: the most frequent problems I've bumped
into are unconditional dereference of the pointer even though
it's NULL, and hangs as a result of a lost event where memory
for the event couldn't be allocated. Two sysctls are added:
debug.malloc.failure_rate
How often to generate a failure: if set to 0 (default), this
feature is disabled. Otherwise, one in this many M_NOWAIT
allocations fails -- I've been using 10 (one in ten mallocs fails),
but other popular settings might be much lower or much higher.
debug.malloc.failure_count
Number of times a coerced malloc failure has occurred as a
result of this feature. Useful for tracking what might have
happened and whether failures are being generated.
Useful possible additions: tying failure rate to malloc type,
printfs indicating the thread that experienced the coerced
failure.
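A sketch of the coercion at the top of malloc(9); malloc_nowait_count
and malloc_real() are illustrative stand-ins:

    static int malloc_failure_rate;   /* debug.malloc.failure_rate */
    static int malloc_failure_count;  /* debug.malloc.failure_count */
    static int malloc_nowait_count;   /* illustrative local counter */

    void *
    malloc(unsigned long size, struct malloc_type *type, int flags)
    {
            if ((flags & M_NOWAIT) != 0 && malloc_failure_rate != 0) {
                    malloc_nowait_count++;
                    if (malloc_nowait_count % malloc_failure_rate == 0) {
                            malloc_failure_count++;
                            return (NULL);
                    }
            }
            /* Stand-in for the normal allocation path. */
            return (malloc_real(size, type, flags));
    }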
Reviewed by: jeffr, jhb
additional flags argument to indicate blocking disposition, and
pass in M_NOWAIT from the IP reassembly code to indicate that
blocking is not OK when labeling a new IP fragment reassembly
queue. This should eliminate some of the WITNESS warnings that
have started popping up since fine-grained IP stack locking
started going in; if memory allocation fails, the creation of
the fragment queue will be aborted.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
where physical addresses are larger than virtual addresses, such as i386s
with PAE.
- Use this to represent physical addresses in the MI vm system and in the
i386 pmap code. This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
detection code, and due to kvtop returning vm_paddr_t instead of u_long.
Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.
Sponsored by: DARPA, Network Associates Laboratories
Discussed with: re, phk (cdevsw change)
flexible process_fork, process_exec, and process_exit eventhandlers. This
reduces code duplication and also means that I don't have to go duplicate
the eventhandler locking three more times for each of at_fork, at_exec, and
at_exit.
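For a subsystem that used at_exit(), the conversion looks roughly
like this (the hook and tag names are illustrative):

    static void
    mymod_exit_hook(void *arg, struct proc *p)
    {
            /* Tear down any per-process state for p here. */
    }

    static eventhandler_tag mymod_exit_tag;

    static void
    mymod_init(void)
    {
            /* Replaces the old at_exit(mymod_exit_hook) registration. */
            mymod_exit_tag = EVENTHANDLER_REGISTER(process_exit,
                mymod_exit_hook, NULL, EVENTHANDLER_PRI_ANY);
    }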
Reviewed by: phk, jake, almost complete silence on arch@
is set to 0, it now has the same effect as setting witness_dead used to
have.
- Added a sysctl handler that allows root to change witness_watch from a
non-zero value to zero to disable witness at runtime. Note that you
can't turn witness back on once it is off. You can only turn it off as
a one-way switch.
- Added a comment describing the possible values of witness_watch.
kse_mailbox to schedule an upcall; this is useful for userland timeout
routines, for example pthread_cond_timedwait().
Also extract the upcall scheduling code from kse_reassign and create
a new function called thread_switchout to hold this code.
Reviewed by: julian
the devstat is for an "interior" GEOM node and register using the
name argument as a geom identity pointer. Do not put these devstat
structures on the list returned by the sysctl.
This gives us the ability to tell the two kinds of nodes apart and
leave the current "strictly physical" view of devstat intact without
modifications, yet be able to use devstat for both kinds of devices.
It also saves us bloating struct devstat with another 48 bytes of
space for the name. At least for now.
Reviewed by: ken
Add a mutex and protect the allocation and traversal of the list with it.
When we allocate a page for devstat use we drop the mutex and use
M_WAITOK; this is not nice, but under the given circumstances it is the
best we can do.
In the sysctl handler for returning the devstat entries we do not want to
hold the mutex across copyout(9) calls, so we keep a very careful eye on
the devstat_generation count, and abandon with EBUSY if it changes under
our feet.
Specifically test for BIO_WRITE, rather than defaulting to counting
non-read, non-delete operations as writes. Make the default be
DEVSTAT_NO_DATA.
Add atomic increments of the sequence[01] fields so applications using the
mmap'ed view stand a chance of detecting updates in progress.
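A sketch of how an application might use the sequence fields when
reading the mmap'ed view; the exact update protocol should be taken
from devstat.h:

    /* Retry the copy until no update raced with it. */
    static void
    devstat_snapshot(const volatile struct devstat *ds,
        struct devstat *out)
    {
            u_int s0, s1;

            do {
                    s0 = ds->sequence0;
                    *out = *(const struct devstat *)(uintptr_t)ds;
                    s1 = ds->sequence1;
            } while (s0 != s1);
    }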
Reviewed by: ken
but I decided that it was important for this patch to not bit-rot, and
since it is mainly moving code around, the total amount of entropy is
epsilon /phk)
This is a patch to move the common parts of linux_getcwd() back into
kern/vfs_cache.c so that the standard FreeBSD libc getcwd() can use its
extended functionality. The linux syscall linux_getcwd() in
compat/linux/linux_getcwd.c has been rewritten to use it too. It should
be possible to simplify libc's getcwd() after this. No doubt this code
needs some cleaning up, since I've left in the sysctl variables I used
for debugging.
PR: 48169
Submitted by: James Whitwell <abacau@yahoo.com.au>
Kernel:
Change statistics to use the *uptime() timescale (i.e., relative to
boottime) rather than the UTC aligned timescale. This makes the
device statistics code oblivious to clock steps.
Change timestamps to bintime format; they are cheaper.
Remove the "busy_count", and replace it with two counter fields:
"start_count" and "end_count", which are updated in the down and
up paths respectively. This removes the locking constraint on
devstat.
Add a timestamp argument to devstat_start_transaction(); this will
normally be a timestamp set by the *_bio() function in bp->bio_t0.
Use this field to calculate duration of I/O operations.
Add two timestamp arguments to devstat_end_transaction(), one is
the current time, a NULL pointer means "take timestamp yourself",
the other is the timestamp of when this transaction started (see
above).
Change calculation of busy_time to operate on "the salami principle"
(sketched below):
Only when we are idle, which we can determine by the start+end
counts being identical, do we update the "busy_from" field in the
down path. In the up path we accumulate the timeslice in busy_time
and update busy_from.
Change the byte_* and num_* fields into two arrays: bytes[] and
operations[].
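A sketch of the two halves; the real devstat_end_transaction() also
takes the byte count, flags, and the start timestamp as described
above:

    void
    devstat_start_transaction(struct devstat *ds, struct bintime *now)
    {
            /* Idle until now: a new busy period starts here. */
            if (ds->start_count == ds->end_count)
                    ds->busy_from = *now;
            ds->start_count++;
    }

    void
    devstat_end_transaction(struct devstat *ds, struct bintime *now)
    {
            struct bintime slice;

            /* Accumulate the slice since busy_from, then restart it. */
            slice = *now;
            bintime_sub(&slice, &ds->busy_from);
            bintime_add(&ds->busy_time, &slice);
            ds->busy_from = *now;
            ds->end_count++;
    }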
Userland:
Change the misleading "busy_time" name to "snap_time" and make the
time a long double, since that is what most users need anyway. Fill
it using clock_gettime(CLOCK_MONOTONIC) to put it on the same
timescale as the kernel fields.
Change devstat_compute_etime() to operate on struct bintime.
Remove the version 2 legacy interface: the change to bintime makes
compatibility far too expensive.
Fix a bug in systat's "vm" page where boot relative busy times would
be bogus.
Bump __FreeBSD_version to 500107
Review & Collaboration by: ken
KTRFAC_DROP to track instances when ktrace events are dropped due to the
request pool being exhausted. When a thread tries to post a ktrace event
and is unable to due to no available ktrace request objects, it sets
KTRFAC_DROP in its process' p_traceflag field. The next trace event to
successfully post from that process will set the KTR_DROP flag in the
header of the request going out and clear KTRFAC_DROP.
The KTR_DROP flag is the high bit in the type field of the ktr_header
structure. Older kdump binaries will simply complain about an unknown type
when seeing an entry with KTR_DROP set. Note that KTR_DROP being set on a
record in a ktrace file does not tell you anything except that at least one
event from this process was dropped prior to this event. The user has no
way of knowing what types of events were dropped nor how many were dropped.
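A sketch of the logic around request allocation; ktr_trace_begin() is
a hypothetical wrapper and locking is elided:

    static struct ktr_request *
    ktr_trace_begin(struct proc *p, int type)
    {
            struct ktr_request *req;

            req = ktr_getrequest(type);
            if (req == NULL) {
                    /* Request pool exhausted: remember the drop. */
                    p->p_traceflag |= KTRFAC_DROP;
                    return (NULL);
            }
            if (p->p_traceflag & KTRFAC_DROP) {
                    /* First event after a drop carries the marker. */
                    req->ktr_header.ktr_type |= KTR_DROP;
                    p->p_traceflag &= ~KTRFAC_DROP;
            }
            return (req);
    }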
Requested by: phk
struct proc as p_tracecred alongside the current cache of the vnode in
p_tracep. This credential is then used for all later ktrace operations on
this file rather than using the credential of the current thread at the
time of each ktrace event.
- Now that we have multiple ktrace-related items in struct proc that are
pointers, rename p_tracep to p_tracevp to make it less ambiguous.
Requested by: rwatson (1)
- Create a new function bdone() which sets B_DONE and calls wakeup(bp). This
is suitable for use as b_iodone for buf consumers who are not going
through the buf cache.
- Create a new function bwait() which waits for the buf to be done at a set
priority and with a specific wmesg (both functions are sketched after this
list).
- Replace several cases where the above functionality was implemented
without locking with the new functions.
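A sketch of the pair, using a single illustrative mutex (bdone_mtx)
to interlock the flag with the sleep:

    static struct mtx bdone_mtx;    /* illustrative interlock */

    void
    bdone(struct buf *bp)
    {
            mtx_lock(&bdone_mtx);
            bp->b_flags |= B_DONE;
            wakeup(bp);
            mtx_unlock(&bdone_mtx);
    }

    void
    bwait(struct buf *bp, u_char pri, const char *wchan)
    {
            mtx_lock(&bdone_mtx);
            while ((bp->b_flags & B_DONE) == 0)
                    msleep(bp, &bdone_mtx, pri, wchan, 0);
            mtx_unlock(&bdone_mtx);
    }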
possible for some time.
- Lock the buf before accessing fields. This should very rarely be locked.
- Assert that B_DELWRI is set after we acquire the buf. This should always
be the case now.
requiring locked bufs in vfs_bio_awrite(). Previously the buf could
have been written out by fsync before we acquired the buf lock if it
weren't for giant. The cluster_wbuild() handles this race properly but
the single write at the end of vfs_bio_awrite() would not.
- Modify flushbufqueues() so there is only one copy of the loop. Pass a
parameter in that says whether or not we should sync bufs with deps.
- Call flushbufqueues() a second time and then break if we couldn't find
any bufs without deps.
than a MAXPHYS size block ahead. Having this set too high just leaves
other processes starved for IO and screws up interactive response. Let the
users with RAID set it higher when they need it.