marked idle, thus breaking cpu load balancing.
- Change sched_interact_update() to handle cases where the stored history
has expanded significantly in the function itself rather than in the
callers. This fixes a case where sched_priority() could compute a bad
value; see the sketch after this list.
- Add a sysctl to disable the global load balancer for experimentation.
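A minimal sketch of the sched_interact_update() idea, with hypothetical
names in place of the real td_sched fields and limits: once the combined
history passes its cap, the function itself rescales or resets it, so
callers no longer have to special-case an expanded history.

    /* Hypothetical stand-ins for the real history fields/limits. */
    #define HIST_MAX (1 << 20)

    struct hist {
        unsigned int runtime;   /* accumulated run time */
        unsigned int slptime;   /* accumulated sleep time */
    };

    static void
    interact_update(struct hist *h)
    {
        unsigned int sum = h->runtime + h->slptime;

        if (sum <= HIST_MAX)
            return;
        if (sum > HIST_MAX * 2) {
            /* History expanded far past the cap: reset it, keeping
             * only which component dominated, so the interactivity
             * score stays in a sane range. */
            if (h->runtime > h->slptime) {
                h->runtime = HIST_MAX;
                h->slptime = 1;
            } else {
                h->slptime = HIST_MAX;
                h->runtime = 1;
            }
            return;
        }
        /* Mildly over the cap: decay both proportionally. */
        h->runtime /= 2;
        h->slptime /= 2;
    }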
server.
Don't complain about a hard loop id of 0xffff; we get this in
point-to-point topologies with the 2300 and 2K Login firmware.
Up the timeout on register FC4 types commands.
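The suppressed check amounts to something like this sketch (names and
the message are illustrative, not the actual driver code):

    #include <stdint.h>
    #include <stdio.h>

    #define LOOPID_NONE 0xffff

    static void
    validate_loopid(uint16_t loopid, int is_point_to_point)
    {
        /* 2300 and 2K Login firmware report 0xffff in
         * point-to-point topologies; that's expected. */
        if (loopid == LOOPID_NONE && is_point_to_point)
            return;
        if (loopid == LOOPID_NONE)
            printf("invalid hard loop id 0x%x\n", loopid);
    }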
- Rename confusing AGP_INTEL_I845_MCHCFG to AGP_INTEL_I845_AGPM.
- Move E7205 and E7505 from i8x5 to i8x0 family. It probably worked
because the actual offset is the same.
In fact, all three families have the bit in exactly the same place. The
only differences are the name and width of the registers, i.e., NBXCFG
(0x50, dword), RDCR (0x51, byte), AGPM (0x51, byte), or MCHCFG (0x50,
word), depending on the family of the chipset.
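In terms of FreeBSD's PCI config accessors, the per-family difference
reduces to the register offset and access width, as in this
kernel-context sketch (the bit mask is hypothetical):

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <dev/pci/pcivar.h>

    #define AGP_ENABLE_BIT 0x02  /* hypothetical mask */

    /* reg/width per family: NBXCFG (0x50, 4), RDCR (0x51, 1),
     * AGPM (0x51, 1), MCHCFG (0x50, 2); the bit position is
     * identical in each. */
    static void
    agp_set_family_bit(device_t dev, int reg, int width)
    {
        uint32_t val;

        val = pci_read_config(dev, reg, width);
        pci_write_config(dev, reg, val | AGP_ENABLE_BIT, width);
    }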
human-readable format. Note that we report 'realloc(p, 0)' as 'free(p)'
since both cases are encoded the same way and 'free()' is more common
than a realloc() to 0.
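A sketch of the decode rule, assuming the usual three-field utrace
record (old pointer, size, new pointer); names are illustrative:

    #include <stddef.h>
    #include <stdio.h>

    struct ut {
        void   *p;  /* pointer passed in (free/realloc) */
        size_t  s;  /* requested size */
        void   *r;  /* pointer returned (malloc/realloc) */
    };

    static void
    print_ut(const struct ut *u)
    {
        if (u->p == NULL)
            printf("malloc(%zu) = %p\n", u->s, u->r);
        else if (u->s == 0)
            /* realloc(p, 0) is encoded identically, so report
             * the far more common free(p). */
            printf("free(%p)\n", u->p);
        else
            printf("realloc(%p, %zu) = %p\n", u->p, u->s, u->r);
    }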
MFC after: 1 week
sysctl and socket teardown by adding a reference count to the UNIX domain
pcb object and fixing the sysctl that enumerates unpcbs to grab a
reference on each unpcb while it builds the list to copy out to userland.
- Close a race between UNIX domain pcb garbage collection (unp_gc()) and
file descriptor teardown (fdrop()) by adding a new garbage collection
flag FWAIT. unp_gc() sets FWAIT while it walks the message buffers
in a UNIX domain socket looking for nested file descriptor references
and clears the flag when it is finished. fdrop() checks to see if the
flag is set on a file descriptor whose refcount just dropped to 0 and
waits for unp_gc() to clear the flag before completely destroying the
file descriptor.
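The handshake can be pictured with a userland pthreads analogue (the
real code operates on struct file with kernel sleep/wakeup in unp_gc()
and fdrop(); everything below is illustrative):

    #include <pthread.h>

    #define FWAIT 0x1

    struct xfile {
        pthread_mutex_t lock;
        pthread_cond_t  cv;
        int             flags;
        int             count;  /* reference count */
    };

    /* GC side: mark the file while scanning message buffers. */
    void
    gc_scan(struct xfile *fp)
    {
        pthread_mutex_lock(&fp->lock);
        fp->flags |= FWAIT;
        pthread_mutex_unlock(&fp->lock);

        /* ... walk buffers for descriptors referencing fp ... */

        pthread_mutex_lock(&fp->lock);
        fp->flags &= ~FWAIT;
        pthread_cond_broadcast(&fp->cv);
        pthread_mutex_unlock(&fp->lock);
    }

    /* fdrop() side: the last reference waits out the scan. */
    void
    file_drop(struct xfile *fp)
    {
        pthread_mutex_lock(&fp->lock);
        if (--fp->count == 0) {
            while (fp->flags & FWAIT)
                pthread_cond_wait(&fp->cv, &fp->lock);
            /* now safe to destroy fp */
        }
        pthread_mutex_unlock(&fp->lock);
    }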
MFC after: 1 week
Reviewed by: rwatson
Submitted by: ups
Hopefully makes the panics go away: mx1
- Add a printf in swp_pager_meta_build() to warn if the swapzone becomes
exhausted, so that there is at least a warning before a box deadlocks
because it ran out of swapzone space before running out of swap space.
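The warning amounts to something like the following sketch; the
allocator name and message text are illustrative:

    #include <stdio.h>

    extern void *swblk_zone_alloc(void);  /* hypothetical allocator */

    static int warned;

    void *
    meta_alloc(void)
    {
        void *p = swblk_zone_alloc();

        /* Warn once so the eventual deadlock has a visible cause. */
        if (p == NULL && !warned) {
            warned = 1;
            printf("swap zone exhausted, "
                "increase kern.maxswzone\n");
        }
        return (p);
    }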
MFC after: 1 week
Reviewed by: alc
functions now more closely resemble similar functions in nullfs.
This also eliminates some errors.
Submitted by: daichi, Masanori OZAWA <ozawa ongs co jp>
wrap this within #if/#else/#endif so that it will only take effect once
ARCHIVE_API_VERSION is increased (which should happen on HEAD some time
between now and when RELENG_7 is branched).
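Assuming the wrapped declaration is the client skip callback and the
version bump goes from 1 to 2, the pattern looks roughly like:

    #include <sys/types.h>

    struct archive;

    #if ARCHIVE_API_VERSION < 2
    /* Current ABI: skip amounts limited by size_t/ssize_t. */
    typedef ssize_t archive_skip_callback(struct archive *,
        void *client_data, size_t request);
    #else
    /* After the bump: full 64-bit skips on 32-bit systems too. */
    typedef off_t archive_skip_callback(struct archive *,
        void *client_data, off_t request);
    #endif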
setting ftick = ltick = ticks in schedinit().
- Update the priority when we are pulled off of the run queue and when we
are inserted onto the run queue so that it more accurately reflects our
present status. This is important for priority propagation to function
efficiently.
- Move the frequency test into sched_pctcpu_update() so we don't repeat it
at every call site (see the sketch after this list).
- Put some temporary work-around code in sched_priority() in case the tick
mechanism produces a bad priority. Eventually this should revert to an
assert again.
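The move amounts to the following shape, with illustrative names and
interval (the real code operates on td_sched):

    #define UPDATE_INTERVAL 100  /* hypothetical, in ticks */

    struct est {
        int ftick;  /* first tick of the sampling window */
        int ltick;  /* last tick the estimator was updated */
    };

    static void
    pctcpu_update(struct est *e, int now)
    {
        /* The frequency test lives here now, not in every caller. */
        if (now - e->ltick < UPDATE_INTERVAL)
            return;
        /* ... decay/recompute %cpu over [ftick, ltick] ... */
        e->ltick = now;
    }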
a spin mutex since it doesn't have an INTR_FAST interrupt handler.
Beyond that the driver is still under Giant anyway.
- Remove unneeded locking during attach across operations that can't be
called with locks held (such as bus_dma_tag_create()).
MFC after: 1 week
Not objected to by: scottl
returning the length skipped in a ssize_t to using off_t for both. This
does not break any A[BP]Is, since compression_skip is entirely internal
to libarchive.
If a skip request is > SSIZE_MAX, don't pass it down to the client layer
skip function, since those still use size_t / ssize_t. Instead, just
read the data and throw it away.
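In outline, with hypothetical stand-ins for the internal client
callbacks:

    #include <limits.h>
    #include <sys/types.h>

    /* Hypothetical stand-ins for the client-layer callbacks. */
    extern ssize_t client_skip(size_t request);
    extern off_t   read_and_discard(off_t request);

    static off_t
    compression_skip(off_t request)
    {
        /* Client skip functions still traffic in size_t/ssize_t,
         * so a wider request is read and thrown away instead. */
        if (request > SSIZE_MAX)
            return (read_and_discard(request));
        return ((off_t)client_skip((size_t)request));
    }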
With this commit, libarchive/bsdtar should now successfully skip archive
entries of >2GB on 32-bit systems, but does so more slowly than necessary.
The performance will improve with a future A[BP]I breaking commit which
makes client layer skip functions use off_t.
Discussed with: kientzle
MFC after: 1 week
the most recently chosen index. This significantly improves nice
behavior. This allows a lower priority thread to run some multiple of
times before the higher priority thread makes it to the front of
the queue. A nice +20 cpu hog now only gets ~5% of the cpu when running
with a nice 0 cpu hog and about 1.5% with a nice -20 hog. A nice
difference of 1 makes a 4% difference in cpu usage between two hogs.
- Track a separate insert and removal index. When the removal index is
empty it is updated to point at the current insert index.
- Don't remove and re-add a thread to the runq when it is being adjusted
down in priority.
- Pull some conditional code out of sched_tick(). It's looking a bit
large now.
- Remove the double queue mechanism for timeshare threads. It was slow
due to excess cache lines in play, caused suboptimal scheduling behavior
with niced and other non-interactive processes, complicated priority
lending, etc.
- Use a circular queue with a floating starting index for timeshare
threads. This enforces fairness by moving the insertion point closer to
threads with worse priorities over time (see the sketch after this list).
- Give interactive timeshare threads real-time user-space priorities and
place them on the realtime/ithd queue.
- Select non-interactive timeshare thread priorities based on their cpu
utilization over the last 10 seconds combined with the nice value. This
gives us more sane priorities and behavior in a loaded system as
compared to the old method of using the interactivity score. The
interactivity score quickly hit a ceiling if threads were non-interactive
and penalized new hog threads.
- Use one slice size for all threads. The slice is not currently
dynamically set to adjust scheduling behavior of different threads.
- Add some new sysctls for scheduling parameters.
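A sketch of the floating-index insertion, with illustrative constants
and mapping (recall that in FreeBSD a numerically higher priority is a
worse one):

    #define RQ_NQS 64  /* number of queue indices */

    struct trq {
        int head;  /* floating start index */
        /* per-index thread lists elided */
    };

    /* Map a priority onto a slot relative to the floating head; as
     * the head advances over time, threads at worse priorities
     * drift toward the front and eventually run. */
    static int
    trq_slot(const struct trq *rq, int prio, int pri_min, int pri_max)
    {
        int span = pri_max - pri_min + 1;
        int off = ((prio - pri_min) * RQ_NQS) / span;

        return ((rq->head + off) % RQ_NQS);
    }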
Bug fixes/Clean up:
- Fix zeroing of td_sched after initialization in sched_fork_thread() caused
by recent ksegrp removal.
- Fix KSE interactivity issues related to frequent forking and exiting of
kse threads. We simply disable the penalty for thread creation and exit
for kse threads.
- Clean up the cpu estimator by using tickincr here as well. Keep ticks and
ltick/ftick in the same frequency. Previously ticks were stathz and
others were hz.
- Lots of new and updated comments.
- Many many others.
Tested on: UP x86/amd64, 8-way amd64.
- runq_add_pri allows the caller to position the thread at any rqindex
regardless of priority.
- runq_choose_from() chooses the lowest priority thread starting from a given
index. The index is updated with the rqindex of the chosen thread, so the
routine picks the lowest priority relative to a given index (see the
sketch below).
- runq_remove_idx() updates the index if the run queue that held the removed
thread is now empty.
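A sketch of the runq_choose_from() search, with the per-index
bookkeeping simplified to a count array (illustrative, not the real
rqbits bitmap):

    #define RQ_NQS 64

    /* count[i] != 0 means index i holds runnable threads. */
    static int
    choose_from(const int count[RQ_NQS], int *idxp)
    {
        int i, idx;

        /* Scan from the given index, wrapping around, and report
         * back where the winner was found. */
        for (i = 0; i < RQ_NQS; i++) {
            idx = (*idxp + i) % RQ_NQS;
            if (count[idx] != 0) {
                *idxp = idx;
                return (idx);
            }
        }
        return (-1);  /* run queue is empty */
    }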