faster and iterate over its work list a few times in an attempt
to empty the work list before the syncer terminates. This leaves
fewer dirty blocks to be written at the "syncing disks" stage and
keeps the "giving up on N buffers" problem from being triggered
by the presence of a large soft updates work list at system shutdown
time. The downside is that the syncer takes noticeably longer to
terminate.
Tested by: "Arjan van Leeuwen" <avleeuwen AT piwebs DOT com>
Approved by: mckusick
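For illustration only (not part of the change), a minimal sketch of the
bounded drain described above; the pass limit and the helper functions
are hypothetical names, not the committed code:

	/*
	 * Sketch: make a bounded number of passes over the syncer's
	 * work list before terminating, so that few dirty blocks
	 * remain at the "syncing disks" stage. All names hypothetical.
	 */
	#define SYNCER_MAXPASSES	4

	static void
	syncer_drain(void)
	{
		int pass;

		for (pass = 0; pass < SYNCER_MAXPASSES; pass++) {
			if (syncer_worklist_len() == 0)
				break;		/* nothing left to flush */
			syncer_process_worklist();
		}
	}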
devremoved events. This reduces the races around these events. We
now include the pnp info in both. This lets one do more interesting
things with devd on device insertion.
Submitted by: Bernd Walter
creation of the sysctl tree for the turnstile profiling stats until a
SI_SUB_LOCK sysinit. Doing it in init_turnstiles() is too early as it is
called before mi_startup().
hash tables used in the sleep queue and turnstile code. Each option adds
a sysctl tree under debug containing the maximum depth of any bucket in
the hash table as well as a separate node for each bucket (or chain)
containing the current depth and maximum depth for that bucket.
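As an aside, a hedged sketch of the deferral described above, with
illustrative node and function names (SYSINIT, SI_SUB_LOCK, and the
sysctl macros are the real interfaces; the rest is assumed):

	#include <sys/param.h>
	#include <sys/kernel.h>
	#include <sys/sysctl.h>

	SYSCTL_NODE(_debug, OID_AUTO, turnstile, CTLFLAG_RD, NULL,
	    "turnstile profiling");

	/*
	 * Runs at SI_SUB_LOCK, i.e. from mi_startup(), rather than
	 * from init_turnstiles(), which is called earlier.
	 */
	static void
	init_turnstile_profiling(void *arg __unused)
	{
		/* create per-bucket depth/max nodes under debug.turnstile */
	}
	SYSINIT(turnstile_prof, SI_SUB_LOCK, SI_ORDER_ANY,
	    init_turnstile_profiling, NULL);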
the queue has been removed from the global taskqueue_queues list. This
removes the need for the draining queue hack.
- Allow taskqueue_run() to be called with the taskqueue mutex held. It
can still be called without the lock for API compatibility. In that case
it will acquire the lock internally.
- Don't lock the individual queue mutex in taskqueue_find() until after the
strcmp as the global queues mutex is sufficient for the strcmp.
- Simplify taskqueue_thread_loop() now that it can hold the lock across
taskqueue_run().
Submitted by: bde (mostly)
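A sketch of the resulting locking pattern in taskqueue_run(), assuming
the queue's mutex member is named tq_mutex (the run loop body is
elided; this is not the committed code):

	void
	taskqueue_run(struct taskqueue *queue)
	{
		int owned;

		owned = mtx_owned(&queue->tq_mutex);
		if (!owned)
			mtx_lock(&queue->tq_mutex);

		/* ... run pending tasks with the lock held ... */

		if (!owned)
			mtx_unlock(&queue->tq_mutex);
	}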
so_gencnt, numopensockets, and the per-socket field so_gencnt. Annotate
that this might be better done with atomic operations.
Annotate what accept_mtx protects.
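For illustration, the atomic alternative noted above would replace the
mutexed counter update with something like this sketch:

	/* sketch only: update the global counter atomically instead of
	 * under so_global_mtx */
	atomic_add_int(&numopensockets, 1);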
associated with performing a wakeup on the socket buffer:
- When performing an sbappend*() followed by a so[rw]wakeup(), explicitly
acquire the socket buffer lock and use the _locked() variants of both
calls. Note that the _locked() sowakeup() versions unlock the mutex on
return. This is done in uipc_send(), divert_packet(), mroute
socket_send(), raw_append(), tcp_reass(), tcp_input(), and udp_append().
- When the socket buffer lock is dropped before a sowakeup(), remove the
explicit unlock and use the _locked() sowakeup() variant. This is done
in soisdisconnecting(), soisdisconnected() when setting the can't send/
receive flags and dropping data, and in uipc_rcvd() when adjusting
back-pressure on the sockets.
For UNIX domain sockets running mpsafe with a contention-intensive SMP
mysql benchmark, this results in a 1.6% query rate improvement due to
reduced mutex costs.
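A minimal sketch of the first pattern, using the interfaces named
above (mbuf handling elided); note that the _locked() wakeup drops the
lock itself:

	SOCKBUF_LOCK(&so->so_rcv);
	sbappend_locked(&so->so_rcv, m);
	sorwakeup_locked(so);	/* unlocks so->so_rcv on return */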
The overhead of unconditionally allocating TIDs (and likewise,
unconditionally deallocating them), is amortized across multiple
thread creations by the way UMA makes it possible to have type-stable
storage.
Previously the cost was kept down by having threads created as part
of a fork operation use the process' PID as the TID. While this had
some nice properties, it also introduced complexity in the way TIDs
were allocated. Most importantly, the type-stable storage that UMA
gives us makes that complexity unnecessary.
This change affects how core dumps are created and in particular how
the PRSTATUS notes are dumped. Since we don't have a thread with a
TID equalling the PID, we now need a different way to preserve the
previous behavior. We do this by having the given thread (i.e.
the thread passed to the core dump code in td) dump its state first
and fill in pr_pid with the actual PID. All other threads will have
pr_pid contain their TIDs. The upshot of all this is that the debugger
will now likely select the right LWP (=TID) as the initial thread.
Credits to: julian@ for spotting how we can utilize UMA.
Thanks to: all who provided julian@ with test results.
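A hedged sketch of the UMA-based scheme described above: because the
storage is type-stable, a TID assigned in the zone's init hook survives
free/alloc cycles. tid_alloc() is a hypothetical allocator and the hook
signature is assumed:

	static int
	thread_init(void *mem, int size, int flags)
	{
		struct thread *td = mem;

		/* assigned once per item; reused across reallocations */
		td->td_tid = tid_alloc();	/* hypothetical */
		return (0);
	}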
copies.
No current line disciplines have a dynamically changing hotchar, and
expecting to receive anything sensible during a change in ldisc is
insane, so no locking of the hotchar field is necessary.
we have to revert to TTYDISC which we know will successfully open
rather than try the previous ldisc which might also fail to open.
Do not let ldisc implementations muck about with ->t_line, and remove
code which checks for reopens, it should never happen.
Move ldisc->l_hotchar to tty->t_hotchar and have ldisc implementations
initialize it in their open routines. Reset to zero when we enter
TTYDISC. ("no" should really be -1 since zero could be a valid
hotchar for certain old European mainframe protocols.)
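A sketch of an ldisc open routine initializing the per-tty hotchar as
described; the discipline and the character chosen are illustrative:

	static int
	fooldisc_open(struct cdev *dev, struct tty *tp)
	{
		tp->t_hotchar = 0x7e;	/* e.g. a frame delimiter */
		return (0);
	}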
older API to list attributes on a file (zero-length attribute name)
to function. extattr_list_*() are now the only available APIs to
use when listing attributes.
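For reference, a small userland sketch against the remaining API; per
the documented format, each entry is a one-byte length followed by a
non-nul-terminated name:

	#include <sys/types.h>
	#include <sys/extattr.h>

	#include <err.h>
	#include <stdio.h>

	int
	main(void)
	{
		char buf[1024];
		ssize_t nbytes, i;

		nbytes = extattr_list_file("/tmp/file",
		    EXTATTR_NAMESPACE_USER, buf, sizeof(buf));
		if (nbytes == -1)
			err(1, "extattr_list_file");
		for (i = 0; i < nbytes; i += 1 + (unsigned char)buf[i])
			printf("%.*s\n", (int)(unsigned char)buf[i],
			    &buf[i + 1]);
		return (0);
	}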
the socket buffer having its limits adjusted. sbreserve() now acquires
the lock before calling sbreserve_locked(). In soreserve(), acquire
socket buffer locks across read-modify-writes of socket buffer fields,
and calls into sbreserve/sbrelease; make sure to acquire in keeping
with the socket buffer lock order. In tcp_mss(), acquire the socket
buffer lock in the calling context so that we have atomic read-modify
-write on buffer sizes.
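A sketch of the locked read-modify-write described above (arguments
abbreviated; sbreserve_locked() returns nonzero on success):

	SOCKBUF_LOCK(&so->so_snd);
	/* adjust the buffer limits atomically w.r.t. other accessors */
	error = sbreserve_locked(&so->so_snd, sndcc, so, td) ? 0 : ENOBUFS;
	SOCKBUF_UNLOCK(&so->so_snd);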
the SS_NBIO flag from the parent socket to the child socket during an
accept() operation.
The file descriptor O_NONBLOCK flag would have been propagated already
by the fflag assignment, and therefore would have been inconsistent
with the underlying socket's so_state member.
This makes accept() more closely adhere to the API contract we effectively
outline in the manual page. Note also that Linux continues to differ here;
O_NONBLOCK is not propagated. The other BSDs do propagate the flag, as
does Solaris. The Single UNIX Specification does not offer specific
advice on this issue.
PR: kern/45733
Requested by: Jayanth Vijayaraghavan
Reviewed by: rwatson
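A userland sketch of the resulting semantics (error handling omitted;
assumes a connection is already pending on lsock):

	#include <sys/socket.h>

	#include <assert.h>
	#include <fcntl.h>

	void
	check_accept_flags(int lsock)
	{
		int csock;

		fcntl(lsock, F_SETFL, O_NONBLOCK);
		csock = accept(lsock, NULL, NULL);
		/* with this change, the accepted socket is blocking */
		assert((fcntl(csock, F_GETFL, 0) & O_NONBLOCK) == 0);
	}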
* Obtain/release schedlock around calls to calcru.
* Sort switch cases which do not cascade per style(9).
* Sort local variables per style(9).
* Remove "superfluous" whitespace.
* Cleanup handling of NULL uap->tp in clock_getres(). It would probably
be better to return EFAULT like clock_gettime() does by passing the
pointer to copyout(), but I presume the original code was deliberately
written not to fail. I'll defer to -standards on this one.
Reported by: bde
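A sketch of the NULL-tp handling described in the last item (the
surrounding clock_getres() code is abbreviated):

	if (uap->tp == NULL)
		return (0);	/* deliberately succeed, not EFAULT */
	/* ... fill in struct timespec ts for the requested clock ... */
	return (copyout(&ts, uap->tp, sizeof(ts)));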
This is not really used by the process but it's confusing to some
status readers to see zombie processes with "running" threads.
Pointed out by: Don Lewis <truckman@FreeBSD.org>
where it is known to detect a problem but the problem is not very easy
to fix. The warning became very common recently after a call to calcru()
was added to fill_kinfo_thread().
Another (much older) cause of "negative times" (actually non-monotonic
times) was fixed in rev.1.237 of kern_exit.c.
Print separate messages for non-monotonic and negative times.
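A sketch of the two diagnostics; the variable names and message wording
are assumptions, not the actual calcru() code:

	if (runtime < 0)
		printf("calcru: negative runtime %jd usec for pid %d\n",
		    (intmax_t)runtime, p->p_pid);
	else if (runtime < prev_runtime)	/* non-monotonic */
		printf("calcru: runtime went backwards from %jd to %jd "
		    "usec for pid %d\n", (intmax_t)prev_runtime,
		    (intmax_t)runtime, p->p_pid);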
from exit1(). sched_exit() must be called unconditionally from exit1().
It was already called almost unconditionally, since the only process
the condition excluded, init, only exits at system shutdown if at all.
(2) Removed the comment that presumed to know what sched_exit() does.
sched_exit() does different things for the ULE case. The call became
essential when it started doing load average stuff, but its caller
should not know that.
(3) Didn't fix bugs caused by bitrot in the condition. The condition was
last correct in rev.1.208 when it was in wait1(). There p was spelled
curthread->td_proc and was for the waiting parent; now p is for the
exiting child. The condition was to avoid lowering init's priority.
It should be in sched_exit() itself. Lowering of priorities is broken
in other ways in at least the 4BSD scheduler, and doing it for init
causes less noticeable problems than doing it for shells.
Noticed by: julian (1)