"debug.mpsafevm" results in (almost) Giant-free execution of zero-fill
page faults. (Giant is held only briefly, just long enough to determine
if there is a vnode backing the faulting address.)
Also, condition the acquisition and release of Giant around calls to
pmap_remove() on "debug.mpsafevm".
The effect on performance is significant. On my dual Opteron, I see a
3.6% reduction in "buildworld" time.
- Use atomic operations to update several counters in vm_fault().
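As a rough sketch of the two changes above (the variable and counter
names are assumptions for illustration, not the committed vm_map.c and
vm_fault.c source), the conditional-Giant pattern and an atomic counter
update look approximately like this:

    extern int debug_mpsafevm;   /* toggled via the debug.mpsafevm sysctl */

    /* Conditionally bracket pmap_remove() with Giant, as described above. */
    if (!debug_mpsafevm)
            mtx_lock(&Giant);
    pmap_remove(map->pmap, entry->start, entry->end);
    if (!debug_mpsafevm)
            mtx_unlock(&Giant);

    /* Lockless counter update in vm_fault(), replacing cnt.v_vm_faults++. */
    atomic_add_int(&cnt.v_vm_faults, 1);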
before dereferencing sotounpcb() and checking its value, as so_pcb
is protected by protocol locking, not subsystem locking. This
prevents races during close() by one thread and use of the socket
in another.
unp_bind() now asserts the UNP lock, and uipc_bind() now acquires
the lock around calls to unp_bind().
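A simplified sketch of the resulting locking pattern (UNP_LOCK(),
UNP_UNLOCK() and UNP_LOCK_ASSERT() are assumed to be the UNIX domain
socket subsystem's global-lock macros; this is not the committed
uipc_usrreq.c source):

    static int
    uipc_bind(struct socket *so, struct sockaddr *nam, struct thread *td)
    {
            struct unpcb *unp;
            int error;

            UNP_LOCK();
            unp = sotounpcb(so);
            if (unp == NULL) {
                    /* close() in another thread may have cleared so_pcb. */
                    UNP_UNLOCK();
                    return (EINVAL);
            }
            error = unp_bind(unp, nam, td); /* asserts the UNP lock is held */
            UNP_UNLOCK();
            return (error);
    }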
wait for system wires to disappear, do so (much more trivially) by
instead only checking for system wires of user maps and not kernel maps.
Alternative by: tor
Reviewed by: alc
o Update to match 5-CURRENT reality.
o Bump up minimum system requirements.
o Make examples work.
PR: docs/70485
Submitted by: Gavin Atkinson <gavin.atkinson@ury.york.ac.uk>
- pipespace is now able to resize non-empty pipes; this allows
for many more resizing opportunities.
- Backing is no longer pre-allocated for the reverse direction
of pipes. This direction is rarely (if ever) used, so this cuts the
amount of map space allocated to a pipe in half.
- Pipe growth is now much more dynamic; a pipe will now grow when
the total amount of data it contains plus the size of the pending write
exceeds the size of the pipe. Previously, only individual writes greater
than the size of the pipe would cause growth.
- In low memory situations, pipes will now shrink during both read
and write operations, where possible. Once the memory shortage
ends, the growth code will cause these pipes to grow back to an appropriate
size.
- If the full PIPE_SIZE allocation fails when a new pipe is created, the
allocation will be retried with SMALL_PIPE_SIZE. This helps to deal
with the situation of a fragmented map after a low memory period has
ended.
- Minor documentation + code changes to support the above.
Taken together, these changes increase the number of pipes that
can be allocated simultaneously, drastically reducing the chances that
pipe allocation will fail; the growth and fallback heuristics are
sketched below.
Performance appears unchanged due to dynamic resizing.
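The growth and fallback heuristics referred to above, sketched with
names and constants that follow sys/pipe.c conventions (illustrative
only, not the committed code):

    /* In the write path: grow when the queued data plus the pending
       write no longer fit in the current buffer. */
    if (wpipe->pipe_buffer.cnt + uio->uio_resid > wpipe->pipe_buffer.size &&
        wpipe->pipe_buffer.size < BIG_PIPE_SIZE)
            (void)pipespace(wpipe, BIG_PIPE_SIZE);

    /* At pipe creation: fall back to a small buffer if the full
       PIPE_SIZE allocation fails, e.g. on a fragmented pipe map. */
    if (pipespace(rpipe, PIPE_SIZE) != 0)
            error = pipespace(rpipe, SMALL_PIPE_SIZE);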
Decrease log severity to debug if a protocol is not supported by the
kernel (rpcbind checks /etc/netconfig to see whether each protocol is
available).
This avoids "rpcbind: cannot create socket for tcp6" messages
at startup on IPv4-only kernels.
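The change amounts to picking the syslog priority based on why socket
creation failed; a minimal userland sketch of the idea (not the actual
rpcbind source):

    #include <sys/socket.h>
    #include <errno.h>
    #include <syslog.h>

    int
    try_socket(int family, const char *netid)
    {
            int fd, pri;

            fd = socket(family, SOCK_STREAM, 0);
            if (fd < 0) {
                    /* An unsupported protocol family is expected on,
                       e.g., IPv4-only kernels; log it at debug only. */
                    pri = (errno == EPROTONOSUPPORT ||
                        errno == EAFNOSUPPORT) ? LOG_DEBUG : LOG_ERR;
                    syslog(pri, "cannot create socket for %s", netid);
            }
            return (fd);
    }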
for EBus, ISA and PCI, by compiling ofw_isa.c and ofw_pci_if.m
unconditionally. The correct way is to rewrite OF_decode_addr() in
ofw_machdep.c in
a bus-neutral way. That's certainly possible but we unfortunately didn't
make it for FreeBSD 5.3.
Approved by: tmm
meaningless. In particular, don't assume that it is left untouched if
stat(2) fails; that assumption happens to fail at high optimization
levels on some platforms.
MFC after: 1 week
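The defensive pattern being asked for, as a small self-contained
sketch:

    #include <sys/stat.h>

    static off_t
    file_size(const char *path)
    {
            struct stat sb;

            if (stat(path, &sb) != 0)
                    return (-1);    /* sb is meaningless here; don't read it */
            return (sb.st_size);
    }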
contained "sanity" checks that could be violated if another CPU modified
the pmap between the emulation trap and locking the pmap in
pmap_emulate_reference(). As a result, the pte could be inconsistent
with the access that caused the emulation trap. In such cases,
pmap_emulate_reference() now flushes the current CPU's TLB entry and
returns.
- Make pmap_changebit() an inline function, reducing object code size.
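Illustrative only (the real code is in the alpha pmap and uses its own
helpers; the lookup and consistency-check names below are hypothetical):
when the pte turns out to be inconsistent with the faulting access, the
function now recovers instead of tripping an assertion:

    PMAP_LOCK(pmap);
    pte = pmap_lookup_pte(pmap, va);                /* hypothetical lookup */
    if (!pte_consistent_with_fault(pte, ftype)) {   /* hypothetical check */
            /* Another CPU changed the pmap between the emulation trap
               and the lock acquisition: flush our stale TLB entry and
               return rather than asserting. */
            pmap_invalidate_page(pmap, va);
            PMAP_UNLOCK(pmap);
            return;
    }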
- Add the manufacturer name to each item in the device list.
- Make the note about supporting "IBM e335" into a general list and
change the entry to use the full product name ("IBM eServer xSeries
335").
- Add Dell PowerEdge 1750 to the list of systems with mpt onboard.
use it, only those with FCode. Add references to dc(4), gem(4) and hme(4)
for obtaining further information about such devices presently supported
by FreeBSD.
- Correct the HISTORY section. There was an eeprom(8) utility in 4.4BSD and
early versions of FreeBSD 2.x.
- Add an AUTHORS section.
Don't count busy buffers before the initial call to sync() and
don't skip the initial sync() if no busy buffers were counted.
Always call sync() at least once if syncing is requested. This
defers the "Syncing disks, buffers remaining..." message until
after the initial sync() call and the first count of busy
buffers. This backs out changes in kern_shutdown 1.162.
Print a different message when there are no busy buffers after the
initial sync(), which is now the expected situation.
Print an additional message when syncing has completed successfully
in the unusual situation where the work of syncing was done by
boot().
Uppercase one message to make it consistent with all of the other
kernel shutdown messages.
Discussed with: bde (in a much earlier form, prior to 1.162)
Reviewed by: njl (in an earlier form)
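A rough sketch of the resulting ordering in boot() (simplified;
do_sync() and count_busy_buffers() are stand-ins for the real sync
handler and buffer scan):

    /* Always issue the initial sync, even if nothing looks busy yet. */
    do_sync();

    nbusy = count_busy_buffers();
    if (nbusy == 0) {
            /* The expected case: report it and move on. */
            printf("No buffers busy after final sync\n");
    } else {
            printf("Syncing disks, buffers remaining... ");
            for (iter = 0; nbusy != 0 && iter < 20; iter++) {
                    printf("%d ", nbusy);
                    do_sync();
                    DELAY(50000 * iter);
                    nbusy = count_busy_buffers();
            }
            printf("\n");
    }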
to specify an alternative padding character when using a conversion
mode, or when using noerror with sync and an input error occurs. This
facilitates reading old and error-prone media by allowing the user to
more effectively mark error blocks in the output stream.
- Remove reference to the NOTES section in the entry for Sun DMFE,
since it doesn't work well with the auto-generated Hardware Notes. [1]
OK'ed by: marius [1]
logical CPUs on a system to be used as a dedicated watchdog to cause a
drop to the debugger and/or generate an NMI to the boot processor if
the kernel ceases to respond. A sysctl enables the watchdog running
out of the processor's idle thread; a callout is launched to reset a
timer in the watchdog. If the callout fails to reset the timer for ten
seconds, the watchdog will fire. The sysctl allows you to select which
CPU will run the watchdog.
A sample "debug.leak_schedlock" is included, which causes a sysctl to
spin holding sched_lock in order to trigger the watchdog. On my Xeons,
the watchdog is able to detect this failure mode and break into the
debugger, which cannot otherwise be done without an NMI button.
This option does not currently work with sched_ule due to ule's push
notion of scheduling, similar to machdep.hlt_logical_cpus failing to
work with that scheduler.
At face value, this might seem somewhat inefficient, but there are a
lot of dual-processor Xeons with HTT around, so using one as a watchdog
for testing is not as inefficient as one might fear.
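The scheme reduces to a callout that periodically refreshes a
timestamp and a dedicated CPU that checks that timestamp from its idle
loop; a simplified sketch (names are illustrative, not the committed MP
watchdog code, and callout initialization is omitted):

    static volatile int watchdog_last;      /* last tick the callout ran */
    static struct callout watchdog_callout;

    static void
    watchdog_refresh(void *arg)
    {
            watchdog_last = ticks;
            callout_reset(&watchdog_callout, hz, watchdog_refresh, NULL);
    }

    /* Called repeatedly from the watchdog CPU's idle thread. */
    static void
    watchdog_check(void)
    {
            if (ticks - watchdog_last >= 10 * hz) {
                    /* The callout machinery has stalled for ten seconds;
                       the kernel is likely wedged.  Drop to the debugger
                       (or send an NMI to the boot processor). */
                    kdb_enter("mp watchdog timeout");
            }
    }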