This is a followup to r212964.
The stack_print() call chain obtains the linker sx lock and thus may
potentially lead to a deadlock, depending on the kind of panic.
stack_print_ddb() doesn't acquire any locks and doesn't use any
facilities of the DDB backend.
Using stack_print_ddb() outside of the DDB ifdef required moving a number
of helper functions out from under it as well.
It is a good idea to rename the linker_ddb_* and stack_*_ddb functions to
have an 'unlocked' component in their names instead of 'ddb', because
those functions do not use any DDB services; instead, they provide
unlocked access to linker symbol information. The latter was previously
needed only for DDB, hence the 'ddb' name component.
An alternative is to ditch the unlocked versions altogether after
implementing proper panic handling:
1. stop other CPUs upon a panic
2. make all non-spinlock lock operations (mutex, sx, rwlock) no-ops
when panicstr != NULL (see the sketch below)
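For illustration, a minimal sketch of item 2 (the wrapper name is
hypothetical; panicstr is the real global from <sys/systm.h>):

#include <sys/param.h>
#include <sys/systm.h>          /* panicstr */
#include <sys/lock.h>
#include <sys/mutex.h>

static void
mtx_lock_unless_panicked(struct mtx *m)
{

        if (panicstr != NULL)
                return;         /* after a panic, sleep locks become no-ops */
        mtx_lock(m);
}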
Suggested by: mdf
Discussed with: attilio
MFC after: 2 weeks
If timer capabilities force us to change the periodicity mode, try to restore
it back later, as soon as the newly chosen timer is capable of it. Without
this, a timer change like HPET->RTC->HPET always results in periodic mode
being enabled.
Fix two problems in exec_imgact_shell(): (1) Previously, if
the first line of a script exceeded MAXSHELLCMDLEN characters, then
exec_imgact_shell() silently truncated the line and passed on the truncated
interpreter name or argument. Now, exec_imgact_shell() will fail and return
ENOEXEC, which is the errno commonly used among Unix variants for this type
of error. (2) Previously, exec_imgact_shell()'s check on the length of the
interpreter's name was ineffective; exec_imgact_shell() could not possibly
fail and return ENAMETOOLONG, because the length of the interpreter name
would have had to exceed MAXSHELLCMDLEN characters for ENAMETOOLONG to be
returned, yet the search for the end of the interpreter name stops after at
most MAXSHELLCMDLEN - 2 characters are scanned. (In the end, this particular
error is eventually discovered outside of exec_imgact_shell() and
ENAMETOOLONG is returned, so the real effect of this second change is that
the error is detected earlier, in exec_imgact_shell().)
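For illustration, a simplified, hypothetical sketch of the kind of dead
check described in (2); the names and the MAXSHELLCMDLEN value are made
up, not the actual exec_imgact_shell() code:

#include <errno.h>
#include <stddef.h>

#define MAXSHELLCMDLEN  129     /* example value, for illustration only */

static int
find_interp_end(const char *line)
{
        size_t i;

        /* The scan stops after at most MAXSHELLCMDLEN - 2 characters. */
        for (i = 0; i < MAXSHELLCMDLEN - 2; i++)
                if (line[i] == '\0' || line[i] == ' ' || line[i] == '\t')
                        break;
        /*
         * Ineffective check: i is capped at MAXSHELLCMDLEN - 2 above,
         * so this condition can never be true and ENAMETOOLONG can
         * never be returned from here.
         */
        if (i > MAXSHELLCMDLEN)
                return (ENAMETOOLONG);
        return (0);
}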
Update the definition of MAXINTERP to the actual limit on the size of
the interpreter name that has been in effect since r142453 (from
2005).
In collaboration with: kib
The idea is to add the KDB and KDB_TRACE options to GENERIC kernels on
stable branches, so that at least minimal information is produced for
non-specific panics like traps on page faults.
The GENERICs in stable branches seem to already include the STACK option.
Reviewed by: attilio
MFC after: 2 weeks
is running on "dummy" time counter. But to function properly in one-shot
mode, event timer management code requires working time counter. Slow
moving "dummy" time counter delays first hardclock() call by few seconds
on my systems, even though timer interrupts were correctly kicking kernel.
That causes few seconds delay during boot with one-shot mode enabled.
To break this loop, explicitly call tc_windup() first time during
initialization process to let it switch to some real time counter.
Make acl_is_trivial_np(3) properly recognize the new trivial ACLs. From
the user's point of view, that means "ls -l" no longer shows plus signs
for all the files when running ZFS v28.
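A userland usage sketch; acl_get_file(3), acl_is_trivial_np(3), and
acl_free(3) are the real interfaces, while the path is just an example:

#include <sys/types.h>
#include <sys/acl.h>
#include <stdio.h>

int
main(void)
{
        acl_t acl;
        int trivial;

        acl = acl_get_file("/tmp/example", ACL_TYPE_NFS4);
        if (acl == NULL)
                return (1);
        /* This is how ls(1) can decide whether to print the '+'. */
        if (acl_is_trivial_np(acl, &trivial) == 0 && trivial == 0)
                printf("+\n");
        acl_free(acl);
        return (0);
}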
Instead of adding custom checks to wait for DCD on open(), just modify
the termios structure to set CLOCAL. This also means SIGHUP is no longer
generated when losing DCD.
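For illustration, the userland equivalent of setting the flag (the helper
is hypothetical); CLOCAL makes the line behave as if carrier were always
asserted:

#include <termios.h>

static int
set_clocal(int fd)
{
        struct termios t;

        if (tcgetattr(fd, &t) == -1)
                return (-1);
        t.c_cflag |= CLOCAL;    /* ignore modem status lines */
        return (tcsetattr(fd, TCSANOW, &t));
}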
Reviewed by: kib@
MFC after: 1 week
This makes /dev/console more fail-safe and prevents a potential console
lock-up during boot.
Discussed on: stable@
Tested by: koitsu@
MFC after: 1 week
The last case where a socket could be freed implicitly was eliminated: all
references to sockets are explicitly managed by sorele() and the protocols.
As such, garbage collect sotryfree(), and update the sofree() comments to
make the new world order clearer.
MFC after: 3 days
Reported by: Anuranjan Shukla <anshukla at juniper dot net>
This is just a cosmetic change for prettier output.
The 'indent' variable/parameter serves two purposes: it specifies the
whitespace indentation level and also implies the CPU group level/depth.
It would have been better to split those two uses,
but for now this is just a simple change.
MFC after: 1 week
Update the stored timer state before sending an IPI to other CPUs.
Otherwise, other CPUs will try to honor the stale value, programming the
timer for a zero interval. If the timer is fast enough, this caused an
extra interrupt before the timer was correctly reprogrammed by the BSP.
Add a drain function for struct sysctl_req, and use it for a variety
of handlers, some of which had to do awkward things to get a large
enough SBUF_FIXEDLEN buffer.
Note that some sysctl handlers were explicitly outputting a trailing
NUL byte. This behaviour was preserved, though it should not be
necessary.
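A sketch of what a handler can look like with the sbuf_new_for_sysctl()
helper that accompanies this change; the handler name and output are
made up:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/sysctl.h>
#include <sys/sbuf.h>

static int
sysctl_example_handler(SYSCTL_HANDLER_ARGS)
{
        struct sbuf sb;
        int error;

        /* The sbuf drains into the sysctl request as it fills. */
        sbuf_new_for_sysctl(&sb, NULL, 128, req);
        sbuf_printf(&sb, "arbitrarily long output, no FIXEDLEN sizing\n");
        error = sbuf_finish(&sb);
        sbuf_delete(&sb);
        return (error);
}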
Reviewed by: phk (original patch)
Specify a minimal update frequency required to handle wraps of the current
timecounter. Make kern_clocksource.c honor that requirement, scheduling
sleeps on the first CPU for no more than the specified period. Allow other
CPUs to sleep for up to 1/4 second (just in case).
Some architectures do unexpected things in copyout(9), so wiring the user
buffer is not sufficient to perform a copyout(9) while holding a random
mutex.
Requested by: nwhitehorn
If a kobj method doesn't have an explicitly provided default
implementation, then it is auto-assigned kobj_error_method.
kobj_error_method is appropriate only for methods that return an error
code, because it just returns ENXIO.
So, in the case of an unimplemented bus_add_child method, the caller would
get (device_t)ENXIO as a return value, and the mistake would go unnoticed,
because the return value is typically checked for NULL.
Thus, a specialized null_add_child is added. Returning NULL would have
sufficed for correctness, but this type of mistake was deemed rare and
serious enough to call panic instead.
Watch out for this kind of problem with other kobj methods.
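A minimal sketch of the idea; the real null_add_child lives in
subr_bus.c, and the body shown here is an assumption:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>

static device_t
null_add_child(device_t dev, u_int order, const char *name, int unit)
{

        panic("bus_add_child is not implemented");
        /* NOTREACHED */
}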
Suggested by: jhb, imp
MFC after: 2 weeks
The main goal of this is to generate timer interrupts only when there is
some work to do. When the CPU is busy, interrupts are generated at the full
rate of hz + stathz to fulfill the scheduler and timekeeping requirements.
But when the CPU is idle, only the minimal set of interrupts (now down to
8 interrupts per second per CPU) needed to handle scheduled callouts is
executed.
This allows a significant increase in idle CPU sleep time, increasing the
effect of static power-saving technologies. It should also reduce host CPU
load on virtualized systems when the guest system is idle.
There is a set of tunables, also available as writable sysctls, that allow
controlling the desired event timer subsystem behavior (see the usage
sketch after this list):
kern.eventtimer.timer - allows choosing the event timer hardware to use.
On x86 there are up to 4 different kinds of timers. Depending on whether
the chosen timer is per-CPU, the behavior of the other options differs
slightly.
kern.eventtimer.periodic - allows choosing between periodic and one-shot
operation modes. In periodic mode, the current timer hardware is taken as
the only source of time for time events. This mode is quite similar to the
previous kernel behavior. One-shot mode instead uses the currently selected
time counter hardware to schedule all needed events one by one, and programs
the timer to generate an interrupt at exactly the specified time. The
default value depends on the chosen timer's capabilities, but one-shot mode
is preferred, unless the other is forced by the user or hardware.
kern.eventtimer.singlemul - in periodic mode, specifies how many times
higher the timer frequency should be, so that hardclock() and statclock()
events do not strictly alias. The default values are 2 and 4, but they
could be reduced to 1 if extra interrupts are unwanted.
kern.eventtimer.idletick - makes each CPU receive every timer interrupt
independently of whether it is busy or not. By default this option is
disabled. If the chosen timer is per-CPU and runs in periodic mode, this
option has no effect - all interrupts are generated anyway.
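A userland usage sketch for the tunables above via sysctlbyname(3)
(equivalent to using sysctl(8)); error handling is trimmed:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
        int periodic, one = 1;
        size_t len = sizeof(periodic);

        /* Read the current mode: 0 = one-shot, non-zero = periodic. */
        if (sysctlbyname("kern.eventtimer.periodic", &periodic, &len,
            NULL, 0) == -1)
                return (1);
        printf("periodic: %d\n", periodic);
        /* Switch to periodic mode (writable sysctl; needs privilege). */
        return (sysctlbyname("kern.eventtimer.periodic", NULL, NULL,
            &one, sizeof(one)) == -1);
}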
Since this patch modifies cpu_idle() on some platforms, I have also
refactored the x86 one. It now makes use of the MONITOR/MWAIT instructions
(if supported) under high sleep/wakeup rates, as a fast alternative to the
other methods. This allows the SMP scheduler to wake up sleeping CPUs much
faster, without using an IPI, significantly increasing performance on some
highly task-switching loads.
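A conceptual sketch of the MWAIT-based idle path, not the actual x86
cpu_idle() code: the flag word is hypothetical, while cpu_monitor() and
cpu_mwait() are the cpufunc.h wrappers for the instructions:

#include <machine/cpufunc.h>

static volatile int cpu_idle_flag;      /* hypothetical per-CPU word */

static void
idle_mwait_sketch(void)
{

        while (cpu_idle_flag == 0) {
                /* Arm the monitor on the flag word, then re-check it. */
                cpu_monitor((const void *)&cpu_idle_flag, 0, 0);
                if (cpu_idle_flag == 0)
                        cpu_mwait(0, 0);        /* hint 0 requests C1 */
        }
        cpu_idle_flag = 0;
}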
Tested by: many (on i386, amd64, sparc64 and powerpc)
H/W donated by: Gheorghe Ardelean
Sponsored by: iXsystems, Inc.
Fix a race with the syncer vnode when mount and update are executed in
parallel.
Encapsulate syncer vnode deallocation into the helper function
vfs_deallocate_syncvnode(), to avoid externalizing sync_mtx from vfs_subr.c.
Found and reviewed by: jh (previous version of the patch)
Tested by: pho
MFC after: 3 weeks
- Teach SCHED_4BSD to inform cpu_idle() about the high sleep/wakeup rate so
that it can choose the optimized handler; on x86 that is MONITOR/MWAIT.
This will also be needed to bypass the forthcoming idle tick skipping logic,
to avoid spending resources on rescheduling events when doing so won't give
any benefit.
- Teach SCHED_4BSD to wake up idle CPUs without using an IPI. On x86, when
MONITOR/MWAIT is active, this requires just a single memory write (see the
sketch below). This doubles performance on some heavily switching test
loads.
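The wakeup side of the same idea: with MWAIT armed on a per-CPU flag word,
as in the idle sketch above, waking a CPU is a single store and no IPI:

static void
wakeup_idle_cpu_sketch(volatile int *flag)
{

        *flag = 1;      /* the store to the monitored word ends MWAIT */
}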
Replace sbuf_overflowed() with sbuf_error(), which returns any error
code associated with overflow or with the drain function. While this
function is not expected to be used often, it produces more information,
in the form of an errno, than sbuf_overflowed() did.
Change the type of the 'order' parameter of the bus_add_child method from
int to u_int. This reflects the actual type used to store and compare
child device orders. The change is mostly done via a Coccinelle (soon to
be devel/coccinelle) semantic patch.
Verified by LINT+modules kernel builds.
Followup to: r212213
MFC after: 10 days
Add a drain function for struct sysctl_req, and use it for a variety of
handlers, some of which had to do awkward things to get a large enough
FIXEDLEN buffer.
Note that some sysctl handlers were explicitly outputting a trailing NUL
byte. This behaviour was preserved, though it should not be necessary.
Reviewed by: phk
Add drain functionality to sbufs. The drain is a function that is
called when the sbuf internal buffer is filled. For kernel sbufs with a
drain, the internal buffer will never be expanded. For userland sbufs
with a drain, the internal buffer may still be expanded by
sbuf_[v]printf(3).
Sbufs now have three basic uses:
1) static string manipulation. Overflow is marked.
2) dynamic string manipulation. Overflow triggers string growth.
3) drained string manipulation. Overflow triggers draining.
In all cases the manipulation is 'safe' in that overflow is detected and
managed.
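A kernel-side sketch of use 3, assuming the drain API added here; the
drain callback and the message are made up:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/sbuf.h>

static int
console_drain(void *arg, const char *data, int len)
{

        printf("%.*s", len, data);
        return (len);           /* bytes consumed; <= 0 reports an error */
}

static void
emit_report(void)
{
        struct sbuf sb;
        char buf[64];

        sbuf_new(&sb, buf, sizeof(buf), SBUF_FIXEDLEN);
        sbuf_set_drain(&sb, console_drain, NULL);
        /* Overflowing the 64-byte buffer now triggers the drain. */
        sbuf_printf(&sb, "a message longer than the internal buffer ...");
        sbuf_finish(&sb);       /* flushes whatever remains */
        sbuf_delete(&sb);
}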
Reviewed by: phk (the previous version)
Mounting a new file system and updating an existing mount are handled by
the same mount(2) syscall and the same function, but are very different
and share almost no code.
To make it easier to read and analyze, split vfs_domount() into
vfs_domount_first() and vfs_domount_update().
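A simplified sketch of the resulting split; the real functions take more
parameters, and the signatures here are illustrative:

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/mount.h>

static int vfs_domount_first(struct thread *, const char *, char *,
    uint64_t, void *);
static int vfs_domount_update(struct thread *, char *, uint64_t, void *);

static int
vfs_domount(struct thread *td, const char *fstype, char *fspath,
    uint64_t fsflags, void *optlist)
{

        /* MNT_UPDATE selects the update path; everything else mounts. */
        if (fsflags & MNT_UPDATE)
                return (vfs_domount_update(td, fspath, fsflags, optlist));
        return (vfs_domount_first(td, fstype, fspath, fsflags, optlist));
}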
Reviewed by: kib
- Correct the error paths. The system will be useless on devfs_fixup()
failure, so why bother? Maybe for the same reason a dead body is washed and
dressed in a nice suit before it is put into a coffin? Maybe the system's
last will is to panic without any locks held?
Reviewed by: kib
Change the type used to store child device order in struct device from int
to u_int, and also change int -> u_int for the order parameter of
device_add_child_ordered(). There should not be any ABI change, as struct
device is private to subr_bus.c, and the API change should be compatible.
To do: change int -> u_int for the order parameter of the bus_add_child
method and its implementations. That change should also be API-compatible,
but involves a bit more churn.
Suggested by: imp, jhb
MFC after: 1 week
Add the BIO_ORDERED flag for struct bio and update bio clients to use it.
The barrier semantics of bioq_insert_tail() were broken in two ways:
o In bioq_disksort(), an added bio could be inserted at the head of
the queue, even when a barrier was present, if the sort key for
the new entry was less than that of the last queued barrier bio.
o The last_offset used to generate the sort key for newly queued bios
did not stay at the position of the barrier until either the
barrier was de-queued, or a new barrier (which updates last_offset)
was queued. When a barrier is in effect, we know that the disk
will pass through the barrier position just before the
"blocked bios" are released, so using the barrier's offset for
last_offset is the optimal choice.
sys/geom/sched/subr_disk.c:
sys/kern/subr_disk.c:
o Update last_offset in bioq_insert_tail().
o Only update last_offset in bioq_remove() if the removed bio is
at the head of the queue (typically due to a call via
bioq_takefirst()) and no barrier is active.
o In bioq_disksort(), if we have a barrier (insert_point is non-NULL),
set prev to the barrier and cur to its next element. Now that
last_offset is kept at the barrier position, this change isn't
strictly necessary, but since we have to take a decision branch
anyway, it does avoid one no-op loop iteration in the while loop
that immediately follows.
o In bioq_disksort(), bypass the normal sort for bios with the
BIO_ORDERED attribute and instead insert them into the queue
with bioq_insert_tail() (see the sketch below). bioq_insert_tail()
not only gives the desired command order during insertion, but also
provides barrier semantics so that commands disksorted in the future
cannot pass the just-enqueued transaction.
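A simplified sketch of that bypass, condensed from the description above
rather than the verbatim subr_disk.c code:

#include <sys/param.h>
#include <sys/bio.h>

void
bioq_disksort_sketch(struct bio_queue_head *head, struct bio *bp)
{

        if (bp->bio_flags & BIO_ORDERED) {
                /*
                 * Tail insertion both keeps command order and erects a
                 * barrier that later-sorted bios cannot pass.
                 */
                bioq_insert_tail(head, bp);
                return;
        }
        /* ... the normal one-way elevator sort follows here ... */
}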
sys/sys/bio.h:
Add BIO_ORDERED as bit 4 of the bio_flags field in struct bio.
sys/cam/ata/ata_da.c:
sys/cam/scsi/scsi_da.c:
Use an ordered command for SCSI/ATA-NCQ commands issued in
response to bios with the BIO_ORDERED flag set.
sys/cam/scsi/scsi_da.c:
Use an ordered tag when issuing a synchronize cache command.
Wrap some lines to 80 columns.
Wrap some lines to 80 columns.
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:
sys/geom/geom_io.c:
Mark bios with the BIO_FLUSH command as BIO_ORDERED.
Sponsored by: Spectra Logic Corporation
MFC after: 1 month