These changes prevent sysctl(8) from returning proper output, such as:
1) no output from sysctl(8)
2) erroneously returning ENOMEM with tools like truss(1) or uname(1):
    truss: can not get etype: Cannot allocate memory
The CTLFLAG_TUN flag now makes the kernel automatically check whether
there is an environment variable which shall initialize the SYSCTL
during early boot. This works for all SYSCTL types, both statically and
dynamically created ones, except for the SYSCTL NODE type and SYSCTLs
which belong to VNETs. A new flag, CTLFLAG_NOFETCH, has been added for
use when a tunable sysctl has a custom initialisation function,
allowing the sysctl to still be marked as a tunable. The kernel SYSCTL
API is mostly the same, with a few exceptions for some special
operations like iterating the children of a static/extern SYSCTL node.
This operation should probably be factored out into a common macro,
since some device drivers use it. The reason for changing the SYSCTL
API was the need for a SYSCTL parent OID pointer, and not only the
SYSCTL parent OID list pointer, in order to quickly generate the sysctl
path. The motivation behind this patch is to avoid parameter loading
kludges inside the OFED driver subsystem. Instead of adding special
code to the OFED driver subsystem to post-load tunables into
dynamically created sysctls, we generalize this in the kernel.
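For illustration, a minimal sketch of declaring tunable sysctls under
the extended flags, assuming the combined CTLFLAG_RWTUN convenience
flag; all variable and OID names below are hypothetical:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int example_budget = 128;

    /*
     * CTLFLAG_RWTUN (CTLFLAG_RW | CTLFLAG_TUN) marks the OID as a
     * tunable: during early boot the kernel fetches
     * "kern.example_budget" from the kernel environment (e.g. set in
     * /boot/loader.conf) to initialize the variable.
     */
    SYSCTL_INT(_kern, OID_AUTO, example_budget, CTLFLAG_RWTUN,
        &example_budget, 0, "Hypothetical work budget");

    /*
     * CTLFLAG_NOFETCH keeps the tunable marking but skips the
     * automatic environment fetch, for code that runs a custom
     * initialisation function of its own.
     */
    static int example_mode = 0;
    SYSCTL_INT(_kern, OID_AUTO, example_mode,
        CTLFLAG_RWTUN | CTLFLAG_NOFETCH, &example_mode, 0,
        "Hypothetical mode, initialised by custom code");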
Other changes:
- Corrected a possibly incorrect sysctl name from "hw.cbb.intr_mask"
to "hw.pcic.intr_mask".
- Removed redundant TUNABLE statements throughout the kernel.
- Some minor code rewrites in connection with removing unneeded
TUNABLE statements.
- Added a missing SYSCTL_DECL().
- Wrapped two very long lines.
- Avoid malloc()/free() inside sysctl string handling, in case it is
called to initialize a sysctl from a tunable, because malloc()/free()
are not ready when sysctls from the sysctl dataset are registered.
- Bumped FreeBSD version to indicate SYSCTL API change.
MFC after: 2 weeks
Sponsored by: Mellanox Technologies
- Preallocate some memory for ACPI tasks early enough. We cannot use
malloc(9) any more because a spin mutex may be held here. The reserved
memory can be tuned via the debug.acpi.max_tasks tunable or
ACPI_MAX_TASKS in the kernel configuration. The default is 32 tasks.
- Implement a custom taskqueue_fast to wrap the new memory allocation;
a sketch of the idea follows below. This implementation is not the
fastest in the world, but we are being conservative here.
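As a rough sketch of the idea only (the names and layout here are
hypothetical, not the committed code), a fixed pool reserved at boot
can be handed out with atomic operations, so no sleepable allocator is
needed even while a spin mutex is held:

    #include <sys/param.h>
    #include <sys/taskqueue.h>
    #include <machine/atomic.h>

    #define ACPI_MAX_TASKS 32 /* default; cf. debug.acpi.max_tasks */

    struct acpi_task_entry {
        struct task at_task;
        u_int       at_used; /* 0 = free, 1 = allocated */
    };

    static struct acpi_task_entry acpi_task_pool[ACPI_MAX_TASKS];

    static struct acpi_task_entry *
    acpi_task_alloc(void)
    {
        int i;

        /* Lock-free claim of a free slot; safe with a spin mutex
         * held, since nothing here can sleep. */
        for (i = 0; i < ACPI_MAX_TASKS; i++)
            if (atomic_cmpset_int(&acpi_task_pool[i].at_used, 0, 1))
                return (&acpi_task_pool[i]);
        return (NULL); /* pool exhausted */
    }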
All ACPI threads share one kernel process and thus one process ID, so
the process ID cannot be used as the ACPI thread ID. Concurrent
requests with equal thread IDs broke ACPI mutex operation, causing
unpredictable errors, including the AE_AML_MUTEX_NOT_ACQUIRED errors
that I have seen. Use the kernel thread ID instead of the process ID
for the ACPI thread ID.
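A minimal sketch of the corresponding OSL hook, assuming ACPICA's
AcpiOsGetThreadId() interface (ACPI_THREAD_ID comes from the ACPICA
headers) and FreeBSD's curthread:

    ACPI_THREAD_ID
    AcpiOsGetThreadId(void)
    {
        /* td_tid is unique per thread; the PID is shared by all
         * ACPI threads, which run in a single kernel process. */
        return ((ACPI_THREAD_ID)curthread->td_tid);
    }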
zone code. The GPE handler method (i.e., _L00) generates various Notify
events that need to be run to completion before the GPE is re-enabled.
In ACPI-CA, we now queue an asynchronous callback at the same priority
as a Notify so that it will only run after all Notify handlers have
completed; the callback re-enables the GPE afterwards. We also changed
the priority of Notifies to be the same as GPEs, since another GPE
could arrive before the Notifies have completed and we don't want it to
get queued ahead of the rest.
The ACPI-CA change was submitted by Alexey Starikovskiy (SUSE) and will
appear in a later release. Special thanks to him for helping track this
bug down.
MFC after: 1 week
Tested by: jhb, Yousif Hassan <yousif / alumni.jmu.edu>
triggers a KASSERT) or local variables. In the case of kern_ndis, the
tsleep() actually used a common sleep address (curproc), making it
susceptible to a premature wakeup.
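For example, a sketch of sleeping on a driver-private address instead
of a shared one; "sc->ndis_done" is a hypothetical softc field:

    /* Sleeper side: wait on an address unique to this softc. */
    error = tsleep(&sc->ndis_done, PWAIT, "ndiswt", 5 * hz);

    /* Waker side: only threads sleeping on this exact address
     * are woken, so unrelated sleepers cannot be hit. */
    sc->ndis_done = 1;
    wakeup(&sc->ndis_done);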
    taskqueue_start_threads(struct taskqueue **, int count, int pri,
        const char *name, ...);
This allows the creation of 1 or more threads that will service a single
taskqueue. Also rework the taskqueue_create() API to remove the API change
that was introduced a while back. Creating a taskqueue doesn't rely on
the presence of a process structure, and the proc mechanics are much better
encapsulated in taskqueue_start_threads(). Also clean up the
taskqueue_terminate() and taskqueue_free() functions to safely drain
pending tasks and remove all associated threads.
The TASKQUEUE_DEFINE and TASKQUEUE_DEFINE_THREAD macros have been changed
to use the new API, but drivers compiled against the old definitions will
still work. Thus, recompiling drivers is not a strict requirement.
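A minimal usage sketch of the reworked API; the "foo" names are
hypothetical:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>
    #include <sys/taskqueue.h>

    static struct taskqueue *foo_tq;

    static void
    foo_init(void *dummy)
    {
        /* Create the queue; no process or thread is created here. */
        foo_tq = taskqueue_create("foo_tq", M_WAITOK,
            taskqueue_thread_enqueue, &foo_tq);

        /* Start two threads at priority PWAIT to service the queue. */
        taskqueue_start_threads(&foo_tq, 2, PWAIT, "foo taskq");
    }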
of swi. This allows us to use the taskqueue_thread_* functions instead of
rolling our own. It also avoids a double trip through the queue.
Submitted by: njl
Reviewed by: sam
Add a tunable to set the number of task threads to start on boot. Go
back to a default of 3 threads to work around lost battery state
problems. Users that need a setting of 1 can set it via the tunable. I
am investigating the underlying issues, and this tunable can be removed
once they are solved.
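The hook is the usual one-liner; a sketch, assuming the tunable is
named debug.acpi.max_threads and wired to a variable like this:

    static int acpi_max_threads = 3; /* default of 3 task threads */
    TUNABLE_INT("debug.acpi.max_threads", &acpi_max_threads);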
MFC after: 2 days
Back out the local fix for AcpiEnterSleepState() calling a long
AcpiOsStall() with interrupts disabled. This fix will instead be added
to ACPI-CA.
In that case use proc0's pid to return the thread ID.
- For 4-stable, use the generic swi taskqueue for ACPI events rather than
implementing our own.
Sponsored by: The Weather Channel
The default kernel stack size doesn't give some kernel threads enough
stack to do much before blowing away the pcb. This adds MI and MD code
to allow the allocation of an alternate kstack whose size can be
specified when calling kthread_create(). Passing the value 0 prevents
the alternate kstack from being created. Note that the ia64 MD code is
missing for now, and PowerPC was only partially written due to the
pmap.c being incomplete there.
Though this patch does not modify anything to make use of the alternate
kstack, acpi and usb are good candidates.
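A sketch, assuming the post-change kthread_create() signature with the
added page-count argument; "acpi_task_loop" and the 4-page size are
hypothetical:

    static struct proc *acpi_kp;

    static void
    start_acpi_task_thread(void)
    {
        int error;

        /* Request a 4-page alternate kstack; 0 keeps the default. */
        error = kthread_create(acpi_task_loop, NULL, &acpi_kp,
            RFHIGHPID, 4, "acpi_task");
        if (error != 0)
            printf("kthread_create failed: %d\n", error);
    }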
Reviewed by: jake, peter, jhb
mtx_init() now takes an extra argument: the generic lock type for use
with witness. In most cases NULL is passed, but in some cases such as
network driver locks (which use the MTX_NETWORK_LOCK macro) and UMA
zone locks, a name is used.
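A usage sketch of the extended mtx_init(); the lock and its name are
hypothetical:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx foo_mtx;

    static void
    foo_lock_init(void)
    {
        /* Arguments: lock, name, witness type (usually NULL), flags.
         * A network driver would pass MTX_NETWORK_LOCK as the type. */
        mtx_init(&foo_mtx, "foo softc lock", NULL, MTX_DEF);
    }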
Tested on: i386, alpha, sparc64
Change swi scheduling and mutex releases to not require flags for the
cases when preemption is not allowed:
The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher priority thread on mutex release and swi
schedule, respectively, when that switch is not safe. Now that the
critical section API maintains a per-thread nesting count, the kernel
can easily check whether or not it should switch without relying on
flags from the programmer. This fixes a few bugs, in that all current
callers of swi_sched() used SWI_NOSWITCH, when in fact only the ones
called from fast interrupt handlers and the swi_sched() of softclock
needed this flag.
Note that to ensure that the swi_sched() calls in clock and fast
interrupt handlers do not switch, these handlers have to be explicitly
wrapped in critical_enter()/critical_exit() pairs. Presently, just
wrapping the handlers is sufficient, but in the future, with a fully
preemptive kernel, the interrupt must be EOI'd before critical_exit()
is called. (critical_exit() can switch due to a deferred preemption in
a fully preemptive kernel.)
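A sketch of the wrapping; "fast_ih_cookie" is a hypothetical cookie
obtained from swi_add():

    static void *fast_ih_cookie; /* from swi_add(), hypothetical */

    static void
    foo_fast_intr(void *arg)
    {
        /* ... acknowledge the hardware ... */

        /* Defer any switch until the handler has finished. */
        critical_enter();
        swi_sched(fast_ih_cookie, 0);
        critical_exit();
    }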
I've tested the changes to the interrupt code on i386 and alpha. I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine. PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken. Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.
Reviewed by: peter
Tested on: i386, alpha
- Temporarily fix a bug in the Intel ACPI-CA core code.
- Add OS layer ACPI mutex support. This can be disabled by specifying
option ACPI_NO_SEMAPHORES.
- Add ACPI threading support. We now have a dedicated taskqueue for
ACPI tasks, and more ACPI task threads can be created by specifying
option ACPI_MAX_THREADS.
- Change acpi_EvaluateIntoBuffer() behavior slightly to reuse the given
caller's buffer unless AE_BUFFER_OVERFLOW occurs. The CM battery
evaluations were also changed to use acpi_EvaluateIntoBuffer().
- Add a new utility function, acpi_ConvertBufferToInteger().
- Add simple locking for CM battery and temperature updating.
- Fix a minor problem with EC locking.
- Make the thermal zone polling rate changeable.
- Change minor things in AcpiOsSignal(); in the ACPI_SIGNAL_FATAL case,
entering the debugger makes it easier to investigate the problem than a
panic.
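A sketch of the fatal path, assuming ACPICA's AcpiOsSignal() interface
and its ACPI_SIGNAL_FATAL_INFO argument; details differ from the
committed code:

    ACPI_STATUS
    AcpiOsSignal(UINT32 Function, void *Info)
    {
        ACPI_SIGNAL_FATAL_INFO *fatal;

        switch (Function) {
        case ACPI_SIGNAL_FATAL:
            fatal = (ACPI_SIGNAL_FATAL_INFO *)Info;
            printf("ACPI fatal: type 0x%x code 0x%x arg 0x%x\n",
                fatal->Type, fatal->Code, fatal->Argument);
            /* Enter the debugger instead of calling panic(). */
            Debugger("AcpiOsSignal(ACPI_SIGNAL_FATAL)");
            break;
        default:
            return (AE_BAD_PARAMETER);
        }
        return (AE_OK);
    }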
- Protect a task while it is on a queue with the queue lock, and remove
the per-task locks.
- Remove TASK_DESTROY now that it is no longer needed.
- Go back to inlining TASK_INIT now that it is short again.
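A minimal usage sketch; the "foo" names are hypothetical:

    #include <sys/param.h>
    #include <sys/taskqueue.h>

    static struct task foo_task;

    static void
    foo_task_fn(void *context, int pending)
    {
        /* ... do the deferred work ... */
    }

    static void
    foo_defer(void)
    {
        /* TASK_INIT(task, priority, func, context) */
        TASK_INIT(&foo_task, 0, foo_task_fn, NULL);
        taskqueue_enqueue(taskqueue_swi, &foo_task);
    }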
Inspired by: dfr
Add a mutex to protect each task queue, and a mutex to protect the
global list of taskqueues. The only visible change is that a
TASK_DESTROY() macro has been added to mirror the TASK_INIT() macro, to
destroy a task before it is freed.
Submitted by: Andrew Reiter <awr@watson.org>