Commit Graph

41 Commits

Author SHA1 Message Date
iwasaki
57bb0b6ca4 Resolve conflicts arising from the ACPI CA 20020611 import. 2002-07-09 17:54:02 +00:00
iwasaki
e6a8bab402 Fix wrong use of ACPI_NO_UNIT_LIMIT, which is for as_maxunits, not as_units. 2002-07-06 13:59:59 +00:00
peter
51f5a10d66 Brutally deal with __func__ being 'const char *' on gcc-3.1. 2002-05-19 06:16:47 +00:00
jhb
db9aa81e23 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
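
For illustration, a minimal sketch of the new calling convention (the lock and its names are hypothetical; mtx_init() now takes a name, a type, and option flags):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx foo_mtx;          /* hypothetical driver lock */

    static void
    foo_lock_init(void)
    {
            /* Most callers simply pass NULL as the lock type name. */
            mtx_init(&foo_mtx, "foo softc", NULL, MTX_DEF);

            /* A network driver lock would instead share a type name:
             * mtx_init(&foo_mtx, "foo0", MTX_NETWORK_LOCK, MTX_DEF); */
    }
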
peter
83444279ce Fix a gcc-3.1+ warning.
warning: deprecated use of label at end of compound statement

i.e., you can no longer do this:
switch(foo) {
....

default:
}
2002-03-19 11:02:06 +00:00
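
The usual fix, for reference, is to place a statement after the final label; a sketch:

    switch (foo) {
    ....

    default:
            break;      /* any statement after the label satisfies gcc-3.1 */
    }
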
peter
4bd977bea6 Do not do string concatenation with __func__ (which is not a string) 2002-03-12 00:12:59 +00:00
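
Since __func__ is a predefined identifier of type const char[] rather than a string literal, it cannot be pasted at compile time; it has to be passed as a runtime argument. A sketch (the message text here is made up):

    /* No longer compiles: printf(__func__ ": bad value\n"); */
    printf("%s: bad value\n", __func__);
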
peter
e19329dfa6 Recent acpica imports have changed the lengths from UINT32 to ACPI_SIZE,
which is 64 bits wide on ia64.  Fix it.
2002-03-12 00:10:40 +00:00
msmith
9c77442559 AcpiOsPrintf and AcpiOsVprintf now return void. 2002-02-23 05:32:51 +00:00
msmith
298fdbbdac AcpiOsCallocate is no longer required. 2002-02-23 05:32:10 +00:00
msmith
2fc99bc1ee Match namespace cleanup changes in ACPI CA 20020217 update. 2002-02-23 05:31:38 +00:00
msmith
40b38b35ec find_devclass -> devclass_find. 2002-01-08 19:14:59 +00:00
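
Callers now spell the lookup the other way around, e.g. (illustrative):

    devclass_t dc = devclass_find("acpi");      /* was: find_devclass("acpi") */
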
msmith
a855b09ad9 Staticise devclasses and some unnecessarily global variables. 2002-01-08 06:46:01 +00:00
jhb
1ce407b675 Change the preemption code for software interrupt thread schedules and
mutex releases to not require flags for the cases when preemption is
not allowed:

The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher priority thread on mutex release and swi schedule,
respectively, when that switch is not safe.  Now that the critical section
API maintains a per-thread nesting count, the kernel can easily check
whether or not it should switch without relying on flags from the
programmer.  This fixes a few bugs in that all current callers of
swi_sched() used SWI_NOSWITCH, when in fact only the ones called from
fast interrupt handlers and the swi_sched() of softclock needed this flag.
Note that to ensure that swi_sched() calls in clock and fast interrupt
handlers do not switch, these handlers have to be explicitly wrapped
in critical_enter/exit pairs.  Presently, just wrapping the handlers is
sufficient, but in the future with the fully preemptive kernel, the
interrupt must be EOI'd before critical_exit() is called.  (critical_exit()
can switch due to a deferred preemption in a fully preemptive kernel.)

I've tested the changes to the interrupt code on i386 and alpha.  I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine.  PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken.  Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.

Reviewed by:	peter
Tested on:	i386, alpha
2002-01-05 08:47:13 +00:00
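
A minimal sketch of the wrapping described above (the handler and cookie names are hypothetical):

    static void *foo_swi_cookie;        /* hypothetical, obtained from swi_add() */

    static void
    foo_fast_intr(void *arg)
    {
            critical_enter();               /* defer any preemption... */
            swi_sched(foo_swi_cookie, 0);   /* ...so no SWI_NOSWITCH flag is needed */
            critical_exit();
    }
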
iwasaki
4c7abcd337 Add OS layer ACPI mutex and threading support.
- Temporarily fix a bug in the Intel ACPI CA core code.
 - Add OS layer ACPI mutex support.  This can be disabled by
   specifying option ACPI_NO_SEMAPHORES.
 - Add ACPI threading support.  We now have a dedicated taskqueue for
   ACPI tasks, and more ACPI task threads can be created by specifying option
   ACPI_MAX_THREADS.
 - Change acpi_EvaluateIntoBuffer() behavior slightly to reuse the given
   caller's buffer unless AE_BUFFER_OVERFLOW occurs.  The CM battery
   evaluations were also changed to use acpi_EvaluateIntoBuffer().
 - Add new utility function acpi_ConvertBufferToInteger().
 - Add simple locking for CM battery and temperature updating.
 - Fix a minor problem on EC locking.
 - Make the thermal zone polling rate changeable.
 - Make minor changes to AcpiOsSignal(); in the ACPI_SIGNAL_FATAL case,
   entering the debugger makes the problem easier to investigate than a
   panic does.
2001-12-22 16:05:41 +00:00
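
A hedged sketch of the buffer-reuse behavior described for acpi_EvaluateIntoBuffer() (the handle, method name, and storage are illustrative; this shows the underlying ACPICA pattern rather than the driver's exact code):

    static char storage[64];        /* the caller's own buffer */
    ACPI_BUFFER buf;
    ACPI_STATUS status;

    buf.Length = sizeof(storage);   /* try the caller's buffer first */
    buf.Pointer = storage;
    status = AcpiEvaluateObject(handle, "_BIF", NULL, &buf);
    if (status == AE_BUFFER_OVERFLOW) {
            /* Only now allocate; ACPICA has set buf.Length to the needed size. */
    }
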
msmith
adb7d36871 Synch with minor changes in the ACPI CA 20011120 snapshot. 2001-11-28 04:36:29 +00:00
jhb
39b22ee165 - Change the taskqueue locking to protect the necessary parts of a task
while it is on a queue with the queue lock and remove the per-task locks.
- Remove TASK_DESTROY now that it is no longer needed.
- Go back to inlining TASK_INIT now that it is short again.

Inspired by:	dfr
2001-10-26 18:46:48 +00:00
jhb
e1bba71fc9 Add locking to taskqueues. There is one mutex per task, one mutex per
queue, and a mutex to protect the global list of taskqueues.  The only
visible change is that a TASK_DESTROY() macro has been added to mirror
the TASK_INIT() macro to destroy a task before it is free'd.

Submitted by:	Andrew Reiter <awr@watson.org>
2001-10-26 06:32:21 +00:00
jhb
0857f7bbf9 Use TASK_INIT to initialize a taskqueue task instead of violating the
abstraction.

Submitted by:	Andrew Reiter <arr@watson.org>
2001-10-25 19:56:02 +00:00
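
For reference, the abstraction-respecting form looks roughly like this (the task, function, and context are hypothetical):

    static struct task foo_task;

    static void
    foo_task_fn(void *context, int pending)
    {
            /* deferred work runs here */
    }

    /* at initialization time: */
    TASK_INIT(&foo_task, 0, foo_task_fn, NULL);
    taskqueue_enqueue(taskqueue_swi, &foo_task);
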
dfr
9c65ee6d6b Add busspace hacks for ia64. 2001-10-04 08:33:16 +00:00
iwasaki
936a16ea86 Don't call tsleep() from AcpiOsStall(); always call DELAY() instead.
Process switching while in AcpiOsStall() caused a fatal trap 12 at
sleep/wakeup on some machines.
2001-09-08 17:03:26 +00:00
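
The shape of the fix, roughly (a sketch of the OSD routine):

    void
    AcpiOsStall(UINT32 Microseconds)
    {
            /* Busy-wait; DELAY() never sleeps, so this is safe in any context. */
            DELAY(Microseconds);
    }
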
msmith
b6bb56ed4e Move OsdEnvironment.c into MD code; searching for the ACPI tables is not
portable.
2001-09-07 02:55:00 +00:00
iwasaki
1eb9c3ef31 Better checking for duplicate interrupt handler installation.
Reviewed by:	msmith
2001-07-25 16:13:30 +00:00
msmith
78bfafdf45 Update the OSD module to match the ACPI CA 20010717 import.
Submitted by:	"Grover, Andrew" <andrew.grover@intel.com> (OsdHardware.c)
2001-07-21 04:10:01 +00:00
msmith
719efadfd6 We haven't used this for ages, and we're not going to either. 2001-07-20 09:44:55 +00:00
msmith
6f330cc079 Get the ACPI softc before we potentially dereference it. 2001-07-07 10:18:10 +00:00
msmith
7bf89cea2b Wrap the interrupt handler so that we can get the ACPI lock. 2001-06-29 21:21:08 +00:00
msmith
789b531931 What I get for "fixing" at the last minute. Correct a mis-merge of takawata's
timeout fix and put proc.h into the right file.

Submitted by:	nnd@mail.nsk.ru
2001-05-30 05:34:10 +00:00
msmith
7996f19f43 - Updates for new constant naming in the ACPI CA 20010518 update.
- Use __func__ instead of __FUNCTION__.
 - Support power-off to S3 or S5 (takawata)
 - Enable ACPI debugging earlier (with a sysinit)
 - Fix a deadlock in the EC code (takawata)
 - Improve arithmetic and reduce the risk of spurious wakeup in
   AcpiOsSleep.
 - Add AcpiOsGetThreadId.
 - Simplify mutex code (still disabled).
2001-05-29 20:13:42 +00:00
jhb
b47bfbe544 Catch up to header include changes:
- <sys/mutex.h> now requires <sys/systm.h>
- <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
2001-03-28 09:17:56 +00:00
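
In practice this means affected files include headers in this order (a sketch):

    #include <sys/param.h>
    #include <sys/systm.h>      /* now required by <sys/mutex.h> */
    #include <sys/lock.h>       /* now required by <sys/mutex.h> and <sys/sx.h> */
    #include <sys/mutex.h>
    #include <sys/sx.h>
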
jake
55d5108ac5 Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
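
A hedged sketch of the indexing idea (the constants here are illustrative; the real definitions live in runq.h):

    #define RQ_NQS  64                      /* one shared array of 64 queues */
    #define RQ_PPQ  (256 / RQ_NQS)          /* priority levels per queue */

    static __inline int
    runq_index(int pri)
    {
            return (pri / RQ_PPQ);          /* all scheduling classes share this */
    }
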
bmilekic
f364d4ac36 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
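
The old-to-new mapping in code form, straight from the description above:

    mtx_enter(&Giant, MTX_DEF);             /* becomes */  mtx_lock(&Giant);
    mtx_exit(&Giant, MTX_DEF);              /* becomes */  mtx_unlock(&Giant);
    mtx_enter(&sched_lock, MTX_SPIN);       /* becomes */  mtx_lock_spin(&sched_lock);
    mtx_exit(&sched_lock, MTX_SPIN);        /* becomes */  mtx_unlock_spin(&sched_lock);

    /* The surviving flags pass only through the explicit wrappers: */
    mtx_lock_flags(&Giant, MTX_QUIET);
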
msmith
86f58a13be Add some debugging.
Turn off semaphores.  Nobody else implements them, and there is lots of
AML out there which does totally absurd things with them, meaning that
if we try to do the right thing we are guaranteed to fail.
2001-01-31 09:35:50 +00:00
msmith
3f669a5477 Add some debugging statements. 2001-01-31 09:34:54 +00:00
peter
ac8d71e326 In answer to the comment: /* XXX is it OK to block here? */, the answer
is definitely NO! as we are in interrupt context and malloc() does a
KASSERT() to be sure.
2001-01-23 09:43:23 +00:00
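
The safe pattern in interrupt context, for reference (a sketch; the malloc type is illustrative):

    void *p = malloc(len, M_TEMP, M_NOWAIT);        /* never M_WAITOK here */
    if (p == NULL)
            return;                                 /* must handle failure */
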
msmith
71aff7f1c7 Plug a memory leak in AcpiOsDeleteSemaphore where the mutex is not properly
destroyed.

Submitted by:	bmilekic
2001-01-22 05:33:36 +00:00
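
The shape of the fix (the struct and malloc-type names are assumptions, not necessarily the tree's exact identifiers): destroy the embedded mutex before freeing the semaphore:

    ACPI_STATUS
    AcpiOsDeleteSemaphore(ACPI_HANDLE Handle)
    {
            struct acpi_semaphore *as = (struct acpi_semaphore *)Handle;

            mtx_destroy(&as->as_mtx);       /* this was previously skipped */
            free(as, M_ACPISEM);
            return (AE_OK);
    }
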
takawata
4a891e0ef1 Re-enable OSD_PRIORITY_GPE now that the 20001215 import has been committed. 2000-12-21 07:47:43 +00:00
iwasaki
05d0cf88f1 Disable my previously committed code for the moment.
Note to myself: this needs to be enabled again when a newer version of
ACPI is imported.
2000-12-20 20:22:47 +00:00
iwasaki
e18ed82a61 Add task priority definition for OSD_PRIORITY_GPE in AcpiOsQueueForExecution().
This is needed for the next ACPICA import.
2000-12-20 19:15:38 +00:00
msmith
5c07e9c2eb Staticise some malloc pools
Submitted by:	phk
2000-12-08 20:48:33 +00:00
msmith
c4c211fc90 AcpiOsMem primitives as required by the new ACPI CA snapshot 2000-12-01 10:19:28 +00:00
msmith
64e150eaa4 FreeBSD-specific OSD (operating system dependent) modules for the Intel
ACPICA code.
2000-10-28 06:56:15 +00:00