/*-
 * ichsmb.c
 *
 * Author: Archie Cobbs <archie@freebsd.org>
 * Copyright (c) 2000 Whistle Communications, Inc.
 * All rights reserved.
 *
 * Subject to the following obligations and disclaimer of warranty, use and
 * redistribution of this software, in source or object code forms, with or
 * without modifications are expressly permitted by Whistle Communications;
 * provided, however, that:
 * 1. Any and all reproductions of the source or object code must include the
 *    copyright notice above and the following disclaimer of warranties; and
 * 2. No rights are granted, in any manner or form, to use Whistle
 *    Communications, Inc. trademarks, including the mark "WHISTLE
 *    COMMUNICATIONS" on advertising, endorsements, or otherwise except as
 *    such appears in the above copyright notice or in the software.
 *
 * THIS SOFTWARE IS BEING PROVIDED BY WHISTLE COMMUNICATIONS "AS IS", AND
 * TO THE MAXIMUM EXTENT PERMITTED BY LAW, WHISTLE COMMUNICATIONS MAKES NO
 * REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, REGARDING THIS SOFTWARE,
 * INCLUDING WITHOUT LIMITATION, ANY AND ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
 * WHISTLE COMMUNICATIONS DOES NOT WARRANT, GUARANTEE, OR MAKE ANY
 * REPRESENTATIONS REGARDING THE USE OF, OR THE RESULTS OF THE USE OF THIS
 * SOFTWARE IN TERMS OF ITS CORRECTNESS, ACCURACY, RELIABILITY OR OTHERWISE.
 * IN NO EVENT SHALL WHISTLE COMMUNICATIONS BE LIABLE FOR ANY DAMAGES
 * RESULTING FROM OR ARISING OUT OF ANY USE OF THIS SOFTWARE, INCLUDING
 * WITHOUT LIMITATION, ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
 * PUNITIVE, OR CONSEQUENTIAL DAMAGES, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES, LOSS OF USE, DATA OR PROFITS, HOWEVER CAUSED AND UNDER ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 * THIS SOFTWARE, EVEN IF WHISTLE COMMUNICATIONS IS ADVISED OF THE POSSIBILITY
 * OF SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

/*
 * Support for the SMBus controller logical device which is part of the
 * Intel 81801AA (ICH) and 81801AB (ICH0) I/O controller hub chips.
 *
 * This driver assumes that the generic SMBus code will ensure that
 * at most one process at a time calls into the SMBus methods below.
*/
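
/*
 * Illustrative sketch (not part of this driver): a child of smbus(4),
 * such as smb(4), typically brackets each transfer roughly like this,
 * which is how the "one caller at a time" assumption above is met
 * (variable names here are hypothetical):
 *
 *	(void)smbus_request_bus(smbdev, dev, SMB_WAIT);
 *	error = smbus_readb(smbdev, slave, cmd, &byte);
 *	smbus_release_bus(smbdev, dev);
 */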

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/errno.h>
#include <sys/lock.h>
#include <sys/module.h>
#include <sys/mutex.h>
#include <sys/syslog.h>
#include <sys/bus.h>

#include <machine/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>

#include <dev/smbus/smbconf.h>

#include <dev/ichsmb/ichsmb_var.h>
#include <dev/ichsmb/ichsmb_reg.h>

/*
 * Enable debugging by defining ICHSMB_DEBUG to a non-zero value.
 */
#define ICHSMB_DEBUG	0
#if ICHSMB_DEBUG != 0 && defined(__CC_SUPPORTS___FUNC__)
#define DBG(fmt, args...)	\
	do { log(LOG_DEBUG, "%s: " fmt, __func__ , ## args); } while (0)
#else
#define DBG(fmt, args...)	do { } while (0)
#endif
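
/*
 * Illustrative example: with ICHSMB_DEBUG defined to a non-zero value,
 * a call such as DBG("slave=0x%02x\n", slave) produces a log(9) line of
 * the form "ichsmb_readb: slave=0x2d" at LOG_DEBUG priority; the function
 * name is supplied by __func__ in the macro above.
 */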

/*
 * Our child device driver name
 */
#define DRIVER_SMBUS	"smbus"

/*
 * Internal functions
 */
static int ichsmb_wait(sc_p sc);

/********************************************************************
		BUS-INDEPENDENT BUS METHODS
 ********************************************************************/

/*
 * Handle probe-time duties that are independent of the bus
 * our device lives on.
 */
int
ichsmb_probe(device_t dev)
{
	return (BUS_PROBE_DEFAULT);
}

/*
 * Handle attach-time duties that are independent of the bus
 * our device lives on.
 */
int
ichsmb_attach(device_t dev)
{
	const sc_p sc = device_get_softc(dev);
	int error;

	/* Add child: an instance of the "smbus" device */
	if ((sc->smb = device_add_child(dev, DRIVER_SMBUS, -1)) == NULL) {
		log(LOG_ERR, "%s: no \"%s\" child found\n",
		    device_get_nameunit(dev), DRIVER_SMBUS);
		return (ENXIO);
	}

	/* Clear interrupt conditions */
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_STA, 0xff);

	/* Add "smbus" child */
	if ((error = bus_generic_attach(dev)) != 0) {
		log(LOG_ERR, "%s: failed to attach child: %d\n",
		    device_get_nameunit(dev), error);
		return (ENXIO);
	}

	/* Create mutex */
	mtx_init(&sc->mutex, device_get_nameunit(dev), "ichsmb", MTX_DEF);
	return (0);
}

/********************************************************************
		SMBUS METHODS
********************************************************************/
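
/*
 * The methods below share one pattern (a sketch derived from the code,
 * not from the ICH datasheet): take sc->mutex, record the command type
 * in sc->ich_cmd, program ICH_XMIT_SLVA with the slave address and
 * read/write bit (plus ICH_HST_CMD/ICH_D0/ICH_D1 as needed), start the
 * transaction by writing ICH_HST_CNT_START | ICH_HST_CNT_INTREN to
 * ICH_HST_CNT, wait for completion in ichsmb_wait(), and drop the mutex.
 */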

int
ichsmb_callback(device_t dev, int index, void *data)
{
	int smb_error = 0;

	DBG("index=%d how=%d\n", index, data ? *(int *)data : -1);
	switch (index) {
	case SMB_REQUEST_BUS:
		break;
	case SMB_RELEASE_BUS:
		break;
	default:
		smb_error = SMB_EABORT;	/* XXX */
		break;
	}
	DBG("smb_error=%d\n", smb_error);
	return (smb_error);
}

int
ichsmb_quick(device_t dev, u_char slave, int how)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x how=%d\n", slave, how);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	switch (how) {
	case SMB_QREAD:
	case SMB_QWRITE:
		mtx_lock(&sc->mutex);
		sc->ich_cmd = ICH_HST_CNT_SMB_CMD_QUICK;
		bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
		    (slave << 1) | (how == SMB_QREAD ?
		    ICH_XMIT_SLVA_READ : ICH_XMIT_SLVA_WRITE));
		bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
		    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
		smb_error = ichsmb_wait(sc);
		mtx_unlock(&sc->mutex);
		break;
	default:
		smb_error = SMB_ENOTSUPP;
	}
	DBG("smb_error=%d\n", smb_error);
	return (smb_error);
}
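
/*
 * Note: an SMBus "quick" command transfers no command or data bytes;
 * only the slave address and the read/write bit go on the wire, which
 * is why the case above programs ICH_XMIT_SLVA and nothing else before
 * starting the transaction.
 */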

int
ichsmb_sendb(device_t dev, u_char slave, char byte)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x byte=0x%02x\n", slave, (u_char)byte);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_BYTE;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_WRITE);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CMD, byte);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	smb_error = ichsmb_wait(sc);
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d\n", smb_error);
	return (smb_error);
}

int
ichsmb_recvb(device_t dev, u_char slave, char *byte)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x\n", slave);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_BYTE;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_READ);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	if ((smb_error = ichsmb_wait(sc)) == SMB_ENOERR)
		*byte = bus_space_read_1(sc->io_bst, sc->io_bsh, ICH_D0);
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d byte=0x%02x\n", smb_error, (u_char)*byte);
	return (smb_error);
}

int
ichsmb_writeb(device_t dev, u_char slave, char cmd, char byte)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x cmd=0x%02x byte=0x%02x\n",
	    slave, (u_char)cmd, (u_char)byte);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_BYTE_DATA;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_WRITE);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CMD, cmd);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_D0, byte);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	smb_error = ichsmb_wait(sc);
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d\n", smb_error);
	return (smb_error);
}

int
ichsmb_writew(device_t dev, u_char slave, char cmd, short word)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x cmd=0x%02x word=0x%04x\n",
	    slave, (u_char)cmd, (u_int16_t)word);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_WORD_DATA;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_WRITE);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CMD, cmd);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_D0, word & 0xff);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_D1, word >> 8);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	smb_error = ichsmb_wait(sc);
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d\n", smb_error);
	return (smb_error);
}

int
ichsmb_readb(device_t dev, u_char slave, char cmd, char *byte)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x cmd=0x%02x\n", slave, (u_char)cmd);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_BYTE_DATA;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_READ);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CMD, cmd);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	if ((smb_error = ichsmb_wait(sc)) == SMB_ENOERR)
		*byte = bus_space_read_1(sc->io_bst, sc->io_bsh, ICH_D0);
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d byte=0x%02x\n", smb_error, (u_char)*byte);
	return (smb_error);
}

int
ichsmb_readw(device_t dev, u_char slave, char cmd, short *word)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x cmd=0x%02x\n", slave, (u_char)cmd);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_WORD_DATA;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_READ);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CMD, cmd);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	if ((smb_error = ichsmb_wait(sc)) == SMB_ENOERR) {
		*word = (bus_space_read_1(sc->io_bst,
		    sc->io_bsh, ICH_D0) & 0xff)
		    | (bus_space_read_1(sc->io_bst,
		    sc->io_bsh, ICH_D1) << 8);
	}
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d word=0x%04x\n", smb_error, (u_int16_t)*word);
	return (smb_error);
}

int
ichsmb_pcall(device_t dev, u_char slave, char cmd, short sdata, short *rdata)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x cmd=0x%02x sdata=0x%04x\n",
	    slave, (u_char)cmd, (u_int16_t)sdata);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_PROC_CALL;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_WRITE);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CMD, cmd);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_D0, sdata & 0xff);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_D1, sdata >> 8);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	if ((smb_error = ichsmb_wait(sc)) == SMB_ENOERR) {
		*rdata = (bus_space_read_1(sc->io_bst,
		    sc->io_bsh, ICH_D0) & 0xff)
		    | (bus_space_read_1(sc->io_bst,
		    sc->io_bsh, ICH_D1) << 8);
	}
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d rdata=0x%04x\n", smb_error, (u_int16_t)*rdata);
	return (smb_error);
}
int
ichsmb_bwrite(device_t dev, u_char slave, char cmd, u_char count, char *buf)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x cmd=0x%02x count=%d\n", slave, (u_char)cmd, count);
#if ICHSMB_DEBUG
#define DISP(ch) (((ch) < 0x20 || (ch) >= 0x7e) ? '.' : (ch))
	{
	    u_char *p;

	    for (p = (u_char *)buf; p - (u_char *)buf < 32; p += 8) {
		DBG("%02x: %02x %02x %02x %02x %02x %02x %02x %02x"
		    " %c%c%c%c%c%c%c%c", (p - (u_char *)buf),
		    p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7],
		    DISP(p[0]), DISP(p[1]), DISP(p[2]), DISP(p[3]),
		    DISP(p[4]), DISP(p[5]), DISP(p[6]), DISP(p[7]));
	    }
	}
#undef DISP
#endif
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	if (count < 1 || count > 32)
		return (SMB_EINVAL);
	bcopy(buf, sc->block_data, count);
	sc->block_count = count;
	sc->block_index = 1;
	sc->block_write = 1;

	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_BLOCK;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_WRITE);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CMD, cmd);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_D0, count);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_BLOCK_DB, buf[0]);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	smb_error = ichsmb_wait(sc);
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d\n", smb_error);
	return (smb_error);
}
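
/*
 * Block read: "count" is an in/out parameter.  On entry it gives the
 * size of the caller's buffer; on successful return it is set to the
 * number of bytes the slave actually supplied, which the interrupt
 * handler reads from the ICH_D0 register on the first BYTE_DONE
 * interrupt.  At most min(block_count, *count) bytes are copied back
 * to the caller's buffer.
 */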
int
ichsmb_bread(device_t dev, u_char slave, char cmd, u_char *count, char *buf)
{
	const sc_p sc = device_get_softc(dev);
	int smb_error;

	DBG("slave=0x%02x cmd=0x%02x count=%d\n", slave, (u_char)cmd, *count);
	KASSERT(sc->ich_cmd == -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	if (*count < 1 || *count > 32)
		return (SMB_EINVAL);
	bzero(sc->block_data, sizeof(sc->block_data));
	sc->block_count = 0;
	sc->block_index = 0;
	sc->block_write = 0;

	mtx_lock(&sc->mutex);
	sc->ich_cmd = ICH_HST_CNT_SMB_CMD_BLOCK;
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_XMIT_SLVA,
	    (slave << 1) | ICH_XMIT_SLVA_READ);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CMD, cmd);
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_D0, *count); /* XXX? */
	bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_CNT,
	    ICH_HST_CNT_START | ICH_HST_CNT_INTREN | sc->ich_cmd);
	if ((smb_error = ichsmb_wait(sc)) == SMB_ENOERR) {
		bcopy(sc->block_data, buf, min(sc->block_count, *count));
		*count = sc->block_count;
	}
	mtx_unlock(&sc->mutex);
	DBG("smb_error=%d\n", smb_error);
#if ICHSMB_DEBUG
#define DISP(ch) (((ch) < 0x20 || (ch) >= 0x7e) ? '.' : (ch))
	{
	    u_char *p;

	    for (p = (u_char *)buf; p - (u_char *)buf < 32; p += 8) {
		DBG("%02x: %02x %02x %02x %02x %02x %02x %02x %02x"
		    " %c%c%c%c%c%c%c%c", (p - (u_char *)buf),
		    p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7],
		    DISP(p[0]), DISP(p[1]), DISP(p[2]), DISP(p[3]),
		    DISP(p[4]), DISP(p[5]), DISP(p[6]), DISP(p[7]));
	    }
	}
#undef DISP
#endif
	return (smb_error);
}

/********************************************************************
			OTHER FUNCTIONS
 ********************************************************************/

/*
 * This table describes what interrupts we should ever expect to
 * see after each ICH command, not including the SMBALERT interrupt.
 */
static const u_int8_t ichsmb_state_irqs[] = {
	/* quick */
	(ICH_HST_STA_BUS_ERR | ICH_HST_STA_DEV_ERR | ICH_HST_STA_INTR),
	/* byte */
	(ICH_HST_STA_BUS_ERR | ICH_HST_STA_DEV_ERR | ICH_HST_STA_INTR),
	/* byte data */
	(ICH_HST_STA_BUS_ERR | ICH_HST_STA_DEV_ERR | ICH_HST_STA_INTR),
	/* word data */
	(ICH_HST_STA_BUS_ERR | ICH_HST_STA_DEV_ERR | ICH_HST_STA_INTR),
	/* process call */
	(ICH_HST_STA_BUS_ERR | ICH_HST_STA_DEV_ERR | ICH_HST_STA_INTR),
	/* block */
	(ICH_HST_STA_BUS_ERR | ICH_HST_STA_DEV_ERR | ICH_HST_STA_INTR
	    | ICH_HST_STA_BYTE_DONE_STS),
	/* i2c read (not used) */
	(ICH_HST_STA_BUS_ERR | ICH_HST_STA_DEV_ERR | ICH_HST_STA_INTR
	    | ICH_HST_STA_BYTE_DONE_STS)
};
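
/*
 * The table above is indexed by the current ICH command shifted right
 * by two bits, i.e. the same "cmd_index = sc->ich_cmd >> 2" computed in
 * the interrupt handler below.
 */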

/*
 * Interrupt handler. This handler is bus-independent. Note that our
 * interrupt may be shared, so we must handle "false" interrupts.
 */
void
ichsmb_device_intr(void *cookie)
{
	const sc_p sc = cookie;
	const device_t dev = sc->dev;
	const int maxloops = 16;
	u_int8_t status;
	u_int8_t ok_bits;
	int cmd_index;
	int count;

	mtx_lock(&sc->mutex);
	for (count = 0; count < maxloops; count++) {

		/* Get and reset status bits */
		status = bus_space_read_1(sc->io_bst, sc->io_bsh, ICH_HST_STA);
#if ICHSMB_DEBUG
		if ((status & ~(ICH_HST_STA_INUSE_STS | ICH_HST_STA_HOST_BUSY))
		    || count > 0) {
			DBG("%d stat=0x%02x\n", count, status);
		}
#endif
		status &= ~(ICH_HST_STA_INUSE_STS | ICH_HST_STA_HOST_BUSY);
		if (status == 0)
			break;

		/* Check for unexpected interrupt */
		ok_bits = ICH_HST_STA_SMBALERT_STS;
		cmd_index = sc->ich_cmd >> 2;
		if (sc->ich_cmd != -1) {
			KASSERT(cmd_index < sizeof(ichsmb_state_irqs),
			    ("%s: ich_cmd=%d", device_get_nameunit(dev),
			    sc->ich_cmd));
			ok_bits |= ichsmb_state_irqs[cmd_index];
		}
		if ((status & ~ok_bits) != 0) {
			log(LOG_ERR, "%s: irq 0x%02x during %d\n",
			    device_get_nameunit(dev), status, cmd_index);
			bus_space_write_1(sc->io_bst, sc->io_bsh,
			    ICH_HST_STA, (status & ~ok_bits));
			continue;
		}

		/* Handle SMBALERT interrupt */
		if (status & ICH_HST_STA_SMBALERT_STS) {
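			/*
			 * Only the first 16 SMBALERT# events are logged;
			 * once smbalert_count reaches zero, logging stops
			 * to avoid flooding the console.
			 */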
			static int smbalert_count = 16;

			if (smbalert_count > 0) {
				log(LOG_WARNING, "%s: SMBALERT# rec'd\n",
				    device_get_nameunit(dev));
				if (--smbalert_count == 0) {
					log(LOG_WARNING,
					    "%s: not logging anymore\n",
					    device_get_nameunit(dev));
				}
			}
		}

		/* Check for bus error */
		if (status & ICH_HST_STA_BUS_ERR) {
			sc->smb_error = SMB_ECOLLI;	/* XXX SMB_EBUSERR? */
			goto finished;
		}

		/* Check for device error */
		if (status & ICH_HST_STA_DEV_ERR) {
			sc->smb_error = SMB_ENOACK;	/* or SMB_ETIMEOUT? */
			goto finished;
		}

		/* Check for byte completion in block transfer */
		if (status & ICH_HST_STA_BYTE_DONE_STS) {
			if (sc->block_write) {
				if (sc->block_index < sc->block_count) {

					/* Write next byte */
					bus_space_write_1(sc->io_bst,
					    sc->io_bsh, ICH_BLOCK_DB,
					    sc->block_data[sc->block_index++]);
				}
			} else {

				/* First interrupt, get the count also */
				if (sc->block_index == 0) {
					sc->block_count = bus_space_read_1(
					    sc->io_bst, sc->io_bsh, ICH_D0);
				}

				/* Get next byte, if any */
				if (sc->block_index < sc->block_count) {

					/* Read next byte */
					sc->block_data[sc->block_index++] =
					    bus_space_read_1(sc->io_bst,
					    sc->io_bsh, ICH_BLOCK_DB);

					/* Set "LAST_BYTE" bit before reading
					   the last byte of block data */
					if (sc->block_index
					    >= sc->block_count - 1) {
						bus_space_write_1(sc->io_bst,
						    sc->io_bsh, ICH_HST_CNT,
						    ICH_HST_CNT_LAST_BYTE
						    | ICH_HST_CNT_INTREN
						    | sc->ich_cmd);
					}
				}
			}
		}

		/* Check command completion */
		if (status & ICH_HST_STA_INTR) {
			sc->smb_error = SMB_ENOERR;
finished:
			sc->ich_cmd = -1;
			bus_space_write_1(sc->io_bst, sc->io_bsh,
			    ICH_HST_STA, status);
			wakeup(sc);
			break;
		}

		/* Clear status bits and try again */
		bus_space_write_1(sc->io_bst, sc->io_bsh, ICH_HST_STA, status);
	}
	mtx_unlock(&sc->mutex);

	/* Too many loops? */
	if (count == maxloops) {
		log(LOG_ERR, "%s: interrupt loop, status=0x%02x\n",
		    device_get_nameunit(dev),
		    bus_space_read_1(sc->io_bst, sc->io_bsh, ICH_HST_STA));
	}
}

/*
 * Wait for command completion. Assumes mutex is held.
 * Returns an SMB_* error code.
 */
static int
ichsmb_wait(sc_p sc)
{
	const device_t dev = sc->dev;
	int error, smb_error;

	KASSERT(sc->ich_cmd != -1,
	    ("%s: ich_cmd=%d\n", __func__ , sc->ich_cmd));
	mtx_assert(&sc->mutex, MA_OWNED);
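	/*
	 * Sleep until the interrupt handler posts a result via wakeup(sc),
	 * or give up after hz / 4 ticks (roughly a quarter of a second).
	 */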
	error = msleep(sc, &sc->mutex, PZERO, "ichsmb", hz / 4);
	DBG("msleep -> %d\n", error);
	switch (error) {
	case 0:
		smb_error = sc->smb_error;
		break;
	case EWOULDBLOCK:
		log(LOG_ERR, "%s: device timeout, status=0x%02x\n",
		    device_get_nameunit(dev),
		    bus_space_read_1(sc->io_bst, sc->io_bsh, ICH_HST_STA));
		sc->ich_cmd = -1;
		smb_error = SMB_ETIMEOUT;
		break;
	default:
		smb_error = SMB_EABORT;
		break;
	}
	return (smb_error);
}

/*
 * Release resources associated with device.
 */
void
ichsmb_release_resources(sc_p sc)
{
	const device_t dev = sc->dev;

	if (sc->irq_handle != NULL) {
		bus_teardown_intr(dev, sc->irq_res, sc->irq_handle);
		sc->irq_handle = NULL;
	}
	if (sc->irq_res != NULL) {
		bus_release_resource(dev,
		    SYS_RES_IRQ, sc->irq_rid, sc->irq_res);
		sc->irq_res = NULL;
	}
	if (sc->io_res != NULL) {
		bus_release_resource(dev,
		    SYS_RES_IOPORT, sc->io_rid, sc->io_res);
		sc->io_res = NULL;
	}
}
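
/*
 * Detach: the generic bus detach runs first so that child devices such
 * as smb(4) can complete their own detach routines; the softc mutex is
 * destroyed only afterwards, since those children may still make SMBus
 * upcalls that take the mutex while detaching.
 */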
int
ichsmb_detach(device_t dev)
{
	const sc_p sc = device_get_softc(dev);
	int error;

	error = bus_generic_detach(dev);
	if (error)
		return (error);
	device_delete_child(dev, sc->smb);
	ichsmb_release_resources(sc);
	mtx_destroy(&sc->mutex);

	return 0;
}
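
/*
 * Attach the generic smbus(4) driver to this controller; the
 * DRIVER_MODULE() for smbus lives in each host-controller driver
 * rather than centrally in smbus.c.
 */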
DRIVER_MODULE(smbus, ichsmb, smbus_driver, smbus_devclass, 0, 0);