disable MWI on 2300
based on function code, set an 'isp_port' for the 2312- it's a
separate instance, but the NVRAM is shared, and the second port's
NVRAM is at offset 256.
+ Enable RIO operation for LVD SCSI cards. This makes a *big* difference
as even under reasonable load we get batched completions of about 30
commands at a time on, say, an ISP1080.
+ Do 'continuation' mailbox commands- this allows us to specify a work
area within the softc and 'continue' repeated mailbox commands. This is
more or less on an ad hoc basis and is currently only used for firmware
loading (the f/w now loads substantially faster because the calling
thread is only woken when all the f/w words are loaded- not for each
one of the 40000 f/w words that gets loaded).
+ If we're about to return from isp_intr with a 'bogus interrupt' indication,
and we're not a 23XX card, check to see whether the semaphore register is
currently *2* (not *1* as it should be) and whether there's an async completion
sitting in outgoing mailbox0. This seems to capture cases of lost fast posting
and RIO interrupts that the 12160 && 1080 have been known to pump out under
extreme load (extreme, as in > 250 active commands); see the sketch after
this list.
+ FC_SCRATCH_ACQUIRE/FC_SCRATCH_RELEASE macros.
+ Endian correct swizzle/unswizzle of an ATIO2 that has a WWPN in it.
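A rough sketch of that last-chance check (the register accessors and the
async-event test below follow the driver's existing naming, but treat the
exact spellings and encodings as assumptions):

    /* Just before returning 'bogus interrupt' on a non-23XX card... */
    if (!IS_23XX(isp) && ISP_READ(isp, BIU_SEMA) == 2) {
        uint16_t mbox = ISP_READ(isp, OUTMAILBOX0);
        if (mbox & 0x8000) {    /* assumed: looks like an async event code */
            isp_parse_async(isp, (int) mbox);   /* e.g., a lost fast post */
            ISP_WRITE(isp, BIU_SEMA, 0);
            ISP_WRITE(isp, HCCR, HCCR_CMD_CLEAR_RISC_INT);
            return;             /* not a bogus interrupt after all */
        }
    }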
MFC after: 1 week
the response queue. Instead of the ad hoc ISP_SWIZZLE_REQUEST, we now have
a complete set of inline functions in isp_inline.h. Each platform is
responsible for providing just one set of ISP_IOX_{GET,PUT}{8,16,32}
macros.
The reason this needs to be done is that we need to have a single set of
functions that will work correctly on multiple architectures for both little
and big endian machines. It also needs to work correctly in the case that
we have the request or response queues in memory that has to be treated
specially (e.g., have ddi_dma_sync called on it for Solaris after we update
it or before we read from it). It also has to handle the SBus cards (for
platforms that have them) which, while on a Big Endian machine, do *not*
require *most* of the request/response queue entry fields to be swizzled
or unswizzled.
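For illustration, here is a minimal sketch of what one platform might supply
for a PCI card (whose queue entries are little-endian as the chip sees them);
the byte-order helpers and the exact macro spellings/argument order are
assumptions- the platform's isp_<platform>.h is authoritative:

    /* Fetch a 16-bit queue entry field into host order, and the reverse. */
    #define ISP_IOXGET_16(isp, src, dst)                \
            (dst) = le16toh(*((const uint16_t *)(src)))
    #define ISP_IOXPUT_16(isp, src, dst)                \
            *((uint16_t *)(dst)) = htole16(src)

An SBus platform would supply different bodies (mostly straight copies, since
most fields need no swapping there), which is exactly why the choice is pushed
down into the platform header.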
One thing that falls out of this is that we no longer build requests in the
request queue itself. Instead, we build the request locally (e.g., on the
stack) and then as part of the swizzling operation, copy it to the request
queue entry we've allocated. I thought long and hard about whether this was
too expensive a change to make, as in a lot of cases it requires an extra
copy. On balance, the flexibility is worth it. With any luck, the entry that
we build locally stays in a processor writeback cache (after all, it's only
64 bytes) so that the cost of actually flushing it to the memory area that is
the shared queue with the PCI device is not all that expensive. We may examine
this again in the future and try to get clever about avoiding copies.
Another change that falls out of this is that MEMORYBARRIER should be taken
a lot more seriously. The macro ISP_ADD_REQUEST does a MEMORYBARRIER on the
entry being added. But it had been missing in many other places.
It's now very important that it be done.
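The resulting pattern when queueing a command looks roughly like this (a
sketch only- names such as ispreq_t, MEMZERO, isp_put_request and CMD_EAGAIN
are placeholders consistent with the description above):

    ispreq_t local, *qep;
    uint16_t nxti;

    if (isp_getrqentry(isp, &nxti, (void **) &qep) != 0)
        return (CMD_EAGAIN);            /* request queue is full */
    MEMZERO(&local, sizeof (local));
    /* ... build the request in 'local', on the stack ... */
    isp_put_request(isp, &local, qep);  /* swizzle-copy into the shared queue */
    ISP_ADD_REQUEST(isp, nxti);         /* MEMORYBARRIER on the entry just
                                         * added, then hand the new in-pointer
                                         * to the chip */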
Additional changes:
Fix a longstanding buglet of sorts. When we get an entry via isp_getrqentry,
the iptr value that gets returned is the value we intend to eventually plug
into the ISP registers as the entry *one past* the last one we've written-
*not* the current entry we're updating. All along we've been calling sync
functions on the wrong index value. Argh. The 'fix' here is to rename all
'iptr' variables as 'nxti' to remember that this is the 'next' pointer-
not the current pointer.
Devote a single bit to mboxbsy- and set aside bits for output mbox registers
that we need to pick up- we can have at least one command which does not
have any defined output registers (MBOX_EXECUTE_FIRMWARE).
MFC after: 2 weeks
appropriate cache flush that provides MEMORY_BARRIER in between handoffs
between host && RISC processor for the shared memory request/response
queues.
Submitted by: dfr@nlsystems.com
per-command component that we *don't* try and pass thru CAM. CAM is just
too risky and too much of a pain- structures get copied, but not
all info of interest can be considered safely transported thru all
consumers (including user space) from the incoming ATIO to the outgoing
CTIO- it's just much safer to have a buddy structure, identified by the
command's tag which *does* make it thru safely.
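A hypothetical shape for such a buddy structure, keyed by the command's tag
(the field names here are illustrative only):

    /* Private per-ATIO state the driver keeps to itself and looks up by
     * tag when the matching CTIO(s) come back, rather than trusting CAM
     * to carry it end to end. */
    typedef struct {
        uint32_t    tag_id;         /* tag from the incoming ATIO */
        uint32_t    orig_datalen;   /* total data the initiator asked for */
        uint32_t    bytes_xfered;   /* running count across CTIOs */
        uint16_t    lun;
        uint16_t    state;          /* free / active / done */
    } atio_private_data_t;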
Pay attention to link speed and report 200MB/s xfer speed for a
23XX card in 2Gbps mode.
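In CAM terms this just means the path inquiry reports a bigger base transfer
speed when the link came up at 2Gb; a one-line sketch (the field is in KB/s,
and the 'isp_gbspeed' name is an assumption):

    cpi->base_transfer_speed =
        (IS_23XX(isp) && fcp->isp_gbspeed == 2) ? 200000 : 100000;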
MFC after: 1 week
SIM (as is true for the 1280 and the 12160), then I have to have separate
flags && status for *both* busses. *Whap*.
Implement condition variables for coordination with some target mode
events. It's nice to use these and not panic in obscure little places
in the kernel like 'propagate_priority' just because we went to sleep
holding a mutex, or some other absurd thing.
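The pattern is the stock condition-variable one (standard cv(9) calls; the
lock, condvar and flag names here are made up):

    mtx_lock(&isp->isp_osinfo.lock);
    while ((isp->isp_osinfo.tmflags & TM_ENABLED) == 0) {
        /* cv_wait drops the mutex while asleep, so we never go to sleep
         * holding it- which is what upset propagate_priority. */
        cv_wait(&isp->isp_osinfo.tmcv, &isp->isp_osinfo.lock);
    }
    mtx_unlock(&isp->isp_osinfo.lock);

    /* ...and on the side that posts the target mode event: */
    mtx_lock(&isp->isp_osinfo.lock);
    isp->isp_osinfo.tmflags |= TM_ENABLED;
    cv_signal(&isp->isp_osinfo.tmcv);
    mtx_unlock(&isp->isp_osinfo.lock);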
MFC after: 4 weeks
some reworking (and consequent cleanup) of the interrupt service code.
Also begin a cleanup of target mode support that will (eventually)
not require more information routed with the ATIO to come back with the
CTIO other than the tag.
MFC after: 4 weeks
----
Make a device for each ISP- really usable only with devfs and add an ioctl
entry point (this can be used to (re)set debug levels, reset the HBA,
rescan the fabric, issue LIPs, etc).
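By way of illustration, the entry point might be shaped like this (the ioctl
command names and the softc lookup are assumptions- whatever lands in the
ioctl header is authoritative):

    static int
    ispioctl(dev_t dev, u_long cmd, caddr_t addr, int flags, struct proc *p)
    {
        struct ispsoftc *isp = ispsoftcs[minor(dev)];   /* hypothetical */

        switch (cmd) {
        case ISP_SDBLEV:                /* (re)set debug level */
            isp->isp_dblev = *(int *) addr;
            return (0);
        case ISP_RESETHBA:              /* reset the HBA */
            isp_reinit(isp);
            return (0);
        case ISP_FC_RESCAN:             /* rescan the fabric */
        case ISP_FC_LIP:                /* issue a LIP */
            /* kick the FC state machine / mailbox command as needed */
            return (0);
        default:
            return (ENOTTY);
        }
    }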
----
Add in a kernel thread for Fibre Channel cards. The purpose of this
thread is to be woken up to clean up after Fibre Channel events
block things. Basically, any FC event that casts doubt on the
location or identity of FC devices blocks the queues. When, and
if, we get the PORT DATABASE CHANGED or NAME SERVER DATABASE CHANGED
async event, we activate the kthread which will then, in full thread
context, re-evaluate the local loop and/or the fabric. When it's
satisfied that things are stable, it can then release the blocked
queues and let commands flow again.
The prior mechanism was a lazy evaluation. That is, the next command
to come down the pipe after change events would pay the full price
for re-evaluation. And if this was done off of a softcall, it really
could hang up the system.
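A sketch of the kthread's main loop (names such as isp_fc_runstate, the
wakeup channel and the unfreeze helper are illustrative assumptions):

    static void
    isp_kthread(void *arg)
    {
        struct ispsoftc *isp = arg;

        for (;;) {
            /* Sleep until an FC async event (LIP, port database or name
             * server database change, ...) says the world may have moved. */
            tsleep(&isp->isp_osinfo.kthread, PRIBIO, "ispfc", 0);

            /* In full thread context, re-evaluate the loop and/or fabric
             * until things look stable again. */
            while (isp_fc_runstate(isp, 2 * 1000000) != 0)
                tsleep(isp, PRIBIO, "ispfcretry", hz);

            /* Now it's safe to release the blocked queues. */
            isp_unfreeze_queues(isp);       /* hypothetical helper */
        }
    }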
These changes bring the FreeBSD port more in line with the Solaris,
Linux and NetBSD ports. It also, more importantly, gets us being
more proactive about topology changes which could then be reflected
upwards to CAM so that the periph driver can be informed sooner
rather than later when things arrive or depart.
---
Add in the (correct) usage of locking macros- we now have lock transition
macros which allow us to transition from holding the CAM lock (Giant)
to holding the softc lock and vice versa. Switch over to having this
HBA do real locking. Some folks claim this won't be a win. They're right.
But you have to start somewhere, and this will begin to teach us how
to DTRT for HBAs, etc.
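The transition macros amount to something like the following (a sketch-
the real names and any recursion handling live in isp_freebsd.h):

    /* Holding Giant (the CAM lock), about to touch the softc. */
    #define CAMLOCK_2_ISPLOCK(isp)      \
            mtx_unlock(&Giant); mtx_lock(&(isp)->isp_osinfo.lock)

    /* And the reverse, before calling back up into CAM. */
    #define ISPLOCK_2_CAMLOCK(isp)      \
            mtx_unlock(&(isp)->isp_osinfo.lock); mtx_lock(&Giant)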
--
Start putting in prototype 2300 support. Add back in LIP
and Loop Reset as async events that each platform will handle.
Add in another int_bogus instrumentation point.
Do some more substantial target mode cleanups.
MFC after: 8 weeks
Comment out usage of ISP_SMPLOCK- I have my doubts that this works sanely
as yet because CAM itself still needs Giant. I *was* dropping my lock
and grabbing Giant when doing the upcall for completion, but this all
seems ridiculous until CAM is fixed.
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
similarly, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
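In practice the conversion looks like this (illustrative snippets, not diffs
from any particular file):

    /* before */                            /* after */
    mtx_enter(&sc->sc_mtx, MTX_DEF);        mtx_lock(&sc->sc_mtx);
    mtx_exit(&sc->sc_mtx, MTX_DEF);         mtx_unlock(&sc->sc_mtx);
    mtx_enter(&sched_lock, MTX_SPIN);       mtx_lock_spin(&sched_lock);
    mtx_exit(&sched_lock, MTX_SPIN);        mtx_unlock_spin(&sched_lock);

    /* the two surviving flags go through the _flags wrappers */
    mtx_lock_flags(&sc->sc_mtx, MTX_QUIET);
    mtx_unlock_flags(&sc->sc_mtx, MTX_QUIET);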
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
Add a test against isp->isp_osinfo.islocked prior to trying to see
whether --isp->isp_osinfo.islocked is zero to cause us to unlock
(non-SMPLOCK case).
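i.e., roughly (the saved-spl field name is a guess):

    if (isp->isp_osinfo.islocked != 0) {
        if (--isp->isp_osinfo.islocked == 0)
            splx(isp->isp_osinfo.splsaved);     /* hypothetical field */
    }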
compile time will build in mutex locks, otherwise the old locking (splcam/splx
with a recursion counter) will be compiled in.
We still depend on config_intr_hook to tell us when it's okay to call
msleep instead of polling. It'd be real nice if we could do this early
enough to not hang up a machine struggling with a bad Fibre Channel loop,
but that's still to come.
quite a bit so that all of the ports have a similar set of required
macros/definitions (and in similar places in the isp_<platform>.h
file).
Some new macros/functions added- Mailbox Acquire/Release macros,
NANOTIME macros, SNPRINTF and STRNCAT. MemoryBarrier becomes
MEMORYBARRIER with much stronger types.
to isp_osinfo substructure (all in prep for SMP). Define MBOX_WAIT_COMPLETE
and MBOX_NOTIFY_COMPLETE macros so that we can now (temp) use tsleep
to wait for mailbox completion. Requires us to guess whether we're
servicing an interrupt or not- will use intr_nesting_level.
Add local strncat function.
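Very roughly, the wait side could be shaped like this (the macro body and
field names are assumptions; the intr_nesting_level test is the 'guess'
mentioned above):

    #define MBOX_WAIT_COMPLETE(isp)                                         \
        if (intr_nesting_level == 0) {                                      \
            /* thread context: sleep until the mailbox interrupt wakes us */\
            while ((isp)->isp_mboxbsy)                                      \
                tsleep(&(isp)->isp_osinfo.mboxwaiting, PRIBIO, "ispmbx", 0);\
        } else {                                                            \
            /* interrupt/polled context: spin on the handler instead */     \
            while ((isp)->isp_mboxbsy)                                      \
                isp_intr(isp);                                              \
        }

with MBOX_NOTIFY_COMPLETE doing the matching wakeup() and clearing mboxbsy.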
store a bitmask of whether we've set a value into ccb->ccb_h.status,
whether we're in the watchdog routine for this command now, whether
we've set a grace period for this command and whether this command is
actually done.
See comments of rev 1.45 of isp.c for more complete information.
(we always support fabric now). Remove SCCLUN definition (we always
support SCCLUN now, if we load the f/w). Add typedef definition of an
external firmware fetch function.
Add in null SWIZZLE definitions. Add in CFGPRINTF define. Change default
debug level to refer to an external isp_debug variable. Remove inline
functions as they're now in isp_inline.h and include that file.
thwank in register layout goop). A different mboxcmd approach. Some PDB change
infrastructure. Some better management of loopdown/loopup events (keep them
distinct from resource starvation for simq freeze/unfreeze actions).
the startup code. Implement a call to an outer framework function so that
asynchronous events can be handled (e.g., speed negotiation, target mode).
Roll internal release tags.