/*-
 * Copyright (c) 1999 Matthew R. Green
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. The name of the author may not be used to endorse or promote products
 *    derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * from: NetBSD: iommuvar.h,v 1.9 2001/07/20 00:07:13 eeh Exp
 *
 * $FreeBSD$
 */

#ifndef _MACHINE_IOMMUVAR_H_
#define	_MACHINE_IOMMUVAR_H_

#define	IO_PAGE_SIZE		PAGE_SIZE_8K
#define	IO_PAGE_MASK		PAGE_MASK_8K
#define	IO_PAGE_SHIFT		PAGE_SHIFT_8K
#define	round_io_page(x)	round_page(x)
#define	trunc_io_page(x)	trunc_page(x)
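
/*
 * Illustrative sketch: the IO_PAGE_* constants and the helpers above can be
 * used to compute how many IOMMU pages a DMA buffer spans; "addr", "len" and
 * "npages" are hypothetical names used only for this example:
 *
 *	npages = (round_io_page(addr + len) - trunc_io_page(addr)) >>
 *	    IO_PAGE_SHIFT;
 */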

/*
 * Per-IOMMU state.  The parenthesized comments indicate the locking strategy:
 * i - protected by iommu_mtx.
 * r - read-only after initialization.
 * * - the comment refers to the pointer target / target hardware registers
 *     (for bus_addr_t).
 * iommu_map_lruq is also locked by iommu_mtx.  Elements of iommu_tsb may only
 * be accessed from functions operating on the map owning the corresponding
 * resource, so the locking the consumer is already required to do to protect
 * the map is sufficient.  As soon as the TSBs are divorced, these will be
 * moved into struct iommu_state, and each state structure will get its own
 * lock.  iommu_dvma_rman needs to be moved there too, but has its own
 * internal lock.
 */
struct iommu_state {
	int			is_tsbsize;	/* (r) 0 = 8K, ... */
	u_int64_t		is_dvmabase;	/* (r) */
	int64_t			is_cr;		/* (r) Control reg value */

	vm_paddr_t		is_flushpa[2];	/* (r) */
	volatile int64_t	*is_flushva[2];	/* (r, *i) */
	/*
	 * (i)
	 * When a flush is completed, 64 bytes will be stored at the given
	 * location, the first double word being 1, to indicate completion.
	 * The lower 6 address bits are ignored, so the addresses need to be
	 * suitably aligned; over-allocate a large enough margin to be able
	 * to adjust it.
	 * Two such buffers are needed.
	 */
	volatile char		is_flush[STRBUF_FLUSHSYNC_NBYTES * 3 - 1];

	/* Copies of our parent's state, to allow us to be self-contained. */
	bus_space_tag_t		is_bustag;	/* (r) Our bus tag */
	bus_space_handle_t	is_bushandle;	/* (r) */
	bus_addr_t		is_iommu;	/* (r, *i) IOMMU registers */
	bus_addr_t		is_sb[2];	/* (r, *i) Streaming buffer */
	/* Tag diagnostics access */
	bus_addr_t		is_dtag;	/* (r, *r) */
	/* Data RAM diagnostics access */
	bus_addr_t		is_ddram;	/* (r, *r) */
	/* LRU queue diagnostics access */
	bus_addr_t		is_dqueue;	/* (r, *r) */
	/* Virtual address diagnostics register */
	bus_addr_t		is_dva;		/* (r, *r) */
	/* Tag compare diagnostics access */
	bus_addr_t		is_dtcmp;	/* (r, *r) */

	STAILQ_ENTRY(iommu_state)	is_link;	/* (r) */
};
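
/*
 * Illustrative sketch of the locking annotations above; all local names are
 * hypothetical, and only iommu_mtx (named in the comment) and the structure
 * members come from this file.  Members marked (i), and the targets of
 * pointers marked (*i), may only be accessed with iommu_mtx held, e.g.:
 *
 *	mtx_lock(&iommu_mtx);
 *	while (*is->is_flushva[0] == 0)
 *		;	(spin until the flush completion word is written)
 *	mtx_unlock(&iommu_mtx);
 *
 * The over-allocation of is_flush leaves room to carve out two buffers
 * aligned to STRBUF_FLUSHSYNC_NBYTES (the hardware ignores the low 6 address
 * bits); one plausible way to derive them is:
 *
 *	va = roundup2((vm_offset_t)is->is_flush, STRBUF_FLUSHSYNC_NBYTES);
 *	is->is_flushva[i] = (volatile int64_t *)
 *	    (va + i * STRBUF_FLUSHSYNC_NBYTES);
 */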

/* interfaces for PCI/SBUS code */
void iommu_init(char *, struct iommu_state *, int, u_int32_t, int);
void iommu_reset(struct iommu_state *);
void iommu_decode_fault(struct iommu_state *, vm_offset_t);
extern struct bus_dma_methods iommu_dma_methods;
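
/*
 * Rough usage sketch from a bus driver attach routine; the parameter
 * interpretation shown (name, per-IOMMU state, TSB size code, DVMA base,
 * reserved pages) and all local names are assumptions for illustration:
 *
 *	iommu_init(name, &sc->sc_is, tsbsize, dvmabase, 0);
 *
 * The driver would then route its bus_dma operations through
 * iommu_dma_methods, and may call iommu_decode_fault() from its error
 * interrupt handler to report a faulting DVMA address.
 */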
#endif /* !_MACHINE_IOMMUVAR_H_ */