/*-
 * Copyright (c) 1999 Matthew R. Green
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * from: NetBSD: iommuvar.h,v 1.6 2008/05/29 14:51:26 mrg Exp
 *
 * $FreeBSD$
 */

#ifndef	_MACHINE_IOMMUVAR_H_
#define	_MACHINE_IOMMUVAR_H_

#define	IO_PAGE_SIZE		PAGE_SIZE_8K
#define	IO_PAGE_MASK		PAGE_MASK_8K
#define	IO_PAGE_SHIFT		PAGE_SHIFT_8K
#define	round_io_page(x)	round_page(x)
#define	trunc_io_page(x)	trunc_page(x)

/*
 * LRU queue handling for lazy resource allocation
 */
TAILQ_HEAD(iommu_maplruq_head, bus_dmamap);

/*
 * Per-IOMMU state; the parenthesized comments indicate the locking strategy:
 * i - protected by is_mtx.
 * r - read-only after initialization.
 * * - comment refers to pointer target / target hardware registers
 *     (for bus_addr_t).
 * is_maplruq is also locked by is_mtx.  Elements of is_tsb may only be
 * accessed from functions operating on the map owning the corresponding
 * resource, so the locking the user is required to do to protect the
 * map is sufficient.
 * dm_reslist of all maps are locked by is_mtx as well.
 * is_dvma_rman has its own internal lock.
 */
struct iommu_state {
	struct mtx		is_mtx;
	struct rman		is_dvma_rman;	/* DVMA space rman */
	struct iommu_maplruq_head is_maplruq;	/* (i) LRU queue */
	vm_paddr_t		is_ptsb;	/* (r) TSB physical address */
	uint64_t		*is_tsb;	/* (*i) TSB virtual address */
	int			is_tsbsize;	/* (r) 0 = 8K, ... */
	uint64_t		is_pmaxaddr;	/* (r) max. physical address */
	uint64_t		is_dvmabase;	/* (r) */
	uint64_t		is_cr;		/* (r) Control reg value */

	vm_paddr_t		is_flushpa[2];	/* (r) */
	volatile uint64_t	*is_flushva[2];	/* (r, *i) */
	/*
	 * (i)
	 * When a flush is completed, 64 bytes will be stored at the given
	 * location, the first double word being 1, to indicate completion.
	 * The lower 6 address bits are ignored, so the addresses need to be
	 * suitably aligned; over-allocate a large enough margin to be able
	 * to adjust it.
	 * Two such buffers are needed.
	 */
	volatile char		is_flush[STRBUF_FLUSHSYNC_NBYTES * 3 - 1];

	/* copies of our parent's state, to allow us to be self contained */
	bus_space_tag_t		is_bustag;	/* (r) Our bus tag */
	bus_space_handle_t	is_bushandle;	/* (r) */
	bus_addr_t		is_iommu;	/* (r, *i) IOMMU registers */
	bus_addr_t		is_sb[2];	/* (r, *i) Streaming buffer */
	/* Tag diagnostics access */
	bus_addr_t		is_dtag;	/* (r, *r) */
	/* Data RAM diagnostic access */
	bus_addr_t		is_ddram;	/* (r, *r) */
	/* LRU queue diag. access */
	bus_addr_t		is_dqueue;	/* (r, *r) */
	/* Virtual address diagnostics register */
	bus_addr_t		is_dva;		/* (r, *r) */
	/* Tag compare diagnostics access */
	bus_addr_t		is_dtcmp;	/* (r, *r) */
	/* behavior flags */
	u_int			is_flags;	/* (r) */
#define	IOMMU_RERUN_DISABLE	(1 << 0)
#define	IOMMU_FIRE		(1 << 1)
#define	IOMMU_FLUSH_CACHE	(1 << 2)
#define	IOMMU_PRESERVE_PROM	(1 << 3)
};

/* interfaces for PCI/SBus code */
void iommu_init(const char *name, struct iommu_state *is, u_int tsbsize,
    uint32_t iovabase, u_int resvpg);
void iommu_reset(struct iommu_state *is);
void iommu_decode_fault(struct iommu_state *is, vm_offset_t phys);

extern struct bus_dma_methods iommu_dma_methods;

#endif /* !_MACHINE_IOMMUVAR_H_ */