freebsd-nq/sys/sparc64/include/iommuvar.h

/*-
* Copyright (c) 1999 Matthew R. Green
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
* AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: NetBSD: iommuvar.h,v 1.6 2008/05/29 14:51:26 mrg Exp
*
* $FreeBSD$
*/
#ifndef _MACHINE_IOMMUVAR_H_
#define _MACHINE_IOMMUVAR_H_
#define IO_PAGE_SIZE PAGE_SIZE_8K
#define IO_PAGE_MASK PAGE_MASK_8K
#define IO_PAGE_SHIFT PAGE_SHIFT_8K
#define round_io_page(x) round_page(x)
#define trunc_io_page(x) trunc_page(x)
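/*
 * Illustrative note (not part of the original header): with the usual 8K
 * I/O page size, these macros behave as follows for an example address:
 *
 *	trunc_io_page(0x12345) == 0x12000	(round down to page start)
 *	round_io_page(0x12345) == 0x14000	(round up to next page boundary)
 */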
/*
* LRU queue handling for lazy resource allocation
*/
TAILQ_HEAD(iommu_maplruq_head, bus_dmamap);
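/*
 * Illustrative sketch (not part of the original header): under the locking
 * scheme described below, a map that sits on the LRU queue would typically
 * be moved to the tail when it is reused, with is_mtx held.  The queue
 * linkage field name "dm_maplruq" is an assumption made for this example
 * only; it is not necessarily what the bus_dmamap structure uses.
 */
#if 0	/* example only, not compiled */
static __inline void
iommu_map_requeue_example(struct iommu_state *is, bus_dmamap_t map)
{

	mtx_assert(&is->is_mtx, MA_OWNED);
	/* Move the map to the tail, marking it most recently used. */
	TAILQ_REMOVE(&is->is_maplruq, map, dm_maplruq);
	TAILQ_INSERT_TAIL(&is->is_maplruq, map, dm_maplruq);
}
#endif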
/*
* Per-IOMMU state; the parenthesized comments indicate the locking strategy:
* i - protected by is_mtx.
* r - read-only after initialization.
* * - comment refers to pointer target / target hardware registers
* (for bus_addr_t).
* is_maplruq is also locked by is_mtx. Elements of is_tsb may only be
 * accessed from functions operating on the map owning the corresponding
 * resource, so the locking that the user is required to do to protect the
 * map is sufficient.
 * The dm_reslist of each map is protected by is_mtx as well.
* is_dvma_rman has its own internal lock.
*/
struct iommu_state {
struct mtx is_mtx;
struct rman is_dvma_rman; /* DVMA space rman */
struct iommu_maplruq_head is_maplruq; /* (i) LRU queue */
vm_paddr_t is_ptsb; /* (r) TSB physical address */
uint64_t *is_tsb; /* (*i) TSB virtual address */
int is_tsbsize; /* (r) 0 = 8K, ... */
uint64_t is_pmaxaddr; /* (r) max. physical address */
uint64_t is_dvmabase; /* (r) */
uint64_t is_cr; /* (r) Control reg value */
vm_paddr_t is_flushpa[2]; /* (r) */
volatile uint64_t *is_flushva[2]; /* (r, *i) */
/*
* (i)
* When a flush is completed, 64 bytes will be stored at the given
* location, the first double word being 1, to indicate completion.
* The lower 6 address bits are ignored, so the addresses need to be
* suitably aligned; over-allocate a large enough margin to be able
* to adjust it.
 * Two such buffers are needed; see the illustrative sketch following this
 * structure definition.
*/
volatile char is_flush[STRBUF_FLUSHSYNC_NBYTES * 3 - 1];
	/* copies of our parent's state, to allow us to be self-contained */
bus_space_tag_t is_bustag; /* (r) Our bus tag */
bus_space_handle_t is_bushandle; /* (r) */
bus_addr_t is_iommu; /* (r, *i) IOMMU registers */
bus_addr_t is_sb[2]; /* (r, *i) Streaming buffer */
/* Tag diagnostics access */
bus_addr_t is_dtag; /* (r, *r) */
/* Data RAM diagnostic access */
bus_addr_t is_ddram; /* (r, *r) */
/* LRU queue diag. access */
bus_addr_t is_dqueue; /* (r, *r) */
/* Virtual address diagnostics register */
bus_addr_t is_dva; /* (r, *r) */
/* Tag compare diagnostics access */
bus_addr_t is_dtcmp; /* (r, *r) */
/* behavior flags */
u_int is_flags; /* (r) */
#define IOMMU_RERUN_DISABLE (1 << 0)
#define IOMMU_FIRE (1 << 1)
#define IOMMU_FLUSH_CACHE (1 << 2)
#define IOMMU_PRESERVE_PROM (1 << 3)
};
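/*
 * Illustrative sketch (not part of the original header): one way the two
 * 64-byte-aligned flush-sync addresses could be derived from the
 * over-allocated is_flush[] buffer described above.  It assumes
 * STRBUF_FLUSHSYNC_NBYTES is the 64-byte flush record size and that
 * roundup2() from <sys/param.h> is available; this is not necessarily how
 * iommu(4) itself sets up is_flushva[]/is_flushpa[].
 */
#if 0	/* example only, not compiled */
static __inline void
iommu_setup_flushva_example(struct iommu_state *is)
{
	int i;

	for (i = 0; i < 2; i++) {
		/* Round each region up to the next flush-record boundary. */
		is->is_flushva[i] = (volatile uint64_t *)roundup2(
		    (uintptr_t)&is->is_flush[i * STRBUF_FLUSHSYNC_NBYTES],
		    STRBUF_FLUSHSYNC_NBYTES);
		/* Record the physical address for the hardware flush-sync. */
		is->is_flushpa[i] = pmap_kextract(
		    (vm_offset_t)is->is_flushva[i]);
	}
}
#endif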
/* interfaces for PCI/SBus code */
void iommu_init(const char *name, struct iommu_state *is, u_int tsbsize,
uint32_t iovabase, u_int resvpg);
void iommu_reset(struct iommu_state *is);
void iommu_decode_fault(struct iommu_state *is, vm_offset_t phys);
extern struct bus_dma_methods iommu_dma_methods;
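/*
 * Illustrative sketch (not part of the original header): rough shape of how
 * a host-to-PCI/SBus bridge driver might fill in its per-IOMMU state and
 * hand it to iommu_init().  The softc layout, register offset, physical
 * address limit, TSB size code, DVMA base and reserved page count below are
 * placeholders invented for this example, not values from any real driver.
 */
#if 0	/* example only, not compiled */
static void
example_bridge_iommu_attach(struct example_softc *sc)
{
	struct iommu_state *is = &sc->sc_is;

	is->is_bustag = sc->sc_bustag;		/* parent bus space tag */
	is->is_bushandle = sc->sc_bushandle;	/* parent bus space handle */
	is->is_iommu = 0x0;			/* placeholder register offset */
	is->is_pmaxaddr = (1ULL << 41) - 1;	/* placeholder: 41-bit IOMMU */
	is->is_flags = 0;

	/*
	 * Placeholder arguments: the TSB size code (0 = 8K TSB, see
	 * is_tsbsize above), DVMA base and number of reserved pages would
	 * come from the bridge's documentation or firmware properties.
	 */
	iommu_init(device_get_nameunit(sc->sc_dev), is, 0, 0xc0000000, 0);
}
#endif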
#endif /* !_MACHINE_IOMMUVAR_H_ */