freebsd-dev/sys/sparc64/include/bus_private.h

/*-
* Copyright (c) 1997, 1998 Justin T. Gibbs.
* Copyright (c) 2002 by Thomas Moestl <tmm@FreeBSD.org>.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* from: FreeBSD: src/sys/i386/i386/busdma_machdep.c,v 1.25 2002/01/05
*
* $FreeBSD$
*/
#ifndef	_MACHINE_BUS_PRIVATE_H_
#define	_MACHINE_BUS_PRIVATE_H_

#include <sys/queue.h>

/*
 * Helpers
 */
int sparc64_bus_mem_map(bus_space_tag_t tag, bus_addr_t addr, bus_size_t size,
    int flags, vm_offset_t vaddr, bus_space_handle_t *hp);
int sparc64_bus_mem_unmap(bus_space_tag_t tag, bus_space_handle_t handle,
    bus_size_t size);
bus_space_tag_t sparc64_alloc_bus_tag(void *cookie, int type);
bus_space_handle_t sparc64_fake_bustag(int space, bus_addr_t addr,
    struct bus_space_tag *ptag);
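
/*
 * Illustrative sketch (not compiled in): sparc64_fake_bustag() is intended to
 * let early or low-level code construct a bus_space tag/handle pair for a
 * register block whose physical address is already known, without going
 * through resource allocation.  The space code, address and register offset
 * below are placeholder values, not taken from a real device.
 */
#if 0
	static struct bus_space_tag fake_tag;
	bus_space_handle_t bh;
	uint32_t reg;

	/* Build a tag/handle pair referring directly to the physical address. */
	bh = sparc64_fake_bustag(PCI_MEMORY_BUS_SPACE, 0x1fe00000000UL, &fake_tag);

	/* The pair can then be used with the regular bus_space(9) accessors. */
	reg = bus_space_read_4(&fake_tag, bh, 0x0);
#endif
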
struct bus_dmamap_res {
	struct resource		*dr_res;
	bus_size_t		dr_used;
	SLIST_ENTRY(bus_dmamap_res)	dr_link;
};
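
/*
 * Illustrative sketch (not compiled in): a backend that has just allocated a
 * chunk of DVMA space would typically record it in a bus_dmamap_res entry and
 * hang that entry off the owning map's dm_reslist (declared below).  The
 * variables "res" and "map" are hypothetical locals standing in for the
 * freshly allocated resource and its map.
 */
#if 0
	struct bus_dmamap_res *bdr;

	bdr = malloc(sizeof(*bdr), M_DEVBUF, M_NOWAIT | M_ZERO);
	if (bdr == NULL)
		return (ENOMEM);
	bdr->dr_res = res;	/* the DVMA resource just allocated */
	bdr->dr_used = 0;	/* nothing mapped into it yet */
	SLIST_INSERT_HEAD(&map->dm_reslist, bdr, dr_link);
#endif
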
/*
 * Callers of the bus_dma interfaces must always protect their tags and maps
 * appropriately against concurrent access.  However, when a map is on an LRU
 * queue, there is a second access path to it; for this case, the locking rules
 * are given in the parenthesized comments below:
 *	q - locked by the mutex protecting the queue.
 *	p - private to the owner of the map, no access through the queue.
 *	* - comment refers to the pointer target.
 * Only the owner of the map may insert it into a queue.  Removal and
 * repositioning (i.e. temporary removal and reinsertion) are allowed to
 * anyone as long as the queue lock is held.
 */
struct bus_dmamap {
	TAILQ_ENTRY(bus_dmamap)	dm_maplruq;	/* (q) */
	SLIST_HEAD(, bus_dmamap_res)	dm_reslist;	/* (q, *q) */
	int			dm_onq;		/* (q) */
	int			dm_flags;	/* (p) */
};

/* Flag values */
#define	DMF_LOADED	(1 << 0)	/* Map is loaded. */
#define	DMF_COHERENT	(1 << 1)	/* Coherent mapping requested. */
#define	DMF_STREAMED	(1 << 2)	/* Streaming cache used. */
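
/*
 * Illustrative sketch (not compiled in): under the rules above, any party
 * holding the queue mutex may take a map off the LRU queue and reinsert it,
 * for instance to move it to the tail; only the map's owner may perform the
 * initial insertion.  The queue head "map_lruq" and the mutex "queue_mtx" are
 * hypothetical stand-ins for a backend's own state (the IOMMU code keeps such
 * a queue).  dm_flags, by contrast, is private to the owner and is not
 * covered by the queue lock.
 */
#if 0
static TAILQ_HEAD(, bus_dmamap) map_lruq = TAILQ_HEAD_INITIALIZER(map_lruq);
static struct mtx queue_mtx;

static void
map_lruq_requeue(bus_dmamap_t map)
{

	mtx_lock(&queue_mtx);
	if (map->dm_onq != 0) {
		/* Reposition the map at the tail of the LRU queue. */
		TAILQ_REMOVE(&map_lruq, map, dm_maplruq);
		TAILQ_INSERT_TAIL(&map_lruq, map, dm_maplruq);
	}
	mtx_unlock(&queue_mtx);
}
#endif
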
int sparc64_dma_alloc_map(bus_dma_tag_t dmat, bus_dmamap_t *mapp);
void sparc64_dma_free_map(bus_dma_tag_t dmat, bus_dmamap_t map);
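
/*
 * Illustrative sketch (not compiled in): a bus_dma backend's map-creation
 * path is expected to obtain the generic bus_dmamap declared above via
 * sparc64_dma_alloc_map() and to release it again with sparc64_dma_free_map().
 * The function name and exact method signature below are placeholders.
 */
#if 0
static int
example_dmamap_create(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp)
{
	int error;

	/* Obtain the generic part of the map. */
	error = sparc64_dma_alloc_map(dmat, mapp);
	if (error != 0)
		return (error);

	/* Backend-specific setup of *mapp would go here. */
	return (0);
}
#endif
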
#endif /* !_MACHINE_BUS_PRIVATE_H_ */