map. Use this new feature to implement iommu_dvmamap_load_mbuf() and
iommu_dvmamap_load_uio() functions in terms of a new helper function,
iommu_dvmamap_load_buffer(). Reimplement iommu_dvmamap_load() to use
it, too.
This requires some changes to the map format; in addition to that,
remove unused or redundant members.
Add SBus and Psycho wrappers for the new functions, and make them
available through the respective DMA tags.
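To illustrate the pattern, here is a minimal user-space sketch of loading
an mbuf chain through a single per-buffer helper; the struct and the
load_buffer() stand-in below are simplified for illustration and are not
the kernel's actual bus_dma structures:

    #include <stdio.h>

    struct mbuf {                   /* toy mbuf: data, length, chain link */
            void            *m_data;
            int              m_len;
            struct mbuf     *m_next;
    };

    /* Stand-in for the per-buffer helper (iommu_dvmamap_load_buffer()):
     * map one virtually contiguous buffer into the DVMA map. */
    static int
    load_buffer(void *buf, int len)
    {
            printf("mapping %d bytes at %p\n", len, buf);
            return (0);
    }

    /* The mbuf wrapper reduces to a walk over the chain, reusing the
     * helper for each segment; the uio case walks iovecs the same way. */
    static int
    load_mbuf(struct mbuf *m0)
    {
            struct mbuf *m;
            int error = 0;

            for (m = m0; m != NULL && error == 0; m = m->m_next)
                    if (m->m_len > 0)
                            error = load_buffer(m->m_data, m->m_len);
            return (error);
    }

    int
    main(void)
    {
            char a[64], b[32];
            struct mbuf m1 = { b, sizeof(b), NULL };
            struct mbuf m0 = { a, sizeof(a), &m1 };

            return (load_mbuf(&m0));
    }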
_nexus_dmamap_load_buffer()
- implement nexus_dmamap_load() in terms of _nexus_dmamap_load_buffer().
Note that this is untested, as this code is not currently used (but
might be later for UPA devices).
- move BUS_DMAMAP_NSEGS to bus_private.h
- disable the ecache flushing in nexus_dmamap_sync(); it should not be
needed, although the docs are not entirely clear on that.
- move some constants into iommureg.h
- correct some comments
- use KASSERT() in one place instead of rolling our own
- take a sanity check out of #ifdef DIAGNOSTIC
- fix a syntax error in normally #ifdef'ed out debug code
- tweak the announce message a bit
- remove '\n's from a few panic() calls
- don't use the DVMA base address the firmware reports; instead, figure
it out from the appropriate register on Sabres and let the IOMMU code
choose it on Psychos. This also makes the IOMMU TSB size freely
selectable (a sketch of the window arithmetic follows this list).
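A sketch of the window arithmetic this enables, assuming the common sun4u
scheme in which the TSB size field selects 1024 << tsbsize entries, each
mapping one 8KB IO page, with the DVMA window ending at the top of the
32-bit space; the constants here are illustrative, not quoted from the
hardware manuals:

    #include <inttypes.h>
    #include <stdio.h>

    #define IO_PAGE_SIZE    8192ULL         /* assumed IOMMU page size */

    int
    main(void)
    {
            int tsbsize;

            for (tsbsize = 0; tsbsize <= 7; tsbsize++) {
                    /* 1024 << tsbsize entries, one IO page each. */
                    uint64_t size = (1024ULL << tsbsize) * IO_PAGE_SIZE;
                    /* The window ends at the top of the 32-bit space. */
                    uint64_t base = (1ULL << 32) - size;

                    printf("tsbsize %d: base 0x%08" PRIx64 ", %4" PRIu64
                        "MB window\n", tsbsize, base, size >> 20);
            }
            return (0);
    }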
2.) pass the requesting child device (instead of the bus one) up when
handling interrupt resources
3.) remember to mark the resource list entry as unused in
sbus_release_resource().
Reported by: scottl (3)
With a 1 byte transmit fifo, 3 byte receive fifo, and weird multiplexed I/O
designed for a Z80 cpu, this chip redefines suckage.
Based on the OpenBSD and NetBSD drivers. Only really works as a console;
modem support is not complete since I can't test it.
- Restore %g6 and %g7 for kernel traps if we are returning to prom code.
This allows complex traps (ones that call into C code) to be handled from
the prom.
be used for zones that allocate objects of less than 1 page. The biggest advantage
of this is that all of a sudden the majority of kernel malloc-ed data doesn't
need kva allocated for it. Besides microbenchmarks I haven't seen a measurable
performance improvement from doing this.
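The idea, as a toy sketch: back sub-page allocations with whole physical
pages reached through the direct map, so no kva has to be allocated. The
page allocator and the PHYS_TO_DIRECT() macro below are hypothetical
stand-ins, not the kernel's interfaces:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE       8192

    /* Stand-in "physical" memory and a trivial page allocator. */
    static char physmem[16 * PAGE_SIZE];
    static int nextpage;

    static uintptr_t
    phys_page_alloc(void)
    {
            return ((uintptr_t)nextpage++ * PAGE_SIZE);
    }

    /* Stand-in direct map: turn a "physical" address into a usable
     * virtual one without consuming any kernel virtual address space. */
    #define PHYS_TO_DIRECT(pa)      ((void *)(physmem + (pa)))

    /* Shape of a small-object backend: grab a page and hand back its
     * direct-mapped address; no kva allocation, no TLB setup. */
    static void *
    small_alloc(int bytes)
    {
            (void)bytes;            /* sub-page objects fit in one page */
            return (PHYS_TO_DIRECT(phys_page_alloc()));
    }

    int
    main(void)
    {
            printf("allocated at %p\n", small_alloc(128));
            return (0);
    }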
useful for accessing more than 1 page of contiguous physical memory, and
to use 4mb tlb entries instead of 8k. This requires that the system only
use the direct mapped addresses when they have the same virtual colour as
all other mappings of the same page, instead of being able to choose the
colour and cachability of the mapping.
- Adapt the physical page copying and zeroing functions to account for not
being able to choose the colour or cachability of the direct mapped
address. This adds a lot more cases to handle. Basically, when a page has
a different colour than its direct mapped address we have a choice between
bypassing the data cache and using physical addresses directly, which
requires a cache flush, or mapping it at the right colour, which requires
a tlb flush. For now we choose to map the page and do the tlb flush; see
the colour sketch after this list.
This will allow the direct mapped addresses to be used for more things
that don't require normal pmap handling, including mapping the vm_page
structures, the message buffer, temporary mappings for crash dumps, and will
provide greater benefit for implementing uma_small_alloc, due to the much
greater tlb coverage.
- Put the kernel tsb before the kernel load address, below
VM_MIN_KERNEL_ADDRESS, instead of after the kernel where it consumes
usable kva. This is magic mapped so the virtual address is irrelevant,
it just needs to be out of the way.
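For reference, a minimal sketch of the colour check this imposes, assuming
a 16KB direct-mapped data cache with 8KB pages (i.e. two colours); the
macro names are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT      13      /* 8KB pages */
    #define DCACHE_COLORS   2       /* assumed: 16KB dcache / 8KB pages */

    /* The colour is the low page-number bit that selects the cache set;
     * two mappings of one physical page may both be cacheable only if
     * these bits match. */
    #define COLOR(addr)     (((addr) >> PAGE_SHIFT) & (DCACHE_COLORS - 1))

    int
    main(void)
    {
            uint64_t va = 0x2000;   /* colour 1 */
            uint64_t pa = 0x4000;   /* colour 0: direct map would alias */

            if (COLOR(va) != COLOR(pa))
                    printf("mismatch: flush and bypass, or remap at "
                        "colour %d and flush the tlb\n", (int)COLOR(va));
            else
                    printf("cacheable direct mapping is safe\n");
            return (0);
    }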
a mapping belongs to by setting it in the vm_page_t structure that backs
the tsb page that the tte for a mapping is in. This allows finding the
pmap that a mapping belongs to without keeping a pointer to it in the
tte itself.
- Remove the pmap pointer from struct tte and use the space to make the
tte pv lists doubly linked (TAILQs), like on other architectures. This
makes entering or removing a mapping O(1) instead of O(n), where n is the
number of pmaps a page is mapped by (including kernel_pmap); see the
sketch below.
- Use atomic ops for setting and clearing bits in the ttes, now that they
return the old value and can be easily used for this purpose.
- Use __builtin_memset for zeroing ttes instead of bzero, so that gcc will
inline it (4 inline stores using %g0 instead of a function call).
- Initially set the virtual colour for all the vm_page_ts to be equal to their
physical colour. This will be more useful once uma_small_alloc is
implemented, but basically pages with virtual colour equal to physical
colour are easier to handle at the pmap level because they can be safely
accessed through cachable direct virtual to physical mappings with that
colour, without fear of causing illegal dcache aliases.
In total these changes give a minor performance improvement, about 1%
reduction in system time during buildworld.
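To illustrate why the doubly linked pv lists make removal O(1), here is a
self-contained sketch using the standard <sys/queue.h> TAILQ macros; the
struct tte layout is purely illustrative:

    #include <sys/queue.h>
    #include <stdio.h>

    struct tte {                            /* illustrative, not the real tte */
            TAILQ_ENTRY(tte) tte_link;      /* forward and back pointers */
            int              tte_data;
    };
    TAILQ_HEAD(pv_head, tte);

    int
    main(void)
    {
            struct pv_head pvh = TAILQ_HEAD_INITIALIZER(pvh);
            struct tte *tp, t[3];
            int i;

            for (i = 0; i < 3; i++) {
                    t[i].tte_data = i;
                    TAILQ_INSERT_TAIL(&pvh, &t[i], tte_link);
            }
            /* O(1): the entry carries its back link, so no list walk is
             * needed to find the predecessor, unlike a singly linked list. */
            TAILQ_REMOVE(&pvh, &t[1], tte_link);
            TAILQ_FOREACH(tp, &pvh, tte_link)
                    printf("tte %d still on the pv list\n", tp->tte_data);
            return (0);
    }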
namely the ones for the timers, error handling and power management.
The registers for the timers, power management and PCI bus B errors are
reserved on Sabres (US-IIi) and can lead to false matches there.
Since none of them are ever used for devices on the bus, they can be
omitted safely.
Approved by: re
import, as it breaks the relocation of kernel modules built with the new
binutils.
Note that this, together with the binutils import, marks a kernel module
flag day on sparc64: modules built with the old binutils will not work
with new kernels and vice versa. Mismatches will result in panics.
Approved by: re
register to that of the processor doing the interrupt setup. This
is required since this field is preinitialized to 0, but there exist
machines which have no processor with a MID of 0 (e.g. e450s with 1 or 2
processors).
Add some more macros for handling the interrupt mapping registers, and
rename some existing ones for consistency.
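A sketch of the read-modify-write this implies; the TID field position and
width below are placeholders, not taken from the Psycho documentation:

    #include <inttypes.h>
    #include <stdio.h>

    /* Hypothetical field layout for the interrupt mapping register. */
    #define INTMAP_TID_SHIFT        26
    #define INTMAP_TID_MASK         (0x1fULL << INTMAP_TID_SHIFT)
    #define INTMAP_TID(mid)         (((uint64_t)(mid) << INTMAP_TID_SHIFT) & \
                                        INTMAP_TID_MASK)

    int
    main(void)
    {
            uint64_t intmap = 0;    /* hardware preinitializes the TID to 0 */
            int mid = 5;            /* MID of the CPU doing the setup */

            /* Clear the stale TID and insert the current processor's. */
            intmap = (intmap & ~INTMAP_TID_MASK) | INTMAP_TID(mid);
            printf("intmap: 0x%016" PRIx64 "\n", intmap);
            return (0);
    }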
Approved by: re
are never used for PCI interrupts, but can cause false matches since
they are fully programmable.
2.) Skip the mapping registers for slot a2 and a3 on "psycho" bridges,
since they are not present there. Again, this could cause false matches,
which would result in the interrupt being delivered at most once.
Submitted by: jake (2)
Approved by: re
this is now done on all machines except for some known problematic ones.
Add an additional guard to make sure that the interrupt numbers are
in the correct range before swizzling. This should catch any remaining
models for which the swizzle is inappropriate.
Correct the swizzle calculation to account for the fact that the parent
interrupt numbers to be swizzled are 1-based.
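For reference, a sketch of the swizzle with 1-based interrupt numbers and
the added range guard; the exact formula used on sparc64 may differ:

    #include <stdio.h>

    /* Classic PCI-PCI bridge swizzle for 1-based pins (INTA#..INTD# map
     * to 1..4): convert to 0-based, rotate by the slot, convert back. */
    static int
    swizzle(int pin, int slot)
    {
            if (pin < 1 || pin > 4)         /* guard: reject bogus pins */
                    return (-1);
            return (((pin - 1 + slot) % 4) + 1);
    }

    int
    main(void)
    {
            int pin, slot;

            for (slot = 0; slot < 4; slot++)
                    for (pin = 1; pin <= 4; pin++)
                            printf("slot %d pin %d -> parent pin %d\n",
                                slot, pin, swizzle(pin, slot));
            return (0);
    }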
Approved by: re