return BUS_PROBE_NOWILDCARD from their probe routines to avoid claiming
wildcard devices on their parent bus. Do a sweep through the MIPS tree.
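For illustration, a probe routine on such a bus would look roughly like this
(a minimal sketch; the foo_probe() name and description are hypothetical):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/module.h>
    #include <sys/bus.h>

    /*
     * Returning BUS_PROBE_NOWILDCARD instead of BUS_PROBE_DEFAULT keeps the
     * driver from claiming devices that were added to the parent bus by
     * wildcard (i.e. without an explicit hint or enumeration entry).
     */
    static int
    foo_probe(device_t dev)
    {
            device_set_desc(dev, "Foo example device");
            return (BUS_PROBE_NOWILDCARD);
    }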
MFC after: 2 weeks
transparent layering and better fragmentation.
- Normalize functions that allocate memory to use kmem_*
- Those that allocate address space are named kva_*
- Those that operate on maps are named kmap_*
- Implement recursive allocation handling for kmem_arena in vmem.
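As an illustration of the kva_* rule, a minimal sketch assuming the
post-rename KPI (kva_alloc()/kva_free() manage address space only and do not
allocate backing pages):

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>

    /* Reserve a range of kernel virtual address space; the caller is expected
     * to enter mappings itself and to release the range with kva_free(). */
    static vm_offset_t
    example_reserve_kva(vm_size_t size)
    {
            vm_offset_t va;

            va = kva_alloc(size);
            if (va == 0)
                    return (0);
            /* ... enter page mappings at 'va' as needed ... */
            return (va);
    }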
Reviewed by: alc
Tested by: pho
Sponsored by: EMC / Isilon Storage Division
implementations or no implementation on all platforms.
Some of these functions might be good ideas, but their semantics were unclear
given the lack of implementation, and an unlucky porter could be fooled into
trying to implement them or, worse, left baffled when something like
platform_trap_enter() was never called.
o) Get rid of some unused macros related to features we don't intend to
provide.
o) Get rid of macro definitions for MIPS-I CPUs. We are not likely to
support anything that predates MIPS-III.
o) Respell MIPS3_* macros as MIPS_*, which is how most of them were being
used already.
o) Eliminate a duplicate and mostly-unused set of exception vector macros.
There's still considerable duplication and plenty more obsolete material in
our headers, but this reduces one of the larger files to a size where one can
reason about the correctness of its contents with a mere few hours of
contemplation.
There is, of course, a question of whether we need definitions for fields,
registers and configurations that we are unlikely to ever use or implement,
even if they haven't been obsolete since 1991. FreeBSD is not a processor
reference manual, and things that aren't used may be wrong, or may be
duplicated because nobody could possibly actually know whether they're
already defined.
a number of cores, this allows for a sparse set of CPUs. Implement support
for sparse core masks on Octeon.
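As a generic (not Octeon-specific) sketch, consumers should iterate CPUs with
CPU_FOREACH() rather than assuming CPU IDs are contiguous:

    #include <sys/param.h>
    #include <sys/smp.h>

    static void
    example_walk_cpus(void)
    {
            int cpu;

            /* CPU_FOREACH() skips IDs not present in all_cpus, so it works
             * unchanged with a sparse core mask. */
            CPU_FOREACH(cpu) {
                    /* per-CPU work goes here */
            }
    }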
XXX jeff@ suggests that all_cpus should include cores that are offline or
running other applications/OSes, so the platform API should be further
extended to allow us to set all_cpus to include all cores that are
physically-present as opposed to only those that are running FreeBSD.
Submitted by: Bhanu Prakash (with modifications)
Reviewed by: jchandra
Glanced at by: kib, jeff, jhb
mipsel' or 'machine mips mipseb' into the config file (with a few 64's
tossed in for good measure). This will let us build the proper
kernels with different worlds as part of make universe.
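For example, a little-endian 32-bit kernel config would carry a line like the
following (a sketch; the exact spelling depends on the target):

    # hypothetical kernel config fragment
    machine         mips mipsel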
This reflects the actual type used to store and compare child device orders.
The change is mostly done via a Coccinelle (soon to be devel/coccinelle)
semantic patch.
Verified by LINT+modules kernel builds.
Followup to: r212213
MFC after: 10 days
'counter_upper' and 'counter_lower_last'. The race exists because
interrupts are enabled even though tick_ticker() executes in a
critical section.
Fix a bug in clock_intr() in how it updates the cached values of
'counter_upper' and 'counter_lower_last'. They are updated only
when the COUNT register rolls over. More interestingly, it will *never*
update the cached values if 'counter_lower_last' happens to be zero.
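In outline, the corrected update looks like this (a sketch using the variable
names above, not the literal clock_intr() code):

    #include <sys/types.h>

    static uint32_t counter_upper;
    static uint32_t counter_lower_last;

    /* Called with the freshly read CP0 COUNT value on every clock interrupt. */
    static void
    example_update_cached_count(uint32_t count)
    {
            if (count < counter_lower_last)
                    counter_upper++;        /* COUNT wrapped since the last read */
            counter_lower_last = count;     /* refresh unconditionally, even when 0 */
    }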
Get rid of the superfluous critical section in clock_intr(). It is not needed
because clock_intr() executes in hard interrupt context.
Switch back to using 'tick_ticker()' as the cpu ticker for Sibyte.
Reviewed by: jmallett, mav
physical addresses.
o) Set a local maxmem in sb_machdep.c to avoid trying to use pages above 2^32
under 32-bit ABIs. Our pmap needs to be corrected to use vm_paddr_t
consistently; then we can make vm_paddr_t 64-bit under 32-bit ABIs and add code
in pmap to limit phys_avail by the maximum PFN that a 32-bit PTE can hold.
frequency. This counter can be accessed coherently from both cores.
Use this as the preferred timecounter for the SWARM kernels.
The CP0 COUNT register is unusable as the timecounter on SMP platforms because
the COUNT registers on different CPUs are not guaranteed to be in sync.
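In rough outline, hooking such a counter into the timecounter framework looks
like the following sketch (example_read_zbbus() is a hypothetical accessor,
not the actual Sibyte code):

    #include <sys/param.h>
    #include <sys/timetc.h>

    /* Hypothetical 64-bit read of the ZBbus cycle counter register. */
    static uint64_t
    example_read_zbbus(void)
    {
            return (0);
    }

    static u_int
    example_get_timecount(struct timecounter *tc)
    {
            return ((u_int)example_read_zbbus());   /* low 32 bits suffice */
    }

    static struct timecounter example_zbbus_tc = {
            .tc_get_timecount = example_get_timecount,
            .tc_counter_mask = ~0u,
            .tc_frequency = 0,      /* set to the counter's input frequency at attach */
            .tc_name = "zbbus",
            .tc_quality = 1000,     /* prefer over the per-CPU CP0 COUNT register */
    };

    /* Once tc_frequency is known: tc_init(&example_zbbus_tc); */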
when sb_load64() returns.
Some 32-bit arithmetic operations (e.g. subu) have unpredictable results
when operating on 64-bit registers that are not properly sign-extended.
- We don't need to fall back to uncacheable memory to satisfy BUS_DMA_COHERENT
requests on these CPUs.
- bus_dmamap_sync() is a no-op for these CPUs.
A side effect of this change is renaming the DMAMAP_COHERENT flag to
DMAMAP_UNCACHEABLE, which conveys the purpose of the flag more accurately.
Reviewed by: gonzo, imp
Getting the little-endian PCI bus working on the big-endian CPU proved to be
quite challenging. We let the PCI devices be mapped in the "match byte lanes"
address window. This is where they are mapped by the CFE, and DMA transfers
generated to or from addresses within this window are not subject to automatic
byte-swapping.
However, any access by the driver to memory-mapped PCI space is redirected
via the "match bit lanes" address window. We get the benefit of automatic
byte swapping through this address window and drivers don't need to change
to deal with CPU big-endianness.
this in the Sibyte PCI hostbridge driver instead.
The nexus driver sees resource allocation requests for memory and irq
resources only. These are legitimate resources on all MIPS platforms.
Suggested by: imp
The platform that supports SMP currently is a SWARM with a dual-core Sibyte
processor. The kernel config file to use is SWARM_SMP.
Reviewed by: imp, rrs
The only reason we need to have the sb_load64() and sb_store64()
functions in assembly is to trick the compiler into generating the
'ld' and 'sd' instructions, which it otherwise will not do when
compiling for a 32-bit architecture. There are some 64-bit
registers in the SCD unit that must be accessed using 64-bit
load and store instructions.
This is a workaround for the fact that the CFE is compiled as a 64-bit
application and therefore sets the SR_KX bit every time we call into
it (e.g. for the console).
A TLB miss for any address above 0xc0000000 with the SR_KX bit set will
end up at the XTLB exception vector. We work around this by copying the
standard TLB handler to the XTLB exception vector.
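In outline, the workaround amounts to something like the following sketch
(the MipsTLBMiss/MipsTLBMissEnd symbols and the vector address are assumptions
made for illustration, not quoted from sb_machdep.c):

    #include <sys/param.h>
    #include <sys/systm.h>

    /* Assumed: these symbols bound the standard TLB miss handler. */
    extern char MipsTLBMiss[], MipsTLBMissEnd[];

    /* Assumed: the XTLB refill vector sits at offset 0x080 in KSEG0. */
    #define EXAMPLE_XTLB_MISS_VEC   0x80000080UL

    static void
    example_install_xtlb_handler(void)
    {
            bcopy(MipsTLBMiss, (void *)EXAMPLE_XTLB_MISS_VEC,
                MipsTLBMissEnd - MipsTLBMiss);
            /* The instruction cache must be synced after copying the handler. */
    }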
Approved by: imp (mentor)
symbols resolving in DDB
- When zeroing .bss/.sbss do not round the end address to a page boundary;
it's not necessary and might destroy data passed by the trampoline or
boot loader
interrupt sources feeding into a hardintr anymore. The
mips_mask_hard_irq() function does that for us while an interrupt is
being processed.
Submitted by: neel@
1) Adds future RMI directories
2) Places intr_machdep.c in specific files.arch pointing to the generic
intr_machdep.c (see the sketch after this list). This allows us to have an
architecture-dependent intr_machdep.c (which we will need for RMI) in the
machine-specific directory
3) Removes intr_machdep.c from files.mips
4) Adds some TARGET_XLR_XLS ifdefs for the machine-specific intr_machdep.h. We
may need to look at finding a better place to put this. But first I want to
get this thing compiling.
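The files.arch hook mentioned in (2) is roughly a one-line entry such as this
sketch (the actual file-list path depends on the platform):

    # hypothetical entry in the machine-specific file list
    mips/mips/intr_machdep.c        standard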