- The contents of the 'feroceon_cpufuncs' dispatch table were really dedicated
to the new Sheeva CPU (in 88F6xxx and MV-78xxx SOCs), and NOT to Feroceon.
- The Feroceon CPU (in 88F5xxx SOCs) appears as a regular ARM926EJ-S core and
does not require dedicated routines.
This will be accompanied by a file rename commit.
The FPA floating-point format is identical to the VFP format,
but is always stored in big-endian word order.
Introduce _IEEE_WORD_ORDER to describe the word order of
the FP representation.
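A minimal sketch of what that means for code inspecting a double, assuming
<machine/endian.h> exposes the new macro (the union and member names are
illustrative):

    #include <sys/types.h>
    #include <machine/endian.h>

    /*
     * An IEEE double viewed as two 32-bit words.  With FPA the most
     * significant word comes first even on a little-endian CPU; that
     * word order, independent of _BYTE_ORDER, is what
     * _IEEE_WORD_ORDER describes.
     */
    union ieee_double_words {
        double d;
    #if _IEEE_WORD_ORDER == _BIG_ENDIAN
        struct { uint32_t msw; uint32_t lsw; } w;
    #else
        struct { uint32_t lsw; uint32_t msw; } w;
    #endif
    };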
Obtained from: Juniper Networks, Inc
o recognize ixp435 cpu
o change memory layout for ixp4xx to not assume memory is aliased
to 0x10000000 (Cambria/ixp435 memory starts at zero)
o handle 64 irqs for ixp435
o dual EHCI USB 2.0 controller integral to ixp435
o overhaul NPE code for ixp435 and better MAC+MII naming
o updated NPE firmware (including NPE-A image for ixp435/ixp465)
o Gateworks Cambria board support:
- IDE compact flash
- MCU
- front panel LED on i2c bus
- Octal LED latch
Sanity-tested with NFS-root on Avila and Cambria boards. Requires
pending boot2 mods for CF-boot on Cambria.
and ifnet functions
- add memory barriers to <machine/atomic.h>
- update drivers to only conditionally define their own
- add lockless producer / consumer ring buffer
- remove ring buffer implementation from cxgb and update its callers
- add if_transmit(struct ifnet *ifp, struct mbuf *m) to ifnet to
allow drivers to efficiently manage multiple hardware queues
(i.e. not serialize all packets through one ifq)
- expose if_qflush to allow drivers to flush any driver-managed queues
(a usage sketch follows this list)
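A minimal sketch of the intended driver usage, combining the new buf_ring
with if_transmit/if_qflush; the foo(4) names, softc layout, and flowid-based
queue selection are all illustrative, not part of this commit:

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/buf_ring.h>
    #include <net/if.h>
    #include <net/if_var.h>

    /* Hypothetical per-queue state. */
    struct foo_txq {
        struct buf_ring *br;
    };

    struct foo_softc {
        struct foo_txq txq[4];
        int ntxq;
    };

    static int
    foo_transmit(struct ifnet *ifp, struct mbuf *m)
    {
        struct foo_softc *sc = ifp->if_softc;
        /* Pick a hardware queue instead of the single shared ifq. */
        struct foo_txq *txq = &sc->txq[m->m_pkthdr.flowid % sc->ntxq];

        return (buf_ring_enqueue(txq->br, m));
    }

    static void
    foo_qflush(struct ifnet *ifp)
    {
        struct foo_softc *sc = ifp->if_softc;
        struct mbuf *m;
        int i;

        /* Drain every driver-managed ring. */
        for (i = 0; i < sc->ntxq; i++)
            while ((m = buf_ring_dequeue_sc(sc->txq[i].br)) != NULL)
                m_freem(m);
    }

    static void
    foo_set_ifnet_methods(struct ifnet *ifp)
    {
        ifp->if_transmit = foo_transmit;
        ifp->if_qflush = foo_qflush;
    }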
This work was supported by Bitgravity Inc. and Chelsio Inc.
This uses the common U-Boot support lib (sys/boot/uboot, already used on
FreeBSD/powerpc), and assumes the underlying firmware has the modern API for
stand-alone apps enabled in the config (CONFIG_API).
Only netbooting is supported at the moment.
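For reference, a sketch of the firmware side: the board's U-Boot
configuration header needs the standard CONFIG_API option enabled (the
path shown is illustrative):

    /* include/configs/<board>.h in the U-Boot tree */
    #define CONFIG_API      /* export the API for stand-alone applications */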
Obtained from: Marvell, Semihalf
* Orion
- 88F5181
- 88F5182
- 88F5281
* Kirkwood
- 88F6281
* Discovery
- MV78100
The above families of SOCs are built around CPU cores compliant with the
ARMv5TE instruction set architecture definition. They share a number of integrated
peripherals. This commit brings support for the following basic elements:
* GPIO
* Interrupt controller
* L1, L2 cache
* Timers, watchdog, RTC
* TWSI (I2C)
* UART
Other peripheral drivers will be introduced separately.
Reviewed by: imp, marcel, stass (Thanks guys!)
Obtained from: Marvell, Semihalf
- Fix nexus_setup_intr() abuse of setting up multiple IRQs in one go. Calling
arm_setup_irqhandler() in a loop is bogus, as there's just one cookie given
by the caller and it is overwritten in each iteration, so that only the
last handler's cookie value prevails.
- Proper intr masking/unmasking handling: the IRQ source is masked at the PIC
level only after the last handler has been removed from the list.
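In sketch form, the before/after of the nexus fix (argument order
abbreviated; see the tree for the real arm_setup_irqhandler() prototype):

    /*
     * Before (sketch): only the last iteration's cookie survives in
     * *cookiep, so teardown can only ever find the last handler.
     */
    for (irq = rman_get_start(res); irq <= rman_get_end(res); irq++)
        arm_setup_irqhandler(name, filt, intr, arg, irq, flags, cookiep);

    /* After (sketch): one resource maps to exactly one IRQ. */
    arm_setup_irqhandler(name, filt, intr, arg, rman_get_start(res),
        flags, cookiep);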
Reviewed by: cognet, imp, sam, stass
Obtained from: Grzegorz Bernacki gjb ! semihalf dot com
Now that st_rdev is being automatically generated by the kernel, there
is no need to define static major/minor numbers for the iodev and
memdev. We still need the minor numbers for the memdev, however, to
distinguish between /dev/mem and /dev/kmem.
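A sketch of the distinction that keeps the memdev minors alive (shaped
after the machine-dependent mem driver; the I/O body is elided):

    #include <sys/param.h>
    #include <sys/conf.h>
    #include <sys/uio.h>

    /* The only static minors still needed. */
    #define CDEV_MINOR_MEM  0
    #define CDEV_MINOR_KMEM 1

    static int
    memrw(struct cdev *dev, struct uio *uio, int flags)
    {
        switch (dev2unit(dev)) {
        case CDEV_MINOR_MEM:
            /* operate on physical memory */
            break;
        case CDEV_MINOR_KMEM:
            /* operate on kernel virtual memory */
            break;
        }
        return (0);
    }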
Approved by: philip (mentor)
boards. This is enough to net-boot to multiuser.
Also supported are the SMSC LAN91C111 parts used on the netCF, netDUO and
netMMC add-on boards.
I'll be putting some instructions on how to boot this on the Gumstix boards
online soon.
This is still fairly rough and will be refined over time but I felt it was
better to get this out there where other people can help out.
interrupt. So, add a new function pointer, arm_post_filter, which defaults
to NULL, and which will be used as the post_filter arg for
intr_event_create(). Set it properly for the AT91, so that it boots again.
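In outline, with an illustrative EOI routine standing in for the
AT91-specific one (the register write and names are assumptions, not the
committed code):

    #include <sys/types.h>

    /* Platform-wide hook; NULL means no post-filter work is needed. */
    void (*arm_post_filter)(void *) = NULL;

    /* Hypothetical end-of-interrupt routine for a PIC that must be
     * acked once the filter has run, so the next interrupt can be
     * delivered. */
    static void
    example_post_filter(void *arg)
    {
        volatile uint32_t *eoi_reg = arg;

        *eoi_reg = 0;
    }

    static void
    example_intr_init(void)
    {
        /* Passed as the post_filter argument to intr_event_create(). */
        arm_post_filter = example_post_filter;
    }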
Reported by: hps
- Pull all the code to deal with the trampoline stuff into one
centralized place and use it from everywhere.
- Some minor style tidiness
Reviewed by: tinguely
De-hardcode usage of ARM_TP_ADDRESS and RAS local storage, and move this
special purpose page to a more convenient place i.e. after the vectors high
page, more towards the end of address space. Previous location (0xe000_0000)
caused grief if KVA was to go beyond the default limit.
Note that ARM world rebuilding is required after this change since the
location of ARM_TP_ADDRESS is shared between kernel and userland.
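The resulting layout, roughly (a sketch of the constants described above;
consult sys/arm/include/sysarch.h for the authoritative values):

    /* Vectors live in the high page... */
    #define ARM_VECTORS_HIGH        0xffff0000U
    /* ...and the TP/RAS page now follows it, rather than sitting at
     * the old hard-coded 0xe0000000 where a growing KVA could
     * collide with it. */
    #define ARM_TP_ADDRESS          (ARM_VECTORS_HIGH + 0x1000)
    #define ARM_RAS_START           (ARM_TP_ADDRESS + 4)
    #define ARM_RAS_END             (ARM_TP_ADDRESS + 8)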
Submitted by: Grzegorz Bernacki (gjb AT semihalf dot com)
Reviewed by: imp
Approved by: cognet (mentor)
The only downside is that it renames pmap_vac_me_harder() to pmap_fix_cache().
From Mark's email on -arm :
pmap_get_vac_flags(), pmap_vac_me_harder(), pmap_vac_me_kpmap(), and
pmap_vac_me_user() have been rewritten as pmap_fix_cache() to be more
efficient in the kernel map case. I also removed the reference to
the md.kro_mappings, md.krw_mappings, md.uro_mappings, and md.urw_mappings
counts.
In pmap_clearbit(), we can also skip over tests and writeback/invalidations
in the PVF_MOD and PVF_REF cases if those bits are not set in pv_flags.
Clearing PVF_WRITE will turn caching back on and remove the PVF_MOD bit.
In pmap_nuke_pv(), the vm_page_flag_clear(pg, PG_WRITEABLE) has been moved
to the pmap_fix_cache().
We can be more aggressive in attempting to turn caching back on by calling
pmap_fix_cache() at points where it may become appropriate again
(a kernel mapping has been removed, a write has been removed, or a read
has been removed and we know the mapping does not have multiple write
mappings to a page).
In pmap_remove_pages() the cpu_idcache_wbinv_all() is moved to happen
before the page tables are NULLed because the caches are virtually
indexed and virtually tagged.
In pmap_remove_all(), the pmap_remove_write(m) is added before the
page tables are NULLed because the caches are virtually indexed and
virtually tagged. This also removes the need for the caches fixing routine
(whichever is being used pmap_vac_me_harder() or pmap_fix_cache()) to be
called on any of these mappings.
In pmap_remove(), I simplified the cache cleaning process and removed
extra TLB removals. Basically, if more than PMAP_REMOVE_CLEAN_LIST_SIZE
mappings are removed, just flush the entire cache (sketched below).
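A sketch of the simplified policy (the cleanlist bookkeeping here is
illustrative; the cpufunc names follow the ARM pmap):

    /* Hypothetical bookkeeping entry for a removed mapping. */
    struct clean_ent {
        vm_offset_t va;
    };

    static void
    flush_removed(struct clean_ent *cleanlist, int cleaned)
    {
        int i;

        if (cleaned > PMAP_REMOVE_CLEAN_LIST_SIZE) {
            /* One full flush beats page-by-page cleaning. */
            cpu_idcache_wbinv_all();
        } else {
            for (i = 0; i < cleaned; i++) {
                cpu_idcache_wbinv_range(cleanlist[i].va, PAGE_SIZE);
                cpu_tlb_flushID_SE(cleanlist[i].va);
            }
        }
    }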
- Introduce per-architecture stack_machdep.c to hold stack_save(9).
- Introduce per-architecture machine/stack.h to capture any common
definitions required between db_trace.c and stack_machdep.c.
- Add new kernel option "options STACK"; stack(9) will be built in if it
is defined, and also if "options DDB" is defined, to provide compatibility
with existing users of stack(9).
Add new stack_save_td(9) function, which allows the capture of a stacktrace
of another thread rather than the current thread, which the existing
stack_save(9) was limited to. It requires that the thread be neither
swapped out nor running, which is the responsibility of the consumer to
enforce.
Update stack(9) man page.
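A minimal usage sketch of the old and new interfaces:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/stack.h>

    static void
    capture_stacks(struct thread *td)
    {
        struct stack st;

        /* The current thread, as before. */
        stack_zero(&st);
        stack_save(&st);
        stack_print(&st);

        /* New: another thread; the caller must ensure td is neither
         * running nor swapped out. */
        stack_zero(&st);
        stack_save_td(&st, td);
        stack_print(&st);
    }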
Build tested: amd64, arm, i386, ia64, powerpc, sparc64, sun4v
Runtime tested: amd64 (rwatson), arm (cognet), i386 (rwatson)
The RAS implementation would set the end address, then the start
address. These were used by the kernel to restart a RAS sequence if
it was interrupted. When the thread-switching code ran, it would
check these values and, if the interrupted PC fell within the
sequence, reset it to the start address and clear them.
However, there's a small flaw in this scheme. Thread T1 sets the end
address and gets preempted. Thread T2 runs and also does a RAS
operation. This resets end to zero. Thread T1 now runs again and
sets start and then begins the RAS sequence, but is preempted before
the RAS sequence executes its last instruction. The kernel code that
would ordinarily restart the RAS sequence doesn't because the PC isn't
between start and 0, so the PC isn't set to the start of the sequence.
So when T1 is resumed again, it is at the wrong location for RAS to
produce the correct results. This causes the wrong results for the
atomic sequence.
The window for the first race is 3 instructions; the window for the
second race is 5-10 instructions, depending on the atomic operation.
This makes the failure fairly rare and hard to reproduce.
Mutexes are implemented in libthr using atomic operations. When the
above race occurred, a lock could get stuck locked, causing many
downstream problems, as you might expect.
Also, make sure to reset the start and end addresses when entering a
syscall; otherwise a malicious process could set them beforehand.
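Reduced to the two stores on the shared RAS words, the failing
interleaving looks like this (variable names illustrative):

    #include <stdint.h>

    /* The per-process RAS words (illustrative). */
    static volatile uintptr_t ras_start, ras_end;

    static void
    ras_enter_old(uintptr_t seq_start, uintptr_t seq_end)
    {
        ras_end = seq_end;      /* T1 preempted here: T2 completes its
                                 * own RAS sequence, resetting ras_end
                                 * to zero. */
        ras_start = seq_start;  /* T1 resumes with start set but
                                 * end == 0, so "start <= PC < end"
                                 * never holds and the kernel won't
                                 * restart T1's sequence. */
    }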
Reviewed by: imp, ups (thanks guys)
Pointy hat to: cognet
MFC After: 3 days
It should just contain the value we want to add because, if we're
interrupted between the add and the str, we will restart from the
beginning. Just use a register we can scratch instead.
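A sketch of the fixed shape, with the RAS registration omitted and the
function name illustrative; the bug was the equivalent of accumulating
the sum into the addend register itself, so a restart of the window
added a stale value:

    #include <stdint.h>

    static inline void
    atomic_add_32_sketch(volatile uint32_t *p, uint32_t v)
    {
        uint32_t scratch;

        __asm __volatile(
            "ldr  %0, [%1]\n\t"
            "add  %0, %0, %2\n\t"   /* sum goes to the scratch reg;
                                     * the addend in %2 survives a
                                     * restart from the beginning */
            "str  %0, [%1]"
            : "=&r" (scratch)
            : "r" (p), "r" (v)
            : "memory");
    }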
MFC After: 1 week
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change, long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
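Concretely, at an allocation site (a sketch; the surrounding function is
illustrative):

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static vm_page_t
    alloc_example(vm_object_t object, vm_pindex_t pindex)
    {
        vm_page_t m;

        /* VM_ALLOC_INTERRUPT may now reclaim a formerly cached page
         * instead of failing when the free lists run short. */
        m = vm_page_alloc(object, pindex, VM_ALLOC_INTERRUPT);
        return (m);     /* NULL only if truly out of pages */
    }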
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
caches with data caches after writing to memory. This typically
is required to make breakpoints work on ia64 and powerpc. For
those architectures the function is implemented.
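A sketch of the intended call site; the surrounding write path is
illustrative, and on architectures with coherent I/D caches the hook can
be a no-op:

    #include <sys/param.h>
    #include <sys/kdb.h>

    static void
    write_breakpoint(unsigned char *addr, size_t size)
    {
        /* ...patch the instruction stream at addr... */

        /* Ensure the CPU fetches the newly written instruction
         * rather than a stale I-cache line. */
        kdb_cpu_sync_icache(addr, size);
    }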
- Rename PCPU_LAZY_INC to PCPU_INC
- Add the PCPU_ADD interface, which just adds a given value to the
specified pcpu member.
Note that for most architectures PCPU_INC and PCPU_ADD are not safe.
This is a point that needs some discussion/work in the coming days.
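A usage sketch (the vmmeter members are the classic users; the critical
section is just one way to cope with the non-atomicity noted above):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/pcpu.h>

    static void
    count_events(int n)
    {
        critical_enter();               /* keep the RMW on one CPU */
        PCPU_INC(cnt.v_syscall);        /* was PCPU_LAZY_INC(cnt.v_syscall) */
        PCPU_ADD(cnt.v_intr, n);        /* new: add an arbitrary value */
        critical_exit();
    }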
Reviewed by: alc, bde
Approved by: jeff (mentor)