Commit Graph

239 Commits

Author SHA1 Message Date
raj
032a270ba5 Support kernel crash mini dumps on ARM architecture.
Obtained from:	Juniper Networks, Semihalf
2008-11-06 16:20:27 +00:00
raj
745c5c702f Initial support for loader(8) on ARM machines running U-Boot.
This uses the common U-Boot support lib (sys/boot/uboot, already used on
FreeBSD/powerpc), and assumes the underlying firmware has the modern API for
stand-alone apps enabled in the config (CONFIG_API).

Only netbooting is supported at the moment.

Obtained from:	Marvell, Semihalf
2008-10-14 10:11:14 +00:00
raj
3226c13778 Introduce basic support for Marvell families of system-on-chip ARM devices:
  * Orion
     - 88F5181
     - 88F5182
     - 88F5281

  * Kirkwood
     - 88F6281

  * Discovery
     - MV78100

The above families of SoCs are built around CPU cores compliant with the
ARMv5TE instruction set architecture. They share a number of integrated
peripherals. This commit brings support for the following basic elements:

  * GPIO
  * Interrupt controller
  * L1, L2 cache
  * Timers, watchdog, RTC
  * TWSI (I2C)
  * UART

Other peripheral drivers will be introduced separately.

Reviewed by:	imp, marcel, stass (Thanks guys!)
Obtained from:	Marvell, Semihalf
2008-10-13 20:07:13 +00:00
raj
b9c565987a Introduce low-level support for new Marvell core CPUs: 88FR131, 88FR571.
They are compliant with ARMv5TE and integrated on 88F6281 (Kirkwood) and
MV78100 (Discovery) system-on-chip families.

Obtained from:	Marvell, Semihalf
2008-10-13 18:16:54 +00:00
cognet
a815eb98d3 Remove the unused field "pc_prvspace" from the MD fields of struct
pcpu; there is no such thing as a "struct pcup" anyway.
While here, remove a comment that makes no sense for arm.

Spotted by:	Mark Tinguely
2008-09-11 20:39:46 +00:00
raj
df986188e9 ARM interrupts improvements.
- Fix nexus_setup_intr() abuse of setting up multiple IRQs in one go. Calling
  arm_setup_irqhandler() in a loop is bogus, as there's just one cookie given
  by the caller and it is overwritten in each iteration, so that only the
  last handler's cookie value prevails (see the sketch below).

- Proper intr masking/unmasking handling: the IRQ source is masked at the PIC
  level only after the last handler has been removed from the list.

Reviewed by:	cognet, imp, sam, stass
Obtained from:	Grzegorz Bernacki gjb ! semihalf dot com
2008-09-11 12:36:13 +00:00
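
A minimal user-space sketch of the cookie-clobbering pattern described above;
setup_one_irq() is a stand-in for arm_setup_irqhandler(), not the real API:

    #include <stdio.h>

    /* Stand-in: registers one IRQ and returns its teardown token via
     * cookiep, the way a real setup routine would. */
    static void
    setup_one_irq(int irq, void **cookiep)
    {
        static int tokens[8];

        tokens[irq] = irq;
        *cookiep = &tokens[irq];    /* every call overwrites *cookiep */
    }

    int
    main(void)
    {
        void *cookie = NULL;

        /* The broken pattern: one cookie for a whole IRQ range. */
        for (int irq = 0; irq < 3; irq++)
            setup_one_irq(irq, &cookie);

        /* Only the last registration can ever be torn down. */
        printf("surviving cookie is for IRQ %d\n", *(int *)cookie);
        return (0);
    }
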
imp
783eb36c6c Whitespace nit. 2008-08-23 23:35:08 +00:00
jhb
d90774443d Export 'struct pcpu' to userland w/o requiring _KERNEL. A few ports
already define _KERNEL to get to this and I'm about to add hooks to
libkvm to access per-CPU data.

MFC after:	1 week
2008-08-19 19:53:52 +00:00
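
A sketch of the consumer-visible effect: after this change a userland source
file (libkvm being the intended consumer) can see struct pcpu without faking
a kernel build; the function here is hypothetical:

    #include <sys/param.h>
    #include <sys/pcpu.h>    /* no -D_KERNEL required anymore */
    #include <stddef.h>

    size_t
    pcpu_struct_size(void)
    {
        return (sizeof(struct pcpu));
    }
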
cognet
28def2c5ae Add "add pc, whatever" as a branch instruction; we use it in memcpy().
MFC after:	3 days
2008-08-03 15:35:32 +00:00
cognet
17ddb2b745 Add blx as a branch instruction.
MFC after:	3 days
2008-08-03 01:51:30 +00:00
cognet
91e09275cc Add yet another branch instruction.
Obtained from:	NetBSD
MFC after:	3 days
2008-08-02 12:48:30 +00:00
ed
4d6a9685e8 Remove the unused major/minor numbers from iodev and memdev.
Now that st_rdev is being automatically generated by the kernel, there
is no need to define static major/minor numbers for the iodev and
memdev. We still need the minor numbers for the memdev, however, to
distinguish between /dev/mem and /dev/kmem.

Approved by:	philip (mentor)
2008-06-25 07:45:31 +00:00
benno
980aab4117 Support for the XScale PXA255 SoC as found on the Gumstix Basix and Connex
boards.  This is enough to net-boot to multiuser.

Also supported are the SMSC LAN91C111 parts used on the netCF, netDUO and
netMMC add-on boards.

I'll be putting some instructions on how to boot this on the Gumstix boards
online soon.

This is still fairly rough and will be refined over time, but I felt it was
better to get this out there where other people can help out.
2008-06-06 05:08:09 +00:00
cognet
19c46f4f03 On the AT91, we need to write to the EOI register after we handle an
interrupt. So, add a new function pointer, arm_post_filter, which defaults
to NULL, and which will be used as the post_filter arg for
intr_event_create(). Set it properly for the AT91, so that it boots again.

Reported by:	hps
2008-04-20 23:29:06 +00:00
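
A sketch of the hook described above; the register name, offset and accessor
are hypothetical stand-ins for the AT91 interrupt-controller code:

    #include <stdint.h>

    typedef void (*post_filter_fn)(void *);

    /* Defaults to NULL; a platform that needs an EOI write overrides it,
     * and the pointer is handed to intr_event_create() as its post_filter
     * argument. */
    static post_filter_fn arm_post_filter = NULL;

    #define AIC_EOICR 0x130                             /* hypothetical */
    extern void aic_write(uint32_t reg, uint32_t val);  /* stand-in */

    static void
    at91_eoi(void *arg)
    {
        (void)arg;
        aic_write(AIC_EOICR, 0);    /* any value acks the current IRQ */
    }

    void
    at91_intr_init(void)
    {
        arm_post_filter = at91_eoi;
    }
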
imp
13cdc14d99 Take the first baby step towards unifying and cleaning up arminit():
- Pull all the code to deal with the trampoline stuff into one
  centralized place and use it from everywhere.
- Some minor style tidiness

Reviewed by: tinguely
2008-04-03 16:44:50 +00:00
jb
34e730ca27 When building a kernel module, define MAXCPU the same as in SMP kernels so
that modules work both with and without SMP.
2008-03-27 05:03:26 +00:00
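
A sketch of the idea as it would appear in <machine/param.h>; the value 16 is
illustrative, not necessarily the one committed:

    #if defined(SMP) || defined(KLD_MODULE)
    #define MAXCPU      16      /* modules always assume the SMP layout */
    #else
    #define MAXCPU      1
    #endif
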
cognet
26dccf5aec Remove the unused pv_list_count from struct vm_page, and pm_count from
struct pmap.

Submitted by:	Mark Tinguely
2008-03-06 21:59:47 +00:00
rwatson
1f588d3b57 Remove errant % in license comment.
MFC after:	3 days
2008-02-26 11:45:32 +00:00
raj
a6d33e3164 Improve ARM_TP_ADDRESS and RAS area.
De-hardcode the usage of ARM_TP_ADDRESS and RAS local storage, and move this
special-purpose page to a more convenient place, i.e. after the vectors high
page, towards the end of the address space. The previous location (0xe000_0000)
caused grief if KVA was to grow beyond the default limit.

Note that ARM world rebuilding is required after this change since the
location of ARM_TP_ADDRESS is shared between kernel and userland.

Submitted by:	Grzegorz Bernacki (gjb AT semihalf dot com)
Reviewed by:	imp
Approved by:	cognet (mentor)
2008-02-05 10:22:33 +00:00
cognet
4c1734c71d Bring in the nice work from Mark Tinguely on arm pmap.
The only downside is that it renames pmap_vac_me_harder() to pmap_fix_cache().
From Mark's email on -arm:
pmap_get_vac_flags(), pmap_vac_me_harder(), pmap_vac_me_kpmap(), and
pmap_vac_me_user() have been rewritten as pmap_fix_cache() to be more
efficient in the kernel map case. I also removed the reference to
the md.kro_mappings, md.krw_mappings, md.uro_mappings, and md.urw_mappings
counts.

In pmap_clearbit(), we can also skip over tests and writeback/invalidations
in the PVF_MOD and PVF_REF cases if those bits are not set in the pv_flag.
PVF_WRITE will turn caching back on and remove the PVF_MOD bit.

In pmap_nuke_pv(), the vm_page_flag_clear(pg, PG_WRITEABLE) has been moved
to the pmap_fix_cache().

We can be more aggressive in attempting to turn caching back on by calling
pmap_fix_cache() at times that may be appropriate to turn cache on
(a kernel mapping has been removed, a write has been removed or a read
has been removed and we know the mapping does not have multiple write
mappings to a page).

In pmap_remove_pages() the cpu_idcache_wbinv_all() is moved to happen
before the page tables are NULLed because the caches are virtually
indexed and virtually tagged.

In pmap_remove_all(), the pmap_remove_write(m) is added before the
page tables are NULLed because the caches are virtually indexed and
virtually tagged. This also removes the need for the caches fixing routine
(whichever is being used pmap_vac_me_harder() or pmap_fix_cache()) to be
called on any of these mappings.

In pmap_remove(), I simplified the cache cleaning process and removed
extra TLB removals. Basically, if more than PMAP_REMOVE_CLEAN_LIST_SIZE
mappings are removed, just flush the entire cache.
2008-01-31 00:05:40 +00:00
alc
37cdbd87f5 Add configuration knobs for the superpage reservation system. Initially,
the reservation will only be enabled on amd64.
2007-12-27 16:45:39 +00:00
jkoshy
39d4b4accf Add stubs to unbreak LINT. 2007-12-07 13:45:47 +00:00
rwatson
99285f7544 Break out stack(9) from ddb(4):
- Introduce per-architecture stack_machdep.c to hold stack_save(9).
- Introduce per-architecture machine/stack.h to capture any common
  definitions required between db_trace.c and stack_machdep.c.
- Add a new kernel option, "options STACK"; stack(9) is built in if it is
  defined, and also if "options DDB" is defined, to provide compatibility
  with existing users of stack(9).

Add a new stack_save_td(9) function, which allows capturing a stack trace
of a thread other than the current thread, to which the existing
stack_save(9) was limited.  It requires that the thread be neither
swapped out nor running; enforcing this is the consumer's
responsibility.

Update stack(9) man page.

Build tested:	amd64, arm, i386, ia64, powerpc, sparc64, sun4v
Runtime tested:	amd64 (rwatson), arm (cognet), i386 (rwatson)
2007-12-02 20:40:35 +00:00
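
A minimal kernel-context sketch of the interface described above; error
handling is omitted and stack_print() is used just to show output:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/stack.h>

    static void
    show_thread_stack(struct thread *td)
    {
        struct stack st;

        stack_zero(&st);
        if (td == curthread)
            stack_save(&st);        /* current thread */
        else
            stack_save_td(&st, td); /* td must be neither running
                                     * nor swapped out */
        stack_print(&st);
    }
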
cognet
db18da5d15 Close a race.
The RAS implementation would set the end address, then the start
address.  These were used by the kernel to restart a RAS sequence if
it was interrupted.  When the thread-switching code ran, it would
check these values and, if the PC fell within the sequence, adjust the
PC and clear them.

However, there's a small flaw in this scheme.  Thread T1 sets the end
address and gets preempted.  Thread T2 runs and also does a RAS
operation.  This resets end to zero.  Thread T1 now runs again and
sets start and then begins the RAS sequence, but is preempted before
the RAS sequence executes its last instruction.  The kernel code that
would ordinarily restart the RAS sequence doesn't because the PC isn't
between start and 0, so the PC isn't set to the start of the sequence.
So when T1 is resumed again, it is at the wrong location for RAS to
produce the correct results.  This causes the wrong results for the
atomic sequence.

The window for the first race is 3 instructions.  The window for the
second is 5-10 instructions, depending on the atomic operation.  This
makes the failure fairly rare and hard to reproduce.

Mutexes are implemented in libthr using atomic operations.  When the
above race occurred, a lock could get stuck locked, causing many
downstream problems, as you might expect.

Also, make sure to reset the start and end address when doing a syscall, or
a malicious process could set them before doing a syscall.

Reviewed by: imp, ups (thanks guys)
Pointy hat to:	cognet
MFC after:	3 days
2007-12-02 12:49:28 +00:00
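
A simplified sketch of a RAS-based atomic add with the original store
ordering, to make the windows concrete. It uses GCC's labels-as-values
extension, and the shared ras_start/ras_end variables stand in for the
fixed magic addresses the real implementation shares with the kernel:

    #include <stdint.h>

    /* Per-process (not per-thread!) restart bookkeeping. */
    volatile uintptr_t ras_start, ras_end;

    uint32_t
    ras_fetchadd(volatile uint32_t *p, uint32_t v)
    {
        uint32_t old;

        /* Window 1: end is set but start is not.  If another thread
         * runs its own sequence now and clears both on exit, this
         * thread resumes with ras_end == 0 and the kernel's
         * start <= pc < end restart check can never match again. */
        ras_end = (uintptr_t)&&end;
        ras_start = (uintptr_t)&&start;
    start:
        /* Window 2: the sequence itself; if preempted here it must be
         * restarted from 'start', which stale markers prevent. */
        old = *p;
        *p = old + v;
    end:
        ras_start = 0;      /* done; clear the markers */
        ras_end = 0;
        return (old);
    }
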
cognet
8ee5104341 In atomic_fetchadd_32(), do not blindly increase the value of %3.
It should just contain the value we want to add, because if we're interrupted
between the add and the str, we will restart from the beginning.  Just use
a scratch register instead.

MFC after:	1 week
2007-11-27 22:12:05 +00:00
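
A sketch of the fixed instruction sequence, shown here without the RAS
wrapper; the operand numbering mirrors the message above, but this is not
the committed code. The point is that the addend input (%3) is never
modified, so a restart recomputes from scratch:

    #include <stdint.h>

    static inline uint32_t
    fetchadd_32_sketch(volatile uint32_t *p, uint32_t v)
    {
        uint32_t value, tmp;

        __asm __volatile(
            "ldr    %0, [%2]\n"     /* value = *p */
            "add    %1, %0, %3\n"   /* tmp = value + v; %3 untouched */
            "str    %1, [%2]\n"     /* *p = tmp */
            : "=&r" (value), "=&r" (tmp)
            : "r" (p), "r" (v)
            : "memory");
        return (value);
    }
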
kevlo
ad6bad8ed6 __CPU_XSCALE_PXA2XX -> CPU_XSCALE_PXA2X0 2007-11-01 10:01:15 +00:00
imp
003f26732c Merge support from p4 (originally from NetBSD) for the arm9e, arm10 and
arm11 cores. Not yet connected to the build, but this reduces diffs to the
p4 repo.

Obtained from: NetBSD
2007-10-18 05:33:06 +00:00
imp
7df4171529 Merge definitions for ARM9E, ARM10 and ARM11 processors from p4 (which
got them from NetBSD).
2007-10-18 05:06:58 +00:00
cognet
66c3bb340b Define _ARM_ARCH_5E too, so that we know whether pld/strd/ldrd are available.
MFC after:	3 days
2007-10-13 12:04:10 +00:00
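
A sketch of how such a define is typically consumed; the PREFETCH macro is
hypothetical, not something this commit adds:

    #ifdef _ARM_ARCH_5E
    #define PREFETCH(p) __asm __volatile("pld [%0]" :: "r" (p))
    #else
    #define PREFETCH(p) do { } while (0)    /* no pld before ARMv5E */
    #endif
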
alc
d1bce06c64 Change the management of cached pages (PQ_CACHE) in two fundamental
ways:

(1) Cached pages are no longer kept in the object's resident page
splay tree and memq.  Instead, they are kept in a separate per-object
splay tree of cached pages.  However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock.  Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.

This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE).  The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held.  Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.

Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case.  Cached pages
are reclaimed far, far more often than they are reactivated.  Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.

(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.

Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated.  Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page.  Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.

Discussed with: many over the course of the summer, including jeff@,
   Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
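
A sketch of the consumer-visible effect mentioned at the end: an M_NOWAIT
allocation can now be satisfied by reclaiming a cached page instead of
failing. M_EXAMPLE is a hypothetical malloc type:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>

    MALLOC_DEFINE(M_EXAMPLE, "example", "example allocations");

    void *
    try_alloc(size_t len)
    {
        void *p;

        p = malloc(len, M_EXAMPLE, M_NOWAIT);
        if (p == NULL) {
            /* Still possible, but rarer after this change. */
        }
        return (p);
    }
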
cognet
910d500785 Twist the RAS logic a bit to avoid branching.
MFC after:	1 week
Approved by:	re (blanket)
2007-09-22 14:23:52 +00:00
cognet
31dec38dbc In __bswap16_var(), make sure the upper 16 bits are cleared; when
optimizing, gcc4 doesn't always do so.

Reported by:	Nathan Whitehorn
Approved by:	re (blanket)
2007-09-09 11:58:38 +00:00
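
A runnable user-space sketch of the fix: mask explicitly so the result is a
true 16-bit value even when the compiler keeps it in a 32-bit register:

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t
    bswap16_var(uint32_t x)
    {
        /* Explicit masks; don't rely on the optimizer to truncate. */
        return ((uint16_t)(((x >> 8) & 0xff) | ((x << 8) & 0xff00)));
    }

    int
    main(void)
    {
        printf("%#x\n", bswap16_var(0x1234));   /* prints 0x3412 */
        return (0);
    }
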
cognet
477cdd4e83 XScale core 3 definitions.
Approved by:	re (blanket)
2007-07-27 14:54:27 +00:00
cognet
123f2b86f7 Fix the cache mode description.
Approved by:	re (blanket)
2007-07-27 14:45:33 +00:00
cognet
8b22cea67f Properly handle supersections.
Make sure we cache entries in the L2 cache.

Approved by:	re (blanket)
2007-07-27 14:45:04 +00:00
cognet
30161fe629 Add a new set of functions to handle the L2 cache. Make them no-ops for
every CPU except the XScale core 3.

Approved by:	re (blanket)
2007-07-27 14:39:41 +00:00
cognet
25994fd742 The iop34x has 128 interrupts. 2007-06-16 15:03:33 +00:00
cognet
2349f48ce0 Introduce pmap_kenter_supersection(), which maps 16MB super-sections into
the kernel pmap.
Document the behavior of the xscale core 3 a bit more.
2007-06-11 21:29:26 +00:00
marcel
75588c5a15 Add kdb_cpu_sync_icache(), intended to synchronize instruction
caches with data caches after writing to memory. This is typically
required to make breakpoints work on ia64 and powerpc; for those
architectures the function is implemented.
2007-06-09 21:55:17 +00:00
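
A sketch of the intended call site; the breakpoint-patching helper is
hypothetical, only the kdb_cpu_sync_icache() signature follows the
description above:

    #include <sys/types.h>

    void kdb_cpu_sync_icache(unsigned char *addr, size_t size);

    static void
    kdb_patch_insn(unsigned char *addr, const unsigned char insn[4])
    {
        for (int i = 0; i < 4; i++)
            addr[i] = insn[i];          /* write via the D-cache... */
        kdb_cpu_sync_icache(addr, 4);   /* ...then let the I-cache see it */
    }
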
jeff
fe66d0ac91 - PCPU_ADD is no longer spelled with LAZY_ in the middle.
Submitted by:	attilio
2007-06-06 23:23:47 +00:00
attilio
e333d0ff0e Rework the PCPU_* (MD) interface:
- Rename PCPU_LAZY_INC to PCPU_INC
- Add the PCPU_ADD interface, which adds a given value to the specified
  pcpu member.

Note that on most architectures PCPU_INC and PCPU_ADD are not safe.
This is a point that needs some discussion/work in the coming days.

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:38:48 +00:00
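
A kernel-context sketch of the reworked interface from a consumer's point of
view; the vmmeter counters embedded in struct pcpu are used here purely as
an illustration:

    #include <sys/param.h>
    #include <sys/pcpu.h>
    #include <sys/vmmeter.h>

    static void
    count_events(int n)
    {
        PCPU_INC(cnt.v_trap);       /* was PCPU_LAZY_INC(cnt.v_trap) */
        PCPU_ADD(cnt.v_intr, n);    /* new: add an arbitrary value */
    }
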
alc
ccfe66d477 Add the machine-specific definitions for configuring the new physical
memory allocator.

Approved by:	re
2007-06-04 08:02:22 +00:00
alc
1dfb7ec904 Eliminate some unused definitions that came from NetBSD. 2007-05-28 21:04:22 +00:00
cognet
643c457209 Use __mcount() instead of _mcount() to reduce diffs with NetBSD. 2007-05-19 16:20:37 +00:00
cognet
6e70ec73e9 Switch the kernel's pmap domain from 15 to 0.
This should be a no-op, and it is needed for xscale core 3 supersection
support, as supersections are always part of domain 0.
2007-05-19 12:47:34 +00:00
alc
b34f6f7ab1 Define every architecture as either VM_PHYSSEG_DENSE or
VM_PHYSSEG_SPARSE depending on whether the physical address space is
densely or sparsely populated with memory.  The effect of this
definition is to determine which of two implementations of
vm_page_array and PHYS_TO_VM_PAGE() is used.  The legacy
implementation is obtained by defining VM_PHYSSEG_DENSE, and a new
implementation that trades off time for space is obtained by defining
VM_PHYSSEG_SPARSE.  For now, all architectures except for ia64 and
sparc64 define VM_PHYSSEG_DENSE.  Defining VM_PHYSSEG_SPARSE on ia64
allows the entirety of my Itanium 2's memory to be used.  Previously,
only the first 1 GB could be used.  Defining VM_PHYSSEG_SPARSE on
sparc64 allows USIIIi-based systems to boot without crashing.

This change is a combination of Nathan Whitehorn's patch and my own
work in perforce.

Discussed with: kmacy, marius, Nathan Whitehorn
PR:		112194
2007-05-05 19:50:28 +00:00
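
A sketch of where the choice lives; each architecture's <machine/vmparam.h>
picks exactly one of the two:

    /* Densely populated physical memory: use the flat vm_page_array. */
    #define VM_PHYSSEG_DENSE

    /* ...or, on ia64 and sparc64, trade lookup time for space: */
    /* #define VM_PHYSSEG_SPARSE */
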
kevlo
44a4513fbb Remove __P 2007-03-21 03:28:16 +00:00
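
For reference, the removal is mechanical; __P() only wrapped prototype
argument lists for pre-ANSI compilers:

    void    foo __P((int, char *));     /* before */
    void    foo(int, char *);           /* after */
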
alc
b03ddb707b Push down the implementation of PCPU_LAZY_INC() into the machine-dependent
header file.  Reimplement PCPU_LAZY_INC() on amd64 and i386, making it
atomic with respect to interrupts.

Reviewed by: bde, jhb
2007-03-11 05:54:29 +00:00
piso
6a2ffa86e5 o break the newbus API: add a new argument of type driver_filter_t to
bus_setup_intr() (see the sketch below)

o add an int return code to all fast handlers

o retire INTR_FAST/IH_FAST

For more info: http://docs.freebsd.org/cgi/getmsg.cgi?fetch=465712+0+current/freebsd-current

Reviewed by: many
Approved by: re@
2007-02-23 12:19:07 +00:00
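
A sketch of the new contract from a driver's point of view; the mydev_*
names and helpers are hypothetical, while the FILTER_* return codes and the
extra bus_setup_intr() argument are what the change introduces:

    /* Runs in primary interrupt context; reports what it did. */
    static int
    mydev_filter(void *arg)
    {
        struct mydev_softc *sc = arg;

        if (!mydev_intr_pending(sc))        /* hypothetical helper */
            return (FILTER_STRAY);
        mydev_disable_intr(sc);             /* hypothetical helper */
        return (FILTER_SCHEDULE_THREAD);    /* run mydev_intr() later */
    }

    /* Attach-time hookup: filter and ithread handler side by side. */
    error = bus_setup_intr(dev, sc->irq_res,
        INTR_TYPE_NET | INTR_MPSAFE,
        mydev_filter, mydev_intr, sc, &sc->intrhand);
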
cognet
b0b19280da - Add bounce pages for arm, largely based on the i386 implementation.
- Add a default parent dma tag, similar to what has been done for sparc64.
- Before invalidating the dcache in POSTREAD, save the bytes which are in the
same cache lines as our buffers but not part of them, and restore them after
the invalidation (sketched below).
2007-01-17 00:53:05 +00:00
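
A sketch of the partial-cache-line dance from the last item; the line size
and the cpu_dcache_inv_range() stand-in are assumptions:

    #include <stdint.h>
    #include <string.h>

    #define CACHE_LINE  32UL    /* assumed arm dcache line size */

    extern void cpu_dcache_inv_range(uintptr_t va, size_t len);

    static void
    postread_invalidate(uintptr_t buf, size_t len)
    {
        uint8_t head[CACHE_LINE], tail[CACHE_LINE];
        uintptr_t start = buf & ~(CACHE_LINE - 1);
        uintptr_t end = (buf + len + CACHE_LINE - 1) & ~(CACHE_LINE - 1);
        size_t headlen = buf - start;
        size_t taillen = end - (buf + len);

        memcpy(head, (void *)start, headlen);       /* save leading bytes */
        memcpy(tail, (void *)(buf + len), taillen); /* and trailing bytes */
        cpu_dcache_inv_range(start, end - start);   /* drop stale lines */
        memcpy((void *)start, head, headlen);       /* put them back */
        memcpy((void *)(buf + len), tail, taillen);
    }
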