Commit Graph

262 Commits

Alan Cox
3153e878dd Add support to the virtual memory system for configuring machine-
dependent memory attributes:

Rename vm_cache_mode_t to vm_memattr_t.  The new name reflects the
fact that there are machine-dependent memory attributes that have
nothing to do with controlling the cache's behavior.

Introduce vm_object_set_memattr() for setting the default memory
attributes that will be given to an object's pages.

Introduce and use pmap_page_{get,set}_memattr() for getting and
setting a page's machine-dependent memory attributes.  Add full
support for these functions on amd64 and i386 and stubs for them on
the other architectures.  The function pmap_page_set_memattr() is also
responsible for any other machine-dependent aspects of changing a
page's memory attributes, such as flushing the cache or updating the
direct map.  The uses include kmem_alloc_contig(), vm_page_alloc(),
and the device pager:

  kmem_alloc_contig() can now be used to allocate kernel memory with
  non-default memory attributes on amd64 and i386.

  vm_page_alloc() and the device pager will set the memory attributes
  for the real or fictitious page according to the object's default
  memory attributes.

Update the various pmap functions on amd64 and i386 that map pages to
incorporate each page's memory attributes in the mapping.

Notes: (1) Inherent to this design are safety features that prevent
the specification of inconsistent memory attributes by different
mappings on amd64 and i386.  In addition, the device pager provides a
warning when a device driver creates a fictitious page with memory
attributes that are inconsistent with the real page that the
fictitious page is an alias for. (2) Storing the machine-dependent
memory attributes for amd64 and i386 as a dedicated "int" in "struct
md_page" represents a compromise between space efficiency and the ease
of MFCing these changes to RELENG_7.
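
For illustration only (not part of the commit), a driver on amd64/i386 could
now request write-combining kernel memory roughly as follows; the argument
order reflects the 2009-era kmem_alloc_contig() interface, and the size and
flags are made up:

  vm_offset_t va;

  /* 2009-era argument order: map, size, malloc flags, phys low/high,
   * alignment, boundary, memory attribute. */
  va = kmem_alloc_contig(kernel_map, 64 * 1024, M_WAITOK,
      0, ~(vm_paddr_t)0, PAGE_SIZE, 0, VM_MEMATTR_WRITE_COMBINING);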

In collaboration with: jhb

Approved by:	re (kib)
2009-07-12 23:31:20 +00:00
Sam Leffler
8c393fd1f0 Cleanup ALIGNED_POINTER:
o add to platforms where it was missing (arm, i386, powerpc, sparc64, sun4v)
o define as "1" on amd64 and i386 where there is no restriction
o make the type returned consistent with ALIGN
o remove _ALIGNED_POINTER
o make associated comments consistent
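
A minimal usage sketch (the buffer pointer "buf" is hypothetical); the point
of the cleanup is that this check is now available everywhere and simply
constant-folds to 1 on amd64 and i386:

  uint32_t val;

  if (ALIGNED_POINTER(buf, uint32_t))             /* "buf" is hypothetical */
          val = *(const uint32_t *)buf;
  else
          memcpy(&val, buf, sizeof(val));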

Reviewed by:	bde, imp, marcel
Approved by:	re (kensmith)
2009-07-05 17:45:48 +00:00
Alan Cox
5797795f5a Correct the #endif comment.
Noticed by:	jmallett
Approved by:	re (kib)
2009-06-26 16:22:24 +00:00
Alan Cox
e999111ae7 This change is the next step in implementing the cache control functionality
required by video card drivers.  Specifically, this change introduces
vm_cache_mode_t with an appropriate VM_CACHE_DEFAULT definition on all
architectures.  In addition, this change adds a vm_cache_mode_t parameter
to kmem_alloc_contig() and vm_phys_alloc_contig().  These will be the
interfaces for allocating mapped kernel memory and physical memory,
respectively, with non-default cache modes.

In collaboration with:	jhb
2009-06-26 04:47:43 +00:00
Andrew Thompson
f06a3a36ac Track the kernel mapping of a physical page by a new entry in vm_page
structure. When the page is shared, the kernel mapping becomes a special
type of managed page to force the cache off the page mappings. This is
needed to avoid stale entries on all ARM VIVT caches, and VIPT caches
with cache color issues.

Submitted by:	Mark Tinguely
Reviewed by:	alc
Tested by:	Grzegorz Bernacki, thompsa
2009-06-18 20:42:37 +00:00
Marcel Moolenaar
272489fe59 Pass the previously returned IRQ back to arm_get_next_irq() so that
the implementation can guarantee forward progress in the event of
a stuck interrupt or interrupt storm. This is especially critical
for fast interrupt handlers, as they can cause a hard hang in that
case. When first called, arm_get_next_irq() is passed -1.

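A rough sketch of the resulting calling convention (the dispatch helper and
trapframe variable are assumed, not from the commit):

  int irq = -1;

  while ((irq = arm_get_next_irq(irq)) != -1)
          dispatch_irq(irq, frame);       /* hypothetical dispatch helper */
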
Obtained from:	Juniper Networks, Inc.
2009-06-09 18:18:41 +00:00
Alan Cox
f83954e24a Define the kernel pmap in the same way on arm as on every other
architecture.

Eliminate an unused definition.

Tested by:	cognet
2009-05-07 05:42:13 +00:00
Robert Watson
9725389e1e Don't conditionally define CACHE_LINE_SHIFT, as we anticipate sizing
a fair number of static data structures, making this an unlikely
option to try to change without also changing source code. [1]

Change default cache line size on ia64, sparc64, and sun4v to 128
bytes, as this was what rtld-elf was already using on those
platforms. [2]

Suggested by:	bde [1], jhb [2]
MFC after:	2 weeks
2009-04-20 12:59:23 +00:00
Robert Watson
22037b2d2c Add description and cautionary note regarding CACHE_LINE_SIZE.
MFC after:	2 weeks
Suggested by:	alc
2009-04-19 21:26:36 +00:00
Robert Watson
a93fa8f2bb For each architecture, define CACHE_LINE_SHIFT and a derived
CACHE_LINE_SIZE constant.  These constants are intended to
over-estimate the cache line size, and be used at compile-time
when a run-time tuning alternative isn't appropriate or
available.

Defaults for all architectures are 64 bytes, except powerpc
where it is 128 bytes (used on G5 systems).
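
Concretely, the definitions follow this pattern (the __aligned() example is
illustrative, not from the commit):

  /* <machine/param.h>, default case (powerpc uses a shift of 7): */
  #define CACHE_LINE_SHIFT        6
  #define CACHE_LINE_SIZE         (1 << CACHE_LINE_SHIFT)

  /* Example: keep a frequently written structure on its own cache line. */
  struct stats {
          uint64_t        packets;
  } __aligned(CACHE_LINE_SIZE);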

MFC after:	2 weeks
Discussed on:   arch@
2009-04-19 20:19:13 +00:00
Alan Cox
beb3c3a9c5 Retire VM_PROT_READ_IS_EXEC. It was intended to be a micro-optimization,
but I see no benefit from it today.

VM_PROT_READ_IS_EXEC was only intended for use on processors that do not
distinguish between read and execute permission.  On an mmap(2) or
mprotect(2), it automatically added execute permission if the caller
specified permissions included read permission.  The hope was that this
would reduce the number of vm map entries needed to implement an address
space because there would be fewer neighboring vm map entries that differed
only in the presence or absence of VM_PROT_EXECUTE.  (See vm/vm_mmap.c
revision 1.56.)
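
The retired behavior amounted to roughly the following in the mmap/mprotect
path (paraphrased, not the exact removed code):

  #ifdef VM_PROT_READ_IS_EXEC
          /* Readable implied executable on these processors. */
          if (prot & VM_PROT_READ)
                  prot |= VM_PROT_EXECUTE;
  #endif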

Today, I don't see any real applications that benefit from
VM_PROT_READ_IS_EXEC.  In any case, vm map entries are now organized
as a self-adjusting binary search tree instead of an ordered list.  So,
the need for coalescing vm map entries is not as great as it once was.
2009-04-04 23:12:14 +00:00
Olivier Houchard
a471e1eda3 Fix the userland (RAS) version of atomic_fetchadd_32:
return the correct value, and do not store the wrong one in the supplied
pointer.
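
For context, the intended semantics are to return the value held before the
addition, e.g.:

  volatile uint32_t counter = 5;
  uint32_t old;

  old = atomic_fetchadd_32(&counter, 3);
  /* Correct behaviour: old == 5 and counter == 8. */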

Submitted by:	Mark Tinguely <tinguely casselton net>
2009-03-31 23:47:18 +00:00
Konstantin Belousov
a4f2b2b0c6 Add AT_EXECPATH ELF auxinfo entry type. The value's a_ptr is a pointer
to the full path of the image that is being executed.
Increase AT_COUNT.

Remove a no-longer-true comment about the types used in Linux ELF binaries;
the listed types contain FreeBSD-specific entries.
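
A sketch of how a consumer might walk the aux vector for the new entry
(aux_base is assumed to have been located elsewhere):

  const char *execpath = NULL;
  Elf_Auxinfo *aux;

  for (aux = aux_base; aux->a_type != AT_NULL; aux++)
          if (aux->a_type == AT_EXECPATH)
                  execpath = (const char *)aux->a_un.a_ptr;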

Reviewed by:	kan
2009-03-17 12:50:16 +00:00
Olivier Houchard
a43268a746 To prevent various race conditions in the RAS code, store and restore the
values in ARM_RAS_START and ARM_RAS_END at context switch time.

MFC after:	1 week
2009-02-12 23:23:30 +00:00
Sam Leffler
22f8f5fe92 Force atomic_cmpset_ptr types to match atomic_cmpset_32;
this matches what powerpc does.
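
The resulting definition is approximately (exact form may differ):

  #define atomic_cmpset_ptr(dst, old, new)                              \
          atomic_cmpset_32((volatile u_int32_t *)(dst),                 \
              (u_int32_t)(old), (u_int32_t)(new))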

Submitted by:	stass
MFC after:	2 weeks
2009-02-03 19:06:12 +00:00
Olivier Houchard
7202abb694 Add a comment explaining what ARM_KERN_DIRECTMAP is all about.
Suggested by:	raj
2009-01-22 15:36:11 +00:00
Rafal Jaworowski
1ee5b3b422 Fix confusing naming of Marvell ARM CPU specific routines.
- The contents of the 'feroceon_cpufuncs' dispatch table were really dedicated to
  the new Sheeva CPU (in 88F6xxx and MV-78xxx SOCs), and NOT Feroceon.

- Feroceon CPU (in 88F5xxx SOCs) appears as a regular ARM926EJ-S core and does
  not require dedicated routines.

This will be accompanied by a file rename commit.
2009-01-09 10:45:04 +00:00
Marcel Moolenaar
74aed9855d Add support for the FPA floating-point format on ARM. The
FPA floating-point format is identical to the VFP format,
but is always stored in big-endian.
Introduce _IEEE_WORD_ORDER to describe the byte-order of
the FP representation.

Obtained from:	Juniper Networks, Inc
2008-12-23 22:20:59 +00:00
Sam Leffler
b74f293f33 Add IXP465 and generic IXP425 definitions 2008-12-23 04:46:13 +00:00
Sam Leffler
41fe50f5de MFH @ 186335 2008-12-20 01:29:19 +00:00
Warner Losh
db3cd725a5 AT_DEBUG and AT_BRK were OBE like 10 years ago, so retire them.
Reviewed by:	peter
2008-12-17 06:56:58 +00:00
Sam Leffler
d212022417 Merge WIP from p4:
o recognize ixp435 cpu
o change memory layout for ixp4xx to not assume memory is aliased
  to 0x10000000 (Cambria/ixp435 memory starts at zero)
o handle 64 irqs for ixp435
o dual EHCI USB 2.0 controller integral to ixp435
o overhaul NPE code for ixp435 and better MAC+MII naming
o updated NPE firmware (including NPE-A image for ixp435/ixp465)
o Gateworks Cambria board support:
  - IDE compact flash
  - MCU
  - front panel LED on i2c bus
  - Octal LED latch

Sanity-tested with NFS-root on Avila and Cambria boards.  Requires
pending boot2 mods for CF-boot on Cambria.
2008-12-13 01:21:37 +00:00
Kip Macy
db7f0b974f - bump __FreeBSD_version to reflect added buf_ring, memory barriers,
and ifnet functions

- add memory barriers to <machine/atomic.h>
- update drivers to only conditionally define their own

- add lockless producer / consumer ring buffer
- remove ring buffer implementation from cxgb and update its callers

- add if_transmit(struct ifnet *ifp, struct mbuf *m) to ifnet to
  allow drivers to efficiently manage multiple hardware queues
  (i.e. not serialize all packets through one ifq)
- expose if_qflush to allow drivers to flush any driver managed queues

This work was supported by Bitgravity Inc. and Chelsio Inc.
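
A rough sketch of how a driver might combine the new pieces (all foo_* names
and fields are hypothetical):

  static int
  foo_transmit(struct ifnet *ifp, struct mbuf *m)
  {
          struct foo_txq *txq = foo_pick_queue(ifp->if_softc, m);
          int error;

          error = buf_ring_enqueue(txq->ftq_br, m);
          if (error != 0)
                  m_freem(m);             /* ring full: drop the packet */
          else
                  foo_txq_start(txq);     /* kick the hardware queue */
          return (error);
  }
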
2008-11-22 05:55:56 +00:00
Rafal Jaworowski
8e321b7943 Support kernel crash mini dumps on ARM architecture.
Obtained from:	Juniper Networks, Semihalf
2008-11-06 16:20:27 +00:00
Rafal Jaworowski
0ed948780c Initial support of loader(8) for ARM machines running U-Boot.
This uses the common U-Boot support lib (sys/boot/uboot, already used on
FreeBSD/powerpc), and assumes the underlying firmware has the modern API for
stand-alone apps enabled in the config (CONFIG_API).

Only netbooting is supported at the moment.

Obtained from:	Marvell, Semihalf
2008-10-14 10:11:14 +00:00
Rafal Jaworowski
373bbe25ff Introduce basic support for Marvell families of system-on-chip ARM devices:
  * Orion
     - 88F5181
     - 88F5182
     - 88F5281

  * Kirkwood
     - 88F6281

  * Discovery
     - MV78100

The above families of SOCs are built around CPU cores compliant with ARMv5TE
instruction set architecture definition. They share a number of integrated
peripherals. This commit brings support for the following basic elements:

  * GPIO
  * Interrupt controller
  * L1, L2 cache
  * Timers, watchdog, RTC
  * TWSI (I2C)
  * UART

Other peripherals drivers will be introduced separately.

Reviewed by:	imp, marcel, stass (Thanks guys!)
Obtained from:	Marvell, Semihalf
2008-10-13 20:07:13 +00:00
Rafal Jaworowski
ba6faad63c Introduce low-level support for new Marvell core CPUs: 88FR131, 88FR571.
They are compliant with ARMv5TE and integrated on 88F6281 (Kirkwood) and
MV78100 (Discovery) system-on-chip families.

Obtained from:	Marvell, Semihalf
2008-10-13 18:16:54 +00:00
Olivier Houchard
5ac74cc6e4 Remove the unused field "pc_prvspace" from the MD fields of the struct
pcpu.  There is no such thing as a "struct pcup".
While I'm there, remove a comment that makes no sense for arm.

Spotted out by:	Mark Tinguely
2008-09-11 20:39:46 +00:00
Rafal Jaworowski
f4e42148d7 ARM interrupts improvements.
- Fix nexus_setup_intr() abuse of setting up multiple IRQs in one go. Calling
  arm_setup_irqhandler() in a loop is bogus, as there's just one cookie given
  from the caller and it is overwritten in each iteration so that only the
  last handler's cookie value prevails.

- Proper intr masking/unmasking handling: the IRQ source is masked at PIC level
  only after the last handler has been removed from the list.
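
The teardown rule amounts to roughly this (the event structure field names
are illustrative):

  error = intr_event_remove_handler(cookie);
  if (error == 0 && TAILQ_EMPTY(&ie->ie_handlers))
          arm_mask_irq(irq);              /* last handler gone: mask source */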

Reviewed by:	cognet, imp, sam, stass
Obtained from:	Grzegorz Bernacki gjb ! semihalf dot com
2008-09-11 12:36:13 +00:00
Warner Losh
5f00fec406 Whitespace nit. 2008-08-23 23:35:08 +00:00
John Baldwin
70d12a18f2 Export 'struct pcpu' to userland w/o requiring _KERNEL. A few ports
already define _KERNEL to get to this and I'm about to add hooks to
libkvm to access per-CPU data.

MFC after:	1 week
2008-08-19 19:53:52 +00:00
Olivier Houchard
f0fe5e9127 Add "add pc, whatever" as a branch instruction, we use it in memcpy().
MFC after:	3 days
2008-08-03 15:35:32 +00:00
Olivier Houchard
697292d902 Add blx as a branch instruction.
MFC after:	3 days
2008-08-03 01:51:30 +00:00
Olivier Houchard
4ed897041f Add yet another branch instruction.
Obtained from:	NetBSD
MFC after:	3 days
2008-08-02 12:48:30 +00:00
Ed Schouten
721351876c Remove the unused major/minor numbers from iodev and memdev.
Now that st_rdev is being automatically generated by the kernel, there
is no need to define static major/minor numbers for the iodev and
memdev. We still need the minor numbers for the memdev, however, to
distinguish between /dev/mem and /dev/kmem.

Approved by:	philip (mentor)
2008-06-25 07:45:31 +00:00
Benno Rice
9722a61504 Support for the XScale PXA255 SoC as found on the Gumstix Basix and Connex
boards.  This is enough to net-boot to multiuser.

Also supported is the SMSC LAN91C111 parts used on the netCF, netDUO and netMMC
add-on boards.

I'll be putting some instructions on how to boot this on the Gumstix boards
online soon.

This is still fairly rough and will be refined over time but I felt it was
better to get this out there where other people can help out.
2008-06-06 05:08:09 +00:00
Olivier Houchard
e19357d3a5 On the AT91, we need to write to the EOI register after we handle an
interrupt. So, add a new function pointer, arm_post_filter, which defaults
to NULL, and which will be used as the post_filter arg for
intr_event_create(). Set it properly for the AT91, so that it boots again.
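
The idea, sketched with illustrative register, tag, and function names (not
the exact at91 code):

  /* Generic hook, left NULL where no end-of-interrupt write is needed. */
  void (*arm_post_filter)(void *) = NULL;

  /* AT91-style implementation: acknowledge the interrupt controller. */
  static void
  at91_eoi(void *arg)
  {
          bus_space_write_4(aic_bst, aic_bsh, IC_EOICR, 0);
  }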

Reported by:	hps
2008-04-20 23:29:06 +00:00
Warner Losh
aa037d58ea Take the first baby step towards unifying and cleaning up arminit():
- Pull all the code to deal with the trampoline stuff into one
	  centralized place and use it from everywhere.
	- Some minor style tidiness

Reviewed by: tinguely
2008-04-03 16:44:50 +00:00
John Birrell
e483943791 When building a kernel module, define MAXCPU the same as for SMP
kernels so that modules work both with and without SMP.
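
The resulting pattern in machine/param.h looks roughly like this (the SMP
value shown is only an example):

  #if defined(SMP) || defined(KLD_MODULE)
  #define MAXCPU          2       /* illustrative SMP value */
  #else
  #define MAXCPU          1
  #endif
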
2008-03-27 05:03:26 +00:00
Olivier Houchard
41c0d2813b Remove unused pv_list_count from the vm_page, and pm_count from the struct
pmap.

Submitted by:	Mark Tinguely
2008-03-06 21:59:47 +00:00
Robert Watson
1fb18eea38 Remove errant % in license comment.
MFC after:	3 days
2008-02-26 11:45:32 +00:00
Rafal Jaworowski
e081d0ac19 Improve ARM_TP_ADDRESS and RAS area.
De-hardcode usage of ARM_TP_ADDRESS and RAS local storage, and move this
special purpose page to a more convenient place, i.e. after the vectors high
page, more towards the end of the address space.  The previous location
(0xe000_0000) caused grief if KVA was to go beyond the default limit.

Note that ARM world rebuilding is required after this change since the
location of ARM_TP_ADDRESS is shared between kernel and userland.

Submitted by:	Grzegorz Bernacki (gjb AT semihalf dot com)
Reviewed by:	imp
Approved by:	cognet (mentor)
2008-02-05 10:22:33 +00:00
Olivier Houchard
8f2948f1c1 Bring in the nice work from Mark Tinguely on arm pmap.
The only downside is that it renames pmap_vac_me_harder() to pmap_fix_cache().
From Mark's email on -arm :
pmap_get_vac_flags(), pmap_vac_me_harder(), pmap_vac_me_kpmap(), and
pmap_vac_me_user() have been rewritten as pmap_fix_cache() to be more
efficient in the kernel map case. I also removed the reference to
the md.kro_mappings, md.krw_mappings, md.uro_mappings, and md.urw_mappings
counts.

In pmap_clearbit(), we can also skip over tests and writeback/invalidations
in the PVF_MOD and PVF_REF cases if those bits are not set in the pv_flag.
PVF_WRITE will turn caching back on and remove the PV_MOD bit.

In pmap_nuke_pv(), the vm_page_flag_clear(pg, PG_WRITEABLE) has been moved
to the pmap_fix_cache().

We can be more aggressive in attempting to turn caching back on by calling
pmap_fix_cache() at times that may be appropriate to turn cache on
(a kernel mapping has been removed, a write has been removed or a read
has been removed and we know the mapping does not have multiple write
mappings to a page).

In pmap_remove_pages() the cpu_idcache_wbinv_all() is moved to happen
before the page tables are NULLed because the caches are virtually
indexed and virtually tagged.

In pmap_remove_all(), the pmap_remove_write(m) is added before the
page tables are NULLed because the caches are virtually indexed and
virtually tagged. This also removes the need for the caches fixing routine
(whichever is being used pmap_vac_me_harder() or pmap_fix_cache()) to be
called on any of these mappings.

In pmap_remove(), I simplified the cache cleaning process and removed
extra TLB removals. Basically, if more than PMAP_REMOVE_CLEAN_LIST_SIZE
mappings are removed, just flush the entire cache.
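
The simplified path reads roughly as follows (the counter name is made up):

  if (mappings_removed > PMAP_REMOVE_CLEAN_LIST_SIZE) {
          cpu_idcache_wbinv_all();        /* one flush of the whole cache */
          cpu_tlb_flushID();
  } else {
          /* write back/invalidate and flush each removed mapping */
  }
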
2008-01-31 00:05:40 +00:00
Alan Cox
b8e7fc24fe Add configuration knobs for the superpage reservation system. Initially,
the reservation will only be enabled on amd64.
2007-12-27 16:45:39 +00:00
Joseph Koshy
0da7aa7a7d Add stubs to unbreak LINT. 2007-12-07 13:45:47 +00:00
Robert Watson
3c90d1ea74 Break out stack(9) from ddb(4):
- Introduce per-architecture stack_machdep.c to hold stack_save(9).
- Introduce per-architecture machine/stack.h to capture any common
  definitions required between db_trace.c and stack_machdep.c.
- Add new kernel option "options STACK"; we will build in stack(9) if it is
  defined, or also if "options DDB" is defined to provide compatibility
  with existing users of stack(9).

Add new stack_save_td(9) function, which allows the capture of a stacktrace
of another thread rather than the current thread, which the existing
stack_save(9) was limited to.  It requires that the thread be neither
swapped out nor running, which is the responsibility of the consumer to
enforce.

Update stack(9) man page.

Build tested:	amd64, arm, i386, ia64, powerpc, sparc64, sun4v
Runtime tested:	amd64 (rwatson), arm (cognet), i386 (rwatson)
2007-12-02 20:40:35 +00:00
Olivier Houchard
b21a1da537 Close a race.
The RAS implementation would set the end address, then the start
address.  These were used by the kernel to restart a RAS sequence if
it was interrupted.  When the thread switching code ran, it would
check these values and adjust the PC and clear them if it did.

However, there's a small flaw in this scheme.  Thread T1 sets the end
address and gets preempted.  Thread T2 runs and also does a RAS
operation.  This resets end to zero.  Thread T1 now runs again and
sets start and then begins the RAS sequence, but is preempted before
the RAS sequence executes its last instruction.  The kernel code that
would ordinarily restart the RAS sequence doesn't because the PC isn't
between start and 0, so the PC isn't set to the start of the sequence.
So when T1 is resumed again, it is at the wrong location for RAS to
produce the correct results.  This causes the wrong results for the
atomic sequence.

The window for the first race is 3 instructions.  The window for the
second race is 5-10 instructions depending on the atomic operation.
This makes this failure fairly rare and hard to reproduce.

Mutexes are implemented in libthr using atomic operations.  When the
above race would occur, a lock could get stuck locked, causing many
downstream problems, as you might expect.

Also, make sure to reset the start and end address when doing a syscall, or
a malicious process could set them before doing a syscall.
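
Conceptually, the restart check that this race defeats looks like this
(simplified; the real code reads the per-thread RAS words at context-switch
time):

  uint32_t start = *(uint32_t *)ARM_RAS_START;
  uint32_t end   = *(uint32_t *)ARM_RAS_END;

  if (start != 0 && tf->tf_pc >= start && tf->tf_pc < end)
          tf->tf_pc = start;      /* re-run the interrupted sequence */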

Reviewed by: imp, ups (thanks guys)
Pointy hat to:	cognet
MFC after:	3 days
2007-12-02 12:49:28 +00:00
Olivier Houchard
9acb0e651b In atomic_fetchadd_32(), do not blindly increase the value of %3.
It should just contain the value we want to add, because if we're interrupted
between the add and the str, we will restart from the beginning. Just use
a register we can scratch instead.

MFC after:	1 week
2007-11-27 22:12:05 +00:00
Kevin Lo
92e7748daf __CPU_XSCALE_PXA2XX -> CPU_XSCALE_PXA2X0 2007-11-01 10:01:15 +00:00
Warner Losh
63b2597849 Merge support from p4 (from NetBSD) for arm9e, arm10 and arm11 cores. Not
yet connected to the build, but reduces diffs to p4 repo.

Obtained from: NetBSD
2007-10-18 05:33:06 +00:00