Commit Graph

35 Commits

Author SHA1 Message Date
John Baldwin
60c7b36b7a Update various places that store or manipulate CPU masks to use cpumask_t
instead of int or u_int.  Since cpumask_t is currently u_int on all
platforms, this should just be a cosmetic change.
2010-08-11 23:22:53 +00:00
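
A minimal sketch of what such a conversion looks like (the structure and
function below are hypothetical, not part of the commit; cpumask_t is, per
the message above, u_int on all platforms at this point):

    #include <sys/types.h>          /* u_int, cpumask_t */

    struct example_softc {          /* hypothetical consumer of a CPU mask */
            cpumask_t       sc_cpus;        /* was: u_int sc_cpus; */
    };

    static void
    example_mark_cpu(struct example_softc *sc, int cpu)
    {
            /* Identical bit manipulation; only the declared type changed. */
            sc->sc_cpus |= (cpumask_t)1 << cpu;
    }
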
Kip Macy
2965a45315 On Alan's advice, rather than do a wholesale conversion on a single
architecture from page queue lock to a hashed array of page locks
(based on a patch by Jeff Roberson), I've implemented page lock
support in the MI code and have only moved vm_page's hold_count
out from under the page queue mutex to the page lock. This changes
pmap_extract_and_hold on all pmaps.

Supported by: Bitgravity Inc.

Discussed with: alc, jeffr, and kib
2010-04-30 00:46:43 +00:00
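
A hedged illustration of the new MI locking pattern (example_extract_and_hold()
and example_extract() are hypothetical stand-ins; the real per-pmap
implementations differ):

    vm_page_t
    example_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
    {
            vm_paddr_t pa;
            vm_page_t m = NULL;

            PMAP_LOCK(pmap);
            pa = example_extract(pmap, va, prot);   /* hypothetical lookup */
            if (pa != 0) {
                    m = PHYS_TO_VM_PAGE(pa);
                    vm_page_lock(m);        /* per-page lock ... */
                    vm_page_hold(m);        /* ... not the page queue mutex */
                    vm_page_unlock(m);
            }
            PMAP_UNLOCK(pmap);
            return (m);
    }
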
Alan Cox
3153e878dd Add support to the virtual memory system for configuring machine-
dependent memory attributes:

Rename vm_cache_mode_t to vm_memattr_t.  The new name reflects the
fact that there are machine-dependent memory attributes that have
nothing to do with controlling the cache's behavior.

Introduce vm_object_set_memattr() for setting the default memory
attributes that will be given to an object's pages.

Introduce and use pmap_page_{get,set}_memattr() for getting and
setting a page's machine-dependent memory attributes.  Add full
support for these functions on amd64 and i386 and stubs for them on
the other architectures.  The function pmap_page_set_memattr() is also
responsible for any other machine-dependent aspects of changing a
page's memory attributes, such as flushing the cache or updating the
direct map.  The uses include kmem_alloc_contig(), vm_page_alloc(),
and the device pager:

  kmem_alloc_contig() can now be used to allocate kernel memory with
  non-default memory attributes on amd64 and i386.

  vm_page_alloc() and the device pager will set the memory attributes
  for the real or fictitious page according to the object's default
  memory attributes.

Update the various pmap functions on amd64 and i386 that map pages to
incorporate each page's memory attributes in the mapping.

Notes: (1) Inherent to this design are safety features that prevent
the specification of inconsistent memory attributes by different
mappings on amd64 and i386.  In addition, the device pager provides a
warning when a device driver creates a fictitious page with memory
attributes that are inconsistent with the real page that the
fictitious page is an alias for. (2) Storing the machine-dependent
memory attributes for amd64 and i386 as a dedicated "int" in "struct
md_page" represents a compromise between space efficiency and the ease
of MFCing these changes to RELENG_7.

In collaboration with: jhb

Approved by:	re (kib)
2009-07-12 23:31:20 +00:00
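
A hedged sketch of how the new interfaces fit together (the wrapper function
is hypothetical and the prototypes are assumed from this era; consult sys/vm
for the authoritative signatures):

    static void
    example_memattr_usage(vm_page_t m)
    {
            vm_offset_t va;

            /* Wired kernel memory with a non-default (uncacheable) attribute. */
            va = kmem_alloc_contig(kernel_map, PAGE_SIZE, M_WAITOK | M_ZERO,
                0, ~(vm_paddr_t)0, PAGE_SIZE, 0, VM_MEMATTR_UNCACHEABLE);

            /*
             * Per-page attribute change; the machine-dependent work (cache
             * flush, direct map update) happens inside the pmap call.
             */
            pmap_page_set_memattr(m, VM_MEMATTR_WRITE_COMBINING);
            (void)va;
    }
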
Andrew Thompson
f06a3a36ac Track the kernel mapping of a physical page by a new entry in vm_page
structure. When the page is shared, the kernel mapping becomes a special
type of managed page to force the cache off the page mappings. This is
needed to avoid stale entries on all ARM VIVT caches, and on VIPT caches
with cache coloring issues.

Submitted by:	Mark Tinguely
Reviewed by:	alc
Tested by:	Grzegorz Bernacki, thompsa
2009-06-18 20:42:37 +00:00
Alan Cox
f83954e24a Define the kernel pmap in the same way on arm as on every other
architecture.

Eliminate an unused definition.

Tested by:	cognet
2009-05-07 05:42:13 +00:00
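
For reference, a sketch of the convention the other pmap.h headers use
(assumed from the common pattern, not quoted from the commit):

    extern struct pmap kernel_pmap_store;           /* the kernel pmap itself */
    #define kernel_pmap     (&kernel_pmap_store)    /* the usual alias */
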
Rafal Jaworowski
8e321b7943 Support kernel crash mini dumps on ARM architecture.
Obtained from:	Juniper Networks, Semihalf
2008-11-06 16:20:27 +00:00
Olivier Houchard
41c0d2813b Remove unused pv_list_count from the vm_page, and pm_count from the struct
pmap.

Submitted by:	Mark Tinguely
2008-03-06 21:59:47 +00:00
Olivier Houchard
8f2948f1c1 Bring in the nice work from Mark Tinguely on arm pmap.
The only downside is that it renames pmap_vac_me_harder() to pmap_fix_cache().
From Mark's email on -arm :
pmap_get_vac_flags(), pmap_vac_me_harder(), pmap_vac_me_kpmap(), and
pmap_vac_me_user() have been rewritten as pmap_fix_cache() to be more
efficient in the kernel map case. I also removed the references to the
md.kro_mappings, md.krw_mappings, md.uro_mappings, and md.urw_mappings
counts.

In pmap_clearbit(), we can also skip the tests and writeback/invalidations
in the PVF_MOD and PVF_REF cases if those bits are not set in pv_flags.
PVF_WRITE will turn caching back on and remove the PVF_MOD bit.

In pmap_nuke_pv(), the vm_page_flag_clear(pg, PG_WRITEABLE) call has been
moved into pmap_fix_cache().

We can be more aggressive in attempting to turn caching back on by calling
pmap_fix_cache() at times that may be appropriate to turn cache on
(a kernel mapping has been removed, a write has been removed or a read
has been removed and we know the mapping does not have multiple write
mappings to a page).

In pmap_remove_pages() the cpu_idcache_wbinv_all() is moved to happen
before the page tables are NULLed because the caches are virtually
indexed and virtually tagged.

In pmap_remove_all(), the pmap_remove_write(m) is added before the
page tables are NULLed because the caches are virtually indexed and
virtually tagged. This also removes the need for the caches fixing routine
(whichever is being used pmap_vac_me_harder() or pmap_fix_cache()) to be
called on any of these mappings.

In pmap_remove(), I simplified the cache cleaning process and removed
extra TLB removals. Basically, if more than PMAP_REMOVE_CLEAN_LIST_SIZE
mappings are removed, just flush the entire cache.
2008-01-31 00:05:40 +00:00
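
A hedged sketch of the pmap_remove() heuristic described above
(example_remove_range() and example_unmap_page() are hypothetical stand-ins):

    static void
    example_remove_range(pmap_t pm, vm_offset_t sva, vm_offset_t eva)
    {
            vm_offset_t va;
            u_int cleaned = 0;
            int flush_all = 0;

            for (va = sva; va < eva; va += PAGE_SIZE) {
                    if (!example_unmap_page(pm, va))        /* nothing mapped */
                            continue;
                    if (++cleaned <= PMAP_REMOVE_CLEAN_LIST_SIZE)
                            cpu_idcache_wbinv_range(va, PAGE_SIZE);
                    else
                            flush_all = 1;  /* too many: one big flush instead */
            }
            if (flush_all)
                    cpu_idcache_wbinv_all();
    }
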
Olivier Houchard
b4db6fd942 Properly handle supersections.
Make sure we cache entries in the L2 cache.

Approved by:	re (blanket)
2007-07-27 14:45:04 +00:00
Olivier Houchard
10d8c18005 Introduce pmap_kenter_supersection(), which maps 16MB super-sections into
the kernel pmap.
Document the behavior of the xscale core 3 a bit more.
2007-06-11 21:29:26 +00:00
Olivier Houchard
fe85f6cee8 Switch the kernel's pmap domain from 15 to 0.
This should be a no-op; it is needed for xscale core 3 supersection
support, as supersections are always part of domain 0.
2007-05-19 12:47:34 +00:00
Olivier Houchard
47010239a8 - Add bounce pages for arm, largely based on the i386 implementation.
- Add a default parent dma tag, similar to what has been done for sparc64.
- Before invalidating the dcache in POSTREAD, save the bits which are in the
same cachelines as our buffers, but not part of them, and restore them after
the invalidation.
2007-01-17 00:53:05 +00:00
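
A hedged sketch of the partial-cacheline preservation in the POSTREAD case
(example_postread_inv() is hypothetical; arm_dcache_align and
arm_dcache_align_mask are the usual MD globals):

    static void
    example_postread_inv(vm_offset_t buf, vm_size_t len)
    {
            char save_head[64], save_tail[64];      /* >= dcache line size */
            vm_offset_t start = buf & ~arm_dcache_align_mask;
            vm_offset_t end = (buf + len + arm_dcache_align_mask) &
                ~arm_dcache_align_mask;
            vm_size_t head_len = buf - start;       /* bytes before the buffer */
            vm_size_t tail_len = end - (buf + len); /* bytes after the buffer */

            if (head_len != 0)
                    memcpy(save_head, (void *)start, head_len);
            if (tail_len != 0)
                    memcpy(save_tail, (void *)(buf + len), tail_len);

            cpu_dcache_inv_range(start, end - start);

            /* Put back the neighboring bytes that were not ours. */
            if (head_len != 0)
                    memcpy((void *)start, save_head, head_len);
            if (tail_len != 0)
                    memcpy((void *)(buf + len), save_tail, tail_len);
    }
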
Ruslan Ermilov
26af9ac7d0 Fix a comment. 2006-11-13 06:26:57 +00:00
Alan Cox
cc0d48ffb6 Eliminate unused global variables. 2006-11-11 20:57:52 +00:00
Olivier Houchard
676b1fbdbf Identify the xscale 81342. 2006-11-07 22:36:57 +00:00
Olivier Houchard
49953e11d7 Rewrite ARM_USE_SMALL_ALLOC so that instead of the current behavior, it maps
the whole physical memory, cached, using 1MB section mappings. This reduces
the address space available for user processes a bit, but given the amount of
memory a typical arm machine has, it is not (yet) a big issue.
It then provides a uma_small_alloc() that works as it does for architectures
which have a direct mapping.
2006-08-08 20:59:38 +00:00
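
A hedged sketch of a direct-map style uma_small_alloc() (the signature is
assumed from this era, and EXAMPLE_PHYS_TO_SECTION_VA() is a hypothetical
stand-in for the translation into the cached, section-mapped window):

    void *
    uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
    {
            vm_page_t m;

            /* zone/bytes are unused in this sketch; UMA asks for one page. */
            *flags = UMA_SLAB_PRIV;
            m = vm_page_alloc(NULL, 0, VM_ALLOC_NOOBJ | VM_ALLOC_WIRED |
                ((wait & M_NOWAIT) != 0 ? VM_ALLOC_INTERRUPT : VM_ALLOC_SYSTEM));
            if (m == NULL)
                    return (NULL);
            /* No pmap_qenter(): the page is already mapped by a section. */
            return ((void *)EXAMPLE_PHYS_TO_SECTION_VA(VM_PAGE_TO_PHYS(m)));
    }
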
Alan Cox
ed48a217f6 Add partial pmap locking.
Eliminate the unused allpmaps list.

Tested by: cognet@
2006-06-06 04:32:20 +00:00
Olivier Houchard
87adbb81cc Include machine/cpuconf.h in pmap.h in order to get ARM_NMMUS defined,
to appease -Wundef.
2006-05-31 11:57:37 +00:00
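
To illustrate the -Wundef problem (a schematic, not the header's actual
contents): an #if on a macro that has not been defined yet draws the warning,
so the header that defines it must be included first.

    #include <machine/cpuconf.h>    /* defines ARM_NMMUS */

    #if ARM_NMMUS > 1               /* -Wundef fires here if ARM_NMMUS is undefined */
    /* ... per-MMU-class indirection ... */
    #endif
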
Olivier Houchard
d5d776c16b Resurrect Skyeye support :
Add a new option, SKYEYE_WORKAROUNDS, which, as the name suggests, adds
workarounds for things skyeye doesn't simulate. Specifically :
- Use USART0 instead of DBGU as the console, make it not use DMA, and
manually provoke an interrupt when we're done in the transmit function.
- Skyeye maintains an internal counter for the clock, but apparently there's
no way to access it, so hack the timecounter code to return a value which
is increased at every clock interrupt. This is gross, but I didn't find a
better way to implement timecounters without hacking Skyeye to get the
counter value.
- Force the write-back of PTEs once we're done writing them, even if they
are supposed to be write-through. I don't know why I have to do that.
2006-05-13 23:41:16 +00:00
Olivier Houchard
174329aff2 MFp4: Don't write-back the PTEs if they are mapped write-through, this was
apparently only needed because skyeye has bugs in its cache emulation.
2006-04-09 20:03:03 +00:00
Olivier Houchard
2456c0ea88 Try to honor BUS_DMA_COHERENT: if the flag is set, allocate memory
with malloc() or contigmalloc() as usual, but try to re-map the allocated
memory into a VA outside the KVA, non-cached, thus making the calls to
bus_dmamap_sync() for these buffers unnecessary.
2006-03-01 23:04:25 +00:00
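
A hedged driver-side sketch (example_alloc_coherent(), struct example_softc,
and its sc_dtag/sc_buf/sc_dmap fields are hypothetical): request a coherent
buffer so that, per the message above, syncing it becomes unnecessary.

    static int
    example_alloc_coherent(struct example_softc *sc)
    {
            int error;

            /* Size comes from the tag's maxsize; ask for a coherent mapping. */
            error = bus_dmamem_alloc(sc->sc_dtag, &sc->sc_buf,
                BUS_DMA_COHERENT | BUS_DMA_NOWAIT | BUS_DMA_ZERO, &sc->sc_dmap);
            if (error != 0)
                    return (error);
            /* bus_dmamap_sync() on this buffer is now effectively a no-op. */
            return (0);
    }
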
Olivier Houchard
94d8cf9916 Force pmap to write-back the pte cacheline after each pte modification,
even if the pte is supposed to be cached in write-through mode (might be a
skyeye bug, I'll have to check).
2005-11-21 19:10:44 +00:00
Olivier Houchard
812779897c MFi386 rev 1.536 (sort of)
Move what can be moved (UMA zones creation, pv_entry_* initialization) from
pmap_init2() to pmap_init().
Create a new function, pmap_postinit(), called from cpu_startup(), to do the
L1 tables allocation.
pmap_init2() is now empty for arm as well.
2005-11-06 16:10:28 +00:00
Olivier Houchard
db7db23dd8 dump_avail has nothing to do with ARM_USE_SMALL_ALLOC, so move its
declaration out of the #ifdef.
2005-10-04 16:29:31 +00:00
Olivier Houchard
b834efd591 Provide a dump_avail[] variable, which contains the page ranges to be
dumped.

For iq31244_machdep.c, attempt to recognize hints provided by the elf
trampoline.
2005-10-03 14:15:50 +00:00
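
A sketch of the dump_avail[] convention (mirroring phys_avail[]): pairs of
start/end physical addresses, terminated by a zero entry. The helper below
is hypothetical, added only to show the iteration.

    extern vm_paddr_t dump_avail[];

    static vm_paddr_t
    example_dumpable_bytes(void)
    {
            vm_paddr_t total = 0;
            int i;

            for (i = 0; dump_avail[i + 1] != 0; i += 2)
                    total += dump_avail[i + 1] - dump_avail[i];
            return (total);
    }
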
Olivier Houchard
56e472e2b5 Add a new arm-specific option, ARM_USE_SMALL_ALLOC. If defined, it provides
an implementation of uma_small_alloc() which tries to preallocate memory
1MB at a time, and maps it using section mappings.
2005-06-07 23:04:24 +00:00
Olivier Houchard
139e3f7c33 - Try harder to report dirty pages.
- Garbage-collect pmap_update(), it became quite useless.
2005-04-07 22:01:53 +00:00
Olivier Houchard
f4c01f1508 Instead of using sysarch() to store/retrieve the tp, add a magic address,
ARM_TP_ADDRESS, where the tp will be stored. On CPUs that support it, a cache
line will be allocated and locked for this address, so that it will never go
to RAM. On CPUs that do not, a page is allocated for it (it will be a bit
slower, and is wrong for SMP, but should be fine for UP).
The tp is still stored in the mdthread struct, and at each context switch,
ARM_TP_ADDRESS gets updated.

Suggested by:   davidxu
2005-02-26 18:59:01 +00:00
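
A hedged sketch of the publication step (the helper name and the md_tp field
name are assumptions; the message above only says the tp lives in the
mdthread struct):

    static __inline void
    example_publish_tp(struct thread *td)
    {
            /* Done at context switch time so ARM_TP_ADDRESS tracks curthread. */
            *(register_t *)ARM_TP_ADDRESS = td->td_md.md_tp;
    }
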
Warner Losh
d8315c79d9 Start all license statements with /*- 2005-01-05 21:58:49 +00:00
Olivier Houchard
b62e66eb1f Remove an unused field from the struct pv_entry.
While I'm there, fix style.
2004-12-05 22:46:30 +00:00
Olivier Houchard
e099742e25 Import md bits for mem(4) on arm.
While I'm there, clean up pmap.h a bit.
2004-11-07 23:01:36 +00:00
Olivier Houchard
8e90166a08 Implement pmap_growkernel() and pmap_extract_and_hold().
Remove the cache state logic: right now, it causes more problems than it
helps.
Add helper functions for mapping devices while bootstrapping.
Reorganize the code a bit, and remove dead code.

Obtained from:	NetBSD (partially)
2004-09-23 21:54:25 +00:00
Olivier Houchard
6933f3a5ca Define pmap_page_is_mapped(). 2004-07-21 22:02:48 +00:00
Olivier Houchard
dd7c1e993e Forward declare "struct pcb", so that one does not need to include
<machine/pcb.h> before including <machine/pmap.h>.
2004-07-12 21:22:40 +00:00
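
The pattern being applied, sketched (the prototype shown is only an example
of a pmap.h declaration that needs the tag, not necessarily the one that
motivated the change):

    struct pcb;                             /* forward declaration is enough */

    void    pmap_set_pcb_pagedir(pmap_t, struct pcb *);
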
Olivier Houchard
6fc729af63 Import FreeBSD/arm kernel bits.
It only supports sa1110 (on simics) right now, but xscale support should come
soon.
Some of the initial work has been provided by:
Stephane Potvin <sepotvin at videotron.ca>
Most of this comes from NetBSD.
2004-05-14 11:46:45 +00:00