Commit Graph

65 Commits

Author SHA1 Message Date
Ian Lepore
13a98c8536 Begin reducing code duplication in arm pmap.c and pmap-v6.c by factoring
out common code related to mapping device memory into a new devmap.c file.

Remove the growing duplication of code that used pmap_devmap_find_pa() and
then did some math with the returned results to generate a virtual address,
and likewise in reverse to get a physical address.  Now there are a pair
of functions, arm_devmap_vtop() and arm_devmap_ptov(), to do that.  The
bus_space_map() implementations are rewritten in terms of these.
2013-11-04 19:44:37 +00:00
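For illustration, a minimal sketch of the translation pair this commit describes. The table pointer, the entry layout (pd_va/pd_pa/pd_size), and the not-found sentinel are assumptions standing in for whatever devmap.c actually defines:

void *
arm_devmap_ptov(vm_paddr_t pa, vm_size_t size)
{
	const struct arm_devmap_entry *pd;

	/* A zero pd_size terminates the table (assumed convention). */
	for (pd = devmap_table; pd != NULL && pd->pd_size != 0; pd++)
		if (pa >= pd->pd_pa && pa + size <= pd->pd_pa + pd->pd_size)
			return ((void *)(pd->pd_va + (pa - pd->pd_pa)));
	return (NULL);
}

vm_paddr_t
arm_devmap_vtop(void *vpva, vm_size_t size)
{
	const struct arm_devmap_entry *pd;
	vm_offset_t va = (vm_offset_t)vpva;

	for (pd = devmap_table; pd != NULL && pd->pd_size != 0; pd++)
		if (va >= pd->pd_va && va + size <= pd->pd_va + pd->pd_size)
			return (pd->pd_pa + (va - pd->pd_va));
	return (DEVMAP_PADDR_NOTFOUND);	/* assumed sentinel */
}

A bus_space_map() built on these reduces to a single arm_devmap_ptov() lookup, falling back to a dynamic device mapping only when the region is not in the table.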
Zbigniew Bodek
0efe42a2e3 Fix condition that determines PMAP_NEEDS_PTE_SYNC value for ARM
Use values of the correct defines to determine statement's result.
ARM_ARCH_ symbols are always defined, hence only values are relevant.

Reviewed by:	cognet
2013-10-28 23:42:44 +00:00
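The pitfall is easy to demonstrate with a schematic (not the actual pmap.h logic):

/* Wrong: true even when ARM_ARCH_6 expands to 0, because the symbol
 * itself is always defined. */
#if defined(ARM_ARCH_6)
#define	PMAP_NEEDS_PTE_SYNC	1
#endif

/* Right: compare the value, which is what this commit fixes. */
#if ARM_ARCH_6 != 0
#define	PMAP_NEEDS_PTE_SYNC	1
#endif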
Ian Lepore
99af02e3b6 Retire arm_remap_nocache() and the data and constants associated with it.
The only remaining user was the code that allocates bounce pages for armv4
busdma.  It's not clear why bounce pages would need uncached memory, but
if that ever changes, kmem_alloc_attr() would be the way to get it.
2013-10-27 03:13:26 +00:00
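Should uncached bounce memory ever be needed again, a hedged sketch of the suggested replacement (the kmem_alloc_attr() signature of this era is assumed):

	vm_offset_t va;

	/* Wired kernel memory with the uncacheable attribute,
	 * anywhere in physical memory. */
	va = kmem_alloc_attr(kernel_arena, size, M_WAITOK | M_ZERO,
	    0, ~(vm_paddr_t)0, VM_MEMATTR_UNCACHEABLE);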
Olivier Houchard
f4b13928b8 Spell cpu_l2cache_wb_range correctly. 2013-10-17 21:38:14 +00:00
Olivier Houchard
f81c09049a - Switch to use WBWA mappings for page tables on armv6, this is needed for SMP.
- Fix PTE_SYNC() for PIPT L2 caches, using the virtual address wasn't so useful.
- Use PTE_SYNC() for >= armv6
2013-10-17 21:06:19 +00:00
Rafal Jaworowski
b949475db0 Introduce superpages support for ARMv6/v7.
Promoting base pages to superpages can increase TLB coverage and allow for
efficient use of page table entries.  This development provides FreeBSD/ARM
with a superpage management mechanism roughly equivalent to what we have
for the i386 and amd64 architectures.

1. Add a mechanism for automatic promotion of 4KB page mappings to 1MB
   section mappings, and for demotion back to 4KB mappings when needed.

2. Managed and non-kernel mappings are now superpages-aware.

3. The functionality can be enabled by setting the "vm.pmap.sp_enabled"
   tunable to a non-zero value (either in loader.conf or by modifying the
   "sp_enabled" variable in pmap-v6.c).  By default, automatic promotion is
   currently disabled.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-08-26 17:12:30 +00:00
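A sketch of how such a knob is typically wired up (illustrative; the committed declarations live in pmap-v6.c):

static int sp_enabled = 0;	/* default: automatic promotion disabled */
TUNABLE_INT("vm.pmap.sp_enabled", &sp_enabled);
SYSCTL_INT(_vm_pmap, OID_AUTO, sp_enabled, CTLFLAG_RD,
    &sp_enabled, 0, "Enable automatic superpage promotion");

With that in place, adding vm.pmap.sp_enabled=1 to loader.conf enables promotion at boot.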
Rafal Jaworowski
836f82ff43 Do not use pv_kva on ARMv6/v7 and save some space on each vm_page. It's only
relevant for older ARM variants (with virtual cache).

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	gber
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-08-19 16:16:49 +00:00
Rafal Jaworowski
30f7f10e66 Clear all L2 PTE protection bits before their configuration.
Revise L2_S_PROT_MASK to include all of the protection bits.  Note that
clearing these bits does not always take away the corresponding permissions
(for some bits, the permission is granted when the bit is cleared).  The
bits are cleared here but are set again, or left cleared, as appropriate in
pmap_set_prot(), pmap_enter_locked(), etc.

Clear L2_XN along with L2_S_PROT_MASK in pmap_set_prot() so that all
permission-related bits are cleared before the actual configuration.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	gber
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-08-19 15:12:36 +00:00
Grzegorz Bernacki
3bc567b6ad Stop using PVF_MOD, PVF_REF & PVF_EXEC flags in pv_entry, use PTE.
Using PVF_MOD, PVF_REF and PVF_EXEC is redundant as we can get the proper
info from PTE bits.
When a mapping is marked executable and has been referenced, we assume that
it has been executed.  Similarly, when a mapping is writable and has been
referenced, the reference must have been a write access.
PVF_MOD and PVF_REF flags are kept just for pmap_clearbit() usage,
to pass the information on which bit should be cleared.

Submitted by:   Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:   The FreeBSD Foundation, Semihalf
2013-05-23 12:23:18 +00:00
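Illustrative predicates capturing that inference (macro names such as L2_S_REF are assumptions standing in for whatever the headers actually define):

static __inline boolean_t
pte_was_executed(pt_entry_t pte)
{
	/* Executable and referenced => assume it has been executed. */
	return ((pte & L2_S_REF) != 0 && (pte & L2_XN) == 0);
}

static __inline boolean_t
pte_was_written(pt_entry_t pte)
{
	/* Writable (APX clear) and referenced => it has been written. */
	return ((pte & L2_S_REF) != 0 && (pte & L2_APX) == 0);
}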
Grzegorz Bernacki
2b3e821bcc Improve, optimize, and clean up ARMv6/v7 memory management related code.
Use pmap_find_pv() where needed instead of duplicating its code throughout
pmap-v6.c.

Avoid a possible NULL pointer dereference in pmap_enter_locked().
When trying to get m->md.pv_memattr, make sure that m != NULL; in
particular, m is set to NULL for the vector_page mapping.

Do not set PGA_REFERENCED flag in pmap_enter_pv().
On ARM, any new page reference will result either in entering the new
mapping by calling pmap_enter(), etc., or in fixing up the existing mapping
in pmap_fault_fixup().
Therefore we set the PGA_REFERENCED flag in the cases mentioned above;
setting it again later in pmap_enter_pv() is just a waste of cycles.

Delete unused pm_pdir pointer from the pmap structure.

Rearrange brackets in the fault cause detection in trap.c.
Place the brackets so that the flow of the conditions can be seen at
a glance.

Unify naming in pmap-v6.c and improve style.
Use naming common to the whole pmap and compatible with the other pmaps;
improve style where possible:
pm   -> pmap
pg   -> m
opg  -> om
*pt  -> *ptep
*pte -> *ptep
*pde -> *pdep

Submitted by:   Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:   The FreeBSD Foundation, Semihalf
2013-05-23 12:15:23 +00:00
Grzegorz Bernacki
b8b08befd0 Switch to the AP[2:1] access permissions model. Store the "referenced"
bit in the PTE.

Enable the Access Flag in CPU control. With AF enabled, each valid mapping
needs to have the referenced bit set in its PTE in order to be cached
in the TLB.

The AP[0] bit is used as the reference flag.
All access permissions are encoded by AP[2:1], wherein AP[1] is in effect
"user enable" and AP[2] (APX) is "write disable".

All mappings are always set to be valid. Reference emulation is performed
by setting/clearing the reference flag in the PTE.

md.pvh_attrs is no longer necessary; however, pv_flags is still used
for now.

Marking a vm_page as "dirty" or "referenced" is performed on:
- page or flag fault servicing in pmap_fault_fixup(), based on the fault
  type
- vm_fault servicing in pmap_enter(), according to the desired protections
  and the faulting access type
Redundant page marking has been removed, as on ARM we know exactly when a
particular page is referenced or is going to be written.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-05-23 12:07:41 +00:00
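A schematic of the reference emulation described here, not the literal pmap_fault_fixup(); the function name and the L2_S_REF macro are assumed:

static int
pmap_ref_fixup(pt_entry_t *ptep, vm_page_t m, vm_offset_t va)
{
	/* An access through a valid mapping whose AP[0] "referenced"
	 * bit is clear traps; servicing the trap sets the bit, making
	 * the entry eligible for the TLB again. */
	if (!l2pte_valid(*ptep) || (*ptep & L2_S_REF) != 0)
		return (0);		/* not a reference-emulation fault */
	*ptep |= L2_S_REF;
	PTE_SYNC(ptep);
	vm_page_aflag_set(m, PGA_REFERENCED);
	cpu_tlb_flushID_SE(va);
	return (1);			/* handled; retry the access */
}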
Grzegorz Bernacki
4442f74b81 Port the new PV entry allocator from amd64/i386/mips to armv6/v7.
PV entries are now roughly half the size.
Instead of using a shared UMA zone for 28-byte pv entries
(two 8-byte tailq nodes, a 4-byte pointer, a 4-byte address, and 4-byte
flags), we allocate a page at a time per process.
This provides 252 pv entries per process (actually, per pmap address space)
and eliminates one of the 8-byte tailq entries, since we can now track
per-process pv entries implicitly.
The pointer to the pmap can be eliminated as well: address arithmetic on
the page header yields a single pointer shared by all 252 entries. An
8-int bitmap tracks the freelist of those 252 entries.
Under serious memory pressure, another pv_chunk can still be allocated by
freeing some pages in pmap_pv_reclaim().

Added pv_entry/pv_chunk related statistics to pmap.
pv_entry/pv_chunk statistics can be accessed via sysctl vm.pmap.

Ported the PTE freelist for KVA allocation and maintenance from i386.
Using an idea from Stephan Uphoff, use the empty PTEs that correspond
to the unused kva in the pv memory block to thread a freelist through.
This allows us to free pages that used to be used for pv entry chunks,
since we can now track holes in the kva memory block.

Since ARM pmap.c and pmap-v6.c use the same header while their pv_entry,
pmap, and md_page structures differ, the code designed for ARMv6/7 had to
be separated from the code for the other ARM variants.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-05-14 09:47:58 +00:00
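The shape of the chunked allocator, sketched from the description above; field names and constants are assumptions, the real definitions live in the arm pmap headers:

#define	_NPCM	8		/* ints in the freelist bitmap */
#define	_NPCPV	252		/* pv_entry slots per page */

struct pv_chunk {
	pmap_t			pc_pmap;	/* one pointer shared by all slots */
	TAILQ_ENTRY(pv_chunk)	pc_list;	/* per-pmap chunk list */
	uint32_t		pc_map[_NPCM];	/* bitmap: 1 = slot free */
	TAILQ_ENTRY(pv_chunk)	pc_lru;		/* global LRU for pmap_pv_reclaim() */
	struct pv_entry		pc_pventry[_NPCPV];
};

Because each chunk is a page-aligned page, masking the low bits off any pv_entry's address recovers the chunk header, and with it the shared pc_pmap; that is the address arithmetic the message refers to.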
Grzegorz Bernacki
4c8add8a96 Fix L2 PTE access permissions management.
Keep the following access permissions:

APX     AP     Kernel     User
 1      01       R         N
 1      10       R         R
 0      01      R/W        N
 0      11      R/W       R/W

Avoid using an APX|AP setting that is reserved on ARMv6:
- In the case of an unprivileged (user) access without permission to write,
  the access permission bits were being set to APX|AP = 111, a value that
  is reserved on ARMv6 (though valid on ARMv7).

Fix up faulting userland accesses properly:
- A wrong condition in pmap_fault_fixup() caused every genuine,
  unprivileged access to be fixed up instead of simply being skipped with
  an early return. Starting from now, we ensure a proper reaction to
  illicit user accesses.

The L2_S_PROT_R and L2_S_PROT_U names might be misleading, as they do not
reflect the real permission levels. This will be clarified in the following
patches (the switch to the AP[2:1] permissions model).

Obtained from: Semihalf
2013-05-06 15:30:34 +00:00
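For illustration, the corrected check reduces to something like the following schematic (identifiers assumed; L2_S_PROT_U denotes user accessibility):

	/* A user-mode access to a mapping with no user permission at all
	 * is genuinely illicit: skip the fixup and let the normal fault
	 * handling reject it. */
	if (user && (*ptep & L2_S_PROT_U) == 0)
		return (0);	/* not a reference/modify emulation fault */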
Konstantin Belousov
e8a4a618cf Add pmap function pmap_copy_pages(), which copies the content of the
pages around, taking arrays of vm_page_t for both source and
destination.  Starting offsets and the total transfer size are specified.

The function implements an optimal copying algorithm using
platform-specific optimizations.  For instance, on architectures
where the direct map is available, no transient mappings are created;
on i386, the per-CPU ephemeral page frame is used.  The code was
typically borrowed from pmap_copy_page() for the same
architecture.

Only the i386/amd64, powerpc aim, and arm/arm-v6 implementations were
tested at the time of commit. High-level code, not yet committed to
the tree, ensures that use of the function is only allowed after
explicit enablement.

For sparc64, the existing code has known issues, so a stub is added
instead to allow the kernel to link.

Sponsored by:	The FreeBSD Foundation
Tested by:	pho (i386, amd64), scottl (amd64), ian (arm and arm-v6)
MFC after:	2 weeks
2013-03-14 20:18:12 +00:00
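A hedged sketch of the direct-map flavor the message alludes to, modeled on what a PHYS_TO_DMAP()-style direct map permits (as on amd64); the ARM implementations use transient mappings instead:

void
pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset,
    vm_page_t mb[], vm_offset_t b_offset, int xfersize)
{
	void *a_cp, *b_cp;
	vm_offset_t a_pg_offset, b_pg_offset;
	int cnt;

	while (xfersize > 0) {
		/* Clamp each copy to the smaller page remainder. */
		a_pg_offset = a_offset & PAGE_MASK;
		cnt = min(xfersize, PAGE_SIZE - a_pg_offset);
		a_cp = (char *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(
		    ma[a_offset >> PAGE_SHIFT])) + a_pg_offset;
		b_pg_offset = b_offset & PAGE_MASK;
		cnt = min(cnt, PAGE_SIZE - b_pg_offset);
		b_cp = (char *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(
		    mb[b_offset >> PAGE_SHIFT])) + b_pg_offset;
		bcopy(a_cp, b_cp, cnt);
		a_offset += cnt;
		b_offset += cnt;
		xfersize -= cnt;
	}
}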
Alan Cox
5c9f7b1a91 Initialize vm_max_kernel_address on non-FDT platforms. (This should have
been included in r246926.)

The second parameter to pmap_bootstrap() is redundant.  Eliminate it.

Reviewed by:	andrew
2013-02-20 16:48:52 +00:00
Oleksandr Tymoshenko
fdf78e7823 Switch the default cache type for ARMv6/ARMv7 from write-through to
writeback-writeallocate.
2013-01-08 02:40:20 +00:00
Oleksandr Tymoshenko
b9a3b76662 Fix misleading comment 2012-12-20 03:33:33 +00:00
Olivier Houchard
65d79ed70c Properly implement pmap_[get|set]_memattr
Submitted by:	Ian Lepore <freebsd@damnhippie.dyndns.org>
2012-12-19 00:24:31 +00:00
Alan Cox
a1685193bc Eliminate an unused declaration. 2012-09-29 22:28:00 +00:00
Alan Cox
703205f3c6 Implementing pmap_kextract(va) as pmap_extract(kernel_pmap, va) is
problematic because some callers of pmap_kextract() expect its
implementation to be lock-less.  In particular, uma_dbg_alloc() implicitly
requires this.  Otherwise, lock-order reversals occur between pmap locks and
UMA zone locks.  So, this change introduces a lock-less implementation of
pmap_kextract().

Disable recursion on the pvh global lock in the new armv6 pmap.  While
recursion on this lock occurs in the old arm pmap, it thankfully doesn't
occur in the armv6 pmap.

Tested by:	jmg
2012-09-27 05:39:42 +00:00
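The essence of a lock-less pmap_kextract(): kernel page tables are never freed, so they can be walked without taking the pmap lock. The layout and helper below are hypothetical, not the committed armv6 code:

vm_paddr_t
pmap_kextract(vm_offset_t va)
{
	pd_entry_t pde;
	pt_entry_t pte;

	pde = kernel_pmap_l1[va >> L1_S_SHIFT];	/* hypothetical layout */
	if ((pde & L1_TYPE_MASK) == L1_TYPE_S)	/* 1MB section mapping */
		return ((pde & L1_S_FRAME) | (va & L1_S_OFFSET));
	pte = *kernel_l2pte_find(va);		/* hypothetical helper */
	return ((pte & L2_S_FRAME) | (va & L2_S_OFFSET));
}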
Alan Cox
b95d8becdb Eliminate an unused macro. 2012-09-07 01:33:25 +00:00
Oleksandr Tymoshenko
cf1a573f04 Merging projects/armv6, part 1
Cumulative patch of changes that are not vendor-specific:
	- ARMv6 and ARMv7 architecture support
	- ARM SMP support
	- VFP/Neon support
	- ARM Generic Interrupt Controller driver
	- Simplification of startup code for all platforms
2012-08-15 03:03:03 +00:00
Alan Cox
6031c68de4 The page flag PGA_WRITEABLE is set and cleared exclusively by the pmap
layer, but it is read directly by the MI VM layer.  This change introduces
pmap_page_is_write_mapped() in order to completely encapsulate all direct
access to PGA_WRITEABLE in the pmap layer.

Aesthetics aside, I am making this change because amd64 will likely begin
using an alternative method to track write mappings, and having
pmap_page_is_write_mapped() in place allows me to make such a change
without further modification to the MI VM layer.

As an added bonus, tidy up some nearby comments concerning page flags.

Reviewed by:	kib
MFC after:	6 weeks
2012-06-16 18:56:19 +00:00
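The encapsulation itself is a one-liner; a sketch assuming PGA_WRITEABLE lives in the page's aflags as described above:

#define	pmap_page_is_write_mapped(m)	(((m)->aflags & PGA_WRITEABLE) != 0)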
Warner Losh
ee5cac8ab0 trim trailing whitespace 2012-06-13 05:02:51 +00:00
Rafal Jaworowski
68f8dc2950 ARM pmap fixes:
- Write Buffers have to be drained after a write to the Page Table even if
  caches are in write-through mode.

- Make sure to sync PTE in pmap_zero_page_generic().

Submitted by:	Michal Mazur
Reviewed by:	cognet
Obtained from:	Semihalf
MFC after:	1 month
2011-12-15 12:14:15 +00:00
Attilio Rao
71a19bdc64 Commit the support for removing cpumask_t and replacing it directly with
cpuset_t objects.
That is going to offer the underlying support for a simple bump of
MAXCPU and, later, support for numbers of CPUs > 32 (the limit today).

Right now, cpumask_t is an int, 32 bits on all our supported architectures.
cpuset_t, on the other side, is implemented as an array of longs, and is
easily extensible by definition.

The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN

while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.

Some technical notes:
- This commit may be considered an ABI nop for all the architectures
  different from amd64 and ia64 (and sparc64 in the future)
- per-cpu members, which are now converted to cpuset_t, need to be
  accessed avoiding migration, because the size of cpuset_t should be
  considered unknown
- size of cpuset_t objects differs between kernel and userland (this is
  primarily done in order to leave some more space in userland to cope
  with KBI extensions). If you need to access a kernel cpuset_t from
  userland, please refer to the example in this patch on how to do that
  correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now

The patch has been tested by sbruno and Nicholas Esborn on a 4 x 12-core
Opteron setup. More testing on big SMP systems is expected to come soon.
pluknet tested the patch on his 8-way machines on both amd64 and i386.

Tested by:	pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by:	jeff, jhb, sbruno
2011-05-05 14:39:14 +00:00
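The schematic difference between the two types, sketched after sys/_cpuset.h (CPU_SETSIZE is the compile-time bound; details may differ):

typedef u_int cpumask_t;		/* old: hard 32-CPU ceiling */

#define	_NCPUBITS	(sizeof(long) * NBBY)	/* bits per word */
typedef struct _cpuset {
	/* new: grows with CPU_SETSIZE, no word-size ceiling */
	long	__bits[howmany(CPU_SETSIZE, _NCPUBITS)];
} cpuset_t;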
Alan Cox
e6ffa21488 Remove pmap fields that are either unused or not fully implemented.
Discussed with:	kib
2011-02-17 15:36:29 +00:00
John Baldwin
d7899b19f5 Whitespace tweak. 2011-02-09 14:37:33 +00:00
Warner Losh
49a4353505 phys_addr is a PA not a VA so declare it as a vm_paddr_t not a vm_offset_t. 2011-02-05 03:36:34 +00:00
Warner Losh
79f4804918 Remove ancient simulation code. Skyeye simulation never really worked
quite right, hasn't been used in ages, and is likely broken.  QEMU
with GUMSTIX is a more promising road to FreeBSD/arm in emulation
anyway.

Reviewed by:	cognet@
2011-01-05 22:15:57 +00:00
John Baldwin
60c7b36b7a Update various places that store or manipulate CPU masks to use cpumask_t
instead of int or u_int.  Since cpumask_t is currently u_int on all
platforms, this should be just a cosmetic change.
2010-08-11 23:22:53 +00:00
Kip Macy
2965a45315 On Alan's advice, rather than do a wholesale conversion on a single
architecture from page queue lock to a hashed array of page locks
(based on a patch by Jeff Roberson), I've implemented page lock
support in the MI code and have only moved vm_page's hold_count
out from under the page queue mutex to the page lock. This changes
pmap_extract_and_hold on all pmaps.

Supported by: Bitgravity Inc.

Discussed with: alc, jeffr, and kib
2010-04-30 00:46:43 +00:00
Alan Cox
3153e878dd Add support to the virtual memory system for configuring machine-
dependent memory attributes:

Rename vm_cache_mode_t to vm_memattr_t.  The new name reflects the
fact that there are machine-dependent memory attributes that have
nothing to do with controlling the cache's behavior.

Introduce vm_object_set_memattr() for setting the default memory
attributes that will be given to an object's pages.

Introduce and use pmap_page_{get,set}_memattr() for getting and
setting a page's machine-dependent memory attributes.  Add full
support for these functions on amd64 and i386 and stubs for them on
the other architectures.  The function pmap_page_set_memattr() is also
responsible for any other machine-dependent aspects of changing a
page's memory attributes, such as flushing the cache or updating the
direct map.  The uses include kmem_alloc_contig(), vm_page_alloc(),
and the device pager:

  kmem_alloc_contig() can now be used to allocate kernel memory with
  non-default memory attributes on amd64 and i386.

  vm_page_alloc() and the device pager will set the memory attributes
  for the real or fictitious page according to the object's default
  memory attributes.

Update the various pmap functions on amd64 and i386 that map pages to
incorporate each page's memory attributes in the mapping.

Notes: (1) Inherent to this design are safety features that prevent
the specification of inconsistent memory attributes by different
mappings on amd64 and i386.  In addition, the device pager provides a
warning when a device driver creates a fictitious page with memory
attributes that are inconsistent with the real page that the
fictitious page is an alias for. (2) Storing the machine-dependent
memory attributes for amd64 and i386 as a dedicated "int" in "struct
md_page" represents a compromise between space efficiency and the ease
of MFCing these changes to RELENG_7.

In collaboration with: jhb

Approved by:	re (kib)
2009-07-12 23:31:20 +00:00
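A hedged usage sketch of the page-level interface (the attribute constant is chosen for illustration):

	vm_page_t m;

	/* m obtained from vm_page_alloc() or the device pager. */
	if (pmap_page_get_memattr(m) != VM_MEMATTR_WRITE_COMBINING)
		pmap_page_set_memattr(m, VM_MEMATTR_WRITE_COMBINING);

pmap_page_set_memattr() then performs whatever machine-dependent work the change requires, such as flushing caches or updating the page's direct map entry.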
Andrew Thompson
f06a3a36ac Track the kernel mapping of a physical page by a new entry in the vm_page
structure. When the page is shared, the kernel mapping becomes a special
type of managed page to force the cache off the page mappings. This is
needed to avoid stale entries on all ARM VIVT caches, and on VIPT caches
with cache color issues.

Submitted by:	Mark Tinguely
Reviewed by:	alc
Tested by:	Grzegorz Bernacki, thompsa
2009-06-18 20:42:37 +00:00
Alan Cox
f83954e24a Define the kernel pmap in the same way on arm as on every other
architecture.

Eliminate an unused definition.

Tested by:	cognet
2009-05-07 05:42:13 +00:00
Rafal Jaworowski
8e321b7943 Support kernel crash minidumps on the ARM architecture.
Obtained from:	Juniper Networks, Semihalf
2008-11-06 16:20:27 +00:00
Olivier Houchard
41c0d2813b Remove unused pv_list_count from the vm_page, and pm_count from the struct
pmap.

Submitted by:	Mark Tinguely
2008-03-06 21:59:47 +00:00
Olivier Houchard
8f2948f1c1 Bring in the nice work from Mark Tinguely on arm pmap.
The only downside is that it renames pmap_vac_me_harder() to pmap_fix_cache().
From Mark's email on -arm :
pmap_get_vac_flags(), pmap_vac_me_harder(), pmap_vac_me_kpmap(), and
pmap_vac_me_user() have been rewritten as pmap_fix_cache() to be more
efficient in the kernel map case. I also removed the references to
the md.kro_mappings, md.krw_mappings, md.uro_mappings, and md.urw_mappings
counts.

In pmap_clearbit(), we can also skip over tests and writeback/invalidations
in the PVF_MOD and PVF_REF cases if those bits are not set in pv_flags.
PVF_WRITE will turn caching back on and remove the PVF_MOD bit.

In pmap_nuke_pv(), the vm_page_flag_clear(pg, PG_WRITEABLE) has been moved
to the pmap_fix_cache().

We can be more aggressive in attempting to turn caching back on by calling
pmap_fix_cache() at times when it may be appropriate to turn the cache on
(a kernel mapping has been removed, a write mapping has been removed, or a
read mapping has been removed and we know the page does not have multiple
write mappings).

In pmap_remove_pages() the cpu_idcache_wbinv_all() is moved to happen
before the page tables are NULLed because the caches are virtually
indexed and virtually tagged.

In pmap_remove_all(), the pmap_remove_write(m) is added before the
page tables are NULLed because the caches are virtually indexed and
virtually tagged. This also removes the need for the caches fixing routine
(whichever is being used pmap_vac_me_harder() or pmap_fix_cache()) to be
called on any of these mappings.

In pmap_remove(), I simplified the cache cleaning process and removed
extra TLB removals. Basically, if more than PMAP_REMOVE_CLEAN_LIST_SIZE
entries are removed, just flush the entire cache.
2008-01-31 00:05:40 +00:00
Olivier Houchard
b4db6fd942 Properly handle supersections.
Make sure we cache entries in the L2 cache.

Approved by:	re (blanket)
2007-07-27 14:45:04 +00:00
Olivier Houchard
10d8c18005 Introduce pmap_kenter_supersection(), which maps 16MB super-sections into
the kernel pmap.
Document the behavior of the xscale core 3 a bit more. 2007-06-11 21:29:26 +00:00
2007-06-11 21:29:26 +00:00
Olivier Houchard
fe85f6cee8 Switch the kernel's pmap domain from 15 to 0.
This should be a no-op, and it is needed for xscale core 3 supersections
support, as supersections are always part of domain 0.
2007-05-19 12:47:34 +00:00
Olivier Houchard
47010239a8 - Add bounce pages for arm, largely based on the i386 implementation.
- Add a default parent dma tag, similar to what has been done for sparc64.
- Before invalidating the dcache in POSTREAD, save the bytes which are in the
same cachelines as our buffers but not part of them, and restore them after
the invalidation.
2007-01-17 00:53:05 +00:00
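A schematic of that POSTREAD save/invalidate/restore dance (variable names illustrative; arm_dcache_align_mask is the cache-line mask):

	vm_offset_t start, end, hd, tl;
	char save_head[64], save_tail[64];	/* >= max line size */

	start = (vm_offset_t)buf;
	end = start + len;
	hd = start & ~arm_dcache_align_mask;		/* align down */
	tl = (end + arm_dcache_align_mask) & ~arm_dcache_align_mask;

	/* Preserve the neighbors sharing our first and last cache lines. */
	memcpy(save_head, (void *)hd, start - hd);
	memcpy(save_tail, (void *)end, tl - end);

	cpu_dcache_inv_range(hd, tl - hd);	/* discard the stale lines */

	/* Restore the unrelated bytes clobbered by the invalidation. */
	memcpy((void *)hd, save_head, start - hd);
	memcpy((void *)end, save_tail, tl - end);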
Ruslan Ermilov
26af9ac7d0 Fix a comment. 2006-11-13 06:26:57 +00:00
Alan Cox
cc0d48ffb6 Eliminate unused global variables. 2006-11-11 20:57:52 +00:00
Olivier Houchard
676b1fbdbf Identify the xscale 81342. 2006-11-07 22:36:57 +00:00
Olivier Houchard
49953e11d7 Rewrite ARM_USE_SMALL_ALLOC so that instead of the current behavior, it maps
the whole physical memory, cached, using 1MB section mappings. This reduces
the address space available for user processes a bit, but given the amount of
memory a typical arm machine has, it is not (yet) a big issue.
It then provides a uma_small_alloc() that works as it does for architectures
which have a direct mapping.
2006-08-08 20:59:38 +00:00
Alan Cox
ed48a217f6 Add partial pmap locking.
Eliminate the unused allpmaps list.

Tested by: cognet@
2006-06-06 04:32:20 +00:00
Olivier Houchard
87adbb81cc Include machine/cpuconf.h in pmap.h in order to get ARM_NMMUS defined,
to appease -Wundef.
2006-05-31 11:57:37 +00:00
Olivier Houchard
d5d776c16b Resurrect Skyeye support:
Add a new option, SKYEYE_WORKAROUNDS, which as the name suggests adds
workarounds for things skyeye doesn't simulate. Specifically:
- Use USART0 instead of DBGU as the console, make it not use DMA, and
manually provoke an interrupt when we're done in the transmit function.
- Skyeye maintains an internal counter for the clock, but apparently there's
no way to access it, so hack the timecounter code to return a value which
is increased at every clock interrupt. This is gross, but I didn't find a
better way to implement timecounters without hacking Skyeye to get the
counter value.
- Force the write-back of PTEs once we're done writing them, even if they
are supposed to be write-through. I don't know why I have to do that.
2006-05-13 23:41:16 +00:00
Olivier Houchard
174329aff2 MFp4: Don't write back the PTEs if they are mapped write-through; this was
apparently only needed because skyeye has bugs in its cache emulation.
2006-04-09 20:03:03 +00:00