Commit Graph

77 Commits

Author SHA1 Message Date
ian
8e7c521fdb Add a pmap_kremove_device() to undo mappings made with pmap_kenter_device().
Previously we used pmap_kremove(), but with ARM_NEW_PMAP it does the remove
in a way that isn't SMP-coherent (which is appropriate in some circumstances
such as mapping/unmapping sf buffers).  With matching enter/remove routines
for device mappings, each low-level implementation can do the right thing.

Reviewed by:	Svatopluk Kraus <onwahe@gmail.com>
2015-04-10 13:26:35 +00:00
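A minimal sketch (not part of the commit above) of the pairing it establishes; the three-argument prototypes are an assumption rather than copied from the arm headers:

    /* Hypothetical caller: whatever pmap_kenter_device() set up is torn
     * down by the matching, equally SMP-coherent remove routine instead
     * of the generic pmap_kremove(). */
    void
    map_and_unmap_device(vm_offset_t va, vm_paddr_t pa, vm_size_t size)
    {
            pmap_kenter_device(va, size, pa);   /* assumed (va, size, pa) order */
            /* ... access the device registers ... */
            pmap_kremove_device(va, size);      /* matching teardown */
    }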
andrew
eb7a3e863e Remove ARM9_CACHE_WRITE_THROUGH, none of our configs define it. 2015-03-29 18:59:04 +00:00
andrew
4458b2f4b7 Remove support for CPU_ARM10. No kernel configs could possibly use this as
it's not an available option. Along with this we will never support this
cpu type as very few arm10 chips were made.
2015-03-29 17:13:44 +00:00
ian
6fb7bdd343 New pmap code for armv6. Disabled by default, option ARM_NEW_PMAP enables it.
This is pretty much a complete rewrite based on the existing i386 code.  The
patches have been circulating for a couple years and have been looked at by
plenty of people, but I'm not putting anybody on the hook as having reviewed
this in any formal sense except myself.

After this has gotten wider testing from the user community, ARM_NEW_PMAP
will become the default and various dregs of the old pmap code will be
removed.

Submitted by:	Svatopluk Kraus <onwahe@gmail.com>,
	  	Michal Meloun <meloun@miracle.cz>
2015-03-26 21:13:53 +00:00
andrew
0808eece71 Rename pmap_kenter_temp to pmap_kenter_temporary to be consistent with the
other architectures with this function.

Submitted by:	Svatopluk Kraus <onwahe at gmail.com>
Submitted by:	Michal Meloun <meloun at miracle.cz>
2014-09-11 10:53:57 +00:00
andrew
8e6fa54228 Move an else case that was missed in r263676 2014-03-24 08:24:32 +00:00
andrew
30470cc7be Reorder the pmap macros so "ARM_MMU_V6 + ARM_MMU_V7" is first. As they are
identical this allows us to build for both v6 and v7 together.
2014-03-23 21:08:18 +00:00
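A rough sketch of the preprocessor pattern the reordering relies on; the example macro and its values are purely illustrative:

    /* v6 and v7 use identical definitions, so testing their sum first
     * lets a kernel configured for both MMU types land in one branch. */
    #if (ARM_MMU_V6 + ARM_MMU_V7) != 0
    #define PMAP_EXAMPLE_MACRO  value_for_v6_and_v7     /* illustrative */
    #elif ARM_MMU_GENERIC != 0
    #define PMAP_EXAMPLE_MACRO  value_for_older_cores   /* illustrative */
    #endif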
ian
ab495f6c9b Remove all traces of support for ARM chips prior to the arm9 series. We
never actually ran on these chips (other than using SA1 support in an
emulator to do the early porting to FreeBSD long long ago).  The clutter
and complexity of some of this code keeps getting in the way of other
maintenance, so it's time to go.
2014-03-09 21:12:31 +00:00
zbb
2edbe39b71 Always clear L1 PTE descriptor when removing superpage on ARM
Invalidate the L1 PTE regardless of the existence of the corresponding
l2_bucket. This is relevant when a superpage is entered via
pmap_enter_object() and fixes a crash when a page is entered
in place of a superpage that was not properly removed.
2014-02-15 13:13:00 +00:00
ian
5da54698fa Remove the ARM_USE_SMALL_ALLOC option and code related to it.
This was an optimization used only by a few xscale platforms.  Part of
the optimization was to create a direct map for all physical pages, and
that resulted in making multiple mappings of pages in a way that bypassed
the logic in pmap.c to handle VIVT cache aliasing.  It also just generally
made the code more complex and hard to maintain for all SoCs.

Reviewed by:	cognet
2014-02-08 22:21:38 +00:00
ian
85e06f3b7c Make PTE_DEVICE a synonym for PTE_NOCACHE on armv4, to make it easier to
share the same code on both architectures.
2013-11-05 04:06:29 +00:00
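A sketch of what such a synonym amounts to; essentially a one-line alias (the #ifndef guard is an assumption):

    /* On armv4, device memory is simply mapped uncached, so the device
     * attribute can resolve to the same value as the uncached one. */
    #ifndef PTE_DEVICE
    #define PTE_DEVICE      PTE_NOCACHE
    #endif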
ian
fefbe5ab0a Move remaining code and data related to static device mapping into the
new devmap.[ch] files.  Emphasize the MD nature of these things by using
the prefix arm_devmap_ on the function and type names (already a few of
these things found their way into MI code, hopefully it will be harder to
do by accident in the future).
2013-11-04 22:45:26 +00:00
ian
64750d8ea3 Begin reducing code duplication in arm pmap.c and pmap-v6.c by factoring
out common code related to mapping device memory into a new devmap.c file.

Remove the growing duplication of code that used pmap_devmap_find_pa() and
then did some math with the returned results to generate a virtual address,
and likewise in reverse to get a physical address.  Now there are a pair
of functions, arm_devmap_vtop() and arm_devmap_ptov(), to do that.  The
bus_space_map() implementations are rewritten in terms of these.
2013-11-04 19:44:37 +00:00
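A hedged sketch of the table walk these helpers centralize; the table name and entry fields follow the usual devmap layout but are assumptions, not copied from devmap.c:

    void *
    example_devmap_ptov(vm_paddr_t pa, vm_size_t size)
    {
            const struct arm_devmap_entry *pd;

            /* Find a static device mapping covering [pa, pa + size) and
             * translate within it; the reverse (vtop) walk is symmetric. */
            for (pd = example_devmap_table; pd->pd_size != 0; pd++) {
                    if (pa >= pd->pd_pa && pa + size <= pd->pd_pa + pd->pd_size)
                            return ((void *)(pd->pd_va + (pa - pd->pd_pa)));
            }
            return (NULL);
    }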
zbb
2408916435 Fix condition that determines PMAP_NEEDS_PTE_SYNC value for ARM
Use the values of the correct defines to determine the statement's result.
The ARM_ARCH_ symbols are always defined, hence only their values are relevant.

Reviewed by:	cognet
2013-10-28 23:42:44 +00:00
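A small sketch of the distinction the fix depends on; the _BUGGY/_FIXED names are illustrative only:

    /* Wrong: the ARM_ARCH_* symbols are always defined (to 0 or 1), so
     * defined() is always true and the condition never varies. */
    #if defined(ARM_ARCH_5)
    #define PMAP_NEEDS_PTE_SYNC_BUGGY   1
    #endif

    /* Right: test the values, which reflect the configured CPU types. */
    #if (ARM_ARCH_4 + ARM_ARCH_5) != 0
    #define PMAP_NEEDS_PTE_SYNC_FIXED   1
    #endif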
ian
7b011cd937 Retire arm_remap_nocache() and the data and constants associated with it.
The only remaining user was the code that allocates bounce pages for armv4
busdma.  It's not clear why bounce pages would need uncached memory, but
if that ever changes, kmem_alloc_attr() would be the way to get it.
2013-10-27 03:13:26 +00:00
cognet
d8a25b54a7 Spell cpu_l2cache_wb_range correctly. 2013-10-17 21:38:14 +00:00
cognet
8abb82aa67 - Switch to use WBWA mappings for page tables on armv6, this is needed for SMP.
- Fix PTE_SYNC() for PIPT L2 caches, using the virtual address wasn't so useful.
- Use PTE_SYNC() for >= armv6
2013-10-17 21:06:19 +00:00
raj
f925cc353c Introduce superpages support for ARMv6/v7.
Promoting base pages to superpages can increase TLB coverage and allow for
efficient use of page table entries.  This development provides FreeBSD/ARM
with a superpage management mechanism roughly equivalent to what we have for
the i386 and amd64 architectures.

1. Add mechanism for automatic promotion of 4KB page mappings to 1MB section
   mappings (and demotion when not needed, respectively).

2. Managed and non-kernel mappings are now superpages-aware.

3. The functionality can be enabled by setting "vm.pmap.sp_enabled" tunable to
   a non-zero value (either in loader.conf or by modifying "sp_enabled"
   variable in pmap-v6.c file).  By default, automatic promotion is currently
   disabled.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-08-26 17:12:30 +00:00
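A hedged sketch of how a loader tunable such as vm.pmap.sp_enabled is typically wired up inside a pmap; the exact declaration in pmap-v6.c may differ:

    static int sp_enabled = 0;          /* automatic promotion off by default */
    TUNABLE_INT("vm.pmap.sp_enabled", &sp_enabled);
    SYSCTL_INT(_vm_pmap, OID_AUTO, sp_enabled, CTLFLAG_RDTUN, &sp_enabled, 0,
        "Enable automatic promotion to superpages");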
raj
f55114b9af Do not use pv_kva on ARMv6/v7 and save some space on each vm_page. It's only
relevant for older ARM variants (with virtual cache).

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	gber
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-08-19 16:16:49 +00:00
raj
f8dbb37330 Clear all L2 PTE protection bits before their configuration.
Revise L2_S_PROT_MASK to include all of the protection bits.  Notice that
clearing these bits does not always take away the corresponding permissions
(for example, permission is granted when the bit is cleared). The bits are
cleared but are to be set or left cleared accordingly in pmap_set_prot(),
pmap_enter_locked(), etc.

Clear L2_XN along with L2_S_PROT_MASK in pmap_set_prot() so that all
permissions related bits are cleared before actual configuration.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	gber
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-08-19 15:12:36 +00:00
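A minimal sketch of the clear-then-set pattern described above, as it might appear in pmap_set_prot(); locking, TLB handling and the exact macro choices are omitted or assumed:

    /* First drop every permission-related bit ... */
    *ptep &= ~(L2_S_PROT_MASK | L2_XN);
    /* ... then re-apply exactly the requested access. */
    if ((prot & VM_PROT_EXECUTE) == 0)
            *ptep |= L2_XN;         /* execute never */
    if ((prot & VM_PROT_WRITE) == 0)
            *ptep |= L2_APX;        /* write disable (APX) */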
gber
5145ecb154 Stop using PVF_MOD, PVF_REF & PVF_EXEC flags in pv_entry, use PTE.
Using PVF_MOD, PVF_REF and PVF_EXEC is redundant as we can get the proper
info from PTE bits.
When the mapping is marked as executable and has been referenced we assume
that it has been executed. Similarly, when the mapping is set to be writable
and is referenced, it must have been due to write access to it.
PVF_MOD and PVF_REF flags are kept just for pmap_clearbit() usage,
to pass the information on which bit should be cleared.

Submitted by:   Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:   The FreeBSD Foundation, Semihalf
2013-05-23 12:23:18 +00:00
gber
8e77450528 Improve, optimize and clean-up ARMv6/v7 memory management related code.
Use pmap_find_pv() where needed instead of duplicating its code throughout
pmap-v6.

Avoid a possible NULL pointer dereference in pmap_enter_locked().
When trying to get m->md.pv_memattr, make sure that m != NULL;
in particular, vector_page is set up with a NULL m.

Do not set PGA_REFERENCED flag in pmap_enter_pv().
On ARM any new page reference will result in either entering the new
mapping by calling pmap_enter, etc. or fixing-up the existing mapping in
pmap_fault_fixup().
Therefore we set the PGA_REFERENCED flag in the cases mentioned above, and
setting it again later in pmap_enter_pv() is just a waste of cycles.

Delete unused pm_pdir pointer from the pmap structure.

Rearrange brackets in the fault cause detection in trap.c.
Place the brackets correctly so that the flow of the conditions is
immediately apparent.

Unify naming in pmap-v6.c and improve style.
Use naming consistent throughout the pmap and compatible with the other
pmaps, and improve style where possible:
pm   -> pmap
pg   -> m
opg  -> om
*pt  -> *ptep
*pte -> *ptep
*pde -> *pdep

Submitted by:   Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:   The FreeBSD Foundation, Semihalf
2013-05-23 12:15:23 +00:00
gber
8c69e11fe8 Switch to AP[2:1] access permissions model. Store "referenced"
bit in PTE.

Enable the Access Flag in CPU control. With AF enabled, each valid mapping
needs to have the referenced bit set in its PTE in order to be cached
in the TLB.

AP[0] bit is to be used as reference flag.
All access permissions are encoded by AP[2:1] wherein AP[1] is in fact
"user enable" and AP[2](APX) is "write disable".

All mappings are always set to be valid. Reference emulation is performed
by setting/clearing reference flag in PTE.

md.pvh_attrs is no longer necessary; however, pv_flags is still being used
for now.

Marking vm_page as "dirty" or "referenced" is being performed on:
- page or flag fault servicing in pmap_fault_fixup(), based on the fault
  type
- vm_fault servicing in pmap_enter() according to the desired protections
  and faulty access type
Redundant page marking has been removed as on ARM we know exactly when the
particular page is referenced or is going to be written.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-05-23 12:07:41 +00:00
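A hedged illustration of the AP encoding described above; the PTE_AP* names and the helper are written out here for clarity and are not the macros used in pmap-v6.c:

    #define PTE_AP0         0x010   /* AP[0]: referenced/"access" flag */
    #define PTE_AP1         0x020   /* AP[1]: user access enable       */
    #define PTE_APX         0x200   /* AP[2] (APX): write disable      */

    /* A mapping is user-writable when user access is enabled and the
     * write-disable bit is clear. */
    static inline int
    example_pte_user_writable(uint32_t pte)
    {
            return ((pte & PTE_AP1) != 0 && (pte & PTE_APX) == 0);
    }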
gber
a2ea3b5e3c Port the new PV entry allocator from amd64/i386/mips to armv6/v7.
PV entries are now roughly half the size.
Instead of using a shared UMA zone for 28 byte pv entries
(two 8-byte tailq nodes, a 4 byte pointer, a 4 byte address and 4 byte
flags), we allocate a page at a time per process.
This provides 252 pv entries per process (actually, per pmap address space)
and eliminates one of the 8-byte tailq entries since we can now track
per-process pv entries implicitly.
The pointer to the pmap can be eliminated by using address arithmetic to
locate the metadata in the page header, yielding a single pointer shared by
all 252 entries. There is an 8-int bitmap for the freelist of those 252
entries.
Under serious memory pressure, another pv_chunk can be allocated by
freeing some pages in pmap_pv_reclaim().

Added pv_entry/pv_chunk related statistics to pmap.
pv_entry/pv_chunk statistics can be accessed via sysctl vm.pmap.

Ported PTE freelist of KVA allocation and maintenance from i386.
Using an idea from Stephan Uphoff, use the empty pte's that correspond
to the unused kva in the pv memory block to thread a freelist through.
This allows us to free pages that used to be used for pv entry chunks
since we can now track holes in the kva memory block.

Because ARM pmap.c and pmap-v6.c use the same header while their pv_entry,
pmap and md_page structures differ, the code designed for ARMv6/7 had to be
separated from the code for the other ARMs.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-05-14 09:47:58 +00:00
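A hedged sketch of the chunk layout described above: one page holding the shared header, an 8-int free bitmap and 252 pv entries. The field names follow the i386/amd64 allocator this was ported from and may not match the arm headers exactly:

    #define _NPCM   8               /* ints in the free bitmap       */
    #define _NPCPV  252             /* pv entries per one-page chunk */

    struct pv_chunk {
            pmap_t                  pc_pmap;            /* single shared back-pointer */
            TAILQ_ENTRY(pv_chunk)   pc_list;            /* per-pmap chunk list        */
            uint32_t                pc_map[_NPCM];      /* bitmap of free entries     */
            TAILQ_ENTRY(pv_chunk)   pc_lru;             /* global list for reclaim    */
            struct pv_entry         pc_pventry[_NPCPV]; /* the entries themselves     */
    };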
gber
961ec795d0 Fix L2 PTE access permissions management.
Keep following access permissions:

APX     AP     Kernel     User
 1      01       R         N
 1      10       R         R
 0      01      R/W        N
 0      11      R/W       R/W

Avoid using reserved in ARMv6 APX|AP settings:
- For an unprivileged (user) access without write permission, the access
  permission bits were being set to APX|AP = 111, a value that is reserved
  on ARMv6 (but valid on ARMv7).

Fix-up faulting userland accesses properly:
- A wrong condition statement in pmap_fault_fixup() caused any genuine,
  unprivileged access to be fixed up instead of simply doing nothing and
  returning. Starting from now we ensure the proper reaction to illicit
  user accesses.

L2_S_PROT_R and L2_S_PROT_U names might be misleading as they do not
reflect real permission levels. It will be clarified in following
patches (switch to AP[2:1] permissions model).

Obtained from: Semihalf
2013-05-06 15:30:34 +00:00
kib
63efc821c3 Add pmap function pmap_copy_pages(), which copies the content of the
pages around, taking array of vm_page_t both for source and
destination.  Starting offsets and total transfer size are specified.

The function implements optimal algorithm for copying using the
platform-specific optimizations.  For instance, on the architectures
were the direct map is available, no transient mappings are created,
for i386 the per-cpu ephemeral page frame is used.  The code was
typically borrowed from the pmap_copy_page() for the same
architecture.

Only i386/amd64, powerpc aim and arm/arm-v6 implementations were
tested at the time of commit. High-level code, not committed yet to
the tree, ensures that the use of the function is only allowed after
explicit enablement.

For sparc64, the existing code has known issues and a stub is added
instead, to allow the kernel to link.

Sponsored by:	The FreeBSD Foundation
Tested by:	pho (i386, amd64), scottl (amd64), ian (arm and arm-v6)
MFC after:	2 weeks
2013-03-14 20:18:12 +00:00
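The interface as described above, with a hedged usage line; src_pages, dst_pages and the offsets are hypothetical caller state:

    void    pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset,
                vm_page_t mb[], vm_offset_t b_offset, int xfersize);

    /* Copy xfer bytes starting soff bytes into one run of pages to
     * doff bytes into another, without mapping anything by hand. */
    pmap_copy_pages(src_pages, soff, dst_pages, doff, xfer);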
alc
d04ffbb5a5 Initialize vm_max_kernel_address on non-FDT platforms. (This should have
been included in r246926.)

The second parameter to pmap_bootstrap() is redundant.  Eliminate it.

Reviewed by:	andrew
2013-02-20 16:48:52 +00:00
gonzo
d6fdadb6d6 Switch default cache type for ARMv6/ARMv7 from write-through to
writeback-writeallocate
2013-01-08 02:40:20 +00:00
gonzo
1fa6530f64 Fix misleading comment 2012-12-20 03:33:33 +00:00
cognet
58faac84ca Properly implement pmap_[get|set]_memattr
Submitted by:	Ian Lepore <freebsd@damnhippie.dyndns.org>
2012-12-19 00:24:31 +00:00
alc
bc883ac4c7 Eliminate an unused declaration. 2012-09-29 22:28:00 +00:00
alc
ffd834ae9a Implementing pmap_kextract(va) as pmap_extract(kernel_pmap, va) is
problematic because some callers to pmap_kextract() expect its
implementation to be lock-less.  In particular, uma_dbg_alloc() implicitly
requires this.  Otherwise, lock-order reversals occur between pmap locks and
UMA zone locks.  So, this change introduces a lock-less implementation of
pmap_kextract().

Disable recursion on the pvh global lock in the new armv6 pmap.  While
recursion on this lock occurs in the old arm pmap, it thankfully doesn't
occur in the armv6 pmap.

Tested by:	jmg
2012-09-27 05:39:42 +00:00
alc
55f61bbb9b Eliminate an unused macro. 2012-09-07 01:33:25 +00:00
gonzo
032427f3e9 Merging projects/armv6, part 1
Cumulative patch of changes that are not vendor-specific:
	- ARMv6 and ARMv7 architecture support
	- ARM SMP support
	- VFP/Neon support
	- ARM Generic Interrupt Controller driver
	- Simplification of startup code for all platforms
2012-08-15 03:03:03 +00:00
alc
6eeaee04e4 The page flag PGA_WRITEABLE is set and cleared exclusively by the pmap
layer, but it is read directly by the MI VM layer.  This change introduces
pmap_page_is_write_mapped() in order to completely encapsulate all direct
access to PGA_WRITEABLE in the pmap layer.

Aesthetics aside, I am making this change because amd64 will likely begin
using an alternative method to track write mappings, and having
pmap_page_is_write_mapped() in place allows me to make such a change
without further modification to the MI VM layer.

As an added bonus, tidy up some nearby comments concerning page flags.

Reviewed by:	kib
MFC after:	6 weeks
2012-06-16 18:56:19 +00:00
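A sketch of the encapsulation this introduces; at the time the helper amounted to a one-line test of the page's flag, roughly:

    /* MI code asks the question through the pmap layer instead of
     * reading PGA_WRITEABLE directly. */
    #define pmap_page_is_write_mapped(m)    (((m)->aflags & PGA_WRITEABLE) != 0)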
imp
e9a3ea5243 trim trailing whitespace 2012-06-13 05:02:51 +00:00
raj
b9a8b56456 ARM pmap fixes:
- Write buffers have to be drained after a write to the page table even if
  caches are in write-through mode.

- Make sure to sync PTE in pmap_zero_page_generic().

Submitted by:	Michal Mazur
Reviewed by:	cognet
Obtained from:	Semihalf
MFC after:	1 month
2011-12-15 12:14:15 +00:00
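A minimal sketch of the ordering rule the first fix enforces; PTE_SYNC() and cpu_drain_writebuf() are existing arm primitives, though their placement here is only illustrative:

    *ptep = npte;               /* store the new page table entry       */
    PTE_SYNC(ptep);             /* write the PTE's cache line back      */
    cpu_drain_writebuf();       /* drain the write buffer so the table
                                 * walker sees the update, even with
                                 * write-through caches                 */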
attilio
fe4de567b5 Commit the support for removing cpumask_t and replacing it directly with
cpuset_t objects.
That is going to offer the underlying support for a simple bump of
MAXCPU and then support for a number of CPUs > 32 (the current limit).

Right now, cpumask_t is an int, 32 bits on all our supported architectures.
cpuset_t, on the other hand, is implemented as an array of longs, and is
easily extendable by definition.

The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN

while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.

Some technical notes:
- This commit may be considered an ABI nop for all the architectures
  different from amd64 and ia64 (and sparc64 in the future)
- per-cpu members, which are now converted to cpuset_t, need to be
  accessed while avoiding migration, because the size of cpuset_t should be
  considered unknown
- the size of cpuset_t objects differs between kernel and userland (this is
  primarily done in order to leave some more space in userland to cope
  with KBI extensions). If you need to access a kernel cpuset_t from
  userland, please refer to the example in this patch on how to do that
  correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now

The patch has been tested by sbruno and Nicholas Esborn on an Opteron
system with 4 x 12-core CPUs. More testing on big SMP is expected to come
soon. pluknet tested the patch with his 8-way systems on both amd64 and i386.

Tested by:	pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by:	jeff, jhb, sbruno
2011-05-05 14:39:14 +00:00
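A hedged before/after sketch of what the conversion means for code that manipulates CPU sets; the CPU_*() macros come from <sys/cpuset.h>:

    #include <sys/cpuset.h>

    cpuset_t active;

    CPU_ZERO(&active);                  /* was: mask = 0;                */
    CPU_SET(cpuid, &active);            /* was: mask |= 1 << cpuid;      */
    if (CPU_ISSET(cpuid, &active))      /* was: if (mask & (1 << cpuid)) */
            CPU_CLR(cpuid, &active);    /* was: mask &= ~(1 << cpuid);   */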
alc
2f4da8e71e Remove pmap fields that are either unused or not fully implemented.
Discussed with:	kib
2011-02-17 15:36:29 +00:00
jhb
059cd3cb74 Whitespace tweak. 2011-02-09 14:37:33 +00:00
imp
84d5c43402 phys_addr is a PA not a VA so declare it as a vm_paddr_t not a vm_offset_t. 2011-02-05 03:36:34 +00:00
imp
97b4cbd8b4 Remove ancient simulation code. Skyeye simulation never really worked
quite right and hasn't been used in ages and is likely broken.  QEMU
with GUMSTIX is a more promising road to FreeBSD/arm in emulation
anyway.

Reviewed by:	cognet@
2011-01-05 22:15:57 +00:00
jhb
1c3734f021 Update various places that store or manipulate CPU masks to use cpumask_t
instead of int or u_int.  Since cpumask_t is currently u_int on all
platforms this should just be a cosmetic change.
2010-08-11 23:22:53 +00:00
kmacy
1dc1263413 On Alan's advice, rather than do a wholesale conversion on a single
architecture from page queue lock to a hashed array of page locks
(based on a patch by Jeff Roberson), I've implemented page lock
support in the MI code and have only moved vm_page's hold_count
out from under page queue mutex to page lock. This changes
pmap_extract_and_hold on all pmaps.

Supported by: Bitgravity Inc.

Discussed with: alc, jeffr, and kib
2010-04-30 00:46:43 +00:00
alc
ea60573817 Add support to the virtual memory system for configuring machine-
dependent memory attributes:

Rename vm_cache_mode_t to vm_memattr_t.  The new name reflects the
fact that there are machine-dependent memory attributes that have
nothing to do with controlling the cache's behavior.

Introduce vm_object_set_memattr() for setting the default memory
attributes that will be given to an object's pages.

Introduce and use pmap_page_{get,set}_memattr() for getting and
setting a page's machine-dependent memory attributes.  Add full
support for these functions on amd64 and i386 and stubs for them on
the other architectures.  The function pmap_page_set_memattr() is also
responsible for any other machine-dependent aspects of changing a
page's memory attributes, such as flushing the cache or updating the
direct map.  The uses include kmem_alloc_contig(), vm_page_alloc(),
and the device pager:

  kmem_alloc_contig() can now be used to allocate kernel memory with
  non-default memory attributes on amd64 and i386.

  vm_page_alloc() and the device pager will set the memory attributes
  for the real or fictitious page according to the object's default
  memory attributes.

Update the various pmap functions on amd64 and i386 that map pages to
incorporate each page's memory attributes in the mapping.

Notes: (1) Inherent to this design are safety features that prevent
the specification of inconsistent memory attributes by different
mappings on amd64 and i386.  In addition, the device pager provides a
warning when a device driver creates a fictitious page with memory
attributes that are inconsistent with the real page that the
fictitious page is an alias for. (2) Storing the machine-dependent
memory attributes for amd64 and i386 as a dedicated "int" in "struct
md_page" represents a compromise between space efficiency and the ease
of MFCing these changes to RELENG_7.

In collaboration with: jhb

Approved by:	re (kib)
2009-07-12 23:31:20 +00:00
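A hedged illustration of the new kmem_alloc_contig() capability; the argument list shown approximates the interface of that era and has changed since:

    vm_offset_t va;

    /* Physically contiguous, page-aligned kernel memory mapped uncacheable. */
    va = kmem_alloc_contig(kernel_map, size, M_WAITOK, 0, ~(vm_paddr_t)0,
        PAGE_SIZE, 0, VM_MEMATTR_UNCACHEABLE);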
thompsa
f3a1b951fc Track the kernel mapping of a physical page with a new entry in the vm_page
structure. When the page is shared, the kernel mapping becomes a special
type of managed page to force the cache off the page mappings. This is
needed to avoid stale entries on all ARM VIVT caches, and on VIPT caches
with cache color issues.

Submitted by:	Mark Tinguely
Reviewed by:	alc
Tested by:	Grzegorz Bernacki, thompsa
2009-06-18 20:42:37 +00:00
alc
5cc7fa2b00 Define the kernel pmap in the same way on arm as on every other
architecture.

Eliminate an unused definition.

Tested by:	cognet
2009-05-07 05:42:13 +00:00
raj
032a270ba5 Support kernel crash mini dumps on ARM architecture.
Obtained from:	Juniper Networks, Semihalf
2008-11-06 16:20:27 +00:00
cognet
26dccf5aec Remove unused pv_list_count from the vm_page, and pm_count from the struct
pmap.

Submitted by:	Mark Tinguely
2008-03-06 21:59:47 +00:00
cognet
4c1734c71d Bring in the nice work from Mark Tinguely on arm pmap.
The only downside is that it renames pmap_vac_me_harder() to pmap_fix_cache().
From Mark's email on -arm :
pmap_get_vac_flags(), pmap_vac_me_harder(), pmap_vac_me_kpmap(), and
pmap_vac_me_user() have been rewritten as pmap_fix_cache() to be more
efficient in the kernel map case. I also removed the reference to
the md.kro_mappings, md.krw_mappings, md.uro_mappings, and md.urw_mappings
counts.

In pmap_clearbit(), we can also skip over tests and writeback/invalidations
in the PVF_MOD and PVF_REF cases if those bits are not set in the pv_flag.
PVF_WRITE will turn caching back on and remove the PV_MOD bit.

In pmap_nuke_pv(), the vm_page_flag_clear(pg, PG_WRITEABLE) has been moved
to the pmap_fix_cache().

We can be more aggressive in attempting to turn caching back on by calling
pmap_fix_cache() at times that may be appropriate to turn cache on
(a kernel mapping has been removed, a write has been removed or a read
has been removed and we know the mapping does not have multiple write
mappings to a page).

In pmap_remove_pages() the cpu_idcache_wbinv_all() is moved to happen
before the page tables are NULLed because the caches are virtually
indexed and virtually tagged.

In pmap_remove_all(), the pmap_remove_write(m) is added before the
page tables are NULLed because the caches are virtually indexed and
virtually tagged. This also removes the need for the caches fixing routine
(whichever is being used pmap_vac_me_harder() or pmap_fix_cache()) to be
called on any of these mappings.

In pmap_remove(), I simplified the cache cleaning process and removed
extra TLB removals. Basically, if more than PMAP_REMOVE_CLEAN_LIST_SIZE
mappings are removed, then just flush the entire cache.
2008-01-31 00:05:40 +00:00