Commit Graph

63 Commits

Author SHA1 Message Date
Pedro F. Giffuni
51369649b0 sys: further adoption of SPDX licensing ID tags.
Mainly focus on files that use BSD 3-Clause license.

The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well known
opensource licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.

Special thanks to Wind River for providing access to "The Duke of
Highlander" tool: an older (2014) run over FreeBSD tree was useful as a
starting point.
2017-11-20 19:43:44 +00:00
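
For illustration, a tag of the kind this change adds sits at the top of a
file's license comment like this (a sketch; BSD-3-Clause matching the
licenses targeted here, with the full license text retained below the tag):

    /*-
     * SPDX-License-Identifier: BSD-3-Clause
     *
     * Copyright (c) <year> <holder>
     * ... full BSD 3-Clause license text, unchanged ...
     */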
Warner Losh
fbbd9655e5 Renumber copyright clause 4
Renumber clause 4 to 3, per what everybody else did when BSD granted
them permission to remove clause 3. My insistence on keeping the same
numbering for legal reasons is too pedantic, so give up on that point.

Submitted by:	Jan Schaumann <jschauma@stevens.edu>
Pull Request:	https://github.com/freebsd/freebsd/pull/96
2017-02-28 23:42:47 +00:00
Attilio Rao
66bacd7e17 Remove unused member.
Sponsored by:	EMC / Isilon storage division
Reviewed by:	alc
Tested by:	pho
2013-08-04 21:17:05 +00:00
Attilio Rao
cfedf924d3 Rework the known rwlocks to benefit from staying on their own
cache line, using struct rwlock_padalign instead of manual frobbing.

Reviewed by:	alc, jimharris
2012-11-03 23:03:14 +00:00
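
A minimal sketch of the idiom (the lock name below is illustrative, not the
exact hunk from this commit): struct rwlock_padalign has the same members as
struct rwlock but is padded and aligned to a full cache line, so the standard
rw(9) KPI applies unchanged.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    /* Padded to CACHE_LINE_SIZE; no manual alignment frobbing needed. */
    static struct rwlock_padalign tte_list_global_lock;

    static void
    tte_list_lock_init(void)
    {

            rw_init(&tte_list_global_lock, "tte list global");
    }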
Marius Strobl
55afc4edf1 Merge r236494 from x86:
Isolate the global TTE list lock from data and other locks to prevent false
sharing within the cache.

MFC after:	3 days
2012-08-05 22:03:13 +00:00
Alan Cox
6031c68de4 The page flag PGA_WRITEABLE is set and cleared exclusively by the pmap
layer, but it is read directly by the MI VM layer.  This change introduces
pmap_page_is_write_mapped() in order to completely encapsulate all direct
access to PGA_WRITEABLE in the pmap layer.

Aesthetics aside, I am making this change because amd64 will likely begin
using an alternative method to track write mappings, and having
pmap_page_is_write_mapped() in place allows me to make such a change
without further modification to the MI VM layer.

As an added bonus, tidy up some nearby comments concerning page flags.

Reviewed by:	kib
MFC after:	6 weeks
2012-06-16 18:56:19 +00:00
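
The encapsulation amounts to a one-line predicate of roughly this shape (a
sketch of the MI definition; the point is that MI code calls the function
instead of reading PGA_WRITEABLE itself):

    #define pmap_page_is_write_mapped(m)  (((m)->aflags & PGA_WRITEABLE) != 0)

    /* MI usage, e.g. before laundering a page: */
    if (pmap_page_is_write_mapped(m))
            pmap_remove_write(m);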
Alan Cox
b10ed4a911 Replace all uses of the vm page queues lock by a r/w lock that is private
to this pmap.c.  This new r/w lock is used primarily to synchronize access
to the TTE lists.  However, it will be used in a somewhat unconventional
way.  As finer-grained TTE list locking is added to each of the pmap
functions that acquire this r/w lock, its acquisition will be changed from
write to read, enabling concurrent execution of the pmap functions with
finer-grained locking.

Reviewed by:	attilio
Tested by:	flo
MFC after:	10 days
2012-05-29 01:52:38 +00:00
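
The transitional pattern described above looks roughly like this (a sketch;
names illustrative):

    static struct rwlock tte_list_global_lock;   /* private to pmap.c */

    void
    pmap_remove_all(vm_page_t m)
    {

            rw_wlock(&tte_list_global_lock);     /* write, for now */
            /* ... tear down every TTE on the page's list ... */
            rw_wunlock(&tte_list_global_lock);
    }

    /*
     * As per-TTE-list locks are introduced, the wlock/wunlock pair above
     * becomes rlock/runlock, letting these pmap functions run concurrently.
     */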
Marius Strobl
0e5b645f76 - pmap_cache_remove() and pmap_protect_tte() are only used within pmap.c
so static'ize them.
- Correct a typo.
2011-07-05 18:50:40 +00:00
Attilio Rao
0d9fa7bd31 Add sparc64 support.
Compiled (and helped) by:	pluknet
2011-05-06 21:53:29 +00:00
Alan Cox
e6ffa21488 Remove pmap fields that are either unused or not fully implemented.
Discussed with:	kib
2011-02-17 15:36:29 +00:00
Marius Strobl
4d05e7b184 On UltraSPARC-III+ and greater take advantage of ASI_ATOMIC_QUAD_LDD_PHYS,
which takes a physical address instead of a virtual one, for loading TTEs
of the kernel TSB so we no longer need to lock the kernel TSB into the dTLB,
which only has a very limited number of lockable dTLB slots. The net result
is that we now basically can handle a kernel TSB of any size and no longer
need to limit the kernel address space based on the number of dTLB slots
available for locked entries. Consequently, other parts of the trap handlers
now also access the kernel TSB only via its physical address in order
to avoid nested traps, as does the PMAP bootstrap code, since we haven't
taken over the trap table at that point yet. Apart from that, the kernel TSB now
is accessed via a direct mapping when we are otherwise taking advantage of
ASI_ATOMIC_QUAD_LDD_PHYS so no further code changes are needed. Most of this
is implemented by extending the patching of the TSB addresses and mask as
well as the ASIs used to load it into the trap table so the runtime overhead
of this change is rather low. Currently the use of ASI_ATOMIC_QUAD_LDD_PHYS
is not yet enabled on SPARC64 CPUs due to lack of testing and due to the
fact it might require minor adjustments there.
Theoretically it should be possible to use the same approach also for the
user TSB, which already is not locked into the dTLB, avoiding nested traps.
However, for reasons I don't understand yet OpenSolaris only does that with
SPARC64 CPUs. On the other hand I think that also addressing the user TSB
physically and thus avoiding nested traps would get us closer to sharing
this code with sun4v, which only supports trap level 0 and 1, so eventually
we could have a single kernel which runs on both sun4u and sun4v (as does
Linux and OpenBSD).

Developed at and committed from:	27C3
2010-12-29 16:59:33 +00:00
John Baldwin
60c7b36b7a Update various places that store or manipulate CPU masks to use cpumask_t
instead of int or u_int.  Since cpumask_t is currently u_int on all
platforms this should just be a cosmetic change.
2010-08-11 23:22:53 +00:00
Kip Macy
2965a45315 On Alan's advice, rather than do a wholesale conversion on a single
architecture from page queue lock to a hashed array of page locks
(based on a patch by Jeff Roberson), I've implemented page lock
support in the MI code and have only moved vm_page's hold_count
out from under page queue mutex to page lock. This changes
pmap_extract_and_hold on all pmaps.

Supported by: Bitgravity Inc.

Discussed with: alc, jeffr, and kib
2010-04-30 00:46:43 +00:00
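
Concretely, the hold in pmap_extract_and_hold() moves from the global page
queues mutex to the new per-page lock (a sketch of the pattern, not the
exact sparc64 hunk):

    vm_page_t
    pmap_extract_and_hold(pmap_t pm, vm_offset_t va, vm_prot_t prot)
    {
            vm_page_t m = NULL;

            PMAP_LOCK(pm);
            /* ... translate va, checking prot against the mapping ... */
            if (m != NULL) {
                    vm_page_lock(m);        /* page lock, not page queues */
                    vm_page_hold(m);
                    vm_page_unlock(m);
            }
            PMAP_UNLOCK(pm);
            return (m);
    }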
Marius Strobl
9b824f84d5 Some machines can consist not only of CPUs running at different speeds
but also of different types; e.g., a Sun Fire V890 can be equipped with a
mix of UltraSPARC IV and IV+ CPUs, requiring different MMU initialization
and different workarounds for model specific errata. Therefore move the
CPU implementation number from a global variable to the per-CPU data.
Functions which are called before the latter is available are passed the
implementation number as a parameter now.
2010-02-20 23:24:19 +00:00
Alan Cox
3153e878dd Add support to the virtual memory system for configuring machine-
dependent memory attributes:

Rename vm_cache_mode_t to vm_memattr_t.  The new name reflects the
fact that there are machine-dependent memory attributes that have
nothing to do with controlling the cache's behavior.

Introduce vm_object_set_memattr() for setting the default memory
attributes that will be given to an object's pages.

Introduce and use pmap_page_{get,set}_memattr() for getting and
setting a page's machine-dependent memory attributes.  Add full
support for these functions on amd64 and i386 and stubs for them on
the other architectures.  The function pmap_page_set_memattr() is also
responsible for any other machine-dependent aspects of changing a
page's memory attributes, such as flushing the cache or updating the
direct map.  The uses include kmem_alloc_contig(), vm_page_alloc(),
and the device pager:

  kmem_alloc_contig() can now be used to allocate kernel memory with
  non-default memory attributes on amd64 and i386.

  vm_page_alloc() and the device pager will set the memory attributes
  for the real or fictitious page according to the object's default
  memory attributes.

Update the various pmap functions on amd64 and i386 that map pages to
incorporate each page's memory attributes in the mapping.

Notes: (1) Inherent to this design are safety features that prevent
the specification of inconsistent memory attributes by different
mappings on amd64 and i386.  In addition, the device pager provides a
warning when a device driver creates a fictitious page with memory
attributes that are inconsistent with the real page that the
fictitious page is an alias for. (2) Storing the machine-dependent
memory attributes for amd64 and i386 as a dedicated "int" in "struct
md_page" represents a compromise between space efficiency and the ease
of MFCing these changes to RELENG_7.

In collaboration with: jhb

Approved by:	re (kib)
2009-07-12 23:31:20 +00:00
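
As a sketch, the new per-page accessors and an allocation with a non-default
attribute (prototypes and argument order per this era's KPI; treat as
illustrative):

    vm_memattr_t    pmap_page_get_memattr(vm_page_t m);
    void            pmap_page_set_memattr(vm_page_t m, vm_memattr_t ma);

    /* Uncacheable kernel memory on amd64/i386: */
    va = kmem_alloc_contig(kernel_map, size, M_WAITOK, 0, ~(vm_paddr_t)0,
        PAGE_SIZE, 0, VM_MEMATTR_UNCACHEABLE);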
Marius Strobl
49c8326a79 - Work around the broken loader behavior of not demapping no longer
used kernel TLB slots when unloading the kernel or modules, which
  results in havoc when loading a kernel and modules which take up
  fewer TLB slots afterwards as the unused but locked ones aren't
  accounted for in virtual_avail. Eventually this should be fixed
  in the loader, which isn't straightforward though, and the kernel
  should be robust against this anyway. [1]
- Ensure that the addresses allocated directly from phys_avail[] by
  pmap_bootstrap_alloc() are always colored properly. This implicit
  assumption was broken in r194784 as unlike the other consumers the
  DPCPU area allocated for the BSP isn't a multiple of PAGE_SIZE *
  DCACHE_COLORS. [2]
- Remove the no longer used global msgbuf_phys.
- Remove the redundant ekva parameter of pmap_bootstrap_alloc().
- Correct some outdated function names in ktr(9) invocations.

Requested by:	jhb [1]
Reported by:	gavin [2]
Approved by:	re (kib)
MFC after:	2 weeks
2009-06-28 22:42:51 +00:00
Alan Cox
3cad40e517 Add pmap_clear_write() to the interface between the virtual memory
system's machine-dependent and machine-independent layers.  Once
pmap_clear_write() is implemented on all of our supported
architectures, I intend to replace all calls to pmap_page_protect() by
calls to pmap_clear_write().  Why?  Both the use and implementation of
pmap_page_protect() in our virtual memory system has subtle errors,
specifically, the management of execute permission is broken on some
architectures.  The "prot" argument to pmap_page_protect() should
behave differently from the "prot" argument to other pmap functions.
Instead of meaning, "give the specified access rights to all of the
physical page's mappings," it means "don't take away the specified
access rights from all of the physical page's mappings, but do take
away the ones that aren't specified."  However, owing to our i386
legacy, i.e., no support for no-execute rights, all but one invocation
of pmap_page_protect() specifies VM_PROT_READ only, when the intent
is, in fact, to remove only write permission.  Consequently, a
faithful implementation of pmap_page_protect(), e.g., ia64, would
remove execute permission as well as write permission.  On the other
hand, some architectures that support execute permission have
basically ignored whether or not VM_PROT_EXECUTE is passed to
pmap_page_protect(), e.g., amd64 and sparc64.  This change represents
the first step in replacing pmap_page_protect() by the less subtle
pmap_clear_write() that is already implemented on amd64, i386, and
sparc64.

Discussed with: grehan@ and marcel@
2006-07-20 17:48:41 +00:00
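
The planned conversion, in miniature (the first call is the pattern being
replaced; on a faithful pmap it would strip execute as well as write):

    pmap_page_protect(m, VM_PROT_READ);     /* before: subtle, over-broad */
    pmap_clear_write(m);                    /* after: exactly what is meant */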
Alan Cox
63ad764514 MFalpha/amd64/arm/ia64
Retire pmap_track_modified().  We no longer need it because we do not
 create managed mappings within the clean submap.  To prevent regressions,
 add assertions blocking the creation of managed mappings within the clean
 submap.
2006-05-29 06:12:01 +00:00
John Baldwin
696effb697 - Cleanup whitespace and extra ()s in vtophys() macros.
- Move vtophys() macros next to vtopte() where vtopte() exists to match
  comments above vtopte().
- Remove references to the alternate address space in the comment above
  vtopte().  amd64 never had the alternate address space, and i386 lost it
  prior to PAE support being added.
- s/entires/entries/ in comments.

Reviewed by:	alc
2005-12-06 21:09:01 +00:00
Marius Strobl
7b50d90fde Silence witness warnings about duplicate pmap lock emitted since
rev. 1.145 of sys/sparc64/sparc64/pmap.c.

Submitted by:	alc
2005-02-18 15:37:34 +00:00
Warner Losh
60727d8b86 /* -> /*- for license, minor formatting changes 2005-01-07 02:29:27 +00:00
Alan Cox
2c0680e659 Add pmap locking to many of the functions.
Implement the protection check required by the pmap_extract_and_hold()
specification.

Remove the acquisition and release of Giant from pmap_extract_and_hold() and
pmap_protect().

Many thanks to Ken Smith for resolving a sparc64-specific initialization
problem in my original patch.

Tested by: kensmith@
2004-08-10 20:53:26 +00:00
Alan Cox
6b95d60a7f Correct the implementation of pmap_page_is_mapped(): It should return TRUE
only if the page has one or more managed mappings.
2004-05-09 19:09:14 +00:00
Alan Cox
1f51408ade Remove avail_end. It is not used. 2004-04-11 06:02:24 +00:00
Warner Losh
2fcbca0d85 Remove advertising clause from University of California Regent's
license, per letter dated July 22, 1999 and email from Peter Wemm,
Alan Cox and Robert Watson.

Approved by: core, peter, alc, rwatson
2004-04-07 05:00:01 +00:00
Alan Cox
c8607538c8 Remove avail_start on those platforms that no longer use it. (Only amd64
does anything with it beyond simple initialization.)
2004-04-05 04:08:00 +00:00
Bruce M Simpson
2bc7dd5661 Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
Add a definition of pmap_wired_count().
Add a definition of vmspace_wired_count().

Reviewed by:	truckman
Discussed with:	peter
2003-10-06 01:47:12 +00:00
Jake Burkholder
50e24eb628 - Move the routine for flushing all user mappings from the tlb from pmap to
the cpu dependent files.  It will need to be done differently for USIII.
- Simplify the logic for detecting context rollovers.  Instead of dealing
  with it when the next context switch would cause the context numbers to
  rollover, deal with it when they actually do rollover.
- Move some things around in cpu_switch so that we only do 1 membar #Sync
  when switching address space, instead of 2.
- Detect kernel threads by comparing the new vm space to vmspace0, instead
  of checking if the tlb context is 0.
- Removed some debug code.
2003-04-13 21:54:58 +00:00
Jake Burkholder
58d7ebfa7c Use vm_paddr_t for physical addresses. 2003-04-08 06:35:09 +00:00
Jake Burkholder
c81c0cf196 Make the pmap stats writeable. It can be useful to clear them. 2003-04-06 18:17:31 +00:00
Jake Burkholder
00aabd830d - Remove unused cache flushing routines. These will not necessarily work
on future UltraSPARC cpus for which the data cache is not direct mapped.
- Move UltraSPARC I and II (spitfire, blackbird, sapphire, sabre) specific
  functions to spitfire.c, and add cheetah.c for UltraSPARC III specific
  functions.  Initially just cache flushing, but there are a few other
  functions that will need to move here.
- Add an ipi handler for data cache flushing on UltraSPARC III.
- Use function pointers to select the right cache flushing functions based
  on cpu_impl.

With this it is possible to boot single user from an mfs root on UltraSPARC
III systems, including spinning up secondary processors.  There is currently
no support for the host to pci bridge, and no documentation for it is
publicly available.

Thanks to Oleg Derevenetz for providing access to a system with UltraSPARC
III+ cpus.
2003-03-19 06:55:37 +00:00
Jake Burkholder
5501d40bb9 Made the prototypes for pmap_kenter and pmap_kremove MD. These functions
are machine dependent because they are not required to update the tlb when
mappings are added or removed, and doing so is machine dependent.
In addition, an implementation may require that pages mapped with pmap_kenter
have a backing vm_page_t, which is not necessarily true of all physical
pages, and so may choose to pass the vm_page_t to pmap_kenter instead of the
physical address in order to make this requirement clear.
2003-03-16 04:16:03 +00:00
Jake Burkholder
e8237e53da - Reorganize PMAP_STATS to scale a little better.
- Add some more stats for things that are now considered interesting.
2003-01-05 05:30:40 +00:00
Jake Burkholder
b8eb0267c0 - Add a pmap pointer to struct md_page, and use this to find the pmap that
a mapping belongs to by setting it in the vm_page_t structure that backs
  the tsb page that the tte for a mapping is in.  This allows finding the
  pmap that a mapping belongs to without keeping a pointer to it in the
  tte itself.
- Remove the pmap pointer from struct tte and use the space to make the
  tte pv lists doubly linked (TAILQs), like on other architectures.  This
  makes entering or removing a mapping O(1) instead of O(n) where n is the
  number of pmaps a page is mapped by (including kernel_pmap).
- Use atomic ops for setting and clearing bits in the ttes, now that they
  return the old value and can be easily used for this purpose.
- Use __builtin_memset for zeroing ttes instead of bzero, so that gcc will
  inline it (4 inline stores using %g0 instead of a function call).
- Initially set the virtual colour for all the vm_page_ts to be equal to their
  physical colour.  This will be more useful once uma_small_alloc is
  implemented, but basically pages with virtual colour equal to physical
  colour are easier to handle at the pmap level because they can be safely
  accessed through cachable direct virtual to physical mappings with that
  colour, without fear of causing illegal dcache aliases.

In total these changes give a minor performance improvement, about 1%
reduction in system time during buildworld.
2002-12-21 22:43:19 +00:00
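
The linkage change boils down to a TAILQ entry in the tte and a head plus
pmap back-pointer in md_page (a sketch; field names are illustrative):

    struct tte {
            ...
            TAILQ_ENTRY(tte) tte_link;      /* doubly linked pv list */
    };

    struct md_page {
            TAILQ_HEAD(, tte) tte_list;     /* all mappings of the page */
            struct pmap *pmap;              /* set on pages backing a tsb */
    };

    /* Unlinking a mapping is now O(1); no list walk required: */
    TAILQ_REMOVE(&m->md.tte_list, tp, tte_link);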
Jake Burkholder
fabf7ce58c Removed unused pmap_qenter_flags. 2002-12-21 10:04:14 +00:00
Alan Cox
eea85e9bb6 Move pmap_collect() out of the machine-dependent code, rename it
to reflect its new location, and add page queue and flag locking.

Notes: (1) alpha, i386, and ia64 had identical implementations
of pmap_collect() in terms of machine-independent interfaces;
(2) sparc64 doesn't require it; (3) powerpc had it as a TODO.
2002-11-13 05:39:58 +00:00
Alan Cox
6372d61e3e - Clear the page's PG_WRITEABLE flag in the i386's pmap_changebit()
if we're removing write access from the page's PTEs.
 - Export pmap_remove_all() on alpha, i386, and ia64.  (It's already
   exported on sparc64.)
2002-11-11 05:17:34 +00:00
Jake Burkholder
e557d82ace Add needed include of queue.h. 2002-10-01 02:50:26 +00:00
Jake Burkholder
6856ac3294 Minor style. Removed unused declaration. 2002-08-16 01:35:00 +00:00
Alan Cox
33559722db o Introduce pmap_page_is_mapped(). Its purpose is to obsolete
the PG_MAPPED flag.
2002-08-07 18:03:00 +00:00
Jake Burkholder
4fbe520926 Forgot to commit this.
Spotted by:	scottl
2002-08-01 21:39:54 +00:00
Jake Burkholder
30bbe52432 Implement a direct mapped address region, like alpha and ia64. This
basically maps all of physical memory 1:1 to a range of virtual addresses
outside of normal kva.  The advantage of doing this instead of accessing
physical addresses directly is that memory accesses will go through the
data cache, and will participate in the normal cache coherency algorithm
for invalidating lines in our own and in other cpus' data caches.  So
we don't have to flush the cache manually or send IPIs to do so on other
cpus.  Also, since the mappings never change, we don't have to flush them
from the tlb manually.
This makes pmap_copy_page and pmap_zero_page MP safe, allowing the idle
zero proc to run outside of giant.

Inspired by:	ia64
2002-07-27 21:57:38 +00:00
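
With the direct map in place, pmap_zero_page() can reduce to an offset
calculation plus an ordinary cacheable store loop (a sketch;
TLB_PHYS_TO_DIRECT() names the sparc64-style phys-to-direct translation):

    void
    pmap_zero_page(vm_page_t m)
    {
            vm_paddr_t pa = VM_PAGE_TO_PHYS(m);

            /*
             * The direct-mapped alias is cacheable and coherent, and the
             * mapping never changes, so no dcache flush, IPI, or tlb
             * demap is needed.
             */
            bzero((void *)TLB_PHYS_TO_DIRECT(pa), PAGE_SIZE);
    }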
Jake Burkholder
218fd301cd pmap_kremove can no longer be used to remove the magic device mappings
installed with pmap_kenter_flags, since the physical addresses may not
have an associated vm_page.  Add a function to do this.

Tested by:	Tomi Vainio <Tomi.Vainio@Sun.COM>
2002-06-25 15:13:09 +00:00
Jake Burkholder
20bd6675fb Add an MD page flag for tracking if a page is cacheable or not, so that
we don't flush all mappings of a physical page in order to make it
virtually cachable again, if it is already cachable.
2002-05-29 06:12:13 +00:00
Jake Burkholder
1982efc5c2 Merge the code in pv.c into pmap.c directly. Place all page mappings onto
the pv lists in the vm_page, even unmanaged kernel mappings.  This is so
that the virtual cachability of these mappings can be tracked when a page
is mapped to more than one virtual address.  All virtually cachable
mappings of a physical page must have the same virtual colour, or illegal
alises can be created in the data cache.  This is a bit tricky because we
aliases can be created in the data cache.  This is a bit tricky because we
still have to recognize managed and unmanaged mappings, even though they
are all on the pv lists.
2002-05-29 06:08:45 +00:00
Jake Burkholder
e793e4d0b3 Add pv list linkage and a pmap pointer to struct tte. Remove separately
allocated pv entries and use the linkage in the tte for pv operations.
2002-05-29 05:56:05 +00:00
Jake Burkholder
b08270ba0f Remove pmap.pm_pvlist and make the functions that use it no-ops. These are
all optimizations for architectures which have large sparse page tables,
and/or can't put the pv linkage inside of the page table entries.
2002-05-29 05:24:16 +00:00
Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
2002-04-29 07:43:16 +00:00
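
The "ld can set these" speedup comes from making kernel_pmap a link-time
constant (a sketch of the resulting idiom):

    extern struct pmap kernel_pmap_store;
    #define kernel_pmap     (&kernel_pmap_store)

Every former pmap_kernel() call site now loads or compares an address the
linker resolved, instead of fetching a pointer from memory at run time.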
Jake Burkholder
5573db3f0b Allocate tlb contexts on the fly in cpu_switch, instead of statically 1 to 1
with pmaps.  When the context numbers wrap around we flush all user mappings
from the tlb.  This makes use of the array indexed by cpuid to allow a pmap
to have a different context number on a different cpu.  If the context numbers
are then divided evenly among cpus such that none are shared, we can avoid
sending tlb shootdown ipis in an smp system for non-shared pmaps.  This also
removes a limit of 8192 processes (pmaps) that could be active at any given
time due to running out of tlb contexts.

Inspired by:		the brown book
Crucial bugfix from:	tmm
2002-03-04 05:20:29 +00:00
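
A sketch of the scheme (CTX_MIN, CTX_MAX, and tlb_flush_user() are
hypothetical stand-ins; on sparc64 the real allocation happens in
cpu_switch):

    struct pmap {
            ...
            uint32_t pm_context[MAXCPU];    /* tlb context, per cpu */
    };

    static u_int
    pmap_context_alloc(pmap_t pm)
    {
            static u_int next_ctx[MAXCPU];
            u_int ctx;

            ctx = ++next_ctx[curcpu];
            if (ctx > CTX_MAX) {            /* wrapped: recycle the space */
                    tlb_flush_user();       /* hypothetical: demap all user entries */
                    next_ctx[curcpu] = ctx = CTX_MIN;
            }
            return (pm->pm_context[curcpu] = ctx);
    }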
Jake Burkholder
de3fee8992 Convert pmap.pm_context to an array of contexts indexed by cpuid. This
doesn't make sense for SMP right now, but it is a means to an end.
2002-02-26 06:57:30 +00:00