Commit Graph

37 Commits

Bruce M Simpson
2bc7dd5661 Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
Add a definition of pmap_wired_count().
Add a definition of vmspace_wired_count().

Reviewed by:	truckman
Discussed with:	peter
2003-10-06 01:47:12 +00:00
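
A minimal sketch of the accessors this entry describes, assuming each pmap embeds a struct pmap_statistics as pm_stats and that vmspace_pmap() yields a vmspace's pmap; the macro bodies here are illustrative, not quoted from the commit.

    /* Sketch only: pm_stats and vmspace_pmap() are assumed. */
    #define pmap_resident_count(pm)  ((pm)->pm_stats.resident_count)
    #define pmap_wired_count(pm)     ((pm)->pm_stats.wired_count)
    #define vmspace_wired_count(vm)  pmap_wired_count(vmspace_pmap(vm))
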
Jake Burkholder
50e24eb628 - Move the routine for flushing all user mappings from the tlb from pmap to
the cpu dependent files.  It will need to be done differently for USIII.
- Simplify the logic for detecting context rollovers.  Instead of dealing
  with it when the next context switch would cause the context numbers to
  rollover, deal with it when they actually do rollover.
- Move some things around in cpu_switch so that we only do 1 membar #Sync
  when switching address space, instead of 2.
- Detect kernel threads by comparing the new vm space to vmspace0, instead
  of checking if the tlb context is 0.
- Remove some debug code.
2003-04-13 21:54:58 +00:00
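
A hypothetical sketch of the simplified rollover logic above: rather than predicting the wrap at the previous allocation, flush the user tlb entries only when the context numbers actually roll over. The constants and the tlb_flush_user hook are assumptions.

    static u_int ctx_next = CTX_FIRST;  /* assumed next free context number */

    static u_int
    ctx_alloc(void)
    {
            if (ctx_next > CTX_LAST) {  /* the numbers actually rolled over */
                    tlb_flush_user();   /* cpu-dependent flush routine */
                    ctx_next = CTX_FIRST;
            }
            return (ctx_next++);
    }
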
Jake Burkholder
58d7ebfa7c Use vm_paddr_t for physical addresses. 2003-04-08 06:35:09 +00:00
Jake Burkholder
c81c0cf196 Make the pmap stats writeable. It can be useful to clear them. 2003-04-06 18:17:31 +00:00
Jake Burkholder
00aabd830d - Remove unused cache flushing routines. These will not necessarily
work on future UltraSPARC cpus for which the data cache is not direct mapped.
- Move UltraSPARC I and II (spitfire, blackbird, sapphire, sabre) specific
  functions to spitfire.c, and add cheetah.c for UltraSPARC III specific
  functions.  Initially just cache flushing, but there are a few other
  functions that will need to move here.
- Add an ipi handler for data cache flushing on UltraSPARC III.
- Use function pointers to select the right cache flushing functions based
  on cpu_impl.

With this it is possible to boot single user from an mfs root on UltraSPARC
III systems, including spinning up secondary processors.  There is currently
no support for the host-to-pci bridge, and no documentation for it is
publicly available.

Thanks to Oleg Derevenetz for providing access to a system with UltraSPARC
III+ cpus.
2003-03-19 06:55:37 +00:00
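
An illustrative sketch of the function-pointer selection this entry describes; the pointer name, the spitfire/cheetah implementations, and the cpu_impl test are modeled on the message and should be read as assumptions.

    typedef void dcache_page_inval_t(vm_paddr_t pa);

    extern dcache_page_inval_t spitfire_dcache_page_inval; /* UltraSPARC I/II */
    extern dcache_page_inval_t cheetah_dcache_page_inval;  /* UltraSPARC III */

    dcache_page_inval_t *dcache_page_inval;

    void
    cache_init(void)
    {
            /* Select the right flush routines once, based on the cpu type. */
            if (cpu_impl >= CPU_IMPL_ULTRASPARCIII)
                    dcache_page_inval = cheetah_dcache_page_inval;
            else
                    dcache_page_inval = spitfire_dcache_page_inval;
    }
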
Jake Burkholder
5501d40bb9 Made the prototypes for pmap_kenter and pmap_kremove MD. These functions
are machine dependent because they are not required to update the tlb when
mappings are added or removed, and doing so is machine dependent.
In addition, an implementation may require that pages mapped with pmap_kenter
have a backing vm_page_t, which is not necessarily true of all physical
pages, and so may choose to pass the vm_page_t to pmap_kenter instead of the
physical address in order to make this requirement clear.
2003-03-16 04:16:03 +00:00
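
The two prototype shapes the entry contemplates, sketched side by side; the #ifdef is purely illustrative, since in practice each architecture simply declares its own form in the MD pmap.h.

    #ifdef __sparc64__
    void    pmap_kenter(vm_offset_t va, vm_page_t m);   /* needs a backing vm_page */
    #else
    void    pmap_kenter(vm_offset_t va, vm_paddr_t pa); /* raw physical address */
    #endif
    void    pmap_kremove(vm_offset_t va);
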
Jake Burkholder
e8237e53da - Reorganize PMAP_STATS to scale a little better.
- Add some more stats for things that are now considered interesting.
2003-01-05 05:30:40 +00:00
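
A sketch of one scalable PMAP_STATS arrangement: a single macro both declares a counter and attaches a sysctl, and CTLFLAG_RW (matching the "writeable" entry further up) lets the counters be cleared from userland. The names and sysctl tree are assumptions.

    #ifdef PMAP_STATS
    SYSCTL_NODE(_debug, OID_AUTO, pmap_stats, CTLFLAG_RW, 0,
        "pmap statistics");
    #define PMAP_STATS_VAR(name)                                        \
            static long name;                                           \
            SYSCTL_LONG(_debug_pmap_stats, OID_AUTO, name, CTLFLAG_RW,  \
                &name, 0, "")
    #define PMAP_STATS_INC(name)    atomic_add_long((u_long *)&name, 1)
    #else
    #define PMAP_STATS_VAR(name)
    #define PMAP_STATS_INC(name)
    #endif
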
Jake Burkholder
b8eb0267c0 - Add a pmap pointer to struct md_page, and use this to find the pmap that
a mapping belongs to by setting it in the vm_page_t structure that backs
  the tsb page that the tte for a mapping is in.  This allows finding the
  pmap that a mapping belongs to without keeping a pointer to it in the
  tte itself.
- Remove the pmap pointer from struct tte and use the space to make the
  tte pv lists doubly linked (TAILQs), like on other architectures.  This
  makes entering or removing a mapping O(1) instead of O(n) where n is the
  number of pmaps a page is mapped by (including kernel_pmap).
- Use atomic ops for setting and clearing bits in the ttes, now that they
  return the old value and can be easily used for this purpose.
- Use __builtin_memset for zeroing ttes instead of bzero, so that gcc will
  inline it (4 inline stores using %g0 instead of a function call).
- Initially set the virtual colour for all the vm_page_ts to be equal to their
  physical colour.  This will be more useful once uma_small_alloc is
  implemented, but basically pages with virtual colour equal to physical
  colour are easier to handle at the pmap level because they can be safely
  accessed through cacheable direct virtual-to-physical mappings with that
  colour, without fear of causing illegal dcache aliases.

In total these changes give a minor performance improvement, about 1%
reduction in system time during buildworld.
2002-12-21 22:43:19 +00:00
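
A sketch of the layout these changes arrive at, with illustrative field names: the pv linkage lives in the tte itself as a TAILQ entry, and the owning pmap is recovered through the vm_page backing the tsb page rather than stored per tte.

    struct tte {
            u_long  tte_vpn;                /* virtual page number, context */
            u_long  tte_data;               /* pa, access and status bits */
            TAILQ_ENTRY(tte) tte_link;      /* pv list linkage */
    };

    struct md_page {
            TAILQ_HEAD(, tte) tte_list;     /* every mapping of this page */
            struct pmap *pmap;              /* set on pages backing tsb pages */
    };

With TAILQ linkage, removing a mapping is a single TAILQ_REMOVE(&m->md.tte_list, tp, tte_link), independent of how many pmaps map the page.
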
Jake Burkholder
fabf7ce58c Removed unused pmap_qenter_flags. 2002-12-21 10:04:14 +00:00
Alan Cox
eea85e9bb6 Move pmap_collect() out of the machine-dependent code, rename it
to reflect its new location, and add page queue and flag locking.

Notes: (1) alpha, i386, and ia64 had identical implementations
of pmap_collect() in terms of machine-independent interfaces;
(2) sparc64 doesn't require it; (3) powerpc had it as a TODO.
2002-11-13 05:39:58 +00:00
Alan Cox
6372d61e3e - Clear the page's PG_WRITEABLE flag in the i386's pmap_changebit()
if we're removing write access from the page's PTEs.
 - Export pmap_remove_all() on alpha, i386, and ia64.  (It's already
   exported on sparc64.)
2002-11-11 05:17:34 +00:00
Jake Burkholder
e557d82ace Add needed include of queue.h. 2002-10-01 02:50:26 +00:00
Jake Burkholder
6856ac3294 Minor style. Removed unused declaration. 2002-08-16 01:35:00 +00:00
Alan Cox
33559722db o Introduce pmap_page_is_mapped(). Its purpose is to obsolete
the PG_MAPPED flag.
2002-08-07 18:03:00 +00:00
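
A minimal sketch of what pmap_page_is_mapped() can reduce to on an architecture that keeps per-page mapping lists; the field name is an assumption.

    boolean_t
    pmap_page_is_mapped(vm_page_t m)
    {
            /* Mapped iff at least one tte/pv entry hangs off the page. */
            return (!TAILQ_EMPTY(&m->md.tte_list));
    }
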
Jake Burkholder
4fbe520926 Forgot to commit this.
Spotted by:	scottl
2002-08-01 21:39:54 +00:00
Jake Burkholder
30bbe52432 Implement a direct mapped address region, like alpha and ia64. This
basically maps all of physical memory 1:1 to a range of virtual addresses
outside of normal kva.  The advantage of doing this instead of accessing
physical addresses directly is that memory accesses will go through the
data cache, and will participate in the normal cache coherency algorithm
for invalidating lines in our own and in other cpus' data caches.  So
we don't have to flush the cache manually or send IPIs to do so on other
cpus.  Also, since the mappings never change, we don't have to flush them
from the tlb manually.
This makes pmap_copy_page and pmap_zero_page MP safe, allowing the idle
zero proc to run outside of giant.

Inspired by:	ia64
2002-07-27 21:57:38 +00:00
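
A sketch of how such a region gets used, assuming a VM_MIN_DIRECT_ADDRESS base outside normal kva; the macro name follows the obvious pattern but is an assumption here.

    #define TLB_PHYS_TO_DIRECT(pa)  ((pa) | VM_MIN_DIRECT_ADDRESS)

    void
    pmap_zero_page(vm_page_t m)
    {
            /*
             * No transient mapping to enter and no tlb flush afterwards:
             * the direct mapping never changes, and the access is cached
             * and coherent, so this is MP safe without Giant.
             */
            bzero((void *)TLB_PHYS_TO_DIRECT(VM_PAGE_TO_PHYS(m)), PAGE_SIZE);
    }
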
Jake Burkholder
218fd301cd pmap_kremove can no longer be used to remove the magic device mappings
installed with pmap_kenter_flags, since the physical addresses may not
have an associated vm_page.  Add a function to do this.

Tested by:	Tomi Vainio <Tomi.Vainio@Sun.COM>
2002-06-25 15:13:09 +00:00
Jake Burkholder
20bd6675fb Add an MD page flag for tracking if a page is cacheable or not, so that
we don't flush all mappings of a physical page in order to make it
virtually cacheable again, if it is already cacheable.
2002-05-29 06:12:13 +00:00
Jake Burkholder
1982efc5c2 Merge the code in pv.c into pmap.c directly. Place all page mappings onto
the pv lists in the vm_page, even unmanaged kernel mappings.  This is so
that the virtual cacheability of these mappings can be tracked when a page
is mapped to more than one virtual address.  All virtually cacheable
mappings of a physical page must have the same virtual colour, or illegal
aliases can be created in the data cache.  This is a bit tricky because we
still have to recognize managed and unmanaged mappings, even though they
are all on the pv lists.
2002-05-29 06:08:45 +00:00
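
The colour arithmetic implied here, as a sketch: with a direct-mapped data cache larger than the page size, the low bits of the virtual page number select the cache colour, and all cacheable mappings of one physical page must agree on it. The exact constants are assumptions.

    #define DCACHE_COLOR_BITS   1       /* assumed: dcache size / page size = 2 */
    #define DCACHE_COLORS       (1 << DCACHE_COLOR_BITS)
    #define DCACHE_COLOR_MASK   (DCACHE_COLORS - 1)
    #define DCACHE_COLOR(va)    (((va) >> PAGE_SHIFT) & DCACHE_COLOR_MASK)
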
Jake Burkholder
e793e4d0b3 Add pv list linkage and a pmap pointer to struct tte. Remove separately
allocated pv entries and use the linkage in the tte for pv operations.
2002-05-29 05:56:05 +00:00
Jake Burkholder
b08270ba0f Remove pmap.pm_pvlist and make the functions that use it no-ops. These are
all optimizations for architectures which have large sparse page tables,
and/or can't put the pv linkage inside of the page table entries.
2002-05-29 05:24:16 +00:00
Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
2002-04-29 07:43:16 +00:00
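
The pattern from the second bullet, which is standard FreeBSD code: kernel_pmap becomes an address the linker can resolve at link time instead of a pointer loaded at run time.

    extern struct pmap kernel_pmap_store;
    #define kernel_pmap     (&kernel_pmap_store)
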
Jake Burkholder
5573db3f0b Allocate tlb contexts on the fly in cpu_switch, instead of statically 1 to 1
with pmaps.  When the context numbers wrap around we flush all user mappings
from the tlb.  This makes use of the array indexed by cpuid to allow a pmap
to have a different context number on a different cpu.  If the context numbers
are then divided evenly among cpus such that none are shared, we can avoid
sending tlb shootdown ipis in an smp system for non-shared pmaps.  This also
removes a limit of 8192 processes (pmaps) that could be active at any given
time due to running out of tlb contexts.

Inspired by:		the brown book
Crucial bugfix from:	tmm
2002-03-04 05:20:29 +00:00
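
A hypothetical sketch of the allocate-on-switch scheme, combined with the per-cpu context array introduced in the next entry down; the pcpu fields and limits are assumptions, and the real work happens in cpu_switch assembly.

    struct pmap {
            u_int   pm_context[MAXCPU];     /* tlb context, per cpu */
            /* ... */
    };

    u_int
    pmap_context_alloc(struct pmap *pm)
    {
            u_int ctx;

            ctx = PCPU_GET(tlb_ctx);
            if (ctx == PCPU_GET(tlb_ctx_max)) {
                    /* Wrapped: flush all user mappings and start over. */
                    tlb_flush_user();
                    ctx = PCPU_GET(tlb_ctx_min);
            }
            PCPU_SET(tlb_ctx, ctx + 1);
            pm->pm_context[PCPU_GET(cpuid)] = ctx;
            return (ctx);
    }
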
Jake Burkholder
de3fee8992 Convert pmap.pm_context to an array of contexts indexed by cpuid. This
doesn't make sense for SMP right now, but it is a means to an end.
2002-02-26 06:57:30 +00:00
Jake Burkholder
07d99740b6 Remove code to lock the user tsb into the tlb. We can handle faults on it
now, as we do for normal wired kernel memory.
2002-02-25 22:58:41 +00:00
Jake Burkholder
70f550ae58 Prototype pmap_map_tsb(). 2002-01-08 05:06:39 +00:00
Thomas Moestl
62ec4add59 1. Implement an optimization for pmap_remove() and pmap_protect(): if a
substantial fraction of the tte entries in the tsb would need to be
   looked up, traverse the tsb instead. This is crucial in some places,
   e.g. when swapping out a process, where a certain pmap_remove() call
   would take a very long time to complete without this.
2. Implement pmap_qenter_flags(), which will be used later.
3. Reactivate the instruction cache flush done when mapping as executable.
   This is required e.g. when executing files via NFS, but is known to
   cause problems on UltraSPARC-IIe CPU's. If you have such a CPU, you
   will need to comment this call out for now.

Submitted by:	jake (3)
2002-01-02 18:49:20 +00:00
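
A sketch of the optimization in item 1, with an assumed threshold and helper names: probe the tsb per page for small ranges, but walk the whole tsb once when the range is large.

    void
    pmap_remove(pmap_t pm, vm_offset_t start, vm_offset_t end)
    {
            struct tte *tp;
            vm_offset_t va;

            if (end - start > PMAP_TSB_THRESH) {
                    /* One pass over the tsb, dropping ttes in the range. */
                    tsb_foreach(pm, start, end, pmap_remove_tte);
            } else {
                    for (va = start; va < end; va += PAGE_SIZE)
                            if ((tp = tsb_tte_lookup(pm, va)) != NULL)
                                    pmap_remove_tte(pm, tp, va);
            }
    }
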
Jake Burkholder
a4e1e6e5de Add definitions for dcache color bits, which may move to cache.h.
Add fields to md_page for tracking virtual page color, and pv entry
lists.
Fix pmap_track_modified to work for non-kernel pmaps.  This is needed
because kernel virtual addresses can overlap with userland addresses.
2001-12-29 08:19:24 +00:00
Jake Burkholder
6ef2d9a02d Parameterize the size of the kernel virtual address space on KVA_PAGES.
Don't use a hard coded address constant for the virtual address of the
kernel tsb.  Allocate kernel virtual address space for the kernel tsb
at runtime.
Remove unused parameter to pmap_bootstrap.
Adapt pmap.c to use KVA_PAGES.
Map the message buffer too.
Add some traces.
Implement pmap_protect.
2001-10-20 16:17:04 +00:00
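
A sketch of the parameterization, with illustrative numbers: the size of the kernel virtual address space falls out of one tunable instead of hard-coded constants.

    #ifndef KVA_PAGES
    #define KVA_PAGES   1024            /* illustrative default only */
    #endif
    #define VM_MAX_KERNEL_ADDRESS \
            (VM_MIN_KERNEL_ADDRESS + KVA_PAGES * PAGE_SIZE)
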
Thomas Moestl
4bc38523ec Add pmap_kenter_flags(), which is used by MD bus code that will be
committed soon, add a stub for pmap_kenter_temporary(), and implement
pmap_extract() and pmap_kextract().
2001-10-12 15:49:51 +00:00
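
A sketch of pmap_kextract(), under the assumptions that the kernel tsb can be indexed directly by virtual address and that tte helpers like those named here exist.

    vm_paddr_t
    pmap_kextract(vm_offset_t va)
    {
            struct tte *tp;

            tp = tsb_kvtotte(va);       /* assumed: kva -> kernel tsb slot */
            if ((tp->tte_data & TD_V) == 0)
                    return (0);
            return (TTE_GET_PA(tp) | (va & PAGE_MASK));
    }
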
David E. O'Brien
bcc9d95fe0 style(9) the structure definitions. 2001-09-05 05:18:35 +00:00
Jake Burkholder
3d450a75c1 Use the correct copyrights. Note where most of this came from.
Requested by:	obrien
2001-09-03 22:27:23 +00:00
David E. O'Brien
73a4930297 The author isn't a [UC] Regents. Correct the copyright language. 2001-08-09 02:09:34 +00:00
Jake Burkholder
8d94222282 Add a vm_object and page count to struct pmap for allocating tsb pages. 2001-08-06 02:19:52 +00:00
Jake Burkholder
7cabcd9841 Move some code related to managing pv entries from the pmap module to
the pv module.  It works now that vtophys for sttes works.
2001-08-03 01:27:15 +00:00
Jake Burkholder
89bf8575ee Flesh out the sparc64 port considerably. This contains:
- mostly complete kernel pmap support, and tested but currently turned
  off userland pmap support
- low level assembly language trap, context switching and support code
- fully implemented atomic.h and supporting cpufunc.h
- some support for kernel debugging with ddb
- various header tweaks and filling out of machine dependent structures
2001-07-31 06:05:05 +00:00
Jake Burkholder
98bb5304e1 Add skeleton machine dependent headers and c files for a port of FreeBSD
to a new architecture.  This is the base of the sparc64 port, but contains
limited machine dependent code, and can be used as a base for other ports.  Included
are:
- standard machine dependent headers, tweaked for a 64 bit, big endian
  architecture, including empty versions of all the machine dependent
  structures
- a machine independent atomic.h, which can be used until a port has
  support for interrupts and the operations really need to be atomic
- stub versions of all the machine dependent functions, which panic
  when called and print out the name of the function that needs to
  be implemented.  Functions which are normally in assembly files are
  not included, but this should reduce the number of different undefined
  references on the first few compiles from hundreds to 5 or 6
Given minimal startup code and console support it should be trivial to
make this compile and run the first few sysinits on almost any architecture.

Requested by:   alfred, imp, jhb
2001-07-31 05:45:16 +00:00
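
The flavor of stub described above, as a minimal sketch: each unimplemented entry point panics with its own name, so the first boot attempts point straight at what is missing.

    void
    pmap_growkernel(vm_offset_t addr)
    {
            /* Replace with a real implementation when porting. */
            panic("pmap_growkernel: not implemented");
    }
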