Commit Graph

143 Commits

Author SHA1 Message Date
Alan Cox
cf3508519c Handle a race between pmap_kextract() and pmap_promote_pde(). This race is
known to cause a kernel crash in ZFS on i386 when superpage promotion is
enabled.

Tested by:	netchild
MFC after:	1 week
2010-01-23 18:42:28 +00:00
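
A simplified, non-PAE sketch of the kind of lookup pmap_kextract() performs; read_kernel_pde() and read_kernel_pte() are hypothetical accessors, and the committed fix itself is not reproduced here:

    #include <stdint.h>

    #define PG_PS     0x080u          /* PDE maps a 4MB superpage */
    #define PG_FRAME  0xfffff000u     /* 4KB frame mask */
    #define PDRMASK   0x003fffffu     /* offset within a 4MB superpage */

    typedef uint32_t pd_entry_t, pt_entry_t, vm_offset_t, vm_paddr_t;

    extern pd_entry_t read_kernel_pde(vm_offset_t va);   /* hypothetical */
    extern pt_entry_t read_kernel_pte(vm_offset_t va);   /* hypothetical */

    vm_paddr_t
    sketch_kextract(vm_offset_t va)
    {
            pd_entry_t pde = read_kernel_pde(va);

            if (pde & PG_PS)        /* superpage: the PDE holds the frame */
                    return ((pde & ~PDRMASK) | (va & PDRMASK));
            /*
             * 4KB mapping: walk the page table.  Without synchronization,
             * a concurrent pmap_promote_pde() can rewrite the PDE between
             * the read above and this walk; that window is the race the
             * commit closes.
             */
            return ((read_kernel_pte(va) & PG_FRAME) | (va & ~PG_FRAME));
    }
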
Alan Cox
28a5e2a5d7 Make pmap_set_pg() static. 2010-01-07 17:34:45 +00:00
John Baldwin
75e66e421a Improve pmap_change_attr() so that it is able to demote a large (2/4MB)
page into 4KB pages as needed.  This should be fairly rare in practice
on i386.  This includes merging the following changes from the amd64 pmap:
180430, 180485, 180845, 181043, 181077, and 196318.
- Add basic support for changing attributes on PDEs to pmap_change_attr()
  similar to the support in the initial version of pmap_change_attr() on
  amd64 including inlines for pmap_pde_attr() and pmap_pte_attr().
- Extend pmap_demote_pde() to include the ability to instantiate a new page
  table page where none existed before.
- Enhance pmap_change_attr().  Use pmap_demote_pde() to demote a 2/4MB page
  mapping to 4KB page mappings when the specified attribute change only
  applies to a portion of the 2/4MB page.  Previously, in such cases,
  pmap_change_attr() gave up and returned an error.
- Correct a critical accounting error in pmap_demote_pde().

Reviewed by:	alc
MFC after:	3 days
2009-08-31 17:42:52 +00:00
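
A sketch of the control flow this gives pmap_change_attr(); the helpers (range_covers_whole_pde(), demote_pde(), set_pde_attr(), set_pte_attr_range()) are hypothetical stand-ins for the pmap internals:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t vm_offset_t;
    typedef uint32_t vm_size_t;

    extern bool range_covers_whole_pde(vm_offset_t va, vm_size_t size);
    extern bool demote_pde(vm_offset_t va);
    extern void set_pde_attr(vm_offset_t va, int mode);
    extern void set_pte_attr_range(vm_offset_t va, vm_size_t size, int mode);

    static int
    sketch_change_attr_pde(vm_offset_t va, vm_size_t size, int mode)
    {
            if (range_covers_whole_pde(va, size)) {
                    /* The change covers the whole 2/4MB page: flip the PDE. */
                    set_pde_attr(va, mode);
                    return (0);
            }
            /*
             * Partial coverage: demote the superpage to 4KB mappings first
             * (instantiating a page table page if none existed), then
             * change only the affected PTEs.  Previously this case simply
             * returned an error.
             */
            if (!demote_pde(va))
                    return (ENOMEM);
            set_pte_attr_range(va, size, mode);
            return (0);
    }
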
Konstantin Belousov
8a5ac5d56f As was done in r195820 for amd64, use clflush on i386 to flush cache
lines when a memory page's caching attributes change and the CPU does not
support self-snoop but does implement clflush.

Take care of possible mappings of the page by an sf buffer by using that
mapping for clflush; otherwise map the page transiently.  amd64 uses the
direct map instead.

Proposed and reviewed by:  alc
Approved by:   re (kensmith)
2009-07-29 08:49:58 +00:00
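
A sketch of the clflush loop this implies; the cache-line size would normally come from CPUID, and the exact fencing in the committed code may differ:

    #include <stdint.h>

    #define PAGE_SIZE 4096

    /* Flush every cache line of one mapped page. */
    static void
    sketch_flush_page(char *va, int cl_size)
    {
            char *p;

            __asm __volatile("mfence" ::: "memory");
            for (p = va; p < va + PAGE_SIZE; p += cl_size)
                    __asm __volatile("clflush %0" : : "m" (*p));
            __asm __volatile("mfence" ::: "memory");
    }
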
Alan Cox
3153e878dd Add support to the virtual memory system for configuring machine-
dependent memory attributes:

Rename vm_cache_mode_t to vm_memattr_t.  The new name reflects the
fact that there are machine-dependent memory attributes that have
nothing to do with controlling the cache's behavior.

Introduce vm_object_set_memattr() for setting the default memory
attributes that will be given to an object's pages.

Introduce and use pmap_page_{get,set}_memattr() for getting and
setting a page's machine-dependent memory attributes.  Add full
support for these functions on amd64 and i386 and stubs for them on
the other architectures.  The function pmap_page_set_memattr() is also
responsible for any other machine-dependent aspects of changing a
page's memory attributes, such as flushing the cache or updating the
direct map.  The uses include kmem_alloc_contig(), vm_page_alloc(),
and the device pager:

  kmem_alloc_contig() can now be used to allocate kernel memory with
  non-default memory attributes on amd64 and i386.

  vm_page_alloc() and the device pager will set the memory attributes
  for the real or fictitious page according to the object's default
  memory attributes.

Update the various pmap functions on amd64 and i386 that map pages to
incorporate each page's memory attributes in the mapping.

Notes: (1) Inherent to this design are safety features that prevent
the specification of inconsistent memory attributes by different
mappings on amd64 and i386.  In addition, the device pager provides a
warning when a device driver creates a fictitious page with memory
attributes that are inconsistent with the real page that the
fictitious page is an alias for. (2) Storing the machine-dependent
memory attributes for amd64 and i386 as a dedicated "int" in "struct
md_page" represents a compromise between space efficiency and the ease
of MFCing these changes to RELENG_7.

In collaboration with: jhb

Approved by:	re (kib)
2009-07-12 23:31:20 +00:00
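
A simplified sketch of what the per-page accessors amount to on amd64/i386, where the attribute lives in the dedicated int mentioned in note (2); the structure layout is abbreviated and the setter's cache/direct-map maintenance is elided:

    typedef int vm_memattr_t;       /* e.g. write-back, uncacheable, ... */

    struct md_page {                /* machine-dependent part of a page */
            int     pat_mode;       /* the dedicated "int" from note (2) */
            /* ... pv list head, etc. ... */
    };

    struct vm_page {
            struct md_page  md;
            /* ... machine-independent fields ... */
    };

    static vm_memattr_t
    sketch_page_get_memattr(struct vm_page *m)
    {
            return (m->md.pat_mode);
    }

    static void
    sketch_page_set_memattr(struct vm_page *m, vm_memattr_t ma)
    {
            m->md.pat_mode = ma;
            /*
             * The real pmap_page_set_memattr() also performs the
             * machine-dependent follow-up work described above, e.g.
             * flushing caches or updating the amd64 direct map entry.
             */
    }
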
Alan Cox
0f6766f3da Eliminate dead code. These definitions should have been deleted with the
introduction of i686_mem.c in r45405.

Merge adjacent #ifdef _KERNEL/#endif blocks.
2009-06-22 04:21:02 +00:00
Ed Schouten
3c7553cc20 Simplify the inline assembler of pte_load_store() (and correct a potential error).
Submitted by:	Christoph Mallon
2009-06-13 13:56:06 +00:00
Alan Cox
b4862e19af Update stale comments. The alternate address space mapping was eliminated
when PAE support was added to i386.  The direct mapping exists on amd64.
2009-03-22 18:56:26 +00:00
Kip Macy
7e9608c858 PT_UPDATES_FLUSH() is used in common code, so it needs to be defined
even in the !defined(XEN) case

MFC after:	1 month
2008-08-18 21:35:09 +00:00
Kip Macy
93ee134a24 Integrate support for xen in to i386 common code.
MFC after:	1 month
2008-08-15 20:51:31 +00:00
Alan Cox
494c177e81 Make pmap_kenter_attr() static. 2008-08-04 08:04:09 +00:00
Alan Cox
97dbe5e48e MFamd64 with a few changes:
1. Add support for automatic promotion of 4KB page mappings to 2MB page
   mappings.  Automatic promotion can be enabled by setting the tunable
   "vm.pmap.pg_ps_enabled" to a non-zero value.  By default, automatic
   promotion is disabled.  Tested by: kris

2. To date, we have assumed that the TLB will only set the PG_M bit in a
   PTE if that PTE has the PG_RW bit set.  However, this assumption does
   not hold on recent processors from Intel.  For example, consider a PTE
   that has the PG_RW bit set but the PG_M bit clear.  Suppose this PTE
   is cached in the TLB and later the PG_RW bit is cleared in the PTE,
   but the corresponding TLB entry is not (yet) invalidated.
   Historically, upon a write access using this (stale) TLB entry, the
   TLB would observe that the PG_RW bit had been cleared and initiate a
   page fault, aborting the setting of the PG_M bit in the PTE.  Now,
   however, P4- and Core2-family processors will set the PG_M bit before
   observing that the PG_RW bit is clear and initiating a page fault.  In
   other words, the write does not occur but the PG_M bit is still set.

   The real impact of this difference is not that great.  Specifically,
   we should no longer assert that any PTE with the PG_M bit set must
   also have the PG_RW bit set, and we should ignore the state of the
   PG_M bit unless the PG_RW bit is set.
2008-03-27 04:34:17 +00:00
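
The rule in point 2 reduces to testing both bits together; a minimal sketch using the standard x86 PTE bit values:

    #include <stdbool.h>
    #include <stdint.h>

    #define PG_RW   0x002u          /* writeable */
    #define PG_M    0x040u          /* dirty; set by hardware on write */

    /* Treat a PTE as modified only when PG_RW is also set. */
    static bool
    sketch_pte_is_modified(uint32_t pte)
    {
            return ((pte & (PG_M | PG_RW)) == (PG_M | PG_RW));
    }
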
Peter Wemm
2577760fca Update the KVA_PAGES comments to reflect the effect that PAE has on it.  It
becomes a unit size of 2MB instead of 4MB and must be a multiple of 8 to
get a valid KERNBASE.
2008-01-14 22:53:01 +00:00
Alan Cox
5cccf58676 Shrink the size of struct vm_page on amd64 and i386 by eliminating
pv_list_count from struct md_page.  Ever since Peter rewrote the pv
entry allocator for amd64 and i386 pv_list_count has been correctly
maintained but otherwise unused.
2008-01-06 18:51:04 +00:00
Peter Wemm
6dd3a6c06e Drastically simplify the i386 pcpu backend by merging parts of the
amd64 mechanism over.  Instead of page table hackery that isn't
actually needed, just use 'struct pcpu __pcpu[MAXCPU]' for backing like
all the other platforms do.  Get rid of 'struct privatespace' and a
whole mess of #ifdef SMP garbage that set it up.  As a bonus, this
returns the 4MB of KVA that we stole to implement it the old way.
This also allows you to read the pcpu data for each cpu when reading a
minidump.

Background information:  Originally, pcpu stuff was implemented as having
per-cpu page tables and magic to make different data structures appear
at the same actual address.  In order to share page tables, we switched
to using the GDT and %fs/%gs to access it.  But we still did the evil
magic to set it up for the old way.  The "idle stacks" are not used
for the idle process anymore and are just used for a few functions during
bootup, then ignored.  (Exercise for the reader: free these afterwards.)
2007-11-13 23:00:24 +00:00
Alan Cox
1434c1f34b MFamd64
Define PGEX_RSV.
2007-04-12 17:00:56 +00:00
Ruslan Ermilov
2e137367b4 Add the PG_NX support for i386/PAE.
Reviewed by:	alc
2007-04-06 18:15:03 +00:00
Alan Cox
8cfba7267f Eliminate an unused parameter. 2007-03-17 19:42:06 +00:00
Alan Cox
da44960498 The global variable avail_end is redundant and only used once. Eliminate
it.  Make avail_start static to the pmap on amd64.  (It no longer exists
on other architectures.)
2006-11-19 20:54:58 +00:00
Ruslan Ermilov
d77f5882e7 Fix NKPT comments to match reality. Note that the current value
of NKPT is no longer enough to run amd64 with 16G of RAM, as it
doesn't leave space for mapping the kernel (a 16M kernel would
additionally require 8 page tables).
2006-11-13 20:33:54 +00:00
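
(For reference, the arithmetic behind the parenthetical: an amd64 page table holds 512 eight-byte PTEs and therefore maps 512 x 4KB = 2MB, so a 16M kernel needs 16MB / 2MB = 8 additional page tables.)
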
Ruslan Ermilov
26af9ac7d0 Fix a comment. 2006-11-13 06:26:57 +00:00
John Baldwin
7e9f73f3ed First pass at allowing memory to be mapped using cache modes other than
WB (write-back) on x86 via control bits in PTEs and PDEs (including making
use of the PAT MSR).  Changes include:
- A new pmap_mapdev_attr() function for amd64 and i386 which takes an
  additional parameter (relative to pmap_mapdev()) specifying the cache
  mode for this mapping.  Note that on amd64 only WB mappings are done with
  the direct map, all other modes result in a private mapping.
- pmap_mapdev() on i386 and amd64 now defaults to using UC (uncached)
  mappings rather than WB.  Previously we relied on the BIOS setting up
  MTRRs to enforce memio regions being treated as UC.  This might, for
  example, make hw.cbb_start_memory unnecessary in some cases now.
- A new pmap_mapbios()/pmap_unmapbios() API has been added to allow places
  that used pmap_mapdev() to map non-device memory (such as ACPI tables)
  to do so using WB as before.
- A new pmap_change_attr() function for amd64 and i386 that changes the
  caching mode for a range of KVA.

Reviewed by:	alc
2006-08-11 19:22:57 +00:00
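
A usage sketch of the interfaces described above; the prototypes are abbreviated, the physical addresses are placeholders, and the write-combining mode constant is a hypothetical name:

    #include <stddef.h>
    #include <stdint.h>

    typedef uint64_t vm_paddr_t;
    typedef size_t   vm_size_t;

    void *pmap_mapdev(vm_paddr_t pa, vm_size_t size);           /* UC now */
    void *pmap_mapdev_attr(vm_paddr_t pa, vm_size_t size, int mode);
    void *pmap_mapbios(vm_paddr_t pa, vm_size_t size);          /* still WB */

    #define SKETCH_WRITE_COMBINING  1   /* hypothetical mode constant */

    static void
    sketch_mapping_examples(void)
    {
            void *regs, *fb, *acpi;

            /* Device registers: uncached, the new pmap_mapdev() default. */
            regs = pmap_mapdev(0xfe000000ULL, 0x1000);
            /* A framebuffer: explicitly request write-combining. */
            fb = pmap_mapdev_attr(0xd0000000ULL, 8u << 20,
                SKETCH_WRITE_COMBINING);
            /* Firmware tables such as ACPI: ordinary write-back memory. */
            acpi = pmap_mapbios(0xe0000ULL, 0x20000);
            (void)regs; (void)fb; (void)acpi;
    }
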
John Baldwin
2b8a339c7e Add various constants for the PAT MSR and the PAT PTE and PDE flags.
Initialize the PAT MSR during boot to map PAT type 2 to Write-Combining
(WC) instead of Uncached (UC-).

MFC after:	1 month
2006-05-01 22:07:00 +00:00
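
A sketch of the initialization described: the PAT MSR (0x277) holds eight 8-bit entries, entry 2 defaults to UC- (0x07), and the change programs it to WC (0x01); the rdmsr/wrmsr wrappers are assumed to be provided elsewhere:

    #include <stdint.h>

    #define MSR_PAT                 0x277
    #define PAT_WRITE_COMBINING     0x01    /* WC encoding */

    extern uint64_t rdmsr(unsigned msr);            /* provided by the kernel */
    extern void     wrmsr(unsigned msr, uint64_t v);

    static void
    sketch_pat_init(void)
    {
            uint64_t pat = rdmsr(MSR_PAT);

            /* Replace PAT entry 2 (bits 23:16), UC- by default, with WC. */
            pat &= ~(0xffULL << 16);
            pat |= (uint64_t)PAT_WRITE_COMBINING << 16;
            wrmsr(MSR_PAT, pat);
    }
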
John Baldwin
4ac60df584 Add a new 'pmap_invalidate_cache()' to flush the CPU caches via the
wbinvd() instruction.  This includes a new IPI so that all CPU caches on
all CPUs are flushed for the SMP case.

MFC after:	1 month
2006-05-01 21:36:47 +00:00
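
A sketch of the mechanism: wbinvd flushes the local CPU's caches, and on SMP every other CPU must execute it as well, here modelled with a hypothetical broadcast helper rather than the kernel's actual IPI machinery:

    extern void run_on_all_cpus(void (*func)(void));   /* hypothetical */

    static void
    sketch_wbinvd(void)
    {
            __asm __volatile("wbinvd" ::: "memory");
    }

    static void
    sketch_invalidate_cache(void)
    {
    #ifdef SMP
            /* Every processor flushes its own caches in response to an IPI. */
            run_on_all_cpus(sketch_wbinvd);
    #else
            sketch_wbinvd();
    #endif
    }
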
Peter Wemm
041a991fa7 MFamd64: shrink pv entries from 24 bytes to about 12 bytes. (336 pv entries
per page = effectively 12.19 bytes per pv entry after overheads).
Instead of using a shared UMA zone for 24 byte pv entries (two 8-byte tailq
nodes, a 4 byte pointer, and a 4 byte address), we allocate a page at a
time per process.  This provides 336 pv entries per process (actually, per
pmap address space) and eliminates one of the 8-byte tailq entries since
we now can track per-process pv entries implicitly.  The pointer to
the pmap can be eliminated by doing address arithmetic to find the metadata
on the page headers to find a single pointer shared by all 336 entries.
There is an 11-int bitmap for the freelist of those 336 entries.

This is mostly a mechanical conversion from amd64, except:
* i386 has to allocate kvm and map the pages, amd64 has them outside of kvm
* native word size is smaller, so bitmaps etc become 32 bit instead of 64
* no dump_add_page() etc stuff because they are in kvm always.
* various pmap internals tweaks because pmap uses direct map on amd64 but
  on i386 it has to use sched_pin and temporary mappings.

Also, sysctl vm.pmap.pv_entry_max and vm.pmap.shpgperproc are now
dynamic sysctls.  Like on amd64, i386 can now tune the pv entry limits
without a recompile or reboot.

This is important because of the following scenario.   If you have a 1GB
file (262144 pages) mmap()ed into 50 processes, that requires 13 million
pv entries.  At 24 bytes per pv entry, that is 314MB of ram and kvm, while
at 12 bytes it is 157MB.  A 157MB saving is significant.

Test-run by:  scottl (Thanks!)
2006-04-26 21:49:20 +00:00
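
A sketch of the per-page layout described above for i386: a small header (pmap pointer, list linkage, 11-word free bitmap) followed by 336 twelve-byte pv entries, with the shared pmap recovered by masking an entry's address down to its page boundary; field names are illustrative:

    #include <stdint.h>
    #include <sys/queue.h>

    #define PAGE_SIZE       4096
    #define PV_PER_CHUNK    336
    #define PC_FREE_WORDS   11              /* ceil(336 / 32) */

    typedef uint32_t vm_offset_t;
    struct pmap;

    struct pv_entry {
            vm_offset_t             pv_va;          /* 4 bytes */
            TAILQ_ENTRY(pv_entry)   pv_list;        /* 8 bytes on i386 */
    };

    /* One chunk occupies exactly one 4KB page. */
    struct pv_chunk {
            struct pmap             *pc_pmap;       /* shared by all entries */
            TAILQ_ENTRY(pv_chunk)   pc_list;
            uint32_t                pc_map[PC_FREE_WORDS];  /* free bitmap */
            struct pv_entry         pc_pventry[PV_PER_CHUNK];
    };

    /* Find the chunk header (and hence the pmap) from any entry's address. */
    #define PV_TO_CHUNK(pv) \
            ((struct pv_chunk *)((uintptr_t)(pv) & ~(uintptr_t)(PAGE_SIZE - 1)))
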
John Baldwin
696effb697 - Cleanup whitespace and extra ()s in vtophys() macros.
- Move vtophys() macros next to vtopte() where vtopte() exists to match
  comments above vtopte().
- Remove references to the alternate address space in the comment above
  vtopte().  amd64 never had the alternate address space, and i386 lost it
  prior to PAE support being added.
- s/entires/entries/ in comments.

Reviewed by:	alc
2005-12-06 21:09:01 +00:00
Peter Wemm
235a54de9d Switch AMD64 and i386 platforms to using ELF as their kernel crash
dump format.  The key reason to do this is so that we can dump sparse
address space.  For example, we need to be able to skip the PCI hole
just below the 4GB boundary.  Trying to destructively dump MMIO device
registers is Really Bad(TM).  The frequent result of trying to do a
crash dump on a machine with 4GB or more ram was ugly (lockup or reboot).

This code has been taken directly from the IA64 dump_machdep.c code,
with just a few (mostly minor) mods.

Introduce a dump_avail[] array in the machdep.c code so that we have a
source of truth for what memory is present in a machine that needs to be
dumped.  We can't use phys_avail[] because all sorts of things slice
memory out of it that we really need to dump.  eg: the vm page array
and the dmesg buffer.  dump_avail[] is pretty much an unmolested version
of phys_avail[].  It does have Maxmem correction.

Bump the i386 and amd64 dump format to version 2, but nothing actually
uses this.  amd64 was actually using the i386 dump version number.

libkvm support to follow.

Approved by:	re
2005-06-29 22:28:46 +00:00
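
A sketch of how a dump_avail[]-style array is consumed: like phys_avail[], it is a list of (start, end) physical address pairs terminated by a zero entry, and the dump code walks the pairs to decide what to write; the ranges below are made up:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t vm_paddr_t;

    static vm_paddr_t dump_avail[] = {
            0x0000000000001000ULL, 0x00000000bfff0000ULL,   /* low RAM    */
            0x0000000100000000ULL, 0x0000000140000000ULL,   /* above 4GB  */
            0, 0                                            /* terminator */
    };

    int
    main(void)
    {
            for (int i = 0; dump_avail[i + 1] != 0; i += 2)
                    printf("dump range %#jx - %#jx\n",
                        (uintmax_t)dump_avail[i],
                        (uintmax_t)dump_avail[i + 1]);
            return (0);
    }
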
Warner Losh
86cb007f9f /* -> /*- for copyright notices, minor format tweaks as necessary 2005-01-06 22:18:23 +00:00
Alan Cox
aced26ce6e Make pte_load_store() an atomic operation in all cases, not just i386 PAE.
Restructure pmap_enter() to prevent the loss of a page modified (PG_M) bit
in a race between processors.  (This restructuring assumes the newly atomic
pte_load_store() for correct operation.)

Reviewed by: tegge@
PR: i386/61852
2004-10-08 08:23:43 +00:00
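
For the non-PAE case an atomic pte_load_store() amounts to a locked exchange, so the old PTE (including any PG_M bit the hardware just set) is returned rather than lost; a sketch:

    #include <stdint.h>

    typedef uint32_t pt_entry_t;

    /* Atomically store 'newpte' and return the previous contents of *ptep. */
    static inline pt_entry_t
    sketch_pte_load_store(pt_entry_t *ptep, pt_entry_t newpte)
    {
            pt_entry_t old = newpte;

            /* xchg with a memory operand is implicitly locked on x86. */
            __asm __volatile("xchgl %0, %1" : "+r" (old), "+m" (*ptep));
            return (old);
    }
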
Alan Cox
0a752e9843 Prevent the unexpected deallocation of a page table page while performing
pmap_copy().  This entails additional locking in pmap_copy() and the
addition of a "flags" parameter to the page table page allocator for
specifying whether it may sleep when memory is unavailable.  (Already,
pmap_copy() checks the availability of memory, aborting if it is scarce.
In theory, another CPU could, however, allocate memory between
pmap_copy()'s check and the call to the page table page allocator,
causing the current thread to release its locks and sleep.  This change
makes this scenario impossible.)

Reviewed by: tegge@
2004-09-29 19:20:40 +00:00
Scott Long
9e0c3bdf64 Double the number of kernel page tables for amd64 and for i386/PAE.  The old
value was only enough for 8GB of RAM; the new value can do 16GB.  This still
isn't optimal since it doesn't scale.  Fixing this for amd64 looks to be
fairly easy, but for i386 will be quite difficult.

Reviewed by: peter
2004-09-11 01:31:26 +00:00
Peter Wemm
654bd0e802 Reduce the size of pv entries by 15%. This saves 1MB of KVA for mapping
pv entries per 1GB of user virtual memory.  (e.g.: if a 1GB file were
mmapped into 30 processes, that would theoretically reduce the KVA demand by
30MB for pv entries.  In reality though, we limit pv entries so we don't
have that many at once.)

We used to store the vm_page_t for the page table page.  But we rarely
need the pa of the ptp, or can calculate it fairly quickly.  If we wanted
to avoid the shift/mask operation in pmap_pde(), we could recover the
pa but that means we have to store it for a while.

This does not measurably change performance.

Suggested by:  alc
Tested by:  alc
2004-06-29 15:57:05 +00:00
Bruce Evans
4df7435a78 Include <sys/_lock.h>'s prerequisite <sys/queue.h> before including the
former, not after.
2004-06-20 00:33:14 +00:00
Alan Cox
4d831945a7 MFamd64
Introduce pmap locking to many of the pmap functions.
2004-06-16 07:03:15 +00:00
Alan Cox
8559e0a291 - Remove an unused declaration.
- Move a definition inside the scope of a #ifdef _KERNEL.
2004-06-13 03:44:11 +00:00
Alan Cox
0af0eeacb6 - pmap_kenter_temporary()'s first parameter, which is a physical address,
should be declared as vm_paddr_t not vm_offset_t.
2004-04-10 23:28:49 +00:00
Alan Cox
b14d6acced - pmap_kenter_temporary() is unused by machine-independent code.  Therefore,
   move its declaration to the machine-dependent header file on those
   machines that use it.  In principle, only i386 should have it.
   Alpha and AMD64 should use their direct virtual-to-physical mapping.
 - Remove pmap_kenter_temporary() from ia64.  It is unused.

Approved by:	marcel@
2004-04-10 22:41:46 +00:00
Warner Losh
f36cfd49ad Remove advertising clause from University of California Regent's
license, per letter dated July 22, 1999 and email from Peter Wemm,
Alan Cox and Robert Watson.

Approved by: core, peter, alc, rwatson
2004-04-07 20:46:16 +00:00
Alan Cox
c8607538c8 Remove avail_start on those platforms that no longer use it. (Only amd64
does anything with it beyond simple initialization.)
2004-04-05 04:08:00 +00:00
Alan Cox
925d2fedf5 Remove unused declarations. (Some time ago, these variables became fields
of vm/vm.h's struct kva_md_info.)
2004-03-07 07:13:15 +00:00
Alan Cox
5e4a2fc9aa - Similar to post-PAE RELENG_4, split pmap_pte_quick() into two cases,
   pmap_pte() and pmap_pte_quick().  The distinction is based upon the
   locks that are held by the caller.  When the given pmap is not the
   current pmap, pmap_pte() should be used when Giant is held and
   pmap_pte_quick() should be used when the vm page queues lock is held.
 - When assigning to PMAP1 or PMAP2, include PG_A and PG_M.
 - Reenable the inlining of pmap_is_current().

In collaboration with:	tegge
2003-11-08 03:01:26 +00:00
Bruce M Simpson
2bc7dd5661 Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
Add a definition of pmap_wired_count().
Add a definition of vmspace_wired_count().

Reviewed by:	truckman
Discussed with:	peter
2003-10-06 01:47:12 +00:00
Peter Wemm
6ccf265bb0 Commit Bosko's patch to clean up the PSE/PG_G initialization and to
avoid problems with some Pentium 4 cpus and some older PPro/Pentium2
cpus.  There are several problems, some documented in Intel errata.
This patch:
1) moves the kernel to the second page in the PSE case.  There is an
erratum that says you must not point a 4MB page at physical
address zero on older cpus.  We avoided bugs here due to sheer luck.
2) sets up PSE page tables right from the start in locore, rather than
trying to switch from 4K to 4M (or 2M) pages part way through the boot
sequence at the same time that we're messing with PG_G.

For some reason, the pmap work over the last 18 months seems to tickle
the problems, and the PAE infrastructure changes disturb the cpu
bugs even more.

A couple of people have reported a problem with APM bios calls during
boot.  I'll work with people to get this resolved.

Obtained from:	bmilekic
2003-10-01 23:46:08 +00:00
Alan Cox
f3fd831cdd - Eliminate the pte object.
- Use kmem_alloc_nofault() rather than kmem_alloc_pageable() to allocate
   KVA space for the page directory page(s).  Submitted by: tegge
2003-09-25 02:51:06 +00:00
Jake Burkholder
14ce5bd49b Use inlines for loading and storing page table entries. Use cmpxchg8b for
the PAE case to ensure idempotent 64 bit loads and stores.

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-28 20:35:36 +00:00
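
With PAE a PTE is 64 bits wide, so a pair of 32-bit accesses could tear; a cmpxchg8b loop provides the idempotent 64-bit load/store described above.  A sketch using the GCC-style builtin rather than hand-written assembler:

    #include <stdint.h>

    typedef uint64_t pt_entry_t;            /* PAE PTEs are 64 bits wide */

    /* Atomically replace *ptep with 'newpte' and return the old value. */
    static inline pt_entry_t
    sketch_pae_pte_load_store(pt_entry_t *ptep, pt_entry_t newpte)
    {
            pt_entry_t old = *ptep;

            /* Retries until the 8-byte CAS (cmpxchg8b on i586+) succeeds. */
            while (!__sync_bool_compare_and_swap(ptep, old, newpte))
                    old = *ptep;
            return (old);
    }
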
Jake Burkholder
ac00210525 Remove invalid cast to vm_offset_t to avoid truncating a physical address
when doing pmap_kextract on a 2MB page.

Spotted by:	peter
Sponsored by:	DARPA, Network Associates Laboratories
2003-04-08 18:22:41 +00:00
Jake Burkholder
46ea68dd10 Better fix for the change before last, one which still allows the 4 megs
of kva at the top of the address space to be reclaimed.  The problem is that
with the APTD gone the mappable kernel address space runs right to the end of
the 32 bit address space.  As a max this is 0x100000000, which can't be
represented in 32 bits, so we have to use ptd entry n-1 and pte offset
n-1, instead of ptd entry n and pte offset 0.  There's still 1 page we
can't use, but we gain just under 4 megs of kva (8 megs with PAE).

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-07 14:27:19 +00:00
Jake Burkholder
d1d03c2b72 Bandaid fix for previous commit while I figure out why it broke. This
caused crashes early in boot on i386 UP machines.

Reported by:	phk
Pointy hat to:	jake
2003-04-04 10:09:44 +00:00
Jake Burkholder
163529c2b3 - Removed APTD and associated macros; it is no longer used.
BANG BANG BANG etc.

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-03 23:44:35 +00:00
Peter Wemm
cc66ebe2a9 Commit a partial lazy thread switch mechanism for i386.  It isn't as lazy
as it could be and can do with some more cleanup.  Currently it's under
options LAZY_SWITCH.  What this does is avoid %cr3 reloads for short
context switches that do not involve another user process.  i.e., we can
take an interrupt, switch to a kthread and return to the user without
explicitly flushing the tlb.  However, this isn't as exciting as it could
be, the interrupt overhead is still high and too much blocks on Giant
still.  There are some debug sysctls, for stats and for an on/off switch.

The main problem with doing this has been "what if the process that you're
running on exits while we're borrowing its address space?" - in this case
we use an IPI to give it a kick when we're about to reclaim the pmap.

It's not compiled in unless you add the LAZY_SWITCH option.  I want to fix a
few more things and get some more feedback before turning it on by default.

This is NOT a replacement for Bosko's lazy interrupt stuff.  This was more
meant for the kthread case, while his was for interrupts.  Mine helps a
little for interrupts, but his helps a lot more.

The stats are enabled with options SWTCH_OPTIM_STATS - this has been a
pseudo-option for years, I just added a bunch of stuff to it.

One non-trivial change was to select a new thread before calling
cpu_switch() in the first place.  This allows us to catch the silly
case of doing a cpu_switch() to the current process.  This happens
uncomfortably often.  This simplifies a bit of the asm code in cpu_switch
(no longer have to call choosethread() in the middle).  This has been
implemented on i386 and (thanks to jake) sparc64.  The others will come
soon.  This is actually separate from the lazy switch stuff.

Glanced at by:  jake, jhb
2003-04-02 23:53:30 +00:00