Commit Graph

139 Commits

Alan Cox
5cccf58676 Shrink the size of struct vm_page on amd64 and i386 by eliminating
pv_list_count from struct md_page.  Ever since Peter rewrote the pv
entry allocator for amd64 and i386, pv_list_count has been correctly
maintained but otherwise unused.
2008-01-06 18:51:04 +00:00
Ruslan Ermilov
3cbc967ef7 Use a different bitmask for superpages' base address so that it
doesn't conflict with the PG_PDE_PAT bit.  (We still don't mask
off all the reserved bits but that's okay for now.)

Reviewed by:	alc
2006-12-05 11:31:33 +00:00
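For reference, a sketch of the masks involved (constant names per the amd64 pmap headers; treat the exact values as illustrative of the layout):

```c
/*
 * In a 2MB-superpage PDE the PAT bit (PG_PDE_PAT) occupies bit 12,
 * which in a 4KB PTE is the lowest physical-address bit, so the
 * superpage base-address mask must start at bit 21 instead.
 */
#define	PG_FRAME	0x000ffffffffff000UL	/* 4KB PTE: addr bits 12-51 */
#define	PG_PS_FRAME	0x000fffffffe00000UL	/* 2MB PDE: addr bits 21-51 */
#define	PG_PDE_PAT	0x1000UL		/* PAT selector bit in a PDE */
```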
Alan Cox
da44960498 The global variable avail_end is redundant and only used once. Eliminate
it.  Make avail_start static to the pmap on amd64.  (It no longer exists
on other architectures.)
2006-11-19 20:54:58 +00:00
Ruslan Ermilov
d77f5882e7 Fix NKPT comments to match reality. Note that the current value
of NKPT is no longer enough to run amd64 with 16G of RAM, as it
doesn't have space for mapping the kernel (a 16M kernel would
require an additional 8 page tables).
2006-11-13 20:33:54 +00:00
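Back-of-the-envelope arithmetic for the parenthetical above (mine, not the committer's): one amd64 page table page holds 512 8-byte PTEs and so maps 2MB of KVA.

```c
#define	PT_SPAN		(512UL * 4096)		/* one page table page maps 2MB */
#define	KERN_IMAGE	(16UL << 20)		/* the 16M kernel mentioned above */
#define	KERN_PTS	(KERN_IMAGE / PT_SPAN)	/* = 8 additional page tables */
```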
Ruslan Ermilov
26af9ac7d0 Fix a comment. 2006-11-13 06:26:57 +00:00
John Baldwin
7e9f73f3ed First pass at allowing memory to be mapped using cache modes other than
WB (write-back) on x86 via control bits in PTEs and PDEs (including making
use of the PAT MSR).  Changes include:
- A new pmap_mapdev_attr() function for amd64 and i386 which takes an
  additional parameter (relative to pmap_mapdev()) specifying the cache
  mode for this mapping.  Note that on amd64 only WB mappings are done with
  the direct map; all other modes result in a private mapping.
- pmap_mapdev() on i386 and amd64 now defaults to using UC (uncached)
  mappings rather than WB.  Previously we relied on the BIOS setting up
  MTRRs to enforce memio regions being treated as UC.  For example, this
  might now make hw.cbb_start_memory unnecessary in some cases.
- A new pmap_mapbios()/pmap_unmapbios() API has been added to allow places
  that used pmap_mapdev() to map non-device memory (such as ACPI tables)
  to do so using WB as before.
- A new pmap_change_attr() function for amd64 and i386 that changes the
  caching mode for a range of KVA.

Reviewed by:	alc
2006-08-11 19:22:57 +00:00
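A hedged usage sketch of the new API (the PAT_* mode names follow x86 specialreg.h; the device addresses here are made up for illustration):

```c
static void
map_example(void)
{
	/* Hypothetical frame buffer: map it write-combined. */
	void *fb = pmap_mapdev_attr(0xe0000000UL, 8 << 20,
	    PAT_WRITE_COMBINING);

	/* pmap_mapdev() itself now produces UC mappings for registers. */
	void *regs = pmap_mapdev(0xfebf0000UL, 0x1000);

	/* ... use the mappings ... */
	pmap_unmapdev((vm_offset_t)regs, 0x1000);
	pmap_unmapdev((vm_offset_t)fb, 8 << 20);
}
```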
Alan Cox
f8883c0160 Define the additional page fault error codes that are implemented by amd64. 2006-08-02 16:24:23 +00:00
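For reference, the amd64 error-code bits look like this (PGEX_* names as in the FreeBSD headers; PGEX_RSV and PGEX_I are the amd64 additions):

```c
#define	PGEX_P		0x01	/* protection violation, not a missing page */
#define	PGEX_W		0x02	/* fault was a write */
#define	PGEX_U		0x04	/* fault happened in user mode */
#define	PGEX_RSV	0x08	/* reserved PTE bit was set (new on amd64) */
#define	PGEX_I		0x10	/* instruction fetch (relevant with PG_NX) */
```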
John Baldwin
2b8a339c7e Add various constants for the PAT MSR and the PAT PTE and PDE flags.
Initialize the PAT MSR during boot to map PAT type 2 to Write-Combining
(WC) instead of Uncached (UC-).

MFC after:	1 month
2006-05-01 22:07:00 +00:00
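A minimal sketch of the initialization described (MSR_PAT and the PAT_* values per x86 specialreg.h; the exact committed table may differ):

```c
uint64_t pat_msr;

/* Leave entries 0 and 1 at their usual types; repoint entry 2
 * (normally UC-) at Write-Combining so PTEs can select WC. */
pat_msr = PAT_VALUE(0, PAT_WRITE_BACK) |
    PAT_VALUE(1, PAT_WRITE_THROUGH) |
    PAT_VALUE(2, PAT_WRITE_COMBINING) |
    PAT_VALUE(3, PAT_UNCACHEABLE);
wrmsr(MSR_PAT, pat_msr);
```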
John Baldwin
4ac60df584 Add a new 'pmap_invalidate_cache()' to flush the CPU caches via the
wbinvd() instruction.  This includes a new IPI so that all CPU caches on
all CPUs are flushed for the SMP case.

MFC after:	1 month
2006-05-01 21:36:47 +00:00
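Conceptually the new primitive reduces to this (a sketch, not the committed code):

```c
void
pmap_invalidate_cache(void)
{
#ifdef SMP
	smp_cache_flush();	/* IPI all CPUs; each executes wbinvd() */
#else
	wbinvd();		/* write back and invalidate this CPU's caches */
#endif
}
```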
Peter Wemm
68ac481184 Shrink the amd64 pv entry from 48 bytes to about 24 bytes. On a machine
with large mmap files mapped into many processes, this saves hundreds of
megabytes of ram.
pv entries were individually allocated and had two tailq entries and two
pointers (or addresses).  Each pv entry was linked to a vm_page_t and
a process's address space (pmap).  It had the virtual address and a
pointer to the pmap.
This change replaces the individual allocation with a per-process
allocation system.  A page ("pv chunk") is allocated and this provides
168 pv entries for that process.  We can now eliminate one of the 16 byte
tailq entries because we can simply iterate through the pv chunks to find
all the pv entries for a process.  We can eliminate one of the 8 byte
pointers because the location of the pv entry implies the containing
pv chunk, which has the pointer.  After the overhead of the pv chunk
bitmap and tailq linkage, this works out to an effective size of
24.38 bytes per pv entry.

Future work still required, and other problems:
* when running low on pv entries or system ram, we may need to defrag
  the chunk pages and free any spares.  The stats (vm.pmap.*) show that
  this doesn't seem to be that much of a problem, but it can be done if
  needed.
* running low on pv entries is now a much bigger problem.  The old
  get_pv_entry() routine just needed to reclaim one other pv entry.
  Now, since they are per-process, we can only use pv entries that are
  assigned to our current process, or by stealing an entire page worth
  from another process.  Under normal circumstances, the pmap_collect()
  code should be able to dislodge some pv entries from the current
  process.  But if needed, it can still reclaim entire pv chunk pages
  from other processes.
* This should port to i386 really easily, except there it would reduce
  pv entries from 24 bytes to about 12 bytes.

(I have integrated Alan's recent changes.)
2006-04-03 21:36:01 +00:00
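A sketch of the chunk layout the message describes (field names follow the amd64 pmap; the arithmetic checks out: 8 + 16 + 24 bytes of header plus 168 * 24-byte entries fit in a 4096-byte page, and 4096 / 168 ≈ 24.38 bytes per entry):

```c
typedef struct pv_entry {
	vm_offset_t	pv_va;			/* the mapped virtual address */
	TAILQ_ENTRY(pv_entry) pv_list;		/* link on the vm_page's pv list */
} *pv_entry_t;

#define	_NPCM	3				/* words in the free bitmap */
#define	_NPCPV	168				/* pv entries per chunk */

struct pv_chunk {
	pmap_t		pc_pmap;		/* owner; implied by entry address */
	TAILQ_ENTRY(pv_chunk) pc_list;		/* per-pmap chunk list */
	uint64_t	pc_map[_NPCM];		/* bitmap of free pc_pventry slots */
	struct pv_entry	pc_pventry[_NPCPV];
};
```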
Peter Wemm
8d0593f54e Merge/sync with i386: various cosmetic tweaks 2006-03-14 00:01:56 +00:00
John Baldwin
696effb697 - Cleanup whitespace and extra ()s in vtophys() macros.
- Move vtophys() macros next to vtopte() where vtopte() exists to match
  comments above vtopte().
- Remove references to the alternate address space in the comment above
  vtopte().  amd64 never had the alternate address space, and i386 lost it
  prior to PAE support being added.
- s/entires/entries/ in comments.

Reviewed by:	alc
2005-12-06 21:09:01 +00:00
Peter Wemm
235a54de9d Switch AMD64 and i386 platforms to using ELF as their kernel crash
dump format.  The key reason to do this is so that we can dump sparse
address space.  For example, we need to be able to skip the PCI hole
just below the 4GB boundary.  Trying to destructively dump MMIO device
registers is Really Bad(TM).  The frequent result of trying to do a
crash dump on a machine with 4GB or more of RAM was ugly (lockup or reboot).

This code has been taken directly from the IA64 dump_machdep.c code,
with just a few (mostly minor) mods.

Introduce a dump_avail[] array in the machdep.c code so that we have a
source of truth for what memory is present in a machine that needs to be
dumped.  We can't use phys_avail[] because all sorts of things slice
memory out of it that we really need to dump, e.g., the vm page array
and the dmesg buffer.  dump_avail[] is pretty much an unmolested version
of phys_avail[].  It does have the Maxmem correction applied.

Bump the i386 and amd64 dump format to version 2, but nothing actually
uses this.  amd64 was actually using the i386 dump version number.

libkvm support to follow.

Approved by:	re
2005-06-29 22:28:46 +00:00
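dump_avail[] follows the phys_avail[] convention: start/end pairs terminated by a zero entry. A sketch of walking it:

```c
vm_paddr_t start, end;
int i;

for (i = 0; dump_avail[i + 1] != 0; i += 2) {
	start = dump_avail[i];
	end = dump_avail[i + 1];
	/* ... describe [start, end) in an ELF program header, then dump it ... */
}
```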
Warner Losh
46280ae719 Begin all license/copyright comments with /*- 2005-01-05 20:17:21 +00:00
Alan Cox
aced26ce6e Make pte_load_store() an atomic operation in all cases, not just i386 PAE.
Restructure pmap_enter() to prevent the loss of a page modified (PG_M) bit
in a race between processors.  (This restructuring assumes the newly atomic
pte_load_store() for correct operation.)

Reviewed by: tegge@
PR: i386/61852
2004-10-08 08:23:43 +00:00
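On amd64 the atomic version can be a single locked exchange; a sketch (i386 PAE needs cmpxchg8b instead, per jake's inline commit further down):

```c
static __inline pt_entry_t
pte_load_store(pt_entry_t *ptep, pt_entry_t npte)
{
	/* xchg is implicitly locked: the old PTE, including any PG_M or
	 * PG_A bit a racing CPU just set, comes back atomically. */
	__asm __volatile("xchgq %1,%0" : "+m" (*ptep), "+r" (npte));
	return (npte);
}
```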
Alan Cox
0a752e9843 Prevent the unexpected deallocation of a page table page while performing
pmap_copy().  This entails additional locking in pmap_copy() and the
addition of a "flags" parameter to the page table page allocator for
specifying whether it may sleep when memory is unavailable.  (Already,
pmap_copy() checks the availability of memory, aborting if it is scarce.
In theory, another CPU could, however, allocate memory between
pmap_copy()'s check and the call to the page table page allocator,
causing the current thread to release its locks and sleep.  This change
makes this scenario impossible.)

Reviewed by: tegge@
2004-09-29 19:20:40 +00:00
Scott Long
9e0c3bdf64 Double the number of kernel page tables for amd64 and for i386/PAE. The old
value was only enough for 8GB of RAM; the new value can do 16GB.  This still
isn't optimal since it doesn't scale.  Fixing this for amd64 looks to be
fairly easy, but for i386 will be quite difficult.

Reviewed by: peter
2004-09-11 01:31:26 +00:00
Alan Cox
fa543780cc Remove the allpmaps list. It's unused.
Reviewed by: peter@
2004-07-20 02:40:56 +00:00
Peter Wemm
6897c4aef7 Like on i386, eliminate pv_ptem (which was suggested by alc). This
reduces the size of the pv_entry structure a small but significant amount.

This is implemented a little differently because it isn't so cheap to get
the physical address of the page table page on amd64: instead of it
being directly accessible from the top level page directory, it is now
two additional tree levels down.  However, in almost all cases we
recently had the physical address of the page table page a short while
before we needed it, but it slipped through our fingers.  This patch
saves it for when we do need it.  Also, for the one case where we do not
have the ptp paddr, we are always running in curproc context and so we
can do a vtopte-like trick.  I've implemented vtopde() for this purpose.

There is still a CYA entry in pmap_unuse_pt() that needs to be removed.  I
think it can be removed now but I forgot to test with it gone.
2004-07-14 07:13:35 +00:00
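The vtopde() helper is a close cousin of vtopte(), one level up the recursive mapping; roughly (PDmap and the shift constants as in the amd64 headers):

```c
/* Point into the recursive page-directory mapping for va. */
static __inline pd_entry_t *
vtopde(vm_offset_t va)
{
	uint64_t mask;

	mask = ((1ul << (NPDEPGSHIFT + NPDPEPGSHIFT + NPML4EPGSHIFT)) - 1);
	return (PDmap + ((va >> PDRSHIFT) & mask));
}
```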
Bruce Evans
7a637a637e Include <sys/_lock.h>'s prerequisite <sys/queue.h> before including the
former, not after.

Don't hide this bug by including <sys/queue.h> in <sys/_lock.h>.
2004-06-19 14:58:35 +00:00
Alan Cox
50f91a9445 Introduce pmap locking to many of the pmap functions. There is more to
come later.
2004-06-14 01:17:50 +00:00
Alan Cox
8559e0a291 - Remove an unused declaration.
- Move a definition inside the scope of a #ifdef _KERNEL.
2004-06-13 03:44:11 +00:00
Peter Wemm
430e272c7e Initial PG_NX support (no-execute page bit)
- export the rest of the cpu features (and amd's features).
- turn on EFER_NXE, depending on the NX amd feature bit
- reorg the identcpu stuff a bit in order to stop treating the
  amd features as second class features (since it is now a primary feature
  bit set) and make it easier to export.
2004-06-08 01:02:52 +00:00
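The enabling step amounts to something like this (AMDID_NX, MSR_EFER, and EFER_NXE per the x86 headers; pg_nx is the pmap's runtime NX mask):

```c
/* If the CPU advertises NX, enable EFER.NXE so PG_NX takes effect. */
if (amd_feature & AMDID_NX) {
	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_NXE);
	pg_nx = PG_NX;
}
```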
Peter Wemm
7d95d34bb7 Mask PTEs with PG_FRAME before passing them to PHYS_TO_VM_PAGE().  PG_NX
lives in the top 12 'available' bits.  atop() in the PHYS_TO_VM_PAGE()
macro only masks off the lower bits (by accident) and the upper bits
in the 64 bit ptes turn into "interesting" index values.
2004-06-08 00:29:42 +00:00
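The shape of the fix, sketched:

```c
vm_page_t m;
pt_entry_t pte = *vtopte(va);

/* Buggy: PG_NX (bit 63) and the other high bits leak into atop()'s index. */
m = PHYS_TO_VM_PAGE(pte);

/* Fixed: keep only the physical-frame bits first. */
m = PHYS_TO_VM_PAGE(pte & PG_FRAME);
```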
Peter Wemm
792e29ba26 For consistency with i386, have pmap_kenter_temporary() take a vm_paddr_t
argument.  It is actually the same type on amd64 (vm_paddr_t = vm_offset_t)
but this reduces the i386<->amd64 diffs a little.
2004-05-16 20:44:41 +00:00
Peter Wemm
463e5aa66e MFi386: numerous interrupt and acpi updates 2004-05-16 20:30:47 +00:00
Alan Cox
b14d6acced - pmap_kenter_temporary() is unused by machine-independent code.  Therefore,
  move its declaration to the machine-dependent header file on those
  machines that use it.  In principle, only i386 should have it.
  Alpha and AMD64 should use their direct virtual-to-physical mapping.
- Remove pmap_kenter_temporary() from ia64.  It is unused.

Approved by:	marcel@
2004-04-10 22:41:46 +00:00
Warner Losh
9a80fddc71 Remove advertising clause from University of California Regent's license,
per letter dated July 22, 1999 and email from Peter Wemm.

Approved by: core, peter
2004-04-05 23:55:14 +00:00
Alan Cox
803e5b9ccc Remove ptmmap and ptvmmap. They are unused on amd64. 2004-04-03 20:53:50 +00:00
Alan Cox
925d2fedf5 Remove unused declarations. (Some time ago, these variables became fields
of vm/vm.h's struct kva_md_info.)
2004-03-07 07:13:15 +00:00
Peter Wemm
0d2a298904 Initial landing of SMP support for FreeBSD/amd64.
- This is heavily derived from John Baldwin's apic/pci cleanup on i386.
- I have completely rewritten or drastically cleaned up some other parts.
  (in particular, bootstrap)
- This is still a WIP.  It seems that there are some highly bogus BIOSes
  on nVidia nForce3-150 boards.  I can't stress how broken these boards
  are.  I have a workaround in mind, but right now the Asus SK8N is broken.
  The Gigabyte K8NPro (nVidia based) is also mind-numbingly hosed.
- Most of my testing has been with SCHED_ULE.  SCHED_4BSD works.
- the apic and acpi components are 'standard'.
- If you have an nVidia nForce3-150 board, you are stuck with 'device
  atpic' in addition, because they somehow managed to forget to connect the
  8254 timer to the apic, even though it's in the same silicon!  ARGH!
  This directly violates the ACPI spec.
2003-11-17 08:58:16 +00:00
Peter Wemm
fcfe57d640 Update the graffiti. 2003-11-08 04:39:22 +00:00
Bruce M Simpson
2bc7dd5661 Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
Add a definition of pmap_wired_count().
Add a definition of vmspace_wired_count().

Reviewed by:	truckman
Discussed with:	peter
2003-10-06 01:47:12 +00:00
Alan Cox
9060731130 Eliminate the pte object. 2003-09-27 20:53:01 +00:00
Peter Wemm
bf8ca114e2 Fix the VADDR() macros to use either KVADDR() or UVADDR(), depending
on the implied sign extension.  The single unified VADDR() macro was
not able to avoid sign extending the VM_MAXUSER_ADDRESS/USRSTACK values.
Be explicit about UVADDR() (positive address space) and KVADDR()
(kernel negative address space) to make mistakes show up more
spectacularly.

Increase user VM space from 1/2TB (512GB) to 128TB.
2003-07-09 23:04:23 +00:00
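The split macros look roughly like this (shift names per the amd64 headers; the explicit -1 << 47 supplies the kernel sign extension that the old unified macro could get wrong):

```c
#define	UVADDR(l4, l3, l2, l1) (			\
	((unsigned long)(l4) << PML4SHIFT) |		\
	((unsigned long)(l3) << PDPSHIFT) |		\
	((unsigned long)(l2) << PDRSHIFT) |		\
	((unsigned long)(l1) << PAGE_SHIFT))

/* Kernel addresses: bits 47-63 must all be 1 (canonical negative space). */
#define	KVADDR(l4, l3, l2, l1)				\
	(((unsigned long)-1 << 47) | UVADDR(l4, l3, l2, l1))
```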
Hidetoshi Shimokawa
e07324646e Move KERNBASE to -2GB.
Currently, we cannot increase KVA beyond 2GB.
2003-06-22 13:02:45 +00:00
Peter Wemm
cbd667fa2f Update comments. Note that the kernel is at -1GB, not -2GB as erroneously
implied by the previous commit.  KVM is still only 1GB until
pmap_growkernel() learns about the extra page table level.

Approved by:  re (blanket)
2003-05-23 06:35:45 +00:00
Peter Wemm
3c9a3c9ca3 Major pmap rework to take advantage of the larger address space on amd64
systems.  Of note:
- Implement a direct mapped region using 2MB pages.  This eliminates the
  need for temporary mappings when getting ptes.  This supports up to
  512GB of physical memory for now.  This should be enough for a while.
- Implement a 4-tier page table system.  Most of the infrastructure is
  there for 128TB of userland virtual address space, but only 512GB is
  presently enabled due to a mystery bug somewhere.  The design of this
  was heavily inspired by the alpha pmap.c.
- The kernel is moved into the negative address space(!).
- The kernel has 2GB of KVM available.
- Provide a uma memory allocator to use the direct map region to take
  advantage of the 2MB TLBs.
- Fixed some assumptions in the bus_space macros about the ability
  to fit virtual addresses in an 'int'.

Notable missing things:
- pmap_growkernel() should be able to grow to 512GB of KVM by expanding
  downwards below kernbase.  The kernel must be at the top 2GB of the
  negative address space because of gcc code generation strategies.
- need to fix the >512GB user vm code.

Approved by:	re (blanket)
2003-05-23 05:04:54 +00:00
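The direct map turns physical-to-virtual translation into a constant-time macro; sketched with the usual names (DMPML4I marks the PML4 slot backing the region):

```c
/* Illustrative: all of physical memory appears at a fixed kernel offset. */
#define	DMAP_MIN_ADDRESS	KVADDR(DMPML4I, 0, 0, 0)

#define	PHYS_TO_DMAP(pa)	((vm_offset_t)(pa) | DMAP_MIN_ADDRESS)
#define	DMAP_TO_PHYS(va)	((vm_paddr_t)(va) & ~DMAP_MIN_ADDRESS)
```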
Peter Wemm
be52ef1399 Use compile time constants for things like PTmap[] etc. because they're
about to move outside of the +/- 2GB range.

Suggested by:	jake
Approved by:	re (amd64/* blanket)
2003-05-15 00:20:17 +00:00
Peter Wemm
afa8862328 Commit MD parts of a loosely functional AMD64 port. This is based on
a heavily stripped down FreeBSD/i386 (brutally stripped down actually) to
attempt to get a stable base to start from.  There is a lot missing still.
Worth noting:
- The kernel runs at 1GB in order to cheat with the pmap code.  pmap uses
  a variation of the PAE code in order to avoid having to worry about 4
  levels of page tables yet.
- It boots in 64 bit "long mode" with a tiny trampoline embedded in the
  i386 loader.  This simplifies locore.s greatly.
- There are still quite a few fragments of i386-specific code that have
  not been translated yet, and some that I cheated and wrote dumb C
  versions of (bcopy etc).
- It has both int 0x80 for syscalls (but using registers for argument
  passing, as is native on the amd64 ABI), and the 'syscall' instruction
  for syscalls.  int 0x80 preserves all registers, 'syscall' does not.
- I have tried to minimize looking at the NetBSD code, except in a couple
  of places (eg: to find which register they use to replace the trashed
  %rcx register in the syscall instruction).  As a result, there is not a
  lot of similarity.  I did look at NetBSD a few times while debugging to
  get some ideas about what I might have done wrong in my first attempt.
2003-05-01 01:05:25 +00:00
Jake Burkholder
14ce5bd49b Use inlines for loading and storing page table entries. Use cmpxchg8b for
the PAE case to ensure atomic 64 bit loads and stores.

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-28 20:35:36 +00:00
Jake Burkholder
ac00210525 Remove invalid cast to vm_offset_t to avoid truncating a physical address
when doing pmap_kextract on a 2MB page.

Spotted by:	peter
Sponsored by:	DARPA, Network Associates Laboratories
2003-04-08 18:22:41 +00:00
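The corrected extraction, sketched with helpers that appear elsewhere in this log (vtopde() from peter's 2004 commit above, PG_PS_FRAME from ru's 2006 commit at the top; the 2003 code spelled these differently):

```c
vm_paddr_t
pmap_kextract(vm_offset_t va)
{
	pd_entry_t pde = *vtopde(va);

	if (pde & PG_PS)	/* 2MB page: keep all 64 bits of the frame */
		return ((pde & PG_PS_FRAME) | (va & PDRMASK));
	return ((*vtopte(va) & PG_FRAME) | (va & PAGE_MASK));
}
```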
Jake Burkholder
46ea68dd10 Better fix for the commit before last which still allows the 4 megs of kva at
the top of the address space to be reclaimed.  The problem is that with
the APTD gone the mappable kernel address space runs right to the end of
the 32 bit address space.  As a max this is 0x100000000, which can't be
represented in 32 bits, so we have to use ptd entry n-1 and pte offset
n-1, instead of ptd entry n and pte offset 0.  There's still 1 page we
can't use, but we gain just under 4 megs of kva (8 megs with PAE).

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-07 14:27:19 +00:00
Jake Burkholder
d1d03c2b72 Bandaid fix for previous commit while I figure out why it broke. This
caused crashes early in boot on i386 UP machines.

Reported by:	phk
Pointy hat to:	jake
2003-04-04 10:09:44 +00:00
Jake Burkholder
163529c2b3 - Removed APTD and associated macros; they are no longer used.
BANG BANG BANG etc.

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-03 23:44:35 +00:00
Peter Wemm
cc66ebe2a9 Commit a partial lazy thread switch mechanism for i386. It isn't as lazy
as it could be and could do with some more cleanup.  Currently it's under
options LAZY_SWITCH.  What this does is avoid %cr3 reloads for short
context switches that do not involve another user process, i.e., we can
take an interrupt, switch to a kthread and return to the user without
explicitly flushing the tlb.  However, this isn't as exciting as it could
be: the interrupt overhead is still high and too much still blocks on
Giant.  There are some debug sysctls, for stats and for an on/off switch.

The main problem with doing this has been "what if the process that you're
running on exits while we're borrowing its address space?" - in this case
we use an IPI to give it a kick when we're about to reclaim the pmap.

It's not compiled in unless you add the LAZY_SWITCH option.  I want to fix a
few more things and get some more feedback before turning it on by default.

This is NOT a replacement for Bosko's lazy interrupt stuff.  This was more
meant for the kthread case, while his was for interrupts.  Mine helps a
little for interrupts, but his helps a lot more.

The stats are enabled with options SWTCH_OPTIM_STATS - this has been a
pseudo-option for years, I just added a bunch of stuff to it.

One non-trivial change was to select a new thread before calling
cpu_switch() in the first place.  This allows us to catch the silly
case of doing a cpu_switch() to the current process.  This happens
uncomfortably often.  This simplifies a bit of the asm code in cpu_switch
(no longer have to call choosethread() in the middle).  This has been
implemented on i386 and (thanks to jake) sparc64.  The others will come
soon.  This is actually separate from the lazy switch stuff.

Glanced at by:  jake, jhb
2003-04-02 23:53:30 +00:00
Jake Burkholder
7ab9b220d9 - Add support for PAE and more than 4 gigs of ram on x86, dependent on the
  kernel option 'options PAE'.  This will only work with device drivers which
  either use busdma, or are able to handle 64 bit physical addresses.

Thanks to Lanny Baron from FreeBSD Systems for the loan of a test machine
with 6 gigs of ram.

Sponsored by:	DARPA, Network Associates Laboratories, FreeBSD Systems
2003-03-30 05:24:52 +00:00
Jake Burkholder
de54353fb8 - Remove invalid casts.
Sponsored by:	DARPA, Network Associates Laboratories
2003-03-30 01:44:16 +00:00
Jake Burkholder
aea57872f0 - Convert all uses of pmap_pte and get_ptbase to pmap_pte_quick. When
accessing an alternate address space this causes 1 page table page at
  a time to be mapped in, rather than using the recursive mapping technique
  to map in an entire alternate address space.  The recursive mapping
  technique changes large portions of the address space and requires global
  tlb flushes, which seem to cause problems when PAE is enabled.  This will
  also allow IPIs to be avoided when mapping in new page table pages using
  the same technique as is used for pmap_copy_page and pmap_zero_page.

Sponsored by:	DARPA, Network Associates Laboratories
2003-03-30 01:16:19 +00:00
Jake Burkholder
227f9a1c58 - Add vm_paddr_t, a physical address type. This is required for systems
  where physical addresses are larger than virtual addresses, such as i386s
  with PAE.
- Use this to represent physical addresses in the MI vm system and in the
  i386 pmap code.  This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
  detection code, and due to kvtop returning vm_paddr_t instead of u_long.

Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.

Sponsored by:	DARPA, Network Associates Laboratories
Discussed with:	re, phk (cdevsw change)
2003-03-25 00:07:06 +00:00
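The typedef itself is tiny; per the note above it starts out identical to vm_offset_t, and the PAE commit further up widens it on i386. A sketch:

```c
#ifdef PAE
typedef uint64_t	vm_paddr_t;	/* physical addresses can exceed 32 bits */
#else
typedef vm_offset_t	vm_paddr_t;	/* name change only, same width */
#endif
```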