Commit Graph

34 Commits

Author SHA1 Message Date
Konstantin Belousov
00efd852cc The word 'end' was missing from the comment.
MFC after:     3 days
2013-02-08 15:52:20 +00:00
Gavin Atkinson
915ae29a17 Prevent indent(1) from reformatting this comment, as it contains
a formatting-sensitive table.
2012-09-07 08:18:06 +00:00
Marius Strobl
21bbe2ac13 For machines where the kernel address space is unrestricted, increase
VM_KMEM_SIZE_SCALE to 2 while awaiting more insight from alc@. As it
turns out, the VM apparently has problems with machines that have large
holes in the physical address space, causing the kmem_suballoc() call in
kmeminit() to fail with a VM_KMEM_SIZE_SCALE of 1. Using a value of 2
allows the affected machines, namely a Blade 1500 with 2GB of RAM, to
boot.

PR:	164227
2012-01-27 22:25:46 +00:00
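
A hedged sketch of the knob this commit bumps, as it might read in
sparc64's vmparam.h; the comment wording is an assumption:

    /*
     * At most physmem / VM_KMEM_SIZE_SCALE pages of physical memory
     * may back kmem_map; raising the scale from 1 to 2 halves that.
     */
    #define VM_KMEM_SIZE_SCALE      (2)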
Marius Strobl
915d84ba38 Fix whitespace 2011-06-21 20:50:55 +00:00
Marius Strobl
0e3d1b3853 On machines where we don't need to lock the kernel TSB into the dTLB, and
thus may use basically the entire 64-bit kernel address space, reduce
VM_KMEM_SIZE_SCALE to 1, allowing the kernel to use more memory.
2011-06-21 20:48:14 +00:00
Matthew D Fleming
cfb00e5aa7 Move the ZERO_REGION_SIZE to a machine-dependent file, as on many
architectures (i386, for example) the virtual memory space may be
constrained enough that 2MB is a large chunk.  Use 64K for arches
other than amd64 and ia64, with special handling for sparc64 due to
differing hardware.

Also commit the comment changes to kmem_init_zero_region() that I
missed due to not saving the file.  (Darn the unfamiliar development
environment).

Arch maintainers, please feel free to adjust ZERO_REGION_SIZE as you
see fit.

Requested by:	alc
MFC after:	1 week
MFC with:	r221853
2011-05-13 19:35:01 +00:00
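
As a sketch, the machine-dependent knob described above might look as
follows in each architecture's vmparam.h; the 2MB and 64K values are
taken from the message, while the comments are assumptions:

    /* amd64 and ia64: KVA is plentiful, keep the 2MB zero region. */
    #define ZERO_REGION_SIZE        (2 * 1024 * 1024)

    /* Most other architectures, where KVA is comparatively scarce. */
    #define ZERO_REGION_SIZE        (64 * 1024)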
Marius Strobl
3a8a826af3 Remove the advertising clause from the UCB license according to the
July 22, 1999 addendum.
2011-03-13 13:42:43 +00:00
Konstantin Belousov
50a57dfbec Move repeated MAXSLP definition from machine/vmparam.h to sys/vmmeter.h.
Update the outdated comments describing MAXSLP and the process
selection algorithm for swap out.

Comments wording and reviewed by:	alc
2011-01-09 12:50:44 +00:00
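
For reference, a sketch of the consolidated definition in
sys/vmmeter.h; the value of 20 seconds is from memory and should be
treated as an assumption:

    /*
     * A process is considered for swap out only once it has slept
     * for at least MAXSLP seconds.
     */
    #define MAXSLP  20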
Marius Strobl
3318c3ef45 Revert r216080 so that kmem_map is again capped at 3/5 of the currently
rather modest kernel address space, in order to leave space for the
buffer cache, pipes, thread stacks, etc. on machines with more physical
memory. This holds until we take advantage of ASI_ATOMIC_QUAD_LDD_PHYS
on CPUs providing it, so that we no longer need to lock the kernel TSB
pages into the dTLB, basically making the entire 64-bit kernel address
space available on relevant machines.

Submitted by:	alc
2010-12-21 21:32:17 +00:00
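
A sketch of the capped definition this revert restores, assuming the
usual vmparam.h idiom; the 3/5 ratio is from the message:

    /* Cap kmem_map at 3/5 of the kernel address space. */
    #define VM_KMEM_SIZE_MAX        ((VM_MAX_KERNEL_ADDRESS - \
        VM_MIN_KERNEL_ADDRESS) * 3 / 5)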
Max Khon
5bdabbdbf8 Change VM_KMEM_SIZE_MAX to be just (VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS)
Suggested by:	marius
2010-11-30 16:49:06 +00:00
Max Khon
311e93395e Define VM_KMEM_SIZE_MAX on sparc64. Otherwise a kernel built with
DEBUG_MEMGUARD panics early in kmeminit() with the message
"kmem_suballoc: bad status return of 1" because of the zero "size"
argument passed to kmem_suballoc(), a consequence of "vm_kmem_size_max"
being zero.

The problem also exists on ia64.
2010-11-28 19:26:20 +00:00
Alan Cox
2cf36c8f67 Enable reservation-based physical memory allocation. Even without the
creation of large page mappings in the pmap, it can provide modest
performance benefits.  In particular, for a "buildworld" on a 2x 1GHz
Ultrasparc IIIi it reduced the wall clock time by 2.2% and the system
time by 12.6%.

Tested by:	marius@
2010-11-10 17:57:34 +00:00
John Baldwin
a3870a1826 Very rough first cut at NUMA support for the physical page allocator. For
now it uses a very dumb first-touch allocation policy.  This will change in
the future.
- Each architecture indicates the maximum number of supported memory domains
  via a new VM_NDOMAIN parameter in <machine/vmparam.h>.
- Each cpu now has a PCPU_GET(domain) member to indicate the memory domain
  a CPU belongs to.  Domain values are dense and numbered from 0.
- When a platform supports multiple domains, the default freelist
  (VM_FREELIST_DEFAULT) is split up into N freelists, one for each domain.
  The MD code is required to populate an array of mem_affinity structures.
  Each entry in the array defines a range of memory (start and end) and a
  domain for the range.  Multiple entries may be present for a single
  domain.  The list is terminated by an entry where all fields are zero
  (see the sketch after this entry).
  This array of structures is used to split up phys_avail[] regions that
  fall in VM_FREELIST_DEFAULT into per-domain freelists.
- Each memory domain has a separate lookup-array of freelists that is
  used when fulfilling a physical memory allocation.  Right now the
  per-domain freelists are listed in a round-robin order for each domain.
  In the future a table such as the ACPI SLIT table may be used to order
  the per-domain lookup lists based on the penalty for each memory domain
  relative to a specific domain.  The lookup lists may be examined via a
  new vm.phys.lookup_lists sysctl.
- The first-touch policy is implemented by using PCPU_GET(domain) to
  pick a lookup list when allocating memory.

Reviewed by:	alc
2010-07-27 20:33:50 +00:00
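
To illustrate the MD contract described above, a hedged sketch of a
mem_affinity array for a hypothetical two-domain machine; the struct
layout follows the description, while the field names and addresses
are assumptions:

    struct mem_affinity {
            vm_paddr_t      start;          /* start of the range */
            vm_paddr_t      end;            /* end of the range */
            int             domain;         /* owning memory domain */
    };

    static struct mem_affinity mem_affinity[] = {
            { 0x000000000, 0x3ffffffff, 0 },  /* first 16GB: domain 0 */
            { 0x400000000, 0x7ffffffff, 1 },  /* next 16GB: domain 1 */
            { 0, 0, 0 },            /* all-zero entry terminates */
    };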
Marius Strobl
ceab1bee37 - Use the generally more appropriate PROM base rather than the
  kernel one as the non-faulting flush address in the loader, so
  we can change KERNBASE and VM_MIN_KERNEL_ADDRESS if we ever
  want to without needing to worry about using a compatible
  loader.
- Correctly check for LOADER_DEBUG.
- Add a missing const for page_sizes[].
2009-02-10 21:48:42 +00:00
Marius Strobl
db85033cd0 Assume OpenSolaris knows better and use their value for VM_MAX_PROM_ADDRESS. 2008-08-12 20:00:28 +00:00
Alan Cox
b8e7fc24fe Add configuration knobs for the superpage reservation system. Initially,
the reservation will only be enabled on amd64.
2007-12-27 16:45:39 +00:00
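
The knobs in question are presumably VM_NRESERVLEVEL and
VM_LEVEL_0_ORDER; a sketch with amd64-style values, both of which are
assumptions here:

    /* One level of reservations. */
    #define VM_NRESERVLEVEL         1

    /* A level-0 reservation spans 2^9 = 512 base pages (2MB on amd64). */
    #define VM_LEVEL_0_ORDER        9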
Alan Cox
7bfda801a8 Change the management of cached pages (PQ_CACHE) in two fundamental
ways:

(1) Cached pages are no longer kept in the object's resident page
splay tree and memq.  Instead, they are kept in a separate per-object
splay tree of cached pages.  However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock.  Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.

This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE).  The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held.  Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.

Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case.  Cached pages
are reclaimed far, far more often than they are reactivated.  Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.

(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.

Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated.  Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page.  Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.

Discussed with: many over the course of the summer, including jeff@,
   Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
Alan Cox
04f70df029 Add the machine-specific definitions for configuring the new physical
memory allocator.

Approved by:	re
2007-06-04 02:32:07 +00:00
Alan Cox
04a18977c8 Define every architecture as either VM_PHYSSEG_DENSE or
VM_PHYSSEG_SPARSE depending on whether the physical address space is
densely or sparsely populated with memory.  The effect of this
definition is to determine which of two implementations of
vm_page_array and PHYS_TO_VM_PAGE() is used.  The legacy
implementation is obtained by defining VM_PHYSSEG_DENSE, and a new
implementation that trades off time for space is obtained by defining
VM_PHYSSEG_SPARSE.  For now, all architectures except for ia64 and
sparc64 define VM_PHYSSEG_DENSE.  Defining VM_PHYSSEG_SPARSE on ia64
allows the entirety of my Itanium 2's memory to be used.  Previously,
only the first 1 GB could be used.  Defining VM_PHYSSEG_SPARSE on
sparc64 allows USIIIi-based systems to boot without crashing.

This change is a combination of Nathan Whitehorn's patch and my own
work in perforce.

Discussed with: kmacy, marius, Nathan Whitehorn
PR:		112194
2007-05-05 19:50:28 +00:00
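
As a sketch, the per-architecture choice reduces to defining exactly
one of two symbols in vmparam.h:

    /* sparc64 and ia64: sparsely populated physical address space. */
    #define VM_PHYSSEG_SPARSE

    /* All other architectures at the time: */
    /* #define VM_PHYSSEG_DENSE */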
Stephane E. Potvin
0e5179e441 Add support for specifying a minimum size for vm.kmem_size in the loader
via vm.kmem_size_min. This is useful when using ZFS, to make sure that
vm.kmem_size will be at least 256MB (for example) without forcing a
particular value via vm.kmem_size.

Approved by: njl (mentor)
Reviewed by: alc
2007-04-21 01:14:48 +00:00
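
A minimal sketch of how such a floor might be applied around
kmeminit(); TUNABLE_ULONG_FETCH is a real kernel interface, but the
variable name and exact placement are assumptions:

    static u_long vm_kmem_size_min;

    /* Inside kmeminit(): fetch the loader tunable, e.g.
       vm.kmem_size_min="256M", and enforce it as a floor. */
    TUNABLE_ULONG_FETCH("vm.kmem_size_min", &vm_kmem_size_min);
    if (vm_kmem_size_min > 0 && vm_kmem_size < vm_kmem_size_min)
            vm_kmem_size = vm_kmem_size_min;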
Jake Burkholder
7b666b648f Define UMA_MD_SMALL_ALLOC so that uma_small_alloc and uma_small_free will
be used for zones that allocate objects of less than 1 page.  The biggest advantage
of this is that all of a sudden the majority of kernel malloc-ed data doesn't
need kva allocated for it.  Besides microbenchmarks I haven't seen a measurable
performance improvement from doing this.
2002-12-27 19:31:26 +00:00
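
A sketch of what opting in looks like; the prototypes are my
recollection of the era's interface and should be treated as
assumptions:

    /* In machine/vmparam.h: serve sub-page UMA allocations directly. */
    #define UMA_MD_SMALL_ALLOC

    /* MD back ends, typically built on the direct mapped region. */
    void    *uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *pflag,
                int wait);
    void     uma_small_free(void *mem, int size, u_int8_t flags);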
Jake Burkholder
a106c6930a - Change the way the direct mapped region is implemented to be generally
useful for accessing more than 1 page of contiguous physical memory, and
  to use 4mb tlb entries instead of 8k.  This requires that the system only
  use the direct mapped addresses when they have the same virtual colour as
  all other mappings of the same page, instead of being able to choose the
  colour and cachability of the mapping.
- Adapt the physical page copying and zeroing functions to account for not
  being able to choose the colour or cachability of the direct mapped
  address.  This adds a lot more cases to handle.  Basically when a page has
  a different colour than its direct mapped address we have a choice between
  bypassing the data cache and using physical addresses directly, which
  requires a cache flush, or mapping it at the right colour, which requires
  a tlb flush.  For now we choose to map the page and do the tlb flush.

This will allow the direct mapped addresses to be used for more things
that don't require normal pmap handling, including mapping the vm_page
structures, the message buffer, temporary mappings for crash dumps, and will
provide greater benefit for implementing uma_small_alloc, due to the much
greater tlb coverage.
2002-12-23 23:39:57 +00:00
Jake Burkholder
5aebb40291 Auto-size available kernel virtual address space based on physical memory
size.  This avoids blowing out kva in kmeminit() on large memory machines
(4 gigs or more).

Reviewed by:	tmm
2002-08-10 22:14:16 +00:00
Jake Burkholder
d73b19ef9d Use a fixed address for KERNBASE, so it doesn't change if the size of KVA
is increased.  It's confusing for all the kernel addresses to change, and
doesn't serve much purpose as far as conserving address space.
2002-07-13 03:29:10 +00:00
Jake Burkholder
e3957e9d67 Add constants for the min and max prom addresses. Use these instead of
magic numbers.  Use stxa_sync instead of stxa; membar #Sync; to ensure
that no instruction is placed between the two; an intervening instruction
can cause random corruption even though interrupts are already disabled.
2002-06-17 15:44:10 +00:00
Jake Burkholder
a46877abfc Increase VM_KMEM_SIZE to 16 megs from 12. Define VM_KMEM_SIZE_SCALE so that
the number of physical pages per KVA page allocated scales properly with
memory size.  This fixes problems with kmem_map being too small.

Noticed by:	mike, wollman
Submitted by:	tmm
2002-03-09 23:35:50 +00:00
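
A sketch of the two definitions this commit touches; the 16MB figure is
from the message, while the scale value shown is merely illustrative
(later commits in this log set it to 1 or 2):

    /* Default kmem_map size: 16MB, up from 12MB. */
    #define VM_KMEM_SIZE            (16 * 1024 * 1024)

    /* Allocate one page of kmem_map KVA per this many physical pages. */
    #define VM_KMEM_SIZE_SCALE      (2)     /* illustrative value */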
Jake Burkholder
1e903121eb Add comments as to why VM_MAXUSER_ADDRESS is magic (arbitrary).
Define the KVA_RANGE in terms of ttes, not sttes.
Remove UPT_MIN_ADDRESS.  We no longer use a hard coded address for
the user tsb.
2001-12-29 08:25:43 +00:00
Jake Burkholder
6ef2d9a02d Parameterize the size of the kernel virtual address space on KVA_PAGES.
Don't use a hard coded address constant for the virtual address of the
kernel tsb.  Allocate kernel virtual address space for the kernel tsb
at runtime.
Remove unused parameter to pmap_bootstrap.
Adapt pmap.c to use KVA_PAGES.
Map the message buffer too.
Add some traces.
Implement pmap_protect.
2001-10-20 16:17:04 +00:00
Jake Burkholder
956856ae06 Move the kernel to the end of the first 4 gigabytes of address space, so that
one 4 meg page can map both the kernel and the openfirmware mappings.
Add the openfirmware mappings to the kernel tsb so we can call the firmware
on the kernel trap table and access kernel memory normally.
Implement pmap_swapout_proc, pmap_swapin_proc, pmap_swapout_thread,
pmap_swapin_thread, pmap_activate, pmap_page_exists, and pmap_phys_address.
2001-09-30 19:03:22 +00:00
Jake Burkholder
3d450a75c1 Use the correct copyrights. Note where most of this came from.
Requested by:	obrien
2001-09-03 22:27:23 +00:00
David E. O'Brien
73a4930297 The author isn't a [UC] Regents. Correct the copyright language. 2001-08-09 02:09:34 +00:00
Jake Burkholder
2de67fb944 The kernel runs at a much lower address now. 2001-08-06 02:24:52 +00:00
Jake Burkholder
89bf8575ee Flesh out the sparc64 port considerably. This contains:
- mostly complete kernel pmap support, and tested but currently turned
  off userland pmap support
- low level assembly language trap, context switching and support code
- fully implemented atomic.h and supporting cpufunc.h
- some support for kernel debugging with ddb
- various header tweaks and filling out of machine dependent structures
2001-07-31 06:05:05 +00:00
Jake Burkholder
98bb5304e1 Add skeleton machine dependent headers and C files for a port of FreeBSD
to a new architecture.  This is the base of the sparc64 port, but contains
limited machine dependent code, and can be used as a base for other
ports.  Included are:
- standard machine dependent headers, tweaked for a 64 bit, big endian
  architecture, including empty versions of all the machine dependent
  structures
- a machine independent atomic.h, which can be used until a port has
  support for interrupts and the operations really need to be atomic
- stub versions of all the machine dependent functions, which panic
  when called and print out the name of the function that needs to
  be implemented (see the sketch after this entry).  Functions that
  are normally in assembly files are not included, but this should
  reduce the number of different undefined references on the first
  few compiles from hundreds to 5 or 6
Given minimal startup code and console support it should be trivial to
make this compile and run the first few sysinits on almost any architecture.

Requested by:   alfred, imp, jhb
2001-07-31 05:45:16 +00:00
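
The stub technique described in the third bullet can be sketched like
this; the macro name and exact shape are assumptions, since the message
only says the stubs panic with the offending function's name:

    /* Generate a stub that panics with the unimplemented function's name. */
    #define STUB(name)                                      \
    void                                                    \
    name(void)                                              \
    {                                                       \
            panic("%s: not implemented", __func__);         \
    }

    STUB(cpu_example)       /* hypothetical use */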