Change the way that reservations keep track of which pages are in use.
Instead of using the page's PG_CACHED and PG_FREE flags, maintain a bit
vector within the reservation. This approach has a couple of benefits.
First, it makes breaking reservations much cheaper because there are
fewer cache misses to identify the unused pages. Second, it is a
prerequisite for supporting two or more reservation sizes.
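A rough sketch of the idea (the sizes and names below are illustrative,
not the actual vm_reserv.c definitions):

    #include <sys/param.h>          /* howmany(), NBBY */

    #define RV_NPAGES        512    /* pages in one 2MB/4KB reservation */
    #define RV_BITS_PER_WORD (NBBY * sizeof(u_long))

    struct resv_sketch {
            /* One bit per page in the reservation. */
            u_long  popmap[howmany(RV_NPAGES, RV_BITS_PER_WORD)];
    };

    static inline void
    resv_set_used(struct resv_sketch *rv, int i)
    {
            rv->popmap[i / RV_BITS_PER_WORD] |= 1UL << (i % RV_BITS_PER_WORD);
    }

    static inline int
    resv_is_used(const struct resv_sketch *rv, int i)
    {
            return ((rv->popmap[i / RV_BITS_PER_WORD] >>
                (i % RV_BITS_PER_WORD)) & 1);
    }

Breaking a reservation then only has to scan popmap[] for set bits rather
than inspecting every vm_page.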
vm_page_insert() so that (1) vm_radix_lookup_le() is never called while the
free page queues lock is held and (2) vm_radix_lookup_le() is called at most
once. This change reduces the average time that the free page queues lock
is held by vm_page_alloc() as well as vm_page_alloc()'s average overall
running time.
Sponsored by: EMC / Isilon Storage Division
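A simplified sketch of the reworked vm_page_alloc() path described above
(error handling and the rest of the allocation logic are omitted;
vm_page_insert_after() stands in for the insertion step that consumes the
precomputed predecessor):

    mpred = vm_radix_lookup_le(&object->rtree, pindex);
    mtx_lock(&vm_page_queue_free_mtx);
    m = vm_phys_alloc_pages(VM_FREEPOOL_DEFAULT, 0);
    mtx_unlock(&vm_page_queue_free_mtx);
    if (m != NULL)
            vm_page_insert_after(m, object, pindex, mpred);

The trie is consulted exactly once, and never while the free page queues
lock is held.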
be eliminated. If the calls to vm_radix_lookup_le() return NULL, then
the page at the head of the object's memq must be the page with the least
pindex greater than the specified pindex.
Reviewed by: attilio
Sponsored by: EMC / Isilon Storage Division
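A sketch of the successor lookup this invariant enables (names as in
vm_page.c):

    mpred = vm_radix_lookup_le(&object->rtree, pindex);
    if (mpred != NULL)
            msucc = TAILQ_NEXT(mpred, listq);
    else
            msucc = TAILQ_FIRST(&object->memq);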
* VM_OBJECT_LOCK and VM_OBJECT_UNLOCK are mapped to write operations
* VM_OBJECT_SLEEP() is introduced as a general purpose primitive to
  perform a sleep operation while using a VM_OBJECT_LOCK() as protection
  (a sketch of the resulting mapping follows this list)
* The approach must bear with vm_pager.h namespace pollution, so many
  files need to include rwlock.h directly
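A sketch of what the mapping looks like (the lock member name is assumed
here, and the macro bodies are simplified):

    #define VM_OBJECT_LOCK(object)   rw_wlock(&(object)->lock)
    #define VM_OBJECT_UNLOCK(object) rw_wunlock(&(object)->lock)
    #define VM_OBJECT_SLEEP(object, wchan, pri, wmesg, timo)        \
            rw_sleep((wchan), &(object)->lock, (pri), (wmesg), (timo))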
This makes the RED/BLACK support go away and simplifies a lot of the
vm_radix functions used here. This is possible because, with PATRICIA trie
support, the trie will be small enough that keeping two different tries
will still be efficient.
- Reduce differences with head in places like the backing scan, where the
  optimizations used had shuffled the code around a little.
Tested by: flo, Andrea Barberio
that it will be freed to the cache pool rather than the default pool.
Otherwise, the cached pages within the reservation may be recycled sooner
than necessary.
Reported by: Andrey Zonov
use superpage reservations. So, for the first time, kernel virtual memory
that is allocated by contigmalloc(), kmem_alloc_attr(), and
kmem_alloc_contig() can be promoted to superpages. In fact, even a series
of small contigmalloc() allocations may collectively result in a promoted
superpage.
Eliminate some duplication of code in vm_reserv_alloc_page().
Change the type of vm_reserv_reclaim_contig()'s first parameter in order
that it be consistent with other vm_*_contig() functions.
Tested by: marius (sparc64)
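For instance, an allocation like the following (the malloc type and sizes
are only for illustration) can now be backed by a reservation and later
promoted to a superpage mapping:

    void *buf;

    /* 2MB, physically contiguous, 2MB aligned, no boundary constraint. */
    buf = contigmalloc(2 * 1024 * 1024, M_DEVBUF, M_WAITOK,
        0, ~(vm_paddr_t)0, 2 * 1024 * 1024, 0);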
Black nodes support standard active pages and red nodes support cached
pages. Red nodes may be removed without the object lock, but doing so will
not collapse unused tree nodes. Red nodes may not be inserted directly;
instead, a new function is supplied to convert between black and red (a
sketch follows this list).
- Handle cached pages and active pages in the same loop in vm_object_split,
vm_object_backing_scan, and vm_object_terminate.
- Retire the splay page handling as the ifdefs are too difficult to
maintain.
- Slightly optimize the vm_radix_lookupn() function.
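The conversion idea can be sketched with a tag bit on the leaf pointer
(this encoding is purely illustrative and is not the actual vm_radix
representation):

    #define VM_RADIX_SKETCH_RED     0x1UL   /* hypothetical tag bit */

    static inline void *
    rn_sketch_tored(void *leaf)             /* black -> red (cached) */
    {
            return ((void *)((uintptr_t)leaf | VM_RADIX_SKETCH_RED));
    }

    static inline void *
    rn_sketch_toblack(void *leaf)           /* red -> black (active) */
    {
            return ((void *)((uintptr_t)leaf & ~VM_RADIX_SKETCH_RED));
    }

    static inline int
    rn_sketch_isred(void *leaf)
    {
            return (((uintptr_t)leaf & VM_RADIX_SKETCH_RED) != 0);
    }

Because recoloring only rewrites the leaf pointer in place, no trie nodes
need to be inserted, removed, or collapsed in the process.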
eliminating duplicated code in the various pmap implementations.
Micro-optimize vm_phys_free_pages().
Introduce vm_phys_free_contig(). It is a fast routine for freeing an
arbitrary number of physically contiguous pages. In particular, it
doesn't require the number of pages to be a power of two.
Use "u_long" instead of "unsigned long".
Bruce Evans (bde@) has convinced me that the "boundary" parameters
to kmem_alloc_contig(), vm_phys_alloc_contig(), and
vm_reserv_reclaim_contig() should be of type "vm_paddr_t" and not
"u_long". Make this change.
the vm object pages splay.
TODO:
- Handle the negative keys differently in order to get smaller-depth
  index nodes (negative keys coming from indirect blocks)
- Fix get_node() by adding support for a small reserve of objects allocated
  directly from UMA
- Implement lookup_le and re-enable VM_NRESERVLEVEL = 1
- Try to rework the superpage splay of idle pages and the cache splay
  for every vm object in order to regain space in the vm_page structure
- Verify performance and improve it (likely by having consumers deal
  with several ranges of pages manually?)
Obtained from: jeff, Mayur Shardul (GSoC 2009)
sbuf_new_for_sysctl(9). This allows using an sbuf with a SYSCTL_OUT
drain for extremely large amounts of data where the caller knows that
appropriate references are held, and sleeping is not an issue.
Inspired by: rwatson
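A typical consumer of sbuf_new_for_sysctl(9) looks roughly like this (the
handler name and the data being dumped are made up):

    static int
    sysctl_foo_dump(SYSCTL_HANDLER_ARGS)
    {
            struct sbuf sb;
            int error, i;

            sbuf_new_for_sysctl(&sb, NULL, 128, req);
            for (i = 0; i < foo_count; i++)
                    sbuf_printf(&sb, "%d: %s\n", i, foo_names[i]);
            error = sbuf_finish(&sb);
            sbuf_delete(&sb);
            return (error);
    }

The sbuf drains to SYSCTL_OUT as it fills, so the handler never has to
guess a large enough fixed buffer size up front.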
creation of large page mappings in the pmap, it can provide modest
performance benefits. In particular, for a "buildworld" on a 2x 1GHz
Ultrasparc IIIi it reduced the wall clock time by 2.2% and the system
time by 12.6%.
Tested by: marius@
Add a drain function for struct sysctl_req, and use it for a variety
of handlers, some of which had to do awkward things to get a large
enough SBUF_FIXEDLEN buffer.
Note that some sysctl handlers were explicitly outputting a trailing
NUL byte. This behaviour was preserved, though it should not be
necessary.
Reviewed by: phk (original patch)
unexpected things in copyout(9) and so wiring the user buffer is not
sufficient to perform a copyout(9) while holding a random mutex.
Requested by: nwhitehorn
handlers, some of which had to do awkward things to get a large enough
FIXEDLEN buffer.
Note that some sysctl handlers were explicitly outputting a trailing NUL
byte. This behaviour was preserved, though it should not be necessary.
Reviewed by: phk
a reservation, unless all of the reservation's pages were free, the
reservation was moved to the head of the partially-populated reservations
queue, where it would be the next reservation to be broken in case the
free page queues were emptied. Now, instead, I am moving it to the tail.
Very likely this reservation is in the process of being freed in its
entirety, so placing it at the tail of the queue makes it more likely that
the underlying physical memory will be returned to the free page queues as
one contiguous chunk. If a reservation must be broken, it will, instead,
be the longest unchanged reservation, which is arguably the reservation
that is least likely to ever achieve promotion or be freed in its entirety.
MFC after: 6 weeks
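In terms of the partially populated queue, the change boils down to this
(a sketch following the vm_reserv.c naming, with the surrounding
bookkeeping omitted):

    /* A page was just freed back into a partially populated reservation. */
    if (rv->inpartpopq) {
            TAILQ_REMOVE(&vm_rvq_partpop, rv, partpopq);
            rv->inpartpopq = FALSE;
    }
    if (rv->popcnt != 0) {
            rv->inpartpopq = TRUE;
            TAILQ_INSERT_TAIL(&vm_rvq_partpop, rv, partpopq);  /* was _HEAD */
    }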
contigmalloc(9) as a last resort to steal pages from an inactive,
partially-used superpage reservation.
Rename vm_reserv_reclaim() to vm_reserv_reclaim_inactive() and
refactor it so that a separate subroutine is responsible for breaking
the selected reservation. This subroutine is also used by
vm_reserv_reclaim_contig().
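Sketch of the resulting structure (the break helper's name here is
illustrative; in the sketch both reclaim paths funnel into it):

    static void     vm_reserv_break_sketch(vm_reserv_t rv);  /* hypothetical */

    boolean_t
    vm_reserv_reclaim_inactive(void)
    {
            vm_reserv_t rv;

            if ((rv = TAILQ_FIRST(&vm_rvq_partpop)) == NULL)
                    return (FALSE);
            vm_reserv_break_sketch(rv);
            return (TRUE);
    }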
machine-independent support for superpages. (The earlier part was
the rewrite of the physical memory allocator.) The remainder of the
code required for superpages support is machine-dependent and will
be added to the various pmap implementations at a later date.
Initially, I am only supporting one large page size per architecture.
Moreover, I am only enabling the reservation system on amd64. (In
an emergency, it can be disabled by setting VM_NRESERVLEVEL to 0
in amd64/include/vmparam.h or your kernel configuration file.)
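That is, the emergency switch is an override along the lines of:

    /* e.g. in amd64/include/vmparam.h, or as an option in the kernel config */
    #define VM_NRESERVLEVEL 0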