Commit Graph

1688 Commits

Author SHA1 Message Date
phk
cd531ac811 Make the first two pages magic to protect the BSD labels rather than
only one.
2003-08-06 14:13:38 +00:00
phk
df05426cf5 Remove an unused variable. 2003-08-06 12:09:34 +00:00
phk
09d8ecf0bf Staticize swap_pager_putpages()
Eliminate a lot of checks to make sure requests are not cross-device,
which are unnecessary with the new layout.  We know a sequential request
cannot possibly be cross-device because there is a reserved page between
the devices.

Remove a couple of comments which no longer are relevant.
2003-08-06 12:08:27 +00:00
phk
63d9a65167 Access the swap_pager's ->putpages() through swappagerops instead
of directly; this is a cleaner way to do it.
2003-08-06 12:05:48 +00:00
phk
084bb4037c Add XXX: comment to vm_pager_unswapped(). 2003-08-06 10:51:40 +00:00
phk
890df5b795 Explicitly set B_PAGING 2003-08-06 09:22:47 +00:00
phk
d2426c0f94 Rip out the totally bogus vnode swapdev_vp with extreme prejudice.
Don't mark buffers with B_KEEPGIANT; we don't drop Giant in strategy
at this point in time.
2003-08-06 06:53:31 +00:00
phk
d0c4c329b1 Use sparse struct initialization for struct pagerops.
Mark our buffers B_KEEPGIANT before sending them downstream.

Remove swap_pager_strategy implementation.
2003-08-05 06:54:56 +00:00
phk
a295f12128 Use sparse struct initializations for struct pagerops.
This makes grepping for which pagers implement which methods easier.
2003-08-05 06:51:26 +00:00
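
For readers unfamiliar with the idiom, a minimal self-contained illustration of sparse (C99 designated) struct initialization; the struct and names below are hypothetical stand-ins, not the real struct pagerops:

/*
 * Designated initializers name only the methods a pager implements;
 * unnamed members are implicitly zeroed (NULL), so grepping for
 * ".pgo_putpages" finds exactly the pagers that provide that method.
 */
struct pagerops_sketch {
    void *(*pgo_alloc)(void);
    void  (*pgo_dealloc)(void *);
    int   (*pgo_getpages)(void *, int);
    int   (*pgo_putpages)(void *, int);
};

static void *example_alloc(void) { return (0); }
static void  example_dealloc(void *p) { (void)p; }

static struct pagerops_sketch examplepagerops = {
    .pgo_alloc   = example_alloc,
    .pgo_dealloc = example_dealloc,
    /* .pgo_getpages / .pgo_putpages stay NULL: not implemented here */
};
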
phk
8952fe0759 Put an uncovered page between the swap devices; that way we can be sure
not to get any cross-device I/O requests.  (The unallocated first page
protecting BSD labels already gave us this, but that hack may go away
at some point in time).

Remove the check for cross-device I/O requests in swap_pager_strategy.

Move the repeated statistics updating into flushchainbuf().
2003-08-04 08:22:49 +00:00
alc
321771d262 Use kmem_alloc_nofault() instead of kmem_alloc_pageable() to allocate
swapbkva.  Swapbkva mappings are explicitly managed using pmap_qenter(),
not on-demand by vm_fault(), making kmem_alloc_nofault() more appropriate.

Submitted by:	tegge
2003-08-04 04:35:04 +00:00
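
A hedged sketch of the pattern this change enables: the swapbkva range is allocated with kmem_alloc_nofault(), so it is never populated lazily by vm_fault(); pages are entered explicitly with pmap_qenter(). The surrounding function and header choices below are assumptions, not code from the commit:

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/vm_extern.h>
#include <vm/vm_kern.h>
#include <vm/pmap.h>

static vm_offset_t
map_swap_buffer(vm_page_t *pages, int npages)
{
    vm_offset_t va;

    /* KVA that will only ever be filled by explicit pmap_qenter() calls */
    va = kmem_alloc_nofault(kernel_map, npages * PAGE_SIZE);
    if (va == 0)
        return (0);
    pmap_qenter(va, pages, npages);    /* explicit, wired mappings */
    return (va);
}
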
phk
049e0c4c31 Name swap_pager_find_dev() more correctly swp_pager_finde_dev().
Use ->bio_children to count child buffers, rather than abuse the
bio_caller1 pointer.

Expand the relevant bits of waitchainbuf() inline; this clarifies
the code a little bit.
2003-08-03 21:22:42 +00:00
phk
2b330fbdd3 I accidentally hit undo before committing; fix the resulting off-by-one. 2003-08-03 14:53:52 +00:00
phk
b51aac6e92 Change the layout policy of the swap_pager from a hardcoded width
striping to a per device round-robin algorithm.

Because of the policy of not attempting to retain previous swap
allocation on page-out, this means that a newly added swap device
almost instantly takes its 1/N share of the I/O load, but it takes
somewhat longer for it to assume its 1/N share of the pages if there
is plenty of space on the other devices.

Change the 8G total swapspace limitation to 8G per device instead
by using a per device blist rather than one global blist.  This
reduces the memory footprint by 75% (typically a couple hundred
kilobytes) for the common case with one swapdevice but NSWAPDEV=4.

Remove the compile-time constant limit on the number of swap devices;
there is no limit now.  Instead of a fixed size array, store the
per swapdev structure in a TAILQ.

Total swap space is still addressed by a 32 bit page number and
therefore the upper limit is now 2^44 bytes = 16TB (for i386).

We still do not allocate the first page of each device in order to
give some amount of protection to any bsdlabel at the start of the
device.

A new device is appended after the existing devices in the swap space;
no attempt is made to fill in holes left behind by swapoff (this can
trivially be changed should it ever become a problem).

The sysctl vm.nswapdev now reflects the number of currently configured
swap devices.

Rename vm_swap_size to swap_pager_avail for consistency with other
exported names.

Change argument type for vm_proc_swapin_all() and swap_pager_isswapped()
to be a struct swdevt pointer rather than an index.

Not changed: we are still using blists to manage the free space,
but since the swapspace is no longer fragmented by the striping,
different resource managers might fare better.
2003-08-03 13:35:31 +00:00
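
A self-contained sketch of the per-device round-robin selection described above, built on the sys/queue.h TAILQ macros; the structure and function names are illustrative, not the actual swap_pager code:

#include <sys/queue.h>
#include <stddef.h>

struct swdev_sketch {
    TAILQ_ENTRY(swdev_sketch) sw_list;
    long sw_avail;                      /* free pages on this device */
};
static TAILQ_HEAD(, swdev_sketch) swdev_head =
    TAILQ_HEAD_INITIALIZER(swdev_head);
static struct swdev_sketch *swdev_next; /* round-robin cursor */

/* Pick the next device with free space, rotating so each of N devices
 * sees roughly 1/N of newly allocated page-outs. */
static struct swdev_sketch *
swdev_pick(void)
{
    struct swdev_sketch *sp, *start;

    sp = start = (swdev_next != NULL) ? swdev_next : TAILQ_FIRST(&swdev_head);
    do {
        if (sp == NULL)
            return (NULL);              /* no devices configured */
        if (sp->sw_avail > 0) {
            /* advance the cursor so the next caller tries the next device */
            swdev_next = TAILQ_NEXT(sp, sw_list);
            return (sp);
        }
        sp = TAILQ_NEXT(sp, sw_list);
        if (sp == NULL)
            sp = TAILQ_FIRST(&swdev_head);  /* wrap around */
    } while (sp != start);
    return (NULL);                      /* everything is full */
}
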
phk
61f64f46ab Move extern declaration of the various pagerops from vm_pager.c
to vm_pager.h where the various pagers will also see them.
2003-08-03 09:27:39 +00:00
alc
52878a6770 Revise obj_alloc(). Most notably, use the object's lock to prevent two
concurrent invocations from acquiring the same address(es).  Also, in case
of an incomplete allocation, free any allocated pages.

In collaboration with:	tegge
2003-08-03 06:08:48 +00:00
bmilekic
2a8e0c5c0a When INVARIANTS is on and we're in uma_zalloc_free(), we need to make
sure that uma_dbg_free() is called if we're about to call
uma_zfree_internal() but we're asking it to skip the dtor and
uma_dbg_free() call itself.  So, if we're about to call
uma_zfree_internal() from uma_zfree_arg() and skip == 1, call
uma_dbg_free() ourselves.
2003-08-02 22:40:27 +00:00
alc
9e28da88a0 Update the comment at the head of kmem_alloc_nofault() to describe its
purpose and use.
2003-08-01 19:51:43 +00:00
bmilekic
9caa205e5b Only free the pcpu cache buckets if they are non-NULL.
Crashed this person's machine: harti
Pointy-hat to: me
2003-08-01 17:42:27 +00:00
phk
1a2b042c99 Remove unused stuff.
Move used stuff to swap_pager.c where it belongs.

This file no longer exports anything to userland.
2003-07-31 22:19:28 +00:00
peter
6dab4e8092 Add #include "opt_kstack_pages.h" and "opt_kstack_max_pages.h" to remain
in sync with the backend machdep code.  When cpu_thread_init() does not
have the same idea of KSTACK_PAGES as the thing that created the kstack,
all hell breaks loose.

Bad alc! no cookie! :-)
2003-07-31 01:25:05 +00:00
bmilekic
7c379c85d8 Plug a race and a leak in UMA.
1) The race has to do with zone destruction.  From the zone destructor we
   would lock the zone, set the working set size to 0, then unlock the zone,
   drain it, and then free the structure.  In the window between unlocking
   the zone (after setting the working set size to 0) and the point in
   zone_drain where we re-acquire the zone lock, the uma timer routine could
   have fired off and changed the working set size to something non-zero,
   thereby potentially preventing us from completely freeing slabs before
   destroying the zone (and thus leaking them).

2) The leak has to do with zone destruction as well.  When destroying a
   zone we would take care to free all the buckets cached in the zone, but
   although we would drain the pcpu cache buckets, we would not free them.
   This resulted in leaking a couple of bucket structures (512 bytes each)
   per cpu on SMP during zone destruction.

While I'm here, also silence GCC warnings by turning uma_slab_alloc()
from an inline into a real function.  It's too big to be inlined.

Reviewed by: JeffR
2003-07-30 18:55:15 +00:00
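
A self-contained sketch of the race window item (1) describes, with hypothetical names rather than the real UMA structures and locking primitives:

#include <stdlib.h>

struct zone_sketch {
    int wss;            /* working-set size maintained by the zone timer */
    /* lock, slab lists, pcpu caches, etc. elided */
};

static void zone_lock(struct zone_sketch *z)   { (void)z; }
static void zone_unlock(struct zone_sketch *z) { (void)z; }
static void zone_drain(struct zone_sketch *z)  { (void)z; /* re-locks internally */ }

static void
zone_dtor_racy(struct zone_sketch *z)
{
    zone_lock(z);
    z->wss = 0;
    zone_unlock(z);
    /* WINDOW: the periodic zone timer can run here and raise z->wss again,
     * so the drain below keeps slabs it should have freed. */
    zone_drain(z);
    free(z);            /* any retained slabs are now leaked */
}
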
bmilekic
260d19ed7e When generating the zone stats make sure to handle the master zone
("UMA Zone") carefully, because it does not have pcpu caches allocated
at all.  In the UP case, we did not catch this because one pcpu cache
is always allocated with the zone, but for the MP case, we were getting
bogus stats for this zone.

Tested by: Lukas Ertl <le@univie.ac.at>
2003-07-30 15:22:37 +00:00
phk
213f4e3d07 Remove the disabling of buckets workaround.
Thanks to:	jeffr
2003-07-30 07:50:19 +00:00
jeff
8512070a52 - Get rid of the ill-conceived uz_cachefree member of uma_zone.
- In sysctl_vm_zone use the per cpu locks to read the current cache
   statistics; this makes them more accurate while under heavy load.

Submitted by:	tegge
2003-07-30 05:59:17 +00:00
jeff
50d6e1a822 - Check to see if we need a slab prior to allocating one. Failure to do
so not only wastes memory but it can also cause a leak in zones that
   will be destroyed later.  The problem is that the slab allocation code
   places newly created slabs on the partially allocated list because it
   assumes that the caller will actually allocate some memory from it.
   Failure to do so places an otherwise free slab on the partial slab list
   where we won't find it later in zone_drain().

Continuously prodded to fix by:	phk (Thanks)
2003-07-30 05:42:55 +00:00
phk
70398bc9a3 Temporary workaround: Always disable buckets, there is a bug there
somewhere.

JeffR will look at this as soon as he has time.

OK'ed by:	jeffr
2003-07-29 22:07:10 +00:00
alc
79bbf9b702 None of the "alloc" functions used by UMA assume that Giant is held any
longer.  (If they still need it, e.g., contigmalloc(), they acquire it
themselves.)  Therefore, we need not acquire Giant in slab_zalloc().
2003-07-28 02:29:07 +00:00
alc
fa6dd8ff58 Remove GIANT_REQUIRED from kmem_alloc(). 2003-07-27 18:31:32 +00:00
mux
e7241bd66a Use pmap_zero_page() to zero pages instead of bzero() because
they haven't been vm_map_wire()'d yet.
2003-07-27 10:41:33 +00:00
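
A hedged sketch of the distinction: bzero() needs the page to already have a kernel virtual mapping, which these not-yet-wired pages lack, while pmap_zero_page() zeroes the physical page through a temporary mapping the pmap sets up itself. Header choices are assumptions:

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>

static void
zero_unmapped_page(vm_page_t m)
{
    pmap_zero_page(m);    /* no prior KVA mapping required */
}
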
alc
56615188dc Allow vm_object_reference() on kernel_object without Giant. 2003-07-27 05:43:58 +00:00
alc
d63c1dd2b8 Acquire Giant rather than asserting it is held in contigmalloc(). This is
a prerequisite to removing further uses of Giant from UMA.
2003-07-26 21:48:46 +00:00
phk
6221ef9078 Add a "int fd" argument to VOP_OPEN() which in the future will
contain the file descriptor number on opens from userland.

The index is used rather than a "struct file *" since it conveys a bit
more information, which may be useful in particular to fdescfs and /dev/fd/*

For now pass -1 all over the place.
2003-07-26 07:32:23 +00:00
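
A hedged usage sketch of a call site after this change; the exact VOP_OPEN() signature and headers of that era are assumptions here, and the point is simply that callers without a userland file descriptor pass -1:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/fcntl.h>
#include <sys/proc.h>
#include <sys/vnode.h>

static int
open_for_kernel_use(struct vnode *vp, struct ucred *cred, struct thread *td)
{
    /* No userland file descriptor is involved, so pass -1 as the new
     * "int fd" argument. */
    return (VOP_OPEN(vp, FREAD, cred, td, -1));
}
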
alc
0cffd21856 Gulp ... call kmem_malloc() without Giant. 2003-07-26 03:55:32 +00:00
mux
a3fee15cba Add support for the M_ZERO flag to contigmalloc().
Reviewed by:	jeff
2003-07-25 21:02:25 +00:00
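
A hedged usage sketch; the bounds, alignment, and malloc type passed to contigmalloc() below are illustrative values, the point being the new M_ZERO flag:

#include <sys/param.h>
#include <sys/malloc.h>
#include <vm/vm.h>
#include <vm/vm_extern.h>

static void *
alloc_zeroed_dma_buffer(unsigned long size)
{
    /* One physically contiguous, page-aligned, zero-filled region
     * anywhere below 4GB; no bzero() needed afterwards. */
    return (contigmalloc(size, M_DEVBUF, M_NOWAIT | M_ZERO,
        0, 0xffffffffUL, PAGE_SIZE, 0));
}
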
phk
676c7dc42a Remove all but one of the inlines here; this reduces the code size by
2032 bytes and has no measurable impact on performance.
2003-07-22 20:54:26 +00:00
phk
df7d325032 Don't inline very large functions.
Gcc has silently not been doing this for a long time.
2003-07-22 09:27:58 +00:00
peter
27c163f9f7 swp_pager_hash() was called before it was instantiated inline. This made
gcc (quite rightly) unhappy.  Move it earlier.
2003-07-22 06:55:48 +00:00
phk
ea98d2c3e5 Fix a printf format warning I introduced.
Use the macro for the maximum number of swap devices rather than caching
the constant in a variable.
Avoid a (now) pointless variable.
2003-07-18 22:11:17 +00:00
harti
de9698a4f7 When INVARIANTS is defined make sure that uma_zalloc_arg (and hence
uma_zalloc) is called with exactly one of either M_WAITOK or M_NOWAIT and
that it is called with neither M_TRYWAIT nor M_DONTWAIT. Print a warning
if anything is wrong. Default to M_WAITOK if no flag is given. This is the
same test as in malloc(9).
2003-07-18 16:04:36 +00:00
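
A self-contained sketch of the check being described, with hypothetical flag values and a hypothetical helper name (the real test lives in uma_zalloc_arg() under INVARIANTS):

#include <stdio.h>

#define M_NOWAIT    0x0001    /* illustrative values only */
#define M_WAITOK    0x0002
#define M_TRYWAIT   0x0100    /* wait flags that must not be passed to UMA */
#define M_DONTWAIT  0x0200

static int
uma_flags_fixup(int flags)
{
    if (flags & (M_TRYWAIT | M_DONTWAIT))
        printf("uma_zalloc: mbuf-style wait flag passed\n");
    if ((flags & (M_NOWAIT | M_WAITOK)) == (M_NOWAIT | M_WAITOK))
        printf("uma_zalloc: both M_NOWAIT and M_WAITOK set\n");
    if ((flags & (M_NOWAIT | M_WAITOK)) == 0)
        flags |= M_WAITOK;    /* default, as malloc(9) does */
    return (flags);
}
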
phk
6fd98eaab7 If a proposed swap device exceeds the 8G artificial limit which our
radix-tree code imposes, truncate the device instead of rejecting it.
2003-07-18 11:01:23 +00:00
phk
aa8896f3b8 Move the implementation of the vmspace_swap_count() (used only in
the "toss the largest process" emergency handling) from vm_map.c to
swap_pager.c.

The quantity calculated depends strongly on the internals of the
swap_pager and by moving it, we no longer need to expose the
internal metrics of the swap_pager to the world.
2003-07-18 10:47:58 +00:00
phk
5fa40a3265 Add a new function swap_pager_status() which reports the total size of the
paging space and how much of it is in use (in pages).

Use this interface from the Linuxolator instead of groping around in the
internals of the swap_pager.
2003-07-18 10:26:09 +00:00
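
A hedged usage sketch of the new interface (both counts are reported in pages); the header choices and the printf reporting are illustrative:

#include <sys/param.h>
#include <sys/systm.h>
#include <vm/vm.h>
#include <vm/swap_pager.h>

static void
report_swap_usage(void)
{
    int total, used;

    swap_pager_status(&total, &used);    /* both counts are in pages */
    printf("swap: %d pages total, %d pages in use\n", total, used);
}
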
phk
84f9cb2fa8 Merge swap_pager.c and vm_swap.c into swap_pager.c; the separation
is not natural and needlessly exposes a lot of dirty laundry.

Move private interfaces between the two from swap_pager.h to swap_pager.c
and staticize as much as possible.

No functional change.
2003-07-18 10:02:44 +00:00
phk
a8381d2cc6 Make sure that SWP_NPAGES always has the same value in all source
files, so that SWAP_META_PAGES does not vary either.

swap_pager.c ended up with a value of 16, everybody else 8.  Go with
the 16 for now.

This should only have any effect in the "kill processes because we
are out of swap" scenario, where it will make some sort of estimate
of something more precise.
2003-07-17 21:58:43 +00:00
robert
52004fa962 Avoid an unnecessary calculation: there is no need to subtract
`firstaddr' from `v' if we know that the former equals zero.
2003-07-13 21:02:11 +00:00
alc
b2f0d26888 - Complete the vm object locking in vm_pageout_object_deactivate_pages().
- Change vm_pageout_object_deactivate_pages()'s first parameter from a
   vm_map_t to a pmap_t.
 - Change vm_pageout_object_deactivate_pages()'s and
   vm_pageout_map_deactivate_pages()'s last parameter from a vm_pindex_t
   to a long.  Since the number of pages in an address space doesn't
   require 64 bits on an i386, vm_pindex_t is overkill.
2003-07-07 07:16:29 +00:00
alc
bab2fb677f Lock a vm object when freeing a page from it. 2003-07-05 20:51:22 +00:00
phk
3875fcca40 Remove unnecessary cast. 2003-07-04 12:23:43 +00:00
alc
0699f7e17f Background: pmap_object_init_pt() premaps the pages of an object in
order to avoid the overhead of later page faults.  In general, it
implements two cases: one for vnode-backed objects and one for
device-backed objects.  Only the device-backed case is really
machine-dependent, belonging in the pmap.

This commit moves the vnode-backed case into the (relatively) new
function vm_map_pmap_enter().  On amd64 and i386, this commit only
amounts to code rearrangement.  On alpha and ia64, the new machine
independent (MI) implementation of the vnode case is smaller and more
efficient than their pmap-based implementations.  (The MI
implementation takes advantage of the fact that objects in -CURRENT
are ordered collections of pages.)  On sparc64, pmap_object_init_pt()
hadn't (yet) been implemented.
2003-07-03 20:18:02 +00:00