Commit Graph

3414 Commits

kib
ba20870cbd Fix locking. The dst_object must remain locked on the retry of the
loop iteration.

Reported and tested by:	pho
Sponsored by:	The FreeBSD Foundation
MFC after:	6 days
2014-05-11 18:07:07 +00:00
alc
b716a32dc2 With the new-and-improved vm_fault_copy_entry() (r265843), we can always
avoid soft page faults when adding write access to user wired entries in
vm_map_protect().  Previously, we only avoided the soft page fault when
the underlying pages were copy-on-write.  In other words, we avoided the
page faults that might sleep on page allocation, but not the trivial
page faults to update the physical map.
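
As a rough userspace illustration of the effect (not part of the commit),
the soft faults in question can be observed with getrusage(2)'s
minor-fault counter around an mprotect(2) of a wired region; names and
sizes below are arbitrary:

    /*
     * Hypothetical demo, not the kernel change itself: wire a
     * read-only region, add write access, and count the soft faults
     * taken while dirtying it.  Requires mlock(2) privilege.
     */
    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <err.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        size_t len = 64 * 4096;
        struct rusage ru0, ru1;
        char *p;

        p = mmap(NULL, len, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        if (mlock(p, len) == -1)    /* user-wire the entry read-only */
            err(1, "mlock");
        getrusage(RUSAGE_SELF, &ru0);
        if (mprotect(p, len, PROT_READ | PROT_WRITE) == -1)
            err(1, "mprotect");
        memset(p, 1, len);          /* ideally takes no soft faults */
        getrusage(RUSAGE_SELF, &ru1);
        printf("minor faults: %ld\n", ru1.ru_minflt - ru0.ru_minflt);
        return (0);
    }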

Reviewed by:	kib
MFC after:	1 week
Sponsored by:	EMC / Isilon Storage Division
2014-05-11 17:41:29 +00:00
alc
d283b621dc About 9% of the pmap_protect() calls being performed by vm_map_copy_entry()
are unnecessary.  Eliminate the unnecessary calls.

Reviewed by:	kib
MFC after:	1 week
Sponsored by:	EMC / Isilon Storage Division
2014-05-10 19:47:00 +00:00
kib
7798d7f7f4 For the upgrade case in vm_fault_copy_entry(), when the entry does not
need COW and is writeable (i.e. becoming writeable due to the
mprotect(2) operation), do not create a new backing object for the
entry.  The caller of the function is vm_map_protect(); the call is
made to ensure that the wired entry has all pages resident and wired in
the top-level object and to enable write access.  We might need to copy
a read-only page from some backing object into the top object, or remap
the page with write allowed.

This fixes the mishandling of the swap accounting when a read-only
wired mapping is upgraded to write-enabled after fork.  The previous
code path did not account for the new object, but its creation is
redundant anyway, and the change provides an optimization for this
uncommon situation.
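
For reference, a minimal userspace sequence that exercises this path
(assuming default mmap(2) maximum protections; not taken from the
commit):

    /* Sketch: read-only wired mapping, forked, then write-enabled. */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <err.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t len = 16 * 4096;
        pid_t pid;
        char *p;

        p = mmap(NULL, len, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        if (mlock(p, len) == -1)        /* read-only wired entry */
            err(1, "mlock");
        if ((pid = fork()) == -1)       /* fork copies the wired entry */
            err(1, "fork");
        if (pid == 0)
            _exit(0);
        /* The upgrade handled by vm_fault_copy_entry() above. */
        if (mprotect(p, len, PROT_READ | PROT_WRITE) == -1)
            err(1, "mprotect");
        memset(p, 1, len);
        waitpid(pid, NULL, 0);
        return (0);
    }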

Reported by:	markj
Suggested and reviewed by:	alc (previous version)
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2014-05-10 17:03:33 +00:00
kib
32f811c3b5 When printing the map with the ddb 'show procvm' command, do not dump
page queues for the backing objects.  The queues are huge and clutter
the display, when mostly the map entries and their backing storage are
interesting.

The page queues can be seen with ddb 'show object' command.

Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2014-05-10 16:36:13 +00:00
kib
90e904e481 Print the entry address in addition to the object. The variable is
typically optimized out and debuggers cannot find its value.

Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2014-05-10 16:30:48 +00:00
pho
1108602cbc msync(2) must return ENOMEM and not EINVAL when the address is outside the
allowed range or when one or more pages are not mapped.  This is according to
The Open Group Base Specifications, Issue 7.
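
A quick userspace check of the corrected behavior might look like this
(hypothetical test, not the ATF test referenced below):

    /* Expect ENOMEM when msync(2) is applied to an unmapped range. */
    #include <sys/mman.h>
    #include <err.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        long pgsz = sysconf(_SC_PAGESIZE);
        char *p;

        p = mmap(NULL, pgsz, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        if (munmap(p, pgsz) == -1)
            err(1, "munmap");
        if (msync(p, pgsz, MS_SYNC) == -1)
            printf("msync failed with %s\n",
                errno == ENOMEM ? "ENOMEM (correct)" : strerror(errno));
        return (0);
    }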

Discussed with:	 attilio, Bruce Evans
Reviewed by:	 alc, Garrett Cooper
Reported by:	 ATF
MFC after:	 2 weeks
Sponsored by:	EMC / Isilon storage division
2014-05-07 08:38:02 +00:00
alc
c77af15259 Prior to r254304, a separate function, vm_pageout_page_stats(), was used to
periodically update the reference status of the active pages.  This function
was called, instead of vm_pageout_scan(), when memory was not scarce.  The
objective was to provide up to date reference status for active pages in
case memory did become scarce and active pages needed to be deactivated.

The active page queue scan performed by vm_pageout_page_stats() was
virtually identical to that performed by vm_pageout_scan(), and so r254304
eliminated vm_pageout_page_stats().  Instead, vm_pageout_scan() is
called with the parameter "pass" set to zero.  The intention was that when
pass is zero, vm_pageout_scan() would only scan the active queue.  However,
the variable page_shortage can still be greater than zero when memory is not
scarce and vm_pageout_scan() is called with pass equal to zero.
Consequently, the inactive queue may be scanned and dirty pages laundered
even though that was not intended by r254304.  This revision fixes that.
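
The shape of the fix, modeled as a tiny standalone function (simplified;
the real logic lives in vm_pageout_scan()):

    /* Model only: a zero pass must not produce an inactive shortage. */
    static int
    inactive_page_shortage(int pass, int paging_target, int deficit)
    {
        if (pass == 0)
            return (0);     /* memory not scarce: active scan only */
        return (paging_target + deficit);
    }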

Reported by:	avg
MFC after:	1 week
Sponsored by:	EMC / Isilon Storage Division
2014-05-06 03:42:04 +00:00
kib
0c45ba8eb0 For the VM_PHYSSEG_DENSE case, the check that the requested range falls
into the area backed by vm_page_array wrongly compared end with
vm_page_array_size.  It should be adjusted by the first_page index to be
correct.

Also, the corner case of the requested range extending past the end of
vm_page_array was incorrectly handled by allocating the segment.

Fix the comparison for the end of the range and return EINVAL if the end
extends beyond vm_page_array.
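
The corrected comparison, sketched with simplified types (variable names
follow the message above; not the verbatim kernel code):

    #include <stdbool.h>

    /*
     * A [start, end) page range is backed by vm_page_array only if it
     * lies within [first_page, first_page + vm_page_array_size).
     */
    static bool
    range_backed(long start, long end, long first_page, long array_size)
    {
        return (start >= first_page && end <= first_page + array_size);
    }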

Discussed with:	royger
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2014-04-29 18:42:37 +00:00
kib
c36743d23f When vm_fault_copy_entry() is called from vm_map_protect() for a wired
entry and performs the upgrade of the entry permissions from read-only
to read-write, we must allow the search for the source pages in the
backing object, like we do in the case of forking a read-only wired
entry.  For the fork case, the behaviour is allowed by the src_readonly
boolean, which in fact is only used to assert that the read-write case
provides all source pages in the top-level object.

Eliminate the src_readonly variable.  Allow the copy loop to look
into the backing objects, and add explicit asserts to ensure that only
the read-only and upgrade cases actually do.

Expand comments.  Change the panic call into an assert.
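
The lookup that the copy loop now performs can be sketched as follows
(simplified stand-in types, not the kernel's):

    struct object {
        struct object *backing_object;
        long           backing_offset;  /* offset into backing, in pages */
    };

    /* Hypothetical helper: NULL when pindex is absent from obj. */
    struct page *page_lookup(struct object *obj, long pindex);

    static struct page *
    find_source_page(struct object *obj, long pindex)
    {
        struct page *m;

        while ((m = page_lookup(obj, pindex)) == NULL &&
            obj->backing_object != NULL) {
            pindex += obj->backing_offset;
            obj = obj->backing_object;
        }
        return (m);     /* NULL if the whole shadow chain misses */
    }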

Reported by:	markj
Tested by:	markj, pho (previous version)
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2014-04-27 05:19:01 +00:00
des
65493094e9 Add sysctl OIDs showing the actual size and capacity of the swap zone.
MFC after:	1 week
2014-04-26 12:18:17 +00:00
bdrewery
6fcf6199a4 Rename global cnt to vm_cnt to avoid shadowing.
To reduce the diff, the struct pcu.cnt field was not renamed, so
PCPU_OP(cnt.field) is still used.  pc_cnt and pcpu are also used in
kvm(3) and vmstat(8).  The goal was to not affect externally used KPI.

Bump __FreeBSD_version in case some out-of-tree module/code relies on
the global cnt variable.

Exp-run revealed no ports using it directly.

No objection from:	arch@
Sponsored by:	EMC / Isilon Storage Division
2014-03-22 10:26:09 +00:00
kib
24c4e4a548 Fix two issues with /dev/mem access on amd64, both causing kernel page
faults.

First, accesses to the direct map region should check against the limit
by which the direct map is instantiated.

Second, for accesses to the kernel map, success returned from
kernacc(9) does not guarantee that a subsequent attempt to read or
write the checked address will succeed, since another thread might
invalidate the address in the meantime.  Add a new thread-private flag,
TDP_DEVMEMIO, which instructs vm_fault() to return an error when a
fault happens on a MAP_ENTRY_NOFAULT entry, instead of panicking.  The
trap handler would then see a page fault from the access and recover in
the normal way, making /dev/mem access safer.

Remove GIANT_REQUIRED from the amd64 memrw(), since it is not needed
and having Giant locked does not solve issues for amd64.

Note that at least the second issue exists on other architectures, and
requires similar patching for md code.
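
The thread-flag pattern described above, sketched in kernel context (a
simplified rendering of what the md memrw() code would do, not the
verbatim change; devmem_copy() is a hypothetical helper):

    static int
    devmem_copy(struct thread *td, vm_offset_t va, int cnt, struct uio *uio)
    {
        int error;

        td->td_pflags |= TDP_DEVMEMIO;  /* faults return errors now */
        error = uiomove((void *)va, cnt, uio);
        td->td_pflags &= ~TDP_DEVMEMIO;
        return (error);
    }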

Reported and tested by:	clusteradm (gjb, sbruno)
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2014-03-21 14:25:09 +00:00
kib
07e1d8d74f Initialize vm_map_entry member wiring_thread on the map entry creation.
This was missed in r253190.

Reported by:	hps, peter
Tested by:	hps
Sponsored by:	The FreeBSD Foundation
MFC after:	3 days
2014-03-21 13:55:57 +00:00
attilio
f19bbde667 vm_page_grab() and vm_pager_get_pages() can drop the vm_object lock,
and then threads can sleep on the pip condition.
Avoid deadlocking such threads by correctly awakening the sleeping ones
after the pip is finished.
The swapoff side of the bug can likely result in shutdown deadlocks.

Sponsored by:	EMC / Isilon Storage Division
Reported by:	pho, pluknet
Tested by:	pho
2014-03-19 01:13:42 +00:00
rwatson
33fdc14c0c Update kernel inclusions of capability.h to use capsicum.h instead; some
further refinement is required as some device drivers intended to be
portable over FreeBSD versions rely on __FreeBSD_version to decide whether
to include capability.h.

MFC after:	3 weeks
2014-03-16 10:55:57 +00:00
kib
12c58a8fb0 Initialize paddr to handle the case of zero size.
Reported and reviewed by:	Conrad Meyer <cemeyer@uw.edu>
MFC after:	1 week
2014-03-12 16:38:55 +00:00
kib
e4111a6b71 Do not vdrop() the tmpfs vnode until it is unlocked. The hold
reference might be the last, and then vdrop() would free the vnode.
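
The safe ordering, sketched (VOP_UNLOCK(9) signature as of this era):

    VOP_UNLOCK(vp, 0);  /* drop the vnode lock first */
    vdrop(vp);          /* may free vp; do not touch it afterwards */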

Reported and tested by:	bdrewery
MFC after:	1 week
2014-03-12 15:13:57 +00:00
dim
e21b440a4c After r251709, avoid a clang 3.4 warning about an unused static const
variable (uma_max_ipers), when asserts are disabled.

Reviewed by:	glebius
MFC after:	3 days
2014-02-14 17:47:18 +00:00
attilio
6a31e25bb9 Fix-up r254141: in the process of making vm_page_rename() able to fail,
a call to swap_pager_freespace() was moved around, now leading to freeing
the incorrect page because the pindex changes after vm_page_rename().

Go back to using the correct pindex when destroying the swap space.

Sponsored by:	EMC / Isilon storage division
Reported by:	avg
Tested by:	pho
MFC after:	7 days
2014-02-14 03:34:12 +00:00
glebius
b1650f2d1e Fix function name in KASSERT().
Submitted by:	hiren
2014-02-12 20:11:20 +00:00
jhb
ee403b8a8c Correct the assertion to check that an existing device VM object uses the
same type, rather than asserting in the case where we just created a new
VM object.

Reviewed by:	kib
2014-02-11 22:05:21 +00:00
glebius
45bf1cc683 Create two public UMA_ZONE_PCPU zones: 64 bit sized and pointer sized.
Sponsored by:	Nginx, Inc.
2014-02-10 19:59:46 +00:00
glebius
613c5f4e53 Style. 2014-02-10 19:51:15 +00:00
glebius
1861286fed Make M_ZERO flag work correctly on UMA_ZONE_PCPU zones.
Sponsored by:	Nginx, Inc.
2014-02-10 19:48:26 +00:00
alc
43e9da37b5 Don't call vm_fault_prefault() on zero-fill faults. It's a waste of time.
Successful prefaults after a zero-fill fault are extremely rare.
2014-02-09 01:59:52 +00:00
glebius
e8c2426587 Provide macros that allow easy export of uma(9) zone limits and
current usage via sysctl(9); a usage sketch follows the list:

  SYSCTL_UMA_MAX()
  SYSCTL_ADD_UMA_MAX()
  SYSCTL_UMA_CUR()
  SYSCTL_ADD_UMA_CUR()
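
Hypothetical usage, assuming the macros take SYSCTL_PROC-style arguments
with a uma_zone_t pointer (check sys/sysctl.h for the real signatures;
foo_zone is an invented example):

    SYSCTL_UMA_MAX(_vm, OID_AUTO, foo_zone_max, CTLFLAG_RW,
        &foo_zone, "Maximum items in the (hypothetical) foo zone");
    SYSCTL_UMA_CUR(_vm, OID_AUTO, foo_zone_cur, CTLFLAG_RD,
        &foo_zone, "Current items in the (hypothetical) foo zone");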

Sponsored by:	Nginx, Inc.
2014-02-07 14:29:03 +00:00
alc
cf63b11b17 Make prefaulting more aggressive on hard faults. Previously, we would only
map a fraction of the pages that were fetched by vm_pager_get_pages() from
secondary storage.  Now, we map them all in order to avoid future soft
faults.  This effect is most evident when a memory-mapped file is accessed
sequentially.  Previously, there were 6 soft faults for every hard fault.
Now, these soft faults are eliminated.

Sponsored by:	EMC / Isilon Storage Division
2014-02-02 20:21:53 +00:00
alc
50a7eacf05 In an effort to diagnose possible corruption of struct vm_page on some
sparc64 machines, make the page queue assert in vm_page_dequeue() more
precise.  While I'm here, switch the page lock assert to the newer style.
2014-01-24 19:08:42 +00:00
jhb
14337759da Fix a couple of typos. 2014-01-21 03:27:47 +00:00
glebius
681dcc3c57 ANSIfy declarations.
Ok'ed by:	alc
2014-01-20 18:47:56 +00:00
alc
d98c6ca3a1 Style changes in vm_pageout_scan():
1. Be consistent in the style of "act_delta" manipulations between the
   inactive and active queue scans.

2. Explicitly compare to zero.

3. The deactivation of a page is based on its recent history
   and not just the current call to vm_pageout_scan().  The variable
   "act_delta" represents the current state of the page, and not its
   history.  Avoid possible confusion by not (ab)using "act_delta" for
   making the deactivation decision.

Submitted by:	kib [1]
Reviewed by:	kib [2,3]
2014-01-18 20:02:59 +00:00
alc
ed1e11749f Correctly update the count of stuck pages, "addl_page_shortage", in
vm_pageout_scan().  There were missing increments in two less common cases.

Don't conflate the count of stuck pages and the pageout deficit provided by
vm_page_alloc{,_contig}().  (A proposed fix to the OOM code depends on this.)

Handle held pages consistently in the inactive queue scan.  In the more
common case, we did not move the page to the tail of the queue.  Whereas, in
the less common case, we did.  There's no particular reason to move the page
in the less common case, so eliminate that move.

Perform the calculation of the page shortage for the active queue scan a
little earlier, before the active queue lock is acquired.  The correctness
of this calculation doesn't depend on the active queue lock being held.

Eliminate a redundant variable, "pcount".  Use the more descriptive
variable, "maxscan", in its place.

Apply a few nearby style fixes, e.g., eliminate stray whitespace and excess
parentheses.

Reviewed by:	kib
Sponsored by:	EMC / Isilon Storage Division
2014-01-12 19:04:20 +00:00
alc
d0ccfff2c5 Since the introduction of the popmap to reservations in r259999, there is
no longer any need for the page's PG_CACHED and PG_FREE flags to be set and
cleared while the free page queues lock is held.  Thus, vm_page_alloc(),
vm_page_alloc_contig(), and vm_page_alloc_freelist() can wait until after
the free page queues lock is released to clear the page's flags.  Moreover,
the PG_FREE flag can be retired.  Now that the reservation system no longer
uses it, its only uses are in a few assertions.  Eliminating these
assertions is no real loss.  Other assertions catch the same types of
misbehavior, like doubly freeing a page (see r260032) or dirtying a free
page (free pages are invalid and only valid pages can be dirtied).

Eliminate an unneeded variable from vm_page_alloc_contig().

Sponsored by:	EMC / Isilon Storage Division
2013-12-31 18:25:15 +00:00
alc
4e0447c7bb Add "popmap" assertions: The page being freed isn't already free, and the
page being allocated isn't already allocated.

Sponsored by:	EMC / Isilon Storage Division
2013-12-29 04:54:52 +00:00
alc
0850ce80b6 MFp4 alc_popmap
Change the way that reservations keep track of which pages are in use.
  Instead of using the page's PG_CACHED and PG_FREE flags, maintain a bit
  vector within the reservation.  This approach has a couple of benefits.
  First, it makes breaking reservations much cheaper because there are
  fewer cache misses to identify the unused pages.  Second, it is a
  prerequisite for supporting two or more reservation sizes.
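
A standalone model of such a bit vector (sizes assumed for a 512-page,
i.e. amd64 2MB, reservation; not the kernel code):

    #include <limits.h>
    #include <stdbool.h>

    #define NPAGES 512                      /* pages per reservation */
    #define BPW    (sizeof(unsigned long) * CHAR_BIT)

    typedef unsigned long popmap_t[NPAGES / BPW];

    static void
    popmap_set(popmap_t pm, int i)          /* mark page i in use */
    {
        pm[i / BPW] |= 1UL << (i % BPW);
    }

    static bool
    popmap_is_set(popmap_t pm, int i)       /* is page i in use? */
    {
        return ((pm[i / BPW] & 1UL << (i % BPW)) != 0);
    }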
2013-12-28 04:28:35 +00:00
kib
2e35793d0f Do not coalesce the stack entry; vm_map_stack() asserts that the requested
region is claimed by a new entry.

Pass the MAP_STACK_GROWS_DOWN and MAP_STACK_GROWS_UP flags to
vm_map_insert() from vm_map_stack(), to really turn off the coalescing
code and the call to vm_map_simplify_entry() [1].

Reported by:	avg, peter, many
Tested by:	avg, peter
Noted by:	avg [1]
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2013-12-27 16:59:47 +00:00
marcel
df5d69cc0d For ia64, use pmap_remove_pages() and not pmap_remove(). The problem is
that we don't have a good way (yet) to iterate over the mapped pages by
virtual address and simply try each page within the range. Given that we
call pmap_remove() over the entire 2^63 bytes of address space, it takes
a while for pmap_remove to have tried all 2^50 pages.
By using pmap_remove_pages() we use the PV list to find all mappings.

Change derived from a patch by: alc
2013-12-26 05:46:10 +00:00
dim
7f980c6758 In sys/vm/vm_pageout.c, since vm_pageout_worker() takes a void * as
argument, cast the incoming 0 argument to void *, to silence a warning
from clang 3.4 ("expression which evaluates to zero treated as a null
pointer constant of type 'void *' [-Wnon-literal-null-conversion]").

MFC after:	3 days
2013-12-25 22:32:34 +00:00
alc
1fd1edcf18 Eliminate a redundant parameter to vm_radix_replace().
Improve the wording of the comment describing vm_radix_replace().

Reviewed by:	attilio
MFC after:	6 weeks
Sponsored by:	EMC / Isilon Storage Division
2013-12-08 20:07:02 +00:00
rodrigc
cc89f06998 In keg_dtor(), print out the keg name in the "Freed UMA keg was not empty"
message printed to the console.  This makes it easier to track down
the source of certain memory leaks.

Suggested by: adrian
2013-11-29 08:04:45 +00:00
mav
78de33b79a - Add bucket size column to show uma DDB command.
- Add `show umacache` command to show similar stats for cache-only UMA zones.
2013-11-28 19:20:49 +00:00
mav
ffd93c315f Make UMA not blindly force offpage slab header allocation for large
(> PAGE_SIZE) zones.  If the zone size is not a multiple of PAGE_SIZE,
there may be enough space for the header in the last page, so we may avoid
the extra header memory allocation and hash table update/lookup.

ZFS creates a bunch of odd-sized UMA zones (5120, 6144, 7168, 10240, 14336).
This change gives good use to at least some of the otherwise lost memory
there.
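
The space argument can be illustrated with ZFS's zone sizes (the header
size below is an assumption for illustration only):

    #include <stdio.h>

    #define PAGE_SIZE     4096
    #define SLAB_HDR_SIZE 96    /* assumed; the real size varies */

    int
    main(void)
    {
        const int sizes[] = { 5120, 6144, 7168, 10240, 14336 };

        for (unsigned i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
            int rem = sizes[i] % PAGE_SIZE;
            int tail = rem != 0 ? PAGE_SIZE - rem : 0;

            printf("zone %5d: tail space %4d -> %s header\n",
                sizes[i], tail,
                tail >= SLAB_HDR_SIZE ? "inline" : "offpage");
        }
        return (0);
    }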

Reviewed by:	avg
2013-11-27 20:56:10 +00:00
mav
8937d14f8f Don't count bucket allocation failures for UMA zones as their own failures.
There are good reasons for this to happen, such as recursion prevention, etc.,
and they are not fatal since buckets are just an optimization mechanism.
Real bucket allocation failures are in any case counted by the bucket zones
themselves, and we don't need double accounting there.
2013-11-27 20:16:18 +00:00
mav
8773f9e310 Fix a bug introduced in r252226, where the udata argument passed to
bucket_alloc() was used without first making sure that it was really
passed for us.

On some of my systems this bug made the user argument passed by ZFS code to
uma_zalloc_arg() unexpectedly block UMA per-CPU caches for those zones.
2013-11-27 19:55:42 +00:00
mav
21d101ccc5 When purging per-CPU UMA caches, do not return empty buckets into the
global full-bucket cache, so as not to trigger an assertion if an
allocation happens before that global cache gets purged.
2013-11-23 13:42:56 +00:00
kib
732157ce21 The VM map code performs clipping when a map entry covers a region
larger than the operational region.  If the op region size is zero,
clipping would create a zero-sized map entry.  The result is that the
vm map splay starts behaving inconsistently, sometimes returning the
zero-sized entry, sometimes the next (or previous) entry.

One step further, it could result in e.g. vm_map_wire() setting
MAP_ENTRY_IN_TRANSITION on the zero-sized entry, but failing to clear
it in the done part.  vm_map_delete() then hangs forever waiting for
the flag removal.

Check for zero-length requests and act as if the operation is always
successful, without performing any action on the address space.
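
The guard boils down to something like this (sketch, simplified from
the map code):

    if (start == end)           /* zero-length request */
        return (KERN_SUCCESS);  /* nothing to do; never clip */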

Diagnosed by:	pho
Tested by:	pho (previous version)
Reviewed by:	alc (previous version)
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2013-11-20 09:03:48 +00:00
kib
2f482609db Add assertions to cover all places in the wiring and unwiring code
where MAP_ENTRY_IN_TRANSITION is set or cleared.

Tested by:	pho
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2013-11-20 08:47:54 +00:00
kib
3c8b0e8428 Revert back to using int for the page counts.  In vn_io_fault(), the i/o
is chunked into pieces limited by the integer io_hold_cnt tunable, while
vm_fault_quick_hold_pages() takes an integer max_count as the upper bound.

Rearrange the checks to correctly handle overflowing address arithmetic.
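
An overflow-safe form of the check, modeled standalone (4 KB pages and a
non-negative max_count assumed; not the committed code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_MASK ((uintptr_t)4095)

    static bool
    range_ok(uintptr_t addr, size_t len, int max_count)
    {
        uintptr_t start, end;

        if (addr + len < addr)          /* address arithmetic wraps */
            return (false);
        start = addr & ~PAGE_MASK;
        end = (addr + len + PAGE_MASK) & ~PAGE_MASK;
        if (end < addr)                 /* rounding wrapped */
            return (false);
        return ((end - start) / (PAGE_MASK + 1) <= (uintptr_t)max_count);
    }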

Submitted by:	bde
Tested by:	pho
Discussed with:	alc
MFC after:	1 week
2013-11-20 08:45:26 +00:00
mav
ff33031e0d Implement a mechanism to safely but slowly purge UMA per-CPU caches.
This is a last resort for very low memory conditions, in case other
measures to free memory were ineffective.  Sequentially cycle through all
CPUs and extract per-CPU cache buckets into the zone cache, from where
they can be freed.
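
In outline, the purge works roughly like this (kernel-context sketch
with a hypothetical cache_transfer_to_zone() helper; not the verbatim
implementation):

    CPU_FOREACH(cpu) {
        thread_lock(curthread);
        sched_bind(curthread, cpu);         /* run on the target CPU */
        thread_unlock(curthread);
        cache_transfer_to_zone(zone, cpu);  /* hypothetical helper */
    }
    thread_lock(curthread);
    sched_unbind(curthread);
    thread_unlock(curthread);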
2013-11-19 10:51:46 +00:00