Commit Graph

2799 Commits

kib
eeb1ebf124 Handle the corner case in vm_fault_quick_hold_pages().
If the supplied length is zero and the user address is invalid, the
function might return -1 due to the truncation and rounding of the
address. The callers interpret this situation as EFAULT. Instead of
handling the zero length in each caller, filter it out in
vm_fault_quick_hold_pages().
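
A minimal sketch of the idea, paraphrased rather than the committed
diff (the range validation and page-holding loop are elided):

    /*
     * A zero-length request holds no pages and trivially succeeds.
     * Filtering it before the address is truncated and rounded keeps
     * an invalid address from turning into a spurious -1 (EFAULT).
     */
    if (len == 0)
            return (0);
    end = round_page(addr + len);
    addr = trunc_page(addr);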

Sponsored by:	The FreeBSD Foundation
Reviewed by:	alc
2011-03-25 16:38:10 +00:00
jhb
c7ac62aecd Fix some locking nits with the p_state field of struct proc:
- Hold the proc lock while changing the state from PRS_NEW to PRS_NORMAL
  in fork to honor the locking requirements.  While here, expand the scope
  of the PROC_LOCK() on the new process (p2) to avoid some LORs.  Previously
  the code was locking the new child process (p2) after it had locked the
  parent process (p1).  However, when locking two processes, the safe order
  is to lock the child first, then the parent (see the sketch after this
  list).
- Fix various places that checked p_state against PRS_NEW without holding
  the process lock; use PROC_LOCK().  Every such place was already locking
  the process, just after the PRS_NEW check.
- Remove or reduce the use of PROC_SLOCK() for places that were checking
  p_state against PRS_NEW.  The PROC_LOCK() alone is sufficient for reading
  the current state.
- Reorder fill_kinfo_proc() slightly so it only acquires PROC_SLOCK() once.
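
A hedged sketch of the resulting idioms (paraphrased, not the committed
diff):

    /* Lock the child (p2) before the parent (p1) to respect lock order. */
    PROC_LOCK(p2);
    PROC_LOCK(p1);
    /* ... fork bookkeeping that needs both processes ... */
    PROC_UNLOCK(p1);
    p2->p_state = PRS_NORMAL;       /* state change under the proc lock */
    PROC_UNLOCK(p2);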

MFC after:	1 week
2011-03-24 18:40:11 +00:00
jeff
2d7d8c05e7 - Merge changes to the base system to support OFED. These include
a wider arg2 for sysctl, updates to vlan code, IFT_INFINIBAND,
   and other miscellaneous small features.
2011-03-21 09:40:01 +00:00
trasz
1eb6b91508 In vm_daemon(), when iterating over all processes in the system, skip those
which are not yet fully initialized (i.e. ones with p_state == PRS_NEW).
Without this check, we could panic in _thread_lock_flags().

Note that there may be other instances of FOREACH_PROC_IN_SYSTEM() that
require a similar fix.
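
A paraphrased sketch of the loop shape; the exact locking in the
original change may differ, and the later locking cleanup (the jhb
commit above) settled on checking p_state under PROC_LOCK():

    sx_slock(&allproc_lock);
    FOREACH_PROC_IN_SYSTEM(p) {
            /*
             * A PRS_NEW process is not fully constructed yet;
             * touching its threads can panic, so skip it.
             */
            PROC_LOCK(p);
            if (p->p_state == PRS_NEW) {
                    PROC_UNLOCK(p);
                    continue;
            }
            /* ... examine the process ... */
            PROC_UNLOCK(p);
    }
    sx_sunlock(&allproc_lock);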

Reported by:	pho, keramida
Discussed with:	kib
2011-03-18 06:47:23 +00:00
alc
9e6c312311 Eliminate duplication of the fake page code and zone by the device and sg
pagers.

Reviewed by:	jhb
2011-03-11 07:07:48 +00:00
brucec
3bd182f4eb Change the return type of vmspace_swap_count to a long to match the other
vmspace_*_count functions.

MFC after:	3 days
2011-03-01 11:04:30 +00:00
pluknet
3061aea0d2 Remove sysctl vm.max_proc_mmap used to protect from KVA space exhaustion.
As Alan Cox pointed out, it no longer serves its purpose with the
modern UMA allocator, compared to the old allocator used in the 4.x days.

Removing the sysctl also eliminates a max_proc_mmap type overflow that
broke mmap(2) on arches with a large amount of physical memory and
effectively unbounded KVA space (such as amd64).  It was found that
slightly less than 256GB of physmem was enough to trigger the overflow.
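
An illustrative userland demonstration of the failure mode; the formula
is hypothetical, not the kernel's exact computation:

    #include <stdio.h>

    int
    main(void)
    {
            /* ~255 GB of physical memory, in bytes */
            long long physmem = 255LL * 1024 * 1024 * 1024;
            /* an int-typed limit derived from it silently wraps */
            int limit = physmem / 100;

            printf("limit = %d\n", limit);  /* prints a negative value */
            return (0);
    }

Once the limit wraps negative, every comparison against it fails and
mmap(2) is denied regardless of the actual map-entry count.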

Reviewed by:	alc, kib
Approved by:	avg (mentor)
MFC after:	2 months
2011-02-24 09:22:56 +00:00
brucec
2d8d5824cb Calculate and return the count in vmspace_swap_count as a vm_offset_t
instead of an int to avoid overflow.

While here, clean up some style(9) issues.

PR:		kern/152200
Reviewed by:	kib
MFC after:	2 weeks
2011-02-23 10:28:37 +00:00
alc
2f4da8e71e Remove pmap fields that are either unused or not fully implemented.
Discussed with:	kib
2011-02-17 15:36:29 +00:00
kib
d20e0514a9 Since r218070 reenabled the call to vm_map_simplify_entry() from
vm_map_insert(), the kmem_back() assumption about the newly inserted
entry can be violated by the interaction of two factors. Under low
memory conditions, when vm_page_alloc() returns NULL, the supplied map
is unlocked. If another thread performs kmem_malloc() in the meantime
and its map entry is placed right next to our thread's map entry, both
entries' wire counts are still 0 and the entries are coalesced by
vm_map_simplify_entry().

Mark the new entry with MAP_ENTRY_IN_TRANSITION to prevent coalescing.
Fix some style issues and tighten the assertions to account for the
MAP_ENTRY_IN_TRANSITION state.
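
A paraphrased sketch of the shape of the fix inside kmem_back(); the
surrounding allocation loop and flags are elided or hypothetical:

    /*
     * Flag the entry before the map can be unlocked, so that
     * vm_map_simplify_entry() will not coalesce it with a neighbor
     * while both wire counts are still 0.
     */
    entry->eflags |= MAP_ENTRY_IN_TRANSITION;

    m = vm_page_alloc(kmem_object, pindex, pflags);
    if (m == NULL) {
            vm_map_unlock(map);     /* safe: the entry cannot be coalesced */
            VM_WAIT;
            vm_map_lock(map);
    }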

Reported and tested by:	pho
Reviewed by:	alc
2011-02-15 09:03:58 +00:00
kib
210cf47742 Lock the vnode around clearing of the VV_TEXT flag. Remove the
mp_fixme() note mentioning that the vnode lock is needed.
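
A minimal sketch of the locking idiom, under the assumption that the
caller does not already hold the vnode lock:

    vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
    vp->v_vflag &= ~VV_TEXT;        /* v_vflag is vnode-lock protected */
    VOP_UNLOCK(vp, 0);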

Reviewed by:	alc
Tested by:	pho
MFC after:	1 week
2011-02-13 21:52:26 +00:00
jmallett
77919e089a Use CPU_FOREACH rather than expecting CPUs 0 through mp_ncpus-1 to be present.
Don't micro-optimize the uniprocessor case; use the same loop there.
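
A short sketch of the replacement idiom (the loop body is hypothetical):

    int cpu;

    /* Visits only CPU ids that are actually present, even if sparse. */
    CPU_FOREACH(cpu) {
            /* ... per-CPU work, e.g. summing per-CPU counters ... */
    }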

Submitted by:	Bhanu Prakash
Reviewed by:	kib, jhb
2011-02-12 02:10:08 +00:00
alc
060dcf42aa Retire VFS_BIO_DEBUG. Convert those checks that were still valid into
KASSERT()s and eliminate the rest.

Replace excessive printf()s and a panic() in bufdone_finish() with a
KASSERT() in vm_page_io_finish().
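
An illustrative shape of such a conversion (the exact condition and
message text are hypothetical): a debug-only printf()/panic() pair
becomes a single assertion compiled in only under INVARIANTS.

    KASSERT(m->busy > 0,
        ("vm_page_io_finish: page %p is not busy", m));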

Reviewed by:	kib
2011-02-12 01:00:00 +00:00
alc
11491a4c5e Unless "cnt" exceeds MAX_COMMIT_COUNT, nfsrv_commit() and nfsvno_fsync() are
incorrectly calling vm_object_page_clean().  They are passing the length of
the range rather than the ending offset of the range.

Perform the OFF_TO_IDX() conversion in vm_object_page_clean() rather than
in the callers.
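
A paraphrased sketch of the resulting calling convention (the flag
value is hypothetical):

    VM_OBJECT_LOCK(obj);
    /* Both bounds are byte offsets: pass the end, not the length. */
    vm_object_page_clean(obj, off, off + cnt, OBJPC_SYNC);
    VM_OBJECT_UNLOCK(obj);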

Reviewed by:	kib
MFC after:	3 weeks
2011-02-05 21:21:27 +00:00
alc
e017b59ac6 Since the last parameter to vm_object_shadow() is a vm_size_t and not a
vm_pindex_t, it makes no sense for its callers to perform atop().  Let
vm_object_shadow() do that instead.
2011-02-04 21:49:24 +00:00
alc
a8872fa39a Release the free page queues lock earlier in vm_page_alloc().
Discussed with:	kib@
2011-01-30 23:55:48 +00:00
alc
48530618fa Reenable the call to vm_map_simplify_entry() from vm_map_insert() for non-
MAP_STACK_* entries.  (See r71983 and r74235.)

In some cases, performing this call to vm_map_simplify_entry() halves the
number of vm map entries used by the Sun JDK.
2011-01-29 15:23:02 +00:00
mdf
7fc649fc41 Explicitly wire the user buffer rather than doing it implicitly in
sbuf_new_for_sysctl(9).  This allows using an sbuf with a SYSCTL_OUT
drain for extremely large amounts of data where the caller knows that
appropriate references are held, and sleeping is not an issue.
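
A hedged usage sketch of the resulting idiom in a sysctl handler (the
payload is hypothetical):

    struct sbuf *sb;
    int error;

    error = sysctl_wire_old_buffer(req, 0);   /* wire explicitly */
    if (error != 0)
            return (error);
    sb = sbuf_new_for_sysctl(NULL, NULL, 128, req);
    sbuf_printf(sb, "very large report ...");
    error = sbuf_finish(sb);
    sbuf_delete(sb);
    return (error);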

Inspired by:	rwatson
2011-01-27 00:34:12 +00:00
pluknet
5f536fc1d3 Make the MSGBUF_SIZE kernel option a loader tunable, kern.msgbufsize.
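
Example use of the tunable (the value shown is arbitrary):

    # /boot/loader.conf
    kern.msgbufsize="131072"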
Submitted by:	perryh pluto.rain.com (previous version)
Reviewed by:	jhb
Approved by:	kib (mentor)
Tested by:	universe
2011-01-21 10:26:26 +00:00
alc
dcf40640d1 Move the definition of M_VMPGDATA to the swap pager, where the only
remaining uses are.
2011-01-18 04:54:43 +00:00
alc
d326a38b4b Explicitly initialize the page's queue field to PQ_NONE instead of relying
on PQ_NONE being zero.

Redefine PQ_NONE and PQ_COUNT so that a page queue isn't allocated for
PQ_NONE.

Reviewed by:	kib@
2011-01-17 19:17:26 +00:00
alc
677bb7a34f Sort function prototypes. 2011-01-16 20:40:50 +00:00
alc
ff6910496e Update a lock annotation on the page structure. 2011-01-16 18:04:01 +00:00
alc
b513439d0a Shift responsibility for synchronizing access to the page's act_count
field to the object's lock.

Reviewed by:	kib@
2011-01-16 18:01:39 +00:00
alc
a4fbc5e9f2 Clean up the start of vm_page_alloc(). In particular, eliminate an
assertion that is no longer required.  Long ago, calls to vm_page_alloc()
from an interrupt handler had to specify VM_ALLOC_INTERRUPT so that
vm_page_alloc() would not attempt to reclaim a PQ_CACHE page from another vm
object.  Today, with the synchronization on a vm object's collection of
PQ_CACHE pages, this is no longer an issue.  In fact, VM_ALLOC_INTERRUPT now
reclaims PQ_CACHE pages just like VM_ALLOC_{NORMAL,SYSTEM}.

MFC after:	3 weeks
2011-01-16 17:33:34 +00:00
kib
ccf352ab59 For consistency, use kernel_object instead of &kernel_object_store
when initializing the object mutex. Do the same for kmem_object.

Discussed with:	alc
MFC after:	1 week
2011-01-15 21:56:38 +00:00
alc
01330416e0 For some time now, the kernel and kmem objects have been ordinary
OBJT_PHYS objects.  Thus, there is no need for handling them specially
in vm_fault().  In fact, this special case handling would have led to
an assertion failure just before the call to pmap_enter().

Reviewed by:	kib@
MFC after:	6 weeks
2011-01-15 19:21:28 +00:00
jhb
c17f46e472 Remove unneeded includes of <sys/linker_set.h>. Other headers that use
it internally contain nested includes.

Reviewed by:	bde
2011-01-11 13:59:06 +00:00
kib
4f8260e700 Move repeated MAXSLP definition from machine/vmparam.h to sys/vmmeter.h.
Update the outdated comments describing MAXSLP and the process
selection algorithm for swap out.

Comments wording and reviewed by:	alc
2011-01-09 12:50:44 +00:00
alc
2ff68e8630 Eliminate a redundant alignment directive on the page locks array. 2011-01-09 04:34:02 +00:00
alc
a3f4c0274d Eliminate the counting of vm_page_pa_tryrelock calls. We really don't
need it anymore.  Moreover, its implementation had a type mismatch, a
long is not necessarily a uint64_t.  (This mismatch was hidden by
casting.)  Move the remaining two counters up a level in the sysctl
hierarchy.  There is no reason for them to be under the vm.pmap node.

Reviewed by:	kib
2011-01-08 22:45:22 +00:00
alc
8cd48d17c8 Release the page lock early in vm_pageout_clean(). There is no reason to
hold this lock until the end of the function.

With the aforementioned change to vm_pageout_clean(), page locks don't need
to support recursive (MTX_RECURSE) or duplicate (MTX_DUPOK) acquisitions.

Reviewed by:	kib
2011-01-03 00:41:56 +00:00
alc
7fb616e10a Make a couple refinements to r216799 and r216810. In particular, revise
a comment and move it to its proper place.

Reviewed by:	kib
2011-01-01 17:39:38 +00:00
brucec
688b83a0a6 There can be more than 0x20000000 swap meta blocks allocated if a swap-backed
md(4) device is used. Don't panic when deallocating such a device if swap
has been used.

PR:	kern/133170
Discussed with:	kib
MFC after:	3 days
2011-01-01 16:59:05 +00:00
kib
f112887cab Remove the OBJ_CLEANING flag. vfs_setdirty_locked_object() is the only
consumer of the flag, and it used the flag because OBJ_MIGHTBEDIRTY
was cleared early in vm_object_page_clean(), before the cleaning pass
was done. This is no longer true after r216799.

Moreover, since OBJ_CLEANING is a flag and not a counter, it could be
reset prematurely when parallel invocations of vm_object_page_clean()
are performed.

Reviewed by:	alc (as a part of the bigger patch)
MFC after:	1 month (after r216799 is merged)
2010-12-29 22:26:49 +00:00
alc
cc9b2308bc There is no point in vm_contig_launder{,_page}() flushing held pages;
instead, skip over them.  As long as a page is held, it can't be reclaimed by
contigmalloc(M_WAITOK).  Moreover, a held page may be undergoing
modification, e.g., vmapbuf(), so even if the hold were released before the
completion of contigmalloc(), the page might have to be flushed again.
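
A paraphrased sketch of the skip (locking details elided):

    if (m->hold_count != 0)
            continue;       /* held: cannot be reclaimed, so don't flush */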

MFC after:	3 weeks
2010-12-29 20:35:36 +00:00
kib
1a1716f8a9 Move the increment of vm object generation count into
vm_object_set_writeable_dirty().

Fix an issue where a restart of the scan in vm_object_page_clean() did
not remove write permissions for newly added pages, or for already
scanned pages whose mappings became writeable again due to a fault.
Merge the two loops in vm_object_page_clean(), doing the removal of
write permissions and the cleaning in the same loop. The restart of the
loop then correctly downgrades writeable mappings.

Fix an issue where a second caller to msync() might return before the
first caller had actually completed flushing the
pages. Clear the OBJ_MIGHTBEDIRTY flag after the cleaning loop, not
before.

Calls to pmap_is_modified() are not needed after pmap_remove_write()
there.
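
A paraphrased sketch of the merged loop's shape (flushing elided):

    TAILQ_FOREACH(p, &object->memq, listq) {
            pmap_remove_write(p);   /* downgrade writeable mappings */
            vm_page_test_dirty(p);
            if (p->dirty != 0) {
                    /*
                     * Flush the run containing p; if the scan must
                     * restart, the loop re-applies pmap_remove_write()
                     * to pages added or re-mapped in the meantime.
                     */
            }
    }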

Proposed, reviewed and tested by:	alc
MFC after:	1 week
2010-12-29 12:53:53 +00:00
alc
99000f1878 Correct a typo in vm_fault_quick_hold_pages().
Reported by:	Bartosz Stec
2010-12-28 20:02:30 +00:00
alc
8e22952d45 Move vm_object_print()'s prototype to the expected place. 2010-12-27 07:12:22 +00:00
alc
d835374cc6 Retire vm_fault_quick(). It's no longer used.
Reviewed by:	kib@
2010-12-25 23:54:50 +00:00
alc
971b02b7bc Introduce and use a new VM interface for temporarily pinning pages. This
new interface replaces the combined use of vm_fault_quick() and
pmap_extract_and_hold() throughout the kernel.
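
A hedged usage sketch of the pinning interface, paired with
vm_page_unhold_pages() (introduced a few commits below):

    vm_page_t ma[4];
    int count;

    count = vm_fault_quick_hold_pages(map, uva, len, VM_PROT_READ, ma, 4);
    if (count == -1)
            return (EFAULT);
    /* ... access the held pages ... */
    vm_page_unhold_pages(ma, count);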

In collaboration with:	kib@
2010-12-25 21:26:56 +00:00
alc
be5201b0d1 Introduce vm_fault_hold() and use it to (1) eliminate a long-standing race
condition in proc_rwmem() and to (2) simplify the implementation of the
cxgb driver's vm_fault_hold_user_pages().  Specifically, in proc_rwmem()
the requested read or write could fail because the targeted page could be
reclaimed between the calls to vm_fault() and vm_page_hold().

In collaboration with:	kib@
MFC after:	6 weeks
2010-12-20 22:49:31 +00:00
alc
303f816df2 Implement and use a single optimized function for unholding a set of pages.
Reviewed by:	kib@
2010-12-17 22:41:22 +00:00
alc
fa0943bb62 Change memguard_fudge() so that it can handle km_max being zero. Not
every platform defines VM_KMEM_SIZE_MAX, and on those platforms km_max
will be zero.

Reviewed by:	mdf
Tested by:	marius
2010-12-14 05:47:35 +00:00
mlaier
da2dde653e Fix a long-standing race (dating back to the original 4.4BSD Lite
sources) between vmspace_fork() and vm_map_wire() that would lead to
"vm_fault_copy_wired: page missing" panics.  While faulting in pages for a map
entry that is being wired down, mark the containing map as busy.  In
vmspace_fork(), wait until the map is unbusy before trying to copy the entries.
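
A paraphrased sketch of the two sides of the fix:

    /* vm_map_wire(): keep forks out while faulting in wired pages. */
    vm_map_busy(map);
    vm_map_unlock(map);
    /* ... vm_fault() each page of the entry ... */
    vm_map_lock(map);
    vm_map_unbusy(map);

    /* vmspace_fork(): wait for any wiring in flight to finish. */
    vm_map_lock(old_map);
    vm_map_wait_busy(old_map);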

Reviewed by:	kib
MFC after:	5 days
Sponsored by:	Isilon Systems, Inc.
2010-12-09 21:02:22 +00:00
jchandra
bfd4abe2ac Revert the vm/vm_page.c change in r216317.
This adds back the changes from r216141, which were reverted by the
check-in above.
2010-12-09 07:39:06 +00:00
jchandra
012e7effe8 swi_vm() for mips. 2010-12-09 06:54:06 +00:00
trasz
b2b2bfee2e Fix comment indentation. 2010-12-04 17:41:58 +00:00
imp
d804a05c38 To make minidumps work properly on mips for memory that is
direct-mapped and entered via vm_page_setup, keep track of it like we
do for amd64.

# A separate commit will be made to move this to a capability-based ifdef
# rather than arch-based ifdef.

Submitted by:	alc@
MFC after:	1 week
2010-12-03 04:39:48 +00:00
trasz
e5fb69509c Replace pointer to "struct uidinfo" with pointer to "struct ucred"
in "struct vm_object".  This is required to make it possible to account
for per-jail swap usage.
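
A paraphrased sketch of the change to "struct vm_object" (field names
follow later FreeBSD sources; the jail is reachable via cred->cr_prison):

    struct vm_object {
            /* ... */
            struct ucred *cred;     /* was: struct uidinfo *uip */
            vm_ooffset_t charge;    /* swap reservation charged to cred */
    };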

Reviewed by:	kib@
Tested by:	pho@
Sponsored by:	FreeBSD Foundation
2010-12-02 17:37:16 +00:00