Commit Graph

2509 Commits

Author SHA1 Message Date
Alan Cox
beb3c3a9c5 Retire VM_PROT_READ_IS_EXEC. It was intended to be a micro-optimization,
but I see no benefit from it today.

VM_PROT_READ_IS_EXEC was only intended for use on processors that do not
distinguish between read and execute permission.  On an mmap(2) or
mprotect(2), it automatically added execute permission if the caller
specified permissions included read permission.  The hope was that this
would reduce the number of vm map entries needed to implement an address
space because there would be fewer neighboring vm map entries that differed
only in the presence or absence of VM_PROT_EXECUTE.  (See vm/vm_mmap.c
revision 1.56.)

Today, I don't see any real applications that benefit from
VM_PROT_READ_IS_EXEC.  In any case, vm map entries are now organized
as a self-adjusting binary search tree instead of an ordered list.  So,
the need for coalescing vm map entries is not as great as it once was.
2009-04-04 23:12:14 +00:00
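The retired heuristic is simple enough to sketch in a few lines. The fragment below is only an illustration of the behaviour described above, not the FreeBSD source; the constants and the helper name are made up for the example.

#include <stdio.h>

#define VM_PROT_READ    0x01
#define VM_PROT_WRITE   0x02
#define VM_PROT_EXECUTE 0x04

/*
 * Sketch of the retired heuristic: on platforms that cannot tell read
 * from execute, a request for read was silently widened to read+execute
 * so that adjacent map entries were more likely to be coalescible.
 */
static int
widen_prot(int prot)
{
	if (prot & VM_PROT_READ)
		prot |= VM_PROT_EXECUTE;
	return (prot);
}

int
main(void)
{
	printf("requested 0x%x, granted 0x%x\n",
	    VM_PROT_READ | VM_PROT_WRITE,
	    widen_prot(VM_PROT_READ | VM_PROT_WRITE));
	return (0);
}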
Alan Cox
a7f9bae19e Eliminate dead code.
Reviewed by:	jhb
2009-04-01 04:36:37 +00:00
John Baldwin
5bd65606f4 Adjust some variables (mostly related to the buffer cache) that hold
address space sizes to be longs instead of ints.  Specifically, the following
values are now longs: runningbufspace, bufspace, maxbufspace,
bufmallocspace, maxbufmallocspace, lobufspace, hibufspace, lorunningspace,
hirunningspace, maxswzone, maxbcache, and maxpipekva.  Previously, a
relatively small number (~ 44000) of buffers set in kern.nbuf would cause
integer overflows, resulting either in hangs or in bogus values of
hidirtybuffers and lodirtybuffers.  Now one has to overflow a long to see
such problems.  There was a check for an nbuf setting that would cause
overflows in the auto-tuning of nbuf.  I've changed it to always check and
cap nbuf but warn if a user-supplied tunable would cause overflow.

Note that this changes the ABI of several sysctls that are used by things
like top(1), etc., so any MFC would probably require some gross shims
to allow for that.

MFC after:	1 month
2009-03-09 19:35:20 +00:00
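A minimal userland illustration of the overflow class being fixed; the buffer count and per-buffer size below are hypothetical tuning values, not the kernel's, and long long is used so the demo behaves the same on 32-bit hosts.

#include <limits.h>
#include <stdio.h>

int
main(void)
{
	int nbuf = 44000;		/* a "relatively small" nbuf setting */
	int bkva = 64 * 1024;		/* hypothetical per-buffer KVA size */
	long long space = (long long)nbuf * bkva;

	/* Computed in int, this product would wrap; in a wider type it is fine. */
	printf("nbuf * bkva = %lld, INT_MAX = %d: %s an int\n",
	    space, INT_MAX,
	    space > INT_MAX ? "overflows" : "fits in");
	return (0);
}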
Alan Cox
5758fe7185 Prior to r188331 a map entry's last read offset was only updated by a hard
fault.  In r188331 this update was relocated because of synchronization
changes to a place where it would occur on both hard and soft faults.  This
change again restricts the update to hard faults.
2009-02-25 07:52:53 +00:00
Konstantin Belousov
655c349022 Revert the addition of the freelist argument for the vm_map_delete()
function, done in r188334. Instead, collect the entries that shall be
freed in the deferred_freelist member of the map. Automatically purge
the deferred freelist when the map is unlocked.

Tested by:	pho
Reviewed by:	alc
2009-02-24 20:57:43 +00:00
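The pattern described above (queue entries while the lock is held, purge the queue when the lock is dropped) looks roughly like the sketch below; the struct and function names are illustrative, not the kernel's.

#include <pthread.h>
#include <stdlib.h>

struct entry {
	struct entry	*next;
	/* ... payload ... */
};

struct map {
	pthread_mutex_t	 lock;
	struct entry	*deferred_freelist;	/* filled while locked */
};

/* With the map locked, deletions only queue the entry. */
static void
map_defer_free(struct map *m, struct entry *e)
{
	e->next = m->deferred_freelist;
	m->deferred_freelist = e;
}

/* Unlocking detaches the queue and frees it outside the critical section. */
static void
map_unlock(struct map *m)
{
	struct entry *e, *next;

	e = m->deferred_freelist;
	m->deferred_freelist = NULL;
	pthread_mutex_unlock(&m->lock);

	for (; e != NULL; e = next) {
		next = e->next;
		free(e);		/* safe to take other locks here */
	}
}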
Konstantin Belousov
3a0916b8ea Add the assertion macros for the map locks. Use them in several map
manipulation functions.

Tested by:	pho
Reviewed by:	alc
2009-02-24 20:43:29 +00:00
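A hedged sketch of what such an assertion macro can look like; the real macros differ in detail and build on the kernel's own locking primitives.

#include <assert.h>
#include <pthread.h>

struct map {
	pthread_mutex_t	lock;
	int		locked;		/* illustrative ownership flag */
};

#define MAP_ASSERT_LOCKED(m)						\
	assert((m)->locked && "map lock must be held here")

static void
map_entry_unlink(struct map *m)
{
	MAP_ASSERT_LOCKED(m);		/* manipulation requires the map lock */
	/* ... the actual unlinking would go here ... */
}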
Konstantin Belousov
e608cc3c8d Update the comment after r188334.
Reviewed by:	alc
2009-02-24 20:23:16 +00:00
Roman Divacky
af83f5d77c Change the functions to ANSI in those cases where it breaks the
promotion-to-int rule. See ISO C Standard: SS6.7.5.3:15.

Approved by:	kib (mentor)
Reviewed by:	warner
Tested by:	silence on -current
2009-02-24 18:09:31 +00:00
Robert Watson
9309e63c1f Put debug.vm_lowmem sysctl under DIAGNOSTIC.
Submitted by:	sam
MFC after:	3 days
2009-02-23 23:30:17 +00:00
Robert Watson
86f087370b Add a debugging sysctl, debug.vm_lowmem, that when assigned a value of
1 will trigger a pass through the VM's low-memory handlers, such as
protocol and UMA drain routines.  This makes it easier to exercise
these otherwise rarely-invoked code paths.

MFC after:	3 days
2009-02-23 23:00:12 +00:00
Alan Cox
bfd9b137a0 Reduce the scope of the page queues lock in vm_object_page_remove().
MFC after:	1 week
2009-02-21 20:57:25 +00:00
Alan Cox
9d13a605d4 Eliminate stale comments.
2009-02-20 16:19:34 +00:00
Konstantin Belousov
cb61d6987e Comment out the assertion from r188321. It is not valid for nfs.
Reported by:	alc
2009-02-09 11:32:23 +00:00
Alan Cox
c722e407dc Avoid some cases of unnecessary page queues locking by vm_fault's delete-
behind heuristic.
2009-02-09 06:23:21 +00:00
Alan Cox
7b54b1a9f5 Eliminate OBJ_NEEDGIANT. After r188331, OBJ_NEEDGIANT's only use is by a
redundant assertion in vm_fault().

Reviewed by:	kib
2009-02-08 22:17:24 +00:00
Konstantin Belousov
2fada4c2b3 Remove no longer valid comment.
Submitted by:	alc
2009-02-08 21:20:13 +00:00
Konstantin Belousov
b0994946c7 Improve comments, correct English.
Submitted by:	alc
2009-02-08 20:52:09 +00:00
Konstantin Belousov
897d81a020 Do not call vm_object_deallocate() from vm_map_delete(), because we
hold the map lock there, and might need the vnode lock for OBJT_VNODE
objects. Postpone object deallocation until the caller of vm_map_delete()
drops the map lock. Link the map entries to be freed into a freelist
that is released by the new helper function vm_map_entry_free_freelist().

Reviewed by:	tegge, alc
Tested by:	pho
2009-02-08 20:39:17 +00:00
Konstantin Belousov
e53fa61bf2 In vm_map_sync(), do not call vm_object_sync() while holding map lock.
Reference object, drop the map lock, and then call vm_object_sync().
The object sync might require vnode lock for OBJT_VNODE type objects.

Reviewed by:	tegge
Tested by:	pho
2009-02-08 20:30:51 +00:00
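The lock-ordering fix above follows a common pattern: take a reference that keeps the object alive, drop the lock whose order would be violated, and only then perform the operation that may sleep on the other lock. A generic sketch with illustrative names, not the vm_map_sync() code:

#include <pthread.h>

struct object {
	pthread_mutex_t	 lock;
	int		 refcnt;
};

static void
object_hold(struct object *o)
{
	pthread_mutex_lock(&o->lock);
	o->refcnt++;
	pthread_mutex_unlock(&o->lock);
}

static void
object_drop(struct object *o)
{
	pthread_mutex_lock(&o->lock);
	o->refcnt--;
	pthread_mutex_unlock(&o->lock);
}

/* Stand-in for the operation that may sleep on another lock. */
static void
object_sync(struct object *o)
{
	(void)o;
}

/* Reference the object, drop the outer lock, then do the sleepable work. */
static void
map_sync_range(pthread_mutex_t *map_lock, struct object *obj)
{
	object_hold(obj);		/* keeps obj alive across the unlock */
	pthread_mutex_unlock(map_lock);
	object_sync(obj);		/* free to acquire the vnode-like lock */
	object_drop(obj);
}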
Konstantin Belousov
d2bf64c309 Do not sleep for the vnode lock while holding the map lock in vm_fault.
Try to acquire the vnode lock for an OBJT_VNODE object after the map lock
is dropped. Because we have the busy page(s) in the object, sleeping there
would result in a deadlock with vnode resize. Try to get the lock without
sleeping, and, if the attempt fails, drop the state, lock the vnode, and
restart the fault handler from the start with the vnode already locked.

Because the vnode_pager_lock() function is inlined in vm_fault(),
axe it.

Based on suggestion by:	alc
Reviewed by:	tegge, alc
Tested by:	pho
2009-02-08 20:23:46 +00:00
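The restart logic above amounts to "try the lock without sleeping; on failure, unwind, take the lock the slow way, and come back". A hedged sketch with made-up names, not the vm_fault() code:

#include <pthread.h>
#include <stdbool.h>

struct fault_state {
	pthread_mutex_t	*map_lock;
	pthread_mutex_t	*vnode_lock;
	bool		 vnode_locked;	/* carried into the retry */
};

static bool
fault_once(struct fault_state *fs)
{
	pthread_mutex_lock(fs->map_lock);

	if (!fs->vnode_locked &&
	    pthread_mutex_trylock(fs->vnode_lock) != 0) {
		/* Would have to sleep: unwind and retry with the vnode locked. */
		pthread_mutex_unlock(fs->map_lock);
		pthread_mutex_lock(fs->vnode_lock);	/* sleeping is fine here */
		fs->vnode_locked = true;
		return (false);				/* caller restarts */
	}
	fs->vnode_locked = true;

	/* ... the actual fault work would run here ... */

	pthread_mutex_unlock(fs->vnode_lock);
	fs->vnode_locked = false;
	pthread_mutex_unlock(fs->map_lock);
	return (true);
}

static void
handle_fault(struct fault_state *fs)
{
	while (!fault_once(fs))
		;	/* restart from the top with the vnode already locked */
}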
Konstantin Belousov
7fd10fb3c7 Add comments to vm_map_simplify_entry() and vmspace_fork(),
describing why several calls to vm_object_deallocate() with the map locked
do not result in the acquisition of the vnode lock after the map lock.

Suggested and reviewed by:	tegge
2009-02-08 20:00:33 +00:00
Konstantin Belousov
1fac7d7f35 Lock the new map in vmspace_fork(). The newly allocated map should not
be accessible outside vmspace_fork() yet, but locking it satisfies the
protocol expected by vm_map_entry_link() and the other functions called
from vmspace_fork().

Use a trylock, which supposedly cannot fail, to silence the WITNESS
warning about the nested acquisition of an sx lock with the same name.

Suggested and reviewed by:	tegge
2009-02-08 19:55:03 +00:00
Konstantin Belousov
705f0a82c2 Assert that vnode is exclusively locked when its vm object is resized.
Reviewed by:	tegge
2009-02-08 19:44:50 +00:00
Konstantin Belousov
9f6acfd1a8 Do not leak the MAP_ENTRY_IN_TRANSITION flag when copying a map entry
on fork. Otherwise, the copied entry cannot be removed in the child map.

Reviewed by:	tegge
MFC after:	2 weeks
2009-02-08 19:41:08 +00:00
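The fix boils down to masking out the transient flag when an entry is duplicated. A small sketch with illustrative flag names, not the kernel's MAP_ENTRY_* values:

#include <stdio.h>

#define ENTRY_COW		0x0001	/* illustrative flag values */
#define ENTRY_IN_TRANSITION	0x0100

struct entry {
	unsigned flags;
	/* ... */
};

/* Copying an entry on fork must not inherit transient state, or the
 * child could never clear it and the entry could not be removed. */
static void
entry_copy(struct entry *dst, const struct entry *src)
{
	*dst = *src;
	dst->flags &= ~ENTRY_IN_TRANSITION;
}

int
main(void)
{
	struct entry parent = { ENTRY_COW | ENTRY_IN_TRANSITION };
	struct entry child;

	entry_copy(&child, &parent);
	printf("parent 0x%x -> child 0x%x\n", parent.flags, child.flags);
	return (0);
}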
Konstantin Belousov
0d0be82a5d Style.
2009-02-08 19:37:01 +00:00
Jeff Roberson
e20a199fd5 - Make the keg abstraction more complete. Permit a zone to have multiple
backend kegs so it may source compatible memory from multiple backends.
   This is useful for cases such as NUMA or different layouts for the same
   memory type.
 - Provide a new API for adding new backend kegs to secondary zones.
 - Provide a new flag for adjusting the layout of zones to stagger
   allocations better across cache lines.

Sponsored by:	Nokia
2009-01-25 09:11:24 +00:00
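A rough data-structure sketch of the zone/keg split described above; the types and the attach function are illustrative, not the UMA API.

#include <stddef.h>

/* A keg represents one backend source of slabs (e.g. one NUMA domain
 * or one layout of the same item type). */
struct keg {
	struct keg	*next_in_zone;
	size_t		 item_size;
	/* ... slab lists, per-backend state ... */
};

/* A zone fronts one or more kegs and hands items to consumers. */
struct zone {
	const char	*name;
	struct keg	*kegs;		/* now a list, not a single keg */
};

/* Attach an additional backend keg to an existing (secondary) zone. */
static void
zone_add_keg(struct zone *z, struct keg *k)
{
	k->next_in_zone = z->kegs;
	z->kegs = k;
}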
John Baldwin
8a7ef10b71 - Mark all standalone INT/LONG/QUAD sysctls MPSAFE.  This is done
inside the SYSCTL() macros and thus does not need to be done for
  all of the nodes scattered across the source tree.
- Mark the name-cache related sysctls (including debug.hashstat.*) MPSAFE.
- Mark vm.loadavg MPSAFE.
- Remove GIANT_REQUIRED from vmtotal() (everything in this routine already
  has sufficient locking) and mark vm.vmtotal MPSAFE.
- Mark the vm.stats.(sys|vm).* sysctls MPSAFE.
2009-01-23 22:49:23 +00:00
John Baldwin
fa3de7700c Now that vfs_markatime() no longer requires an exclusive lock due to
the VOP_MARKATIME() changes, use a shared vnode lock for mmap().

Submitted by:	ups
2009-01-21 14:43:35 +00:00
Konstantin Belousov
641e2829b6 Extend the struct vm_page wire_count to u_int to avoid overflow
of the counter, which may happen when too many sendfile(2) calls are
being executed with this vnode [1].

To keep the size of struct vm_page and the offsets of the fields
accessed by out-of-tree modules unchanged, swap the types and locations
of the wire_count and cow fields. Add safety checks to detect cow
overflow and force a fallback to the normal copy code for zero-copy
sockets. [2]

Reported by:	Anton Yuzhaninov <citrin citrin ru> [1]
Suggested by:	alc [2]
Reviewed by:	alc
MFC after:	2 weeks
2009-01-03 13:24:08 +00:00
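The layout trick can be checked at compile time: widening one field while narrowing and relocating another keeps both the structure size and the offsets of the surrounding fields stable. A standalone sketch, where the field types and names are illustrative rather than vm_page's actual layout:

#include <stddef.h>

struct page_old {
	void		*object;
	unsigned short	 wire_count;	/* 16 bits: could overflow */
	unsigned short	 pad;
	unsigned int	 cow;
	unsigned char	 flags;
};

struct page_new {
	void		*object;
	unsigned int	 wire_count;	/* widened to 32 bits */
	unsigned short	 cow;		/* narrowed and moved into the slot */
	unsigned short	 pad;
	unsigned char	 flags;
};

/* Out-of-tree consumers see the same size and the same offset of 'flags'. */
_Static_assert(sizeof(struct page_old) == sizeof(struct page_new),
    "size must not change");
_Static_assert(offsetof(struct page_old, flags) ==
    offsetof(struct page_new, flags), "trailing offsets must not change");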
Alan Cox
05a8c41419 Resurrect shared map locks allowing greater concurrency during some map
operations, such as page faults.

An earlier version of this change was ...

Reviewed by:	kib
Tested by:	pho
MFC after:	6 weeks
2009-01-01 00:31:46 +00:00
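In practice this means read-mostly operations such as page faults can take the map lock shared, while operations that mutate the map still take it exclusively. A minimal reader/writer analogy using pthreads, not the kernel's lock implementation:

#include <pthread.h>

static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Page-fault-style lookups can proceed concurrently. */
static void
lookup_entry(void)
{
	pthread_rwlock_rdlock(&map_lock);
	/* ... read-only walk of the map ... */
	pthread_rwlock_unlock(&map_lock);
}

/* Modifications still exclude everyone else. */
static void
insert_entry(void)
{
	pthread_rwlock_wrlock(&map_lock);
	/* ... link a new entry ... */
	pthread_rwlock_unlock(&map_lock);
}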
Alan Cox
e2abaaaa2b Update or eliminate some stale comments.
2008-12-31 05:44:05 +00:00
Alan Cox
7438d60b4b Avoid an unnecessary memory dereference in vm_map_entry_splay().
2008-12-30 21:52:18 +00:00
Alan Cox
095104ac36 Style change to vm_map_lookup(): Eliminate a macro of dubious value.
2008-12-30 20:51:07 +00:00
Alan Cox
4c3ef59e3d Move the implementation of the vm map's fast path on address lookup from
vm_map_lookup{,_locked}() to vm_map_lookup_entry().  Having the fast path
in vm_map_lookup{,_locked}() limits its benefits to page faults.  Moving
it to vm_map_lookup_entry() extends its benefits to other operations on
the vm map.
2008-12-30 19:48:03 +00:00
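The "fast path" is a cached hint: remember the last entry that satisfied a lookup and test it before walking the tree, so that any map operation (not just a fault) benefits. An illustrative sketch, not the splay-tree code itself:

#include <stdbool.h>
#include <stddef.h>

struct map_entry {
	unsigned long	  start, end;	/* [start, end) address range */
	struct map_entry *left, *right;
};

struct map {
	struct map_entry *root;
	struct map_entry *hint;		/* last entry found by a lookup */
};

static bool
map_lookup_entry(struct map *m, unsigned long addr, struct map_entry **out)
{
	struct map_entry *e = m->hint;

	/* Fast path: the previous answer often still covers the address. */
	if (e != NULL && addr >= e->start && addr < e->end) {
		*out = e;
		return (true);
	}

	/* Slow path: ordinary binary-search-tree descent. */
	for (e = m->root; e != NULL; e = addr < e->start ? e->left : e->right) {
		if (addr >= e->start && addr < e->end) {
			m->hint = e;
			*out = e;
			return (true);
		}
	}
	return (false);
}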
Robert Noland
e9f541267d Fix printing of KASSERT message missed in r163604.
Approved by:	kib
2008-12-21 16:56:13 +00:00
Konstantin Belousov
6129343d5d Instead of forcing vn_start_write() to reset mp back to NULL for
failed calls with a non-NULL vp, explicitly clear mp after failure.

Tested by:	stass
Reviewed by:	tegge
PR:		123768
MFC after:	1 week
2008-11-16 21:57:54 +00:00
Rafal Jaworowski
8e321b7943 Support kernel crash mini dumps on ARM architecture.
Obtained from:	Juniper Networks, Semihalf
2008-11-06 16:20:27 +00:00
Giorgos Keramidas
2db63c5e38 Various comment nits, and typos. 2008-11-02 00:41:26 +00:00
Robert Watson
556c3162b9 Update mmap() comment: no more block devices, so no more block device
cache coherency questions.

MFC after:	3 days
2008-10-22 16:50:12 +00:00
Attilio Rao
0d7935fd01 Remove the useless struct thread argument from the bufobj interface.
In particular, the KPI of the following functions changed:
- bufobj_invalbuf()
- bufsync()

and the BO_SYNC() "virtual method" of the buffer object set.
The main consumers of the bufobj functions are affected by this change too;
in particular, the functions whose KPI changed are:
- vinvalbuf()
- g_vfs_close()

Due to the KPI breakage, __FreeBSD_version will be bumped in a later
commit.

As a side note, please consider the passing of the 'curthread' argument
to VOP_SYNC() (in bufsync()) as just temporary; it will be axed out ASAP.

Reviewed by:	kib
Tested by:	Giovanni Trematerra <giovanni dot trematerra at gmail dot com>
2008-10-10 21:23:50 +00:00
Konstantin Belousov
2025d69ba7 Move the code for doing out-of-memory handling from vm_pageout_scan()
into the separate function vm_pageout_oom(). Supply a parameter for
vm_pageout_oom() describing the reason for the call.

Call vm_pageout_oom() from swp_pager_meta_build() when the swap zone
is exhausted.

Reviewed by:	alc
Tested by:	pho, jhb
MFC after:	2 weeks
2008-09-29 19:45:12 +00:00
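Factoring the handler out with a "reason" argument lets a second caller (the swap pager) reuse it. A schematic with hypothetical names, not the vm_pageout code:

enum oom_reason {
	OOM_PAGE_SHORTAGE,	/* called from the pageout scan */
	OOM_SWAP_ZONE_EXHAUSTED	/* called from the swap pager */
};

/* Pick and kill the largest eligible process; the reason can steer
 * the policy or just the log message. */
static void
pageout_oom(enum oom_reason why)
{
	(void)why;
	/* ... select a victim, deliver SIGKILL ... */
}

static void
pageout_scan(void)
{
	int shortage = 1;	/* stand-in for the real accounting */

	if (shortage)
		pageout_oom(OOM_PAGE_SHORTAGE);
}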
Ed Maste
a8a478fce6 Move CTASSERT from header file to source file, per implementation note now
in the CTASSERT man page.
2008-09-26 18:44:40 +00:00
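For reference, a compile-time assertion of this kind costs nothing at run time and is checked when the source file that contains it is compiled; a standalone C11 equivalent of the idea (CTASSERT is the kernel's macro for the same job):

#include <stdint.h>

struct ondisk_header {
	uint32_t	magic;
	uint32_t	version;
	uint64_t	length;
};

/* Checked when this .c file is compiled; no run-time cost. */
_Static_assert(sizeof(struct ondisk_header) == 16,
    "on-disk header layout changed");

int
main(void)
{
	return (0);
}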
Konstantin Belousov
7818e0a545 Save the previous content of td_fpop before storing the current
file descriptor into it. Make sure that td_fpop is NULL when calling
d_mmap from dev_pager_getpages().

The change guards against the td_fpop field being non-NULL with private
state for another device, and against sudden clearing of td_fpop. This
could occur either when a driver method calls another driver through a
file-descriptor operation, or when a page fault happens while a driver
is writing to memory backed by another driver.

Noted by:	rwatson
Tested by:	rnoland
MFC after:	3 days
2008-09-26 14:50:49 +00:00
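The save/restore discipline described above is the classic way to let a per-thread slot survive nested calls. A generic sketch with illustrative names, not the td_fpop code:

#include <stddef.h>

struct file;

/* One slot per thread; a plain thread-local stands in for the td field. */
static _Thread_local struct file *cur_fp;

static void
driver_method(struct file *fp)
{
	struct file *saved = cur_fp;	/* save whatever the caller had */

	cur_fp = fp;
	/* ... call into the driver; it may re-enter another driver,
	 * which saves and restores the slot in the same way ... */
	cur_fp = saved;			/* restore on the way out */
}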
Alan Cox
8d28bf04e2 Prevent an integer overflow in vm_pageout_page_stats() on machines with a
large number of physical pages.

PR:		126158
Submitted by:	Dmitry Tejblum
MFC after:	3 days
2008-09-21 18:01:34 +00:00
Konstantin Belousov
36b907893d Allow the d_mmap driver methods to use the cdevpriv KPI during the
verification phase of establishing a mapping.

Discussed with:	rwatson, jhb, rnoland
Tested by:	rnoland
MFC after:	3 days
2008-09-20 19:56:02 +00:00
Attilio Rao
0359a12ead Decontextualize the couplet VOP_GETATTR / VOP_SETATTR, as the passed
thread was always curthread and thus useless.

Tested by: Giovanni Trematerra <giovanni dot trematerra at gmail dot com>
2008-08-28 15:23:18 +00:00
Antoine Brodin
2f2ea10a07 Remove unused variable nosleepwithlocks.
PR:		126609
Submitted by:	Mateusz Guzik
MFC after:	1 month
X-MFC:		to stable/7 only, this variable is still used in stable/6
2008-08-23 12:40:07 +00:00
Nathan Whitehorn
f620b5bf45 Allow the MD UMA allocator to use VM routines like kmem_*().  Existing
code requires the MD allocator to be available early in the boot process,
before the VM is fully available.  This defines a new VM define
(UMA_MD_SMALL_ALLOC_NEEDS_VM) that allows an MD UMA small allocator to
become available at the same time as the default UMA allocator.
Approved by:	marcel (mentor)
2008-08-23 01:35:36 +00:00
Julian Elischer
ac957cd271 A bunch of formatting fixes brought to light by, or created by, the
Vimage commit a few days ago.
2008-08-20 01:05:56 +00:00
Kip Macy
4b34502e99 Work around differences in page allocation for initial page tables on xen
MFC after:	1 month
2008-08-17 23:40:29 +00:00