3117 Commits

attilio
c72fe43a63 Add braces. 2012-05-12 19:54:57 +00:00
attilio
e5220032ec On 32-bit architectures KTR has a bug: it cannot correctly grok
64-bit numbers, because ktr_tracepoint() casts all the passed values to
u_long, as that is what the ktr entries can handle.

However, we have to work a lot with vm_pindex_t, which is always 64 bits
wide, even on 32-bit architectures (the most notable case being i386).

Use macros to split the 64-bit printing into 32-bit chunks, which
KTR can handle correctly.

Reported and tested by:	flo
2012-05-12 19:52:59 +00:00
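The splitting described in the message above can be sketched with a pair of macros; the names here are illustrative, not necessarily those used in the commit:

```c
#include <stdint.h>

/*
 * Hypothetical sketch of the splitting macros described above.  A 64-bit
 * vm_pindex_t is split into two 32-bit halves, each of which fits in the
 * u_long slots of a KTR entry on a 32-bit architecture such as i386.
 */
#define KTR_UPPER32(v)	((uint32_t)(((uint64_t)(v)) >> 32))
#define KTR_LOWER32(v)	((uint32_t)(((uint64_t)(v)) & 0xffffffffu))
```

A tracepoint would then pass KTR_UPPER32(pindex) and KTR_LOWER32(pindex) as two separate u_long arguments instead of the raw 64-bit value.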
attilio
1b73802874 MFC 2012-05-12 19:26:15 +00:00
attilio
3bd53aaf3c - Fix a bug where lookupn can wrap up looking for the pages to scan,
returning a non correct very low address again.
- Stub out vm_lookup_foreach as it is not used
2012-05-12 19:22:57 +00:00
alc
323d529ebe Give vm_fault()'s sequential access optimization a makeover.
There are two aspects to the sequential access optimization: (1) read ahead
of pages that are expected to be accessed in the near future and (2) unmap
and cache behind of pages that are not expected to be accessed again.  This
revision changes both aspects.

The read ahead optimization is now more effective.  It starts with the same
initial read window as before, but arithmetically grows the window on
sequential page faults.  This can yield increased read bandwidth.  For
example, on one of my machines, a program using mmap() to read a file that
is several times larger than the machine's physical memory takes about 17%
less time to complete.

The unmap and cache behind optimization is now more selectively applied.
The read ahead window must grow to its maximum size before unmap and cache
behind is performed.  This significantly reduces the number of times that
pages are unmapped and cached only to be reactivated a short time later.

The unmap and cache behind optimization now clears each page's referenced
flag.  Previously, in the case of dirty pages, if the containing file was
still mapped at the time that the page daemon examined the dirty pages,
they would be reactivated.

From a stylistic standpoint, this revision also cleanly separates the
implementation of the read ahead and unmap/cache behind optimizations.

Glanced at:	kib
MFC after:	2 weeks
2012-05-10 15:16:42 +00:00
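The arithmetic window growth described above can be sketched as follows; the initial and maximum window sizes here are invented for illustration and need not match the values used in vm_fault():

```c
/*
 * Hypothetical sketch of arithmetic read-ahead window growth.  On each
 * sequential fault the window grows by a fixed increment until it reaches
 * the maximum; a non-sequential fault resets it.  Per the commit message,
 * unmap and cache behind would only be performed once the window has
 * reached AHEAD_MAX.
 */
#define AHEAD_INIT	4	/* hypothetical initial window, in pages */
#define AHEAD_MAX	32	/* hypothetical maximum window, in pages */

static int
grow_ahead_window(int ahead, int sequential)
{
	if (!sequential)
		return (AHEAD_INIT);	/* random access: reset the window */
	ahead += AHEAD_INIT;		/* arithmetic (linear) growth */
	return (ahead > AHEAD_MAX ? AHEAD_MAX : ahead);
}
```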
attilio
c7b668e647 MFC 2012-05-05 21:40:32 +00:00
nwhitehorn
d394621163 Avoid a lock order reversal in pmap_extract_and_hold() from relocking
the page. This PMAP requires an additional lock besides the PMAP lock
in pmap_extract_and_hold(), which vm_page_pa_tryrelock() did not release.

Suggested by:	kib
MFC after:	4 days
2012-04-22 17:58:30 +00:00
kib
bc03240549 When MAP_STACK mapping is created, the map entry is created only to
cover the initial stack size. For MCL_WIREFUTURE maps, the subsequent
call to vm_map_wire() to wire the whole stack region fails due to
VM_MAP_WIRE_NOHOLES flag.

Use VM_MAP_WIRE_HOLESOK to wire only the mapped part of the stack.

Reported and tested by:	Sushanth Rai <sushanth_rai yahoo com>
Reviewed by:	alc
MFC after:	1 week
2012-04-21 18:36:53 +00:00
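The difference between the two wiring flags can be shown with a toy model (this is not the real vm_map_wire() implementation; names and representation are invented):

```c
#include <stddef.h>

/*
 * Toy model of the flag behavior described above: wiring a range that
 * contains a hole fails under NOHOLES, while HOLESOK simply skips the
 * hole and wires the mapped entries.
 */
#define DEMO_WIRE_NOHOLES	0
#define DEMO_WIRE_HOLESOK	1

static int
demo_wire_range(const int *mapped, size_t n, int flags, int *wired)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (!mapped[i]) {
			if (flags == DEMO_WIRE_NOHOLES)
				return (-1);	/* hole: fail the whole call */
			continue;		/* hole: skip it */
		}
		wired[i] = 1;
	}
	return (0);
}
```

In the scenario from the commit message, the MAP_STACK entry covers only the initial stack size, so the rest of the stack region is a "hole" and the NOHOLES-style call fails.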
alc
ad5c747d1d As documented in vm_page.h, updates to the vm_page's flags no longer
require the page queues lock.

MFC after:	1 week
2012-04-21 18:26:16 +00:00
attilio
b721606b1f Fix a brain-o.
vm_page_lookup_cache() is not exported and cannot be used arbitrarily.

Reported by:	pho
2012-04-21 00:32:56 +00:00
attilio
af795c2c10 MFC 2012-04-11 14:54:06 +00:00
attilio
628004ddfb - Introduce a cache-miss optimization for consistency with other
accesses of the cache member of vm_object objects.
- Use the new vm_page_is_cached() for checks outside of the vm subsystem.

Reviewed by:	alc
MFC after:	2 weeks
X-MFC:		r234039
2012-04-09 17:05:18 +00:00
alc
e30063988b Fix mincore(2) so that it reports PG_CACHED pages as resident.
MFC after:	2 weeks
2012-04-08 18:25:12 +00:00
alc
a317fe0618 If a page belonging to a reservation is cached, then mark the reservation so
that it will be freed to the cache pool rather than the default pool.
Otherwise, the cached pages within the reservation may be recycled sooner
than necessary.

Reported by:	Andrey Zonov
2012-04-08 17:00:46 +00:00
attilio
c85d28efad Fix a typo.
Reported by:	pho
2012-04-08 11:04:08 +00:00
attilio
b07157326c MFC 2012-04-08 00:14:15 +00:00
attilio
a539d5c569 If the device pager object already contains a page with pindex == 0
(and that page is not already fictitious), the code can panic when first
inserting a fictitious page because of the overridden pindex.

Fix this by applying the same spinning pattern as vm_page_rename().

Reported by:	pho
2012-04-07 23:58:49 +00:00
attilio
4bd8f1d188 Staticize vm_page_cache_remove().
Reviewed by:	alc
2012-04-06 20:34:00 +00:00
attilio
a729154742 MFC 2012-04-06 19:59:37 +00:00
attilio
ed06f155e1 Free a cached page rather than only removing it.
vm_page_cache_remove() should only be used in a few very specific cases
(and should likely be marked static) where the caller also takes care of
the page flags appropriately; otherwise one can end up with a corrupted
page.

Reported by:	pho
2012-04-06 19:49:45 +00:00
nwhitehorn
9c79ae8eea Reduce the frequency that the PowerPC/AIM pmaps invalidate instruction
caches, by invalidating kernel icaches only when needed and not flushing
user caches for shared pages.

Suggested by:	kib
MFC after:	2 weeks
2012-04-06 16:03:38 +00:00
jhb
5829de48d9 Add new ktrace records for the start and end of VM faults. This gives
a pair of records similar to syscall entry and return that a user can
use to determine how long page faults take.  The new ktrace records are
enabled via the 'p' trace type, and are enabled in the default set of
trace points.

Reviewed by:	kib
MFC after:	2 weeks
2012-04-05 17:13:14 +00:00
attilio
3a71812376 The cached_pages_counter is not reset, even when shared pages are
present, if the obj is not a vnode.

Fix this.

Reported and tested by:	flo
2012-04-03 20:06:07 +00:00
attilio
081936d842 MFC 2012-03-30 16:54:21 +00:00
mckusick
9a7982e5a0 Keep track of the mount point associated with a special device
to enable the collection of counts of synchronous and asynchronous
reads and writes for its associated filesystem. The counts are
displayed using `mount -v'.

Ensure that buffers used for paging indicate the vnode from
which they are operating so that counts of paging I/O operations
from the filesystem are collected.

This checkin only adds the setting of the mount point for the
UFS/FFS filesystem, but it would be trivial to add the setting
and clearing of the mount point at filesystem mount/unmount
time for other filesystems too.

Reviewed by: kib
2012-03-28 20:49:11 +00:00
alc
e02fd6b842 Handle spurious page faults that may occur in no-fault sections of the
kernel.

When access restrictions are added to a page table entry, we flush the
corresponding virtual address mapping from the TLB.  In contrast, when
access restrictions are removed from a page table entry, we do not
flush the virtual address mapping from the TLB.  This is exactly as
recommended in AMD's documentation.  In effect, when access
restrictions are removed from a page table entry, AMD's MMUs will
transparently refresh a stale TLB entry.  In short, this saves us from
having to perform potentially costly TLB flushes.  In contrast,
Intel's MMUs are allowed to generate a spurious page fault based upon
the stale TLB entry.  Usually, such spurious page faults are handled
by vm_fault() without incident.  However, when we are executing
no-fault sections of the kernel, we are not allowed to execute
vm_fault().  This change introduces special-case handling for spurious
page faults that occur in no-fault sections of the kernel.

In collaboration with:	kib
Tested by:		gibbs (an earlier version)

I would also like to acknowledge Hiroki Sato's assistance in
diagnosing this problem.

MFC after:	1 week
2012-03-22 04:52:51 +00:00
jhb
102d1a0777 Bah, just revert my earlier change entirely. (Missed alc's request to do
this earlier.)

Requested by:	alc
2012-03-19 19:06:40 +00:00
attilio
7ee59361d5 MFC 2012-03-19 18:54:01 +00:00
jhb
9628d3dbf8 Fix madvise(MADV_WILLNEED) to properly handle individual mappings larger
than 4GB.  Specifically, the inlined version of 'ptoa' of the 'int'
count of pages overflowed on 64-bit platforms.  While here, change
vm_object_madvise() to accept two vm_pindex_t parameters (start and end)
rather than a (start, count) tuple to match other VM APIs as suggested
by alc@.
2012-03-19 18:47:34 +00:00
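The overflow fixed above can be demonstrated in isolation; this sketch uses unsigned arithmetic (rather than the kernel's int count) so the wraparound is well-defined for the demonstration:

```c
#include <stdint.h>

#define DEMO_PAGE_SHIFT	12	/* 4 KB pages */

/*
 * Buggy form, analogous to the bug described above: the shift is done in
 * 32-bit arithmetic, so the high bits of the byte count are lost once the
 * mapping reaches 4 GB.
 */
static uint64_t
ptoa_buggy(uint32_t npages)
{
	return ((uint32_t)(npages << DEMO_PAGE_SHIFT));
}

/* Fixed form: widen to 64 bits before shifting. */
static uint64_t
ptoa_fixed(uint32_t npages)
{
	return ((uint64_t)npages << DEMO_PAGE_SHIFT);
}
```

With 2M pages (an 8 GB mapping), the buggy form wraps to zero while the fixed form yields the full byte count.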
jhb
2e0db42a5f Alter the previous commit to use vm_size_t instead of vm_pindex_t.
vm_pindex_t is not a count of pages per se; it is more like vm_ooffset_t,
but a page index instead of a byte offset.
2012-03-19 18:43:44 +00:00
kib
2963c3c979 In vm_object_page_clean(), do not clear the OBJ_MIGHTBEDIRTY object flag
if the filesystem performed a short write and we are skipping the page
due to this.

Propagate the write error from the pager back to the callers of
vm_pageout_flush().  Report the failure to write a page from the
requested range as the FALSE return value from vm_object_page_clean(),
and propagate it back to msync(2) to return EIO to usermode.

While there, convert the clearobjflags variable in the
vm_object_page_clean() and arguments of the helper functions to
boolean.

PR:	kern/165927
Reviewed by:	alc
MFC after:	2 weeks
2012-03-17 23:00:32 +00:00
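The error propagation described above can be sketched with a toy model; the names and return conventions here are illustrative, not the kernel's:

```c
/*
 * Toy sketch of the propagation described above: if flushing any page in
 * the requested range fails, the clean step reports failure (FALSE), and
 * the msync()-level caller maps that to EIO.
 */
#define DEMO_EIO	5

static int
demo_clean_range(const int *flush_ok, int n)
{
	int i, all_ok = 1;

	for (i = 0; i < n; i++)
		if (!flush_ok[i])
			all_ok = 0;	/* remember the failure, keep going */
	return (all_ok);
}

static int
demo_msync(const int *flush_ok, int n)
{
	return (demo_clean_range(flush_ok, n) ? 0 : DEMO_EIO);
}
```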
attilio
5725f63f57 MFC 2012-03-16 15:46:44 +00:00
attilio
f9319cf885 Fix the nodes allocator on architectures without direct-mapping:
- Fix bugs in the free path where the pages were not unwired and the
  relevant locking wasn't acquired.
- Introduce the rnode_map, a submap of kernel_map, from which to allocate.
  The reason is that, on architectures without direct-mapping,
  kmem_alloc*() will try to insert the newly created mapping while
  holding the vm_object lock, introducing a LOR or lock recursion.
  rnode_map is, however, a leaf submap, thus there cannot be any
  deadlock.
  Notes: Size the submap to be, by default, around 64 MB, and
  decrease the size of the nodes, as the allocations will be much smaller
  (and when the compacting code in the vm_radix is implemented this
  will aim for much less space to be used).  However, note that the
  size of the submap can be changed at boot time via the
  hw.rnode_map_scale scaling factor.
- Use uma_zone_set_max() covering the size of the submap.

Tested by:	flo
2012-03-16 15:41:07 +00:00
jhb
1e515523f3 Pedantic nit: use vm_pindex_t instead of long for a count of pages. 2012-03-14 20:57:48 +00:00
flo
83ea608b00 IFC at r232948
Approved by:	attilio
2012-03-14 00:41:37 +00:00
jhb
19feaba08b Add KTR_VFS traces to track modifications to a vnode's writecount. 2012-03-08 20:27:20 +00:00
attilio
86fae10111 MFC 2012-03-07 11:18:38 +00:00
attilio
9e63566650 Fix a compile-time bug by adding a check just after the struct
definition.
2012-03-06 23:37:53 +00:00
alc
9181f45b9f Eliminate stale incorrect ARGSUSED comments.
Submitted by:	bde
2012-03-02 17:33:51 +00:00
attilio
df89a6a2db - Exclude vm_radix_shrink() from the interface but retain the code
still as it can be useful.
- Make most of the interface private, as it is unnecessarily public right
  now.  This will help in letting the nodes change with the architecture
  while still avoiding namespace pollution.
2012-03-01 00:54:08 +00:00
attilio
3c5fbc2c09 MFC 2012-03-01 00:27:51 +00:00
alc
54c1d2e89a Simplify kmem_alloc() by eliminating code that existed on account of
external pagers in Mach.  FreeBSD doesn't implement external pagers.
Moreover, it doesn't page out the kernel object.  So, the reasons for
having this code don't hold.

Reviewed by:	kib
MFC after:	6 weeks
2012-02-29 05:41:29 +00:00
alc
867c58a8cb Simplify vm_mmap()'s control flow.
Add a comment describing what vm_mmap_to_errno() does.

Reviewed by:	kib
MFC after:	3 weeks
X-MFC after:	r232071
2012-02-25 21:06:39 +00:00
attilio
d4c43cbb8b MFC 2012-02-25 18:24:45 +00:00
alc
7d737c65b5 Simplify vmspace_fork()'s control flow by copying immutable data before
the vm map locks are acquired.  Also, eliminate redundant initialization
of the new vm map's timestamp.

Reviewed by:	kib
MFC after:	3 weeks
2012-02-25 17:49:59 +00:00
kib
8c39852ba5 Place the if() at the right location, to activate the v_writecount
accounting for shared writeable mappings for all filesystems, not only
for the bypass layers.

Submitted by:	alc
Pointy hat to:	kib
MFC after:	20 days
2012-02-24 10:41:58 +00:00
kib
f315a59476 Account the writeable shared mappings backed by file in the vnode
v_writecount.  Keep the amount of the virtual address space used by
the mappings in the new vm_object un_pager.vnp.writemappings
counter. The vnode v_writecount is incremented when writemappings becomes
non-zero, and decremented when writemappings is returned to
zero.

Writeable shared vnode-backed mappings are accounted for in vm_mmap(),
and vm_map_insert() is instructed to set MAP_ENTRY_VN_WRITECNT flag on
the created map entry.  During deferred map entry deallocation,
vm_map_process_deferred() checks for MAP_ENTRY_VN_WRITECNT and
decrements writemappings for the vm object.

Now, the writeable mount cannot be demoted to read-only while
writeable shared mappings of the vnodes from the mount point
exist. Also, execve(2) fails for such files with ETXTBUSY, as it
should be.

Noted by:	tegge
Reviewed by:	tegge (long time ago, early version), alc
Tested by:	pho
MFC after:	3 weeks
2012-02-23 21:07:16 +00:00
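The accounting rule described above, i.e. bumping v_writecount only on the 0 -> non-zero transition of writemappings and dropping it on the non-zero -> 0 transition, can be sketched with invented structure and function names:

```c
#include <stddef.h>

/*
 * Sketch of the transition rule described above; the struct layouts and
 * the helper name are hypothetical, not the kernel's.
 */
struct demo_vnode { int v_writecount; };
struct demo_object { struct demo_vnode *vp; long writemappings; };

static void
demo_writemappings_adjust(struct demo_object *obj, long delta)
{
	long old = obj->writemappings;

	obj->writemappings += delta;
	if (old == 0 && obj->writemappings != 0)
		obj->vp->v_writecount++;	/* first writable mapping appears */
	else if (old != 0 && obj->writemappings == 0)
		obj->vp->v_writecount--;	/* last writable mapping goes away */
}
```

Note that intermediate adjustments that leave writemappings non-zero do not touch v_writecount; only the boundary transitions do.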
kib
ac9e4627bb Remove wrong comment.
Discussed with:	alc
MFC after:	3 days
2012-02-22 20:01:38 +00:00
alc
506ef17e7f When vm_mmap() is used to map a vm object into a kernel vm_map, it
makes no sense to check the size of the kernel vm_map against the
user-level resource limits for the calling process.

Reviewed by:	kib
2012-02-16 06:45:51 +00:00
attilio
d58aaf5046 MFC 2012-02-14 19:58:00 +00:00