Don't set PG_WRITEABLE in init_pte_prot() (and thus pmap_enter()) unless
the page is managed.

Don't set the machine-independent layer's dirty field for the page being
mapped in init_pte_prot().  (The dirty field is only supposed to be set
when a mapping is removed or write-protected and the page was managed and
modified.)

Determine whether or not to perform dirty bit emulation based on whether
or not the page is managed, i.e., pageable, not based on whether the page
is being mapped into the kernel address space.  Nearly all of the kernel
address space consists of unmanaged pages, so this has negligible impact on
the overhead of dirty bit emulation for the kernel address space.  However,
there can also exist unmanaged pages in the user address space.  Previously,
dirty bit emulation was unnecessarily performed on these pages.

Tested by:	jchandra@
Alan Cox 2010-06-06 06:07:44 +00:00
parent 2c6b25b4cd
commit c99b7cc5c9
Notes: svn2git 2020-12-20 02:59:44 +00:00
svn path=/head/; revision=208866


@@ -3072,26 +3072,20 @@ page_is_managed(vm_offset_t pa)
 static int
 init_pte_prot(vm_offset_t va, vm_page_t m, vm_prot_t prot)
 {
-	int rw = 0;
+	int rw;
 
 	if (!(prot & VM_PROT_WRITE))
 		rw = PTE_ROPAGE;
-	else {
-		if (va >= VM_MIN_KERNEL_ADDRESS) {
-			/*
-			 * Don't bother to trap on kernel writes, just
-			 * record page as dirty.
-			 */
-			rw = PTE_RWPAGE;
-			vm_page_dirty(m);
-		} else if ((m->md.pv_flags & PV_TABLE_MOD) ||
-		    m->dirty == VM_PAGE_BITS_ALL)
+	else if ((m->flags & (PG_FICTITIOUS | PG_UNMANAGED)) == 0) {
+		if ((m->md.pv_flags & PV_TABLE_MOD) != 0)
 			rw = PTE_RWPAGE;
 		else
 			rw = PTE_CWPAGE;
 		vm_page_flag_set(m, PG_WRITEABLE);
-	}
-	return rw;
+	} else
+		/* Needn't emulate a modified bit for unmanaged pages. */
+		rw = PTE_RWPAGE;
+	return (rw);
 }
 
 /*