Set VPO_UNMANAGED on the freed page when insertion of the page into the
object queue failed, to satisfy the assertion.
MFC r283163:
Fix the grammar in the comment, to record the right commit message
for r283162.
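A minimal sketch of the fix described above, as a fragment of the
allocation path that frees the page when the insert fails (the
surrounding code and the return value are illustrative assumptions;
vm_page_insert(), vm_page_free(), and VPO_UNMANAGED are the stock
vm_page(9) interfaces):

    if (vm_page_insert(m, object, pindex) != 0) {
        /*
         * Insertion into the object queue failed.  Mark the page
         * unmanaged again so that the assertion in the free path is
         * satisfied before the page is handed back.
         */
        m->oflags |= VPO_UNMANAGED;
        vm_page_free(m);
        return (NULL);        /* illustrative error return */
    }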
users, myself included. The original code is likely papering over a
larger bug that needs to be explored, but for now get things back to
a working state.
Obtained from: Netflix, Inc.
The swap device is still reported as enabled, and the system may still
crash later if some swapped-out kernel pages were lost with the device,
but at least GEOM and CAM can now release the lost disk, allowing it to
be reconnected.
Update kernel inclusions of capability.h to use capsicum.h instead; some
further refinement is required as some device drivers intended to be
portable over FreeBSD versions rely on __FreeBSD_version to decide whether
to include capability.h.
Sponsored by: Google, Inc.
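For drivers meant to build across FreeBSD versions, the pattern the
commit alludes to looks roughly like the following; the exact
__FreeBSD_version cutoff is an assumption and must match the revision
at which <sys/capsicum.h> appeared on the branches being targeted:

#include <sys/param.h>        /* provides __FreeBSD_version */

#if __FreeBSD_version >= 1100000    /* assumed cutoff; verify per branch */
#include <sys/capsicum.h>
#else
#include <sys/capability.h>
#endif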
vmspace_release() may sleep if the last reference is being released,
so add a WITNESS_WARN() to catch cases where it is called with a
non-sleepable lock held.
MFC after: 1 month
Sponsored by: Sandvine Inc.
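A minimal sketch of the check being added, assuming it lives at the top
of vmspace_release() itself (the function body is elided; WITNESS_WARN()
and the WARN_GIANTOK/WARN_SLEEPOK flags are the standard witness(4)
interface):

void
vmspace_release(struct vmspace *vm)
{
    /*
     * Dropping the last reference may sleep while the map and pmap are
     * torn down, so complain if a non-sleepable lock is held.
     */
    WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
        "releasing vmspace %p", vm);

    /* ... existing reference drop and teardown ... */
}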
By the time that pmap_init() runs, vm_phys_segs[] has been initialized.
Obtaining the end of memory address from vm_phys_segs[] is a little
easier than obtaining it from phys_avail[].
Enable the use of VM_PHYSSEG_SPARSE on amd64 and i386, making it the
default on i386 PAE. (The use of VM_PHYSSEG_SPARSE on i386 PAE saves
us some precious kernel virtual address space that would have been
wasted on unused vm_page structures.)
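A sketch of the simplification in pmap_init(); vm_phys_segs[] and
vm_phys_nsegs are the vm_phys(9) globals, the segments are sorted by
address, and the use shown for the result is illustrative:

    vm_paddr_t end_pa;

    /*
     * vm_phys_segs[] is populated before pmap_init() runs and is sorted
     * by address, so the end of the last segment is the highest physical
     * address of interest; previously this was derived by walking
     * phys_avail[] for the largest entry.
     */
    end_pa = vm_phys_segs[vm_phys_nsegs - 1].end;
    pv_npg = howmany(end_pa, NBPDR);    /* e.g., sizing the pv head table */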
Change the UMA mutex into a rwlock
Acquire the lock in read mode when it is only needed to ensure the
stability of the keg list. The UMA lock may be held for a long time
(relatively speaking) in uma_reclaim() on machines with lots of
zones/kegs. If uma_timeout() fires during that period, subsequent
callouts on that CPU may be significantly delayed.
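A sketch of the resulting split, with uma_rwlock as the lock and a keg
list walk standing in for uma_timeout()'s work (the per-keg call is
illustrative; rw_rlock()/rw_wlock() are the stock rwlock(9) primitives):

    /* Readers that only need the keg list to stay stable: */
    rw_rlock(&uma_rwlock);
    LIST_FOREACH(keg, &uma_kegs, uk_link)
        keg_timeout(keg);        /* illustrative per-keg work */
    rw_runlock(&uma_rwlock);

    /* Paths that add or remove kegs still take the lock exclusively: */
    rw_wlock(&uma_rwlock);
    LIST_INSERT_HEAD(&uma_kegs, keg, uk_link);
    rw_wunlock(&uma_rwlock);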
Refactor ZFS ARC reclaim logic to be more VM cooperative
MFC r270861:
Ensure that ZFS ARC free memory checks include cached pages
MFC r272483:
Refactor ZFS ARC reclaim checks and limits
Sponsored by: Multiplay
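A sketch of the free-memory check described by r270861; the counter and
threshold names below approximate the code of that era and are
assumptions, the point being that cache-queue pages are counted
alongside free pages:

static int
arc_reclaim_needed(void)    /* sketch; the real function has more checks */
{
    /*
     * Cache-queue pages can be reclaimed without I/O, so treating them
     * as unavailable made the ARC shrink far more than necessary.
     */
    if (cnt.v_free_count + cnt.v_cache_count < zfs_arc_free_target)
        return (1);
    return (0);
}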
Remove stray uma_mtx lock/unlock in zone_drain_wait()
Callers of zone_drain_wait(M_WAITOK) do not hold (and are not required
to hold) the uma_mtx, yet the code would attempt to unlock and relock
the mutex if it had to sleep because the zone was already draining.
The M_NOWAIT callers may hold the uma_mtx, but we do not sleep in that
case.
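A sketch of the sleep loop after the stray pair is removed; the names
follow uma_core.c of that era (uz_lockptr in particular is an
assumption) and the surrounding code is abridged:

    ZONE_LOCK(zone);
    while (zone->uz_flags & UMA_ZFLAG_DRAINING) {
        if (waitok == M_NOWAIT)
            goto out;        /* M_NOWAIT callers never sleep */
        /*
         * No mtx_unlock(&uma_mtx)/mtx_lock(&uma_mtx) around the sleep:
         * M_WAITOK callers do not hold uma_mtx at all.
         */
        msleep(zone, zone->uz_lockptr, PVM, "zonedrain", 1);
    }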
Back in the days when the kernel was single threaded, testing
"vm_paging_target() > 0" was a reasonable way of determining if the
inactive queue scan met its target. However, now that other threads
can be allocating pages while the inactive queue scan is running, it's
an unreliable method. The effect of it being unreliable is that we
can start swapping out processes when we didn't intend to.
This issue has existed since the kernel became multithreaded, but the
changes to the inactive queue target in 10.0-RELEASE have made its
effects visible.
This change introduces a more direct method for determining if the
inactive queue scan met its target that is not affected by the actions
of other threads.
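A sketch of the more direct method; the helper names are taken from
vm_pageout.c of that era and the reaction shown is illustrative:

    /*
     * Compute the shortage once and let the scan itself account for the
     * pages it frees.
     */
    page_shortage = vm_paging_target() + addl_page_shortage;
    /* ... inactive queue scan runs, decrementing page_shortage ... */

    /*
     * Decide whether to start swapping out processes from what this
     * scan accomplished, not from a fresh vm_paging_target() sample that
     * concurrently allocating threads can push back above zero.
     */
    if (vm_swap_enabled && page_shortage > 0)
        vm_req_vmdaemon(VM_SWAP_NORMAL);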
wired region. Rework the handling of unwire to do it in batch, both at
the pmap and object levels.
All commits below are by alc.
MFC r268327:
Introduce pmap_unwire().
MFC r268591:
Implement pmap_unwire() for powerpc.
MFC r268776:
Implement pmap_unwire() for arm.
MFC r268806:
pmap_unwire(9) man page.
MFC r269134:
When unwiring a region of an address space, do not assume that the
underlying physical pages are mapped by the pmap. This fixes a leak
of the wired pages on the unwiring of the region mapped with no access
allowed.
MFC r269339:
In the implementation of the new function pmap_unwire(), the call to
MOEA64_PVO_TO_PTE() must be performed before any changes are made to the
PVO. Otherwise, MOEA64_PVO_TO_PTE() will panic.
MFC r269365:
Correct a long-standing problem in moea{,64}_pvo_enter() that was revealed
by the combination of r268591 and r269134: When we attempt to add the
wired attribute to an existing mapping, moea{,64}_pvo_enter() do nothing.
(They only set the wired attribute on newly created mappings.)
MFC r269433:
Handle wiring failures in vm_map_wire() with the new functions
pmap_unwire() and vm_object_unwire().
Retire vm_fault_{un,}wire(), since they are no longer used.
MFC r269438:
Rewrite a loop in vm_map_wire() so that gcc doesn't think that the variable
"rv" is uninitialized.
MFC r269485:
Retire pmap_change_wiring().
Reviewed by: alc
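A sketch of the batched unwire at the two levels (the call site is
illustrative; pmap_unwire(9) clears the wired attribute over the whole
range in one pmap traversal, and vm_object_unwire() then drops the page
wirings along the object chain in one pass):

    pmap_unwire(map->pmap, entry->start, entry->end);
    vm_object_unwire(entry->object.vm_object, entry->offset,
        entry->end - entry->start, PQ_ACTIVE);
    entry->wired_count = 0;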
Implement 'fast path' for the vm page fault handler.
MFC r270387 (by alc):
Relax one of the conditions for mapping a page on the fast path.
Approved by: re (gjb)
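Conceptually, the fast path does roughly the following before the
regular fault machinery is entered; this is a simplified sketch, not the
committed helper, and it omits the additional conditions the real code
checks:

    /*
     * If the page is already resident and fully valid in the top-level
     * object, map it under the object lock and return, skipping page
     * allocation, busying, and I/O.
     */
    m = vm_page_lookup(fs.first_object, fs.first_pindex);
    if (m != NULL && m->valid == VM_PAGE_BITS_ALL) {
        rv = pmap_enter(fs.map->pmap, vaddr, m, prot,
            fault_type | PMAP_ENTER_NOSLEEP, 0);
        if (rv == KERN_SUCCESS)
            return (KERN_SUCCESS);
    }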
by flag). The ia64 pmap.c changes are a direct commit, since ia64 has
been removed from head.
MFC r269368 (by alc):
Retire PVO_EXECUTABLE.
MFC r269728:
Change pmap_enter(9) interface to take flags parameter and superpage
mapping size (currently unused).
MFC r269759 (by alc):
Update the text of a KASSERT() to reflect the changes in r269728.
MFC r269822 (by alc):
Change {_,}pmap_allocpte() so that they look for the flag
PMAP_ENTER_NOSLEEP instead of M_NOWAIT/M_WAITOK when deciding whether
to sleep on page table page allocation.
MFC r270151 (by alc):
Replace KASSERT that no PV list locks are held with a conditional
unlock.
Reviewed by: alc
Approved by: re (gjb)
Sponsored by: The FreeBSD Foundation
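The resulting interface looks roughly like this; the prototype reflects
the pmap_enter(9) change and the example caller is illustrative:

int pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot,
        u_int flags, int8_t psind);

    /* A caller that must not sleep on page table page allocation: */
    rv = pmap_enter(pmap, va, m, prot, VM_PROT_READ | PMAP_ENTER_NOSLEEP, 0);
    if (rv == KERN_RESOURCE_SHORTAGE) {
        /* Retry later, or fall back to a path that may sleep. */
    }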
Add OBJ_TMPFS_NODE flag.
MFC r268616:
Set the OBJ_TMPFS_NODE flag for the vm_object of a VREG tmpfs node.
MFC r269053:
Correct the assertion: a tmpfs vm object is always at the bottom of
the shadow chain.
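A sketch of where the flag is applied and what the corrected assertion
expresses (the allocation call and the assertion text are illustrative;
the point is that the object backing a VREG tmpfs node is tagged and
never has a backing object beneath it):

    /* When creating the swap object that backs a VREG tmpfs node: */
    obj = vm_pager_allocate(OBJT_SWAP, NULL, size, VM_PROT_DEFAULT, 0, cred);
    VM_OBJECT_WLOCK(obj);
    obj->flags |= OBJ_TMPFS_NODE;
    VM_OBJECT_WUNLOCK(obj);

    /* A tmpfs object is always the bottom of its shadow chain: */
    KASSERT(obj->backing_object == NULL,
        ("tmpfs object %p has a backing object", obj));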
Assert that the new entry is inserted into the right location in the
map entries list, and that it does not overlap with the previous and
next entries.
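A sketch of the assertions being added (the field names follow the
vm_map entry layout; the panic messages are illustrative):

    KASSERT(entry->prev->end <= entry->start,
        ("new entry %p overlaps previous entry %p", entry, entry->prev));
    KASSERT(entry->end <= entry->next->start,
        ("new entry %p overlaps next entry %p", entry, entry->next));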