blocking vm map entry and object coalescing for the calling process.
However, there is no reason that mlockall(MCL_FUTURE) should block
such coalescing. This change enables it.
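For reference, a minimal example of the call whose semantics are at
issue (standard mlockall(2) usage):

    #include <sys/mman.h>
    #include <err.h>

    int
    main(void)
    {
        /* MCL_FUTURE wires mappings created after this call; with
         * this change it no longer disables entry coalescing. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
            err(1, "mlockall");
        return (0);
    }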
Reviewed by: kib, markj
Tested by: pho
MFC after: 6 weeks
Differential Revision: https://reviews.freebsd.org/D16413
Due to the way rtld creates mappings for shared objects, each dso
causes the unmapping of at least three guard map entries. For
instance, under a buildworld load, this change reduces the number of
pmap_remove() calls by 1/5.
Profiled by: alc
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D16148
vm_map_madvise(). Previously, vm_map_madvise() used a traditional Unix-
style "return (0);" to indicate success in the common case, but Mach-
style return values in the edge cases. Since KERN_SUCCESS equals zero,
the only problem with this inconsistency was stylistic. vm_map_madvise()
has exactly two callers in the entire source tree, and only one of them
cares about the return value. That caller, kern_madvise(), can be
simplified if vm_map_madvise() consistently uses Unix-style return
values.
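For illustration, a minimal standalone model of the convention change
(the bodies are invented stand-ins, not the kernel code):

    #include <errno.h>

    /* Unix-style: return 0 or an errno value in every case. */
    static int
    madvise_model(int behav_valid)
    {
        return (behav_valid ? 0 : EINVAL);
    }

    /* The caller can pass the result through unchanged, with no need
     * to translate Mach-style KERN_* codes into errno values. */
    static int
    kern_madvise_model(int behav_valid)
    {
        return (madvise_model(behav_valid));
    }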
Since vm_map_madvise() uses the variable modify_map as a Boolean, make it
one.
Eliminate a redundant error check from kern_madvise(). Add a comment
explaining where the check is performed.
Explicitly note that exec_release_args_kva() doesn't care about
vm_map_madvise()'s return value. Since MADV_FREE is passed as the
behavior, the return value will always be zero.
Reviewed by: kib, markj
MFC after: 7 days
There are out-of-tree consumers of vm_map_min() and vm_map_max(), and
I believe there are consumers of vm_map_pmap(), although the latter
is arguably less in need of a KBI-stable interface. For the
consumers' benefit, make modules using this KPI not dependent on the
struct vm_map layout.
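A sketch of the pattern, assuming the accessors were previously
layout-exposing header inlines (declarations illustrative;
vm_map_pmap() is treated likewise):

    /* Before: a header inline such as
     *     #define vm_map_max(map) ((map)->max_offset)
     * compiles the struct vm_map layout into every module.
     * After: out-of-line accessors; modules bind to the symbol and
     * keep working across changes to the structure layout. */
    struct vm_map;      /* opaque to consumers */
    vm_offset_t vm_map_max(const struct vm_map *map);
    vm_offset_t vm_map_min(const struct vm_map *map);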
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D14902
global to per-domain state. Protect reservations with the free lock
from the domain that they belong to. Refactor to make vm domains more
of a first-class object.
Reviewed by: markj, kib, gallatin
Tested by: pho
Sponsored by: Netflix, Dell/EMC Isilon
Differential Revision: https://reviews.freebsd.org/D14000
In several places, the entry start and end fields are checked after
excluding the possibility that the entry is map->header. By assigning
max and min values to the start and end fields of map->header in
vm_map_init, the explicit map->header checks become unnecessary.
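A sketch of the sentinel assignment, as it would appear in
vm_map_init() (simplified):

    /* The header entry doubles as a sentinel: its end is the map
     * minimum and its start is the map maximum, so range comparisons
     * against a neighboring entry also hold when that neighbor is
     * map->header itself. */
    map->header.end = min;      /* lower sentinel */
    map->header.start = max;    /* upper sentinel */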
Submitted by: Doug Moore <dougm@rice.edu>
Reviewed by: alc, kib, markj (previous version)
Tested by: pho (previous version)
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D13735
for finding aligned free space in the given map. With this change, we
always return KERN_NO_SPACE when we fail to find free space, whereas
previously we might return KERN_INVALID_ADDRESS. Also, with this
change, we explicitly check for address wrap, rather than relying
upon the map's min and max addresses to establish sentinel-like
regions.
This refactoring was inspired by the problem that we addressed in r326098.
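For illustration, an explicit wrap check of the kind described
(simplified; roundup2() is the standard kernel alignment macro):

    /* Align the candidate address, then detect overflow directly
     * instead of relying on the map bounds as sentinel regions. */
    aligned_addr = roundup2(addr, alignment);
    if (aligned_addr < addr || aligned_addr + length < aligned_addr)
        return (KERN_NO_SPACE);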
Reviewed by: kib
Tested by: pho
Discussed with: markj
MFC after: 3 weeks
Differential Revision: https://reviews.freebsd.org/D13346
The arena argument to kmem_*() is now only used in an assert. A follow-up
commit will remove the argument altogether before we freeze the API for the
next release.
This replaces the hard limit on kmem size with a soft limit imposed
by UMA. When the soft limit is exceeded, we periodically wake up the
UMA reclaim thread to attempt to shrink KVA. On 32-bit architectures
this should behave much more gracefully as we exhaust KVA. On 64-bit
architectures the limits are likely never hit.
Reviewed by: markj, kib (some objections)
Discussed with: alc
Tested by: pho
Sponsored by: Netflix / Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D13187
On a KERN_NO_SPACE error, as it is returned now, vm_map_find()
continues the loop searching for a suitable range for the requested
mapping with the specified alignment. Since vm_map_findspace()
successfully finds the same place, the loop never ends.
The errors returned from vm_map_stack() now exactly mirror the
behavior of vm_map_insert(), as suggested by Alan.
Reported by: Arto Pekkanen <aksyom@gmail.com>
PR: 223732
Reviewed by: alc, markj
Discussed with: jhb
Sponsored by: The FreeBSD Foundation
MFC after: 3 days
Differential revision: https://reviews.freebsd.org/D13186
second scan of the address space with find_space = VMFS_ANY_SPACE is
performed. Previously, vm_map_find() released and reacquired the map lock
between the first and second scans. However, there is no compelling
reason to do so. This revision modifies vm_map_find() to retain the map
lock.
Reviewed by: jhb, kib, markj
MFC after: 1 week
X-Differential Revision: https://reviews.freebsd.org/D13155
Mainly focus on files that use the BSD 3-Clause license.
The Software Package Data Exchange (SPDX) group provides a
specification to make it easier for automated tools to detect and
summarize well-known open source licenses. We are gradually adopting
the specification, noting that the tags are considered only advisory
and do not, in any way, supersede or replace the license texts.
Special thanks to Wind River for providing access to "The Duke of
Highlander" tool: an older (2014) run over the FreeBSD tree was
useful as a starting point.
The commit message for r321173 incorrectly stated that the change
disables automatic stack growth from the AIO daemon contexts, with
the explanation that this prevents applying the wrong resource
limits. Fix this by actually disabling the growth.
Noted by: alc
Reviewed by: alc, jhb
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
check blocking grow from other processes' accesses.
A debugger may access the stack grow area with ptrace(2). In this
case, the real state of the process is to not have the stack grown,
which provides more accurate inspection. The technical reason to
avoid the grow is to avoid applying the wrong process's (the
debugger's) stack limit.
This change also has the consequence of turning aio workers' accesses
past the bottom of stacks into EFAULT; arguably, such a situation is
a programmer's mistake.
Reported by: jhb
Discussed with: alc, jhb
Sponsored by: The FreeBSD Foundation
MFC after: 3 days
Reported by: antoine
Tested by: Stefan Ehmann <shoesoft@gmx.net>,
Jan Kokemueller <jan.kokemueller@gmail.com>
PR: 220493
Sponsored by: The FreeBSD Foundation
MFC after: 3 days
gap entry in the vm map being smaller than the sysctl-derived stack guard
size. Otherwise, the value of max_grow can suffer from overflow, and the
roundup(grow_amount, sgrowsiz) will not be properly capped, resulting in
an assertion failure.
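The shape of the fix, as a sketch (names follow the description
above; simplified):

    /* If the gap entry were smaller than the guard, the subtraction
     * below would wrap, leaving max_grow huge and the subsequent
     * roundup(grow_amount, sgrowsiz) uncapped. Check first. */
    guard = stack_guard_page * PAGE_SIZE;
    if (gap_entry->end - gap_entry->start < guard)
        return (KERN_NO_SPACE);
    max_grow = gap_entry->end - gap_entry->start - guard;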
In collaboration with: kib
MFC after: 3 days
recycles the current vm space. Otherwise, an mlockall(MCL_FUTURE) could
still be in effect on the process after an execve(2), which violates the
specification for mlockall(2).
It's pointless for vm_map_stack() to check the MEMLOCK limit. It will
never be asked to wire the stack. Moreover, it doesn't even implement
wiring of the stack.
Reviewed by: kib, markj
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D11421
a hint.
Right now, for non-fixed mmap(2) calls, addr is de facto interpreted
as the absolute minimal address of the range where the mapping is
created. The VA allocator only allocates in the range [addr,
VM_MAXUSER_ADDRESS]. This is too restrictive: the mmap(2) call might
unduly fail if there are no free addresses above addr but plenty of
usable space below it.
Lift this implementation limitation by allocating VA in two passes.
First, try to allocate above addr, as before. If that fails, perform
a second pass with a less restrictive constraint on the start of the
allocation, by specifying the minimal allocation address as the
maximal bss end, if that limit is below addr.
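A sketch of the two-pass allocation (argument lists abbreviated;
variable names illustrative):

    /* Pass 1: as before, search only above the hint. */
    rv = vm_map_find(map, /* ... */, &addr, length,
        VM_MAXUSER_ADDRESS, /* ... */);
    /* Pass 2: on failure, relax the lower bound down to the maximal
     * bss end when that limit lies below the original hint. */
    if (rv != KERN_SUCCESS && min_addr < orig_addr) {
        addr = min_addr;
        rv = vm_map_find(map, /* ... */, &addr, length,
            VM_MAXUSER_ADDRESS, /* ... */);
    }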
One important case where this change makes a difference is the
allocation of stacks for new threads in libthr. Under some
configuration conditions, libthr tries to hint the kernel to reuse
the main thread's stack grow area for the new stacks. By design, this
cannot work now that the grow area is converted into a stack, and
there is no unallocated VA above the main stack. Interpreting the
requested stack base address as a hint provides compatibility with
old libthr and with (mis-)configured current libthr.
Reviewed by: alc
Tested by: dim (previous version)
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
the requested protection.
The syscall returns success without changing the protection of the
guard. This is consistent with the current mprotect(2) behavior on
unmapped ranges. More importantly, the calls performed by libc and
libthr to allow execution of stacks, if requested by the loaded ELF
objects, make the expected change instead of failing on the grow
space guard.
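For reference, the libc/libthr call pattern in question (standard
mprotect(2) usage; variables illustrative):

    /* Allow stack execution when the loaded ELF object requests it.
     * With this change the call succeeds, leaving the grow space
     * guard's protection untouched. */
    if (mprotect(stack_addr, stack_len,
        PROT_READ | PROT_WRITE | PROT_EXEC) == -1)
        err(1, "mprotect");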
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
If mmap(2) is called with the MAP_STACK flag and a size less than or
equal to the initial stack mapping size plus the guard, the
calculation of the mapping layout created a zero-sized guard. The
attempt to create such an entry failed in vm_map_insert(), causing
the whole mmap(2) call to fail.
Fix it by adjusting the initial mapping size to leave space for a
non-empty guard. Reject MAP_STACK requests whose size is less than
or equal to the configured guard size.
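For reference, the request shape being fixed (MAP_STACK implies
MAP_ANON; size variable illustrative):

    /* After the fix, a size less than or equal to the guard size is
     * rejected up front instead of producing a zero-sized guard. */
    void *stk = mmap(NULL, size, PROT_READ | PROT_WRITE,
        MAP_STACK, -1, 0);
    if (stk == MAP_FAILED)
        err(1, "mmap");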
Reported and tested by: Manfred Antar <null@pozo.com>
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
A guard, requested by the MAP_GUARD mmap(2) flag, prevents the reuse
of the allocated address space, but does not allow instantiation of
the pages in the range. It is useful for more explicit support of the
usual two-stage reserve-then-commit allocators, since it prevents
accidental instantiation of the mapping, e.g. by mprotect(2).
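A minimal example of the two-stage pattern (sizes illustrative):

    #include <sys/mman.h>
    #include <err.h>

    int
    main(void)
    {
        size_t resv_sz = 16 * 1024 * 1024;
        size_t commit_sz = 1024 * 1024;
        void *resv;

        /* Stage 1: reserve address space; no pages can be
         * instantiated in the range, not even by mprotect(2). */
        resv = mmap(NULL, resv_sz, PROT_NONE, MAP_GUARD, -1, 0);
        if (resv == MAP_FAILED)
            err(1, "mmap(MAP_GUARD)");

        /* Stage 2: explicitly commit part of the reservation. */
        if (mmap(resv, commit_sz, PROT_READ | PROT_WRITE,
            MAP_FIXED | MAP_ANON, -1, 0) == MAP_FAILED)
            err(1, "mmap(MAP_FIXED)");
        return (0);
    }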
Use guards to reimplement the stack grow code. Explicitly track the
stack grow area with the guard, including the stack guard page. On
stack grow, a trivial shift of the guard map entry and stack map
entry limits performs the expansion. Move the code that detects stack
grow and calls vm_map_growstack() from vm_fault() into
vm_map_lookup().
As a result, it is impossible for a random mapping to occur in the
stack grow area, or to overlap the stack guard page.
Enable stack guard page by default.
Reviewed by: alc, markj
Man page update reviewed by: alc, bjk, emaste, markj, pho
Tested by: pho, Qualys
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D11306 (man pages)
The issue is caught by the "vm_map_wire: alien wire" KASSERT at the
end of vm_map_wire(). We currently check for the
MAP_ENTRY_WIRE_SKIPPED flag before ensuring that the wiring_thread is
curthread. For HOLESOK wiring, this means that we might see a
WIRE_SKIPPED entry from a different wiring.
Fix this by only checking WIRE_SKIPPED if the entry was put
IN_TRANSITION by us. Also fix a typo in the comment explaining the
situation.
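A sketch of the corrected check (flag and field names as described;
surrounding loop omitted):

    /* Only consult WIRE_SKIPPED on entries that this thread itself
     * put IN_TRANSITION; entries in transition for another wiring
     * are simply skipped, which is legitimate for HOLESOK. */
    if ((entry->eflags & MAP_ENTRY_IN_TRANSITION) == 0 ||
        entry->wiring_thread != curthread) {
        KASSERT((flags & VM_MAP_WIRE_HOLESOK) != 0,
            ("vm_map_wire: alien wire %p", entry));
        continue;
    }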
Reported and tested by: pho
Reviewed by: alc
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
instantiated.
Calling pmap_copy() on non-faulted anonymous memory entries is useless.
Noted and reviewed by: alc
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
VM_MAP_WIRE_SYSTEM mode when wiring the newly grown stack.
System maps do not create auto-grown stacks. Any stack we handle,
even for P_SYSTEM, must be in the user address space. P_SYSTEM
processes with mapped user space are either init(8) or an aio worker
attached to another user process with an aio buffer pointing into the
stack area. In either case, the VM_MAP_WIRE_USER mode should be used.
Noted and reviewed by: alc
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
We are otherwise susceptible to a race with a concurrent vm_map_wire(),
which may drop the map lock to fault pages into the object chain. In
particular, vm_map_protect() will only copy newly writable wired pages
into the top-level object when MAP_ENTRY_USER_WIRED is set, but
vm_map_wire() only sets this flag after its fault loop. We may thus end
up with a writable wired entry whose top-level object does not contain the
entire range of pages.
Reported and tested by: pho
Reviewed by: kib
MFC after: 1 week
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D10349
INHERIT_ZERO is an OpenBSD feature. When a page is marked as such, it
is zeroed upon fork(). This will be used by the new arc4random(3)
functions.
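A minimal usage example (minherit(2) with the new inherit value):

    #include <sys/mman.h>
    #include <err.h>

    int
    main(void)
    {
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (p == MAP_FAILED)
            err(1, "mmap");
        /* Pages in this range appear zero-filled in the child after
         * fork(2), instead of being copied or shared. */
        if (minherit(p, len, INHERIT_ZERO) == -1)
            err(1, "minherit");
        return (0);
    }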
PR: 182610
Reviewed by: kib (earlier version)
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D427
Fix two missed places where the vm_object offset-to-index calculation
should use an unsigned shift, to allow handling of the full range of
unsigned offsets used to create device mappings.
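A standalone illustration of the hazard (not the kernel code): a
right shift of a signed 64-bit offset sign-extends, so an offset with
the top bit set, which is legal for device mappings, yields a bogus
page index.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int64_t off = INT64_MIN;    /* offset with the top bit set */

        printf("signed shift:   0x%jx\n", (uintmax_t)(off >> 12));
        printf("unsigned shift: 0x%jx\n",
            (uintmax_t)((uint64_t)off >> 12));
        return (0);
    }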
Reported and tested by: royger (previous version)
Reviewed by: alc (previous version)
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Renumber clause 4 to 3, per what everybody else did when BSD granted
them permission to remove clause 3. My insistence on keeping the same
numbering for legal reasons is too pedantic, so give up on that
point.
Submitted by: Jan Schaumann <jschauma@stevens.edu>
Pull Request: https://github.com/freebsd/freebsd/pull/96
independent layer of the virtual memory system. Update some of the nearby
comments to eliminate redundancy and improve clarity.
In vm/vm_reserv.c, do not use hyphens after adverbs ending in -ly per
The Chicago Manual of Style.
Update the comment in vm/vm_page.h defining the four types of page queues to
reflect the elimination of PG_CACHED pages and the introduction of the
laundry queue.
Reviewed by: kib, markj
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D8752
vm_offset_t. (This field is used to detect sequential access to the virtual
address range represented by the map entry.) There are three reasons to
make this change. First, a vm_offset_t is smaller on 32-bit architectures.
Consequently, a struct vm_map_entry is now smaller on 32-bit architectures.
Second, a vm_offset_t can be written atomically, whereas it may not be
possible to write a vm_pindex_t atomically on a 32-bit architecture. Third,
using a vm_pindex_t makes the next_read field dependent on which object in
the shadow chain is being read from.
Replace an "XXX" comment.
Reviewed by: kib
Approved by: re (gjb)
Sponsored by: EMC / Isilon Storage Division
- Pull the vmspace logic out into helper functions and reduce duplication.
Operations on the vmspace are all isolated to vm_map.c, but it now exports
a new 'vmspace_switch_aio' for use by AIO kernel processes.
- When an AIO kernel process wants to exit, break out of the main loop and
perform cleanup after the loop end. This reduces a lot of indentation and
allows cleanup to more closely mirror setup actions before the loop starts.
- Convert a DIAGNOSTIC to KASSERT().
- Replace mycp with more typical 'p'.
Reviewed by: kib
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D4990
mapped address without a valid pte installed, when parallel wiring of
the entry happens. The entry must be copy-on-write. If the entry is
COW but was already copied, and parallel wiring set
MAP_ENTRY_IN_TRANSITION, vm_fault() would sleep waiting for the
MAP_ENTRY_IN_TRANSITION flag to clear. After that, the fault handler
is restarted and vm_map_lookup() or vm_map_lookup_locked() trips over
the check. Note that this is a race: if the address is accessed after
the wiring is done, the entry does not fault at all.
There is no reason in the current kernel to disallow write access to
the COW wired entry if the entry permissions allow it. Initially this
was done in r24666, since that kernel did not support proper
copy-on-write for wired text, which was fixed in r199869. The r251901
revision re-introduced the r24666 fix for the current VM.
Note that write access must clear the MAP_ENTRY_NEEDS_COPY entry flag
by performing the COW. Conversely, when MAP_ENTRY_NEEDS_COPY is set
in vmspace_fork(), the MAP_ENTRY_USER_WIRED flag is cleared. Add an
assert stating the invariant, instead of returning the error.
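A sketch of the asserted invariant (flag names from vm_map.h; exact
form and message text illustrative):

    /* A writable entry cannot be user-wired and still need its COW:
     * wiring performs the copy, and forking un-wires the copy. */
    KASSERT((prot & VM_PROT_WRITE) == 0 || (entry->eflags &
        (MAP_ENTRY_USER_WIRED | MAP_ENTRY_NEEDS_COPY)) !=
        (MAP_ENTRY_USER_WIRED | MAP_ENTRY_NEEDS_COPY),
        ("entry %p flags %x", entry, entry->eflags));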
Reported and debugging help by: peter
Reviewed by: alc
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
the VM_FAULT_CHANGE_WIRING flag to VM_FAULT_WIRE. Assert that the
flag is only passed when faulting on a wired map entry. Remove the
vm_page_unwire() call, which should never be reachable.
Since the VM_FAULT_WIRE flag implies a wired map entry, the
TRYPAGER() macro is reduced to testing whether fs.object has a
default pager. Inline the check.
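A sketch of the reduction (simplified): with VM_FAULT_WIRE implying a
wired entry, the old macro's wiring test is always true, and only the
pager check remains to be inlined.

    /* Attempt pager I/O unless the object has the default pager,
     * i.e. anonymous memory with no backing store yet. */
    if (fs.object->type != OBJT_DEFAULT) {
        /* ... look the page up from the pager ... */
    }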
Suggested and reviewed by: alc
Tested by: pho (previous version)
MFC after: 1 week
user address when the ABI uses a shared page.
Note that the change is a no-op for correctness, since the shared
page does not fault. The mapping for the shared page is installed at
address space creation; the page is unmanaged and its pte/pv entry
cannot be reclaimed.
Submitted by: Oliver Pinter
Review: https://reviews.freebsd.org/D2954
MFC after: 1 week