Commit Graph

3875 Commits

Author SHA1 Message Date
Mateusz Guzik
efb6d4a479 uma: whack main zone counter update in the slow path, freeing side
See r333052.
2018-07-12 22:35:52 +00:00
Mark Johnston
013072f04c Fix pre-SI_SUB_CPU initialization of per-CPU counters.
r336020 introduced pcpu_page_alloc(), replacing page_alloc() as the
backend allocator for PCPU UMA zones.  Unlike page_alloc(), it does
not honour malloc(9) flags such as M_ZERO or M_NODUMP, so fix that.

r336020 also changed counter(9) to initialize each counter using a
CPU_FOREACH() loop instead of an SMP rendezvous.  Before SI_SUB_CPU,
smp_rendezvous() will only execute the callback on the current CPU
(i.e., CPU 0), so only one counter gets zeroed.  The rest are zeroed
by virtue of the fact that UMA gratuitously zeroes slabs when importing
them into a zone.

Prior to SI_SUB_CPU, all_cpus is clear, so with r336020 we weren't
zeroing vm_cnt counters during boot: the CPU_FOREACH() loop had no
effect, and pcpu_page_alloc() didn't honour M_ZERO.  Fix this by
iterating over the full range of CPU IDs when zeroing counters,
ignoring whether the corresponding bits in all_cpus are set.

Reported and tested by:	pho (previous version)
Reviewed by:		kib (previous version)
Differential Revision:	https://reviews.freebsd.org/D16190
2018-07-10 00:18:12 +00:00
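
A minimal sketch of the zeroing approach described in the commit above: walk every possible CPU ID instead of CPU_FOREACH(). The function name and the per-CPU stride arithmetic are illustrative assumptions, not the actual counter(9) code.

    #include <sys/param.h>
    #include <sys/counter.h>
    #include <sys/smp.h>
    #include <vm/uma.h>

    /* Illustrative only; assumes per-CPU copies laid out at a fixed stride. */
    static void
    example_counter_zero(counter_u64_t c)
    {
    	u_int cpu;

    	/*
    	 * Before SI_SUB_CPU, all_cpus contains only CPU 0, so a
    	 * CPU_FOREACH() loop would zero just one copy.  Iterating over
    	 * the full range of CPU IDs zeroes them all.
    	 */
    	for (cpu = 0; cpu <= mp_maxid; cpu++)
    		*(uint64_t *)((char *)c + cpu * UMA_PCPU_ALLOC_SIZE) = 0;
    }
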
Sean Bruno
a03af34228 Wrap the declaration and assignment of "stripe" with #ifdef NUMA declarations
as not all targets are NUMA aware.

Found with gcc.

Sponsored by:	Limelight Networks
Differential Revision:	https://reviews.freebsd.org/D16113
2018-07-07 13:37:44 +00:00
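
The shape of the change is simply guarding both the declaration and every use, so non-NUMA builds do not trip over an unused variable; a hedged sketch (the function and the computation are made up):

    static void
    example_import(vm_offset_t *addrp, int ndomains)
    {
    #ifdef NUMA
    	int stripe;

    	/* Only meaningful on NUMA-aware targets. */
    	stripe = ndomains;
    	*addrp += stripe * PAGE_SIZE;
    #else
    	(void)ndomains;
    #endif
    }
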
Jeff Roberson
2ef6727edd Use the ticks since the last update to reduce hysteresis in the partpopq and
contention on the vm_reserv_domain lock.

This gives a roughly 8x speedup on will-it-scale fault1 on a 16 core machine.

Reviewed by:	alc, kib, markj
2018-07-07 01:54:45 +00:00
Konstantin Belousov
32f0fefc39 Save a call to pmap_remove() if entry cannot have any pages mapped.
Due to the way rtld creates mappings for the shared objects, each dso
causes the unmapping of at least three guard map entries.  For instance,
in the buildworld load, this change reduces the number of pmap_remove()
calls by 1/5.

Profiled by:	alc
Reviewed by:	alc, markj
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
Differential revision:	https://reviews.freebsd.org/D16148
2018-07-06 12:44:48 +00:00
Konstantin Belousov
be7be41275 Style: no need for braces around single-line then clause.
Reviewed by:	alc, markj
Sponsored by:	The FreeBSD Foundation
MFC after:	3 days
Differential revision:	https://reviews.freebsd.org/D16148
2018-07-06 12:37:46 +00:00
Matt Macy
ab3059a8e7 Back pcpu zone with domain correct pages
- Change pcpu zone consumers to use a stride size of PAGE_SIZE.
  (defined as UMA_PCPU_ALLOC_SIZE to make future identification easier)

- Allocate page from the correct domain for a given cpu.

- Don't initialize pc_domain to a non-zero value if NUMA is not defined.
  There are some misconceptions surrounding this field. It is the
  _VM_ NUMA domain and should only ever correspond to valid domain
  values as understood by the VM.

The former slab size of sizeof(struct pcpu) was somewhat arbitrary.
The new value is PAGE_SIZE because that's the smallest granularity
which the VM can allocate a slab for a given domain. If you have
fewer than PAGE_SIZE/8 counters on your system there will be some
memory wasted, but this is obviously something where you want the
cache line to be coming from the correct domain.

Reviewed by: jeff
Sponsored by: Limelight Networks
Differential Revision:  https://reviews.freebsd.org/D15933
2018-07-06 02:06:03 +00:00
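
With the stride fixed at UMA_PCPU_ALLOC_SIZE (i.e. PAGE_SIZE), a consumer reaches its slot by simple pointer arithmetic; a hedged sketch of the accessor, simplified from what sys/pcpu.h actually provides:

    /* Illustrative accessor: per-CPU copies sit one PAGE_SIZE apart. */
    static inline void *
    example_pcpu_slot(void *base, u_int cpu)
    {

    	return ((char *)base + (uintptr_t)cpu * UMA_PCPU_ALLOC_SIZE);
    }
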
Andrew Turner
2bf9501287 Create a new macro for static DPCPU data.
On arm64 (and possibly other architectures) we are unable to use static
DPCPU data in kernel modules. This is because the compiler will generate
PC-relative accesses; however, the runtime linker expects to be able to
relocate these.

In preparation for fixing this, create two macros, depending on whether
the data is global or static.

Reviewed by:	bz, emaste, markj
Sponsored by:	ABT Systems Ltd
Differential Revision:	https://reviews.freebsd.org/D16140
2018-07-05 17:13:37 +00:00
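
Usage then differs only in which macro declares the data; a short sketch with made-up variable names:

    #include <sys/pcpu.h>

    /* Visible to other compilation units. */
    DPCPU_DEFINE(int, example_global_count);

    /*
     * File-local; the separate macro lets the implementation emit
     * something the module runtime linker can relocate on arm64.
     */
    DPCPU_DEFINE_STATIC(int, example_local_count);

    static void
    example_bump(void)
    {

    	DPCPU_SET(example_local_count, DPCPU_GET(example_local_count) + 1);
    }
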
Konstantin Belousov
a66d7a8ddc Copyout(9) on 4/4 i386 needs correct vm_page_array[].
On the 4/4 i386, copyout(9) may need to call pmap_extract_and_hold()
on an arbitrary userspace mapping.  If the mapping is backed by the
non-managed cdev pager or by the sg pager, on dense configs we might
access an arbitrary element of vm_page_array[], in particular one not
corresponding to a page from the memory segment.  Initialize such pages
as fictitious with the corresponding physical address.

Reported by:	bde
Reviewed by:	alc, markj (previous version)
Sponsored by:	The FreeBSD Foundation
Differential revision:	https://reviews.freebsd.org/D16085
2018-07-05 16:43:15 +00:00
Alan Cox
370a338a7d Allow callers to vm_phys_split_pages() to specify whether insertion should
occur at the head or the tail of the page queues.
2018-07-05 02:08:57 +00:00
Matt Macy
f4b3640475 inline atomics and allow tied modules to inline locks
- inline atomics in modules on i386 and amd64 (they were always
  inline on other arches)
- allow modules to opt in to inlining locks by specifying
  MODULE_TIED=1 in the makefile

Reviewed by: kib
Sponsored by: Limelight Networks
Differential Revision: https://reviews.freebsd.org/D16079
2018-07-02 19:48:38 +00:00
Alan Cox
7493904eca Introduce vm_phys_enq_range(), and call it in vm_phys_alloc_npages()
and vm_phys_alloc_seg_contig() instead of vm_phys_free_contig().  In
short, vm_phys_enq_range() is simpler and faster than the more general
vm_phys_free_contig(), and in the case of vm_phys_alloc_seg_contig(),
vm_phys_free_contig() was placing the excess physical pages at the
wrong end of the queues.

In collaboration with:	Doug Moore <dougm@rice.edu>
2018-07-02 17:18:46 +00:00
Alan Cox
9161b4de54 Three changes to vm_phys_alloc_seg_contig():
1. Optimize the order computation.

2. Update the pool for all of the chunks that are removed from the free
   page lists, and not just the first chunk.

3. Simplify the code for returning excess pages to the free page lists.

Reviewed by:	Doug Moore <dougm@rice.edu>
2018-06-29 04:08:14 +00:00
Alan Cox
32d81f21b9 Reflow one of the comments describing vm_phys_alloc_npages(). 2018-06-28 17:52:06 +00:00
Ed Maste
e8a1ec3e05 Split kern_break from sys_break and use it in linuxulator
Previously the linuxulator's linux_brk invoked the FreeBSD sys_break
syscall implementation directly.  Instead, move the bulk of the existing
implementation to kern_break, and call that from both sys_break and
linux_brk.

This also addresses a minor bug in linux_brk in that we now return the
actual (rounded up) break address, rather than the requested value.

Reviewed by:	brooks (earlier version)
Sponsored by:	Turing Robotic Industries
Differential Revision:	https://reviews.freebsd.org/D16019
2018-06-27 14:45:13 +00:00
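
The resulting layout is the usual sys_*/kern_* split; a hedged sketch of the shape (the prototypes and bodies here are guesses for illustration, not the committed code):

    int
    kern_break(struct thread *td, uintptr_t *addr)
    {

    	/* Shared implementation: adjust the data segment and write the
    	 * rounded-up break address back through *addr. */
    	return (0);
    }

    int
    sys_break(struct thread *td, struct break_args *uap)
    {
    	uintptr_t addr;

    	addr = (uintptr_t)uap->nsize;
    	return (kern_break(td, &addr));
    }

    /* linux_brk() now calls kern_break() as well, returning the rounded-up
     * break address instead of the requested value. */
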
Alan Cox
89ea39a727 Update the physical page selection strategy used by vm_page_import() so
that it does not cause rapid fragmentation of the free physical memory.

Reviewed by:	jeff, markj (an earlier version)
Differential Revision:	https://reviews.freebsd.org/D15976
2018-06-26 18:29:56 +00:00
Mateusz Guzik
a3d799fbb5 vm: stop passing M_ZERO when allocating radix nodes
Allocation explicitly initialized the 3 leading fields. The rest is an
array which is supposed to be NULL-ed prior to deallocation.

Delegate zeroing to the infrequently called object initializer.

This gets rid of one of the most common memset consumers.

Reviewed by:	markj
Differential Revision:	https://reviews.freebsd.org/D15989
2018-06-24 13:08:05 +00:00
Jeff Roberson
63b5557b2f Sort uma_zone fields according to 64 byte cache line with adjacent line
prefetch on 64bit architectures.  Prior to this, two lines were needed
for the fast path and each line may fetch an unused adjacent neighbor.
 - Move fields used by the fast path into a single line.
 - Move constants into the adjacent line which is mostly used for
   the spare bucket alloc 'medium path'.
 - Unpad the mtx which is only used by the fast path and place it in
   a line with rarely used data.  This aligns the cachelines better and
   eliminates 128 bytes of wasted space.

This gives a 45% improvement on a will-it-scale test on a 24 core machine.

Reviewed by:	mmacy
2018-06-23 08:10:09 +00:00
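
The general technique is to group the hot fields so the fast path touches one 64-byte line and to let rarely used fields absorb the padding; a generic, made-up sketch, not the actual uma_zone layout:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    struct example_zone {
    	/* Fast path: everything here fits in one 64-byte line. */
    	uint32_t	ez_size;
    	uint32_t	ez_flags;
    	void		*ez_import;
    	void		*ez_release;
    	/* Constants used by the 'medium path' land in the adjacent
    	 * line that the prefetcher pulls in anyway. */
    	uint64_t	ez_max_items;
    	/* Unpadded lock shares a line with rarely used data. */
    	struct mtx	ez_lock;
    	char		ez_name[32];
    } __aligned(64);
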
Ian Lepore
c5b7751fa2 Eliminate a spurious panic on non-SMP systems (occurred on shutdown/reboot). 2018-06-22 20:22:26 +00:00
Ruslan Bukin
b47999470d Fix uma_zalloc_pcpu_arg() operation in case of !SMP build.
Reviewed by:	mjg
Sponsored by:	DARPA, AFRL
2018-06-21 11:43:54 +00:00
Brooks Davis
9da5364ed9 Name the implementation of brk and sbrk sys_break().
The break() system call was renamed (several times) starting in v3
AT&T UNIX when C was invented and break was a language keyword. The
last vestige of a need for it to be called something else (e.g., obreak)
was removed in r225617, which consistently prefixed all syscall
implementations.

Reviewed by:	emaste, kib (older version)
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D15638
2018-06-14 21:27:25 +00:00
Konstantin Belousov
b7b8a09658 Handle the race between fork/vm_object_split() and faults.
If a fault started before vmspace_fork() locked the map, and then during
the fork, vm_map_copy_entry()->vm_object_split() is executed, it is
possible that the fault instantiates the page into the original object
when the page was already copied into the new object (see
vm_object_split() for the orig/new objects terminology). This can happen
if split found a busy page (e.g. from the fault) and slept dropping
the object's lock, which allows the swap pager to instantiate
read-behind pages for the fault.  Then the restart of the scan can see
a page in the scanned range that was already copied to the upper
object.

Fix it by instantiating the read-ahead pages before the
swap_pager_getpages() method drops the lock to allocate a pbuf.  The
object scan then sees the whole range prefilled with busy pages
and does not proceed with the range.

Note that vm_fault rechecks the map generation count after the object
unlock, so that it restarts the handling if it raced with the split,
and looks up the right page again from the upper object.

In collaboration with:	alc
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2018-06-14 19:41:02 +00:00
Jonathan T. Looney
0766f278d8 Make UMA and malloc(9) return non-executable memory in most cases.
Most kernel memory that is allocated after boot does not need to be
executable.  There are a few exceptions.  For example, kernel modules
do need executable memory, but they don't use UMA or malloc(9).  The
BPF JIT compiler also needs executable memory and did use malloc(9)
until r317072.

(Note that a side effect of r316767 was that the "small allocation"
path in UMA on amd64 already returned non-executable memory.  This
meant that some calls to malloc(9) or the UMA zone(9) allocator could
return executable memory, while others could return non-executable
memory.  This change makes the behavior consistent.)

This change makes malloc(9) return non-executable memory unless the new
M_EXEC flag is specified.  After this change, the UMA zone(9) allocator
will always return non-executable memory, and a KASSERT will catch
attempts to use the M_EXEC flag to allocate executable memory using
uma_zalloc() or its variants.

Allocations that do need executable memory have various choices.  They
may use the M_EXEC flag to malloc(9), or they may use a different VM
interface to obtain executable pages.

Now that malloc(9) again allows executable allocations, this change also
reverts most of r317072.

PR:		228927
Reviewed by:	alc, kib, markj, jhb (previous version)
Sponsored by:	Netflix
Differential Revision:	https://reviews.freebsd.org/D15691
2018-06-13 17:04:41 +00:00
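
A consumer that genuinely needs executable memory now has to say so; a minimal, hedged malloc(9) example (the malloc type and function names are made up):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>

    static MALLOC_DEFINE(M_EXAMPLE_JIT, "example_jit", "example executable buffer");

    static void *
    example_alloc_exec(size_t len)
    {

    	/* Plain malloc(9) memory is now non-executable; ask for M_EXEC. */
    	return (malloc(len, M_EXAMPLE_JIT, M_WAITOK | M_EXEC));
    }
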
Mateusz Guzik
4e180881ae uma: implement provisional api for per-cpu zones
Per-cpu zone allocations are very rarely done compared to regular zones.
The intent is to avoid pessimizing the latter case with per-cpu specific
code.

In particular, contrary to the claim in r334824, M_ZERO is sometimes being
used for such zones. But the zeroing method is completely different, and
branching on it in the fast path for regular zones is a waste of time.
2018-06-08 21:40:03 +00:00
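
A hedged sketch of creating and consuming such a zone with the per-cpu API (UMA_ZONE_PCPU, uma_zalloc_pcpu() and zpcpu_get() are the interfaces this work is built around; the zone itself and its use are made up):

    static uint64_t *example_base;

    static void
    example_pcpu_init(void)
    {
    	uma_zone_t zone;

    	zone = uma_zcreate("example pcpu", sizeof(uint64_t),
    	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_PCPU);
    	example_base = uma_zalloc_pcpu(zone, M_WAITOK | M_ZERO);
    }

    static void
    example_pcpu_bump(void)
    {
    	uint64_t *slot;

    	critical_enter();
    	slot = zpcpu_get(example_base);	/* this CPU's copy */
    	(*slot)++;
    	critical_exit();
    }
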
Mateusz Guzik
b8af2820f6 uma: fix up r334824
Turns out there is code which ends up passing M_ZERO to counters.
Since counters zero unconditionally on their own, just drop the
flag in that place.
2018-06-08 05:40:36 +00:00
Mateusz Guzik
ea99223ec9 uma: remove M_ZERO support for pcpu zones
Nothing in the tree uses it and pcpu zones have a fundamentally different use
case than the regular zones - they are not supposed to be allocated and freed
all the time.

This reduces pollution in the allocation fast path.
2018-06-08 03:16:16 +00:00
Gleb Smirnoff
c5deaf0452 UMA memory debugging enabled with INVARIANTS consists of two things:
trashing freed memory and checking that allocated memory is properly
trashed, and also of keeping a bitset of freed items. Trashing/checking
creates a lot of CPU cache poisoning, while keeping debugging bitsets
consistent creates a lot of contention on UMA zone lock(s). The performance
difference between an INVARIANTS kernel and a normal one is mostly attributed
to UMA debugging, rather than to all KASSERT checks in the kernel.

Add a loader tunable, vm.debug.divisor, that allows either turning off UMA
debugging completely or turning it on only for a fraction of allocations,
while still running all KASSERTs in the kernel. That allows running
INVARIANTS kernels in production environments without reducing load by
orders of magnitude, but still doing useful extra checks.

The default value is 1, meaning debug every allocation. A value of 0
disables UMA debugging completely. Values above 1 enable debugging only
for every N-th item. It isn't possible to follow the number strictly,
but the amount of debugging is still reduced by roughly (N-1)/N.

Sponsored by:		Netflix
Differential Revision:	https://reviews.freebsd.org/D15199
2018-06-08 00:15:08 +00:00
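
The selection logic can be pictured as a counter taken modulo the divisor; this is a hedged sketch of the idea, not the actual uma_dbg code:

    static int debug_divisor = 1;	/* mirrors the vm.debug.divisor tunable */
    static u_long debug_alloc_count;

    static bool
    example_should_debug(void)
    {

    	if (debug_divisor == 0)
    		return (false);		/* debugging disabled */
    	if (debug_divisor == 1)
    		return (true);		/* debug every allocation */
    	return (atomic_fetchadd_long(&debug_alloc_count, 1) %
    	    debug_divisor == 0);
    }

Being a loader tunable, it would typically be set from loader.conf(5), e.g. vm.debug.divisor="64" to sample roughly one allocation in 64.
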
Jonathan T. Looney
16e05b3275 Fix a typo in vm_domain_set(). When a domain crosses into the severe range,
we need to set the domain bit from the vm_severe_domains bitset (instead
of clearing it).

Reviewed by:	jeff, markj
Sponsored by:	Netflix, Inc.
2018-06-07 13:29:54 +00:00
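
In code terms the fix swaps a clear for a set on the domain's bit in vm_severe_domains; a minimal before/after sketch (surrounding context elided):

    	/* Before (the typo): the domain was removed from the severe set. */
    	DOMAINSET_CLR(domain, &vm_severe_domains);

    	/* After: crossing into the severe range adds the domain. */
    	DOMAINSET_SET(domain, &vm_severe_domains);
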
Mark Johnston
9f9c9b22ec Reimplement brk() and sbrk() to avoid the use of _end.
Previously, libc.so would initialize its notion of the break address
using _end, a special symbol emitted by the static linker following
the bss section.  Compatibility issues between lld and ld.bfd could
cause the wrong definition of _end (libc.so's definition rather than
that of the executable) to be used, breaking the brk()/sbrk()
interface.

Avoid this problem and future interoperability issues by simply not
relying on _end.  Instead, modify the break() system call to return
the kernel's view of the current break address, and have libc
initialize its state using an extra syscall upon the first use of the
interface.  As a side effect, this appears to fix brk()/sbrk() usage
in executables run with rtld direct exec, since the kernel and libc.so
no longer maintain separate views of the process' break address.

PR:		228574
Reviewed by:	kib (previous version)
MFC after:	2 months
Differential Revision:	https://reviews.freebsd.org/D15663
2018-06-04 19:35:15 +00:00
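
From a program's point of view nothing changes; a trivial, runnable userspace example of the interface whose initialization path this commit reworks:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
    	void *before, *after;

    	before = sbrk(0);	/* libc learns the break from the kernel on first use */
    	sbrk(4096);		/* grow the break by 4 KiB */
    	after = sbrk(0);
    	printf("break moved from %p to %p\n", before, after);
    	return (0);
    }
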
Mark Johnston
27e29d103f Correct the description of vm_pageout_scan_inactive() after r334508.
Reported by:	alc
2018-06-04 16:46:36 +00:00
Alan Cox
3e7cb27cdd Use a single, consistent approach to returning success versus failure in
vm_map_madvise().  Previously, vm_map_madvise() used a traditional Unix-
style "return (0);" to indicate success in the common case, but Mach-
style return values in the edge cases.  Since KERN_SUCCESS equals zero,
the only problem with this inconsistency was stylistic.  vm_map_madvise()
has exactly two callers in the entire source tree, and only one of them
cares about the return value.  That caller, kern_madvise(), can be
simplified if vm_map_madvise() consistently uses Unix-style return
values.

Since vm_map_madvise() uses the variable modify_map as a Boolean, make it
one.

Eliminate a redundant error check from kern_madvise().  Add a comment
explaining where the check is performed.

Explicitly note that exec_release_args_kva() doesn't care about
vm_map_madvise()'s return value.  Since MADV_FREE is passed as the
behavior, the return value will always be zero.

Reviewed by:	kib, markj
MFC after:	7 days
2018-06-04 16:28:06 +00:00
Justin Hibbits
12f691959f Align UMA data to 128 byte cacheline size
Suggested by:	mjg
2018-06-04 15:44:17 +00:00
Mark Johnston
49a3710c89 Remove the "pass" variable from the page daemon control loop.
It serves little purpose after r308474 and r329882.  As a side
effect, the removal fixes a bug in r329882 which caused the
page daemon to periodically invoke lowmem handlers even in the
absence of memory pressure.

Reviewed by:	jeff
Differential Revision:	https://reviews.freebsd.org/D15491
2018-06-02 00:01:07 +00:00
Konstantin Belousov
633d3b1c71 Only check for MAP_32BIT when available.
Reported by:	mmacy
Sponsored by:	The FreeBSD Foundation
MFC after:	10 days
2018-06-01 23:50:51 +00:00
Alan Cox
60221a5701 Only a small subset of mmap(2)'s flags should be used in combination with
the flag MAP_GUARD.  Rather than enumerating the flags that are not
allowed, enumerate the flags that are allowed.  The list of allowed flags
is much shorter and less likely to change.  (As an aside, one of the
previously enumerated flags, MAP_PREFAULT, was not even a legal flag for
mmap(2).  However, because of an earlier check within kern_mmap(), this
misuse of MAP_PREFAULT was harmless.)

Reviewed by:	kib
MFC after:	10 days
2018-06-01 21:37:42 +00:00
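
For reference, a guard region is requested roughly like this; a minimal usage sketch following the common pattern of PROT_NONE with an fd of -1 and a zero offset (error handling elided):

    #include <sys/mman.h>
    #include <stddef.h>

    /* Reserve len bytes of address space as a guard; any access faults. */
    static void *
    example_make_guard(size_t len)
    {

    	return (mmap(NULL, len, PROT_NONE, MAP_GUARD, -1, 0));
    }
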
Mark Johnston
6939b4d3b4 Typo.
PR:		228533
Submitted by:	Jakub Piecuch <j.piecuch96@gmail.com>
MFC after:	1 week
2018-05-30 16:48:48 +00:00
Alan Cox
6e1e759c56 Addendum to r334233. In vm_fault_populate(), since the page lock is held,
we must use vm_page_xunbusy_maybelocked() rather than vm_page_xunbusy() to
unbusy the page.

Reviewed by:	kib
X-MFC with:	r334233
2018-05-28 16:23:39 +00:00
Alan Cox
fccdefa1a1 Eliminate duplicate assertions. We assert at the start of vm_fault_hold()
that the map entry is wired if the caller passes the flag VM_FAULT_WIRE.
Eliminate the same assertion, but spelled differently, at the end of
vm_fault_hold() and vm_fault_populate().  Repeat the assertion only if the
map is unlocked and the map lookup must be repeated.

Reviewed by:	kib
MFC after:	10 days
Differential Revision:	https://reviews.freebsd.org/D15582
2018-05-28 04:38:10 +00:00
Alan Cox
70183daa80 Use pmap_enter(..., psind=1) in vm_fault_populate() on amd64. While
superpage mappings were already being created by automatic promotion in
vm_fault_populate(), this change reduces the cost of creating those
mappings.  Essentially, one pmap_enter(..., psind=1) call takes the place
of 512 pmap_enter(..., psind=0) calls, and that one pmap_enter(...,
psind=1) call eliminates the allocation of a page table page.

Reviewed by:	kib
MFC after:	10 days
Differential Revision:	https://reviews.freebsd.org/D15572
2018-05-26 02:59:34 +00:00
Brooks Davis
7351a8bdb5 Make vadvise compat freebsd11.
The vadvise syscall (aka ovadvise) is undocumented and has always been
implemented as returning EINVAL.  Put the syscall under COMPAT11 and
provide a userspace implementation.

Reviewed by:	kib
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D15557
2018-05-25 20:40:23 +00:00
Alan Cox
d3f8534e99 Eliminate an unused parameter from vm_fault_populate().
Reviewed by:	kib
MFC after:	10 days
2018-05-24 20:43:41 +00:00
Mark Johnston
7bb4634e18 Update r334154 with review feedback from D15490.
An old revision was committed by accident.

Differential Revision:	https://reviews.freebsd.org/D15490
2018-05-24 20:26:37 +00:00
Brooks Davis
758d46cfb0 Don't implement break(2) at all on aarch64 and riscv.
This should have been done when they were removed from libc, but was
overlooked in the runup to 11.0.  No users should exist.

Approved by:	andrew
Sponsored by:	DARPA, AFRL
Differential Revision:	https://reviews.freebsd.org/D15539
2018-05-24 17:04:27 +00:00
Mark Johnston
be37ee791f Split the active and inactive queue scans into separate subroutines.
The scans are largely independent, so this helps make the code
marginally neater, and makes it easier to incorporate feedback from the
active queue scan into the page daemon control loop.

Improve some comments while here.  No functional change intended.

Reviewed by:	alc, kib
Differential Revision:	https://reviews.freebsd.org/D15490
2018-05-24 14:16:22 +00:00
Mark Johnston
a99ee60b9a Ensure that "m" is initialized in vm_page_alloc_freelist_domain().
While here, remove a superfluous comment.

Coverity CID:	1383559
MFC after:	3 days
2018-05-22 16:19:48 +00:00
Mark Johnston
23d123c6cf Use the canonical check for reservation support. 2018-05-19 23:49:13 +00:00
Mark Johnston
01f04471f4 Don't increment addl_page_shortage for wired pages.
Such pages are dequeued as they're encountered during the inactive queue
scan, so by the time we get to the active queue scan, they should have
already been subtracted from the inactive queue length.

Reviewed by:	alc
Differential Revision:	https://reviews.freebsd.org/D15479
2018-05-18 16:59:58 +00:00
Mark Johnston
ba2b3349e1 Fix a race in vm_page_pagequeue_lockptr().
The value of m->queue must be cached after comparing it with PQ_NONE,
since it may be concurrently changing.

Reported by:	glebius
Reviewed by:	jeff
Differential Revision:	https://reviews.freebsd.org/D15462
2018-05-17 04:27:08 +00:00
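
The shape of the fix is to read the racy field once into a local and use only that copy; a simplified sketch, not the exact committed code:

    	uint8_t queue;

    	queue = m->queue;	/* single read; m->queue may change under us */
    	if (queue == PQ_NONE)
    		return (NULL);
    	return (&vm_pagequeue_domain(m)->vmd_pagequeues[queue].pq_mutex);
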
Matt Macy
73e37d1deb Fix powerpc64 LINT
vm_object_reserve() == true is impossible on power.  Make it conditional
on VM_LEVEL_0_ORDER being defined.

Reviewed by:	jeff
Approved by:	sbruno
2018-05-17 03:19:31 +00:00
Mark Johnston
36f8fe9bbb Get rid of vm_pageout_page_queued().
vm_page_queue(), added in r333256, generalizes vm_pageout_page_queued(),
so use it instead.  No functional change intended.

Reviewed by:	kib
Differential Revision:	https://reviews.freebsd.org/D15402
2018-05-13 13:00:59 +00:00