doing so adds more flexibility with less redundant code.
Reviewed by: jhb, markj, kib
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D21250
Move the floppy driver to the x86 specific notes file.
Reviewed by: jhb, manu, jhibbits, emaste
Differential Revision: https://reviews.freebsd.org/D21208
x86 needs sc, as does sparc64. powerpc doesn't use it by default, but some old
powermac notebooks do not work with vt yet for reasons unknown. Even so, I've
removed it from the powerpc LINT. It's not in daily use there, and the intent is
to switch 100% to vt, now that it works for that platform, to limit the support
burden.
All the other architectures omit some or all of the screen savers from their
LINT configs. Move them to the x86 NOTES files and remove the exclusions. This
slightly reduces the number of savers sparc64 compiles, but since they are in
GENERIC, the coverage is adequate; if someone really wants to sort them out on
sparc64, they can sweat the details and the testing.
Reviewed by: jhb (earlier version), manu (earlier version), jhibbits
Differential Revision: https://reviews.freebsd.org/D21233
Create a rough and ready NOTES file from GENERIC, remove the duplication from
sys/conf/NOTES and add relevant no* directives to make this compile.
Reviewed by: jhb, manu (earlier versions that differed only in comments)
Differential Revision: https://reviews.freebsd.org/D21184
The COMPAT_43 option isn't quite like the other compat options, and arm64
attempts to support it in 64-bit mode. In 32-bit compat mode, however, two
syscall implementations that COMPAT_FREEBSD32 assumes will be there are
missing. Provide implementations for these: ofreebsd32_sigreturn (which we'll
never encounter, so implement it as nosys as is done in kern_sig.c) and
ofreebsd32_getpagesize, where we'll always return 4096 since that's the only
PAGE_SIZE we support, similar to how the ia32 implementation does things.
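A minimal sketch of the two entry points under those assumptions (the
argument-structure names follow the usual freebsd32 sysent conventions and
are illustrative rather than copied from the commit):

    int
    ofreebsd32_getpagesize(struct thread *td,
        struct ofreebsd32_getpagesize_args *uap)
    {

            td->td_retval[0] = PAGE_SIZE;   /* always 4096 on arm64 today */
            return (0);
    }

    int
    ofreebsd32_sigreturn(struct thread *td,
        struct ofreebsd32_sigreturn_args *uap)
    {

            return (nosys(td, (struct nosys_args *)uap));
    }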
Reviewed by: manu@
Differential Revision: https://reviews.freebsd.org/D21192
pmap's lock ensures that other operations on the pmap don't observe the
old mapping being broken before the new mapping is established. However,
pmap_kextract() doesn't acquire the kernel pmap's lock, so it may observe
the broken mapping. And, if it does, it returns an incorrect result.
This revision implements a lock-free solution to this problem in
pmap_update_entry() and pmap_kextract() because pmap_kextract() can't
acquire the kernel pmap's lock.
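To illustrate one lock-free approach consistent with the description above
(a sketch of the idea, not the committed diff; helper and constant names
follow the arm64 pmap/pte.h spellings from memory): if the updater only
clears the descriptor's valid bit during the break-before-make sequence, a
lockless reader can still recover the physical address.

    /* Updater, inside pmap_update_entry() (sketch): */
    pmap_clear_bits(pte, ATTR_DESCR_VALID); /* break, but keep the PA */
    pmap_invalidate_range(pmap, va, va + size);
    pmap_store(pte, newpte);                /* make */

    /* Reader, inside pmap_kextract() (sketch): */
    tpte = pmap_load(pte);
    pa = (tpte & ~ATTR_MASK) | (va & L3_OFFSET); /* no valid bit required */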
Reported by: andrew, greg_unrelenting.technology
Reviewed by: andrew, markj
Tested by: greg_unrelenting.technology
X-MFC with: r350579
Differential Revision: https://reviews.freebsd.org/D21169
memory barrier between the stores for initializing a page table page and
the store for adding that page to the page table. Otherwise, a page table
walk by another processor's MMU could see the page table page before it
sees the initialized entries.
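The required ordering, sketched with the pmap's usual helpers (variable
names are illustrative):

    /* Initialize the new page table page... */
    bzero((void *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(mpte)), PAGE_SIZE);
    /* ...and order those stores before the store that publishes it. */
    dsb(ishst);
    pmap_store(l1, VM_PAGE_TO_PHYS(mpte) | L1_TABLE);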
Simplify pmap_growkernel(). In particular, eliminate an unnecessary TLB
invalidation.
Reviewed by: andrew, markj
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D21126
The ARMv8 reference manual only states that the bit is reserved in
this case; following Linux's example, use it instead of a
software-defined bit for the purpose of indicating that a managed
mapping is writable.
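For context: ARMv8.1 defines bit [51] of a block/page descriptor as the
hardware DBM bit, and pre-8.1 it is reserved, so the software flag can sit
in the same position. A sketch (the exact pte.h spelling may differ):

    #define ATTR_SW_DBM     (UINT64_C(1) << 51)     /* hw DBM bit on v8.1 */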
Reviewed by: alc, andrew
MFC after: r350004
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D21121
mapping and then destroy one of the 4 KB page mappings so that there is a
potential trigger for repromotion. Currently, we destroy the first 4 KB
page mapping that falls within the (current) superpage mapping or the
virtual address range [sva, eva). However, I have found empirically that
destroying the last 4 KB mapping produces slightly better results,
specifically, more promotions and fewer failed promotion attempts.
Accordingly, this revision changes pmap_advise() to destroy the last 4 KB
page mapping. It also replaces some nearby uses of boolean_t with bool.
Reviewed by: kib, markj
Differential Revision: https://reviews.freebsd.org/D21115
It is assembled using "${CC} -x assembler-with-cpp", which by convention
(bsd.suffixes.mk) uses the .asm extension.
This is a portion of the review referenced below (D18344). That review
also renamed linux_support.s to .S, but that is a functional change
(using the compiler's integrated assembler instead of as) and will be
revisited separately.
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D18344
If we take a WnR permission fault on a managed, writeable and dirty
PTE, simply return success without calling the main fault handler. This
situation can occur if multiple threads simultaneously access a clean
writeable mapping and trigger WnR faults; losers of the race to mark the
PTE dirty would end up calling the main fault handler, which had no work
to do.
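A rough sketch of the early-return test, once pmap_fault() has decided the
fault is a write (WnR) permission fault; constant spellings follow pte.h
loosely and the surrounding structure of pmap_fault() is omitted:

    pte = pmap_pte(pmap, far, &lvl);
    if (pte != NULL) {
            tpte = pmap_load(pte);
            if ((tpte & (ATTR_SW_MANAGED | ATTR_SW_DBM)) ==
                (ATTR_SW_MANAGED | ATTR_SW_DBM) &&
                (tpte & ATTR_AP_RW_BIT) == 0) {
                    /* Managed, writable and already dirty (read-write):
                     * the racing thread did the work; nothing to do. */
                    return (KERN_SUCCESS);
            }
    }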
Reported by: alc
Reviewed by: alc
MFC with: r350004
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D21097
Terasic DE10-Pro (an Intel Stratix 10 GX/SX FPGA Development Kit).
The Altera EMAC is an instance of Synopsys DesignWare Gigabit MAC.
This driver sets the correct clock range for the MDIO interface on the Intel
Stratix 10 platform.
This is required because there is no support for the clock manager device on
this platform that could tell us the clock frequency of the Ethernet clock
domain.
Sponsored by: DARPA, AFRL
if a demotion succeeds, then all of the 4KB page mappings within the
superpage-sized region must be valid, so there is no point in testing the
validity of the 4KB page mapping that is going to be write protected.
Deindent the nearby code.
Reviewed by: kib, markj
Tested by: pho (amd64, i386)
X-MFC after: r350004 (this change depends on arm64 dirty bit emulation)
Differential Revision: https://reviews.freebsd.org/D21027
Previously only some of the ID register fields were 64 bit. To allow a
script to generate these, mark them all 64 bit. To allow their use in
assembly, we need to use the UINT64_C macro via a new UL macro to keep the
lines from becoming too long.
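A sketch of the pattern (the field shown is only an example; armreg.h is the
authority for the real names and values):

    #define UL(x)                   UINT64_C(x)

    /* A 64-bit field mask now fits on one line and can be shared with
     * assembly sources: */
    #define ID_AA64PFR0_FP_SHIFT    16
    #define ID_AA64PFR0_FP_MASK     (UL(0xf) << ID_AA64PFR0_FP_SHIFT)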
MFC after: 1 week
Sponsored by: DARPA, AFRL
Differential Revision: https://reviews.freebsd.org/D20977
we should test ATTR_SW_DBM, not ATTR_AP_RW, to determine whether to set
PGA_WRITEABLE. In effect, we are currently setting PGA_WRITEABLE based on
whether the dirty bit is preset, not whether the mapping is writeable.
Correct this mistake.
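Roughly, the corrected test (variable names illustrative):

    if ((new_l3 & ATTR_SW_DBM) != 0)        /* writable, not merely dirty */
            vm_page_aflag_set(m, PGA_WRITEABLE);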
Reviewed by: markj
X-MFC with: r350004
Differential Revision: https://reviews.freebsd.org/D21013
where the page table entry was previously invalid. (Note that I did not
replace pmap_load_store() when it was followed by a TLB invalidation, even
if we are not using the return value from pmap_load_store().)
Correct an error in pmap_enter(). A test for determining when to set
PGA_WRITEABLE was always true, even if the mapping was read only.
In pmap_enter_l2(), when replacing an empty kernel page table page by a
superpage mapping, clear the old l2 entry and issue a TLB invalidation. My
reading of the ARM architecture manual leads me to believe that the TLB
could hold an intermediate entry referencing the empty kernel page table
page even though it contains no valid mappings.
Replace a couple direct uses of atomic_clear_64() by the new
pmap_clear_bits().
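For reference, the wrapper amounts to something like this sketch:

    static __inline void
    pmap_clear_bits(pt_entry_t *pte, pt_entry_t bits)
    {

            atomic_clear_64(pte, bits);
    }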
In a couple comments, replace the term "paging-structure caches", which is
an Intel-specific term for the caches that hold intermediate entries in the
page table, with wording that is more consistent with the ARM architecture
manual.
Reviewed by: markj
X-MFC after: r350004
Differential Revision: https://reviews.freebsd.org/D20998
Add HWCAP support for arm64.
The defines are the same as in Linux, and a userland program can use
elf_aux_info() to get the data.
We only save the common denominator for all cores, in case the big and
little clusters have different support (this is known to exist even if we
don't support those SoCs in FreeBSD).
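A userland usage sketch (the particular HWCAP_* flag tested is only an
example of the arm64 definitions):

    #include <sys/auxv.h>
    #include <elf.h>
    #include <stdio.h>

    int
    main(void)
    {
            unsigned long hwcap = 0;

            if (elf_aux_info(AT_HWCAP, &hwcap, sizeof(hwcap)) == 0 &&
                (hwcap & HWCAP_ATOMICS) != 0)
                    printf("LSE atomics supported\n");
            return (0);
    }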
Differential Revision: https://reviews.freebsd.org/D17137
r350004 added most of the machinery needed to support hardware DBM
management, but it did not intend to actually enable use of the hardware
DBM bit.
Reviewed by: andrew
MFC with: r350004
Sponsored by: The FreeBSD Foundation
After r349117 and r349122, some mapping attribute changes do not trigger
superpage demotion. However, pmap_demote_l2() was not updated to ensure
that the replacement L3 entries carry any attribute changes that
occurred since promotion.
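The essence of the fix, sketched (the real pmap_demote_l2() has considerably
more to it; names follow pte.h):

    /* Build the L3 template from the *current* L2 entry so that attribute
     * changes made since promotion are preserved. */
    oldl2 = pmap_load(l2);
    newl3 = (oldl2 & ~ATTR_DESCR_MASK) | L3_PAGE;
    for (i = 0; i < Ln_ENTRIES; i++)
            pmap_store(&l3p[i], newl3 + i * L3_SIZE);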
Reported and tested by: manu
Reviewed by: alc
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D20965
syscallret() doesn't use error anymore. Fix a few other places to permit
removing the return value from syscallenter() entirely.
- Remove a duplicated assertion from arm's syscall().
- Use td_errno for amd64_syscall_ret_flush_l1d.
Reviewed by: kib
MFC after: 1 month
Sponsored by: DARPA
Differential Revision: https://reviews.freebsd.org/D2090
Previously the arm64 pmap did no reference or modification tracking;
all mappings were treated as referenced and all read-write mappings
were treated as dirty. This change implements software management
of these attributes.
Dirty bit management is implemented to emulate ARMv8.1's optional
hardware dirty bit modifier management, following a suggestion from alc.
In particular, a mapping with ATTR_SW_DBM set is logically writeable and
is dirty if the ATTR_AP_RW_BIT bit is clear. Mappings with
ATTR_AP_RW_BIT set are write-protected, and a write access will trigger
a permission fault. pmap_fault() handles permission faults for such
mappings and marks the page dirty by clearing ATTR_AP_RW_BIT, thus
mapping the page read-write.
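The rule can be captured by a small predicate; this is a sketch of what the
tree's pmap_pte_dirty() helper expresses, not a verbatim copy:

    static __inline bool
    pmap_pte_dirty(pt_entry_t pte)
    {

            /* Logically writable and currently read-write => dirty. */
            return ((pte & ATTR_SW_DBM) != 0 &&
                (pte & ATTR_AP_RW_BIT) == 0);
    }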
Reviewed by: alc
MFC after: 1 month
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D20907
TLB entry. Specifically, at the start of pmap_enter_quick_locked(), we
would sometimes have a TLB entry for an invalid PTE, and we would need to
issue a TLB invalidation before exiting pmap_enter_quick_locked(). However,
we should never have a TLB entry for an invalid PTE. r349905 has addressed
the root cause of the problem, and so we no longer need this workaround.
X-MFC after: r349905
Casueword(9) on ll/sc architectures must be prepared for userspace
constantly modifying the same cache line that contains the CAS word,
and not loop infinitely. Otherwise, rogue userspace livelocks the
kernel.
To fix the issue, change the casueword(9) interface to return the new value
1, indicating that either the comparison or the store failed, instead of
relying on the oldval == *oldvalp comparison. The primitive no longer retries
the operation if it failed spuriously. Modify callers of
casueword(9), all in kern_umtx.c, to handle retries, and react to
stops and requests to terminate between retries.
On x86, even though cmpxchg should not return spurious failures, we can
take advantage of the new interface and simply return PSL.ZF.
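A sketch of the caller-side retry pattern (variable names and the suspension
check are illustrative of the kern_umtx.c callers rather than copied from
them):

    for (;;) {
            rv = casueword32(uaddr, UMUTEX_UNOWNED, &owner, id);
            if (rv == -1)
                    return (EFAULT);        /* fault on the userspace word */
            if (rv == 0)
                    break;                  /* compare and store performed */
            /* rv == 1: spurious ll/sc failure; honor stop requests, retry. */
            error = thread_check_susp(td, false);
            if (error != 0)
                    return (error);
    }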
Reviewed by: andrew (arm64, previous version), markj
Tested by: pho
Reported by: https://xenbits.xen.org/xsa/advisory-295.txt
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
Differential revision: https://reviews.freebsd.org/D20772
- Check for ATTR_SW_MANAGED before anything else.
- Use pmap_pte_dirty() in pmap_remove_pages().
No functional change intended.
Reviewed by: alc
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
register values" of the architecture manual, an isb instruction should be
executed after updating ttbr0_el1 and before invalidating the TLB. The
lack of this instruction in pmap_activate() appears to be the reason why
andrew@ and I have observed an unexpected TLB entry for an invalid PTE on
entry to pmap_enter_quick_locked(). Thus, we should now be able to revert
the workaround committed in r349442.
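The ordering at issue, sketched as inline assembly (pmap_activate() itself
uses the C helpers from the arm64 headers; the exact TLBI variant here is
illustrative):

    __asm __volatile(
        "msr  ttbr0_el1, %0 \n"
        "isb                \n"     /* make the TTBR0 write visible...  */
        "tlbi vmalle1       \n"     /* ...before invalidating the TLB   */
        "dsb  nsh           \n"
        "isb                \n"
        : : "r" (new_ttbr0) : "memory");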
Reviewed by: markj
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D20904
of pmap_load_clear(), in places where we don't care about the page table
entry's prior contents.
Eliminate an unnecessary pmap_load() from pmap_remove_all(). Instead, use
the value returned by the pmap_load_clear() on the very next line. (In the
future, when we support "hardware dirty bit management", using the value
from the pmap_load() rather than the pmap_load_clear() would have actually
been an error because the dirty bit could potentially change between the
pmap_load() and the pmap_load_clear().)
A KASSERT() in pmap_enter(), which originated in the amd64 pmap, was meant
to check the value returned by the pmap_load_clear() on the previous line.
However, we were ignoring the value returned by the pmap_load_clear(), and
so the KASSERT() was not serving its intended purpose. Use the value
returned by the pmap_load_clear() in the KASSERT().
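Sketch of the corrected pattern (the KASSERT predicate shown is just an
illustration; the real assertion checks the specifics of the old mapping):

    orig_l3 = pmap_load_clear(l3);          /* don't discard the result */
    KASSERT((orig_l3 & ATTR_DESCR_VALID) != 0,
        ("pmap_enter: invalid old PTE %#lx", orig_l3));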
MFC after: 2 weeks
The hold_count and wire_count fields of struct vm_page are separate
reference counters with similar semantics. The remaining essential
differences are that holds are not counted as a reference with respect
to LRU, and holds have an implicit free-on-last unhold semantic whereas
vm_page_unwire() callers must explicitly determine whether to free the
page once the last reference to the page is released.
This change removes the KPIs which directly manipulate hold_count.
Functions such as vm_fault_quick_hold_pages() now return wired pages
instead. Since r328977 the overhead of maintaining LRU for wired pages
is lower, and in many cases vm_fault_quick_hold_pages() callers would
swap holds for wirings on the returned pages anyway, so with this change
we remove a number of page lock acquisitions.
No functional change is intended. __FreeBSD_version is bumped.
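A sketch of the caller-side effect (error handling and the queue choice are
illustrative):

    /* The returned pages are now wired, so release them with
     * vm_page_unwire() rather than dropping a hold. */
    n = vm_fault_quick_hold_pages(map, uva, len, VM_PROT_READ, ma, nitems(ma));
    if (n == -1)
            return (EFAULT);
    /* ... access the pages ... */
    for (i = 0; i < n; i++)
            vm_page_unwire(ma[i], PQ_INACTIVE);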
Reviewed by: alc, kib
Discussed with: jeff
Discussed with: jhb, np (cxgbe)
Tested by: pho (previous version)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D19247
1. Use _pmap_alloc_l3() instead of pmap_alloc_l3() in order to handle the
possibility that a superpage mapping for "va" was created while we slept.
(This is derived from the amd64 version.)
2. Eliminate code for allocating kernel page table pages. Kernel page
table pages are preallocated by pmap_growkernel().
3. Eliminate duplicated unlock operations when KERN_RESOURCE_SHORTAGE is
returned.
MFC after: 2 weeks
restructure cache_handle_range so that all of the data cache operations are
performed before any instruction cache operations. Then, we only need one
barrier between the data and instruction cache operations and one barrier
after the instruction cache operations.
On an Amazon EC2 a1.2xlarge instance, this simple change reduces the time
for a "make -j8 buildworld" by 9%.
Reviewed by: andrew
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D20848
copy the VFP registers.
armv7 VFP uses 32 64-bit FP registers (which can be used in pairs to form
16 128-bit registers), while aarch64 uses 32 128-bit FP registers, so we
have to copy the value of each register.
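Sketch of the copy (structure and field names are illustrative; the in-tree
VFP/FP state layouts differ in detail):

    /* Each 64-bit VFP Dn register becomes the low half of the
     * corresponding 128-bit Qn register. */
    for (i = 0; i < 32; i++)
            memcpy(&fp64->fp_q[i], &vfp32->vfp_d[i], sizeof(uint64_t));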
that replaced a pmap_invalidate_page() with a dsb(ishst) in
pmap_enter_quick_locked(). Even though this change is in principle
correct, I am seeing occasional, spurious bus errors that are only
reproducible without this pmap_invalidate_page(). (None of adding an
isb, "upgrading" the dsb to wait on loads as well as stores, or
disabling superpage mappings eliminates the bus errors.) Add an XXX
comment explaining why the pmap_invalidate_page() is being performed.
Discussed with: andrew, markj
In set_regs32()/fill_regs32(), we have to get/set SP and LR from/to
tf_x[13] and tf_x[14].
set_regs() and fill_regs() may be called for a 32-bit process if the process
is being ptrace'd from a 64-bit debugger. So, in set_regs() and fill_regs(),
get or set the PC and SPSR where the debugger expects them, in tf_x[15] and
tf_x[16].
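For the 32-bit-specific accessors, the mapping amounts to the following
(fill_regs32() sketch; struct reg32 field names per the arm64 headers):

    for (i = 0; i < 13; i++)
            regs->r[i] = tf->tf_x[i];
    regs->r_sp   = tf->tf_x[13];    /* AArch32 R13 */
    regs->r_lr   = tf->tf_x[14];    /* AArch32 R14 */
    regs->r_pc   = tf->tf_elr;
    regs->r_cpsr = tf->tf_spsr;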
Print warnings for some bad kernel configurations (like NUMA disabled
with multiple domains). Check and report some firmware errors (like
incorrect proximity domain entries).
Differential Revision: https://reviews.freebsd.org/D20416