-Wno-error=tautological-constant-compare again (this flag is now out of
-Wextra after upstream https://reviews.llvm.org/rL322901). Otherwise
the MK_SYSTEM_COMPILER logic will not build a cross-tools compiler.
Reported by: jpaetzel, tuexen, Stefan Hagen
objects' init functions instead of doing the setup via a constructor
in libc, as the init functions may already depend on these handlers
being in place. This gets rid of:
- the undefined order in which libc constructors such as __guard_setup()
and jemalloc_constructor() are executed WRT __sparc_utrap_setup(),
- the requirement to link libc last so __sparc_utrap_setup() gets
called prior to constructors in other libraries (see r122883).
For static binaries, crt1.o still sets up the user trap handlers.
o Move misplaced prototypes for MD functions into the MD prototype
section of rtld.h.
o Sprinkle nitems().
6.0.0 (branches/release_60 r324090).
This introduces retpoline support, with the -mretpoline flag. The
upstream initial commit message (r323155 by Chandler Carruth) contains
quite a bit of explanation. Quoting:
Introduce the "retpoline" x86 mitigation technique for variant #2 of
the speculative execution vulnerabilities disclosed today,
specifically identified by CVE-2017-5715, "Branch Target Injection",
and is one of the two halves to Spectre.
Summary:
First, we need to explain the core of the vulnerability. Note that
this is a very incomplete description, please see the Project Zero
blog post for details:
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
The basis for branch target injection is to direct speculative
execution of the processor to some "gadget" of executable code by
poisoning the prediction of indirect branches with the address of
that gadget. The gadget in turn contains an operation that provides a
side channel for reading data. Most commonly, this will look like a
load of secret data followed by a branch on the loaded value and then
a load of some predictable cache line. The attacker then uses timing
of the processor's cache to determine which direction the branch took
*in the speculative execution*, and in turn what one bit of the
loaded value was. Due to the nature of these timing side channels and
the branch predictor on Intel processors, this allows an attacker to
leak data only accessible to a privileged domain (like the kernel)
back into an unprivileged domain.
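As a hedged illustration (hypothetical names, shape only), such a gadget might look like:
```c
/* Victim pattern: a branch on loaded secret data followed by loads of
 * predictable cache lines; which line lands in the cache encodes one
 * bit of the secret for a later timing probe. */
static volatile unsigned char secret = 1;	/* stand-in for privileged data */
static unsigned char probe[2 * 4096];

unsigned char
gadget(void)
{
	if (secret & 1)			/* branch on the loaded value */
		return (probe[0]);	/* touches one cache line... */
	return (probe[4096]);		/* ...or a different one */
}
```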
The goal is simple: avoid generating code which contains an indirect
branch that could have its prediction poisoned by an attacker. In
many cases, the compiler can simply use directed conditional branches
and a small search tree. LLVM already has support for lowering
switches in this way and the first step of this patch is to disable
jump-table lowering of switches and introduce a pass to rewrite
explicit indirectbr sequences into a switch over integers.
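For a concrete sense of that first step, a switch like the following is no longer lowered to an indirect jump through a table; a compare-and-branch tree is emitted instead (illustrative C):
```c
/* Under -mretpoline, no jump table (hence no indirect branch) is
 * generated for this switch; a small tree of directed conditional
 * branches is used instead. */
int
classify(int x)
{
	switch (x) {
	case 0: return (10);
	case 1: return (11);
	case 2: return (12);
	case 3: return (13);
	case 4: return (14);
	default: return (-1);
	}
}
```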
However, there is no fully general alternative to indirect calls. We
introduce a new construct we call a "retpoline" to implement indirect
calls in a non-speculatable way. It can be thought of loosely as a
trampoline for indirect calls which uses the RET instruction on x86.
Further, we arrange for a specific call->ret sequence which ensures
the processor predicts the return to go to a controlled, known
location. The retpoline then "smashes" the return address pushed onto
the stack by the call with the desired target of the original
indirect call. The result is a predicted return to the next
instruction after a call (which can be used to trap speculative
execution within an infinite loop) and an actual indirect branch to
an arbitrary address.
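The well-known x86-64 shape of this device, written as a file-scope GNU C asm sketch; the thunk name is illustrative and this shows the general technique, not LLVM's exact emitted code:
```c
/* Sketch of an x86-64 retpoline thunk; the call target is in %r11. */
__asm__(
	".text\n"
	".globl retpoline_r11_sketch\n"
	"retpoline_r11_sketch:\n"
	"	call 1f\n"		/* pushes address of the loop below */
	"2:	pause\n"		/* speculative execution is trapped */
	"	lfence\n"		/* in this infinite loop */
	"	jmp 2b\n"
	"1:	mov %r11, (%rsp)\n"	/* smash return address with target */
	"	ret\n"			/* predicted: loop; actual: %r11 */
);
```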
On 64-bit x86 ABIs, this is especially easily done in the compiler by
using a guaranteed scratch register to pass the target into this
device. For 32-bit ABIs there isn't a guaranteed scratch register
and so several different retpoline variants are introduced to use a
scratch register if one is available in the calling convention and to
otherwise use direct stack push/pop sequences to pass the target
address.
This "retpoline" mitigation is fully described in the following blog
post: https://support.google.com/faqs/answer/7625886
We also support a target feature that disables emission of the
retpoline thunk by the compiler to allow for custom thunks if users
want them. These are particularly useful in environments like
kernels that routinely do hot-patching on boot and want to hot-patch
their thunk to different code sequences. They can write this custom
thunk and use `-mretpoline-external-thunk` *in addition* to
`-mretpoline`. In this case, on x86-64 the thunk names must be:
```
__llvm_external_retpoline_r11
```
or on 32-bit:
```
__llvm_external_retpoline_eax
__llvm_external_retpoline_ecx
__llvm_external_retpoline_edx
__llvm_external_retpoline_push
```
And the target of the retpoline is passed in the named register, or,
in the case of the `push` suffix, on the top of the stack via a `pushl`
instruction.
There is one other important source of indirect branches in x86 ELF
binaries: the PLT. These patches also include support for LLD to
generate PLT entries that perform a retpoline-style indirection.
The only other indirect branches remaining that we are aware of are
from precompiled runtimes (such as crt0.o and similar). The ones we
have found are not really attackable, and so we have not focused on
them here, but eventually these runtimes should also be replicated for
retpoline-ed configurations for completeness.
For kernels or other freestanding or fully static executables, the
compiler switch `-mretpoline` is sufficient to fully mitigate this
particular attack. For dynamic executables, you must compile *all*
libraries with `-mretpoline` and additionally link the dynamic
executable and all shared libraries with LLD and pass `-z
retpolineplt` (or use similar functionality from some other linker).
We strongly recommend also using `-z now` as non-lazy binding allows
the retpoline-mitigated PLT to be substantially smaller.
When manually applying transformations similar to `-mretpoline` to the
Linux kernel we observed very small performance hits to applications
running typical workloads, and relatively minor hits (approximately
2%) even for extremely syscall-heavy applications. This is largely
due to the small number of indirect branches that occur in
performance sensitive paths of the kernel.
When using these patches on statically linked applications,
especially C++ applications, you should expect to see a much more
dramatic performance hit. For microbenchmarks that are switch-,
indirect-, or virtual-call heavy, we have seen overheads ranging from
10% to 50%.
However, real-world workloads exhibit substantially lower performance
impact. Notably, techniques such as PGO and ThinLTO dramatically
reduce the impact of hot indirect calls (by speculatively promoting
them to direct calls) and allow optimized search trees to be used to
lower switches. If you need to deploy these techniques in C++
applications, we *strongly* recommend that you ensure all hot call
targets are statically linked (avoiding PLT indirection) and use both
PGO and ThinLTO. Well-tuned servers using all of these techniques saw
5% - 10% overhead from the use of retpoline.
We will add detailed documentation covering these components in
subsequent patches, but wanted to make the core functionality
available as soon as possible. Happy for more code review, but we'd
really like to get these patches landed and backported ASAP for
obvious reasons. We're planning to backport this to both 6.0 and 5.0
release streams and get a 5.0 release with just this cherry picked
ASAP for distros and vendors.
This patch is the work of a number of people over the past month:
Eric, Reid, Rui, and myself. I'm mailing it out as a single commit
due to the time sensitive nature of landing this and the need to
backport it. Huge thanks to everyone who helped out here, and
everyone at Intel who helped out in discussions about how to craft
this. Also, credit goes to Paul Turner (at Google, but not an LLVM
contributor) for much of the underlying retpoline design.
Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer
Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D41723
MFC after: 3 months
X-MFC-With: r327952
PR: 224669
In contrast to the existing NetBSD setcontext_link test, these tests
verify that passing from 1 to 6 arguments through to the callback function
works correctly, which can be useful for testing ABIs that split arguments
between registers and the stack.
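A minimal sketch of the pattern these tests exercise, assuming a conventional makecontext(3) setup (the callback and values are illustrative):
```c
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t uctx_main, uctx_cb;
static char stack[16384];

/* Six int arguments: on many ABIs some arrive in registers and the
 * rest are spilled to the stack. */
static void
cb(int a1, int a2, int a3, int a4, int a5, int a6)
{
	printf("%d %d %d %d %d %d\n", a1, a2, a3, a4, a5, a6);
}

int
main(void)
{
	if (getcontext(&uctx_cb) == -1)
		exit(1);
	uctx_cb.uc_stack.ss_sp = stack;
	uctx_cb.uc_stack.ss_size = sizeof(stack);
	uctx_cb.uc_link = &uctx_main;	/* resume here when cb returns */
	makecontext(&uctx_cb, (void (*)(void))cb, 6, 1, 2, 3, 4, 5, 6);
	if (swapcontext(&uctx_main, &uctx_cb) == -1)
		exit(1);
	return (0);
}
```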
Sponsored by: DARPA / AFRL
This implementation spills additional arguments on the stack, so it works
fine with more than 6 arguments. I believe the check was just copied
over from sparc64 (which doesn't support spilling onto the stack).
Sponsored by: DARPA / AFRL
NCARGS isn't a limit on the number of arguments to pass to a function,
but the number of bytes that can be consumed by arguments to exec. As
such, it is not suitable as a limit on the count of arguments passed
to makecontext().
Sponsored by: DARPA / AFRL
- Add a new <machine/abi.h> header to hold constants shared between C
and assembly such as CALLFRAME_SZ.
- Add a new STACK_ALIGN constant to <machine/abi.h> and use it to
replace hardcoded constants in the kernel and makecontext(). As a
result of this, ensure the stack pointer on N32 and N64 is 16-byte
aligned rather than 8-byte aligned after exec(), after
pthread_create(), and when sending signals (see the sketch after this list).
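A minimal sketch of the alignment rule from the last item, assuming the new constant's N32/N64 value is 16:
```c
#include <stdint.h>
#include <stdio.h>

#define	STACK_ALIGN	16	/* assumed N32/N64 value */

/* Round a prospective stack pointer down to a STACK_ALIGN boundary. */
static uintptr_t
stack_align(uintptr_t sp)
{
	return (sp & ~((uintptr_t)STACK_ALIGN - 1));
}

int
main(void)
{
	/* 0x7fffffef rounds down to 0x7fffffe0. */
	printf("%#jx\n", (uintmax_t)stack_align(0x7fffffef));
	return (0);
}
```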
Reviewed by: jmallett
Sponsored by: DARPA / AFRL
Differential Revision: https://reviews.freebsd.org/D13875
NetBSD's libedit has been cleaned up considerably, so the
non-wide-character version is no longer an option. Re-sorting the
Makefile should make it easier for some brave soul trying to update it.
No functional change intended.
MFC after: 5 days
The test was added prematurely as a goal to reach with the GNU extension
functionality, but the functionality has not yet been introduced. Mark it as
an expected fail until that point.
- N32 and N64 do not have a $a0-3 gap.
- Use 'sp += 4' to skip over the gap for O32 rather than '+= i'. It
doesn't make a functional change, but makes the code match the comment.
Sponsored by: DARPA / AFRL
Specifically, reading is done in ffs_sbget() and writing is done
in ffs_sbput(). These functions are exported to libufs via the
sbget() and sbput() functions, which are then used in the various
filesystem utilities. This work is in preparation for adding
superblock check hashes.
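A hedged sketch of how a utility might use the new interface; the sbget() signature and the STDSB constant are assumed from the sbget(3) manual page, not quoted from this change:
```c
#include <sys/param.h>
#include <sys/mount.h>
#include <ufs/ufs/ufsmount.h>
#include <ufs/ffs/fs.h>

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#include <libufs.h>

int
main(int argc, char *argv[])
{
	struct fs *fs;
	int fd;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
		exit(1);
	/* sbget() allocates the superblock buffer; the caller frees it. */
	if (sbget(fd, &fs, STDSB) != 0) {
		perror("sbget");
		exit(1);
	}
	printf("%jd fragments\n", (intmax_t)fs->fs_size);
	free(fs);
	return (0);
}
```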
No functional change intended.
Reviewed by: kib
r260553 added a number of mangled C++ symbols to Version.map inside of
an existing `extern "C++"` block.
ld.bfd 2.17.50 treats `extern "C++"` permissively and will match both
mangled and demangled symbols against the strings in the version map
block. ld.lld interprets `extern "C++"` strictly, and matches only
demangled symbols.
I believe lld's behaviour is correct. Contemporary versions of ld.bfd
also behave as lld does, so move the mangled symbols out of the
`extern "C++"` block.
PR: 225128, 185663
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
utilities is done by calling gr_addgid() for each group to be
added (usually found by traversing /etc/group) then calling the
setgroups() system call after the group set has been created.
The gr_addgid() function (helpfully?) deduplicates the addition
of group members. So, if you call it to add a group member that
already exists, it is just dropped. Because group[0] is the
effective group-ID and is over-written when a setgid program
is run, the value in group[0] is usually duplicated later in
the set so that the group value is not lost.
Historically this happened because the group value indicated
in the password file also appears in /etc/group (e.g., if you
are group staff in the password file, you will also appear in
the staff line in /etc/group). But, with the addition of the
deduplication, the attempt to add group staff was lost because
it already appeared in group[0]. So, the fix is to deduplicate
starting from group[1], which allows a duplicate of the entry in
group[0], but not in later entries.
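A sketch of the fixed rule; the helper below is hypothetical, not the actual gr_addgid() code:
```c
#include <sys/types.h>
#include <stdbool.h>
#include <stdio.h>

/* The scan starts at gidset[1]: a duplicate of the effective group-ID
 * held in gidset[0] is allowed, later duplicates are still rejected. */
static bool
gid_is_member(gid_t gid, const gid_t *gidset, int ngroups)
{
	for (int i = 1; i < ngroups; i++)
		if (gidset[i] == gid)
			return (true);
	return (false);
}

int
main(void)
{
	gid_t set[] = { 20 /* egid */, 0, 5 };

	printf("%d\n", gid_is_member(20, set, 3));	/* 0: may be re-added */
	printf("%d\n", gid_is_member(5, set, 3));	/* 1: a real duplicate */
	return (0);
}
```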
There is some confusion about the setgroups system call because in
BSD it has (always) set the entire group including the egid group
(in group[0]). However, in Linux, it skips over group[0] and starts
setting from group[1]. See this comment from linux_setgroups:
/*
 * cr_groups[0] holds egid. Setting the whole set from
 * the supplied set will cause egid to be changed too.
 * Keep cr_groups[0] unchanged to prevent that.
 */
To make it clear what the BSD setgroups system call does, I
added the following paragraph to the setgroups(2) manual page:
The first entry of the group array (gidset[0]) is used as the effective
group-ID for the process. This entry is over-written when a setgid
program is run. To avoid losing access to the privileges of the
gidset[0] entry, it should be duplicated later in the group array.
By convention, this happens because the group value indicated in the
password file also appears in /etc/group. The group value in the
password file is placed in gidset[0] and that value then gets added a
second time when the /etc/group file is scanned to create the group set.
Reported by: Paul McMath paulm at tetrardus.net
Reviewed by: kib
MFC after: 2 weeks
The man page is years out of date regarding errors. Our implementation _does_
allow unaligned addresses, and it _does not_ check for negative lengths,
because the length is unsigned. It checks for overflow instead.
Update the tests accordingly.
Reviewed by: bcr
MFC after: 3 weeks
Differential Revision: https://reviews.freebsd.org/D13826
kib points out that trying to re-use symbol versioning from libc is dirty
and wrong. The implementation in libregex is incompatible by design with the
implementation in libc. Using the symbol versions from libc can and likely
will cause confusion for linkers and bring unexpected behavior for
consumers that unwittingly (transitively) link against libregex.
Reported by: kib
It's become clear that my armv7 builds didn't catch all of the warnings that
other builds are picking up, so drop WARNS to 2 to match libc until they're
all caught.
regcomp uses some libc internal collation bits that are not available in the
libregex context. It's easy enough to bring in the needed parts that can
work in a libregex world, so do so.
Pointy hat to: me
libregex is a regex(3) implementation intended to feature GNU extensions and
any other non-POSIX-compliant extensions that are deemed worthy.
These extensions are split out into a separate library for the sake of
not cluttering up libc further with them, as well as not deteriorating the
speed (or lack thereof) of the libc implementation.
libregex is implemented as a build of the libc implementation with LIBREGEX
defined to distinguish this from a libc build. The reasons for
implementing it this way are twofold:
1.) Maintenance- This reduces the overhead induced by adding yet another
regex implementation to base.
2.) Ease of use- Flipping on GNU extensions will be as simple as linking
against libregex, and POSIX-compliant compilations can be guaranteed with a
REG_POSIX cflag that is ignored by libc/regex and disables extensions
in libregex. It is also easier to keep REG_POSIX sane and POSIX-pure when
implemented in this fashion.
Tests are added for future functionality, but left disconnected for the time
being while other testing is done.
Reviewed by: cem (previous version)
Differential Revision: https://reviews.freebsd.org/D12934
break is probably intended and correct,
but has no correctness implications due to is94 => is96
Reviewed by: cem, jilles
Reported by: swildner@DragonFlyBSD.org
MFC after: 1 week
libc is set for WARNS=2, but the incoming libregex will use WARNS=6.
Sprinkle some casts and (void)bc's to alleviate the warnings that come along
with the higher WARNS level.
These 'bc' parameters could be outright removed, but as of right now they
will be used in some parts of libregex land. Silence the warnings
rather than flip-flopping.
check-hash after making changes to the cylinder group. The problem
was that the journal-recovery code was calling the libufs bwrite()
function instead of the cgput() function. The cgput() function updates
the cylinder-group check-hash before writing the cylinder group.
This change required the addition of the cgget() and cgput() functions
to the libufs API to avoid a gratuitous bcopy of every cylinder group
read or written. These new functions have been added to the
libufs manual pages. This was the first opportunity that I have had
to use and document the use of the EDOOFUS error code.
Reviewed by: kib
Reported by: emaste and others
Allow usage of X86-prefixes as separate instrs.
Differential Revision: https://reviews.llvm.org/D42102
This should fix parse errors when x86 prefixes (such as 'lock' and
'rep') are followed by various non-mnemonic tokens, e.g. comments, .byte
directives and labels.
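For illustration, input of the shape that previously failed to parse, with the prefix standing alone (a minimal sketch, not taken from the PR):
```c
static volatile int counter;

void
bump(void)
{
	/* The 'lock' prefix is separated from its mnemonic by a newline;
	 * clang's integrated assembler used to reject this form. */
	__asm__ volatile(
	    "lock\n\t"
	    "incl %0"
	    : "+m" (counter));
}
```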
PR: 224669,225054
pmcstat request for close will generate a close event.
This event will in turn be received by pmcstat to close the file.
Reviewed by: kib
Tested by: pho
MFC after: 1 week
Sponsored by: Stormshield
In order to crossbuild FreeBSD on Mac/Linux, I also need to build libnetbsd,
and FILE* is not equal to struct __sFILE on those platforms.
Reviewed by: brooks, emaste
Approved by: jhb (mentor)
Differential Revision: https://reviews.freebsd.org/D13305
to the next int position.
Bug was introduced in r324959 ("Extract a set of pmcstat functions and
interfaces to the new internal library -- libpmcstat.")
This fixes pmcstat top mode (-T) operation.
Example: pmcstat -n1 -S clock.hard -T
Reported by: Peter Holm <peter@holm.cc>
Sponsored by: DARPA, AFRL
This should be effectively a nop for all archs, but for some reason the codegen
difference on the PowerPC 970 is such that the struct assignment doesn't work
(unless a printf() using one of the elements in the copied struct follows it),
while the memcpy() succeeds. On all archs the memcpy() should be expanded to an
inline copy, since the copy is bounded to ~16 bytes.
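The shape of the change, as a hedged sketch with illustrative types:
```c
#include <string.h>

struct ev {
	long	a;
	long	b;
};

void
copy_ev(struct ev *dst, const struct ev *src)
{
	/* Was: *dst = *src; which miscompiled on the PowerPC 970. The
	 * bounded memcpy() is expanded to an inline copy instead. */
	memcpy(dst, src, sizeof(*dst));
}
```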
MFC after: 3 weeks
domains can be done by the _domain() API variants. UMA also supports a
first-touch policy via the NUMA zone flag.
The slab layer is now segregated by VM domains and is precise. It handles
iteration for round-robin directly. The per-cpu cache layer remains
a mix of domains, according to where memory is allocated and freed.
Well-behaved clients can achieve perfect locality with no performance
penalty.
The direct domain allocation functions have to visit the slab layer and
so require per-zone locks which come at some expense.
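A hedged kernel-context sketch of the explicit-domain path; malloc_domain()'s exact signature is assumed rather than quoted from this change:
```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/malloc.h>

MALLOC_DEFINE(M_EXAMPLE, "example", "NUMA allocation example");

/* Request memory backed by a specific VM domain; the plain malloc()
 * path keeps its existing first-touch/round-robin behavior. */
void *
alloc_on_domain(size_t size, int domain)
{
	return (malloc_domain(size, M_EXAMPLE, domain, M_WAITOK));
}
```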
Reviewed by: Attilio (a slightly older version)
Tested by: pho
Sponsored by: Netflix, Dell/EMC Isilon
userspace to control NUMA policy administratively and programmatically.
Implement domainset based iterators in the page layer.
Remove the now legacy numa_* syscalls.
Clean up some header pollution created by having seq.h in proc.h.
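A hedged userspace sketch of the programmatic control mentioned above; the syscall signature and constants are assumed from cpuset_getdomain(2):
```c
#include <sys/param.h>
#include <sys/cpuset.h>
#include <sys/domainset.h>

#include <err.h>

int
main(void)
{
	domainset_t mask;

	/* Round-robin the current process's allocations over domains 0-1. */
	DOMAINSET_ZERO(&mask);
	DOMAINSET_SET(0, &mask);
	DOMAINSET_SET(1, &mask);
	if (cpuset_setdomain(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
	    sizeof(mask), &mask, DOMAINSET_POLICY_ROUNDROBIN) != 0)
		err(1, "cpuset_setdomain");
	return (0);
}
```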
Reviewed by: markj, kib
Discussed with: alc
Tested by: pho
Sponsored by: Netflix, Dell/EMC Isilon
Differential Revision: https://reviews.freebsd.org/D13403
would be filesystem type dependent, but that's difficult to accomplish
and it's unclear how the UEFI firmware will cope. Be conservative and
make boot loaders cope instead.
Sponsored by: Netflix
condition. This should prevent a double free. In addition, prevent a
leak by freeing dp on each loop iteration and when we're done.
CID: 1383577
Sponsored by: Netflix
Fix a remainder overflow: in rare cases, adding the remainder to the
divided value pushed the total to 1000 in the final formatting, taking up
the space needed for the unit character.
The fix continues the division of the original number when the above case
happens; the appropriate check was added to the for loop performing
the division. This lowers the value shown, making it fit into the buffer
space provided (1.0M for a 4+1 character buffer, as used by ls).
Add a test case for the reported bug and extend the test program to support
providing the buffer length (ls -lh uses 5, the tests hard-coded 4).
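To make the constraint concrete, a small program using humanize_number(3) with the ls-style 4+1 byte buffer; the input value is illustrative, not the exact one from the PR:
```c
#include <sys/types.h>

#include <libutil.h>
#include <stdio.h>

int
main(void)
{
	char buf[5];	/* 4 characters + NUL, as used by ls -lh */

	/* Just under 1000KiB: rounding the remainder used to produce the
	 * five-character "1000K", overflowing the unit's space; with the
	 * fix the division continues and "1.0M" is shown instead. */
	humanize_number(buf, sizeof(buf), 1023900LL, "",
	    HN_AUTOSCALE, HN_DECIMAL | HN_NOSPACE | HN_B);
	printf("%s\n", buf);
	return (0);
}
```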
PR: 224498
Submitted by: Pawel Biernacki <pawel.biernacki@gmail.com>
Reported by: Masachika Ishizuka <ish@amail.plala.or.jp>
Reviewed by: cem, kib
Approved by: cem, kib
MFC after: 1 week
Sponsored by: Mysterious Code Ltd.
Differential Revision: https://reviews.freebsd.org/D13578