to fix the memory leak that I introduced in r328426. Instead of
trying to clear up the possible memory leak in all the clients, I
ensure that it gets cleaned up in the source (e.g., ffs_sbget ensures
that memory is always freed if it returns an error).
The original change in r328426 was a bit sparse in its description.
So I am expanding on its description here (thanks cem@ and rgrimes@
for your encouragement for my longer commit messages).
In preparation for adding check hashing to superblocks, r328426 is
a refactoring of the code to get the reading/writing of the superblock
into one place. Unlike the cylinder group reading/writing which
ends up in two places (ffs_getcg/ffs_geom_strategy in the kernel
and cgget/cgput in libufs), I have the core superblock functions
just in the kernel (ffs_sbget/ffs_sbput in ffs_subr.c which is
already imported into utilities like fsck_ffs as well as libufs to
implement sbget/sbput). The ffs_sbget and ffs_sbput functions
take a function pointer to do the actual I/O for which there are
four variants:
ffs_use_bread / ffs_use_bwrite for the in-kernel filesystem
g_use_g_read_data / g_use_g_write_data for kernel geom clients
ufs_use_sa_read for the standalone code (stand/libsa/ufs.c
but not stand/libsa/ufsread.c which is size constrained)
use_pread / use_pwrite for libufs
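A minimal sketch of the callback shape these functions consume, assuming the
read callback takes a device handle, an offset, a buffer pointer to fill, and
a size (the parameter list is paraphrased from the description above, not
quoted from ffs_subr.c):
```
#include <sys/types.h>
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Illustrative libufs-style read callback; the signature is an assumption
 * based on the description above, not the tree's exact one.
 */
static int
example_use_pread(void *devfd, off_t loc, void **bufp, int size)
{
	int fd = *(int *)devfd;		/* assumed: handle wraps an fd */

	if ((*bufp = malloc(size)) == NULL)
		return (ENOMEM);
	if (pread(fd, *bufp, size, loc) != size) {
		free(*bufp);
		*bufp = NULL;		/* never hand back a stale buffer */
		return (EIO);
	}
	return (0);
}
```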
Uses of these interfaces are in the UFS filesystem, geoms journal &
label, libsa changes, and libufs. They also permeate out into the
filesystem utilities fsck_ffs, newfs, growfs, clri, dump, quotacheck,
fsirand, fstyp, and quot. Some of these utilities should probably be
converted to directly use libufs (like dumpfs was for example), but
there does not seem to be much win in doing so.
Tested by: Peter Holm (pho@)
nothing - it was checking for ENXIO, which, with devfs, is no longer
returned - and was badly placed anyway, and replaces it with a similar
one that works, done just before starting getty instead of when
rereading ttys(5).
From a practical point of view, this makes init(8) handle disappearing
terminals (e.g. /dev/ttyU*) gracefully, without unnecessary getty restarts
and the resulting error messages.
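A rough sketch of the kind of check described above, done right before
starting getty; the helper name and the use of stat(2) are illustrative
assumptions, not the actual init(8) diff:
```
#include <sys/stat.h>

/*
 * Hypothetical helper: only (re)start getty if the session's terminal
 * device node still exists (e.g. a /dev/ttyU* node may have gone away).
 */
static int
terminal_present(const char *dev)
{
	struct stat sb;

	return (stat(dev, &sb) == 0 && S_ISCHR(sb.st_mode));
}
```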
Reviewed by: imp@
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D14307
Unlike the existing GLA2GPA ioctl, GLA2GPA_NOFAULT does not modify
the guest. In particular, it does not inject any faults or modify
PTEs in the guest when performing an address space translation.
This is used by bhyve's debug server to read and write memory for
the remote debugger.
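A hedged sketch of how the debug server side might use such a translation;
the wrapper name and signature below are assumptions modeled on the existing
GLA2GPA interface, not quotes from vmmapi.h:
```
#include <sys/mman.h>
#include <stdint.h>

struct vmctx;			/* opaque for this sketch */
struct vm_guest_paging;		/* opaque for this sketch */

/* Assumed wrapper; treat the name and parameter list as illustrative only. */
int	vm_gla2gpa_nofault(struct vmctx *ctx, int vcpu,
	    struct vm_guest_paging *paging, uint64_t gla, int prot,
	    uint64_t *gpa, int *fault);

/*
 * Translate a guest-linear address for the debugger without injecting
 * faults or touching guest PTEs; bail out if the mapping is not present.
 */
static int
debug_translate(struct vmctx *ctx, int vcpu, struct vm_guest_paging *paging,
    uint64_t gla, uint64_t *gpa)
{
	int error, fault;

	error = vm_gla2gpa_nofault(ctx, vcpu, paging, gla, PROT_READ,
	    gpa, &fault);
	return (error == 0 && fault == 0 ? 0 : -1);
}
```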
Reviewed by: grehan
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D14075
Added the ability to:
* Create virtual interfaces
* Create vlan interfaces
* Get interface fib
* Get interface groups
* Get interface status
* Get nd6 info
* Get media status
* Get additional ifaddr info in a convenient struct
* Get vhids
* Get carp info
* Get lagg and laggport status
* Iterate over all interfaces and ifaddrs
And added more examples, too.
Note that this is a backwards-incompatible change. But that's ok, because it's
a private library.
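As a point of reference for the interface/ifaddr iteration capability listed
above, here is a minimal walk using the base system's getifaddrs(3); the new
library's own function names are not quoted in this log, so none are shown:
```
#include <sys/types.h>
#include <sys/socket.h>
#include <ifaddrs.h>
#include <stdio.h>

int
main(void)
{
	struct ifaddrs *ifap, *ifa;

	if (getifaddrs(&ifap) == -1) {
		perror("getifaddrs");
		return (1);
	}
	/* One entry per interface address; the same ifname repeats per family. */
	for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next)
		printf("%s: family %d\n", ifa->ifa_name,
		    ifa->ifa_addr != NULL ? ifa->ifa_addr->sa_family : -1);
	freeifaddrs(ifap);
	return (0);
}
```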
MFC after: 3 weeks
Sponsored by: Spectra Logic Corp
Differential Revision: https://reviews.freebsd.org/D14463
According to the getpeereid(3) documentation, on failure the value -1 is
returned and the global variable errno is set to indicate the error. We
were returning the error instead.
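A small sketch of the documented convention (check for -1, then read errno),
which is what the fix restores:
```
#include <sys/types.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static void
report_peer(int s)
{
	uid_t euid;
	gid_t egid;

	if (getpeereid(s, &euid, &egid) == -1) {
		/* errno, not the return value, carries the error. */
		perror("getpeereid");
		return;
	}
	printf("peer euid %ju egid %ju\n", (uintmax_t)euid, (uintmax_t)egid);
}
```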
Obtained from: Apple's Libc-1244.30.3
MFC after: 5 days
These are a convenience for bhyve's debug server to use a single
ioctl for 'g' and 'G' rather than a loop of individual get/set
ioctl requests.
Reviewed by: grehan
MFC after: 2 months
Differential Revision: https://reviews.freebsd.org/D14074
Instead of passing the flags (which describe the type of nvlist) on
every send/recv, remember them in the channel.
It is enough to extract them only during unwrap.
This simplifies the use of Casper.
Reviewed by: bruffer@, bcr@ (both man page)
Differential Revision: https://reviews.freebsd.org/D14196 (man page)
ffs_sbget() may return a superblock buffer even if it fails, so the
caller must be prepared to free it in this case. Moreover, when tasting
alternate superblock locations in a loop, ffs_sbget()'s readfunc
callback must free the previously allocated buffer.
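A caller-side sketch of that contract, using the libufs-style sbget() name
with an assumed signature for illustration:
```
#include <sys/types.h>
#include <stdlib.h>

struct fs;					/* opaque for this sketch */
int	sbget(int devfd, struct fs **fsp, off_t sblockloc);	/* assumed */

static struct fs *
read_superblock(int devfd, off_t sblockloc)
{
	struct fs *fs = NULL;

	if (sbget(devfd, &fs, sblockloc) != 0) {
		/* A buffer may have been returned even on failure. */
		free(fs);		/* free(NULL) is a no-op */
		return (NULL);
	}
	return (fs);
}
```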
Reported and tested by: pho
Reviewed by: kib (previous version)
Differential Revision: https://reviews.freebsd.org/D14390
C11 standard (ISO/IEC 9899:2011) K.3.7.4.1 The memset_s function
(p: 621-622)
Fix memset(3) portion of the man page by replacing the first argument
(destination) "b" with "dest", which is more descriptive than "b".
This also makes it consistent with the term used in the memset_s()
portion of the man page.
See also http://en.cppreference.com/w/c/string/byte/memset.
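A short usage sketch of memset_s() as specified in K.3.7.4.1; the assumption
here is that the prototype becomes visible by defining __STDC_WANT_LIB_EXT1__
before including <string.h>:
```
#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>

/*
 * Zero a sensitive buffer; unlike a plain memset() that a compiler may
 * elide for a dying object, this call validates its lengths and its
 * write is required to take place.
 */
static int
scrub(void *dest, size_t destsz)
{
	return (memset_s(dest, destsz, 0, destsz) == 0 ? 0 : -1);
}
```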
Reviewed by: kib
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D13682
cddl/usr.sbin/zfsd/zfsd_event.cc
Remove the check for da and ada devices. This way zfsd can work on md,
geli, glabel, gstripe, etc. devices. geli in particular is useful
combined with ZFS. gnop is also useful for simulating drive pulls in
the ZFSD test suite.
Also, eliminate the DevfsEvent class entirely. Move its
responsibilities into GeomEvent. We can get everything we need to know
just from listening to GEOM events.
lib/libdevdctl/event.cc
Fix GeomEvent::DevName for CREATE events. Oddly, the relevant field is
named "cdev" for CREATE events but "devname" for disk events.
MFC after: 3 weeks
Relnotes: Yes (probably worth mentioning the geli part)
Sponsored by: Spectra Logic Corp
As a component of atan2(y, x), the case of x == 1.0 is farmed out to
atan(y). The current implementation of this comparison is vulnerable
to signed integer underflow (that is, undefined behavior), and it's
performed in a somewhat more complicated way than it need be. Change
it to not be quite so cute, rather directly comparing the high/low
bits of x to the specific IEEE-754 bit pattern that encodes 1.0.
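An illustrative sketch of that style of comparison (not the actual diff):
test the raw high/low words of the double against the encoding of 1.0
instead of subtracting, which can underflow a signed 32-bit value:
```
#include <stdint.h>
#include <string.h>

static int
is_exactly_one(double x)
{
	uint64_t bits;
	uint32_t hx, lx;

	memcpy(&bits, &x, sizeof(bits));
	hx = (uint32_t)(bits >> 32);	/* sign, exponent, top of mantissa */
	lx = (uint32_t)bits;		/* low 32 bits of the mantissa */

	/* 1.0 encodes as high word 0x3ff00000 and low word 0. */
	return (hx == 0x3ff00000 && lx == 0);
}
```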
Note that while there are three different e_atan* files in the
relevant directory, only this one needs fixing. e_atan2f.c already
compares against the full bit pattern encoding 1.0f, while
e_atan2l.c uses bitwise-ands/ors/nots and so doesn't require a change.
Closes #130
Submitted by: Jeff Walden (@jswalden github PR #130)
Reviewed by: bde
MFC After: 1 month
Introduce WITH_/WITHOUT_LLVM_COV to match GCC's WITH_/WITHOUT_GCOV.
It is intended to provide a superset of the interface and functionality
of gcov.
It is enabled by default when building Clang, similarly to gcov and GCC.
This change moves one file in libllvm to be compiled unconditionally.
Previously it was included only when WITH_CLANG_EXTRAS was set, but the
complexity of a new special case for (CLANG_EXTRAS | LLVM_COV) is not
worth avoiding a tiny increase in build time.
Reviewed by: dim, imp
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D142645
The GP register can be clobbered by the callback, so save it in S1
while invoking the callback function.
While here, add a comment expounding on the treatment of GP for the
various ABIs and the assumptions made.
Reviewed by: jmallett (earlier version)
Sponsored by: DARPA / AFRL
Differential Revision: https://reviews.freebsd.org/D14179
-Wno-error=tautological-constant-compare again (this flag is now out of
-Wextra after upstream https://reviews.llvm.org/rL322901). Otherwise
the MK_SYSTEM_COMPILER logic will not build a cross-tools compiler.
Reported by: jpaetzel, tuexen, Stefan Hagen
objects' init functions instead of doing the setup via a constructor
in libc as the init functions may already depend on these handlers
to be in place. This gets rid of:
- the undefined order in which libc constructors such as __guard_setup()
and jemalloc_constructor() are executed with respect to __sparc_utrap_setup(),
- the requirement to link libc last so __sparc_utrap_setup() gets
called prior to constructors in other libraries (see r122883).
For static binaries, crt1.o still sets up the user trap handlers.
o Move misplaced prototypes for MD functions into the MD prototype
section of rtld.h.
o Sprinkle nitems().
6.0.0 (branches/release_60 r324090).
This introduces retpoline support, with the -mretpoline flag. The
upstream initial commit message (r323155 by Chandler Carruth) contains
quite a bit of explanation. Quoting:
Introduce the "retpoline" x86 mitigation technique for variant #2 of
the speculative execution vulnerabilities disclosed today,
specifically identified by CVE-2017-5715, "Branch Target Injection",
and is one of the two halves to Spectre.
Summary:
First, we need to explain the core of the vulnerability. Note that
this is a very incomplete description, please see the Project Zero
blog post for details:
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
The basis for branch target injection is to direct speculative
execution of the processor to some "gadget" of executable code by
poisoning the prediction of indirect branches with the address of
that gadget. The gadget in turn contains an operation that provides a
side channel for reading data. Most commonly, this will look like a
load of secret data followed by a branch on the loaded value and then
a load of some predictable cache line. The attacker then uses timing
of the processor's cache to determine which direction the branch took
*in the speculative execution*, and in turn what one bit of the
loaded value was. Due to the nature of these timing side channels and
the branch predictor on Intel processors, this allows an attacker to
leak data only accessible to a privileged domain (like the kernel)
back into an unprivileged domain.
The goal is simple: avoid generating code which contains an indirect
branch that could have its prediction poisoned by an attacker. In
many cases, the compiler can simply use directed conditional branches
and a small search tree. LLVM already has support for lowering
switches in this way and the first step of this patch is to disable
jump-table lowering of switches and introduce a pass to rewrite
explicit indirectbr sequences into a switch over integers.
However, there is no fully general alternative to indirect calls. We
introduce a new construct we call a "retpoline" to implement indirect
calls in a non-speculatable way. It can be thought of loosely as a
trampoline for indirect calls which uses the RET instruction on x86.
Further, we arrange for a specific call->ret sequence which ensures
the processor predicts the return to go to a controlled, known
location. The retpoline then "smashes" the return address pushed onto
the stack by the call with the desired target of the original
indirect call. The result is a predicted return to the next
instruction after a call (which can be used to trap speculative
execution within an infinite loop) and an actual indirect branch to
an arbitrary address.
On 64-bit x86 ABIs, this is especially easily done in the compiler by
using a guaranteed scratch register to pass the target into this
device. For 32-bit ABIs there isn't a guaranteed scratch register
and so several different retpoline variants are introduced to use a
scratch register if one is available in the calling convention and to
otherwise use direct stack push/pop sequences to pass the target
address.
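A rough, hypothetical sketch of that call->ret trampoline (the x86-64 %r11
flavour, written here as file-scope inline assembly in C; illustrative only,
not the code the compiler actually emits):
```
/*
 * Illustrative retpoline-style thunk; the real thunks are emitted by the
 * compiler/linker, and the label names here are made up.
 */
__asm__(
	".text\n"
	"example_retpoline_r11:\n"
	"	call 1f\n"		/* push a controlled return address */
	"2:	pause\n"		/* speculation is trapped here... */
	"	lfence\n"
	"	jmp 2b\n"		/* ...in a harmless infinite loop */
	"1:	mov %r11, (%rsp)\n"	/* replace return addr with real target */
	"	ret\n"			/* architecturally branch to the target */
);
```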
This "retpoline" mitigation is fully described in the following blog
post: https://support.google.com/faqs/answer/7625886
We also support a target feature that disables emission of the
retpoline thunk by the compiler to allow for custom thunks if users
want them. These are particularly useful in environments like
kernels that routinely do hot-patching on boot and want to hot-patch
their thunk to different code sequences. They can write this custom
thunk and use `-mretpoline-external-thunk` *in addition* to
`-mretpoline`. In this case, on x86-64 the thunk names must be:
```
__llvm_external_retpoline_r11
```
or on 32-bit:
```
__llvm_external_retpoline_eax
__llvm_external_retpoline_ecx
__llvm_external_retpoline_edx
__llvm_external_retpoline_push
```
And the target of the retpoline is passed in the named register, or in
the case of the `push` suffix on the top of the stack via a `pushl`
instruction.
There is one other important source of indirect branches in x86 ELF
binaries: the PLT. These patches also include support for LLD to
generate PLT entries that perform a retpoline-style indirection.
The only other indirect branches remaining that we are aware of are
from precompiled runtimes (such as crt0.o and similar). The ones we
have found are not really attackable, and so we have not focused on
them here, but eventually these runtimes should also be replicated for
retpoline-ed configurations for completeness.
For kernels or other freestanding or fully static executables, the
compiler switch `-mretpoline` is sufficient to fully mitigate this
particular attack. For dynamic executables, you must compile *all*
libraries with `-mretpoline` and additionally link the dynamic
executable and all shared libraries with LLD and pass `-z
retpolineplt` (or use similar functionality from some other linker).
We strongly recommend also using `-z now` as non-lazy binding allows
the retpoline-mitigated PLT to be substantially smaller.
When manually applying similar transformations to `-mretpoline` to the
Linux kernel we observed very small performance hits to applications
running typical workloads, and relatively minor hits (approximately
2%) even for extremely syscall-heavy applications. This is largely
due to the small number of indirect branches that occur in
performance sensitive paths of the kernel.
When using these patches on statically linked applications,
especially C++ applications, you should expect to see a much more
dramatic performance hit. For microbenchmarks that are switch,
indirect-, or virtual-call heavy we have seen overheads ranging from
10% to 50%.
However, real-world workloads exhibit substantially lower performance
impact. Notably, techniques such as PGO and ThinLTO dramatically
reduce the impact of hot indirect calls (by speculatively promoting
them to direct calls) and allow optimized search trees to be used to
lower switches. If you need to deploy these techniques in C++
applications, we *strongly* recommend that you ensure all hot call
targets are statically linked (avoiding PLT indirection) and use both
PGO and ThinLTO. Well tuned servers using all of these techniques saw
5% - 10% overhead from the use of retpoline.
We will add detailed documentation covering these components in
subsequent patches, but wanted to make the core functionality
available as soon as possible. Happy for more code review, but we'd
really like to get these patches landed and backported ASAP for
obvious reasons. We're planning to backport this to both 6.0 and 5.0
release streams and get a 5.0 release with just this cherry picked
ASAP for distros and vendors.
This patch is the work of a number of people over the past month:
Eric, Reid, Rui, and myself. I'm mailing it out as a single commit
due to the time sensitive nature of landing this and the need to
backport it. Huge thanks to everyone who helped out here, and
everyone at Intel who helped out in discussions about how to craft
this. Also, credit goes to Paul Turner (at Google, but not an LLVM
contributor) for much of the underlying retpoline design.
Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer
Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D41723
MFC after: 3 months
X-MFC-With: r327952
PR: 224669
In contrast to the existing NetBSD setcontext_link test, these tests
verify that passing from 1 to 6 arguments through to the callback function
works correctly, which can be useful for testing ABIs that split arguments
between registers and the stack.
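A minimal sketch of that kind of test, assuming the usual makecontext(3)
convention of passing int-sized arguments (illustrative, not the committed
test itself):
```
#include <assert.h>
#include <ucontext.h>

static ucontext_t uc_main, uc_func;

static void
cb(int a1, int a2, int a3, int a4, int a5, int a6)
{
	/* Verify every argument survived the register/stack split. */
	assert(a1 == 1 && a2 == 2 && a3 == 3 && a4 == 4 && a5 == 5 && a6 == 6);
}

int
main(void)
{
	static char stack[65536];

	assert(getcontext(&uc_func) == 0);
	uc_func.uc_stack.ss_sp = stack;
	uc_func.uc_stack.ss_size = sizeof(stack);
	uc_func.uc_link = &uc_main;
	makecontext(&uc_func, (void (*)(void))cb, 6, 1, 2, 3, 4, 5, 6);
	assert(swapcontext(&uc_main, &uc_func) == 0);
	return (0);
}
```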
Sponsored by: DARPA / AFRL