Commit Graph

348 Commits

Author SHA1 Message Date
Glen Barber
91e8ea7f5c Comment -DNDEBUG in head after r339436 when head was switched
from 12.0-ALPHA10 to 13.0-CURRENT.  This edit was a mistake,
and should have been applied to stable/12 upon branching, not
head.

Reported by:	jbeich, dim
Sponsored by:	The FreeBSD Foundation
2018-10-21 15:54:38 +00:00
Ed Maste
b0bffd128c Bump LLD_REVISION_STRING for 13-CURRENT 2018-10-20 17:42:38 +00:00
Glen Barber
b958317950 - Update head to 13.0-CURRENT.
- Bump MACHINE_TRIPLE, TARGET_TRIPLE, FBSD_MAJOR, FBSD_CC_VER,
  FREEBSD_CC_VERSION, OS_VERSION.
- Update comment in UPDATING regarding debugging options.
- Remove debug.witness.trace=0 from installation media.
- Bump __FreeBSD_version.

Approved by:	re (implicit)
Sponsored by:	The FreeBSD Foundation
2018-10-19 00:37:47 +00:00
Ed Maste
b371a9f082 lld: set sh_link and sh_info for .rela.plt sections
The ELF spec says that for SHT_REL and SHT_RELA sh_link should reference the
associated symbol table and sh_info should reference the "section to
which the relocation applies."  ELF Tool Chain's elfcopy / strip use
this (in part) to control whether or not the relocation entry is copied
to the output.

LLVM PR 37538 https://bugs.llvm.org/show_bug.cgi?id=37538
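
As an aside, a minimal libelf-based sketch (not from the commit; link with
-lelf, error handling omitted) showing how a tool like elfcopy/strip can read
sh_link and sh_info from relocation sections:

```
#include <fcntl.h>
#include <gelf.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
	if (argc < 2 || elf_version(EV_CURRENT) == EV_NONE)
		exit(1);

	int fd = open(argv[1], O_RDONLY);
	Elf *e = elf_begin(fd, ELF_C_READ, NULL);
	size_t shstrndx;
	elf_getshdrstrndx(e, &shstrndx);

	Elf_Scn *scn = NULL;
	while ((scn = elf_nextscn(e, scn)) != NULL) {
		GElf_Shdr sh;
		gelf_getshdr(scn, &sh);
		if (sh.sh_type != SHT_REL && sh.sh_type != SHT_RELA)
			continue;
		/* sh_link: associated symbol table; sh_info: the section
		 * the relocations apply to (0 in pre-fix lld output). */
		printf("%s: sh_link=%u sh_info=%u\n",
		    elf_strptr(e, shstrndx, sh.sh_name),
		    sh.sh_link, sh.sh_info);
	}
	elf_end(e);
	return (0);
}
```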

Approved by:	re (kib)
Obtained from:	llvm r344226 (backported for 6.0)
2018-10-11 13:19:17 +00:00
Ed Maste
ea28e71e86 clang: allow ifunc resolvers to accept arguments
Previously Clang required ifunc resolution functions to take no
arguments, presumably because GCC documented ifunc resolvers as taking
no arguments.  However, GCC accepts resolvers accepting arguments, and
our rtld passes CPU ID information (cpuid, hwcap, etc.) to ifunc
resolvers.  Just remove the check from our in-tree compiler; a different
(per-OS) approach may be required upstream.
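
For illustration, a hedged sketch of a resolver that accepts an argument (the
hwcap parameter and its bit test are hypothetical; the exact values rtld
passes vary by architecture):

```
#include <stdint.h>

static int impl_generic(void) { return (0); }
static int impl_fast(void)    { return (1); }

/* Hypothetical resolver taking a CPU-capability word from rtld; with this
 * change the in-tree clang no longer rejects the non-empty parameter list. */
static int (*resolve_do_thing(uint32_t hwcap))(void)
{
	return ((hwcap & 0x1) ? impl_fast : impl_generic);
}

int do_thing(void) __attribute__((ifunc("resolve_do_thing")));
```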

Reported by:	mjg
Approved by:	re (rgrimes)
MFC after:	1 week
Relnotes:	Yes
Sponsored by:	The FreeBSD Foundation
2018-09-29 20:01:23 +00:00
Dimitry Andric
89edb881e6 Add optional LLVM BPF target support
BPF (eBPF) is an independent instruction set architecture that was
introduced in Linux a few years ago.  Originally, the only eBPF execution
environment was inside the Linux kernel.  In recent years, however, user
space implementations have appeared (https://github.com/iovisor/ubpf,
https://doc.dpdk.org/guides/prog_guide/bpf_lib.html), and a kernel space
implementation for FreeBSD is in progress
(https://github.com/YutaroHayakawa/generic-ebpf).

The BPF target support can be enabled using WITH_LLVM_TARGET_BPF, as it
is not built by default.
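
As a hedged illustration (file, function and magic value are made up), a
build with WITH_LLVM_TARGET_BPF can compile a trivial filter with something
like `clang -O2 -target bpf -c filter.c`:

```
#include <stdint.h>

/* Trivial eBPF-style filter: accept a packet only if its first byte
 * matches a magic value.  Purely illustrative. */
uint64_t
accept_packet(void *pkt)
{
	uint8_t first = *(uint8_t *)pkt;

	return (first == 0x45);
}
```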

Submitted by:	Yutaro Hayakawa <yhayakawa3720@gmail.com>
Reviewed by:	dim, bdrewery
Differential Revision: https://reviews.freebsd.org/D16033
2018-08-09 21:28:31 +00:00
Ed Maste
837f338599 bump lld version number after r336972 arm(v7) VFP tag support
Reported by:	kevans
Sponsored by:	The FreeBSD Foundation
2018-07-31 21:06:28 +00:00
Ed Maste
e92a42059b llvm: remove __FreeBSD_version conditionals
All supported FreeBSD build host versions have backtrace.h, so we can
just eliminate that test.  For futimes() we can test the compiler's
built-in __FreeBSD__ major version rather than relying on including
osreldate.h.  This should reduce the frequency with which Clang gets
rebuilt when building world.
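
A hedged sketch of the approach (the macro name and version threshold are
illustrative, not the exact in-tree change):

```
/* Before: pulled in the build host's <osreldate.h>.
 *   #include <osreldate.h>
 *   #if __FreeBSD_version >= 900000 ...
 *
 * After: rely on the compiler's built-in __FreeBSD__ major version.
 * (HAVE_FUTIMES and the "9" threshold are illustrative only.)
 */
#if defined(__FreeBSD__) && __FreeBSD__ >= 9
#define HAVE_FUTIMES 1
#endif
```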

Reviewed by:	dim
Sponsored by:	The FreeBSD Foundation
2018-07-25 00:06:18 +00:00
Dimitry Andric
55458465af More follow-up to r335799 (llvm/clang 6.0.1 update), where I forgot to
update mtree files, ObsoleteFiles and a number of other paths.  Sorry
about all the breakage.

Pointy hat to:	me
MFC after:	2 weeks
X-MFC-With:	r335799
2018-06-30 15:03:22 +00:00
Dimitry Andric
fbfca78ed2 Follow-up to r335799 (llvm/clang 6.0.1 update), by regenerating various
headers with new version information defines.

MFC after:	2 weeks
X-MFC-With:	r335799
2018-06-30 10:04:44 +00:00
Dimitry Andric
6ccc06f6cb Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
6.0.1 release (upstream r335540).

Relnotes:	yes
MFC after:	2 weeks
2018-06-29 17:51:35 +00:00
Bryan Drewery
caf90252d5 Revert r335449 and add needed MK_LLD_BOOTSTRAP check for SRCS_MIW.
This effectively reverts r335449 and changes the previous MK_LLD_IS_LD
to a MK_LLD_BOOTSTRAP check.  If !TOOLS_PREFIX then these sources are
always built for llvm-objdump, lld, and llvm-cov.  When TOOLS_PREFIX
is set then they are only needed if lld is being bootstrapped.

Reported by:	dim
Pointyhat to:	bdrewery
Sponsored by:	Dell EMC
2018-06-22 17:58:56 +00:00
Dimitry Andric
cbafd2630b Add support for selectively enabling LLVM targets
This makes it possible, through src.conf(5) settings, to select which
LLVM targets you want to build during buildworld.  The current list is:

* (WITH|WITHOUT)_LLVM_TARGET_AARCH64
* (WITH|WITHOUT)_LLVM_TARGET_ARM
* (WITH|WITHOUT)_LLVM_TARGET_MIPS
* (WITH|WITHOUT)_LLVM_TARGET_POWERPC
* (WITH|WITHOUT)_LLVM_TARGET_SPARC
* (WITH|WITHOUT)_LLVM_TARGET_X86

To avoid changing current behavior, all of these are on by default whenever
clang is enabled.

Selectively turning a few targets off manually should work.  Turning on
only one target should work too, even if that target does not correspond
to the build architecture.  (In that case, LLVM_NATIVE_ARCH will not be
defined, and you can only use the resulting clang executable for
cross-compiling.)

I performed a few measurements on one of the FreeBSD.org reference
machines, building clang from scratch, with all targets enabled, and
with only the x86 target enabled.  The latter was ~12% faster in real
time (on a 32-core box), and ~14% faster in user time.  For a full
buildworld the difference will probably be less pronounced, though.
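
For example (illustrative file name and triple), a clang built with only
WITH_LLVM_TARGET_AARCH64 on an x86 host can still be used as a
cross-compiler:

```
/* hello.c -- cross-compiled with the single-target clang, e.g.:
 *   clang -target aarch64-unknown-freebsd -c hello.c -o hello.o
 */
#include <stdio.h>

int
main(void)
{
	printf("hello from a cross-compiled world\n");
	return (0);
}
```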

Reviewed by:	bdrewery
MFC after:	1 week
Differential Revision: https://reviews.freebsd.org/D11077
2018-06-22 15:00:00 +00:00
Bryan Drewery
a883ed8c1d Fix sources needed for lld.
lld always needs these DWARF sources, as well as other default and extra
tools. XDL seems to be the best fit list.

Remove MK_LLD_IS_LD check from SRCS_MIW which is now reduced to just a
few files for llvm-objdump.

Sponsored by:	Dell EMC
Differential Revision:	https://reviews.freebsd.org/D15915
2018-06-20 16:10:10 +00:00
Bryan Drewery
781872781e Rework WITHOUT_LLD/TOOLCHAIN fix from r327892 for cross-tools.
MK_LLD is for the installed lld while MK_LLD_BOOTSTRAP is for the build
tool.  For WITH_SYSTEM_LINKER it is necessary to separate the logic of
these two.  When building libllvm TOOLS_PREFIX will be defined and
MK_LLD_BOOTSTRAP should be checked instead.

Sponsored by:	Dell EMC
Differential Revision:	https://reviews.freebsd.org/D15837
2018-06-20 16:10:07 +00:00
Ed Maste
19703503ba lld: Omit PT_NOTE for SHT_NOTE without SHF_ALLOC
A non-alloc note section should not have a PT_NOTE program header.

Found while linking ghc (Haskell compiler) with lld on FreeBSD.  Haskell
emits a .debug-ghc-link-info note section (as the name suggests, it
contains link info) as a SHT_NOTE section without SHF_ALLOC set.

For this case ld.bfd does not emit a PT_NOTE segment for
.debug-ghc-link-info.  lld previously emitted a PT_NOTE with p_vaddr = 0
and FreeBSD's rtld segfaulted when trying to parse a note at address 0.

LLVM PR:	https://llvm.org/pr37361
LLVM review:	https://reviews.llvm.org/D46623
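
A minimal sketch of such a non-alloc SHT_NOTE section (the section name is
taken from the report above; the payload is illustrative and not a
well-formed note record):

```
/* The empty flags string leaves SHF_ALLOC unset, so after this fix lld
 * does not create a PT_NOTE program header for the section. */
__asm__(
    ".pushsection .debug-ghc-link-info,\"\",@note\n\t"
    ".asciz \"example link info\"\n\t"
    ".popsection");

int
main(void)
{
	return (0);
}
```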

PR:		226872
Reviewed by:	dim
Sponsored by:	The FreeBSD Foundation
2018-05-09 11:17:01 +00:00
Dimitry Andric
0556cfadc2 Recommit r332501, with an additional upstream fix for "Cannot lower
EFLAGS copy that lives out of a basic block!" errors on i386.

Pull in r325446 from upstream clang trunk (by me):

  [X86] Add 'sahf' CPU feature to frontend

  Summary:
  Make clang accept `-msahf` (and `-mno-sahf`) flags to activate the
  `+sahf` feature for the backend, for bug 36028 (Incorrect use of
  pushf/popf enables/disables interrupts on amd64 kernels).  This was
  originally submitted in bug 36037 by Jonathan Looney
  <jonlooney@gmail.com>.

  As described there, GCC also uses `-msahf` for this feature, and the
  backend already recognizes the `+sahf` feature. All that is needed is
  to teach clang to pass this on to the backend.

  The mapping of feature support onto CPUs may not be complete; rather,
  it was chosen to match LLVM's idea of which CPUs support this feature
  (see lib/Target/X86/X86.td).

  I also updated the affected test case (CodeGen/attr-target-x86.c) to
  match the emitted output.

  Reviewers: craig.topper, coby, efriedma, rsmith

  Reviewed By: craig.topper

  Subscribers: emaste, cfe-commits

  Differential Revision: https://reviews.llvm.org/D43394

Pull in r328944 from upstream llvm trunk (by Chandler Carruth):

  [x86] Expose more of the condition conversion routines in the public
  API for X86's instruction information. I've now got a second patch
  under review that needs these same APIs. This bit is nicely
  orthogonal and obvious, so landing it. NFC.

Pull in r329414 from upstream llvm trunk (by Craig Topper):

  [X86] Merge itineraries for CLC, CMC, and STC.

  These are very simple flag setting instructions that appear to only
  be a single uop. They're unlikely to need this separation.

Pull in r329657 from upstream llvm trunk (by Chandler Carruth):

  [x86] Introduce a pass to begin more systematically fixing PR36028
  and similar issues.

  The key idea is to lower COPY nodes populating EFLAGS by scanning the
  uses of EFLAGS and introducing dedicated code to preserve the
  necessary state in a GPR. In the vast majority of cases, these uses
  are cmovCC and jCC instructions. For such cases, we can very easily
  save and restore the necessary information by simply inserting a
  setCC into a GPR where the original flags are live, and then testing
  that GPR directly to feed the cmov or conditional branch.

  However, things are a bit more tricky if arithmetic is using the
  flags.  This patch handles the vast majority of cases that seem to
  come up in practice: adc, adcx, adox, rcl, and rcr; all without
  taking advantage of partially preserved EFLAGS as LLVM doesn't
  currently model that at all.

  There are a large number of operations that technically observe
  EFLAGS currently but shouldn't in this case -- they typically are
  using DF.  Currently, they will not be handled by this approach.
  However, I have never seen this issue come up in practice. It is
  already pretty rare to have these patterns come up in practical code
  with LLVM. I had to resort to writing MIR tests to cover most of the
  logic in this pass already.  I suspect even with its current amount
  of coverage of arithmetic users of EFLAGS it will be a significant
  improvement over the current use of pushf/popf. It will also produce
  substantially faster code in most of the common patterns.

  This patch also removes all of the old lowering for EFLAGS copies,
  and the hack that forced us to use a frame pointer when EFLAGS copies
  were found anywhere in a function so that the dynamic stack
  adjustment wasn't a problem. None of this is needed as we now lower
  all of these copies directly in MI and without requiring stack
  adjustments.

  Lots of thanks to Reid who came up with several aspects of this
  approach, and Craig who helped me work out a couple of things
  tripping me up while working on this.

  Differential Revision: https://reviews.llvm.org/D45146

Pull in r329673 from upstream llvm trunk (by Chandler Carruth):

  [x86] Model the direction flag (DF) separately from the rest of
  EFLAGS.

  This cleans up a number of operations that only claimed to use EFLAGS
  due to using DF. But no instructions which we think of as setting
  EFLAGS actually modify DF (other than things like popf) and so this
  needlessly creates uses of EFLAGS that aren't really there.

  In fact, DF is so restrictive it is pretty easy to model. Only STD,
  CLD, and the whole-flags writes (WRFLAGS and POPF) need to model
  this.

  I've also somewhat cleaned up some of the flag management instruction
  definitions to be in the correct .td file.

  Adding this extra register also uncovered a failure to use the
  correct datatype to hold X86 registers, and I've corrected that as
  necessary here.

  Differential Revision: https://reviews.llvm.org/D45154

Pull in r330264 from upstream llvm trunk (by Chandler Carruth):

  [x86] Fix PR37100 by teaching the EFLAGS copy lowering to rewrite
  uses across basic blocks in the limited cases where it is very
  straight forward to do so.

  This will also be useful for other places where we do some limited
  EFLAGS propagation across CFG edges and need to handle copy rewrites
  afterward. I think this is rapidly approaching the maximum we can and
  should be doing here. Everything else begins to require either heroic
  analysis to prove how to do PHI insertion manually, or somehow
  managing arbitrary PHI-ing of EFLAGS with general PHI insertion.
  Neither of these seem at all promising so if those cases come up,
  we'll almost certainly need to rewrite the parts of LLVM that produce
  those patterns.

  We do now require dominator trees in order to reliably diagnose
  patterns that would require PHI nodes. This is a bit unfortunate but
  it seems better than the completely mysterious crash we would get
  otherwise.

  Differential Revision: https://reviews.llvm.org/D45673

Together, these should ensure clang does not use pushf/popf sequences to
save and restore flags, avoiding problems with unrelated flags (such as
the interrupt flag) being restored unexpectedly.
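
For illustration only (not code from this commit), the lahf/sahf idiom the
`+sahf` feature lets the backend use instead of pushf/popf; lahf/sahf only
touch the arithmetic flags, and are missing on some early amd64 CPUs, which
is why this is modeled as a CPU feature:

```
static inline unsigned int
save_arith_flags(void)
{
	unsigned int eax = 0;

	/* lahf copies SF, ZF, AF, PF and CF into %ah. */
	__asm__ volatile("lahf" : "+a"(eax));
	return ((eax >> 8) & 0xff);
}

static inline void
restore_arith_flags(unsigned int flags)
{
	unsigned int eax = (flags & 0xff) << 8;

	/* sahf loads SF, ZF, AF, PF and CF back from %ah. */
	__asm__ volatile("sahf" :: "a"(eax) : "cc");
}
```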

Requested by:	jtl
PR:		225330
MFC after:	1 week
2018-04-20 18:20:55 +00:00
Ed Maste
0873080489 lld: use correct number of digits in __FreeBSD_version-style ID
__FreeBSD_version-style IDs should have 5 digits following the major version.
2018-04-20 00:59:53 +00:00
Ed Maste
12881601e5 lld: add a __FreeBSD_version-style identifier to version
This will facilitate a WITH_SYSTEM_LINKER option.

Reviewed by:	dim
MFC after:	1 week
Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D15110
2018-04-17 16:21:23 +00:00
Dimitry Andric
6ec30ab86a Revert r332501 for now, as it can cause build failures on i386.
Reported upstream as <https://bugs.llvm.org/show_bug.cgi?id=37133>.

Reported by:	emaste, ci.freebsd.org
PR:		225330
2018-04-14 14:57:32 +00:00
Dimitry Andric
0ae629bdd6 Pull in r325446 from upstream clang trunk (by me):

  [X86] Add 'sahf' CPU feature to frontend

  Summary:
  Make clang accept `-msahf` (and `-mno-sahf`) flags to activate the
  `+sahf` feature for the backend, for bug 36028 (Incorrect use of
  pushf/popf enables/disables interrupts on amd64 kernels).  This was
  originally submitted in bug 36037 by Jonathan Looney
  <jonlooney@gmail.com>.

  As described there, GCC also uses `-msahf` for this feature, and the
  backend already recognizes the `+sahf` feature. All that is needed is
  to teach clang to pass this on to the backend.

  The mapping of feature support onto CPUs may not be complete; rather,
  it was chosen to match LLVM's idea of which CPUs support this feature
  (see lib/Target/X86/X86.td).

  I also updated the affected test case (CodeGen/attr-target-x86.c) to
  match the emitted output.

  Reviewers: craig.topper, coby, efriedma, rsmith

  Reviewed By: craig.topper

  Subscribers: emaste, cfe-commits

  Differential Revision: https://reviews.llvm.org/D43394

Pull in r328944 from upstream llvm trunk (by Chandler Carruth):

  [x86] Expose more of the condition conversion routines in the public
  API for X86's instruction information. I've now got a second patch
  under review that needs these same APIs. This bit is nicely
  orthogonal and obvious, so landing it. NFC.

Pull in r329414 from upstream llvm trunk (by Craig Topper):

  [X86] Merge itineraries for CLC, CMC, and STC.

  These are very simple flag setting instructions that appear to only
  be a single uop. They're unlikely to need this separation.

Pull in r329657 from upstream llvm trunk (by Chandler Carruth):

  [x86] Introduce a pass to begin more systematically fixing PR36028
  and similar issues.

  The key idea is to lower COPY nodes populating EFLAGS by scanning the
  uses of EFLAGS and introducing dedicated code to preserve the
  necessary state in a GPR. In the vast majority of cases, these uses
  are cmovCC and jCC instructions. For such cases, we can very easily
  save and restore the necessary information by simply inserting a
  setCC into a GPR where the original flags are live, and then testing
  that GPR directly to feed the cmov or conditional branch.

  However, things are a bit more tricky if arithmetic is using the
  flags.  This patch handles the vast majority of cases that seem to
  come up in practice: adc, adcx, adox, rcl, and rcr; all without
  taking advantage of partially preserved EFLAGS as LLVM doesn't
  currently model that at all.

  There are a large number of operations that technically observe
  EFLAGS currently but shouldn't in this case -- they typically are
  using DF.  Currently, they will not be handled by this approach.
  However, I have never seen this issue come up in practice. It is
  already pretty rare to have these patterns come up in practical code
  with LLVM. I had to resort to writing MIR tests to cover most of the
  logic in this pass already.  I suspect even with its current amount
  of coverage of arithmetic users of EFLAGS it will be a significant
  improvement over the current use of pushf/popf. It will also produce
  substantially faster code in most of the common patterns.

  This patch also removes all of the old lowering for EFLAGS copies,
  and the hack that forced us to use a frame pointer when EFLAGS copies
  were found anywhere in a function so that the dynamic stack
  adjustment wasn't a problem. None of this is needed as we now lower
  all of these copies directly in MI and without requiring stack
  adjustments.

  Lots of thanks to Reid who came up with several aspects of this
  approach, and Craig who helped me work out a couple of things
  tripping me up while working on this.

  Differential Revision: https://reviews.llvm.org/D45146

Pull in r329673 from upstream llvm trunk (by Chandler Carruth):

  [x86] Model the direction flag (DF) separately from the rest of
  EFLAGS.

  This cleans up a number of operations that only claimed to use EFLAGS
  due to using DF. But no instructions which we think of as setting
  EFLAGS actually modify DF (other than things like popf) and so this
  needlessly creates uses of EFLAGS that aren't really there.

  In fact, DF is so restrictive it is pretty easy to model. Only STD,
  CLD, and the whole-flags writes (WRFLAGS and POPF) need to model
  this.

  I've also somewhat cleaned up some of the flag management instruction
  definitions to be in the correct .td file.

  Adding this extra register also uncovered a failure to use the
  correct datatype to hold X86 registers, and I've corrected that as
  necessary here.

  Differential Revision: https://reviews.llvm.org/D45154

Together, these should ensure clang does not use pushf/popf sequences to
save and restore flags, avoiding problems with unrelated flags (such as
the interrupt flag) being restored unexpectedly.

Requested by:	jtl
PR:		225330
MFC after:	1 week
2018-04-14 12:07:05 +00:00
Dimitry Andric
c5a4cd4f85 Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
6.0.0 release (upstream r326565).

Release notes for llvm, clang and lld will be available here soon:
<http://releases.llvm.org/6.0.0/docs/ReleaseNotes.html>
<http://releases.llvm.org/6.0.0/tools/clang/docs/ReleaseNotes.html>
<http://releases.llvm.org/6.0.0/tools/lld/docs/ReleaseNotes.html>

Relnotes:	yes
MFC after:	3 months
X-MFC-With:	r327952
PR:		224669
2018-03-04 17:06:37 +00:00
Dimitry Andric
4f8786afe3 Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
6.0.0 (branches/release_60 r325932).  This corresponds to 6.0.0 rc3.

MFC after:	3 months
X-MFC-With:	r327952
PR:		224669
2018-02-25 13:20:32 +00:00
Dimitry Andric
954b921d66 Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
6.0.0 (branches/release_60 r325330).

MFC after:	3 months
X-MFC-With:	r327952
PR:		224669
2018-02-16 20:45:32 +00:00
Ed Maste
1b49115a40 Promote llvm-cov to a standalone option
Introduce WITH_/WITHOUT_LLVM_COV to match GCC's WITH_/WITHOUT_GCOV.
It is intended to provide a superset of the interface and functionality
of gcov.

It is enabled by default when building Clang, similarly to gcov and GCC.

This change moves one file in libllvm to be compiled unconditionally.
Previously it was included only when WITH_CLANG_EXTRAS was set, but the
complexity of a new special case for (CLANG_EXTRAS | LLVM_COV) is not
worth avoiding a tiny increase in build time.
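
For illustration of the gcov-compatible workflow (file name made up; the
flags are standard clang/gcov-style options, not specific to this change):

```
/* cov_demo.c -- a branchy function to exercise coverage.  Typical use:
 *   cc --coverage -o cov_demo cov_demo.c && ./cov_demo
 *   llvm-cov gcov cov_demo.c     (prints per-line execution counts)
 */
#include <stdio.h>

int
classify(int v)
{
	if (v < 0)
		return (-1);
	if (v == 0)
		return (0);
	return (1);
}

int
main(void)
{
	printf("%d %d\n", classify(-3), classify(7));
	return (0);
}
```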

Reviewed by:	dim, imp
Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D142645
2018-02-10 00:22:35 +00:00
Dimitry Andric
68a574863d Bump clang's __FreeBSD_cc_version, to cope with r328816, which removed
-Wno-error=tautological-constant-compare again (this flag is now out of
-Wextra after upstream https://reviews.llvm.org/rL322901).  Otherwise
the MK_SYSTEM_COMPILER logic will not build a cross-tools compiler.

Reported by:	jpaetzel, tuexen, Stefan Hagen
2018-02-04 20:33:47 +00:00
Dimitry Andric
07577dfe2e Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
6.0.0 (branches/release_60 r324090).

This introduces retpoline support, with the -mretpoline flag.  The
upstream initial commit message (r323155 by Chandler Carruth) contains
quite a bit of explanation.  Quoting:

  Introduce the "retpoline" x86 mitigation technique for variant #2 of
  the speculative execution vulnerabilities disclosed today,
  specifically identified by CVE-2017-5715, "Branch Target Injection",
  and is one of the two halves to Spectre.

  Summary:
  First, we need to explain the core of the vulnerability. Note that
  this is a very incomplete description, please see the Project Zero
  blog post for details:
  https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html

  The basis for branch target injection is to direct speculative
  execution of the processor to some "gadget" of executable code by
  poisoning the prediction of indirect branches with the address of
  that gadget. The gadget in turn contains an operation that provides a
  side channel for reading data. Most commonly, this will look like a
  load of secret data followed by a branch on the loaded value and then
  a load of some predictable cache line. The attacker then uses timing
  of the processor's cache to determine which direction the branch took
  *in the speculative execution*, and in turn what one bit of the
  loaded value was. Due to the nature of these timing side channels and
  the branch predictor on Intel processors, this allows an attacker to
  leak data only accessible to a privileged domain (like the kernel)
  back into an unprivileged domain.

  The goal is simple: avoid generating code which contains an indirect
  branch that could have its prediction poisoned by an attacker. In
  many cases, the compiler can simply use directed conditional branches
  and a small search tree. LLVM already has support for lowering
  switches in this way and the first step of this patch is to disable
  jump-table lowering of switches and introduce a pass to rewrite
  explicit indirectbr sequences into a switch over integers.

  However, there is no fully general alternative to indirect calls. We
  introduce a new construct we call a "retpoline" to implement indirect
  calls in a non-speculatable way. It can be thought of loosely as a
  trampoline for indirect calls which uses the RET instruction on x86.
  Further, we arrange for a specific call->ret sequence which ensures
  the processor predicts the return to go to a controlled, known
  location. The retpoline then "smashes" the return address pushed onto
  the stack by the call with the desired target of the original
  indirect call. The result is a predicted return to the next
  instruction after a call (which can be used to trap speculative
  execution within an infinite loop) and an actual indirect branch to
  an arbitrary address.

  On 64-bit x86 ABIs, this is especially easily done in the compiler by
  using a guaranteed scratch register to pass the target into this
  device.  For 32-bit ABIs there isn't a guaranteed scratch register
  and so several different retpoline variants are introduced to use a
  scratch register if one is available in the calling convention and to
  otherwise use direct stack push/pop sequences to pass the target
  address.

  This "retpoline" mitigation is fully described in the following blog
  post: https://support.google.com/faqs/answer/7625886

  We also support a target feature that disables emission of the
  retpoline thunk by the compiler to allow for custom thunks if users
  want them.  These are particularly useful in environments like
  kernels that routinely do hot-patching on boot and want to hot-patch
  their thunk to different code sequences. They can write this custom
  thunk and use `-mretpoline-external-thunk` *in addition* to
  `-mretpoline`. In this case, on x86-64 the thunk names must be:
  ```
    __llvm_external_retpoline_r11
  ```
  or on 32-bit:
  ```
    __llvm_external_retpoline_eax
    __llvm_external_retpoline_ecx
    __llvm_external_retpoline_edx
    __llvm_external_retpoline_push
  ```
  And the target of the retpoline is passed in the named register, or in
  the case of the `push` suffix on the top of the stack via a `pushl`
  instruction.

  There is one other important source of indirect branches in x86 ELF
  binaries: the PLT. These patches also include support for LLD to
  generate PLT entries that perform a retpoline-style indirection.

  The only other indirect branches remaining that we are aware of are
  from precompiled runtimes (such as crt0.o and similar). The ones we
  have found are not really attackable, and so we have not focused on
  them here, but eventually these runtimes should also be replicated for
  retpoline-ed configurations for completeness.

  For kernels or other freestanding or fully static executables, the
  compiler switch `-mretpoline` is sufficient to fully mitigate this
  particular attack. For dynamic executables, you must compile *all*
  libraries with `-mretpoline` and additionally link the dynamic
  executable and all shared libraries with LLD and pass `-z
  retpolineplt` (or use similar functionality from some other linker).
  We strongly recommend also using `-z now` as non-lazy binding allows
  the retpoline-mitigated PLT to be substantially smaller.

  When manually applying similar transformations to `-mretpoline` to the
  Linux kernel we observed very small performance hits to applications
  running typical workloads, and relatively minor hits (approximately
  2%) even for extremely syscall-heavy applications. This is largely
  due to the small number of indirect branches that occur in
  performance sensitive paths of the kernel.

  When using these patches on statically linked applications,
  especially C++ applications, you should expect to see a much more
  dramatic performance hit. For microbenchmarks that are switch,
  indirect-, or virtual-call heavy we have seen overheads ranging from
  10% to 50%.

  However, real-world workloads exhibit substantially lower performance
  impact. Notably, techniques such as PGO and ThinLTO dramatically
  reduce the impact of hot indirect calls (by speculatively promoting
  them to direct calls) and allow optimized search trees to be used to
  lower switches. If you need to deploy these techniques in C++
  applications, we *strongly* recommend that you ensure all hot call
  targets are statically linked (avoiding PLT indirection) and use both
  PGO and ThinLTO. Well tuned servers using all of these techniques saw
  5% - 10% overhead from the use of retpoline.

  We will add detailed documentation covering these components in
  subsequent patches, but wanted to make the core functionality
  available as soon as possible. Happy for more code review, but we'd
  really like to get these patches landed and backported ASAP for
  obvious reasons. We're planning to backport this to both 6.0 and 5.0
  release streams and get a 5.0 release with just this cherry picked
  ASAP for distros and vendors.

  This patch is the work of a number of people over the past month:
  Eric, Reid, Rui, and myself. I'm mailing it out as a single commit
  due to the time sensitive nature of landing this and the need to
  backport it. Huge thanks to everyone who helped out here, and
  everyone at Intel who helped out in discussions about how to craft
  this. Also, credit goes to Paul Turner (at Google, but not an LLVM
  contributor) for much of the underlying retpoline design.

  Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer

  Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits

  Differential Revision: https://reviews.llvm.org/D41723
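
For illustration (file and function names made up), the kind of indirect
call that `-mretpoline` redirects through a retpoline thunk; build roughly as
described above, e.g. `clang -O2 -mretpoline -c dispatch.c`, and for dynamic
executables link with lld using `-z retpolineplt` (ideally also `-z now`):

```
typedef int (*handler_t)(int);

static int
double_it(int v)
{
	return (v * 2);
}

int
dispatch(handler_t fn, int arg)
{
	/* Indirect call: the branch whose prediction could otherwise be
	 * poisoned (Spectre variant 2); with -mretpoline it is emitted as
	 * a call through a retpoline thunk instead. */
	return (fn(arg));
}

int
main(void)
{
	return (dispatch(double_it, 21));
}
```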

MFC after:	3 months
X-MFC-With:	r327952
PR:		224669
2018-02-02 22:28:12 +00:00
Dimitry Andric
842d113b5c Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
6.0.0 (branches/release_60 r323948).

MFC after:	3 months
X-MFC-With:	r327952
PR:		224669
2018-02-01 21:41:15 +00:00
Dimitry Andric
042b1c2ef5 Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
6.0.0 (branches/release_60 r323338).

MFC after:	3 months
X-MFC-With:	r327952
PR:		224669
2018-01-24 22:35:00 +00:00
Dimitry Andric
d23c4359df Pull in r322623 from upstream llvm trunk (by Andrew V. Tischenko):
Allow usage of X86-prefixes as separate instrs.
  Differential Revision: https://reviews.llvm.org/D42102

This should fix parse errors when x86 prefixes (such as 'lock' and
'rep') are followed by various non-mnemonic tokens, e.g. comments, .byte
directives and labels.

PR:		224669,225054
2018-01-17 17:11:55 +00:00
Dimitry Andric
62bd626920 Build llvm-extract with -lz, and add a few objects to liblldb, both of
which turn out to be needed when you don't use -ffunction-sections.

Reported by:	Shawn Webb <shawn.webb@hardenedbsd.org>
2018-01-13 13:53:05 +00:00
Dimitry Andric
30785c0e2b Merge llvm, clang, lld, lldb, compiler-rt and libc++ release_60 r321788,
update build glue and version numbers.
2018-01-06 23:44:14 +00:00
Dimitry Andric
fe4fed2e4d Merge llvm, clang, lld, lldb, compiler-rt and libc++ trunk r321545,
update build glue and version numbers, add new intrinsics headers, and
update OptionalObsoleteFiles.inc.
2017-12-29 00:56:15 +00:00
Dimitry Andric
bbd32193a0 Add one more file to libllvm's SRCS_MIN, since this one is required for
MK_SHARED_TOOLCHAIN=yes.
2017-12-29 00:21:50 +00:00
Dimitry Andric
2757ff7e2f Update clang, lld and llvm version numbers for r321414, and update build
glue.
2017-12-24 12:32:55 +00:00
Dimitry Andric
77b0be52d7 Next step in updating llvm/clang build glue: make lldb build. 2017-12-22 19:10:19 +00:00
Dimitry Andric
7bfc2d0f8e Next step in updating llvm/clang build glue: make lld build. 2017-12-22 16:27:29 +00:00
Dimitry Andric
44389c28aa Sort source file lists under lib/clang. 2017-12-22 13:35:26 +00:00
Dimitry Andric
3cd201a12f Next step in updating llvm/clang build glue: make the optional llvm and
clang tools build.
2017-12-22 13:28:10 +00:00
Dimitry Andric
69f53b9734 Next step in updating llvm/clang build glue: make llvm-objdump build. 2017-12-22 11:41:18 +00:00
Dimitry Andric
ea68f99b08 Next step in updating llvm/clang build glue: make the full clang
executable build.
2017-12-22 10:04:40 +00:00
Dimitry Andric
36cb3905c9 First step in updating llvm/clang build glue: make only the clang
executable build.
2017-12-21 21:24:52 +00:00
Dimitry Andric
bed303d199 Bump FREEBSD_CC_VERSION. 2017-12-20 20:27:23 +00:00
Dimitry Andric
cd8e0f1f24 Add new clang intrinsics headers, and update version number. 2017-12-20 20:27:09 +00:00
Dimitry Andric
02d2ad99ac Update generated config headers, and version numbers. 2017-12-20 20:25:35 +00:00
Dimitry Andric
5bf0d7ad74 Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
5.0.1 release (upstream r320880).

Relnotes:	yes
MFC after:	2 weeks
2017-12-16 18:06:30 +00:00
Dimitry Andric
d4419f6fa8 Upgrade our copies of clang, llvm, lldb and libc++ to r319231 from the
upstream release_50 branch.  This corresponds to 5.0.1 rc2.

MFC after:	2 weeks
2017-12-03 12:14:34 +00:00
Bryan Drewery
ddf95e2ae8 Tell bsd.dep.mk which depend files to dinclude.
This allows the _SKIP_DEPEND optimization to work, avoiding reading
the files when not needed.  It also fixes META_MODE incorrectly
reading these files when not needed.

Sponsored by:	Dell EMC Isilon
2017-11-10 20:09:15 +00:00
Bryan Drewery
59696d216c Prefix {TARGET,BUILD}_TRIPLE with LLVM_ to avoid Makefile.inc1 collision.
The Makefile.inc1 TARGET_TRIPLE is for specifying which -target is used
during the build of world.

MFC after:	2 weeks
Reviewed by:	dim, imp
Sponsored by:	Dell EMC Isilon
Differential Revision:	https://reviews.freebsd.org/D12792
2017-10-25 21:45:55 +00:00
Warner Losh
0b972ac92e Support armv7 builds for userland
Make armv7 a new MACHINE_ARCH.

Copy all the places we do armv6 and add armv7 as basically an
alias. clang appears to generate code for armv7 by default. armv7 hard
float isn't supported by the in-tree gcc, so it hasn't been
updated to have a new default.

Support armv7 as a new valid MACHINE_ARCH (and by extension
TARGET_ARCH).

Add armv7 to the universe build.

Differential Revision: https://reviews.freebsd.org/D12010
2017-10-05 23:01:33 +00:00