Merge llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp

10.0.1 final (aka llvmorg-10.0.1-0-gef32c611aa2).

MFC r360702:

Merge commit 4ca2cad94 from llvm git (by Justin Hibbits):

  [PowerPC] Add clang -msvr4-struct-return for 32-bit ELF

  Summary:

  Change the default ABI to be compatible with GCC. For 32-bit ELF
  targets other than Linux, Clang now returns small structs in
  registers r3/r4. This affects FreeBSD, NetBSD, OpenBSD. There is no
  change for 32-bit Linux, where Clang continues to return all structs
  in memory.

  Add clang options -maix-struct-return (to return structs in memory)
  and -msvr4-struct-return (to return structs in registers) to be
  compatible with gcc. These options are only for PPC32; reject them on
  PPC64 and other targets. The options are like -fpcc-struct-return and
  -freg-struct-return for X86_32, and use similar code.

  To actually return a struct in registers, coerce it to an integer of
  the same size. LLVM may optimize the code to remove unnecessary
  accesses to memory, and will return i32 in r3 or i64 in r3:r4.

  Fixes PR#40736

  Patch by George Koehler!

  Reviewed By: jhibbits, nemanjai
  Differential Revision: https://reviews.llvm.org/D73290

Requested by:	jhibbits
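
As an illustration of the new default (a minimal sketch, not taken from the
upstream change; the struct, function and target triple here are hypothetical),
an aggregate of at most 8 bytes is now coerced to an integer and returned in
r3/r4 on 32-bit powerpc ELF targets other than Linux, while -maix-struct-return
restores the old in-memory return:

  // small.cpp - hypothetical example; compile with e.g.
  //   clang --target=powerpc-unknown-freebsd -O2 -msvr4-struct-return -S small.cpp
  //   clang --target=powerpc-unknown-freebsd -O2 -maix-struct-return  -S small.cpp
  struct Small {
    int a;                // 4 bytes
    int b;                // 4 bytes; 8 bytes total, within the r3/r4 limit
  };

  Small make_small(int x, int y) {
    // Under -msvr4-struct-return (now the default for FreeBSD, NetBSD and
    // OpenBSD on PPC32) this is returned as a single i64 in r3:r4; under
    // -maix-struct-return it is returned through a hidden memory pointer.
    return Small{x, y};
  }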

MFC r361410:

Merge llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp
llvmorg-10.0.1-rc1-0-gf79cd71e145 (aka 10.0.1 rc1).

MFC r362235 (by kp):

llvm: Default to -mno-relax on RISC-V

Compiling on a RISC-V system fails with 'relocation R_RISCV_ALIGN
requires unimplemented linker relaxation; recompile with -mno-relax'.

Our default linker (ld.lld) doesn't support relaxation, so default to
no-relax so we don't generate object files the linker can't handle.

Reviewed by:	mhorne
Sponsored by:	Axiado
Differential Revision:	https://reviews.freebsd.org/D25210
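
The driver-side change (visible in the RISC-V target-features hunk further
below) only flips the default passed to Args.hasFlag(). A small standalone
model of that logic, assuming nothing beyond the C++ standard library (the
hasFlag helper and the sample command lines are illustrative, not actual
clang driver code):

  // relax_default.cpp - sketch of the -mrelax/-mno-relax default handling.
  // The last of the two flags on a command line wins; if neither is present,
  // the default applies, and FreeBSD now defaults to "no relax".
  #include <iostream>
  #include <optional>
  #include <string>
  #include <vector>

  static bool hasFlag(const std::vector<std::string> &Args, bool Default) {
    std::optional<bool> Last;
    for (const auto &A : Args) {
      if (A == "-mrelax")
        Last = true;
      else if (A == "-mno-relax")
        Last = false;
    }
    return Last.value_or(Default);
  }

  int main() {
    const bool RelaxDefault = false; // FreeBSD local default; upstream uses true
    const std::vector<std::vector<std::string>> Cmds = {
        {},             // no flag given: emits target feature "-relax"
        {"-mrelax"},    // explicit opt-in: "+relax"
        {"-mno-relax"}, // explicit opt-out: "-relax"
    };
    for (const auto &Cmd : Cmds)
      std::cout << (hasFlag(Cmd, RelaxDefault) ? "+relax" : "-relax") << '\n';
  }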

MFC r362445:

Merge llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp
llvmorg-10.0.0-97-g6f71678ecd2 (not quite 10.0.1 rc2, as more fixes are
still pending).

MFC r362587 (by cem):

Add WITH_CLANG_FORMAT option

clang-format is enabled conditional on either WITH_CLANG_EXTRAS or
WITH_CLANG_FORMAT.  Some sources in libclang are built conditional on
either rule, and obviously the clang-format binary itself depends on the
rule.

clang-format could still use a manual page.

Reviewed by:	emaste
Differential Revision:	https://reviews.freebsd.org/D25427

MFC r362609:

Merge llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp
llvmorg-10.0.0-129-gd24d5c8e308. Getting closer to 10.0.1-rc2.

MFC r362679:

Regenerate ReStructuredText based manpages for llvm-project tools:

* bugpoint.1
* clang.1
* llc.1
* lldb.1
* lli.1
* llvm-ar.1
* llvm-as.1
* llvm-bcanalyzer.1
* llvm-cov.1
* llvm-diff.1
* llvm-dis.1
* llvm-dwarfdump.1
* llvm-extract.1
* llvm-link.1
* llvm-mca.1
* llvm-nm.1
* llvm-pdbutil.1
* llvm-profdata.1
* llvm-symbolizer.1
* llvm-tblgen.1
* opt.1

Add newly generated manpages for:

* llvm-addr2line.1 (this is an alias of llvm-symbolizer)
* llvm-cxxfilt.1
* llvm-objcopy.1
* llvm-ranlib.1 (this is an alias of llvm-ar)

Note that llvm-objdump.1 is an exception, as upstream has both a plain
.1 file, and a .rst variant. These will have to be reconciled upstream
first.

MFC r362680:

Follow-up to r362679, add more entries to OptionalObsoleteFiles.inc

MFC r362719:

Merge llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp
llvmorg-10.0.1-rc2-0-g77d76b71d7d.

Also add a few more llvm utilities under WITH_CLANG_EXTRAS:

* llvm-dwp, a utility for merging DWARF 5 Split DWARF .dwo files into
  .dwp (DWARF package files)
* llvm-size, a size(1) replacement
* llvm-strings, a strings(1) replacement

MFC r362733:

Remove older llvm-ranlib.1 entry from ObsoleteFiles.inc, as it has
gotten its own manpage now, and should no longer be removed by "make
delete-old".

MFC r362734:

Fix llvm-strings.1 not installing; this was a copy/paste error.

MFC r363401:

Merge llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp
10.0.1 final (aka llvmorg-10.0.1-0-gef32c611aa2).

There were no changes since rc2, except in the upstream regression
tests, which we do not ship.

Relnotes:	yes
Dimitry Andric 2020-07-24 20:54:07 +00:00
parent 30c6aa249e
commit b985941550
202 changed files with 11076 additions and 2020 deletions

View File

@ -518,7 +518,7 @@ BSARGS= DESTDIR= \
MK_HTML=no NO_LINT=yes MK_MAN=no \
-DNO_PIC MK_PROFILE=no -DNO_SHARED \
-DNO_CPU_CFLAGS MK_WARNS=no MK_CTF=no \
MK_CLANG_EXTRAS=no MK_CLANG_FULL=no \
MK_CLANG_EXTRAS=no MK_CLANG_FORMAT=no MK_CLANG_FULL=no \
MK_LLDB=no MK_TESTS=no \
MK_INCLUDES=yes
@ -535,7 +535,7 @@ TMAKE= MAKEOBJDIRPREFIX=${OBJTREE} \
SSP_CFLAGS= \
-DNO_LINT \
-DNO_CPU_CFLAGS MK_WARNS=no MK_CTF=no \
MK_CLANG_EXTRAS=no MK_CLANG_FULL=no \
MK_CLANG_EXTRAS=no MK_CLANG_FORMAT=no MK_CLANG_FULL=no \
MK_LLDB=no MK_TESTS=no
# cross-tools stage
@ -2004,7 +2004,7 @@ NXBMAKE= ${NXBENV} ${MAKE} \
MK_HTML=no NO_LINT=yes MK_MAN=no \
-DNO_PIC MK_PROFILE=no -DNO_SHARED \
-DNO_CPU_CFLAGS MK_WARNS=no MK_CTF=no \
MK_CLANG_EXTRAS=no MK_CLANG_FULL=no \
MK_CLANG_EXTRAS=no MK_CLANG_FORMAT=no MK_CLANG_FULL=no \
MK_LLDB=no MK_DEBUG_FILES=no
# native-xtools is the current target for qemu-user cross builds of ports

View File

@ -38,6 +38,257 @@
# xargs -n1 | sort | uniq -d;
# done
# 20200723: new clang import which bumps version from 10.0.0 to 10.0.1.
OLD_FILES+=usr/lib/clang/10.0.0/include/cuda_wrappers/algorithm
OLD_FILES+=usr/lib/clang/10.0.0/include/cuda_wrappers/complex
OLD_FILES+=usr/lib/clang/10.0.0/include/cuda_wrappers/new
OLD_DIRS+=usr/lib/clang/10.0.0/include/cuda_wrappers
OLD_FILES+=usr/lib/clang/10.0.0/include/fuzzer/FuzzedDataProvider.h
OLD_DIRS+=usr/lib/clang/10.0.0/include/fuzzer
OLD_FILES+=usr/lib/clang/10.0.0/include/openmp_wrappers/__clang_openmp_math.h
OLD_FILES+=usr/lib/clang/10.0.0/include/openmp_wrappers/__clang_openmp_math_declares.h
OLD_FILES+=usr/lib/clang/10.0.0/include/openmp_wrappers/cmath
OLD_FILES+=usr/lib/clang/10.0.0/include/openmp_wrappers/math.h
OLD_DIRS+=usr/lib/clang/10.0.0/include/openmp_wrappers
OLD_FILES+=usr/lib/clang/10.0.0/include/ppc_wrappers/emmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ppc_wrappers/mm_malloc.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ppc_wrappers/mmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ppc_wrappers/pmmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ppc_wrappers/smmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ppc_wrappers/tmmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ppc_wrappers/xmmintrin.h
OLD_DIRS+=usr/lib/clang/10.0.0/include/ppc_wrappers
OLD_FILES+=usr/lib/clang/10.0.0/include/profile/InstrProfData.inc
OLD_DIRS+=usr/lib/clang/10.0.0/include/profile
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/allocator_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/asan_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/common_interface_defs.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/coverage_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/dfsan_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/hwasan_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/linux_syscall_hooks.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/lsan_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/msan_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/netbsd_syscall_hooks.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/scudo_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/tsan_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/tsan_interface_atomic.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sanitizer/ubsan_interface.h
OLD_DIRS+=usr/lib/clang/10.0.0/include/sanitizer
OLD_FILES+=usr/lib/clang/10.0.0/include/xray/xray_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xray/xray_log_interface.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xray/xray_records.h
OLD_DIRS+=usr/lib/clang/10.0.0/include/xray
OLD_FILES+=usr/lib/clang/10.0.0/include/__clang_cuda_builtin_vars.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__clang_cuda_cmath.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__clang_cuda_complex_builtins.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__clang_cuda_device_functions.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__clang_cuda_intrinsics.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__clang_cuda_libdevice_declares.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__clang_cuda_math_forward_declares.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__clang_cuda_runtime_wrapper.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__stddef_max_align_t.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__wmmintrin_aes.h
OLD_FILES+=usr/lib/clang/10.0.0/include/__wmmintrin_pclmul.h
OLD_FILES+=usr/lib/clang/10.0.0/include/adxintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/altivec.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ammintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/arm64intr.h
OLD_FILES+=usr/lib/clang/10.0.0/include/arm_acle.h
OLD_FILES+=usr/lib/clang/10.0.0/include/arm_cmse.h
OLD_FILES+=usr/lib/clang/10.0.0/include/arm_fp16.h
OLD_FILES+=usr/lib/clang/10.0.0/include/arm_mve.h
OLD_FILES+=usr/lib/clang/10.0.0/include/arm_neon.h
OLD_FILES+=usr/lib/clang/10.0.0/include/armintr.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx2intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512bf16intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512bitalgintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512bwintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512cdintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512dqintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512erintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512fintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512ifmaintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512ifmavlintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512pfintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vbmi2intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vbmiintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vbmivlintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vlbf16intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vlbitalgintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vlbwintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vlcdintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vldqintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vlintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vlvbmi2intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vlvnniintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vlvp2intersectintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vnniintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vp2intersectintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vpopcntdqintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avx512vpopcntdqvlintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/avxintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/bmi2intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/bmiintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/cetintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/cldemoteintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/clflushoptintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/clwbintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/clzerointrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/cpuid.h
OLD_FILES+=usr/lib/clang/10.0.0/include/emmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/enqcmdintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/f16cintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/fma4intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/fmaintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/fxsrintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/gfniintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/htmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/htmxlintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ia32intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/immintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/invpcidintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/lwpintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/lzcntintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/mm3dnow.h
OLD_FILES+=usr/lib/clang/10.0.0/include/mm_malloc.h
OLD_FILES+=usr/lib/clang/10.0.0/include/mmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/module.modulemap
OLD_FILES+=usr/lib/clang/10.0.0/include/movdirintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/msa.h
OLD_FILES+=usr/lib/clang/10.0.0/include/mwaitxintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/nmmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/opencl-c-base.h
OLD_FILES+=usr/lib/clang/10.0.0/include/opencl-c.h
OLD_FILES+=usr/lib/clang/10.0.0/include/pconfigintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/pkuintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/pmmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/popcntintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/prfchwintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/ptwriteintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/rdseedintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/rtmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/s390intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/sgxintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/shaintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/smmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/tbmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/tmmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/vadefs.h
OLD_FILES+=usr/lib/clang/10.0.0/include/vaesintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/vecintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/vpclmulqdqintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/waitpkgintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/wbnoinvdintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/wmmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/x86intrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xmmintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xopintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xsavecintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xsaveintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xsaveoptintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xsavesintrin.h
OLD_FILES+=usr/lib/clang/10.0.0/include/xtestintrin.h
OLD_DIRS+=usr/lib/clang/10.0.0/include
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-aarch64.so
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-arm.so
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-armhf.so
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-i386.so
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-preinit-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-preinit-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-preinit-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-preinit-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-preinit-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan-x86_64.so
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan_cxx-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan_cxx-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan_cxx-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan_cxx-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.asan_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi_diag-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi_diag-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi_diag-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi_diag-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.cfi_diag-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.dd-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.dd-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.fuzzer-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.fuzzer-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.fuzzer_no_main-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.fuzzer_no_main-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.msan-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.msan-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.msan_cxx-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.msan_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.profile-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.profile-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.profile-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.profile-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.profile-powerpc.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.profile-powerpc64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.profile-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.safestack-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.safestack-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.safestack-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats_client-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats_client-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats_client-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats_client-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.stats_client-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.tsan-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.tsan-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.tsan_cxx-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.tsan_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_minimal-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_minimal-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_minimal-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_minimal-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_minimal-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone_cxx-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone_cxx-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone_cxx-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone_cxx-i386.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.ubsan_standalone_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-basic-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-basic-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-basic-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-basic-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-fdr-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-fdr-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-fdr-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-fdr-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-profiling-aarch64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-profiling-arm.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-profiling-armhf.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-profiling-x86_64.a
OLD_FILES+=usr/lib/clang/10.0.0/lib/freebsd/libclang_rt.xray-x86_64.a
OLD_DIRS+=usr/lib/clang/10.0.0/lib/freebsd
OLD_DIRS+=usr/lib/clang/10.0.0/lib
OLD_DIRS+=usr/lib/clang/10.0.0
# 20200507: new clang import which bumps version from 9.0.1 to 10.0.0.
OLD_FILES+=usr/lib/clang/9.0.1/include/cuda_wrappers/algorithm
OLD_FILES+=usr/lib/clang/9.0.1/include/cuda_wrappers/complex
@ -3993,7 +4244,6 @@ OLD_FILES+=usr/include/clang/3.3/x86intrin.h
OLD_FILES+=usr/include/clang/3.3/xmmintrin.h
OLD_FILES+=usr/include/clang/3.3/xopintrin.h
OLD_FILES+=usr/share/man/man1/llvm-prof.1.gz
OLD_FILES+=usr/share/man/man1/llvm-ranlib.1.gz
OLD_DIRS+=usr/include/clang/3.3
# 20140216: nve(4) removed
OLD_FILES+=usr/share/man/man4/if_nve.4.gz

View File

@ -16,6 +16,12 @@ from older versions of FreeBSD, try WITHOUT_CLANG and WITH_GCC to bootstrap to
the tip of head, and then rebuild without this option. The bootstrap process
from older version of current across the gcc/clang cutover is a bit fragile.
20200723:
Clang, llvm, lld, lldb, compiler-rt, libc++, libunwind and openmp have
been upgraded to 10.0.1. Please see the 20141231 entry below for
information about prerequisites and upgrading, if you are not already
using clang 3.5.0 or higher.
20200507:
Clang, llvm, lld, lldb, compiler-rt, libc++, libunwind and openmp have
been upgraded to 10.0.0. Please see the 20141231 entry below for

View File

@ -3,6 +3,7 @@
.clang-format
.clang-tidy
.git-blame-ignore-revs
.github/
.gitignore
CONTRIBUTING.md
README.md
@ -264,6 +265,7 @@ lldb/.clang-format
lldb/.gitignore
lldb/CMakeLists.txt
lldb/CODE_OWNERS.txt
lldb/bindings/CMakeLists.txt
lldb/cmake/
lldb/docs/.htaccess
lldb/docs/CMakeLists.txt
@ -529,6 +531,7 @@ llvm/lib/ExecutionEngine/PerfJITEvents/CMakeLists.txt
llvm/lib/ExecutionEngine/PerfJITEvents/LLVMBuild.txt
llvm/lib/ExecutionEngine/RuntimeDyld/CMakeLists.txt
llvm/lib/ExecutionEngine/RuntimeDyld/LLVMBuild.txt
llvm/lib/Extensions/
llvm/lib/Frontend/CMakeLists.txt
llvm/lib/Frontend/LLVMBuild.txt
llvm/lib/Frontend/OpenMP/CMakeLists.txt
@ -861,7 +864,8 @@ llvm/tools/llvm-dis/LLVMBuild.txt
llvm/tools/llvm-dwarfdump/CMakeLists.txt
llvm/tools/llvm-dwarfdump/LLVMBuild.txt
llvm/tools/llvm-dwarfdump/fuzzer/
llvm/tools/llvm-dwp/
llvm/tools/llvm-dwp/CMakeLists.txt
llvm/tools/llvm-dwp/LLVMBuild.txt
llvm/tools/llvm-elfabi/
llvm/tools/llvm-exegesis/
llvm/tools/llvm-extract/CMakeLists.txt
@ -908,12 +912,14 @@ llvm/tools/llvm-reduce/
llvm/tools/llvm-rtdyld/CMakeLists.txt
llvm/tools/llvm-rtdyld/LLVMBuild.txt
llvm/tools/llvm-shlib/
llvm/tools/llvm-size/
llvm/tools/llvm-size/CMakeLists.txt
llvm/tools/llvm-size/LLVMBuild.txt
llvm/tools/llvm-special-case-list-fuzzer/
llvm/tools/llvm-split/
llvm/tools/llvm-stress/CMakeLists.txt
llvm/tools/llvm-stress/LLVMBuild.txt
llvm/tools/llvm-strings/
llvm/tools/llvm-strings/CMakeLists.txt
llvm/tools/llvm-strings/LLVMBuild.txt
llvm/tools/llvm-symbolizer/CMakeLists.txt
llvm/tools/llvm-undname/
llvm/tools/llvm-xray/CMakeLists.txt

View File

@ -856,14 +856,15 @@ public:
return getParentFunctionOrMethod() == nullptr;
}
/// Returns true if this declaration lexically is inside a function.
/// It recognizes non-defining declarations as well as members of local
/// classes:
/// Returns true if this declaration is lexically inside a function or inside
/// a variable initializer. It recognizes non-defining declarations as well
/// as members of local classes:
/// \code
/// void foo() { void bar(); }
/// void foo2() { class ABC { void bar(); }; }
/// inline int x = [](){ return 0; };
/// \endcode
bool isLexicallyWithinFunctionOrMethod() const;
bool isInLocalScope() const;
/// If this decl is defined inside a function/method/block it returns
/// the corresponding DeclContext, otherwise it returns null.

View File

@ -685,7 +685,7 @@ def XRayLogArgs : InheritableAttr {
def PatchableFunctionEntry
: InheritableAttr,
TargetSpecificAttr<TargetArch<["aarch64", "x86", "x86_64"]>> {
TargetSpecificAttr<TargetArch<["aarch64", "aarch64_be", "x86", "x86_64"]>> {
let Spellings = [GCC<"patchable_function_entry">];
let Subjects = SubjectList<[Function, ObjCMethod]>;
let Args = [UnsignedArgument<"Count">, DefaultIntArgument<"Offset", 0>];

View File

@ -2267,6 +2267,14 @@ def mspeculative_load_hardening : Flag<["-"], "mspeculative-load-hardening">,
Group<m_Group>, Flags<[CoreOption,CC1Option]>;
def mno_speculative_load_hardening : Flag<["-"], "mno-speculative-load-hardening">,
Group<m_Group>, Flags<[CoreOption]>;
def mlvi_hardening : Flag<["-"], "mlvi-hardening">, Group<m_Group>, Flags<[CoreOption,DriverOption]>,
HelpText<"Enable all mitigations for Load Value Injection (LVI)">;
def mno_lvi_hardening : Flag<["-"], "mno-lvi-hardening">, Group<m_Group>, Flags<[CoreOption,DriverOption]>,
HelpText<"Disable mitigations for Load Value Injection (LVI)">;
def mlvi_cfi : Flag<["-"], "mlvi-cfi">, Group<m_Group>, Flags<[CoreOption,DriverOption]>,
HelpText<"Enable only control-flow mitigations for Load Value Injection (LVI)">;
def mno_lvi_cfi : Flag<["-"], "mno-lvi-cfi">, Group<m_Group>, Flags<[CoreOption,DriverOption]>,
HelpText<"Disable control-flow mitigations for Load Value Injection (LVI)">;
def mrelax : Flag<["-"], "mrelax">, Group<m_riscv_Features_Group>,
HelpText<"Enable linker relaxation">;
@ -2439,6 +2447,12 @@ def mlongcall: Flag<["-"], "mlongcall">,
Group<m_ppc_Features_Group>;
def mno_longcall : Flag<["-"], "mno-longcall">,
Group<m_ppc_Features_Group>;
def maix_struct_return : Flag<["-"], "maix-struct-return">,
Group<m_Group>, Flags<[CC1Option]>,
HelpText<"Return all structs in memory (PPC32 only)">;
def msvr4_struct_return : Flag<["-"], "msvr4-struct-return">,
Group<m_Group>, Flags<[CC1Option]>,
HelpText<"Return small structs in registers (PPC32 only)">;
def mvx : Flag<["-"], "mvx">, Group<m_Group>;
def mno_vx : Flag<["-"], "mno-vx">, Group<m_Group>;

View File

@ -332,13 +332,16 @@ void Decl::setDeclContextsImpl(DeclContext *SemaDC, DeclContext *LexicalDC,
}
}
bool Decl::isLexicallyWithinFunctionOrMethod() const {
bool Decl::isInLocalScope() const {
const DeclContext *LDC = getLexicalDeclContext();
while (true) {
if (LDC->isFunctionOrMethod())
return true;
if (!isa<TagDecl>(LDC))
return false;
if (const auto *CRD = dyn_cast<CXXRecordDecl>(LDC))
if (CRD->isLambda())
return true;
LDC = LDC->getLexicalParent();
}
return false;

View File

@ -8593,6 +8593,10 @@ bool PointerExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
static bool EvaluateArrayNewInitList(EvalInfo &Info, LValue &This,
APValue &Result, const InitListExpr *ILE,
QualType AllocType);
static bool EvaluateArrayNewConstructExpr(EvalInfo &Info, LValue &This,
APValue &Result,
const CXXConstructExpr *CCE,
QualType AllocType);
bool PointerExprEvaluator::VisitCXXNewExpr(const CXXNewExpr *E) {
if (!Info.getLangOpts().CPlusPlus2a)
@ -8642,6 +8646,7 @@ bool PointerExprEvaluator::VisitCXXNewExpr(const CXXNewExpr *E) {
const Expr *Init = E->getInitializer();
const InitListExpr *ResizedArrayILE = nullptr;
const CXXConstructExpr *ResizedArrayCCE = nullptr;
QualType AllocType = E->getAllocatedType();
if (Optional<const Expr*> ArraySize = E->getArraySize()) {
@ -8685,7 +8690,7 @@ bool PointerExprEvaluator::VisitCXXNewExpr(const CXXNewExpr *E) {
// -- the new-initializer is a braced-init-list and the number of
// array elements for which initializers are provided [...]
// exceeds the number of elements to initialize
if (Init) {
if (Init && !isa<CXXConstructExpr>(Init)) {
auto *CAT = Info.Ctx.getAsConstantArrayType(Init->getType());
assert(CAT && "unexpected type for array initializer");
@ -8708,6 +8713,8 @@ bool PointerExprEvaluator::VisitCXXNewExpr(const CXXNewExpr *E) {
// special handling for this case when we initialize.
if (InitBound != AllocBound)
ResizedArrayILE = cast<InitListExpr>(Init);
} else if (Init) {
ResizedArrayCCE = cast<CXXConstructExpr>(Init);
}
AllocType = Info.Ctx.getConstantArrayType(AllocType, ArrayBound, nullptr,
@ -8772,6 +8779,10 @@ bool PointerExprEvaluator::VisitCXXNewExpr(const CXXNewExpr *E) {
if (!EvaluateArrayNewInitList(Info, Result, *Val, ResizedArrayILE,
AllocType))
return false;
} else if (ResizedArrayCCE) {
if (!EvaluateArrayNewConstructExpr(Info, Result, *Val, ResizedArrayCCE,
AllocType))
return false;
} else if (Init) {
if (!EvaluateInPlace(*Val, Info, Result, Init))
return false;
@ -9597,6 +9608,16 @@ static bool EvaluateArrayNewInitList(EvalInfo &Info, LValue &This,
.VisitInitListExpr(ILE, AllocType);
}
static bool EvaluateArrayNewConstructExpr(EvalInfo &Info, LValue &This,
APValue &Result,
const CXXConstructExpr *CCE,
QualType AllocType) {
assert(CCE->isRValue() && CCE->getType()->isArrayType() &&
"not an array rvalue");
return ArrayExprEvaluator(Info, This, Result)
.VisitCXXConstructExpr(CCE, This, &Result, AllocType);
}
// Return true iff the given array filler may depend on the element index.
static bool MaybeElementDependentArrayFiller(const Expr *FillerExpr) {
// For now, just whitelist non-class value-initialization and initialization

View File

@ -430,7 +430,7 @@ std::string RawComment::getFormattedText(const SourceManager &SourceMgr,
};
auto DropTrailingNewLines = [](std::string &Str) {
while (Str.back() == '\n')
while (!Str.empty() && Str.back() == '\n')
Str.pop_back();
};

View File

@ -276,11 +276,12 @@ public:
break;
case 'Q': // Memory operand that is an offset from a register (it is
// usually better to use `m' or `es' in asm statements)
Info.setAllowsRegister();
LLVM_FALLTHROUGH;
case 'Z': // Memory operand that is an indexed or indirect from a
// register (it is usually better to use `m' or `es' in
// asm statements)
Info.setAllowsMemory();
Info.setAllowsRegister();
break;
case 'R': // AIX TOC entry
case 'a': // Address operand that is an indexed or indirect from a

View File

@ -1847,9 +1847,16 @@ void CodeGenModule::SetFunctionAttributes(GlobalDecl GD, llvm::Function *F,
else if (const auto *SA = FD->getAttr<SectionAttr>())
F->setSection(SA->getName());
// If we plan on emitting this inline builtin, we can't treat it as a builtin.
if (FD->isInlineBuiltinDeclaration()) {
F->addAttribute(llvm::AttributeList::FunctionIndex,
llvm::Attribute::NoBuiltin);
const FunctionDecl *FDBody;
bool HasBody = FD->hasBody(FDBody);
(void)HasBody;
assert(HasBody && "Inline builtin declarations should always have an "
"available body!");
if (shouldEmitFunction(FDBody))
F->addAttribute(llvm::AttributeList::FunctionIndex,
llvm::Attribute::NoBuiltin);
}
if (FD->isReplaceableGlobalAllocationFunction()) {

View File

@ -4123,12 +4123,24 @@ namespace {
/// PPC32_SVR4_ABIInfo - The 32-bit PowerPC ELF (SVR4) ABI information.
class PPC32_SVR4_ABIInfo : public DefaultABIInfo {
bool IsSoftFloatABI;
bool IsRetSmallStructInRegABI;
CharUnits getParamTypeAlignment(QualType Ty) const;
public:
PPC32_SVR4_ABIInfo(CodeGen::CodeGenTypes &CGT, bool SoftFloatABI)
: DefaultABIInfo(CGT), IsSoftFloatABI(SoftFloatABI) {}
PPC32_SVR4_ABIInfo(CodeGen::CodeGenTypes &CGT, bool SoftFloatABI,
bool RetSmallStructInRegABI)
: DefaultABIInfo(CGT), IsSoftFloatABI(SoftFloatABI),
IsRetSmallStructInRegABI(RetSmallStructInRegABI) {}
ABIArgInfo classifyReturnType(QualType RetTy) const;
void computeInfo(CGFunctionInfo &FI) const override {
if (!getCXXABI().classifyReturnType(FI))
FI.getReturnInfo() = classifyReturnType(FI.getReturnType());
for (auto &I : FI.arguments())
I.info = classifyArgumentType(I.type);
}
Address EmitVAArg(CodeGenFunction &CGF, Address VAListAddr,
QualType Ty) const override;
@ -4136,8 +4148,13 @@ public:
class PPC32TargetCodeGenInfo : public TargetCodeGenInfo {
public:
PPC32TargetCodeGenInfo(CodeGenTypes &CGT, bool SoftFloatABI)
: TargetCodeGenInfo(new PPC32_SVR4_ABIInfo(CGT, SoftFloatABI)) {}
PPC32TargetCodeGenInfo(CodeGenTypes &CGT, bool SoftFloatABI,
bool RetSmallStructInRegABI)
: TargetCodeGenInfo(new PPC32_SVR4_ABIInfo(CGT, SoftFloatABI,
RetSmallStructInRegABI)) {}
static bool isStructReturnInRegABI(const llvm::Triple &Triple,
const CodeGenOptions &Opts);
int getDwarfEHStackPointer(CodeGen::CodeGenModule &M) const override {
// This is recovered from gcc output.
@ -4173,6 +4190,34 @@ CharUnits PPC32_SVR4_ABIInfo::getParamTypeAlignment(QualType Ty) const {
return CharUnits::fromQuantity(4);
}
ABIArgInfo PPC32_SVR4_ABIInfo::classifyReturnType(QualType RetTy) const {
uint64_t Size;
// -msvr4-struct-return puts small aggregates in GPR3 and GPR4.
if (isAggregateTypeForABI(RetTy) && IsRetSmallStructInRegABI &&
(Size = getContext().getTypeSize(RetTy)) <= 64) {
// System V ABI (1995), page 3-22, specified:
// > A structure or union whose size is less than or equal to 8 bytes
// > shall be returned in r3 and r4, as if it were first stored in the
// > 8-byte aligned memory area and then the low addressed word were
// > loaded into r3 and the high-addressed word into r4. Bits beyond
// > the last member of the structure or union are not defined.
//
// GCC for big-endian PPC32 inserts the pad before the first member,
// not "beyond the last member" of the struct. To stay compatible
// with GCC, we coerce the struct to an integer of the same size.
// LLVM will extend it and return i32 in r3, or i64 in r3:r4.
if (Size == 0)
return ABIArgInfo::getIgnore();
else {
llvm::Type *CoerceTy = llvm::Type::getIntNTy(getVMContext(), Size);
return ABIArgInfo::getDirect(CoerceTy);
}
}
return DefaultABIInfo::classifyReturnType(RetTy);
}
// TODO: this implementation is now likely redundant with
// DefaultABIInfo::EmitVAArg.
Address PPC32_SVR4_ABIInfo::EmitVAArg(CodeGenFunction &CGF, Address VAList,
@ -4328,6 +4373,25 @@ Address PPC32_SVR4_ABIInfo::EmitVAArg(CodeGenFunction &CGF, Address VAList,
return Result;
}
bool PPC32TargetCodeGenInfo::isStructReturnInRegABI(
const llvm::Triple &Triple, const CodeGenOptions &Opts) {
assert(Triple.getArch() == llvm::Triple::ppc);
switch (Opts.getStructReturnConvention()) {
case CodeGenOptions::SRCK_Default:
break;
case CodeGenOptions::SRCK_OnStack: // -maix-struct-return
return false;
case CodeGenOptions::SRCK_InRegs: // -msvr4-struct-return
return true;
}
if (Triple.isOSBinFormatELF() && !Triple.isOSLinux())
return true;
return false;
}
bool
PPC32TargetCodeGenInfo::initDwarfEHRegSizeTable(CodeGen::CodeGenFunction &CGF,
llvm::Value *Address) const {
@ -9613,7 +9677,8 @@ ABIArgInfo RISCVABIInfo::classifyArgumentType(QualType Ty, bool IsFixed,
uint64_t Size = getContext().getTypeSize(Ty);
// Pass floating point values via FPRs if possible.
if (IsFixed && Ty->isFloatingType() && FLen >= Size && ArgFPRsLeft) {
if (IsFixed && Ty->isFloatingType() && !Ty->isComplexType() &&
FLen >= Size && ArgFPRsLeft) {
ArgFPRsLeft--;
return ABIArgInfo::getDirect();
}
@ -9852,10 +9917,14 @@ const TargetCodeGenInfo &CodeGenModule::getTargetCodeGenInfo() {
return SetCGInfo(new ARMTargetCodeGenInfo(Types, Kind));
}
case llvm::Triple::ppc:
case llvm::Triple::ppc: {
bool IsSoftFloat =
CodeGenOpts.FloatABI == "soft" || getTarget().hasFeature("spe");
bool RetSmallStructInRegABI =
PPC32TargetCodeGenInfo::isStructReturnInRegABI(Triple, CodeGenOpts);
return SetCGInfo(
new PPC32TargetCodeGenInfo(Types, CodeGenOpts.FloatABI == "soft" ||
getTarget().hasFeature("spe")));
new PPC32TargetCodeGenInfo(Types, IsSoftFloat, RetSmallStructInRegABI));
}
case llvm::Triple::ppc64:
if (Triple.isOSBinFormatELF()) {
PPC64_SVR4_ABIInfo::ABIKind Kind = PPC64_SVR4_ABIInfo::ELFv1;

View File

@ -454,8 +454,7 @@ SanitizerArgs::SanitizerArgs(const ToolChain &TC,
<< lastArgumentForMask(D, Args, Kinds & NeedsLTO) << "-flto";
}
if ((Kinds & SanitizerKind::ShadowCallStack) &&
TC.getTriple().getArch() == llvm::Triple::aarch64 &&
if ((Kinds & SanitizerKind::ShadowCallStack) && TC.getTriple().isAArch64() &&
!llvm::AArch64::isX18ReservedByDefault(TC.getTriple()) &&
!Args.hasArg(options::OPT_ffixed_x18)) {
D.Diag(diag::err_drv_argument_only_allowed_with)

View File

@ -954,15 +954,12 @@ SanitizerMask ToolChain::getSupportedSanitizers() const {
if (getTriple().getArch() == llvm::Triple::x86 ||
getTriple().getArch() == llvm::Triple::x86_64 ||
getTriple().getArch() == llvm::Triple::arm ||
getTriple().getArch() == llvm::Triple::aarch64 ||
getTriple().getArch() == llvm::Triple::wasm32 ||
getTriple().getArch() == llvm::Triple::wasm64)
getTriple().getArch() == llvm::Triple::wasm64 || getTriple().isAArch64())
Res |= SanitizerKind::CFIICall;
if (getTriple().getArch() == llvm::Triple::x86_64 ||
getTriple().getArch() == llvm::Triple::aarch64)
if (getTriple().getArch() == llvm::Triple::x86_64 || getTriple().isAArch64())
Res |= SanitizerKind::ShadowCallStack;
if (getTriple().getArch() == llvm::Triple::aarch64 ||
getTriple().getArch() == llvm::Triple::aarch64_be)
if (getTriple().isAArch64())
Res |= SanitizerKind::MemTag;
return Res;
}

View File

@ -426,8 +426,9 @@ void riscv::getRISCVTargetFeatures(const Driver &D, const llvm::Triple &Triple,
if (Args.hasArg(options::OPT_ffixed_x31))
Features.push_back("+reserve-x31");
// -mrelax is default, unless -mno-relax is specified.
if (Args.hasFlag(options::OPT_mrelax, options::OPT_mno_relax, true))
// FreeBSD local, because ld.lld doesn't support relaxations
// -mno-relax is default, unless -mrelax is specified.
if (Args.hasFlag(options::OPT_mrelax, options::OPT_mno_relax, false))
Features.push_back("+relax");
else
Features.push_back("-relax");

View File

@ -147,6 +147,7 @@ void x86::getX86TargetFeatures(const Driver &D, const llvm::Triple &Triple,
// flags). This is a bit hacky but keeps existing usages working. We should
// consider deprecating this and instead warn if the user requests external
// retpoline thunks and *doesn't* request some form of retpolines.
auto SpectreOpt = clang::driver::options::ID::OPT_INVALID;
if (Args.hasArgNoClaim(options::OPT_mretpoline, options::OPT_mno_retpoline,
options::OPT_mspeculative_load_hardening,
options::OPT_mno_speculative_load_hardening)) {
@ -154,12 +155,14 @@ void x86::getX86TargetFeatures(const Driver &D, const llvm::Triple &Triple,
false)) {
Features.push_back("+retpoline-indirect-calls");
Features.push_back("+retpoline-indirect-branches");
SpectreOpt = options::OPT_mretpoline;
} else if (Args.hasFlag(options::OPT_mspeculative_load_hardening,
options::OPT_mno_speculative_load_hardening,
false)) {
// On x86, speculative load hardening relies on at least using retpolines
// for indirect calls.
Features.push_back("+retpoline-indirect-calls");
SpectreOpt = options::OPT_mspeculative_load_hardening;
}
} else if (Args.hasFlag(options::OPT_mretpoline_external_thunk,
options::OPT_mno_retpoline_external_thunk, false)) {
@ -167,6 +170,26 @@ void x86::getX86TargetFeatures(const Driver &D, const llvm::Triple &Triple,
// eventually switch to an error here.
Features.push_back("+retpoline-indirect-calls");
Features.push_back("+retpoline-indirect-branches");
SpectreOpt = options::OPT_mretpoline_external_thunk;
}
auto LVIOpt = clang::driver::options::ID::OPT_INVALID;
if (Args.hasFlag(options::OPT_mlvi_hardening, options::OPT_mno_lvi_hardening,
false)) {
Features.push_back("+lvi-load-hardening");
Features.push_back("+lvi-cfi"); // load hardening implies CFI protection
LVIOpt = options::OPT_mlvi_hardening;
} else if (Args.hasFlag(options::OPT_mlvi_cfi, options::OPT_mno_lvi_cfi,
false)) {
Features.push_back("+lvi-cfi");
LVIOpt = options::OPT_mlvi_cfi;
}
if (SpectreOpt != clang::driver::options::ID::OPT_INVALID &&
LVIOpt != clang::driver::options::ID::OPT_INVALID) {
D.Diag(diag::err_drv_argument_not_allowed_with)
<< D.getOpts().getOptionName(SpectreOpt)
<< D.getOpts().getOptionName(LVIOpt);
}
// Now add any that the user explicitly requested on the command line,

View File

@ -4421,6 +4421,19 @@ void Clang::ConstructJob(Compilation &C, const JobAction &JA,
CmdArgs.push_back(A->getValue());
}
if (Arg *A = Args.getLastArg(options::OPT_maix_struct_return,
options::OPT_msvr4_struct_return)) {
if (TC.getArch() != llvm::Triple::ppc) {
D.Diag(diag::err_drv_unsupported_opt_for_target)
<< A->getSpelling() << RawTriple.str();
} else if (A->getOption().matches(options::OPT_maix_struct_return)) {
CmdArgs.push_back("-maix-struct-return");
} else {
assert(A->getOption().matches(options::OPT_msvr4_struct_return));
CmdArgs.push_back("-msvr4-struct-return");
}
}
if (Arg *A = Args.getLastArg(options::OPT_fpcc_struct_return,
options::OPT_freg_struct_return)) {
if (TC.getArch() != llvm::Triple::x86) {

View File

@ -1146,6 +1146,7 @@ void Darwin::addProfileRTLibs(const ArgList &Args,
addExportedSymbol(CmdArgs, "___gcov_flush");
addExportedSymbol(CmdArgs, "_flush_fn_list");
addExportedSymbol(CmdArgs, "_writeout_fn_list");
addExportedSymbol(CmdArgs, "_reset_fn_list");
} else {
addExportedSymbol(CmdArgs, "___llvm_profile_filename");
addExportedSymbol(CmdArgs, "___llvm_profile_raw_version");

View File

@ -309,7 +309,7 @@ static const char *getLDMOption(const llvm::Triple &T, const ArgList &Args) {
}
}
static bool getPIE(const ArgList &Args, const toolchains::Linux &ToolChain) {
static bool getPIE(const ArgList &Args, const ToolChain &TC) {
if (Args.hasArg(options::OPT_shared) || Args.hasArg(options::OPT_static) ||
Args.hasArg(options::OPT_r) || Args.hasArg(options::OPT_static_pie))
return false;
@ -317,17 +317,16 @@ static bool getPIE(const ArgList &Args, const toolchains::Linux &ToolChain) {
Arg *A = Args.getLastArg(options::OPT_pie, options::OPT_no_pie,
options::OPT_nopie);
if (!A)
return ToolChain.isPIEDefault();
return TC.isPIEDefault();
return A->getOption().matches(options::OPT_pie);
}
static bool getStaticPIE(const ArgList &Args,
const toolchains::Linux &ToolChain) {
static bool getStaticPIE(const ArgList &Args, const ToolChain &TC) {
bool HasStaticPIE = Args.hasArg(options::OPT_static_pie);
// -no-pie is an alias for -nopie. So, handling -nopie takes care of
// -no-pie as well.
if (HasStaticPIE && Args.hasArg(options::OPT_nopie)) {
const Driver &D = ToolChain.getDriver();
const Driver &D = TC.getDriver();
const llvm::opt::OptTable &Opts = D.getOpts();
const char *StaticPIEName = Opts.getOptionName(options::OPT_static_pie);
const char *NoPIEName = Opts.getOptionName(options::OPT_nopie);
@ -346,8 +345,12 @@ void tools::gnutools::Linker::ConstructJob(Compilation &C, const JobAction &JA,
const InputInfoList &Inputs,
const ArgList &Args,
const char *LinkingOutput) const {
const toolchains::Linux &ToolChain =
static_cast<const toolchains::Linux &>(getToolChain());
// FIXME: The Linker class constructor takes a ToolChain and not a
// Generic_ELF, so the static_cast might return a reference to a invalid
// instance (see PR45061). Ideally, the Linker constructor needs to take a
// Generic_ELF instead.
const toolchains::Generic_ELF &ToolChain =
static_cast<const toolchains::Generic_ELF &>(getToolChain());
const Driver &D = ToolChain.getDriver();
const llvm::Triple &Triple = getToolChain().getEffectiveTriple();
@ -418,8 +421,7 @@ void tools::gnutools::Linker::ConstructJob(Compilation &C, const JobAction &JA,
if (isAndroid)
CmdArgs.push_back("--warn-shared-textrel");
for (const auto &Opt : ToolChain.ExtraOpts)
CmdArgs.push_back(Opt.c_str());
ToolChain.addExtraOpts(CmdArgs);
CmdArgs.push_back("--eh-frame-hdr");

View File

@ -356,6 +356,12 @@ public:
void addClangTargetOptions(const llvm::opt::ArgList &DriverArgs,
llvm::opt::ArgStringList &CC1Args,
Action::OffloadKind DeviceOffloadKind) const override;
virtual std::string getDynamicLinker(const llvm::opt::ArgList &Args) const {
return {};
}
virtual void addExtraOpts(llvm::opt::ArgStringList &CmdArgs) const {}
};
} // end namespace toolchains

View File

@ -61,8 +61,7 @@ static StringRef getOSLibDir(const llvm::Triple &Triple, const ArgList &Args) {
return Triple.isArch32Bit() ? "lib" : "lib64";
}
Hurd::Hurd(const Driver &D, const llvm::Triple &Triple,
const ArgList &Args)
Hurd::Hurd(const Driver &D, const llvm::Triple &Triple, const ArgList &Args)
: Generic_ELF(D, Triple, Args) {
std::string SysRoot = computeSysRoot();
path_list &Paths = getFilePaths();
@ -170,3 +169,8 @@ void Hurd::AddClangSystemIncludeArgs(const ArgList &DriverArgs,
addExternCSystemInclude(DriverArgs, CC1Args, SysRoot + "/usr/include");
}
void Hurd::addExtraOpts(llvm::opt::ArgStringList &CmdArgs) const {
for (const auto &Opt : ExtraOpts)
CmdArgs.push_back(Opt.c_str());
}

View File

@ -27,9 +27,11 @@ public:
AddClangSystemIncludeArgs(const llvm::opt::ArgList &DriverArgs,
llvm::opt::ArgStringList &CC1Args) const override;
virtual std::string computeSysRoot() const;
std::string computeSysRoot() const;
virtual std::string getDynamicLinker(const llvm::opt::ArgList &Args) const;
std::string getDynamicLinker(const llvm::opt::ArgList &Args) const override;
void addExtraOpts(llvm::opt::ArgStringList &CmdArgs) const override;
std::vector<std::string> ExtraOpts;

View File

@ -986,3 +986,8 @@ void Linux::addProfileRTLibs(const llvm::opt::ArgList &Args,
Twine("-u", llvm::getInstrProfRuntimeHookVarName())));
ToolChain::addProfileRTLibs(Args, CmdArgs);
}
void Linux::addExtraOpts(llvm::opt::ArgStringList &CmdArgs) const {
for (const auto &Opt : ExtraOpts)
CmdArgs.push_back(Opt.c_str());
}

View File

@ -42,7 +42,9 @@ public:
llvm::opt::ArgStringList &CmdArgs) const override;
virtual std::string computeSysRoot() const;
virtual std::string getDynamicLinker(const llvm::opt::ArgList &Args) const;
std::string getDynamicLinker(const llvm::opt::ArgList &Args) const override;
void addExtraOpts(llvm::opt::ArgStringList &CmdArgs) const override;
std::vector<std::string> ExtraOpts;

View File

@ -2176,6 +2176,10 @@ static bool isFunctionDeclarationName(const FormatToken &Current,
Next = Next->Next;
continue;
}
if (Next->is(TT_TemplateOpener) && Next->MatchingParen) {
Next = Next->MatchingParen;
continue;
}
break;
}
@ -2705,20 +2709,40 @@ bool TokenAnnotator::spaceRequiredBetween(const AnnotatedLine &Line,
tok::l_square));
if (Right.is(tok::star) && Left.is(tok::l_paren))
return false;
if (Right.isOneOf(tok::star, tok::amp, tok::ampamp) &&
(Left.is(tok::identifier) || Left.isSimpleTypeSpecifier()) &&
// Space between the type and the * in:
// operator void*()
// operator char*()
// operator /*comment*/ const char*()
// operator volatile /*comment*/ char*()
// operator Foo*()
// dependent on PointerAlignment style.
Left.Previous &&
(Left.Previous->endsSequence(tok::kw_operator) ||
Left.Previous->endsSequence(tok::kw_const, tok::kw_operator) ||
Left.Previous->endsSequence(tok::kw_volatile, tok::kw_operator)))
return (Style.PointerAlignment != FormatStyle::PAS_Left);
if (Right.is(tok::star) && Left.is(tok::star))
return false;
if (Right.isOneOf(tok::star, tok::amp, tok::ampamp)) {
const FormatToken *Previous = &Left;
while (Previous && !Previous->is(tok::kw_operator)) {
if (Previous->is(tok::identifier) || Previous->isSimpleTypeSpecifier()) {
Previous = Previous->getPreviousNonComment();
continue;
}
if (Previous->is(TT_TemplateCloser) && Previous->MatchingParen) {
Previous = Previous->MatchingParen->getPreviousNonComment();
continue;
}
if (Previous->is(tok::coloncolon)) {
Previous = Previous->getPreviousNonComment();
continue;
}
break;
}
// Space between the type and the * in:
// operator void*()
// operator char*()
// operator /*comment*/ const char*()
// operator volatile /*comment*/ char*()
// operator Foo*()
// operator C<T>*()
// operator std::Foo*()
// operator C<T>::D<U>*()
// dependent on PointerAlignment style.
if (Previous && (Previous->endsSequence(tok::kw_operator) ||
Previous->endsSequence(tok::kw_const, tok::kw_operator) ||
Previous->endsSequence(tok::kw_volatile, tok::kw_operator)))
return (Style.PointerAlignment != FormatStyle::PAS_Left);
}
const auto SpaceRequiredForArrayInitializerLSquare =
[](const FormatToken &LSquareTok, const FormatStyle &Style) {
return Style.SpacesInContainerLiterals ||

View File

@ -1279,11 +1279,18 @@ static bool ParseCodeGenArgs(CodeGenOptions &Opts, ArgList &Args, InputKind IK,
Diags.Report(diag::err_drv_invalid_value) << A->getAsString(Args) << Val;
}
if (Arg *A = Args.getLastArg(OPT_fpcc_struct_return, OPT_freg_struct_return)) {
if (A->getOption().matches(OPT_fpcc_struct_return)) {
// X86_32 has -fppc-struct-return and -freg-struct-return.
// PPC32 has -maix-struct-return and -msvr4-struct-return.
if (Arg *A =
Args.getLastArg(OPT_fpcc_struct_return, OPT_freg_struct_return,
OPT_maix_struct_return, OPT_msvr4_struct_return)) {
const Option &O = A->getOption();
if (O.matches(OPT_fpcc_struct_return) ||
O.matches(OPT_maix_struct_return)) {
Opts.setStructReturnConvention(CodeGenOptions::SRCK_OnStack);
} else {
assert(A->getOption().matches(OPT_freg_struct_return));
assert(O.matches(OPT_freg_struct_return) ||
O.matches(OPT_msvr4_struct_return));
Opts.setStructReturnConvention(CodeGenOptions::SRCK_InRegs);
}
}

View File

@ -3817,6 +3817,9 @@ TypeResult Sema::ActOnTagTemplateIdType(TagUseKind TUK,
SourceLocation LAngleLoc,
ASTTemplateArgsPtr TemplateArgsIn,
SourceLocation RAngleLoc) {
if (SS.isInvalid())
return TypeResult(true);
TemplateName Template = TemplateD.get();
// Translate the parser's template argument list in our AST format.
@ -5925,7 +5928,9 @@ bool UnnamedLocalNoLinkageFinder::VisitDependentNameType(
bool UnnamedLocalNoLinkageFinder::VisitDependentTemplateSpecializationType(
const DependentTemplateSpecializationType* T) {
return VisitNestedNameSpecifier(T->getQualifier());
if (auto *Q = T->getQualifier())
return VisitNestedNameSpecifier(Q);
return false;
}
bool UnnamedLocalNoLinkageFinder::VisitPackExpansionType(
@ -5979,6 +5984,7 @@ bool UnnamedLocalNoLinkageFinder::VisitTagDecl(const TagDecl *Tag) {
bool UnnamedLocalNoLinkageFinder::VisitNestedNameSpecifier(
NestedNameSpecifier *NNS) {
assert(NNS);
if (NNS->getPrefix() && VisitNestedNameSpecifier(NNS->getPrefix()))
return true;

View File

@ -2343,7 +2343,7 @@ ParmVarDecl *Sema::SubstParmVarDecl(ParmVarDecl *OldParm,
UnparsedDefaultArgInstantiations[OldParm].push_back(NewParm);
} else if (Expr *Arg = OldParm->getDefaultArg()) {
FunctionDecl *OwningFunc = cast<FunctionDecl>(OldParm->getDeclContext());
if (OwningFunc->isLexicallyWithinFunctionOrMethod()) {
if (OwningFunc->isInLocalScope()) {
// Instantiate default arguments for methods of local classes (DR1484)
// and non-defining declarations.
Sema::ContextRAII SavedContext(*this, OwningFunc);

View File

@ -4367,7 +4367,7 @@ TemplateDeclInstantiator::InitFunctionInstantiation(FunctionDecl *New,
EPI.ExceptionSpec.Type != EST_None &&
EPI.ExceptionSpec.Type != EST_DynamicNone &&
EPI.ExceptionSpec.Type != EST_BasicNoexcept &&
!Tmpl->isLexicallyWithinFunctionOrMethod()) {
!Tmpl->isInLocalScope()) {
FunctionDecl *ExceptionSpecTemplate = Tmpl;
if (EPI.ExceptionSpec.Type == EST_Uninstantiated)
ExceptionSpecTemplate = EPI.ExceptionSpec.SourceTemplate;

View File

@ -4022,50 +4022,8 @@ template<typename Derived>
void TreeTransform<Derived>::InventTemplateArgumentLoc(
const TemplateArgument &Arg,
TemplateArgumentLoc &Output) {
SourceLocation Loc = getDerived().getBaseLocation();
switch (Arg.getKind()) {
case TemplateArgument::Null:
llvm_unreachable("null template argument in TreeTransform");
break;
case TemplateArgument::Type:
Output = TemplateArgumentLoc(Arg,
SemaRef.Context.getTrivialTypeSourceInfo(Arg.getAsType(), Loc));
break;
case TemplateArgument::Template:
case TemplateArgument::TemplateExpansion: {
NestedNameSpecifierLocBuilder Builder;
TemplateName Template = Arg.getAsTemplateOrTemplatePattern();
if (DependentTemplateName *DTN = Template.getAsDependentTemplateName())
Builder.MakeTrivial(SemaRef.Context, DTN->getQualifier(), Loc);
else if (QualifiedTemplateName *QTN = Template.getAsQualifiedTemplateName())
Builder.MakeTrivial(SemaRef.Context, QTN->getQualifier(), Loc);
if (Arg.getKind() == TemplateArgument::Template)
Output = TemplateArgumentLoc(Arg,
Builder.getWithLocInContext(SemaRef.Context),
Loc);
else
Output = TemplateArgumentLoc(Arg,
Builder.getWithLocInContext(SemaRef.Context),
Loc, Loc);
break;
}
case TemplateArgument::Expression:
Output = TemplateArgumentLoc(Arg, Arg.getAsExpr());
break;
case TemplateArgument::Declaration:
case TemplateArgument::Integral:
case TemplateArgument::Pack:
case TemplateArgument::NullPtr:
Output = TemplateArgumentLoc(Arg, TemplateArgumentLocInfo());
break;
}
Output = getSema().getTrivialTemplateArgumentLoc(
Arg, QualType(), getDerived().getBaseLocation());
}
template<typename Derived>
@ -4075,12 +4033,45 @@ bool TreeTransform<Derived>::TransformTemplateArgument(
const TemplateArgument &Arg = Input.getArgument();
switch (Arg.getKind()) {
case TemplateArgument::Null:
case TemplateArgument::Integral:
case TemplateArgument::Pack:
case TemplateArgument::Declaration:
case TemplateArgument::NullPtr:
llvm_unreachable("Unexpected TemplateArgument");
case TemplateArgument::Integral:
case TemplateArgument::NullPtr:
case TemplateArgument::Declaration: {
// Transform a resolved template argument straight to a resolved template
// argument. We get here when substituting into an already-substituted
// template type argument during concept satisfaction checking.
QualType T = Arg.getNonTypeTemplateArgumentType();
QualType NewT = getDerived().TransformType(T);
if (NewT.isNull())
return true;
ValueDecl *D = Arg.getKind() == TemplateArgument::Declaration
? Arg.getAsDecl()
: nullptr;
ValueDecl *NewD = D ? cast_or_null<ValueDecl>(getDerived().TransformDecl(
getDerived().getBaseLocation(), D))
: nullptr;
if (D && !NewD)
return true;
if (NewT == T && D == NewD)
Output = Input;
else if (Arg.getKind() == TemplateArgument::Integral)
Output = TemplateArgumentLoc(
TemplateArgument(getSema().Context, Arg.getAsIntegral(), NewT),
TemplateArgumentLocInfo());
else if (Arg.getKind() == TemplateArgument::NullPtr)
Output = TemplateArgumentLoc(TemplateArgument(NewT, /*IsNullPtr=*/true),
TemplateArgumentLocInfo());
else
Output = TemplateArgumentLoc(TemplateArgument(NewD, NewT),
TemplateArgumentLocInfo());
return false;
}
case TemplateArgument::Type: {
TypeSourceInfo *DI = Input.getTypeSourceInfo();
if (!DI)
@ -11837,19 +11828,6 @@ TreeTransform<Derived>::TransformLambdaExpr(LambdaExpr *E) {
LSI->CallOperator = NewCallOperator;
for (unsigned I = 0, NumParams = NewCallOperator->getNumParams();
I != NumParams; ++I) {
auto *P = NewCallOperator->getParamDecl(I);
if (P->hasUninstantiatedDefaultArg()) {
EnterExpressionEvaluationContext Eval(
getSema(),
Sema::ExpressionEvaluationContext::PotentiallyEvaluatedIfUsed, P);
ExprResult R = getDerived().TransformExpr(
E->getCallOperator()->getParamDecl(I)->getDefaultArg());
P->setDefaultArg(R.get());
}
}
getDerived().transformAttrs(E->getCallOperator(), NewCallOperator);
getDerived().transformedLocalDecl(E->getCallOperator(), {NewCallOperator});

View File

@ -335,14 +335,38 @@ public:
SourceRange Range, const MacroArgs *Args) override {
if (!Collector)
return;
// Only record top-level expansions, not those where:
const auto &SM = Collector->PP.getSourceManager();
// Only record top-level expansions that directly produce expanded tokens.
// This excludes those where:
// - the macro use is inside a macro body,
// - the macro appears in an argument to another macro.
if (!MacroNameTok.getLocation().isFileID() ||
(LastExpansionEnd.isValid() &&
Collector->PP.getSourceManager().isBeforeInTranslationUnit(
Range.getBegin(), LastExpansionEnd)))
// However macro expansion isn't really a tree, it's token rewrite rules,
// so there are other cases, e.g.
// #define B(X) X
// #define A 1 + B
// A(2)
// Both A and B produce expanded tokens, though the macro name 'B' comes
// from an expansion. The best we can do is merge the mappings for both.
// The *last* token of any top-level macro expansion must be in a file.
// (In the example above, see the closing paren of the expansion of B).
if (!Range.getEnd().isFileID())
return;
// If there's a current expansion that encloses this one, this one can't be
// top-level.
if (LastExpansionEnd.isValid() &&
!SM.isBeforeInTranslationUnit(LastExpansionEnd, Range.getEnd()))
return;
// If the macro invocation (B) starts in a macro (A) but ends in a file,
// we'll create a merged mapping for A + B by overwriting the endpoint for
// A's startpoint.
if (!Range.getBegin().isFileID()) {
Range.setBegin(SM.getExpansionLoc(Range.getBegin()));
assert(Collector->Expansions.count(Range.getBegin().getRawEncoding()) &&
"Overlapping macros should have same expansion location");
}
Collector->Expansions[Range.getBegin().getRawEncoding()] = Range.getEnd();
LastExpansionEnd = Range.getEnd();
}
@ -399,197 +423,167 @@ public:
}
TokenBuffer build() && {
buildSpelledTokens();
// Walk over expanded tokens and spelled tokens in parallel, building the
// mappings between those using source locations.
// To correctly recover empty macro expansions, we also take locations
// reported to PPCallbacks::MacroExpands into account as we do not have any
// expanded tokens with source locations to guide us.
// The 'eof' token is special, it is not part of spelled token stream. We
// handle it separately at the end.
assert(!Result.ExpandedTokens.empty());
assert(Result.ExpandedTokens.back().kind() == tok::eof);
for (unsigned I = 0; I < Result.ExpandedTokens.size() - 1; ++I) {
// (!) I might be updated by the following call.
processExpandedToken(I);
// Tokenize every file that contributed tokens to the expanded stream.
buildSpelledTokens();
// The expanded token stream consists of runs of tokens that came from
// the same source (a macro expansion, part of a file etc).
// Between these runs are the logical positions of spelled tokens that
// didn't expand to anything.
while (NextExpanded < Result.ExpandedTokens.size() - 1 /* eof */) {
// Create empty mappings for spelled tokens that expanded to nothing here.
// May advance NextSpelled, but NextExpanded is unchanged.
discard();
// Create mapping for a contiguous run of expanded tokens.
// Advances NextExpanded past the run, and NextSpelled accordingly.
unsigned OldPosition = NextExpanded;
advance();
if (NextExpanded == OldPosition)
diagnoseAdvanceFailure();
}
// 'eof' not handled in the loop, do it here.
assert(SM.getMainFileID() ==
SM.getFileID(Result.ExpandedTokens.back().location()));
fillGapUntil(Result.Files[SM.getMainFileID()],
Result.ExpandedTokens.back().location(),
Result.ExpandedTokens.size() - 1);
Result.Files[SM.getMainFileID()].EndExpanded = Result.ExpandedTokens.size();
// Some files might have unaccounted spelled tokens at the end, add an empty
// mapping for those as they did not have expanded counterparts.
fillGapsAtEndOfFiles();
// If any tokens remain in any of the files, they didn't expand to anything.
// Create empty mappings up until the end of the file.
for (const auto &File : Result.Files)
discard(File.first);
return std::move(Result);
}
private:
/// Process the next token in an expanded stream and move corresponding
/// spelled tokens, record any mapping if needed.
/// (!) \p I will be updated if this had to skip tokens, e.g. for macros.
void processExpandedToken(unsigned &I) {
auto L = Result.ExpandedTokens[I].location();
if (L.isMacroID()) {
processMacroExpansion(SM.getExpansionRange(L), I);
return;
// Consume a sequence of spelled tokens that didn't expand to anything.
// In the simplest case, skips spelled tokens until finding one that produced
// the NextExpanded token, and creates an empty mapping for them.
// If Drain is provided, skips remaining tokens from that file instead.
void discard(llvm::Optional<FileID> Drain = llvm::None) {
SourceLocation Target =
Drain ? SM.getLocForEndOfFile(*Drain)
: SM.getExpansionLoc(
Result.ExpandedTokens[NextExpanded].location());
FileID File = SM.getFileID(Target);
const auto &SpelledTokens = Result.Files[File].SpelledTokens;
auto &NextSpelled = this->NextSpelled[File];
TokenBuffer::Mapping Mapping;
Mapping.BeginSpelled = NextSpelled;
// When dropping trailing tokens from a file, the empty mapping should
// be positioned within the file's expanded-token range (at the end).
Mapping.BeginExpanded = Mapping.EndExpanded =
Drain ? Result.Files[*Drain].EndExpanded : NextExpanded;
// We may want to split into several adjacent empty mappings.
// FlushMapping() emits the current mapping and starts a new one.
auto FlushMapping = [&, this] {
Mapping.EndSpelled = NextSpelled;
if (Mapping.BeginSpelled != Mapping.EndSpelled)
Result.Files[File].Mappings.push_back(Mapping);
Mapping.BeginSpelled = NextSpelled;
};
while (NextSpelled < SpelledTokens.size() &&
SpelledTokens[NextSpelled].location() < Target) {
// If we know mapping bounds at [NextSpelled, KnownEnd] (macro expansion)
// then we want to partition our (empty) mapping.
// [Start, NextSpelled) [NextSpelled, KnownEnd] (KnownEnd, Target)
SourceLocation KnownEnd = CollectedExpansions.lookup(
SpelledTokens[NextSpelled].location().getRawEncoding());
if (KnownEnd.isValid()) {
FlushMapping(); // Emits [Start, NextSpelled)
while (NextSpelled < SpelledTokens.size() &&
SpelledTokens[NextSpelled].location() <= KnownEnd)
++NextSpelled;
FlushMapping(); // Emits [NextSpelled, KnownEnd]
// Now the loop continues and will emit (KnownEnd, Target).
} else {
++NextSpelled;
}
}
if (L.isFileID()) {
auto FID = SM.getFileID(L);
TokenBuffer::MarkedFile &File = Result.Files[FID];
FlushMapping();
}
fillGapUntil(File, L, I);
// Consumes the NextExpanded token and others that are part of the same run.
// Increases NextExpanded and NextSpelled by at least one, and adds a mapping
// (unless this is a run of file tokens, which we represent with no mapping).
void advance() {
const syntax::Token &Tok = Result.ExpandedTokens[NextExpanded];
SourceLocation Expansion = SM.getExpansionLoc(Tok.location());
FileID File = SM.getFileID(Expansion);
const auto &SpelledTokens = Result.Files[File].SpelledTokens;
auto &NextSpelled = this->NextSpelled[File];
// Skip the token.
assert(File.SpelledTokens[NextSpelled[FID]].location() == L &&
"no corresponding token in the spelled stream");
++NextSpelled[FID];
return;
if (Tok.location().isFileID()) {
// A run of file tokens continues while the expanded/spelled tokens match.
while (NextSpelled < SpelledTokens.size() &&
NextExpanded < Result.ExpandedTokens.size() &&
SpelledTokens[NextSpelled].location() ==
Result.ExpandedTokens[NextExpanded].location()) {
++NextSpelled;
++NextExpanded;
}
// We need no mapping for file tokens copied to the expanded stream.
} else {
// We found a new macro expansion. We should have its spelling bounds.
auto End = CollectedExpansions.lookup(Expansion.getRawEncoding());
assert(End.isValid() && "Macro expansion wasn't captured?");
// Mapping starts here...
TokenBuffer::Mapping Mapping;
Mapping.BeginExpanded = NextExpanded;
Mapping.BeginSpelled = NextSpelled;
// ... consumes spelled tokens within bounds we captured ...
while (NextSpelled < SpelledTokens.size() &&
SpelledTokens[NextSpelled].location() <= End)
++NextSpelled;
// ... consumes expanded tokens rooted at the same expansion ...
while (NextExpanded < Result.ExpandedTokens.size() &&
SM.getExpansionLoc(
Result.ExpandedTokens[NextExpanded].location()) == Expansion)
++NextExpanded;
// ... and ends here.
Mapping.EndExpanded = NextExpanded;
Mapping.EndSpelled = NextSpelled;
Result.Files[File].Mappings.push_back(Mapping);
}
}
/// Skipped expanded and spelled tokens of a macro expansion that covers \p
/// SpelledRange. Add a corresponding mapping.
/// (!) \p I will be the index of the last token in an expansion after this
/// function returns.
void processMacroExpansion(CharSourceRange SpelledRange, unsigned &I) {
auto FID = SM.getFileID(SpelledRange.getBegin());
assert(FID == SM.getFileID(SpelledRange.getEnd()));
TokenBuffer::MarkedFile &File = Result.Files[FID];
fillGapUntil(File, SpelledRange.getBegin(), I);
// Skip all expanded tokens from the same macro expansion.
unsigned BeginExpanded = I;
for (; I + 1 < Result.ExpandedTokens.size(); ++I) {
auto NextL = Result.ExpandedTokens[I + 1].location();
if (!NextL.isMacroID() ||
SM.getExpansionLoc(NextL) != SpelledRange.getBegin())
break;
// advance() is supposed to consume at least one token - if not, we crash.
void diagnoseAdvanceFailure() {
#ifndef NDEBUG
// Show the failed-to-map token in context.
for (unsigned I = (NextExpanded < 10) ? 0 : NextExpanded - 10;
I < NextExpanded + 5 && I < Result.ExpandedTokens.size(); ++I) {
const char *L =
(I == NextExpanded) ? "!! " : (I < NextExpanded) ? "ok " : " ";
llvm::errs() << L << Result.ExpandedTokens[I].dumpForTests(SM) << "\n";
}
unsigned EndExpanded = I + 1;
consumeMapping(File, SM.getFileOffset(SpelledRange.getEnd()), BeginExpanded,
EndExpanded, NextSpelled[FID]);
#endif
llvm_unreachable("Couldn't map expanded token to spelled tokens!");
}
/// Initializes TokenBuffer::Files and fills spelled tokens and expanded
/// ranges for each of the files.
void buildSpelledTokens() {
for (unsigned I = 0; I < Result.ExpandedTokens.size(); ++I) {
auto FID =
SM.getFileID(SM.getExpansionLoc(Result.ExpandedTokens[I].location()));
const auto &Tok = Result.ExpandedTokens[I];
auto FID = SM.getFileID(SM.getExpansionLoc(Tok.location()));
auto It = Result.Files.try_emplace(FID);
TokenBuffer::MarkedFile &File = It.first->second;
File.EndExpanded = I + 1;
// The eof token should not be considered part of the main-file's range.
File.EndExpanded = Tok.kind() == tok::eof ? I : I + 1;
if (!It.second)
continue; // we have seen this file before.
// This is the first time we see this file.
File.BeginExpanded = I;
File.SpelledTokens = tokenize(FID, SM, LangOpts);
}
}
void consumeEmptyMapping(TokenBuffer::MarkedFile &File, unsigned EndOffset,
unsigned ExpandedIndex, unsigned &SpelledIndex) {
consumeMapping(File, EndOffset, ExpandedIndex, ExpandedIndex, SpelledIndex);
}
/// Consumes spelled tokens that form a macro expansion and adds an entry to
/// the resulting token buffer.
/// (!) SpelledIndex is updated in-place.
void consumeMapping(TokenBuffer::MarkedFile &File, unsigned EndOffset,
unsigned BeginExpanded, unsigned EndExpanded,
unsigned &SpelledIndex) {
// We need to record this mapping before continuing.
unsigned MappingBegin = SpelledIndex;
++SpelledIndex;
bool HitMapping =
tryConsumeSpelledUntil(File, EndOffset + 1, SpelledIndex).hasValue();
(void)HitMapping;
assert(!HitMapping && "recursive macro expansion?");
TokenBuffer::Mapping M;
M.BeginExpanded = BeginExpanded;
M.EndExpanded = EndExpanded;
M.BeginSpelled = MappingBegin;
M.EndSpelled = SpelledIndex;
File.Mappings.push_back(M);
}
/// Consumes spelled tokens until location \p L is reached and adds a mapping
/// covering the consumed tokens. The mapping will point to an empty expanded
/// range at position \p ExpandedIndex.
void fillGapUntil(TokenBuffer::MarkedFile &File, SourceLocation L,
unsigned ExpandedIndex) {
assert(L.isFileID());
FileID FID;
unsigned Offset;
std::tie(FID, Offset) = SM.getDecomposedLoc(L);
unsigned &SpelledIndex = NextSpelled[FID];
unsigned MappingBegin = SpelledIndex;
while (true) {
auto EndLoc = tryConsumeSpelledUntil(File, Offset, SpelledIndex);
if (SpelledIndex != MappingBegin) {
TokenBuffer::Mapping M;
M.BeginSpelled = MappingBegin;
M.EndSpelled = SpelledIndex;
M.BeginExpanded = M.EndExpanded = ExpandedIndex;
File.Mappings.push_back(M);
}
if (!EndLoc)
break;
consumeEmptyMapping(File, SM.getFileOffset(*EndLoc), ExpandedIndex,
SpelledIndex);
MappingBegin = SpelledIndex;
}
};
/// Consumes spelled tokens until it reaches Offset or a mapping boundary,
/// i.e. a name of a macro expansion or the start '#' token of a PP directive.
/// (!) NextSpelled is updated in place.
///
/// returns None if \p Offset was reached, otherwise returns the end location
/// of a mapping that starts at \p NextSpelled.
llvm::Optional<SourceLocation>
tryConsumeSpelledUntil(TokenBuffer::MarkedFile &File, unsigned Offset,
unsigned &NextSpelled) {
for (; NextSpelled < File.SpelledTokens.size(); ++NextSpelled) {
auto L = File.SpelledTokens[NextSpelled].location();
if (Offset <= SM.getFileOffset(L))
return llvm::None; // reached the offset we are looking for.
auto Mapping = CollectedExpansions.find(L.getRawEncoding());
if (Mapping != CollectedExpansions.end())
return Mapping->second; // found a mapping before the offset.
}
return llvm::None; // no more tokens, we "reached" the offset.
}
/// Adds empty mappings for unconsumed spelled tokens at the end of each file.
void fillGapsAtEndOfFiles() {
for (auto &F : Result.Files) {
if (F.second.SpelledTokens.empty())
continue;
fillGapUntil(F.second, F.second.SpelledTokens.back().endLocation(),
F.second.EndExpanded);
}
}
TokenBuffer Result;
/// For each file, a position of the next spelled token we will consume.
llvm::DenseMap<FileID, unsigned> NextSpelled;
unsigned NextExpanded = 0; // cursor in ExpandedTokens
llvm::DenseMap<FileID, unsigned> NextSpelled; // cursor in SpelledTokens
PPExpansions CollectedExpansions;
const SourceManager &SM;
const LangOptions &LangOpts;
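
For readers unfamiliar with the syntax-token machinery rewritten above: the builder records mappings between the tokens as written in a file ("spelled") and the tokens the preprocessor hands to the parser ("expanded"). A purely illustrative sketch of the data involved (the struct below is a simplified stand-in, not clang's TokenBuffer::Mapping):

// For this input:
//   #define FOO 1 + 2
//   int x = FOO;
// the spelled tokens of the file are:      int x = FOO ;
// the expanded (post-preprocessor) stream: int x = 1 + 2 ;
// and the builder records one mapping: spelled range [FOO, FOO] <->
// expanded range [1, 2]; tokens outside the expansion match one-to-one and
// need no explicit mapping at all.
struct SimplifiedMapping {
  unsigned BeginSpelled, EndSpelled;    // half-open index range into spelled
  unsigned BeginExpanded, EndExpanded;  // half-open index range into expanded
};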


@ -2825,6 +2825,7 @@ void EmitClangAttrPCHRead(RecordKeeper &Records, raw_ostream &OS) {
if (R.isSubClassOf(InhClass))
OS << " bool isInherited = Record.readInt();\n";
OS << " bool isImplicit = Record.readInt();\n";
OS << " bool isPackExpansion = Record.readInt();\n";
ArgRecords = R.getValueAsListOfDefs("Args");
Args.clear();
for (const auto *Arg : ArgRecords) {
@ -2840,6 +2841,7 @@ void EmitClangAttrPCHRead(RecordKeeper &Records, raw_ostream &OS) {
if (R.isSubClassOf(InhClass))
OS << " cast<InheritableAttr>(New)->setInherited(isInherited);\n";
OS << " New->setImplicit(isImplicit);\n";
OS << " New->setPackExpansion(isPackExpansion);\n";
OS << " break;\n";
OS << " }\n";
}
@ -2866,6 +2868,7 @@ void EmitClangAttrPCHWrite(RecordKeeper &Records, raw_ostream &OS) {
if (R.isSubClassOf(InhClass))
OS << " Record.push_back(SA->isInherited());\n";
OS << " Record.push_back(A->isImplicit());\n";
OS << " Record.push_back(A->isPackExpansion());\n";
for (const auto *Arg : Args)
createArgument(*Arg, R.getName())->writePCHWrite(OS);


@ -0,0 +1,31 @@
//===-- int_mul_impl.inc - Integer multiplication -------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// Helpers used by __mulsi3, __muldi3.
//
//===----------------------------------------------------------------------===//
#if !defined(__riscv_mul)
.text
.align 2
.globl __mulxi3
.type __mulxi3, @function
__mulxi3:
mv a2, a0
mv a0, zero
.L1:
andi a3, a1, 1
beqz a3, .L2
add a0, a0, a2
.L2:
srli a1, a1, 1
slli a2, a2, 1
bnez a1, .L1
ret
#endif
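
The loop above is the classic shift-and-add multiplier used when the target lacks the M extension. A C++ sketch of the same algorithm (the name mul_soft is made up for illustration; it is not part of the merged sources):

#include <cstdint>

// Shift-and-add multiplication, mirroring the __mulxi3 loop above: test the
// low bit of b, add the (progressively shifted) multiplicand when it is set,
// then shift both operands and repeat until b runs out of bits.
static uint32_t mul_soft(uint32_t a, uint32_t b) {
  uint32_t acc = 0;      // a0 in the assembly
  uint32_t addend = a;   // a2 in the assembly
  while (b != 0) {
    if (b & 1)           // andi a3, a1, 1 / beqz
      acc += addend;     // add a0, a0, a2
    b >>= 1;             // srli a1, a1, 1
    addend <<= 1;        // slli a2, a2, 1
  }
  return acc;
}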


@ -0,0 +1,11 @@
//===--- muldi3.S - Integer multiplication routines -----------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#if __riscv_xlen == 64
#define __mulxi3 __muldi3
#include "int_mul_impl.inc"
#endif


@ -1,4 +1,4 @@
//===--- mulsi3.S - Integer multiplication routines routines ---===//
//===--- mulsi3.S - Integer multiplication routines -----------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
@ -6,22 +6,7 @@
//
//===----------------------------------------------------------------------===//
#if !defined(__riscv_mul) && __riscv_xlen == 32
.text
.align 2
.globl __mulsi3
.type __mulsi3, @function
__mulsi3:
mv a2, a0
mv a0, zero
.L1:
andi a3, a1, 1
beqz a3, .L2
add a0, a0, a2
.L2:
srli a1, a1, 1
slli a2, a2, 1
bnez a1, .L1
ret
#if __riscv_xlen == 32
#define __mulxi3 __mulsi3
#include "int_mul_impl.inc"
#endif


@ -32,8 +32,10 @@
#include <windows.h>
#include "WindowsMMap.h"
#else
#include <sys/mman.h>
#include <sys/file.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>
#endif
#if defined(__FreeBSD__) && defined(__i386__)
@ -119,6 +121,11 @@ struct fn_list writeout_fn_list;
*/
struct fn_list flush_fn_list;
/*
* A list of reset functions, shared between all dynamic objects.
*/
struct fn_list reset_fn_list;
static void fn_list_insert(struct fn_list* list, fn_ptr fn) {
struct fn_node* new_node = malloc(sizeof(struct fn_node));
new_node->fn = fn;
@ -634,7 +641,46 @@ void llvm_delete_flush_function_list(void) {
}
COMPILER_RT_VISIBILITY
void llvm_gcov_init(fn_ptr wfn, fn_ptr ffn) {
void llvm_register_reset_function(fn_ptr fn) {
fn_list_insert(&reset_fn_list, fn);
}
COMPILER_RT_VISIBILITY
void llvm_delete_reset_function_list(void) { fn_list_remove(&reset_fn_list); }
COMPILER_RT_VISIBILITY
void llvm_reset_counters(void) {
struct fn_node *curr = reset_fn_list.head;
while (curr) {
if (curr->id == CURRENT_ID) {
curr->fn();
}
curr = curr->next;
}
}
#if !defined(_WIN32)
COMPILER_RT_VISIBILITY
pid_t __gcov_fork() {
pid_t parent_pid = getpid();
pid_t pid = fork();
if (pid == 0) {
pid_t child_pid = getpid();
if (child_pid != parent_pid) {
// The pid changed, so this is a forked child (one could have a custom fork
// function). Just reset the counters for this child process.
llvm_reset_counters();
}
}
return pid;
}
#endif
COMPILER_RT_VISIBILITY
void llvm_gcov_init(fn_ptr wfn, fn_ptr ffn, fn_ptr rfn) {
static int atexit_ran = 0;
if (wfn)
@ -643,10 +689,14 @@ void llvm_gcov_init(fn_ptr wfn, fn_ptr ffn) {
if (ffn)
llvm_register_flush_function(ffn);
if (rfn)
llvm_register_reset_function(rfn);
if (atexit_ran == 0) {
atexit_ran = 1;
/* Make sure we write out the data and delete the data structures. */
atexit(llvm_delete_reset_function_list);
atexit(llvm_delete_flush_function_list);
atexit(llvm_delete_writeout_function_list);
atexit(llvm_writeout_files);


@ -359,7 +359,7 @@ struct _LIBCPP_TEMPLATE_VIS array<_Tp, 0>
#ifndef _LIBCPP_HAS_NO_DEDUCTION_GUIDES
template<class _Tp, class... _Args,
class = typename enable_if<(is_same_v<_Tp, _Args> && ...), void>::type
class = _EnableIf<__all<_IsSame<_Tp, _Args>::value...>::value>
>
array(_Tp, _Args...)
-> array<_Tp, 1 + sizeof...(_Args)>;
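
The behaviour of the deduction guide is unchanged; it still participates only when every initializer has the same type. A small usage sketch, assuming a C++17 compiler:

#include <array>

int main() {
  std::array a{1, 2, 3};       // deduced as std::array<int, 3>
  // std::array b{1, 2, 3.0};  // ill-formed: mixed element types fail the
  //                           // guide's constraint
  return a.size() == 3 ? 0 : 1;
}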


@ -486,7 +486,9 @@ public:
class ImportThunkChunkARM : public ImportThunkChunk {
public:
explicit ImportThunkChunkARM(Defined *s) : ImportThunkChunk(s) {}
explicit ImportThunkChunkARM(Defined *s) : ImportThunkChunk(s) {
setAlignment(2);
}
size_t getSize() const override { return sizeof(importThunkARM); }
void getBaserels(std::vector<Baserel> *res) override;
void writeTo(uint8_t *buf) const override;
@ -494,14 +496,16 @@ public:
class ImportThunkChunkARM64 : public ImportThunkChunk {
public:
explicit ImportThunkChunkARM64(Defined *s) : ImportThunkChunk(s) {}
explicit ImportThunkChunkARM64(Defined *s) : ImportThunkChunk(s) {
setAlignment(4);
}
size_t getSize() const override { return sizeof(importThunkARM64); }
void writeTo(uint8_t *buf) const override;
};
class RangeExtensionThunkARM : public NonSectionChunk {
public:
explicit RangeExtensionThunkARM(Defined *t) : target(t) {}
explicit RangeExtensionThunkARM(Defined *t) : target(t) { setAlignment(2); }
size_t getSize() const override;
void writeTo(uint8_t *buf) const override;


@ -365,7 +365,9 @@ public:
class ThunkChunkARM : public NonSectionChunk {
public:
ThunkChunkARM(Defined *i, Chunk *tm) : imp(i), tailMerge(tm) {}
ThunkChunkARM(Defined *i, Chunk *tm) : imp(i), tailMerge(tm) {
setAlignment(2);
}
size_t getSize() const override { return sizeof(thunkARM); }
@ -385,7 +387,9 @@ public:
class TailMergeChunkARM : public NonSectionChunk {
public:
TailMergeChunkARM(Chunk *d, Defined *h) : desc(d), helper(h) {}
TailMergeChunkARM(Chunk *d, Defined *h) : desc(d), helper(h) {
setAlignment(2);
}
size_t getSize() const override { return sizeof(tailMergeARM); }
@ -405,7 +409,9 @@ public:
class ThunkChunkARM64 : public NonSectionChunk {
public:
ThunkChunkARM64(Defined *i, Chunk *tm) : imp(i), tailMerge(tm) {}
ThunkChunkARM64(Defined *i, Chunk *tm) : imp(i), tailMerge(tm) {
setAlignment(4);
}
size_t getSize() const override { return sizeof(thunkARM64); }
@ -422,7 +428,9 @@ public:
class TailMergeChunkARM64 : public NonSectionChunk {
public:
TailMergeChunkARM64(Chunk *d, Defined *h) : desc(d), helper(h) {}
TailMergeChunkARM64(Chunk *d, Defined *h) : desc(d), helper(h) {
setAlignment(4);
}
size_t getSize() const override { return sizeof(tailMergeARM64); }


@ -28,10 +28,12 @@ void markLive(ArrayRef<Chunk *> chunks) {
// as we push, so sections never appear twice in the list.
SmallVector<SectionChunk *, 256> worklist;
// COMDAT section chunks are dead by default. Add non-COMDAT chunks.
// COMDAT section chunks are dead by default. Add non-COMDAT chunks. Do not
// traverse DWARF sections. They are live, but they should not keep other
// sections alive.
for (Chunk *c : chunks)
if (auto *sc = dyn_cast<SectionChunk>(c))
if (sc->live)
if (sc->live && !sc->isDWARF())
worklist.push_back(sc);
auto enqueue = [&](SectionChunk *c) {


@ -1906,8 +1906,17 @@ template <class ELFT> void LinkerDriver::link(opt::InputArgList &args) {
// We do not want to emit debug sections if --strip-all
// or -strip-debug are given.
return config->strip != StripPolicy::None &&
(s->name.startswith(".debug") || s->name.startswith(".zdebug"));
if (config->strip == StripPolicy::None)
return false;
if (isDebugSection(*s))
return true;
if (auto *isec = dyn_cast<InputSection>(s))
if (InputSectionBase *rel = isec->getRelocatedSection())
if (isDebugSection(*rel))
return true;
return false;
});
// Now that the number of partitions is fixed, save a pointer to the main


@ -441,8 +441,7 @@ void InputSection::copyRelocations(uint8_t *buf, ArrayRef<RelTy> rels) {
// See the comment in maybeReportUndefined for PPC32 .got2 and PPC64 .toc
auto *d = dyn_cast<Defined>(&sym);
if (!d) {
if (!sec->name.startswith(".debug") &&
!sec->name.startswith(".zdebug") && sec->name != ".eh_frame" &&
if (!isDebugSection(*sec) && sec->name != ".eh_frame" &&
sec->name != ".gcc_except_table" && sec->name != ".got2" &&
sec->name != ".toc") {
uint32_t secIdx = cast<Undefined>(sym).discardedSecIdx;


@ -357,6 +357,10 @@ private:
template <class ELFT> void copyShtGroup(uint8_t *buf);
};
inline bool isDebugSection(const InputSectionBase &sec) {
return sec.name.startswith(".debug") || sec.name.startswith(".zdebug");
}
// The list of all input sections.
extern std::vector<InputSectionBase *> inputSections;


@ -114,8 +114,7 @@ void OutputSection::commitSection(InputSection *isec) {
flags = isec->flags;
} else {
// Otherwise, check if new type or flags are compatible with existing ones.
unsigned mask = SHF_TLS | SHF_LINK_ORDER;
if ((flags & mask) != (isec->flags & mask))
if ((flags ^ isec->flags) & SHF_TLS)
error("incompatible section flags for " + name + "\n>>> " + toString(isec) +
": 0x" + utohexstr(isec->flags) + "\n>>> output section " + name +
": 0x" + utohexstr(flags));
@ -367,8 +366,9 @@ void OutputSection::finalize() {
// all InputSections in the OutputSection have the same dependency.
if (auto *ex = dyn_cast<ARMExidxSyntheticSection>(first))
link = ex->getLinkOrderDep()->getParent()->sectionIndex;
else if (auto *d = first->getLinkOrderDep())
link = d->getParent()->sectionIndex;
else if (first->flags & SHF_LINK_ORDER)
if (auto *d = first->getLinkOrderDep())
link = d->getParent()->sectionIndex;
}
if (type == SHT_GROUP) {


@ -52,6 +52,8 @@ StringRef ScriptLexer::getLine() {
// Returns 1-based line number of the current token.
size_t ScriptLexer::getLineNumber() {
if (pos == 0)
return 1;
StringRef s = getCurrentMB().getBuffer();
StringRef tok = tokens[pos - 1];
return s.substr(0, tok.data() - s.data()).count('\n') + 1;
@ -292,7 +294,9 @@ static bool encloses(StringRef s, StringRef t) {
MemoryBufferRef ScriptLexer::getCurrentMB() {
// Find input buffer containing the current token.
assert(!mbs.empty() && pos > 0);
assert(!mbs.empty());
if (pos == 0)
return mbs.back();
for (MemoryBufferRef mb : mbs)
if (encloses(mb.getBuffer(), tokens[pos - 1]))
return mb;


@ -737,6 +737,7 @@ bool ScriptParser::readSectionDirective(OutputSection *cmd, StringRef tok1, Stri
expect("(");
if (consume("NOLOAD")) {
cmd->noload = true;
cmd->type = SHT_NOBITS;
} else {
skip(); // This is "COPY", "INFO" or "OVERLAY".
cmd->nonAlloc = true;


@ -1524,17 +1524,30 @@ template <class ELFT> void Writer<ELFT>::resolveShfLinkOrder() {
// but sort must consider them all at once.
std::vector<InputSection **> scriptSections;
std::vector<InputSection *> sections;
bool started = false, stopped = false;
for (BaseCommand *base : sec->sectionCommands) {
if (auto *isd = dyn_cast<InputSectionDescription>(base)) {
for (InputSection *&isec : isd->sections) {
scriptSections.push_back(&isec);
sections.push_back(isec);
if (!(isec->flags & SHF_LINK_ORDER)) {
if (started)
stopped = true;
} else if (stopped) {
error(toString(isec) + ": SHF_LINK_ORDER sections in " + sec->name +
" are not contiguous");
} else {
started = true;
InputSection *link = isec->getLinkOrderDep();
if (!link->getParent())
error(toString(isec) + ": sh_link points to discarded section " +
toString(link));
scriptSections.push_back(&isec);
sections.push_back(isec);
InputSection *link = isec->getLinkOrderDep();
if (!link->getParent())
error(toString(isec) + ": sh_link points to discarded section " +
toString(link));
}
}
} else if (started) {
stopped = true;
}
}


@ -29,7 +29,7 @@ class ValueLatticeElement {
/// producing instruction is dead. Caution: We use this as the starting
/// state in our local meet rules. In this usage, it's taken to mean
/// "nothing known yet".
undefined,
unknown,
/// This Value has a specific constant value. (For constant integers,
/// constantrange is used instead. Integer typed constantexprs can appear
@ -45,7 +45,12 @@ class ValueLatticeElement {
constantrange,
/// We can not precisely model the dynamic values this value might take.
overdefined
overdefined,
/// This Value is an UndefValue constant or produces undef. Undefined values
/// can be merged with constants (or single element constant ranges),
/// assuming all uses of the result will be replaced.
undef
};
ValueLatticeElementTy Tag;
@ -60,14 +65,15 @@ class ValueLatticeElement {
public:
// Const and Range are initialized on-demand.
ValueLatticeElement() : Tag(undefined) {}
ValueLatticeElement() : Tag(unknown) {}
/// Custom destructor to ensure Range is properly destroyed, when the object
/// is deallocated.
~ValueLatticeElement() {
switch (Tag) {
case overdefined:
case undefined:
case unknown:
case undef:
case constant:
case notconstant:
break;
@ -79,7 +85,7 @@ public:
/// Custom copy constructor, to ensure Range gets initialized when
/// copying a constant range lattice element.
ValueLatticeElement(const ValueLatticeElement &Other) : Tag(undefined) {
ValueLatticeElement(const ValueLatticeElement &Other) : Tag(unknown) {
*this = Other;
}
@ -109,7 +115,8 @@ public:
ConstVal = Other.ConstVal;
break;
case overdefined:
case undefined:
case unknown:
case undef:
break;
}
Tag = Other.Tag;
@ -118,14 +125,16 @@ public:
static ValueLatticeElement get(Constant *C) {
ValueLatticeElement Res;
if (!isa<UndefValue>(C))
if (isa<UndefValue>(C))
Res.markUndef();
else
Res.markConstant(C);
return Res;
}
static ValueLatticeElement getNot(Constant *C) {
ValueLatticeElement Res;
if (!isa<UndefValue>(C))
Res.markNotConstant(C);
assert(!isa<UndefValue>(C) && "!= undef is not supported");
Res.markNotConstant(C);
return Res;
}
static ValueLatticeElement getRange(ConstantRange CR) {
@ -139,7 +148,10 @@ public:
return Res;
}
bool isUndefined() const { return Tag == undefined; }
bool isUndef() const { return Tag == undef; }
bool isUnknown() const { return Tag == unknown; }
bool isUnknownOrUndef() const { return Tag == unknown || Tag == undef; }
bool isUndefined() const { return isUnknownOrUndef(); }
bool isConstant() const { return Tag == constant; }
bool isNotConstant() const { return Tag == notconstant; }
bool isConstantRange() const { return Tag == constantrange; }
@ -170,89 +182,123 @@ public:
return None;
}
private:
void markOverdefined() {
bool markOverdefined() {
if (isOverdefined())
return;
return false;
if (isConstant() || isNotConstant())
ConstVal = nullptr;
if (isConstantRange())
Range.~ConstantRange();
Tag = overdefined;
return true;
}
void markConstant(Constant *V) {
assert(V && "Marking constant with NULL");
if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
markConstantRange(ConstantRange(CI->getValue()));
return;
}
if (isa<UndefValue>(V))
return;
bool markUndef() {
if (isUndef())
return false;
assert((!isConstant() || getConstant() == V) &&
"Marking constant with different value");
assert(isUndefined());
assert(isUnknown());
Tag = undef;
return true;
}
bool markConstant(Constant *V) {
if (isa<UndefValue>(V))
return markUndef();
if (isConstant()) {
assert(getConstant() == V && "Marking constant with different value");
return false;
}
if (ConstantInt *CI = dyn_cast<ConstantInt>(V))
return markConstantRange(ConstantRange(CI->getValue()));
assert(isUnknown() || isUndef());
Tag = constant;
ConstVal = V;
return true;
}
void markNotConstant(Constant *V) {
bool markNotConstant(Constant *V) {
assert(V && "Marking constant with NULL");
if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
markConstantRange(ConstantRange(CI->getValue() + 1, CI->getValue()));
return;
}
if (isa<UndefValue>(V))
return;
if (ConstantInt *CI = dyn_cast<ConstantInt>(V))
return markConstantRange(
ConstantRange(CI->getValue() + 1, CI->getValue()));
assert((!isConstant() || getConstant() != V) &&
"Marking constant !constant with same value");
assert((!isNotConstant() || getNotConstant() == V) &&
"Marking !constant with different value");
assert(isUndefined() || isConstant());
if (isa<UndefValue>(V))
return false;
if (isNotConstant()) {
assert(getNotConstant() == V && "Marking !constant with different value");
return false;
}
assert(isUnknown());
Tag = notconstant;
ConstVal = V;
return true;
}
void markConstantRange(ConstantRange NewR) {
/// Mark the object as constant range with \p NewR. If the object is already a
/// constant range, nothing changes if the existing range is equal to \p
/// NewR. Otherwise \p NewR must be a superset of the existing range or the
/// object must be undef.
bool markConstantRange(ConstantRange NewR) {
if (isConstantRange()) {
if (getConstantRange() == NewR)
return false;
if (NewR.isEmptySet())
markOverdefined();
else {
Range = std::move(NewR);
}
return;
return markOverdefined();
assert(NewR.contains(getConstantRange()) &&
"Existing range must be a subset of NewR");
Range = std::move(NewR);
return true;
}
assert(isUndefined());
assert(isUnknown() || isUndef());
if (NewR.isEmptySet())
markOverdefined();
else {
Tag = constantrange;
new (&Range) ConstantRange(std::move(NewR));
}
return markOverdefined();
Tag = constantrange;
new (&Range) ConstantRange(std::move(NewR));
return true;
}
public:
/// Updates this object to approximate both this object and RHS. Returns
/// true if this object has been changed.
bool mergeIn(const ValueLatticeElement &RHS, const DataLayout &DL) {
if (RHS.isUndefined() || isOverdefined())
if (RHS.isUnknown() || isOverdefined())
return false;
if (RHS.isOverdefined()) {
markOverdefined();
return true;
}
if (isUndefined()) {
if (isUndef()) {
assert(!RHS.isUnknown());
if (RHS.isUndef())
return false;
if (RHS.isConstant())
return markConstant(RHS.getConstant());
if (RHS.isConstantRange() && RHS.getConstantRange().isSingleElement())
return markConstantRange(RHS.getConstantRange());
return markOverdefined();
}
if (isUnknown()) {
assert(!RHS.isUnknown() && "Unknown RHS should be handled earlier");
*this = RHS;
return !RHS.isUndefined();
return true;
}
if (isConstant()) {
if (RHS.isConstant() && getConstant() == RHS.getConstant())
return false;
if (RHS.isUndef())
return false;
markOverdefined();
return true;
}
@ -265,6 +311,9 @@ public:
}
assert(isConstantRange() && "New ValueLattice type?");
if (RHS.isUndef() && getConstantRange().isSingleElement())
return false;
if (!RHS.isConstantRange()) {
// We can get here if we've encountered a constantexpr of integer type
// and merge it with a constantrange.
@ -273,18 +322,11 @@ public:
}
ConstantRange NewR = getConstantRange().unionWith(RHS.getConstantRange());
if (NewR.isFullSet())
markOverdefined();
return markOverdefined();
else if (NewR == getConstantRange())
return false;
else
markConstantRange(std::move(NewR));
return true;
}
ConstantInt *getConstantInt() const {
assert(isConstant() && isa<ConstantInt>(getConstant()) &&
"No integer constant");
return cast<ConstantInt>(getConstant());
return markConstantRange(std::move(NewR));
}
/// Compares this symbolic value with Other using Pred and returns either
@ -292,7 +334,7 @@ public:
/// evaluated.
Constant *getCompare(CmpInst::Predicate Pred, Type *Ty,
const ValueLatticeElement &Other) const {
if (isUndefined() || Other.isUndefined())
if (isUnknownOrUndef() || Other.isUnknownOrUndef())
return UndefValue::get(Ty);
if (isConstant() && Other.isConstant())
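
The behavioural core of this change is that undef becomes its own lattice state that can still be raised to a constant, while unknown stays the bottom element. A heavily simplified, self-contained sketch of those merge rules (hypothetical types, not the LLVM implementation):

#include <optional>

enum class State { Unknown, Undef, Constant, Overdefined };

struct SimpleLattice {
  State Tag = State::Unknown;
  std::optional<int> Val;  // stands in for the real Constant*/range payload

  // Merge RHS into this element; returns true if this element changed.
  bool mergeIn(const SimpleLattice &RHS) {
    if (RHS.Tag == State::Unknown || Tag == State::Overdefined)
      return false;
    if (RHS.Tag == State::Overdefined) {
      Tag = State::Overdefined;
      Val.reset();
      return true;
    }
    if (Tag == State::Unknown) {  // bottom: adopt RHS wholesale
      *this = RHS;
      return true;
    }
    if (Tag == State::Undef) {    // undef may still become a single constant
      if (RHS.Tag == State::Undef)
        return false;
      if (RHS.Tag == State::Constant) {
        Tag = State::Constant;
        Val = RHS.Val;
        return true;
      }
      Tag = State::Overdefined;
      Val.reset();
      return true;
    }
    // Tag == Constant: a known constant absorbs undef; otherwise the values
    // must agree, or we give up and go to overdefined.
    if (RHS.Tag == State::Undef ||
        (RHS.Tag == State::Constant && Val == RHS.Val))
      return false;
    Tag = State::Overdefined;
    Val.reset();
    return true;
  }
};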


@ -71,6 +71,11 @@ public:
template <typename CreateFunc>
TypeIndex insertRecordAs(GloballyHashedType Hash, size_t RecordSize,
CreateFunc Create) {
assert(RecordSize < UINT32_MAX && "Record too big");
assert(RecordSize % 4 == 0 &&
"RecordSize is not a multiple of 4 bytes which will cause "
"misalignment in the output TPI stream!");
auto Result = HashedRecords.try_emplace(Hash, nextTypeIndex());
if (LLVM_UNLIKELY(Result.second /*inserted*/ ||


@ -488,6 +488,9 @@ let TargetPrefix = "ppc" in { // All PPC intrinsics start with "llvm.ppc.".
def int_ppc_altivec_vmsumuhm : GCCBuiltin<"__builtin_altivec_vmsumuhm">,
Intrinsic<[llvm_v4i32_ty], [llvm_v8i16_ty, llvm_v8i16_ty,
llvm_v4i32_ty], [IntrNoMem]>;
def int_ppc_altivec_vmsumudm : GCCBuiltin<"__builtin_altivec_vmsumudm">,
Intrinsic<[llvm_v1i128_ty], [llvm_v2i64_ty, llvm_v2i64_ty,
llvm_v1i128_ty], [IntrNoMem]>;
def int_ppc_altivec_vmsumuhs : GCCBuiltin<"__builtin_altivec_vmsumuhs">,
Intrinsic<[llvm_v4i32_ty], [llvm_v8i16_ty, llvm_v8i16_ty,
llvm_v4i32_ty], [IntrNoMem]>;


@ -152,6 +152,10 @@ AARCH64_CPU_NAME("kryo", ARMV8A, FK_CRYPTO_NEON_FP_ARMV8, false,
(AArch64::AEK_CRC))
AARCH64_CPU_NAME("thunderx2t99", ARMV8_1A, FK_CRYPTO_NEON_FP_ARMV8, false,
(AArch64::AEK_NONE))
AARCH64_CPU_NAME("thunderx3t110", ARMV8_3A, FK_CRYPTO_NEON_FP_ARMV8, false,
(AArch64::AEK_CRC | AEK_CRYPTO | AEK_FP | AEK_SIMD |
AEK_LSE | AEK_RAND | AArch64::AEK_PROFILE |
AArch64::AEK_RAS))
AARCH64_CPU_NAME("thunderx", ARMV8A, FK_CRYPTO_NEON_FP_ARMV8, false,
(AArch64::AEK_CRC | AArch64::AEK_PROFILE))
AARCH64_CPU_NAME("thunderxt88", ARMV8A, FK_CRYPTO_NEON_FP_ARMV8, false,


@ -40,8 +40,8 @@ template <typename T, size_t N> struct object_deleter<T[N]> {
// constexpr, a dynamic initializer may be emitted depending on optimization
// settings. For the affected versions of MSVC, use the old linker
// initialization pattern of not providing a constructor and leaving the fields
// uninitialized.
#if !defined(_MSC_VER) || defined(__clang__)
// uninitialized. See http://llvm.org/PR41367 for details.
#if !defined(_MSC_VER) || (_MSC_VER >= 1925) || defined(__clang__)
#define LLVM_USE_CONSTEXPR_CTOR
#endif


@ -959,6 +959,10 @@ def extloadi32 : PatFrag<(ops node:$ptr), (extload node:$ptr)> {
let IsLoad = 1;
let MemoryVT = i32;
}
def extloadf16 : PatFrag<(ops node:$ptr), (extload node:$ptr)> {
let IsLoad = 1;
let MemoryVT = f16;
}
def extloadf32 : PatFrag<(ops node:$ptr), (extload node:$ptr)> {
let IsLoad = 1;
let MemoryVT = f32;
@ -1094,6 +1098,11 @@ def truncstorei32 : PatFrag<(ops node:$val, node:$ptr),
let IsStore = 1;
let MemoryVT = i32;
}
def truncstoref16 : PatFrag<(ops node:$val, node:$ptr),
(truncstore node:$val, node:$ptr)> {
let IsStore = 1;
let MemoryVT = f16;
}
def truncstoref32 : PatFrag<(ops node:$val, node:$ptr),
(truncstore node:$val, node:$ptr)> {
let IsStore = 1;


@ -96,9 +96,9 @@ static ValueLatticeElement intersect(const ValueLatticeElement &A,
const ValueLatticeElement &B) {
// Undefined is the strongest state. It means the value is known to be along
// an unreachable path.
if (A.isUndefined())
if (A.isUnknown())
return A;
if (B.isUndefined())
if (B.isUnknown())
return B;
// If we gave up for one, but got a useable fact from the other, use it.
@ -1203,7 +1203,7 @@ static ValueLatticeElement getValueFromICmpCondition(Value *Val, ICmpInst *ICI,
// false SETNE.
if (isTrueDest == (Predicate == ICmpInst::ICMP_EQ))
return ValueLatticeElement::get(cast<Constant>(RHS));
else
else if (!isa<UndefValue>(RHS))
return ValueLatticeElement::getNot(cast<Constant>(RHS));
}
}
@ -1722,7 +1722,7 @@ ConstantRange LazyValueInfo::getConstantRange(Value *V, BasicBlock *BB,
const DataLayout &DL = BB->getModule()->getDataLayout();
ValueLatticeElement Result =
getImpl(PImpl, AC, &DL, DT).getValueInBlock(V, BB, CxtI);
if (Result.isUndefined())
if (Result.isUnknown())
return ConstantRange::getEmpty(Width);
if (Result.isConstantRange())
return Result.getConstantRange();
@ -1761,7 +1761,7 @@ ConstantRange LazyValueInfo::getConstantRangeOnEdge(Value *V,
ValueLatticeElement Result =
getImpl(PImpl, AC, &DL, DT).getValueOnEdge(V, FromBB, ToBB, CxtI);
if (Result.isUndefined())
if (Result.isUnknown())
return ConstantRange::getEmpty(Width);
if (Result.isConstantRange())
return Result.getConstantRange();
@ -1991,7 +1991,7 @@ void LazyValueInfoAnnotatedWriter::emitBasicBlockStartAnnot(
for (auto &Arg : F->args()) {
ValueLatticeElement Result = LVIImpl->getValueInBlock(
const_cast<Argument *>(&Arg), const_cast<BasicBlock *>(BB));
if (Result.isUndefined())
if (Result.isUnknown())
continue;
OS << "; LatticeVal for: '" << Arg << "' is: " << Result << "\n";
}


@ -10,8 +10,10 @@
namespace llvm {
raw_ostream &operator<<(raw_ostream &OS, const ValueLatticeElement &Val) {
if (Val.isUndefined())
return OS << "undefined";
if (Val.isUnknown())
return OS << "unknown";
if (Val.isUndef())
return OS << "undef";
if (Val.isOverdefined())
return OS << "overdefined";


@ -963,10 +963,10 @@ bool BranchFolder::TryTailMergeBlocks(MachineBasicBlock *SuccBB,
continue;
}
// If one of the blocks is the entire common tail (and not the entry
// block, which we can't jump to), we can treat all blocks with this same
// tail at once. Use PredBB if that is one of the possibilities, as that
// will not introduce any extra branches.
// If one of the blocks is the entire common tail (and is not the entry
// block/an EH pad, which we can't jump to), we can treat all blocks with
// this same tail at once. Use PredBB if that is one of the possibilities,
// as that will not introduce any extra branches.
MachineBasicBlock *EntryBB =
&MergePotentials.front().getBlock()->getParent()->front();
unsigned commonTailIndex = SameTails.size();
@ -974,19 +974,21 @@ bool BranchFolder::TryTailMergeBlocks(MachineBasicBlock *SuccBB,
// into the other.
if (SameTails.size() == 2 &&
SameTails[0].getBlock()->isLayoutSuccessor(SameTails[1].getBlock()) &&
SameTails[1].tailIsWholeBlock())
SameTails[1].tailIsWholeBlock() && !SameTails[1].getBlock()->isEHPad())
commonTailIndex = 1;
else if (SameTails.size() == 2 &&
SameTails[1].getBlock()->isLayoutSuccessor(
SameTails[0].getBlock()) &&
SameTails[0].tailIsWholeBlock())
SameTails[0].getBlock()) &&
SameTails[0].tailIsWholeBlock() &&
!SameTails[0].getBlock()->isEHPad())
commonTailIndex = 0;
else {
// Otherwise just pick one, favoring the fall-through predecessor if
// there is one.
for (unsigned i = 0, e = SameTails.size(); i != e; ++i) {
MachineBasicBlock *MBB = SameTails[i].getBlock();
if (MBB == EntryBB && SameTails[i].tailIsWholeBlock())
if ((MBB == EntryBB || MBB->isEHPad()) &&
SameTails[i].tailIsWholeBlock())
continue;
if (MBB == PredBB) {
commonTailIndex = i;


@ -269,30 +269,26 @@ MachineSinking::AllUsesDominatedByBlock(unsigned Reg,
// into and they are all PHI nodes. In this case, machine-sink must break
// the critical edge first. e.g.
//
// %bb.1: derived from LLVM BB %bb4.preheader
// %bb.1:
// Predecessors according to CFG: %bb.0
// ...
// %reg16385 = DEC64_32r %reg16437, implicit-def dead %eflags
// %def = DEC64_32r %x, implicit-def dead %eflags
// ...
// JE_4 <%bb.37>, implicit %eflags
// Successors according to CFG: %bb.37 %bb.2
//
// %bb.2: derived from LLVM BB %bb.nph
// Predecessors according to CFG: %bb.0 %bb.1
// %reg16386 = PHI %reg16434, %bb.0, %reg16385, %bb.1
BreakPHIEdge = true;
for (MachineOperand &MO : MRI->use_nodbg_operands(Reg)) {
MachineInstr *UseInst = MO.getParent();
unsigned OpNo = &MO - &UseInst->getOperand(0);
MachineBasicBlock *UseBlock = UseInst->getParent();
if (!(UseBlock == MBB && UseInst->isPHI() &&
UseInst->getOperand(OpNo+1).getMBB() == DefMBB)) {
BreakPHIEdge = false;
break;
}
}
if (BreakPHIEdge)
// %bb.2:
// %p = PHI %y, %bb.0, %def, %bb.1
if (llvm::all_of(MRI->use_nodbg_operands(Reg), [&](MachineOperand &MO) {
MachineInstr *UseInst = MO.getParent();
unsigned OpNo = UseInst->getOperandNo(&MO);
MachineBasicBlock *UseBlock = UseInst->getParent();
return UseBlock == MBB && UseInst->isPHI() &&
UseInst->getOperand(OpNo + 1).getMBB() == DefMBB;
})) {
BreakPHIEdge = true;
return true;
}
for (MachineOperand &MO : MRI->use_nodbg_operands(Reg)) {
// Determine the block of the use.


@ -8,8 +8,6 @@
//
// Target-independent, SSA-based data flow graph for register data flow (RDF).
//
#include "RDFGraph.h"
#include "RDFRegisters.h"
#include "llvm/ADT/BitVector.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SetVector.h"
@ -20,6 +18,8 @@
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/RDFGraph.h"
#include "llvm/CodeGen/RDFRegisters.h"
#include "llvm/CodeGen/TargetInstrInfo.h"
#include "llvm/CodeGen/TargetLowering.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
@ -753,8 +753,10 @@ RegisterSet DataFlowGraph::getLandingPadLiveIns() const {
const TargetLowering &TLI = *MF.getSubtarget().getTargetLowering();
if (RegisterId R = TLI.getExceptionPointerRegister(PF))
LR.insert(RegisterRef(R));
if (RegisterId R = TLI.getExceptionSelectorRegister(PF))
LR.insert(RegisterRef(R));
if (!isFuncletEHPersonality(classifyEHPersonality(PF))) {
if (RegisterId R = TLI.getExceptionSelectorRegister(PF))
LR.insert(RegisterRef(R));
}
return LR;
}


@ -22,9 +22,6 @@
// and Embedded Architectures and Compilers", 8 (4),
// <10.1145/2086696.2086706>. <hal-00647369>
//
#include "RDFLiveness.h"
#include "RDFGraph.h"
#include "RDFRegisters.h"
#include "llvm/ADT/BitVector.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SetVector.h"
@ -33,6 +30,9 @@
#include "llvm/CodeGen/MachineDominators.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/RDFLiveness.h"
#include "llvm/CodeGen/RDFGraph.h"
#include "llvm/CodeGen/RDFRegisters.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
#include "llvm/MC/LaneBitmask.h"
#include "llvm/MC/MCRegisterInfo.h"


@ -6,11 +6,11 @@
//
//===----------------------------------------------------------------------===//
#include "RDFRegisters.h"
#include "llvm/ADT/BitVector.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"
#include "llvm/CodeGen/RDFRegisters.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
#include "llvm/MC/LaneBitmask.h"
#include "llvm/MC/MCRegisterInfo.h"


@ -886,6 +886,13 @@ static bool isAnyConstantBuildVector(SDValue V, bool NoOpaques = false) {
ISD::isBuildVectorOfConstantFPSDNodes(V.getNode());
}
// Determine if this an indexed load with an opaque target constant index.
static bool canSplitIdx(LoadSDNode *LD) {
return MaySplitLoadIndex &&
(LD->getOperand(2).getOpcode() != ISD::TargetConstant ||
!cast<ConstantSDNode>(LD->getOperand(2))->isOpaque());
}
bool DAGCombiner::reassociationCanBreakAddressingModePattern(unsigned Opc,
const SDLoc &DL,
SDValue N0,
@ -14222,11 +14229,11 @@ SDValue DAGCombiner::ForwardStoreValueToDirectLoad(LoadSDNode *LD) {
auto ReplaceLd = [&](LoadSDNode *LD, SDValue Val, SDValue Chain) -> SDValue {
if (LD->isIndexed()) {
bool IsSub = (LD->getAddressingMode() == ISD::PRE_DEC ||
LD->getAddressingMode() == ISD::POST_DEC);
unsigned Opc = IsSub ? ISD::SUB : ISD::ADD;
SDValue Idx = DAG.getNode(Opc, SDLoc(LD), LD->getOperand(1).getValueType(),
LD->getOperand(1), LD->getOperand(2));
// Cannot handle opaque target constants and we must respect the user's
// request not to split indexes from loads.
if (!canSplitIdx(LD))
return SDValue();
SDValue Idx = SplitIndexingFromLoad(LD);
SDValue Ops[] = {Val, Idx, Chain};
return CombineTo(LD, Ops, 3);
}
@ -14322,14 +14329,12 @@ SDValue DAGCombiner::visitLOAD(SDNode *N) {
// the indexing into an add/sub directly (that TargetConstant may not be
// valid for a different type of node, and we cannot convert an opaque
// target constant into a regular constant).
bool HasOTCInc = LD->getOperand(2).getOpcode() == ISD::TargetConstant &&
cast<ConstantSDNode>(LD->getOperand(2))->isOpaque();
bool CanSplitIdx = canSplitIdx(LD);
if (!N->hasAnyUseOfValue(0) &&
((MaySplitLoadIndex && !HasOTCInc) || !N->hasAnyUseOfValue(1))) {
if (!N->hasAnyUseOfValue(0) && (CanSplitIdx || !N->hasAnyUseOfValue(1))) {
SDValue Undef = DAG.getUNDEF(N->getValueType(0));
SDValue Index;
if (N->hasAnyUseOfValue(1) && MaySplitLoadIndex && !HasOTCInc) {
if (N->hasAnyUseOfValue(1) && CanSplitIdx) {
Index = SplitIndexingFromLoad(LD);
// Try to fold the base pointer arithmetic into subsequent loads and
// stores.


@ -225,6 +225,21 @@ static bool isRegUsedByPhiNodes(unsigned DefReg,
return false;
}
static bool isTerminatingEHLabel(MachineBasicBlock *MBB, MachineInstr &MI) {
// Ignore non-EH labels.
if (!MI.isEHLabel())
return false;
// Any EH label outside a landing pad must be for an invoke. Consider it a
// terminator.
if (!MBB->isEHPad())
return true;
// If this is a landingpad, the first non-phi instruction will be an EH_LABEL.
// Don't consider that label to be a terminator.
return MI.getIterator() != MBB->getFirstNonPHI();
}
/// Build a map of instruction orders. Return the first terminator and its
/// order. Consider EH_LABEL instructions to be terminators as well, since local
/// values for phis after invokes must be materialized before the call.
@ -233,7 +248,7 @@ void FastISel::InstOrderMap::initialize(
unsigned Order = 0;
for (MachineInstr &I : *MBB) {
if (!FirstTerminator &&
(I.isTerminator() || (I.isEHLabel() && &I != &MBB->front()))) {
(I.isTerminator() || isTerminatingEHLabel(MBB, I))) {
FirstTerminator = &I;
FirstTerminatorOrder = Order;
}


@ -271,8 +271,20 @@ SDValue DAGTypeLegalizer::PromoteIntRes_AtomicCmpSwap(AtomicSDNode *N,
return Res.getValue(1);
}
SDValue Op2 = GetPromotedInteger(N->getOperand(2));
// Op2 is used for the comparison and thus must be extended according to the
// target's atomic operations. Op3 is merely stored and so can be left alone.
SDValue Op2 = N->getOperand(2);
SDValue Op3 = GetPromotedInteger(N->getOperand(3));
if (TLI.getTargetMachine().getTargetTriple().isRISCV()) {
// The comparison argument must be sign-extended for RISC-V. This is
// abstracted using a new TargetLowering hook in the main LLVM development
// branch, but handled here directly in order to fix the codegen bug for
// 10.x without breaking the libLLVM.so ABI.
Op2 = SExtPromotedInteger(Op2);
} else {
Op2 = GetPromotedInteger(Op2);
}
SDVTList VTs =
DAG.getVTList(Op2.getValueType(), N->getValueType(1), MVT::Other);
SDValue Res = DAG.getAtomicCmpSwap(
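
The reason the comparison operand needs sign extension on RISC-V: sub-XLEN values live sign-extended in registers, so a zero-extended expected value can never compare equal to a negative word already in memory. A standalone sketch of that mismatch (ordinary C++, no LLVM APIs):

#include <cassert>
#include <cstdint>

int main() {
  int32_t cell = -1;                       // 32-bit value the cmpxchg compares against
  int64_t in_register = (int64_t)cell;     // RISC-V keeps words sign-extended: 0xFFFF...FFFF
  int64_t zext_expected = (uint32_t)cell;  // zero-extended expected value: 0x00000000FFFFFFFF
  int64_t sext_expected = (int64_t)cell;   // sign-extended expected value matches
  assert(in_register != zext_expected);    // the cmpxchg would spuriously fail
  assert(in_register == sext_expected);
  return 0;
}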


@ -5490,9 +5490,20 @@ char TargetLowering::isNegatibleForFree(SDValue Op, SelectionDAG &DAG,
EVT VT = Op.getValueType();
const SDNodeFlags Flags = Op->getFlags();
const TargetOptions &Options = DAG.getTarget().Options;
if (!Op.hasOneUse() && !(Op.getOpcode() == ISD::FP_EXTEND &&
isFPExtFree(VT, Op.getOperand(0).getValueType())))
return 0;
if (!Op.hasOneUse()) {
bool IsFreeExtend = Op.getOpcode() == ISD::FP_EXTEND &&
isFPExtFree(VT, Op.getOperand(0).getValueType());
// If we already have the use of the negated floating constant, it is free
// to negate it even if it has multiple uses.
bool IsFreeConstant =
Op.getOpcode() == ISD::ConstantFP &&
!getNegatedExpression(Op, DAG, LegalOperations, ForCodeSize)
.use_empty();
if (!IsFreeExtend && !IsFreeConstant)
return 0;
}
// Don't recurse exponentially.
if (Depth > SelectionDAG::MaxRecursionDepth)
@ -5687,14 +5698,7 @@ SDValue TargetLowering::getNegatedExpression(SDValue Op, SelectionDAG &DAG,
ForCodeSize, Depth + 1);
char V1 = isNegatibleForFree(Op.getOperand(1), DAG, LegalOperations,
ForCodeSize, Depth + 1);
// TODO: This is a hack. It is possible that costs have changed between now
// and the initial calls to isNegatibleForFree(). That is because we
// are rewriting the expression, and that may change the number of
// uses (and therefore the cost) of values. If the negation costs are
// equal, only negate this value if it is a constant. Otherwise, try
// operand 1. A better fix would eliminate uses as a cost factor or
// track the change in uses as we rewrite the expression.
if (V0 > V1 || (V0 == V1 && isa<ConstantFPSDNode>(Op.getOperand(0)))) {
if (V0 > V1) {
// fold (fneg (fma X, Y, Z)) -> (fma (fneg X), Y, (fneg Z))
SDValue Neg0 = getNegatedExpression(
Op.getOperand(0), DAG, LegalOperations, ForCodeSize, Depth + 1);


@ -90,7 +90,9 @@ static inline ArrayRef<uint8_t> stabilize(BumpPtrAllocator &Alloc,
TypeIndex MergingTypeTableBuilder::insertRecordAs(hash_code Hash,
ArrayRef<uint8_t> &Record) {
assert(Record.size() < UINT32_MAX && "Record too big");
assert(Record.size() % 4 == 0 && "Record is not aligned to 4 bytes!");
assert(Record.size() % 4 == 0 &&
"The type record size is not a multiple of 4 bytes which will cause "
"misalignment in the output TPI stream!");
LocallyHashedType WeakHash{Hash, Record};
auto Result = HashedRecords.try_emplace(WeakHash, nextTypeIndex());


@ -360,16 +360,18 @@ Error TypeStreamMerger::remapType(const CVType &Type) {
[this, Type](MutableArrayRef<uint8_t> Storage) -> ArrayRef<uint8_t> {
return remapIndices(Type, Storage);
};
unsigned AlignedSize = alignTo(Type.RecordData.size(), 4);
if (LLVM_LIKELY(UseGlobalHashes)) {
GlobalTypeTableBuilder &Dest =
isIdRecord(Type.kind()) ? *DestGlobalIdStream : *DestGlobalTypeStream;
GloballyHashedType H = GlobalHashes[CurIndex.toArrayIndex()];
DestIdx = Dest.insertRecordAs(H, Type.RecordData.size(), DoSerialize);
DestIdx = Dest.insertRecordAs(H, AlignedSize, DoSerialize);
} else {
MergingTypeTableBuilder &Dest =
isIdRecord(Type.kind()) ? *DestIdStream : *DestTypeStream;
RemapStorage.resize(Type.RecordData.size());
RemapStorage.resize(AlignedSize);
ArrayRef<uint8_t> Result = DoSerialize(RemapStorage);
if (!Result.empty())
DestIdx = Dest.insertRecordBytes(Result);
@ -386,9 +388,15 @@ Error TypeStreamMerger::remapType(const CVType &Type) {
ArrayRef<uint8_t>
TypeStreamMerger::remapIndices(const CVType &OriginalType,
MutableArrayRef<uint8_t> Storage) {
unsigned Align = OriginalType.RecordData.size() & 3;
unsigned AlignedSize = alignTo(OriginalType.RecordData.size(), 4);
assert(Storage.size() == AlignedSize &&
"The storage buffer size is not a multiple of 4 bytes which will "
"cause misalignment in the output TPI stream!");
SmallVector<TiReference, 4> Refs;
discoverTypeIndices(OriginalType.RecordData, Refs);
if (Refs.empty())
if (Refs.empty() && Align == 0)
return OriginalType.RecordData;
::memcpy(Storage.data(), OriginalType.RecordData.data(),
@ -408,6 +416,16 @@ TypeStreamMerger::remapIndices(const CVType &OriginalType,
return {};
}
}
if (Align > 0) {
RecordPrefix *StorageHeader =
reinterpret_cast<RecordPrefix *>(Storage.data());
StorageHeader->RecordLen += 4 - Align;
DestContent = Storage.data() + OriginalType.RecordData.size();
for (; Align < 4; ++Align)
*DestContent++ = LF_PAD4 - Align;
}
return Storage;
}


@ -44,6 +44,9 @@ void TpiStreamBuilder::setVersionHeader(PdbRaw_TpiVer Version) {
void TpiStreamBuilder::addTypeRecord(ArrayRef<uint8_t> Record,
Optional<uint32_t> Hash) {
// If we just crossed an 8KB threshold, add a type index offset.
assert(((Record.size() & 3) == 0) &&
"The type record's size is not a multiple of 4 bytes which will "
"cause misalignment in the output TPI stream!");
size_t NewSize = TypeRecordBytes + Record.size();
constexpr size_t EightKB = 8 * 1024;
if (NewSize / EightKB > TypeRecordBytes / EightKB || TypeRecords.empty()) {
@ -153,8 +156,11 @@ Error TpiStreamBuilder::commit(const msf::MSFLayout &Layout,
return EC;
for (auto Rec : TypeRecords) {
assert(!Rec.empty()); // An empty record will not write anything, but it
// would shift all offsets from here on.
assert(!Rec.empty() && "Attempting to write an empty type record shifts "
"all offsets in the TPI stream!");
assert(((Rec.size() & 3) == 0) &&
"The type record's size is not a multiple of 4 bytes which will "
"cause misalignment in the output TPI stream!");
if (auto EC = Writer.writeBytes(Rec))
return EC;
}
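
All three asserts enforce the same invariant: every record written to the TPI stream is padded to a 4-byte boundary. The arithmetic is simply rounding up to the next multiple of four, e.g.:

#include <cstdint>

// Round Size up to the next multiple of 4 (what llvm::alignTo(Size, 4) does).
constexpr uint32_t alignTo4(uint32_t Size) { return (Size + 3) & ~uint32_t(3); }

static_assert(alignTo4(9) == 12, "a 9-byte record is padded to 12 bytes");
static_assert(alignTo4(12) == 12, "already-aligned records are unchanged");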


@ -147,8 +147,17 @@ void llvm::computeLTOCacheKey(
// Include the hash for the current module
auto ModHash = Index.getModuleHash(ModuleID);
Hasher.update(ArrayRef<uint8_t>((uint8_t *)&ModHash[0], sizeof(ModHash)));
std::vector<uint64_t> ExportsGUID;
ExportsGUID.reserve(ExportList.size());
for (const auto &VI : ExportList) {
auto GUID = VI.getGUID();
ExportsGUID.push_back(GUID);
}
// Sort the export list elements GUIDs.
llvm::sort(ExportsGUID);
for (uint64_t GUID : ExportsGUID) {
// The export list can impact the internalization, be conservative here
Hasher.update(ArrayRef<uint8_t>((uint8_t *)&GUID, sizeof(GUID)));
}
@ -156,12 +165,23 @@ void llvm::computeLTOCacheKey(
// Include the hash for every module we import functions from. The set of
// imported symbols for each module may affect code generation and is
// sensitive to link order, so include that as well.
for (auto &Entry : ImportList) {
auto ModHash = Index.getModuleHash(Entry.first());
using ImportMapIteratorTy = FunctionImporter::ImportMapTy::const_iterator;
std::vector<ImportMapIteratorTy> ImportModulesVector;
ImportModulesVector.reserve(ImportList.size());
for (ImportMapIteratorTy It = ImportList.begin(); It != ImportList.end();
++It) {
ImportModulesVector.push_back(It);
}
llvm::sort(ImportModulesVector,
[](const ImportMapIteratorTy &Lhs, const ImportMapIteratorTy &Rhs)
-> bool { return Lhs->getKey() < Rhs->getKey(); });
for (const ImportMapIteratorTy &EntryIt : ImportModulesVector) {
auto ModHash = Index.getModuleHash(EntryIt->first());
Hasher.update(ArrayRef<uint8_t>((uint8_t *)&ModHash[0], sizeof(ModHash)));
AddUint64(Entry.second.size());
for (auto &Fn : Entry.second)
AddUint64(EntryIt->second.size());
for (auto &Fn : EntryIt->second)
AddUint64(Fn);
}
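
Sorting before hashing is what makes the cache key independent of the order in which exports and imports were discovered. A minimal illustration of the idea (FNV-1a here is only a stand-in for the real hasher in computeLTOCacheKey):

#include <algorithm>
#include <cstdint>
#include <vector>

// Fold a set of GUIDs into a hash in a deterministic order, so two links that
// enumerate the same GUIDs differently still produce the same key.
static uint64_t hashGUIDs(std::vector<uint64_t> GUIDs) {
  std::sort(GUIDs.begin(), GUIDs.end());
  uint64_t H = 0xcbf29ce484222325ULL;  // FNV-1a offset basis
  for (uint64_t G : GUIDs)
    for (int I = 0; I < 8; ++I)
      H = (H ^ ((G >> (8 * I)) & 0xff)) * 0x100000001b3ULL;
  return H;
}
// hashGUIDs({1, 2, 3}) == hashGUIDs({3, 1, 2}), which is the property the
// sorted iteration above buys for the ThinLTO cache key.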


@ -761,7 +761,6 @@ void MCObjectFileInfo::initWasmMCObjectFileInfo(const Triple &T) {
Ctx->getWasmSection(".debug_ranges", SectionKind::getMetadata());
DwarfMacinfoSection =
Ctx->getWasmSection(".debug_macinfo", SectionKind::getMetadata());
DwarfAddrSection = Ctx->getWasmSection(".debug_addr", SectionKind::getMetadata());
DwarfCUIndexSection = Ctx->getWasmSection(".debug_cu_index", SectionKind::getMetadata());
DwarfTUIndexSection = Ctx->getWasmSection(".debug_tu_index", SectionKind::getMetadata());
DwarfInfoSection =
@ -770,6 +769,17 @@ void MCObjectFileInfo::initWasmMCObjectFileInfo(const Triple &T) {
DwarfPubNamesSection = Ctx->getWasmSection(".debug_pubnames", SectionKind::getMetadata());
DwarfPubTypesSection = Ctx->getWasmSection(".debug_pubtypes", SectionKind::getMetadata());
DwarfDebugNamesSection =
Ctx->getWasmSection(".debug_names", SectionKind::getMetadata());
DwarfStrOffSection =
Ctx->getWasmSection(".debug_str_offsets", SectionKind::getMetadata());
DwarfAddrSection =
Ctx->getWasmSection(".debug_addr", SectionKind::getMetadata());
DwarfRnglistsSection =
Ctx->getWasmSection(".debug_rnglists", SectionKind::getMetadata());
DwarfLoclistsSection =
Ctx->getWasmSection(".debug_loclists", SectionKind::getMetadata());
// Wasm use data section for LSDA.
// TODO Consider putting each function's exception table in a separate
// section, as in -function-sections, to facilitate lld's --gc-section.


@@ -443,6 +443,10 @@ def SVEUnsupported : AArch64Unsupported {
                  HasSVE2BitPerm];
 }
+def PAUnsupported : AArch64Unsupported {
+  let F = [HasPA];
+}
 include "AArch64SchedA53.td"
 include "AArch64SchedA57.td"
 include "AArch64SchedCyclone.td"
@@ -453,6 +457,7 @@ include "AArch64SchedExynosM4.td"
 include "AArch64SchedExynosM5.td"
 include "AArch64SchedThunderX.td"
 include "AArch64SchedThunderX2T99.td"
+include "AArch64SchedThunderX3T110.td"
 def ProcA35 : SubtargetFeature<"a35", "ARMProcFamily", "CortexA35",
                                "Cortex-A35 ARM processors", [
@@ -780,6 +785,25 @@ def ProcThunderX2T99 : SubtargetFeature<"thunderx2t99", "ARMProcFamily",
                                          FeatureLSE,
                                          HasV8_1aOps]>;
+def ProcThunderX3T110 : SubtargetFeature<"thunderx3t110", "ARMProcFamily",
+                                         "ThunderX3T110",
+                                         "Marvell ThunderX3 processors", [
+                                          FeatureAggressiveFMA,
+                                          FeatureCRC,
+                                          FeatureCrypto,
+                                          FeatureFPARMv8,
+                                          FeatureArithmeticBccFusion,
+                                          FeatureNEON,
+                                          FeaturePostRAScheduler,
+                                          FeaturePredictableSelectIsExpensive,
+                                          FeatureLSE,
+                                          FeaturePA,
+                                          FeatureUseAA,
+                                          FeatureBalanceFPOps,
+                                          FeaturePerfMon,
+                                          FeatureStrictAlign,
+                                          HasV8_3aOps]>;
 def ProcThunderX : SubtargetFeature<"thunderx", "ARMProcFamily", "ThunderX",
                                     "Cavium ThunderX processors", [
                                      FeatureCRC,
@@ -878,6 +902,8 @@ def : ProcessorModel<"thunderxt81", ThunderXT8XModel, [ProcThunderXT81]>;
 def : ProcessorModel<"thunderxt83", ThunderXT8XModel, [ProcThunderXT83]>;
 // Cavium ThunderX2T9X Processors. Formerly Broadcom Vulcan.
 def : ProcessorModel<"thunderx2t99", ThunderX2T99Model, [ProcThunderX2T99]>;
+// Marvell ThunderX3T110 Processors.
+def : ProcessorModel<"thunderx3t110", ThunderX3T110Model, [ProcThunderX3T110]>;
 // FIXME: HiSilicon TSV110 is currently modeled as a Cortex-A57.
 def : ProcessorModel<"tsv110", CortexA57Model, [ProcTSV110]>;


@@ -118,9 +118,15 @@ void AArch64BranchTargets::addBTI(MachineBasicBlock &MBB, bool CouldCall,
   auto MBBI = MBB.begin();
-  // PACI[AB]SP are implicitly BTI JC, so no BTI instruction needed there.
-  if (MBBI != MBB.end() && (MBBI->getOpcode() == AArch64::PACIASP ||
-                            MBBI->getOpcode() == AArch64::PACIBSP))
+  // Skip the meta instuctions, those will be removed anyway.
+  for (; MBBI != MBB.end() && MBBI->isMetaInstruction(); ++MBBI)
+    ;
+  // SCTLR_EL1.BT[01] is set to 0 by default which means
+  // PACI[AB]SP are implicitly BTI C so no BTI C instruction is needed there.
+  if (MBBI != MBB.end() && HintNum == 34 &&
+      (MBBI->getOpcode() == AArch64::PACIASP ||
+       MBBI->getOpcode() == AArch64::PACIBSP))
     return;
   BuildMI(MBB, MBB.begin(), MBB.findDebugLoc(MBB.begin()),
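Note: the rewritten check above first skips meta instructions (debug values, CFI directives and the like) and then relies on the fact that, with SCTLR_EL1.BT[01] left at 0, a leading PACIASP/PACIBSP already behaves as an implicit "BTI c"; the hint is therefore only elided when the hint being inserted is a plain BTI c (hint #34). A small hedged sketch of that decision with a toy instruction type instead of MachineInstr (only the hint number 34 comes from the patch, everything else is illustrative):

    #include <cstddef>
    #include <vector>

    // Toy model of the instructions relevant to the decision.
    enum class Op { Meta, PaciASP, PaciBSP, Other };

    // In the AArch64 hint space, #34 encodes "BTI c" (#38 would be "BTI jc").
    constexpr unsigned BtiC = 34;

    // Return true if an explicit BTI hint still needs to be inserted at the
    // start of the block, false if a leading PACI[AB]SP already covers it.
    bool needsExplicitBTI(const std::vector<Op> &Block, unsigned HintNum) {
      // Skip meta instructions; they emit no code and may be removed later.
      std::size_t I = 0;
      while (I < Block.size() && Block[I] == Op::Meta)
        ++I;
      if (I < Block.size() && HintNum == BtiC &&
          (Block[I] == Op::PaciASP || Block[I] == Op::PaciBSP))
        return false; // PACI[AB]SP acts as an implicit BTI c here.
      return true;
    }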


@@ -211,6 +211,24 @@ AArch64FrameLowering::getStackIDForScalableVectors() const {
   return TargetStackID::SVEVector;
 }
+/// Returns the size of the fixed object area (allocated next to sp on entry)
+/// On Win64 this may include a var args area and an UnwindHelp object for EH.
+static unsigned getFixedObjectSize(const MachineFunction &MF,
+                                   const AArch64FunctionInfo *AFI, bool IsWin64,
+                                   bool IsFunclet) {
+  if (!IsWin64 || IsFunclet) {
+    // Only Win64 uses fixed objects, and then only for the function (not
+    // funclets)
+    return 0;
+  } else {
+    // Var args are stored here in the primary function.
+    const unsigned VarArgsArea = AFI->getVarArgsGPRSize();
+    // To support EH funclets we allocate an UnwindHelp object
+    const unsigned UnwindHelpObject = (MF.hasEHFunclets() ? 8 : 0);
+    return alignTo(VarArgsArea + UnwindHelpObject, 16);
+  }
+}
 /// Returns the size of the entire SVE stackframe (calleesaves + spills).
 static StackOffset getSVEStackSize(const MachineFunction &MF) {
   const AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
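Note: the helper added above just rounds the var-args save area plus an optional 8-byte UnwindHelp slot up to 16 bytes, and only for the primary Win64 function. A small hedged sketch of that arithmetic with made-up example values (the helper names below are mine, not LLVM's):

    #include <cassert>
    #include <cstdint>

    // Round Value up to the next multiple of Align (a power of two).
    static uint64_t alignUp(uint64_t Value, uint64_t Align) {
      return (Value + Align - 1) & ~(Align - 1);
    }

    // Mirrors the decision made by the patch: only the primary Win64 function
    // carries the fixed area; funclets and non-Win64 targets get 0.
    static unsigned fixedObjectSize(bool IsWin64, bool IsFunclet,
                                    unsigned VarArgsGPRSize, bool HasEHFunclets) {
      if (!IsWin64 || IsFunclet)
        return 0;
      unsigned UnwindHelp = HasEHFunclets ? 8 : 0;
      return alignUp(VarArgsGPRSize + UnwindHelp, 16);
    }

    int main() {
      // e.g. four saved GPRs (32 bytes) of var args plus UnwindHelp -> 48 bytes
      assert(fixedObjectSize(true, false, 32, true) == 48);
      assert(fixedObjectSize(true, true, 32, true) == 0);   // funclet
      assert(fixedObjectSize(false, false, 32, true) == 0); // not Win64
      return 0;
    }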
@@ -959,10 +977,7 @@ void AArch64FrameLowering::emitPrologue(MachineFunction &MF,
   bool IsWin64 =
       Subtarget.isCallingConvWin64(MF.getFunction().getCallingConv());
-  // Var args are accounted for in the containing function, so don't
-  // include them for funclets.
-  unsigned FixedObject = (IsWin64 && !IsFunclet) ?
-                         alignTo(AFI->getVarArgsGPRSize(), 16) : 0;
+  unsigned FixedObject = getFixedObjectSize(MF, AFI, IsWin64, IsFunclet);
   auto PrologueSaveSize = AFI->getCalleeSavedStackSize() + FixedObject;
   // All of the remaining stack allocations are for locals.
@@ -993,32 +1008,8 @@ void AArch64FrameLowering::emitPrologue(MachineFunction &MF,
     ++MBBI;
   }
-  // The code below is not applicable to funclets. We have emitted all the SEH
-  // opcodes that we needed to emit. The FP and BP belong to the containing
-  // function.
-  if (IsFunclet) {
-    if (NeedsWinCFI) {
-      HasWinCFI = true;
-      BuildMI(MBB, MBBI, DL, TII->get(AArch64::SEH_PrologEnd))
-          .setMIFlag(MachineInstr::FrameSetup);
-    }
-    // SEH funclets are passed the frame pointer in X1. If the parent
-    // function uses the base register, then the base register is used
-    // directly, and is not retrieved from X1.
-    if (F.hasPersonalityFn()) {
-      EHPersonality Per = classifyEHPersonality(F.getPersonalityFn());
-      if (isAsynchronousEHPersonality(Per)) {
-        BuildMI(MBB, MBBI, DL, TII->get(TargetOpcode::COPY), AArch64::FP)
-            .addReg(AArch64::X1).setMIFlag(MachineInstr::FrameSetup);
-        MBB.addLiveIn(AArch64::X1);
-      }
-    }
-    return;
-  }
-  if (HasFP) {
+  // For funclets the FP belongs to the containing function.
+  if (!IsFunclet && HasFP) {
     // Only set up FP if we actually need to.
     int64_t FPOffset = isTargetDarwin(MF) ? (AFI->getCalleeSavedStackSize() - 16) : 0;
@@ -1161,7 +1152,9 @@ void AArch64FrameLowering::emitPrologue(MachineFunction &MF,
   // Allocate space for the rest of the frame.
   if (NumBytes) {
-    const bool NeedsRealignment = RegInfo->needsStackRealignment(MF);
+    // Alignment is required for the parent frame, not the funclet
+    const bool NeedsRealignment =
+        !IsFunclet && RegInfo->needsStackRealignment(MF);
     unsigned scratchSPReg = AArch64::SP;
     if (NeedsRealignment) {
@@ -1215,7 +1208,8 @@ void AArch64FrameLowering::emitPrologue(MachineFunction &MF,
   // FIXME: Clarify FrameSetup flags here.
   // Note: Use emitFrameOffset() like above for FP if the FrameSetup flag is
   // needed.
-  if (RegInfo->hasBasePointer(MF)) {
+  // For funclets the BP belongs to the containing function.
+  if (!IsFunclet && RegInfo->hasBasePointer(MF)) {
     TII->copyPhysReg(MBB, MBBI, DL, RegInfo->getBaseRegister(), AArch64::SP,
                      false);
     if (NeedsWinCFI) {
@@ -1232,6 +1226,19 @@ void AArch64FrameLowering::emitPrologue(MachineFunction &MF,
           .setMIFlag(MachineInstr::FrameSetup);
   }
+  // SEH funclets are passed the frame pointer in X1. If the parent
+  // function uses the base register, then the base register is used
+  // directly, and is not retrieved from X1.
+  if (IsFunclet && F.hasPersonalityFn()) {
+    EHPersonality Per = classifyEHPersonality(F.getPersonalityFn());
+    if (isAsynchronousEHPersonality(Per)) {
+      BuildMI(MBB, MBBI, DL, TII->get(TargetOpcode::COPY), AArch64::FP)
+          .addReg(AArch64::X1)
+          .setMIFlag(MachineInstr::FrameSetup);
+      MBB.addLiveIn(AArch64::X1);
+    }
+  }
   if (needsFrameMoves) {
     const DataLayout &TD = MF.getDataLayout();
     const int StackGrowth = isTargetDarwin(MF)
@@ -1450,10 +1457,7 @@ void AArch64FrameLowering::emitEpilogue(MachineFunction &MF,
   bool IsWin64 =
       Subtarget.isCallingConvWin64(MF.getFunction().getCallingConv());
-  // Var args are accounted for in the containing function, so don't
-  // include them for funclets.
-  unsigned FixedObject =
-      (IsWin64 && !IsFunclet) ? alignTo(AFI->getVarArgsGPRSize(), 16) : 0;
+  unsigned FixedObject = getFixedObjectSize(MF, AFI, IsWin64, IsFunclet);
   uint64_t AfterCSRPopSize = ArgumentPopSize;
   auto PrologueSaveSize = AFI->getCalleeSavedStackSize() + FixedObject;
@@ -1679,7 +1683,9 @@ static StackOffset getFPOffset(const MachineFunction &MF, int64_t ObjectOffset)
   const auto &Subtarget = MF.getSubtarget<AArch64Subtarget>();
   bool IsWin64 =
       Subtarget.isCallingConvWin64(MF.getFunction().getCallingConv());
-  unsigned FixedObject = IsWin64 ? alignTo(AFI->getVarArgsGPRSize(), 16) : 0;
+  unsigned FixedObject =
+      getFixedObjectSize(MF, AFI, IsWin64, /*IsFunclet=*/false);
   unsigned FPAdjust = isTargetDarwin(MF)
                           ? 16 : AFI->getCalleeSavedStackSize(MF.getFrameInfo());
   return {ObjectOffset + FixedObject + FPAdjust, MVT::i8};
@@ -2632,9 +2638,14 @@ void AArch64FrameLowering::processFunctionBeforeFrameFinalized(
     ++MBBI;
   // Create an UnwindHelp object.
-  int UnwindHelpFI =
-      MFI.CreateStackObject(/*size*/8, /*alignment*/16, false);
+  // The UnwindHelp object is allocated at the start of the fixed object area
+  int64_t FixedObject =
+      getFixedObjectSize(MF, AFI, /*IsWin64*/ true, /*IsFunclet*/ false);
+  int UnwindHelpFI = MFI.CreateFixedObject(/*Size*/ 8,
+                                           /*SPOffset*/ -FixedObject,
+                                           /*IsImmutable=*/false);
   EHInfo.UnwindHelpFrameIdx = UnwindHelpFI;
   // We need to store -2 into the UnwindHelp object at the start of the
   // function.
   DebugLoc DL;
@@ -2656,10 +2667,14 @@ int AArch64FrameLowering::getFrameIndexReferencePreferSP(
     const MachineFunction &MF, int FI, unsigned &FrameReg,
     bool IgnoreSPUpdates) const {
   const MachineFrameInfo &MFI = MF.getFrameInfo();
-  LLVM_DEBUG(dbgs() << "Offset from the SP for " << FI << " is "
-                    << MFI.getObjectOffset(FI) << "\n");
-  FrameReg = AArch64::SP;
-  return MFI.getObjectOffset(FI);
+  if (IgnoreSPUpdates) {
+    LLVM_DEBUG(dbgs() << "Offset from the SP for " << FI << " is "
+                      << MFI.getObjectOffset(FI) << "\n");
+    FrameReg = AArch64::SP;
+    return MFI.getObjectOffset(FI);
+  }
+  return getFrameIndexReference(MF, FI, FrameReg);
 }
 /// The parent frame offset (aka dispFrame) is only used on X86_64 to retrieve


@@ -26,7 +26,8 @@ def CortexA53Model : SchedMachineModel {
   // v 1.0 Spreadsheet
   let CompleteModel = 1;
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
 }


@@ -31,7 +31,8 @@ def CortexA57Model : SchedMachineModel {
   let LoopMicroOpBufferSize = 16;
   let CompleteModel = 1;
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
 }
 //===----------------------------------------------------------------------===//


@@ -18,7 +18,8 @@ def CycloneModel : SchedMachineModel {
   let MispredictPenalty = 16; // 14-19 cycles are typical.
   let CompleteModel = 1;
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
 }
 //===----------------------------------------------------------------------===//


@@ -24,7 +24,8 @@ def ExynosM3Model : SchedMachineModel {
   let MispredictPenalty = 16; // Minimum branch misprediction penalty.
   let CompleteModel = 1; // Use the default model otherwise.
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
 }
 //===----------------------------------------------------------------------===//


@@ -24,7 +24,8 @@ def ExynosM4Model : SchedMachineModel {
   let MispredictPenalty = 16; // Minimum branch misprediction penalty.
   let CompleteModel = 1; // Use the default model otherwise.
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
 }
 //===----------------------------------------------------------------------===//


@@ -24,7 +24,8 @@ def ExynosM5Model : SchedMachineModel {
   let MispredictPenalty = 15; // Minimum branch misprediction penalty.
   let CompleteModel = 1; // Use the default model otherwise.
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
 }
 //===----------------------------------------------------------------------===//


@@ -23,8 +23,8 @@ def FalkorModel : SchedMachineModel {
   let MispredictPenalty = 11; // Minimum branch misprediction penalty.
   let CompleteModel = 1;
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
   // FIXME: Remove when all errors have been fixed.
   let FullInstRWOverlapCheck = 0;
 }


@@ -27,8 +27,8 @@ def KryoModel : SchedMachineModel {
   let LoopMicroOpBufferSize = 16;
   let CompleteModel = 1;
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
   // FIXME: Remove when all errors have been fixed.
   let FullInstRWOverlapCheck = 0;
 }


@@ -25,8 +25,8 @@ def ThunderXT8XModel : SchedMachineModel {
   let PostRAScheduler = 1; // Use PostRA scheduler.
   let CompleteModel = 1;
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
   // FIXME: Remove when all errors have been fixed.
   let FullInstRWOverlapCheck = 0;
 }


@@ -25,8 +25,8 @@ def ThunderX2T99Model : SchedMachineModel {
   let PostRAScheduler = 1; // Using PostRA sched.
   let CompleteModel = 1;
-  list<Predicate> UnsupportedFeatures = SVEUnsupported.F;
+  list<Predicate> UnsupportedFeatures = !listconcat(SVEUnsupported.F,
+                                                    PAUnsupported.F);
   // FIXME: Remove when all errors have been fixed.
   let FullInstRWOverlapCheck = 0;
 }



@@ -160,6 +160,17 @@ void AArch64Subtarget::initializeProperties() {
     PrefFunctionLogAlignment = 4;
     PrefLoopLogAlignment = 2;
     break;
+  case ThunderX3T110:
+    CacheLineSize = 64;
+    PrefFunctionLogAlignment = 4;
+    PrefLoopLogAlignment = 2;
+    MaxInterleaveFactor = 4;
+    PrefetchDistance = 128;
+    MinPrefetchStride = 1024;
+    MaxPrefetchIterationsAhead = 4;
+    // FIXME: remove this to enable 64-bit SLP if performance looks good.
+    MinVectorRegisterBitWidth = 128;
+    break;
   }
 }


@@ -63,7 +63,8 @@ public:
     ThunderXT81,
     ThunderXT83,
     ThunderXT88,
-    TSV110
+    TSV110,
+    ThunderX3T110
   };
 protected:


@@ -304,7 +304,7 @@ void BPFDAGToDAGISel::PreprocessLoad(SDNode *Node,
   LLVM_DEBUG(dbgs() << "Replacing load of size " << size << " with constant "
                     << val << '\n');
-  SDValue NVal = CurDAG->getConstant(val, DL, MVT::i64);
+  SDValue NVal = CurDAG->getConstant(val, DL, LD->getValueType(0));
   // After replacement, the current node is dead, we need to
   // go backward one step to make iterator still work


@@ -116,11 +116,22 @@ void BPFMISimplifyPatchable::checkADDrr(MachineRegisterInfo *MRI,
     else
       continue;
-    // It must be a form of %1 = *(type *)(%2 + 0) or *(type *)(%2 + 0) = %1.
+    // It must be a form of %2 = *(type *)(%1 + 0) or *(type *)(%1 + 0) = %2.
     const MachineOperand &ImmOp = DefInst->getOperand(2);
     if (!ImmOp.isImm() || ImmOp.getImm() != 0)
       continue;
+    // Reject the form:
+    //   %1 = ADD_rr %2, %3
+    //   *(type *)(%2 + 0) = %1
+    if (Opcode == BPF::STB || Opcode == BPF::STH || Opcode == BPF::STW ||
+        Opcode == BPF::STD || Opcode == BPF::STB32 || Opcode == BPF::STH32 ||
+        Opcode == BPF::STW32) {
+      const MachineOperand &Opnd = DefInst->getOperand(0);
+      if (Opnd.isReg() && Opnd.getReg() == I->getReg())
+        continue;
+    }
     BuildMI(*DefInst->getParent(), *DefInst, DefInst->getDebugLoc(), TII->get(COREOp))
         .add(DefInst->getOperand(0)).addImm(Opcode).add(*BaseOp)
         .addGlobalAddress(GVal);


@@ -600,6 +600,38 @@ void BTFDebug::visitTypeEntry(const DIType *Ty, uint32_t &TypeId,
                               bool CheckPointer, bool SeenPointer) {
   if (!Ty || DIToIdMap.find(Ty) != DIToIdMap.end()) {
     TypeId = DIToIdMap[Ty];
+    // To handle the case like the following:
+    //    struct t;
+    //    typedef struct t _t;
+    //    struct s1 { _t *c; };
+    //    int test1(struct s1 *arg) { ... }
+    //
+    //    struct t { int a; int b; };
+    //    struct s2 { _t c; }
+    //    int test2(struct s2 *arg) { ... }
+    //
+    // During traversing test1() argument, "_t" is recorded
+    // in DIToIdMap and a forward declaration fixup is created
+    // for "struct t" to avoid pointee type traversal.
+    //
+    // During traversing test2() argument, even if we see "_t" is
+    // already defined, we should keep moving to eventually
+    // bring in types for "struct t". Otherwise, the "struct s2"
+    // definition won't be correct.
+    if (Ty && (!CheckPointer || !SeenPointer)) {
+      if (const auto *DTy = dyn_cast<DIDerivedType>(Ty)) {
+        unsigned Tag = DTy->getTag();
+        if (Tag == dwarf::DW_TAG_typedef || Tag == dwarf::DW_TAG_const_type ||
+            Tag == dwarf::DW_TAG_volatile_type ||
+            Tag == dwarf::DW_TAG_restrict_type) {
+          uint32_t TmpTypeId;
+          visitTypeEntry(DTy->getBaseType(), TmpTypeId, CheckPointer,
+                         SeenPointer);
+        }
+      }
+    }
     return;
   }


@@ -12,9 +12,6 @@
 #include "HexagonInstrInfo.h"
 #include "HexagonSubtarget.h"
 #include "MCTargetDesc/HexagonBaseInfo.h"
-#include "RDFGraph.h"
-#include "RDFLiveness.h"
-#include "RDFRegisters.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/DenseSet.h"
 #include "llvm/ADT/StringRef.h"
@@ -27,6 +24,9 @@
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineOperand.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/RDFGraph.h"
+#include "llvm/CodeGen/RDFLiveness.h"
+#include "llvm/CodeGen/RDFRegisters.h"
 #include "llvm/CodeGen/TargetSubtargetInfo.h"
 #include "llvm/InitializePasses.h"
 #include "llvm/MC/MCInstrDesc.h"


@@ -11,9 +11,6 @@
 #include "MCTargetDesc/HexagonBaseInfo.h"
 #include "RDFCopy.h"
 #include "RDFDeadCode.h"
-#include "RDFGraph.h"
-#include "RDFLiveness.h"
-#include "RDFRegisters.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/SetVector.h"
@@ -24,6 +21,9 @@
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineOperand.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/RDFGraph.h"
+#include "llvm/CodeGen/RDFLiveness.h"
+#include "llvm/CodeGen/RDFRegisters.h"
 #include "llvm/InitializePasses.h"
 #include "llvm/Pass.h"
 #include "llvm/Support/CommandLine.h"


@@ -11,13 +11,13 @@
 //===----------------------------------------------------------------------===//
 #include "RDFCopy.h"
-#include "RDFGraph.h"
-#include "RDFLiveness.h"
-#include "RDFRegisters.h"
 #include "llvm/CodeGen/MachineDominators.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineOperand.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/RDFGraph.h"
+#include "llvm/CodeGen/RDFLiveness.h"
+#include "llvm/CodeGen/RDFRegisters.h"
 #include "llvm/CodeGen/TargetOpcodes.h"
 #include "llvm/CodeGen/TargetRegisterInfo.h"
 #include "llvm/MC/MCRegisterInfo.h"


@@ -9,9 +9,9 @@
 #ifndef LLVM_LIB_TARGET_HEXAGON_RDFCOPY_H
 #define LLVM_LIB_TARGET_HEXAGON_RDFCOPY_H
-#include "RDFGraph.h"
-#include "RDFLiveness.h"
-#include "RDFRegisters.h"
+#include "llvm/CodeGen/RDFGraph.h"
+#include "llvm/CodeGen/RDFLiveness.h"
+#include "llvm/CodeGen/RDFRegisters.h"
 #include "llvm/CodeGen/MachineFunction.h"
 #include <map>
 #include <vector>


@@ -9,13 +9,13 @@
 // RDF-based generic dead code elimination.
 #include "RDFDeadCode.h"
-#include "RDFGraph.h"
-#include "RDFLiveness.h"
 #include "llvm/ADT/SetVector.h"
 #include "llvm/CodeGen/MachineBasicBlock.h"
 #include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/RDFGraph.h"
+#include "llvm/CodeGen/RDFLiveness.h"
 #include "llvm/Support/Debug.h"
 #include <queue>


@@ -23,8 +23,8 @@
 #ifndef RDF_DEADCODE_H
 #define RDF_DEADCODE_H
-#include "RDFGraph.h"
-#include "RDFLiveness.h"
+#include "llvm/CodeGen/RDFGraph.h"
+#include "llvm/CodeGen/RDFLiveness.h"
 #include "llvm/ADT/SetVector.h"
 namespace llvm {


@@ -373,6 +373,7 @@ def : InstRW<[P9_DPE_7C, P9_DPO_7C, IP_EXECE_1C, IP_EXECO_1C, DISP_1C],
                VMSUMSHS,
                VMSUMUBM,
                VMSUMUHM,
+               VMSUMUDM,
                VMSUMUHS,
                VMULESB,
                VMULESH,


@@ -166,6 +166,9 @@ def FeatureHTM : SubtargetFeature<"htm", "HasHTM", "true",
                                   "Enable Hardware Transactional Memory instructions">;
 def FeatureMFTB : SubtargetFeature<"", "FeatureMFTB", "true",
                                    "Implement mftb using the mfspr instruction">;
+def FeatureUnalignedFloats :
+    SubtargetFeature<"allow-unaligned-fp-access", "AllowsUnalignedFPAccess",
+                     "true", "CPU does not trap on unaligned FP access">;
 def FeaturePPCPreRASched:
     SubtargetFeature<"ppc-prera-sched", "UsePPCPreRASchedStrategy", "true",
                      "Use PowerPC pre-RA scheduling strategy">;
@@ -252,7 +255,8 @@ def ProcessorFeatures {
          FeatureExtDiv,
          FeatureMFTB,
          DeprecatedDST,
-         FeatureTwoConstNR];
+         FeatureTwoConstNR,
+         FeatureUnalignedFloats];
     list<SubtargetFeature> P7SpecificFeatures = [];
     list<SubtargetFeature> P7Features =
       !listconcat(P7InheritableFeatures, P7SpecificFeatures);
