[PowerPC] Enable atomic.c in compiler_rt and force lock/lock_free decisions
at compile time

Summary:
Enables atomic.c in compiler_rt and forces clang not to emit calls for a
runtime decision about lock/lock_free.  At compile time, if clang cannot decide
whether an atomic operation can be lock free, it emits calls to external
functions such as `__atomic_is_lock_free`, `__c11_atomic_is_lock_free` and
`__atomic_always_lock_free`, postponing the decision to a runtime check.
According to the LLVM code documentation, this mechanism exists because of
differences between x86_64 processors that cannot be decided at compile time.
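
For illustration (an example written for this summary, not part of the commit),
a query like the one below could not be folded when targeting 32-bit PowerPC,
so clang emitted a call to an external `__atomic_is_lock_free` helper and
deferred the answer to runtime:

```c
#include <stdint.h>

/* Before this change, when targeting 32-bit PowerPC (e.g.
 * --target=powerpc-unknown-freebsd, an illustrative triple), the 8-byte
 * query below was not a compile-time constant: clang emitted a call to
 * an external __atomic_is_lock_free() routine that the base system does
 * not provide. */
int is_u64_lock_free(uint64_t *p) {
  return __atomic_is_lock_free(sizeof(*p), p);
}
```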

On PowerPC and PowerPCSPE (32-bit), we already know in advance that 8-byte (and
larger) atomics cannot be lock free, so we force the decision at compile time
and avoid having to implement the runtime check in an external library.
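
As a sketch of the resulting behaviour (the program below is illustrative, not
from the commit): on 32-bit PowerPC the lock-free query for an 8-byte atomic
now folds to a compile-time false, and the operation itself is serviced by the
locked implementations in compiler_rt's atomic.c.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  _Atomic int64_t counter = 0;

  /* On 32-bit PowerPC this now evaluates to a compile-time 0 (false)
   * instead of being deferred to a runtime helper. */
  printf("8-byte lock free: %d\n", atomic_is_lock_free(&counter));

  /* The operation is lowered to a compiler_rt call (__atomic_fetch_add_8),
   * which atomic.c implements with a spinlock pool rather than a
   * lock-free instruction sequence. */
  atomic_fetch_add(&counter, 1);
  return 0;
}
```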

This patch was made after users testing the PowerPC 32-bit ISO reported that
llvm could not be compiled with the in-base llvm because `__atomic_load8` was
not implemented.

Submitted by:	alfredo.junior_eldorado.org.br
Reviewed by:	jhibbits, dim

Differential Revision:	https://reviews.freebsd.org/D22549
Justin Hibbits 2019-12-26 23:06:28 +00:00
parent 6795e26b8a
commit 7b6b882fe4
3 changed files with 26 additions and 4 deletions


@@ -9896,6 +9896,13 @@ bool IntExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
}
}
// Avoid emitting call for runtime decision on PowerPC 32-bit
// The lock free possibilities on this platform are covered by the lines
// above and we know in advance other cases require lock
if (Info.Ctx.getTargetInfo().getTriple().getArch() == llvm::Triple::ppc) {
return Success(0, E);
}
return BuiltinOp == Builtin::BI__atomic_always_lock_free ?
Success(0, E) : Error(E);
}
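
To illustrate what the new early return changes (the snippet and target triple
are assumptions for illustration, not part of the commit), lock-free queries
that fall past the size checks above now fold to constants on 32-bit PowerPC
instead of leaving `__atomic_is_lock_free` to be resolved by a library call:

```c
/* Assumed invocation: clang --target=powerpc-unknown-freebsd -c example.c */

/* Sizes handled by the earlier checks are still compile-time true. */
_Static_assert(__atomic_always_lock_free(4, 0),
               "4-byte atomics are lock free on 32-bit PowerPC");

/* With this hunk, the 8-byte query evaluates to a constant 0, so no
 * call to an external __atomic_is_lock_free() helper is emitted. */
int has_lock_free_u64(void) { return __atomic_is_lock_free(8, 0); }
```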


@@ -51,8 +51,8 @@ static const long SPINLOCK_MASK = SPINLOCK_COUNT - 1;
////////////////////////////////////////////////////////////////////////////////
#ifdef __FreeBSD__
#include <errno.h>
#include <machine/atomic.h>
#include <sys/types.h>
#include <machine/atomic.h>
#include <sys/umtx.h>
typedef struct _usem Lock;
__inline static void unlock(Lock *l) {
@@ -117,13 +117,20 @@ static __inline Lock *lock_for_pointer(void *ptr) {
return locks + (hash & SPINLOCK_MASK);
}
/// Macros for determining whether a size is lock free. Clang can not yet
/// codegen __atomic_is_lock_free(16), so for now we assume 16-byte values are
/// not lock free.
/// Macros for determining whether a size is lock free.
#define IS_LOCK_FREE_1 __c11_atomic_is_lock_free(1)
#define IS_LOCK_FREE_2 __c11_atomic_is_lock_free(2)
#define IS_LOCK_FREE_4 __c11_atomic_is_lock_free(4)
/// 32 bit PowerPC doesn't support 8-byte lock_free atomics
#if !defined(__powerpc64__) && defined(__powerpc__)
#define IS_LOCK_FREE_8 0
#else
#define IS_LOCK_FREE_8 __c11_atomic_is_lock_free(8)
#endif
/// Clang can not yet codegen __atomic_is_lock_free(16), so for now we assume
/// 16-byte values are not lock free.
#define IS_LOCK_FREE_16 0
/// Macro that calls the compiler-generated lock-free versions of functions
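
For orientation, a simplified sketch (not the actual compiler_rt source) of how
atomic.c combines these IS_LOCK_FREE_<n> constants with the spinlock pool
returned by lock_for_pointer(); with IS_LOCK_FREE_8 hard-wired to 0 on 32-bit
PowerPC, only the locked branch survives constant folding:

```c
#include <stdint.h>

/* Sketch of the pattern behind the sized entry points such as
 * __atomic_load_8; Lock, lock(), unlock() and lock_for_pointer() are
 * the helpers defined elsewhere in atomic.c. */
uint64_t sketch_load_8(uint64_t *src, int model) {
  if (IS_LOCK_FREE_8)                       /* literal 0 on 32-bit PowerPC */
    return __c11_atomic_load((_Atomic uint64_t *)src, model);
  Lock *l = lock_for_pointer(src);          /* hash the address into the pool */
  lock(l);
  uint64_t val = *src;
  unlock(l);
  return val;
}
```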


@@ -205,6 +205,14 @@ CFLAGS+= -DEMIT_SYNC_ATOMICS
SRCF+= stdatomic
.endif
.if "${COMPILER_TYPE}" == "clang" && \
(${MACHINE_ARCH} == "powerpc" || ${MACHINE_ARCH} == "powerpcspe")
SRCS+= atomic.c
CFLAGS.atomic.c+= -Wno-atomic-alignment
.endif
.for file in ${SRCF}
.if ${MACHINE_ARCH:Marmv[67]*} && (!defined(CPUTYPE) || ${CPUTYPE:M*soft*} == "") \
&& exists(${CRTSRC}/${CRTARCH}/${file}vfp.S)
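
The `-Wno-atomic-alignment` addition is needed because atomic.c's lock-free
fast paths invoke C11 atomic builtins at widths that are not lock free on these
targets, which trips clang's atomic-alignment diagnostics when atomic.c itself
is built.  The snippet below (illustrative, not from the commit) provokes the
same class of warning:

```c
#include <stdatomic.h>
#include <stdint.h>

uint64_t load64(_Atomic uint64_t *p) {
  /* 8-byte atomics are not lock free on 32-bit PowerPC, so clang warns
   * along the lines of "large atomic operation may incur significant
   * performance penalty" (-Watomic-alignment) and lowers the load to a
   * call into compiler_rt's atomic.c. */
  return atomic_load(p);
}
```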