Commit Graph

13 Commits

Author SHA1 Message Date
Rafael Espindola 3d21ab31af Delete BuiltinCC. NFC.
It is always identical to RuntimeCC.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@328050 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-20 22:02:57 +00:00
Abderrazek Zaafrani 749de2d465 [AARch64] Add ARMv8.2-A FP16 vector intrinsics
Putting back the code that was reverted a few weeks ago.

Differential Revision: https://reviews.llvm.org/D34161

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@321294 91177308-0d34-0410-b5e6-96231b3b80d8
2017-12-21 19:20:01 +00:00
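
For reference on how the intrinsics added above are consumed (a usage sketch, not part of the commit): the ARMv8.2-A FP16 vector intrinsics are declared in <arm_neon.h> and require a target with the fp16 extension, e.g. clang -march=armv8.2-a+fp16.

```cpp
#include <arm_neon.h>

// Fused multiply-add over eight half-precision lanes: returns acc + a * b.
float16x8_t fma_f16(float16x8_t acc, float16x8_t a, float16x8_t b) {
  return vfmaq_f16(acc, a, b);
}
```
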
Alexander Richardson 2c42fd5f93 Convert clang::LangAS to a strongly typed enum
Summary:
Convert clang::LangAS to a strongly typed enum

Currently both clang AST address spaces and target specific address spaces
are represented as unsigned which can lead to subtle errors if the wrong
type is passed. It is especially confusing in the CodeGen files as it is
not possible to see what kind of address space should be passed to a
function without looking at the implementation.
I originally made this change for our LLVM fork for the CHERI architecture
where we make extensive use of address spaces to differentiate between
capabilities and pointers. When merging the upstream changes I usually
run into some test failures or runtime crashes because the wrong kind of
address space is passed to a function. By converting the LangAS enum to a
C++11 enum class we can catch these errors at compile time. Additionally, it is now
obvious from the function signature which kind of address space it expects.

I found the following errors while writing this patch:

- ItaniumRecordLayoutBuilder::LayoutField was passing a clang AST address
  space to  TargetInfo::getPointer{Width,Align}()
- TypePrinter::printAttributedAfter() prints the numeric value of the
  clang AST address space instead of the target address space.
  However, this code is not used so I kept the current behaviour
- initializeForBlockHeader() in CGBlocks.cpp was passing
  LangAS::opencl_generic to TargetInfo::getPointer{Width,Align}()
- CodeGenFunction::EmitBlockLiteral() was passing an AST address space to
  TargetInfo::getPointerWidth()
- CGOpenMPRuntimeNVPTX::translateParameter() passed a target address space
  to Qualifiers::addAddressSpace()
- CGOpenMPRuntimeNVPTX::getParameterAddress() was using
  llvm::Type::getPointerTo() with an AST address space
- clang_getAddressSpace() returns either a LangAS or a target address
  space. As this is exposed to C I have kept the current behaviour and
  added a comment stating that it is probably not correct.

Other than this the patch should not cause any functional changes.

Reviewers: yaxunl, pcc, bader

Reviewed By: yaxunl, bader

Subscribers: jlebar, jholewinski, nhaehnle, Anastasia, cfe-commits

Differential Revision: https://reviews.llvm.org/D38816

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@315871 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-15 18:48:14 +00:00
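
The motivation above can be shown with a small standalone sketch (the enumerators and widths are illustrative, not the actual clang definitions): with plain unsigned, an AST address space and a target address space silently interconvert, while an enum class rejects the mix-up at compile time.

```cpp
// Illustrative only; clang's real LangAS lives in clang/Basic/AddressSpaces.h.
enum class LangAS : unsigned {
  Default = 0,
  opencl_global,
  opencl_local,
  opencl_generic,
};

// Accepts only the strongly typed AST address space.
unsigned getPointerWidthFor(LangAS AS) {
  return AS == LangAS::Default ? 64 : 32; // made-up widths, for illustration
}

void example() {
  unsigned TargetAS = 3;                          // a target (LLVM IR) address space
  (void)getPointerWidthFor(LangAS::opencl_local); // OK: types match
  // (void)getPointerWidthFor(TargetAS);          // error: no unsigned -> LangAS conversion
  (void)TargetAS;
}
```
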
Sjoerd Meijer 5bf57dfedf This reverts r305820 (ARMv8.2-A FP16 vector intrinsics) because it shows
problems in testing, see comments in D34161 for some more details.
A fix is in progress in D35011, but a revert seems better now as the fix will
probably take some more time to land.


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@307277 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-06 16:37:31 +00:00
Abderrazek Zaafrani 6e3f80de39 [AArch64] Add ARMv8.2-A FP16 vector intrinsics
Differential Revision: https://reviews.llvm.org/D34161

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@305820 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-20 18:54:57 +00:00
Vedant Kumar d4d74154eb Revert "[AArch64] Add ARMv8.2-A FP16 vector intrinsics"
This reverts commit r304493. It breaks all the Darwin bots:
http://green.lab.llvm.org/green/job/clang-stage1-cmake-RA-incremental_check/37168

Failure:
Failing Tests (2):
    Clang :: CodeGen/aarch64-v8.2a-neon-intrinsics.c
    Clang :: CodeGen/arm_neon_intrinsics.c

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@304509 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-02 01:22:14 +00:00
Abderrazek Zaafrani d751aefbc7 [AArch64] Add ARMv8.2-A FP16 vector intrinsics
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@304493 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-01 23:22:29 +00:00
Yaxun Liu 3022dac388 CodeGen: Cast alloca to expected address space
Alloca always returns a pointer in the alloca address space, which may
be different from the address space expected by the language. For example,
in C++ automatic variables are in the default address space. Therefore,
cast the alloca to the expected address space when necessary.

Differential Revision: https://reviews.llvm.org/D32248


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@303370 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-18 18:51:09 +00:00
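
A sketch of the pattern described above, written against the generic IRBuilder API rather than the actual Clang CodeGen code (the helper name is hypothetical):

```cpp
#include "llvm/IR/IRBuilder.h"

// Create a local variable in the target's alloca address space, then
// addrspacecast it to the address space the source language expects.
llvm::Value *emitLocalVar(llvm::IRBuilder<> &B, llvm::Type *Ty,
                          unsigned AllocaAS, unsigned ExpectedAS) {
  llvm::AllocaInst *A =
      B.CreateAlloca(Ty, AllocaAS, /*ArraySize=*/nullptr, "var");
  if (AllocaAS == ExpectedAS)
    return A;
  // The rest of codegen sees a pointer in the language-expected address space.
  return B.CreateAddrSpaceCast(A, llvm::PointerType::get(Ty, ExpectedAS),
                               "var.ascast");
}
```
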
Yaxun Liu b51118de43 CodeGen: Let lifetime intrinsic use alloca address space
Differential Revision: https://reviews.llvm.org/D31717


git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@300485 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-17 20:03:11 +00:00
Yaxun Liu d0b5dcb55c Re-commit [OpenCL] AMDGCN: Fix size_t type
There was a premature cast to pointer type in emitPointerArithmetic which caused an assertion failure in tests with assertions enabled.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@279206 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-19 05:17:25 +00:00
Yaxun Liu 2b5693d1fd Revert [OpenCL] AMDGCN: Fix size_t type
due to regressions in test/CodeGen/exprs.c on certain platforms.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@279127 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-18 20:01:06 +00:00
Yaxun Liu 7441a2bd4f [OpenCL] AMDGCN: Fix size_t type
On certain GPUs in the AMDGCN target, pointers in the private address space are 32-bit while pointers in the other address spaces are 64-bit. size_t should be defined as 64-bit for these GPUs so that it can hold pointers in all address spaces. Also fixed issues in pointer arithmetic codegen by using the pointer-specific intptr type.

Differential Revision: https://reviews.llvm.org/D23361

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@279121 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-18 19:34:04 +00:00
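
A sketch of the pointer-arithmetic part of this change (illustrative, not the actual emitPointerArithmetic): the index is converted to the intptr type of the pointer's own address space, so a 32-bit private pointer on AMDGCN gets a 32-bit offset even though size_t is 64-bit.

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/IRBuilder.h"

llvm::Value *emitPointerAdd(llvm::IRBuilder<> &B, const llvm::DataLayout &DL,
                            llvm::Value *Ptr, llvm::Type *ElemTy,
                            llvm::Value *Index) {
  // getIntPtrType(Ptr->getType()) yields the integer type matching the pointer
  // width of Ptr's address space (32-bit for AMDGCN private).
  llvm::Type *IntPtrTy = DL.getIntPtrType(Ptr->getType());
  Index = B.CreateSExtOrTrunc(Index, IntPtrTy, "idx.conv");
  return B.CreateGEP(ElemTy, Ptr, Index, "add.ptr");
}
```
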
John McCall f4ddf94ecb Compute and preserve alignment more faithfully in IR-generation.
Introduce an Address type to bundle a pointer value with an
alignment.  Introduce APIs on CGBuilderTy to work with Address
values.  Change core APIs on CGF/CGM to traffic in Address where
appropriate.  Require alignments to be non-zero.  Update a ton
of code to compute and propagate alignment information.

As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.

The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned.  Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay.  I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.

Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.

We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment.  In particular,
field access now uses alignmentAtOffset instead of min.

Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs.  For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint.  That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.

ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments.  In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments.  That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.

I partially punted on applying this work to CGBuiltin.  Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@246985 91177308-0d34-0410-b5e6-96231b3b80d8
2015-09-08 08:05:57 +00:00
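
To make the Address idea above concrete, a minimal sketch follows (assuming llvm::Align from current LLVM; the real clang::CodeGen::Address and CGBuilderTy carry more than this):

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/Support/Alignment.h"

// Bundle a pointer with a non-zero alignment so emission code cannot silently
// fall back to a default-aligned load or store.
class AddressSketch {
  llvm::Value *Ptr;
  llvm::Align Alignment; // llvm::Align is non-zero by construction
public:
  AddressSketch(llvm::Value *P, llvm::Align A) : Ptr(P), Alignment(A) {}
  llvm::Value *getPointer() const { return Ptr; }
  llvm::Align getAlignment() const { return Alignment; }
};

// A load helper that always honors the tracked alignment.
llvm::Value *emitLoad(llvm::IRBuilder<> &B, const AddressSketch &Addr,
                      llvm::Type *Ty) {
  return B.CreateAlignedLoad(Ty, Addr.getPointer(), Addr.getAlignment(), "ld");
}
```
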