GCC uses the x87DoubleExtended model for long doubles, and passes them
indirectly by address through function calls.
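A minimal illustration of the convention being matched (hypothetical snippet; an x86_64 mingw target is assumed):

  // On x86_64 mingw, this 80-bit x87 long double argument is expected to be
  // passed indirectly, by address.
  long double twice(long double x) { return 2.0L * x; }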
Also replace the existing mingw-long-double assembly emitting test with
an IR-level test.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@251567 91177308-0d34-0410-b5e6-96231b3b80d8
Attach function attributes to the functions created in CGBlocks.cpp.
This commit fixes a bug in clang's code-gen where it creates the
following functions but doesn't attach function attributes to them
(a reduced example follows the list):
__copy_helper_block_
__destroy_helper_block_
__Block_byref_object_copy_
__Block_byref_object_dispose_
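A reduced, hypothetical example of code that makes IRGen create such helpers (clang++ -fblocks assumed; the names are illustrative, not from this commit):

  #include <string>
  void use(void (^)(void));  // assumed consumer of the block
  void demo() {
    // Capturing a __block C++ object requires copy/destroy helpers for the
    // block literal and copy/dispose helpers for the byref variable.
    __block std::string tag = "demo";
    use(^{ tag += "!"; });
  }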
rdar://problem/20828324
Differential Revision: http://reviews.llvm.org/D13525
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249735 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
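A hedged usage sketch of the extension (hypothetical function; an x86_64 target whose default ABI is System V is assumed):

  // An ms_abi variadic function needs the Win64 va_list even when the rest
  // of the translation unit follows the System V ABI.
  int __attribute__((ms_abi)) sum(int count, ...) {
    __builtin_ms_va_list ap;
    __builtin_ms_va_start(ap, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
      total += __builtin_va_arg(ap, int);
    __builtin_ms_va_end(ap);
    return total;
  }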
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247941 91177308-0d34-0410-b5e6-96231b3b80d8
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
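A rough sketch of the shape of the abstraction (simplified, not the actual class definition):

  #include <cassert>
  #include "clang/AST/CharUnits.h"
  #include "llvm/IR/Value.h"

  // Bundles a pointer with the alignment known for it, so callers can no
  // longer forget the alignment when emitting loads and stores.
  class Address {
    llvm::Value *Pointer;
    clang::CharUnits Alignment;
  public:
    Address(llvm::Value *Ptr, clang::CharUnits Align)
        : Pointer(Ptr), Alignment(Align) {
      assert(!Align.isZero() && "Address must have non-zero alignment");
    }
    llvm::Value *getPointer() const { return Pointer; }
    clang::CharUnits getAlignment() const { return Alignment; }
  };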
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
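The arithmetic behind that choice, as a small self-contained sketch (a hypothetical helper, not clang's code):

  #include <algorithm>
  #include <cstdint>

  // Given a pointer aligned to BaseAlign (a power of two), that pointer plus
  // Offset is aligned to the largest power of two dividing both values.
  static uint64_t alignmentAtOffset(uint64_t BaseAlign, uint64_t Offset) {
    if (Offset == 0)
      return BaseAlign;
    uint64_t OffsetAlign = Offset & -Offset; // largest power of two dividing Offset
    return std::min(BaseAlign, OffsetAlign);
  }
  // e.g. a field at offset 24 inside a 64-byte-aligned object is 8-aligned,
  // which can be far better than the field type's own minimum alignment.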
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@246985 91177308-0d34-0410-b5e6-96231b3b80d8
This implements basic support for compiling (though not yet assembling
or linking) for a WebAssembly target. Note that ABI details are not yet
finalized, and may change.
Differential Revision: http://reviews.llvm.org/D12002
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@246814 91177308-0d34-0410-b5e6-96231b3b80d8
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
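A hedged illustration of what the new rules cover (hypothetical function; an ARM target is assumed):

  // __fp16 may now appear directly in a signature; arithmetic still promotes
  // to float, and the result converts back to __fp16 on return.
  __fp16 scale(__fp16 x, __fp16 factor) {
    return x * factor;
  }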
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@246764 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message:
[ARM] Allow passing/returning of __fp16 arguments
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@246760 91177308-0d34-0410-b5e6-96231b3b80d8
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@246755 91177308-0d34-0410-b5e6-96231b3b80d8
This is the PS4 counterpart to r229376, which quotes the library name if the
name contains a space. It was discovered that if a library name contains both
double-quote and space characters, quoting the name might produce unexpected
results, but we are mostly concerned with a Windows host environment, which
does not allow double-quote or slash characters in file/folder names.
Differential Revision: http://reviews.llvm.org/D11275
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@242689 91177308-0d34-0410-b5e6-96231b3b80d8
We shouldn't crash despite the AMD64 ABI not giving clear guidance as to
how to pass around vector types <= 32 bits. Instead, classify such
vectors as INTEGER to be compatible with GCC.
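A hypothetical reduced example of the kind of type involved (not the PR's test case):

  // A 32-bit vector: previously this could crash classification; it is now
  // classified as INTEGER, matching GCC.
  typedef short v2i16 __attribute__((vector_size(4)));
  v2i16 passthrough(v2i16 v) { return v; }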
This fixes PR24162.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@242508 91177308-0d34-0410-b5e6-96231b3b80d8
For Mips direct-to-nacl, the goal is to stay close to the le32 front-end and
use the Mips32EL backend. This patch defines a new NaClMips32ELTargetInfo and
modifies it slightly to be close to le32. It also adds the necessary parts,
in line with ARM and X86.
Differential Revision: http://reviews.llvm.org/D10739
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@241678 91177308-0d34-0410-b5e6-96231b3b80d8
We forgot to run postMerge after deciding that the union had to be
classified as MEMORY. This left us with Lo == MEMORY and Hi == SSEUp,
which is an invalid combination.
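For context, a simplified sketch of the SysV post-merger cleanup that postMerge applies (illustrative, not clang's exact code; sizes here are in bytes):

  enum Class { NONE, INTEGER, SSE, SSEUP, X87, X87UP, COMPLEX_X87, MEMORY };

  // Fix up the (Lo, Hi) classification of an aggregate after merging.
  static void postMerge(unsigned SizeInBytes, Class &Lo, Class &Hi) {
    if (Hi == MEMORY)
      Lo = MEMORY;                 // any MEMORY class forces memory
    if (Hi == X87UP && Lo != X87)
      Lo = MEMORY;                 // X87UP must be preceded by X87
    if (SizeInBytes > 16 && (Lo != SSE || Hi != SSEUP))
      Lo = MEMORY;                 // large aggregates need SSE + SSEUP
    if (Hi == SSEUP && Lo != SSE && Lo != SSEUP)
      Hi = SSE;                    // a stray SSEUP is demoted to SSE
  }
  // Skipping this step is what left the invalid Lo == MEMORY, Hi == SSEUp pair.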
This fixes PR24021.
Differential Revision: http://reviews.llvm.org/D10908
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@241666 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Byval argument pair formation assumes that if a type is less than 8 bytes
it must be an integer and not a pointer, which is not true for x32 and NaCl.
Relax the assertion and add a test for a codegen case that triggered it.
Reviewers: jvoung
Subscribers: jfb, cfe-commits
Differential Revision: http://reviews.llvm.org/D10701
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@240600 91177308-0d34-0410-b5e6-96231b3b80d8
As specified in the SysV AVX512 ABI drafts. It follows the same scheme
as AVX2:
Arguments of type __m512 are split into eight eightbyte chunks.
The least significant one belongs to class SSE and all the others
to class SSEUP.
This also means we change the OpenMP SIMD default alignment on AVX512.
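An illustrative example of an argument that now travels in a single ZMM register (hypothetical; an AVX-512-capable target is assumed):

  #include <immintrin.h>

  // With the SSE + 7x SSEUP classification, each __m512 argument and the
  // return value stay in ZMM registers rather than going through memory.
  __m512 add(__m512 a, __m512 b) {
    return _mm512_add_ps(a, b);
  }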
Based on r240337.
Differential Revision: http://reviews.llvm.org/D9894
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@240338 91177308-0d34-0410-b5e6-96231b3b80d8
The patch is generated using this command:
$ tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
work/llvm/tools/clang
To reduce churn, not touching namespaces spanning less than 10 lines.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@240270 91177308-0d34-0410-b5e6-96231b3b80d8
This patch fixes an assertion failure in method
'X86_64ABIInfo::GetByteVectorType'.
Method 'GetByteVectorType' (in TargetInfo.cpp) is responsible
for mapping a QualType 'Ty' (for an argument or return value) to an LLVM IR
type that, according to the ABI, must be passed in an XMM/YMM vector register.
When selecting the IR vector type, method 'GetByteVectorType' always tries to
choose the "best" IR vector type for the input 'Ty'. In particular, if Ty
is a wrapper structure, it keeps unwrapping it until it finds a vector type VTy.
That VTy is the "preferred IR type".
However, function 'isSingleElementStructure' (used to unwrap structures) does
not know how to look through union types. So, before this patch, if Ty was in
a nest of wrapper structures with at least two union types, we would have
triggered an assertion failure (added at revision 230971).
With this patch, if method 'GetByteVectorType' fails to find the preferred
vector type, we just return a valid (although potentially 'less friendly')
vector type based on the type size. So, rather than asserting on an unexpected
input 'Ty', we conservatively return vector type <2 x double> if Ty is 16
bytes, or <4 x double> if Ty is 32 bytes.
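A hypothetical reduced example of the kind of nesting described above (not the original reproducer):

  typedef double v2f64 __attribute__((vector_size(16)));
  union U1 { v2f64 v; };
  union U2 { U1 u; };
  struct S { U2 u; };

  // Previously this kind of union-wrapped vector could hit the assertion;
  // now it conservatively uses a <2 x double> IR type.
  S passthrough(S s) { return s; }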
Differential Revision: http://reviews.llvm.org/D10190
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@238861 91177308-0d34-0410-b5e6-96231b3b80d8
If the type isn't trivially movable, emplace can skip a potentially
expensive move. It also saves a couple of characters.
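A small illustration of the transformation (hypothetical call site):

  #include <string>
  #include <utility>
  #include <vector>

  void fill(std::vector<std::pair<int, std::string>> &v) {
    v.push_back(std::make_pair(1, "one")); // before: temporary, then a move
    v.emplace_back(2, "two");              // after: constructed in place
  }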
Call sites were found with the ASTMatcher + some semi-automated cleanup.
memberCallExpr(
    argumentCountIs(1), callee(methodDecl(hasName("push_back"))),
    on(hasType(recordDecl(has(namedDecl(hasName("emplace_back")))))),
    hasArgument(0, bindTemporaryExpr(
                       hasType(recordDecl(hasNonTrivialDestructor())),
                       has(constructExpr()))),
    unless(isInTemplateInstantiation()))
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@238601 91177308-0d34-0410-b5e6-96231b3b80d8
Re-land the change r238200, but with modifications in the tests that should
prevent new failures in some environments as reported with the original
change on the mailing list.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@238253 91177308-0d34-0410-b5e6-96231b3b80d8
We already have the ABI, so we don't need a "HasAVX" flag.
This will also make it easier to add an AVX512 ABI.
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@237989 91177308-0d34-0410-b5e6-96231b3b80d8
Also add trivial handling of transparent unions.
PPC32, MSP430, and XCore apparently all rely on DefaultABIInfo. This
should worry you, because DefaultABIInfo does not implement the rules
of any particular ABI.
Fixes PR23097, patch by Andy Gibbs.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@237630 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support for the z13 architecture type. For compatibility
with GCC, a pair of options -mvx / -mno-vx can be used to selectively
enable/disable use of the vector facility.
When the vector facility is present, we default to the new vector ABI.
This is characterized by two major differences:
- Vector types are passed/returned in vector registers
(except for unnamed arguments of a variable-argument list function).
- Vector types are at most 8-byte aligned.
The reason for the choice of 8-byte vector alignment is that the hardware
is able to efficiently load vectors at 8-byte alignment, and the ABI only
guarantees 8-byte alignment of the stack pointer, so requiring any higher
alignment for vectors would require dynamic stack re-alignment code.
However, for compatibility with old code that may use vector types, when
*not* using the vector facility, the old alignment rules (vector types
are naturally aligned) remain in use.
These alignment rules are not only implemented at the C language level,
but also at the LLVM IR level. This is done by selecting a different
DataLayout string depending on whether the vector ABI is in effect or not.
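A hedged illustration of the language-level effect (hypothetical; a SystemZ target with the vector facility enabled is assumed):

  typedef int v4i32 __attribute__((vector_size(16)));

  // Under the vector ABI this 16-byte vector type is only 8-byte aligned,
  // matching what the hardware and the stack guarantee can provide.
  static_assert(alignof(v4i32) == 8, "vector ABI caps vector alignment at 8");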
Based on a patch by Richard Sandiford.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@236531 91177308-0d34-0410-b5e6-96231b3b80d8
- Changed CUDALaunchBounds arguments from integers to Expr* so they can
be saved in the AST for instantiation.
- Added support for template instantiation of the launch_bounds attribute.
- Moved evaluation of launch_bounds arguments to NVPTXTargetCodeGenInfo::
SetTargetAttributes() where it can be done after template instantiation.
- Added a warning on negative launch_bounds arguments.
- Amended test cases.
Differential Revision: http://reviews.llvm.org/D8985
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@235452 91177308-0d34-0410-b5e6-96231b3b80d8
Something like { void*, void* } would be passed to a function as a [2 x i64], but returned as an i128. This patch unifies the two behaviours so that we also return it as a [2 x i64].
This is better for the quality of the IR and the size of the final LLVM binary: we tend to want to insert/extract values from these types, and doing so with insert/extract instructions takes less IR than shifting, truncating, and or'ing values.
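A hypothetical C-level example of the affected case:

  struct PtrPair { void *a, *b; };

  // Previously returned as an i128 (repacked with shifts and masks);
  // now both passed and returned as [2 x i64].
  PtrPair make(void *a, void *b) {
    PtrPair p = {a, b};
    return p;
  }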
Reviewed by Tim Northover.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@235231 91177308-0d34-0410-b5e6-96231b3b80d8
Looks like the VTable code in particular will need some work to pass
around the pointee type explicitly.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@234128 91177308-0d34-0410-b5e6-96231b3b80d8