Calling registerName() for the same symbol twice, even with a different
size, has no effect other than the lookup overhead. Avoid the
redundancy.
Fixes facebookincubator/BOLT#299
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D136115
The extra NUL does not impact the functionality of the generated code, but it confuses
various NVIDIA tools used to examine embedded GPU binaries.
Differential Revision: https://reviews.llvm.org/D135832
Previously, some uses of std::function with blocks would crash when ARC was enabled.
rdar://100907096
Differential Revision: https://reviews.llvm.org/D135706
`std::variant::operator<=>` is missing a requires clause ensuring that
`operator<=>` only exists when all of the types in the variant are
`three_way_comparable`.
Add the missing requires clause and adjust the existing test which was
incorrect.
Fixes https://github.com/llvm/llvm-project/issues/58192.
Differential Revision: https://reviews.llvm.org/D136050
When UserExpression::Evaluate() fails and doesn't return a ValueObject, there is no vehicle for returning the error in the return value.
This behavior can be observed by applying the following patch:
diff --git a/lldb/source/Target/Target.cpp b/lldb/source/Target/Target.cpp
index f1a311b7252c..58c03ccdb068 100644
--- a/lldb/source/Target/Target.cpp
+++ b/lldb/source/Target/Target.cpp
@@ -2370,6 +2370,7 @@ UserExpression *Target::GetUserExpressionForLanguage(
Expression::ResultType desired_type,
const EvaluateExpressionOptions &options, ValueObject *ctx_obj,
Status &error) {
+ error.SetErrorStringWithFormat("Ha ha!"); return nullptr;
auto type_system_or_err = GetScratchTypeSystemForLanguage(language);
if (auto err = type_system_or_err.takeError()) {
error.SetErrorStringWithFormat(
and then running
$ lldb -o "p 1"
(lldb) p 1
(lldb)
This patch fixes this by creating an empty result ValueObject that wraps the error.
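As a rough sketch of the idea (an illustrative fragment, not the exact patch; it assumes LLDB's existing ValueObjectConstResult::Create overload that takes a Status):
```cpp
// With no result ValueObject available, produce an empty one that carries the
// Status error so callers still see what went wrong.
lldb::ValueObjectSP result_valobj_sp = ValueObjectConstResult::Create(
    exe_ctx.GetBestExecutionContextScope(), error);
```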
Differential Revision: https://reviews.llvm.org/D135998
This diff updates the `fastmath` attribute in the LLVMIR dialect to use `tblgen`
classes that were developed after the initial LLVMIR `fastmath` implementation.
Using the `EnumAttr` `tblgen` classes brings the LLVMIR `fastmath` attribute in
line with other dialects, and eliminates some of the custom printing and parsing
code in the LLVMIR dialect.
Subsequent commits will further reduce the custom processing code for the LLVMIR
`fastmath` attribute by unifying printing/parsing functionality between the
LLVMIR and `arith` `fastmath` attributes. (The actual attributes will remain
separate, but the printing and parsing will be made generic, and will be usable
by other dialects/attributes.)
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D135289
`register(ID, space)`, like register(t3, space1), will be translated into
i32 3, i32 1 as the last 2 operands of the resource annotation metadata.
NamedMetadata for CBuffers and SRVs is added as "hlsl.cbufs" and "hlsl.srvs", respectively.
Reviewed By: beanz
Differential Revision: https://reviews.llvm.org/D130951
This diff causes the `tblgen`-erated print() function to skip printing a
`DefaultValuedAttr` attribute when the value is equal to the default.
This feature will reduce the amount of custom printing code that needs to be
written by users in a relatively common scenario. As a motivating example, for the
fastmath flags in the LLVMIR dialect, we would prefer to print this:
```
%0 = llvm.fadd %arg0, %arg1 : f32
```
instead of this:
```
%0 = llvm.fadd %arg0, %arg1 {fastmathFlags = #llvm.fastmath<none>} : f32
```
This diff makes the handling of print functionality for default-valued attributes
standard.
This is an updated version of https://reviews.llvm.org/D135398, without the per-attribute bit to control printing.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D135993
sifive-7-series has macrofusion support to convert a branch over
a single instruction into a conditional instruction. This can be
an improvement if the branch is hard to predict.
This patch adds support for the most basic case, a branch over a
move instruction. This is implemented as a pseudo instruction so
we can hide the control flow until all code motion passes complete.
I've disabled a recent select optimization if this feature is enabled
in the subtarget.
Related gcc patch for the same optimization: https://www.mail-archive.com/gcc-patches@gcc.gnu.org/msg211045.html
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D135814
In SizeClassAllocator64, RegionBeg is determined by RegionBase plus a
random offset. The offset is n pages, where n is a random number less
than or equal to 16. However, on certain platforms with a large page size,
this may immediately OOM without mapping any block pages. For
example,
PageSize = 64 KB, RegionSize = 1 MB
Suppose the random number n is 16, then the random offset will be
64 * 16 = 1024 KB which is equal to the RegionSize.
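The same arithmetic as a tiny standalone sketch (constants from the example above; names are illustrative, not the actual Scudo code):
```cpp
#include <cstddef>
#include <cstdio>

int main() {
  const size_t PageSize = 64 * 1024;      // 64 KB pages
  const size_t RegionSize = 1024 * 1024;  // 1 MB region
  const size_t RandomPages = 16;          // worst-case random value
  const size_t Offset = RandomPages * PageSize;
  // Offset == RegionSize, so zero bytes remain for block pages -> immediate OOM.
  std::printf("offset=%zu region=%zu remaining=%zu\n", Offset, RegionSize,
              RegionSize - Offset);
}
```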
On most platforms we don't have such a large page size, and PRNG
(pseudo-random number generator) behavior differs, so we didn't hit any
failures before. Given that this currently only affects the tests,
increasing the region size is enough.
Will revisit the logic of calculating the random offset.
Differential Revision: https://reviews.llvm.org/D136025
If the arithmetic for indices of inbounds GEPs overflows, the result is
poison. This means it is also OK for the coefficients to overflow. GEP
decomposition is limited to cases where the index size is <= 64 bit,
which can be represented by int64_t used for the coefficients in the
constraint system.
[This Godbolt link](https://godbolt.org/z/s17Kv1s9T) shows different codegen between clang and gcc for a transpose operation.
clang result:
```
vmovdqu xmm0, xmmword ptr [rcx + rax]
vmovdqu xmm1, xmmword ptr [rcx + rax + 16]
vmovdqu xmm2, xmmword ptr [r8 + rax]
vmovdqu xmm3, xmmword ptr [r8 + rax + 16]
vpunpckhbw xmm4, xmm2, xmm0
vpunpcklbw xmm0, xmm2, xmm0
vpunpcklbw xmm2, xmm3, xmm1
vpunpckhbw xmm1, xmm3, xmm1
vmovdqu xmmword ptr [rdi + 2*rax + 48], xmm1
vmovdqu xmmword ptr [rdi + 2*rax + 32], xmm2
vmovdqu xmmword ptr [rdi + 2*rax], xmm0
vmovdqu xmmword ptr [rdi + 2*rax + 16], xmm4
```
gcc result:
```
vmovdqu ymm3, YMMWORD PTR [rdi+rax]
vpunpcklbw ymm1, ymm3, YMMWORD PTR [rsi+rax]
vpunpckhbw ymm0, ymm3, YMMWORD PTR [rsi+rax]
vperm2i128 ymm2, ymm1, ymm0, 32
vperm2i128 ymm1, ymm1, ymm0, 49
vmovdqu YMMWORD PTR [rcx+rax*2], ymm2
vmovdqu YMMWORD PTR [rcx+32+rax*2], ymm1
```
clang's code is roughly 15% slower than gcc's when evaluated on an internal compression benchmark.
The loop vectorizer generates the following shufflevector intrinsic:
```
%interleaved.vec = shufflevector <32 x i8> %a, <32 x i8> %b, <64 x i32> <i32 0, i32 32, i32 1, i32 33, i32 2, i32 34, i32 3, i32 35, i32 4, i32 36, i32 5, i32 37, i32 6, i32 38, i32 7, i32 39, i32 8, i32 40, i32 9, i32 41, i32 10, i32 42, i32 11, i32 43, i32 12, i32 44, i32 13, i32 45, i32 14, i32 46, i32 15, i32 47, i32 16, i32 48, i32 17, i32 49, i32 18, i32 50, i32 19, i32 51, i32 20, i32 52, i32 21, i32 53, i32 22, i32 54, i32 23, i32 55, i32 24, i32 56, i32 25, i32 57, i32 26, i32 58, i32 27, i32 59, i32 28, i32 60, i32 29, i32 61, i32 30, i32 62, i32 31, i32 63>
```
which is lowered to SelectionDAG:
```
t2: v32i8,ch = CopyFromReg t0, Register:v32i8 %0
t6: v64i8 = concat_vectors t2, undef:v32i8
t4: v32i8,ch = CopyFromReg t0, Register:v32i8 %1
t7: v64i8 = concat_vectors t4, undef:v32i8
t8: v64i8 = vector_shuffle<0,64,1,65,2,66,3,67,4,68,5,69,6,70,7,71,8,72,9,73,10,74,11,75,12,76,13,77,14,78,15,79,16,80,17,81,18,82,19,83,20,84,21,85,22,86,23,87,24,88,25,89,26,90,27,91,28,92,29,93,30,94,31,95> t6, t7
```
So far this `vector_shuffle` is good enough for us to pattern-match and transform, but as we go down the SelectionDAG pipeline, it gets split into smaller shuffles. During dagcombine1, the shuffle is split by `foldShuffleOfConcatUndefs`.
```
// shuffle (concat X, undef), (concat Y, undef), Mask -->
// concat (shuffle X, Y, Mask0), (shuffle X, Y, Mask1)
t2: v32i8,ch = CopyFromReg t0, Register:v32i8 %0
t4: v32i8,ch = CopyFromReg t0, Register:v32i8 %1
t19: v32i8 = vector_shuffle<0,32,1,33,2,34,3,35,4,36,5,37,6,38,7,39,8,40,9,41,10,42,11,43,12,44,13,45,14,46,15,47> t2, t4
t15: ch,glue = CopyToReg t0, Register:v32i8 $ymm0, t19
t20: v32i8 = vector_shuffle<16,48,17,49,18,50,19,51,20,52,21,53,22,54,23,55,24,56,25,57,26,58,27,59,28,60,29,61,30,62,31,63> t2, t4
t17: ch,glue = CopyToReg t15, Register:v32i8 $ymm1, t20, t15:1
```
With `foldShuffleOfConcatUndefs` commented out, the vector is still split later by the type legalizer, which comes after dagcombine1, because v64i8 is not a legal type in AVX2 (64 * 8 = 512 bits while ymm = 256 bits). There doesn't seem to be a good way to avoid this split. Lowering the `vector_shuffle` into unpck and perm during dagcombine1 is too early. Therefore, although somewhat inconvenient, we decided to go with pattern-matching a pair of vector shuffles later in the SelectionDAG pipeline, as part of `lowerV32I8Shuffle`.
The code looks at the two operands of the first shuffle it encounters, iterates through the users of the operands, and tries to find two shuffles that are consecutive interleaves. Once the pattern is found, it lowers them into unpcks and perms. It returns the perm for the shuffle that's currently being lowered (letting ISel modify the DAG), and replaces the other shuffle in place.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D134477
This follows the path that AArch64 SVE has taken. Doing this via a function attribute set in the frontend is basically a workaround for the fact that several analyses which need the information (i.e. known bits, lvi, scev) can't easily use TTI without significant amounts of plumbing changes.
This patch hard codes "v" numbers, and directly follows the SVE precedent as a result. In a follow up, I hope to drive this from RISCVISAInfo.h/cpp instead, but the MinVLen number being returned from that interface seemed to always be 0 (which is wrong), and I haven't figured out what's going wrong there.
Differential Revision: https://reviews.llvm.org/D135894
The target-specific code (AArch64, PPC64) does not fit into the generic code and
adds virtual function overhead. Move relocateAlloc into ELF/Arch/ instead. This
removes many virtual functions (relaxTls*). In addition, this helps get rid of
getRelocTargetVA dispatch and many RelExpr members in the future.
The code incorrectly checked for CTLZ_ZERO_UNDEF instead of
CTTZ_ZERO_UNDEF.
While I was there I flipped the condition into an early out.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D136010
This patch renames FuncletPadInst::getNumArgOperands to arg_size for
consistency with CallBase, where getNumArgOperands was removed in
favor of arg_size in commit 3e1c787b31.
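A hypothetical call-site update, for illustration (not code from the patch):
```cpp
#include "llvm/IR/Instructions.h"
using namespace llvm;

// FuncletPadInst (and its CatchPadInst/CleanupPadInst subclasses) now expose
// arg_size(), matching CallBase.
static unsigned countPadArgs(const FuncletPadInst &Pad) {
  return Pad.arg_size(); // previously Pad.getNumArgOperands()
}
```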
Differential Revision: https://reviews.llvm.org/D136048
This reverts commit 233659c7ae.
I see some sanitizer build bot failures. Not sure if it is this change
causing them, but let's see if a revert returns the bots to green...
The current implementation outputs JSON in the following way:
[{'<filename>':{'FileSummary':{},...}}]
Using the filename as a key makes processing the JSON data awkward and
should be avoided. This patch removes that outer key, leaving
[{'FileSummary':{},...}]; since the 'FileSummary' data also includes a
'File' field, we lose no data.
Reviewed By: jhenderson, leonardchan
Differential Revision: https://reviews.llvm.org/D134843
VisibleModuleSet::setVisible takes a callback, to inform of modules
being made (transitively) visible. However, we were calling it as
'Vis(M)' from a recursive lambda, where 'M' is a capture of
setVisible's 'M' module parameter. Thus we could invoke the callback
multiple times, passing the same value to it each time.
Everywhere else in the lambda, we refer to V.M of the lambda's
Visiting parameter. We should do the same for the callback. Thus
we'll pass the outermost module on the outermost recursive call, and
as we descend the imports, we'll pass each import to the callback.
Reviewed By: iains
Differential Revision: https://reviews.llvm.org/D135958
This assert is erroneous because an op implementing
`RegionBranchOpInterface` can have variadic regions and in some cases
have zero regions, in which case the only possible control flow is
branching from the parent op to itself.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D136052
Before this patch (and D135844)
- Given DAG node shl(op, N), isBitfieldPositioningOp uses (optionally shifted [1]) op as the Src (least significant bits of Src are inserted into DstLSB of the Dst node).
After this patch
- If op is and(val, mask), isBitfieldPositioningOp tries to see through the `and` and find whether val is a simpler source than op.
It helps in a similar (probably symmetric) way to how isSeveralBitsExtractOpFromShr [2] optimizes isBitfieldExtractOpFromShr.
Existing test cases are improved without regressions.
[1] cbd8464595/llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp (L2546)
[2] cbd8464595/llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp (L2057)
Differential Revision: https://reviews.llvm.org/D135850
Calling `getFunctionLinkage(CalleeInfo.getCalleeDecl())` will crash when the declaration does not have a body, e.g., `extern void foo();`. Instead, we can use `isExternallyVisible()` to see if the declaration has internal linkage.
I believe using `!isExternallyVisible()` is correct because the clang linkage must be `InternalLinkage` or `UniqueExternalLinkage`, both of which are "internal linkage" in LLVM.
9c26f51f5e/clang/include/clang/Basic/Linkage.h (L28-L40)
Fixes https://github.com/llvm/llvm-project/issues/54139
Reviewed By: tmsriram
Differential Revision: https://reviews.llvm.org/D135926
The main motivation for these additional targets is to cover the
differences in the instructions available between Thumb2 and Thumb1.
This shows up in these tests due to the lack of the following in
Thumb1:
- Multiply and Subtract instruction (mls) - used when calculating
a remainder.
- Unsigned Multiply Long instruction (umull) - used in certain
cases when optimising division with a constant.
Differential Revision: https://reviews.llvm.org/D135875
If lots of pending callbacks are added while the main loop has exited
already, the trigger pipe buffer fills up, causing the write to fail
and the related assertion to fire. To avoid this, add a boolean member
indicating whether the callbacks have already been triggered.
If the trigger was done, avoid writing to the pipe until the loop proceeds
to run the callbacks and resets the variable.
Besides fixing the issue, this also avoids writing to the pipe multiple
times if callbacks are added faster than the loop is able to process
them. Previously, this would lead to the loop performing multiple read
iterations from the pipe unnecessarily.
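A minimal, self-contained sketch of the "trigger at most once per batch" idea described above (the class and member names are assumptions for illustration, not the actual lldb MainLoop code):
```cpp
#include <functional>
#include <mutex>
#include <unistd.h>
#include <vector>

class PendingCallbacks {
public:
  explicit PendingCallbacks(int trigger_write_fd) : m_fd(trigger_write_fd) {}

  // Queue a callback and wake the loop, writing to the pipe at most once
  // per batch so the pipe buffer cannot fill up.
  void Add(std::function<void()> callback) {
    std::lock_guard<std::mutex> guard(m_mutex);
    m_callbacks.push_back(std::move(callback));
    if (!m_triggered) {
      m_triggered = true;
      char c = '.';
      (void)::write(m_fd, &c, 1); // subsequent Add() calls skip this write
    }
  }

  // Called by the loop when the trigger fires: run the whole batch and
  // allow the next Add() to write to the pipe again.
  void RunAll() {
    std::vector<std::function<void()>> batch;
    {
      std::lock_guard<std::mutex> guard(m_mutex);
      batch.swap(m_callbacks);
      m_triggered = false;
    }
    for (auto &callback : batch)
      callback();
  }

private:
  int m_fd;
  std::mutex m_mutex;
  bool m_triggered = false;
  std::vector<std::function<void()>> m_callbacks;
};
```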
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.llvm.org/D135516
This class adds helper functions similar to `emitError` for the
DiagnosedSilenceableFailure class in both the silenceable and definite
failure cases. These helpers simplify the use of said class and make
transform op application code idiomatic.
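Hypothetical usage inside a transform op's apply, just to convey the shape; the helper names, signatures, and conditions below are assumptions for illustration, not verified against the patch:
```cpp
DiagnosedSilenceableFailure
MyTransformOp::apply(transform::TransformResults &results,
                     transform::TransformState &state) {
  if (payloadIsEmpty)                       // hypothetical condition
    return emitSilenceableFailure(getLoc()) << "nothing to transform";
  if (payloadIsMalformed)                   // hypothetical condition
    return emitDefiniteFailure(getLoc()) << "malformed payload";
  return DiagnosedSilenceableFailure::success();
}
```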
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D136072
On the callee side, the value cannot be used directly. For example, the
dummy argument may be an lhs variable, or it may be passed to another
procedure as an actual argument.
Fix this by allocating temporary storage and storing the value there, then
mapping the symbol of the dummy argument to the `mlir::Value` of the temporary.
Reviewed By: jeanPerier
Differential Revision: https://reviews.llvm.org/D136009
Instead of checking that an operand is constant/opaque before calling getNode() and then checking that the result is a constant, just use FoldConstantArithmetic, which will early-out if the operands are not constant foldable.
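The shape of the change, as a sketch (illustrative fragment, not the exact diff; the "before" helper name in the comment is made up):
```cpp
// Before: only try the fold when both operands look constant already.
//   if (isConstantOrOpaque(N0) && isConstantOrOpaque(N1)) {   // hypothetical check
//     SDValue C = DAG.getNode(Opcode, DL, VT, N0, N1);
//     if (isa<ConstantSDNode>(C)) return C;
//   }
// After: FoldConstantArithmetic bails out early on non-foldable operands.
if (SDValue C = DAG.FoldConstantArithmetic(Opcode, DL, VT, {N0, N1}))
  return C;
```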
Using different helper functions for DAG nodes with different Opcodes allows specialization.
- 'isBitfieldExtractOp' [1] shows how specialization based on Opcode could catch more patterns.
- The refactor paves the way (e.g., makes the diff clearer) for enhancements in {D135844,D135850,D135852}.
[1] cbd8464595/llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp (L2163-L2202)
Differential Revision: https://reviews.llvm.org/D135843
`vector.contract` is being lowered to the default mul/add contraction
regardless of the kind indicated. Stop the lowering completely in
this case until the correct one can be implemented.
Reviewed By: springerm, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D136079
We can't assume that operand 0 is the negated operand because
the matcher handles "fsub -0.0, X" (and also +0.0 with FMF).
By capturing the extract within the match, we avoid the bug
and make the transform more robust (can't assume that this
pass will only see canonical IR).
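A sketch of capturing the extract inside the match (illustrative fragment using LLVM's PatternMatch helpers; assumes `using namespace llvm::PatternMatch;` and an `Instruction &I` in scope):
```cpp
// Capture the extractelement as part of the fneg match, so it does not
// matter which fsub/fneg operand carries the negation.
Value *Vec;
uint64_t Index;
if (match(&I, m_FNeg(m_ExtractElt(m_Value(Vec), m_ConstantInt(Index))))) {
  // ... transform using Vec and Index ...
}
```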