This patch replaces calls to GreatestCommonDivisor64 with std::gcd
where both arguments are known to be of unsigned types no larger than
64 bits in size.
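For illustration, a minimal standalone example of the replacement (the variable names are made up):
```
// Illustrative only: std::gcd from <numeric> computes the same result as
// GreatestCommonDivisor64 for unsigned operands of at most 64 bits.
#include <cstdint>
#include <iostream>
#include <numeric>

int main() {
  uint64_t A = 24, B = 36;
  // Previously: llvm::GreatestCommonDivisor64(A, B);
  uint64_t G = std::gcd(A, B); // 12
  std::cout << G << '\n';
  return 0;
}
```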
The initial implementation had too weak requirements on positive/negative
range crossings. Not crossing zero with nuw is not enough, for two reasons:
- If ArLHS has a negative step, it may turn from positive to negative
without crossing the 0 boundary from left to right (and crossing from
right to left doesn't count for unsigned);
- If ArLHS crosses the SINT_MAX boundary, it still turns from positive
to negative (see the example below);
In fact, we require that ArLHS always stays non-negative or always stays
negative, which can be enforced by the following set of preconditions:
- both nuw and nsw;
- positive step (looks liftable);
Because of the positive step, a boundary crossing is only possible from the
left part to the right part, and because of the no-wrap flags, it is
guaranteed never to happen.
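For illustration, a small standalone example of the second reason above (the concrete values are made up, using an 8-bit domain for brevity):
```
// Illustrative only: an AddRec-like sequence with nuw can still flip from
// signed-positive to signed-negative by crossing SINT_MAX.
#include <cstdint>
#include <iostream>

int main() {
  uint8_t IV = 126; // think of {126,+,1}<nuw> in an 8-bit domain
  for (int I = 0; I < 3; ++I, ++IV)
    std::cout << unsigned(IV) << " as signed: " << int(int8_t(IV)) << '\n';
  // Prints 126/126, 127/127, 128/-128: no unsigned wrap occurs, yet the
  // value crosses SINT_MAX and turns negative in the signed interpretation.
  return 0;
}
```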
This reverts commit 354fa0b480.
Returning as is. The patch was reverted due to a miscompile, but
this patch is not causing it. This patch made it possible to infer
some nuw flags in code guarded by a `false` condition, and then someone
else managed to propagate the flag from the dead code outside.
Returning the patch to be able to reproduce the issue.
This reverts commit 34ae308c73.
Our internal testing found a miscompile. It is not clear whether it is caused
by this patch or whether the patch revealed something else. Reverting while
investigating.
Contextual knowledge may be used to prove invariance of some conditions.
For example, in this case:
```
; %len >= 0
guard(%iv = {start,+,1}<nuw> <s %len)
guard(%iv = {start,+,1}<nuw> <u %len)
```
the 2nd check always fails if `start` is negative and always passes otherwise.
It looks like there are more opportunities of this kind that are still to be
implemented in the future.
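For illustration, a standalone sketch of why the unsigned check behaves this way (the values are made up):
```
// Illustrative only: a negative signed value reinterpreted as unsigned is
// huge, so `iv <u len` cannot hold for a non-negative len, while `iv <s len`
// still can.
#include <cstdint>
#include <iostream>

int main() {
  int64_t IV = -5, Len = 100;               // negative start, len >= 0
  bool SLT = IV < Len;                      // signed:   -5 <s 100 -> true
  bool ULT = uint64_t(IV) < uint64_t(Len);  // unsigned: huge <u 100 -> false
  std::cout << SLT << ' ' << ULT << '\n';   // prints "1 0"
  return 0;
}
```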
Differential Revision: https://reviews.llvm.org/D129753
Reviewed By: apilipenko
Unfortunately, this overflow is extremely hard to reproduce reliably (in fact, I was unable to do so). The issue is that:
- getOperandsToCreate sometimes skips creating an SCEV for the LHS
- then, createSCEV is called for the BinaryOp
- ... which calls getNoWrapFlagsFromUB
- ... which under certain circumstances calls isSCEVExprNeverPoison
- ... which under certain circumstances requires the SCEVs of all operands
For certain deep dependency trees, this causes a stack overflow.
Reviewed By: bkramer, fhahn
Differential Revision: https://reviews.llvm.org/D129745
Sometimes SCEV cannot infer nuw/nsw from something as simple as
```
len in [0, MAX_INT]
...
iv = phi(0, iv.next)
guard(iv <s len)
guard(iv <u len)
iv.next = iv + 1
```
just because flag strengthening only relies on the definition and does not use local facts.
This patch adds support for the simplest case: inference of flags of `add(x, constant)`
if we can contextually prove that `x <= max_int - constant`.
In case it has a negative compile-time impact, we can add an option to switch it
off. I wouldn't expect that, though.
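For illustration, a minimal sketch of the arithmetic fact being used (this is not the SCEV code itself; the helper name is made up):
```
// `x + c` cannot wrap in 64-bit unsigned arithmetic exactly when
// x <= UINT64_MAX - c, which is what the contextual facts above establish.
#include <cstdint>
#include <limits>

bool addHasNUW(uint64_t X, uint64_t C) {
  return X <= std::numeric_limits<uint64_t>::max() - C;
}

int main() {
  // E.g. iv <u len together with len <= SIGNED_MAX implies iv + 1 cannot wrap.
  return addHasNUW(/*X=*/41, /*C=*/1) ? 0 : 1;
}
```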
Differential Revision: https://reviews.llvm.org/D129643
Reviewed By: apilipenko
At the moment, proveNoSignedWrapViaInduction may be called for the
same AddRec a large number of times via getSignExtendExpr. This can have
a severe compile-time impact for very loop-heavy code.
If proveNoSignedWrapViaInduction failed to prove NSW the first time,
it is unlikely to succeed on subsequent tries and the cost doesn't seem
to be justified.
This is the signed version of 8daa338297 / D130648.
This can drastically improve compile-time in some excessive cases and
also has a slightly positive compile-time impact on CTMark:
NewPM-O3: -0.06%
NewPM-ReleaseThinLTO: -0.04%
NewPM-ReleaseLTO-g: -0.04%
https://llvm-compile-time-tracker.com/compare.php?from=8daa338297d533db4d1ae8d3770613eb25c29688&to=aed126a196e7a5a9803543d9b4d6bdb233d0009c&stat=instructions
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D130694
At the moment, proveNoUnsignedWrapViaInduction may be called for the
same AddRec a large number of times via getZeroExtendExpr. This can have
a severe compile-time impact for very loop-heavy code. On one
particular workload, LSR takes ~51s without this patch, almost
exclusively in proveNoUnsignedWrapViaInduction. With this patch, the time
in LSR drops to ~0.4s.
If proveNoUnsignedWrapViaInduction failed to prove NUW the first time,
it is unlikely to succeed on subsequent tries and the cost doesn't seem
to be justified.
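For illustration, a hypothetical sketch of the memoization idea (the type, member, and function names are illustrative, not the actual ScalarEvolution code):
```
// Remember AddRecs for which the expensive proof was already attempted and
// bail out early on repeated queries.
#include <unordered_set>

struct AddRec; // stand-in for the AddRec expression type

struct WrapProver {
  std::unordered_set<const AddRec *> UnsignedWrapViaInductionTried;

  bool proveNoUnsignedWrapViaInduction(const AddRec *AR) {
    // insert().second is false if AR was already tried; skip the costly
    // induction proof in that case, since it is unlikely to succeed now.
    if (!UnsignedWrapViaInductionTried.insert(AR).second)
      return false;
    return expensiveInductionProof(AR);
  }

  // Placeholder for the actual induction-based proof.
  bool expensiveInductionProof(const AddRec *) { return false; }
};
```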
Besides drastically improving compile-time in some excessive cases, this
also has a slightly positive compile-time impact on CTMark:
NewPM-O3: -0.07%
NewPM-ReleaseThinLTO: -0.08%
NewPM-ReleaseLTO-g: -0.06%
https://llvm-compile-time-tracker.com/compare.php?from=b435da027d7774c24cdb8c88d09f6b771e07fb14&to=f2729e33e8284b502f6c35a43345272252f35d12&stat=instructions
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D130648
Handle guards uniformly with assumes, rather than iterating through all
block instructions in an attempt to find them.
Differential Revision: https://reviews.llvm.org/D129874
Reviewed By: nikic
In fact, in unreached code we can say that every fact is true, so do not waste
time trying to do something smarter.
Formally it's not an NFC because it may change query results in unreached code, but they
won't have any impact on execution.
A hypothetical compile-time boost is expected, but it was not measured in practice.
Differential Revision: https://reviews.llvm.org/D129878
Explicitly list all binops rather than having a default case. There
were two bugs here:
1. U->getOpcode() was used instead of BO->Opcode, which means we
used the logic for the wrong opcode in some cases.
2. SCEV construction does not support LShr. We should return
unknown for it rather than recursing into the operands.
After 675080a453, we always create SCEVs for all operands of a
SelectInst. This can cause notable compile-time regressions compared to
the recursive algorithm, which only evaluates the operands if the select
is in a form for which we can create a usable expression.
This approach adds additional logic to getOperandsToCreate to only
queue operands for selects if we will later be able to construct a
usable SCEV.
Unfortunately this introduces a bit of coupling between actual SCEV
construction for selects and getOperandsToCreate, but I am not sure if
there are better alternatives to address the regression mentioned for
675080a453.
This doesn't have any notable compile-time impact on CTMark.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D129731
Motivation here is to unblock LSR's ability to use ICmpZero uses - the major effect of which is to enable count-down IVs. The test changes reflect this goal, but the potential impact is much broader since this isn't a change in LSR at all.
SCEVExpander needs(*) to prove that expanding the expression is safe anywhere the SCEV expression is valid. In general, we can't expand any node which might fault (or exhibit UB) unless we can either a) prove it won't fault, or b) guard the faulting case. We'd been allowing non-zero constants here; this change extends it to non-zero values.
vscale is never zero. This is already implemented in ValueTracking, and this change just adds the same logic in SCEV's range computation (which in turn drives isKnownNonZero). We should common up some logic here, but let's do that in separate changes.
(*) As an aside, "needs" is such an interesting word here. First, we don't actually need to guard this at all; we could choose to emit a select for the RHS of every udiv and remove this code entirely. Secondly, the property being checked here is way too strong. What the client actually needs is to expand the SCEV at some particular point in some particular loop. In the examples, the original urem dominates that loop and yet we completely ignore that information when analyzing legality. I don't plan to actively pursue either direction, just noting it for future reference.
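For illustration, a minimal sketch of the "guard the faulting case" alternative mentioned in the aside (the function is made up, not part of this patch):
```
// Instead of proving the divisor non-zero, substitute 1 when it is zero so
// the expanded division can never trap. Mirrors:
//   %is0  = icmp eq i64 %den, 0
//   %safe = select i1 %is0, i64 1, i64 %den
//   %q    = udiv i64 %num, %safe
#include <cstdint>

uint64_t expandGuardedUDiv(uint64_t Num, uint64_t Den) {
  uint64_t SafeDen = (Den == 0) ? 1 : Den;
  return Num / SafeDen;
}
```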
Differential Revision: https://reviews.llvm.org/D129710
This removes creation of udiv/sdiv/urem/srem constant expressions,
in preparation for their removal. I've added a
ConstantExpr::isDesirableBinOp() predicate to determine whether
an expression should be created for a certain operator.
With this patch, div/rem expressions can still be created through
explicit IR/bitcode, forbidding them entirely will be the next step.
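For illustration, a hedged sketch of how a caller might consult the new predicate (the surrounding fold helper is hypothetical; only ConstantExpr::isDesirableBinOp() is from this patch):
```
#include "llvm/IR/Constants.h"

using namespace llvm;

// Only build a constant expression for opcodes that are still considered
// desirable; div/rem opcodes now fall out of that set.
Constant *foldBinOpOrNull(unsigned Opcode, Constant *LHS, Constant *RHS) {
  if (!ConstantExpr::isDesirableBinOp(Opcode))
    return nullptr; // leave it to the caller, e.g. emit an instruction instead
  return ConstantExpr::get(Opcode, LHS, RHS);
}
```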
Differential Revision: https://reviews.llvm.org/D128820
When trying to prove an implied condition on a phi by proving it
for all incoming values, we need to be careful about values coming
from a backedge, as these may refer to a previous loop iteration.
A variant of this issue was fixed in D101829, but the dominance
condition used there isn't quite right: It checks that the value
dominates the incoming block, which doesn't exclude backedges
(values defined in a loop will usually dominate the loop latch,
which is the incoming block of the backedge).
Instead, we should be checking for domination of the phi block.
Any values defined inside the loop will not dominate the loop
header phi.
Fixes https://github.com/llvm/llvm-project/issues/56242.
Differential Revision: https://reviews.llvm.org/D128640
Nowadays, we do not allow pointers in multiplies, and adds can only
have a single pointer, which is also guaranteed to be last by
complexity sorting. As such, we can somewhat simplify the treatment
of pointer types.
This allows all constant folding to happen through a single
function, without requiring special handling for loads at each
call-site.
This may not be NFC because some callers currently don't do that
special handling.
Support compares in ConstantFoldInstOperands(), instead of
forcing the use of ConstantFoldCompareInstOperands(). Also handle
insertvalue (extractvalue was already handled).
This removes a footgun, where many uses of ConstantFoldInstOperands()
need a separate check for compares beforehand. It's particularly
insidious if called on a constant expression, because it doesn't
fail in that case, but will just not do DL-dependent folding.
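For illustration, a hedged sketch of the call-site pattern this removes (the wrapper function is hypothetical; the folding APIs are the ones from llvm/Analysis/ConstantFolding.h):
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Analysis/ConstantFolding.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

Constant *foldOperands(Instruction *I, ArrayRef<Constant *> Ops,
                       const DataLayout &DL) {
  // Before this change, callers needed a special case to get DL-aware
  // folding for compares, roughly:
  //   if (auto *Cmp = dyn_cast<CmpInst>(I))
  //     return ConstantFoldCompareInstOperands(Cmp->getPredicate(), Ops[0],
  //                                            Ops[1], DL);
  // Now compares (and insertvalue) are handled by the generic entry point.
  return ConstantFoldInstOperands(I, Ops, DL);
}
```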
This patch updates SCEV construction to work iteratively instead of recursively
in most cases. It resolves stack overflow issues when trying to construct SCEVs
for certain inputs, e.g. PR45201.
The basic approach is to use a worklist to queue operands of V which
need to be created before V. To do so, the current patch adds a
getOperandsToCreate function which collects the operands SCEV
construction depends on for a given value. This is a slight duplication
with createSCEV.
At the moment, SCEVs for phis are still created recursively.
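For illustration, a simplified, hypothetical sketch of the iterative scheme (none of these helper names are the actual ScalarEvolution entry points):
```
#include <vector>

struct Value;
struct SCEV;

SCEV *lookupExisting(Value *V);    // null if no SCEV has been created yet
// Collects only operands whose SCEVs have not been created yet
// (analogous to getOperandsToCreate).
void collectOperandsToCreate(Value *V, std::vector<Value *> &Ops);
SCEV *createFromReadyOperands(Value *V); // analogous to createSCEV

SCEV *getSCEVIterative(Value *Root) {
  std::vector<Value *> Worklist{Root};
  while (!Worklist.empty()) {
    Value *V = Worklist.back();
    if (lookupExisting(V)) { // already constructed, nothing to do
      Worklist.pop_back();
      continue;
    }
    std::vector<Value *> Ops;
    collectOperandsToCreate(V, Ops);
    if (Ops.empty()) {       // all operands ready: build V's SCEV now
      createFromReadyOperands(V);
      Worklist.pop_back();
    } else {                 // create the missing operands first
      Worklist.insert(Worklist.end(), Ops.begin(), Ops.end());
    }
  }
  return lookupExisting(Root);
}
```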
Fixes #32078, #42594, #44546, #49293, #49599, #55333, #55511
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D114650
Use poison instead of undef for SCEVUnknown of unreachable values.
This should be in line with the movement to replace undef with poison
when possible.
Suggested in D114650.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D128586
This reverts commit 1fbdbb5595.
All known issues surfaced by this patch should have been fixed now.
The fixes included fixing issues with SCEV expansion in LV and DA's
reliance on LCSSA phis.
Clang-format InstructionSimplify and convert all "FunctionName"s to
"functionName". This patch does touch a lot of files but gets done with
the cleanup of InstructionSimplify in one commit.
This is the alternative to the less invasive clang-format only patch: D126783
Reviewed By: spatel, rengolin
Differential Revision: https://reviews.llvm.org/D126889
Also collect conditions from assume up-front in applyLoopGuards.
This allows re-using the logic to handle logical ANDs as assume
conditions.
It should pave the way for a fix for #55645.
When computing the BECount for multi-exit loops, we need to combine
individual exit counts using umin_seq rather than umin. This is
because an earlier exit may exit on the first iteration, in which
case later exit expressions will not be evaluated and could be
poisonous. We cannot propagate potential poison values from later
exits.
In particular, this avoids the introduction of "branch on poison"
UB when optimizing multi-exit loops.
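For illustration, a scalar sketch of the difference between the two operators (this is the semantics, not the SCEV API):
```
// umin_seq only "looks at" its second operand when the first is non-zero,
// so poison in a later exit count does not leak into the result when an
// earlier exit is taken on the first iteration.
#include <algorithm>
#include <cstdint>

uint64_t umin(uint64_t A, uint64_t B) { return std::min(A, B); }

uint64_t umin_seq(uint64_t A, uint64_t B) {
  return A == 0 ? 0 : umin(A, B); // B is irrelevant when A == 0
}
```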
Differential Revision: https://reviews.llvm.org/D124910
Most clients only used these methods because they wanted to be able to
extend or truncate to the same bit width (which is a no-op). Now that
the standard zext, sext and trunc allow this, there is no reason to use
the OrSelf versions.
The OrSelf versions additionally have the strange behaviour of allowing
extending to a *smaller* width, or truncating to a *larger* width, which
are also treated as no-ops. A small amount of client code relied on this
(ConstantRange::castOp and MicrosoftCXXNameMangler::mangleNumber) and
needed rewriting.
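For illustration, a hedged example of the now-allowed no-op case (the helper is made up):
```
#include "llvm/ADT/APInt.h"

using namespace llvm;

APInt widenTo(const APInt &X, unsigned BitWidth) {
  // Previously this had to be X.zextOrSelf(BitWidth) to tolerate
  // BitWidth == X.getBitWidth(); the plain zext() now accepts that no-op.
  return X.zext(BitWidth);
}
```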
Differential Revision: https://reviews.llvm.org/D125557
Evaluation ordering of function call arguments is implementation-dependent.
In fact, gcc evaluates arguments bottom-to-top and clang top-to-bottom.
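For illustration, a minimal standalone example of the pitfall (the functions are made up):
```
#include <iostream>

int next(int &Counter) { return ++Counter; }

void use(int A, int B) { std::cout << A << ' ' << B << '\n'; }

int main() {
  int Counter = 0;
  // GCC tends to evaluate the arguments right-to-left and Clang
  // left-to-right, so this may print "1 2" or "2 1"; both are conforming.
  use(next(Counter), next(Counter));
  return 0;
}
```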
Fixes #55283 partially.
Part of https://reviews.llvm.org/D125627
Fold %x umin_seq %y to %x if %x ule %y. This also subsumes the
special handling for constant operands: if %y is constant, this
folds to umin via implied-poison reasoning, and if %x is constant,
then either %x is non-zero and it folds to umin, or it is known
zero, in which case it is ule anything.