Commit Graph

464 Commits

Author SHA1 Message Date
zhongyunde b2b4c8721d [InstCombine] Make use of low zero bits to determine exact int->fp cast
According to the comment at https://reviews.llvm.org/D127854#inline-1226805,
we can also make use of the low zero bits: https://alive2.llvm.org/ce/z/GYxTRu
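
For illustration, a sketch added to this log entry (the constants are assumed,
not taken from the patch): enough trailing zero bits can make an int->fp cast
exact, which in turn lets an int->fp->int round trip fold away.

  %x = and i32 %a, -512         ; low 9 bits known zero
  %f = sitofp i32 %x to float   ; exact: the value needs at most 23 significant bits
  %r = fptosi float %f to i32   ; can be folded to %x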

Reviewed By: spatel, nikic, xbolva00

Differential Revision: https://reviews.llvm.org/D128895
2022-07-05 09:15:12 +08:00
zhongyunde 404479b4b0 [InstCombine] Use known bits to determine exact int->fp cast
Reviewed By: spatel, nikic

Differential Revision: https://reviews.llvm.org/D127854
2022-06-30 09:45:11 +08:00
Kazu Hirata 7a47ee51a1 [llvm] Don't use Optional::getValue (NFC) 2022-06-20 22:45:45 -07:00
Wael Yehia 0952cf5bbb [InstCombine] decomposeSimpleLinearExpr should bail out on negative operands.
InstCombine tries to rewrite

  %prod = mul nsw i64 %X,   Scale
  %acc = add nsw i64 %prod,   Offset
  %0 = alloca i8, i64 %acc, align 4
  %1 = bitcast i8* %0 to i32*
  Use ( %1 )

into

  %prod = mul nsw i64 %X,   Scale/4
  %acc = add nsw i64 %prod,   Offset/4
  %0 = alloca i32, i64 %acc, align 4
  Use (%0)

But it assumes Scale is unsigned and performs an unsigned division,
so we should bail out if Scale cannot safely be interpreted as unsigned.
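
A sketch of the failure mode (illustrative values, not taken from the patch):

  %prod = mul nsw i64 %X, -4
  %p = alloca i8, i64 %prod, align 4
  ; udiv(-4, 4) on i64 yields 4611686018427387903 rather than -1,
  ; so rewriting the alloca to element type i32 would be wrong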

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D126546
2022-06-08 00:57:25 +00:00
Chenbing Zheng ef256ed58e [InstCombine] bitcast (extractelement <1 x elt>, dest) -> bitcast(<1 x elt>, dest)
Only handle the case where the destination type is a vector, to avoid the inverse transform in visitBitCast.
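
For example (a sketch added here; the concrete types are assumed):

  %e = extractelement <1 x i32> %v, i64 0
  %b = bitcast i32 %e to <2 x i16>
  ; --> %b = bitcast <1 x i32> %v to <2 x i16>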

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D125951
2022-05-30 10:16:32 +08:00
Chenbing Zheng 41aab93afc [InstCombine] bitcast(logic(bitcast(X), bitcast(Y))) -> bitcast'(logic(bitcast'(X), Y))
This patch removes foldBitCastBitwiseLogic's restriction that the
destination must have an integer element type, and eliminates one
bitcast by doing the logic op in the type of the input that has an
integer element type.
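
A sketch of the intended shape (types chosen for illustration and assumed,
not taken from the patch):

  %bx = bitcast <4 x i16> %x to <2 x i32>
  %by = bitcast <2 x float> %y to <2 x i32>
  %and = and <2 x i32> %bx, %by
  %r = bitcast <2 x i32> %and to <2 x float>
  ; --> %by2 = bitcast <2 x float> %y to <4 x i16>
  ;     %and2 = and <4 x i16> %x, %by2
  ;     %r = bitcast <4 x i16> %and2 to <2 x float>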

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D126184
2022-05-26 10:23:44 +08:00
Chenbing Zheng 269e3f7369 [InstCombine] [NFC] Move transforms for truncated shifts into narrowBinOp
Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D126056
2022-05-25 10:21:39 +08:00
Chenbing Zheng cf348f6a2c [InstCombine] [NFC] Use a pattern matcher for ExtractElementInst
Reviewed By: RKSimon, rampitec

Differential Revision: https://reviews.llvm.org/D125857
2022-05-20 10:31:40 +08:00
Sanjay Patel f31d39c42c [InstCombine] remove cast-of-signbit to shift transform
The transform was wrong in 3 ways:

1. It created an extra instruction when the source and dest types don't match.
2. It did not account for an extra use of the icmp, so could create 2 extra insts.
3. It favored bit hacks over icmp (icmp generally has better analysis).

This fixes #54692 (modeled by the PhaseOrdering tests).

This is a minimal step to fix the bug, but we should likely invert
this and the sibling transform for the "is negative" pattern too.

The backend should be able to invert this back to a shift if that
leads to better codegen.

This is a reduced retry of 3794cc0e99, which was reverted because
it could cause infinite loops by conflicting with the related
transforms in this block that create shifts.
2022-05-17 11:10:28 -04:00
Nikita Popov a694546f7c [KnownBits] Add operator==
Checking whether two KnownBits are the same is somewhat common,
mainly in test code.

I don't think there is a lot of room for confusion with "determine
what the KnownBits for an icmp eq would be", as that has a
different result type (this is what the eq() method implements,
which returns Optional<bool>).

Differential Revision: https://reviews.llvm.org/D125692
2022-05-17 09:38:13 +02:00
Sanjay Patel 07d549bce9 Revert "[InstCombine] invert canonicalization for cast of signbit test"
This reverts commit 3794cc0e99.
This change is suspected of causing bots to hang at stage 2
compiles, so reverting to confirm and investigate.
2022-05-16 17:47:02 -04:00
Sanjay Patel 3794cc0e99 [InstCombine] invert canonicalization for cast of signbit test
The existing transform was wrong in 3 ways:
1. It created an extra instruction when the source and dest types don't match.
2. It did not account for an extra use of the icmp, so could create 2 extra insts.
3. It favored bit hacks over icmp (icmp generally has better analysis).

This fixes #54692 (modeled by the PhaseOrdering tests).

This is a minimal step to fix the bug, but we should likely invert
the sibling transform for the "is negative" pattern too.

The backend should be able to invert this back to a shift if that
leads to better codegen.
2022-05-16 12:55:52 -04:00
Sanjay Patel 8650f05c97 [InstCombine] fix miscompile when casting int->FP->int
As shown in https://github.com/llvm/llvm-project/issues/55150 -
the existing fold may be wrong when converting to a signed value.
This is a quick fix to avoid the miscompile.

I added tests/comments for all of the signed/unsigned combinations
at either side of the boundary width, and tried to confirm with Alive2:
https://alive2.llvm.org/ce/z/3p9DSu

There are already some TODO items in the test file that suggest
possible refinements, so the regression with ui->FP->si is probably ok.
It seems unlikely that we'd see these kind of edge cases with
non-byte-width integer types in real code. The potential miscompile
went undetected for several years.

Together with 747c6a0c73, this fixes #55150.

Differential Revision: https://reviews.llvm.org/D124692
2022-05-07 08:46:25 -04:00
Chenbing Zheng 8eaa1ef0d8 [InstCombine] add casts from splat-a-bit pattern if necessary
Splatting a bit at a constant index across a value:
sext (ashr (trunc iN X to iM), M-1) to iN --> ashr (shl X, N-M), N-1
If the dest type is different, use a cast (adjusting the use check).

https://alive2.llvm.org/ce/z/acAan3
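
For reference, the base pattern with N=32 and M=8 (a sketch added here; this
commit additionally inserts a cast when the dest width differs):

  %t = trunc i32 %X to i8
  %a = ashr i8 %t, 7
  %r = sext i8 %a to i32
  ; --> %s = shl i32 %X, 24
  ;     %r = ashr i32 %s, 31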

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D124590
2022-05-07 15:34:57 +08:00
Sanjay Patel 6631907ad2 [InstCombine] use isKnownNonNegative to reduce code duplication; NFC
We may be able to make the ValueTracking wrapper smarter
in the future (for example, analyze a simple recurrence),
so this will automatically benefit if that happens.
2022-04-25 17:13:29 -04:00
Craig Topper e3f6c2d288 [InstCombine] Don't look through bitcast from vector in collectInsertionElements.
We're making a recursive call here and everything in the function
assumes we're looking at scalars. This would be violated if we
looked through a bitcast from vectors.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D124015
2022-04-20 09:15:32 -07:00
serge-sans-paille 59630917d6 Cleanup includes: Transform/Scalar
Estimated impact on preprocessor output (lines):
before: 1062981579
after:  1062494547

Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D120817
2022-03-03 07:56:34 +01:00
Benjamin Kramer 85243124cf Tweak some uses of std::iota to skip initializing the underlying storage. NFCI. 2022-02-04 17:00:50 +01:00
Nikita Popov 73cd8e29ad [InstCombine] Skip PromoteCastOfAllocation() transform under opaque pointers
I think this can't be hit anyway (because a ptr-to-ptr bitcast would
get folded earlier), but in the interest of being explicit, skip
this transform for opaque pointers entirely.
2022-01-27 10:25:45 +01:00
Nikita Popov aa97bc116d [NFC] Remove uses of PointerType::getElementType()
Instead use either Type::getPointerElementType() or
Type::getNonOpaquePointerElementType().

This is part of D117885, in preparation for deprecating the API.
2022-01-25 09:44:52 +01:00
Matt Arsenault 286237962a InstCombine: Gracefully handle more allocas in the wrong address space
Officially this is currently required to always use the datalayout's
alloca address space. This may change in the future, and it's cleaner
to propagate the existing alloca's addrspace anyway.

This is a triple fix. Initially, the change in simplifyAllocaArraySize
would drop the address space but still produce output. Fixing this hit an
assertion in the cast combine.

This patch also makes the code that handled this situation in
a33e128012 dead, so eliminate
it. InstCombine should not take it upon itself to introduce
addrspacecasts; it should preserve the original address space instead.
2021-12-24 08:59:26 -05:00
Cullen Rhodes 0395e01583 [IR] Split vscale_range interface
Interface is split from:

  std::pair<unsigned, unsigned> getVScaleRangeArgs()

into separate functions for min/max:

  unsigned getVScaleRangeMin();
  Optional<unsigned> getVScaleRangeMax();

Reviewed By: sdesmalen, paulwalker-arm

Differential Revision: https://reviews.llvm.org/D114075
2021-12-07 10:38:26 +00:00
Srividya Karumuri 9e3e1aad31 [InstCombine] Allow fake vector insert folding to bit-logic only if the insert element is integer type
The commit below causes an assertion when the insert element type is not an
integer type, such as half. This is because the transformation creates a zext
before doing the bitwise OR, and zext is supported only for integer types:
80ab06c599
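
A sketch of the problematic shape (element types assumed for illustration):

  %v = bitcast i32 %x to <2 x half>
  %ins = insertelement <2 x half> %v, half %y, i64 0
  %r = bitcast <2 x half> %ins to i32
  ; the fold would need a zext of the half operand, which is invalid,
  ; so the transform must be skipped here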

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D114734
2021-11-30 13:54:52 -08:00
Kazu Hirata c714da2ceb [Transforms] Use {DenseSet,SetVector,SmallPtrSet}::contains (NFC) 2021-10-31 07:57:32 -07:00
Sanjay Patel 80ab06c599 [InstCombine] fold fake vector insert to bit-logic
bitcast (inselt (bitcast X), Y, 0) --> or (and X, MaskC), (zext Y)

https://alive2.llvm.org/ce/z/Ux-662
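
A concrete instance (a sketch added here, assuming a little-endian data layout):

  %v = bitcast i32 %x to <2 x i16>
  %ins = insertelement <2 x i16> %v, i16 %y, i64 0
  %r = bitcast <2 x i16> %ins to i32
  ; --> %hi = and i32 %x, -65536
  ;     %lo = zext i16 %y to i32
  ;     %r  = or i32 %hi, %lo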

Similar to D111082 / db231ebdb0 :
We want to avoid relatively opaque vector ops on types that are
likely supported by the backend as scalar integers. The bitwise
logic ops are more likely to allow further combining.

We probably want to generalize this to allow a shift too, but
that would oppose instcombine's general rule of not creating
extra instructions, so that's left as a potential follow-up.
Alternatively, we could do that transform in VectorCombine
with the help of the TTI cost model.

This is part of solving:
https://llvm.org/PR52057
2021-10-20 14:21:40 -04:00
Jay Foad a9bceb2b05 [APInt] Stop using soft-deprecated constructors and methods in llvm. NFC.
Stop using APInt constructors and methods that were soft-deprecated in
D109483. This fixes all the uses I found in llvm, except for the APInt
unit tests which should still test the deprecated methods.

Differential Revision: https://reviews.llvm.org/D110807
2021-10-04 08:57:44 +01:00
Alex Richardson ebb3dc0833 [InstCombine] Fold ptrtoint(gep i8 null, x) -> x
This commit is the InstCombine follow-up to the previous constant-folding
change that enables noticeable optimizations for CHERI-enabled targets.
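
For example (typed-pointer syntax of this era; sketch added here):

  %p = getelementptr i8, i8* null, i64 %x
  %r = ptrtoint i8* %p to i64
  ; --> %r is just %x (assuming the index width matches the pointer width)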

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D110247
2021-09-28 17:57:37 +01:00
David Sherwood c2634fc6ab [Analysis] Fix issues when querying vscale attributes on functions
There are several places in the code that are currently broken as
they assume an Instruction always has a parent Function when
attempting to get the vscale_range attribute. This patch adds checks
that an Instruction has a parent.

I've added a test for a parentless @llvm.vscale intrinsic call here:

  unittests/Analysis/ValueTrackingTest.cpp

Differential Revision: https://reviews.llvm.org/D110158
2021-09-24 09:58:10 +01:00
hyeongyu kim e5aaf03326 [InstCombine] Update InstCombine to use poison instead of undef for shufflevector's placeholder (1/3)
This patch is for fixing potential shufflevector-related bugs like D93818.
As in D93818, this patch changes shufflevector's default placeholder to poison.
To reduce risk, the work was divided into several patches; this one covers InstCombineCasts.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D110226
2021-09-22 23:18:51 +09:00
Sanjay Patel 3a126134d3 [InstCombine] remove casts from splat-a-bit pattern
https://alive2.llvm.org/ce/z/_AivbM

This case seems clear since we can reduce instruction count
and avoid an intermediate type change, but we might want to
use mask-and-compare for other sequences.

Currently, we can generate more instructions on some related
patterns by trying to use bit-hacks instead of mask+cmp, so
something is not behaving as expected.
2021-09-12 09:18:14 -04:00
Sanjay Patel 97a4e7b7ff [InstCombine] remove a buggy set of zext-icmp transforms
The motivating case is an infinite loop shown with a reduced test from:
https://llvm.org/PR51762

To solve this, I'm proposing we delete the most obviously broken part of this code.

The bug example shows a fundamental problem: we ask computeKnownBits if a transform
will be profitable, alter the code by creating new instructions, then rely on
computeKnownBits to return the same answer to actually eliminate instructions.

But there's no guarantee that the results will be the same between the 1st and 2nd
calls. In the infinite loop example, we get different answers, so we add
instructions that conflict with some other transform, and we're stuck.

There's at least one other problem visible in the test diff for
`@zext_or_masked_bit_test_uses`: the code doesn't check uses properly, so we can
end up with extra instructions created.

Last, it's not clear if this set of transforms actually improves analysis or
codegen. I spot-checked a few targets and don't see a clear win:
https://godbolt.org/z/x87EWovso

If we do see a regression from this change, codegen seems like the right place to
add a cmp -> bit-hack fold.

If this is too big of a step, we could limit the computeKnownBits calls by not
passing a context instruction and/or limiting the recursion. I checked that those
would stop the infinite loop for PR51762, but that won't guarantee that some other
example does not fall into the same loop.

Differential Revision: https://reviews.llvm.org/D109440
2021-09-09 08:49:39 -04:00
Dylan Fleming 4be7fb9762 [SVE] Add folds for truncation of vscale
Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D107453
2021-08-13 10:18:00 +01:00
Sander de Smalen fe6ae81ef3 [InstCombine] Fix vscale zext/sext optimization when vscale_range is unbounded.
According to the LangRef, a (vscale_range) value of 0 means unbounded.

This patch additionally cleans up the test file vscale_sext_and_zext.ll.
2021-08-04 17:17:37 +01:00
Dylan Fleming a7a39ec886 [SVE] Add folds for sign and zero extends of vscale
Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D105994
2021-07-30 16:02:50 +01:00
Datta Nagraj ad0085d338 [InstCombine] Eliminate casts to optimize ctlz operation
If a ctlz operation is performed on a wider type and its result is then
cast back down, it can be optimized into a ctlz on the narrower type plus
an add of the bit-width difference, producing the same output:

https://alive2.llvm.org/ce/z/8uup9M
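
A sketch with assumed widths (an i16 value zero-extended to i32):

  %z = zext i16 %x to i32
  %c = call i32 @llvm.ctlz.i32(i32 %z, i1 false)
  %t = trunc i32 %c to i16
  ; --> %c16 = call i16 @llvm.ctlz.i16(i16 %x, i1 false)
  ;     %t   = add i16 %c16, 16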

The original problem is shown in
https://llvm.org/PR50173

Differential Revision: https://reviews.llvm.org/D103788
2021-06-23 11:19:12 -04:00
Nikita Popov e790d3667e [OpaquePtr] Handle addrspacecasts in InstCombine
This adds support for addrspace casts involving opaque pointers to
InstCombine, as well as the isEliminableCastPair() helper
(otherwise the assertion failure would just move there).

Add PointerType::hasSameElementTypeAs() to hide the element type
details.

Differential Revision: https://reviews.llvm.org/D104668
2021-06-22 17:45:30 +02:00
Nikita Popov 39796e1ad0 Reapply [InstCombine] Don't try converting opaque pointer bitcast to GEP
Reapplied without changes -- this was reverted together with an
underlying patch.

-----

Bitcasts having an opaque pointer source or result type cannot be
converted into a zero-index GEP, because GEP source and result types
always have the same opaque-ness.
2021-06-21 22:15:56 +02:00
Nikita Popov e2c2124a4b Reapply [InstCombine] Extract bitcast -> gep transform
Relative to the original patch, an InstCombine test has been
added to show a previously missed pattern, and the Coroutine
test that resulted in the revert has been regenerated.

-----

Move this into a separate function, to make sure that early
returns do not accidentally skip other transforms. This previously
happened for the isSized() check, which skipped folds like
distributing a bitcast over a select.
2021-06-21 22:03:15 +02:00
Nikita Popov 6922ab73a5 Revert "[InstCombine] Extract bitcast -> gep transform"
This reverts commit d9f5d7b959.
This reverts commit 5780611d7e.

This causes a failure in Coroutine tests.
2021-06-21 21:34:17 +02:00
Nikita Popov 5780611d7e [InstCombine] Don't try converting opaque pointer bitcast to GEP
Bitcasts having an opaque pointer source or result type cannot be
converted into a zero-index GEP, because GEP source and result types
always have the same opaque-ness.
2021-06-21 21:24:50 +02:00
Nikita Popov d9f5d7b959 [InstCombine] Extract bitcast -> gep transform
Move this into a separate function, to make sure that early
returns do not accidentally skip other transforms. There is
already one isSized() check that could run into this issue,
thus this change is not strictly NFC.
2021-06-21 21:24:50 +02:00
Nikita Popov a969bdc56f [InstCombine] Remove unnecessary address space check (NFC)
It's not possible to bitcast between different address spaces,
and this is ensured by the IR verifier. As such, this bitcast to
addrspacecast canonicalization can never be hit.
2021-06-21 20:11:39 +02:00
Juneyoung Lee ce192ced2b [InstCombine] Use poison constant to represent the result of unreachable instrs
This patch updates InstCombine to use poison constant to represent the resulting value of (either semantically or syntactically) unreachable instrs, or a don't-care value of an unreachable store instruction.

This allows more aggressive folding of unused results, as shown in llvm/test/Transforms/InstCombine/getelementptr.ll .

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D104602
2021-06-21 09:58:44 +09:00
Guozhi Wei 575ba6f425 [InstCombine] Don't transform code if DoTransform is false
Patch https://reviews.llvm.org/D72396 did not check DoTransform before transforming the code, and generated a wrong result for the attached test case.

Differential Revision: https://reviews.llvm.org/D104567
2021-06-18 18:01:34 -07:00
Sanjay Patel 23a116c8c4 [InstCombine] convert lshr to ashr to eliminate cast op
This is similar to b865eead76 ( D103617 ) and fixes:
https://llvm.org/PR50575

41b71f718b did this and more (noted with TODO
comments in the tests), but it didn't handle the case
where the destination is narrower than the source, so
it got reverted.

This is a simple match-and-replace. If there's evidence
that the TODO cases are useful, we can revisit/extend.
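
As an illustrative guess at the shape of the match (assumed here; the precise
patterns are in the tests referenced above), converting the lshr to an ashr on
the narrow type lets the sext/trunc pair fold away:

  %w = sext i8 %x to i32
  %sh = lshr i32 %w, 3
  %r = trunc i32 %sh to i8
  ; --> %r = ashr i8 %x, 3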
2021-06-04 07:04:37 -04:00
Sanjay Patel b865eead76 [InstCombine] eliminate sext and/or trunc if value has enough signbits
If we have enough signbits in a source value, we can skip an
intermediate cast for a trunc+sext pair:
https://alive2.llvm.org/ce/z/A_mQt-
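
For example (a sketch added here; the sign-bit count is an assumption):

  ; %x is known to have at least 17 sign bits
  %t = trunc i32 %x to i16
  %s = sext i16 %t to i32
  ; --> %s can be replaced by %x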

This is the original problem shown in:
https://llvm.org/PR49543

There's a test that shows we transformed what used to be
a pair of shifts, so that suggests we could add another
ComputeNumSignBits fold starting from a shift.

There does not appear to be any change in compile-time
from the extra analysis:
https://llvm-compile-time-tracker.com/compare.php?from=3d2c9069dcafd0cbb641841aa3dd6e851fb7d760&to=b9513cdf2419704c7bb0c3a02a9ca06aae13d902&stat=instructions

Differential Revision: https://reviews.llvm.org/D103617
2021-06-03 13:58:19 -04:00
Juneyoung Lee 7161bb87c9 [InstCombine] Fix a few remaining vec transforms to use poison instead of undef
This is a patch that replaces shufflevector's and insertelement's placeholder value with poison.

The underlying motivation is to fix the semantics of shufflevector with an undef mask to return poison instead (D93818).
Consensus was reached in late 2020 via the mailing list as well as the thread in https://bugs.llvm.org/show_bug.cgi?id=44185.

This patch is a simple syntactic change to the existing code, hence directly pushed as a commit.
2021-05-31 18:47:09 +09:00
Sanjay Patel c7da0c383a [InstCombine] fold zext of masked bit set/clear
This does not solve PR17101, but it is one of the
underlying diffs noted here:
https://bugs.llvm.org/show_bug.cgi?id=17101#c8

We could ease the one-use checks for the 'clear'
(no 'not' op) half of the transform, but I do not
know if that asymmetry would make things better
or worse.

Proofs:
https://rise4fun.com/Alive/uVB

Name: masked bit set
%sh1 = shl i32 1, %y
%and = and i32 %sh1, %x
%cmp = icmp ne i32 %and, 0
%r = zext i1 %cmp to i32
=>
%s = lshr i32 %x, %y
%r = and i32 %s, 1

Name: masked bit clear
%sh1 = shl i32 1, %y
%and = and i32 %sh1, %x
%cmp = icmp eq i32 %and, 0
%r = zext i1 %cmp to i32
=>
%xn = xor i32 %x, -1
%s = lshr i32 %xn, %y
%r = and i32 %s, 1

Note: this is a re-post of a patch that I committed at:
rGa041c4ec6f7a

The commit was reverted because it exposed another bug:
rGb212eb7159b40

But that has since been corrected with:
rG8a156d1c2795189 ( D101191 )

Differential Revision: https://reviews.llvm.org/D72396
2021-05-29 08:52:26 -04:00
Sanjay Patel 52f2970036 [InstCombine] reduce code duplication; NFC 2021-05-29 08:33:25 -04:00
Sanjay Patel 0bab0f6161 [InstCombine] canonicalize cast before unary shuffle
We could go either direction on this transform. VectorCombine already goes this
way for bitcasts (and handles more complicated cases using the cost model), so
let's try cast-first.
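
At the IR level the canonicalization looks roughly like this (a sketch added
here; the types and mask are assumed):

  %s = shufflevector <4 x i32> %x, <4 x i32> poison, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  %c = sitofp <4 x i32> %s to <4 x float>
  ; --> %cx = sitofp <4 x i32> %x to <4 x float>
  ;     %c  = shufflevector <4 x float> %cx, <4 x float> poison, <4 x i32> <i32 3, i32 2, i32 1, i32 0>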

Deferring completely to VectorCombine is another possibility. But the backend
should be able to invert this easily when the vectors have the same shape, so
it doesn't seem like a transform that we need to avoid.

The motivating example from https://llvm.org/PR49081 has an int-to-float
sandwiched between 2 shuffles, and the backend currently does not reduce that,
so on x86, we get something like:

  pshufd	$249, %xmm0, %xmm0
  cvtdq2ps	%xmm0, %xmm0
  shufps	$144, %xmm0, %xmm0

...instead of just a single conversion instruction.

Differential Revision: https://reviews.llvm.org/D103038
2021-05-25 08:43:09 -04:00