TensorCopyInsertion should not have been exposed as a pass; this was a flaw in the original design. It is a preparation step for bufferization, and certain transforms that would otherwise be legal are illegal between TensorCopyInsertion and the actual rewrite to MemRef ops. Therefore, even if broken down into two separate steps internally, they should be exposed as a single pass.
This change affects the sparse compiler, which uses `TensorCopyInsertionPass`. A new `SparsificationAndBufferizationPass` is added to replace all passes in the sparse tensor pipeline from `TensorCopyInsertionPass` up to the actual bufferization (the rewrite to memref/non-tensor ops). It is generally unsafe to run arbitrary passes in between, in particular passes that hoist tensor ops out of loops or change SSA use-def chains along tensor ops.
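A minimal sketch of the resulting pipeline setup (the pass-creation function name is assumed from the pass name above; the exact signature and options may differ):
```
#include "mlir/Dialect/SparseTensor/Transforms/Passes.h"
#include "mlir/Pass/PassManager.h"

// Hypothetical sketch: one combined pass replaces the former
// TensorCopyInsertionPass-through-bufferization sequence, so no unsafe
// tensor-level passes can be scheduled in between.
void buildSparsePipeline(mlir::OpPassManager &pm) {
  pm.addPass(mlir::createSparsificationAndBufferizationPass());
}
```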
Differential Revision: https://reviews.llvm.org/D138915
The output operands will be added to the input operands if the generic op (on tensors)
becomes an elementwise operation. The outputs of the generic op stay the same.
They will be cleaned up by the ReplaceWithEmptyTensorIfUnused pattern.
This is https://reviews.llvm.org/D138251, plus a cmake dep fix.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D138843
This allows writing simple e2e tests where we can check for the proper materialization
of specific LLVM IR (e.g. `llvm.intr.fmuladd`).
Differential Revision: https://reviews.llvm.org/D138776
This op significantly improves use of the transform dialect with vector abstractions.
It also brings us closer to writing simple end-to-end unit tests that guard against subtle regressions in how patterns combine.
Differential Revision: https://reviews.llvm.org/D138723
The output operands will be added to the input operands if the generic op (on tensors)
becomes an elementwise operation. The outputs of the generic op stay the same.
They will be cleaned up by the ReplaceWithEmptyTensorIfUnused pattern.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D138251
This brings back previous SIMD functionality, but in a separate pass.
The idea is to improve this new pass incrementally, going beyond for-loops
to while-loops for co-iteration as well (masking), while introducing new
abstractions to make the lowering more progressive. The separation of
sparsification and vectorization is a very good first step on this journey.
Also brings back ArmSVE support
Still to be fine-tuned:
+ use of "index" in SIMD loop (viz. a[i] = i)
+ check that all ops really have SIMD support
+ check all forms of reductions
+ chain reduction SIMD values
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D138236
This patch adds bazel tests for llvm-libc.
Some math tests rely on the `mpfr` library. This is controlled via the `--@llvm-project//libc:libc_math_mpfr` flag. It can take three values:
- `external` (default) will build `mpfr` and `gmp` from source.
- `system` will use the system installed `mpfr` library.
- `disable` will skip tests relying on `mpfr`.
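For example, to run the math tests against a system-installed `mpfr` (the test target pattern below is illustrative):
bazel test --@llvm-project//libc:libc_math_mpfr=system @llvm-project//libc/...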
Reviewed By: sivachandra, GMNGeoffrey
Differential Revision: https://reviews.llvm.org/D119547
Enable using -module-summary with -S
(similar to what can currently be achieved with opt <input> -o - | llvm-dis).
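For example, the summarized module can now be emitted in textual form directly:
opt -module-summary -S <input> -o <output>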
This is a recommit of ef9e62469.
Test plan: ninja check-all
Differential revision: https://reviews.llvm.org/D137768
This diff splits the IR printing passes out of LLVMCore into a new IRPrinter library.
This structure is similar to what we already have for IRReader and
enables us to avoid circular dependencies between LLVMCore and Analysis
(this is a preparation for https://reviews.llvm.org/D137768).
The legacy interface is left unchanged; once the legacy pass manager
is removed (in the future), we will be able to clean it up further.
The bazel build configuration has been updated as well.
Test plan:
1/ Tested the following cmake configurations: static/dynamic linking * lld/gold * clang/gcc
2/ bazel build --config=generic_clang @llvm-project//...
Differential revision: https://reviews.llvm.org/D138081
@GMNGeoffrey let me know if there's a better way to import MPFR and GMP for the purpose of testing libc math functions.
Differential Revision: https://reviews.llvm.org/D119547
Different platforms use different signedness for `StridedMemRefType::sizes` and `std::vector::size_type`, which has been causing a lot of portability issues in the form of `-Wsign-compare` warnings. These new functions ensure that we never need to worry about those signedness warnings again.
Also merges CheckedMul.h into ArithmeticUtils.h.
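A sketch of the kind of helper these utilities provide (names and signatures here are illustrative, not the actual ArithmeticUtils API):
```
#include <cstdint>
#include <vector>

// Illustrative only: compare a (possibly signed) memref size against an
// unsigned std::vector size without tripping -Wsign-compare. A negative
// size can never match an unsigned count.
template <typename T>
bool sizesMatch(int64_t memrefSize, const std::vector<T> &v) {
  return memrefSize >= 0 &&
         static_cast<uint64_t>(memrefSize) == static_cast<uint64_t>(v.size());
}
```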
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138149
The methods in `SideEffectUtils.h` (and their implementations in
`SideEffectUtils.cpp`) seem to have similar intent to methods already
existing in `SideEffectInterfaces.h`. Move the declarations (and
implementations) from `SideEffectUtils.h` (and `SideEffectUtils.cpp`)
into `SideEffectInterfaces.h` (and `SideEffectInterfaces.cpp`).
Also drop the `SideEffectInterface::hasNoEffect` method in favor of
`mlir::isMemoryEffectFree`, which actually recurses into the operation
instead of relying exclusively on the `hasRecursiveMemoryEffectTrait`.
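Typical use after this change (a minimal sketch; `op` stands for some operation under consideration):
```
#include "mlir/Interfaces/SideEffectInterfaces.h"

// Prefer the free function over interface-level checks: it also recurses
// into the regions of `op` rather than relying on the recursive trait alone.
bool canSafelyReorder(mlir::Operation *op) {
  return mlir::isMemoryEffectFree(op);
}
```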
Differential Revision: https://reviews.llvm.org/D137857
Systematically updates the SparseTensorRuntime to properly distinguish tensor-dimensions from storage-levels (and their associated ranks, shapes, sizes, indices, etc). With a few exceptions which are noted in the code, this ensures the runtime has all the **semantic** changes necessary to support non-permutations.
(Whereas **operationally**, since we're still using `std::vector<uint64_t>` to represent the mappings, there's no way to pass in any interesting non-permutations. Changing the representation to `std::function` will be done in a separate differential.)
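To illustrate the representational limitation (a plain C++ sketch, not the runtime API):
```
#include <cstdint>
#include <functional>
#include <vector>

// Current representation: a permutation of {0, ..., rank-1}; here it swaps
// the two dimensions of a rank-2 tensor.
std::vector<uint64_t> dimToLvl = {1, 0};

// A std::function-based representation could express non-permutations,
// e.g. mapping two tensor dimensions onto one storage level.
std::function<uint64_t(uint64_t)> dimToLvlFn = [](uint64_t d) { return d / 2; };
```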
Depends On D137680
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137681
`scf.foreach_thread` defines the mapping of its loops to processors via an integer array; see the example below. A lowering can use this mapping. However, expressing the mapping as an integer array is very confusing, especially when there are multiple levels of parallelism. In addition, the op does not verify the integer array. This change introduces a device mapping attribute to make the mapping descriptive and verifiable, and then makes the GPU transform dialect use it.
```
scf.foreach_thread (%i, %j) in (%c1, %c2) {
  scf.foreach_thread (%i2, %j2) in (%c1, %c2)
  {...} { thread_dim_mapping = [0, 1]}
} { thread_dim_mapping = [0, 1]}
```
It first introduces `DeviceMappingInterface`, an attribute interface. `scf.foreach_thread` defines its mapping via this interface. A lowering must define its own mapping attributes and implement this interface as well. This gives us clear validation.
The change also introduces two new attributes (`#gpu.thread<x/y/z>` and `#gpu.block<x/y/z>`). After this change, the above code prints as below, which clarifies the loop mappings. The change also implements consumption of these two new attributes by the transform dialect, which binds the outermost loops to thread blocks and the innermost loops to threads.
```
scf.foreach_thread (%i, %j) in (%c1, %c2) {
  scf.foreach_thread (%i2, %j2) in (%c1, %c2)
  {...} { thread_dim_mapping = [#gpu.thread<x>, #gpu.thread<y>]}
} { thread_dim_mapping = [#gpu.block<x>, #gpu.block<y>]}
```
Reviewed By: ftynse, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D137413
a11cd0d94e moved googletest to third-party.
This creates a corresponding BUILD file in third-party/unittest, moved from the chunk in llvm/BUILD.bazel.
We must refine the .bazelignore, which is set up to ignore the benchmarking library, so that we don't ignore the new directory here.