Commit Graph

64 Commits

Author SHA1 Message Date
River Riddle ace01605e0 [mlir] Split out a new ControlFlow dialect from Standard
This dialect is intended to model lower-level, branch-based control-flow constructs. The initial set
of operations is AssertOp, BranchOp, CondBranchOp, and SwitchOp, all split out from the current
standard dialect.

See https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D118966
2022-02-06 14:51:16 -08:00
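
A minimal sketch of creating one of the relocated ops through the C++ builder API after this split; the header path and helper below are illustrative of the post-split layout, not part of the patch.

  #include "mlir/Dialect/ControlFlow/IR/ControlFlowOps.h"
  #include "mlir/IR/Builders.h"

  using namespace mlir;

  // BranchOp now lives in the cf (ControlFlow) dialect rather than in std.
  void emitBranch(OpBuilder &builder, Location loc, Block *dest) {
    builder.create<cf::BranchOp>(loc, dest);
  }
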
River Riddle dec8af701f [mlir] Move SelectOp from Standard to Arithmetic
This is part of splitting up the standard dialect. See https://llvm.discourse.group/t/standard-dialect-the-final-chapter/ for discussion.

Differential Revision: https://reviews.llvm.org/D118648
2022-02-02 14:45:12 -08:00
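
A small sketch of the renamed spelling at a call site, assuming the Arithmetic dialect header of that time; the helper is illustrative.

  #include "mlir/Dialect/Arithmetic/IR/Arithmetic.h"
  #include "mlir/IR/Builders.h"

  using namespace mlir;

  // The select is now arith::SelectOp instead of the standard-dialect SelectOp.
  Value emitSelect(OpBuilder &builder, Location loc, Value cond, Value lhs,
                   Value rhs) {
    return builder.create<arith::SelectOp>(loc, cond, lhs, rhs);
  }
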
Mogball aae5125550 [mlir] Replace StrEnumAttr -> EnumAttr in core dialects
Removes uses of `StrEnumAttr` in core dialects

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D117514
2022-01-18 17:15:00 +00:00
Diego Caballero e2b658cd5d [mlir][GPU] Fix attribute name of DL specification
D115722 added a DL spec to GPU modules. The default DL
interface implementation happens to be sensitive to the name of the
DL spec attribute. This patch fixes the name of the attribute
to be the expected one.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D116956
2022-01-11 08:30:52 +00:00
Mehdi Amini e4853be2f1 Apply clang-tidy fixes for performance-for-range-copy to MLIR (NFC) 2022-01-02 22:19:56 +00:00
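
For context, a standalone illustration (not taken from the patch) of the pattern performance-for-range-copy flags:

  #include <cstddef>
  #include <string>
  #include <vector>

  // Iterating by value copies every element; a const reference avoids the copy.
  std::size_t totalLength(const std::vector<std::string> &names) {
    std::size_t len = 0;
    for (const std::string &name : names) // by-value `std::string name` would copy
      len += name.size();
    return len;
  }
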
Mehdi Amini 1fc096af1e Apply clang-tidy fixes for performance-unnecessary-value-param to MLIR (NFC)
Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D116250
2022-01-02 01:45:18 +00:00
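
Likewise, a standalone illustration (not from the patch) of what performance-unnecessary-value-param flags:

  #include <string>

  // A read-only parameter taken by value forces a copy; a const reference suffices.
  bool isGpuSymbol(const std::string &name) { // was: std::string name
    return name.rfind("gpu.", 0) == 0;
  }
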
Mehdi Amini 02b6fb218e Fix clang-tidy issues in mlir/ (NFC)
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D115956
2021-12-20 20:25:01 +00:00
Diego Caballero 32fe1a8a25 [mlir][GPU] Extend GPU kernel outlining to generate DL specification
This patch extends the GPU kernel outlining pass so that it can take
an optional data layout specification that will be attached to the generated
GPU module operation. If the data layout specification is not provided,
the default data layout is used instead.

Reviewed By: herhut, mehdi_amini

Differential Revision: https://reviews.llvm.org/D115722
2021-12-16 11:35:53 +00:00
Mogball a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
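
A sketch of the rename at a call site; arith::AddIOp stands in for the many moved ops, and the helper is illustrative.

  #include "mlir/Dialect/Arithmetic/IR/Arithmetic.h"
  #include "mlir/IR/Builders.h"

  using namespace mlir;

  // Arithmetic ops are now created through the arith dialect instead of std.
  Value emitAdd(OpBuilder &builder, Location loc, Value lhs, Value rhs) {
    return builder.create<arith::AddIOp>(loc, lhs, rhs);
  }
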
Uday Bondhugula 08b63db8bb [MLIR][GPU] Add GPU launch op support for dynamic shared memory
Add support for dynamic shared memory for GPU launch ops: add an
optional operand to gpu.launch and gpu.launch_func ops to specify the
amount of "dynamic" shared memory to use. Update lowerings to connect
this operand to the GPU runtime.

Differential Revision: https://reviews.llvm.org/D110800
2021-10-01 16:46:07 +05:30
River Riddle 4efb7754e0 [mlir][NFC] Add a using directive for llvm::SetVector
Differential Revision: https://reviews.llvm.org/D100436
2021-04-15 16:09:34 -07:00
Julian Gross e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from the std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
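
A sketch of what the rename looks like at a call site, using present-day spellings; the helper is illustrative.

  #include "mlir/Dialect/MemRef/IR/MemRef.h"
  #include "mlir/IR/Builders.h"

  using namespace mlir;

  // std.load/std.store call sites become memref::LoadOp/memref::StoreOp.
  Value loadElement(OpBuilder &builder, Location loc, Value buffer,
                    ValueRange indices) {
    return builder.create<memref::LoadOp>(loc, buffer, indices);
  }
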
Mehdi Amini e1364f1068 Replace use of OperationState with builder::create in GPU Kernel Outlining (NFC)
OperationState is a low-level API that is rarely called for; the convenient
builder API wrapper is preferred when possible.
2021-03-12 00:14:02 +00:00
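
A sketch of the preferred pattern, assuming the GPU dialect header of that time; creating through builder.create<> runs any hooks a derived builder installs, which a hand-rolled OperationState bypasses.

  #include "mlir/Dialect/GPU/GPUDialect.h"
  #include "mlir/IR/Builders.h"

  using namespace mlir;

  // Preferred: builder.create<> instead of populating an OperationState by hand.
  gpu::GPUModuleOp createKernelModule(OpBuilder &builder, Location loc,
                                      StringRef name) {
    return builder.create<gpu::GPUModuleOp>(loc, name);
  }
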
River Riddle 1b97cdf885 [mlir][IR][NFC] Move context/location parameters of builtin Type::get methods to the start of the parameter list
This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.

Differential Revision: https://reviews.llvm.org/D93432
2020-12-17 13:01:36 -08:00
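
The new parameter order, shown on a builtin type (a sketch, assuming the present-day header name):

  #include "mlir/IR/BuiltinTypes.h"

  using namespace mlir;

  // The MLIRContext now leads the parameter list.
  Type makeI32(MLIRContext *ctx) {
    return IntegerType::get(ctx, 32); // was IntegerType::get(32, ctx)
  }
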
Christian Sigg 1ffc1aaa09 [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation.
This is a preparation step to remove those methods from OpState.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D93098
2020-12-13 09:58:16 +01:00
Christian Sigg 0bf4a82a5a [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation. This is a preparation step to remove the corresponding methods from OpState.
Reviewed By: silvas, rriddle

Differential Revision: https://reviews.llvm.org/D92878
2020-12-09 12:11:32 +01:00
Christian Sigg c4a0405902 Add `Operation* OpState::operator->()` to provide more convenient access to members of Operation.
Given that OpState already implicitly converts to Operation*, this seems reasonable.

The alternative would be to add more functions to OpState which forward to Operation.

Reviewed By: rriddle, ftynse

Differential Revision: https://reviews.llvm.org/D92266
2020-12-02 15:46:20 +01:00
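
What the new operator enables, as a sketch; the attribute name below is illustrative.

  #include "mlir/Dialect/GPU/GPUDialect.h"

  using namespace mlir;

  // Members of Operation are reachable directly through the arrow operator.
  void markLaunch(gpu::LaunchOp launch, UnitAttr marker) {
    launch->setAttr("my.marker", marker); // was launch.getOperation()->setAttr(...)
  }
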
Kazuaki Ishizaki 41b09f4eff [mlir] NFC: fix trivial typos
fix typos in comments and documents

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D90089
2020-10-29 04:05:22 +09:00
Stephan Herhut 366d8435b4 [mlir][gpu] Fix bug in kernel outlining
The updated version of kernel outlining did not correctly handle cases
where an operand of a sinking candidate was itself defined by an operation
that is also a sinking candidate. In such cases, sunk operations could be
inserted in the wrong order, breaking SSA properties.

Differential Revision: https://reviews.llvm.org/D89112
2020-10-09 15:03:14 +02:00
Stephan Herhut edeff6e642 [mlir][GPU] Improve constant sinking in kernel outlining
The previous implementation did not support sinking simple expressions. In particular,
it is often beneficial to sink dim operations.

Differential Revision: https://reviews.llvm.org/D88439
2020-09-29 14:46:15 +02:00
River Riddle 9db53a1827 [mlir][NFC] Remove usernames and google bug numbers from TODO comments.
These were largely left over from when MLIR was a Google project and don't really follow LLVM guidelines.
2020-07-07 01:40:52 -07:00
Rahul Joshi d891d738d9 [MLIR][NFC] Adopt variadic isa<>
Differential Revision: https://reviews.llvm.org/D82489
2020-06-24 17:02:44 -07:00
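
The adopted form collapses chained checks into one call; the ops below are just examples.

  #include "mlir/Dialect/GPU/GPUDialect.h"

  using namespace mlir;

  // was: isa<gpu::LaunchOp>(op) || isa<gpu::LaunchFuncOp>(op)
  bool isLaunchLike(Operation *op) {
    return isa<gpu::LaunchOp, gpu::LaunchFuncOp>(op);
  }
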
Alex Zinenko bb1d976feb [mlir][flang] use OpBuilder& instead of Builder* in <Op>::build methods
As we start defining more complex Ops, we increasingly see the need for
Ops-with-regions to be able to construct Ops within their regions in
their ::build methods. However, these methods only have access to
Builder, and not OpBuilder. Creating a local instance of OpBuilder
inside ::build and using it fails to trigger the operation creation
hooks in derived builders (e.g., ConversionPatternRewriter). In this
case, we risk breaking the logic of the derived builder. At the same
time, OpBuilder::create, which is by far the largest user of ::build
already passes "this" as the first argument, so an OpBuilder instance is
already available.

Update all ::build methods in all Ops in MLIR and Flang to take
"OpBuilder &" instead of "Builder *". Note the change from pointer
to reference to comply with the common style in MLIR; this also ensures
all other users must update their ::build methods.

Differential Revision: https://reviews.llvm.org/D78713
2020-04-28 10:42:08 +02:00
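
A sketch of the new ::build shape on a hypothetical op; with an OpBuilder reference available, regions can be populated with builder.create<> and derived-builder hooks still fire.

  #include "mlir/IR/Builders.h"
  #include "mlir/IR/OperationSupport.h"

  using namespace mlir;

  struct MyRegionOp { // hypothetical op
    static void build(OpBuilder &builder, OperationState &state, Value input) {
      state.addOperands(input);
      state.addRegion(); // ops inside may now be created via `builder`
    }
  };
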
River Riddle 1834ad4a69 [mlir][Pass] Update the PassGen to generate base classes instead of utilities
Summary:
This is much cleaner, and fits the same structure as many other tablegen backends. This was not done originally as the CRTP in the pass classes made it overly verbose/complex.

Differential Revision: https://reviews.llvm.org/D77367
2020-04-07 14:08:52 -07:00
River Riddle 80aca1eaf7 [mlir][Pass] Remove the use of CRTP from the Pass classes
This revision removes all of the CRTP from the pass hierarchy in preparation for using the tablegen backend instead. This creates a much cleaner interface in the C++ code, and naturally fits with the rest of the infrastructure. A new utility class, PassWrapper, is added to replicate the existing behavior for passes not suitable for using the tablegen backend.

Differential Revision: https://reviews.llvm.org/D77350
2020-04-07 14:08:52 -07:00
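
The shape of the PassWrapper escape hatch, sketched with present-day headers; the pass itself is hypothetical.

  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/Pass/Pass.h"

  using namespace mlir;

  struct MyModuleCleanupPass
      : public PassWrapper<MyModuleCleanupPass, OperationPass<ModuleOp>> {
    void runOnOperation() override {
      ModuleOp module = getOperation();
      (void)module; // ... transform the module ...
    }
  };
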
River Riddle 722f909f7a [mlir][Pass][NFC] Replace usages of ModulePass with OperationPass<ModuleOp>
ModulePass doesn't provide any special utilities and thus doesn't give enough benefit to warrant a special pass class. This revision replaces all usages with the more general OperationPass.

Differential Revision: https://reviews.llvm.org/D77339
2020-04-07 14:08:52 -07:00
Kazuaki Ishizaki 5aacce3db2 [mlir] NFC: Fix trivial typo
Differential Revision: https://reviews.llvm.org/D77473
2020-04-05 11:30:30 +09:00
River Riddle 9a277af2d4 [mlir][Pass] Add support for generating pass utilities via tablegen
This revision adds support for generating utilities for passes, such as options/statistics/etc., that can be inferred from the tablegen definition. This removes additional boilerplate from the pass, and also makes it easier to remove the reliance on the pass registry to provide certain things (e.g., the pass argument).

Differential Revision: https://reviews.llvm.org/D76659
2020-04-01 02:10:46 -07:00
River Riddle e3d834a54a [mlir][Pass] Move the registration of dialect passes to tablegen
This generates a Passes.td for all of the dialects that have transformation passes. This removes the need for global registration for all of the dialect passes.

Differential Revision: https://reviews.llvm.org/D76657
2020-04-01 02:10:46 -07:00
MaheshRavishankar 3f44495dfd [mlir][GPU] Expose the functionality to create a GPUFuncOp from a LaunchOp
The current setup of the GPU dialect is to model both the host- and
device-side codegen. For some users (like IREE) the host-side modeling
might not directly fit their use case, but device-side codegen is still
valuable. A first step in accessing just the device-side functionality
of the GPU dialect is to allow creating a gpu.func operation from
a gpu.launch operation. In addition, this change "inlines"
operations into the gpu.func op at creation time instead of doing so
as a later step.

Differential Revision: https://reviews.llvm.org/D75287
2020-03-05 11:03:51 -08:00
Rob Suderman 69d757c0e8 Move StandardOps/Ops.h to StandardOps/IR/Ops.h
Summary:
NFC - Moved StandardOps/Ops.h into a StandardOps/IR dir to better match surrounding
directories. This matches other dialects and prepares for moving StandardOps-related
transforms out of Transforms and into StandardOps/Transforms.

Differential Revision: https://reviews.llvm.org/D74940
2020-02-21 11:58:47 -08:00
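
The corresponding include-path change at call sites:

  #include "mlir/Dialect/StandardOps/IR/Ops.h" // was "mlir/Dialect/StandardOps/Ops.h"
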
Stephan Herhut 283b5e733d [MLIR] Make gpu.launch implicitly capture uses of values defined above.
Summary:
In the original design, gpu.launch required uses of values defined above
to be explicitly captured and passed as operands to the gpu.launch
operation. This was motivated by infrastructure restrictions rather than
design. This change lifts the requirement and removes the concept of
kernel arguments from gpu.launch; instead, the kernel outlining
transformation now does the explicit capturing.

This is a breaking change for users of gpu.launch.

Differential Revision: https://reviews.llvm.org/D73769
2020-02-03 10:08:48 +01:00
Stephan Herhut 2692751895 Add 'gpu.terminator' operation.
Summary:
The 'gpu.terminator' operation is used as the terminator for the
regions of gpu.launch. This is to disambiguate them from the
return operation on 'gpu.func' functions.

This is a breaking change, and users of the gpu dialect will need
to adapt their code when producing 'gpu.launch' operations.

Reviewers: nicolasvasilache

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, csigg, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73620
2020-01-30 12:41:41 +01:00
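
A sketch of emitting the new terminator when building a launch body by hand; the helper is illustrative.

  #include "mlir/Dialect/GPU/GPUDialect.h"
  #include "mlir/IR/Builders.h"

  using namespace mlir;

  // gpu.launch regions now end with the dedicated terminator.
  void finishLaunchBody(OpBuilder &builder, Location loc) {
    builder.create<gpu::TerminatorOp>(loc);
  }
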
Mehdi Amini 308571074c Mass update the MLIR license header to mention "Part of the LLVM project"
This is an artifact from merging MLIR into LLVM, the file headers are
now aligned with the rest of the project.
2020-01-26 03:58:30 +00:00
Tres Popp 9a52ea5cf9 Create a gpu.module operation for the GPU Dialect.
Summary:
Previously, code constantly checked for an attribute on a module; this
change instead represents the distinct operation with a dedicated op,
which can be used to provide better filtering.

Reverts "Revert "[mlir] Create a gpu.module operation for the GPU Dialect.""

This reverts commit ac446302ca4145cdc89f377c0c364c29ee303be5 after
fixing internal Google issues.

This additionally updates ROCDL lowering to use the new gpu.module.

Reviewers: herhut, mravishankar, antiagainst, nicolasvasilache

Subscribers: jholewinski, mgorny, mehdi_amini, jpienaar, burmako, shauheen, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits, mravishankar, rriddle, antiagainst, bkramer

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72921
2020-01-21 14:05:03 +01:00
Benjamin Kramer 0133cc60e4 Revert "[mlir] Create a gpu.module operation for the GPU Dialect."
This reverts commit 4624a1e8ac; it was causing
problems downstream.
2020-01-15 17:52:17 +01:00
Tres Popp 4624a1e8ac [mlir] Create a gpu.module operation for the GPU Dialect.
Summary:
Previously, code constantly checked for an attribute on a module; this
change instead represents the distinct operation with a dedicated op,
which can be used to provide better filtering.

Reviewers: herhut, mravishankar, antiagainst, rriddle

Reviewed By: herhut, antiagainst, rriddle

Subscribers: liufengdb, aartbik, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72336
2020-01-14 12:05:47 +01:00
River Riddle 2bdf33cc4c [mlir] NFC: Remove Value::operator* and Value::operator-> now that Value is properly value-typed.
Summary: These were temporary methods used to simplify the transition.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D72548
2020-01-11 08:54:39 -08:00
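
The resulting spelling at call sites:

  #include "mlir/IR/Value.h"

  using namespace mlir;

  Type typeOf(Value value) {
    return value.getType(); // was value->getType()
  }
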
River Riddle e62a69561f NFC: Replace ValuePtr with Value and remove it now that Value is value-typed.
ValuePtr was a temporary typedef during the transition to a value-typed Value.

PiperOrigin-RevId: 286945714
2019-12-23 16:36:53 -08:00
Mehdi Amini 56222a0694 Adjust License.txt file to use the LLVM license
PiperOrigin-RevId: 286906740
2019-12-23 15:33:37 -08:00
River Riddle 35807bc4c5 NFC: Introduce new ValuePtr/ValueRef typedefs to simplify the transition to Value being value-typed.
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ

This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.

PiperOrigin-RevId: 286844725
2019-12-22 22:00:23 -08:00
River Riddle 4562e389a4 NFC: Remove unnecessary 'llvm::' prefix from uses of llvm symbols declared in `mlir` namespace.
Aside from being cleaner, this also makes the codebase more consistent.

PiperOrigin-RevId: 286206974
2019-12-18 09:29:20 -08:00
Alex Zinenko 6273fa0c6a Plug gpu.func into the GPU lowering pipelines
This updates the lowering pipelines from the GPU dialect to lower-level
dialects (NVVM, SPIRV) to use the recently introduced gpu.func operation
instead of a standard function annotated with an attribute. In particular, the
kernel outlining is updated to produce gpu.func instead of std.func and the
individual conversions are updated to consume gpu.funcs and disallow standard
funcs after legalization, if necessary. The attribute "gpu.kernel" is preserved
in the generic syntax, but can also be used with the custom syntax on
gpu.funcs. The special kind of function for GPU allows one to use additional
features such as memory attribution.

PiperOrigin-RevId: 285822272
2019-12-16 12:12:48 -08:00
River Riddle d6ee6a0310 Update the builder API to take ValueRange instead of ArrayRef<Value *>
This allows for users to provide operand_range and result_range in builder.create<> calls, instead of requiring an explicit copy into a separate data structure like SmallVector/std::vector.

PiperOrigin-RevId: 284360710
2019-12-07 10:35:41 -08:00
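
A sketch of what this permits, using present-day spellings; func.return is just an example consumer.

  #include "mlir/Dialect/Func/IR/FuncOps.h"
  #include "mlir/IR/Builders.h"

  using namespace mlir;

  // A result_range is passed straight to create<>, with no SmallVector copy.
  void returnAllResults(OpBuilder &builder, Location loc, Operation *producer) {
    builder.create<func::ReturnOp>(loc, producer->getResults());
  }
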
Alex Zinenko e96150eb46 Replace custom getBody method with an ODS-generated in gpu::LaunchOp
PiperOrigin-RevId: 284262981
2019-12-06 14:29:25 -08:00
Tres Popp b8cd0c1486 Move ModuleManager functionality into mlir::SymbolTable.
For broken code, note that the following transformations occurred:
ModuleManager::insert(Block::iterator, Operation*) - > SymbolTable::insert(Operation*, Block::iterator)
ModuleManager::lookupSymbol -> SymbolTable::lookup
ModuleManager::getModule() -> SymbolTable::getOp()
ModuleManager::getContext() -> SymbolTable::getOp()->getContext()
ModuleManager::* -> SymbolTable::*
PiperOrigin-RevId: 283944635
2019-12-05 03:56:46 -08:00
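
The replacement pattern, sketched; the symbol name is illustrative.

  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/IR/SymbolTable.h"

  using namespace mlir;

  void registerKernel(ModuleOp module, Operation *kernelFunc) {
    SymbolTable symbolTable(module);                    // replaces ModuleManager
    symbolTable.insert(kernelFunc);                     // was ModuleManager::insert
    Operation *found = symbolTable.lookup("my_kernel"); // was lookupSymbol
    (void)found;
  }
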
Stephan Herhut abb626686d Extend kernel outlining to also consider dim worth inlining.
PiperOrigin-RevId: 281483447
2019-11-20 02:59:35 -08:00
River Riddle 9b9c647cef Add support for nested symbol references.
This change allows adding nested references to a SymbolRefAttr so that a symbol can be further resolved when it also defines a SymbolTable. If a referenced symbol defines a symbol table, a nested reference can be used to refer to a symbol within that table. Nested references are printed after the main reference in the following form:

  symbol-ref-attribute ::= symbol-ref-id (`::` symbol-ref-id)*

Example:

  module @reference {
    func @nested_reference()
  }

  my_reference_op @reference::@nested_reference

Given that SymbolRefAttr is now more general, the existing functionality centered around a single reference is moved to a derived class, FlatSymbolRefAttr. Follow-up commits will add support for scoped references to lookups, RAUW, etc.

PiperOrigin-RevId: 279860501
2019-11-11 18:18:31 -08:00
MLIR Team 9fbf52e330 Look for SymbolRefAttr in KernelOutlining instead of hard-coding CallOp
This code should be exercised using the existing kernel outlining unit test, but
let me know if I should add a dedicated unit test using a fake call instruction
as well.

PiperOrigin-RevId: 279436321
2019-11-08 19:13:13 -08:00
Christian Sigg b74af4aa5c Unify GPU op definition names with other dialects.
Rename GPU op names from gpu_Foo to GPU_FooOp.

PiperOrigin-RevId: 275882232
2019-10-21 11:10:56 -07:00