Eliminate the using decls for MLFunction and CFGFunction, standardizing on Function.

This is step 18/n towards merging instructions and statements, NFC.

PiperOrigin-RevId: 227139399
Chris Lattner 2018-12-28 08:48:09 -08:00 committed by jpienaar
parent f845bc4542
commit 69d9e990fa
55 changed files with 249 additions and 261 deletions

View File

@ -1519,9 +1519,9 @@ each followed by its indices, size of the data transfer in terms of the number
of elements (of the elemental type of the memref), and a tag memref with its
indices. The tag location is used by a dma_wait operation to check for
completion. The indices of the source memref, destination memref, and the tag
memref have the same restrictions as any load/store instruction in an MLFunction
(whenever DMA operations appear in ML Functions). This allows powerful static
analysis and transformations in the presence of such DMAs including
memref have the same restrictions as any load/store instruction in an ML
Function (whenever DMA operations appear in ML Functions). This allows powerful
static analysis and transformations in the presence of such DMAs including
rescheduling, pipelining / overlap with computation, and checking for matching
start/end operations. The source and destination memref need not be of the same
dimensionality, but need to have the same elemental type.
@ -1599,7 +1599,7 @@ The arity of indices is the rank of the memref (i.e., if the memref loaded from
is of rank 3, then 3 indices are required for the load following the memref
identifier).
In an MLFunction, the indices of a load are restricted to SSA values bound to
In an ML Function, the indices of a load are restricted to SSA values bound to
surrounding loop induction variables, [symbols](#dimensions-and-symbols),
results of a [`constant` operation](#'constant'-operation), or the results of an
`affine_apply` operation that can in turn take as arguments all of the
@ -1641,7 +1641,7 @@ Store value to memref location given by indices. The value stored should have
the same type as the elemental type of the memref. The number of arguments
provided within brackets need to match the rank of the memref.
In an MLFunction, the indices of a store are restricted to SSA values bound to
In an ML Function, the indices of a store are restricted to SSA values bound to
surrounding loop induction variables, [symbols](#dimensions-and-symbols),
results of a [`constant` operation](#'constant'-operation), or the results of an
[`affine_apply`](#'affine_apply'-operation) operation that can in turn take as

View File

@ -580,7 +580,7 @@ consideration on demand. We will revisit these discussions when we have more
implementation experience and learn more about the challenges and limitations of
our current design in practice.
### MLFunction representation alternatives: polyhedral schedule lists vs polyhedral schedules trees vs affine loop/If forms {#mlfunction-representation-alternatives-polyhedral-schedule-lists-vs-polyhedral-schedules-trees-vs-affine-loop-if-forms}
### ML Function representation alternatives: polyhedral schedule lists vs polyhedral schedules trees vs affine loop/If forms {#mlfunction-representation-alternatives-polyhedral-schedule-lists-vs-polyhedral-schedules-trees-vs-affine-loop-if-forms}
The current MLIR uses a representation of polyhedral schedules using a tree of
if/for loops. We extensively debated the tradeoffs involved in the typical
@ -609,8 +609,9 @@ At a high level, we have two alternatives here:
1. Having two different forms of MLFunctions: an affine loop tree form
(AffineLoopTreeFunction) and a polyhedral schedule tree form as two
different forms of MLFunctions. Or in effect, having four different forms
for functions in MLIR instead of three: CFGFunction, AffineLoopTreeFunction,
Polyhedral Schedule Tree function, and external functions.
for functions in MLIR instead of three: CFG Function,
AffineLoopTreeFunction, Polyhedral Schedule Tree function, and external
functions.
#### Schedule Tree Representation for MLFunctions {#schedule-tree-representation-for-mlfunctions}
@ -785,7 +786,7 @@ extfunc @dma_hbm_to_vmem(memref<1024 x f32, #layout_map0, hbm> %a,
representation. 2(b) requires no change, but impacts how cost models look at
index and layout maps.
### MLFunction Extensions for "Escaping Scalars" {#mlfunction-extensions-for-"escaping-scalars"}
### ML Function Extensions for "Escaping Scalars" {#mlfunction-extensions-for-"escaping-scalars"}
We considered providing a representation for SSA values that are live out of
if/else conditional bodies or for loops of ML functions. We ultimately abandoned

View File

@ -417,7 +417,7 @@ public:
/// the 'for' statement isn't found in the constraint system. Any new
/// identifiers that are found in the bound operands of the 'for' statement
/// are added as trailing identifiers (either dimensional or symbolic
/// depending on whether the operand is a valid MLFunction symbol).
/// depending on whether the operand is a valid ML Function symbol).
// TODO(bondhugula): add support for non-unit strides.
bool addForStmtDomain(const ForStmt &forStmt);

View File

@ -47,7 +47,7 @@ class DominanceInfo : public DominatorTreeBase {
using super = DominatorTreeBase;
public:
DominanceInfo(CFGFunction *F);
DominanceInfo(Function *F);
/// Return true if instruction A properly dominates instruction B.
bool properlyDominates(const Instruction *a, const Instruction *b);
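For readers unfamiliar with the dominance API touched here, a minimal usage sketch (not part of this commit; it assumes the DominanceInfo declaration above is in scope, and `fn`, `defInst`, and `useInst` are hypothetical caller-supplied values):

```cpp
// Sketch: build dominance info for a function once, then answer dominance
// queries between instructions in that function.
static bool defProperlyDominatesUse(mlir::Function *fn,
                                    const mlir::Instruction *defInst,
                                    const mlir::Instruction *useInst) {
  // Constructing DominanceInfo computes the dominator tree for `fn`.
  mlir::DominanceInfo domInfo(fn);
  return domInfo.properlyDominates(defInst, useInst);
}
```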

View File

@ -62,7 +62,7 @@ using AffineBoundExprList = SmallVector<AffineExpr, 4>;
// 0 <= d0 <= 511
// max(128,M) <= d1 <= min(N-1,256)
//
// Symbols here aren't necessarily associated with MLFunction's symbols; they
// Symbols here aren't necessarily associated with Function's symbols; they
// could also correspond to outer loop IVs for example or anything abstract. The
// binding to SSA values for dimensions/symbols is optional, and these are in an
// abstract integer domain. As an example, to describe data accessed in a tile

View File

@ -29,7 +29,7 @@ struct MLFunctionMatchesStorage;
class Statement;
/// An MLFunctionMatcher is a recursive matcher that captures nested patterns in
/// an MLFunction. It is used in conjunction with a scoped
/// an ML Function. It is used in conjunction with a scoped
/// MLFunctionMatcherContext that handles the memory allocations efficiently.
///
/// In order to use MLFunctionMatchers, one creates a scoped context and uses
@ -47,7 +47,7 @@ class Statement;
///
/// Recursive abstraction for matching results.
/// Provides iteration over the MLFunction Statement* captured by a Matcher.
/// Provides iteration over the Statement* captured by a Matcher.
///
/// Implemented as a POD value-type with underlying storage pointer.
/// The underlying storage lives in a scoped bumper allocator whose lifetime
@ -99,7 +99,7 @@ struct MLFunctionMatcher : public StmtWalker<MLFunctionMatcher> {
FilterFunctionType filter = defaultFilterFunction);
/// Returns all the matches in `function`.
MLFunctionMatches match(MLFunction *function);
MLFunctionMatches match(Function *function);
/// Returns all the matches nested under `statement`.
MLFunctionMatches match(Statement *statement);
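A rough usage sketch for the match entry points above (not from this commit): the constructors of MLFunctionMatcher and of the scoped MLFunctionMatcherContext are not shown in this hunk, so both are assumed to be set up by the caller.

```cpp
// Sketch only: `matcher` is assumed to have been built inside a live
// MLFunctionMatcherContext, whose storage owns the returned matches.
void collectMatches(mlir::MLFunctionMatcher &matcher, mlir::Function *f) {
  // match() walks `f` post-order and records every captured statement.
  mlir::MLFunctionMatches matches = matcher.match(f);
  (void)matches; // iterate the captured Statement* here as needed
}
```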

View File

@ -29,10 +29,10 @@ namespace mlir {
class FunctionPass;
/// Creates a pass to check memref accesses in an MLFunction.
/// Creates a pass to check memref accesses in an ML Function.
FunctionPass *createMemRefBoundCheckPass();
/// Creates a pass to check memref access dependences in an MLFunction.
/// Creates a pass to check memref access dependences in an ML Function.
FunctionPass *createMemRefDependenceCheckPass();
} // end namespace mlir

View File

@ -115,7 +115,7 @@ private:
/// cases. The computed region's 'cst' field has exactly as many dimensional
/// identifiers as the rank of the memref, and *potentially* additional symbolic
/// identifiers which could include any of the loop IVs surrounding opStmt up
/// until 'loopDepth' and another additional MLFunction symbols involved with
/// until 'loopDepth' and any additional Function symbols involved with
/// the access (e.g., those that appear in affine_apply's, loop bounds, etc.).
/// For example, the memref region for this operation at loopDepth = 1 will be:
///

View File

@ -111,15 +111,15 @@ public:
BasicBlock &back() { return blocks.back(); }
const BasicBlock &back() const {
return const_cast<CFGFunction *>(this)->back();
return const_cast<Function *>(this)->back();
}
BasicBlock &front() { return blocks.front(); }
const BasicBlock &front() const {
return const_cast<CFGFunction *>(this)->front();
return const_cast<Function *>(this)->front();
}
/// Return the 'return' statement of this MLFunction.
/// Return the 'return' statement of this Function.
const OperationInst *getReturnStmt() const;
OperationInst *getReturnStmt();
@ -157,14 +157,14 @@ public:
}
// Supports non-const operand iteration.
using args_iterator = ArgumentIterator<MLFunction, BlockArgument>;
using args_iterator = ArgumentIterator<Function, BlockArgument>;
args_iterator args_begin();
args_iterator args_end();
llvm::iterator_range<args_iterator> getArguments();
// Supports const operand iteration.
using const_args_iterator =
ArgumentIterator<const MLFunction, const BlockArgument>;
ArgumentIterator<const Function, const BlockArgument>;
const_args_iterator args_begin() const;
const_args_iterator args_end() const;
llvm::iterator_range<const_args_iterator> getArguments() const;
@ -252,32 +252,31 @@ public:
};
//===--------------------------------------------------------------------===//
// MLFunction iterator methods.
// Function iterator methods.
//===--------------------------------------------------------------------===//
inline MLFunction::args_iterator MLFunction::args_begin() {
inline Function::args_iterator Function::args_begin() {
return args_iterator(this, 0);
}
inline MLFunction::args_iterator MLFunction::args_end() {
inline Function::args_iterator Function::args_end() {
return args_iterator(this, getNumArguments());
}
inline llvm::iterator_range<MLFunction::args_iterator>
MLFunction::getArguments() {
inline llvm::iterator_range<Function::args_iterator> Function::getArguments() {
return {args_begin(), args_end()};
}
inline MLFunction::const_args_iterator MLFunction::args_begin() const {
inline Function::const_args_iterator Function::args_begin() const {
return const_args_iterator(this, 0);
}
inline MLFunction::const_args_iterator MLFunction::args_end() const {
inline Function::const_args_iterator Function::args_end() const {
return const_args_iterator(this, getNumArguments());
}
inline llvm::iterator_range<MLFunction::const_args_iterator>
MLFunction::getArguments() const {
inline llvm::iterator_range<Function::const_args_iterator>
Function::getArguments() const {
return {args_begin(), args_end()};
}
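A small usage sketch for the argument iterators defined above (a hypothetical helper, not part of the commit; `auto` is used because the hunk does not pin down the iterator's exact value type):

```cpp
// Sketch: count a Function's block arguments via getArguments().
unsigned countArguments(const mlir::Function *fn) {
  unsigned count = 0;
  for (auto arg : fn->getArguments()) {
    (void)arg;
    ++count;
  }
  // Expected to agree with fn->getNumArguments().
  return count;
}
```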

View File

@ -86,14 +86,13 @@ template <> struct GraphTraits<Inverse<const mlir::BasicBlock *>> {
};
template <>
struct GraphTraits<mlir::CFGFunction *>
: public GraphTraits<mlir::BasicBlock *> {
using GraphType = mlir::CFGFunction *;
struct GraphTraits<mlir::Function *> : public GraphTraits<mlir::BasicBlock *> {
using GraphType = mlir::Function *;
using NodeRef = mlir::BasicBlock *;
static NodeRef getEntryNode(GraphType fn) { return &fn->front(); }
using nodes_iterator = pointer_iterator<mlir::CFGFunction::iterator>;
using nodes_iterator = pointer_iterator<mlir::Function::iterator>;
static nodes_iterator nodes_begin(GraphType fn) {
return nodes_iterator(fn->begin());
}
@ -103,14 +102,14 @@ struct GraphTraits<mlir::CFGFunction *>
};
template <>
struct GraphTraits<const mlir::CFGFunction *>
struct GraphTraits<const mlir::Function *>
: public GraphTraits<const mlir::BasicBlock *> {
using GraphType = const mlir::CFGFunction *;
using GraphType = const mlir::Function *;
using NodeRef = const mlir::BasicBlock *;
static NodeRef getEntryNode(GraphType fn) { return &fn->front(); }
using nodes_iterator = pointer_iterator<mlir::CFGFunction::const_iterator>;
using nodes_iterator = pointer_iterator<mlir::Function::const_iterator>;
static nodes_iterator nodes_begin(GraphType fn) {
return nodes_iterator(fn->begin());
}
@ -120,14 +119,14 @@ struct GraphTraits<const mlir::CFGFunction *>
};
template <>
struct GraphTraits<Inverse<mlir::CFGFunction *>>
struct GraphTraits<Inverse<mlir::Function *>>
: public GraphTraits<Inverse<mlir::BasicBlock *>> {
using GraphType = Inverse<mlir::CFGFunction *>;
using GraphType = Inverse<mlir::Function *>;
using NodeRef = NodeRef;
static NodeRef getEntryNode(GraphType fn) { return &fn.Graph->front(); }
using nodes_iterator = pointer_iterator<mlir::CFGFunction::iterator>;
using nodes_iterator = pointer_iterator<mlir::Function::iterator>;
static nodes_iterator nodes_begin(GraphType fn) {
return nodes_iterator(fn.Graph->begin());
}
@ -137,14 +136,14 @@ struct GraphTraits<Inverse<mlir::CFGFunction *>>
};
template <>
struct GraphTraits<Inverse<const mlir::CFGFunction *>>
struct GraphTraits<Inverse<const mlir::Function *>>
: public GraphTraits<Inverse<const mlir::BasicBlock *>> {
using GraphType = Inverse<const mlir::CFGFunction *>;
using GraphType = Inverse<const mlir::Function *>;
using NodeRef = NodeRef;
static NodeRef getEntryNode(GraphType fn) { return &fn.Graph->front(); }
using nodes_iterator = pointer_iterator<mlir::CFGFunction::const_iterator>;
using nodes_iterator = pointer_iterator<mlir::Function::const_iterator>;
static nodes_iterator nodes_begin(GraphType fn) {
return nodes_iterator(fn.Graph->begin());
}
@ -161,7 +160,7 @@ struct GraphTraits<mlir::StmtBlockList *>
static NodeRef getEntryNode(GraphType fn) { return &fn->front(); }
using nodes_iterator = pointer_iterator<mlir::CFGFunction::iterator>;
using nodes_iterator = pointer_iterator<mlir::Function::iterator>;
static nodes_iterator nodes_begin(GraphType fn) {
return nodes_iterator(fn->begin());
}
@ -178,7 +177,7 @@ struct GraphTraits<const mlir::StmtBlockList *>
static NodeRef getEntryNode(GraphType fn) { return &fn->front(); }
using nodes_iterator = pointer_iterator<mlir::CFGFunction::const_iterator>;
using nodes_iterator = pointer_iterator<mlir::Function::const_iterator>;
static nodes_iterator nodes_begin(GraphType fn) {
return nodes_iterator(fn->begin());
}
@ -195,7 +194,7 @@ struct GraphTraits<Inverse<mlir::StmtBlockList *>>
static NodeRef getEntryNode(GraphType fn) { return &fn.Graph->front(); }
using nodes_iterator = pointer_iterator<mlir::CFGFunction::iterator>;
using nodes_iterator = pointer_iterator<mlir::Function::iterator>;
static nodes_iterator nodes_begin(GraphType fn) {
return nodes_iterator(fn.Graph->begin());
}
@ -212,7 +211,7 @@ struct GraphTraits<Inverse<const mlir::StmtBlockList *>>
static NodeRef getEntryNode(GraphType fn) { return &fn.Graph->front(); }
using nodes_iterator = pointer_iterator<mlir::CFGFunction::const_iterator>;
using nodes_iterator = pointer_iterator<mlir::Function::const_iterator>;
static nodes_iterator nodes_begin(GraphType fn) {
return nodes_iterator(fn.Graph->begin());
}
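The point of these GraphTraits specializations is that LLVM's generic graph utilities now work directly on a function's CFG. A hedged sketch (the helper is hypothetical; it assumes the specializations above, together with the block-level traits, are in scope):

```cpp
#include "llvm/ADT/DepthFirstIterator.h"

// Sketch: walk all blocks reachable from the entry block of `fn`.
// depth_first() starts at GraphTraits<mlir::Function *>::getEntryNode(fn),
// i.e. &fn->front(), and follows successor edges.
static unsigned countReachableBlocks(mlir::Function *fn) {
  unsigned numBlocks = 0;
  for (mlir::BasicBlock *block : llvm::depth_first(fn)) {
    (void)block;
    ++numBlocks;
  }
  return numBlocks;
}
```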

View File

@ -29,7 +29,6 @@
namespace mlir {
class Location;
using MLFunction = Function;
class StmtBlock;
class ForStmt;
class MLIRContext;
@ -105,7 +104,7 @@ public:
/// Returns the function that this statement is part of.
/// The function is determined by traversing the chain of parent statements.
/// Returns nullptr if the statement is unlinked.
MLFunction *getFunction() const;
Function *getFunction() const;
/// Destroys this statement and its subclass data.
void destroy();

View File

@ -28,8 +28,6 @@
namespace mlir {
class IfStmt;
class StmtBlockList;
using CFGFunction = Function;
using MLFunction = Function;
template <typename BlockType> class PredecessorIterator;
template <typename BlockType> class SuccessorIterator;
@ -61,8 +59,8 @@ public:
/// Returns the function that this statement block is part of. The function
/// is determined by traversing the chain of parent statements.
MLFunction *getFunction();
const MLFunction *getFunction() const {
Function *getFunction();
const Function *getFunction() const {
return const_cast<StmtBlock *>(this)->getFunction();
}
@ -293,7 +291,7 @@ private:
namespace mlir {
/// This class contains a list of basic blocks and has a notion of the object it
/// is part of - an MLFunction or IfStmt or ForStmt.
/// is part of - a Function or IfStmt or ForStmt.
class StmtBlockList {
public:
explicit StmtBlockList(Function *container);
@ -345,7 +343,7 @@ public:
}
/// A StmtBlockList is part of a Function or an IfStmt/ForStmt. If it is
/// part of an Function, then return it, otherwise return null.
/// part of a Function, then return it, otherwise return null.
Function *getContainingFunction();
const Function *getContainingFunction() const {
return const_cast<StmtBlockList *>(this)->getContainingFunction();
@ -353,8 +351,8 @@ public:
// TODO(clattner): This is only to help ML -> CFG migration, remove in the
// near future. This makes StmtBlockList work more like BasicBlock did.
CFGFunction *getFunction();
const CFGFunction *getFunction() const {
Function *getFunction();
const Function *getFunction() const {
return const_cast<StmtBlockList *>(this)->getFunction();
}

View File

@ -15,7 +15,7 @@
// limitations under the License.
// =============================================================================
//
// This file defines the base classes for MLFunction's statement visitors and
// This file defines the base classes for Function's statement visitors and
// walkers. A visit is a O(1) operation that visits just the node in question. A
// walk visits the node it's called on as well as the node's descendants.
//
@ -29,7 +29,7 @@
// resolved overloading, not virtual functions.
//
// For example, here is a walker that counts the number of for loops in an
// MLFunction.
// Function.
//
// /// Declare the class. Note that we derive from StmtWalker instantiated
// /// with _our new subclasses_ type.
@ -45,7 +45,7 @@
// numLoops = lc.numLoops;
//
// There are 'visit' methods for OperationInst, ForStmt, IfStmt, and
// MLFunction, which recursively process all contained statements.
// Function, which recursively process all contained statements.
//
// Note that if you don't implement visitXXX for some statement type,
// the visitXXX method for Statement superclass will be invoked.
@ -129,14 +129,14 @@ public:
}
}
// Define walkers for MLFunction and all MLFunction statement kinds.
void walk(MLFunction *f) {
// Define walkers for Function and all Function statement kinds.
void walk(Function *f) {
static_cast<SubClass *>(this)->visitMLFunction(f);
static_cast<SubClass *>(this)->walk(f->getBody()->begin(),
f->getBody()->end());
}
void walkPostOrder(MLFunction *f) {
void walkPostOrder(Function *f) {
static_cast<SubClass *>(this)->walkPostOrder(f->getBody()->begin(),
f->getBody()->end());
static_cast<SubClass *>(this)->visitMLFunction(f);
@ -219,7 +219,7 @@ public:
// called. These are typically O(1) complexity and shouldn't be recursively
// processing their descendants in some way. When using RetTy, all of these
// need to be overridden.
void visitMLFunction(MLFunction *f) {}
void visitMLFunction(Function *f) {}
void visitForStmt(ForStmt *forStmt) {}
void visitIfStmt(IfStmt *ifStmt) {}
void visitOperationInst(OperationInst *opStmt) {}
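For reference, the loop-counting walker that the header comment above alludes to looks roughly like this (a reconstruction of the elided example, not text from this commit):

```cpp
// Derive from StmtWalker instantiated with our own type (CRTP), and override
// only the hook we care about.
struct LoopCounter : public mlir::StmtWalker<LoopCounter> {
  unsigned numLoops = 0;
  // Called once for every ForStmt encountered during the walk.
  void visitForStmt(mlir::ForStmt *forStmt) { ++numLoops; }
};

unsigned countForLoops(mlir::Function *f) {
  LoopCounter lc;
  lc.walk(f); // visits f and, recursively, all statements it contains
  return lc.numLoops;
}
```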

View File

@ -25,8 +25,6 @@
namespace mlir {
class Function;
using CFGFunction = Function;
using MLFunction = Function;
class Module;
// Values that can be used to signal success/failure. This can be implicitly
@ -93,11 +91,11 @@ public:
/// runOnCFGFunction or runOnMLFunction.
virtual PassResult runOnFunction(Function *fn);
/// Implement this function if you want to see CFGFunction's specifically.
virtual PassResult runOnCFGFunction(CFGFunction *fn) { return success(); }
/// Implement this function if you want to see CFG Functions specifically.
virtual PassResult runOnCFGFunction(Function *fn) { return success(); }
/// Implement this function if you want to see MLFunction's specifically.
virtual PassResult runOnMLFunction(MLFunction *fn) { return success(); }
/// Implement this function if you want to see ML Functions specifically.
virtual PassResult runOnMLFunction(Function *fn) { return success(); }
// Iterates over all functions in a module, halting upon failure.
virtual PassResult runOnModule(Module *m) override;
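To illustrate the hooks above, here is a minimal pass sketch in the style of the concrete passes updated later in this commit; the pass itself is hypothetical.

```cpp
// Sketch: a FunctionPass that only cares about ML functions; CFG functions
// fall back to the default runOnCFGFunction(), which returns success().
struct MyMLFunctionPass : public mlir::FunctionPass {
  MyMLFunctionPass() : FunctionPass(&MyMLFunctionPass::passID) {}

  mlir::PassResult runOnMLFunction(mlir::Function *fn) override {
    // ... inspect or transform `fn` here ...
    return success();
  }

  static char passID;
};
char MyMLFunctionPass::passID = 0;
```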

View File

@ -31,7 +31,6 @@ namespace mlir {
class AffineMap;
class ForStmt;
class Function;
using MLFunction = Function;
class FuncBuilder;
// Values that can be used to signal success/failure. This can be implicitly
@ -66,9 +65,9 @@ bool loopUnrollJamUpToFactor(ForStmt *forStmt, uint64_t unrollJamFactor);
/// was known to have a single iteration. Returns false otherwise.
bool promoteIfSingleIteration(ForStmt *forStmt);
/// Promotes all single iteration ForStmt's in the MLFunction, i.e., moves
/// Promotes all single iteration ForStmt's in the Function, i.e., moves
/// their body into the containing StmtBlock.
void promoteSingleIterationLoops(MLFunction *f);
void promoteSingleIterationLoops(Function *f);
/// Returns the lower bound of the cleanup loop when unrolling a loop
/// with the specified unroll factor.

View File

@ -46,9 +46,9 @@ private:
FuncBuilder *builder;
};
/// Base class for the MLFunction-wise lowering state. A pointer to the same
/// Base class for the Function-wise lowering state. A pointer to the same
/// instance of the subclass will be passed to all `rewrite` calls on operations
/// that belong to the same MLFunction.
/// that belong to the same Function.
class MLFuncGlobalLoweringState {
public:
virtual ~MLFuncGlobalLoweringState() {}
@ -58,7 +58,7 @@ protected:
MLFuncGlobalLoweringState() {}
};
/// Base class for MLFunction lowering patterns.
/// Base class for Function lowering patterns.
class MLLoweringPattern : public Pattern {
public:
/// Subclasses must override this function to implement rewriting. It will be
@ -104,11 +104,11 @@ public:
explicit MLPatternLoweringPass(void *ID) : FunctionPass(ID) {}
virtual std::unique_ptr<MLFuncGlobalLoweringState>
makeFuncWiseState(MLFunction *f) const {
makeFuncWiseState(Function *f) const {
return nullptr;
}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
};
/////////////////////////////////////////////////////////////////////
@ -135,7 +135,7 @@ template <typename Pattern> struct ListAdder<Pattern> {
} // namespace detail
template <typename... Patterns>
PassResult MLPatternLoweringPass<Patterns...>::runOnMLFunction(MLFunction *f) {
PassResult MLPatternLoweringPass<Patterns...>::runOnMLFunction(Function *f) {
detail::OwningMLLoweringPatternList patterns;
detail::ListAdder<Patterns...>::addPatternsToList(&patterns, f->getContext());
auto funcWiseState = makeFuncWiseState(f);

View File

@ -95,7 +95,7 @@ FunctionPass *createDmaGenerationPass(unsigned lowMemorySpace,
/// Replaces affine_apply operations in CFGFunctions with the arithmetic
/// primitives (addition, multiplication) they comprise. Errors out on
/// any MLFunction since it may contain affine_applies baked into the For loop
/// any Function since it may contain affine_applies baked into the For loop
/// bounds that cannot be replaced.
FunctionPass *createLowerAffineApplyPass();

View File

@ -39,7 +39,6 @@ class Module;
class OperationInst;
class Function;
using CFGFunction = Function;
/// Replace all uses of oldMemRef with newMemRef while optionally remapping the
/// old memref's indices using the supplied affine map and adding any additional

View File

@ -565,7 +565,7 @@ bool mlir::getIndexSet(ArrayRef<ForStmt *> forStmts,
// Computes the iteration domain for 'opStmt' and populates 'indexSet', which
// encapsulates the constraints involving loops surrounding 'opStmt' and
// potentially involving any MLFunction symbols. The dimensional identifiers in
// potentially involving any Function symbols. The dimensional identifiers in
// 'indexSet' correspond to the loops surrounding 'stmt' from outermost to
// innermost.
// TODO(andydavis) Add support to handle IfStmts surrounding 'stmt'.

View File

@ -30,7 +30,7 @@ template class llvm::DominatorTreeBase<BasicBlock, true>;
template class llvm::DomTreeNodeBase<BasicBlock>;
/// Compute the immediate-dominators map.
DominanceInfo::DominanceInfo(CFGFunction *function) : DominatorTreeBase() {
DominanceInfo::DominanceInfo(Function *function) : DominatorTreeBase() {
// Build the dominator tree for the function.
recalculate(function->getBlockList());
}

View File

@ -92,7 +92,7 @@ static MLFunctionMatches combine(ArrayRef<MLFunctionMatches> matches) {
}
/// Calls walk on `function`.
MLFunctionMatches MLFunctionMatcher::match(MLFunction *function) {
MLFunctionMatches MLFunctionMatcher::match(Function *function) {
assert(!matches && "MLFunctionMatcher already matched!");
this->walkPostOrder(function);
return matches;

View File

@ -41,9 +41,9 @@ namespace {
struct MemRefBoundCheck : public FunctionPass, StmtWalker<MemRefBoundCheck> {
explicit MemRefBoundCheck() : FunctionPass(&MemRefBoundCheck::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
// Not applicable to CFG functions.
PassResult runOnCFGFunction(CFGFunction *f) override { return success(); }
PassResult runOnCFGFunction(Function *f) override { return success(); }
void visitOperationInst(OperationInst *opStmt);
@ -67,10 +67,10 @@ void MemRefBoundCheck::visitOperationInst(OperationInst *opStmt) {
// TODO(bondhugula): do this for DMA ops as well.
}
PassResult MemRefBoundCheck::runOnMLFunction(MLFunction *f) {
PassResult MemRefBoundCheck::runOnMLFunction(Function *f) {
return walk(f), success();
}
static PassRegistration<MemRefBoundCheck>
memRefBoundCheck("memref-bound-check",
"Check memref access bounds in an MLFunction");
"Check memref access bounds in a Function");

View File

@ -37,16 +37,16 @@ using namespace mlir;
namespace {
// TODO(andydavis) Add common surrounding loop depth-wise dependence checks.
/// Checks dependences between all pairs of memref accesses in an MLFunction.
/// Checks dependences between all pairs of memref accesses in a Function.
struct MemRefDependenceCheck : public FunctionPass,
StmtWalker<MemRefDependenceCheck> {
SmallVector<OperationInst *, 4> loadsAndStores;
explicit MemRefDependenceCheck()
: FunctionPass(&MemRefDependenceCheck::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
// Not applicable to CFG functions.
PassResult runOnCFGFunction(CFGFunction *f) override { return success(); }
PassResult runOnCFGFunction(Function *f) override { return success(); }
void visitOperationInst(OperationInst *opStmt) {
if (opStmt->isa<LoadOp>() || opStmt->isa<StoreOp>()) {
@ -166,9 +166,9 @@ static void checkDependences(ArrayRef<OperationInst *> loadsAndStores) {
}
}
// Walks the MLFunction 'f' adding load and store ops to 'loadsAndStores'.
// Walks the Function 'f' adding load and store ops to 'loadsAndStores'.
// Runs pair-wise dependence checks.
PassResult MemRefDependenceCheck::runOnMLFunction(MLFunction *f) {
PassResult MemRefDependenceCheck::runOnMLFunction(Function *f) {
loadsAndStores.clear();
walk(f);
checkDependences(loadsAndStores);

View File

@ -34,10 +34,10 @@ struct PrintOpStatsPass : public FunctionPass, StmtWalker<PrintOpStatsPass> {
PassResult runOnModule(Module *m) override;
// Process CFG function considering the instructions in basic blocks.
PassResult runOnCFGFunction(CFGFunction *function) override;
PassResult runOnCFGFunction(Function *function) override;
// Process ML functions and operation statements in ML functions.
PassResult runOnMLFunction(MLFunction *function) override;
PassResult runOnMLFunction(Function *function) override;
void visitOperationInst(OperationInst *stmt);
// Print summary of op stats.
@ -61,7 +61,7 @@ PassResult PrintOpStatsPass::runOnModule(Module *m) {
return result;
}
PassResult PrintOpStatsPass::runOnCFGFunction(CFGFunction *function) {
PassResult PrintOpStatsPass::runOnCFGFunction(Function *function) {
for (const auto &bb : *function)
for (const auto &inst : bb)
if (auto *op = dyn_cast<OperationInst>(&inst))
@ -73,7 +73,7 @@ void PrintOpStatsPass::visitOperationInst(OperationInst *stmt) {
++opCount[stmt->getName().getStringRef()];
}
PassResult PrintOpStatsPass::runOnMLFunction(MLFunction *function) {
PassResult PrintOpStatsPass::runOnMLFunction(Function *function) {
walk(function);
return success();
}

View File

@ -15,7 +15,7 @@
// limitations under the License.
// =============================================================================
//
// This file implements Analysis functions specific to slicing in MLFunction.
// This file implements Analysis functions specific to slicing in Function.
//
//===----------------------------------------------------------------------===//
@ -30,7 +30,7 @@
#include <type_traits>
///
/// Implements Analysis functions specific to slicing in MLFunction.
/// Implements Analysis functions specific to slicing in Function.
///
using namespace mlir;

View File

@ -129,7 +129,7 @@ Optional<int64_t> MemRefRegion::getBoundingConstantSizeAndShape(
/// Computes the memory region accessed by this memref with the region
/// represented as constraints symbolic/parameteric in 'loopDepth' loops
/// surrounding opStmt and any additional MLFunction symbols. Returns false if
/// surrounding opStmt and any additional Function symbols. Returns false if
/// this fails due to yet unimplemented cases.
// For example, the memref region for this load operation at loopDepth = 1 will
// be as below:

View File

@ -146,11 +146,11 @@ bool Verifier::verifyOperation(const OperationInst &op) {
namespace {
struct CFGFuncVerifier : public Verifier {
const CFGFunction &fn;
const Function &fn;
DominanceInfo domInfo;
CFGFuncVerifier(const CFGFunction &fn)
: Verifier(fn), fn(fn), domInfo(const_cast<CFGFunction *>(&fn)) {}
CFGFuncVerifier(const Function &fn)
: Verifier(fn), fn(fn), domInfo(const_cast<Function *>(&fn)) {}
bool verify();
bool verifyBlock(const BasicBlock &block);
@ -240,10 +240,10 @@ bool CFGFuncVerifier::verifyBlock(const BasicBlock &block) {
namespace {
struct MLFuncVerifier : public Verifier, public StmtWalker<MLFuncVerifier> {
const MLFunction &fn;
const Function &fn;
bool hadError = false;
MLFuncVerifier(const MLFunction &fn) : Verifier(fn), fn(fn) {}
MLFuncVerifier(const Function &fn) : Verifier(fn), fn(fn) {}
void visitOperationInst(OperationInst *opStmt) {
hadError |= verifyOperation(*opStmt);
@ -254,7 +254,7 @@ struct MLFuncVerifier : public Verifier, public StmtWalker<MLFuncVerifier> {
fn.getName().c_str());
// Check basic structural properties.
walk(const_cast<MLFunction *>(&fn));
walk(const_cast<Function *>(&fn));
if (hadError)
return true;
@ -366,9 +366,9 @@ bool Function::verify() const {
// No body, nothing can be wrong here.
return false;
case Kind::CFGFunc:
return CFGFuncVerifier(*cast<CFGFunction>(this)).verify();
return CFGFuncVerifier(*this).verify();
case Kind::MLFunc:
return MLFuncVerifier(*cast<MLFunction>(this)).verify();
return MLFuncVerifier(*this).verify();
}
}

View File

@ -115,8 +115,8 @@ private:
// Visit functions.
void visitFunction(const Function *fn);
void visitExtFunction(const Function *fn);
void visitCFGFunction(const CFGFunction *fn);
void visitMLFunction(const MLFunction *fn);
void visitCFGFunction(const Function *fn);
void visitMLFunction(const Function *fn);
void visitStatement(const Statement *stmt);
void visitForStmt(const ForStmt *forStmt);
void visitIfStmt(const IfStmt *ifStmt);
@ -177,14 +177,14 @@ void ModuleState::visitExtFunction(const Function *fn) {
visitType(fn->getType());
}
void ModuleState::visitCFGFunction(const CFGFunction *fn) {
void ModuleState::visitCFGFunction(const Function *fn) {
visitType(fn->getType());
for (auto &block : *fn) {
for (auto &op : block.getStatements()) {
if (auto *opInst = dyn_cast<OperationInst>(&op))
visitOperation(opInst);
else {
llvm_unreachable("IfStmt/ForStmt in a CFGFunction isn't supported");
llvm_unreachable("IfStmt/ForStmt in a CFG Function isn't supported");
}
}
}
@ -230,7 +230,7 @@ void ModuleState::visitStatement(const Statement *stmt) {
}
}
void ModuleState::visitMLFunction(const MLFunction *fn) {
void ModuleState::visitMLFunction(const Function *fn) {
visitType(fn->getType());
for (auto &stmt : *fn->getBody()) {
ModuleState::visitStatement(&stmt);
@ -1103,7 +1103,7 @@ private:
unsigned nextValueID = 0;
/// This is the ID to assign to the next induction variable.
unsigned nextLoopID = 0;
/// This is the next ID to assign to an MLFunction argument.
/// This is the next ID to assign to a Function argument.
unsigned nextArgumentID = 0;
/// This is the next ID to assign when a name conflict is detected.
@ -1163,9 +1163,9 @@ void FunctionPrinter::printDefaultOp(const OperationInst *op) {
namespace {
class CFGFunctionPrinter : public FunctionPrinter {
public:
CFGFunctionPrinter(const CFGFunction *function, const ModulePrinter &other);
CFGFunctionPrinter(const Function *function, const ModulePrinter &other);
const CFGFunction *getFunction() const { return function; }
const Function *getFunction() const { return function; }
void print();
void print(const BasicBlock *block);
@ -1183,7 +1183,7 @@ public:
}
private:
const CFGFunction *function;
const Function *function;
DenseMap<const BasicBlock *, unsigned> basicBlockIDs;
void numberValuesInBlock(const BasicBlock *block);
@ -1192,7 +1192,7 @@ private:
};
} // end anonymous namespace
CFGFunctionPrinter::CFGFunctionPrinter(const CFGFunction *function,
CFGFunctionPrinter::CFGFunctionPrinter(const Function *function,
const ModulePrinter &other)
: FunctionPrinter(other), function(function) {
// Each basic block gets a unique ID per function.
@ -1319,9 +1319,9 @@ void ModulePrinter::printCFG(const Function *fn) {
namespace {
class MLFunctionPrinter : public FunctionPrinter {
public:
MLFunctionPrinter(const MLFunction *function, const ModulePrinter &other);
MLFunctionPrinter(const Function *function, const ModulePrinter &other);
const MLFunction *getFunction() const { return function; }
const Function *getFunction() const { return function; }
// Prints ML function.
void print();
@ -1349,12 +1349,12 @@ public:
private:
void numberValues();
const MLFunction *function;
const Function *function;
int numSpaces;
};
} // end anonymous namespace
MLFunctionPrinter::MLFunctionPrinter(const MLFunction *function,
MLFunctionPrinter::MLFunctionPrinter(const Function *function,
const ModulePrinter &other)
: FunctionPrinter(other), function(function), numSpaces(0) {
assert(function && "Cannot print nullptr function");
@ -1381,7 +1381,7 @@ void MLFunctionPrinter::numberValues() {
NumberValuesPass pass(this);
// TODO: it'd be cleaner to have constant visitor instead of using const_cast.
pass.walk(const_cast<MLFunction *>(function));
pass.walk(const_cast<Function *>(function));
}
void MLFunctionPrinter::print() {

View File

@ -32,11 +32,11 @@ Function::Function(Kind kind, Location location, StringRef name,
location(location), type(type), blocks(this) {
this->attrs = AttributeListStorage::get(attrs, getContext());
// Creating of an MLFunction automatically populates the entry block and
// Creating of a Function automatically populates the entry block and
// arguments.
// TODO(clattner): Unify this behavior.
if (kind == Kind::MLFunc) {
// The body of an MLFunction always has one block.
// The body of an ML Function always has one block.
auto *entry = new StmtBlock();
blocks.push_back(entry);
@ -158,18 +158,18 @@ bool Function::emitError(const Twine &message) const {
}
//===----------------------------------------------------------------------===//
// MLFunction implementation.
// Function implementation.
//===----------------------------------------------------------------------===//
const OperationInst *MLFunction::getReturnStmt() const {
const OperationInst *Function::getReturnStmt() const {
return cast<OperationInst>(&getBody()->back());
}
OperationInst *MLFunction::getReturnStmt() {
OperationInst *Function::getReturnStmt() {
return cast<OperationInst>(&getBody()->back());
}
void MLFunction::walk(std::function<void(OperationInst *)> callback) {
void Function::walk(std::function<void(OperationInst *)> callback) {
struct Walker : public StmtWalker<Walker> {
std::function<void(OperationInst *)> const &callback;
Walker(std::function<void(OperationInst *)> const &callback)
@ -182,7 +182,7 @@ void MLFunction::walk(std::function<void(OperationInst *)> callback) {
v.walk(this);
}
void MLFunction::walkPostOrder(std::function<void(OperationInst *)> callback) {
void Function::walkPostOrder(std::function<void(OperationInst *)> callback) {
struct Walker : public StmtWalker<Walker> {
std::function<void(OperationInst *)> const &callback;
Walker(std::function<void(OperationInst *)> const &callback)
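A usage note for the callback-based walk defined above (hypothetical helper, not from this commit): any callable convertible to std::function<void(OperationInst *)>, such as a lambda, can be passed.

```cpp
// Sketch: count every OperationInst in `fn` using Function::walk.
unsigned countOperations(mlir::Function *fn) {
  unsigned numOps = 0;
  fn->walk([&numOps](mlir::OperationInst *op) {
    (void)op;
    ++numOps;
  });
  return numOps;
}
```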

View File

@ -82,7 +82,7 @@ Statement *Statement::getParentStmt() const {
return block ? block->getContainingStmt() : nullptr;
}
MLFunction *Statement::getFunction() const {
Function *Statement::getFunction() const {
return block ? block->getFunction() : nullptr;
}

View File

@ -32,7 +32,7 @@ Statement *StmtBlock::getContainingStmt() {
return parent ? parent->getContainingStmt() : nullptr;
}
MLFunction *StmtBlock::getFunction() {
Function *StmtBlock::getFunction() {
StmtBlock *block = this;
while (auto *stmt = block->getContainingStmt()) {
block = stmt->getBlock();
@ -143,7 +143,7 @@ StmtBlock *StmtBlock::getSinglePredecessor() {
// Other
//===----------------------------------------------------------------------===//
/// Unlink this BasicBlock from its CFGFunction and delete it.
/// Unlink this BasicBlock from its Function and delete it.
void BasicBlock::eraseFromFunction() {
assert(getFunction() && "BasicBlock has no parent");
getFunction()->getBlocks().erase(this);
@ -163,7 +163,7 @@ BasicBlock *BasicBlock::splitBasicBlock(iterator splitBefore) {
// Start by creating a new basic block, and insert it immediately after this
// one in the containing function.
auto newBB = new BasicBlock();
getFunction()->getBlocks().insert(++CFGFunction::iterator(this), newBB);
getFunction()->getBlocks().insert(++Function::iterator(this), newBB);
auto branchLoc =
splitBefore == end() ? getTerminator()->getLoc() : splitBefore->getLoc();
@ -186,9 +186,7 @@ StmtBlockList::StmtBlockList(Function *container) : container(container) {}
StmtBlockList::StmtBlockList(Statement *container) : container(container) {}
CFGFunction *StmtBlockList::getFunction() {
return dyn_cast_or_null<CFGFunction>(getContainingFunction());
}
Function *StmtBlockList::getFunction() { return getContainingFunction(); }
Statement *StmtBlockList::getContainingStmt() {
return container.dyn_cast<Statement *>();

View File

@ -71,7 +71,7 @@ MLIRContext *IROperandOwner::getContext() const {
//===----------------------------------------------------------------------===//
/// Return the function that this argument is defined in.
MLFunction *BlockArgument::getFunction() {
Function *BlockArgument::getFunction() {
if (auto *owner = getOwner())
return owner->getFunction();
return nullptr;

View File

@ -2560,11 +2560,11 @@ OperationInst *FunctionParser::parseCustomOperation(
namespace {
/// This is a specialized parser for CFGFunction's, maintaining the state
/// This is a specialized parser for Functions, maintaining the state
/// transient to their bodies.
class CFGFunctionParser : public FunctionParser {
public:
CFGFunctionParser(ParserState &state, CFGFunction *function)
CFGFunctionParser(ParserState &state, Function *function)
: FunctionParser(state, Kind::CFGFunc), function(function),
builder(function) {}
@ -2574,7 +2574,7 @@ public:
SmallVectorImpl<Value *> &operands);
private:
CFGFunction *function;
Function *function;
llvm::StringMap<std::pair<BasicBlock *, SMLoc>> blocksByName;
DenseMap<BasicBlock *, SMLoc> forwardRef;
@ -2770,17 +2770,17 @@ ParseResult CFGFunctionParser::parseBasicBlock() {
//===----------------------------------------------------------------------===//
namespace {
/// Refined parser for MLFunction bodies.
/// Refined parser for Function bodies.
class MLFunctionParser : public FunctionParser {
public:
MLFunctionParser(ParserState &state, MLFunction *function)
MLFunctionParser(ParserState &state, Function *function)
: FunctionParser(state, Kind::MLFunc), function(function),
builder(function->getBody()) {}
ParseResult parseFunctionBody();
private:
MLFunction *function;
Function *function;
/// This builder intentionally shadows the builder in the base class, with a
/// more specific builder type.
@ -3271,7 +3271,7 @@ ParseResult ModuleParser::parseAffineStructureDef() {
return ParseSuccess;
}
/// Parse a (possibly empty) list of MLFunction arguments with types.
/// Parse a (possibly empty) list of Function arguments with types.
///
/// ml-argument ::= ssa-id `:` type
/// ml-argument-list ::= ml-argument (`,` ml-argument)* | /*empty*/

View File

@ -1005,7 +1005,7 @@ bool LoadOp::verify() const {
// TODO: Verify we have the right number of indices.
// TODO: in MLFunction verify that the indices are parameters, IV's, or the
// TODO: in Function verify that the indices are parameters, IV's, or the
// result of an affine_apply.
return false;
}
@ -1255,7 +1255,7 @@ bool StoreOp::verify() const {
// TODO: Verify we have the right number of indices.
// TODO: in MLFunction verify that the indices are parameters, IV's, or the
// TODO: in Function verify that the indices are parameters, IV's, or the
// result of an affine_apply.
return false;
}

View File

@ -53,11 +53,11 @@ public:
private:
bool convertBasicBlock(const BasicBlock &bb, bool ignoreArguments = false);
bool convertCFGFunction(const CFGFunction &cfgFunc, llvm::Function &llvmFunc);
bool convertCFGFunction(const Function &cfgFunc, llvm::Function &llvmFunc);
bool convertFunctions(const Module &mlirModule, llvm::Module &llvmModule);
bool convertInstruction(const OperationInst &inst);
void connectPHINodes(const CFGFunction &cfgFunc);
void connectPHINodes(const Function &cfgFunc);
/// Type conversion functions. If any conversion fails, report errors to the
/// context of the MLIR type and return nullptr.
@ -799,7 +799,7 @@ static const Value *getPHISourceValue(const BasicBlock *current,
return nullptr;
}
void ModuleLowerer::connectPHINodes(const CFGFunction &cfgFunc) {
void ModuleLowerer::connectPHINodes(const Function &cfgFunc) {
// Skip the first block, it cannot be branched to and its arguments correspond
// to the arguments of the LLVM function.
for (auto it = std::next(cfgFunc.begin()), eit = cfgFunc.end(); it != eit;
@ -821,7 +821,7 @@ void ModuleLowerer::connectPHINodes(const CFGFunction &cfgFunc) {
}
}
bool ModuleLowerer::convertCFGFunction(const CFGFunction &cfgFunc,
bool ModuleLowerer::convertCFGFunction(const Function &cfgFunc,
llvm::Function &llvmFunc) {
// Clear the block mapping. Blocks belong to a function, no need to keep
// blocks from the previous functions around. Furthermore, we use this
@ -868,10 +868,10 @@ bool ModuleLowerer::convertFunctions(const Module &mlirModule,
continue;
llvm::Function *llvmFunc = functionMapping[functionPtr];
// Add function arguments to the value remapping table. In CFGFunction,
// Add function arguments to the value remapping table. In Function,
// arguments of the first block are those of the function.
assert(!functionPtr->getBlocks().empty() &&
"expected at least one basic block in a CFGFunction");
"expected at least one basic block in a Function");
const BasicBlock &firstBlock = *functionPtr->begin();
for (auto arg : llvm::enumerate(llvmFunc->args())) {
valueMapping[firstBlock.getArgument(arg.index())] = &arg.value();

View File

@ -43,8 +43,8 @@ namespace {
struct CSE : public FunctionPass {
CSE() : FunctionPass(&CSE::passID) {}
PassResult runOnCFGFunction(CFGFunction *f) override;
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnCFGFunction(Function *f) override;
PassResult runOnMLFunction(Function *f) override;
static char passID;
};
@ -162,7 +162,7 @@ struct CFGCSE : public CSEImpl {
bool processed;
};
void run(CFGFunction *f) {
void run(Function *f) {
// Note, deque is being used here because there were significant performance
// gains over vector when the container becomes very large due to the
// specific access patterns. If/when these performance issues are no
@ -210,7 +210,7 @@ struct CFGCSE : public CSEImpl {
struct MLCSE : public CSEImpl, StmtWalker<MLCSE> {
using StmtWalker<MLCSE>::walk;
void run(MLFunction *f) {
void run(Function *f) {
// Walk the function statements.
walk(f);
@ -231,12 +231,12 @@ struct MLCSE : public CSEImpl, StmtWalker<MLCSE> {
char CSE::passID = 0;
PassResult CSE::runOnCFGFunction(CFGFunction *f) {
PassResult CSE::runOnCFGFunction(Function *f) {
CFGCSE().run(f);
return success();
}
PassResult CSE::runOnMLFunction(MLFunction *f) {
PassResult CSE::runOnMLFunction(Function *f) {
MLCSE().run(f);
return success();
}

View File

@ -16,7 +16,7 @@
// =============================================================================
//
// This file implements a testing pass which composes affine maps from
// AffineApplyOps in an MLFunction, by forward substituting results from an
// AffineApplyOps in a Function, by forward substituting results from an
// AffineApplyOp into any of its users which are also AffineApplyOps.
//
//===----------------------------------------------------------------------===//
@ -36,7 +36,7 @@ using namespace mlir;
namespace {
// ComposeAffineMaps walks stmt blocks in an MLFunction, and for each
// ComposeAffineMaps walks stmt blocks in a Function, and for each
// AffineApplyOp, forward substitutes its results into any users which are
// also AffineApplyOps. After forward substituting its results, AffineApplyOps
// with no remaining uses are collected and erased after the walk.
@ -48,7 +48,7 @@ struct ComposeAffineMaps : public FunctionPass, StmtWalker<ComposeAffineMaps> {
using StmtListType = llvm::iplist<Statement>;
void walk(StmtListType::iterator Start, StmtListType::iterator End);
void visitOperationInst(OperationInst *stmt);
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
using StmtWalker<ComposeAffineMaps>::walk;
static char passID;
@ -88,7 +88,7 @@ void ComposeAffineMaps::visitOperationInst(OperationInst *opStmt) {
}
}
PassResult ComposeAffineMaps::runOnMLFunction(MLFunction *f) {
PassResult ComposeAffineMaps::runOnMLFunction(Function *f) {
affineApplyOpsToErase.clear();
walk(f);
for (auto *opStmt : affineApplyOpsToErase) {

View File

@ -40,8 +40,8 @@ struct ConstantFold : public FunctionPass, StmtWalker<ConstantFold> {
ConstantFactoryType constantFactory);
void visitOperationInst(OperationInst *stmt);
void visitForStmt(ForStmt *stmt);
PassResult runOnCFGFunction(CFGFunction *f) override;
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnCFGFunction(Function *f) override;
PassResult runOnMLFunction(Function *f) override;
static char passID;
};
@ -103,7 +103,7 @@ bool ConstantFold::foldOperation(OperationInst *op,
// For now, we do a simple top-down pass over a function folding constants. We
// don't handle conditional control flow, constant PHI nodes, folding
// conditional branches, or anything else fancy.
PassResult ConstantFold::runOnCFGFunction(CFGFunction *f) {
PassResult ConstantFold::runOnCFGFunction(Function *f) {
existingConstants.clear();
FuncBuilder builder(f);
@ -155,7 +155,7 @@ void ConstantFold::visitForStmt(ForStmt *forStmt) {
constantFoldBounds(forStmt);
}
PassResult ConstantFold::runOnMLFunction(MLFunction *f) {
PassResult ConstantFold::runOnMLFunction(Function *f) {
existingConstants.clear();
opStmtsToErase.clear();

View File

@ -41,9 +41,8 @@ namespace {
// Generates CFG function equivalent to the given ML function.
class FunctionConverter : public StmtVisitor<FunctionConverter> {
public:
FunctionConverter(CFGFunction *cfgFunc)
: cfgFunc(cfgFunc), builder(cfgFunc) {}
CFGFunction *convert(MLFunction *mlFunc);
FunctionConverter(Function *cfgFunc) : cfgFunc(cfgFunc), builder(cfgFunc) {}
Function *convert(Function *mlFunc);
void visitForStmt(ForStmt *forStmt);
void visitIfStmt(IfStmt *ifStmt);
@ -56,7 +55,7 @@ private:
Location loc, CmpIPredicate predicate,
llvm::iterator_range<OperationInst::result_iterator> values);
CFGFunction *cfgFunc;
Function *cfgFunc;
FuncBuilder builder;
// Mapping between original Values and lowered Values.
@ -455,7 +454,7 @@ void FunctionConverter::visitIfStmt(IfStmt *ifStmt) {
// Entry point of the function convertor.
//
// Conversion is performed by recursively visiting statements of an MLFunction.
// Conversion is performed by recursively visiting statements of a Function.
// It reasons in terms of single-entry single-exit (SESE) regions that are not
// materialized in the code. Instead, the pointer to the last block of the
// region is maintained throughout the conversion as the insertion point of the
@ -471,11 +470,11 @@ void FunctionConverter::visitIfStmt(IfStmt *ifStmt) {
// construction. When a Value is used, it gets replaced with the
// corresponding Value that has been defined previously. The value flow
// starts with function arguments converted to basic block arguments.
CFGFunction *FunctionConverter::convert(MLFunction *mlFunc) {
Function *FunctionConverter::convert(Function *mlFunc) {
auto outerBlock = builder.createBlock();
// CFGFunctions do not have explicit arguments but use the arguments to the
// first basic block instead. Create those from the MLFunction arguments and
// first basic block instead. Create those from the Function arguments and
// set up the value remapping.
outerBlock->addArguments(mlFunc->getType().getInputs());
assert(mlFunc->getNumArguments() == outerBlock->getNumArguments());
@ -511,17 +510,17 @@ private:
// Generates CFG functions for all ML functions in the module.
void convertMLFunctions();
// Generates CFG function for the given ML function.
CFGFunction *convert(MLFunction *mlFunc);
Function *convert(Function *mlFunc);
// Replaces all ML function references in the module
// with references to the generated CFG functions.
void replaceReferences();
// Replaces function references in the given function.
void replaceReferences(CFGFunction *cfgFunc);
void replaceReferences(Function *cfgFunc);
// Replaces MLFunctions with their CFG counterparts in the module.
void replaceFunctions();
// Map from ML functions to generated CFG functions.
llvm::DenseMap<MLFunction *, CFGFunction *> generatedFuncs;
llvm::DenseMap<Function *, Function *> generatedFuncs;
Module *module = nullptr;
};
} // end anonymous namespace
@ -554,7 +553,7 @@ void ModuleConverter::convertMLFunctions() {
}
// Creates CFG function equivalent to the given ML function.
CFGFunction *ModuleConverter::convert(MLFunction *mlFunc) {
Function *ModuleConverter::convert(Function *mlFunc) {
// Use the same name as for ML function; do not add the converted function to
// the module yet to avoid collision.
auto name = mlFunc->getName().str();
@ -578,7 +577,7 @@ void ModuleConverter::replaceReferences() {
for (const Function &fn : *module) {
if (!fn.isML())
continue;
CFGFunction *convertedFunc = generatedFuncs.lookup(&fn);
Function *convertedFunc = generatedFuncs.lookup(&fn);
assert(convertedFunc && "ML function was not converted");
MLIRContext *context = module->getContext();
@ -597,11 +596,11 @@ void ModuleConverter::replaceReferences() {
}
// Replace the value of a function attribute named "name" attached to the
// operation "op" and containing an MLFunction-typed value with the result of
// converting "func" to a CFGFunction.
// operation "op" and containing a Function-typed value with the result of
// converting "func" to a Function.
static inline void replaceMLFunctionAttr(
OperationInst &op, Identifier name, const Function *func,
const llvm::DenseMap<MLFunction *, CFGFunction *> &generatedFuncs) {
const llvm::DenseMap<Function *, Function *> &generatedFuncs) {
if (!func->isML())
return;
@ -610,8 +609,8 @@ static inline void replaceMLFunctionAttr(
op.setAttr(name, b.getFunctionAttr(cfgFunc));
}
// The CFG and ML functions have the same name. First, erase the MLFunction.
// Then insert the CFGFunction at the same place.
// The CFG and ML functions have the same name. First, erase the Function.
// Then insert the Function at the same place.
void ModuleConverter::replaceFunctions() {
for (auto pair : generatedFuncs) {
auto &functions = module->getFunctions();

View File

@ -63,8 +63,8 @@ struct DmaGeneration : public FunctionPass, StmtWalker<DmaGeneration> {
}
// Not applicable to CFG functions.
PassResult runOnCFGFunction(CFGFunction *f) override { return success(); }
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnCFGFunction(Function *f) override { return success(); }
PassResult runOnMLFunction(Function *f) override;
void runOnForStmt(ForStmt *forStmt);
void visitOperationInst(OperationInst *opStmt);
@ -425,7 +425,7 @@ void DmaGeneration::runOnForStmt(ForStmt *forStmt) {
<< " KiB of DMA buffers in fast memory space\n";);
}
PassResult DmaGeneration::runOnMLFunction(MLFunction *f) {
PassResult DmaGeneration::runOnMLFunction(Function *f) {
for (auto &stmt : *f->getBody()) {
if (auto *forStmt = dyn_cast<ForStmt>(&stmt)) {
runOnForStmt(forStmt);

View File

@ -70,7 +70,7 @@ namespace {
struct LoopFusion : public FunctionPass {
LoopFusion() : FunctionPass(&LoopFusion::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
static char passID;
};
@ -158,7 +158,7 @@ public:
};
// MemRefDependenceGraph is a graph data structure where graph nodes are
// top-level statements in an MLFunction which contain load/store ops, and edges
// top-level statements in a Function which contain load/store ops, and edges
// are memref dependences between the nodes.
// TODO(andydavis) Add a depth parameter to dependence graph construction.
struct MemRefDependenceGraph {
@ -217,7 +217,7 @@ public:
// Initializes the dependence graph based on operations in 'f'.
// Returns true on success, false otherwise.
bool init(MLFunction *f);
bool init(Function *f);
// Returns the graph node for 'id'.
Node *getNode(unsigned id) {
@ -345,7 +345,7 @@ public:
// Assigns each node in the graph a node id based on program order in 'f'.
// TODO(andydavis) Add support for taking a StmtBlock arg to construct the
// dependence graph at a different depth.
bool MemRefDependenceGraph::init(MLFunction *f) {
bool MemRefDependenceGraph::init(Function *f) {
unsigned id = 0;
DenseMap<Value *, SetVector<unsigned>> memrefAccesses;
for (auto &stmt : *f->getBody()) {
@ -415,7 +415,7 @@ bool MemRefDependenceGraph::init(MLFunction *f) {
// GreedyFusion greedily fuses loop nests which have a producer/consumer
// relationship on a memref, with the goal of improving locality. Currently,
// this the producer/consumer relationship is required to be unique in the
// MLFunction (there are TODOs to relax this constraint in the future).
// Function (there are TODOs to relax this constraint in the future).
//
// The steps of the algorithm are as follows:
//
@ -425,7 +425,7 @@ bool MemRefDependenceGraph::init(MLFunction *f) {
// destination ForStmt into which fusion will be attempted.
// *) Add each LoadOp currently in 'dstForStmt' into list 'dstLoadOps'.
// *) For each LoadOp in 'dstLoadOps' do:
// *) Lookup dependent loop nests at earlier positions in the MLFunction
// *) Lookup dependent loop nests at earlier positions in the Function
// which have a single store op to the same memref.
// *) Check if dependences would be violated by the fusion. For example,
// the src loop nest may load from memrefs which are different than
@ -549,7 +549,7 @@ public:
} // end anonymous namespace
PassResult LoopFusion::runOnMLFunction(MLFunction *f) {
PassResult LoopFusion::runOnMLFunction(Function *f) {
MemRefDependenceGraph g;
if (g.init(f))
GreedyFusion(&g).run();

View File

@ -38,10 +38,10 @@ static llvm::cl::opt<unsigned>
namespace {
/// A pass to perform loop tiling on all suitable loop nests of an MLFunction.
/// A pass to perform loop tiling on all suitable loop nests of a Function.
struct LoopTiling : public FunctionPass {
LoopTiling() : FunctionPass(&LoopTiling::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
constexpr static unsigned kDefaultTileSize = 4;
static char passID;
@ -52,7 +52,7 @@ struct LoopTiling : public FunctionPass {
char LoopTiling::passID = 0;
/// Creates a pass to perform loop tiling on all suitable loop nests of an
/// MLFunction.
/// Function.
FunctionPass *mlir::createLoopTilingPass() { return new LoopTiling(); }
// Move the loop body of ForStmt 'src' from 'src' into the specified location in
@ -214,7 +214,7 @@ UtilResult mlir::tileCodeGen(ArrayRef<ForStmt *> band,
// Identify valid and profitable bands of loops to tile. This is currently just
// a temporary placeholder to test the mechanics of tiled code generation.
// Returns all maximal outermost perfect loop nests to tile.
static void getTileableBands(MLFunction *f,
static void getTileableBands(Function *f,
std::vector<SmallVector<ForStmt *, 6>> *bands) {
// Get maximal perfect nest of 'for' stmts starting from root (inclusive).
auto getMaximalPerfectLoopNest = [&](ForStmt *root) {
@ -235,7 +235,7 @@ static void getTileableBands(MLFunction *f,
}
}
PassResult LoopTiling::runOnMLFunction(MLFunction *f) {
PassResult LoopTiling::runOnMLFunction(Function *f) {
std::vector<SmallVector<ForStmt *, 6>> bands;
getTileableBands(f, &bands);

View File

@ -70,7 +70,7 @@ struct LoopUnroll : public FunctionPass {
: FunctionPass(&LoopUnroll::passID), unrollFactor(unrollFactor),
unrollFull(unrollFull), getUnrollFactor(getUnrollFactor) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
/// Unroll this for stmt. Returns false if nothing was done.
bool runOnForStmt(ForStmt *forStmt);
@ -83,7 +83,7 @@ struct LoopUnroll : public FunctionPass {
char LoopUnroll::passID = 0;
PassResult LoopUnroll::runOnMLFunction(MLFunction *f) {
PassResult LoopUnroll::runOnMLFunction(Function *f) {
// Gathers all innermost loops through a post order pruned walk.
class InnermostLoopGatherer : public StmtWalker<InnermostLoopGatherer, bool> {
public:

View File

@ -65,7 +65,7 @@ static llvm::cl::opt<unsigned>
namespace {
/// Loop unroll jam pass. Currently, this just unroll jams the first
/// outer loop in an MLFunction.
/// outer loop in a Function.
struct LoopUnrollAndJam : public FunctionPass {
Optional<unsigned> unrollJamFactor;
static const unsigned kDefaultUnrollJamFactor = 4;
@ -74,7 +74,7 @@ struct LoopUnrollAndJam : public FunctionPass {
: FunctionPass(&LoopUnrollAndJam::passID),
unrollJamFactor(unrollJamFactor) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
bool runOnForStmt(ForStmt *forStmt);
static char passID;
@ -88,7 +88,7 @@ FunctionPass *mlir::createLoopUnrollAndJamPass(int unrollJamFactor) {
unrollJamFactor == -1 ? None : Optional<unsigned>(unrollJamFactor));
}
PassResult LoopUnrollAndJam::runOnMLFunction(MLFunction *f) {
PassResult LoopUnrollAndJam::runOnMLFunction(Function *f) {
// Currently, just the outermost loop from the first loop nest is
// unroll-and-jammed by this pass. However, runOnForStmt can be called on any
// for Stmt.
@ -165,8 +165,8 @@ bool mlir::loopUnrollJamByFactor(ForStmt *forStmt, uint64_t unrollJamFactor) {
auto ubMap = forStmt->getUpperBoundMap();
// Loops with max/min expressions won't be unrolled here (the output can't be
// expressed as an MLFunction in the general case). However, the right way to
// do such unrolling for an MLFunction would be to specialize the loop for the
// expressed as a Function in the general case). However, the right way to
// do such unrolling for a Function would be to specialize the loop for the
// 'hotspot' case and unroll that hotspot.
if (lbMap.getNumResults() != 1 || ubMap.getNumResults() != 1)
return false;

View File

@ -35,8 +35,8 @@ struct LowerAffineApply : public FunctionPass {
explicit LowerAffineApply() : FunctionPass(&LowerAffineApply::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnCFGFunction(CFGFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
PassResult runOnCFGFunction(Function *f) override;
static char passID;
};
@ -45,13 +45,13 @@ struct LowerAffineApply : public FunctionPass {
char LowerAffineApply::passID = 0;
PassResult LowerAffineApply::runOnMLFunction(MLFunction *f) {
PassResult LowerAffineApply::runOnMLFunction(Function *f) {
f->emitError("ML Functions contain syntactically hidden affine_apply's that "
"cannot be expanded");
return failure();
}
PassResult LowerAffineApply::runOnCFGFunction(CFGFunction *f) {
PassResult LowerAffineApply::runOnCFGFunction(Function *f) {
for (BasicBlock &bb : *f) {
// Handle iterators with care because we erase in the same loop.
// In particular, step to the next element before erasing the current one.

View File

@ -108,7 +108,7 @@ static void rewriteAsLoops(VectorTransferOpTy *transfer,
auto vectorMemRefType = MemRefType::get({1}, vectorType, {}, 0);
// Get the ML function builder.
// We need access to the MLFunction builder stored internally in the
// We need access to the Function builder stored internally in the
// MLFunctionLoweringRewriter because the general rewriting API does not provide
// ML-specific functions (ForStmt and StmtBlock manipulation). While we could
// forward them or define a whole rewriting chain based on MLFunctionBuilder
@ -233,7 +233,7 @@ struct LowerVectorTransfersPass
: MLPatternLoweringPass(&LowerVectorTransfersPass::passID) {}
std::unique_ptr<MLFuncGlobalLoweringState>
makeFuncWiseState(MLFunction *f) const override {
makeFuncWiseState(Function *f) const override {
auto state = llvm::make_unique<LowerVectorTransfersState>();
auto builder = FuncBuilder(f);
builder.setInsertionPointToStart(f->getBody());

View File

@ -77,7 +77,7 @@
/// words, this pass operates on a scoped program slice. Furthermore, since we
/// do not vectorize in the presence of conditionals for now, sliced chains are
/// guaranteed not to escape the innermost scope, which has to be either the top
/// MLFunction scope or the innermost loop scope, by construction. As a
/// Function scope or the innermost loop scope, by construction. As a
/// consequence, the implementation just starts from vector_transfer_write
/// operations and builds the slice scoped the innermost loop enclosing the
/// current vector_transfer_write. These assumptions and the implementation
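To make the slicing idea above concrete, here is a small standalone sketch (hypothetical types and scope test, not the actual matcher or slicing utilities): a backward slice is built from a write operation by walking use-def edges with a worklist, never following defs that come from an enclosing scope.

```cpp
// Hypothetical sketch: collect every op that transitively feeds 'root' and
// lives at (or below) root's lexical scope.
#include <set>
#include <vector>

struct OpSketch {
  int scopeDepth;                            // lexical nesting depth of the op
  std::vector<OpSketch *> operandsDefinedBy; // ops defining the operands
};

static std::set<OpSketch *> backwardSlice(OpSketch *root) {
  std::set<OpSketch *> slice;
  std::vector<OpSketch *> worklist = {root};
  while (!worklist.empty()) {
    OpSketch *op = worklist.back();
    worklist.pop_back();
    if (!slice.insert(op).second)
      continue; // already visited
    for (OpSketch *def : op->operandsDefinedBy)
      if (def->scopeDepth >= root->scopeDepth) // stay inside the scope
        worklist.push_back(def);
  }
  return slice;
}

int main() {
  OpSketch a{1, {}}, b{1, {&a}}, write{1, {&b}};
  return backwardSlice(&write).size() == 3 ? 0 : 1; // slice = {a, b, write}
}
```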
@ -196,7 +196,7 @@ struct MaterializationState {
struct MaterializeVectorsPass : public FunctionPass {
MaterializeVectorsPass() : FunctionPass(&MaterializeVectorsPass::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
// Thread-safe RAII contexts local to pass, BumpPtrAllocator freed on exit.
MLFunctionMatcherContext mlContext;
@ -650,7 +650,7 @@ static bool emitSlice(MaterializationState *state,
/// Additionally, this set is limited to statements in the same lexical scope
/// because we currently disallow vectorization of defs that come from another
/// scope.
static bool materialize(MLFunction *f,
static bool materialize(Function *f,
const SetVector<OperationInst *> &terminators,
MaterializationState *state) {
DenseSet<Statement *> seen;
@ -709,9 +709,9 @@ static bool materialize(MLFunction *f,
return false;
}
PassResult MaterializeVectorsPass::runOnMLFunction(MLFunction *f) {
PassResult MaterializeVectorsPass::runOnMLFunction(Function *f) {
using matcher::Op;
LLVM_DEBUG(dbgs() << "\nMaterializeVectors on MLFunction\n");
LLVM_DEBUG(dbgs() << "\nMaterializeVectors on Function\n");
LLVM_DEBUG(f->print(dbgs()));
MaterializationState state;

View File

@ -41,7 +41,7 @@ namespace {
struct PipelineDataTransfer : public FunctionPass,
StmtWalker<PipelineDataTransfer> {
PipelineDataTransfer() : FunctionPass(&PipelineDataTransfer::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
PassResult runOnForStmt(ForStmt *forStmt);
// Collect all 'for' statements.
@ -137,7 +137,7 @@ static bool doubleBuffer(Value *oldMemRef, ForStmt *forStmt) {
}
/// Returns success if the IR is in a valid state.
PassResult PipelineDataTransfer::runOnMLFunction(MLFunction *f) {
PassResult PipelineDataTransfer::runOnMLFunction(Function *f) {
// Do a post order walk so that inner loop DMAs are processed first. This is
// necessary since 'for' statements nested within would otherwise become
// invalid (erased) when the outer loop is pipelined (the pipelined one gets
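The post-order requirement mentioned above can be illustrated with a tiny standalone sketch (hypothetical types, not the pass's walker): children are processed before their parent, so a transformation that rewrites or erases the parent never touches a child that is still pending.

```cpp
// Hypothetical sketch of a post-order loop walk: inner loops first.
#include <cstdio>
#include <string>
#include <vector>

struct LoopSketch {
  std::string name;
  std::vector<LoopSketch> children;
};

static void postOrderProcess(const LoopSketch &loop) {
  for (const auto &child : loop.children)
    postOrderProcess(child); // handle inner loops before the outer one
  std::printf("processing %s\n", loop.name.c_str());
}

int main() {
  LoopSketch outer{"outer", {LoopSketch{"inner", {}}}};
  postOrderProcess(outer); // prints "inner" before "outer"
}
```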

View File

@ -33,7 +33,7 @@ using llvm::report_fatal_error;
namespace {
/// Simplifies all affine expressions appearing in the operation statements of
/// the MLFunction. This is mainly to test the simplifyAffineExpr method.
/// the Function. This is mainly to test the simplifyAffineExpr method.
// TODO(someone): Gradually, extend this to all affine map references found in
// ML functions and CFG functions.
struct SimplifyAffineStructures : public FunctionPass,
@ -41,10 +41,10 @@ struct SimplifyAffineStructures : public FunctionPass,
explicit SimplifyAffineStructures()
: FunctionPass(&SimplifyAffineStructures::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
// Does nothing on CFG functions for now. No reusable walkers/visitors exist
// for this yet? TODO(someone).
PassResult runOnCFGFunction(CFGFunction *f) override { return success(); }
PassResult runOnCFGFunction(Function *f) override { return success(); }
void visitIfStmt(IfStmt *ifStmt);
void visitOperationInst(OperationInst *opStmt);
@ -86,7 +86,7 @@ void SimplifyAffineStructures::visitOperationInst(OperationInst *opStmt) {
}
}
PassResult SimplifyAffineStructures::runOnMLFunction(MLFunction *f) {
PassResult SimplifyAffineStructures::runOnMLFunction(Function *f) {
walk(f);
return success();
}

View File

@ -255,7 +255,7 @@ void GreedyPatternRewriteDriver::simplifyFunction(Function *currentFunction,
uniquedConstants.clear();
}
static void processMLFunction(MLFunction *fn,
static void processMLFunction(Function *fn,
OwningRewritePatternList &&patterns) {
class MLFuncRewriter : public WorklistRewriter {
public:
@ -287,7 +287,7 @@ static void processMLFunction(MLFunction *fn,
driver.simplifyFunction(fn, rewriter);
}
static void processCFGFunction(CFGFunction *fn,
static void processCFGFunction(Function *fn,
OwningRewritePatternList &&patterns) {
class CFGFuncRewriter : public WorklistRewriter {
public:

View File

@ -122,9 +122,9 @@ bool mlir::promoteIfSingleIteration(ForStmt *forStmt) {
return true;
}
/// Promotes all single iteration for stmt's in the MLFunction, i.e., moves
/// Promotes all single iteration for stmt's in the Function, i.e., moves
/// their body into the containing StmtBlock.
void mlir::promoteSingleIterationLoops(MLFunction *f) {
void mlir::promoteSingleIterationLoops(Function *f) {
// Gathers all innermost loops through a post order pruned walk.
class LoopBodyPromoter : public StmtWalker<LoopBodyPromoter> {
public:
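As a loose illustration of the promotion described in the comment above (a standalone sketch with made-up types, not mlir::promoteIfSingleIteration itself): when the bounds yield exactly one iteration, the loop's body is spliced into the enclosing block in place of the loop.

```cpp
// Hypothetical sketch: replace a provably single-iteration loop by its body.
#include <iterator>
#include <list>
#include <string>

struct SingleIterationLoopSketch {
  int lb, ub, step;            // constant bounds for this sketch
  std::list<std::string> body; // statements inside the loop
};

// Splices 'loop.body' into 'parent' at 'where' if the loop runs exactly once.
static bool promoteIfSingleIteration(SingleIterationLoopSketch &loop,
                                     std::list<std::string> &parent,
                                     std::list<std::string>::iterator where) {
  int tripCount = (loop.ub - loop.lb + loop.step - 1) / loop.step;
  if (tripCount != 1)
    return false;
  parent.splice(where, loop.body); // move the body where the loop used to be
  return true;
}

int main() {
  std::list<std::string> block = {"stmt.before", "stmt.after"};
  SingleIterationLoopSketch loop{0, 1, 1, {"loop.body.stmt"}};
  return promoteIfSingleIteration(loop, block, std::next(block.begin())) ? 0 : 1;
}
```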
@ -357,8 +357,8 @@ bool mlir::loopUnrollByFactor(ForStmt *forStmt, uint64_t unrollFactor) {
auto ubMap = forStmt->getUpperBoundMap();
// Loops with max/min expressions won't be unrolled here (the output can't be
// expressed as an MLFunction in the general case). However, the right way to
// do such unrolling for an MLFunction would be to specialize the loop for the
// expressed as a Function in the general case). However, the right way to
// do such unrolling for a Function would be to specialize the loop for the
// 'hotspot' case and unroll that hotspot.
if (lbMap.getNumResults() != 1 || ubMap.getNumResults() != 1)
return false;

View File

@ -434,7 +434,7 @@ void mlir::remapFunctionAttrs(
void mlir::remapFunctionAttrs(
Function &fn, const DenseMap<Attribute, FunctionAttr> &remappingTable) {
// Look at all instructions in a CFGFunction.
// Look at all instructions in a Function.
if (fn.isCFG()) {
for (auto &bb : fn.getBlockList()) {
for (auto &inst : bb) {

View File

@ -73,12 +73,12 @@ struct VectorizerTestPass : public FunctionPass {
static constexpr auto kTestAffineMapAttrName = "affine_map";
VectorizerTestPass() : FunctionPass(&VectorizerTestPass::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
void testVectorShapeRatio(MLFunction *f);
void testForwardSlicing(MLFunction *f);
void testBackwardSlicing(MLFunction *f);
void testSlicing(MLFunction *f);
void testComposeMaps(MLFunction *f);
PassResult runOnMLFunction(Function *f) override;
void testVectorShapeRatio(Function *f);
void testForwardSlicing(Function *f);
void testBackwardSlicing(Function *f);
void testSlicing(Function *f);
void testComposeMaps(Function *f);
// Thread-safe RAII contexts local to pass, BumpPtrAllocator freed on exit.
MLFunctionMatcherContext MLContext;
@ -90,7 +90,7 @@ struct VectorizerTestPass : public FunctionPass {
char VectorizerTestPass::passID = 0;
void VectorizerTestPass::testVectorShapeRatio(MLFunction *f) {
void VectorizerTestPass::testVectorShapeRatio(Function *f) {
using matcher::Op;
SmallVector<int, 8> shape(clTestVectorShapeRatio.begin(),
clTestVectorShapeRatio.end());
@ -139,7 +139,7 @@ static std::string toString(Statement *stmt) {
return res;
}
static MLFunctionMatches matchTestSlicingOps(MLFunction *f) {
static MLFunctionMatches matchTestSlicingOps(Function *f) {
// Just use a custom op name for this test, it makes life easier.
constexpr auto kTestSlicingOpName = "slicing-test-op";
using functional::map;
@ -153,7 +153,7 @@ static MLFunctionMatches matchTestSlicingOps(MLFunction *f) {
return pat.match(f);
}
void VectorizerTestPass::testBackwardSlicing(MLFunction *f) {
void VectorizerTestPass::testBackwardSlicing(Function *f) {
auto matches = matchTestSlicingOps(f);
for (auto m : matches) {
SetVector<Statement *> backwardSlice;
@ -166,7 +166,7 @@ void VectorizerTestPass::testBackwardSlicing(MLFunction *f) {
}
}
void VectorizerTestPass::testForwardSlicing(MLFunction *f) {
void VectorizerTestPass::testForwardSlicing(Function *f) {
auto matches = matchTestSlicingOps(f);
for (auto m : matches) {
SetVector<Statement *> forwardSlice;
@ -179,7 +179,7 @@ void VectorizerTestPass::testForwardSlicing(MLFunction *f) {
}
}
void VectorizerTestPass::testSlicing(MLFunction *f) {
void VectorizerTestPass::testSlicing(Function *f) {
auto matches = matchTestSlicingOps(f);
for (auto m : matches) {
SetVector<Statement *> staticSlice = getSlice(m.first);
@ -197,7 +197,7 @@ bool customOpWithAffineMapAttribute(const Statement &stmt) {
VectorizerTestPass::kTestAffineMapOpName;
}
void VectorizerTestPass::testComposeMaps(MLFunction *f) {
void VectorizerTestPass::testComposeMaps(Function *f) {
using matcher::Op;
auto pattern = Op(customOpWithAffineMapAttribute);
auto matches = pattern.match(f);
@ -218,7 +218,7 @@ void VectorizerTestPass::testComposeMaps(MLFunction *f) {
res.print(outs() << "\nComposed map: ");
}
PassResult VectorizerTestPass::runOnMLFunction(MLFunction *f) {
PassResult VectorizerTestPass::runOnMLFunction(Function *f) {
if (!clTestVectorShapeRatio.empty()) {
testVectorShapeRatio(f);
}

View File

@ -47,7 +47,7 @@
using namespace mlir;
///
/// Implements a high-level vectorization strategy on an MLFunction.
/// Implements a high-level vectorization strategy on a Function.
/// The abstraction used is that of super-vectors, which provide a single,
/// compact, representation in the vector types, information that is expected
/// to reduce the impact of the phase ordering problem
@ -382,7 +382,7 @@ using namespace mlir;
///
/// Examples:
/// =========
/// Consider the following MLFunction:
/// Consider the following Function:
/// ```mlir
/// mlfunc @vector_add_2d(%M : index, %N : index) -> f32 {
/// %A = alloc (%M, %N) : memref<?x?xf32, 0>
@ -651,7 +651,7 @@ namespace {
struct Vectorize : public FunctionPass {
Vectorize() : FunctionPass(&Vectorize::passID) {}
PassResult runOnMLFunction(MLFunction *f) override;
PassResult runOnMLFunction(Function *f) override;
// Thread-safe RAII contexts local to pass, BumpPtrAllocator freed on exit.
MLFunctionMatcherContext MLContext;
@ -1264,13 +1264,13 @@ static bool vectorizeRootMatches(MLFunctionMatches matches,
return false;
}
/// Applies vectorization to the current MLFunction by searching over a bunch of
/// Applies vectorization to the current Function by searching over a bunch of
/// predetermined patterns.
PassResult Vectorize::runOnMLFunction(MLFunction *f) {
PassResult Vectorize::runOnMLFunction(Function *f) {
for (auto pat : makePatterns()) {
LLVM_DEBUG(dbgs() << "\n******************************************");
LLVM_DEBUG(dbgs() << "\n******************************************");
LLVM_DEBUG(dbgs() << "\n[early-vect] new pattern on MLFunction\n");
LLVM_DEBUG(dbgs() << "\n[early-vect] new pattern on Function\n");
LLVM_DEBUG(f->print(dbgs()));
unsigned patternDepth = pat.getDepth();
auto matches = pat.match(f);

View File

@ -25,16 +25,15 @@ namespace llvm {
// Specialize DOTGraphTraits to produce more readable output.
template <>
struct llvm::DOTGraphTraits<const CFGFunction *>
: public DefaultDOTGraphTraits {
struct llvm::DOTGraphTraits<const Function *> : public DefaultDOTGraphTraits {
using DefaultDOTGraphTraits::DefaultDOTGraphTraits;
static std::string getNodeLabel(const BasicBlock *basicBlock,
const CFGFunction *);
const Function *);
};
std::string llvm::DOTGraphTraits<const CFGFunction *>::getNodeLabel(
const BasicBlock *basicBlock, const CFGFunction *) {
std::string llvm::DOTGraphTraits<const Function *>::getNodeLabel(
const BasicBlock *basicBlock, const Function *) {
// Reuse the print output for the node labels.
std::string outStreamStr;
raw_string_ostream os(outStreamStr);
@ -57,19 +56,19 @@ std::string llvm::DOTGraphTraits<const CFGFunction *>::getNodeLabel(
} // end namespace llvm
void mlir::viewGraph(const CFGFunction &function, const llvm::Twine &name,
void mlir::viewGraph(const Function &function, const llvm::Twine &name,
bool shortNames, const llvm::Twine &title,
llvm::GraphProgram::Name program) {
llvm::ViewGraph(&function, name, shortNames, title, program);
}
llvm::raw_ostream &mlir::writeGraph(llvm::raw_ostream &os,
const CFGFunction *function,
bool shortNames, const llvm::Twine &title) {
const Function *function, bool shortNames,
const llvm::Twine &title) {
return llvm::WriteGraph(os, function, shortNames, title);
}
void mlir::CFGFunction::viewGraph() const {
void mlir::Function::viewGraph() const {
::mlir::viewGraph(*this, llvm::Twine("cfgfunc ") + getName().str());
}
@ -79,7 +78,7 @@ struct PrintCFGPass : public FunctionPass {
const llvm::Twine &title = "")
: FunctionPass(&PrintCFGPass::passID), os(os), shortNames(shortNames),
title(title) {}
PassResult runOnCFGFunction(CFGFunction *function) override {
PassResult runOnCFGFunction(Function *function) override {
mlir::writeGraph(os, function, shortNames, title);
return success();
}