Understanding TVM/Relay's PartitionGraph()(mod) function

Hi All,

I am trying to understand TVM/Relay's graph-partitioning functionality. Specifically, I have created the following simple example, and I am getting the error shown below.

I understand that the PartitionGraph() function assumes the graph has been annotated with a target via the AnnotateTarget(["target"]) function. Based on my reading, I have written the following example to partition the "add" operator into a separate function (I understand that I could partition the add into a separate Relay function using the Relay pattern language or by traversing the AST, but here I am trying to understand how PartitionGraph() works for a simple case).

Here is my code:

import tvm
from tvm import relay

graph_type = 1


def _register_external_op_helper(op_name, supported=True):

    # Register a "target.special" attribute on the operator; AnnotateTarget
    # queries this to decide whether the op is supported by the "special" target.
    @tvm.ir.register_op_attr(op_name, "target.special")
    def _func_wrapper(attrs, args):
        return supported

    return _func_wrapper


_register_external_op_helper("add")
_register_external_op_helper("subtract")



if graph_type == 1:
    # this is test case for graph type 1
    print("Graph type 1")

    # graph 1: true branch
    x1 = relay.var('x', shape=(10, 1))
    y1 = relay.var('y', shape=(10, 1))

    # graph 2: false branch
    x2 = relay.var('x', shape=(10, 1))
    y2 = relay.var('y', shape=(10, 1))

    f1 = relay.op.add(x1, y1)

    f2 = relay.op.multiply(x2, y2)

    cond = relay.var('c')
    result = relay.If(cond, true_branch=f1, false_branch=f2)
    f = relay.Function([], result)

    mod = tvm.IRModule({"main": f})

    mod = relay.transform.AnnotateTarget(["special"])(mod)  # ==> It GIVES ERROR here
    mod = relay.transform.PartitionGraph()(mod)

Here is the error that I got stuck on:

Graph type 1
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm 2020.1.2\plugins\python\helpers\pydev\pydevd.py", line 1438, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2020.1.2\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/repos/tvm23/tvm/graph_opt/subgraph/PartitionGraphTry.py", line 48, in <module>
    mod = relay.transform.AnnotateTarget(["special"])(mod)  # Output: Figure 2
  File "C:\repos\tvm23\tvm\python\tvm\ir\transform.py", line 127, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "C:\repos\tvm23\tvm\python\tvm\_ffi\_ctypes\packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  File "C:\repos\tvm23\tvm\src\ir\module.cc", line 192
TVMError: Check failed: fv.size() == 0 (5 vs. 0) : There are free variables: [Var(c, ty=TensorType([], bool)), Var(x, ty=TensorType([10, 1], float32)), Var(y, ty=TensorType([10, 1], float32)), Var(x, ty=TensorType([10, 1], float32)), Var(y, ty=TensorType([10, 1], float32))] in function: #[version = "0.0.5"]
fn () -> Tensor[(10, 1), float32] {
  free_var %c: bool;
  if (%c) {
    free_var %x: Tensor[(10, 1), float32];
    free_var %y: Tensor[(10, 1), float32];
    add(%x, %y) /* ty=Tensor[(10, 1), float32] */
  } else {
    free_var %x1: Tensor[(10, 1), float32];
    free_var %y1: Tensor[(10, 1), float32];
    multiply(%x1, %y1) /* ty=Tensor[(10, 1), float32] */
  }
}

Ping @comaniac @manupa-arm.

I have a feeling the if/else handling in this pass might not be correct. Are you only seeing this problem when you have an If?

The recent PR should fix this:

See this unit test:

Isn’t it simply a problem of free variables? I suggest replacing

f = relay.Function([], result)

with

f = relay.Function(relay.analysis.free_vars(result), result)
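For completeness, a minimal sketch of the fixed construction (same variable names as above); free_vars collects the dangling variables of result so they become explicit parameters of the function:

f = relay.Function(relay.analysis.free_vars(result), result)  # params: c, x, y, x, y
mod = tvm.IRModule({"main": f})  # the free-variable check now passes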

Good catch @masahi :grinning:

Hi All,

Thanks @masahi, @comaniac and @mbaret, it works now. That said, I'd like to 1) confirm that I understand the functionality of the PartitionGraph() function in Relay, and 2) understand whether PartitionGraph() can be used for my specific use case.

Here is my understanding of how the PartitionGraph() function works:

  • PartitionGraph() partitions the IRModule into functions based on the annotations.
  • Annotations are added per operator kind (e.g., add, subtract) by the AnnotateTarget() function.
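For illustration, a minimal sketch of where that attribute lives (tvm.ir.Op.get looks up an operator by name):

# AnnotateTarget("special") asks each operator for its registered
# "target.special" attribute to decide whether to wrap it with
# compiler_begin/compiler_end annotations.
op = tvm.ir.Op.get("add")
check = op.get_attr("target.special")  # the _func_wrapper registered earlier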

Here is my main question:

  • Annotations are done per operator kind, such as "add", not per operator instance. For example, if I have two "add" operators, one in my true branch and one in my false branch, and I'd like to partition the true and false branches separately, can PartitionGraph() help me? I know I can override the visit_if() function in the ExprMutator class to achieve what I just described, but I am looking for a more high-level solution for more complex problems.

To me, PartitionGraph() seems limited because it partitions based on annotations that are attached per operator kind.

Ideally, I'd like to have a solution that does the following:

  • Partition a Relay IRModule, based on user-provided annotations on expressions, into separate Relay IR functions (or IRModules); see the sketch below.
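For what it's worth, per-instance annotation is possible: TVM's partitioning unit tests insert compiler_begin/compiler_end markers by hand and then run PartitionGraph directly, without AnnotateTarget. A minimal sketch along those lines (the "special" compiler name matches the registration above):

from tvm.relay.op.annotation import compiler_begin, compiler_end

x = relay.var('x', shape=(10, 1))
y = relay.var('y', shape=(10, 1))

# Wrap only this particular add instance; other adds stay untouched.
lhs = compiler_begin(x, "special")
rhs = compiler_begin(y, "special")
out = compiler_end(relay.add(lhs, rhs), "special")

f = relay.Function(relay.analysis.free_vars(out), out)
mod = tvm.IRModule({"main": f})
mod = relay.transform.PartitionGraph()(mod)  # partitions exactly the wrapped region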

After mod = tvm.IRModule({"main": f})

print(mod)
def @main(%c, %x: Tensor[(10, 1), float32], %y: Tensor[(10, 1), float32], %x1: Tensor[(10, 1), float32], %y1: Tensor[(10, 1), float32]) {
  if (%c) {
    add(%x, %y)
  } else {
    multiply(%x1, %y1)
  }
}

After annotation: mod = relay.transform.AnnotateTarget(["special"])(mod)

print(mod)
def @main(%c: bool, %x: Tensor[(10, 1), float32], %y: Tensor[(10, 1), float32], %x1: Tensor[(10, 1), float32], %y1: Tensor[(10, 1), float32]) -> Tensor[(10, 1), float32] {
  %0 = annotation.compiler_begin(%c, meta[relay.attrs.CompilerAttrs][0]) /* ty=bool */;
  %9 = if (%0) {
    %1 = annotation.compiler_begin(%x, meta[relay.attrs.CompilerAttrs][1]) /* ty=Tensor[(10, 1), float32] */;
    %2 = annotation.compiler_begin(%y, meta[relay.attrs.CompilerAttrs][2]) /* ty=Tensor[(10, 1), float32] */;
    %3 = add(%1, %2) /* ty=Tensor[(10, 1), float32] */;
    %4 = annotation.compiler_end(%3, meta[relay.attrs.CompilerAttrs][3]) /* ty=Tensor[(10, 1), float32] */;
    annotation.compiler_begin(%4, meta[relay.attrs.CompilerAttrs][4]) /* ty=Tensor[(10, 1), float32] */
  } else {
    %5 = annotation.compiler_begin(%x1, meta[relay.attrs.CompilerAttrs][5]) /* ty=Tensor[(10, 1), float32] */;
    %6 = annotation.compiler_begin(%y1, meta[relay.attrs.CompilerAttrs][6]) /* ty=Tensor[(10, 1), float32] */;
    %7 = multiply(%5, %6) /* ty=Tensor[(10, 1), float32] */;
    %8 = annotation.compiler_end(%7, meta[relay.attrs.CompilerAttrs][7]) /* ty=Tensor[(10, 1), float32] */;
    annotation.compiler_begin(%8, meta[relay.attrs.CompilerAttrs][8]) /* ty=Tensor[(10, 1), float32] */
  };
  annotation.compiler_end(%9, meta[relay.attrs.CompilerAttrs][9]) /* ty=Tensor[(10, 1), float32] */
}

After mod = relay.transform.PartitionGraph()(mod)

def @special_0(%special_0_i0: Tensor[(10, 1), float32], %special_0_i1: Tensor[(10, 1), float32], global_symbol="special_0", Primitive=1, Compiler="special", Inline=1) -> Tensor[(10, 1), float32] {
  add(%special_0_i0, %special_0_i1) /* ty=Tensor[(10, 1), float32] */
}
def @main(%c: bool, %x: Tensor[(10, 1), float32], %y: Tensor[(10, 1), float32], %x1: Tensor[(10, 1), float32], %y1: Tensor[(10, 1), float32]) -> Tensor[(10, 1), float32] {
  if (%c) {
    @special_0(%x, %y) /* ty=Tensor[(10, 1), float32] */
  } else {
    multiply(%x1, %y1) /* ty=Tensor[(10, 1), float32] */
  }
}

For example, if I have two "add" operators, one in my true branch and one in my false branch, and I'd like to partition the true and false branches separately, can PartitionGraph() help me?

This is exactly what PartitionGraph does.

To me, PartitionGraph() seems limited because it partitions based on annotations that are attached per operator kind.

This is because you only invoke AnnotateTarget -> PartitionGraph. There is another pass called MergeCompilerRegions that removes unnecessary annotations, so you should go through AnnotateTarget -> MergeCompilerRegions -> PartitionGraph.
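In code, the full pipeline looks like this (pass names as exported from tvm.relay.transform):

mod = relay.transform.AnnotateTarget(["special"])(mod)   # mark supported ops
mod = relay.transform.MergeCompilerRegions()(mod)        # merge adjacent regions
mod = relay.transform.PartitionGraph()(mod)              # split regions into functions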

The expected result of your example should be:

def @special_0(%special_0_i0: Tensor[(10, 1), float32], %special_0_i1: Tensor[(10, 1), float32], global_symbol="special_0", Primitive=1, Compiler="special", Inline=1) -> Tensor[(10, 1), float32] {
  add(%special_0_i0, %special_0_i1) /* ty=Tensor[(10, 1), float32] */
}
def @special_1(%special_1_i0: Tensor[(10, 1), float32], %special_1_i1: Tensor[(10, 1), float32], global_symbol="special_1", Primitive=1, Compiler="special", Inline=1) -> Tensor[(10, 1), float32] {
  multiply(%special_1_i0, %special_1_i1) /* ty=Tensor[(10, 1), float32] */
}
def @main(%c: bool, %x: Tensor[(10, 1), float32], %y: Tensor[(10, 1), float32], %x1: Tensor[(10, 1), float32], %y1: Tensor[(10, 1), float32]) -> Tensor[(10, 1), float32] {
  if (%c) {
    @special_0(%x, %y) /* ty=Tensor[(10, 1), float32] */
  } else {
    @special_1(%x1, %y1) /* ty=Tensor[(10, 1), float32] */
  }
}

If it’s not, then we may have some issues/bugs to be fixed.

Thanks @comaniac.

I have tried to use MergeCompilerRegions, and it gives me an error with the following code. The code below works if I comment out MergeCompilerRegions, but it produces output with unmerged @special_ definitions. Ideally, I'd like to have one partition for the expressions in the true branch and another partition for the false branch.

import tvm
from tvm import relay

graph_type = 1


def _register_external_op_helper(op_name, supported=True):

    @tvm.ir.register_op_attr(op_name, "target.special")
    def _func_wrapper(attrs, args):
        return supported

    return _func_wrapper


_register_external_op_helper("multiply")
_register_external_op_helper("add")
_register_external_op_helper("subtract")



if graph_type == 1:
    # this is test case for graph type 1
    print("Graph type 1")

    # graph 1: true branch
    x1 = relay.var('x1', shape=(10, 1))
    y1 = relay.var('y1', shape=(10, 1))
    f1 = relay.op.multiply(x1, y1)

    x3 = relay.var('x3', shape=(10, 1))
    y3 = relay.var('y3', shape=(10, 1))
    f3 = relay.op.multiply(x3, y3)

    true_branch = relay.op.add(f1, f3)

    # graph 2: false branch
    x2 = relay.var('x2', shape=(10, 1))
    y2 = relay.var('y2', shape=(10, 1))
    f2 = relay.op.add(x2, y2)

    x4 = relay.var('x4', shape=(10, 1))
    y4 = relay.var('y4', shape=(10, 1))
    f4 = relay.op.add(x4, y4)

    false_branch = relay.op.add(f2, f4)

    cond = relay.var('c')
    result = relay.If(cond, true_branch=true_branch, false_branch=false_branch)
    # f = relay.Function([], result)
    f = relay.Function(relay.analysis.free_vars(result), result)


    mod = tvm.IRModule({"main": f})
    mod = relay.transform.AnnotateTarget(["special"])(mod)
    # mod = relay.transform.MergeCompilerRegions()(mod)  # uncommenting this line triggers the error below
    mod = relay.transform.PartitionGraph()(mod)

Here is the error that I get when I uncomment the MergeCompilerRegions call:

Graph type 1
Traceback (most recent call last):
  File "C:/repos/tvm23/tvm/graph_opt/subgraph/PartitionGraphTry.py", line 62, in <module>
    mod = relay.transform.MergeCompilerRegions()(mod)
  File "C:\repos\tvm23\tvm\python\tvm\ir\transform.py", line 127, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "C:\repos\tvm23\tvm\python\tvm\_ffi\_ctypes\packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: TVMError: Cannot find the corresponding region for end annotation:
#[version = "0.0.5"]
free_var %c: bool;
%0 = annotation.compiler_begin(%c, meta[relay.attrs.CompilerAttrs][0]) /* ty=bool */;
%25 = if (%0) {
  free_var %x1: Tensor[(10, 1), float32];
  %1 = annotation.compiler_begin(%x1, meta[relay.attrs.CompilerAttrs][1]) /* ty=Tensor[(10, 1), float32] */;
  free_var %y1: Tensor[(10, 1), float32];
  %2 = annotation.compiler_begin(%y1, meta[relay.attrs.CompilerAttrs][2]) /* ty=Tensor[(10, 1), float32] */;
  %3 = multiply(%1, %2) /* ty=Tensor[(10, 1), float32] */;
  %4 = annotation.compiler_end(%3, meta[relay.attrs.CompilerAttrs][3]) /* ty=Tensor[(10, 1), float32] */;
  %5 = annotation.compiler_begin(%4, meta[relay.attrs.CompilerAttrs][4]) /* ty=Tensor[(10, 1), float32] */;
  free_var %x3: Tensor[(10, 1), float32];
  %6 = annotation.compiler_begin(%x3, meta[relay.attrs.CompilerAttrs][5]) /* ty=Tensor[(10, 1), float32] */;
  free_var %y3: Tensor[(10, 1), float32];
  %7 = annotation.compiler_begin(%y3, meta[relay.attrs.CompilerAttrs][6]) /* ty=Tensor[(10, 1), float32] */;
  %8 = multiply(%6, %7) /* ty=Tensor[(10, 1), float32] */;
  %9 = annotation.compiler_end(%8, meta[relay.attrs.CompilerAttrs][7]) /* ty=Tensor[(10, 1), float32] */;
  %10 = annotation.compiler_begin(%9, meta[relay.attrs.CompilerAttrs][8]) /* ty=Tensor[(10, 1), float32] */;
  %11 = add(%5, %10) /* ty=Tensor[(10, 1), float32] */;
  %12 = annotation.compiler_end(%11, meta[relay.attrs.CompilerAttrs][9]) /* ty=Tensor[(10, 1), float32] */;
  annotation.compiler_begin(%12, meta[relay.attrs.CompilerAttrs][10]) /* ty=Tensor[(10, 1), float32] */
} else {
  free_var %x2: Tensor[(10, 1), float32];
  %13 = annotation.compiler_begin(%x2, meta[relay.attrs.CompilerAttrs][11]) /* ty=Tensor[(10, 1), float32] */;
  free_var %y2: Tensor[(10, 1), float32];
  %14 = annotation.compiler_begin(%y2, meta[relay.attrs.CompilerAttrs][12]) /* ty=Tensor[(10, 1), float32] */;
  %15 = add(%13, %14) /* ty=Tensor[(10, 1), float32] */;
  %16 = annotation.compiler_end(%15, meta[relay.attrs.CompilerAttrs][13]) /* ty=Tensor[(10, 1), float32] */;
  %17 = annotation.compiler_begin(%16, meta[relay.attrs.CompilerAttrs][14]) /* ty=Tensor[(10, 1), float32] */;
  free_var %x4: Tensor[(10, 1), float32];
  %18 = annotation.compiler_begin(%x4, meta[relay.attrs.CompilerAttrs][15]) /* ty=Tensor[(10, 1), float32] */;
  free_var %y4: Tensor[(10, 1), float32];
  %19 = annotation.compiler_begin(%y4, meta[relay.attrs.CompilerAttrs][16]) /* ty=Tensor[(10, 1), float32] */;
  %20 = add(%18, %19) /* ty=Tensor[(10, 1), float32] */;
  %21 = annotation.compiler_end(%20, meta[relay.attrs.CompilerAttrs][17]) /* ty=Tensor[(10, 1), float32] */;
  %22 = annotation.compiler_begin(%21, meta[relay.attrs.CompilerAttrs][18]) /* ty=Tensor[(10, 1), float32] */;
  %23 = add(%17, %22) /* ty=Tensor[(10, 1), float32] */;
  %24 = annotation.compiler_end(%23, meta[relay.attrs.CompilerAttrs][19]) /* ty=Tensor[(10, 1), float32] */;
  annotation.compiler_begin(%24, meta[relay.attrs.CompilerAttrs][20]) /* ty=Tensor[(10, 1), float32] */
};
annotation.compiler_end(%25, meta[relay.attrs.CompilerAttrs][21]) /* ty=Tensor[(10, 1), float32] */
/* For debugging purposes the metadata section has been omitted.
 * If you would like to see the full metadata section you can set the 
 * option to `True` when invoking `astext`. 
 */

Process finished with exit code 1

Hi @comaniac, I want to follow up on my post above. I removed the If statement, and now it works. Does that mean MergeCompilerRegions does not fully support If yet?

This is the code that works:

    # this is test case for graph type 1
    print("Graph type 1")

    # graph 1: true branch
    x1 = relay.var('x1', shape=(10, 1))
    y1 = relay.var('y1', shape=(10, 1))
    f1 = relay.op.multiply(x1, y1)

    x3 = relay.var('x3', shape=(10, 1))
    y3 = relay.var('y3', shape=(10, 1))
    f3 = relay.op.multiply(x3, y3)

    true_branch = relay.op.add(f1, f3)

    # graph 2: false branch
    x2 = relay.var('x2', shape=(10, 1))
    y2 = relay.var('y2', shape=(10, 1))
    f2 = relay.op.add(x2, y2)

    x4 = relay.var('x4', shape=(10, 1))
    y4 = relay.var('y4', shape=(10, 1))
    f4 = relay.op.add(x4, y4)

    false_branch = relay.op.add(f2, f4)

    cond = relay.var('c')
    #result = relay.If(cond, true_branch=true_branch, false_branch=false_branch)
    result = true_branch
    #f = relay.Function([], result)
    f = relay.Function(relay.analysis.free_vars(result), result)


    mod = tvm.IRModule({"main": f})
    mod = relay.transform.AnnotateTarget(["special"])(mod)
    mod = relay.transform.MergeCompilerRegions()(mod)
    mod = relay.transform.PartitionGraph()(mod)

This is the code that does NOT work:

    # this is test case for graph type 1
    print("Graph type 1")

    # graph 1: true branch
    x1 = relay.var('x1', shape=(10, 1))
    y1 = relay.var('y1', shape=(10, 1))
    f1 = relay.op.multiply(x1, y1)

    x3 = relay.var('x3', shape=(10, 1))
    y3 = relay.var('y3', shape=(10, 1))
    f3 = relay.op.multiply(x3, y3)

    true_branch = relay.op.add(f1, f3)

    # graph 2: false branch
    x2 = relay.var('x2', shape=(10, 1))
    y2 = relay.var('y2', shape=(10, 1))
    f2 = relay.op.add(x2, y2)

    x4 = relay.var('x4', shape=(10, 1))
    y4 = relay.var('y4', shape=(10, 1))
    f4 = relay.op.add(x4, y4)

    false_branch = relay.op.add(f2, f4)

    cond = relay.var('c')
    result = relay.If(cond, true_branch=true_branch, false_branch=false_branch)
    #result = true_branch
    #f = relay.Function([], result)
    f = relay.Function(relay.analysis.free_vars(result), result)


    mod = tvm.IRModule({"main": f})
    mod = relay.transform.AnnotateTarget(["special"])(mod)
    mod = relay.transform.MergeCompilerRegions()(mod)
    mod = relay.transform.PartitionGraph()(mod)

My colleague was working on the If node and this should be fixed already. Have you tried the main branch with the latest commit?
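(One quick way to check which TVM build you are running; the git command assumes a source checkout:)

import tvm
print(tvm.__version__)  # development builds report a ".dev" version string
# For a source build, the exact commit can be checked in the repo:
#   git -C /path/to/tvm rev-parse HEAD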

This is the script I used:

import tvm
from tvm import relay

def _register_external_op_helper(op_name, supported=True):

    @tvm.ir.register_op_attr(op_name, "target.special")
    def _func_wrapper(expr):
        return supported

    return _func_wrapper


_register_external_op_helper("add")
_register_external_op_helper("subtract")


# graph 1: true branch
x1 = relay.var('x1', shape=(10, 1))
y1 = relay.var('y1', shape=(10, 1))
f1 = relay.op.multiply(x1, y1)

x3 = relay.var('x3', shape=(10, 1))
y3 = relay.var('y3', shape=(10, 1))
f3 = relay.op.multiply(x3, y3)

true_branch = relay.op.add(f1, f3)

# graph 2: false branch
x2 = relay.var('x2', shape=(10, 1))
y2 = relay.var('y2', shape=(10, 1))
f2 = relay.op.add(x2, y2)

x4 = relay.var('x4', shape=(10, 1))
y4 = relay.var('y4', shape=(10, 1))
f4 = relay.op.add(x4, y4)

false_branch = relay.op.add(f2, f4)

cond = relay.var('c')
result = relay.If(cond, true_branch=true_branch, false_branch=false_branch)
f = relay.Function(relay.analysis.free_vars(result), result)


mod = tvm.IRModule({"main": f})
mod = relay.transform.AnnotateTarget(["special"])(mod)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.PartitionGraph()(mod)
print(mod)

And this is the output, which looks good to me. (Note that this script registers only "add" and "subtract" as supported, so the multiply ops in the true branch stay in @main while the add is offloaded to @special_0.)

def @main(%c: bool, %x1: Tensor[(10, 1), float32], %y1: Tensor[(10, 1), float32], %x3: Tensor[(10, 1), float32], %y3: Tensor[(10, 1), float32], %x2: Tensor[(10, 1), float32], %y2: Tensor[(10, 1), float32], %x4: Tensor[(10, 1), float32], %y4: Tensor[(10, 1), float32]) -> Tensor[(10, 1), float32] {
  if (%c) {
    %0 = multiply(%x1, %y1) /* ty=Tensor[(10, 1), float32] */;
    %1 = multiply(%x3, %y3) /* ty=Tensor[(10, 1), float32] */;
    @special_0(%0, %1) /* ty=Tensor[(10, 1), float32] */
  } else {
    @special_2(%x2, %y2, %x4, %y4) /* ty=Tensor[(10, 1), float32] */
  }
}

def @special_0(%special_0_i0: Tensor[(10, 1), float32], %special_0_i1: Tensor[(10, 1), float32], global_symbol="special_0", Primitive=1, Compiler="special", Inline=1) -> Tensor[(10, 1), float32] {
  add(%special_0_i0, %special_0_i1) /* ty=Tensor[(10, 1), float32] */
}

def @special_2(%special_2_i0: Tensor[(10, 1), float32], %special_2_i1: Tensor[(10, 1), float32], %special_2_i2: Tensor[(10, 1), float32], %special_2_i3: Tensor[(10, 1), float32], global_symbol="special_2", Primitive=1, Compiler="special", Inline=1) -> Tensor[(10, 1), float32] {
  %2 = add(%special_2_i0, %special_2_i1) /* ty=Tensor[(10, 1), float32] */;
  %3 = add(%special_2_i2, %special_2_i3) /* ty=Tensor[(10, 1), float32] */;
  add(%2, %3) /* ty=Tensor[(10, 1), float32] */
}
