What is the difference between CallNode and FunctionNode?

I am reading the TVM source code and want to understand how Relay works. Thanks!

import tvm
from tvm import relay

data = relay.var("data")
bias = relay.var("bias")
add_op = relay.add(data, bias)
add_func = relay.Function([data, bias], add_op)
add_gvar = relay.GlobalVar("AddFunc")

input0 = relay.var("input0")
input1 = relay.var("input1")
input2 = relay.var("input2")
add_01 = relay.Call(add_gvar, [input0, input1])
add_012 = relay.Call(add_gvar, [input2, add_01])
main_func = relay.Function([input0, input1, input2], add_012)
main_gvar = relay.GlobalVar("main")

mod = tvm.IRModule({main_gvar: main_func, add_gvar: add_func})

A FunctionNode represents a Relay function definition, while a CallNode represents a call site: an invocation of a function (or an operator) with arguments.

FunctionNode is used heavily in Relay operator fusion, where calls to multiple ops are fused into a single Relay Function; that function is then lowered to a single function in TIR and eventually in the backend.

So, for example, if you have a graph with two sets of conv2d → bias_add → relu calls and would like to fuse them, you could do so by grouping each set into a function. Below is sample code showing how that would look, along with the expected output.

import tvm
from tvm import relay

data = relay.var("data")
weights = relay.var("weights")
weights2 = relay.var("weights2")
bias = relay.var("bias")
bias2 = relay.var("bias2")

conv2d1 = relay.nn.conv2d(data, weights)
bias_add1 = relay.nn.bias_add(conv2d1, bias)
relu1 = relay.nn.relu(bias_add1)
conv2d2 = relay.nn.conv2d(relu1, weights2)
bias_add2 = relay.nn.bias_add(conv2d2, bias2)
relu2 = relay.nn.relu(bias_add2)

mod = tvm.IRModule()
mod["main"] = relay.Function([data, weights, bias, weights2, bias2], relu2)
print("original_mod")
print("------------")
print(mod)
print("original_mod")
print("------------")
print(mod)

# func2 must take the output of func1 as an explicit parameter;
# otherwise relu1 would be a free variable inside it.
fused_in = relay.var("fused_in")
conv2d2_fused = relay.nn.conv2d(fused_in, weights2)
bias_add2_fused = relay.nn.bias_add(conv2d2_fused, bias2)
relu2_fused = relay.nn.relu(bias_add2_fused)

func1 = relay.Function([data, weights, bias], relu1)
gvar1 = relay.GlobalVar("fused_conv2d_bias_add")
func2 = relay.Function([fused_in, weights2, bias2], relu2_fused)
gvar2 = relay.GlobalVar("fused_conv2d_bias_add_2")

call1 = relay.Call(gvar1, [data, weights, bias])
call2 = relay.Call(gvar2, [call1, weights2, bias2])

print("fused_mod")
print("---------")
mod = tvm.IRModule({gvar1: func1, gvar2: func2})
mod["main"] = relay.Function([data, weights, bias, weights2, bias2], call2)
print(mod)
print(mod)

And the expected output would be:

original_mod
------------
def @main(%data, %weights, %bias, %weights2, %bias2) {
  %0 = nn.conv2d(%data, %weights, padding=[0, 0, 0, 0]);
  %1 = nn.bias_add(%0, %bias);
  %2 = nn.relu(%1);
  %3 = nn.conv2d(%2, %weights2, padding=[0, 0, 0, 0]);
  %4 = nn.bias_add(%3, %bias2);
  nn.relu(%4)
}

fused_mod
---------
def @fused_conv2d_bias_add(%data, %weights, %bias) {
  %0 = nn.conv2d(%data, %weights, padding=[0, 0, 0, 0]);
  %1 = nn.bias_add(%0, %bias);
  nn.relu(%1)
}

def @fused_conv2d_bias_add_2(%fused_in, %weights2, %bias2) {
  %2 = nn.conv2d(%fused_in, %weights2, padding=[0, 0, 0, 0]);
  %3 = nn.bias_add(%2, %bias2);
  nn.relu(%3)
}

def @main(%data, %weights, %bias, %weights2, %bias2) {
  %5 = @fused_conv2d_bias_add(%data, %weights, %bias);
  @fused_conv2d_bias_add_2(%5, %weights2, %bias2)
}

When I try to build this module, it seems to raise an error:

from tvm import relay
from tvm.relay import testing
import tvm
from tvm.contrib import relay_viz

data = relay.var("data")
weights = relay.var("weights")
weights2 = relay.var("weights2")
bias = relay.var("bias")
bias2 = relay.var("bias2")

conv2d1 = relay.nn.conv2d(data, weights)
bias_add1 = relay.nn.bias_add(conv2d1, bias)
relu1 = relay.nn.relu(bias_add1)
conv2d2 = relay.nn.conv2d(relu1, weights2)
bias_add2 = relay.nn.bias_add(conv2d2, bias2)
relu2 = relay.nn.relu(bias_add2)

mod = tvm.IRModule()
mod["main"] = relay.Function([data, weights, weights2, bias, bias2], relu2)
print("original_mod")
print("------------")
print(mod)
lib = relay.build(mod, "llvm")

This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened. The type inference pass was unable to infer a type for this expression.

It looks like the shape information is missing?

That’s right. I only gave a small example to illustrate the difference between FunctionNode and CallNode. Since the InferType pass computes the shape and dtype for every operator in the graph, you need to set shapes on the Relay input variables; InferType can then infer the shapes of all the operators in the graph from those inputs.

Something like this should work:

data = relay.var("data", shape=(1,3,128,128))
weights = relay.var("weights", shape=(32,3,1,1))
weights2 = relay.var("weights2", shape=(32,32,1,1))
bias = relay.var("bias", shape=(32,))
bias2 = relay.var("bias2", shape=(32,))

Note that the fused IR I showed above was just an illustration. The right way to fuse ops in Relay is to use the FuseOps pass, so what I posted above for the fused IR might not build directly. It might still work, but I haven’t tested it.