Given a CallNode object with the following class definition, how can the tensor type (TensorTypeNode) returned by the call be accessed?
For example, in the following graph, how can I access the output tensor type (/* ty=Tensor[(1, 32, 32, 8), float32] */) from the conv2d CallNode object (%1)?
I think the output tensor type is just the type of the CallNode (conv2d), so you can access it via its checked_type().
Thanks! Using checked_type() works on the conv2d CallNode.
However, I noticed that if the CallNode is an add op, i.e., CallNode(add), checked_type() is not defined even after running the InferType pass. Any idea why the checked type might be populated for some ops but not for others?
Do you have a code snippet to show this? The below seems to work for me.
from tvm import IRModule, relay
from tvm.relay.transform import InferType
a = relay.var("a", shape=[3])
b = relay.var("b", shape=[3])
c = a + b
mod = IRModule.from_expr(c)
new_mod = InferType()(mod)
print(mod)
print(new_mod)
print(new_mod["main"].body.checked_type)
InferType returns a new module, which might be the issue — the original module you pass in is left untyped.
I’m loading an ONNX model through the frontend that produces the following Relay IR after the InferType pass:
The nodes of interest are the conv2d nodes at lines 14, 25 and 38 where I’m trying to determine the type of their first argument in my custom Relay transform pass using the following code snippet:
if (const CallNode* call_value = value.as<CallNode>()) {
  Dump(call_value->op);
  // checked_type_ is only populated once type inference has run on this node.
  if (call_value->checked_type_.defined()) {
    std::cout << "CHECKED TYPE\n";
  }
}
This gives the following output where the checked type is only defined for the layout_transform node in line 13 but not for the add nodes in lines 24 and 37.

That’s interesting, since the type annotations for those nodes do appear in your Relay printout.
Care to share the ONNX model?