Computation graph after relay build

After I ran relay.build(), I got a JSON file representing the computation graph. I want to understand how the graph executor knows the inputs to a tvm_op.

```json
{
  "nodes": [
    {"op": "null", "name": "1", "inputs": []},
    {"op": "null", "name": "2", "inputs": []},
    {"op": "null", "name": "3", "inputs": []},
    {"op": "null", "name": "4", "inputs": []},
    {"op": "null", "name": "5", "inputs": []},
    {"op": "null", "name": "6", "inputs": []},
    {"op": "null", "name": "7", "inputs": []},
    {"op": "null", "name": "8", "inputs": []},
    {"op": "null", "name": "9", "inputs": []},
    {"op": "tvm_op", "name": "tvmgen_default_fused_nn_conv2d_expand_dims_add_nn_relu",
     "attrs": {"num_outputs": "1", "num_inputs": "3", "flatten_data": "0",
               "func_name": "tvmgen_default_fused_nn_conv2d_expand_dims_add_nn_relu",
               "out_layout": "", "kernel_layout": "OIHW", "data_layout": "NCHW",
               "hash": "30113ca1003060b9"},
     "inputs": [[0, 0, 0], [1, 0, 0], [2, 0, 0]]},
    {"op": "tvm_op", "name": "tvmgen_default_fused_nn_conv2d_expand_dims_add_nn_relu_1",
     "attrs": {"num_outputs": "1", "num_inputs": "3", "flatten_data": "0",
               "func_name": "tvmgen_default_fused_nn_conv2d_expand_dims_add_nn_relu_1",
               "out_layout": "", "kernel_layout": "OIHW", "data_layout": "NCHW",
               "hash": "34d471bc283884cc"},
     "inputs": [[9, 0, 0], [3, 0, 0], [4, 0, 0]]},
    {"op": "tvm_op", "name": "tvmgen_default_fused_nn_conv2d_expand_dims_add_nn_relu_2",
     "attrs": {"num_outputs": "1", "num_inputs": "3", "flatten_data": "0",
               "func_name": "tvmgen_default_fused_nn_conv2d_expand_dims_add_nn_relu_2",
               "out_layout": "", "kernel_layout": "OIHW", "data_layout": "NCHW",
               "hash": "5fd4faea1f176452"},
     "inputs": [[10, 0, 0], [5, 0, 0], [6, 0, 0]]},
    {"op": "tvm_op", "name": "tvmgen_default_fused_nn_conv2d_expand_dims_add",
     "attrs": {"num_outputs": "1", "num_inputs": "3", "flatten_data": "0",
               "func_name": "tvmgen_default_fused_nn_conv2d_expand_dims_add",
               "out_layout": "", "kernel_layout": "OIHW", "data_layout": "NCHW",
               "hash": "50711fe6ed492d61"},
     "inputs": [[11, 0, 0], [7, 0, 0], [8, 0, 0]]},
    {"op": "tvm_op", "name": "tvmgen_default_fused_reshape_transpose_reshape",
     "attrs": {"num_outputs": "1", "num_inputs": "1", "flatten_data": "0",
               "func_name": "tvmgen_default_fused_reshape_transpose_reshape",
               "hash": "91dab2e6418a642a"},
     "inputs": [[12, 0, 0]]}
  ],
  "arg_nodes": [0, 1, 2, 3, 4, 5, 6, 7, 8],
  "heads": [[13, 0, 0]],
  "attrs": {
    "dltype": ["list_str", ["float32", "float32", "float32", "float32", "float32", "float32", "float32",
                            "float32", "float32", "float32", "float32", "float32", "float32", "float32"]],
    "device_index": ["list_int", [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],
    "storage_id": ["list_int", [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 9, 10, 9]],
    "shape": ["list_shape", [[1, 1, 224, 224], [64, 1, 5, 5], [64], [64, 64, 3, 3], [64],
                             [32, 64, 3, 3], [32], [9, 32, 3, 3], [9],
                             [1, 64, 224, 224], [1, 64, 224, 224], [1, 32, 224, 224],
                             [1, 9, 224, 224], [1, 1, 672, 672]]]
  },
  "node_row_ptr": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
}
```

In the above JSON, how do I map the inputs of a tvm_op to their shapes?


Hi @harishch4, the inputs to a tvm_op are stored in the inputs field in the format (nodeid, index, version), and the shape info is in the "shape" field of the JSON. This doc provides more info about the JSON format: Debugger — tvm 0.9.dev0 documentation.
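For concreteness, here is a minimal sketch that just parses the JSON directly (not a TVM API; the file name graph.json is an assumption) and lines up each tvm_op's (nodeid, index, version) input references with the shapes recorded in "shape". In this particular graph every node has a single output, so the nodeid can index the shape list directly.

```python
# Minimal sketch: parse the graph JSON (file name "graph.json" is an assumption)
# and print each tvm_op's input references with the shape recorded for the
# referenced node. Every node in this graph has one output, so the nodeid can
# index attrs["shape"] directly.
import json

with open("graph.json") as f:
    graph = json.load(f)

shapes = graph["attrs"]["shape"][1]  # ["list_shape", [...]] -> the list of shapes

for node in graph["nodes"]:
    if node["op"] != "tvm_op":
        continue
    print(node["name"])
    for nodeid, index, version in node["inputs"]:
        print("  input (%d, %d, %d) -> shape %s" % (nodeid, index, version, shapes[nodeid]))
```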


Thanks for the link; I am guessing the shapes of the inputs are derived from attrs->shape with the nodeid as the index. For example, the op tvmgen_default_fused_nn_conv2d_expand_dims_add_nn_relu has an input [0, 0, 0], where the nodeid is 0. Is the shape for that input derived as attrs->shape[nodeid] = [1, 1, 224, 224]?

Also, where can I get the info about each node's output? I want to understand the dependencies between nodes.

I believe the shapes of the nodes (whether graph inputs or tvm ops) are derived from attrs->shape with the nodeid as the index. For example, tvmgen_default_fused_nn_conv2d_expand_dims_add_nn_relu takes 3 inputs (the nodes with nodeid 0, 1, 2, which are the input tensors to the graph). The shape of this op is the 10th entry of list_shape, i.e. [1, 64, 224, 224], which is the output tensor shape this op produces from those 3 inputs, as computed by shape inference (a part of type inference in Relay).

Since each node has its own input list, "heads" indicates the graph outputs, and the nodes in the graph JSON are listed in topological order, you should be able to derive each node's outputs (and hence the dependencies between nodes) from that.
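For instance, a rough sketch (again just walking the JSON, under the same graph.json assumption) that inverts the inputs lists to recover each node's consumers and reports its output shape:

```python
# Rough sketch: invert the "inputs" lists to find each node's consumers,
# i.e. the node-level dependencies, and report each node's output shape.
# Assumes the graph JSON was saved as "graph.json"; every node here has a
# single output, so shapes[nid] is that node's output shape.
import json

with open("graph.json") as f:
    graph = json.load(f)

shapes = graph["attrs"]["shape"][1]
consumers = {nid: [] for nid in range(len(graph["nodes"]))}

for nid, node in enumerate(graph["nodes"]):
    for in_nid, _index, _version in node["inputs"]:
        consumers[in_nid].append(node["name"])

for nid, node in enumerate(graph["nodes"]):
    is_head = any(h[0] == nid for h in graph["heads"])
    print(nid, node["name"], "output shape", shapes[nid],
          "-> consumed by", consumers[nid] or ("graph output" if is_head else "unused"))
```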


I found this piece of code [lines 455 & 459] helpful for understanding what's happening: entry_id() (which computes the index from node_row_ptr_) gives the index (eid), and we can derive the shape of the inputs and outputs of a particular tvm_op via attrs->shape[eid]. Please correct me if I'm wrong.
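To make sure I am reading it right, here is my understanding of that mapping written out in plain Python (assuming entry_id(nid, index) is node_row_ptr_[nid] + index):

```python
# My reading of entry_id(): the entry id of output `index` of node `nid` is
# node_row_ptr[nid] + index, and that entry id indexes the flat attrs lists
# (shape, dltype, storage_id, ...).
def entry_id(graph, nid, index=0):
    return graph["node_row_ptr"][nid] + index

# e.g. input [0, 0, 0] of the first fused conv op:
# eid = entry_id(graph, 0, 0) == 0, and attrs["shape"][1][eid] == [1, 1, 224, 224]
```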