How to execute the model up to given intermediate node

Hi all! Could you please suggest a way to execute a model up to a certain named tensor? Consider the following code:

  import numpy as np
  import nnvm
  import tvm
  from tvm.contrib import graph_runtime

  iname='Rcnn_ctcV3/Inputs'                   # Name of input node
  oname='Rcnn_ctcV3/expand_conv1/add_1/add'   # Name of output node, subject to change

  # Read TF Graph and GraphDef from file (fropen is a local helper)
  g,gd=fropen()
  sym,params=nnvm.frontend.from_tensorflow(gd)

  # Query TF for metadata and construct shape/dtype dictionaries
  i=g.get_tensor_by_name(iname+':0')
  o=g.get_tensor_by_name(oname+':0')
  i_shape_dict={iname+':0': i.shape.as_list()}
  i_dtype_dict={iname+':0': i.dtype.as_numpy_dtype()}

  # Build the model with TVM
  with nnvm.compiler.build_config(opt_level=opt_level):
    graph,lib,params=nnvm.compiler.build(graph=sym, target='llvm', shape=i_shape_dict, dtype=i_dtype_dict, params=params)

  # Create the runtime, feed inputs and parameters, and run
  m=graph_runtime.create(graph, lib, ctx=tvm.cpu(0))
  m.set_input(**params)
  m.set_input(iname+':0', input_data)  # input_data: a numpy array of the input shape
  m.run()
  o_data=m.get_output(0, tvm.nd.empty(o.shape.as_list(), o.dtype.name)) # how to query oname ??

Here we export and build the model, then specify 0 as the output node index, as in the tutorials. The questions are:

  1. Why is zero always the correct index of the output node?
  2. How can I determine the index of an intermediate node by its name, in order to, say, execute only the bottom half of the model?
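Regarding question 2, one hedged sketch (not from the original thread): the graph returned by `nnvm.compiler.build` can be serialized to JSON, and its `"nodes"` list preserves node order, so the position of a node in that list gives an index that can be passed to runtime queries. The sample JSON below is illustrative, not real compiler output.

```python
import json

# Illustrative stand-in for graph.json() from nnvm.compiler.build;
# a real graph's "nodes" entries carry more fields, but "name" and
# list position are what matter for index lookup.
sample_graph_json = json.dumps({
    "nodes": [
        {"op": "null",          "name": "Rcnn_ctcV3/Inputs"},
        {"op": "conv2d",        "name": "Rcnn_ctcV3/expand_conv1/conv"},
        {"op": "broadcast_add", "name": "Rcnn_ctcV3/expand_conv1/add_1/add"},
    ]
})

def node_index_by_name(graph_json, name):
    """Return the position of the named node in the graph's node list."""
    nodes = json.loads(graph_json)["nodes"]
    for i, node in enumerate(nodes):
        if node["name"] == name:
            return i
    raise KeyError(name)

print(node_index_by_name(sample_graph_json, "Rcnn_ctcV3/expand_conv1/add_1/add"))  # → 2
```

Note that node names in the compiled graph may differ slightly from the original TF names after the frontend conversion, so it is worth dumping the node list once to check.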

Regards

‘0’ here is the index of the output when the graph has multiple outputs.

For an intermediate node’s output you may try debug_get_output instead.

API Ref.
https://docs.tvm.ai/api/python/graph_runtime.html?highlight=debug_get#tvm.contrib.graph_runtime.GraphModule.debug_get_output
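A minimal sketch of how that call might look, assuming a GraphModule `m` built and run as in the question, and a TVM build where `GraphModule.debug_get_output` is available (per the API ref above). The helper name `fetch_intermediate` and its parameters are hypothetical, not part of the TVM API.

```python
def fetch_intermediate(m, node, shape, dtype):
    """Hedged sketch: copy one intermediate tensor out of a GraphModule.

    `node` may be a node name or an integer node index, per the
    debug_get_output API ref; requires a runtime built with debug
    support, so this may raise on stock builds.
    """
    import tvm  # deferred so the sketch can be read without TVM installed

    out = tvm.nd.empty(shape, dtype)
    m.debug_get_output(node, out)
    return out.asnumpy()
```

Usage would then be something like `fetch_intermediate(m, oname, o.shape.as_list(), o.dtype.name)` after `m.run()`.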

Thanks!
FYI: I wrote draft code which performs staging of a TF model. Basically, it is a version of the from_tensorflow function which produces NNVM DSL sources in addition to its usual results (I didn’t add support for the LSTM part in this version, for simplicity). The generated sources look a bit weird, but I found this approach extremely useful for performing experiments on converted models.

The method is briefly described here: https://github.com/grwlf/tvm/tree/staging/nnvm/python/nnvm/staging
I have no plans regarding this code, but I’d like to discuss how this approach fits with the upcoming Relay layer. Here is the post in the Relay thread: