What are the requirements for a model to be imported into NNVM & VTA?

I tried to use the nnvm.frontend.from_onnx function to import my own pretrained model, which consists of both CNN and RNN layers, but I got the following error:
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
WARNING:root:Attribute momentum is disabled in nnvm.sym.batch_norm
WARNING:root:Attribute momentum is disabled in nnvm.sym.batch_norm
WARNING:root:Attribute momentum is disabled in nnvm.sym.batch_norm

Shape: Differently implemented in NNVM as a bypass (dummy operator)


KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>()
      1 onnx_model = onnx.load_model('crnn.onnx')
      2 # we can load the graph as NNVM compatible model
----> 3 sym, params = nnvm.frontend.from_onnx(onnx_model)

/home/lijun/tvm_git/tvm/nnvm/python/nnvm/frontend/onnx.pyc in from_onnx(model)
    965     except AttributeError:
    966         opset = 1
--> 967     sym, params = g.from_onnx(graph, opset)
    968     return sym, params

/home/lijun/tvm_git/tvm/nnvm/python/nnvm/frontend/onnx.pyc in from_onnx(self, graph, opset)
    820                         shape=list(t_proto.dims))
    821             else:
--> 822                 op = self._convert_operator(op_name, inputs, attr, opset)
    823             node_output = self._fix_outputs(op_name, node.output)
    824             assert len(node_output) == len(op.list_output_names()), (

/home/lijun/tvm_git/tvm/nnvm/python/nnvm/frontend/onnx.pyc in _convert_operator(self, op_name, inputs, attrs, opset, identity_list, convert_map)
    921             sym = get_nnvm_op(op_name)(*inputs, **attrs)
    922         elif op_name in convert_map:
--> 923             sym = convert_map[op_name](inputs, attrs, self._params)
    924         else:
    925             raise NotImplementedError(

/home/lijun/tvm_git/tvm/nnvm/python/nnvm/frontend/onnx.pyc in _impl_v1(cls, inputs, attr, params)
    623                 "Either shape attribute or input should be set")
    624         if 'input_as_shape' in attr and attr['input_as_shape']:
--> 625             shape = params[inputs[0].list_output_names()[0]].asnumpy()
    626         else:
    627             is_full = False

KeyError: 'concatenate27_output'

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

What's more, I tried to follow the steps of the VTA ResNet-18 tutorial to port the model used in the NNVM ONNX tutorial to PYNQ, and I got errors there too:

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      9
     10 # Build the graph runtime
---> 11 graph, lib, params = generate_graph(sym, params, device)
     12 m = graph_runtime.create(graph, lib, ctx)
     13

<ipython-input> in generate_graph(sym, params, device)
     40     # if target.device_name == "vta":
     41     assert env.BLOCK_IN == env.BLOCK_OUT
---> 42     sym = vta.graph.pack(sym, None, env.BATCH, env.BLOCK_OUT)
     43     # with nnvm.compiler.build_config(opt_level=3):
     44     # if target.device_name != "vta":

/home/lijun/tvm_git/tvm/vta/python/vta/graph.pyc in pack(graph, shape_dict, bfactor, cfactor, start_name)
    235         The transformed graph.
    236     """
--> 237     graph = graph_attr.set_shape_inputs(graph, shape_dict)
    238     graph = graph.apply("InferShape")
    239     shape = graph.json_attr("shape")

/home/lijun/tvm_git/tvm/nnvm/python/nnvm/compiler/graph_attr.pyc in set_shape_inputs(g, shape)
     22     """
     23     list_shape = [
---> 24         shape.get(name, ()) for name in g.index.input_names]
     25     g._set_json_attr("shape_inputs", list_shape, 'list_shape')
     26     return g

AttributeError: 'Symbol' object has no attribute 'index'
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

Can anyone help me? Thanks in advance for any answers.

I think this is because a Symbol is being passed where a Graph is expected.
Calling nnvm.graph.create on the symbol before vta.graph.pack might solve the error.
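
Something like this, as a minimal sketch. Here sym and params are whatever nnvm.frontend.from_onnx returned, and the input name and shape in shape_dict are hypothetical placeholders you would replace with your model's real inputs:

import nnvm
import vta

env = vta.get_env()

# vta.graph.pack expects an nnvm Graph, not a Symbol, so wrap the
# symbol first:
graph = nnvm.graph.create(sym)

# pack runs shape inference internally, so it also needs real input
# shapes; passing None as shape_dict fails inside set_shape_inputs.
shape_dict = {"data": (1, 3, 224, 224)}  # hypothetical input name/shape

assert env.BLOCK_IN == env.BLOCK_OUT
packed_graph = vta.graph.pack(graph, shape_dict, env.BATCH, env.BLOCK_OUT)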

Refer to my post "[VTA] Questions about VTA packed format" for my view on the types of networks VTA can process.
The assertion error you are getting is because the channel size is not an integer multiple of the GEMM core dimensions.
The developers didn't write schedules that handle these cases, which is why it currently doesn't work. That does not mean it is impossible; you would have to extend their scheduling method.
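
As a concrete illustration (assuming the default configuration, where I believe BLOCK_IN = BLOCK_OUT = 16; the layer names and channel counts below are made up), you can check your own layer shapes like this:

import vta

env = vta.get_env()

# Hypothetical (layer name, output channels) pairs; substitute the real
# shapes from your CRNN.
layers = [("conv1", 64), ("conv2", 100), ("fc1", 512)]

for name, channels in layers:
    if channels % env.BLOCK_OUT != 0:
        print("%s: %d output channels is not a multiple of BLOCK_OUT=%d"
              % (name, channels, env.BLOCK_OUT))

Here conv2 would be flagged: 100 is not divisible by 16, so its weights cannot be packed into whole GEMM blocks without extending the schedules.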

Hope this helps.