How to run my caffe model with nnvm/tvm?

I have a Caffe model that I want to run with TVM. Should I convert it to a CoreML model or an MXNet model?

For now, there is only an MXNet frontend. So if you want to import a Caffe model, you can either convert it to MXNet via ONNX, or define the compute graph directly in NNVM. I’m trying the second way.

Is CoreML not an option? Was your second method successful? Is there any reference for “defining a compute graph in NNVM”?

I’m not sure. My platform is server-class CPU.
http://nnvm.tvmlang.org/tutorials/get_started.html#sphx-glr-tutorials-get-started-py
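For reference, the “define a compute graph in NNVM” approach from that tutorial looks roughly like the sketch below. It assumes nnvm and tvm are installed; the layer names and conv parameters are illustrative placeholders, not a port of any particular Caffe network:

```python
# Sketch of defining a compute graph directly with NNVM symbols,
# following the get-started tutorial linked above. The conv layer
# parameters here are placeholders, not taken from a real model.
import nnvm.symbol as sym
import nnvm.compiler

# Declare the graph symbolically.
x = sym.Variable("data")
w = sym.Variable("conv_weight")
y = sym.conv2d(data=x, weight=w, channels=16, kernel_size=(3, 3),
               padding=(1, 1), use_bias=False, name="conv1")
y = sym.relu(y)

# Compile for a CPU target, with an explicit batch size of 1 (NCHW).
shape_dict = {"data": (1, 3, 196, 196)}
graph, lib, params = nnvm.compiler.build(y, target="llvm", shape=shape_dict)
```

For a Caffe model you would build up the symbols layer by layer to mirror the prototxt, then load the trained weights into params yourself.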

In fact, I think CoreML is OK. TensorFlow / Caffe support is promised by Apple. I converted TensorFlow to CoreML, and modified / added some code to support CoreML in NNVM (for example, supporting UnaryFunctionLayer); the TensorFlow model works. I will create a pull request for CoreML model support soon.

Additionally, don’t consider ONNX. ONNX sucks, at least for now: its converters have many bugs, at least the CoreML / TensorFlow ones. NNVM’s ONNX support is not very good either; I am also implementing some missing operators in NNVM (for example, Affine).


I converted my Caffe model to CoreML. When I call graph, lib, params = nnvm.compiler.build(sym, target=tvm.target.mali(), shape=shape_dict, params=params, target_host=target_host), it fails as follows:

Traceback (most recent call last):
  File "from_coreml.py", line 70, in <module>
    graph, lib, params = nnvm.compiler.build(sym, target=tvm.target.mali(), shape=shape_dict, params=params, target_host=target_host);
  File "/home/firefly/.local/lib/python2.7/site-packages/nnvm-0.8.0-py2.7.egg/nnvm/compiler/build_module.py", line 261, in build
    graph = graph.apply("GraphFusePartition").apply("GraphFuseCompile")
  File "/home/firefly/.local/lib/python2.7/site-packages/nnvm-0.8.0-py2.7.egg/nnvm/graph.py", line 234, in apply
    check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
  File "/home/firefly/.local/lib/python2.7/site-packages/nnvm-0.8.0-py2.7.egg/nnvm/_base.py", line 75, in check_call
    raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: TVMCall CFunc Error:
Traceback (most recent call last):
  File "/home/firefly/.local/lib/python2.7/site-packages/tvm-0.2.0-py2.7-linux-aarch64.egg/tvm/_ffi/_ctypes/function.py", line 54, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/firefly/.local/lib/python2.7/site-packages/nnvm-0.8.0-py2.7.egg/nnvm/top/nn.py", line 82, in compute_conv2d
    out = topi.nn.conv2d(inputs[0], inputs[1], strides, padding, layout)
  File "", line 2, in conv2d
  File "/home/firefly/.local/lib/python2.7/site-packages/tvm-0.2.0-py2.7-linux-aarch64.egg/tvm/target.py", line 342, in dispatch_func
    return dispatch_dict[k](*args, **kwargs)
  File "/home/firefly/.local/lib/python2.7/site-packages/topi-0.2.0-py2.7.egg/topi/mali/conv2d.py", line 111, in decl_conv2d
    assert data.shape[0].value == 1, "only support batch size=1 convolution on mali"
  File "/home/firefly/.local/lib/python2.7/site-packages/tvm-0.2.0-py2.7-linux-aarch64.egg/tvm/container.py", line 23, in __getitem__
    raise IndexError("array index out of range")
IndexError: array index out of range

Do you know what causes this error? Thank you very much.

According to the message, I suspect that the batch size of the model is not 1 on the Mali platform, i.e. with input format NCHW, the N dimension is not 1. Could you check that the batch size is 1?

My conversion is:
model = coremltools.converters.caffe.convert(('/home/daming/work/ncnn_tool_20180301/dncnn/caffe_model/DnCNN_color_sigma_10.caffemodel', '/home/daming/work/ncnn_tool_20180301/dncnn/caffe_model/DnCNN_color_sigma_10.prototxt'), image_input_names='data', is_bgr=True)

================= Summary of the conversion: ===================================
Detected input(s) and shape(s) (ignoring batch size):
'data' : 3, 196, 196

Network Input name(s): 'data'.
Network Output name(s): 'prob'.

The Caffe-to-CoreML converter ignores the batch size. When you import into NNVM, could you try batch_size = 1 in the input shape?
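Concretely, the conversion summary above reports 'data' as (3, 196, 196) with the batch dimension dropped, so the shape handed to nnvm.compiler.build needs the batch dimension put back explicitly. A minimal pure-Python sketch (the input name 'data' comes from the summary; shape_dict is the variable passed to nnvm.compiler.build earlier in the thread):

```python
# CoreML reported the input as (channels, height, width) = (3, 196, 196),
# with the batch size dropped. Prepend an explicit batch dimension of 1
# (NCHW layout) before passing the shape to nnvm.compiler.build.
coreml_shape = (3, 196, 196)                  # from the conversion summary
shape_dict = {"data": (1,) + coreml_shape}    # NCHW with N = 1

# The Mali conv2d schedule asserts data.shape[0] == 1, so check up front.
assert shape_dict["data"][0] == 1, "only batch size 1 is supported on Mali"
print(shape_dict)  # {'data': (1, 3, 196, 196)}
```

With this shape_dict, the decl_conv2d assertion on Mali should no longer trip over a missing batch dimension.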