File "/usr/local/lib/python2.7/dist-packages/nnvm-0.8.0-py2.7.egg/nnvm/compiler/graph_util.py", line 31, in infer_shape
graph = graph.apply("InferShape")
File "/usr/local/lib/python2.7/dist-packages/nnvm-0.8.0-py2.7.egg/nnvm/graph.py", line 234, in apply
check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
File "/usr/local/lib/python2.7/dist-packages/nnvm-0.8.0-py2.7.egg/nnvm/_base.py", line 75, in check_call
raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: Error in operator pad27: [16:21:43] /home/zhoukun/FrameWork/nnvm/src/top/nn/nn.cc:576: Check failed: param.pad_width.ndim() == dshape.ndim() (4 vs. 3)
I use TensorFlow as the backend and Keras as the frontend, and I want to use NNVM as my compiler to compile the MobileNet model.
This is part of my code:

target = 'llvm'
shape_dict = {'data': (1, 3, 224, 224)}
with nnvm.compiler.build_config(opt_level=2):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
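For context, here is a minimal sketch of what the full flow around that snippet presumably looks like. Loading MobileNet via keras.applications and converting it with nnvm.frontend.from_keras are assumptions filled in for illustration; they are not part of the original post:

import keras
import nnvm
import nnvm.compiler
import nnvm.frontend

# Load the pretrained Keras MobileNet (assumption: keras.applications is the source)
model = keras.applications.mobilenet.MobileNet(include_top=True, weights='imagenet',
                                               input_shape=(224, 224, 3))

# Convert the Keras model into an NNVM symbol plus parameter dict
sym, params = nnvm.frontend.from_keras(model)

# Compile for the LLVM CPU target with an NCHW input layout
target = 'llvm'
shape_dict = {'data': (1, 3, 224, 224)}
with nnvm.compiler.build_config(opt_level=2):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)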
There is probably a reshape operation right before the convolution / pad.
The reshape conversion in the Keras frontend was not accounting for the batch dimension, which results in an output of rank 3 instead of 4. Hence the pad operator reports an error about the dimension mismatch.
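In other words, a keras.layers.Reshape layer stores a target_shape that excludes the batch dimension, so a naive conversion emits a 3-D reshape while the following pad expects a 4-D tensor. A minimal sketch of the idea behind the fix, with illustrative shapes (this is not the actual frontend code):

import nnvm.symbol as sym

data = sym.Variable('data')          # 4-D input, e.g. (1, 1024, 7, 7)
target_shape = (7, 7, 1024)          # what a Keras Reshape layer stores (no batch dim)

# Naive conversion: the output becomes 3-D, so the later pad fails its rank check
bad = sym.reshape(data, shape=target_shape)

# Fixed conversion: keep the batch dimension so downstream 4-D ops still match
good = sym.reshape(data, shape=(1,) + target_shape)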
I tried the reshape fix and found that the results obtained by NNVM and Keras are different, and neither is correct. I used a cat picture with MobileNet.
NNVM top-1 id: 0, class name: tench, Tinca tinca
Keras top-1 id: 688, class name: oscilloscope, scope, cathode-ray oscilloscope, CRO
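For reference, a minimal sketch of how the two results above can be reproduced side by side. It assumes graph, lib, params come from the nnvm.compiler.build call above, model is the Keras MobileNet, and x is the preprocessed cat image as a (1, 3, 224, 224) float32 array; the 1000-class output shape and the NHWC transpose for Keras are assumptions about how the inputs were prepared:

import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Run the compiled NNVM module on the NCHW image
module = graph_runtime.create(graph, lib, tvm.cpu(0))
module.set_input(**params)
module.set_input('data', tvm.nd.array(x))
module.run()
nnvm_out = module.get_output(0, tvm.nd.empty((1, 1000))).asnumpy()
print('NNVM top-1 id:', np.argmax(nnvm_out))

# Run the original Keras model on the same image in NHWC layout
keras_out = model.predict(x.transpose(0, 2, 3, 1))
print('Keras top-1 id:', np.argmax(keras_out))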