Hi, when I load a model in ONNX format and compile it with target cuda, I get the error 'Direct host side access to device memory is detected in fuse_reshape_broadcast_mul_conv2d_broadcast_mul_broadcast_add_elemwise_add. Did you forget to bind?'
Traceback (most recent call last):
File "from_onnx.py", line 84, in <module>
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
File "/home/wyq/tvm/nnvm/python/nnvm/compiler/build_module.py", line 307, in build
graph = graph.apply("GraphCompile")
File "/home/wyq/tvm/nnvm/python/nnvm/graph.py", line 234, in apply
check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
File "/home/wyq/tvm/nnvm/python/nnvm/_base.py", line 75, in check_call
raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: TVMCall CFunc Error:
Traceback (most recent call last):
File "/home/wyq/tvm/python/tvm/_ffi/_ctypes/function.py", line 54, in cfun
rv = local_pyfunc(*pyargs)
File "/home/wyq/tvm/nnvm/python/nnvm/compiler/build_module.py", line 124, in _build
return tvm.build(funcs, target=target, target_host=target_host)
File "/home/wyq/tvm/python/tvm/build_module.py", line 462, in build
"Did you forget to bind?" % func.name)
ValueError: Direct host side access to device memory is detected in fuse_reshape_broadcast_mul_conv2d_broadcast_mul_broadcast_add_elemwise_add.
Did you forget to bind?
Can you tell me your TVM commit hash (the output of git log)? It might be due to the recent change I made to NNVM operator fusion (fusing reshape with conv2d seems fishy).
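For reference, "Did you forget to bind?" means tvm.build produced a kernel for a device target (cuda here) in which some compute stage was never bound to GPU thread axes, so the lowered code would touch device memory from the host. A minimal standalone sketch of what binding looks like (hypothetical shapes, NNVM-era TVM 0.x API):

import tvm

n = 1024
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")
s = tvm.create_schedule(B.op)

# Without the two bind() calls below, tvm.build(..., "cuda") raises
# "Direct host side access to device memory is detected ... Did you forget to bind?"
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, tvm.thread_axis("blockIdx.x"))
s[B].bind(tx, tvm.thread_axis("threadIdx.x"))

f = tvm.build(s, [A, B], "cuda")

When NNVM fuses operators, the fused function reuses the schedule of its master op; if fusion produces a combination that schedule does not fully bind, you get this error.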
I changed the code and recompiled the TVM project following your tips, but the same error "ValueError: Direct host side access to device memory is detected in fuse_reshape_broadcast_mul_conv2d_broadcast_mul_broadcast_add_elemwise_add. Did you forget to bind?" occurred.
I think if you change the target from "cuda" to "llvm", there should be no error. Then you can save your ResNet-50 graph to a JSON file. Can you try this snippet and post the output somewhere?
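Something along these lines (reusing sym, shape_dict, and params from your script; the output filename is just an example):

import nnvm.compiler

target = "llvm"  # CPU target, so no thread binding is required
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)

# Graph.json() returns the serialized graph as a string
with open("resnet50_graph.json", "w") as f:
    f.write(graph.json())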
Hmm, I have never heard of SELayer. I guess this is what is causing the error, since it is not a standard layer.
TVM and NNVM are tested mostly on standard ImageNet models; if you try something new, weird errors can arise.
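If SELayer is a squeeze-and-excitation block, that would explain the reshape_broadcast_mul part of the fused op name: the per-channel gate is reshaped and broadcast-multiplied against a conv2d output. A rough numpy sketch of that data flow (shapes made up, FC layers omitted):

import numpy as np

N, C, H, W = 1, 64, 56, 56
conv_out = np.random.randn(N, C, H, W).astype("float32")  # a conv2d output

squeeze = conv_out.mean(axis=(2, 3))     # global average pool -> (N, C)
excite = 1.0 / (1.0 + np.exp(-squeeze))  # sigmoid gate
scale = excite.reshape(N, C, 1, 1)       # reshape so it broadcasts over H and W
out = conv_out * scale                   # the broadcast_mul that gets fused

Fusing that reshape together with conv2d is exactly the kind of non-standard pattern the fusion pass may not schedule correctly.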
I am facing a similar error while compiling an ONNX model.
Note: the error does not occur when compiling the model with NNVM at opt_level=0;
it appears only with opt_level=1, opt_level=2, or opt_level=3.
Code:
with nnvm.compiler.build_config(opt_level=1):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, dtype_dict, params=params)
Error:
Traceback (most recent call last):
File "sface_nchw_trail1.py", line 64, in <module>
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, dtype_dict, params=params)
File "/home/ubuntu/tvm_opencl/tvm/nnvm/python/nnvm/compiler/build_module.py", line 306, in build
graph = graph.apply("GraphCompile")
File "/home/ubuntu/tvm_opencl/tvm/nnvm/python/nnvm/graph.py", line 234, in apply
check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
File "/home/ubuntu/tvm_opencl/tvm/nnvm/python/nnvm/_base.py", line 75, in check_call
raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: TVMCall CFunc Error:
Traceback (most recent call last):
File "/home/ubuntu/tvm_opencl/tvm/python/tvm/_ffi/_ctypes/function.py", line 55, in cfun
rv = local_pyfunc(*pyargs)
File "/home/ubuntu/tvm_opencl/tvm/nnvm/python/nnvm/compiler/build_module.py", line 124, in _build
return tvm.build(funcs, target=target, target_host=target_host)
File "/home/ubuntu/tvm_opencl/tvm/python/tvm/build_module.py", line 586, in build
fhost, mdev = _build_for_device(flist, tar, target_host)
File "/home/ubuntu/tvm_opencl/tvm/python/tvm/build_module.py", line 415, in _build_for_device
"Did you forget to bind?" % func.name)
ValueError: Direct host side access to device memory is detected in fuse_matmul_relu. Did you forget to bind?
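Since the error only appears when fusion is enabled, the workaround I am using for now (same sym, target, shape_dict, and dtype_dict as above) is simply to disable it:

# opt_level=0 turns off operator fusion, so the unbound fused kernel
# (fuse_matmul_relu above) is never generated
with nnvm.compiler.build_config(opt_level=0):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, dtype_dict, params=params)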