Hello,
I am trying to use heterogeneous execution (running conv2d on the GPU) with a MobileNet+SSD V1 model imported from MXNet (.params and .json files).
I’ve followed the example shown in #3621 for annotation using a visitor: compilation goes well and the resulting graph looks correct (nodes copied and assigned to different devices).
But I can’t execute the model. I’m doing:
```python
target = {"gpu": "cuda", "cpu": "llvm"}
with relay.build_config(opt_level=3, fallback_device=tvm.cpu(0)):
    graph, lib, params = relay.build(net, target=target, params=params)
ctx = [tvm.cpu(0), tvm.context("cuda")]
mod = runtime.create(graph, lib, ctx)
mod.set_input(**params)
mod.run()
```
and I get the error:
```
mod.run()
  File "/home/renault/tvm/python/tvm/contrib/graph_runtime.py", line 168, in run
    self._run()
  File "tvm/_ffi/_cython/./function.pxi", line 310, in tvm._ffi._cy3.core.FunctionBase.call
  File "tvm/_ffi/_cython/./function.pxi", line 245, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./function.pxi", line 234, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 170, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /home/renault/tvm/build/libtvm.so(TVMFuncCall+0x61) [0x7f30c19ba901]
  [bt] (2) /home/renault/tvm/build/libtvm.so(tvm::runtime::GraphRuntime::Run()+0x47) [0x7f30c1a0d897]
  [bt] (1) /home/renault/tvm/build/libtvm.so(+0x138d6c7) [0x7f30c1a0f6c7]
  [bt] (0) /home/renault/tvm/build/libtvm.so(+0x1346ac0) [0x7f30c19c8ac0]
  File "/home/renault/tvm/src/runtime/module_util.cc", line 73
TVMError: Check failed: ret == 0 (-1 vs. 0) : Assert fail: (dev_type == 2), device_type need to be 2
```
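For reference, the `device_type need to be 2` in the assert refers to the DLPack device-type codes TVM uses (CPU = 1, CUDA GPU = 2), so a kernel compiled for CUDA seems to be receiving a CPU tensor. A small sketch of that mapping (codes taken from `dlpack.h`; the helper name is mine, not part of TVM):

```python
# DLPack device-type codes as used in TVM asserts (dlpack.h: kDLCPU = 1, kDLGPU = 2).
DEVICE_TYPE_CODES = {
    "cpu": 1,
    "gpu": 2,  # CUDA GPU
}

def explain_device_check(expected_code):
    """Hypothetical helper: map a device_type code from an error like
    'Assert fail: (dev_type == 2)' back to a device name."""
    names = {code: name for name, code in DEVICE_TYPE_CODES.items()}
    return names.get(expected_code, "unknown")

print(explain_device_check(2))  # prints "gpu": the failing op expected a CUDA tensor
```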
Do you have any idea what I’m doing wrong? Thanks!
@zhiics