I tried to use the debugger to find the bottleneck of the module, following the instructions in the tutorial:
set the USE_GRAPH_RUNTIME_DEBUG flag to ON
make
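The two steps above amount to editing config.cmake and rebuilding; a minimal sketch, assuming the standard TVM build layout (a build/ directory containing config.cmake):

```shell
# In the TVM checkout, enable the debug graph runtime in build/config.cmake:
#   set(USE_GRAPH_RUNTIME_DEBUG ON)
cd build
cmake ..
make -j4
```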
In the frontend script, instead of from tvm.contrib import graph_runtime, import the debug runtime:
from tvm.contrib.debugger import debug_runtime as graph_runtime
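For reference, the debug runtime is meant as a drop-in replacement for graph_runtime, so only the import changes; a minimal sketch, assuming graph, lib, ctx, and params already come from your build step (those names are placeholders, not part of the original post):

```python
# Swap the normal runtime for the debug runtime; the API is the same.
# from tvm.contrib import graph_runtime                     # normal runtime
from tvm.contrib.debugger import debug_runtime as graph_runtime

# `graph`, `lib`, `ctx`, and `params` are assumed to exist from relay.build.
m = graph_runtime.create(graph, lib, ctx)
m.set_input(**params)
m.run()  # with the debug runtime, this also dumps per-operator timing
```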
But I encountered an error:
Illegal instruction (core dumped)
I used the example code from the tutorials. I tried https://docs.tvm.ai/_downloads/83dedc6352b4016772e17480ef01345d/deploy_model_on_rasp.py and https://docs.tvm.ai/_downloads/f83f0c3da8a2ab10657c61e034b7218d/from_pytorch.py, as well as my own code. All fail with the same error as above, but all of them work with from tvm.contrib import graph_runtime rather than the debugger runtime.
I have no idea how to solve it. Thanks
This may be a problem with dependent shared libraries.
Try the following.
In your ~/.gdbinit add the line:
handle SIGILL nostop noprint
Thanks for your help. But it still raises the same error after I updated the gdbinit file.
I think something may have gone wrong when building tvm. Even if I use from tvm.contrib import graph_runtime, it still raises the error.
Then I set set(USE_GRAPH_RUNTIME_DEBUG OFF) back, ran cmake, and made again. But I still encountered the illegal instruction error.
Try with a fresh build in debug mode, i.e.
cd build
cmake -DCMAKE_BUILD_TYPE=Debug ..
make
Then in GDB, if you hit the illegal-instruction exception, run again after telling GDB not to stop on SIGILL:
$ gdb <however you call gdb>
... exception
gdb> handle SIGILL nostop noprint
gdb> run
If you’re doing this on a Raspberry Pi over RPC, make sure that USE_GRAPH_RUNTIME_DEBUG
is ON on both your host and on the RPC server (e.g. the rpi device).
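Concretely, that means rebuilding the runtime on the device with the flag enabled and then restarting the RPC server there; a sketch, assuming the usual TVM RPC setup on the device (host/port values here are placeholders):

```shell
# On the rpi: enable the flag in config.cmake, rebuild the runtime,
#   set(USE_GRAPH_RUNTIME_DEBUG ON)
make runtime

# then restart the RPC server so the new runtime is picked up:
python -m tvm.exec.rpc_server --host 0.0.0.0 --port=9090
```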