Hi! I want to run an ONNX model on TVM. According to the docs, it's easy to use timeit to get the inference time of the whole model. However, I didn't find a method to profile each operator's running time within a model. For example, if ResNet runs for about 2 seconds, I'd like to know how much of that time is spent computing convolutions. I found that PyTorch provides a Profiler class, which is exactly what I need. Is there a tool like that in TVM?
You can use the `debug_executor`.
Thank you very much for the answer. I have confirmed that `debug_executor` works. But I'm trying to run a YOLOv5 model, which contains control flow, so I have to build it with the VM instead of the graph runtime. Is there a way to measure the operators' running time when the model is built with the VM?
Thank you very much!!!