Running a model with the TVM debugger

Hi everybody,

I used the TVM debugger based on the description given here.

Running it gives me three files: output_tensors.params, execution_trace.json, and graph_dump.json.

I get a different output_tensors.params file on every run! Does anyone know why the parameter dump differs from one run to the next?
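One quick way to confirm the dumps really differ byte-for-byte (rather than, say, just a changing timestamp somewhere) is to hash output_tensors.params from two runs and compare the digests. A minimal sketch in plain Python; the file names run1.params and run2.params are hypothetical stand-ins for the dump from each run:

```python
import hashlib

def file_digest(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-ins for output_tensors.params copied out of two debugger runs.
with open("run1.params", "wb") as f:
    f.write(b"\x00\x01\x02\x03")
with open("run2.params", "wb") as f:
    f.write(b"\x00\x01\x02\x03")

# Identical digests mean the two runs produced identical dumps.
print(file_digest("run1.params") == file_digest("run2.params"))  # True
```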

Thanks in advance

This is a known bug. There is currently a PR fixing it here: [Graph Executor Debugger] Fix parameter dump by mehrdadh · Pull Request #7903 · apache/tvm · GitHub

Thank you so much for your response! I see the PR has been merged into the master branch now, so I've cloned the latest version, but I still have the same issue!

hi @sahooora,

could you double-check you’re driving the model with the same inputs and parameters each time? if so, could you provide some more info such as which revision of TVM you’re using and some scripts we can use to reproduce your error?
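One common source of run-to-run differences is an input that is regenerated randomly each time. Seeding the generator makes the runs comparable; a generic sketch (not TVM-specific, the function name make_input is hypothetical):

```python
import random

def make_input(seed, n=4):
    """Deterministic pseudo-random input vector: same seed -> same values."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Two "runs" fed the same seed receive bit-identical inputs.
a = make_input(42)
b = make_input(42)
print(a == b)  # True
```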

thanks, Andrew

Hi @areusch,

Yes, I’m using the same input and params in each run, but with graph_executor.create I get two different output_tensors.params files across two runs.

I’m using commit 8c56ce3b9076 of TVM. I’ve uploaded my code, model, and input image here to reproduce the issue.


Use this code to construct the graph_executor:

from tvm.contrib.debugger.debug_executor import GraphModuleDebug
m = GraphModuleDebug(exe["debug_create"]("default", dev), [dev], exe.get_json(), dump_root=None)

I’ve encountered errors with graph_debugger.create in the past but I haven’t had a chance to figure out what is wrong.

Hi, I’ve also run into this problem when using the same code. Have you solved it?