Running an ONNX model on TVM

Hi everybody,

I have an ONNX model that I've tested using the following two approaches:

Approach 1:

from tvm.contrib import graph_runtime

module = graph_runtime.create(loaded_json, loaded_lib, ctx)
module.load_params(loaded_params)
module.set_input("input", input)  # "input" stands in for the model's actual input name
module.run()
output = module.get_output(0)

Approach 2:

module, params = relay.frontend.from_onnx(onnx_model, shape_dict)
ex = tvm.relay.create_executor("graph", module, tvm.cpu(0), target)
result = ex.evaluate()(input, **params).asnumpy()

The problem is that Approach 1 works fine, but with Approach 2 I end up with the following error:

"Check failed: pval != nullptr == false: Cannot allocate memory symbolic tensor shape [?, ?, ?, ?]"

The ONNX model and the two Python scripts for running it with both approaches are uploaded here. You can use them to reproduce the error.

I’m using this version of TVM: GitHub - gussmith23/tvm at 2021-05-18-fix-byodt-parsing

I would appreciate it if anyone could help me understand why Approach 2 doesn't work.

Thanks in advance

Hi @sahooora, maybe it's because your model has dynamic shapes. Could you try using the vm executor in Approach 2, as shown below?

ex = tvm.relay.create_executor("vm", module, tvm.cpu(0), target)
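
For context, here is a minimal end-to-end sketch of Approach 2 with the vm executor; the model path, input name, and shape below are placeholders for whatever your model actually declares:

import onnx
import numpy as np
import tvm
from tvm import relay

# placeholders: substitute your model file, input name, and shape
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}
input_data = np.random.rand(1, 3, 224, 224).astype("float32")

module, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# "vm" can execute models with dynamic shapes that the graph executor rejects
ex = tvm.relay.create_executor("vm", module, tvm.cpu(0), "llvm")
result = ex.evaluate()(input_data, **params).asnumpy()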

Sorry, I didn't notice you also used the graph runtime in Approach 1. I can run your script successfully with the latest TVM after making the following changes (we recently changed the create_executor API: you can now pass params to it when creating the executor):

ex = tvm.relay.create_executor("graph", module, tvm.cpu(0), target, params)

result = ex.evaluate()(input1, input2).asnumpy()
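
Put together, the updated flow looks roughly like this (the model path, input names, and shapes are placeholders):

import onnx
import numpy as np
import tvm
from tvm import relay

# placeholders: substitute your model file, input names, and shapes
onnx_model = onnx.load("model.onnx")
shape_dict = {"input1": (1, 3, 224, 224), "input2": (1, 3, 224, 224)}
input1 = np.random.rand(1, 3, 224, 224).astype("float32")
input2 = np.random.rand(1, 3, 224, 224).astype("float32")

module, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# params are bound when the executor is created, so only the
# actual inputs are passed at call time
ex = tvm.relay.create_executor("graph", module, tvm.cpu(0), "llvm", params)
result = ex.evaluate()(input1, input2).asnumpy()

Binding params at creation also lets the compiler treat the weights as constants, which enables optimizations like constant folding instead of re-passing the weights on every call.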


Thanks @yuchenj for your answer.

I've updated my TVM to the latest version, and the error has disappeared.

Glad to hear! Have fun!