Can't compile tiny yoloV3 onnx model

Hi, I have a problem compiling the tiny YOLOv3 ONNX model from here.

Here is my script:

import onnx
import tvm
import tvm.relay as relay

model_path = "model/tiny-yolov3-11.onnx"
onnx_model = onnx.load(model_path)

target = "llvm"

# The model has two inputs: the image tensor and the original image shape.
input_name = "input_1"
image_shape = "image_shape"
shape = (1, 3, 224, 224)
shape1 = (1, 2)
shape_dict = {input_name: shape, image_shape: shape1}

mod, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict, freeze_params=True)
mod = relay.transform.DynamicToStatic()(mod)

with tvm.transform.PassContext(opt_level=1):
    executor = relay.build(mod, target=target, params=params)

The error message:

Check failed: (pval != nullptr) is false: Cannot allocate memory symbolic tensor shape [?]

It looks like a dynamic-operator problem, but I have already set specific input shapes. Does anyone know how to fix this, or have any advice? Thanks.

You need to use the VM compiler and runtime; see Compile PyTorch Object Detection Models — tvm 0.9.dev0 documentation.
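Roughly like this (a minimal sketch, reusing the mod, params and input shapes from your script; the input data, target and device here are only placeholders):

import numpy as np
import tvm
from tvm import relay
from tvm.runtime.vm import VirtualMachine

# Compile with the Relay VM instead of relay.build; mod/params come from
# relay.frontend.from_onnx as in the script above.
with tvm.transform.PassContext(opt_level=3):
    vm_exec = relay.vm.compile(mod, target="llvm", params=params)

# Run with the VM runtime; the inputs below are random placeholders with
# the shapes used in shape_dict above.
dev = tvm.cpu()
vm = VirtualMachine(vm_exec, dev)
data = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
img_shape = np.array([[224, 224]], dtype="float32")
vm.set_input("main", **{"input_1": data, "image_shape": img_shape})
result = vm.run()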

Thanks, it works. Some more questions:

  1. The following messages appear during compilation; is that normal?
[15:03:28] /home/tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_expand_dim_shape_func, 0x62312d0)
[15:03:30] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:30] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:30] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:30] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:36] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:36] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:36] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:36] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:44] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:44] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:44] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
[15:03:44] /home/tvm/src/relay/transforms/let_list.h:54: Warning: letlist not used
  2. What's the difference between relay.build and vm.compile? Are there other compilers, and how do I know which one I should choose?

  3. The note under Compile with Relay VM says “Currently only CPU target is supported”, so I can't compile YOLOv3 for other targets, right?

  1. It's normal; no need to worry about it.

  2. If your model contains dynamic shapes or control flow, you need to use vm.compile. You can first try relay.build(...), and if you hit the Check failed: (pval != nullptr) is false: Cannot allocate memory symbolic tensor shape [?] error, switch to vm.compile (see the sketch after this list). We do have other compilers/runtimes, but you don't need to worry about them.

  3. You can ignore that sentence. It is specific to the model used in that tutorial, and even that note is outdated. GPU is fully supported by the VM.
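To make point 2 concrete, a minimal sketch of that “try relay.build, fall back to vm.compile” decision (the helper name and the caught exception type are my own choices, not a fixed TVM API; per point 3 a GPU target such as "cuda" works here too):

import tvm
from tvm import relay

def compile_model(mod, params, target="llvm"):
    # Try the static graph path first; it works when all shapes are static
    # (e.g. after DynamicToStatic) and there is no control flow.
    try:
        return "graph", relay.build(mod, target=target, params=params)
    except tvm.TVMError:
        # Dynamic shapes or control flow remain: fall back to the Relay VM.
        return "vm", relay.vm.compile(mod, target=target, params=params)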

Thanks for your explanation. May I ask why vm.compile can handle dynamic shapes and control flow? I see that vm.compile lowers the graph earlier than relay.build, but it doesn't seem to infer the exact shapes either, so I am curious how vm.compile solves this.

If I want to add a new backend, does that mean I have to handle dynamic shapes myself when I choose vm.compile? I notice vm.compile has an external compilation interface.