Cannot allocate memory symbolic tensor shape [?, 3] with onnx ssd model

I am trying to run an ONNX SSD model from here. While building the Relay module, I see the error below:

Traceback (most recent call last):
  File "ssd_tvm.py", line 20, in <module>
    lib = relay.build(mod, target, params=params)
  File "/workspace/tvm_latest/python/tvm/relay/build_module.py", line 358, in build
    mod=ir_mod, target=target, params=params, executor=executor, mod_name=mod_name
  File "/workspace/tvm_latest/python/tvm/relay/build_module.py", line 172, in build
    self._build(mod, target, target_host, executor, mod_name)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 267, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  23: TVMFuncCall
  22: _ZNSt17_Function_handlerIFvN
  21: tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
  20: tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, tvm::runtime::NDArray> > > const&, tvm::runtime::String)
  19: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::backend::GraphExecutorCodegenModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  18: tvm::relay::backend::GraphExecutorCodegen::Codegen(tvm::relay::Function, tvm::runtime::String)
  17: tvm::relay::GraphPlanMemory(tvm::relay::Function const&)
  16: tvm::relay::StorageAllocator::Plan(tvm::relay::Function const&)
  15: tvm::relay::StorageAllocaBaseVisitor::GetToken(tvm::RelayExpr const&)
  14: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  13: tvm::relay::StorageAllocaBaseVisitor::VisitExpr_(tvm::relay::TupleNode const*)
  12: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  11: tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)
  10: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  9: tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)
  8: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  7: tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)
  6: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  5: tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)
  4: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  3: tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)
  2: tvm::relay::StorageAllocator::CreateToken(tvm::RelayExprNode const*, bool)
  1: tvm::relay::StorageAllocator::Request(tvm::relay::StorageToken*)
  0: tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*)
  File "/workspace/tvm_latest/src/relay/backend/graph_plan_memory.cc", line 372
TVMError: 
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (pval != nullptr) is false: Cannot allocate memory symbolic tensor shape [?, 3]

Can you post a link to the script you are using? I’ve been able to run this model before.

This error happens when you try to compile a model with dynamic shapes using the graph codegen (`relay.build(...)`). You need to use the VM compiler for models involving dynamic shapes or control flow.

Thanks @masahi for the info. One more question.

Does the VM compiler support the BYOC flow? If yes, could you please share a sample Python script?

Yes, the BYOC flow works with the VM compiler just as well as with the graph runtime. See tvm/test_external_codegen.py at e883dcba2e2529d4dcf23169a7c72494b0b5b60b · apache/tvm · GitHub