[ONNX] Failed to compile onnx yolov5s model

I was trying to compile a YOLOv5s ONNX model, but it failed during the compilation stage. The error message is:

Traceback (most recent call last):
  File "test2.py", line 18, in <module>
    graph, lib, params = relay.build(mod, params=params, target='llvm -mcpu=skylake-avx512')
  File "/home/ubuntu/tvm/python/tvm/relay/build_module.py", line 485, in build
    mod_name=mod_name,
  File "/home/ubuntu/tvm/python/tvm/relay/build_module.py", line 202, in build
    self._build(mod, target, target_host, executor, runtime, workspace_memory_pools, mod_name)
  File "/home/ubuntu/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
19: TVMFuncCall
18: tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, 
std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
17: tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, tvm::runtime::String const&)
16: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::relay::backend::GraphExecutorCodegenModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
15: tvm::relay::backend::GraphExecutorCodegen::Codegen(tvm::IRModule, tvm::relay::Function, tvm::runtime::String)
14: tvm::relay::GraphPlanMemory(tvm::relay::Function const&)
13: tvm::relay::StorageAllocator::Plan(tvm::relay::Function const&)
12: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
11: tvm::relay::transform::DeviceAwareExprVisitor::VisitExpr_(tvm::relay::FunctionNode const*)
10: tvm::relay::StorageAllocaBaseVisitor::DeviceAwareVisitExpr_(tvm::relay::FunctionNode const*)
9: tvm::relay::StorageAllocaBaseVisitor::GetToken(tvm::RelayExpr const&)
8: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
7: tvm::relay::StorageAllocaBaseVisitor::VisitExpr_(tvm::relay::TupleNode const*)
6: tvm::relay::StorageAllocaBaseVisitor::GetToken(tvm::RelayExpr const&)
5: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
4: tvm::relay::transform::DeviceAwareExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
3: tvm::relay::StorageAllocator::DeviceAwareVisitExpr_(tvm::relay::CallNode const*)
2: tvm::relay::StorageAllocaBaseVisitor::CreateToken(tvm::RelayExprNode const*, bool)
1: tvm::relay::StorageAllocator::CreateTokenOnDevice(tvm::RelayExprNode const*, tvm::VirtualDevice const&, bool)
0: tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*)
File "/home/ubuntu/tvm/src/relay/backend/graph_plan_memory.cc", line 398
TVMError: 
---------------------------------------------------------------

An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (pval != nullptr) is false: Cannot allocate memory symbolic tensor shape [1, ?, 85]

The model I used is the one from the "TorchScript, ONNX, CoreML Export" page of the YOLOv5 documentation, exported using the command:

python3 export.py --weights yolov5s.pt --batch 1 --opset 14 --dynamic
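For reference, my compile script (test2.py) is roughly the sketch below. The input name "images" and the (1, 3, 640, 640) shape are assumptions based on the default YOLOv5 export settings, not confirmed values from my actual script:

```python
# Rough sketch of test2.py. The input name "images" and the
# 640x640 input shape are assumptions from the default YOLOv5
# export settings.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("yolov5s.onnx")

# The model was exported with --dynamic, so some dimensions are
# symbolic. The shape dict pins the graph input, but intermediate
# tensors (e.g. [1, ?, 85]) can still remain symbolic in Relay.
shape_dict = {"images": (1, 3, 640, 640)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build(
        mod, params=params, target="llvm -mcpu=skylake-avx512"
    )
```

The failure happens at the relay.build call (line 18 in the traceback above).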

I am wondering whether anyone has encountered a similar problem, and how to solve it.