Check failed: pval != nullptr == false: Cannot allocate memory symbolic tensor shape [? ? ?]

I am trying to extract the tasks from my model:

tasks = autotvm.task.extract_from_program(mod["main"],
                                          target=target,
                                          target_host=target_host,
                                          params=params,
                                          ops=None)


Then I got this issue:


Extract tasks...
Get errors with GraphRuntimeCodegen for task extraction. Fallback to VMCompiler. Error details:
Traceback (most recent call last):
  [bt] (8) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7fe43e43a28f]
  [bt] (7) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0xc2) [0x7fe43e3e21a2]
  [bt] (6) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)+0x8b) [0x7fe43e48b8bb]
  [bt] (5) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7fe43e43a28f]
  [bt] (4) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0x1b5) [0x7fe43e3e2295]
  [bt] (3) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::CreateToken(tvm::relay::RelayExprNode const*, bool)+0x185) [0x7fe43e3e1db5]
  [bt] (2) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::Request(tvm::relay::StorageToken*)+0x34) [0x7fe43e3e0f84]
  [bt] (1) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*)+0x296) [0x7fe43e3e0a56]
  [bt] (0) /home/xiaosong/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(+0x1252cb8) [0x7fe43e3decb8]
  File "/home/xiaosong/workspacae/installation/TVM/incubator-tvm/src/relay/backend/graph_plan_memory.cc", line 292
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------

  Check failed: pval != nullptr == false: Cannot allocate memory symbolic tensor shape [?, ?, ?, ?]

Why did this happen?

This is because your model has dynamic shapes. @merrymercy @comaniac Does AutoTVM support the VM compiler during tuning?
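For intuition: the graph runtime plans all memory statically, so every tensor dimension must be a known constant before execution. Below is a minimal pure-Python sketch of the kind of check that fails in `graph_plan_memory.cc`; the function name and error handling are illustrative, not TVM's actual API, with `None` standing in for a symbolic `?` dimension.

```python
def plan_static_buffer(shape, itemsize=4):
    """Compute a tensor's buffer size up front, as a static memory
    planner must. A symbolic dimension (None here) makes the byte
    count unknowable, so allocation planning has to fail."""
    nbytes = itemsize
    for dim in shape:
        if dim is None:  # symbolic '?' dimension from the Relay module
            raise RuntimeError(
                "Cannot allocate memory symbolic tensor shape "
                + str(["?" if d is None else d for d in shape]))
        nbytes *= dim
    return nbytes

print(plan_static_buffer([1, 3, 256, 256]))  # static shape: plannable
```

With any dimension replaced by `None`, the same call raises, which is the moment the extraction falls back to the VM compiler.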

No. Ansor only supports the graph runtime at the moment.

Thanks, @masahi @comaniac, for the prompt reply. Is there a workaround for this?

AutoTVM is able to handle dynamic-shape ops through the VM: https://github.com/apache/tvm/blob/main/python/tvm/autotvm/task/relay_integration.py#L54-L66

In your case it has already fallen back to VM.
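For context, the fallback visible in the log amounts to a try/except around the two compile paths. This is a paraphrased sketch, not the exact TVM source; the callables stand in for the real graph codegen and VM lowering steps.

```python
def lower_for_extraction(graph_codegen, vm_lower):
    """Try GraphRuntimeCodegen first; if it cannot handle the module
    (e.g. because of dynamic shapes), fall back to the VM compiler."""
    try:
        return graph_codegen()
    except Exception as err:
        print("Get errors with GraphRuntimeCodegen for task extraction. "
              "Fallback to VMCompiler. Error details:\n%s" % err)
        return vm_lower()

# Stub usage: the graph path fails on a dynamic shape, the VM path succeeds.
def failing_graph_codegen():
    raise RuntimeError("Cannot allocate memory symbolic tensor shape [?, ?, ?, ?]")

result = lower_for_extraction(failing_graph_codegen, lambda: "vm-lowered")
```

The warning in the log is therefore informational: the error is caught and extraction continues on the VM path.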

@kevinthesun Thanks for pointing that out. I did find some dynamic shapes in my ONNX model. So this error happens in the VM stage, right? Do you have any suggestions for this issue?

This error is in the graph runtime stage: "Get errors with GraphRuntimeCodegen"

I am pretty new to TVM. I have another model that runs successfully with the same script. Do you know how I can solve this issue?

Falling back to the VM is the expected behavior. It seems like you didn't get any error in this path (or didn't paste it here?).

@kevinthesun @masahi
This is the full log:

/home/workspacae/installation/TVM/incubator-tvm/python/tvm/target/target.py:460: UserWarning: tvm.target.create() is being deprecated. Please use tvm.target.Target() instead
  warnings.warn("tvm.target.create() is being deprecated. Please use tvm.target.Target() instead")
.... <function Target.current at 0x7f5303fc3b80>
Extract tasks...
shape_dict {'0': [1, 3, 256, 256]}
Extract tasks...  
Get errors with GraphRuntimeCodegen for task extraction. Fallback to VMCompiler. Error details:
Traceback (most recent call last):
  [bt] (8) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7f52db92528f]
  [bt] (7) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0xc2) [0x7f52db8cd1a2]
  [bt] (6) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)+0x8b) [0x7f52db9768bb]
  [bt] (5) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7f52db92528f]
  [bt] (4) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0x1b5) [0x7f52db8cd295]
  [bt] (3) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::CreateToken(tvm::RelayExprNode const*, bool)+0x185) [0x7f52db8ccdb5]
  [bt] (2) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::Request(tvm::relay::StorageToken*)+0x34) [0x7f52db8cbf84]
  [bt] (1) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*)+0x296) [0x7f52db8cba56]
  [bt] (0) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(+0x1252cb8) [0x7f52db8c9cb8]
  File "/home/workspacae/installation/TVM/incubator-tvm/src/relay/backend/graph_plan_memory.cc", line 292
TVMError: 
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
  Check failed: pval != nullptr == false: Cannot allocate memory symbolic tensor shape [?, ?, ?, ?]
[20:44:14] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f527501afa0)
[20:44:14] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5274e512f0)
[20:44:14] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f527507b870)
[20:44:14] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5274f564f0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f527400bda0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52750d7ef0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f527501ffa0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5274fbd240)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5274f163a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5274f343a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52750adc20)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52750314a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5275050a20)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52750906a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5274fb75a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52750ed570)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f527504a0f0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f527506a6a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52750658a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f527514b8f0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52751632a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52751652a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52751774f0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5275182df0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52751ae710)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52750986a0)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52751ec970)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5275217b70)
[20:44:15] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5275214250)
[20:44:16] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f5275250ef0)
[20:44:16] /home/workspacae/installation/TVM/incubator-tvm/src/te/schedule/bound.cc:119: not in feed graph consumer = hybrid(_conv_shape_func, 0x7f52751e70f0)
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/relay_integration.py", line 57, in _lower
    grc.codegen(opt_mod["main"])
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/relay/backend/graph_runtime_codegen.py", line 83, in codegen
    self._codegen(func)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7f52db92528f]
  [bt] (7) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0xc2) [0x7f52db8cd1a2]
  [bt] (6) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)+0x8b) [0x7f52db9768bb]
  [bt] (5) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7f52db92528f]
  [bt] (4) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0x1b5) [0x7f52db8cd295]
  [bt] (3) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::CreateToken(tvm::RelayExprNode const*, bool)+0x185) [0x7f52db8ccdb5]
  [bt] (2) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::Request(tvm::relay::StorageToken*)+0x34) [0x7f52db8cbf84]
  [bt] (1) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*)+0x296) [0x7f52db8cba56]
  [bt] (0) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(+0x1252cb8) [0x7f52db8c9cb8]
  File "/home/workspacae/installation/TVM/incubator-tvm/src/relay/backend/graph_plan_memory.cc", line 292
TVMError: 
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
  Check failed: pval != nullptr == false: Cannot allocate memory symbolic tensor shape [?, ?, ?, ?]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/relay_integration.py", line 66, in _lower
    compiler.lower(mod, target=target)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/relay/backend/vm.py", line 135, in lower
    self._lower(mod, target, target_host)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::vm::VMFunctionCompiler::VisitExpr_(tvm::relay::CallNode const*)+0x956) [0x7f52db9265a6]
  [bt] (7) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::runtime::Array<tvm::Type, void> const&), tvm::relay::vm::VMFunctionCompiler::VisitExpr_(tvm::relay::CallNode const*)::{lambda(tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::runtime::Array<tvm::Type, void> const&)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::Attrs const&, tvm::runtime::Array<tvm::Type, void> const&)+0x1d1) [0x7f52db922791]
  [bt] (6) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::vm::VMFunctionCompiler::EmitInvokeTVMOp(tvm::relay::Function const&, tvm::RelayExpr const&, tvm::RelayExpr const&)+0x863) [0x7f52db9219b3]
  [bt] (5) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::CompileEngineImpl::Lower(tvm::relay::CCacheKey const&)+0x25) [0x7f52db8c80d5]
  [bt] (4) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&)+0x75e) [0x7f52db8c730e]
  [bt] (3) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::CreateSchedule(tvm::relay::Function const&, tvm::Target const&)+0x44a) [0x7f52db8b052a]
  [bt] (2) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ScheduleGetter::Create(tvm::relay::Function const&)+0xd32) [0x7f52db8bdc22]
  [bt] (1) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::OpImplementation::Schedule(tvm::Attrs const&, tvm::runtime::Array<tvm::te::Tensor, void> const&, tvm::Target const&)+0xb6) [0x7f52db990876]
  [bt] (0) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(+0x553ad7) [0x7f52dabcaad7]
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/relay_integration.py", line 57, in _lower
    grc.codegen(opt_mod["main"])
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/relay/backend/graph_runtime_codegen.py", line 83, in codegen
    self._codegen(func)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
  [bt] (8) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7f52db92528f]
  [bt] (7) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0xc2) [0x7f52db8cd1a2]
  [bt] (6) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)+0x8b) [0x7f52db9768bb]
  [bt] (5) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6f) [0x7f52db92528f]
  [bt] (4) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0x1b5) [0x7f52db8cd295]
  [bt] (3) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::CreateToken(tvm::RelayExprNode const*, bool)+0x185) [0x7f52db8ccdb5]
  [bt] (2) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::Request(tvm::relay::StorageToken*)+0x34) [0x7f52db8cbf84]
  [bt] (1) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*)+0x296) [0x7f52db8cba56]
  [bt] (0) /home/workspacae/installation/TVM/incubator-tvm/build/libtvm.so(+0x1252cb8) [0x7f52db8c9cb8]
  File "/home/workspacae/installation/TVM/incubator-tvm/src/relay/backend/graph_plan_memory.cc", line 292
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/relay/op/strategy/generic.py", line 35, in wrapper
    return topi_schedule(outs)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/topi_integration.py", line 235, in wrapper
    return topi_schedule(cfg, outs, *args, **kwargs)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/cuda/conv2d.py", line 47, in schedule_conv2d_nchw
    traverse_inline(s, outs[0].op, _callback)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/utils.py", line 70, in traverse_inline
    _traverse(final_op)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/utils.py", line 67, in _traverse
    _traverse(tensor.op)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/utils.py", line 67, in _traverse
    _traverse(tensor.op)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/utils.py", line 67, in _traverse
    _traverse(tensor.op)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/utils.py", line 68, in _traverse
    callback(op)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/cuda/conv2d.py", line 45, in _callback
    schedule_direct_cuda(cfg, s, op.output(0))
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/cuda/conv2d_direct.py", line 32, in schedule_direct_cuda
    cfg.define_split("tile_y", y, num_outputs=4)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 730, in define_split
    return self._add_new_transform(SplitSpace, name, axes, policy, **kwargs)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 829, in _add_new_transform
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 829, in <listcomp>
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 687, in axis
    return VirtualAxis(var)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 137, in __init__
    self.length = get_const_int(var.dom.extent)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/utils.py", line 164, in get_const_int
    raise ValueError("Expect value to be constant int")
TVMError: 
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
  Check failed: pval != nullptr == false: Cannot allocate memory symbolic tensor shape [?, ?, ?, ?]

During handling of the above exception, another exception occurred:

ValueError: Expect value to be constant int
Traceback (most recent call last):
  File "tune_rx580.py", line 589, in <module>
    tune_and_evaluate(tuning_option)
  File "tune_rx580.py", line 500, in tune_and_evaluate
    tasks = autotvm.task.extract_from_program(mod["main"],
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/relay_integration.py", line 92, in extract_from_program
    return extract_from_multiple_program([mod], [params], target, target_host, ops)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/relay_integration.py", line 155, in extract_from_multiple_program
    tsk = create(task_name, args, target=target, target_host=target_host)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/task.py", line 457, in create
    sch, _ = ret.func(*args)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/task.py", line 234, in __call__
    return self._default_func(*args, **kwargs)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/task.py", line 242, in _default_func
    s = self.fschedule([out])
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/topi_integration.py", line 235, in wrapper
    return topi_schedule(cfg, outs, *args, **kwargs)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/cuda/conv2d.py", line 47, in schedule_conv2d_nchw
    traverse_inline(s, outs[0].op, _callback)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/utils.py", line 70, in traverse_inline
    _traverse(final_op)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/utils.py", line 68, in _traverse
    callback(op)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/cuda/conv2d.py", line 45, in _callback
    schedule_direct_cuda(cfg, s, op.output(0))
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/topi/cuda/conv2d_direct.py", line 32, in schedule_direct_cuda
    cfg.define_split("tile_y", y, num_outputs=4)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 730, in define_split
    return self._add_new_transform(SplitSpace, name, axes, policy, **kwargs)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 829, in _add_new_transform
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 829, in <listcomp>
    axes = [x if isinstance(x, (VirtualAxis, Axis)) else self.axis(x) for x in axes]
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 687, in axis
    return VirtualAxis(var)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/task/space.py", line 137, in __init__
    self.length = get_const_int(var.dom.extent)
  File "/home/workspacae/installation/TVM/incubator-tvm/python/tvm/autotvm/utils.py", line 164, in get_const_int
    raise ValueError("Expect value to be constant int")
ValueError: Expect value to be constant int
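The final `ValueError` shows why even the VM fallback cannot tune this op: `cfg.define_split` needs every axis extent to be a constant integer in order to enumerate tilings, and a dynamic extent never is. Here is a simplified stand-in for the check in `tvm/autotvm/utils.py` and the split space built on top of it; the first name mirrors the traceback, but both bodies are sketches, not the real implementations.

```python
def get_const_int(extent):
    """Return the axis extent as a plain int, mirroring the check that
    raises above. Anything symbolic is rejected."""
    if isinstance(extent, int) and not isinstance(extent, bool):
        return extent
    raise ValueError("Expect value to be constant int")

def define_split_lengths(extent, num_outputs=4):
    """A split search space can only be enumerated over a constant
    extent: here, the divisors a real SplitSpace would tile over."""
    length = get_const_int(extent)
    return [f for f in range(1, length + 1) if length % f == 0]
```

Passing a symbolic extent (any non-int) into `define_split_lengths` reproduces the `ValueError` from the log, which is why the tuning templates require fully static shapes.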

This is the model info (because of the length limit on each reply, I split the model into two replies):

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
print(mod) 

(1)

shape_dict {'0': [1, 3, 256, 256]}
Extract tasks...
def @main(%v0: Tensor[(1, 3, 256, 256), float32], %v1015: Tensor[(32, 3, 3, 3), float32], %v1016: Tensor[(32), float32], %v1018: Tensor[(32, 1, 3, 3), float32], %v1019: Tensor[(32), float32], %v1021: Tensor[(24, 32, 1, 1), float32], %v1022: Tensor[(24), float32], %v1024: Tensor[(144, 24, 1, 1), float32], %v1025: Tensor[(144), float32], %v1027: Tensor[(144, 1, 3, 3), float32], %v1028: Tensor[(144), float32], %v1030: Tensor[(32, 144, 1, 1), float32], %v1031: Tensor[(32), float32], %v1033: Tensor[(192, 32, 1, 1), float32], %v1034: Tensor[(192), float32], %v1036: Tensor[(192, 1, 3, 3), float32], %v1037: Tensor[(192), float32], %v1039: Tensor[(32, 192, 1, 1), float32], %v1040: Tensor[(32), float32], %v1042: Tensor[(192, 32, 1, 1), float32], %v1043: Tensor[(192), float32], %v1045: Tensor[(192, 1, 3, 3), float32], %v1046: Tensor[(192), float32], %v1048: Tensor[(32, 192, 1, 1), float32], %v1049: Tensor[(32), float32], %v1051: Tensor[(192, 32, 1, 1), float32], %v1052: Tensor[(192), float32], %v1054: Tensor[(192, 1, 5, 5), float32], %v1055: Tensor[(192), float32], %v1057: Tensor[(48, 192, 1, 1), float32], %v1058: Tensor[(48), float32], %v1060: Tensor[(288, 48, 1, 1), float32], %v1061: Tensor[(288), float32], %v1063: Tensor[(288, 1, 5, 5), float32], %v1064: Tensor[(288), float32], %v1066: Tensor[(48, 288, 1, 1), float32], %v1067: Tensor[(48), float32], %v1069: Tensor[(288, 48, 1, 1), float32], %v1070: Tensor[(288), float32], %v1072: Tensor[(288, 1, 5, 5), float32], %v1073: Tensor[(288), float32], %v1075: Tensor[(48, 288, 1, 1), float32], %v1076: Tensor[(48), float32], %v1078: Tensor[(288, 48, 1, 1), float32], %v1079: Tensor[(288), float32], %v1081: Tensor[(288, 1, 3, 3), float32], %v1082: Tensor[(288), float32], %v1084: Tensor[(96, 288, 1, 1), float32], %v1085: Tensor[(96), float32], %v1087: Tensor[(576, 96, 1, 1), float32], %v1088: Tensor[(576), float32], %v1090: Tensor[(576, 1, 3, 3), float32], %v1091: Tensor[(576), float32], %v1093: Tensor[(96, 576, 1, 1), float32], 
%v1094: Tensor[(96), float32], %v1096: Tensor[(576, 96, 1, 1), float32], %v1097: Tensor[(576), float32], %v1099: Tensor[(576, 1, 3, 3), float32], %v1100: Tensor[(576), float32], %v1102: Tensor[(96, 576, 1, 1), float32], %v1103: Tensor[(96), float32], %v1105: Tensor[(576, 96, 1, 1), float32], %v1106: Tensor[(576), float32], %v1108: Tensor[(576, 1, 3, 3), float32], %v1109: Tensor[(576), float32], %v1111: Tensor[(96, 576, 1, 1), float32], %v1112: Tensor[(96), float32], %v1114: Tensor[(576, 96, 1, 1), float32], %v1115: Tensor[(576), float32], %v1117: Tensor[(576, 1, 3, 3), float32], %v1118: Tensor[(576), float32], %v1120: Tensor[(96, 576, 1, 1), float32], %v1121: Tensor[(96), float32], %v1123: Tensor[(576, 96, 1, 1), float32], %v1124: Tensor[(576), float32], %v1126: Tensor[(576, 1, 5, 5), float32], %v1127: Tensor[(576), float32], %v1129: Tensor[(136, 576, 1, 1), float32], %v1130: Tensor[(136), float32], %v1132: Tensor[(816, 136, 1, 1), float32], %v1133: Tensor[(816), float32], %v1135: Tensor[(816, 1, 5, 5), float32], %v1136: Tensor[(816), float32], %v1138: Tensor[(136, 816, 1, 1), float32], %v1139: Tensor[(136), float32], %v1141: Tensor[(816, 136, 1, 1), float32], %v1142: Tensor[(816), float32], %v1144: Tensor[(816, 1, 5, 5), float32], %v1145: Tensor[(816), float32], %v1147: Tensor[(136, 816, 1, 1), float32], %v1148: Tensor[(136), float32], %v1150: Tensor[(816, 136, 1, 1), float32], %v1151: Tensor[(816), float32], %v1153: Tensor[(816, 1, 5, 5), float32], %v1154: Tensor[(816), float32], %v1156: Tensor[(136, 816, 1, 1), float32], %v1157: Tensor[(136), float32], %v1159: Tensor[(816, 136, 1, 1), float32], %v1160: Tensor[(816), float32], %v1162: Tensor[(816, 1, 5, 5), float32], %v1163: Tensor[(816), float32], %v1165: Tensor[(136, 816, 1, 1), float32], %v1166: Tensor[(136), float32], %v1168: Tensor[(816, 136, 1, 1), float32], %v1169: Tensor[(816), float32], %v1171: Tensor[(816, 1, 5, 5), float32], %v1172: Tensor[(816), float32], %v1174: Tensor[(232, 816, 1, 1), float32], 
%v1175: Tensor[(232), float32], %v1177: Tensor[(1392, 232, 1, 1), float32], %v1178: Tensor[(1392), float32], %v1180: Tensor[(1392, 1, 5, 5), float32], %v1181: Tensor[(1392), float32], %v1183: Tensor[(232, 1392, 1, 1), float32], %v1184: Tensor[(232), float32], %v1186: Tensor[(1392, 232, 1, 1), float32], %v1187: Tensor[(1392), float32], %v1189: Tensor[(1392, 1, 5, 5), float32], %v1190: Tensor[(1392), float32], %v1192: Tensor[(232, 1392, 1, 1), float32], %v1193: Tensor[(232), float32], %v1195: Tensor[(1392, 232, 1, 1), float32], %v1196: Tensor[(1392), float32], %v1198: Tensor[(1392, 1, 5, 5), float32], %v1199: Tensor[(1392), float32], %v1201: Tensor[(232, 1392, 1, 1), float32], %v1202: Tensor[(232), float32], %v1204: Tensor[(1392, 232, 1, 1), float32], %v1205: Tensor[(1392), float32], %v1207: Tensor[(1392, 1, 5, 5), float32], %v1208: Tensor[(1392), float32], %v1210: Tensor[(232, 1392, 1, 1), float32], %v1211: Tensor[(232), float32], %v1213: Tensor[(1392, 232, 1, 1), float32], %v1214: Tensor[(1392), float32], %v1216: Tensor[(1392, 1, 5, 5), float32], %v1217: Tensor[(1392), float32], %v1219: Tensor[(232, 1392, 1, 1), float32], %v1220: Tensor[(232), float32], %v1222: Tensor[(1392, 232, 1, 1), float32], %v1223: Tensor[(1392), float32], %v1225: Tensor[(1392, 1, 3, 3), float32], %v1226: Tensor[(1392), float32], %v1228: Tensor[(384, 1392, 1, 1), float32], %v1229: Tensor[(384), float32], %v1233: Tensor[(1), int64], %v1234: Tensor[(4), int64], %v1238: Tensor[(1), int64], %v1239: Tensor[(4), int64], %v1243: Tensor[(1), int64], %v1244: Tensor[(4), int64], %v1248: Tensor[(1), int64], %v1249: Tensor[(4), int64], %v1253: Tensor[(1), int64], %v1254: Tensor[(4), int64], %v1259: Tensor[(4), float32], %v1264: Tensor[(4), float32], %v1269: Tensor[(4), float32], %v1274: Tensor[(4), float32], %v1279: Tensor[(4), float32], %scratch.layer1_rn.weight: Tensor[(64, 32, 3, 3), float32], %scratch.layer2_rn.weight: Tensor[(128, 48, 3, 3), float32], %scratch.layer3_rn.weight: Tensor[(256, 136, 3, 
3), float32], %scratch.layer4_rn.weight: Tensor[(512, 384, 3, 3), float32], %scratch.output_conv.0.bias: Tensor[(32), float32], %scratch.output_conv.0.weight: Tensor[(32, 64, 3, 3), float32], %scratch.output_conv.2.bias: Tensor[(32), float32], %scratch.output_conv.2.weight: Tensor[(32, 32, 3, 3), float32], %scratch.output_conv.4.bias: Tensor[(1), float32], %scratch.output_conv.4.weight: Tensor[(1, 32, 1, 1), float32], %scratch.refinenet1.out_conv.bias: Tensor[(64), float32], %scratch.refinenet1.out_conv.weight: Tensor[(64, 64, 1, 1), float32], %scratch.refinenet1.resConfUnit1.conv1.bias: Tensor[(64), float32], %scratch.refinenet1.resConfUnit1.conv1.weight: Tensor[(64, 64, 3, 3), float32], %scratch.refinenet1.resConfUnit1.conv2.bias: Tensor[(64), float32], %scratch.refinenet1.resConfUnit1.conv2.weight: Tensor[(64, 64, 3, 3), float32], %scratch.refinenet1.resConfUnit2.conv1.bias: Tensor[(64), float32], %scratch.refinenet1.resConfUnit2.conv1.weight: Tensor[(64, 64, 3, 3), float32], %scratch.refinenet1.resConfUnit2.conv2.bias: Tensor[(64), float32], %scratch.refinenet1.resConfUnit2.conv2.weight: Tensor[(64, 64, 3, 3), float32], %scratch.refinenet2.out_conv.bias: Tensor[(64), float32], %scratch.refinenet2.out_conv.weight: Tensor[(64, 128, 1, 1), float32], %scratch.refinenet2.resConfUnit1.conv1.bias: Tensor[(128), float32], %scratch.refinenet2.resConfUnit1.conv1.weight: Tensor[(128, 128, 3, 3), float32], %scratch.refinenet2.resConfUnit1.conv2.bias: Tensor[(128), float32], %scratch.refinenet2.resConfUnit1.conv2.weight: Tensor[(128, 128, 3, 3), float32], %scratch.refinenet2.resConfUnit2.conv1.bias: Tensor[(128), float32], %scratch.refinenet2.resConfUnit2.conv1.weight: Tensor[(128, 128, 3, 3), float32], %scratch.refinenet2.resConfUnit2.conv2.bias: Tensor[(128), float32], %scratch.refinenet2.resConfUnit2.conv2.weight: Tensor[(128, 128, 3, 3), float32], %scratch.refinenet3.out_conv.bias: Tensor[(128), float32], %scratch.refinenet3.out_conv.weight: Tensor[(128, 256, 1, 1), 
float32], %scratch.refinenet3.resConfUnit1.conv1.bias: Tensor[(256), float32], %scratch.refinenet3.resConfUnit1.conv1.weight: Tensor[(256, 256, 3, 3), float32], %scratch.refinenet3.resConfUnit1.conv2.bias: Tensor[(256), float32], %scratch.refinenet3.resConfUnit1.conv2.weight: Tensor[(256, 256, 3, 3), float32], %scratch.refinenet3.resConfUnit2.conv1.bias: Tensor[(256), float32], %scratch.refinenet3.resConfUnit2.conv1.weight: Tensor[(256, 256, 3, 3), float32], %scratch.refinenet3.resConfUnit2.conv2.bias: Tensor[(256), float32], %scratch.refinenet3.resConfUnit2.conv2.weight: Tensor[(256, 256, 3, 3), float32], %scratch.refinenet4.out_conv.bias: Tensor[(256), float32], %scratch.refinenet4.out_conv.weight: Tensor[(256, 512, 1, 1), float32], %scratch.refinenet4.resConfUnit2.conv1.bias: Tensor[(512), float32], %scratch.refinenet4.resConfUnit2.conv1.weight: Tensor[(512, 512, 3, 3), float32], %scratch.refinenet4.resConfUnit2.conv2.bias: Tensor[(512), float32], %scratch.refinenet4.resConfUnit2.conv2.weight: Tensor[(512, 512, 3, 3), float32], %v483: Tensor[(1, 3, 1, 1), float32], %v485: Tensor[(1, 3, 1, 1), float32], %v500: Tensor[(1), int64], %v501: Tensor[(1), int64], %v502: Tensor[(1), int64], %v503: Tensor[(1), int64], %v509: float32, %v513: float32, %v514: float32, %v518: float32, %v519: float32, %v525: float32, %v526: float32, %v541: Tensor[(1), int64], %v542: Tensor[(1), int64], %v543: Tensor[(1), int64], %v544: Tensor[(1), int64], %v550: float32, %v554: float32, %v555: float32, %v561: float32, %v562: float32, %v566: float32, %v567: float32, %v574: float32, %v575: float32, %v579: float32, %v580: float32, %v587: float32, %v588: float32, %v603: Tensor[(1), int64], %v604: Tensor[(1), int64], %v605: Tensor[(1), int64], %v606: Tensor[(1), int64], %v612: float32, %v616: float32, %v617: float32, %v623: float32, %v624: float32, %v628: float32, %v629: float32, %v636: float32, %v637: float32, %v641: float32, %v642: float32, %v649: float32, %v650: float32, %v665: Tensor[(1), 
int64], %v666: Tensor[(1), int64], %v667: Tensor[(1), int64], %v668: Tensor[(1), int64], %v674: float32, %v678: float32, %v679: float32, %v685: float32, %v686: float32, %v690: float32, %v691: float32, %v698: float32, %v699: float32, %v703: float32, %v704: float32, %v711: float32, %v712: float32, %v716: float32, %v717: float32, %v724: float32, %v725: float32, %v729: float32, %v730: float32, %v737: float32, %v738: float32, %v742: float32, %v743: float32, %v749: float32, %v750: float32, %v754: float32, %v755: float32, %v762: float32, %v763: float32, %v767: float32, %v768: float32, %v775: float32, %v776: float32, %v780: float32, %v781: float32, %v788: float32, %v789: float32, %v793: float32, %v794: float32, %v801: float32, %v802: float32, %v817: Tensor[(1), int64], %v818: Tensor[(1), int64], %v819: Tensor[(1), int64], %v820: Tensor[(1), int64], %v826: float32, %v830: float32, %v831: float32, %v837: float32, %v838: float32, %v842: float32, %v843: float32, %v850: float32, %v851: float32, %v855: float32, %v856: float32, %v863: float32, %v864: float32, %v868: float32, %v869: float32, %v876: float32, %v877: float32, %v881: float32, %v882: float32, %v889: float32, %v890: float32, %v894: float32, %v895: float32, %v902: float32, %v903: float32, %v907: float32, %v908: float32) {


  %0 = subtract(%v0, %v483);
  %1 = divide(%0, %v485);
  %2 = dyn.full(0, %v1233, shape=None, dtype="int64");
  %3 = (%v1234, %2);
  %4 = concatenate(%3);
  %5 = reshape(%4, newshape=[-1, 2]);
  %6 = scatter(meta[relay.Constant][0], %v500, %v501, meta[relay.attrs.ScatterAttrs][0]);
  %7 = cast_like(0, %6);
  %8 = less(%6, %7);
  %9 = shape_of(%5, dtype="int32");
  %10 = cast_like(%9, %6);
  %11 = add(%6, %10);
  %12 = where(%8, %11, %6);
  %13 = shape_of(%5, dtype="int64");
  %14 = scatter(%13, %v500, %v502, meta[relay.attrs.ScatterAttrs][1]);
  %15 = scatter(meta[relay.Constant][1], %v500, %v503, meta[relay.attrs.ScatterAttrs][2]);
  %16 = dyn.strided_slice(%5, %12, %14, %15, begin=None, end=None, strides=None);
  %17 = transpose(%16, axes=[1, 0]);
  %18 = reshape(%17, newshape=[-1]);
  %19 = cast(%18, dtype="int64");
  %20 = reshape(%19, newshape=[2, -1]);
  %21 = transpose(%20, axes=None);
  %22 = take(%v509, 0);
  %23 = dyn.nn.pad(%1, %21, %22, pad_width=[]);
  %24 = nn.conv2d(%23, %v1015, strides=[2, 2], padding=[0, 0, 0, 0], kernel_size=[3, 3]);
  %25 = nn.bias_add(%24, %v1016);
  %26 = maximum(%25, %v513);
  %27 = minimum(%26, %v514);
  %28 = nn.conv2d(%27, %v1018, padding=[1, 1, 1, 1], groups=32, kernel_size=[3, 3]);
  %29 = nn.bias_add(%28, %v1019);
  %30 = maximum(%29, %v518);
  %31 = minimum(%30, %v519);
  %32 = nn.conv2d(%31, %v1021, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %33 = nn.bias_add(%32, %v1022);
  %34 = nn.conv2d(%33, %v1024, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %35 = nn.bias_add(%34, %v1025);
  %36 = maximum(%35, %v525);
  %37 = minimum(%36, %v526);
  %38 = dyn.full(0, %v1238, shape=None, dtype="int64");
  %39 = (%v1239, %38);
  %40 = concatenate(%39);
  %41 = reshape(%40, newshape=[-1, 2]);
  %42 = scatter(meta[relay.Constant][2], %v541, %v542, meta[relay.attrs.ScatterAttrs][3]);
  %43 = cast_like(0, %42);
  %44 = less(%42, %43);
  %45 = shape_of(%41, dtype="int32");
  %46 = cast_like(%45, %42);
  %47 = add(%42, %46);
  %48 = where(%44, %47, %42);
  %49 = shape_of(%41, dtype="int64");
  %50 = scatter(%49, %v541, %v543, meta[relay.attrs.ScatterAttrs][4]);
  %51 = scatter(meta[relay.Constant][3], %v541, %v544, meta[relay.attrs.ScatterAttrs][5]);
  %52 = dyn.strided_slice(%41, %48, %50, %51, begin=None, end=None, strides=None);
  %53 = transpose(%52, axes=[1, 0]);
  %54 = reshape(%53, newshape=[-1]);
  %55 = cast(%54, dtype="int64");
  %56 = reshape(%55, newshape=[2, -1]);
  %57 = transpose(%56, axes=None);
  %58 = take(%v550, 0);
  %59 = dyn.nn.pad(%37, %57, %58, pad_width=[]);
  %60 = nn.conv2d(%59, %v1027, strides=[2, 2], padding=[0, 0, 0, 0], groups=144, kernel_size=[3, 3]);
  %61 = nn.bias_add(%60, %v1028);
  %62 = maximum(%61, %v554);
  %63 = minimum(%62, %v555);
  %64 = nn.conv2d(%63, %v1030, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %65 = nn.bias_add(%64, %v1031);
  %66 = nn.conv2d(%65, %v1033, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %67 = nn.bias_add(%66, %v1034);
  %68 = maximum(%67, %v561);
  %69 = minimum(%68, %v562);
  %70 = nn.conv2d(%69, %v1036, padding=[1, 1, 1, 1], groups=192, kernel_size=[3, 3]);
  %71 = nn.bias_add(%70, %v1037);
  %72 = maximum(%71, %v566);
  %73 = minimum(%72, %v567);
  %74 = nn.conv2d(%73, %v1039, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %75 = nn.bias_add(%74, %v1040);
  %76 = add(%75, %65);
  %77 = nn.conv2d(%76, %v1042, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %78 = nn.bias_add(%77, %v1043);
  %79 = maximum(%78, %v574);
  %80 = minimum(%79, %v575);
  %81 = nn.conv2d(%80, %v1045, padding=[1, 1, 1, 1], groups=192, kernel_size=[3, 3]);
  %82 = nn.bias_add(%81, %v1046);
  %83 = maximum(%82, %v579);
  %84 = minimum(%83, %v580);
  %85 = nn.conv2d(%84, %v1048, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %86 = nn.bias_add(%85, %v1049);
  %87 = add(%86, %76);
  %88 = nn.conv2d(%87, %v1051, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %89 = nn.bias_add(%88, %v1052);
  %90 = maximum(%89, %v587);
  %91 = minimum(%90, %v588);
  %92 = dyn.full(0, %v1243, shape=None, dtype="int64");
  %93 = (%v1244, %92);
  %94 = concatenate(%93);
  %95 = reshape(%94, newshape=[-1, 2]);
  %96 = scatter(meta[relay.Constant][4], %v603, %v604, meta[relay.attrs.ScatterAttrs][6]);
  %97 = cast_like(0, %96);
  %98 = less(%96, %97);
  %99 = shape_of(%95, dtype="int32");
  %100 = cast_like(%99, %96);
  %101 = add(%96, %100);
  %102 = where(%98, %101, %96);
  %103 = shape_of(%95, dtype="int64");
  %104 = scatter(%103, %v603, %v605, meta[relay.attrs.ScatterAttrs][7]);
  %105 = scatter(meta[relay.Constant][5], %v603, %v606, meta[relay.attrs.ScatterAttrs][8]);
  %106 = dyn.strided_slice(%95, %102, %104, %105, begin=None, end=None, strides=None);
  %107 = transpose(%106, axes=[1, 0]);
  %108 = reshape(%107, newshape=[-1]);
  %109 = cast(%108, dtype="int64");
  %110 = reshape(%109, newshape=[2, -1]);
  %111 = transpose(%110, axes=None);
  %112 = take(%v612, 0);
  %113 = dyn.nn.pad(%91, %111, %112, pad_width=[]);
  %114 = nn.conv2d(%113, %v1054, strides=[2, 2], padding=[0, 0, 0, 0], groups=192, kernel_size=[5, 5]);
  %115 = nn.bias_add(%114, %v1055);
  %116 = maximum(%115, %v616);
  %117 = minimum(%116, %v617);
  %118 = nn.conv2d(%117, %v1057, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %119 = nn.bias_add(%118, %v1058);
  %120 = nn.conv2d(%119, %v1060, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %121 = nn.bias_add(%120, %v1061);
  %122 = maximum(%121, %v623);
  %123 = minimum(%122, %v624);
  %124 = nn.conv2d(%123, %v1063, padding=[2, 2, 2, 2], groups=288, kernel_size=[5, 5]);
  %125 = nn.bias_add(%124, %v1064);
  %126 = maximum(%125, %v628);
  %127 = minimum(%126, %v629);
  %128 = nn.conv2d(%127, %v1066, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %129 = nn.bias_add(%128, %v1067);
  %130 = add(%129, %119);
  %131 = nn.conv2d(%130, %v1069, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %132 = nn.bias_add(%131, %v1070);
  %133 = maximum(%132, %v636);
  %134 = minimum(%133, %v637);
  %135 = nn.conv2d(%134, %v1072, padding=[2, 2, 2, 2], groups=288, kernel_size=[5, 5]);
  %136 = nn.bias_add(%135, %v1073);
  %137 = maximum(%136, %v641);
  %138 = minimum(%137, %v642);
  %139 = nn.conv2d(%138, %v1075, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %140 = nn.bias_add(%139, %v1076);
  %141 = add(%140, %130);
  %142 = nn.conv2d(%141, %v1078, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %143 = nn.bias_add(%142, %v1079);
  %144 = maximum(%143, %v649);
  %145 = minimum(%144, %v650);
  %146 = dyn.full(0, %v1248, shape=None, dtype="int64");
  %147 = (%v1249, %146);
  %148 = concatenate(%147);
  %149 = reshape(%148, newshape=[-1, 2]);
  %150 = scatter(meta[relay.Constant][6], %v665, %v666, meta[relay.attrs.ScatterAttrs][9]);
  %151 = cast_like(0, %150);
  %152 = less(%150, %151);
  %153 = shape_of(%149, dtype="int32");
  %154 = cast_like(%153, %150);
  %155 = add(%150, %154);
  %156 = where(%152, %155, %150);
  %157 = shape_of(%149, dtype="int64");
  %158 = scatter(%157, %v665, %v667, meta[relay.attrs.ScatterAttrs][10]);
  %159 = scatter(meta[relay.Constant][7], %v665, %v668, meta[relay.attrs.ScatterAttrs][11]);
  %160 = dyn.strided_slice(%149, %156, %158, %159, begin=None, end=None, strides=None);
  %161 = transpose(%160, axes=[1, 0]);
  %162 = reshape(%161, newshape=[-1]);
  %163 = cast(%162, dtype="int64");
  %164 = reshape(%163, newshape=[2, -1]);
  %165 = transpose(%164, axes=None);
  %166 = take(%v674, 0);
  %167 = dyn.nn.pad(%145, %165, %166, pad_width=[]);
  %168 = nn.conv2d(%167, %v1081, strides=[2, 2], padding=[0, 0, 0, 0], groups=288, kernel_size=[3, 3]);
  %169 = nn.bias_add(%168, %v1082);
  %170 = maximum(%169, %v678);
  %171 = minimum(%170, %v679);
  %172 = nn.conv2d(%171, %v1084, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %173 = nn.bias_add(%172, %v1085);
  %174 = nn.conv2d(%173, %v1087, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %175 = nn.bias_add(%174, %v1088);
  %176 = maximum(%175, %v685);
  %177 = minimum(%176, %v686);
  %178 = nn.conv2d(%177, %v1090, padding=[1, 1, 1, 1], groups=576, kernel_size=[3, 3]);
  %179 = nn.bias_add(%178, %v1091);
  %180 = maximum(%179, %v690);
  %181 = minimum(%180, %v691);
  %182 = nn.conv2d(%181, %v1093, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %183 = nn.bias_add(%182, %v1094);
  %184 = add(%183, %173);
  %185 = nn.conv2d(%184, %v1096, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %186 = nn.bias_add(%185, %v1097);
  %187 = maximum(%186, %v698);
  %188 = minimum(%187, %v699);
  %189 = nn.conv2d(%188, %v1099, padding=[1, 1, 1, 1], groups=576, kernel_size=[3, 3]);
  %190 = nn.bias_add(%189, %v1100);
  %191 = maximum(%190, %v703);
  %192 = minimum(%191, %v704);
  %193 = nn.conv2d(%192, %v1102, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %194 = nn.bias_add(%193, %v1103);
  %195 = add(%194, %184);
  %196 = nn.conv2d(%195, %v1105, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %197 = nn.bias_add(%196, %v1106);
  %198 = maximum(%197, %v711);
  %199 = minimum(%198, %v712);
  %200 = nn.conv2d(%199, %v1108, padding=[1, 1, 1, 1], groups=576, kernel_size=[3, 3]);
  %201 = nn.bias_add(%200, %v1109);
  %202 = maximum(%201, %v716);
  %203 = minimum(%202, %v717);
  %204 = nn.conv2d(%203, %v1111, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %205 = nn.bias_add(%204, %v1112);
  %206 = add(%205, %195);
  %207 = nn.conv2d(%206, %v1114, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %208 = nn.bias_add(%207, %v1115);
  %209 = maximum(%208, %v724);
  %210 = minimum(%209, %v725);
  %211 = nn.conv2d(%210, %v1117, padding=[1, 1, 1, 1], groups=576, kernel_size=[3, 3]);
  %212 = nn.bias_add(%211, %v1118);
  %213 = maximum(%212, %v729);
  %214 = minimum(%213, %v730);
  %215 = nn.conv2d(%214, %v1120, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %216 = nn.bias_add(%215, %v1121);
  %217 = add(%216, %206);
  %218 = nn.conv2d(%217, %v1123, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %219 = nn.bias_add(%218, %v1124);
  %220 = maximum(%219, %v737);
  %221 = minimum(%220, %v738);
  %222 = nn.conv2d(%221, %v1126, padding=[2, 2, 2, 2], groups=576, kernel_size=[5, 5]);
  %223 = nn.bias_add(%222, %v1127);
  %224 = maximum(%223, %v742);
  %225 = minimum(%224, %v743);
  %226 = nn.conv2d(%225, %v1129, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %227 = nn.bias_add(%226, %v1130);
  %228 = nn.conv2d(%227, %v1132, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %229 = nn.bias_add(%228, %v1133);
  %230 = maximum(%229, %v749);
  %231 = minimum(%230, %v750);
  %232 = nn.conv2d(%231, %v1135, padding=[2, 2, 2, 2], groups=816, kernel_size=[5, 5]);
  %233 = nn.bias_add(%232, %v1136);
  %234 = maximum(%233, %v754);
  %235 = minimum(%234, %v755);
  %236 = nn.conv2d(%235, %v1138, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %237 = nn.bias_add(%236, %v1139);
  %238 = add(%237, %227);
  %239 = nn.conv2d(%238, %v1141, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %240 = nn.bias_add(%239, %v1142);
  %241 = maximum(%240, %v762);
  %242 = minimum(%241, %v763);
  %243 = nn.conv2d(%242, %v1144, padding=[2, 2, 2, 2], groups=816, kernel_size=[5, 5]);
  %244 = nn.bias_add(%243, %v1145);
  %245 = maximum(%244, %v767);
  %246 = minimum(%245, %v768);
  %247 = nn.conv2d(%246, %v1147, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %248 = nn.bias_add(%247, %v1148);
  %249 = add(%248, %238);
  %250 = nn.conv2d(%249, %v1150, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %251 = nn.bias_add(%250, %v1151);
  %252 = maximum(%251, %v775);
  %253 = minimum(%252, %v776);
  %254 = nn.conv2d(%253, %v1153, padding=[2, 2, 2, 2], groups=816, kernel_size=[5, 5]);
  %255 = nn.bias_add(%254, %v1154);
  %256 = maximum(%255, %v780);
  %257 = minimum(%256, %v781);
  %258 = nn.conv2d(%257, %v1156, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %259 = nn.bias_add(%258, %v1157);
  %260 = add(%259, %249);
  %261 = nn.conv2d(%260, %v1159, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %262 = nn.bias_add(%261, %v1160);
  %263 = maximum(%262, %v788);
  %264 = minimum(%263, %v789);
  %265 = nn.conv2d(%264, %v1162, padding=[2, 2, 2, 2], groups=816, kernel_size=[5, 5]);
  %266 = nn.bias_add(%265, %v1163);
  %267 = maximum(%266, %v793);
  %268 = minimum(%267, %v794);
  %269 = nn.conv2d(%268, %v1165, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %270 = nn.bias_add(%269, %v1166);
  %271 = add(%270, %260);
  %272 = nn.conv2d(%271, %v1168, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %273 = nn.bias_add(%272, %v1169);
  %274 = maximum(%273, %v801);
  %275 = minimum(%274, %v802);
  %276 = dyn.full(0, %v1253, shape=None, dtype="int64");
  %277 = (%v1254, %276);
  %278 = concatenate(%277);
  %279 = reshape(%278, newshape=[-1, 2]);
  %280 = scatter(meta[relay.Constant][8], %v817, %v818, meta[relay.attrs.ScatterAttrs][12]);
  %281 = cast_like(0, %280);
  %282 = less(%280, %281);
  %283 = shape_of(%279, dtype="int32");
  %284 = cast_like(%283, %280);
  %285 = add(%280, %284);
  %286 = where(%282, %285, %280);
  %287 = shape_of(%279, dtype="int64");
  %288 = scatter(%287, %v817, %v819, meta[relay.attrs.ScatterAttrs][13]);
  %289 = scatter(meta[relay.Constant][9], %v817, %v820, meta[relay.attrs.ScatterAttrs][14]);
  %290 = dyn.strided_slice(%279, %286, %288, %289, begin=None, end=None, strides=None);
  %291 = transpose(%290, axes=[1, 0]);
  %292 = reshape(%291, newshape=[-1]);
  %293 = cast(%292, dtype="int64");
  %294 = reshape(%293, newshape=[2, -1]);
  %295 = transpose(%294, axes=None);
  %296 = take(%v826, 0);
  %297 = dyn.nn.pad(%275, %295, %296, pad_width=[]);
  %298 = nn.conv2d(%297, %v1171, strides=[2, 2], padding=[0, 0, 0, 0], groups=816, kernel_size=[5, 5]);
  %299 = nn.bias_add(%298, %v1172);
  %300 = maximum(%299, %v830);
  %301 = minimum(%300, %v831);
  %302 = nn.conv2d(%301, %v1174, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %303 = nn.bias_add(%302, %v1175);
  %304 = nn.conv2d(%303, %v1177, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %305 = nn.bias_add(%304, %v1178);
  %306 = maximum(%305, %v837);
  %307 = minimum(%306, %v838);
  %308 = nn.conv2d(%307, %v1180, padding=[2, 2, 2, 2], groups=1392, kernel_size=[5, 5]);
  %309 = nn.bias_add(%308, %v1181);
  %310 = maximum(%309, %v842);
  %311 = minimum(%310, %v843);
  %312 = nn.conv2d(%311, %v1183, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %313 = nn.bias_add(%312, %v1184);
  %314 = add(%313, %303);
  %315 = nn.conv2d(%314, %v1186, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %316 = nn.bias_add(%315, %v1187);
  %317 = maximum(%316, %v850);
  %318 = minimum(%317, %v851);
  %319 = nn.conv2d(%318, %v1189, padding=[2, 2, 2, 2], groups=1392, kernel_size=[5, 5]);
  %320 = nn.bias_add(%319, %v1190);
  %321 = maximum(%320, %v855);
  %322 = minimum(%321, %v856);
  %323 = nn.conv2d(%322, %v1192, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %324 = nn.bias_add(%323, %v1193);
  %325 = add(%324, %314);
  %326 = nn.conv2d(%325, %v1195, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %327 = nn.bias_add(%326, %v1196);
  %328 = maximum(%327, %v863);
  %329 = minimum(%328, %v864);
  %330 = nn.conv2d(%329, %v1198, padding=[2, 2, 2, 2], groups=1392, kernel_size=[5, 5]);
  %331 = nn.bias_add(%330, %v1199);
  %332 = maximum(%331, %v868);
  %333 = minimum(%332, %v869);
  %334 = nn.conv2d(%333, %v1201, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %335 = nn.bias_add(%334, %v1202);
  %336 = add(%335, %325);
  %337 = nn.conv2d(%336, %v1204, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %338 = nn.bias_add(%337, %v1205);
  %339 = maximum(%338, %v876);
  %340 = minimum(%339, %v877);
  %341 = nn.conv2d(%340, %v1207, padding=[2, 2, 2, 2], groups=1392, kernel_size=[5, 5]);
  %342 = nn.bias_add(%341, %v1208);
  %343 = maximum(%342, %v881);
  %344 = minimum(%343, %v882);
  %345 = nn.conv2d(%344, %v1210, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %346 = nn.bias_add(%345, %v1211);
  %347 = add(%346, %336);
  %348 = nn.conv2d(%347, %v1213, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %349 = nn.bias_add(%348, %v1214);
  %350 = maximum(%349, %v889);
  %351 = minimum(%350, %v890);
  %352 = nn.conv2d(%351, %v1216, padding=[2, 2, 2, 2], groups=1392, kernel_size=[5, 5]);
  %353 = nn.bias_add(%352, %v1217);
  %354 = maximum(%353, %v894);
  %355 = minimum(%354, %v895);
  %356 = nn.conv2d(%355, %v1219, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %357 = nn.bias_add(%356, %v1220);
  %358 = add(%357, %347);
  %359 = nn.conv2d(%358, %v1222, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %360 = nn.bias_add(%359, %v1223);
  %361 = maximum(%360, %v902);
  %362 = minimum(%361, %v903);
  %363 = nn.conv2d(%362, %v1225, padding=[1, 1, 1, 1], groups=1392, kernel_size=[3, 3]);
  %364 = nn.bias_add(%363, %v1226);
  %365 = maximum(%364, %v907);
  %366 = minimum(%365, %v908);
  %367 = nn.conv2d(%366, %v1228, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %368 = nn.bias_add(%367, %v1229);
  %369 = nn.conv2d(%368, %scratch.layer4_rn.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %370 = nn.relu(%369);
  %371 = nn.conv2d(%370, %scratch.refinenet4.resConfUnit2.conv1.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %372 = nn.bias_add(%371, %scratch.refinenet4.resConfUnit2.conv1.bias);
  %373 = nn.relu(%372);
  %374 = nn.conv2d(%373, %scratch.refinenet4.resConfUnit2.conv2.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %375 = nn.bias_add(%374, %scratch.refinenet4.resConfUnit2.conv2.bias);
  %376 = add(%375, %369);
  %377 = shape_of(%376, dtype="int32");
  %378 = cast(%377, dtype="float32");
  %379 = multiply(%378, %v1259);
  %380 = strided_slice(%379, begin=[2], end=[4], strides=[1]);
  %381 = dyn.image.resize(%376, %380, size=[], coordinate_transformation_mode="align_corners");
  %382 = nn.conv2d(%381, %scratch.refinenet4.out_conv.weight, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %383 = nn.bias_add(%382, %scratch.refinenet4.out_conv.bias);
  %384 = nn.conv2d(%271, %scratch.layer3_rn.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %385 = nn.relu(%384);
  %386 = nn.conv2d(%385, %scratch.refinenet3.resConfUnit1.conv1.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %387 = nn.bias_add(%386, %scratch.refinenet3.resConfUnit1.conv1.bias);
  %388 = nn.relu(%387);
  %389 = nn.conv2d(%388, %scratch.refinenet3.resConfUnit1.conv2.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %390 = nn.bias_add(%389, %scratch.refinenet3.resConfUnit1.conv2.bias);
  %391 = add(%390, %384);
  %392 = add(%383, %391);
  %393 = nn.relu(%392);
  %394 = nn.conv2d(%393, %scratch.refinenet3.resConfUnit2.conv1.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %395 = nn.bias_add(%394, %scratch.refinenet3.resConfUnit2.conv1.bias);
  %396 = nn.relu(%395);
  %397 = nn.conv2d(%396, %scratch.refinenet3.resConfUnit2.conv2.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %398 = nn.bias_add(%397, %scratch.refinenet3.resConfUnit2.conv2.bias);
  %399 = add(%398, %392);
  %400 = shape_of(%399, dtype="int32");
  %401 = cast(%400, dtype="float32");
  %402 = multiply(%401, %v1264);
  %403 = strided_slice(%402, begin=[2], end=[4], strides=[1]);
  %404 = dyn.image.resize(%399, %403, size=[], coordinate_transformation_mode="align_corners");
  %405 = nn.conv2d(%404, %scratch.refinenet3.out_conv.weight, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %406 = nn.bias_add(%405, %scratch.refinenet3.out_conv.bias);
  %407 = nn.conv2d(%141, %scratch.layer2_rn.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %408 = nn.relu(%407);
  %409 = nn.conv2d(%408, %scratch.refinenet2.resConfUnit1.conv1.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %410 = nn.bias_add(%409, %scratch.refinenet2.resConfUnit1.conv1.bias);
  %411 = nn.relu(%410);
  %412 = nn.conv2d(%411, %scratch.refinenet2.resConfUnit1.conv2.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %413 = nn.bias_add(%412, %scratch.refinenet2.resConfUnit1.conv2.bias);
  %414 = add(%413, %407);
  %415 = add(%406, %414);
  %416 = nn.relu(%415);
  %417 = nn.conv2d(%416, %scratch.refinenet2.resConfUnit2.conv1.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %418 = nn.bias_add(%417, %scratch.refinenet2.resConfUnit2.conv1.bias);
  %419 = nn.relu(%418);
  %420 = nn.conv2d(%419, %scratch.refinenet2.resConfUnit2.conv2.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %421 = nn.bias_add(%420, %scratch.refinenet2.resConfUnit2.conv2.bias);
  %422 = add(%421, %415);
  %423 = shape_of(%422, dtype="int32");
  %424 = cast(%423, dtype="float32");
  %425 = multiply(%424, %v1269);
  %426 = strided_slice(%425, begin=[2], end=[4], strides=[1]);
  %427 = dyn.image.resize(%422, %426, size=[], coordinate_transformation_mode="align_corners");
  %428 = nn.conv2d(%427, %scratch.refinenet2.out_conv.weight, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %429 = nn.bias_add(%428, %scratch.refinenet2.out_conv.bias);
  %430 = nn.conv2d(%87, %scratch.layer1_rn.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %431 = nn.relu(%430);
  %432 = nn.conv2d(%431, %scratch.refinenet1.resConfUnit1.conv1.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %433 = nn.bias_add(%432, %scratch.refinenet1.resConfUnit1.conv1.bias);
  %434 = nn.relu(%433);
  %435 = nn.conv2d(%434, %scratch.refinenet1.resConfUnit1.conv2.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %436 = nn.bias_add(%435, %scratch.refinenet1.resConfUnit1.conv2.bias);
  %437 = add(%436, %430);
  %438 = add(%429, %437);
  %439 = nn.relu(%438);
  %440 = nn.conv2d(%439, %scratch.refinenet1.resConfUnit2.conv1.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %441 = nn.bias_add(%440, %scratch.refinenet1.resConfUnit2.conv1.bias);
  %442 = nn.relu(%441);
  %443 = nn.conv2d(%442, %scratch.refinenet1.resConfUnit2.conv2.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %444 = nn.bias_add(%443, %scratch.refinenet1.resConfUnit2.conv2.bias);
  %445 = add(%444, %438);
  %446 = shape_of(%445, dtype="int32");
  %447 = cast(%446, dtype="float32");
  %448 = multiply(%447, %v1274);
  %449 = strided_slice(%448, begin=[2], end=[4], strides=[1]);
  %450 = dyn.image.resize(%445, %449, size=[], coordinate_transformation_mode="align_corners");
  %451 = nn.conv2d(%450, %scratch.refinenet1.out_conv.weight, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %452 = nn.bias_add(%451, %scratch.refinenet1.out_conv.bias);
  %453 = nn.conv2d(%452, %scratch.output_conv.0.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %454 = nn.bias_add(%453, %scratch.output_conv.0.bias);
  %455 = shape_of(%454, dtype="int32");
  %456 = cast(%455, dtype="float32");
  %457 = multiply(%456, %v1279);
  %458 = strided_slice(%457, begin=[2], end=[4], strides=[1]);
  %459 = dyn.image.resize(%454, %458, size=[]);
  %460 = nn.conv2d(%459, %scratch.output_conv.2.weight, padding=[1, 1, 1, 1], kernel_size=[3, 3]);
  %461 = nn.bias_add(%460, %scratch.output_conv.2.bias);
  %462 = nn.relu(%461);
  %463 = nn.conv2d(%462, %scratch.output_conv.4.weight, padding=[0, 0, 0, 0], kernel_size=[1, 1]);
  %464 = nn.bias_add(%463, %scratch.output_conv.4.bias);
  %465 = nn.relu(%464);
  squeeze(%465, axis=[1])
}

I encountered the same issue. I think it was introduced by PR #6351 (Dynamic ONNX Importer). That PR supports dynamic shapes while parsing the ONNX model, but it generates dynamic expressions even when the original operators are not dynamic.

Consider a simple ONNX model exported by PyTorch code like this:

import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.t = torch.ones(3, 5)

    def forward(self, x):
        return torch.matmul(x, self.t)

data = torch.ones(1, 2, 3)
torch.onnx.export(M(), (data,), "torch_matmul.onnx")

This ONNX model triggers the same error while extracting tuning tasks. Adding the DynamicToStatic pass before extracting tasks solves the issue for this model:

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
mod = relay.transform.DynamicToStatic()(mod)
tasks = autotvm.task.extract_from_program(mod["main"],
                                          target=target,
                                          target_host=target_host,
                                          params=params,
                                          ops=None)

But this method cannot solve all similar issues. Consider another ONNX model exported by this PyTorch code:

import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.b = torch.ones(3)

    def forward(self, x):
        return x * self.b.expand(x.shape)

data = torch.ones(1, 3)
torch.onnx.export(M(), (data,), "torch_gather.onnx")

The DynamicToStatic pass cannot transform all the dynamic expressions to static ones, so the issue still exists for this model. We have a monkey-patch workaround for this situation, but it is not elegant.

Thanks for sharing that. Interestingly, after I updated TVM to the latest version, the issue changed to:

tensor type `Tensor[(1), bool]` has 1 dimensions, while `bool` has 0 dimensions
The Relay type checker is unable to show the following types match.
In particular `Tensor[(1), bool]` does not match `bool`
tensor type `Tensor[(?, 1, ?, ?), float32]` has 4 dimensions, while `Tensor[(?, ?, ?), float32]` has 3 dimensions
The Relay type checker is unable to show the following types match.
In particular `Tensor[(?, 1, ?, ?), float32]` does not match `Tensor[(?, ?, ?), float32]`
Traceback (most recent call last):
  File "run_onnx_tvm_camera.py", line 118, in <module>
    mod = relay.transform.DynamicToStatic()(mod)
  File "/home/workspacae/installation/TVM/tvm/python/tvm/ir/transform.py", line 127, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "/home/workspacae/installation/TVM/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
  [bt] (8) /home/workspacae/installation/TVM/tvm/build/libtvm.so(+0x872d74) [0x7f8db62fed74]
  [bt] (7) /home/workspacae/installation/TVM/tvm/build/libtvm.so(tvm::relay::transform::FunctionPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x8f7) [0x7f8db6dad867]
  [bt] (6) /home/workspacae/installation/TVM/tvm/build/libtvm.so(+0x112cffe) [0x7f8db6bb8ffe]
  [bt] (5) /home/workspacae/installation/TVM/tvm/build/libtvm.so(tvm::relay::DynamicToStatic(tvm::relay::Function, tvm::IRModule)+0x528) [0x7f8db6bb7028]
  [bt] (4) /home/workspacae/installation/TVM/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1d4) [0x7f8db62fe3d4]
  [bt] (3) /home/workspacae/installation/TVM/tvm/build/libtvm.so(+0x120eeb2) [0x7f8db6c9aeb2]
  [bt] (2) /home/workspacae/installation/TVM/tvm/build/libtvm.so(+0x120de37) [0x7f8db6c99e37]
  [bt] (1) /home/workspacae/installation/TVM/tvm/build/libtvm.so(tvm::DiagnosticContext::Render()+0x231) [0x7f8db62af391]
  [bt] (0) /home/workspacae/installation/TVM/tvm/build/libtvm.so(+0x822f88) [0x7f8db62aef88]
  File "/home/workspacae/installation/TVM/tvm/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.

I also tried `mod = relay.transform.DynamicToStatic()(mod)`, but got the same issue.

One thing I noticed: when I export the model from PyTorch to ONNX, I need to set opset_version to 11; version 9 doesn’t work. Does this matter?

torch.onnx.export(model, sample, ntpath.basename(model_path).rsplit('.', 1)[0]+'.onnx', verbose=True, opset_version=11)

cc @mbrookhart hopefully he can comment on the dynamic ONNX importer problem.

@Bigtree your latest error seems to be a shape inference problem. It looks like there may be a missing expand_dims or broadcast somewhere, which makes me think there’s a problem in the importer. Is it possible for you to share your ONNX file for debugging?

@huochaitiantang I’m kind of surprised DynamicToStatic can’t handle this; I will debug today.

An unfortunate issue with ONNX is that inside the ONNX definitions, basically everything is dynamically shaped. To support models with actual dynamism, we import them with the same dynamism that ONNX has and then try to re-staticify after the fact. Clearly, we’re hitting some holes in that approach. I will see what we can do to harden it.
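For concreteness, this is roughly what a dynamic dimension looks like inside an ONNX model's protobuf (a sketch; the tensor name and sizes are illustrative). A `dim_param` carries a symbolic name while a `dim_value` pins a concrete size; any `dim_param` the importer sees becomes a dynamic shape on the Relay side:

```
input {
  name: "x"
  type {
    tensor_type {
      elem_type: 1                   # FLOAT
      shape {
        dim { dim_param: "batch" }   # symbolic -> imported as dynamic
        dim { dim_value: 3 }         # concrete -> imported as static
      }
    }
  }
}
```

This is also why the batch-fixing scripts further down in this thread work: they overwrite `dim_param` entries with a concrete `dim_value` before import.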

@huochaitiantang Torch is actually exporting your seemingly simple graph as a very complex ONNX model with a lot of shape-related parameters treated as input weights instead of constants.

To get DynamicToStatic to work here, I need to freeze the params (i.e., turn all of those fabricated weights into constants):

import onnx
import tvm
from tvm import relay
onnx_model = onnx.load("torch_gather.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, freeze_params=True)
print(relay.transform.DynamicToStatic()(mod))

When I do that, I get the expected multiply by a constant:

def @main(%v0: Tensor[(1, 3), float32]) -> Tensor[(1, 3), float32] {
  multiply(%v0, meta[relay.Constant][0] /* ty=Tensor[(1, 3), float32] */) /* ty=Tensor[(1, 3), float32] */
}

@BigTree, maybe try freezing parameters on your model?


@mbrookhart Thanks a lot for that. Here is the link to the model. Please let me know whether you can access it.

https://drive.google.com/file/d/1zu7NAKJCQTdo2qvh-VZrI2Lg5xEMEbR4/view?usp=sharing

I changed the batch dimension in the ONNX model, and that solved the problem.

import onnx

def change_input_dim(model):
    # Replace the symbolic batch dimension with a concrete value.
    sym_batch_dim = "batch"
    actual_batch_dim = 4
    graph = model.graph
    nodes = list(graph.input) + list(graph.value_info) + list(graph.output)
    for node in nodes:
        if not len(node.type.tensor_type.shape.dim):
            continue
        if node.type.tensor_type.shape.dim[0].dim_param == sym_batch_dim:
            print(node)
            node.type.tensor_type.shape.dim[0].dim_value = actual_batch_dim

def apply(transform, infile, outfile):
    model = onnx.load(infile)
    transform(model)
    onnx.save(model, outfile)

apply(change_input_dim, r"input-file-name", r"output-file-name")