Error: Incompatible broadcast type TensorType([1, 218, 290, 64], float32) and TensorType([1, 217, 289, 64], float32)

Hello,

While compiling a model for the Arm architecture using the tvmc command, I'm getting the following error message:

TVMError: The source maps are not populated for this module. Please use `tvm.relay.transform.AnnotateSpans` to attach source maps for error reporting. Error: Incompatible broadcast type TensorType([1, 218, 290, 64], float32) and TensorType([1, 217, 289, 64], float32)

and the stack trace:

Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/driver/tvmc/__main__.py", line 24, in <module>
    tvmc.main.main()
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/driver/tvmc/main.py", line 94, in main
    sys.exit(_main(sys.argv[1:]))
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/driver/tvmc/main.py", line 87, in _main
    return args.func(args)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/driver/tvmc/compiler.py", line 137, in drive_compile
    tvmc_model = frontends.load_model(args.FILE, args.model_format, args.input_shapes)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/driver/tvmc/frontends.py", line 404, in load_model
    mod, params = frontend.load(path, shape_dict, **kwargs)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/driver/tvmc/frontends.py", line 198, in load
    return relay.frontend.from_tensorflow(graph_def, shape=shape_dict, **kwargs)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/relay/frontend/tensorflow.py", line 1263, in from_tensorflow
    mod, params = g.from_tensorflow(graph, layout, shape, outputs)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/relay/frontend/tensorflow.py", line 659, in from_tensorflow
    func = self._get_relay_func(graph, layout=layout, shape=shape, outputs=outputs)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/relay/frontend/tensorflow.py", line 623, in _get_relay_func
    self._backtrack_construct(node.name)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/relay/frontend/tensorflow.py", line 1182, in _backtrack_construct
    op = self._convert_operator(node.op, node.name, inputs, attr)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/relay/frontend/tensorflow.py", line 1025, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params, self._mod)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/relay/frontend/tensorflow_ops.py", line 367, in _impl
    input_shape = _infer_shape(inputs_data, mod)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/relay/frontend/common.py", line 513, in infer_shape
    out_type = infer_type(inputs, mod=mod)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/relay/frontend/common.py", line 480, in infer_type
    mod = _transform.InferType()(mod)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/ir/transform.py", line 161, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "/home/piotr/projects/odai/tvm/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  9: TVMFuncCall
  8: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}>(tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  7: tvm::transform::Pass::operator()(tvm::IRModule) const
  6: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  5: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  4: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  3: tvm::DiagnosticContext::Render()
  2: tvm::DiagnosticRenderer::Render(tvm::DiagnosticContext const&)
  1: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<void (tvm::DiagnosticContext)>::AssignTypedLambda<tvm::TerminalRenderer(std::ostream&)::{lambda(tvm::DiagnosticContext const&)#1}>(tvm::TerminalRenderer(std::ostream&)::{lambda(tvm::DiagnosticContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  0: tvm::ReportAt(tvm::DiagnosticContext const&, std::ostream&, tvm::Span const&, tvm::Diagnostic const&)
  File "/home/piotr/projects/odai/tvm/tvm/src/ir/diagnostic.cc", line 238
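For context, the two shapes in the error differ by one in the height and width dimensions (218 vs 217 and 290 vs 289). Broadcasting only aligns dimension pairs that are equal or where one of them is 1, so an elementwise op over these two tensors must fail type inference. A minimal numpy sketch of the same rule (the actual failure happens inside Relay's InferType pass, but the broadcast semantics are the same):

```python
import numpy as np

# The two tensor shapes reported by TVM's type checker.
a = np.zeros((1, 218, 290, 64), dtype=np.float32)
b = np.zeros((1, 217, 289, 64), dtype=np.float32)

try:
    _ = a + b  # elementwise add requires broadcastable shapes
except ValueError as e:
    # 218 vs 217 and 290 vs 289: neither pair is equal, neither side is 1,
    # so broadcasting is impossible and the add is rejected.
    print("broadcast error:", e)
```

An off-by-one mismatch like this typically points to a padding or resize/upsampling op in the graph producing a feature map one pixel smaller than the branch it is combined with, so that is a reasonable place to start looking in the model.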