Importing from ONNX and building through Relay

Hi

I am trying to deploy a few ONNX models using TVM. For that I need to import each ONNX model, build it with Relay, and export it as a library. I have successfully done so for the models in the image classification set from the ONNX Model Zoo.
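For reference, this is roughly the flow that works for me on the image classification models (the model path, input name, and shape below are illustrative):

    import onnx
    import tvm
    from tvm import relay

    # Load the ONNX model (file name is illustrative)
    onnx_model = onnx.load("resnet50-v2-7.onnx")

    # Map each graph input name to a concrete shape
    shape_dict = {"data": (1, 3, 224, 224)}

    # Import into Relay and compile for the local CPU
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)

    # Export the compiled module as a shared library
    lib.export_library("model.so")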

However, I get errors for most of the other ONNX models from the ONNX Model Zoo; only the image classification models import cleanly.

Some of the error messages:

When I try building BiDAF:

   mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/onnx.py", line 4760, in from_onnx
    mod, params = g.from_onnx(graph, opset)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/onnx.py", line 4483, in from_onnx
    self._nodes[i_name] = new_var(i_name, shape=i_shape, dtype=dtype)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/common.py", line 610, in new_var
    return _expr.var(name_hint, type_annotation, shape, dtype)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/expr.py", line 476, in var
    type_annotation = _ty.TensorType(shape, dtype)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/ir/tensor_type.py", line 41, in __init__
    self.__init_handle_by_constructor__(_ffi_api.TensorType, shape, dtype)
  File "tvm/_ffi/_cython/./object.pxi", line 126, in tvm._ffi._cy3.core.ObjectBase.__init_handle_by_constructor__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 279, in tvm._ffi._cy3.core.ConstructorCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  1: TVMFuncCall
  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::TensorType (tvm::runtime::Array<tvm::PrimExpr, void>, tvm::runtime::DataType)>::AssignTypedLambda<tvm::{lambda(tvm::runtime::Array<tvm::PrimExpr, void>, tvm::runtime::DataType)#2}>(tvm::{lambda(tvm::runtime::Array<tvm::PrimExpr, void>, tvm::runtime::DataType)#2}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) [clone .cold]
  3: TVMFuncCall
  2: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::TensorType (tvm::runtime::Array<tvm::PrimExpr, void>, tvm::runtime::DataType)>::AssignTypedLambda<tvm::{lambda(tvm::runtime::Array<tvm::PrimExpr, void>, tvm::runtime::DataType)#2}>(tvm::{lambda(tvm::runtime::Array<tvm::PrimExpr, void>, tvm::runtime::DataType)#2}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  1: tvm::runtime::TVMArgValue::operator DLDataType() const
  0: tvm::runtime::String2DLDataType(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/conda-bld/tvm-package_1637002504757/work/include/tvm/runtime/packed_func.h", line 714
TVMError: In function ir.TensorType: error while converting argument 1: [15:15:47] /home/nitesh/nitesh_nfs_share/anaconda3/conda-bld/tvm-package_1637002504757/work/include/tvm/runtime/data_type.h:374: unknown type object

This is my shape dict and the related variables for BiDAF:

    context_string = 'A quick brown fox jumps over the lazy dog.'
    query_string = 'What color is the fox?'

    shape_dict = {'context_word': [len(context_string), 1],
                  'context_char': [len(context_string), 1, 1, 16],
                  'query_word': [len(query_string), 1],
                  'query_char': [len(query_string), 1, 1, 16]}

Here is the output of graph.input:

 graph input [name: "context_word"
    type {
      tensor_type {
        elem_type: 8
        shape {
          dim {
            dim_param: "c"
          }
          dim {
            dim_value: 1
          }
        }
      }
    }
    , name: "context_char"
    type {
      tensor_type {
        elem_type: 8
        shape {
          dim {
            dim_param: "c"
          }
          dim {
            dim_value: 1
          }
          dim {
            dim_value: 1
          }
          dim {
            dim_value: 16
          }
        }
      }
    }
    , name: "query_word"
    type {
      tensor_type {
        elem_type: 8
        shape {
          dim {
            dim_param: "q"
          }
          dim {
            dim_value: 1
          }
        }
      }
    }
    , name: "query_char"
    type {
      tensor_type {
        elem_type: 8
        shape {
          dim {
            dim_param: "q"
          }
          dim {
            dim_value: 1
          }
          dim {
            dim_value: 1
          }
          dim {
            dim_value: 16
          }
        }
      }
    }
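(For anyone reproducing this, I got that dump with the standard onnx Python API; the file name below is illustrative:)

    import onnx

    onnx_model = onnx.load("bidaf-9.onnx")  # file name is illustrative
    print("graph input", onnx_model.graph.input)

    # The numeric elem_type is the ONNX TensorProto enum; 8 means STRING
    for inp in onnx_model.graph.input:
        print(inp.name, inp.type.tensor_type.elem_type)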

As you can see, the element type is 8. That enum value is mapped to a NumPy dtype by this dictionary, which onnx.py uses to assign the dtype:

    TENSOR_TYPE_TO_NP_TYPE = {
        int(TensorProto.FLOAT): np.dtype('float32'),
        int(TensorProto.UINT8): np.dtype('uint8'),
        int(TensorProto.INT8): np.dtype('int8'),
        int(TensorProto.UINT16): np.dtype('uint16'),
        int(TensorProto.INT16): np.dtype('int16'),
        int(TensorProto.INT32): np.dtype('int32'),
        int(TensorProto.INT64): np.dtype('int64'),
        int(TensorProto.BOOL): np.dtype('bool'),
        int(TensorProto.FLOAT16): np.dtype('float16'),
        int(TensorProto.DOUBLE): np.dtype('float64'),
        int(TensorProto.COMPLEX64): np.dtype('complex64'),
        int(TensorProto.COMPLEX128): np.dtype('complex128'),
        int(TensorProto.UINT32): np.dtype('uint32'),
        int(TensorProto.UINT64): np.dtype('uint64'),
        int(TensorProto.STRING): np.dtype(np.object)
    }
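A quick sanity check of that mapping (the dictionary is importable from onnx.mapping in the onnx package):

    from onnx import TensorProto
    from onnx.mapping import TENSOR_TYPE_TO_NP_TYPE

    print(int(TensorProto.STRING))                          # prints 8
    print(TENSOR_TYPE_TO_NP_TYPE[int(TensorProto.STRING)])  # prints object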

The dtype for element type 8 is STRING (from the enum mappings) and should therefore map to np.object, so I don't understand why I get the 'unknown type object' error here. (The traceback shows the failure in String2DLDataType while constructing the TensorType, so it looks like TVM cannot convert the 'object' dtype string.)
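A minimal sketch of what I believe is happening, assuming the frontend converts elem_type 8 to np.object and its string form "object" reaches TensorType:

    from tvm import relay

    # Assumption: the BiDAF string inputs end up with dtype "object";
    # the shape values here are illustrative
    v = relay.var("context_word", shape=(42, 1), dtype="object")
    # raises: TVMError ... unknown type object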

For MaskRCNN:

   Incompatible broadcast type TensorType([1, 256, 31, 31], float32) and TensorType([1, 256, 32, 32], float32)
The type inference pass was unable to infer a type for this expression.

This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
Traceback (most recent call last):
  File "prepare_test_libs.py", line 124, in <module>
    prepare_model_lib(curr_path, sys.argv[1].upper()=='ARM',
  File "prepare_test_libs.py", line 83, in prepare_model_lib
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/onnx.py", line 4772, in from_onnx
    mod, params = g.from_onnx(graph, opset)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/onnx.py", line 4532, in from_onnx
    op = self._convert_operator(op_name, inputs, attr, opset)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/onnx.py", line 4661, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/onnx.py", line 425, in _impl_v1
    input_shape = infer_shape(data)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/common.py", line 513, in infer_shape
    out_type = infer_type(inputs, mod=mod)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/common.py", line 488, in infer_type
    new_mod = _transform.InferType()(new_mod)
  File "/home/nitesh/nitesh_nfs_share/anaconda3/envs/tvm-build/lib/python3.8/site-packages/tvm/ir/transform.py", line 161, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
tvm.error.DiagnosticError: Traceback (most recent call last):
 6: TVMFuncCall
  5: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}>(tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  4: tvm::transform::Pass::operator()(tvm::IRModule) const
  3: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  2: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  1: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  0: tvm::DiagnosticContext::Render()
  File "/home/nitesh/nitesh_nfs_share/anaconda3/conda-bld/tvm-package_1637002504757/work/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
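For completeness, this is roughly how I import MaskRCNN (a sketch; the input name "image" and the height/width are taken from the Model Zoo description and may not match my exact script):

    import onnx
    from tvm import relay

    onnx_model = onnx.load("MaskRCNN-10.onnx")  # file name is illustrative

    # MaskRCNN takes a single CHW float image; dimensions are illustrative
    shape_dict = {"image": [3, 800, 800]}
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)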

These are the errors for just a couple of the error-prone ONNX models from the ONNX Model Zoo.

Any help will be appreciated!

Thanks

I'm getting a similar error for BiDAF as well. Were you able to solve it?

Any help is appreciated!