I am trying to optimize various models from https://github.com/onnx/models#object_detection . I am able to optimize the vision models, but when I try a language model (Bidirectional Attention Flow) I get the error below.
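For reference, the flow I use for the vision models is roughly the following sketch; the model file, input name, shape, and target here are placeholders for illustration, not my exact script:

import onnx
import tvm
from tvm import relay

# Placeholder model and input; the real script goes over the
# object-detection models from the onnx/models repository.
onnx_model = onnx.load("resnet50-v1-7.onnx")
shape_dict = {"data": (1, 3, 224, 224)}

# Import the ONNX graph into Relay and build it for a CPU target.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

The same from_onnx call on the BiDAF model produces the traceback below.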
Traceback (most recent call last):
File "/home/hp/Desktop/tvm/major_project/x86-examples/Bidirectional-Attention-Flow/bidirectional.py", line 58, in <module>
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
File "/home/hp/Desktop/tvm/python/tvm/relay/frontend/onnx.py", line 6720, in from_onnx
mod, params = g.from_onnx(graph, opset)
File "/home/hp/Desktop/tvm/python/tvm/relay/frontend/onnx.py", line 6334, in from_onnx
self._parse_graph_input(graph)
File "/home/hp/Desktop/tvm/python/tvm/relay/frontend/onnx.py", line 6403, in _parse_graph_input
self._nodes[i_name] = new_var(i_name, shape=i_shape, dtype=dtype)
File "/home/hp/Desktop/tvm/python/tvm/relay/frontend/common.py", line 630, in new_var
return _expr.var(name_hint, type_annotation, shape, dtype)
File "/home/hp/Desktop/tvm/python/tvm/relay/expr.py", line 663, in var
type_annotation = _ty.TensorType(shape, dtype)
File "/home/hp/Desktop/tvm/python/tvm/ir/tensor_type.py", line 41, in __init__
self.__init_handle_by_constructor__(_ffi_api.TensorType, shape, dtype)
File "/home/hp/Desktop/tvm/python/tvm/_ffi/_ctypes/object.py", line 145, in __init_handle_by_constructor__
handle = __init_by_constructor__(fconstructor, args)
File "/home/hp/Desktop/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 260, in __init_handle_by_constructor__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
8: TVMFuncCall
at /home/hp/Desktop/tvm/src/runtime/c_runtime_api.cc:477
7: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1217
6: Call
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1213
5: operator()
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1730
4: unpack_call<tvm::TensorType, 2, tvm::<lambda(tvm::runtime::Array<tvm::PrimExpr>, tvm::DataType)> >
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1670
3: run<>
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1630
2: run<tvm::runtime::TVMMovableArgValueWithContext_>
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1630
1: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1645
0: tvm::runtime::TVMMovableArgValueWithContext_::operator tvm::runtime::DataType<tvm::runtime::DataType>() const
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:777
12: TVMFuncCall
at /home/hp/Desktop/tvm/src/runtime/c_runtime_api.cc:477
11: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1217
10: Call
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1213
9: operator()
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1730
8: unpack_call<tvm::TensorType, 2, tvm::<lambda(tvm::runtime::Array<tvm::PrimExpr>, tvm::DataType)> >
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1670
7: run<>
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1630
6: run<tvm::runtime::TVMMovableArgValueWithContext_>
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1630
5: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1645
4: tvm::runtime::TVMMovableArgValueWithContext_::operator tvm::runtime::DataType<tvm::runtime::DataType>() const
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:775
3: tvm::runtime::TVMMovableArgValue_::operator tvm::runtime::DataType() const
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:731
2: tvm::runtime::TVMArgValue::operator tvm::runtime::DataType() const
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1997
1: tvm::runtime::TVMArgValue::operator DLDataType() const
at /home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h:1983
0: tvm::runtime::String2DLDataType(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)
at /home/hp/Desktop/tvm/include/tvm/runtime/data_type.h:392
File "/home/hp/Desktop/tvm/include/tvm/runtime/packed_func.h", line 777
TVMError: In function ir.TensorType(0: Array<PrimExpr>, 1: DataType) -> relay.TensorType: error while converting argument 1: [10:50:41] /home/hp/Desktop/tvm/include/tvm/runtime/data_type.h:383: unknown type object
My code:

shape_dict = {
    "context_word": cw.shape,
    "context_char": cc.shape,
    "query_word": qw.shape,
    "query_char": qc.shape,
}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
ONNX model reference: https://github.com/onnx/models/tree/main/text/machine_comprehension/bidirectional_attention_flow
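Since the error is raised while converting argument 1 (the DataType) of ir.TensorType, it looks like the dtype that from_onnx reads from one of the graph inputs is something Relay cannot map to a DLDataType. A quick way to check which element types the BiDAF graph actually declares for context_word, context_char, query_word, and query_char, using only the standard onnx API (the filename is a placeholder):

import onnx
from onnx import TensorProto

onnx_model = onnx.load("bidaf.onnx")  # placeholder filename

# Print each graph input's name, declared element type, and shape,
# i.e. the dtype that relay.frontend.from_onnx will see.
for inp in onnx_model.graph.input:
    ttype = inp.type.tensor_type
    elem = TensorProto.DataType.Name(ttype.elem_type)
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in ttype.shape.dim]
    print(inp.name, elem, dims)

If any of these inputs is declared as a string tensor (the model takes tokenized words and characters), that could explain the "unknown type object" message, since Relay's TensorType only accepts numeric dtypes.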