I am trying to compile the “bert-base-uncased” model via the PyTorch frontend.
I followed the instructions in “Exporting transformers models — transformers 4.7.0 documentation” and obtained a TorchScript traced model. Then I tried to use `relay.frontend.from_pytorch`, which reports:
```
The Relay type checker is unable to show the following types match.
In particular dimension 0 conflicts: 512 does not match 768.
The Relay type checker is unable to show the following types match.
In particular Tensor[(768), float32] does not match Tensor[(512), float32]
```
The full diagnostic output is:
```
Traceback (most recent call last):
  File "torchscript_compile.py", line 69, in <module>
    mod, params = relay.frontend.from_pytorch(script_module, input_infos)
  File "/home/yifanlu/TVM/tvm/python/tvm/relay/frontend/pytorch.py", line 3335, in from_pytorch
    ret = converter.convert_operators(_get_operator_nodes(graph.nodes()), outputs, ret_name)[0]
  File "/home/yifanlu/TVM/tvm/python/tvm/relay/frontend/pytorch.py", line 2759, in convert_operators
    self.record_output_type(relay_out)
  File "/home/yifanlu/TVM/tvm/python/tvm/relay/frontend/pytorch.py", line 219, in record_output_type
    self.infer_type_with_prelude(output)
  File "/home/yifanlu/TVM/tvm/python/tvm/relay/frontend/pytorch.py", line 167, in infer_type_with_prelude
    body = self.infer_type(val, self.prelude.mod)
  File "/home/yifanlu/TVM/tvm/python/tvm/relay/frontend/pytorch.py", line 160, in infer_type
    new_mod = transform.InferType()(new_mod)
  File "/home/yifanlu/TVM/tvm/python/tvm/ir/transform.py", line 161, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "/home/yifanlu/TVM/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
  6: TVMFuncCall
  5: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::__mk_TVM8::{lambda(tvm::transform::Pass, tvm::IRModule)#1}>(tvm::transform::__mk_TVM8::{lambda(tvm::transform::Pass, tvm::IRModule)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  4: tvm::transform::Pass::operator()(tvm::IRModule) const
  3: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  2: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  1: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  0: tvm::DiagnosticContext::Render()
  File "/home/yifanlu/TVM/tvm/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
```
My TVM is built from the latest source, and my LLVM version is 12.0.0.
I am new to TVM and would like to know how to solve this. The full code is shown below.