Hi everyone: I am currently trying to pass a model through the from_pytorch frontend, but since the model has fp16 layers, I am running into a type-mismatch issue that I have no idea how to fix. Here is the error I see:
```
finished loading
WARNING:root:Untyped Tensor found, assume it is float16
WARNING:root:Untyped Tensor found, assume it is float16
WARNING:root:Untyped Tensor found, assume it is float16
The Relay type checker is unable to show the following types match.
In particular `Tensor[(64), float32]` does not match `Tensor[(64), float16]`
The Relay type checker is unable to show the following types match.
In particular `Tensor[(64), float32]` does not match `Tensor[(64), float16]`
The Relay type checker is unable to show the following types match.
In particular `Tensor[(64), float32]` does not match `Tensor[(64), float16]`
The Relay type checker is unable to show the following types match.
In particular `Tensor[(64), float32]` does not match `Tensor[(64), float16]`
Traceback (most recent call last):
File "tvm_tuning.py", line 518, in <module>
main(params_dict, producer, mq_data)
File "tvm_tuning.py", line 327, in main
relay_mod, relay_mod_params = my_transformer.apply_transform()
File "/home/tiger/cuiqing.li/ByteTuner/model_transform/pytorch_transform.py", line 48, in apply_transform
tvm_model, tvm_model_params = relay.frontend.from_pytorch(scripted_model, self.shape_list, default_dtype="float16")
File "/root/tvm/python/tvm/relay/frontend/pytorch.py", line 3239, in from_pytorch
ret = converter.convert_operators(_get_operator_nodes(graph.nodes()), outputs, ret_name)[0]
File "/root/tvm/python/tvm/relay/frontend/pytorch.py", line 2663, in convert_operators
self.record_output_type(relay_out)
File "/root/tvm/python/tvm/relay/frontend/pytorch.py", line 222, in record_output_type
self.infer_type_with_prelude(output)
File "/root/tvm/python/tvm/relay/frontend/pytorch.py", line 170, in infer_type_with_prelude
body = self.infer_type(val, self.prelude.mod)
File "/root/tvm/python/tvm/relay/frontend/pytorch.py", line 163, in infer_type
new_mod = transform.InferType()(new_mod)
File "/root/tvm/python/tvm/ir/transform.py", line 127, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "/root/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
[bt] (6) /root/tvm/build/libtvm.so(TVMFuncCall+0x63) [0x7f434f53e2d3]
[bt] (5) /root/tvm/build/libtvm.so(+0x977250) [0x7f434e8f1250]
[bt] (4) /root/tvm/build/libtvm.so(tvm::transform::Pass::operator()(tvm::IRModule) const+0xc6) [0x7f434e8f0666]
[bt] (3) /root/tvm/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1e9) [0x7f434e8efa99]
[bt] (2) /root/tvm/build/libtvm.so(+0x13a1d4b) [0x7f434f31bd4b]
[bt] (1) /root/tvm/build/libtvm.so(tvm::DiagnosticContext::Render()+0x23e) [0x7f434e89ce7e]
[bt] (0) /root/tvm/build/libtvm.so(+0x920e28) [0x7f434e89ae28]
File "/root/tvm/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
```
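For context, the conversion call is essentially the following (a simplified sketch of what the traceback shows; the file name and input shape here are placeholders, not the real ones):

```python
import torch
import tvm
from tvm import relay

# Placeholder: the real TorchScript module comes out of our pipeline.
scripted_model = torch.jit.load("model.pt").eval()

# from_pytorch expects (input_name, shape) pairs; this shape is a placeholder.
shape_list = [("input0", (1, 3, 224, 224))]

# This is the call that raises the DiagnosticError above.
mod, params = relay.frontend.from_pytorch(
    scripted_model, shape_list, default_dtype="float16"
)
```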
Does anyone have a clue how to fix this? To me it looks like an unspecified-dtype issue: the untyped tensors get assumed to be float16 (per the warnings above), while some of the model's parameters (the `Tensor[(64), float32]` ones) are still float32.
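One direction I have been considering (not sure whether it is the intended fix) is to do the import in fp32, where all parameter dtypes agree, and let Relay downcast afterwards with the ToMixedPrecision pass. A minimal sketch, assuming the network can also be scripted in fp32 (file name and shape below are again placeholders):

```python
import torch
import tvm
from tvm import relay

# Hypothetical fp32 export of the same network.
fp32_model = torch.jit.load("model_fp32.pt").eval()
shape_list = [("input0", (1, 3, 224, 224))]

# Import in fp32 so the frontend sees consistent parameter dtypes...
mod, params = relay.frontend.from_pytorch(fp32_model, shape_list)

# ...then convert the Relay module to fp16 after the fact.
mod = relay.transform.InferType()(mod)
with tvm.transform.PassContext(opt_level=3):
    mod = relay.transform.ToMixedPrecision(mixed_precision_type="float16")(mod)
```

Alternatively, would calling `model.half()` on the PyTorch module before scripting, so that every parameter is already fp16 and nothing is left as fp32, avoid the mismatch in the first place? Any pointers appreciated.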