As suggested here (https://github.com/open-mmlab/mmdetection/issues/5329#issuecomment-858450163), I am using the simple_test() method and returning early with the tensors available at the final layers (before post-processing). I was able to JIT trace the model, but I suspect there is an issue with the FPN (neck) when using the solo_r50_fpn_1x_coco_20210821_035055-2290a6b8.pth model.
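For reference, a minimal sketch of the early-return tracing pattern I am using. This is a self-contained stand-in, not the actual mmdetection SOLO model: the `EarlyReturnStub` module, its layer sizes, and the input shape are all placeholders that only mimic "return a tuple of raw feature tensors instead of post-processed results" so the trace step is reproducible.

```python
import torch
import torch.nn as nn

# Stand-in for the modified detector. The real code overrides SOLO's
# simple_test() to return the raw neck/head tensors before post-processing;
# this dummy module only reproduces that output structure.
class EarlyReturnStub(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 256, kernel_size=1)

    def forward(self, x):
        feat = self.conv(x)
        # Return a plain tuple of tensors. torch.jit.trace cannot handle
        # the list-of-dicts that simple_test() normally builds, which is
        # why the early return is needed in the first place.
        return (feat, nn.functional.max_pool2d(feat, 2))

model = EarlyReturnStub().eval()
dummy = torch.randn(1, 3, 64, 64)
script_module = torch.jit.trace(model, dummy)
print("Pytorch Model JIT trace done")

# The Relay conversion that then fails looks like:
# shape_list = [("input0", tuple(dummy.shape))]
# mod, params = relay.frontend.from_pytorch(script_module, shape_list)
```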
torch.Size([1, 256, 138, 138])
torch.Size([1, 256, 69, 69])
torch.Size([1, 256, 35, 35])
torch.Size([1, 256, 18, 18])
torch.Size([1, 256, 9, 9])
Pytorch Model JIT trace done
Traceback (most recent call last):
File "compile_solo_vitisai.py", line 162, in <module>
mod, params = relay.frontend.from_pytorch(script_module, shape_list)
File "/home/aziza/01_Perception/xilinx/apachetvm_flow/tvm/python/tvm/relay/frontend/pytorch.py", line 3974, in from_pytorch
ret = converter.convert_operators(_get_operator_nodes(graph.nodes()), outputs, ret_name)[0]
File "/home/aziza/01_Perception/xilinx/apachetvm_flow/tvm/python/tvm/relay/frontend/pytorch.py", line 3345, in convert_operators
self.record_output_type(relay_out)
File "/home/aziza/01_Perception/xilinx/apachetvm_flow/tvm/python/tvm/relay/frontend/pytorch.py", line 222, in record_output_type
self.infer_type_with_prelude(output)
File "/home/aziza/01_Perception/xilinx/apachetvm_flow/tvm/python/tvm/relay/frontend/pytorch.py", line 170, in infer_type_with_prelude
body = self.infer_type(val, self.prelude.mod)
File "/home/aziza/01_Perception/xilinx/apachetvm_flow/tvm/python/tvm/relay/frontend/pytorch.py", line 163, in infer_type
new_mod = transform.InferType()(new_mod)
File "/home/aziza/01_Perception/xilinx/apachetvm_flow/tvm/python/tvm/ir/transform.py", line 160, in __call__
return _ffi_transform_api.RunPass(self, mod)
File "tvm/_ffi/_cython/./packed_func.pxi", line 323, in tvm._ffi._cy3.core.PackedFuncBase.__call__
File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
File "tvm/_ffi/_cython/./base.pxi", line 163, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
7: TVMFuncCall
6: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}>(tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
5: tvm::transform::Pass::operator()(tvm::IRModule) const
4: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
3: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
2: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
1: tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)
0: tvm::relay::TypeSolver::Solve() [clone .cold]
File "/home/aziza/01_Perception/xilinx/apachetvm_flow/tvm/src/relay/analysis/type_solver.cc", line 624
TVMError:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
Check failed: (false) is false: relay.concatenate requires all tensors have the same shape on non-concatenating axes
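One observation (a guess, not a confirmed diagnosis): the FPN levels printed above have odd spatial sizes (138, 69, 35, 18, 9). If any branch upsamples a coarser level by a fixed scale factor of 2 before concatenating, the 35x35 level becomes 70x70, which no longer matches the 69x69 level, and that is exactly the shape mismatch relay.concatenate rejects. A minimal numpy reproduction of the constraint:

```python
import numpy as np

# Feature maps with the shapes from the trace above: upsampling the
# 35x35 level by a fixed factor of 2 yields 70x70, not 69x69.
a = np.zeros((1, 256, 69, 69), dtype=np.float32)
b = np.zeros((1, 256, 70, 70), dtype=np.float32)  # 35 * 2

try:
    # Concatenating on the channel axis requires identical H and W.
    np.concatenate([a, b], axis=1)
except ValueError as e:
    print("concatenate failed:", e)
```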