Crash when using `InferType()`

When I use relay.transform.InferType() to get the type information of the Relay IR, the following script crashes.

Is my usage of relay.transform.InferType() in the script below correct?

Any comments are welcome.

import keras
from tvm import relay

# Load the pretrained Keras model and describe its input.
model_path = "lenet5_mnist_origin.h5"
model = keras.models.load_model(model_path)
input_layer_name = 'conv2d_9'        # name of the model's input layer
input_shape = (1, 28, 28, 1)         # NHWC: batch, height, width, channels
shape_dict = {input_layer_name: input_shape}

# Convert to Relay and run type inference.
relay_mod, params = relay.frontend.from_keras(model, shape_dict)
relay_mod = relay.transform.InferType()(relay_mod)     # crashes here!!!
print(relay_mod.astext(show_meta_data=False))

The crash message:

This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
Traceback (most recent call last):
  File "test.py", line 11, in <module>
    relay_mod = relay.transform.InferType()(relay_mod)  # crash here!!
  File "/softwares/tvm/python/tvm/ir/transform.py", line 161, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "/softwares/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
  6: TVMFuncCall
  5: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}>(tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  4: tvm::transform::Pass::operator()(tvm::IRModule) const
  3: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  2: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  1: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  0: tvm::DiagnosticContext::Render()
  File "/softwares/tvm/src/ir/diagnostic.cc", line 131
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
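
One thing I wondered about (just a guess on my part): relay.frontend.from_keras converts to NCHW layout by default, while the shape I pass, (1, 28, 28, 1), looks like NHWC. A sketch of the two variants I considered trying, assuming the input layer really is named 'conv2d_9':

# Variant 1: keep the NHWC shape and tell the frontend about the layout
# (I am not sure NHWC is fully supported by the Keras frontend).
relay_mod, params = relay.frontend.from_keras(model, shape_dict, layout="NHWC")

# Variant 2: keep the default NCHW layout and reorder the shape to match.
shape_dict_nchw = {input_layer_name: (1, 1, 28, 28)}
relay_mod, params = relay.frontend.from_keras(model, shape_dict_nchw)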

You can download the lenet5_mnist_origin.h5 model from this link:

@masahi @tqchen, could you give me some suggestions? Thanks in advance.

It’s likely a bug in the keras frontend. Try converting the model to ONNX.

Thanks for your comment, @masahi.

When I converted the Keras model into an equivalent ONNX model, InferType() ran without any problems.
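
For anyone who hits the same crash, this is roughly the conversion path I used (a sketch, assuming tf2onnx is installed; the input name "input" and opset 13 are my own choices, not requirements of the API):

import keras
import tensorflow as tf
import tf2onnx
from tvm import relay

# Export the Keras model to ONNX.
model = keras.models.load_model("lenet5_mnist_origin.h5")
spec = (tf.TensorSpec((1, 28, 28, 1), tf.float32, name="input"),)
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=13)

# Import the ONNX model into Relay; the key must match the ONNX graph's input name.
shape_dict = {"input": (1, 28, 28, 1)}
relay_mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
relay_mod = relay.transform.InferType()(relay_mod)  # now runs without the crash
print(relay_mod.astext(show_meta_data=False))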