Segmentation fault (core dumped) when using tvm.relay.optimize

Bug Description:

For a Relay IR module loaded from a Keras LeNet-5 MNIST model, calling tvm.relay.optimize() triggers a segmentation fault.

Reproducible script:

from tensorflow import keras
import tvm
from tvm import relay


model_path = "lenet5_mnist_origin.h5"
model = keras.models.load_model(model_path)
model.summary()
shape_dict = {'conv2d_9_input': [None, 28, 28, 1]}  # --> leads to a seg fault
# shape_dict = {'conv2d_9_input': [1, 28, 28, 1]}   # --> leads to a type-check crash
print("shape_dict", shape_dict)
relay_mod, params = relay.frontend.from_keras(model, shape_dict)
relay_ir = relay_mod.astext(show_meta_data=False)
print(relay_ir)
tvm.relay.optimize(relay_mod, target='llvm', params=params)    # seg fault here!!!

The output of the above script:

Model: "sequential_5"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_9 (Conv2D)           (None, 28, 28, 6)         156       
                                                                 
 average_pooling2d_9 (Averag  (None, 14, 14, 6)        0         
 ePooling2D)                                                     
                                                                 
 conv2d_10 (Conv2D)          (None, 10, 10, 16)        2416      
                                                                 
 average_pooling2d_10 (Avera  (None, 5, 5, 16)         0         
 gePooling2D)                                                    
                                                                 
 flatten_5 (Flatten)         (None, 400)               0         
                                                                 
 dense_13 (Dense)            (None, 120)               48120     
                                                                 
 dropout_5 (Dropout)         (None, 120)               0         
                                                                 
 dense_14 (Dense)            (None, 84)                10164     
                                                                 
 dense_15 (Dense)            (None, 10)                850       
                                                                 
=================================================================
Total params: 61,706
Trainable params: 61,706
Non-trainable params: 0
_________________________________________________________________
shape_dict {'conv2d_9_input': [None, 28, 28, 1]}
#[version = "0.0.5"]
def @main(%conv2d_9_input: Tensor[(None, 28, 28, 1), float32], %v_param_1: Tensor[(6, 1, 5, 5), float32], %v_param_2: Tensor[(6), float32], %v_param_3: Tensor[(16, 6, 5, 5), float32], %v_param_4: Tensor[(16), float32], %v_param_5: Tensor[(120, 400), float32], %v_param_6: Tensor[(120), float32], %v_param_7: Tensor[(84, 120), float32], %v_param_8: Tensor[(84), float32], %v_param_9: Tensor[(10, 84), float32], %v_param_10: Tensor[(10), float32]) {
  %0 = nn.conv2d(%conv2d_9_input, %v_param_1, padding=[2i64, 2i64, 2i64, 2i64], channels=6, kernel_size=[5, 5]);
  %1 = nn.bias_add(%0, %v_param_2);
  %2 = nn.relu(%1);
  %3 = nn.avg_pool2d(%2, pool_size=[2, 2], strides=[2, 2], padding=[0, 0, 0, 0]);
  %4 = nn.conv2d(%3, %v_param_3, padding=[0, 0, 0, 0], channels=16, kernel_size=[5, 5]);
  %5 = nn.bias_add(%4, %v_param_4);
  %6 = nn.relu(%5);
  %7 = nn.avg_pool2d(%6, pool_size=[2, 2], strides=[2, 2], padding=[0, 0, 0, 0]);
  %8 = transpose(%7, axes=[0, 2, 3, 1]);
  %9 = nn.batch_flatten(%8);
  %10 = nn.dense(%9, %v_param_5, units=120);
  %11 = nn.bias_add(%10, %v_param_6);
  %12 = nn.relu(%11);
  %13 = nn.dense(%12, %v_param_7, units=84);
  %14 = nn.bias_add(%13, %v_param_8);
  %15 = nn.relu(%14);
  %16 = nn.dense(%15, %v_param_9, units=10);
  %17 = nn.bias_add(%16, %v_param_10);
  nn.softmax(%17, axis=1)
}

Segmentation fault (core dumped)

If the batch size is set to 1 instead of None, the script instead raises a type-check error when tvm.relay.optimize() is executed, as shown below.
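The only change relative to the reproducible script above is the shape dictionary; everything else stays the same:

shape_dict = {'conv2d_9_input': [1, 28, 28, 1]}   # concrete batch size of 1
relay_mod, params = relay.frontend.from_keras(model, shape_dict)
tvm.relay.optimize(relay_mod, target='llvm', params=params)    # DiagnosticError here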

The crash message is as follows:

The Relay type checker is unable to show the following types match:
  Tensor[(6, 28, 5, 5), float32]
  Tensor[(6, 1, 5, 5), float32]
In particular:
  dimension 1 conflicts: 28 does not match 1.
The Relay type checker is unable to show the following types match.
In particular `Tensor[(6, 1, 5, 5), float32]` does not match `Tensor[(6, 28, 5, 5), float32]`
The Relay type checker is unable to show the following types match:
  Tensor[(120, -80), float32]
  Tensor[(120, 400), float32]
In particular:
  dimension 1 conflicts: -80 does not match 400.
The Relay type checker is unable to show the following types match.
In particular `Tensor[(120, 400), float32]` does not match `Tensor[(120, -80), float32]`
Traceback (most recent call last):
  File "/share_container/data/keras_model_new/1-bug-seg.py", line 14, in <module>
    tvm.relay.optimize(relay_mod, target='llvm', params=params)    # seg fault here!!!
  File "/softwares/tvm/python/tvm/relay/build_module.py", line 459, in optimize
    mod, params = bld_mod.optimize(mod, target=raw_targets, params=params)
  File "/softwares/tvm/python/tvm/relay/build_module.py", line 211, in optimize
    mod = self._optimize(mod, raw_targets)
  File "/softwares/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
  12: TVMFuncCall
  11: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#12}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  10: tvm::relay::backend::RelayBuildModule::Optimize(tvm::IRModule, tvm::runtime::Array<tvm::Target, void> const&)
  9: tvm::relay::backend::RelayBuildModule::OptimizeImpl(tvm::IRModule)
  8: tvm::transform::Pass::operator()(tvm::IRModule) const
  7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  6: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  5: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  4: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  3: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  2: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  1: _ZN3tvm7runtime13PackedFuncObj9ExtractorINS0_16PackedFuncSubObjIZNS0_15TypedPackedFuncIFNS_8IRModuleES5_NS_9transform11PassContextEEE17AssignTypedLambdaIZNS_5relay9transform9InferTypeEvEUlS5_RKS7_E_EEvT_EUlRKNS0_7TVMArgsEPNS0_11TVMRetValueEE_EEE4CallEPKS1_SH_SL_
  0: tvm::DiagnosticContext::Render()
  File "/softwares/tvm/src/ir/diagnostic.cc", line 131
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.
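Frame 1 of the traceback points at the InferType pass, so the same diagnostics can presumably be reproduced without the full optimize pipeline by running type inference directly on the module imported with batch size 1 (a sketch, reusing relay_mod from the script above):

from tvm.relay.transform import InferType

# Running type inference alone should surface the same shape-mismatch
# diagnostics that tvm.relay.optimize() reports.
checked_mod = InferType()(relay_mod)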

Lenet5-mnist model link:
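In case the .h5 file is not available, an equivalent model can be rebuilt from the summary above. This is a sketch inferred from the layer list, parameter counts, and the Relay IR; the dropout rate is an assumption, the trained weights will differ, and a freshly built model will not name its input 'conv2d_9_input', so the shape_dict key has to be adjusted to the new input name:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(6, kernel_size=5, padding='same', activation='relu', input_shape=(28, 28, 1)),
    layers.AveragePooling2D(pool_size=2),
    layers.Conv2D(16, kernel_size=5, activation='relu'),
    layers.AveragePooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(120, activation='relu'),
    layers.Dropout(0.5),               # rate not shown in the summary; assumed
    layers.Dense(84, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.save("lenet5_mnist_origin.h5")   # same file name as in the script above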