Relay type checker errors in relay.build

I’m trying to compile a model built with Keras, but am getting strange type-checking errors during the call to relay.build. Here’s a minimal working example:

import tensorflow.keras as keras
import tvm
import tvm.relay as relay

model = keras.Sequential()
model.add(keras.layers.InputLayer(input_shape=(2, 1, 1), name='input'))

mod, params_dict = relay.frontend.from_keras(model, {'input': (1, 2, 1, 1)})

with tvm.transform.PassContext(opt_level=3):
    factory_module = relay.build(mod, target="llvm", params=params_dict)

When running this, I get the following error message four times in sequence:

The Relay type checker is unable to show the following types match:
  Tensor[(2), float32]
  Tensor[(1), float32]
In particular:
  dimension 0 conflicts: 2 does not match 1.
The Relay type checker is unable to show the following types match.
In particular `Tensor[(1), float32]` does not match `Tensor[(2), float32]`

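If I read the message correctly, the checker is trying to unify two rank-1 tensors of different lengths somewhere in the imported graph. As a toy illustration of what I think that dimension-wise comparison does (my own sketch, not TVM’s actual implementation):

```python
def unify_shapes(a, b):
    """Dimension-wise shape comparison, loosely mimicking the kind of
    check a tensor type checker performs. Illustrative only."""
    if len(a) != len(b):
        raise TypeError(f"rank mismatch: {len(a)} vs {len(b)}")
    for i, (da, db) in enumerate(zip(a, b)):
        if da != db:
            # Same wording as the diagnostic above.
            raise TypeError(f"dimension {i} conflicts: {da} does not match {db}")
    return tuple(a)

unify_shapes((1, 2, 1, 1), (1, 2, 1, 1))  # identical shapes unify fine
try:
    unify_shapes((2,), (1,))
except TypeError as e:
    print(e)  # dimension 0 conflicts: 2 does not match 1
```

So somewhere during import, a Tensor[(2), float32] and a Tensor[(1), float32] end up in a position where they must have the same type, and I don’t see where that pair could come from in a model with a single InputLayer.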
With TVM_BACKTRACE=1, I get this:

Traceback (most recent call last):
  File ".../", line 12, in <module>
    factory_module = relay.build(mod, target="llvm", params=params_dict)
  File ".../tvm/python/tvm/relay/build_module.py", line 364, in build
    graph_json, runtime_mod, params =
  File ".../tvm/python/tvm/relay/build_module.py", line 161, in build
  File ".../tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm.error.DiagnosticError: Traceback (most recent call last):
  14: TVMFuncCall
  13: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  12: tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::IRModule, tvm::runtime::String const&)
  11: tvm::relay::backend::RelayBuildModule::OptimizeImpl(tvm::IRModule)
  10: tvm::transform::Pass::operator()(tvm::IRModule) const
  9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  6: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  5: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  4: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  3: _ZN3tvm7runtime13PackedFun
  2: tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const
  1: tvm::DiagnosticContext::Render()
  0: _ZN3tvm7runtime6deta
  File ".../tvm/src/ir/diagnostic.cc", line 105
DiagnosticError: one or more error diagnostics were emitted, please check diagnostic render for output.

My TVM installation is built from source at v0.10.0 with the default CMake configuration, and I’m using tensorflow-cpu v2.10.0.

Setting the input shape to (2,) (instead of (2, 1, 1)) makes the problem go away, but I need the extra dimensions for a Conv2D layer in my actual application. Making the third dimension equal to the first (input shape (2, 1, 2)) also sidesteps the issue, but that isn’t a suitable fix either, since I can’t add dimensions to my input data.

Is this a bug, or am I doing something wrong? Grateful for any help, and have a good one!