Converting ONNX model to TVM fails on BufferStore int32/int64 problem

Hello,

I am trying to convert an ONNX model to TVM Relay format using the Python API.

import onnx
import tvm
import tvm.relay as relay

onnx_model = onnx.load("models/model_float.onnx")

target = "llvm"

shape_dict = {
    "image": [1, 1, 256, 192],
    "rotation_normalized_to_world": [1, 3, 3],
    "principal_point_normalized": [1, 2],
    "focal_length_normalized": [1, 2],
}

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with tvm.transform.PassContext(opt_level=3):
    executor = relay.build_module.create_executor(
        "vm", mod, tvm.cpu(0), target, params
    ).evaluate()

But when I run this, I stumble upon a TypeError in TVM:

Output (cropped):

...
5: tvm::runtime::ObjectPtr<tvm::runtime::Object> tvm::runtime::Array<tvm::tir::Stmt, void>::MapHelper<tvm::tir::StmtMutator::Internal::Mutate(tvm::tir::StmtMutator*, tvm::runtime::Array<tvm::tir::Stmt, void> const&)::{lambda(tvm::tir::Stmt const&)#1}, tvm::tir::Stmt>(tvm::runtime::ObjectPtr<tvm::runtime::Object>, tvm::tir::StmtMutator::Internal::Mutate(tvm::tir::StmtMutator*, tvm::runtime::Array<tvm::tir::Stmt, void> const&)::{lambda(tvm::tir::Stmt const&)#1})
  4: tvm::tir::StmtMutator::VisitStmt(tvm::tir::Stmt const&)
  3: tvm::tir::StmtFunctor<tvm::tir::Stmt (tvm::tir::Stmt const&)>::VisitStmt(tvm::tir::Stmt const&)
  2: _ZZN3tvm3tir11StmtFunctorIFNS0_4StmtERKS2_EE10InitVTableEvENUlRKNS_7runti
  1: tvm::te::TensorToBufferMapper::VisitStmt_(tvm::tir::ProducerStoreNode const*)
  0: tvm::tir::BufferStore::BufferStore(tvm::tir::Buffer, tvm::PrimExpr, tvm::runtime::Array<tvm::PrimExpr, void>, tvm::Span)
  File "/home/jv1941/workspace/tvm/src/tir/ir/stmt.cc", line 480
TypeError: dtype mismatch on BufferStore: buffer's dtype is `int32`, the lanes of indexing are: `1`, but RHS's dtype is `int64`

Any ideas on how to solve this?

TVM version used: v0.13.0

BR Johan

I’d try to narrow down which operator contains the code triggering this error, then look at the implementation of that op to see if there are any issues there. What may be happening is that the op gets an input with an unexpected data type, or something like that.

Thank you for your reply @kparzysz. I suspect there are some int64 constants in my ONNX model that maybe are not handled properly in TVM? I will try to recast them to int32. The inputs to the model are all float32. I will also go through the operators to check whether they support int64 or not.
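For anyone attempting the same recast, here is a rough sketch of how int64 initializers could be downcast to int32 with a range check first, so an overflowing value fails loudly instead of silently wrapping. The ONNX-editing part is an untested outline (the model path and output path are placeholders); only the numpy helper is shown runnable:

```python
import numpy as np

INT32_MIN = np.iinfo(np.int32).min
INT32_MAX = np.iinfo(np.int32).max

def downcast_i64(arr):
    """Cast an int64 array to int32, refusing if any value would overflow."""
    if arr.dtype != np.int64:
        return arr
    if arr.size and (arr.min() < INT32_MIN or arr.max() > INT32_MAX):
        raise ValueError("values do not fit in int32; cannot downcast safely")
    return arr.astype(np.int32)

# Applying this to an ONNX model would look roughly like this (untested sketch;
# note that Cast nodes and value_info dtypes may also need to be updated):
#
#   import onnx
#   from onnx import numpy_helper
#   model = onnx.load("models/model_float.onnx")
#   for init in model.graph.initializer:
#       arr = numpy_helper.to_array(init)
#       if arr.dtype == np.int64:
#           init.CopyFrom(numpy_helper.from_array(downcast_i64(arr), init.name))
#   onnx.save(model, "models/model_float_i32.onnx")
```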

There may be some debugging code in TVM that could help you identify the operator with the problematic code. There is an environment variable TVM_LOG_DEBUG that you can set to various verbosity levels.

See the comments at https://github.com/apache/tvm/blob/main/include/tvm/runtime/logging.h#L464-L502 for more info.
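Since TVM reads the variable at import time, it needs to be set before tvm is imported (or exported in the shell beforehand). A minimal sketch, assuming the "DEFAULT=1" spec from the linked comments (see those comments for the exact per-file syntax):

```python
import os

# TVM parses TVM_LOG_DEBUG when it is first imported, so set it beforehand.
# "DEFAULT=1" raises the default VLOG level; per-file specs are also
# possible, per the logging.h comments linked above.
os.environ["TVM_LOG_DEBUG"] = "DEFAULT=1"

# import tvm  # import only after the variable is set
```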

I found one way of getting past this error. I don't know if it is the correct one, but for future reference in case someone gets stuck on the same issue:

I rebuilt TVM with the CMake flag INDEX_DEFAULT_I64 set to OFF (by default it is ON): INDEX_DEFAULT_I64=OFF

That seemed to solve it.