Int64 vs int32 dtype error

I am getting an int32 vs int64 error with the following code. It is related to int64 indices, and the bug lies somewhere between tensorize and codegen. Does anyone have ideas?

@zhiics @kevinthesun @giuseros

import tvm
from tvm import relay

x = relay.var("x", shape=(1, 512, tvm.tir.const(7, 'int64'), tvm.tir.const(7, 'int64')), dtype="int8")
y = relay.var("y", shape=(2048, 512, 1, 1), dtype="int8")

out = relay.qnn.op.conv2d(x, y,
                          relay.const(-128, 'int32'),
                          relay.const(-128, 'int32'),
                          relay.const(0.1, 'float32'),
                          relay.const(0.1, 'float32'),
                          padding=(0, 0, 0, 0),
                          channels=2048,
                          kernel_size=(1, 1),
                          out_dtype='int32')

func = relay.Function([x, y], out)

mod = tvm.IRModule()
mod['main'] = func
print(mod)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target='llvm -mcpu=cascadelake')
print("Pass")

Hi @animesh,

I did a bit of investigation. There seem to be some issues with the offset: it errors on line 1080 of arg_binder.cc:

if (Bind_(arg->elem_offset, value->elem_offset, arg_name + ".elem_offset", false)) {

which is called from the StorageFlatten pass. Apparently, this buffer:

Buffer slice = be.buffer.MakeSlice(begins, extents);

ends up with an int64 elem_offset, which does not match the dtype of the offset of the buffer it is bound against. Did it use to work before?
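For context, the check that fires is a strict bind: if the two elem_offset expressions carry different dtypes (int32 vs int64), the bind fails even when both offsets are zero. A rough pure-Python analogy of that behavior (not actual TVM code; the class and function names here are illustrative only):

```python
# Illustrative analogy of the strict elem_offset bind in arg_binder.cc
# (not TVM source; names and behavior are simplified).

class Offset:
    def __init__(self, value, dtype):
        self.value = value
        self.dtype = dtype

def bind_elem_offset(arg, value):
    # The binder refuses to unify expressions of different dtypes, so an
    # int64 slice offset cannot bind to an int32 buffer offset even when
    # both values are zero.
    if arg.dtype != value.dtype:
        raise TypeError(
            f"elem_offset dtype mismatch: {arg.dtype} vs {value.dtype}")
    return True

# Matching dtypes bind fine; int32 vs int64 raises.
assert bind_elem_offset(Offset(0, "int32"), Offset(0, "int32"))
try:
    bind_elem_offset(Offset(0, "int32"), Offset(0, "int64"))
except TypeError as e:
    print(e)
```

This is why the values being equal does not help: the comparison happens at the dtype level before any value reasoning.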


Thanks @giuseros. Yes, it used to work earlier. Not sure when it broke.

Thanks for looking into this. I will spend more time today starting from your observations.
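While this is open, one possible user-side workaround is to coerce the int64 shape dims back to plain Python ints before constructing the Relay vars, so no int64 extents reach lowering. A sketch, where `normalize_shape` is a hypothetical helper (not a TVM API) that relies only on IntImm-like dims exposing a `.value` attribute:

```python
def normalize_shape(shape):
    """Coerce IntImm-like dims (anything with a .value attribute) and
    plain ints to Python ints, so buffer offsets stay int32 in lowering."""
    return tuple(int(getattr(dim, "value", dim)) for dim in shape)

# Usage sketch against the repro above:
#   x = relay.var("x", shape=normalize_shape((1, 512, d7, d7)), dtype="int8")
```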

Any update? I have run into similar issues…