Cannot allocate memory symbolic tensor shape

The following script crashes on the latest TVM version (commit id: aff4e96).

However, it runs fine on an earlier version (commit id: 124813f).

```python
import tvm
from tvm import relay

mod = tvm.IRModule()
x = relay.var("x", dtype="uint8", shape=(1, 2))
y = relay.var("y", dtype="uint8", shape=(1, 2))
# The target shape for reshape comes from relay.shape_of(x).
z = relay.left_shift(x.astype("uint8"), relay.reshape(y.astype("uint8"), relay.shape_of(x)))
F = relay.Function([x, y], z)
mod["main"] = F
mod = relay.transform.InferType()(mod)
print(mod)
graph, lib, params = relay.build(mod, target="llvm")
```

Hi @Evans, the error seems to say that the relay.reshape op returns a tensor with a dynamic shape, and Relay does not support memory planning for symbolic shapes.
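
In case it helps to see this directly, here is a minimal sketch (not from the original post; it reuses the x and y definitions from the repro) that runs type inference on just the reshape and prints its inferred type, whose dimensions come out as relay.Any (`?`):

```python
import tvm
from tvm import relay

x = relay.var("x", dtype="uint8", shape=(1, 2))
y = relay.var("y", dtype="uint8", shape=(1, 2))
# reshape's target shape is a runtime tensor produced by shape_of,
# so type inference cannot treat the result shape as static.
r = relay.reshape(y, relay.shape_of(x))
m = relay.transform.InferType()(tvm.IRModule.from_expr(relay.Function([x, y], r)))
# Prints something like Tensor[(?, ?), uint8]; the `?` dims are what
# the static memory planner rejects.
print(m["main"].body.checked_type)
```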

The community is developing Relax (Relay Next), which has first-class symbolic shape support. Feel free to check out our pre-RFC and repo.

Thanks for your reply!

All the tensor shapes in the script are explicit, so there seems to be no dynamic shape involved. I am curious why a dynamic shape was produced.

Yeah, it seems that the shapes are static, so no dynamic shape should be involved.

This is because you used relay.shape_of(x). shape_of doesn't try to tell whether the input shape is static or not, so its output is always an unknown shape (?, ?). If you replace that with the static shape [1, 2], it works; see the sketch below.
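
For example, a minimal sketch of the fixed script (assuming the same x and y as the repro; the only change is passing the static newshape instead of relay.shape_of(x)):

```python
import tvm
from tvm import relay

x = relay.var("x", dtype="uint8", shape=(1, 2))
y = relay.var("y", dtype="uint8", shape=(1, 2))
# Static newshape: InferType now yields Tensor[(1, 2), uint8],
# so memory planning and relay.build succeed.
z = relay.left_shift(x, relay.reshape(y, newshape=[1, 2]))
mod = tvm.IRModule.from_expr(relay.Function([x, y], z))
mod = relay.transform.InferType()(mod)
graph, lib, params = relay.build(mod, target="llvm")
```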

Thanks, @masahi @yuchenj

I totally agree with you, but I am still confused: why could the previous TVM version (commit id: 124813f) run it without any issue?
