I want to create a zeros tensor of length batch_size when converting a custom op to TVM. Below is what I've done:
```python
from tvm import relay
import tvm.relay.op as _op
from tvm.relay.frontend.common import infer_shape

dtype = "int64"  # defined elsewhere in my converter; int64 here as an example

# The input has a dynamic (unknown at compile time) first dimension.
boxes_num = relay.var("boxes_num", shape=(relay.Any(),), dtype=dtype)
batch_size = infer_shape(boxes_num)[0]  # comes back as tir.Any, not an int
ids = _op.zeros(shape=(batch_size,), dtype=dtype)
```
But TVM raises `Expected Array[IntImm], but got Array[index 0: tir.Any]`. It seems that `zeros` does not handle dynamic shapes.
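For reference, here is a workaround I'm considering (a sketch, not verified, assuming `zeros` dispatches to its dynamic variant when the shape argument is a relay `Expr` rather than a tuple of ints): build the shape as a tensor with `shape_of` and slice out the first dimension.

```python
# Sketch: pass the shape as a relay Expr so zeros can resolve it at
# runtime instead of requiring compile-time IntImm values.
full_shape = _op.shape_of(boxes_num)  # 1-D tensor holding [batch_size]
batch_dim = _op.strided_slice(full_shape, begin=[0], end=[1])
ids = _op.zeros(batch_dim, dtype)  # shape given as an Expr, not a tuple
```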
Then I looked at how `zeros` is used in `tvm/python/tvm/relay/frontend/pytorch.py`, e.g. the snippet below:
```python
X_shape = _infer_shape(X)  # (seq_num, batch, feature_size)
hidden_size = _infer_shape(_weights[0])[0] / 4
batch_size = X_shape[1]
# Initialize hidden states if not provided.
layers_h = []
layers_c = []
hidden_layers_num = num_directions * num_layers
if h_0 is None:
    if has_proj:
        h_0 = _op.zeros((batch_size, proj_size), X_dtype)
    else:
        h_0 = _op.zeros((batch_size, hidden_size), X_dtype)
for i in range(hidden_layers_num):
    layers_h.append(h_0)
```
So, according to the above, the current `zeros` should support dynamic shapes. Or am I missing something?
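One difference I notice (a minimal check, with my own variable names): `infer_shape` only returns plain Python ints when the dimension is static, so `batch_size` in the PyTorch snippet is presumably a concrete int, while in my case it is a `tir.Any` node.

```python
from tvm import relay
from tvm.relay.frontend.common import infer_shape

# Static shape: every dimension comes back as a plain Python int,
# so passing it to _op.zeros works.
static_x = relay.var("x", shape=(8, 16), dtype="float32")
print(infer_shape(static_x))  # (8, 16)

# Dynamic first dimension: infer_shape returns a tir.Any node for it,
# which the static zeros op rejects with the error quoted above.
dynamic_x = relay.var("y", shape=(relay.Any(), 16), dtype="float32")
print(infer_shape(dynamic_x))  # (?, 16) -- '?' is tir.Any
```

If that is what's happening, is the `shape_of` + dynamic `zeros` route sketched above the intended way to handle this, or is there a cleaner API?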