I ran into a dynamic tensor problem after converting a PyTorch model to TVM.
While debugging, I found that the computation producing the dynamic tensor is:

> `argwhere(%1406) /* span=aten::nonzero_15:0:0 */;`
However, on the PyTorch side, when I call torch.nonzero I know that the number of non-zero elements in my input is fixed, i.e. the output of argwhere should have a static shape.
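For reference, here is a minimal sketch of the pattern I mean (not my actual model; the module name, input tensor, and shapes are just placeholders for illustration):

```python
import torch
import tvm
from tvm import relay

class NonzeroModule(torch.nn.Module):
    def forward(self, x):
        # In my real model the number of non-zero elements of x is always the
        # same, but aten::nonzero is still converted to a Relay argwhere whose
        # output shape is data-dependent (dynamic).
        return torch.nonzero(x)

# Example input: exactly 4 non-zero entries, and that count never changes.
x = torch.tensor([0.0, 1.0, 0.0, 2.0, 3.0, 0.0, 4.0])

traced = torch.jit.trace(NonzeroModule(), x)
mod, params = relay.frontend.from_pytorch(traced, [("x", x.shape)])
print(mod)  # the printed IR contains argwhere(...) with a dynamic first dimension
```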
So my question is: does the argwhere operator have to introduce a dynamic tensor? I want my model to stay fully static. Is there any way to avoid this problem?