Hi all,
TVM version: 0.14.dev0
PyTorch version: 2.1.0+cu118
Transformers version: 4.34.1
I am trying to compile the GPT-2 model in TVM using the PyTorch frontend, but I am getting this error:
Traceback (most recent call last):
  File "gpt2.py", line 39, in <module>
    mod, params = relay.frontend.from_pytorch(traced_token_predictor, inputs, default_dtype="int64")
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/relay/frontend/pytorch.py", line 5013, in from_pytorch
    outputs = converter.convert_operators(operator_nodes, outputs, ret_name)
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/relay/frontend/pytorch.py", line 4274, in convert_operators
    relay_out = relay_op(
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/relay/frontend/pytorch.py", line 2013, in matmul
    batch_shape[i] = max(batch_shape[i], j)
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/tir/expr.py", line 186, in __bool__
    return self.__nonzero__()
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/tir/expr.py", line 180, in __nonzero__
    raise ValueError(
ValueError: Cannot use and / or / not operator to Expr, hint: use tvm.tir.all / tvm.tir.any instead
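As far as I can tell, the failure is in Python's built-in max() at pytorch.py line 2013: when a batch dimension is a symbolic TIR expression, max() truth-tests the result of the comparison, and TIR expressions forbid being coerced to a Python bool. A minimal stand-in (no TVM needed; the class below just mimics how a TIR expression behaves) reproduces the mechanism:

```python
class FakeTirExpr:
    """Minimal stand-in for a tvm.tir expression: comparing two symbolic
    expressions builds a new expression instead of a Python bool."""

    def __gt__(self, other):
        return FakeTirExpr()

    def __bool__(self):
        # Mirrors tvm/python/tvm/tir/expr.py, which raises on truth-testing
        raise ValueError(
            "Cannot use and / or / not operator to Expr, "
            "hint: use tvm.tir.all / tvm.tir.any instead"
        )


n, m = FakeTirExpr(), FakeTirExpr()
try:
    max(n, m)  # max() truth-tests (m > n) internally, triggering __bool__
except ValueError as exc:
    print(exc)
```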
The code to reproduce the error:
import torch
from transformers import GPT2LMHeadModel

from tvm import relay

# Trace GPT-2 with a 1-D tensor of random token ids
token_predictor = GPT2LMHeadModel.from_pretrained("gpt2", torchscript=True).eval()
random_tokens = torch.randint(10000, (5,))
traced_token_predictor = torch.jit.trace(token_predictor, random_tokens)

# Convert the traced module to Relay
inputs = [("dummy_input_name", (5,))]
mod, params = relay.frontend.from_pytorch(traced_token_predictor, inputs, default_dtype="int64")
print(mod)
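I am not sure whether it sidesteps the converter issue, but from_pytorch's input_infos argument also accepts (name, (shape, dtype)) pairs, so the int64 dtype could be declared per input instead of via default_dtype. An untested sketch of that variant:

```python
# Hypothetical variant of the input spec: declare shape and dtype together
# per input instead of relying on default_dtype. Untested against this
# model / TVM build; only the input_infos format is taken from the API docs.
inputs = [("dummy_input_name", ((5,), "int64"))]
# mod, params = relay.frontend.from_pytorch(traced_token_predictor, inputs)
```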
Any help would be highly appreciated.