Got error while compiling GPT-2 in tvm

Hi all,
tvm version: 0.14.dev0
pytorch version: 2.1.0+cu118
transformers version: 4.34.1

I am trying to compile the GPT-2 model in TVM using the PyTorch frontend, but I am getting this error:

Traceback (most recent call last):
  File "gpt2.py", line 39, in <module>
    mod, params = relay.frontend.from_pytorch(traced_token_predictor, inputs, default_dtype="int64")
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/relay/frontend/pytorch.py", line 5013, in from_pytorch
    outputs = converter.convert_operators(operator_nodes, outputs, ret_name)
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/relay/frontend/pytorch.py", line 4274, in convert_operators
    relay_out = relay_op(
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/relay/frontend/pytorch.py", line 2013, in matmul
    batch_shape[i] = max(batch_shape[i], j)
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/tir/expr.py", line 186, in __bool__
    return self.__nonzero__()
  File "/home/user/workspace/rib_tvm/tvm/python/tvm/tir/expr.py", line 180, in __nonzero__
    raise ValueError(
ValueError: Cannot use and / or / not operator to Expr, hint: use tvm.tir.all / tvm.tir.any instead
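For context, the crash happens because `batch_shape[i]` is a symbolic TVM shape expression (e.g. `tvm.tir.Any` for a dynamic dimension), and Python's built-in `max` has to coerce the comparison result to `bool`, which TVM expressions forbid. A minimal pure-Python analogue (no TVM needed, `SymDim` is a hypothetical stand-in class) of the failure mode:

```python
# Minimal analogue of why pytorch.py line 2013 crashed: TVM's symbolic
# shape expressions (tvm.tir.PrimExpr) overload comparisons to build new
# expressions, and raise when Python forces one into a boolean context.
class SymDim:
    """Hypothetical stand-in for a symbolic dimension such as tvm.tir.Any()."""

    def __gt__(self, other):
        return SymDim()  # comparison yields another symbolic expression
    def __lt__(self, other):
        return SymDim()
    def __bool__(self):
        # mirrors tvm/python/tvm/tir/expr.py: __nonzero__
        raise ValueError(
            "Cannot use and / or / not operator to Expr, "
            "hint: use tvm.tir.all / tvm.tir.any instead"
        )

# `max(batch_shape[i], j)` internally evaluates `j > batch_shape[i]` and
# converts the result to bool, which raises exactly this ValueError:
try:
    max(SymDim(), 3)
except ValueError as exc:
    print(type(exc).__name__)  # ValueError
```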

The code to replicate the error:

from tvm import relay

import torch
from transformers import GPT2LMHeadModel

# Load GPT-2 in TorchScript-compatible mode and trace it with dummy token ids
token_predictor = GPT2LMHeadModel.from_pretrained("gpt2", torchscript=True).eval()

random_tokens = torch.randint(10000, (5,))
traced_token_predictor = torch.jit.trace(token_predictor, random_tokens)

# Convert the traced module to a Relay module
inputs = [("dummy_input_name", (5,))]
mod, params = relay.frontend.from_pytorch(traced_token_predictor, inputs, default_dtype="int64")
print(mod)

Any help would be highly appreciated.

A kind cc to @masahi for your insights on this, and perhaps an alternative solution?

Hi all,
I am putting up the solution here for anyone who runs into the same problem.
After a lot of debugging and following the code through, I stumbled upon bert_compilation_problems and value error in bert compilation, and thought it might be a version mismatch.
So I downgraded PyTorch and Python, and it worked: I was able to convert the torch model to a Relay model.
PyTorch → 1.8.0 with CUDA 11.0 support
Python → 3.7
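For reference, a sketch of pinning that environment. The version numbers come from above; the use of conda and the PyTorch wheel index URL are assumptions, adapt to your own setup:

```shell
# Hypothetical environment setup for the working combination reported above
conda create -n tvm-gpt2 python=3.7 -y
conda activate tvm-gpt2

# PyTorch 1.8.0 built against CUDA 11.0, from the PyTorch wheel index
pip install torch==1.8.0+cu110 -f https://download.pytorch.org/whl/torch_stable.html
pip install transformers
```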