Trouble building BERT with TVM

Hello, when I build BERT with TVM, I get the following error:

TVMError: In function relay.op._make.broadcast_to(0: RelayExpr, 1: Array<IntImm>) -> RelayExpr: error while converting argument 1: [20:02:27] /Users/dewey/tvm/include/tvm/runtime/packed_func.h:1866: InternalError: Check failed: (!checked_type.defined()) is false: Expected Array[IntImm], but got Array[index 0: tir.Any]

My code:

import numpy as np
from transformers import AutoModel, AutoTokenizer

import tvm
from tvm import relay
from tvm.runtime.vm import VirtualMachine

# PyTorch imports
import torch



###############################
# Change your config here
n_trials = 2000  # higher generally gives better tuning results
n_early_stopping = 600  # higher generally gives better tuning results
set_seqlen_myself = False  # if True, the model uses the seq_len set below
seq_len = 512  # only takes effect when set_seqlen_myself = True
target = "llvm"
##############################

tokenizer = AutoTokenizer.from_pretrained("/Users/dewey/bert-base-uncased")
device = torch.device("cpu")

# Tokenizing input text
if set_seqlen_myself:
    # build a random token sequence of length seq_len
    input_ids = list(np.random.randint(0, 25000, seq_len))
    input_ids[0] = 102   # special token at the start
    input_ids[-1] = 103  # special token at the end
    atten_mask = list(np.ones(seq_len, dtype=int))
    token_type_ids = list(np.zeros(seq_len, dtype=int))
else:
    sentence_a = "Who was Jim Henson ?"
    sentence_b = "Jim Henson was a puppeteer."
    tokenized_text = tokenizer(sentence_a, sentence_b, padding='max_length')  # pads to the model's max length (512)
    input_ids = tokenized_text['input_ids']
    atten_mask = tokenized_text['attention_mask']
    token_type_ids = tokenized_text['token_type_ids']

seq_len = len(input_ids)

# Creating a dummy input
input_ids_tensor = torch.tensor([input_ids])
atten_mask_tensors = torch.tensor([atten_mask])
token_type_ids_tensors = torch.tensor([token_type_ids])

dummy_input = [input_ids_tensor, atten_mask_tensors, token_type_ids_tensors]


# If you are instantiating the model with `from_pretrained` you can also easily set the TorchScript flag
model = AutoModel.from_pretrained("/Users/dewey/bert-base-uncased", torchscript=True)

# The model needs to be in evaluation mode
model.eval()

# Creating the trace
traced_model = torch.jit.trace(model, dummy_input)
# traced_model = torch.jit.script(model, dummy_input)
traced_model.eval()
script_module = traced_model

input_infos = [("input_ids", input_ids_tensor.shape), ("attention_mask", atten_mask_tensors.shape),
               ("token_type_ids", token_type_ids_tensors.shape)]
mod, params = relay.frontend.from_pytorch(script_module, input_infos)
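
# Uncomment to inspect the imported module; tensors whose shape
# prints with "?" carry a dynamic (tir.Any) dimension.
# print(mod["main"])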


# Add "-libs=mkl" to get best performance on x86 target.
# For x86 machine supports AVX512, the complete target is
# "llvm -mcpu=skylake-avx512 -libs=mkl"
target = "llvm"

with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"]):
    vm_exec = relay.vm.compile(mod, target=target, params=params)
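
For reference, once compilation succeeds I plan to run the module with the Relay VM roughly as below. This is only a sketch: the conversion via .numpy() and the output unpacking are my assumptions and may need adjusting.

dev = tvm.cpu()
vm = VirtualMachine(vm_exec, dev)

# Feed the same tensors used for tracing (converted to TVM NDArrays).
vm.set_input("main",
             input_ids=tvm.nd.array(input_ids_tensor.numpy(), dev),
             attention_mask=tvm.nd.array(atten_mask_tensors.numpy(), dev),
             token_type_ids=tvm.nd.array(token_type_ids_tensors.numpy(), dev))
tvm_out = vm.run()  # traced BERT with torchscript=True returns (sequence_output, pooled_output)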

My environment:

Mac (Apple M2 Pro)

TVM 0.14.0-dev, torch 2.0.1, transformers 4.31.0

I guess the problem is a wrong environment. Can someone share a known-working environment for building BERT?

Thanks!

I tested this demo on Windows with the same environment; the only difference is the platform (Windows vs. Mac), and the same error occurred.

I found that the error occurs because, while parsing the TorchScript, TVM creates a dense_pack.x86 task whose corresponding shape is [?, 1024], i.e. it contains a dynamic dimension. I saw someone work around this by downgrading Python from 3.8 to 3.7, but I want to stay on Python 3.8. Can someone tell me how to solve this problem on Python 3.8? Thanks!
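
One workaround I am experimenting with (not verified) is to force the dynamic shapes back to static before compiling, so that broadcast_to no longer receives a tir.Any dimension. A minimal sketch, reusing mod, params, and target from the script above; relay.transform.DynamicToStatic is a standard Relay pass, but I am not sure it covers this case:

mod = relay.transform.InferType()(mod)
mod = relay.transform.DynamicToStatic()(mod)  # rewrite dynamic ops to static where possible

with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"]):
    vm_exec = relay.vm.compile(mod, target=target, params=params)

Passing explicit dtypes in input_infos, e.g. ("input_ids", ((1, seq_len), "int64")), might also help the frontend pin down static shapes, but that is just a guess.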

Hi! I ran into the same error. Have you managed to solve it yet?