Failing to convert BERT from PyTorch (Hugging Face) to a Relay graph

Since converting directly from PyTorch was not working, I also tried going through an ONNX model, but I ran into another problem.
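For context, an export along these lines produces such an ONNX file (a minimal sketch; the opset version, output names, and the torchscript=True flag are assumptions, not necessarily the exact export I ran):

from transformers import BertModel, BertTokenizer
import torch

# Load the model in torchscript mode so it returns plain tuples,
# which torch.onnx.export can trace. (torchscript=True is an assumption.)
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("hello world", return_tensors="pt",
                   padding="max_length", max_length=512)

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"], inputs["token_type_ids"]),
    "bert-base-uncased.onnx",
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    output_names=["last_hidden_state", "pooler_output"],  # assumed names
    opset_version=11,  # assumed opset
)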

I would really appreciate it if you could help me out.

import tvm
import numpy as np
from tvm import relay
import onnx

model_path = "/home/yifanlu/TVM/TVM_Sample_Text/bert/onnx/bert-base-uncased.onnx"
onnx_model = onnx.load(model_path)

target = "llvm"
dev = tvm.cpu()

# All three BERT inputs are (batch=1, seq_len=512).
shape_dict = {"input_ids": (1, 512), "attention_mask": (1, 512), "token_type_ids": (1, 512)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
# Converting to a Relay graph succeeds.

with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)

# Dummy inputs; the exported BERT expects int64 token ids and masks.
x1 = np.random.randint(0, 2, (1, 512)).astype("int64")
x2 = np.random.randint(0, 2, (1, 512)).astype("int64")
x3 = np.random.randint(0, 2, (1, 512)).astype("int64")
tvm_output = intrp.evaluate()(tvm.nd.array(x1), tvm.nd.array(x2), tvm.nd.array(x3), **params).numpy()

The last line raises the following error:

Traceback (most recent call last):
  File "onnx_compile_test.py", line 25, in <module>
    tvm_output = intrp.evaluate()(tvm.nd.array(x1),tvm.nd.array(x2),tvm.nd.array(x3), **params).numpy()
  File "/home/yifanlu/TVM/tvm/python/tvm/relay/backend/interpreter.py", line 172, in evaluate
    return self._make_executor()
  File "/home/yifanlu/TVM/tvm/python/tvm/relay/build_module.py", line 482, in _make_executor
    "Graph Executor only supports static graphs, got output type", ret_type
ValueError: ('Graph Executor only supports static graphs, got output type', TupleTypeNode([TensorType([?, 512, 768], float32), TensorType([?, 768], float32)]))

I thought I had pinned the input shapes with batch size 1, but the output types still contain a dynamic dimension (the "?" in the error above). Changing "graph" to "debug" or "vm" only leads to other errors.
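For what it's worth, here is the workaround I am considering, based on my reading of the docs (a minimal sketch; whether freeze_params plus the DynamicToStatic pass actually removes the dynamic batch dimension here is an assumption on my part):

# Freeze the weights into the graph so shape inference can fold more
# shapes, then try to convert remaining dynamic ops to static ones.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict, freeze_params=True)
mod = relay.transform.InferType()(mod)
mod = relay.transform.DynamicToStatic()(mod)

with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)

Does that sound like the right direction, or is the VM executor the intended path for models whose outputs have dynamic shapes?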