While I was following the Relay quick start tutorial, I tried to load a module from PyTorch, but it raises a segmentation fault. The TVM I am using is built from the latest commit bff98843bef9a312587aaff51b679d9b69a7d5a7, and the code to reproduce is attached below:
from tvm import relay, auto_scheduler
import tvm
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
import torch.onnx
model = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=3, padding=1),
    nn.BatchNorm2d(3),
    # nn.Dropout()
)
input_shape = [1, 3, 224, 224]
input_data = torch.randn(input_shape)
scripted_model = torch.jit.trace(model, input_data).eval()
input_name = "input0"
shape_list = [(input_name, input_data.shape)]
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
print(mod['main'])
opt_level = 3
# target = tvm.target.cuda()
target = "llvm"
with tvm.transform.PassContext(opt_level=opt_level):
    lib = relay.build(mod, target=target, params=params)
I also ran into this issue recently. It turned out to be conflicting symbols between PyTorch and TVM; see https://github.com/apache/tvm/issues/9362#issuecomment-955263494 for the resolution. Alternatively, a quicker (but less elegant) workaround is to import torch before tvm, as in the sketch below. Hope this helps!
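For reference, here is a minimal sketch of the same repro script with only the import order changed (torch before tvm). It keeps everything else from your script; as the linked issue explains, whether this is enough can depend on how your PyTorch and TVM binaries were built:

import torch          # importing torch before tvm is the workaround described above
import torch.nn as nn
import tvm
from tvm import relay

model = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=3, padding=1),
    nn.BatchNorm2d(3),
)
input_data = torch.randn([1, 3, 224, 224])

# Trace the model and convert the TorchScript module to a Relay module
scripted_model = torch.jit.trace(model, input_data).eval()
mod, params = relay.frontend.from_pytorch(scripted_model, [("input0", input_data.shape)])

# Build with the LLVM backend
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)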