Hi.
I have a model saved in TorchScript (torch.jit) format.
I load it on the first device and convert it to TVM with this code:
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

scripted_model = torch.jit.load('experiment_3.scrypted', map_location='cuda:2').eval()
image_size = (1, 3, 112, 112)
input_name = "input.1"
shape_list = [(input_name, image_size)]
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

opt_level = 3
target = tvm.target.cuda()
dev = tvm.cuda(2)  # GPU the graph executor runs on
with tvm.transform.PassContext(opt_level=opt_level):
    lib = relay.build(mod, target, params=params)
model = graph_executor.GraphModule(lib["default"](dev))
After that I can use it and it gives me what I want.
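For context, this is roughly how I run it on the first device; the random input is just a placeholder here, my real input is a preprocessed float32 image batch of the same shape:

import numpy as np
import tvm

# Placeholder input with the declared shape and dtype.
data = np.random.uniform(size=(1, 3, 112, 112)).astype("float32")

model.set_input("input.1", tvm.nd.array(data, dev))
model.run()
out = model.get_output(0).numpy()
print(out)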
But if I instead build it for an ARM CPU target and save it with:
scripted_model = torch.jit.load('experiment_3.scrypted', map_location='cuda:2').eval()
image_size = (1, 3, 112, 112)
input_name = "input.1"
shape_list = [(input_name, image_size)]
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

name = 'llvm'
target = tvm.target.Target('llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon')
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
lib.export_library("experiment3.tar")
And then load it on the other device with:
import tvm
from tvm.contrib import graph_executor

path_to_tar = './models/experiment3.tar'
device = tvm.cpu(0)  # the board's ARM CPU
lib = tvm.runtime.load_module(path_to_tar)
model = graph_executor.GraphModule(lib["default"](device))
If I run it now, the results are mixed with NaNs, like this: [-4.51843452e+00 1.09136276e+01 -7.48604584e+00 -5.61006546e+00 nan nan -1.96696491e+01 nan nan -2.26766324e+00 -8.84266663e+00 nan … ]
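For reference, this is roughly how I call it on the second device; again the random input is only a stand-in for my real preprocessing, which gives the same float32 tensor I use on the first device:

import numpy as np
import tvm

# Placeholder input; the real data is a preprocessed float32 image batch.
data = np.random.uniform(size=(1, 3, 112, 112)).astype("float32")

model.set_input("input.1", tvm.nd.array(data, device))
model.run()
out = model.get_output(0).numpy()
# Quick check that confirms the NaNs in the output:
print(np.isnan(out).sum(), "NaN values out of", out.size)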
I tried opt_level values from 0 to 4 and still have this problem. Is this a bug, or am I doing something wrong? Thanks in advance.