Errors when running "Deploy a Framework-prequantized Model with TVM"

I tried to run the script in tutorials/frontend/, but an unexpected error appeared:

    AssertionError: parameter classifier.1._packed_params._packed_params not found in state dict

I also found a post by this author, which said that after jitting, parameters are packed in a different way:

    classifier.1._packed_params._packed_params torch.Size([104])

He added some code in TVM to address this. I also tested it on Colab:

    import numpy as np
    import torch
    from torchvision.models.quantization import mobilenet as qmobilenet

    qmodel = qmobilenet.mobilenet_v2(pretrained=True).eval()

    input_size = (1, 3, 224, 224)
    inp = np.random.randn(*input_size).astype("float32")

    trace = torch.jit.trace(qmodel, torch.from_numpy(inp))
    state_dict = trace.state_dict()

    for (k, v) in state_dict.items():
        print(k, v.size())

The output is:

    features.18.1.running_var torch.Size([1280])
    features.18.1.num_batches_tracked torch.Size([])
    classifier.1.weight torch.Size([1000, 1280])
    classifier.1.bias torch.Size([1000])
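For context, here is a toy sketch (not TVM's actual converter code) of why the assertion fires: the frontend resolves each parameter name referenced by the traced graph against the traced model's state dict, and when the two disagree about packed-parameter naming, the lookup fails.

```python
# Toy illustration (NOT TVM's real code): the converter looks up every
# parameter name from the traced graph in the state dict, and raises
# when a name is missing.
def fetch_param(name, state_dict):
    assert name in state_dict, (
        "parameter %s not found in state dict" % name)
    return state_dict[name]

# Keys as printed by the script above (placeholder values).
state_dict = {
    "classifier.1.weight": "tensor(1000x1280)",
    "classifier.1.bias": "tensor(1000)",
}

print(fetch_param("classifier.1.weight", state_dict))  # resolves fine

# A graph that instead references the packed name reproduces the error:
try:
    fetch_param("classifier.1._packed_params._packed_params", state_dict)
except AssertionError as e:
    print(e)
```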

I guess this error is probably caused by a difference in PyTorch versions. After I downgraded PyTorch to 1.4, the error disappeared. @masahi
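As a quick guard, here is a small sketch that compares the installed version string before running the tutorial. The 1.4 threshold is taken from this thread, not from any official compatibility table, and both helper names are hypothetical:

```python
# Hypothetical helpers: check the installed PyTorch version against the
# range this thread reports working (1.4 worked; 1.6/1.7 did not).
def version_tuple(version):
    """Parse a version string like '1.7.0+cpu' into a comparable tuple."""
    base = version.split("+")[0]  # drop a local suffix such as '+cpu'
    return tuple(int(part) for part in base.split(".")[:3])

def tutorial_supported(version, max_supported=(1, 4, 0)):
    """Return True if this version predates the packing change."""
    return version_tuple(version) <= max_supported

print(tutorial_supported("1.4.0"))      # version the thread reports working
print(tutorial_supported("1.7.0+cpu"))  # version that triggers the assertion
```

In practice you would pass `torch.__version__` as the argument.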

Yes, that is the error you see if you run that tutorial with PyTorch 1.6, see Unable to run the tvm tutorial using putorch

This PR fixed that problem

My PyTorch version is 1.7.0+cpu. Is this PyTorch version too high? I am looking forward to your reply!

@masahi Hello, when I ran this code I also encountered an error :sob:. When I debugged, I found that the API relay.frontend.from_pytorch reported the following error:

    error: expected text format semantic version, found a Token(span=Span(SourceName(C:\Users\田田\Desktop\tvm\python\tvm\relay\std/prelude.rly, 0000024C5FAB43B0), 1, 1, 1, 2), token_type=EndOfFile, data=(nullptr))
     --> C:\Users\田田\Desktop\tvm\python\tvm\relay\std/prelude.rly:1:1
      |
    1 |
      |
    help: you can annotate it as #[version = "0.0.5"]
     --> C:\Users\田田\Desktop\tvm\python\tvm\relay\std/prelude.rly:1:1
      |
    1 |
      |
    note: run with TVM_BACKTRACE=1 environment variable to display a backtrace.
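The parser hits EndOfFile at line 1, column 1 of prelude.rly, which suggests TVM read the file as empty or truncated. A minimal diagnostic sketch, assuming that is the cause (`check_prelude` is a hypothetical helper, not a TVM API, and the path should be your own checkout's):

```python
import os

def check_prelude(path):
    """Report whether the Relay prelude file looks readable and annotated."""
    if not os.path.exists(path) or os.path.getsize(path) == 0:
        return "prelude.rly is missing or empty; re-fetch your TVM checkout"
    with open(path, encoding="utf-8") as f:
        first_line = f.readline().strip()
    if first_line.startswith("#[version"):
        return "version annotation present: " + first_line
    return 'no version annotation; the parser hint suggests #[version = "0.0.5"]'

# Example: point this at <your tvm>/python/tvm/relay/std/prelude.rly
print(check_prelude("python/tvm/relay/std/prelude.rly"))
```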

How can I solve it? I am anxious to fix this, so any help would be much appreciated. Thanks!