Errors when running "Deploy a Framework-prequantized Model with TVM"

I tried to run the script tutorials/frontend/deploy_prequantized.py, but an unexpected error appeared:

    AssertionError: parameter classifier.1._packed_params._packed_params not found in state dict

I also found a post by this author which said that after jitting, the parameters are packed in a different way:

    classifier.1._packed_params._packed_params torch.Size([104])
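For context, PyTorch's quantized Linear keeps its weight and bias inside a single packed object, which is why only the _packed_params entry shows up in the jitted state dict. Below is a minimal sketch of how to look inside that object (assuming the pretrained quantized MobileNet V2 from torchvision with quantize=True, which is not exactly how the tutorial quantizes; torch.ops.quantized.linear_unpack recovers the weight and bias):

    import torch
    from torchvision.models.quantization import mobilenet as qmobilenet

    # Assumption: use the pretrained, already-quantized MobileNet V2;
    # the tutorial instead quantizes the float model itself.
    qmodel = qmobilenet.mobilenet_v2(pretrained=True, quantize=True).eval()

    # classifier[1] is a quantized Linear; its weight and bias live in one packed
    # object, the same one that appears as classifier.1._packed_params._packed_params.
    packed = qmodel.classifier[1]._packed_params._packed_params
    weight, bias = torch.ops.quantized.linear_unpack(packed)
    print(weight.shape, bias.shape)  # expect roughly [1000, 1280] and [1000]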

He added some code to TVM to address this. However, when I tested it myself on Colab with the following script:

    import numpy as np
    import torch
    from torchvision.models.quantization import mobilenet as qmobilenet

    qmodel = qmobilenet.mobilenet_v2(pretrained=True).eval()

    input_size = (1, 3, 224, 224)
    inp = np.random.randn(*input_size).astype("float32")

    trace = torch.jit.trace(qmodel, torch.from_numpy(inp))
    state_dict = trace.state_dict()

    for (k, v) in state_dict.items():
        print(k, v.size())

the output was:

    ......
    features.18.1.running_var torch.Size([1280])
    features.18.1.num_batches_tracked torch.Size([])
    classifier.1.weight torch.Size([1000, 1280])
    classifier.1.bias torch.Size([1000])

I guess this error was probably caused by the PyTorch version: after I downgraded PyTorch to 1.4, the error disappeared (the Colab workaround I used is sketched below). @masahi
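In case it helps anyone else, this is roughly the downgrade step in a Colab cell (assuming torchvision 0.5.0 is the release that pairs with PyTorch 1.4; please double-check the pairing for your setup):

    # Run in a Colab cell, then restart the runtime (assumed version pairing):
    # !pip install torch==1.4.0 torchvision==0.5.0
    import torch
    print(torch.__version__)  # 1.4.x runs the tutorial as-is; 1.6 hits the assertion above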

Yes, that is the error you see if you run that tutorial with PyTorch 1.6; see the thread "Unable to run the tvm tutorial deploy_prequantized.py using putorch".

This PR fixed that problem: https://github.com/apache/incubator-tvm/pull/6602
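For reference, the assertion was raised during the Relay conversion step of the tutorial. A minimal sketch of that step (assuming the pretrained quantized MobileNet V2 from torchvision with quantize=True instead of the tutorial's own quantize_model helper, and an arbitrary input name):

    import torch
    from torchvision.models.quantization import mobilenet as qmobilenet
    from tvm import relay

    # Quantize and trace roughly the way the tutorial does; quantize=True
    # (pretrained quantized weights) is an assumption made here for brevity.
    model = qmobilenet.mobilenet_v2(pretrained=True, quantize=True).eval()
    inp = torch.rand(1, 3, 224, 224)
    script_module = torch.jit.trace(model, inp)

    # With PyTorch 1.6 and a TVM build without the fix above, this call raised:
    # AssertionError: parameter classifier.1._packed_params._packed_params not found in state dict
    input_name = "input"  # arbitrary; it only labels the graph input
    mod, params = relay.frontend.from_pytorch(script_module, [(input_name, (1, 3, 224, 224))])

With a TVM build that includes the PR above (or with PyTorch 1.4), this call should succeed and return the Relay module and parameters.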