Unable to run the TVM tutorial deploy_prequantized.py using PyTorch

I'm trying to run this tutorial (https://tvm.apache.org/docs/tutorials/frontend/deploy_prequantized.html#), but it gives me this:

```
File /home/santosh/.tvm_test_data/data/imagenet1000_clsid_to_human.txt exists, skip.
File /home/santosh/.tvm_test_data/data/cat.png exists, skip.
/home/santosh/.local/lib/python3.8/site-packages/torch/quantization/observer.py:875: UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point
  warnings.warn(
ANTLR runtime and generated code versions disagree: 4.8!=4.7.2
ANTLR runtime and generated code versions disagree: 4.8!=4.7.2
Traceback (most recent call last):
  File "deploy_prequantized.py", line 166, in <module>
    mod, params = relay.frontend.from_pytorch(script_module, input_shapes)
  File "/home/santosh/.local/lib/python3.8/site-packages/tvm-0.7.dev1-py3.8-linux-x86_64.egg/tvm/relay/frontend/pytorch.py", line 2633, in from_pytorch
    param_vars, tensors, packed_param_map = convert_params(graph, params)
  File "/home/santosh/.local/lib/python3.8/site-packages/tvm-0.7.dev1-py3.8-linux-x86_64.egg/tvm/relay/frontend/pytorch.py", line 2373, in convert_params
    assert full_attr in state_dict, err_msg
AssertionError: parameter classifier.1._packed_params._packed_params not found in state dict
```

Are there any changes I need to make on line 166?
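For reference, here is roughly what my script does up to line 166. This is a sketch based on the tutorial, not the exact file; the input name and shape below are the tutorial's defaults.

```python
import torch
from torchvision.models.quantization import mobilenet as qmobilenet
from tvm import relay


def quantize_model(model, inp):
    # Post-training static quantization, as in the tutorial
    model.fuse_model()
    model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
    torch.quantization.prepare(model, inplace=True)
    model(inp)  # calibrate with one batch
    torch.quantization.convert(model, inplace=True)


qmodel = qmobilenet.mobilenet_v2(pretrained=True).eval()
inp = torch.rand(1, 3, 224, 224)
quantize_model(qmodel, inp)

script_module = torch.jit.trace(qmodel, inp).eval()
input_name = "input"
input_shapes = [(input_name, (1, 3, 224, 224))]

# This is the call that fails (line 166 in the traceback above)
mod, params = relay.frontend.from_pytorch(script_module, input_shapes)
```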

Unfortunately, our quantized PyTorch model support is completely broken for PyTorch 1.6 due to a serious bug they introduced, and that is exactly the error you get if you try.

See https://github.com/pytorch/pytorch/issues/42497. Other than waiting for them to fix this, we have no plan at the moment.
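If it helps, here is a hypothetical guard you could put before the conversion so the script fails early on an unsupported PyTorch version. It is not part of TVM; the version bound reflects the issue linked above, and the torch==1.5.1/torchvision==0.6.1 pairing is only a suggestion of an earlier release to try.

```python
# Hypothetical version guard, not part of TVM. Assumes quantized model
# conversion only works with PyTorch < 1.6, per the issue linked above.
import torch
from packaging import version

if version.parse(torch.__version__) >= version.parse("1.6.0"):
    raise RuntimeError(
        "TVM's quantized PyTorch frontend is known to break with PyTorch >= 1.6; "
        "try an earlier release such as torch==1.5.1 with torchvision==0.6.1."
    )
```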