TVM quantization block accuracy results for common open-source models

Hello @masahi ,

I am working with TVM's internal quantization block to compare the accuracy of open-source models (ResNet, MobileNet & Inception) against the corresponding models prequantized by their respective frameworks. I observed that ResNet-18's accuracy is on par with the MXNet prequantized model. However, MobileNet & Inception give poor accuracy compared to their prequantized counterparts. Are there any previously published results from your evaluations of open-source models? If you could share them, it would be very helpful.

Thanks in advance,


There is a very old result for PyTorch models at pytorch_quantization/tvm_qnn_evaluation at master · Edgecortix-Inc/pytorch_quantization · GitHub

Thank you for sharing this. As far as I can see, these are results for framework-prequantized models compiled with TVM. I am evaluating quantization applied to float models using the TVM quantizer; could you please let me know if there are any evaluations from your end for this case?