Hello, I’m not sure what I’m missing here, but I’m trying to compare inference performance between TVM and a plain MXNet ResNet-18. I auto-tuned the TVM model and it did get somewhat faster, but it is still slower than the plain MXNet model: I’m getting 76 ms for the auto-tuned TVM model versus 9 ms for plain MXNet. Can someone please help me?
Yeah, I did, but there’s no improvement. I noticed that when I load the model I don’t call rt.set_input(**params), because I’m loading the params as bytes and I’m not sure how to turn the bytes into a dict. I’m not sure whether that could affect it.