TVM vs. ONNX Runtime for CPU Inference

I have ONNX files and want to find the best framework to serve inference with. I am currently using onnxruntime on CPU, and I wonder if there is any performance gain from TVM over onnxruntime (I assume there is). I have been looking for benchmark results comparing TVM and onnxruntime inference performance, but could not find any. Does anybody know of some?
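
In case it helps, here is a minimal sketch of how one could time both runtimes on the same model, assuming a `model.onnx` with a single float32 input; the input shape `(1, 3, 224, 224)`, the `llvm` target string, and the untuned `opt_level=3` build are assumptions for illustration, not a definitive benchmark setup:

```python
import time
import numpy as np
import onnx
import onnxruntime as ort
import tvm
from tvm import relay
from tvm.contrib import graph_executor

def bench_ms(fn, warmup=10, runs=100):
    """Average wall-clock time per call in milliseconds, after warmup."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1e3

# Assumed model and input shape -- adjust to your own ONNX file.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# --- ONNX Runtime on the CPU execution provider ---
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
ort_ms = bench_ms(lambda: sess.run(None, {input_name: x}))
print(f"onnxruntime: {ort_ms:.2f} ms")

# --- TVM: import the same ONNX graph and compile for CPU ---
model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(model, shape={input_name: x.shape})
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
dev = tvm.cpu()
m = graph_executor.GraphModule(lib["default"](dev))
m.set_input(input_name, tvm.nd.array(x))
tvm_ms = bench_ms(m.run)
print(f"tvm (untuned): {tvm_ms:.2f} ms")
```

Note that this compares an *untuned* TVM build; TVM's numbers typically depend heavily on auto-tuning and on using a CPU-specific target string, so results from a sketch like this are only a starting point.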
