Currently, TVM lacks an up-to-date and reproducible benchmark. The only existing benchmark, hosted at tvm/apps/benchmark, is outdated and has several flaws.
- The results were collected two years ago and no longer reflect current performance.
- The deep learning models are outdated; newer models (e.g., BERT, EfficientNet) are not included.
- The input format is TVM's internal Relay format. It does not accept models from high-level frameworks (e.g., PyTorch, MXNet) or open exchange formats (e.g., ONNX).
- It does not cover Intel CPUs.
- It only provides pre-tuned configurations from TopHub, but not the scripts used to generate them.
This RFC aims to build a new open, reproducible benchmark for TVM. Once the new benchmark is ready, we can run evaluations nightly and auto-tuning weekly or monthly.
As a first step, we target three models, three hardware platforms, and four code generation strategies. To make comparison with other frameworks easier, we choose ONNX as the input model format.
- models: ResNet-50, MobileNet v2, and BERT, all with batch size 1
- hardware platforms: NVIDIA GPU, Intel CPU, ARM CPU
- code generation strategies: AutoTVM, auto-scheduler, TVM + manual library, and ONNX Runtime.
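The combinations above define the full benchmark matrix. A minimal sketch of how the runner could enumerate every (model, target, strategy) job is below; all names and the dictionary layout are illustrative assumptions, not a fixed design.

```python
from itertools import product

# Hypothetical identifiers for the proposed matrix; not final naming.
MODELS = ["resnet-50", "mobilenet-v2", "bert"]          # all with batch size 1
TARGETS = ["nvidia-gpu", "intel-cpu", "arm-cpu"]
STRATEGIES = ["autotvm", "auto-scheduler", "tvm-libs", "onnxruntime"]

def benchmark_matrix():
    """Enumerate every (model, target, strategy) combination to run."""
    return [
        {"model": m, "target": t, "strategy": s}
        for m, t, s in product(MODELS, TARGETS, STRATEGIES)
    ]

if __name__ == "__main__":
    # 3 models x 3 targets x 4 strategies = 36 benchmark runs
    for job in benchmark_matrix():
        print(job)
```

Keeping the matrix explicit like this makes it easy to run a nightly subset (e.g., only the library-backed strategies) while the full tuning sweep runs weekly or monthly.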
All logs generated during auto-tuning should be uploaded for future reference.
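Both AutoTVM and the auto-scheduler record one JSON object per line in their tuning logs, so archiving logs from multiple tuning runs mostly amounts to a line-level merge. The sketch below shows one possible way to deduplicate and combine log files before upload; the function name and file layout are assumptions for illustration.

```python
import json

def merge_tuning_logs(log_files, out_file):
    """Merge several JSON-lines tuning logs into one file, dropping
    exact duplicate records. Returns the number of unique records kept.
    """
    seen = set()
    with open(out_file, "w") as out:
        for path in log_files:
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    json.loads(line)  # validate the record is well-formed JSON
                    if line not in seen:
                        seen.add(line)
                        out.write(line + "\n")
    return len(seen)
```

A merged, deduplicated log per (model, target, strategy) combination would let anyone reproduce the tuned schedules without rerunning the search.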