How to benchmark ImageNet models in TVM/NNVM?

I would like to evaluate the performance of these models on Intel CPUs.

A follow-up for https://github.com/dmlc/tvm/pull/1436

You can find some examples here: https://github.com/dmlc/tvm/tree/master/apps/benchmark

Basically we should do some tuning for the workloads and set compilation flags properly.

As for Intel CPU, I think @yzhliu can give more instructions or possibly some scripts.

For Intel CPUs, there are currently default schedules that give fair performance, which can be a good starting point. When compiling models, you need to set opt_level=3 to enable the layout transformation optimization. You also need to set the target properly; for an Intel Skylake CPU, it’s “llvm -mcpu=skylake-avx512”.
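As a minimal sketch of that compilation step (assuming `sym` and `params` came from an NNVM frontend, and assuming a (1, 3, 224, 224) input shape for illustration):

```python
import nnvm.compiler
import tvm

# Assumed target string for a Skylake server CPU; adjust -mcpu for your machine.
target = "llvm -mcpu=skylake-avx512"

# opt_level=3 enables the layout transformation optimization mentioned above.
with nnvm.compiler.build_config(opt_level=3):
    graph, lib, params = nnvm.compiler.build(
        sym, target, shape={"data": (1, 3, 224, 224)}, params=params)
```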

To get optimal performance, you need to tune the convolution schedules. Currently this is not a trivial task; we’ll release some tools to support it soon.

https://docs.tvm.ai/tutorials/nnvm/from_mxnet.html
This is a tutorial on converting an MXNet ImageNet model to a TVM model and compiling it. You just need to set opt_level and the target differently for Intel CPUs.
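Following that tutorial, an end-to-end sketch of conversion, compilation, and a rough timing measurement could look like the code below. The model name `resnet18_v1`, the (1, 3, 224, 224) input shape, and the number of timing runs are assumptions for illustration.

```python
import numpy as np
import nnvm
import nnvm.compiler
import tvm
from tvm.contrib import graph_runtime
from mxnet.gluon.model_zoo.vision import get_model

# Assumed example model; any Gluon ImageNet model should work similarly.
block = get_model("resnet18_v1", pretrained=True)
sym, params = nnvm.frontend.from_mxnet(block)

target = "llvm -mcpu=skylake-avx512"   # adjust -mcpu for your CPU
shape_dict = {"data": (1, 3, 224, 224)}

with nnvm.compiler.build_config(opt_level=3):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)

# Run the compiled module on a random input and time it.
ctx = tvm.cpu(0)
module = graph_runtime.create(graph, lib, ctx)
module.set_input(**params)
data = np.random.uniform(size=shape_dict["data"]).astype("float32")
module.set_input("data", tvm.nd.array(data))

ftimer = module.module.time_evaluator("run", ctx, number=100)
print("mean inference time: %.2f ms" % (ftimer().mean * 1000))
```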