Is there an opt_level that does nothing?

Hi everyone,

Super excited about the project and big shout-out to the creators and the contributors.

I wanted to know if there is an opt_level that does nothing to the input graph. As far as I am aware, opt_level = 0 still runs SimplifyInference; is there any option that would produce exactly the same runtime graph as the input?

Since I am researching the behavior of TVM rather than looking for raw performance, such an option would be really helpful for me. If anyone has any pointers, I can try to implement it myself.

opt_level=0 would be roughly what you are looking for.
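
For reference, a minimal sketch of setting that up, assuming the NNVM compiler path of that era; the toy graph and shape are just placeholders:

```python
import nnvm
import nnvm.compiler

# Toy graph purely for illustration; in practice `sym` and any params
# would come from a frontend such as nnvm.frontend.from_mxnet / from_onnx.
data = nnvm.symbol.Variable("data")
sym = nnvm.symbol.relu(data)
shape_dict = {"data": (1, 3, 224, 224)}

# opt_level=0 keeps graph-level optimization to the minimum available
# (SimplifyInference still runs), which is the closest thing to
# "do nothing" that the build config exposes.
with nnvm.compiler.build_config(opt_level=0):
    graph, lib, params = nnvm.compiler.build(sym, target="llvm", shape=shape_dict)
```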

Strictly speaking, there is no “does nothing” level. Because TVM supports operator fusion, we don’t bother to implement bulk operators like the fused scale-add (batch norm), and simply break them down into small operators.

Because of this breakdown, there will already be differences between the transformed graph and the input. The philosophy behind this is to keep everything simple; operator fusion will win the performance back.
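
To illustrate the kind of rewrite meant here, a rough NumPy sketch (not the actual TVM pass) of how an inference-time batch norm collapses into a per-channel multiply and add:

```python
import numpy as np

def batch_norm_reference(x, gamma, beta, mean, var, eps=1e-5):
    # The "bulk" operator as a single formula.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def batch_norm_decomposed(x, gamma, beta, mean, var, eps=1e-5):
    # SimplifyInference-style rewrite: precompute a per-channel scale and
    # shift, then emit plain multiply and add ops that fusion can later
    # combine with neighbouring operators.
    scale = gamma / np.sqrt(var + eps)
    shift = beta - mean * scale
    return x * scale + shift

x = np.random.randn(2, 4).astype("float32")
gamma, beta = np.ones(4, "float32"), np.zeros(4, "float32")
mean = np.random.randn(4).astype("float32")
var = np.abs(np.random.randn(4)).astype("float32")
assert np.allclose(batch_norm_reference(x, gamma, beta, mean, var),
                   batch_norm_decomposed(x, gamma, beta, mean, var), atol=1e-5)
```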

Hi @tqchen,

Thank you for taking time out to reply. It’s really inspiring to see the work you are doing here.

I meant to ask that at the graph level, even when opt_level=0, there are still graph-level optimizations like SimplifyInference, which drops operation nodes such as Dropout.

If I understand your response correctly, there are nnvm nodes which don’t have kernels implemented, so they will always be dropped from the graph or broken down into simpler components anyway. Am I right?
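
To make my question concrete, here is a plain-NumPy sketch of my understanding of why dropping the Dropout node at inference is harmless (the helper is purely illustrative, not TVM code):

```python
import numpy as np

def dropout(x, rate, training):
    # With "inverted" dropout the scaling happens at training time, so the
    # inference-time op is just the identity and removing the node does not
    # change the result.
    if not training:
        return x
    mask = (np.random.rand(*x.shape) >= rate).astype(x.dtype)
    return x * mask / (1.0 - rate)

x = np.random.randn(3, 3).astype("float32")
assert np.array_equal(dropout(x, rate=0.5, training=False), x)
```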