How to optimize conv2d+batchnorm in relax

How can we optimize conv2d+batchnorm in Relax? In Relay, we could use the SimplifyInference and FoldScaleAxis passes to rewrite conv2d+batchnorm into conv2d+add. What is the equivalent in Relax?
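For reference, the transformation those Relay passes perform is just algebra: an inference-mode batchnorm is a per-channel affine op, so its scale can be folded into the conv weights and its shift becomes a bias add. Here is a minimal NumPy sketch of that folding (function names like `fold_bn_into_conv` are my own, not a TVM API):

```python
import numpy as np

def conv2d(x, w, b=None):
    # Naive valid-padding conv. x: (C_in, H, W), w: (C_out, C_in, kh, kw).
    c_out, c_in, kh, kw = w.shape
    h_out, w_out = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    out = np.zeros((c_out, h_out, w_out))
    for co in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                out[co, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[co])
    if b is not None:
        out += b[:, None, None]  # per-channel bias add
    return out

def batchnorm(y, gamma, beta, mean, var, eps=1e-5):
    # Inference-mode batchnorm: per-channel affine transform.
    scale = gamma / np.sqrt(var + eps)
    return y * scale[:, None, None] + (beta - mean * scale)[:, None, None]

def fold_bn_into_conv(w, gamma, beta, mean, var, eps=1e-5):
    # Fold the batchnorm scale into the conv weights and the shift
    # into a bias, so conv2d+batchnorm becomes conv2d+add.
    scale = gamma / np.sqrt(var + eps)
    w_folded = w * scale[:, None, None, None]
    b_folded = beta - mean * scale
    return w_folded, b_folded
```

After folding, `conv2d(x, w_folded, b_folded)` matches `batchnorm(conv2d(x, w), ...)`, which is exactly the conv2d+add form the Relay passes produce. In Relax, my understanding is that a similar effect can be reached by decomposing batch_norm for inference (e.g. via `relax.transform.DecomposeOpsForInference`) and then folding the resulting constant arithmetic with `relax.transform.FoldConstant`, but please double-check against the current pass list.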

Please refer to the End-to-End Optimize Model tutorial in the TVM 0.20.dev0 documentation.