Batchnorm op Fusion in TVM

Thanks @masahi.

Okay, so I tried this sequence of passes:

```python
import tvm
from tvm import relay

seq1 = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.SimplifyInference(),
        relay.transform.FoldConstant(),
        relay.transform.FoldScaleAxis(),
        relay.transform.SimplifyInference(),
        relay.transform.FoldConstant(),
    ]
)
```
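
For concreteness, here is a minimal sketch of the kind of module I am testing on (the shapes, names, and random parameter values are made up for illustration). Note that I run it under `opt_level=3` so that FoldScaleAxis, which is registered at opt level 3, is not skipped by the Sequential:

```python
import numpy as np
import tvm
from tvm import relay

# Hypothetical conv2d (no bias_add) followed directly by batch_norm.
data = relay.var("data", shape=(1, 3, 8, 8))
conv = relay.nn.conv2d(
    data, relay.var("weight", shape=(16, 3, 3, 3)),
    kernel_size=(3, 3), channels=16, padding=(1, 1),
)
bn = relay.nn.batch_norm(
    conv,
    relay.var("gamma", shape=(16,)), relay.var("beta", shape=(16,)),
    relay.var("moving_mean", shape=(16,)), relay.var("moving_var", shape=(16,)),
)
func = relay.Function(relay.analysis.free_vars(bn[0]), bn[0])
mod = tvm.IRModule.from_expr(func)

# Bind the weight and batch_norm statistics as constants so that
# FoldConstant/FoldScaleAxis actually have constants to fold.
params = {
    "weight": np.random.rand(16, 3, 3, 3).astype("float32"),
    "gamma": np.random.rand(16).astype("float32"),
    "beta": np.random.rand(16).astype("float32"),
    "moving_mean": np.random.rand(16).astype("float32"),
    "moving_var": np.random.rand(16).astype("float32"),
}
mod["main"] = relay.build_module.bind_params_by_name(mod["main"], params)

# opt_level=3 so FoldScaleAxis is not skipped by the Sequential.
with tvm.transform.PassContext(opt_level=3):
    mod = seq1(mod)
print(mod)
```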

The “add” ops are still there as-is; they are not getting folded into the preceding conv2d’s bias.
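
To check this, I count the ops left in the module with a small visitor (a quick sketch of my own, not a TVM utility):

```python
# Count the call ops remaining in the module after running the passes.
op_freqs = {}

def visit(node):
    if isinstance(node, relay.Call) and isinstance(node.op, tvm.ir.Op):
        op_freqs[node.op.name] = op_freqs.get(node.op.name, 0) + 1

relay.analysis.post_order_visit(mod["main"], visit)
print(op_freqs)  # "add" still shows up here
```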

Also, suppose there is no bias_add corresponding to a conv2d, but a batch_norm is present right after that conv2d. In that case, after the batch_norm is folded, will a new bias_add op eventually be created to absorb the shift, or will the shift remain as an add op?
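
For reference, my understanding of the algebra behind the folding (my own paraphrase, not the actual pass implementation): in inference mode, batch_norm reduces to a per-channel scale and shift, and FoldScaleAxis can push the scale into the conv2d weights, which would leave only the shift behind. A quick numeric sanity check of that identity:

```python
import numpy as np

# Sanity check of the identity behind batch_norm folding:
#   batch_norm(y) == y * scale + shift
#   scale = gamma / sqrt(var + eps),  shift = beta - mean * scale
rng = np.random.default_rng(0)
y = rng.standard_normal((1, 4, 8, 8))        # pretend conv2d output, NCHW
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var = rng.standard_normal(4), rng.random(4) + 0.1
eps = 1e-5

def per_ch(v):                               # broadcast a per-channel vector
    return v.reshape(1, -1, 1, 1)

bn = per_ch(gamma) * (y - per_ch(mean)) / np.sqrt(per_ch(var) + eps) + per_ch(beta)

scale = gamma / np.sqrt(var + eps)           # foldable into the conv weights
shift = beta - mean * scale                  # the part left behind as an add
assert np.allclose(bn, y * per_ch(scale) + per_ch(shift))
```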