[BUG Report] Does the Sequential pass handle the CanonicalizeOps pass individually?

import numpy as np
import tvm
from tvm import relay
from tvm.relay import testing


def prerequisite_optimize(mod, params=None):
    """Prerequisite optimization passes for quantization. Perform
    "SimplifyInference", "FoldScaleAxis", "FoldConstant", and
    "CanonicalizeOps" optimization before quantization."""
    optimize = tvm.transform.Sequential(
        [
            # relay.transform.SimplifyInference(),
            # relay.transform.FoldConstant(),
            # relay.transform.FoldScaleAxis(),
            relay.transform.CanonicalizeOps(),
            # relay.transform.FoldConstant(),
        ]
    )
    # (params binding omitted; not needed for this repro)
    return optimize(mod)

Only CanonicalizeOps is enabled; the other passes are commented out. Then I build the graph with Relay:

def gen_random(shape):
    # Random-input helper (assumed definition; not shown in the original post).
    return np.random.uniform(size=shape).astype("float32")


def test_mul_rewrite():
    """a test case where rhs of mul is not constant"""
    data_shape = (1, 16, 64, 64)    # N, C, H, W
    weight_shape = (16, 16, 3, 3)   # O, I, H, W
    bias_shape = (16,)
    data = relay.var("data", shape=data_shape)
    weight = relay.var("weight", shape=weight_shape)
    bias = relay.var("bias", shape=bias_shape)
    conv = relay.nn.conv2d(
        data, weight, kernel_size=(3, 3), padding=(1, 1), channels=16
    )
    bias = relay.nn.bias_add(conv, bias, axis=1)
    act = relay.nn.relu(bias)
    ins = {
        "data": gen_random(data_shape),
        "weight": gen_random(weight_shape),
        "bias": gen_random(bias_shape),
    }
    f = relay.Function(relay.analysis.free_vars(act), act)
    mod, params = testing.create_workload(f)
    mod = prerequisite_optimize(mod)

It’s weird that the output mod still contains the nn.bias_add op.

Output mod:
def @main /* id=84152336 */(%data: Tensor[(1, 16, 64, 64), float32], %weight: Tensor[(16, 16, 3, 3), float32], %bias: Tensor[(16), float32]) -> Tensor[(1, 16, 64, 64), float32] {
  %0 = nn.conv2d(%data, %weight, padding=[1, 1, 1, 1], channels=16, kernel_size=[3, 3]) /* ty=Tensor[(1, 16, 64, 64), float32] */;
  %1 = nn.bias_add(%0, %bias) /* ty=Tensor[(1, 16, 64, 64), float32] */;
  nn.relu(%1) /* ty=Tensor[(1, 16, 64, 64), float32] */
}
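
For comparison, when CanonicalizeOps does run I would expect nn.bias_add to be lowered to basic ops, roughly like this (a sketch of the expected IR, not captured output):

def @main(%data: Tensor[(1, 16, 64, 64), float32], %weight: Tensor[(16, 16, 3, 3), float32], %bias: Tensor[(16), float32]) {
  %0 = nn.conv2d(%data, %weight, padding=[1, 1, 1, 1], channels=16, kernel_size=[3, 3]);
  %1 = expand_dims(%bias, axis=1, num_newaxis=2);
  %2 = add(%0, %1);
  nn.relu(%2)
}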

If I directly invoke the CanonicalizeOps pass on the module, it works.
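
A minimal sketch of what I mean (applying the pass object to the module runs it directly, without Sequential's opt_level gating):

# Calling the pass object directly bypasses Sequential's opt_level check.
mod = relay.transform.CanonicalizeOps()(mod)
print(mod)  # nn.bias_add is rewritten to basic ops now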

Only the CanonicalizeOps pass doesn’t take effect inside Sequential. FoldScaleAxis has the same opt level as CanonicalizeOps, but it works.
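
One way I compared them (assuming the Pass.info API, which reports the opt level a pass was registered with):

print(relay.transform.CanonicalizeOps().info.opt_level)  # 3
print(relay.transform.FoldScaleAxis().info.opt_level)    # 3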

I doesn’t set the PassContext op_level. It use the default opt_level, so the bn not fold int conv.