tvm.relax.transform.ConvertLayout pass works only for the relax.nn.conv2d operator

When I run the following piece of code:

import numpy as np

from tvm.script import ir as I
from tvm.script import relax as R
from tvm.relax.transform import ConvertLayout


def test_simple_network_1():
    def create_ir_module():
        @I.ir_module
        class SimpleNetwork1:
            @R.function
            def main(inp_0: R.Tensor((1, 32, 32, 3), dtype="float32")) -> R.Tensor((1, 2, 2, 16), dtype="float32"):
                with R.dataflow():
                    conv_weight = R.const(np.ones((16, 3, 3, 3), dtype="float32"), "float32")
                    conv_bias = R.const(np.ones((1, 1, 16), dtype="float32") * 0.1, "float32")
                    lv = R.nn.conv2d(inp_0, conv_weight, strides=[1, 1], padding=[0, 0, 0, 0], dilation=[1, 1], groups=1, data_layout="NHWC", kernel_layout="OHWI", out_layout="NHWC", out_dtype="float32")
                    lv1 = R.add(lv, conv_bias)
                    lv2 = R.nn.relu(lv1)
                    lv3 = R.add(lv2, lv1)
                    lv4 = R.nn.max_pool2d(lv3, pool_size=[2, 2], strides=[2, 2], dilation=[1, 1], padding=[0, 0, 0, 0], ceil_mode=False, count_include_pad=False, layout="NHWC", out_layout="NHWC")
                    lv5 = R.nn.adaptive_avg_pool2d(lv4, output_size=[2, 2], layout="NCHW", out_layout="NCHW")
                    lv6 = R.nn.relu(lv5)
                    gv = lv6
                    R.output(gv)
                return gv
        return SimpleNetwork1
 
    mod = create_ir_module()
 
    desired_layouts = {"relax.nn.conv2d": ["NCHW", "OIHW"],
                       "relax.nn.max_pool2d": ["NCHW"]}
    transform = ConvertLayout(desired_layouts)
 
    transformed_mod = transform(mod)

I get the following error:

/home/ubuntu/tvm_relax/tvm/src/relax/ir/block_builder.cc:64: Warning: BlockBuilder destroyed with remaining blocks!
============================================================================================= short test summary info =============================================================================================
FAILED test_simple_network_1 - tvm.error.InternalError: Check failed: (NoDesiredLayout(call, desired_layouts)) is false:

If I mention only relax.nn.conv2d in desired_layouts, the code works fine. If only relax.nn.max_pool2d is mentioned in desired_layouts, the code runs but no transformation happens. If I include both in desired_layouts, I get the error shown above. What could be the reason for this?
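
For reference, these are the three desired_layouts configurations I tried (the dict names below are just for illustration); each one was passed to ConvertLayout exactly as in the script above:

# 1) Only conv2d: the pass works, conv2d is converted to NCHW/OIHW.
layouts_conv_only = {"relax.nn.conv2d": ["NCHW", "OIHW"]}

# 2) Only max_pool2d: the pass runs without error, but no transformation is applied.
layouts_pool_only = {"relax.nn.max_pool2d": ["NCHW"]}

# 3) Both operators: fails with the InternalError shown above.
layouts_both = {"relax.nn.conv2d": ["NCHW", "OIHW"],
                "relax.nn.max_pool2d": ["NCHW"]}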