How to change the layout of a whole model

Hi, I notice that the ConvertLayout pass can change the layout of specific operators, for example conv2d. If I want to change the layout of all operators, e.g. NHWC→NCHW, with the constant weights converted automatically, does TVM have a pass to do that?

Or do I need to build a long map of operator names and their desired layouts and use the ConvertLayout pass? By the way, I have tried this:

  Map<String, Array<String>> layout_map{{"nn.conv2d", {"NCHW", "default"}},
                                        {"nn.dense", {"NCHW", "default"}}};
  mod = transform::ConvertLayout(layout_map)(mod);

The original layout is NHWC, but it doesn't change the layout of nn.dense; it only works on nn.conv2d. Any advice? Thanks!

After applying the ConvertLayout pass, I get Relay IR like this:

  %0 = layout_transform(%input, src_layout="NHWC", dst_layout="NCHW") /* ty=Tensor[(1, 1, 28, 28), float32] */;
  %1 = layout_transform(meta[relay.Constant][0] /* ty=Tensor[(5, 5, 1, 32), float32] */, src_layout="HWIO", dst_layout="OIHW") /* ty=Tensor[(32, 1, 5, 5), float32] */;
  %2 = nn.conv2d(%0, %1, padding=[2, 2, 2, 2], channels=32, kernel_size=[5, 5]) /* ty=Tensor[(1, 32, 28, 28), float32] */;
  %3 = layout_transform(meta[relay.Constant][1] /* ty=Tensor[(1, 1, 1, 32), float32] */, src_layout="NHWC", dst_layout="NCHW") /* ty=Tensor[(1, 32, 1, 1), float32] */;
  %4 = add(%2, %3) /* ty=Tensor[(1, 32, 28, 28), float32] */;

It adds layout_transform ops before and after nn.conv2d. Why can't it just delete the layout_transform that changes the weight of nn.conv2d and use the new weight directly, since the weight is a constant? I mean, transform the weight during compilation. Does this mean I need to manually implement the layout_transform operation when I deploy the whole model on a device?
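To see why the weight transform can be folded away: a layout_transform applied to a constant is just a fixed index permutation, so the compiler can precompute the permuted weight once and drop the op. A minimal pure-Python sketch (no TVM; the tiny nested-list "tensor" is made up for illustration) of the HWIO→OIHW permutation:

```python
# Sketch: folding a constant layout_transform (HWIO -> OIHW) at "compile time".
# Nested Python lists stand in for real tensors; shapes are illustrative only.

def hwio_to_oihw(w):
    """Permute a weight of shape (H, W, I, O) to (O, I, H, W)."""
    H, W, I, O = len(w), len(w[0]), len(w[0][0]), len(w[0][0][0])
    return [[[[w[h][x][i][o] for x in range(W)] for h in range(H)]
             for i in range(I)] for o in range(O)]

# A tiny (2, 2, 1, 3) HWIO "weight"; each element records its own index.
w_hwio = [[[[(h, x, i, o) for o in range(3)] for i in range(1)]
           for x in range(2)] for h in range(2)]

# The folded constant that the compiled model can use directly,
# with no layout_transform left at runtime.
w_oihw = hwio_to_oihw(w_hwio)

# Every element keeps its value; only the index order changes.
assert w_oihw[2][0][1][0] == w_hwio[1][0][0][2] == (1, 0, 0, 2)
```

This is exactly what constant folding does with the `layout_transform(meta[relay.Constant][0], ...)` call in the printout above: the permutation has no runtime inputs, so it can be evaluated once at build time.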

You need to apply the ConvertLayout pass before relay.optimize; the constant folding performed inside relay.optimize will then fold the weight layout_transforms into new constants, so they disappear.