Can Upsample be implemented on VTA in graph_pack?

  1. When I implemented a UNet network on VTA, I ran into the following problem: because of the upsample layer in UNet, graph_pack fails with an error. But if I set stop_name to an op before the upsample, graph_pack succeeds. Here are the relevant settings in my code.
    The Up block in my UNet is defined as follows:
class Up(nn.Module):
    """Upscaling then double conv"""

    def __init__(self, in_channels, out_channels, bilinear=True):
        super().__init__()

        # if bilinear, use the normal convolutions to reduce the number of channels
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)

    def forward(self, x):
        x = self.up(x)
        x = torch.cat([x, x], dim=1)
        return self.conv(x)

After importing the model, the corresponding Relay IR is:

%227 = image.resize2d(%226, size=[64, 64], coordinate_transformation_mode="align_corners", rounding_method="", cubic_alpha=-0.75f) /* ty=Tensor[(1, 128, 64, 64), float32] */;
%228 = (%227, %227);
%229 = concatenate(%228, axis=1) /* ty=Tensor[(1, 256, 64, 64), float32] */;
%230 = multiply(%229, 16f /* ty=float32 */) /* ty=Tensor[(1, 256, 64, 64), float32] */;
%231 = round(%230) /* ty=Tensor[(1, 256, 64, 64), float32] */;
%232 = clip(%231, a_min=-127f, a_max=127f) /* ty=Tensor[(1, 256, 64, 64), float32] */;
%233 = cast(%232, dtype="int8") /* ty=Tensor[(1, 256, 64, 64), int8] */;
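For context, the quantize and pack sequence follows the pattern from the VTA deploy tutorials. This is only a sketch: it assumes mod and params come from relay.frontend.from_pytorch, and the start_name/stop_name values are illustrative placeholders, not the exact ops in my network.

```python
import tvm
from tvm import relay
import vta
from vta.top import graph_pack

env = vta.get_env()

# mod, params are assumed to come from relay.frontend.from_pytorch(...)
with tvm.transform.PassContext(opt_level=3):
    # Quantize to int8 first, as in the VTA tutorials.
    with relay.quantize.qconfig(global_scale=8.0, skip_conv_layers=[0]):
        mod = relay.quantize.quantize(mod, params=params)

    # Pack the graph for VTA's tensor core. With stop_name set to an op
    # that appears *before* the upsample, packing succeeds; otherwise it
    # fails when it reaches image.resize2d.
    relay_prog = graph_pack(
        mod["main"],
        env.BATCH,
        env.BLOCK_OUT,
        env.WGT_WIDTH,
        start_name="nn.max_pool2d",   # illustrative value
        stop_name="nn.max_pool2d",    # illustrative: an op before the upsample
    )
```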
  2. But earlier, when I ran the tutorial Deploy Pretrained Vision Detection Model from Darknet on VTA, I found that the following layers did compile successfully:
%123 = nn.upsampling(%122, scale_h=2f, scale_w=2f) /* ty=Tensor[(1, 128, 26, 26), float32] */;
%124 = (%123, %63);
%125 = concatenate(%124, axis=1) /* ty=Tensor[(1, 384, 26, 26), float32] */;
%126 = multiply(%125, 5.56522f /* ty=float32 */) /* ty=Tensor[(1, 384, 26, 26), float32] */;
%127 = round(%126) /* ty=Tensor[(1, 384, 26, 26), float32] */;
%128 = clip(%127, a_min=-127f, a_max=127f) /* ty=Tensor[(1, 384, 26, 26), float32] */;
%129 = cast(%128, dtype="int8") /* ty=Tensor[(1, 384, 26, 26), int8] */;

It seems that Upsample can be implemented on VTA in this case.
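One difference I notice between the two traces: my UNet uses bilinear upsampling, which the PyTorch frontend lowers to image.resize2d, while the Darknet model's upsample becomes nn.upsampling (nearest-neighbor by default). Nearest-neighbor 2x upsampling is pure pixel replication, so an int8 feature map stays exact, whereas bilinear interpolation blends neighbors into fractional values that must be re-quantized. A minimal NumPy sketch of the two (NumPy used only for illustration, not part of the TVM flow):

```python
import numpy as np

def upsample_nearest_2x(x):
    """Nearest-neighbor 2x upsample of a (C, H, W) map: pure replication."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def upsample_bilinear_2x(x):
    """Bilinear 2x upsample with align_corners=True, as nn.Upsample uses."""
    c, h, w = x.shape
    out_h, out_w = 2 * h, 2 * w
    # align_corners=True maps output index i to input coordinate
    # i * (h - 1) / (out_h - 1); linspace produces exactly those coordinates.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[None, :, None]
    wx = (xs - x0)[None, None, :]
    top = x[:, y0][:, :, x0] * (1 - wx) + x[:, y0][:, :, x1] * wx
    bot = x[:, y1][:, :, x0] * (1 - wx) + x[:, y1][:, :, x1] * wx
    return top * (1 - wy) + bot * wy

x = np.array([[[0, 4], [8, 12]]], dtype=np.float32)  # (1, 2, 2)
print(upsample_nearest_2x(x)[0])   # replicated values only, stays integral
print(upsample_bilinear_2x(x)[0])  # fractional blends appear (e.g. 4/3)
```

So the failure may be specific to how image.resize2d (bilinear, align_corners) interacts with graph_pack, rather than upsampling in general.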

Therefore, I am now confused: can Upsample be implemented on VTA, or is there something wrong with the way Upsample and cat are set up in my network? If anyone has encountered this problem, I would appreciate an answer. Thank you!