PyTorch Conv Transpose Padding Fix

Background

  • We noticed a discrepancy between the output shapes produced by PyTorch and TVM for a PyTorch network containing a single torch.nn.ConvTranspose2d operator.
  • When comparing the attributes of the torch.nn.ConvTranspose2d operator and the tvm.relay.nn.conv2d_transpose operator, we found that the output_padding parameter in tvm.relay.nn.conv2d_transpose always defaulted to 0, regardless of the output padding set in torch.nn.ConvTranspose2d (a minimal reproduction sketch follows this list).
  • Upon further inspection, it was found that in tvm/python/tvm/relay/frontend/pytorch.py, the import logic for convolution layers was missing the output_padding parameter.
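
A minimal reproduction of the discrepancy, with illustrative parameter values (the exact network we used is not shown here): PyTorch honors output_padding when computing the output shape, while an importer that silently drops the attribute would predict a smaller output.

```python
import torch

# Illustrative values; output_padding=1 makes the discrepancy visible.
deconv = torch.nn.ConvTranspose2d(
    in_channels=3, out_channels=8, kernel_size=3,
    stride=2, padding=1, output_padding=1,
)
x = torch.randn(1, 3, 16, 16)

# With dilation=1:
# H_out = (H_in - 1) * stride - 2 * padding + kernel_size + output_padding
#       = (16 - 1) * 2 - 2 * 1 + 3 + 1 = 32
# If output_padding were dropped (the TVM default of 0), H_out would be 31.
print(deconv(x).shape)  # torch.Size([1, 8, 32, 32])
```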

The Fix

  • All fixes were implemented in tvm/python/tvm/relay/frontend/pytorch.py.
  • To resolve the missing padding parameter, the convolution method of the PyTorchOpConverter class was updated so that, when constructing a transposed convolution op, it supplies the output_padding attribute (see the first sketch after this list).
  • Over the course of the fix, I also discovered that the same conversion logic automatically converted torch.nn.ConvTranspose1d operations into tvm.relay.nn.conv2d_transpose. This was fixed so that they are now converted into tvm.relay.nn.conv1d_transpose operations.
  • We also discovered that torch.nn.Conv1d operations were being converted into tvm.relay.nn.conv2d operations. This was fixed so that they are now converted into tvm.relay.nn.conv1d operations. One caveat: TVM does not support grouped 1D convolution, as stated in the description of tvm.relay.nn.conv1d, so in that case we convert the operation to a 2D convolution, which does support groups, and then squeeze the output to recover the correct shape and values for a grouped 1D convolution (see the second sketch after this list).
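
First sketch: a hedged illustration of the shape of the fix, not the exact upstream diff. The transposed-convolution branch of the converter now forwards output_padding to the Relay op instead of letting it default to zero; the function name and the params dict here are illustrative stand-ins for how the real converter unpacks the node's attributes.

```python
from tvm import relay

def convert_transposed_conv2d(data, weight, params):
    # `params` is a hypothetical dict of attributes read from the
    # PyTorch node; the real converter pulls them from the op's inputs.
    return relay.nn.conv2d_transpose(
        data,
        weight,
        strides=params["strides"],
        padding=params["padding"],
        dilation=params["dilation"],
        groups=params["groups"],
        channels=params["channels"],
        kernel_size=params["kernel_size"],
        # The previously missing attribute: without it, output_padding
        # always defaulted to (0, 0) in the imported module.
        output_padding=params["output_padding"],
    )
```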
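
Second sketch: a hedged illustration of the grouped-1D fallback, assuming NCW data and an OIW-shaped kernel; the helper name is made up. The 1D tensors gain a dummy height axis, run through a grouped conv2d, and the extra axis is squeezed away afterward.

```python
from tvm import relay

def grouped_conv1d_via_conv2d(data, weight, strides, padding, dilation,
                              groups, channels, kernel_size):
    # NCW -> NCHW and OIW -> OIHW by inserting a dummy height axis.
    data_4d = relay.expand_dims(data, axis=2)      # (N, C, W)   -> (N, C, 1, W)
    weight_4d = relay.expand_dims(weight, axis=2)  # (O, I/g, K) -> (O, I/g, 1, K)
    out = relay.nn.conv2d(
        data_4d,
        weight_4d,
        strides=(1, strides[0]),
        padding=(0, padding[0]),
        dilation=(1, dilation[0]),
        groups=groups,
        channels=channels,
        kernel_size=(1, kernel_size[0]),
    )
    # Drop the dummy height axis to recover the 1D output (N, C, W_out).
    return relay.squeeze(out, axis=[2])
```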

Test Coverage

  • Extended the test_forward_conv_transpose test in tvm/tests/python/frontend/pytorch/test_forward.py (an illustrative sketch of the kind of case added follows).
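
A hedged sketch of the kind of case the extended test exercises; the parameter values are illustrative, not copied from the upstream test. It relies on the suite's existing verify_model helper, which traces the module, imports it through the Relay frontend, and compares outputs against PyTorch.

```python
import torch

def test_forward_conv_transpose():
    # A case with nonzero output_padding, which the old importer dropped.
    inp = torch.rand(1, 3, 10, 10)
    conv = torch.nn.ConvTranspose2d(
        in_channels=3, out_channels=6, kernel_size=3,
        stride=2, padding=1, output_padding=1, bias=True,
    )
    verify_model(conv.float().eval(), input_data=inp)
```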

Observations That Should Be Looked Into

  • We noticed that the PyTorch importer transposes the weight shape it reads from the model to the IOHW layout for transposed convolution. However, it does not transpose the weight tensors themselves, so their stated layout in the conv_transpose operation differs from their actual layout. The most puzzling part is that the operation still executes as expected. It may be that the kernel_layout parameter in the tvm.relay.nn.conv#d_transpose operations does nothing and that the functions actually expect the weights in "IO###" form, even though the default kernel_layout is "OI###". Looking at other tests that use conv3d_transpose (test_conv3d_transpose_infer_type in tvm/tests/python/relay/test_op_level2.py), we see the same pattern: the weights have the shape "IODHW" while the default is "OIDHW" (see the first sketch after this list).
  • We also noticed that for larger kernel sizes (e.g. 7), the outputs of torch.nn.ConvTranspose3d did not match the outputs of tvm.relay.nn.conv3d_transpose. Our tests and other pre-existing tests seem to pass because the kernel size along any dimension in those tests does not exceed 5.
  • Grouped convolution is not yet supported for conv transpose, but when it is, the PyTorch importer will need to be adjusted in how it reshapes the weight tensors for grouped convolution. I got to a point where the type inference was correct, but I could not validate the actual results, so I scrapped those additions. Note that the frontend work essentially boils down to setting the output channels correctly by multiplying the initial channel count by the number of groups (see the second sketch after this list).
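
First sketch: a small self-contained illustration of the layout quirk, assuming the behavior described above holds. The weight is declared with the IOHW shape (input channels first) even though kernel_layout defaults to "OIHW", and type inference still succeeds.

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 4, 8, 8))  # NCHW with C_in = 4
# IOHW-shaped weight: (C_in, C_out, kH, kW), despite the "OIHW" default.
weight = relay.var("weight", shape=(4, 2, 3, 3))
out = relay.nn.conv2d_transpose(data, weight, kernel_size=(3, 3), channels=2)

mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
mod = relay.transform.InferType()(mod)
print(mod)  # infers an output of shape (1, 2, 10, 10) without complaint
```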
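
Second sketch: the channel computation mentioned in the last bullet, assuming PyTorch's documented ConvTranspose weight layout of (C_in, C_out // groups, kH, kW); the shapes are illustrative.

```python
# Illustrative shapes: a grouped transposed conv with groups=4.
weight_shape = (8, 2, 3, 3)  # (C_in, C_out // groups, kH, kW)
groups = 4

# The importer reads out-channels from weight_shape[1]; for grouped
# transposed convolution it must be scaled by the number of groups.
channels = weight_shape[1] * groups
assert channels == 8
```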