[Quantization] Add support for conv2D transpose

Hi all!

I’m using TVM for post-training quantization and noticed that, as of now, conv2d_transpose operations cannot be quantized and fall back to float32 (a minimal sketch of the flow I’m using is shown after the questions below).

  • Is there a limitation behind this, or is it simply a missing feature?
  • If it’s a missing feature, which parts of the code would I need to modify to add such support?
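For reference, here is a minimal sketch of the flow I’m using (`mod` and `params` stand in for a Relay module and its parameters produced by one of the frontends; the global scale is just a placeholder value):

```python
# Minimal sketch of the post-training quantization flow described above.
# `mod` and `params` are assumed to come from a Relay frontend such as
# relay.frontend.from_onnx; the global_scale value is a placeholder.
from tvm import relay

with relay.quantize.qconfig(global_scale=8.0):
    qmod = relay.quantize.quantize(mod, params)

# Printing the quantized module shows that nn.conv2d_transpose calls are
# left untouched and still operate on float32 tensors.
print(qmod)
```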

Maybe the community experts could help clarify these questions? @vinx13 @janimesh @ziheng @shoubhik, I would highly appreciate your response.

Thank you & Best regards, Robert

It is a missing feature. Rules should be added to https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/quantize/_annotate.py and https://github.com/apache/incubator-tvm/blob/master/src/relay/quantize/calibrate.cc.
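For illustration, here is what such an annotation rule could look like: a sketch modeled on the existing nn.conv2d rule in _annotate.py. It assumes the function is added inside that module, so the helpers it uses (register_annotate_function, quantize_context, _get_expr_kind, attach_simulated_quantize, _forward_op, QAnnotateExpr, QAnnotateKind) are already in scope:

```python
# Sketch of an annotate rule for nn.conv2d_transpose, modeled on the
# existing nn.conv2d rule in _annotate.py.
@register_annotate_function("nn.conv2d_transpose")
def conv2d_transpose_rewrite(ref_call, new_args, ctx):
    """Annotate the data as INPUT, the kernel as WEIGHT and the
    output as ACTIVATION, analogous to the conv2d rule."""
    if quantize_context().check_to_skip(ref_call):
        return None

    lhs_expr, lhs_kind = _get_expr_kind(new_args[0])
    rhs_expr, rhs_kind = _get_expr_kind(new_args[1])

    if lhs_kind is None or lhs_kind == QAnnotateKind.ACTIVATION:
        lhs_expr = attach_simulated_quantize(lhs_expr, QAnnotateKind.INPUT)

    assert rhs_kind is None
    rhs_expr = attach_simulated_quantize(rhs_expr, QAnnotateKind.WEIGHT)

    expr = _forward_op(ref_call, [lhs_expr, rhs_expr])
    return QAnnotateExpr(expr, QAnnotateKind.ACTIVATION)
```

A matching realize rewrite would also be needed on the C++ side; see the realize.cc link at the end of this thread.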

For the performance part, you might also need to take a look at conv2d_transpose in TOPI to get better performance in quantized mode.
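For illustration, the existing TOPI compute for conv2d_transpose already takes an out_dtype, so an int8 computation accumulating into int32 can at least be expressed; what is missing for good performance is a schedule tuned for the quantized data types. A sketch, assuming the conv2d_transpose_nchw signature on master at the time:

```python
# Sketch: expressing a quantized conv2d_transpose with the existing TOPI
# compute, accumulating in int32. A schedule tuned for int8 inputs would
# still be needed for good performance.
import tvm
import topi

data = tvm.placeholder((1, 32, 16, 16), name="data", dtype="int8")
# conv2d_transpose_nchw expects the kernel in IOHW layout.
kernel = tvm.placeholder((32, 16, 3, 3), name="kernel", dtype="int8")

out = topi.nn.conv2d_transpose_nchw(
    data, kernel, strides=(2, 2), padding=(1, 1), out_dtype="int32")
```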

Great, thanks for the reply @vinx13! For now we will try to avoid using conv2d_transpose operators where possible. If that doesn’t work for some reason, I will look into adding this operator to the quantizer.

https://github.com/apache/incubator-tvm/blob/master/src/relay/quantize/realize.cc