How to autotune a quantized model with cuDNN

Hi all,

When I use AutoTVM with cuDNN to tune a quantized model with calibration_mode="global_scale", it fails with the following error: `ValueError: NCHW layout do not support int8 in cudnn`. How can this be solved?

thx

The simplest workaround is to use the ConvertLayout pass to convert your model to NHWC layout before quantizing and tuning (see the sketch below). The proper long-term fix would be improving the cuDNN integration so that int8 convolution in NCHW layout is supported as well.
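
For reference, here is a minimal sketch of applying ConvertLayout, following the standard TVM Relay usage. It assumes `mod` is your already-imported Relay module; the cuDNN target string at the end is just an assumption about a typical setup, not part of the original answer.

```python
import tvm
from tvm import relay

# Assumed to exist: `mod` is the Relay IRModule imported from your model.
# Rewrite conv2d ops to NHWC data layout; "default" lets TVM pick the
# matching kernel layout (HWIO for NHWC).
desired_layouts = {"nn.conv2d": ["NHWC", "default"]}

seq = tvm.transform.Sequential(
    [
        relay.transform.RemoveUnusedFunctions(),
        relay.transform.ConvertLayout(desired_layouts),
    ]
)

with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)

# After the layout conversion, run quantization, AutoTVM tuning, and
# relay.build as usual, e.g. with target="cuda -libs=cudnn" (assumption).
```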