Relay quantization: "TVMError: AssertionError: assert isinstance(expr.args[0], _expr.Constant)"

Hello, I’m trying to quantize a fairly large PyTorch model (StyleGAN2-ADA generator).

I’m getting the error in the title when running relay.quantize.quantize().

Full output log:
Quantizing...
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (512, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 4, 4), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 4, 4), 'float32'), ('TENSOR', (3, 512, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 3, 11, 11), 'float32'), ('TENSOR', (3, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 512, 11, 11), 'float32'), ('TENSOR', (512, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 8, 8), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 8, 8), 'float32'), ('TENSOR', (3, 512, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 3, 19, 19), 'float32'), ('TENSOR', (3, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 512, 19, 19), 'float32'), ('TENSOR', (512, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 16, 16), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 16, 16), 'float32'), ('TENSOR', (3, 512, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 3, 35, 35), 'float32'), ('TENSOR', (3, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 512, 35, 35), 'float32'), ('TENSOR', (512, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 32, 32), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 32, 32), 'float32'), ('TENSOR', (3, 512, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 3, 67, 67), 'float32'), ('TENSOR', (3, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 512, 67, 67), 'float32'), ('TENSOR', (512, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 64, 64), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 64, 64), 'float32'), ('TENSOR', (3, 512, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 3, 131, 131), 'float32'), ('TENSOR', (3, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 256, 131, 131), 'float32'), ('TENSOR', (256, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (256, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 128, 128), 'float32'), ('TENSOR', (256, 256, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 128, 128), 'float32'), ('TENSOR', (3, 256, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 3, 259, 259), 'float32'), ('TENSOR', (3, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 128, 259, 259), 'float32'), ('TENSOR', (128, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (128, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 128, 256, 256), 'float32'), ('TENSOR', (128, 128, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 128, 256, 256), 'float32'), ('TENSOR', (3, 128, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 3, 515, 515), 'float32'), ('TENSOR', (3, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 64, 515, 515), 'float32'), ('TENSOR', (64, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (64, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 512, 512), 'float32'), ('TENSOR', (64, 64, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 512, 512), 'float32'), ('TENSOR', (3, 64, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 3, 1027, 1027), 'float32'), ('TENSOR', (3, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 32, 1027, 1027), 'float32'), ('TENSOR', (32, 1, 4, 4), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (32, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 32, 1024, 1024), 'float32'), ('TENSOR', (32, 32, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 32, 1024, 1024), 'float32'), ('TENSOR', (3, 32, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 4, 4, 8), 'float32'), ('TENSOR', (64, 64, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (32, 512, 16), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 4, 4, 2), 'float32'), ('TENSOR', (1, 256, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 1, 11, 11, 3), 'float32'), ('TENSOR', (1, 1, 4, 4, 1, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW3c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 11, 11), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 64, 11, 11, 8), 'float32'), ('TENSOR', (64, 1, 4, 4, 1, 8), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 8, 8, 8), 'float32'), ('TENSOR', (64, 64, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 8, 8, 2), 'float32'), ('TENSOR', (1, 256, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 1, 19, 19, 3), 'float32'), ('TENSOR', (1, 1, 4, 4, 1, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW3c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 19, 19), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 64, 19, 19, 8), 'float32'), ('TENSOR', (64, 1, 4, 4, 1, 8), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 16, 16, 8), 'float32'), ('TENSOR', (64, 64, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 16, 16, 2), 'float32'), ('TENSOR', (1, 256, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 1, 35, 35, 3), 'float32'), ('TENSOR', (1, 1, 4, 4, 1, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW3c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 35, 35), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 64, 35, 35, 8), 'float32'), ('TENSOR', (64, 1, 4, 4, 1, 8), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 32, 32, 8), 'float32'), ('TENSOR', (64, 64, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 32, 32, 2), 'float32'), ('TENSOR', (1, 256, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 1, 67, 67, 3), 'float32'), ('TENSOR', (1, 1, 4, 4, 1, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW3c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 67, 67), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 64, 67, 67, 8), 'float32'), ('TENSOR', (64, 1, 4, 4, 1, 8), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 64, 64, 8), 'float32'), ('TENSOR', (64, 64, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 64, 64, 2), 'float32'), ('TENSOR', (1, 256, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 1, 131, 131, 3), 'float32'), ('TENSOR', (1, 1, 4, 4, 1, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW3c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 512, 131, 131), 'float32'), ('TENSOR', (256, 512, 3, 3), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 32, 131, 131, 8), 'float32'), ('TENSOR', (32, 1, 4, 4, 1, 8), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 32, 128, 128, 8), 'float32'), ('TENSOR', (32, 32, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (16, 512, 16), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 128, 128, 128, 2), 'float32'), ('TENSOR', (1, 128, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 1, 259, 259, 3), 'float32'), ('TENSOR', (1, 1, 4, 4, 1, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW3c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 259, 259), 'float32'), ('TENSOR', (128, 256, 3, 3), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 16, 259, 259, 8), 'float32'), ('TENSOR', (16, 1, 4, 4, 1, 8), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 16, 256, 256, 8), 'float32'), ('TENSOR', (16, 16, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (8, 512, 16), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 256, 256, 2), 'float32'), ('TENSOR', (1, 64, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 1, 515, 515, 3), 'float32'), ('TENSOR', (1, 1, 4, 4, 1, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW3c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 128, 515, 515), 'float32'), ('TENSOR', (64, 128, 3, 3), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 8, 515, 515, 8), 'float32'), ('TENSOR', (8, 1, 4, 4, 1, 8), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 8, 512, 512, 8), 'float32'), ('TENSOR', (8, 8, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (4, 512, 16), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 32, 512, 512, 2), 'float32'), ('TENSOR', (1, 32, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 1, 1027, 1027, 3), 'float32'), ('TENSOR', (1, 1, 4, 4, 1, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW3c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 1027, 1027), 'float32'), ('TENSOR', (32, 64, 3, 3), 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
[19:31:55] /home/hans/code/tvm/src/tir/transforms/storage_rewrite.cc:575: Warning: The allocation requires : 67502656 * 32 bits, which is greater than the maximum of int32. The size is cast to int64.

WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 4, 1027, 1027, 8), 'float32'), ('TENSOR', (4, 1, 4, 4, 1, 8), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 4, 1024, 1024, 8), 'float32'), ('TENSOR', (4, 4, 3, 3, 8, 8), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW8c', 'NCHW8c', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('dense_pack.x86', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (2, 512, 16), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm -keys=cpu -link-params=0, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 16, 1024, 1024, 2), 'float32'), ('TENSOR', (1, 16, 1, 1, 2, 3), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW2c', 'NCHW3c', 'float32'). A fallback configuration is used, which may bring great performance regression.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32/32 [01:44<00:00,  3.27s/it]
Traceback (most recent call last):
  File "/home/hans/.conda/envs/hans/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/hans/.conda/envs/hans/lib/python3.8/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/hans/code/maua-stylegan2/nvsg2a/quantization/tvm_quant.py", line 45, in <module>
    mod = relay.quantize.quantize(mod, params, dataset=calibrate_dataset())
  File "/home/hans/code/tvm/python/tvm/relay/quantize/quantize.py", line 370, in quantize
    mod = quantize_seq(mod)
  File "/home/hans/code/tvm/python/tvm/ir/transform.py", line 127, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 322, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  6: TVMFuncCall
  5: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#10}>(tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#10}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  4: tvm::transform::Pass::operator()(tvm::IRModule) const
  3: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  2: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  1: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/home/hans/code/tvm/python/tvm/relay/quantize/_calibrate.py", line 236, in wrapped_func
    return _set_params(mod, input_scale_func, weight_scale_func)
  File "/home/hans/code/tvm/python/tvm/relay/quantize/_calibrate.py", line 168, in _set_params
    _analysis.post_order_visit(main_func, visit_func)
  File "/home/hans/code/tvm/python/tvm/relay/analysis/analysis.py", line 59, in post_order_visit
    return _ffi_api.post_order_visit(expr, fvisit)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 322, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL
  99: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  98: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  97: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  96: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  95: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  94: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  93: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  92: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  91: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  90: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  89: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  88: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  87: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  86: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  85: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  84: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  83: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  82: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  81: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  80: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  79: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  78: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  77: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  76: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  75: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  74: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  73: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  72: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  71: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  70: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  69: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  68: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  67: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  66: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  65: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  64: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  63: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  62: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  61: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  60: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  59: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  58: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  57: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  56: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  55: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  54: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  53: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  52: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  51: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  50: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  49: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  48: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  47: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  46: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  45: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  44: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  43: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  42: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  41: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  40: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  39: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  38: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  37: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  36: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  35: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  34: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  33: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  32: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  31: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  30: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  29: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  28: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  27: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  26: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  25: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  24: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  23: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  22: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  21: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  20: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  19: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  18: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  17: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  16: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  15: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  14: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  13: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  12: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  11: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  10: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  9: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  8: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  7: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  6: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  5: tvm::relay::ExprApplyVisit::VisitExpr(tvm::RelayExpr const&)
  4: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  3: tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
  2: tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  1: std::_Function_handler<void (tvm::RelayExpr const&), tvm::relay::{lambda(tvm::RelayExpr, tvm::runtime::PackedFunc)#1}::operator()(tvm::RelayExpr, tvm::runtime::PackedFunc) const::{lambda(tvm::RelayExpr const&)#1}>::_M_invoke(std::_Any_data const&, tvm::RelayExpr const&)
  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/home/hans/code/tvm/python/tvm/relay/quantize/_calibrate.py", line 154, in visit_func
    assert isinstance(expr.args[0], _expr.Constant)
TVMError: AssertionError

Version info:

Python 3.8.2
TVM 0.8.dev0
PyTorch 1.7.1
NVIDIA Driver 465.19.01
CUDA 10.1.243  
CUDNN 7.6.5

I assume that something in my model isn’t quite as constant as TVM expects (maybe due to the weird hack I’m using to replace aten::randn?).

Calibrating with global_scale and kl_divergence gives the same result, although kl_divergence takes ~140 GB of RAM to run. Is there a way to reduce the number of parallel processes to alleviate the swapping?

What’s the best way to debug this error and get quantization working?
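
The only thing I’ve come up with so far is walking the graph myself to see which ops have non-constant weights, along these lines (a rough sketch, not sure this is the intended approach):

import tvm
from tvm import relay

def find_nonconst_weights(mod):
    # Print every conv2d/dense call whose weight argument is not a relay.Constant.
    def visit(expr):
        if (
            isinstance(expr, relay.Call)
            and isinstance(expr.op, tvm.ir.Op)
            and expr.op.name in ("nn.conv2d", "nn.dense")
            and not isinstance(expr.args[1], relay.Constant)
        ):
            print(expr.op.name, "has non-constant weight of type", type(expr.args[1]).__name__)

    relay.analysis.post_order_visit(mod["main"], visit)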

The code I’m using:

from timeit import timeit as time

from tqdm import tqdm
from training.networks import Generator

import torch
import tvm
from tvm import relay

torch.set_grad_enabled(False)
torch.backends.cudnn.benchmark = True

batch_size = 1
input_shape = (batch_size, 512)
output_shape = (batch_size, 3, 1024, 1024)

device = "cuda"


G = Generator(z_dim=512, c_dim=0, w_dim=512, img_resolution=1024, img_channels=3).float().eval().to(device)
G = torch.jit.trace(G, torch.randn(input_shape, device=device)).eval()
for _ in range(5):
    G(torch.randn(input_shape, device=device))  # warm up cudnn autotuner


def randn(inputs, input_types):
    # Hack: convert aten::randn to a constant tensor sampled once at conversion
    # time (inputs[0] holds the requested shape); Relay has no random op here.
    return tvm.relay.expr.const(
        torch.randn(
            size=tuple(int(i.data.asnumpy()) if isinstance(i, tvm.relay.Constant) else int(i) for i in inputs[0])
        ).numpy()
    )


mod, params = relay.frontend.from_pytorch(G, [("input", input_shape)], {"aten::randn": randn})


def calibrate_dataset():
    for _ in tqdm(range(32)):
        yield {"input": torch.randn(input_shape)}


print("Quantizing...")
# with relay.quantize.qconfig(calibrate_mode="global_scale"):
    # mod = relay.quantize.quantize(mod, params)
with relay.quantize.qconfig(calibrate_mode="kl_divergence", weight_scale="power2"):
    mod = relay.quantize.quantize(mod, params, dataset=calibrate_dataset())
qG = relay.create_executor("vm", mod, tvm.device(device), device).evaluate()


print("PyTorch")
print(time(lambda: G(torch.randn(size=input_shape, device=device)), number=100) * 10, "ms")

print("Quantized")
print(time(lambda: qG(torch.randn(size=input_shape)), number=100) * 10, "ms")

To reproduce (assuming TVM is already in PYTHONPATH):

git clone https://github.com/JCBrouwer/stylegan2-ada-pytorch.git
cd stylegan2-ada-pytorch
git checkout quant
pip install torch==1.7.1 torchvision click requests tqdm pyspng ninja imageio-ffmpeg==0.4.3
python -m quantization.tvm_quant

To work around the memory explosion issue, you can try the calibrate_chunk_by option of qconfig.
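
For example, something like this (an untested sketch; the chunk size is up to you) should make calibration collect and process intermediate outputs in chunks instead of holding everything in memory at once:

with relay.quantize.qconfig(calibrate_mode="kl_divergence", weight_scale="power2", calibrate_chunk_by=8):
    mod = relay.quantize.quantize(mod, params, dataset=calibrate_dataset())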

For the error, you can try running relay.transform.FoldConstant() before quantizing. The weights need to be constants to quantize, but it seems they are not in your model.

Note that the existing quantization functionality in TVM is very limited and not actively developed or maintained. There is a proposal to rework our quantization support in [RFC][Quantization] A new quantization framework in TVM: initial RFC (1/4).

Ahh ok, I gave FoldConstant a try:

with tvm.transform.PassContext(opt_level=3):
    mod = relay.transform.FoldConstant()(mod)
with relay.quantize.qconfig(calibrate_mode="global_scale"):
    mod = relay.quantize.quantize(mod, params)

but I’m still getting the same AssertionError.

My model doesn’t have constant weights, though, so I guess FoldConstant can’t change that. StyleGAN2 uses modulated convolutions, where the weights are multiplied by a learned function of the latent vector before being applied to the output of the previous layer.
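
For reference, the modulated convolution boils down to something like this (a heavily simplified sketch for batch size 1, not the actual NVIDIA code):

import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, style):
    # style: per-input-channel scales computed from the latent by a learned affine layer
    w = weight * style.reshape(1, -1, 1, 1)              # modulate input channels
    demod = torch.rsqrt((w * w).sum(dim=(1, 2, 3)) + 1e-8)
    w = w * demod.reshape(-1, 1, 1, 1)                   # demodulate per output channel
    return F.conv2d(x, w, padding=1)                     # weight depends on the latent

So the actual conv weight is computed at runtime and can never be a relay Constant.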

I guess I’ll have to leave quantization for the future and hope the new quantization framework supports this.

Do you know if the auto-tuner has issues with non-constant weights as well?

PS: I also gave calibrate_chunk_by=8 a try, but I get the following error:

Traceback (most recent call last):
  File "/home/hans/.conda/envs/hans/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/hans/.conda/envs/hans/lib/python3.8/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/hans/code/maua-stylegan2/nvsg2a/quantization/tvm_quant.py", line 43, in <module>
    mod = relay.quantize.quantize(
  File "/home/hans/code/tvm/python/tvm/relay/quantize/quantize.py", line 370, in quantize
    mod = quantize_seq(mod)
  File "/home/hans/code/tvm/python/tvm/ir/transform.py", line 127, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 322, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 257, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 246, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL
ValueError: Traceback (most recent call last):
  6: TVMFuncCall
  5: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#10}>(tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#10}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  4: tvm::transform::Pass::operator()(tvm::IRModule) const
  3: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  2: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  1: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/home/hans/code/tvm/python/tvm/relay/quantize/_calibrate.py", line 221, in wrapped_func
    input_scale_func = _kl_scale(mod, dataset)
  File "/home/hans/code/tvm/python/tvm/relay/quantize/_calibrate.py", line 97, in _kl_scale
    for samples in collect_stats(mod, dataset, chunk_by):
  File "/home/hans/code/tvm/python/tvm/relay/quantize/_calibrate.py", line 90, in collect_stats
    yield [np.concatenate(output).reshape(-1) for output in outputs]
  File "/home/hans/code/tvm/python/tvm/relay/quantize/_calibrate.py", line 90, in <listcomp>
    yield [np.concatenate(output).reshape(-1) for output in outputs]
  File "<__array_function__ internals>", line 5, in concatenate
ValueError: need at least one array to concatenate

PPS: Thanks for your help!

The auto-tuner should have no problem as long as the input and weight shapes are fixed. It doesn’t look at the contents of the weights.

I’m not sure why you got that error. Maybe you can replace the generator in your dataset creation function with one that returns a concrete array (see the test case in tvm/test_pass_auto_quantize.py at f681359b2e358f5a5e29880a6cbc5ce8e9fe1419 · apache/tvm · GitHub):

def calibrate_dataset():
    for _ in tqdm(range(32)):
        yield {"input": torch.randn(input_shape)}

I am running into the exact same issue … any ideas?