Why am I getting this issue? Don't we have sm_86 schedules available when we install TVM from source?
tvmgen_default_fused_nn_conv2d_expand_dims_add_71
Cannot find tuned schedules for target=cuda -keys=cuda,gpu -arch=sm_86 -max_num_threads=1024 -thread_warp_size=32, workload_key=["20675e5640e1ea8eef79fda7ff31be4c", [2, 128, 52, 52], [256, 128, 3, 3], [256, 1, 1], [2, 256, 52, 52]]. A fallback TOPI schedule is used, which may bring great performance regression or even compilation failure. Compute DAG info:
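For context on why this warning fires: the auto-scheduler keeps a log of tuned records keyed by the target string and the workload key, and falls back to the default TOPI schedule on a miss. Here is a toy sketch of that lookup behavior (this is not TVM's actual code; the dictionary and entry below are hypothetical):

```python
# Toy model of tuned-schedule lookup: records are keyed by
# (target, workload_key); a miss triggers the TOPI fallback.
tuned_records = {
    # Hypothetical record for some *other* workload.
    ("cuda -arch=sm_86", "aabbccddeeff"): "tuned schedule",
}

def lookup(target, workload_key):
    key = (target, workload_key)
    if key in tuned_records:
        return tuned_records[key]
    # No tuning log entry matches -> warn and use the generic schedule.
    print(f"Cannot find tuned schedules for target={target}, "
          f"workload_key={workload_key}. Falling back to TOPI.")
    return "fallback TOPI schedule"

# The workload from the log above has never been tuned, so it misses:
print(lookup("cuda -arch=sm_86", "20675e5640e1ea8eef79fda7ff31be4c"))
```

So the message does not mean sm_86 is unsupported; it means no tuning records exist yet for this particular conv2d workload, and running auto-scheduler tuning would populate them.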
FunctionVar_1_0 = PLACEHOLDER [2, 128, 52, 52]
pad_temp(i0, i1, i2, i3) = tir.if_then_else(((((i2 >= 1) && (i2 < 53)) && (i3 >= 1)) && (i3 < 53)), FunctionVar_1_0[i0, i1, (i2 - 1), (i3 - 1)], 0f)
FunctionVar_1_1 = PLACEHOLDER [256, 128, 3, 3]
conv2d_nchw(nn, ff, yy, xx) += (pad_temp[nn, rc, (yy + ry), (xx + rx)]*FunctionVar_1_1[ff, rc, ry, rx])
FunctionVar_1_2 = PLACEHOLDER [256, 1, 1]
T_expand_dims(ax0, ax1, ax2, ax3) = FunctionVar_1_2[ax1, ax2, ax3]
T_add(ax0, ax1, ax2, ax3) = (conv2d_nchw[ax0, ax1, ax2, ax3] + T_expand_dims[0, ax1, 0, 0])
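The DAG above is just a padded 3x3 conv2d in NCHW layout followed by a broadcast bias add. A NumPy sketch of the same computation (with shapes scaled down from the log's [2, 128, 52, 52] x [256, 128, 3, 3] workload so it runs quickly; all sizes here are illustrative):

```python
import numpy as np

# Scaled-down stand-ins for the workload's shapes.
N, C, H, W = 1, 4, 8, 8     # data:   [2, 128, 52, 52] in the log
F, KH, KW = 6, 3, 3         # weight: [256, 128, 3, 3] in the log

data = np.random.rand(N, C, H, W).astype("float32")
weight = np.random.rand(F, C, KH, KW).astype("float32")
bias = np.random.rand(F, 1, 1).astype("float32")   # [256, 1, 1] in the log

# pad_temp: the tir.if_then_else is a zero-pad of 1 on H and W.
pad = np.pad(data, ((0, 0), (0, 0), (1, 1), (1, 1)))

# conv2d_nchw: reduce over rc, ry, rx via sliding windows + einsum.
windows = np.lib.stride_tricks.sliding_window_view(pad, (KH, KW), axis=(2, 3))
conv = np.einsum("nchwij,fcij->nfhw", windows, weight)

# T_expand_dims + T_add: lift bias [F,1,1] to [1,F,1,1] and broadcast-add.
out = conv + bias[np.newaxis]
print(out.shape)  # (1, 6, 8, 8); the real workload yields [2, 256, 52, 52]
```

With the original shapes the output is [2, 256, 52, 52], matching the last tensor in the workload_key.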