Hi guys, I have been studying TVM and found that `double_buffer` seems useful, yet it is rarely applied in operator schedules.
In the legacy TE schedules, double buffering is not used at all.
In AutoTVM, only four registered schedules use it:
```python
@autotvm.register_topi_schedule("dense_large_batch.gpu")
def schedule_dense_large_batch(cfg, outs):
    ...
    s[AA].double_buffer()
    ...
    s[BB].double_buffer()

@autotvm.register_topi_schedule("group_conv2d_NCHWc_int8.cuda")
def schedule_group_conv2d_NCHWc_int8(cfg, outs):
    ...

@autotvm.register_topi_schedule("conv2d_NCHWc_int8.cuda")
def schedule_conv2d_NCHWc_int8(cfg, outs):
    ...

@autotvm.register_topi_schedule("conv2d_HWNCnc_tensorcore.cuda")
def schedule_conv2d_hwnc_tensorcore(cfg, outs):
    ...
```
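For context, `double_buffer` asks the code generator to allocate two copies of a staging buffer so that loading the next data tile can overlap with compute on the current one. Here is a minimal pure-Python sketch of that ping-pong pattern; the function name and structure are my own illustration, not TVM code:

```python
def double_buffered_sum(tiles):
    """Sum a list of tiles using two alternating staging buffers.

    While the compute step consumes tile i from one buffer, the load
    step "prefetches" tile i+1 into the other buffer. On a GPU the
    prefetch would overlap with compute instead of running sequentially.
    """
    bufs = [None, None]          # the two shared-memory-like buffers
    n = len(tiles)
    if n == 0:
        return 0
    bufs[0] = list(tiles[0])     # prologue: load the first tile
    total = 0
    for i in range(n):
        if i + 1 < n:
            # prefetch the next tile into the buffer not being read
            bufs[(i + 1) % 2] = list(tiles[i + 1])
        total += sum(bufs[i % 2])  # compute on the current buffer
    return total
```

The cost is double the shared-memory footprint, which may reduce occupancy; that trade-off could be one reason it is applied so selectively.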
Does anyone know why double buffering is used so rarely? Thanks.