The use of the InjectDoubleBuffer pass

Hi all, I am studying TVM and found that double_buffer is useful, but it is rarely used in operator schedules.

In legacy_te_schedule, double buffering is not used at all.

In AutoTVM, only four operator schedules use double buffering:

@autotvm.register_topi_schedule("dense_large_batch.gpu")
def schedule_dense_large_batch(cfg, outs):
    ...
    s[AA].double_buffer()
    ...
    s[BB].double_buffer()
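
For context, the transformation that `double_buffer()` requests can be sketched in plain Python (no TVM; all names here are illustrative): while the compute stage works on the tile held in one buffer, the next tile is prefetched into the other buffer, so memory loads can overlap with computation.

```python
def double_buffered_sum(tiles):
    """Sum a list of tiles using a ping-pong (double) buffer.

    On real hardware the prefetch would overlap with compute;
    this sketch only models the buffer rotation, not the parallelism.
    """
    if not tiles:
        return 0
    bufs = [None, None]        # the two alternating buffers
    bufs[0] = list(tiles[0])   # prologue: load the first tile
    total = 0
    for i in range(len(tiles)):
        cur = bufs[i % 2]
        # prefetch tile i+1 into the *other* buffer before computing
        if i + 1 < len(tiles):
            bufs[(i + 1) % 2] = list(tiles[i + 1])
        total += sum(cur)      # compute on the current buffer
    return total

print(double_buffered_sum([[1, 2], [3, 4], [5, 6]]))  # → 21
```
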

@autotvm.register_topi_schedule("group_conv2d_NCHWc_int8.cuda")
def schedule_group_conv2d_NCHWc_int8(cfg, outs):

@autotvm.register_topi_schedule("conv2d_NCHWc_int8.cuda")
def schedule_conv2d_NCHWc_int8(cfg, outs):

@autotvm.register_topi_schedule("conv2d_HWNCnc_tensorcore.cuda")
def schedule_conv2d_hwnc_tensorcore(cfg, outs):

Does anyone know why? Thanks.

Hi @cheng, to my understanding, the double-buffer design in legacy TE may not be an appropriate scheduling approach. I suggest checking out the InjectSoftwarePipeline pass in TIR; it might better suit your needs.
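
Conceptually, software pipelining generalizes double buffering from two overlapping stages to N: at each "clock tick" every pipeline stage works on a different loop iteration. A minimal pure-Python model of that overlap (no TVM; the stage names and timeline are illustrative, not what the pass emits):

```python
def software_pipeline(n_iters, stages):
    """Model an N-stage software pipeline: at clock t, stage s
    works on iteration t - s (when that iteration exists).
    Double buffering is the special case of two stages."""
    timeline = []
    for t in range(n_iters + len(stages) - 1):
        active = [(name, t - s) for s, name in enumerate(stages)
                  if 0 <= t - s < n_iters]
        timeline.append(active)
    return timeline

# 3 iterations through a load -> compute pipeline (i.e. double buffering):
for t, active in enumerate(software_pipeline(3, ["load", "compute"])):
    print(t, active)
```

In the steady state (t = 1 and t = 2 here), the load of iteration i+1 runs alongside the compute of iteration i, which is exactly the overlap that `double_buffer()` aims for.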

Thanks for your answer. I’ll take a look :grinning: :grinning:

Hi @LeiWang1999, leaving legacy_te_schedule aside, why do only four op schedules in AutoTVM use double_buffer (the examples above)? Why hasn't double buffering been widely adopted as a common optimization? :joy: :joy:

Can you give me some advice?