Hi,
I recently tried to use TVM for my custom deep learning accelerator. Because my accelerator needs a specific data layout to operate correctly, I checked some code, docs, and RFCs (e.g. 0039-buffer-physical-layout.md).
I found that layout specification and layout transformation can be done at both the Relay level and the TIR (TE) level.
Is there a reason to keep them separate? So far, it seems to me that it would be more efficient to combine the two approaches into one.
Please let me know if I’m missing something.