Question on architecture about layout and layout transformation


I recently tried to use TVM for my custom deep learning accelerator. Because my accelerator needs a specific data layout to operate correctly, I looked through some code, docs, and RFCs.

I found that layout specification and layout transformation can be done at both the Relay level and the TIR (TE) level.

Is there a purpose to separating them? So far, it seems more efficient to me to combine the two approaches into one.

Please let me know if I'm missing something.

The layout transformation pass in Relay is restricted to transformations that can be expressed with layout strings such as `NHWC`, `NCHW`, and so on.

The `transform_layout` schedule primitive in TIR can perform more complex transformations, as it takes an `IndexMap` to specify the new layout in terms of the original one. The index map is specified as a function, so an `IndexMap` converting NHWC to NCHW would be `lambda n, h, w, c: [n, c, h, w]`. You could, however, implement more complex index maps such as `lambda n, h, w, c: [n, h//2, w//2, c//4, h%2, w%2, c%4]`, which maps to a much more complex tiled layout.
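To make the two index maps above concrete, here is a minimal pure-Python sketch of what each one computes, independent of TVM (in TVM itself you would pass such a lambda to the `transform_layout` schedule primitive, which builds an `IndexMap` from it):

```python
# Pure-Python sketch of the index maps discussed above (no TVM dependency).
# Each function maps a logical NHWC index to its position in the new layout.

def nhwc_to_nchw(n, h, w, c):
    # Simple permutation: NHWC -> NCHW.
    return (n, c, h, w)

def nhwc_to_tiled(n, h, w, c):
    # More complex packed layout: tile H and W by 2 and C by 4,
    # i.e. lambda n, h, w, c: [n, h//2, w//2, c//4, h%2, w%2, c%4].
    return (n, h // 2, w // 2, c // 4, h % 2, w % 2, c % 4)

# Example: the logical NHWC index (0, 3, 5, 6)
print(nhwc_to_nchw(0, 3, 5, 6))   # -> (0, 6, 3, 5)
print(nhwc_to_tiled(0, 3, 5, 6))  # -> (0, 1, 2, 1, 1, 1, 2)
```

The second map is the kind of packed layout an accelerator often requires: the tile offsets (`h%2`, `w%2`, `c%4`) become the innermost, contiguous dimensions.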

In the unity branch, Relax allows performing such layout transformations at the graph level. I'm not sure exactly what you mean by combining the two approaches, but one could say that the Relax `transform_layout` and the TIR `transform_layout` are somewhat combined through the `AlterOpImpl` pass.


Thanks for the fast reply.

I'm not familiar with the unity branch, but what you explained about Relax seems to be exactly what I'm looking for (the two-in-one thing). Many thanks for letting me know.

Thank you!!