I believe that I am mixing code generation concepts with lowering for my target…
I have been trying to find a way to map high-level Relax operators straight to low-level TIR (without TE+scheduling) for my BYOC target (which is a new device, by the way), and I thought LegalizeOps was the way to do it.
Is there any way I can do it then?
Edit: Also, is registering a Relax operator strategy any different from how it works in Relay?
After preparing the module for my codegen using FuseOpsByPattern(patterns) and MergeCompositeFunctions(), I can run RunCodegen(), and my BYOC will receive the annotated Relax function(s) as its entry points. These functions contain a series of high-level Relax operators, and if necessary I can lower them to TIR by implementing graph rewriting in the codegen, or alternatively before calling RunCodegen().
If you want to prioritize your own TIR over what LegalizeOps produces, you can write a pass that converts your target ops to your TIR and apply it before the LegalizeOps pass. The implementation should be very similar to LegalizeOps; the only difference is the mapping between Relax ops and TIR. This way, we keep the compilation pipeline composable.
If you want to write down the IRModule directly, you can also do that, as @Kevin-XiongC pointed out.
RunCodegen operates at the Relax level. Inside it, without going through TIR, each op is converted directly to the external target’s equivalent and then compiled, e.g. relax.conv2d → tensorrt.conv2d. Is there any specific reason you want to go through TIR? We do have such a mechanism as well, but it uses different paths.
It would be for low level optimisations.
My current plan is to set up a working template that stays high level, then move the low-level logic into TVM.
Long term, I believe TE+scheduling will be the way to go. Short term, I am planning for cases where I have very specific needs (e.g. packing) that I don’t yet know how to express with TE+scheduling. So writing TIR directly could be a temporary solution while I climb the TE+scheduling learning curve.