Hierarchy in TVM

Hey all,

I’ve been working with the TVM stack lately, and love it!

Does the TVM stack support a concept of hierarchy? That is, when compiling a model with repeating operations (i.e. BERT) is there any way to extract the fact that there are 12 identical layers, and which operators belong to those layers?

Thanks! Aleks


This is an interesting question and I’m looking into this recently too.

It depends on how the model was implemented. If the model was implemented in another framework (e.g., TensorFlow or PyTorch), then there's no way for TVM to keep this information, because the hierarchy isn't part of the IR graph — it exists only as nested Python class instances.
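To illustrate the point: the layer structure lives only in the framework's object tree, and the graph that reaches TVM is a flat sequence of operators. A rough sketch (module names and ops below are illustrative, not taken from any actual export):

```
# Framework side: hierarchy exists as nested objects
BertModel
 └─ encoder
     └─ layer[0..11]        # 12 identical layer instances
         ├─ attention
         └─ feedforward

# After tracing/export: one flat operator graph, no layer boundaries
%0 = nn.dense(%input, %w_q);
%1 = nn.dense(%input, %w_k);
%2 = nn.batch_matmul(%0, %1);
...                          # repeats 12 times with different weights
```

Nothing in the flat graph records which ops came from which `layer[i]`, which is why the grouping can't be recovered after import.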

If the model was implemented in Relay, then it's possible by implementing multiple Relay functions. However, I'm not 100% sure that would work, because some Relay passes may not handle multiple functions well. More importantly, it may hurt the final end-to-end performance due to unnecessary IR boundaries.
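As a hedged sketch of that approach, the repeated layer could be written as its own Relay function in the text format, with `@main` calling it once per layer (the shapes and ops here are illustrative, not a real BERT layer):

```
def @layer(%x: Tensor[(1, 768), float32], %w: Tensor[(768, 768), float32]) {
  nn.relu(nn.dense(%x, %w))
}

def @main(%x: Tensor[(1, 768), float32],
          %w0: Tensor[(768, 768), float32],
          %w1: Tensor[(768, 768), float32]) {
  %0 = @layer(%x, %w0);
  @layer(%0, %w1)
}
```

The `@layer` calls preserve the layer boundary explicitly in the IR, but that boundary is exactly the kind that can block cross-layer optimizations like operator fusion unless the functions are inlined first.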

Is it possible to extend TF/PyTorch to keep this information?

Interesting point. I agree that having a hierarchy would make the IR more readable. Perhaps the nested structure could be kept in A-normal form and flattened to graph normal form when we need to tune the model?