Google's latest work: MLIR Primer

Good discussions here. The design principle of the TVM stack is to “be intelligent and pragmatic”. This means we want as much automation as possible, but also provide ways to make use of human domain information, like schedule templates and tensorized micro-kernels, when necessary. We will likely continue to use this principle.
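To make “schedule templates” concrete, here is a minimal sketch of the idea using TVM's autotvm API (my illustration, not from the post; the template name and knob names are made up): the compute rule stays fixed, while a human-written template exposes tuning knobs for the automation to search.

```python
import tvm
from tvm import te, autotvm

@autotvm.template("example/matmul")  # illustrative template name
def matmul_template(N, M, K):
    A = te.placeholder((N, K), name="A")
    B = te.placeholder((K, M), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

    s = te.create_schedule(C.op)
    i, j = s[C].op.axis

    # Human domain knowledge: expose the tiling structure as tunable knobs,
    # leaving the search over concrete factors to the automated tuner.
    cfg = autotvm.get_config()
    cfg.define_split("tile_i", i, num_outputs=2)
    cfg.define_split("tile_j", j, num_outputs=2)
    io, ii = cfg["tile_i"].apply(s, C, i)
    jo, ji = cfg["tile_j"].apply(s, C, j)
    s[C].reorder(io, jo, ii, ji)
    return s, [A, B, C]
```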


Actually, the current MLIR documentation says that the polyhedral IR is an experimental dialect of MLIR. I find it a bit odd that they would call it “experimental”.

BTW, I presented polyhedral compilation of ML graphs at C4ML, and I think that polyhedral and functional approaches like Relay IR are the way to go… though I think Relay goes too far on the functional side (e.g. recursion and lists). That is not bad; it just means more work needs to be done there.


Let me stress that MLIR doesn't only offer a different IR; it also offers a different approach to scheduling via its polyhedral dialect. For example, affine transformations appear as types in the standard dialect.

No, I think that automation is not a necessary property of the polyhedral approach. See, for example, the Loopy project (https://github.com/nimit-singhania/loopy), where scheduling rules are explicit and the authors needed only one step to include their grammar in the source language, as Halide does now.

In my opinion, it is TVM which may (and should!) benefit from polyhedral approaches. I see Relay as a different story; it may or may not use TVM as a backend.

This is what I meant in my post:

Since the loop transformations TVM performs are a subset of those possible with polyhedral modelling, I guess we would be OK.
Obviously, TVM could offload part of its scheduling to MLIR and invoke the polyhedral dialect from there.
That, I think, is part of the goal of MLIR: that the “right” dialect is used for the right part of the whole compilation task.
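As a small illustration of that subset relationship (my sketch, not from the thread): a 2-D tiling written with TVM's te schedule primitives, annotated with the affine schedule it corresponds to in polyhedral terms.

```python
import tvm
from tvm import te

# A 2-D tiling as a TVM schedule. In polyhedral terms this is the
# affine schedule
#   (i, j) -> (floor(i/32), floor(j/32), i mod 32, j mod 32),
# i.e. one point in the larger space of affine transformations.
n = 1024
A = te.placeholder((n, n), name="A")
B = te.compute((n, n), lambda i, j: A[i, j] * 2.0, name="B")

s = te.create_schedule(B.op)
io, ii = s[B].split(B.op.axis[0], factor=32)
jo, ji = s[B].split(B.op.axis[1], factor=32)
s[B].reorder(io, jo, ii, ji)
print(tvm.lower(s, [A, B], simple_mode=True))
```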


TVM could offload part of its scheduling to MLIR and invoke the polyhedral dialect from there.

Yep, it will be interesting to see how we could offload parts of Relay IR to different third-party IRs, including MLIR, TensorRT, etc.

There is another project called loo.py (https://github.com/inducer/loopy) which does loop transformations for CPUs and GPUs.
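For a flavor of loo.py's style, a minimal sketch based on my reading of its documentation (the kernel is illustrative): the loop domain is written in ISL's polyhedral set notation, and loop transformations are explicit rewrites on the kernel object.

```python
import numpy as np
import loopy as lp

# Loop domain in ISL (polyhedral) set notation; the statement is separate.
knl = lp.make_kernel(
    "{ [i]: 0 <= i < n }",
    "out[i] = 2 * a[i]",
    target=lp.PyOpenCLTarget())
knl = lp.add_and_infer_dtypes(knl, {"a": np.float32})

# Transformations are explicit rewrites on the kernel object:
# split the i loop by 16, then map the pieces to GPU work-groups/items.
knl = lp.split_iname(knl, "i", 16)
knl = lp.tag_inames(knl, {"i_outer": "g.0", "i_inner": "l.0"})
print(lp.generate_code_v2(knl).device_code())
```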


It's in TVM's acknowledgement list, though.

Any updates on this? Since this is the primary thread discussing how MLIR and TVM relate to each other, would love to see a link posted here.

One paper has done this (comparing different DL compilers): https://arxiv.org/abs/2002.03794


+1 for this. It would be great to see which one is better.

@tqchen Now that OpenXLA has been open-sourced, what are your thoughts on moving forward with regard to interoperability?

Thanks & Regards,

Kuladeep.

Indeed, StableHLO would be a great bridge to interoperate with and bring into TVM Unity.

@tqchen Thanks for the reply. Any plans already in motion for bringing StableHLO into Unity?

Best regards,

Kuladeep.

Thanks for this useful info and the docs.

Just as an info/update on this thread: MLIR now has a “controllable schedule” concept that allows a separate schedule to control transformations on an IR. As an example, Halide and TVM are cited in their presentation. With this, it is probably easier (or at least it should be) and more straightforward to represent TVM's schedule part and the TIR parts in MLIR. This “controllable part” (the schedule) is a dialect itself, but one that controls elements of another IR.

I leave this here just as pure info, but I assume many of you are already aware of it.

The presentation: https://youtu.be/P4gUj3QtH_Y
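For a TVM-side analogy (my sketch, assuming current TensorIR/TVMScript APIs; the workload is illustrative): TIR already separates the schedule from the IR it rewrites, which is roughly the shape the schedule-as-a-dialect idea above has in MLIR.

```python
import tvm
from tvm.script import tir as T

@tvm.script.ir_module
class MatmulModule:
    @T.prim_func
    def main(A: T.Buffer((128, 128), "float32"),
             B: T.Buffer((128, 128), "float32"),
             C: T.Buffer((128, 128), "float32")):
        # A plain matmul written in TensorIR (TVMScript).
        for i, j, k in T.grid(128, 128, 128):
            with T.block("C"):
                vi, vj, vk = T.axis.remap("SSR", [i, j, k])
                with T.init():
                    C[vi, vj] = T.float32(0)
                C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]

# The schedule is a separate object that rewrites the module,
# much like a schedule dialect controlling a payload IR.
sch = tvm.tir.Schedule(MatmulModule)
block = sch.get_block("C")
i, j, k = sch.get_loops(block)
io, ii = sch.split(i, factors=[None, 32])
sch.reorder(io, j, ii)
print(sch.mod.script())
```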


The shared focus between polyhedral approaches and TVM lies in their use of integer and integer-set analysis. I see a valuable opportunity for MLIR and TVM to enhance and benefit from each other in this area. Essentially, the central concept here is leveraging integer set analysis, which could be referred to as polyhedral or hypercube analysis. :blush:
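For the curious, the TVM side of that machinery lives in tvm.arith. A small sketch (mine, assuming current tvm.arith APIs) of deriving exact integer bounds for an affine expression under polyhedral-style loop constraints:

```python
import tvm
from tvm import tir

analyzer = tvm.arith.Analyzer()
i = tir.Var("i", "int32")

# Within the polyhedral-style constraint 0 <= i < 64, exact integer
# bounds of an affine expression can be derived.
with analyzer.constraint_scope(tvm.tir.all(i >= 0, i < 64)):
    bound = analyzer.const_int_bound(i * 4 + 3)
    print(bound.min_value, bound.max_value)  # 3 255
    print(analyzer.simplify((i * 4) // 4))   # i
```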

Maybe this will help you more: https://mlir.llvm.org/docs/Tutorials/transform-details/Ch0/

@Noahschnapp

Currently there is initial (optional) support for MLIR Presburger (Polly) right in TVM:

As for the IR expressiveness part, there is an example (out on GitHub) of a TVM Relay dialect parser done in MLIR:

Hi guys. I'm studying deep learning compilers now, and I'm a little distressed that there aren't enough notes or a learning route for this. So, could you share your notes with me if you have time? My email is czy13855444689@163.com.