[RFC] TensorIR: A schedulable IR for TVM

Yeah, I think your explanation is a good summary, and I see what you mean about the TensorIR blocks.

My understanding, though, is that the user doesn't actually write TensorIR by hand (except maybe to start); they still schedule with a separate language? The blocks in TIR seem really nice, but I still worry that the scheduling code itself also needs some ability to abstract. Take, for instance, the example here: 4. Matrix Multiplication — Dive into Deep Learning Compiler 0.1 documentation. It doesn't seem like this proposal changes that too much? There are so many axes in scope in that function at once, and it seems very hard to keep them all separate from each other.
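To make the concern concrete, here is a minimal sketch of a blocked matmul schedule in the existing TE scheduling language, along the lines of that chapter. The tile factors and the variable names (`xo`, `xi`, `yo`, `yi`, `ko`, `ki`) are my own illustration, not taken from the RFC:

```python
# A rough sketch of a classic TE matmul schedule (not code from the RFC).
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
C = te.compute((n, n), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

s = te.create_schedule(C.op)
# Each split introduces two new axis variables into the enclosing scope...
xo, xi = s[C].split(C.op.axis[0], factor=32)
yo, yi = s[C].split(C.op.axis[1], factor=32)
ko, ki = s[C].split(C.op.reduce_axis[0], factor=4)
# ...and reorder has to juggle all six of them at once.
s[C].reorder(xo, yo, ko, ki, xi, yi)

print(tvm.lower(s, [A, B, C], simple_mode=True))
```

Even in this small example there are six loose loop variables in one function scope, and nothing in the language groups them or ties them to the tile they came from; that's the abstraction gap I'm worried the scheduling side still has.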
