I am 100 percent in favor of this proposal and am voting +1.
There have always been claims that deep learning compilers are about to converge and that no further innovation will be needed in this field. The reality, however, is that the rise of new hardware with new features, together with new models, keeps motivating new compiler frameworks to be built; LLMs and the various frameworks supporting them are examples of this.
In fact, Transformers were proposed around 6 years ago and have been popular for at least 4. Still, DL compilers are hardly the first choice when people build systems to serve LLM inference; they reach for vendor libraries and hand-crafted kernels instead. As a member of this open-source DL compiler community, and as someone who has worked on TVM for several years, I take this as a reminder that ML engineering is still far from a solved problem, at least for us.
For infrastructure software trying to keep up with the requirements of a fast-changing field, it either gets busy living or gets busy dying.
And to change the way people do Computer Science, we need to embrace change ourselves and be able to change faster than others do.