TVM vs. MLIR (TCP Dialect)

Hello everyone. I'm a newbie to TVM and was recently introduced to it while researching the different DL compilers available to users. Although TVM is intended as an end-to-end compiler framework, I want to focus only on the IRs that different compilers offer. From my understanding, the two most prominent ones are Relay IR (TVM) and MLIR (Google). Despite researching quite extensively, I can't seem to find any resources that highlight the strengths and weaknesses of these two IRs and the particular use cases they are catered towards. I therefore wanted to start this thread with the intention of getting better insight into how these two IRs differ (although, technically, MLIR is a compiler infrastructure for creating custom IRs rather than a single IR). Here is my current understanding of the situation:

MLIR has the concept of dialects, which allow for defining new operations as well as attributes and types. Although the existing dialects are low-level (e.g., Affine, SCF, Linalg), the MLIR community is currently working on a new dialect called TCP (see "[RFC] Proposal for a high-level ML dialect in MLIR" on the LLVM Discourse forums), which sits at a higher level of abstraction than the current ones. This new dialect could then act as a mid-level dialect: different frontend frameworks could be lowered to TCP, which in turn could be lowered to the other, lower-level dialects, thus providing an end-to-end compilation flow. In addition, TCP is intended to support both inference and training and to be agnostic to frontends and backends. In this scenario, how does TVM compare? Also, would the addition of a Relay/Relax dialect (such that Relay -> TCP -> Affine, etc. could be made possible) be of any benefit?
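To make the layering concrete, here is a rough sketch of what such a mid-level op might look like in MLIR's textual IR. Note that the `tcp.add` op and its syntax are purely hypothetical on my part (the TCP dialect is still being designed); only `func.func` and the tensor types shown are existing MLIR constructs:

```mlir
// Hypothetical frontend-agnostic, tensor-level op in a "tcp"-style
// mid-level dialect. A lowering pipeline would then rewrite it into
// linalg.generic on tensors, and eventually into affine/scf loops.
func.func @example(%a: tensor<8x8xf32>, %b: tensor<8x8xf32>) -> tensor<8x8xf32> {
  %0 = "tcp.add"(%a, %b) : (tensor<8x8xf32>, tensor<8x8xf32>) -> tensor<8x8xf32>
  return %0 : tensor<8x8xf32>
}
```

The appeal of such a layer, as I understand the RFC, is that every frontend (TensorFlow, PyTorch, etc.) would only need one lowering into it, and every backend would only need one lowering out of it.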