TVM-VTA Architecture Scope and Roadmap

Yep, that is the usual struggle between top-down vs bottom-up documentation. Getting a quick sense of the scope of the problem, and of where you as an individual can contribute, requires an overview of the architecture and its major components.

I would like to know which major technology components of the full stack have a ‘stand-alone’ quality: that is, they might be used in isolation. Clearly VTA is one such component, and I presume the IR can be another, with its writers, readers, and transformers.

What about the following components:

  • front-ends for TF, PyTorch, MXNet
  • the ‘compiler’ algorithms
  • are tensor ops stand-alone?
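To make the tensor-op question concrete, here is a toy sketch (plain Python, deliberately not TVM's actual `te` API) of the idea behind stand-alone tensor ops: the op is declared once as an index-to-value expression, and the execution strategy is supplied separately. All names below are made up for illustration.

```python
# Toy sketch of a declaratively-defined tensor op, kept separate from
# how it is executed. Names are illustrative, not TVM's actual API.

def compute(shape, expr):
    """Declare an element-wise tensor op: expr maps an index (plus
    input tensors) to an output value."""
    return {"shape": shape, "expr": expr}

def lower_naive(op):
    """One possible 'schedule': a plain sequential loop over the op."""
    def run(*inputs):
        n = op["shape"][0]
        return [op["expr"](i, *inputs) for i in range(n)]
    return run

# Declare C[i] = A[i] + B[i] without committing to an execution strategy.
vec_add = compute((4,), lambda i, A, B: A[i] + B[i])
add = lower_naive(vec_add)
print(add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```

The point is only that the declaration carries no scheduling decisions, so the same op could in principle be lowered for different targets.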

I do like the suggestion to make the walk-through very customer-facing. For VTA, walking through an instruction is reasonable. What would be good ‘operators’ to showcase for the IR and its transformations? Are there opportunities to express an IR and showcase the data structures and mappings to different back-ends that yield good efficiency (multi-core, DSP, GPU, OpenCL, FPGA, etc.)?
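Even a very small expression could serve as such a showcase operator. The sketch below (plain Python tuples; the node tags are invented and are not TVM's Relay/TIR data structures) shows the three roles an IR walk-through might demonstrate: the data structure itself, a transformer (constant folding), and a writer that maps the transformed tree to one back-end style.

```python
# Toy expression IR: tuples of the form ("const", v), ("var", name),
# ("add", l, r), ("mul", l, r). Purely illustrative.

def fold(node):
    """Transformer: constant-fold subtrees bottom-up."""
    tag = node[0]
    if tag in ("const", "var"):
        return node
    op, l, r = node
    l, r = fold(l), fold(r)
    if l[0] == "const" and r[0] == "const":
        v = l[1] + r[1] if op == "add" else l[1] * r[1]
        return ("const", v)
    return (op, l, r)

def emit_c(node):
    """Writer: render the tree as C-style scalar source text."""
    tag = node[0]
    if tag == "const":
        return str(node[1])
    if tag == "var":
        return node[1]
    op, l, r = node
    sym = "+" if op == "add" else "*"
    return f"({emit_c(l)} {sym} {emit_c(r)})"

# x * (2 + 3)  --fold-->  x * 5
expr = ("mul", ("var", "x"), ("add", ("const", 2), ("const", 3)))
folded = fold(expr)
print(emit_c(folded))  # (x * 5)
```

A second writer targeting, say, a vector or OpenCL-flavoured back-end could consume the same folded tree, which is exactly the "one IR, many mappings" story a customer-facing walk-through could tell.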