[DISCUSS] TVM v0.8 Roadmap

Thanks to everyone’s efforts, we have successfully released v0.7. Per the Apache way of development, we want broader involvement and input from community developers on what we should push for in the v0.8 release cycle.

This is a discussion thread to get everyone’s thoughts about directions we would like to see in the v0.8 cycle. The final public roadmap will be a summary based on the discussion.

In terms of timeline, let us set expectations for what we want to see in the next three months, but input on longer-term projects is also welcome. Here are some example directions:

  • Stabilize auto scheduling (see the sketch after this list)
  • Stabilize the tvmc command-line driver
  • Better documentation for developers
  • TensorIR scheduling support
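
To make the auto-scheduling item concrete, here is a minimal per-operator tuning sketch, roughly following the upstream auto_scheduler tutorial. The exact API (e.g. SearchTask versus the older create_task entry point) may shift during the release cycle, and the trial count and log-file name below are placeholders.

```python
import tvm
from tvm import te, auto_scheduler


@auto_scheduler.register_workload
def matmul(N, L, M, dtype):
    # A plain matmul expressed in TE; auto_scheduler searches for a schedule.
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    return [A, B, C]


target = tvm.target.Target("llvm")
task = auto_scheduler.SearchTask(func=matmul, args=(1024, 1024, 1024, "float32"), target=target)

log_file = "matmul.json"  # placeholder path
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=64,  # placeholder budget; real workloads need far more
    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
)
task.tune(tune_option)

# Apply the best schedule found so far and build a runnable function.
sch, args = task.apply_best(log_file)
func = tvm.build(sch, args, target)
```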

Please share your thoughts.

For the pending PRs, we could have TensorRT and Xilinx Vitis-AI as v0.8 experimental features.

Looking forward to TensorRT features!

Looking forward to heterogeneous execution support in the VM.

Looking forward to support for external libraries like cuDNN when the input shape is dynamic.
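
For context, a dynamic input shape can already be expressed in Relay with relay.Any(), and such modules generally go through the VM executor. A rough sketch, assuming the VM executor path (signature details vary between releases, and the shapes below are arbitrary):

```python
import numpy as np
import tvm
from tvm import relay

# A dense layer whose batch dimension is unknown at compile time.
x = relay.var("x", shape=(relay.Any(), 784), dtype="float32")
w = relay.var("w", shape=(128, 784), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))

# Dynamic-shape modules are executed through the Relay VM rather than
# the graph executor.
exe = relay.create_executor("vm", mod=mod, target="llvm")
out = exe.evaluate()(
    np.random.rand(4, 784).astype("float32"),
    np.random.rand(128, 784).astype("float32"),
)
print(out.shape)  # (4, 128)
```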

Recent advances in building sparse networks show very promising results in terms of optimization and performance. With adequate settings, a sparse network can achieve the same accuracy as its baseline FP32 network with fewer parameters and FLOPs. Although various techniques have been proposed at different stages, such as sparse encodings, sparse kernels, and sparsification, the area is still evolving at a steep rate.

We are currently researching and working in the sparse domain. As TVM already supports a few basic sparse operations, we would like to work towards enhancing and strengthening sparse support, whether in terms of sparse encodings or sparse kernels. Since different frameworks like TensorFlow, PyTorch, and TFLite may realize sparsity in their own ways, we will also work towards making the sparse features more robust and easily adaptable to the various front-ends.

We will try to release a tracking list with more internal details soon.
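
As an illustration of the kind of data preparation involved, below is a rough sketch, using only NumPy and SciPy, of pruning a dense weight matrix and converting it to the (data, indices, indptr) BSR triplet that block-sparse kernels (e.g. the relay.nn.sparse_dense path) typically consume. The 5% block density and (16, 1) block size are arbitrary placeholders, not TVM defaults.

```python
import numpy as np
import scipy.sparse as sp

np.random.seed(0)
w = np.random.randn(512, 512).astype("float32")

# Keep roughly 5% of the (16, 1) blocks and zero out the rest.
mask = np.random.rand(512 // 16, 512) >= 0.95
w = w * np.repeat(mask, 16, axis=0)

# Convert to block-sparse (BSR) form; this triplet is what block-sparse
# dense/matmul kernels usually take as their weight inputs.
w_bsr = sp.bsr_matrix(w, blocksize=(16, 1))
print(w_bsr.data.shape, w_bsr.indices.shape, w_bsr.indptr.shape)
```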

I would like to see some thought about the release process and release timelines for the TVM project. Initially, I would like some indication of when v0.8 is likely to happen and when future releases are likely to follow.

Is Ansor now considered fully merged into the code base?

Regards, Ramana

@merrymercy can answer the question regarding Ansor. I believe the per-operator tuning features are considered merged as experimental, and there is still some follow-up work to do for end-to-end integration with Relay.

Looking forward to it. The TVM auto-scheduler is also doing some experiments on this. I believe sparse networks have a good future too.

Looking forward to Ansor/auto-scheduling! Perhaps more polyhedral model work as well?

Sorry for acting a bit late on this thread; it slipped through due to the conference and holiday break. Thankfully, a lot of things are already under way. A formal tracking thread has been created here: [ROADMAP] TVM v0.8 Roadmap · Issue #7434 · apache/tvm · GitHub