I’ve heard there is ongoing work from the SAMPL group to support graph neural networks and sparse operators in TVM. Can anyone comment on the status of this work, if there are any existing design documents, and where someone could potentially pitch in to this effort?
There is an early prototype, but since it is still very much a work in progress, we don’t have documentation on the topic yet. It would be good to write a document highlighting the effort (sparse DSL support, operator coverage, scheduling templates) and where people could contribute. Is that something you could start, @ziheng?
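For readers wondering what "sparse operator coverage" means concretely for GNN workloads, the core kernel is usually a sparse-dense matmul over a CSR adjacency matrix (graph convolution is essentially `A_sparse @ X @ W`). Below is a minimal NumPy sketch of that computation for context only; the function name and layout here are illustrative and are not the prototype's TVM API.

```python
import numpy as np

def csr_spmm(indptr, indices, data, dense):
    """Multiply a CSR sparse matrix (indptr, indices, data) by a dense matrix.

    Illustrative reference semantics for the kind of sparse-dense matmul
    kernel that GNN operator coverage typically starts from.
    """
    num_rows = len(indptr) - 1
    out = np.zeros((num_rows, dense.shape[1]), dtype=dense.dtype)
    for row in range(num_rows):
        # Accumulate contributions from each nonzero in this row.
        for k in range(indptr[row], indptr[row + 1]):
            out[row] += data[k] * dense[indices[k]]
    return out

# Tiny example: a 3x3 adjacency-like matrix times node features.
indptr = np.array([0, 2, 3, 4])
indices = np.array([0, 2, 1, 0])
data = np.array([1.0, 2.0, 3.0, 4.0])
features = np.random.rand(3, 4)
print(csr_spmm(indptr, indices, data, features))
```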
@jknight Thanks for your interest. I have some slides I can share with you covering the work I did on TVM sparse support during the spring quarter. I will also write a design document this week.
@ziheng Are you working on an RFC for this? Any idea when that will be out? It seems like a lot of work is going on in this direction and it’d help to centralize it in one place.
@Huyuwei Likewise, anything you can share here or in the RFC would be helpful.
@cylinbao Thanks for the list. Just to clarify: are these things you’d like the community to work on, or things you are currently working on yourself? If the former, it’d be good to open issues for them and mark them as ‘Help wanted’, possibly with a pointer to someone who would be willing to mentor them.