I believe this point has been touched on in the past, but I would like to bring it up again.
Currently, a growing number of SoC vendors are shipping new AI SoCs together with SDKs that aim to help deploy models on these devices. Unfortunately, in many cases these SDKs are still far from mature (e.g., only a few operators are supported) and only cover a handful of simple, typical CNN models.
Of course, in these cases TVM becomes an attractive option, since one of its main and initial goals is precisely to close this software gap. However, it looks like retargeting TVM to these new accelerators is not a trivial task. For example, Qualcomm offers an SDK (the Snapdragon Neural Processing Engine, SNPE), but they have also been working on retargeting TVM to the Hexagon DSP since May of last year, I believe.
So I was wondering: are there already efforts or guidelines to make the retargeting/porting process more manageable?
Moreover, I have some concrete questions about the process of retargeting TVM:
- Is it possible to directly use the OpenCL target in TVM if the AI accelerator supports OpenCL?
- Must an AI accelerator be programmable (e.g., a DSP) for TVM to target it? Or is it also possible to target accelerators with a limited interface (e.g., fixed-function ASIC designs)?
- Is uTVM one way to target new accelerators that in general do not have a fully fledged OS, or only run a minimal RTOS?
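To make the "limited interface" question more concrete, here is a toy sketch of the situation I have in mind (all names here are hypothetical, not actual TVM APIs): the device exposes only a small, fixed menu of operators, so the compiler would have to partition the graph and offload just the supported ops, leaving the rest on the CPU.

```python
# Hypothetical sketch: a fixed-function ASIC that can only run a few ops.
# A compiler targeting it would need to split the model graph into the
# part the accelerator accepts and a CPU fallback for everything else.

SUPPORTED_OPS = {"conv2d", "dense", "relu"}  # the ASIC's fixed menu

def partition(graph):
    """Split a flat list of op names into (offloaded, fallback) groups."""
    offloaded = [op for op in graph if op in SUPPORTED_OPS]
    fallback = [op for op in graph if op not in SUPPORTED_OPS]
    return offloaded, fallback

graph = ["conv2d", "relu", "softmax", "dense"]
acc_ops, cpu_ops = partition(graph)
print(acc_ops)  # ['conv2d', 'relu', 'dense']
print(cpu_ops)  # ['softmax']
```

My question is essentially whether TVM already has a supported path for this kind of graph partitioning and offload, or whether it assumes the target is fully programmable.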
Your input and thoughts are highly appreciated.