I’m trying to run inference on bare-metal devices. More specifically, I’d like to use microTVM (or the outputs from microTVM) on MCUs.
However, in the tutorials and notebooks created by the developers that I went through, the MCU always had to be connected to the host machine.
Is it possible to download a binary onto the MCU and run inference without connecting to a host PC, just as I can with TFLite Micro?
I’m not sure in which cases the MCU has to communicate with the host PC to run inference, and in which cases a standalone MCU can run inference without being connected to one.
If you’re looking for a demo of a standalone application running on an MCU, take a look at https://github.com/apache/tvm/tree/main/apps/microtvm/ethosu. Although that demo is specifically about using TVM to run a model and offload operators to the microNPU, it should be a good starting point for running standalone on bare metal: the host PC is only needed at compile time, and the resulting binary runs inference on the device with no host connection.