Hi,
I compiled a model targeting the aot-executor and c-runtime through microTVM. However, when I try to run inference on a model that has more than one output, the inference never terminates and locks up the MCU.
I am using TVM 0.14 (the newest non-pre-release version on PyPI) and have not been able to figure out what went wrong.
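
For reference, this is roughly how I build the model. It is only a sketch: `mod` and `params` stand in for my actual Relay module and parameters, and the plain `"c"` target string is a simplification of my real target.

```python
import tvm
from tvm import relay

# Sketch of the build setup (mod/params and the target string are placeholders).
target = tvm.target.Target("c")
executor = relay.backend.Executor("aot")   # aot-executor
runtime = relay.backend.Runtime("crt")     # c-runtime

with tvm.transform.PassContext(opt_level=3):
    lowered = relay.build(
        mod,            # Relay module with more than one output
        target=target,
        executor=executor,
        runtime=runtime,
        params=params,
    )
```

With a single-output model the same setup runs fine on the MCU; only the multi-output case hangs.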