Hi everyone. I am trying to measure the inference time of my ML model on an STM32 board using uTVM. I am following the tutorial "microTVM with TFLite Models" from the tvm 0.8.dev0 documentation.
I am trying to use time_evaluator to measure the inference time (as I do for other targets with TVM), but there seems to be an issue with the function when using it with uTVM:
```python
ftimer = graph_mod.module.time_evaluator("run", session.context, number=1, repeat=1)
prof_res = np.array(ftimer().results) * 1000  # multiply by 1000 to convert to milliseconds
print("%.2f ms" % np.mean(prof_res))
```
```
Traceback (most recent call last)
<ipython-input-28-7ce7584c44b6> in <module>
      1 ftimer = graph_mod.module.time_evaluator("run", session.context,number=1, repeat=1)
----> 2 prof_res = np.array(ftimer().results) * 1000 # multiply 1000 for converting to millisecond
      3 print("%.2f ms" % np.mean(prof_res))

~/ai@edge/tvm_eval1/tvm/python/tvm/runtime/module.py in evaluator(*args)
    224             """Internal wrapped evaluator."""
    225             # Wrap feval so we can add more stats in future.
--> 226             blob = feval(*args)
    227             fmt = "@" + ("d" * repeat)
    228             results = struct.unpack(fmt, blob)

~/ai@edge/tvm_eval1/tvm/python/tvm/_ffi/_ctypes/packed_func.py in __call__(self, *args)
    235                 != 0
    236             ):
--> 237                 raise get_last_ffi_error()
    238             _ = temp_args
    239             _ = args

TVMError: Traceback (most recent call last):
  [bt] (8) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(tvm::runtime::micro_rpc::MicroTransportChannel::Send(void const*, unsigned long)+0x20) [0x7fcac7f0bf20]
  [bt] (7) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(tvm::runtime::micro_rpc::Session::SendMessage(tvm::runtime::micro_rpc::MessageType, unsigned char const*, unsigned long)+0x4a) [0x7fcac7f3a2ca]
  [bt] (6) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(tvm::runtime::micro_rpc::Session::SendInternal(tvm::runtime::micro_rpc::MessageType, unsigned char const*, unsigned long)+0x2f) [0x7fcac7f3a015]
  [bt] (5) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(tvm::runtime::micro_rpc::Session::StartMessage(tvm::runtime::micro_rpc::MessageType, unsigned long)+0x5e) [0x7fcac7f3a0c0]
  [bt] (4) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(tvm::runtime::micro_rpc::Framer::StartPacket(unsigned long)+0xee) [0x7fcac7f399ba]
  [bt] (3) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(tvm::runtime::micro_rpc::Framer::WriteAndCrc(unsigned char const*, unsigned long, bool, bool)+0x186) [0x7fcac7f39bac]
  [bt] (2) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(tvm::runtime::micro_rpc::WriteStream::WriteAll(unsigned char*, unsigned long, unsigned long*)+0x4d) [0x7fcac7f39df3]
  [bt] (1) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(tvm::runtime::micro_rpc::CallbackWriteStream::Write(unsigned char const*, unsigned long)+0x270) [0x7fcac7f0d260]
  [bt] (0) /home/hw1580381/ai@edge/tvm_eval1/tvm/build/libtvm.so(+0x120947b) [0x7fcac7e6747b]
  File "/home/hw1580381/ai@edge/tvm_eval1/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/hw1580381/ai@edge/tvm_eval1/tvm/python/tvm/micro/session.py", line 110, in _wrap_transport_write
    data, float(timeout_microsec) / 1e6 if timeout_microsec is not None else None
  File "/home/hw1580381/ai@edge/tvm_eval1/tvm/python/tvm/micro/transport/base.py", line 287, in write
    raise err
  File "/home/hw1580381/ai@edge/tvm_eval1/tvm/python/tvm/micro/transport/base.py", line 266, in write
    bytes_written = self.child.write(data, timeout_sec)
  File "/home/hw1580381/ai@edge/tvm_eval1/tvm/python/tvm/micro/transport/subprocess.py", line 58, in write
    return self.child_transport.write(data, timeout_sec)
  File "/home/hw1580381/ai@edge/tvm_eval1/tvm/python/tvm/micro/transport/file_descriptor.py", line 100, in write
    raise base.TransportClosedError()
TVMError: tvm.micro.transport.base.TransportClosedError
```
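In the meantime I am falling back to host-side wall-clock timing around the run call. A rough sketch of what I mean (plain Python, with the model call stubbed out; `wall_clock_evaluator` is just my local helper that mimics the `number`/`repeat` averaging of time_evaluator, not a TVM API, and on real hardware it would also include RPC/transport overhead, so it only gives an upper bound):

```python
import time
import numpy as np

def wall_clock_evaluator(run, number=1, repeat=1):
    """Return `repeat` timings in seconds, each the average over `number` calls."""
    results = []
    for _ in range(repeat):
        start = time.monotonic()
        for _ in range(number):
            run()
        results.append((time.monotonic() - start) / number)
    return results

# Stand-in for graph_mod.run() inside a micro session; replace with the real call.
prof_res = np.array(wall_clock_evaluator(lambda: sum(range(1000)), number=10, repeat=3)) * 1000
print("%.2f ms" % np.mean(prof_res))
```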
What is the correct way of doing this? Also, is there a way to determine the model's memory requirements on the MCU?
Thanks