Hi, I’m new to microTVM, but I want to run a KWS model with TVM on an Arduino Nano 33 BLE Sense. First of all, I tried this tutorial with micro_speech: demo. It works perfectly. Problems appear when I try to compile the yes_no.tflite or my_kws.tflite models on my own, using tvmc or generate_project.py (from the demo). My board doesn’t respond after flashing. I think there is a problem with the TVMExecute function.
In the tutorial it has this structure:
void TVMExecute(void* input_data, void* output_data) {
  int ret_val = tvmgen_default_run_model(input_data, output_data);
  if (ret_val != 0) {
    TVMPlatformAbort(kTvmErrorPlatformCheckFailure);
  }
}
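In the working demo I just pass raw buffers straight into it from the sketch, roughly like this (the buffer sizes are my assumptions for the micro_speech model: a 49x40 int8 spectrogram as input and 4 int8 class scores as output):

static int8_t input_data[49 * 40];  // spectrogram features (assumed micro_speech shape)
static int8_t output_data[4];       // scores for silence / unknown / yes / no (assumed)

void loop() {
  // ... fill input_data with the latest audio features ...
  TVMExecute((void*)input_data, (void*)output_data);
  // ... read the class scores back from output_data ...
}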
And this is part of the model’s generated library:
#include "../../src/standalone_crt/include/tvm/runtime/c_runtime_api.h"
#ifdef __cplusplus
extern "C" {
#endif
TVM_DLL int32_t tvmgen_default_run_model(void* arg0, void* arg1);
int32_t tvmgen_default_run(void* args, void* type_code, int num_args, void* out_value, void* out_type_code, void* resource_handle) {
  return tvmgen_default_run_model(((DLTensor*)(((TVMValue*)args)[0].v_handle))[0].data,
                                  ((DLTensor*)(((TVMValue*)args)[1].v_handle))[0].data);
}
#ifdef __cplusplus
}
#endif
;
So, when I compile the model myself, TVMExecute looks like this:
void TVMExecute(void* input_data, void* output_data) {
  int ret_val = tvmgen_default___tvm_main__(input_data, output_data);
  if (ret_val != 0) {
    TVMPlatformAbort(kTvmErrorPlatformCheckFailure);
  }
}
And the corresponding part of the model’s library looks like this:
TVM_DLL int32_t tvmgen_default___tvm_main__(TVMValue* args, int* type_code, int num_args, TVMValue* out_value, int* out_type_code, void* resource_handle);
int32_t tvmgen_default_run(TVMValue* args, int* type_code, int num_args, TVMValue* out_value, int* out_type_code, void* resource_handle) {
  TVMValue tensors[4];
  tensors[0] = ((TVMValue*)args)[0];
  tensors[1] = ((TVMValue*)args)[1];
  DLTensor global_const_workspace_dltensor = {
    .data = &global_const_workspace
  };
  TVMValue global_const_workspace_tvm_value = {
    .v_handle = &global_const_workspace_dltensor
  };
  tensors[2] = global_const_workspace_tvm_value;
  DLTensor global_workspace_dltensor = {
    .data = &global_workspace
  };
  TVMValue global_workspace_tvm_value = {
    .v_handle = &global_workspace_dltensor
  };
  tensors[3] = global_workspace_tvm_value;
  return tvmgen_default___tvm_main__((void*)tensors, type_code, num_args, out_value, out_type_code, resource_handle);
}
To me it looks like the model’s generated interface has changed, and I should be passing something else to TVMExecute. So, if I have my inputs, how should I pass them to get an answer from the model? I would be happy for any tips about what to do or how it all works.
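For reference, here is what I’m trying now as a drop-in replacement for TVMExecute, in the same source file where the original TVMExecute lives (so TVMPlatformAbort and the error codes are already available there). I’m only guessing that the new entry point wants DLTensor handles wrapped in TVMValues, the same thing the old tvmgen_default_run unwrapped; the type codes, num_args and the NULL resource handle are my assumptions, and the include path is copied from the generated library:

#include <string.h>
#include "../../src/standalone_crt/include/tvm/runtime/c_runtime_api.h"

// Declaration copied from the generated library above.
#ifdef __cplusplus
extern "C" {
#endif
int32_t tvmgen_default_run(TVMValue* args, int* type_code, int num_args,
                           TVMValue* out_value, int* out_type_code,
                           void* resource_handle);
#ifdef __cplusplus
}
#endif

void TVMExecute(void* input_data, void* output_data) {
  // Wrap the raw buffers in DLTensors with only .data set (the rest zeroed),
  // mimicking what the generated code does for its workspaces.
  DLTensor input_tensor;
  DLTensor output_tensor;
  memset(&input_tensor, 0, sizeof(input_tensor));
  memset(&output_tensor, 0, sizeof(output_tensor));
  input_tensor.data = input_data;
  output_tensor.data = output_data;

  TVMValue args[2];
  args[0].v_handle = &input_tensor;
  args[1].v_handle = &output_tensor;
  int type_codes[2] = { kTVMDLTensorHandle, kTVMDLTensorHandle };

  TVMValue out_value;
  int out_type_code;
  int ret_val = tvmgen_default_run(args, type_codes, 2,
                                   &out_value, &out_type_code, NULL);
  if (ret_val != 0) {
    TVMPlatformAbort(kTvmErrorPlatformCheckFailure);
  }
}

Is this roughly the right direction, or am I misunderstanding what the generated run function expects?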
And in case it’s helpful, here are my tvmc commands:
tvmc compile input/kws_ref_model.tflite \
  --target='c -keys=cpu -model=host' \
  --runtime=crt \
  --runtime-crt-system-lib 1 \
  --executor='aot' \
  --output output/model.tar \
  --output-format mlf \
  --pass-config tir.disable_vectorize=1

tvmc micro create -f output/project/ output/model.tar arduino \
  --project-option project_type=example_project board=nano33ble
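I also wonder whether I’m missing an AOT executor option at compile time. I’m not sure this is the right flag or value (or whether --executor-aot-interface-api also matters), so please correct me, but I was thinking of something like:

tvmc compile input/kws_ref_model.tflite \
  --target='c -keys=cpu -model=host' \
  --runtime=crt \
  --runtime-crt-system-lib 1 \
  --executor='aot' \
  --executor-aot-unpacked-api=1 \
  --output output/model.tar \
  --output-format mlf \
  --pass-config tir.disable_vectorize=1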